Last time, I talked about Service Oriented Architecture (SOA) -- specifically, wrapping our systems (think databases) in services. (See: Not Your Daddy's SOA.) That way, when a table adds a new key or a column changes names, we don't have to re-test and rewrite everything. We can even add new services while maintaining backwards compatibility with the old ones, just by making sure the new service has a different signature or version number.
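The side-by-side versioning idea can be sketched in a few lines. This is a toy illustration, not any particular framework; the handler names and the dispatch table are hypothetical.

```python
# Two versions of the same service living side by side.
# All names here (place_order_v1/v2, HANDLERS, dispatch) are
# hypothetical -- the point is that old callers never break.

def place_order_v1(order: dict) -> dict:
    """Original contract: expects 'item' and 'qty'."""
    return {"status": "ok", "item": order["item"], "qty": order["qty"]}

def place_order_v2(order: dict) -> dict:
    """New contract adds 'currency'; v1 callers are untouched."""
    return {
        "status": "ok",
        "item": order["item"],
        "qty": order["qty"],
        "currency": order.get("currency", "USD"),
    }

# Route by version number so existing clients keep working
# while new clients opt in to the new signature.
HANDLERS = {"v1": place_order_v1, "v2": place_order_v2}

def dispatch(version: str, order: dict) -> dict:
    return HANDLERS[version](order)
```

Old clients keep calling "v1" forever; nothing they depend on has to change when "v2" ships.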
Externally, this allows us to partner with other companies. The cruise line Royal Caribbean, for example, has an API that allows large travel agencies to search, reserve, and pay for cruises. That lets a company like VacationsToGo build its own applications on top of Royal Caribbean's and tie them directly into its customer relationship management software and other systems. But today I don't want to talk about the external API.
Today I want to talk about the back end.
The nasty back end
Yes, I know about the back end: that collection of systems that includes enterprise resource planning, human resources, CRM, finance, and the data warehouse; the thing that runs around the clock, copying files in batch runs to update something legacy and painful.
Look, legacy ain't bad. The word itself means a gift from another generation, usually finance, land, or a good family name. With software, a legacy system is usually one that handles millions (or billions) in transactions a year.
Day after day, legacy systems do the real work of the company: generating money. Our goofy web-based front ends are just pretty pictures to get the customer to press "Order," so the real system can take over and actually do the stock purchase, process the credit application, or fill the order. Legacy systems are good and make money.
The only thing is, they go down a lot.
Okay, not a lot. Maybe it's more like minutes at a time these days, perhaps a couple of hours if your company can get away with being offline on a Saturday between 1:00 a.m. and 3:00 a.m. Still, legacy systems need maintenance windows, batch runs, and upgrades.
And this is a problem with SOA, because your PlaceOrder call is going to hit a service that isn't up or isn't able to place the order. Now what?
Here's one fix for that: the message queue.
Message queue or service bus?
One of the core ideas of the Enterprise Service Bus (ESB) was supposed to be that when an action happens, the bus would notify every sub-system subscribed to that event. So you'd have an order come through, and the ESB would tell the CRM system and the warehouse and the data warehouse about the new order, which was great... unless one of those systems was down.
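The failure mode is easy to see in a toy synchronous fan-out. This sketch is purely illustrative, with hypothetical subscriber names; the point is that one offline subscriber poisons the whole notification.

```python
# Toy synchronous ESB-style fan-out. If any one subscriber is
# down, the loop blows up and the other subscribers may never
# hear about the order. All names are hypothetical.

class SubscriberDown(Exception):
    """Raised when a sub-system can't be reached."""

def crm(event: dict) -> str:
    return "crm notified"

def warehouse(event: dict) -> str:
    # Simulate the legacy warehouse system being offline.
    raise SubscriberDown("warehouse is down for maintenance")

def notify_all(event: dict, subscribers) -> list:
    """Tell every subscriber about the event, synchronously."""
    results = []
    for sub in subscribers:
        results.append(sub(event))  # raises if any subscriber is down
    return results
```

Call `notify_all({"order_id": 42}, [crm, warehouse])` and the warehouse outage turns into a failure for everyone, which is exactly the problem queues are about to solve.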
Message queues fix that problem. The web server drops a message on the queue, and the backend service pulls messages off the queue when it needs them. Turn the legacy system off for two hours, turn it back on and wake up the service, and it can pull and process everything that has been queued up during its nap.
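Here's that decoupling in miniature, using Python's in-process `queue.Queue` as a stand-in for a real broker. The function names are hypothetical; a production system would use a durable queue, but the shape is the same.

```python
from queue import Queue

# Toy illustration (not a real broker): the web tier keeps
# enqueuing while the legacy consumer is "down," and the
# consumer drains the backlog when it wakes up.
orders = Queue()

def web_front_end(order_id: int) -> None:
    """The web server's only job: drop the message and move on."""
    orders.put({"order_id": order_id})

def legacy_consumer() -> list:
    """Wakes up after maintenance and processes everything queued."""
    processed = []
    while not orders.empty():
        processed.append(orders.get())
    return processed

# Orders arrive while the back end is offline for its nap...
for oid in range(1, 4):
    web_front_end(oid)

# ...and every one of them is still waiting when it comes back.
backlog = legacy_consumer()
```

Nothing was lost and nothing raced; the queue simply held the work until the consumer was ready.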
That means no race conditions. No writing rows to a staging table and maintaining a flag for what's been processed since the last check. System testing gets easier, because you know where a sub-system ends: did the message get dropped on the bus? And you know how to test the next level down: drop a message on the bus and see if it gets processed.
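That testing seam is worth making concrete. In this hedged sketch (function and field names are hypothetical), the unit test verifies the message landed on the queue without the legacy system running at all.

```python
from queue import Queue

# Testing at the seam: the unit under test only has to drop a
# message. A plain Queue stands in for the real bus, so no
# legacy system needs to be up. All names are hypothetical.

def submit_order(order: dict, bus: Queue) -> None:
    """The unit under test: drop an event on the bus and return."""
    bus.put({"type": "order.placed", "payload": order})

def test_order_reaches_the_bus() -> None:
    bus = Queue()                      # stand-in for the real bus
    submit_order({"sku": "ABC-1"}, bus)
    message = bus.get_nowait()         # raises if nothing was queued
    assert message["type"] == "order.placed"
    assert message["payload"]["sku"] == "ABC-1"
```

The next level down gets the mirror-image test: put a known message on the bus and assert the consumer processes it.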
You don't have to pay a lot for message queues, either. RabbitMQ is free and open source, though you may want to consider paying for support.
I'm not saying they are the solution to every problem, but as I see larger and larger IT infrastructures, the ability to create a seam, pull a system down, fix it, put it back, and have everything keep humming along is invaluable.
Somehow I thought that was worth writing about.
— Matt Heusser is principal consultant of Excelon Development.