Guidewire projects start with an inception phase. One of the activities during inception is for the integration team to come together to discuss and agree upon the integration architecture that will be used to make Guidewire's products communicate with the client's existing systems landscape. Many Guidewire implementations are part of a far larger enterprise transformation program, designed to replace older technologies with their more up-to-date counterparts, usually over a period of years. Mainframe systems are replaced with web applications designed to streamline operational business processes and solve sticky business problems like fraud detection, claims leakage and slow time to market for new insurance products. In addition, many insurers want to take on new marketing channels and disruptive technologies to sell insurance and manage the customer experience in new and exciting ways. All good stuff.
During such an inception at a new implementation of the Guidewire InsuranceSuite, today we had the inevitable discussion around ESB implementation. The architects we were meeting had been working hard researching integration technologies and discussing their needs with various vendors. Part of the transformation is obviously the systems transformation, and that usually includes some sort of hub-based integration capability, such as an ESB (Enterprise Service Bus) or a database-based integration hub. So the discussion turned to the proposed ESB development, and the usual suspects reared their heads: a canonical data model, conformed data structures, routing all integration traffic through the ESB, and orchestration of services. This is a common theme in most inceptions, and while the ESB is a very powerful weapon in the architect's toolbox, most of these plans are unnecessarily complex, limiting for the applications that need to communicate with each other, and very expensive to implement. This often goes against the agile principles of only build what you need, keep it simple, and avoid big upfront design.
To further elaborate on this, let's summarize the above. Organizations generally use ESBs to do the following:
Communicate between systems that use different protocols (web service to message queue, web service to CICS or IMS, web service to flat file)
Communicate between systems that utilise different data formats (XML to CSV, XML schema 1 to XML schema 2, XML to SQL database, etc.)
Provide standards-based communications between systems, e.g. ACORD, SOA, etc.
Provide loose coupling between applications so that applications can be swapped out easily if the insurance company wishes to change supplier
Utilise a canonical data model, which models all of the data in their various systems and allows transforms in and out of it to facilitate communication between those systems
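That last item, the canonical data model, can be made concrete: each system supplies one transform into and one out of a shared canonical shape, so N systems need roughly 2N transforms instead of up to N×(N−1) direct ones. Here is a minimal, hedged Python sketch of the idea; the system names and field names are invented for illustration, not taken from any real product.

```python
# Sketch of the canonical-data-model pattern: each system supplies a
# transform into and out of a shared canonical shape, and the hub routes
# messages through that shape. All names here are hypothetical.

CANONICAL_KEYS = {"policy_number", "insured_name"}

# Per-system transforms in and out of the canonical model.
TRANSFORMS = {
    "legacy_pas": {
        "in":  lambda m: {"policy_number": m["POLNUM"], "insured_name": m["NAME"]},
        "out": lambda c: {"POLNUM": c["policy_number"], "NAME": c["insured_name"]},
    },
    "billing": {
        "in":  lambda m: {"policy_number": m["policyId"], "insured_name": m["holder"]},
        "out": lambda c: {"policyId": c["policy_number"], "holder": c["insured_name"]},
    },
}

def route(message, source, target):
    """Map source format -> canonical -> target format (two hops per message)."""
    canonical = TRANSFORMS[source]["in"](message)
    assert set(canonical) == CANONICAL_KEYS  # every transform must fill the model
    return TRANSFORMS[target]["out"](canonical)

msg = {"POLNUM": "P-123", "NAME": "A. Smith"}
print(route(msg, "legacy_pas", "billing"))
# {'policyId': 'P-123', 'holder': 'A. Smith'}
```

The appeal is obvious: adding a new system means writing two transforms, not one per partner system. The catch, discussed below, is that you pay for those transforms even when only two systems ever talk to each other.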
Usually we start with a diagram like this:
This is a simplified representation of what we come across many times, and it's a pretty fair assessment of the insurance company's IT landscape. If anything, the real diagrams are more complex than this. At a typical inception for a medium-sized customer we see integration requirements for 40 or so systems, requiring maybe 120+ actual service calls (an interesting point is that the average number of interfaces we actually implement is about 25; we'll come back to that). This type of thing shows up in most project inceptions, and in a fair amount of vendor material as well. This is the complexity that we aim to solve with an ESB. The communications between systems are a combination of flat files, databases and proprietary protocols using expensive drivers like MQSeries, CICS, IMS and AS400 RPG, as well as standards-based web services. Most architects are very proud of this complexity; they usually describe it as an architectural nightmare, but usually with a wry smile on their faces. Managing large legacy environments like this is difficult, requires a large amount of skill and ingenuity, and encapsulates a body of knowledge that in many cases only really exists in the architect's head.
So the target architecture looks something like this:
Well, that's fantastic, isn't it? Data can just be sent to the ESB and it'll pass it on to the systems that need it, and the complexity is gone. Plus the ESB presents a consistent interface to the entire plant, allowing me to swap out systems and replace them with new ones without affecting any of the others. What on earth can be wrong with that?
Well, it's true that it's better than before, and it's true that the ESB does a very good job of allowing legacy systems that use proprietary protocols to talk to each other and to web-service-enabled systems. However, you need to bear the following in mind.
Many of the newer systems you buy will be web service enabled OOTB, and many will provide facilities to communicate with mainframe systems using MQSeries and JMS, especially if they've been designed for the insurance market. This often leaves loose coupling (i.e. the ability to switch applications with similar but better functionality without affecting the rest of your applications) as the main driver. The diagram above is a sample representation, but it reflects what is described in middleware vendor documentation: a set of applications that can suddenly talk to each other simply by putting an ESB into the picture. In practice this doesn't happen automatically. There's always a mapping to be done between the inputs and the outputs of each service. For web services this can double the mapping effort, even for pass-through services where the sending system has already mapped its internal data structures to the target system; an additional mapping is done in the ESB. On the application side, the developer sees a service he has to call, so he does the work to make that call. On the ESB, the developer knows he has to expose a service for the application guy to call and then call a service on the target application. If we look at it from the application guy's perspective, we can see that if he calls the service on the target application directly instead, he has virtually the same work as before; if we choose the ESB, the ESB work is all additional effort. This overhead can really add up when it's repeated for every integration in a large program.
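To make the double-mapping point concrete, here is a hedged sketch in Python comparing a direct call, where the sender maps straight onto the target's contract, with an ESB pass-through that interposes a second contract and a second mapping. All schemas and field names are invented for illustration.

```python
# Illustrative sketch of the "double mapping" overhead: the same payload
# reaches the General Ledger either via one mapping (direct) or two (ESB).
# All contracts and field names here are hypothetical.

def map_claim_to_gl(claim):
    """Direct route: sender maps its internal claim data to the GL contract."""
    return {"account": claim["reserve_account"], "amount": claim["reserve_delta"]}

def map_claim_to_esb(claim):
    """ESB route, hop 1: sender maps to the ESB's canonical contract..."""
    return {"acct_ref": claim["reserve_account"], "value": claim["reserve_delta"]}

def esb_map_to_gl(esb_msg):
    """...hop 2: the ESB maps its contract onto the GL service. A second mapping."""
    return {"account": esb_msg["acct_ref"], "amount": esb_msg["value"]}

claim = {"reserve_account": "GL-4010", "reserve_delta": 2500.0}

direct = map_claim_to_gl(claim)                   # one mapping to build and maintain
via_esb = esb_map_to_gl(map_claim_to_esb(claim))  # two mappings, identical result
assert direct == via_esb
```

Both routes deliver the same message, but the ESB route carries two mapping artefacts to write, test and keep in sync for every such interface, which is exactly the overhead that adds up across a large program.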
In fact, the second diagram plays a few neat conceptual tricks on the architect's mind. Firstly, it shows the ESB as a black box that maps and routes all your traffic with almost no effort. Secondly, it gives the impression that every system talks to every other system; this is rarely true, but if it were, then the attempt to obtain loose coupling in this way might be justified (yay! we can swap out the Producer Management system without changing the 42 systems that (don't really) interface with it). Thirdly, it assumes functional and scope equivalence for all system replacements, which is simply not the case. It's going to be a rare situation where you swap one system for another that does exactly the same thing in the same way, which means the work to conform the replacement to the ESB service contract is going to be significant.
In fact, the target architecture shown in the second diagram is not what the architecture team should be aiming for. It sets in stone the complexity of their current landscape. Modern applications like Guidewire's InsuranceSuite are functionally far richer than their mainframe counterparts, i.e. they do more and remove the need for so many independent systems. PolicyCenter can replace the two PolicyAdmin systems, the reinsurance database and the Producer Management system shown in the first diagram; ClaimCenter can replace the FNOL system and the Outsourced Claims System; and ContactManager can replace the Vendor Management system. In addition, an enterprise's systems don't all talk to each other; furthermore, many (most) integrations may truly be point to point. Once all of your claims processing is consolidated in an application like Guidewire ClaimCenter, the claim reserve change interface to the General Ledger is only ever going to be called from ClaimCenter. Writing an ESB service to loosely couple it from ClaimCenter doubles the mapping effort. If you swap your General Ledger system for something better, you've simply moved the task of interfacing to the new service into the ESB domain rather than the ClaimCenter domain, and that's only if ClaimCenter carries on with the old service contract. If it wants to use new, improved features of the services provided by the new GL, you have the double mapping dilemma all over again. A more appropriate vision of where the architecture should be going is shown below.
The diagram is one simplified example and a depiction of the target. We have fewer systems because the off-the-shelf applications we buy contain everything we need within their vertical domain. We recognise that not every system talks to every other system, and that not all services are generic; in fact most are point-to-point communication between two systems. With fewer systems and fewer interfaces, interposing additional layers between systems makes less sense, especially when the cost of doing so is higher than the cost of fixing any issues likely to be incurred by not doing so.
So, bad news for the ESB, huh? No. As we said up front, ESBs are very powerful and exciting tools in the architect's toolbox, so let's stop looking for nails and see what an ESB hammer can actually do.
Where should I use an ESB?
Loose Coupling – So let's go back to that loose coupling thing. The main problem is double mapping for point-to-point interfaces; that can really slow down your development and your ability to mitigate changing requirements if you do it for every interface. However, it does make sense if you have a simple generic service that is used by multiple systems.
Federated services spanning multiple systems and requiring complex orchestration – more likely to be found in a legacy environment. The classic example is a ClaimCenter implementation searching for policies across multiple legacy policy admin systems. An ESB can be a great way of hiding the complexity of the search process behind a simple service implementation. (Although you may also see a big data approach to this problem, where the search is sent to the data warehouse.)
Proprietary drivers for legacy interfaces – if the service simply involves communication with mainframe CICS, IMS or MQSeries, or simply accesses data from a database, it might be a good idea to let the ESB do the legacy access and publish the service as something more standard, like SOAP or JSON-RPC.
Finally, anywhere you need to do something that your vertical app doesn't do easily out of the box (OOTB) – but remember, the Guidewire core apps can do: asynchronous messaging over any protocol including SOAP and JMS; inbound web service calls with simple annotations; generating XML from any Gosu class; and setting up daemons for TCP and other low-level communication protocols using the startable plugin feature. So read the documentation before looking elsewhere.
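The federated-search case above is worth sketching, because it shows what real orchestration value looks like: one simple service call from ClaimCenter, with the fan-out and merging hidden behind it. This is a hedged Python sketch; the system names, record shapes and de-duplication rule are all invented stand-ins for whatever the legacy policy admin systems actually return.

```python
# Sketch of a federated policy search: one simple service fans the query
# out to several legacy policy admin systems and merges the results.
# System names and record shapes are hypothetical.

def search_pas_a(prefix):
    """Stand-in for a search against legacy policy admin system A."""
    data = [{"policy_number": "P-100", "source": "PAS-A"}]
    return [r for r in data if r["policy_number"].startswith(prefix)]

def search_pas_b(prefix):
    """Stand-in for a search against legacy policy admin system B."""
    data = [{"policy_number": "P-100", "source": "PAS-B"},
            {"policy_number": "P-200", "source": "PAS-B"}]
    return [r for r in data if r["policy_number"].startswith(prefix)]

LEGACY_SEARCHES = [search_pas_a, search_pas_b]

def federated_policy_search(prefix):
    """The one service the caller sees; the orchestration lives here."""
    merged, seen = [], set()
    for search in LEGACY_SEARCHES:
        for record in search(prefix):
            if record["policy_number"] not in seen:  # de-duplicate across systems
                seen.add(record["policy_number"])
                merged.append(record)
    return merged

print(federated_policy_search("P-"))
# [{'policy_number': 'P-100', 'source': 'PAS-A'}, {'policy_number': 'P-200', 'source': 'PAS-B'}]
```

A production version would add per-system timeouts, partial-result handling and proper ranking, but the shape is the point: the calling application never learns how many legacy systems sit behind the service.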
Key to a virtuous use of ESB technology
Analysis – look at your problems as they are now when deciding which integrations need the ESB and which should be point to point
Only do ESB services for the purpose of loose coupling where the service is called by at least two different systems
Only use the ESB if it's bringing something to the party that your calling application doesn't have (like legacy driver support)
Where both applications have web service support, strongly consider making the integration point to point.
Don't code to isolate your plant from change, because change happens in ways we often don't anticipate. Solve today's problems today and tomorrow's problems tomorrow!
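The rules above are simple enough to write down as a decision helper. This is a hedged sketch only: the inputs and the two-caller threshold come straight from the bullets, while the function name and structure are invented for illustration.

```python
# Sketch encoding the rules of thumb above as a per-integration decision.
# Inputs mirror the bullets; everything else is hypothetical.

def route_via_esb(caller_count, needs_legacy_driver, both_ends_speak_web_services):
    """Return True if the rules above suggest putting this integration on the ESB."""
    if needs_legacy_driver:
        return True    # the ESB brings something the caller lacks: legacy driver support
    if caller_count >= 2:
        return True    # loose coupling pays off when 2+ systems call the service
    if both_ends_speak_web_services:
        return False   # strongly consider point to point
    return False       # default: solve today's problem point to point

# A ClaimCenter -> General Ledger reserve interface: one caller,
# web services at both ends, no legacy driver needed.
print(route_via_esb(1, False, True))  # False -> keep it point to point
```

The real decision involves cost, skills and roadmap as well, but making the criteria explicit like this keeps the "every interface goes through the ESB" reflex in check.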
Mike Carter is an Enterprise Architect at Cynosure. Mike's prior experience includes over 12 years as a Principal and Enterprise Architect at Guidewire Software, during which he advised clients on best practices for integrating enterprise systems with the Guidewire InsuranceSuite products.