Integration is a required element in nearly all modern application development planning. Over the years, experience with SOA and with front-end web development has taught planners and architects a lot about integration. Early experience with virtualization has expanded this all the more, but the cloud breaks down many established integration practices, so planners and architects need to start their cloud integration projects by asking why the cloud is different. They then need to assess cloud plans with integration in mind. For most, the key point will be how to accommodate application integration tools within cloud deployment.
In early applications, developers either wrote monolithic applications or tightly coupled separate components into a common load image. A large number of applications are still written this way, but SOA and agile development have encouraged architects to build independent functional components that can be assembled into applications. As business applications became more integrated with business processes and with one another, they required a looser coupling among the pieces simply to eliminate a single enormous load image, an all-or-nothing IT model.
Directory-based integration of these components has become the rule. With directory-based integration, a component registers itself somewhere, and through that registration it can be located and sent work. The directories can be fairly simple, like DNS, or can be a repository for functionality-based browsing, as SOA UDDI would ideally be. In all cases, though, the directory expects either to create a link to an already loaded component or to trigger component loading on first use.
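To make the pattern concrete, here is a minimal sketch of directory-based integration: components register themselves under a logical name, and callers locate them through the directory rather than through hard-coded addresses. The class, names, and endpoints are illustrative, not any specific UDDI or DNS API.

```python
class ComponentDirectory:
    """A toy stand-in for a component directory (DNS, UDDI, etc.)."""

    def __init__(self):
        self._entries = {}

    def register(self, name, endpoint):
        # A component announces itself at startup, or is registered
        # when it is first loaded for use.
        self._entries.setdefault(name, []).append(endpoint)

    def lookup(self, name):
        # Callers resolve the logical name to one or more endpoints.
        return list(self._entries.get(name, []))


directory = ComponentDirectory()
directory.register("billing", "10.0.1.15:8080")
directory.register("billing", "10.0.1.16:8080")
endpoints = directory.lookup("billing")
```

The point of the sketch is the indirection: callers know only the logical name, so the directory is free to answer with whatever endpoints are currently registered.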
The cloud challenges this in two ways. First, the cloud presumes a high level of dynamic resource allocation. A component could be put anywhere in the cloud, and so the issue of how to reach it has few boundaries, whereas in the past you could assume everything resided in at least a fixed data center. Second, one of the principal goals of the cloud is management of availability and performance through scaling of component instances. This means that many components will have to share work or fail over in real time. Often the process of establishing integration links to components is not instantaneous, and in the delay period, transaction processing may be compromised.
Meeting these challenges is a matter of assessing cloud plans to identify integration pain points. To start, look for any places where a component could be cloudsourced under load or in failure conditions. Also, look for situations where a cloud component could be relocated by the cloud provider to respond to problems. Any of these scenarios will require some special handling in integration with other components and workflows.
Cloud users report a preference for DNS-based load balancing as a means of steering work to cloud components that could be failed over or horizontally scaled. In scaling situations, DNS-based load balancing will allow work to continue over current component links while a new one is added, so the only risk to QoE comes with component failure, a risk most companies will accept. If any downtime is intolerable, that can be addressed by having at least two copies of the components available at any point and integrated via DNS.
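The mechanics can be sketched as follows, with a stubbed resolver standing in for a real DNS query: the logical name resolves to several component copies, and work rotates across them. The name and addresses are illustrative; in a real deployment, dropping a failed copy's record from the DNS answer is what steers new work to the survivors.

```python
import itertools

# Stubbed DNS zone: a logical name mapped to the addresses of two
# live copies of the component (purely illustrative values).
RECORDS = {"app.example.internal": ["10.0.2.10", "10.0.2.11"]}


def resolve(name):
    # A real deployment would issue a DNS query here; this stub just
    # returns the current record set for the name.
    return RECORDS[name]


# Round-robin new work across whatever copies the name resolves to.
rotation = itertools.cycle(resolve("app.example.internal"))
targets = [next(rotation) for _ in range(4)]
```

Because each copy can accept work independently, adding a third copy to the record set scales capacity without disturbing links already in use.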
The issue with DNS-based load balancing is that it doesn't support component browsing (there's no WSDL), and it may create problems for workflows to components that are stateful. If it's not possible to use DNS-based load balancing for either reason, the next-best strategy is to rely on UDDI and WSDL or BPEL to select among components. That poses a potential problem if the application control processes that manage the component links aren't responsible for moving components in the cloud. A component could become unlinked if moving it changes its address. Amazon's solution to this is its elastic IP addresses, which let a static URL reference a movable component. This approach of address translation can be used within private clouds as well.
The Amazon elastic IP address model demonstrates a basic truth about cloud integration. There are two forms of “component mobility”: one that must recognize separate components as discrete elements to be linked into workflows, and one that recognizes successor component copies created by cloud processes, rather than by application processes. Accommodating this combination with standard integration tools (including DevOps or CAMP) is easier if you adopt the principle that the URL is the boundary between logical component movement and physical component location.
Integration tools should be used to bring components together whenever the components are explicitly separate because workflows have to be directed to them individually. The goal of these tools is to direct work to URLs, with the expectation that the URLs will then be matched to a resource location by a separate back-end integration process.
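The URL-as-boundary principle can be sketched in a few lines: workflows address a stable logical URL, while a separate mapping (here a plain dictionary standing in for an elastic-IP or address-translation service) binds that URL to the component's current physical location. The URL and addresses are invented for illustration.

```python
# Back-end mapping from logical URL to current physical location.
# In practice this would be maintained by the cloud's resource
# management, not by the application or its workflows.
url_to_location = {"https://orders.app.internal/submit": "10.0.3.20:9000"}


def dispatch(url, payload):
    # The physical address is resolved at send time, so a relocation
    # that updates the mapping is picked up by the next unit of work.
    location = url_to_location[url]
    return f"sent {payload} to {location}"


# The provider relocates the component: only the back-end mapping
# changes; workflows keep addressing the same URL.
url_to_location["https://orders.app.internal/submit"] = "10.0.7.41:9000"
result = dispatch("https://orders.app.internal/submit", "order-123")
```

The design choice here is that integration tools never see the physical address at all, which is what keeps logical component movement and physical component location decoupled.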
It's possible to manage the address represented by a URL in an integration tool, provided that the tool can be invoked by resource managers when the address of a component changes. The critical issue is not managing the change but managing the impact of the change on in-process transactions. It is very dangerous to allow any stateful flow to change elements mid-stream. This could cause what was once called "tail-ending," wherein someone can ride in on the end of an authentic transaction and inherit the rights of the initiator. Thus it's probably best for stateful workflows to report failures on in-process transactions before changing URL target addresses.
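The precaution described above can be sketched as follows: before a stateful target's address changes, every in-process transaction is failed (to be retried by its initiator) rather than silently redirected, which closes the tail-ending window. The class and identifiers are illustrative, not a real integration tool's API.

```python
class StatefulEndpoint:
    """Tracks open transactions against one stateful component link."""

    def __init__(self, address):
        self.address = address
        self.in_process = set()

    def begin(self, txn_id):
        self.in_process.add(txn_id)

    def retarget(self, new_address):
        # Report failure on every open transaction first, so no flow
        # straddles the old and new targets...
        failed = sorted(self.in_process)
        self.in_process.clear()
        # ...and only then change the target address.
        self.address = new_address
        return failed


ep = StatefulEndpoint("10.0.4.5:8443")
ep.begin("txn-1")
ep.begin("txn-2")
failed = ep.retarget("10.0.9.9:8443")
```

Failing the transactions is deliberately done before the address swap, so a retry from the initiator always starts cleanly against the new target.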
Security and compliance should always be the final item on any integration checklist. Workflows may present state-related, tail-ending security and compliance issues, but even the component links can create problems an application security audit might blanch at discovering. Elasticity in component loading multiplies the opportunity to introduce a non-authentic version of a component. As a result, the more integration work is needed in the cloud to ensure workflows are sustained through elastic resource use, the more you'll need to examine your component on-boarding processes to ensure that you introduce only suitable and authentic elements into your workflows.