An enterprise architect once compared cloud integration and standard application integration to jumping between squares on the sidewalk versus jumping between the roofs of moving cars. Having everything fluid makes a difference, certainly, but the cloud planner or architect who focuses on “cloud integration” has already made the first mistake. To create a successful cloud integration strategy, forget the cloud and embrace the benefits of workflow analysis. Balance the goals of agility and efficiency properly, and build in the mechanisms for problem detection and resolution.
The cloud is known to most architects today through deployment tools. They know about OpenStack's or Amazon's management APIs, about DevOps and orchestration. So, when they encounter the need to define cloud integration practices, they start with cloud deployment tools: How can they connect this deployed component to that one using Neutron or Puppet? Turning an integration problem into a deployment problem -- because they're deployment-focused -- can be fatal.
Application components need to integrate for better workflow, not for better cloud support. Workflows define the migration of information through the process steps, and this migration must be the focus of integration practices. In mathematical terms, workflows are vectors: they have direction and velocity (or size) information associated with them. They define the interdependent questions of how work is done -- via the message bus, Business Process Execution Language (BPEL) and so on -- as well as how the process components are linked, and in what order.
The best integration strategies start with a unified model for passing work items, which means cloud architects should maximize the flexibility of workflow engines. This can also translate to fewer component interface options, a wide variety of component connections to workflow engines, and a consistent set of state management and transaction integrity mechanisms. Before architects even start putting things together, much less making processes cloud-compatible, they have to make them logically efficient.
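To make the idea of a unified work-item model concrete, here is a minimal Python sketch. The `WorkItem` envelope and `WorkflowEngine` class are hypothetical, invented for illustration; the point is that when every component exchanges the same envelope, the engine needs only one interface, and state management and transaction integrity can live in one place.

```python
from dataclasses import dataclass, field

# Hypothetical unified work item: every component exchanges the same
# envelope, so the workflow engine needs only one interface style.
@dataclass
class WorkItem:
    process_id: str            # which business process instance this belongs to
    step: str                  # current process step
    payload: dict              # the business data being moved
    history: list = field(default_factory=list)  # audit trail for state management

class WorkflowEngine:
    """Hypothetical engine: one consistent mechanism for advancing work."""
    def __init__(self):
        self.handlers = {}     # step name -> component callable

    def register(self, step, handler):
        self.handlers[step] = handler

    def advance(self, item, next_step):
        item.history.append(item.step)   # preserve state for transaction integrity
        item.step = next_step
        return self.handlers[next_step](item)
```

Because components see only `WorkItem`, swapping or relocating a component later (including into a cloud) changes the handler registration, not the interfaces.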
In the workflow analysis, architects will likely find that information flows are inherently circular. The loop starts with a presentation stage, where worker interaction takes place, then moves to a process stage, where changes are made and new data is recorded, before returning to the presentation stage for verification. Presentation components tend to be web-like, because web tools are ideal for supporting them. Back-end process phases are typically more SOA-like. If workflow design reflects this, applications will tend to be influenced by human activity at the beginning and end, and data-driven in between. Reflect this in the design where possible, and limit explicit workflow handling to the activities compelled by process needs rather than human needs. The web-driven activities would normally be integrated through DNS, and the rest through the directory.
This may be a good place in the workflow analysis to explore questions about application and business process evolution. If applications support an expensive workforce with significant variation in their workflow, and if competition pressures practices to evolve, the workflow link to the actual workers will need to be very agile. The best way to achieve this is by componentizing the core process elements, pushing information interfaces into the Web portion of the application so presentation can be personalized to worker needs. It also helps to use BPEL to define work paths among components. That will not only give more flexibility to the work process, it will show the specific places where workflow and cloud might intersect. The most agile parts of an application are the parts where the cloud value proposition is most likely to be made.
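The BPEL idea above can be sketched in a few lines of Python. The step names and the `run_process` helper are hypothetical, standing in for a BPEL `<sequence>`: the work path is declared as data, so reordering or inserting steps changes the work process without touching any component code -- which is exactly where the agility (and the cloud intersection points) show up.

```python
# Hypothetical declarative work path, standing in for a BPEL sequence.
# Editing this list changes the work process without changing components.
WORK_PATH = ["validate_order", "check_credit", "reserve_stock", "confirm"]

def run_process(components, order):
    """Drive an order through the componentized steps in WORK_PATH order."""
    for step in WORK_PATH:
        order = components[step](order)   # each component is a self-contained step
    return order
```

In a real deployment the list would be a BPEL process definition executed by an orchestration engine, but the principle is the same: the route is data, the components are interchangeable.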
The biggest mistake an architect can make at this point is to over-componentize something in the name of making it more agile. Process steps that are not logically divisible should not be separated. Even goals of component reuse should be examined here. Remember that there are collateral goals of agility and efficiency here, and it’s easy to create too much BPEL-level steering and subsequently increase process delays and costs.
Architects often expect the workers themselves to be the most effective monitors of application quality of experience (QoE), acting as an early warning system for problems. Research and enterprise surveys suggest this is not true: workers tend to accept application issues up to the point where they affect their work efficiency. It's important to set objectives for workflows, and for the workflow circle to close back to the user with a proper response. Architects should also measure response times as part of the application performance management (APM) functions.
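A simple way to measure workflow response times against objectives is to instrument each step rather than wait for workers to complain. The decorator below is a hypothetical sketch of such an APM hook; the names `timed_step` and `step_timings` are invented for illustration.

```python
import time
from collections import defaultdict

# Hypothetical APM-style timing hook: record each workflow step's
# response time and flag steps that miss their objective.
step_timings = defaultdict(list)

def timed_step(name, objective_ms):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            step_timings[name].append(elapsed_ms)   # keep history for trend analysis
            if elapsed_ms > objective_ms:
                print(f"ALERT: {name} took {elapsed_ms:.1f} ms "
                      f"(objective {objective_ms} ms)")
            return result
        return wrapper
    return decorator
```

Collected timings give an objective baseline, so a slow drift in a step's response time surfaces long before workers feel it.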
The right time to think about the cloud in detail is after the workflow-circle homework has been done and the natural integration requirements have been laid out. The general rule is to make all the web-side processes cloud-oriented in the way they deploy. The goal is to use cloud-compatible application lifecycle management (ALM) processes to deploy these components of the application, even if it means making the data center resources into private cloud resources.
For the data-driven parts of the workflow, think predominantly about reducing performance variability, which is caused by either cloud bursting or a workflow crossing between the cloud and the data center. One way to do this is to minimize the number of times a circular workflow crosses cloud boundaries. Interfaces at the crossing points can be a special integration target. Integration choices will often be one of two things: structured directory-based integration, or URL-and-DNS integration. Architects can control directory-based performance more tightly. They can also often use application monitoring to spin up component copies and change workflows in advance, reducing worker impact.
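Boundary crossings are easy to count once component placement is written down, which makes alternative placements comparable before anything is deployed. The helper below is a hypothetical sketch; the component names and placement labels are invented for illustration.

```python
# Hypothetical placement check: count how many times a circular workflow
# crosses the cloud/data-center boundary for a given component placement.
def boundary_crossings(workflow, placement):
    """workflow: ordered list of component names, treated as circular;
    placement: component name -> 'cloud' or 'datacenter'."""
    crossings = 0
    # Pair each step with its successor, wrapping around to close the loop.
    for a, b in zip(workflow, workflow[1:] + workflow[:1]):
        if placement[a] != placement[b]:
            crossings += 1
    return crossings
```

Running it over candidate placements turns "minimize crossings" from a slogan into a number that can be compared across designs.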
Cloud integration doesn’t have to be hard. Just think of it as integration first and the cloud second.