
Integrating applications across cloud boundaries

Application integration can be broken into two layers. One is interfaces and addressing, the other is deployment and management practices. Get to the bottom of solutions for both.

Integrating applications distributed in public and private clouds on a case-by-case basis is an invitation to reliability problems and lost savings. Architects and planners need to understand the layers of cloud integration, plan an approach at each layer, and ensure their integration strategies meld with each other and with application lifecycle management (ALM).

Integrating applications in the cloud starts with interfaces and addressing. For an application to cross a cloud boundary, it has to be made up of discrete components, each exposing a separate interface. Each component has to be addressable for work to flow within the application. At this highest level of integration, the application is simply a collection of services; it's irrelevant that some or all of it is hosted in the cloud.

Component addresses are registered in a directory such as a Universal Description, Discovery and Integration (UDDI) registry or the Domain Name System (DNS). At the highest level of integration, it's essential that application directories be accessible to the processes steering the workflow. A service bus, when used, must be able to access the directory to find the next component in the workflow. That's normally easy, since the components don't address each other directly. If components do reference other components directly, architects may need to take special steps to establish directory access for each component and to ensure the address space used by one cloud is compatible with the others. This can be an issue if users employ private IP addresses, because these addresses can't be referenced except over virtual private networks (VPNs). Architects who use such addresses will have to place all of their cloud resources on a common VPN to open connections among components.
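The directory's role in steering work can be sketched in a few lines. This is a minimal illustration, assuming an in-memory registry as a stand-in for a real UDDI registry or DNS zone; the component names and addresses are hypothetical.

```python
# Minimal sketch of a component address directory. An in-memory map stands
# in for a real UDDI registry or DNS zone; names and addresses are invented.

class ComponentDirectory:
    """Maps logical component names to network addresses across clouds."""

    def __init__(self):
        self._entries = {}

    def register(self, component, address, cloud):
        # A component deployed (or redeployed) in any cloud registers here.
        self._entries[component] = {"address": address, "cloud": cloud}

    def resolve(self, component):
        # Workflow steering (e.g., a service bus) resolves the next hop,
        # regardless of which cloud is hosting the component today.
        entry = self._entries.get(component)
        if entry is None:
            raise KeyError(f"component not registered: {component}")
        return entry["address"]


directory = ComponentDirectory()
directory.register("order-validation", "10.0.1.15:8080", cloud="private")
directory.register("payment", "payments.example.net:443", cloud="public")

print(directory.resolve("payment"))  # the workflow finds the next component
```

Note that the private-cloud address above is a private IP; it is resolvable here, but only reachable by other components if they share a VPN, which is exactly the constraint described in the paragraph above.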

The second layer of integration is deployment and management practices. Launching an application and its components in a cloud generally requires creating a load or machine image and using a management API to load it. The details of the image format and the API will likely vary among public cloud providers, between public and private clouds, and between the cloud and the data center. That variation has to be accommodated either in the tools used or through manual processes.
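One common way to accommodate that variation in tooling is an adapter layer: each provider's image and API details live behind a single deployment interface. The sketch below assumes hypothetical provider classes; real providers each have their own image formats and management API calls.

```python
# Sketch of hiding per-cloud deployment differences behind one interface.
# The provider classes and their return values are hypothetical stand-ins
# for calls to real, provider-specific management APIs.

from abc import ABC, abstractmethod


class CloudDeployer(ABC):
    @abstractmethod
    def launch(self, image_name: str) -> str:
        """Load a machine image via this provider's management API."""


class PublicCloudDeployer(CloudDeployer):
    def launch(self, image_name):
        # In practice: translate to the public provider's image ID format
        # and call its launch API.
        return f"public:{image_name}:instance-1"


class PrivateCloudDeployer(CloudDeployer):
    def launch(self, image_name):
        # In practice: call the private stack's own management API.
        return f"private:{image_name}:vm-1"


def deploy(component_to_cloud, deployers):
    """Launch each component on its assigned cloud through one interface."""
    return {component: deployers[cloud].launch(f"{component}-image")
            for component, cloud in component_to_cloud.items()}


placements = {"frontend": "public", "billing": "private"}
deployers = {"public": PublicCloudDeployer(),
             "private": PrivateCloudDeployer()}
print(deploy(placements, deployers))
```

With this shape, adding a new cloud option means writing one new adapter rather than reworking every deployment script, which is the manual tuning the article warns about.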

There are two promising paths to addressing these two layers at once: integration platform as a service (iPaaS) and orchestration. The evolving idea behind iPaaS is that a cloud service takes responsibility for deploying application components and integrating them as needed; Dell Boomi and MuleSoft's CloudHub are examples of iPaaS tools. The other path is orchestration, a higher-layer, DevOps-like function that collects the different DevOps scripts or models for multiple clouds and organizes them to permit deployment across cloud boundaries. OpenStack Heat is an evolving open source orchestration engine that accepts both its own templates and the AWS CloudFormation template language. Related orchestration products include Canonical's Juju, a service orchestration tool for cloud deployments. Among commercial products, consider those based on the leading orchestration specification, the OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA). Orchestration tools from Fujitsu, HP, Huawei, IBM, SAP and others use TOSCA templates to move applications from one cloud to another and orchestrate them after the move.
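Whatever the template format, the core job of an orchestrator is the same: read the dependencies a template declares between nodes and compute a safe deployment order. The sketch below uses a simplified, hypothetical template expressed as a plain dictionary, not real TOSCA or Heat syntax.

```python
# Sketch of the ordering work an orchestration engine performs. The
# "template" here is a hypothetical, simplified stand-in for a TOSCA or
# Heat template: each node lists the nodes it depends on.

template = {
    "database": [],                  # no dependencies
    "app_server": ["database"],      # needs the database running first
    "load_balancer": ["app_server"], # needs the app server first
}


def deployment_order(nodes):
    """Topologically sort nodes so each deploys after its dependencies."""
    order, done = [], set()

    def visit(node, path=()):
        if node in done:
            return
        if node in path:
            raise ValueError(f"dependency cycle at {node}")
        for dep in nodes[node]:
            visit(dep, path + (node,))
        done.add(node)
        order.append(node)

    for node in nodes:
        visit(node)
    return order


print(deployment_order(template))
# -> ['database', 'app_server', 'load_balancer']
```

A real engine then attaches per-cloud deployment actions to each node, which is why an orchestrator can span cloud boundaries that a single provider's tooling cannot.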


Both of these approaches currently have limits on the scope of their processes, and both can require considerable work from the cloud architect. The best strategy is to review all of the available options, starting with a list of public cloud provider candidates and private cloud software stacks. IT teams can return to their lists should service levels or pricing change down the line. At present, users report that iPaaS tools are further along than orchestration tools, so if orchestration is being considered, the planned evolution of its tools and features should be explored carefully.

The key point for both iPaaS and orchestration is that there are two layers. The first treats the application as a collection of software-as-a-service components linked with APIs. The second focuses on deploying and managing the application components on the optimal platforms. This two-layer model should be explicitly acknowledged in an IT team's tool selection, or there's a risk that a new cloud option, or the need to integrate a data center-hosted component, will require considerable manual tuning.

At the bottom of the deployment and management layer is the management of cloud application quality of experience. Both iPaaS and orchestration tools vary in their ability to support FCAPS (fault management, configuration, accounting, performance and security) on a per-cloud basis. If it's necessary to manage user quality of experience closely for an application or group of applications, look for a tool that offers strong support for the collection and integration of management data. In particular, look at how failures of a host or network connection will be reported. It's impossible to manage a response to a problem architects can't detect, and cloud services vary significantly with respect to their reporting. In some cases, it may be best to integrate management components with functional components to be deployed so full management can be exercised directly at the component level.
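Because cloud services report failures in different shapes, collecting and integrating management data usually means normalizing each provider's fault feed into one common record format. The sketch below assumes two hypothetical providers with invented event layouts; real feeds differ in field names and in how much detail they expose.

```python
# Sketch of normalizing provider-specific fault reports into a common
# FCAPS-style fault record. Both provider event shapes are hypothetical.

def normalize_fault(provider, raw):
    """Map one provider's fault record into a common shape."""
    if provider == "cloud_a":
        return {"provider": provider,
                "resource": raw["instance_id"],
                "kind": raw["failure_type"],   # e.g., host vs. network
                "time": raw["timestamp"]}
    if provider == "cloud_b":
        return {"provider": provider,
                "resource": raw["vm"],
                "kind": raw["category"],
                "time": raw["occurred_at"]}
    raise ValueError(f"unknown provider: {provider}")


# Two fault events as each (hypothetical) provider might report them.
events = [
    ("cloud_a", {"instance_id": "i-123", "failure_type": "host",
                 "timestamp": "2014-09-01T10:00:00Z"}),
    ("cloud_b", {"vm": "web-2", "category": "network",
                 "occurred_at": "2014-09-01T10:02:00Z"}),
]

unified = [normalize_fault(provider, raw) for provider, raw in events]
print([event["resource"] for event in unified])
```

Once faults from every cloud land in one record format, a single response process can act on them, which is the precondition for managing quality of experience at all.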

This is also where the tie between cloud integration and ALM is likely to be a factor in planning. It's important not only that ALM exercise components in realistic, data-driven scenarios, but also that it address the variations in workflow that could occur as a result of differences in how components are spread across cloud providers. If application components can be spread across multiple clouds, this adds a new and complex dimension to ALM testing at every level. Users will often elect to tie specific groups of components to specific implementation options in order to reduce the variability and manage the integration more efficiently.

The complexity of integrating applications across cloud boundaries makes all-purpose frameworks appealing. These middleware suites have the tools for integrating heterogeneous applications, automating processes, scaling applications and more. There are many to evaluate, such as JBoss Enterprise Middleware, BEA WebLogic and the Oracle Fusion Middleware suite. I've worked mostly with Oracle Fusion and found it, particularly the Business Process Execution Language (BPEL) Process Manager and Service Bus, to be a good framework for building cloud applications that can be integrated.

There are often benefits to crossing cloud boundaries with applications, but there are also often costs and risks. Architects should review both paths for addressing integration carefully and re-calibrate their processes whenever cloud pricing or service levels change.

Next Steps

Why application integration matters

Beat challenges of integrating SaaS apps with on-premises legacy apps

This was last published in September 2014
