
How to extend the private cloud model to hybrid cloud

Private cloud is a great lead-in to hybrid cloud. Companies need to set concrete goals to move from the private cloud model to hybrid cloud.

When enterprises began to exploit virtualization of server farms to improve efficiency and lower costs, many quickly found themselves supporting what looked more like cloud computing than virtualization. Most of these same enterprises already use public cloud resources, and they need a new IT model based on hybridizing all their resources and data elements. To extend the private cloud model to a new hybrid data and processing model, users should establish a goal of resource transparency; harmonize data models, APIs and development practices to the new goal; and use design patterns to harmonize application-specific needs and tools.

The way virtualization has evolved into cloud computing demonstrates why it's risky to build IT plans on specific technologies. A better approach is to build on transparency, which means focusing on creating abstractions for server and data resources. Developers can then map these abstractions to a specific approach, and that approach can evolve as resource costs and needs change over time.

Private cloud users have a big advantage because their internal IT is already based on a cloud abstraction. All that's needed for private cloud extension is for IT to map current private cloud management APIs to suitable public cloud services. In many cases, private cloud planning includes selecting a cloud management system whose APIs are compatible with public cloud APIs or where public cloud options are supported "under" the private cloud APIs.

The critical requirement for resource transparency at the hosting level is policy-based selection of resources across the public-private boundary based on cost, availability and more. Where this capability wasn't included in the original private cloud toolkit, IT will have to consider adding it. Cloud tools from companies like HP, IBM, Oracle and Microsoft are likely to provide these capabilities, but they may come as add-on packages at extra cost.
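Policy-based selection can be sketched in a few lines. The `ResourcePool` and `select_pool` names, the cost figures and the tie-breaking rule below are all hypothetical illustrations of the idea, not the API of any real cloud management toolkit:

```python
"""Minimal sketch of policy-based placement across the hybrid boundary.

Assumption: each pool advertises a cost and available capacity, and
policy decides whether the public side of the boundary may be used."""
from dataclasses import dataclass


@dataclass
class ResourcePool:
    name: str
    is_public: bool
    cost_per_hour: float      # assumed unit: currency per vCPU-hour
    available_vcpus: int


def select_pool(pools, vcpus_needed, allow_public=True):
    """Pick the cheapest pool that satisfies the request.

    Public pools are considered only when policy allows crossing the
    boundary; on a cost tie, the private pool wins."""
    candidates = [
        p for p in pools
        if p.available_vcpus >= vcpus_needed
        and (allow_public or not p.is_public)
    ]
    if not candidates:
        raise RuntimeError("no pool satisfies the placement policy")
    # Sort by cost first; False < True, so private pools win ties.
    return min(candidates, key=lambda p: (p.cost_per_hour, p.is_public))


pools = [
    ResourcePool("private-dc1", is_public=False, cost_per_hour=0.09,
                 available_vcpus=16),
    ResourcePool("public-east", is_public=True, cost_per_hour=0.07,
                 available_vcpus=512),
]
print(select_pool(pools, vcpus_needed=8).name)   # cheapest eligible pool
```

The same selection function can then serve both normal placement and the policy case where a workload must stay private (`allow_public=False`).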

On the data resource side, the goal of transparency is met by recognizing two independent forms of "data dynamism" that exist today. One is the persistence of the data: whether it changes in real time with transactions or is derived statically from historical data or fixed databases. The other is whether the data is presentation-dynamic, meaning webpages can be built automatically from data views.

Persistence is a data attribute that should be explicit with any database because persistent data can be more easily migrated across cloud boundaries or replicated to improve the performance of distributed components that use the data. Presentation dynamism in data is helpful in exposing data assets without authoring custom applications that might presume a specific database location or level of accessibility. Developers should try to maximize both forms in the extension of their private cloud model.
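Making these two attributes explicit can be as simple as tagging each data set in a catalog. The `DataSet` and `replicable` names below are hypothetical, and the replication rule is one possible policy under the article's assumption that persistent (non-transactional) data migrates more easily:

```python
# Sketch: persistence and presentation dynamism as explicit, queryable
# data attributes, so migration/replication decisions can be automated.
from dataclasses import dataclass


@dataclass
class DataSet:
    name: str
    persistent: bool             # static/historical vs. transactional
    presentation_dynamic: bool   # can drive auto-generated web views


def replicable(ds: DataSet) -> bool:
    # Assumed policy: only persistent data is safe to replicate
    # across the cloud boundary for performance.
    return ds.persistent


catalog = [
    DataSet("orders_live", persistent=False, presentation_dynamic=True),
    DataSet("sales_history", persistent=True, presentation_dynamic=True),
]
print([ds.name for ds in catalog if replicable(ds)])
```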

APIs and application lifecycle management (ALM) practices now have to be framed to maximize transparency. For example, application components that must stay entirely in or out of the public cloud portion of a hybrid cloud should be grouped as a virtual component, and hosting policies should then enforce the required placement. This also lets developers test the components correctly during the ALM process. In some cases, this may require creating a façade API to represent a virtual component, so that the component's makeup can change over time as development steps make resource use more flexible. APIs can also provide what appears to applications to be unified access to persistent and non-persistent data. In some cases, these virtual data models can even drive dynamic creation of webpages for access and updates.
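A façade API of this kind can be illustrated in a few lines. All class and method names here are invented for the sketch; the point is only that callers bind to the façade, so the backing implementation can be re-pointed across the cloud boundary without changing them:

```python
# Sketch of a facade API representing a "virtual component" whose
# backing service can change without affecting callers.
class OrderServiceFacade:
    """Stable interface for a hypothetical order-service component."""

    def __init__(self, backend):
        # backend may be a private-cloud or public-cloud implementation;
        # callers never know which.
        self._backend = backend

    def get_order(self, order_id):
        return self._backend.fetch(order_id)


class PrivateCloudOrders:
    def fetch(self, order_id):
        return {"id": order_id, "source": "private"}


class PublicCloudOrders:
    def fetch(self, order_id):
        return {"id": order_id, "source": "public"}


svc = OrderServiceFacade(PrivateCloudOrders())
print(svc.get_order(42)["source"])

# Re-point the component during a migration; calling code is unchanged.
svc = OrderServiceFacade(PublicCloudOrders())
print(svc.get_order(42)["source"])
```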

One specific strategy on the data side is to manage carefully any application access to a mixture of persistent and non-persistent data. Componentize applications so that access to transactional or dynamic data is constrained to as few components as possible; components with real-time data needs will likely be harder to distribute for efficient operations. Likewise, avoid mixing persistent and non-persistent data APIs within a single component where you can.

The use of design patterns (like the façade example) is a powerful and flexible way of framing resources transparently when the low-level APIs don't provide all the control developers would like. For example, an application component, wherever it's hosted, will need access to its data. If that data is a mixture of static and dynamic, split it by type; if the component moves to the public cloud, move the static data with it to keep access efficient. Abstracting data access this way insulates components from resource details, which is essential if developers are going to keep their hosting options open.
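The split-by-type idea can be sketched as a small data façade. The store and method names are hypothetical; the design point is that static data lives in a replica that can travel with the component, while dynamic data stays behind a single transactional interface:

```python
# Sketch: splitting data access by type behind a unified facade, so
# static data can move with a component while dynamic data stays put.
class StaticStore:
    """Read-only reference data, replicated alongside the component."""

    def __init__(self, rows):
        self._rows = dict(rows)

    def get(self, key):
        return self._rows[key]


class DynamicStore:
    """Transactional data that stays near the system of record."""

    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows[key]


class DataFacade:
    """Unified access; callers don't see where each data type lives."""

    def __init__(self, static, dynamic):
        self._static = static
        self._dynamic = dynamic

    def read_reference(self, key):
        return self._static.get(key)

    def write_txn(self, key, value):
        self._dynamic.put(key, value)

    def read_txn(self, key):
        return self._dynamic.get(key)


facade = DataFacade(StaticStore({"tax_rate": 0.07}), DynamicStore())
facade.write_txn("order-1", {"total": 100})
print(facade.read_reference("tax_rate"), facade.read_txn("order-1"))
```

If the component is later rehosted, only the façade's wiring changes: the `StaticStore` replica moves with it, and `DynamicStore` calls become remote.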

If it's necessary to build cloud bursting or failover into existing applications, design patterns are important in ensuring that horizontal scaling and load balancing are consistently accommodated. Experience shows that retrofitting resource transparency on a per-application basis generates a variety of one-off solutions, and testing and validating them all is a headache. It's also much harder to manage the application-to-resource relationship if every component goes its own way with respect to resource use. While changing applications to use new design patterns instead of their older APIs takes up-front work, it may pay off quickly in reduced ALM and operations costs and improved resource agility.

The cloud -- whether public, private or hybrid -- is not the goal; resource-independent hosting of application components is. As cloud applications evolve from simple migration of underused servers to cloud-specific development, the benefits of optimally balancing private IT and public cloud will increase. So will opportunities to exploit transparency through new APIs and application models, so what developers and architects learn from the transition from private to hybrid cloud will prepare them for the future of IT.

Next Steps

Hybrid cloud storage solutions eliminate common issues

Get the facts about private cloud

Know the benefits and drawbacks of a hybrid cloud model

This was last published in January 2015


Join the conversation



Are you planning on extending your private cloud? What are your concerns?
Although my plan is to extend my private cloud for off-site storage of important files, I do have some concerns about external cloud performance and security. Recent "celebrity hacks" of private photos and information have brought attention to the fact that files uploaded to the cloud may not be as secure as one would hope. To work around this, I plan to ensure that my cloud provider uses double-blind encryption and authentication when attempts are made to access my data.
Can you expand upon what this means? "Private cloud users have a big advantage because their internal IT is already based on a cloud abstraction." What cloud abstraction does IT already have in place?