The cloud is based on shared resources, and controlling how that sharing happens is where private cloud APM differs from public cloud APM.
When you're dealing with cloud application performance monitoring (cloud APM), start with which resources are consumed and how. To ensure gains, understand application resource consumption in detail, target performance enhancements where they'll count, and balance specializing performance tools for some applications against the need for a large, uniform resource pool in the private cloud.
Applications consume CPU capacity, memory, storage I/O, and network I/O. If any of these resources are oversupplied to gain performance, the economic benefit of the cloud is reduced. Since applications vary in terms of how much of each resource type they need, it's important to start private cloud APM planning by auditing resource usage for every separately hosted component, using any of the widely available resource monitoring tools.
The next step is to plot the resource usage for every resource type. Declare the center third of the distribution the "average usage," the high side of the curve the "heavy usage" and the low side the "light usage." Then, for each resource type, assign each application a score of 0 for being in the light-usage class, 1 for being in the average-usage class, and 2 for being in the heavy-usage class. With four resource types, combined scores range from 0 to 8. Replot the applications based on their combined scores.
The distribution of this last combined-usage plot will tell developers how much APM they really need. If a company's applications all tend to have scores of 4 or less, its private cloud APM planning will be based on optimizing the average state. If a company has a significant population of applications with a score greater than 6, developers need to optimize for heavy usage. In between, developers will be augmenting the averages.
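The scoring scheme above can be sketched in a few lines of Python. This is a minimal illustration, not a monitoring tool: the application names and normalized usage figures are hypothetical, and the thirds are computed with simple tercile cut points.

```python
# Sketch of the 0/1/2 scoring scheme: each application is scored per
# resource type by which third of the distribution it falls into, then
# the per-resource scores are summed into a combined score (0-8).
from statistics import quantiles

RESOURCES = ["cpu", "memory", "storage_io", "network_io"]

# Hypothetical audit data: normalized usage per resource, per application.
usage = {
    "app-a": {"cpu": 0.9, "memory": 0.8, "storage_io": 0.7, "network_io": 0.9},
    "app-b": {"cpu": 0.4, "memory": 0.5, "storage_io": 0.4, "network_io": 0.5},
    "app-c": {"cpu": 0.1, "memory": 0.2, "storage_io": 0.1, "network_io": 0.2},
}

def classify(values):
    """Return a scorer mapping a value to 0/1/2 by bottom/middle/top third."""
    lo, hi = quantiles(values, n=3)  # boundaries of the middle third
    return lambda v: 0 if v < lo else (2 if v > hi else 1)

def combined_scores(usage):
    scores = {app: 0 for app in usage}
    for res in RESOURCES:
        rank = classify([u[res] for u in usage.values()])
        for app, u in usage.items():
            scores[app] += rank(u[res])
    return scores

print(combined_scores(usage))
```

With this sample data, app-a lands in the heavy third on every resource (combined score 8), app-b in the average third (4), and app-c in the light third (0), which is exactly the kind of spread the replot is meant to reveal.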
The baseline strategy for private cloud APM is to identify a target application class ("average" or "heavy"), then balance the number of applications per host to achieve good utilization (usually 50% to 70%). Developers should size their resources to fit that target, increasing server memory as needed, using faster disks or storage networks, etc. In either case, the resource pool will offer fairly uniform resources per virtual machine or container, so developers can place applications or components based on current resource loads without further considering specific application needs.
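The apps-per-host balancing step can be sketched as a small calculation. The capacity units and per-application demand figure here are hypothetical; the 50%-70% band comes from the baseline strategy above.

```python
# Sketch: pick the number of applications per host that lands utilization
# in a target band (assumed 50%-70% here, per the baseline strategy).
import math

def apps_per_host(per_app_demand, host_capacity, low=0.50, high=0.70):
    """Return the largest app count whose utilization stays at or below the
    ceiling, or None if no count lands inside the [low, high] band."""
    count = max(1, math.floor(high * host_capacity / per_app_demand))
    utilization = count * per_app_demand / host_capacity
    if utilization < low:
        return None  # no count fits the band; resize the host instead
    return count

# e.g. apps demanding 8 capacity units each on a 100-unit host:
# 8 apps -> 64% utilization, inside the 50%-70% band
print(apps_per_host(8, 100))
```

When the function returns None, no whole number of applications lands in the band, which is a signal to resize the hosts rather than force the placement.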
The "between" state indicates that developers have some applications that require considerably more resources than others. When a team is in that situation, the goal is to decide how to allocate resources to applications or components in a way that will be a bit more effective than just picking an available VM or container. One way to do that is to allocate applications to hosting points based on resource conservation.
In a resource conservation approach, the goal is to allocate an application component to a server where the application's resource needs will leave the largest remaining capacity for future applications. This means picking a location where the average utilization of capacity after the application is loaded will be lowest. That way, other applications or components are less likely to encounter resource issues. Where practical, the resource-conservation approach can be made even more effective by allocating the most resource-intensive applications to hosting points first.
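The resource-conservation placement described above can be sketched as a greedy algorithm: sort applications by total demand (heaviest first), then assign each to the feasible host whose average utilization after loading is lowest. Host capacities and application demands are hypothetical four-tuples of abstract units (CPU, memory, storage I/O, network I/O).

```python
# Sketch of resource-conservation placement: heaviest apps first, each
# placed on the host with the lowest post-load average utilization.

def avg_utilization(load, capacity):
    return sum(l / c for l, c in zip(load, capacity)) / len(capacity)

def place(apps, hosts):
    """Assign each app to the host whose average utilization after
    loading it is lowest, handling the most demanding apps first."""
    loads = {h: [0.0] * 4 for h in hosts}
    placement = {}
    # Heaviest first: sort by total demand, descending.
    for name, demand in sorted(apps.items(), key=lambda kv: -sum(kv[1])):
        feasible = [
            h for h, cap in hosts.items()
            if all(l + d <= c for l, d, c in zip(loads[h], demand, cap))
        ]
        if not feasible:
            raise RuntimeError(f"no host can fit {name}")
        best = min(
            feasible,
            key=lambda h: avg_utilization(
                [l + d for l, d in zip(loads[h], demand)], hosts[h]
            ),
        )
        loads[best] = [l + d for l, d in zip(loads[best], demand)]
        placement[name] = best
    return placement

hosts = {"host-1": (16, 64, 100, 10), "host-2": (16, 64, 100, 10)}
apps = {
    "big": (8, 32, 60, 6),
    "mid": (4, 16, 20, 2),
    "small": (2, 8, 10, 1),
}
print(place(apps, hosts))
```

In this sample, the heavy "big" application claims host-1, after which placing "mid" and "small" on host-2 leaves host-1's remaining capacity least fragmented, which is the conservation goal.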
Resource-conservation strategies can fail to deliver uniform application quality of experience if the heaviest resource use is much heavier than average. There can be two reasons for this: Applications may need specialized resources because of extremely high use of capacity, or there may be enough outlying heavy-use applications or components to make it difficult to find a place with resources to run them. The response to this situation depends on the incremental cost of capacity for the highly utilized resources. This cost has to be balanced against the difficulties in securing resource efficiency if the resource pool is segmented.
If the scarce resource is relatively inexpensive -- memory, for example -- IT may want to simply add, in this case, memory per server to make the resource more available, which could then reduce the complexity of trying to optimize resource use when picking an application and component hosting point. When it's more expensive to add resources (upgrading an entire data center network for more speed, for example), it is tempting to augment the scarce resource for a select subset of IT's resource pool to contain costs. That can contradict the most basic principle of cloud economics: resource equivalence.
A pool of resources is more efficient than a single resource because it can be selectively optimized by picking hosting points to conserve resources. When a company has a multi-pool strategy, the number of possible hosting points for each type of application based on resource needs may be too small to achieve any utilization efficiency -- a situation often described as losing economy of scale. It may be better to move a few highly demanding applications off the cloud and host them on dedicated servers rather than incur cloud software and management overhead and increase operations complexity. Sometimes a private cloud is enhanced as much by what's not included as by what is.
As a final point, remember that platform software tools for network acceleration, application acceleration and storage optimization can always be applied to private cloud APM. If most of a company's applications are storage-intensive, developers should plan to use optimized storage hierarchies and faster storage area networks; the same logic applies to the network. Developers also can apply more general principles of application componentization and design to improve performance -- the same techniques that worked for the public cloud. Just be sure not to do too much at once; a strategy has to be clearly defined on its own terms before it can be followed effectively.