What do we want? Better apps. When do we want them? Now.
You won't hear that chant from users on any picket line, but helping developers to create better apps faster is the mission of Lee Atchison, principal cloud architect and advocate at New Relic Inc., based in San Francisco. His singular job is to understand and drive the industry in the areas of cloud architecture, microservices, scalability and availability. In a keynote presentation, he spoke to a standing-room-only crowd at New York's Cloud Expo about how highly available, highly scalable systems can help developers attain the goal of better apps faster.
Atchison has been there. He built Amazon's first app store, used for downloading software and video games. Developers, however, might know him as the man behind Amazon Web Services' Elastic Beanstalk platform-as-a-service offering. Scalability and availability are his two main focuses; he has the track record to back it up.
The cloud is generally used in two different ways: to merely build a better data center, which Atchison dubbed the static approach, and to create a dynamic environment that can grow and change in response to business requirements or market circumstances.
Static cloud strategy
In using the cloud as a better data center, Atchison said, "We allocate resources such as servers to specific uses, just like a traditional data center." For example, if a new service requires 20 servers, then 20 server instances are spun up. "This is a common, but very static, way of building up a cloud infrastructure." The speed at which those resources can be provisioned is the model's main draw, though capacity planning is still required.
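The capacity planning Atchison describes can be sketched as a simple sizing calculation. This is a hypothetical illustration of the static model, not code from the talk: a fleet is sized once from forecast peak load, provisioned, and left running. The function name and headroom figure are assumptions for the example.

```python
import math

def servers_needed(peak_requests_per_sec: float,
                   capacity_per_server: float,
                   headroom: float = 0.25) -> int:
    """Size a static fleet from forecast peak load plus safety headroom."""
    required = peak_requests_per_sec * (1 + headroom) / capacity_per_server
    return max(1, math.ceil(required))

# A forecast peak of 4,000 req/s at 250 req/s per server with 25% headroom
# yields the 20-instance fleet from the example above; it is provisioned
# once and stays up regardless of actual demand.
fleet_size = servers_needed(4000, 250)
print(fleet_size)  # 20
```

The point of the static model is that this arithmetic happens before deployment; the cloud only makes the resulting provisioning step fast.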
Even though this strategy is tantamount to picking up an on-premises infrastructure and relocating it to the cloud, its key advantage is the ability to add capacity quickly. "Clearly, this is important for building applications faster," Atchison said.
When employing a static cloud model, it is the operations staff, more than developers, who are affected, because they are responsible for adding capacity and redundancy. "They care about how services perform in the cloud," Atchison said. "Developers don't care where operations runs my services, as long as they run them. In this model, developers don't care anything about the cloud." In the end, this model is used primarily to get applications up and running faster, though not necessarily better.
"Like a traditional data center, resources are allocated to uses, but provisioning is faster, and the lifetime of components is relatively long," Atchison said. "Capacity planning is still important and still applies."
-- Lee Atchison, principal cloud architect and advocate, New Relic
When doing a lift-and-shift infrastructure relocation, monitoring is essential to ensure peak performance, according to Atchison. That admonition shouldn't come as a surprise, considering he works for New Relic, a company that provides cloud infrastructure and application monitoring services. The advantage to a cloud architecture, he said, is that monitoring can run as software as a service, without the need to build instrumentation into individual applications.
A question Atchison is asked frequently concerns Amazon's CloudWatch monitoring service, and why it might not be comprehensive enough for development and operations environments. "CloudWatch monitors a low-level infrastructure, the virtual hardware, but it does nothing for monitoring the rest of the server or the application running on the server," he said. "You can't see memory or file-system utilization. It knows only about the low-level virtualization layer and nothing more." To understand the nuances of application performance, CloudWatch is not the right answer, he explained. The best service is a combination of application and server monitoring, available not just from New Relic, but other providers as well.
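The gap Atchison describes is between virtual-hardware metrics and application-level metrics. A minimal sketch of the latter might look like the decorator below. This is illustrative only: APM agents such as New Relic's collect these timings automatically, and the `timed` decorator and in-memory metric store here are hypothetical names invented for the example.

```python
import time
from collections import defaultdict

# metric name -> list of observed call durations, in seconds
metrics = defaultdict(list)

def timed(name):
    """Record the wall-clock duration of each call under a metric name."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                metrics[name].append(time.perf_counter() - start)
        return inner
    return wrap

@timed("checkout.handle_request")
def handle_request(order_id):
    # Stand-in for real request-handling work.
    return {"order": order_id, "status": "ok"}

handle_request(42)
print(len(metrics["checkout.handle_request"]))  # 1 timing sample recorded
```

Per-request timings like these are exactly what an infrastructure-only view such as CloudWatch's virtualization-layer metrics cannot see.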
The dynamic cloud
While there is nothing intrinsically wrong with a static cloud strategy, it falls short of leveraging the full power and breadth of the modern cloud, and it does not fully empower developers to create better applications faster. The answer is dynamically scalable systems.
"In a dynamic cloud environment, you allocate only those resources that you need at the moment you need them," Atchison said. "You allocate these resources on the fly, and resource allocation is not a static process managed by operations, but it is a dynamic process that's built into the core nature of your application." In employing this scalable systems approach, an application knows and understands the resources it requires to operate optimally, and allocates and deallocates those resources in real time. "This is under control of the application, not of the operations team," he said.
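The application-driven allocation Atchison describes can be sketched as a scaling decision the application makes from its own observed load. The function below is a hedged illustration, not any real autoscaling API; `scale_delta` and its parameters are invented names for the example.

```python
import math

def scale_delta(current_instances: int,
                observed_load: float,
                capacity_per_instance: float,
                min_instances: int = 1) -> int:
    """Return instances to add (+) or remove (-) to match capacity to load now.

    In the dynamic model this decision runs continuously inside the
    application, not as a one-time plan made by operations.
    """
    desired = max(min_instances,
                  math.ceil(observed_load / capacity_per_instance))
    return desired - current_instances

# Load spikes: 20 instances serving 6,000 req/s at 250 req/s each -> add 4.
print(scale_delta(20, 6000, 250))   # 4
# Load drops overnight to 500 req/s -> release 18 instances.
print(scale_delta(20, 500, 250))    # -18
```

The contrast with the static sizing done up front is that this calculation repeats in real time, and its output drives actual allocation and deallocation.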
Ultimately, the way applications work is changing, and the use of dynamic allocation, including microservices and containers, is now an essential part of any successful cloud application development effort, Atchison said. Just as servers are now provisioned in minutes instead of weeks, microservices and containers allow processes within applications to be dynamically provisioned and deprovisioned in milliseconds, as needed, driven by customer requirements. "That is the secret of second-generation cloud growth." Combining ultrafast infrastructure provisioning for development and testing with the ability to spin microservices up and down instantaneously forms the basis upon which developers can create better applications faster.