Google has undertaken a major overhaul of its cloud architecture to address some of the limitations of Java EE and improve cloud development practices. In addition, Google has been helping to improve the performance of the Jetty Web server that underpins many cloud services.
The cloud enables organizations to iterate on applications quickly and with a low cost of entry, said Alexis Moussine-Pouchkine, who works in developer relations at Google, at the JavaOne Conference in San Francisco. But to get the most from these benefits, organizations need to think about architecting around microservices, many of which are written in Java.
While Java EE does have a profile for the Web, it is a poor match for the cloud, Moussine-Pouchkine argued, and he believes more work is required to evolve a cloud-enabled profile. Google App Engine addresses these concerns by exposing only a limited subset of the Java EE application programming interfaces (APIs). Some APIs were left out for security reasons, and others just didn't make sense in the cloud, he said.
Making the most of cloud services
A good cloud application development practice is for enterprise architects and software developers to deliberately design for state resilience in cloud-based applications, Moussine-Pouchkine said. Cloud failures are not anomalies: at any given moment, some hard drives and network switches will have failed, so reliability has to be implemented at an abstraction level above the physical hardware. This can be done using tools such as distributed data stores and distributed file systems. "In the cloud, you should consider that to be the inspiration for your architecture," Moussine-Pouchkine said.
Another thing to consider is that instances of microservices can come and go at any time, so a request sent to a server may not reach the same instance the next time around. Many architects focus on building stateless applications, but Moussine-Pouchkine said the state still needs to live somewhere, which requires a dedicated tier. He advocates implementing that tier in an in-memory database or distributed cache rather than in the application tier.
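As a minimal sketch of this pattern, the Java below keeps a request handler completely stateless by pushing all session state through a small `StateStore` interface. The interface, class names and the shopping-cart example are illustrative assumptions, not any specific product's API; in production the store would be backed by a networked cache or in-memory database reachable from every instance, while here an in-memory stand-in makes the sketch runnable.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical abstraction over the dedicated state tier. In a real
// deployment this would front a distributed cache or in-memory database.
interface StateStore {
    void put(String key, String value);
    String get(String key);
}

// Local stand-in for the shared tier, used only to make the sketch runnable.
class InMemoryStateStore implements StateStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

// A stateless handler: it holds no per-session fields, only a reference to
// the external store, so any instance can serve any request for any session.
class CartHandler {
    private final StateStore store;
    CartHandler(StateStore store) { this.store = store; }

    void addItem(String sessionId, String item) {
        String cart = store.get("cart:" + sessionId);
        store.put("cart:" + sessionId, cart == null ? item : cart + "," + item);
    }

    String cartFor(String sessionId) {
        return store.get("cart:" + sessionId);
    }
}
```

Because the handlers share the store rather than their own memory, two different instances can serve consecutive requests for the same session and still agree on its state, which is exactly what lets instances come and go freely.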
Another good practice is to implement services rather than plain APIs, Moussine-Pouchkine noted. The distinction is that services are deployed as fully configured running instances; examples include caching services, data services and message-queuing services. This approach lets cloud applications scale up and down more quickly and with greater resilience.
Leveraging Jetty for quicker cloud services
One approach being used to quickly spin up new microservices is the open source Jetty Web server and Java Servlet container. Google is making extensive use of Jetty as part of its cloud strategy. Greg Wilkins, Jetty project leader at the Eclipse Foundation and Webtide chief architect at Intalio, said Jetty is ideal for the cloud owing to its small footprint, efficiency and modular deployment capabilities.
Wilkins argued that one example of Java EE's poor fit for the cloud is the large number of dependencies involved in deploying new applications. The Java EE approach of bundling a large number of Java Archive (JAR) files helps developers get up and running quickly, but it comes at the cost of increasing the time required to spin up new instances.
The problem is the overhead of scanning all of those JAR files as new instances spin up. There is a lot of work to be done to find all this stuff, Wilkins said. The net result of longer startup times for cloud-based services is that the enterprise pays for more CPU cycles during startup, and microservices must be over-provisioned to meet service-level agreements.
Another problem is that Java EE's larger code base leaves more room for security vulnerabilities, Wilkins said. Organizations need to be diligent in scanning these larger deployments to ensure unintended servlets are not included, which can be a major problem after thousands of instances of an application have been deployed. Jetty's modularity can help streamline the deployment of new microservices by paring the JAR files down to the bare minimum required.
A number of extensions and a quick-start mode are being added to Jetty to speed the deployment of microservices in the cloud. These will make it possible to spin up a new instance of a service in under a second, compared with 30 seconds or more today. The Jetty team is also working with Google to refine these improvements for even better performance.
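As a rough illustration of this modular, scan-avoiding startup (module names follow recent Jetty distributions and may differ from the versions discussed here), a Jetty `start.ini` can enable only the pieces a microservice actually needs:

```ini
# start.ini -- enable only the modules this service actually needs
--module=http        # plain HTTP connector, nothing else
--module=deploy      # web application deployer
--module=quickstart  # reuse pre-computed scan results instead of
                     # re-scanning every JAR at each startup
jetty.http.port=8080
```

Trimming the module list shrinks both the JAR set to be scanned and the attack surface, which is the combination Wilkins describes above.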
Combining the best of App Engine and Compute Engine
Google plans to use these improvements as part of its new Docker-enabled Google Cloud Platform. This new architecture combines some of the modular benefits of the Google App Engine and the flexibility of the Google Compute Engine.
One of the biggest complaints about Google App Engine was that it was difficult to access libraries such as the Abstract Window Toolkit through it. The new Google Cloud Platform will bring the best features together into what Google is calling a managed virtual machine (VM), which Ludovic Champenois, Google Cloud Platform tech lead, said is different from a traditional infrastructure-as-a-service or platform-as-a-service cloud offering. The new managed VM service is now available for private access, and Champenois said Google hopes for a general release in early November.
The Google Cloud Platform will add support for longer-running processes, with a 24-hour maximum request time compared with the 60-second maximum request time App Engine imposes today. The new platform will also allow background processes, Secure Shell (SSH) debugging, writing to a local disk and a customizable server stack.
Rather than Web archive (WAR) files, developers will deploy new applications to the Google Cloud Platform as Docker containers bundling the application, a Jetty server and a streamlined version of the Linux OS. This approach will also make it possible for Google to offer pre-tested Docker container images with the specific versions of Jetty, the OS and the Java Development Kit best suited to particular use cases.
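A sketch of what such a container build might look like follows; the base image tag, file names and paths are illustrative assumptions, not Google's actual artifacts:

```dockerfile
# Illustrative sketch only: image tag, file names and paths are assumptions.
FROM eclipse-temurin:17-jre-jammy      # slim Linux base with a Java runtime
COPY jetty-home /opt/jetty             # a trimmed Jetty distribution
COPY myapp.war  /opt/jetty/webapps/    # the application itself
WORKDIR /opt/jetty
EXPOSE 8080
CMD ["java", "-jar", "start.jar"]
```

Because the OS, runtime and server travel inside the image, the same container that runs on a developer's laptop is what gets deployed, which is the portability point the Red Hat effort below depends on.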
Red Hat has been working to create a Dockerized container implementation of the Google App Engine so developers will be able to code locally on a Mac, PC or Linux platform. Developers can then push this version to the cloud with the assurance that it will work well.
Taking advantage of this will require developers to learn new skills around building applications with Docker containers, Champenois said. Enterprise architects also need to consider new approaches for orchestrating services that span multiple servers; tools like Kubernetes and Apache Mesos could help with this.
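As a rough sketch of what such orchestration looks like in practice (written in present-day Kubernetes manifest syntax, which postdates the tooling described here; the names and image are illustrative), a deployment keeping three instances of a Jetty-based service running across a cluster might be declared as:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jetty-service            # hypothetical service name
spec:
  replicas: 3                    # the scheduler spreads these across servers
  selector:
    matchLabels:
      app: jetty-service
  template:
    metadata:
      labels:
        app: jetty-service
    spec:
      containers:
      - name: app
        image: example/jetty-app:1.0   # illustrative container image
        ports:
        - containerPort: 8080
```

The orchestrator, not the operator, decides which server each container lands on and replaces instances that fail, echoing the earlier point that instances come and go at any time.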