
How TensorFlow helps deep learning applications in the cloud

TensorFlow optimizes the development and deployment of machine learning algorithms. But how can it leverage the cloud to build deep learning apps? Expert George Lawton explains.

Deep learning applications are quickly moving to the forefront, as they transition from research to mainstream application development. Enterprise architects should consider how they might use a new generation of artificial intelligence algorithms that leverage data from cloud infrastructure. Cloud-based tools for implementing AI promise to streamline the software development lifecycle of AI apps and enable new classes of applications, said experts at the Spark Summit in San Francisco.

Deep learning is an approach to automatically generating models from data for tasks such as speech recognition, image classification or spam filtering. Researchers have traditionally had to manually code the AI applications that leverage deep learning. But now, cloud infrastructure providers like Google, Microsoft and IBM are developing new machine learning architectures, such as Google Brain, that automate this process. Jeff Dean, a senior fellow at Google, said, "We now have a good handle on how to store and perform computations on large data sets." This promises to make it easier to create applications that understand the meaning of the underlying data.

How deep learning is being used today

This kind of transformation makes it possible to automatically classify and tag photos, improve recommendations and extract the meaning of spoken words. "If I am speaking to you, you want the machine to understand what I am saying. Neural networks can learn a complicated function, like taking raw pixels and outputting a single word, like cat," Dean explained.
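
As a rough illustration of that pixels-to-label mapping, here is a minimal sketch using TensorFlow's Keras API and a pretrained ImageNet classifier. The image path is a placeholder, and this is only an illustration, not the system Dean describes.

    # Minimal sketch: map raw pixels to a single label using a pretrained
    # ImageNet classifier from the Keras API. "photo.jpg" is a placeholder.
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.MobileNetV2(weights="imagenet")

    # Load and preprocess one image into the 224x224 input the model expects.
    img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])

    # Forward pass: raw pixels in, class probabilities out.
    preds = model.predict(x)
    label = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0]
    print(label)  # e.g. ('n02123045', 'tabby', 0.84)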

Some machine learning programs at Google have included:

  • A 30% reduction in the speech recognition error rate.
  • A service that automatically tags Google Photos into categories and by events, like birthdays.
  • A rooftop solar calculator, based on Google Maps data, that automatically tells a homeowner the expected solar power capacity given roof size, angle and average sun.
  • A smart reply feature for Google Inbox that suggests responses and is now used in more than 10% of all email replies.

How new algorithms scale machine learning

Neural networks are loosely based on the current understanding of the brain. These tools leverage high-level programming abstractions that behave vaguely like human neurons. These collections of neurons learn to cooperate in order to accomplish high-level tasks. While the function of each neuron is relatively simple, the composition of millions of neurons is quite powerful. "One of the nice properties of neural nets is that the results get better with more data, bigger models and more computations," Dean said.
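
To make that composition concrete, here is a minimal sketch in TensorFlow's Keras API: a single unit is just a weighted sum followed by a nonlinearity, and a usable model is simply many such units stacked into layers. The layer sizes below are arbitrary and chosen only for illustration.

    # Sketch of "simple units, powerful composition": each Dense unit computes
    # only a weighted sum plus a nonlinearity, but stacking layers of them
    # lets the network learn a complicated mapping.
    import tensorflow as tf

    # One artificial "neuron": y = relu(w . x + b)
    single_unit = tf.keras.layers.Dense(units=1, activation="relu")
    print(single_unit(tf.constant([[1.0, 2.0, 3.0]])))  # a single output value

    # A small network: thousands of such units composed layer by layer.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(784,)),              # a flattened 28x28 image
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),  # scores for 10 classes
    ])
    model.summary()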

One of the big challenges: Training is a computationally and data-intensive process. Until recently, it was difficult to run these types of applications across multiple servers. Google developed a deep learning pipeline, called TensorFlow, which optimizes the development and deployment of machine learning algorithms. "TensorFlow is a system that allows us to express a machine learning idea and provision them at scale ... We are hoping that TensorFlow becomes a standard way of expressing machine learning ideas and computations," Dean said.
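
As a rough sketch of what "expressing a computation" looks like in today's TensorFlow API, which differs in detail from the version Dean described, tf.function traces ordinary Python into a dataflow graph that TensorFlow can then place on a CPU, GPU or TPU. The function and tensor shapes below are invented for illustration.

    # Sketch: express a computation once and let TensorFlow trace it into a
    # dataflow graph that can run on a CPU, GPU or TPU.
    import tensorflow as tf

    @tf.function  # traces the Python function into a reusable graph
    def dense_layer(x, w, b):
        return tf.nn.relu(tf.matmul(x, w) + b)

    x = tf.random.normal([32, 128])  # a batch of 32 feature vectors
    w = tf.random.normal([128, 64])
    b = tf.zeros([64])

    y = dense_layer(x, w, b)  # runs the traced graph
    print(y.shape)            # (32, 64)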

TensorFlow was released as an open source project in November 2015 and quickly became the most widely used machine learning project on GitHub. Other leading deep learning tools include Caffe, Torch and Theano.

There was not a lot of perceived need at Google when Dean pioneered the initial work. That began to change once his team streamlined the infrastructure for provisioning new deep learning applications, and over time more teams within Google picked up the tools. Quite often, a model developed for one domain, like speech recognition, could be repurposed for another with a different set of training data. Now, the core TensorFlow tooling is being used to improve a variety of applications, including Android apps, drug discovery and automatic replies in Gmail.

Deep learning development pipeline

The core of TensorFlow is a graph execution engine, which makes it possible to run machine learning algorithms across different servers or devices. Developing deep learning applications typically involves two main phases. The training phase can run on a specialized cluster in the cloud or on a GPU-enhanced PC or server; it produces an optimized model for a particular task, like translating speech into text. In the deployment phase, that optimized model is deployed onto a cloud service, smartphone or PC. Movidius, a machine learning chip maker, has even crafted a dedicated USB stick optimized for running these algorithms on a PC.
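
A minimal sketch of those two phases using the current Keras API is shown below. The model, random training data and export paths are placeholders, and older TensorFlow versions expose the export step as model.save() or tf.saved_model.save() instead of model.export().

    # Sketch of the two phases: train a model, then export an optimized
    # artifact that a cloud service or phone can load for inference.
    # The training data here is random stand-in data, not a real data set.
    import numpy as np
    import tensorflow as tf

    # --- Training phase (would normally run on GPUs in the cloud) ---
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    x = np.random.rand(1000, 20).astype("float32")
    y = np.random.randint(0, 2, size=(1000,))
    model.fit(x, y, epochs=2, verbose=0)

    # --- Deployment phase: export a SavedModel for serving in the cloud ---
    model.export("exported_model")  # recent TF; older versions use model.save()

    # ... or convert it to TensorFlow Lite for a smartphone or edge device.
    converter = tf.lite.TFLiteConverter.from_saved_model("exported_model")
    with open("model.tflite", "wb") as f:
        f.write(converter.convert())

The same exported SavedModel can also be loaded by a serving system such as TensorFlow Serving, which is what makes the split between a heavyweight training cluster and a lightweight deployment target practical.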

One key focus of Dean's team has been reducing the turnaround time for experiments. A deep learning training run can take weeks or months on a single server; running the same experiment on a cluster in the cloud cuts that to a few days, which Dean said is tolerable. They accomplished this by applying computational resources to a model in parallel, which yields a nearly linear improvement in processing speed as GPUs are added. For example, using 50 GPUs instead of one delivers roughly a 30-fold increase in performance.
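
That scaling comes from data-parallel training. A minimal sketch with TensorFlow's tf.distribute.MirroredStrategy, which replicates the model across the GPUs visible on one machine, looks like this; the model and data are stand-ins, and Google's internal setup may differ.

    # Sketch of data-parallel training: replicate the model across all local
    # GPUs and split each batch between them, which is what gives the
    # near-linear speedup described above. Data is random stand-in data.
    import numpy as np
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()   # one replica per visible GPU
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():                        # variables are mirrored on each GPU
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(100,)),
            tf.keras.layers.Dense(512, activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    x = np.random.rand(10000, 100).astype("float32")
    y = np.random.randint(0, 10, size=(10000,))
    # Scale the global batch size with the number of replicas.
    model.fit(x, y, epochs=1, batch_size=64 * strategy.num_replicas_in_sync)

Scaling the global batch size with the replica count keeps the per-GPU workload constant, which is the usual design choice when adding accelerators to shorten an experiment.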

Different approaches

Dean said enterprise architects can take four approaches to get started with machine learning using TensorFlow:

  • Use a cloud API, like vision or speech, as part of a new application.
  • Run a pretrained model developed by the enterprise.
  • Use an existing machine learning model and fine-tune it for enterprise data (a minimal sketch follows this list).
  • Develop a new machine learning model that leverages the TensorFlow library.
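
As an illustration of the third approach, here is a minimal transfer-learning sketch using the Keras API: freeze a pretrained image model and train only a small classification head on enterprise data. The class count and the data pipeline are placeholders for whatever the enterprise data set actually contains.

    # Sketch of fine-tuning an existing model: reuse a pretrained feature
    # extractor and train only a new classification head on your own data.
    import tensorflow as tf

    NUM_CLASSES = 5  # placeholder: number of categories in the enterprise data

    base = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False, pooling="avg")
    base.trainable = False  # freeze the pretrained feature extractor

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # new head
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # train_ds would be a tf.data.Dataset of (image, label) pairs built from
    # the enterprise data, e.g. via tf.keras.utils.image_dataset_from_directory.
    # model.fit(train_ds, epochs=5)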

"These models are making a big difference. If you are not considering how to use neural networks in your vision or understanding project, you probably should be," Dean concluded.

Next Steps

Reap the benefits of putting in place machine learning applications

Machine learning can help big data and IoT, but AI apocalypse is not imminent

How security can benefit from machine learning in the cloud
