Google Designed Chip for Its AI

GOOGLE HAS DESIGNED its own computer chip for the AI technology that is reinventing the way Internet services operate. This morning at Google I/O, the centerpiece of the company’s year, CEO Sundar Pichai said that Google has designed an ASIC (application-specific integrated circuit) tailored to deep neural networks. These are networks of hardware and software that can learn specific tasks by analyzing vast amounts of data. Google uses neural nets to identify objects and faces in photos, recognize the commands you speak into Android phones, and translate text from one language to another. This technology is even transforming the Google search engine.
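To make "learning by analyzing data" concrete, here is a hypothetical sketch (not Google's code, and far smaller than any production neural net): a single artificial neuron that learns the logical OR function from four examples by repeatedly nudging its weights to reduce its prediction error.

```python
import numpy as np

# Hypothetical sketch: one artificial neuron learns the logical OR function
# from labeled examples. Real neural nets stack millions of such units.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 1, 1, 1], dtype=float)                      # desired outputs

w = rng.normal(size=2)   # weights, adjusted during training
b = 0.0                  # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                  # repeated passes over the data
    p = sigmoid(X @ w + b)             # current predictions
    grad = p - y                       # error signal
    w -= 0.5 * X.T @ grad / len(X)     # nudge weights to shrink the error
    b -= 0.5 * grad.mean()

print((sigmoid(X @ w + b) > 0.5).astype(int))  # → [0 1 1 1]
```

The "analyzing vast amounts of data" in the article is this same loop, scaled up to billions of examples and parameters, which is exactly the workload the TPU is built to accelerate.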


Google calls its chip the Tensor Processing Unit, or TPU, because it underpins TensorFlow, the software engine that drives its deep learning services. This past fall, Google released TensorFlow under an open-source license, which means anyone outside the company can use and modify it. Google has not indicated it will share the designs for the TPU, but outsiders can make use of Google’s machine learning hardware and software via various Google cloud services.


Google says it has been running TPUs for about a year, and that they were developed not long before that. Google is just one of many companies incorporating deep learning into a wide range of Internet services. Facebook, Microsoft, and Twitter are also taking part in this AI-driven transformation. Typically, these Internet giants drive their neural nets with chips called graphics processing units (GPUs), made by companies like Nvidia. But some, including Microsoft, are also exploring the use of chips called field-programmable gate arrays (FPGAs), which can be programmed for specific tasks.


A TPU board fits into the same slot as a hard drive on the massive hardware racks inside the data centers that power Google’s online services, the company says, adding that its own chips provide “an order of magnitude better-optimized performance per watt for machine learning” than other hardware options.


“TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation,” the company says in a blog post. “Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly.”
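The trade-off Google describes, trading numerical precision for speed, can be illustrated with a hypothetical quantization sketch (an assumption for illustration, not the TPU's actual arithmetic): 32-bit floating-point weights are mapped onto 8-bit integers, and the round trip loses only a tiny amount of accuracy. Hardware that operates on 8-bit values needs far fewer transistors per operation than hardware operating on 32-bit floats.

```python
import numpy as np

# Hypothetical sketch of reduced-precision arithmetic: quantize 32-bit float
# weights to 8-bit integers, then restore them. Neural nets tolerate the
# small rounding error, which is why fewer bits (and fewer transistors)
# per operation can suffice.
weights = np.array([-0.82, -0.31, 0.0, 0.47, 0.95], dtype=np.float32)

scale = np.abs(weights).max() / 127.0               # map the range onto int8
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
restored = q.astype(np.float32) * scale

print(q)                                  # the 8-bit representation
print(np.abs(restored - weights).max())   # worst-case rounding error
```

Each value now occupies one byte instead of four, and the worst-case error stays below half a quantization step, small enough that a trained model's predictions are essentially unchanged.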