Since its founding, Google has worked to solve complex problems through computer science and insight.
In fact, the most complex problems often arise in everyday life. It is exciting that Google has gradually become a bigger part of people's lives: Android has just passed 2 billion monthly active devices; YouTube not only has 1 billion users but also sees up to 1 billion hours of watch time; and Google Maps navigation covers more than 1 billion kilometers. None of this growth would have been possible without the great wave of computing shifting to mobile devices. That shift also led Google to rethink its products to fit new interaction patterns, such as multi-touch.
Today, computing faces a new inflection point: the shift from mobile-first to AI-first. As before, Google is trying to conceive of a world in which interacting with technology is more natural and seamless. Take Google Search: it was built on the ability to understand text and web pages. Now, through advances in deep learning, machines' understanding of images, photos, and video is maturing in ways that were previously impossible. Your camera can now "see," and you can talk to your phone and get a response. Speech and vision are becoming as significant to computing as the keyboard and multi-touch.
Of the many new AI developments mentioned above, the Google Assistant is a good example. It already runs on 100 million devices and is playing a growing role. Google Home can now distinguish between different voices, giving users a more personalized experience when interacting with the device. Meanwhile, the smartphone camera can also help users get a lot of things done.
Google Lens is a set of vision-based computing capabilities that can identify what the user is looking at and help the user act on that information. For example, imagine a user crouching on the floor of a friend's home to read the long, complicated Wi-Fi password on the back of a router. The phone can now recognize that the user wants to join the Wi-Fi network, identify the password, and log in automatically. Most importantly, users do not have to learn how to use this functionality: the interaction is more intuitive than copying and pasting between apps on a smartphone.
Google will first bring Google Lens to the Google Assistant and Google Photos, and users can expect to see it in other products later.
All of this requires the right computational architecture. At last year's I/O, Google announced its first-generation TPUs, which let Google's machine learning algorithms run faster and more efficiently. Today, Google announced the next generation: Cloud TPUs, which are optimized for both inference and training and can process large amounts of information. Google will bring Cloud TPUs to Google Compute Engine so that companies and developers can take advantage of them.
Google is committed to putting these rapidly evolving technologies to work for everyone, not just users of Google products. Google believes that if scientists and engineers have better computing tools, they can produce more powerful research and achieve major breakthroughs on complex social problems. At present, however, many obstacles still stand in the way of such breakthroughs.
This is the motivation behind Google.ai, which brings together all of Google's efforts in artificial intelligence to lower the barriers to research and make researchers, developers, and companies in the field more effective.
Google hopes to lower the barriers to artificial intelligence by simplifying the design of neural-network machine learning models. Today, designing neural networks is extremely time-consuming, and the expertise required limits the work to a small community of researchers and engineers. That is why Google created AutoML, which shows that it is possible for neural networks to design other neural networks. Google hopes AutoML will take a capability that only a few PhDs have today and, within three to five years, enable many developers to design neural networks that meet their specific needs.
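In its simplest form, the idea of automating neural-network design can be pictured as a search over candidate architectures. The sketch below is an illustrative assumption, not Google's AutoML implementation (which uses a learned controller trained with reinforcement learning over a far larger search space): it randomly samples depth/width configurations and keeps the one with the best score from a stand-in evaluation function.

```python
import random

# Toy search space: candidate network depths and layer widths.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [32, 64, 128, 256],
}

def evaluate(config):
    """Stand-in for training a candidate network and returning its
    validation accuracy. Here we fake a score that mildly prefers
    deeper, wider networks; a real system would train and measure."""
    return 0.5 + 0.01 * config["depth"] + 0.0005 * config["width"]

def random_search(num_trials=20, seed=0):
    """Sample architectures at random and keep the best-scoring one.
    AutoML-style systems replace this random sampler with a learned
    controller that proposes increasingly promising architectures."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search()
print(best, round(score, 4))
```

The costly step in practice is `evaluate`: each candidate must be trained, which is why architecture search benefits so much from hardware like TPUs.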
In addition, Google.ai has worked with Google researchers, scientists, and developers to tackle problems across many fields, with promising results. For example, machine learning has been used to improve the algorithm that detects the spread of breast cancer to adjacent lymph nodes. Google has also seen artificial intelligence deliver great leaps in the speed and accuracy with which researchers can predict the properties of molecules or sequence the human genome.
This transition is not only about building future devices and pursuing cutting-edge research; Google also believes it can help millions of people today by democratizing access to information and opening up new opportunities. For example, nearly half of employers in the United States say they have trouble filling open positions, while job seekers are often unaware of vacancies right next to them: low-wage service jobs have high turnover and inconsistent titles, which makes them difficult for traditional search engines to surface and filter accurately.
With the new Google for Jobs initiative, Google hopes to help companies connect with potential employees and help people find new work. In the coming weeks, Google will add a new feature to Search that helps people find jobs across different experience requirements and wage levels, including jobs that are traditionally hard to classify and search for, such as retail and service positions.
Google is pleased to see artificial intelligence finally bearing fruit that everyone can enjoy. If Google can make AI technology ever more accessible to the public, whether at the level of tools or of applications, then everyone will benefit from it that much sooner.