Recently, Google Brain research engineer Eric Jang answered the Quora question "Of IBM, Google, Facebook, Apple, Microsoft, and several other companies, who is leading AI research?" His answer drew more than 4,000 views in a very short time. It attracted such wide attention partly because of Eric Jang's sharp writing style, and partly because his answer rebuts an earlier answer by Yann LeCun.
The following is Eric Jang's ranking of these companies, compiled by the Lei Feng network.
First of all, my answer contains some bias, because I currently work at Google Brain and I really like it here. My views are my own and do not represent those of my colleagues or of Alphabet.
Here is how I would rank the "AI research leaders" mentioned in the question:
1, DeepMind
I think DeepMind is clearly number one today.
Their work is highly regarded by the research community and tackles top-level problems such as deep reinforcement learning, Bayesian neural networks, robotics, and transfer learning. They have absorbed a great deal of talent from Oxford and Cambridge, which offer some of the best machine learning programs in Europe. DeepMind has built a diverse team dedicated to general AI research: traditional software engineers who build infrastructure and tools, UX designers who help build research tools, and even an ecologist (Drew Purves) studying far-reaching questions such as the relationship between Earth's ecology and intelligence.
DeepMind is also unmatched at PR and at capturing the public imagination, with results such as DQN on Atari and the history-making AlphaGo. Whenever DeepMind publishes a paper, it tends to top Reddit's machine learning section and Hacker News, which shows how highly the tech community values their work.
2, Google Brain (tied for second)
Yes, I put two Alphabet companies in the top two. Before you roll your eyes, I should say that I also place Facebook and OpenAI in second place. If you don't want to hear me talk about Google Brain, just scroll down (smile).
I greatly respect Yann LeCun (his earlier answer is also very good), but I think his assessment of Google Brain's research is wrong. He said:
"But Google is primarily focused on applications and product development, rather than long-term AI research."
This is absolutely, completely wrong.
TensorFlow, the Google Brain team's best-known product, is just one of the team's many research and development results, and as far as I know the TensorFlow team is the only one within Brain building an external product. Google Brain used to lean more toward applied projects, but today many of its people focus on long-term AI research across every possible subfield of AI, much like Facebook's FAIR and DeepMind.
Facebook's FAIR had 16 papers accepted at the ICLR 2017 conference, of which 3 were selected for oral presentation.
In fact, Google Brain had slightly more papers accepted at ICLR 2017 than Facebook's FAIR: 20 in total, of which 4 were selected for oral presentation. And that does not count work from DeepMind or from other internal Google teams (such as Search, VR, and Photos). The number of accepted papers is not a great metric, but I want to refute the suggestion that Google Brain does not do deep research.
Google Brain is also the most collaborative industrial research organization. I don't think any other research organization in the world, industrial or otherwise, collaborates simultaneously with UC Berkeley, Stanford, CMU, OpenAI, DeepMind, Google X, and many of Google's top product teams.
I believe Google Brain will soon be regarded as a first-class research institution. I received offers from both Google Brain and DeepMind, and I chose the former because I think Google Brain gives me more freedom to design my own research projects while staying in close collaboration with other internal Google teams. I can also participate in some very interesting robotics projects, though I can't disclose more about them.
2, Facebook (tied for second)
FAIR's papers are great. My impression is that they focus more on language problems, such as question answering, dynamic memory, and Turing-test-style dialogue. They occasionally publish papers combining physics-style analysis with deep learning. Obviously, they also do computer vision work. I wish I could say more, but I don't know FAIR well; their reputation is excellent.
Due to the widespread adoption of TensorFlow, Facebook has largely lost the battle of deep learning frameworks. But it remains to be seen whether PyTorch can win back some market share.
2, OpenAI (tied for second)
OpenAI has a roster of star employees, such as Ilya Sutskever, John Schulman, Pieter Abbeel, Andrej Karpathy (of Char-RNN and CNN fame), Durk Kingma (co-inventor of VAEs), Ian Goodfellow (inventor of GANs), and others.
Although they are a small team of fewer than 50 people, they also have a first-class engineering team and have released first-class, very thoughtful research tools such as Gym and Universe. By providing software of a kind that used to exist only inside the tech giants, they bring value to the broader research community. Their practice has also pushed other research teams to start open-sourcing their code and tools.
Given that OpenAI and DeepMind both have first-class talent, I considered tying the two for first place, but OpenAI has not existed for long, so I can't yet be confident enough to put them together. OpenAI has not yet built a system like AlphaGo; Gym and Universe matter to the research community, but they have not reached AlphaGo's heights.
As a small nonprofit research organization building its infrastructure from scratch, OpenAI does not have the abundant GPU resources, robots, and software infrastructure of the big companies. Strong computing power greatly affects what research is possible, and sometimes even which research ideas get considered at all.
3, Baidu (tied for third)
Baidu's SVAIL and IDL institutes are great places to do research, working on a number of promising technologies such as home assistants, aids for the blind, and self-driving cars.
Baidu does face some reputation problems, such as the recent scandal over violating the ImageNet competition rules, low-quality search results linked to the death of a Chinese student from cancer, and the American stereotype that it is merely an imitative technology company.
Still, the company is the strongest player in China's AI field.
3, Microsoft Research (tied for third)
Before the deep learning revolution, Microsoft Research was once the most prestigious place to work. They hired researchers with many years of experience, which partly explains why Microsoft Research missed the deep learning wave: that revolution was largely driven by PhD students.
Unfortunately, almost all deep learning research today happens on the Linux platform, and Microsoft's CNTK deep learning framework has not received as much attention as TensorFlow, Torch, and Chainer.
4, Apple
Apple really is facing problems in recruiting, because researchers want to publish their results. Apple also does product-driven research, which does not attract researchers who want to work on general AI problems or have their work seen by academia. I think their design work does call for corresponding research, especially around bold ideas, but I can also see how the push to ship new products becomes an obstacle to long-term fundamental research.
5, IBM
I know a former employee of an IBM Watson project, and he described IBM's "cognitive computing" effort as a disaster. The project is driven by managers who do not understand what machine learning can and cannot do and who simply pitch the buzzwords everywhere. As far as I know, Watson uses deep learning for image understanding, but the rest of its information-retrieval system does not apply modern deep learning techniques. Basically, IBM is facing a crisis, which leaves many openings in its markets for startups applying machine learning.
To tell the truth, all of the companies mentioned above (except perhaps IBM) are very good places to do deep learning research. Given open-source software and the rapid development of the entire field, I don't think any technology company holds an absolute advantage in "leading AI research."
My advice for those entering deep learning: find a team or project you are genuinely interested in, ignore reputation and other external conditions, focus on doing your best work, and help your organization become a leader in AI research.
Yann LeCun's answer
Below is Yann LeCun's answer to the same question, originally published in July 2016.
I have my own bias (smile), but here is what I can say:
Apple is not a major player in AI research because of its culture of secrecy. You cannot do advanced research in secret: if you cannot publish your results, it isn't really research; at best it is technology development.
Microsoft is doing some very good work, but it has lost many employees to Facebook and Google. Their deep learning work on speech is very good (and in the early 2000s they did well on handwriting recognition), but compared with FAIR and DeepMind, Microsoft seems to have little ambition in deep learning.
Google (including Google Brain and other internal teams) is probably the leader in turning deep learning into products and services, because they started earlier than anyone else and because they are a big company. Google has done a lot of infrastructure work (such as TensorFlow and the TPU hardware). But Google focuses on applications and product development rather than long-term AI research. Many of Google Brain's top researchers have left their original team for DeepMind, OpenAI, or FAIR.
DeepMind does very good work on learning-based AI. Their long-term research goals are very similar to ours at FAIR, and our research topics overlap: unsupervised/generative models, planning, reinforcement learning, games, memory-augmented networks, differentiable programming, and so on. DeepMind's challenge is that it is geographically and organizationally separated from Alphabet's main business. That makes it harder for DeepMind to generate revenue for its owner, Alphabet, to pay its way, but they seem to be doing well.
Facebook created FAIR two and a half years ago and quickly made itself a leader in AI research. I am amazed that we have attracted so many world-class researchers (FAIR now has about 60 researchers and engineers across New York, Menlo Park, Paris, and Seattle). I am also impressed by the quality and influence of FAIR's research in just two and a half years. We have ambitious goals, we are prepared for the long haul, and we have influence within the company, which makes it easy to demonstrate FAIR's value. Most importantly, we are very open: our researchers publish several papers every year. Nothing drives away a promising young researcher more than joining a company, or a startup, that is closed off from the research community.