
Science hosted a pointed Q&A, and Yann LeCun's answers were as blunt as ever

via: 博客园 | time: 2018/2/19 21:02:55

From Reddit | Compiled by Machine Heart | Machine Heart Editors

Today, AAAS hosted a question-and-answer session on Reddit in which researchers from Google, Microsoft, and Facebook's AI labs fielded some of the questioners' thorniest questions, on topics including quantum computing, privacy, cutting-edge research, and fake AI. Notably, Yann LeCun's responses on quantum computing, fake AI, and other issues were quite blunt. Machine Heart has compiled a selection of the questions; interested readers can find the full discussion in the original Reddit thread.

Respondents:

Yann LeCun, Chief Scientist, Facebook Artificial Intelligence Research Center

Eric Horvitz, head of Microsoft Research

Peter Norvig, Director of Research, Google

Question 1: In your opinion, what kinds of work will be replaced by artificial intelligence, and what kinds are safe for the next generation? I ask as a high school teacher who regularly advises students on career options. When people talk about the disruption artificial intelligence will bring, it is all driving cars and a few other areas, to the exclusion of everything else. I have a student who plans to become a pilot. I told him to think about drones, but he does not see them as a threat. I tell students that it is safer to go into the skilled trades, especially trades that involve a lot of moving around. What other fields look safe now?

Peter Norvig: I think the more meaningful unit of analysis is the task, not the occupation. If an aspiring commercial pilot had sought advice in 1975, you might have asked: Do you like taking off and landing? That part of the job will remain for many years to come. Do you like long cruising flights? Unfortunately, that task will be almost entirely automated. So I think most fields are safe, but the mix of tasks in any job will change, the pay differences between occupations will change, and the number of people needed for each job will change. These shifts are hard to predict. For example, many people drive trucks today. At some point, most long-distance driving will be automated. I think there will still be a person in the cab, but their work will focus more on loading and unloading goods and on customer relations and marketing rather than on driving. If they can sleep while the truck is moving (which should eventually be possible) and/or supervise a larger convoy of trucks, you might think we would need fewer truck drivers; but if the cost of trucking falls, demand may also grow as freight shifts away from rail or sea transport. So it is hard to predict things decades out, and the best advice is to stay flexible and be ready to learn new things, whether that means changing tasks within a profession or changing professions.

Eric Horvitz: The development of artificial intelligence will affect the economy's workforce in multiple ways. I believe some changes will be disruptive and may arrive relatively quickly. That disruption may come first to driving cars and trucks. Other influences will change how work is carried out and how people perform their tasks across many fields. Overall, my view of AI's impact on the distribution and nature of work is positive. I see many tasks being supported by increasingly capable automation rather than replaced by it. These include work in artistic and scientific exploration, work that requires delicate physical skill, and the many jobs that require human cooperation and care, including teaching, mentoring, medical care, social work, and raising children. With regard to the latter, I hope to see a more prominent "caring economy" emerge and gain support in this increasingly automated world.

Some readers may be interested in recent research that looks ahead. Here is a very interesting study of the impact of machine learning advances on particular occupations: http://science.sciencemag.org/content/358/6370/1530.full. I recommend it because it is a good example of the kind of structure that can help people reason about the future of AI and jobs.

By the way: yesterday at the AAAS meeting in Austin we held a session on how AI will augment human capabilities and transform tasks.

Yann LeCun: There is still a long way to go before we have robot plumbers, carpenters, handymen, hairdressers, and so on. In general, artificial intelligence does not eliminate jobs; it changes them. Ultimately, AI will make every job more efficient. But work that requires human creativity, interaction skills, and emotional intelligence will not disappear for a long time, and creative work such as science, engineering, art, and craft will remain as well.

Question 2: At present, much machine learning research seems to have shifted toward deep learning. 1) Will this have a negative impact on the diversity of machine learning research? 2) Should we, in order to support deep learning research, completely discard other paradigms such as probabilistic graphical models and support vector machines? These models may not perform as well right now, but a breakthrough could come in the future, just as it did for neural networks after their quiet years in the 1990s.

Yann LeCun: As our AI technology matures, my feeling is that deep learning will be only part of it. The idea of assembling parameterized modules into complex (possibly dynamic) computation graphs and optimizing the parameters from data is not going away. In that sense, deep learning will not become obsolete until we find an efficient way to optimize parameters that does not use gradients. That said, deep learning as it exists today is not sufficient for building complete AI. I think the ability to define dynamic deep architectures (that is, computation graphs that are defined procedurally and whose structure changes with each new input) generalizes deep learning into differentiable programming.
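To make the idea of a dynamic architecture concrete, here is a minimal sketch in PyTorch (our illustrative example, not code from the AMA): the depth of the network is decided at run time by data-dependent control flow, yet gradients still flow through whatever graph was actually built.

```python
import torch
import torch.nn as nn

class DynamicDepthNet(nn.Module):
    """A tiny 'define-by-run' network whose structure depends on the input."""
    def __init__(self, dim=16):
        super().__init__()
        self.layer = nn.Linear(dim, dim)

    def forward(self, x):
        # Data-dependent control flow: keep refining the representation
        # until its norm exceeds a threshold (or a step limit is reached).
        for _ in range(10):
            x = torch.relu(self.layer(x))
            if x.norm() > 5.0:
                break
        return x

net = DynamicDepthNet()
x = torch.randn(1, 16)
out = net(x)
out.sum().backward()  # gradients flow through whatever graph was built
```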

As for question 2), there is no contradiction between deep learning and graphical models. You can use a graphical model, such as a factor graph, in which the factors are entire neural networks. They are orthogonal concepts. People have built probabilistic programming frameworks on top of deep learning frameworks; Uber's Pyro, for example, is built on PyTorch (probabilistic programming can be seen as a generalization of graphical models, much as differentiable programming is a generalization of deep learning). Using backpropagated gradients for inference in graphical models has proven useful. When data is scarce and can be characterized by hand, SVMs/kernel methods, tree-based models, and so on are the better choice.
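As a concrete illustration of probabilistic programming on top of a deep learning framework, here is a minimal Pyro-on-PyTorch sketch (our illustrative example, assuming Pyro 1.x; the model and data are made up): a latent variable is inferred by stochastic variational inference, which optimizes with backpropagated gradients exactly as LeCun describes.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam

def model(data):
    # Latent variable with a prior; the likelihood below is ordinary
    # differentiable code, so a factor could just as well be a full
    # neural network (a factor graph with neural factors).
    weight = pyro.sample("weight", dist.Normal(0.0, 1.0))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(weight, 0.1), obs=data)

data = torch.randn(100) + 2.0          # toy observations centered near 2
guide = AutoDiagonalNormal(model)      # variational approximate posterior
svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for _ in range(1000):
    svi.step(data)                     # each step backpropagates through the model
```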

Eric Horvitz: Deep neural networks have shown many bright spots on classification and prediction tasks. We continue to see gains in the accuracy of object recognition, speech recognition, and translation, and even in learning optimal policies (when combined with ideas from reinforcement learning). However, AI is a very broad field with many potential subdisciplines, and the machine learning branch of AI itself has many branches.

We need to continue to develop the full range of promising AI technologies, each with its own strengths, including the already rich results in probabilistic graphical models, decision-theoretic analysis, logical reasoning, programming, algorithmic game theory, metareasoning, and cybernetics. We also need to extend the field, for example by extending models of bounded rationality to explore the limits of agents in the open world.

Question 3: How do we get from task-specific AI to more general intelligence? At the moment we seem to be spending a great deal of energy on winning at Go or applying deep learning to specific scientific tasks. That progress is welcome, but compared with what most people mean by AI it is still very narrow. How do we build general intelligence that can adapt to any task? Simply bundling together millions of task-specific applications will not produce general intelligence.

Yann LeCun: I think getting machines to learn predictive models of the world by observation is the biggest obstacle to artificial general intelligence (AGI). Human babies and many animals seem to acquire common sense by observing and interacting with the world (and they need remarkably little interaction compared with our reinforcement learning systems). My intuition is that a large part of the brain is a prediction machine: it trains itself to predict everything it can (to predict anything it has not yet seen from what it has seen). By learning to predict, the brain builds hierarchical representations. Predictive models can then be used for planning and for learning new tasks with minimal interaction with the world. Current "model-free" reinforcement learning systems, such as AlphaGo Zero, require a huge amount of interaction with the "world" to learn (though they do learn very well). They perform superbly at Go or chess, but those "worlds" are simple, deterministic, and can be run fast on many computers at once. Interacting with such a "world" is easy, but it does not generalize to the real world. You cannot learn the "don't crash" rule by crashing 50,000 times while driving a car; humans can learn it from at most one experience. We need to give machines the ability to learn such models.
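For readers who want a concrete picture of the model-based idea LeCun contrasts with model-free RL, here is a minimal sketch (our illustrative example, not his architecture): a forward model is trained by supervised learning to predict the next state from the current state and action, after which it can be queried for planning instead of acting in the real world.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2

# Learn s_{t+1} ~ f(s_t, a_t) from observed transitions.
forward_model = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64),
    nn.ReLU(),
    nn.Linear(64, state_dim),
)
opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

def train_step(states, actions, next_states):
    """One supervised step: predict the next state from (state, action)."""
    pred = forward_model(torch.cat([states, actions], dim=-1))
    loss = nn.functional.mse_loss(pred, next_states)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch of transitions standing in for real experience.
s = torch.randn(32, state_dim)
a = torch.randn(32, action_dim)
s_next = torch.randn(32, state_dim)
print(train_step(s, a, s_next))
```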

Eric Horvitz: True, the current state of artificial intelligence is that of intelligent but narrow "savants."

Our understanding of human intelligence remains far from adequate. Open questions include how humans learn in the open world (in an unsupervised way), the mechanisms by which we form "common sense," and how we generalize so easily to new tasks.

I see two important routes toward more general intelligence: one is to combine many specific competencies organically and then explore the connections among them; the other is to focus on a core methodology, such as DNNs, and explore the more general structures within it.

One paper that offers an interesting framework and direction for AGI: http://erichorvitz.com/computational_rationality.pdf

Question 4: I am a nuclear engineering / plasma physics graduate student planning to move into AI research.

About the AI field: What is the next milestone in AI research? What challenges stand between us and it?

About expertise: What key skills and knowledge do I need to succeed? Do you have general advice or recommended learning resources for beginners?

Yann LeCun: Next milestones: deep unsupervised learning, and deep learning systems that can reason. The challenge of unsupervised learning: learning hierarchical representations of the world that disentangle the explanatory factors of variation. We need machines that can learn to predict in a world that is only partially predictable. Key skills: a good command of continuous mathematics (linear algebra, multivariable calculus, probability and statistics, optimization, and so on), strong programming skills, and sound scientific methodology. Above all: creativity and intuition.

Peter Norvig: I am very interested in an assistant that can really understand human language and hold a real conversation; that would be a very important milestone. One of the big challenges is combining pattern matching (which we are good at) with abstraction and planning, which for now we can do well only in very formal domains such as chess, and which falls far short in the real world.

Being a physicist is a big plus for you: you have a well-suited mathematical background along with experience in experimentation, modeling, and handling uncertainty and error. I have seen many physicists do great work in this field.

Question 5: Where is artificial intelligence already working behind the scenes without us being aware of it? Can you give examples?

Eric Horvitz: There are quite a few AI systems and services working "under the hood." One of my favorite examples came out of a close collaboration between Microsoft Research and the Windows team: a feature called Superfetch. If you are using a Windows machine, your system is using machine learning to learn your work patterns and likely next actions (privately, locally on your machine), and it continually predicts what you will do next, managing memory by preloading and pre-caching applications. Because your machine reasons in the background about your next move, and even about what you will do at a given time of day and day of the week, it runs faster, almost magically so. These methods are always running, and they have kept getting better since they first shipped with Windows 7. People from Microsoft Research and Windows formed a joint team to do this work; experimenting with real workloads let us move fast and helped us choose the best approaches.
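The prediction at the heart of such a feature can be surprisingly simple to sketch. Here is a toy next-app predictor (our illustration only; it has nothing to do with Superfetch's actual implementation) that learns which application tends to follow which and nominates a candidate to preload.

```python
from collections import Counter, defaultdict

# Count observed app-to-app transitions from a launch history.
transitions = defaultdict(Counter)

def observe(history):
    for prev_app, next_app in zip(history, history[1:]):
        transitions[prev_app][next_app] += 1

def predict_next(current_app):
    """Return the most likely next app, or None if nothing was observed."""
    counts = transitions[current_app]
    return counts.most_common(1)[0][0] if counts else None

observe(["mail", "browser", "editor", "mail", "browser", "editor"])
print(predict_next("browser"))  # -> "editor": a candidate to preload
```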

Yann LeCun: Filtering objectionable content, building maps from satellite imagery, helping content designers optimize their designs, representing content (images, videos, text) with compact feature vectors for indexing and search, text recognition in images, and so on.

Peter Norvig: Wherever there is data, there is the possibility of optimization. Some of it you may already know about; some of it users will never notice. For example, we have done a lot of work to optimize our data centers: how we build them, how we route workloads through them, how we cool them, and so on. We apply a variety of techniques (deep learning, operations research models, convex optimization, etc.); you can decide for yourself whether to call these "artificial intelligence" or "just statistics."

Question 6: I am a PhD student, and I do not have the money to invest in multiple GPUs or large (in terms of computing power) deep learning platforms. As a student I am under pressure to publish (my area is computer vision / machine learning), and I know I cannot test all the hyperparameters of my "new module" network quickly enough before the deadline. People doing research at companies such as Facebook and Google have far more resources and can turn out papers quickly, yet at conferences we are judged by the same standards, so I have no chance of winning. If the only way to run my experiments on time and publish is to intern at a big company, don't you think that is a big problem? I live in the United States, where things are a little better; what should people in other countries do? Do you have any ideas for solving this problem?

Peter Norvig: We can offer support: your professor can apply to Google Cloud's education programs (https://cloud.google.com/edu/), which include 1000 TPUs.

If your goal is to develop an end-to-end computer vision system, then as a student you will indeed have difficulty competing with companies. That is not unique to deep learning. When I was a graduate student, I had friends in CPU design, and they knew they could not compete with Intel: finishing a large design takes hundreds of people developing hundreds of components, and if any one component falls short, you are no longer the leader. But a student is well placed to demonstrate and publish a new idea for one component (perhaps by taking an open-source model and showing the improvement your new component brings).

Yann LeCun: I wear two hats: Chief AI Scientist at Facebook and professor at New York University. My students at NYU have access to GPUs, but not as many as interns at FAIR. You do not want to compete head-on with large industry teams; there are many kinds of research you can do without competing. Many, if not most, innovative ideas still come from academia. For example, the idea of using attention mechanisms in neural machine translation came from MILA. That approach swept through neural machine translation like a hurricane and was adopted by the major companies within months. Afterwards, Yoshua Bengio told MILA members to stop competing on translation benchmarks, because there was no point in competing with companies such as Google, Facebook, Microsoft, and Baidu. The same thing happened decades ago in character recognition and speech recognition.

Eric Horvitz: Microsoft and other companies are working to democratize artificial intelligence, developing tools and services that help people outside the big companies easily accomplish great things in AI. I understand that computing constraints do come up. Among various programs, you may find Azure for Research and AI for Earth valuable; they can help you gain access to Microsoft's computing resources.

Question 7: As an ML practitioner, I am getting tired of the recent explosion of "fake AI." For example:

Sophia, a puppet with pre-programmed answers, is presented as a living, conscious being. Ninety-five percent of job postings that mention machine learning are not AI positions at all; the buzzwords "AI" or "machine learning" are added just to make the company look more attractive.

By my reckoning, only a few thousand people in the world actually work on machine learning, but a hundred times as many pretend to. It is a disease that hurts everyone and distracts from what ML has really accomplished recently. What can we do to stop this behavior?

Peter Norvig: Don't worry. This is not unique to the AI field. Whenever a buzzword appears, some people try to exploit it inappropriately. AI and ML are no different from organic, gluten-free, paradigm shift, disruption, and pivot. Such uses get some short-term attention and eventually fade.

Eric Horvitz: I agree with Peter. The enthusiasm for AI research is great to see, but there really is overheating, misunderstanding, and misrepresentation, with people jumping on the bandwagon in every possible way (including sticking "AI" onto everything :-)).

There is a famous saying attributed to Mark Twain: "History doesn't repeat itself, but it rhymes." In the mid-1980s, AI became overheated. In 1984, some AI scientists warned that misguided fanaticism and unmet expectations could lead to a collapse of interest and funding. Indeed, a few years later we entered what some call the "AI winter." I do not think that outcome is inevitable this time. I believe there are glowing embers in this fire that will keep lighting the AI field, but it is also important for AI scientists to keep educating people in many fields about what we can really achieve, and about the hard problems we have been trying to solve in the 65 years since the term "artificial intelligence" was first used.

Yann LeCun: Serious ML/AI experts who see this kind of thing do not hesitate to call "bullshit" directly, and I have always done so myself. Yes, "AI" has become a commercial buzzword, but there is still a great deal of serious and cool work being done in the AI/ML field today.

Question 8: Does your company keep some algorithms or architectures secret to preserve its competitive advantage? I know that datasets confer a great competitive advantage; do algorithms as well? In other words, if your company made an algorithmic or architectural breakthrough, such as a next-generation CNN or a next-generation LSTM, would you open it up for the sake of scientific progress, or keep it secret in the interest of competitive advantage?

Peter Norvig: So far, as you can see, our three companies (and others) have released many of our key algorithms, and I think we will continue to do so. There are three reasons. First, we believe in scientific progress. Second, the competitive advantage comes from the hard work of carrying an algorithm through the whole process of creating a product, not from the core algorithm itself. Third, you cannot keep these ideas secret anyway: anything we can think of, others in the same research community can think of too.

Yann LeCun: At FAIR, we make everything we do public. The reasons are as follows:

(1) As Peter put it, "we believe in scientific development; the competitive advantage comes from the hard work we do with algorithms and the whole process of creating a product, not the core algorithms themselves." I would add that competitive advantage also comes from the speed at which an algorithm or model is turned into a product or service.

(2) The main issue for AI today is not whether one company is ahead of another (no single company can lead on everything), but that the field itself needs rapid progress in several important directions. We cannot solve these problems alone; we need the research community to make progress together.

(3) You can only attract good scientists by allowing them to publish their results, and you retain them by evaluating them, at least in part, on their academic influence in the broader research community.

(4) You only get reliable research results by telling people they must publish them. People tend to be sloppier when they do not plan to make their results public.

(5) Publishing innovative research helps establish the company as a leader and innovator, which in turn helps recruit the best talent. In the technology industry, the ability to attract the best talent means everything.

Eric Horvitz: Microsoft Research has been an open research lab since its founding in 1991. A founding principle of our lab is that researchers decide for themselves whether to publish their results; sharing ideas and learning is part of our lab's DNA. It is great to see other companies moving in this direction. Building on Peter's point, I would add that a great deal of innovation and IP is developed around the details of actual product implementations in different areas, and those details may not be shared even as the core technology is.

Question 9: Will progress in quantum computing drive artificial intelligence research? How do you see the two coming together in the future?

Peter Norvig: Much of what I want to do gets no help from quantum computing. I often want to process large amounts of text with relatively simple algorithms, and quantum computation does not help with that.

However, quantum computation might help search the parameter space of deep networks more efficiently. I do not know whether anyone has devised such a quantum algorithm, let alone whether hardware could run it, but in theory it could be helpful.

Yann LeCun: Drive it? Certainly not. To me it is not at all clear that quantum computing will have any impact on artificial intelligence, and certainly not any time soon.

Question 10: The value of traditional statistical models lies in how easy it is to understand a model's behavior, how its conclusions are drawn, and the uncertainty of its inferences and predictions. The newer deep learning methods achieve good predictive results, but they are often "black boxes." To what extent do we currently understand the internal mechanisms of ANNs and other models, and how important do you think such understanding is? It seems especially important when the model is used to make major decisions, such as driving a car or making clinical decisions.

Peter Norvig: This is an important area of current research. You can see many examples of Google's efforts here on the Big Picture blog or Chris Olah's blog. I think the difficulty of understanding comes more from the difficulty of the problem than from the solution technique. Two-dimensional linear regression is well understood, but it is of little use for problems that have no good linear model. Similarly, people say that the if/then rules in random forests, or ordinary Python/Java code, are easy to understand; but being easy to understand does not keep code free of bugs, and code often has bugs. These easy-to-understand models are also prone to confirmation bias.

I prefer to frame this not just as "understanding" but as "trustworthiness." When we decide whether we can trust a system, especially one that makes major decisions, there are several things to consider:

Can I understand the code / model?

Has it been validated on a large number of examples over a long period?

Am I convinced the world will not change in a way that puts the model in a state it has never seen before?

Is the model resistant to attack?

Does the model withstand degradation tests, in which we deliberately weaken some parts and see how the rest performs?

Is there a similar technology that has proven to be successful in the past?

Is the model continuously monitored, validated and updated?

What checks exist outside the model? Are its inputs and outputs checked by other systems?

In what language do I communicate with the system? Can I ask it what it is doing? Can it make suggestions to me? If it makes a mistake, can I only feed it thousands of new training samples, or can I say "No, you got X wrong because you ignored Y"?

...

This is a great area of research, and I hope to see much more work on it.
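As one concrete example of the kind of probing this research involves, here is a minimal occlusion-sensitivity sketch in PyTorch (our illustration, not a method from the blogs Norvig mentions): slide a gray patch over an image and measure how much the model's score for the predicted class drops, revealing which regions the "black box" actually relies on.

```python
import torch

def occlusion_map(model, image, target_class, patch=16, stride=16):
    """Score drop at each patch position; a larger drop = a more important region."""
    model.eval()
    _, height, width = image.shape  # image: (channels, H, W) tensor
    rows = (height - patch) // stride + 1
    cols = (width - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    with torch.no_grad():
        base = model(image.unsqueeze(0))[0, target_class]
        for i in range(rows):
            for j in range(cols):
                occluded = image.clone()
                y, x = i * stride, j * stride
                occluded[:, y:y + patch, x:x + patch] = 0.5  # gray patch
                score = model(occluded.unsqueeze(0))[0, target_class]
                heat[i, j] = base - score
    return heat

# Usage (with any image classifier `net` and a (3, 224, 224) tensor `img`):
# heat = occlusion_map(net, img, target_class=281)  # hypothetical class id
```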

Question 11: What do you think of capsule networks? Apart from MultiMNIST, have you successfully applied them to other datasets? Can they replace CNNs as data grows?

Yann LeCun: The capsule is a very cool idea, although it will take time for ideas like this to prove themselves on large datasets. Geoff Hinton has been thinking about this for decades (see, for example, the TRAFFIC model in the PhD thesis of his student Rich Zemel). It took him a long time to find a version that works well on MNIST, so it may take a while before capsules work on ImageNet (or another large dataset). It is also unclear whether they will show an advantage in accuracy, and whether their advantage in training-sample efficiency holds up in practice. A capsule network can be thought of as a convolutional network with a special pooling scheme.
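For the curious, the distinctive nonlinearity from Sabour, Frosst, and Hinton's 2017 capsule paper is small enough to show directly. This sketch (a fragment for illustration, not a full routing implementation) computes the "squash" function that turns each capsule's output into a vector whose length behaves like a probability.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    # v = (|s|^2 / (1 + |s|^2)) * (s / |s|): direction kept, length in (0, 1)
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)

v = squash(torch.randn(4, 10, 16))  # e.g. 4 samples, 10 capsules, 16-dim each
print(v.norm(dim=-1))  # every capsule's length now lies in (0, 1)
```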

Question 12: I am a 13-year-old student who enjoys playing games and programming in JS and Python. I want to write my own music and machine learning programs. Do you have any suggestions for young developers like me?

Yann LeCun: Also go and learn math and physics.

Peter Norvig: Besides studying, do some open-source projects: either start your own on GitHub or contribute to existing projects that interest you.

Question 13: Peter, Google has been working on AI-assisted image recognition, and the results are quite good, but there are still some strange failures. Last year I fed your API a picture of a cat, a very simple image, and the results were decent. But because the tail stuck up from the top of the image, the API also guessed it might be a unicorn. A human would never make this kind of mistake, but an artificial intelligence will, especially given a 2D image. Do you think AI will overcome this kind of problem?

Peter Norvig: Image recognition has come a long way in recent years, and progress has been steady, but as you say, AI can still make embarrassing mistakes even on tasks where it exceeds human performance. As we gain more experience and more data, this will improve, and transfer learning offers hope that we will not have to build every model from scratch. Video may offer a big advantage over still images, which is encouraging. Our computing power has grown exponentially, but we have not yet reached the point where we can feed in video in bulk. The day we can, you will see incredible progress.

Question 14: Can you define "expert system" and "artificial intelligence"? Do you work more on expert systems or on artificial intelligence, or both? What are your goals or success criteria for that research?

Peter Norvig: I think of an expert system as a program that encodes what an expert knows, gathered by interviewing the expert: the ontology of the domain, plus procedural knowledge about what to do and when. Then, given a new goal, the program can try to mimic the behavior of the expert. Expert systems peaked in the 1980s.

In contrast, a normative system simply tries to do "the right thing", in other words to maximize expected utility, without caring about imitating expert behavior.

In addition, a "machine learning" system is built by collecting data from the world rather than by hand-coding rules.

Today we focus on normative machine learning systems, which have proven more robust than expert systems. The toy contrast below illustrates the difference.
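To make the contrast concrete, here is a toy sketch (entirely our illustration; the loan rule, thresholds, and data are made up): a hand-coded "expert system" rule next to a machine-learned model that induces a similar decision from labeled examples.

```python
# Hand-coded "expert system" rule, as if elicited from an expert interview.
def expert_system_loan(income, debt):
    if income > 50_000 and debt / income < 0.4:
        return "approve"
    return "deny"

# The machine-learning counterpart: fit a similar rule from labeled examples.
from sklearn.tree import DecisionTreeClassifier

X = [[60_000, 10_000], [30_000, 20_000], [80_000, 50_000], [55_000, 5_000]]
y = ["approve", "deny", "deny", "approve"]
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[70_000, 15_000]]))  # decision learned from data, not coded
```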

Question 15: You are obviously committed to the ultimate downfall of mankind. Why are you doing this? What possible reason could you have?

Yann LeCun: On the contrary, we are committed to making humanity better. Artificial intelligence is an extension of human intelligence. Did fire, the bow and arrow, or the advent of agriculture bring about humanity's decline?
