Lex Fridman
00:00:00 - 00:00:45
Welcome, everyone, to 2019. It's really good to see everybody here, making it in the cold. This is 6.S094, Deep Learning for Self-Driving Cars. It is part of a series of courses on deep learning that we're running throughout this month. The website where you can get all the content, the videos, the lectures, and the code is deeplearning.mit.edu. The videos and slides will be made available there, along with a GitHub repository accompanying the course. Assignments for registered students will be emailed later in the week. And you can always contact us with questions, concerns, and comments at hcai@mit.edu (human-centered AI).
Lex Fridman
00:00:49 - 00:02:03
So let's start with the basics, the fundamentals. To summarize in one slide: what is deep learning? It is a way to extract useful patterns from data in an automated way, with as little human effort involved as possible. Hence "automated." The fundamental aspect that we'll talk about a lot is the optimization of neural networks. The practical side, which we'll get to in the code and so on, is that there are libraries that make it accessible and easy to do some of the most powerful things in deep learning, using Python, TensorFlow, and friends. The hard part, always, with machine learning and artificial intelligence in general, is asking good questions and getting good data. A lot of the time, the exciting aspects that the news covers, and a lot of the exciting aspects of what is published in the prestigious conferences, on arXiv, in a blog post, is the methodology.
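To make the "optimization of neural networks" point concrete, here is a minimal sketch, not from the lecture itself: a single linear neuron fit by gradient descent on mean squared error, using only the Python standard library rather than TensorFlow. The data, learning rate, and step count are all hypothetical choices for illustration.

```python
import random

# Hypothetical toy data: samples from the line y = 2x + 1.
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(100)]
ys = [2.0 * x + 1.0 for x in xs]

w, b = 0.0, 0.0   # parameters the optimizer will learn
lr = 0.1          # learning rate (illustrative value)

def mse(w, b):
    """Mean squared error of the neuron w*x + b over the toy data."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

initial_loss = mse(w, b)
for _ in range(300):
    # Analytic gradients of the mean squared error w.r.t. w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # Gradient-descent update: step against the gradient.
    w -= lr * grad_w
    b -= lr * grad_b

final_loss = mse(w, b)
print(round(w, 2), round(b, 2))
```

Libraries like TensorFlow automate exactly the tedious part of this loop, computing the gradients, so the practitioner's effort shifts to the questions and the data, which is the theme of the lecture.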
Lex Fridman
00:02:04 - 00:03:19
The hard part is applying that methodology to solve real-world problems, to solve fascinating, interesting problems, and that requires data. That requires asking the right questions of that data, organizing that data, and labeling and selecting the aspects of that data that can reveal the answers to the questions you ask. So why has this breakthrough happened over the past decade in the application of neural networks, the ideas behind neural networks? What has happened? What has changed? They've been around since the 1940s, and the ideas had been percolating even before that. First, the digitization of information, of data: the ability to access data easily, in a distributed fashion, across the world. All kinds of problems now have a digital form; they can be accessed by learning algorithms. Second, hardware: compute, both the Moore's law of CPUs and GPUs, and ASICs such as Google's TPU systems, hardware that enables the efficient, effective, large-scale execution of these algorithms. Third, community.
Lex Fridman
00:03:19 - 00:04:39
People here, people all over the world, being able to work together, to talk to each other, to feed the fire of excitement behind machine learning, on GitHub and beyond. Then the tooling, as we'll talk about: TensorFlow, PyTorch, and everything in between, which enables a person with an idea to reach a solution in less and less time. Higher and higher levels of abstraction empower people to solve problems in less and less time with less and less knowledge, where the idea and the data become the central point, not the effort that takes you from the idea to the solution. And there's been a lot of exciting progress, some of which we'll talk about: from face recognition to the general problem of scene understanding; image classification; speech, text, natural language processing, transcription, translation; medical applications and medical diagnosis; cars being able to solve many aspects of perception in autonomous vehicles, with drivable-area and lane detection and object detection; and digital assistants, the ones on your phone and, beyond that, the ones in your home.