As an engineering student who so far knows only basic C/C++ coding, I found this talk excellent, easy to understand, and greatly informative!
The slide at 4:45 with the grand engineering challenges for the 21st century helped a lot. I often get overwhelmed or confused by all the projects and applications coming out of the tech world; many of them don't make sense to me. This slide gave me a good framework for understanding what these technologies are trying to solve.
Good presentation. 😊
I wonder whose agenda that is?
"I wanted to call it the Arm Pit, but I was overruled." Lost it. 😂
I always had this idea that AI should be smart enough to determine which models should be tried when it is given a data file. It should be able to run an initial analysis to classify the nature of the file and predict its intended use. Based on this analysis, it should be able to find the best fit among the existing models, and if it could not find one, it should be able to create a new one. I guess Google has already put my idea into practice.
Regarding AutoML, over time there would seem to be an ever-increasing corpus of models. Humans, being limited creatures who tend to have the same problems, might not actually need a ‘fresh’ model trained every time they perceive a problem that needs solving; that problem has probably already been solved. Rather, it might be faster (and much less energy intensive) to simply archive these models with a set of useful metadata so that a Google search can find the model that solves the problem. Metadata selection and assignment to individual models could be automated after they are designed by AutoML; the metadata can be considered the ‘label’ for the model. This metadata could also be used to ‘explain’ to a user ‘why’ the machine selected a particular model/algorithm. In addition, the machine could engage the user in a ‘conversation’ as it ‘asks for metadata’: the user would perceive this discourse as questions about their dataset/problem, while the machine builds an information tree to sift and sort its vast library of models. This also addresses the human problem where the user often starts by choosing the wrong approach to solving the problem, or just as often uses the ‘cooked spaghetti’ approach to model selection - throw them all against the wall of the problem and see what sticks.
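To make that concrete, here is a minimal Python sketch of such a metadata-tagged model archive. The registry, its metadata tags, and the matching rule are all made up for illustration; this is not an existing AutoML or Google API.

from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    metadata: set = field(default_factory=set)  # the 'label' for the model, e.g. {"images", "classification"}

# Hypothetical archive of previously trained models.
REGISTRY = [
    ModelRecord("flower_classifier_v3", {"images", "classification", "small-data"}),
    ModelRecord("demand_forecaster_v1", {"tabular", "regression", "time-series"}),
]

def find_model(answers: set):
    """Pick the archived model whose metadata best overlaps the user's answers;
    return None when nothing matches (i.e. fall back to training a fresh model)."""
    best = max(REGISTRY, key=lambda m: len(m.metadata & answers))
    return best if best.metadata & answers else None

# The 'conversation': each answered question adds another piece of metadata.
print(find_model({"images", "classification"}).name)  # -> flower_classifier_v3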
Outline:
Restore & improve urban infrastructure (combining vision and robotics for grasping tasks, self-supervised imitation learning)
Advance health informatics (predicting properties of molecules)
Engineer the tools of scientific discovery (TensorFlow and its applications)
Some pieces of work and how they fit together / Bigger models, but sparsely activated (sparsely gated mixture-of-experts layer - MoE)
AutoML - automated machine learning, "learning to learn" (Cloud AutoML)
Special computation properties of deep learning (reduced precision, a handful of specific operations)
More at 36:49
@8:30 The same technique Naruto used when he was training with his many clones :)
With reference to scientific learning: when you have a lot of data, but no data at the particular point in parameter hyperspace that you are interested in, what do you do? Extrapolating the model will result in bias and loss of accuracy. Experiments on real-world systems seem unavoidable, and each experimental data point is often very expensive. The interaction between machine learning modeling and the planning and execution of experiments seems to be a new and very interesting research area.
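One very simplified version of that interaction is active learning with a surrogate model: fit a model to the measurements you have, then spend the next expensive experiment at the point where the model is most uncertain. This is only a sketch; the run_experiment() function is a hypothetical stand-in for the costly real-world measurement, and only NumPy and scikit-learn are assumed.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(x):
    # Hypothetical placeholder for an expensive real-world experiment.
    return float(np.sin(3 * x[0]))

X = np.random.uniform(0, 1, size=(5, 1))            # initial measured points
y = np.array([run_experiment(x) for x in X])
candidates = np.linspace(0, 1, 200).reshape(-1, 1)  # region of parameter hyperspace of interest

for _ in range(10):
    gp = GaussianProcessRegressor().fit(X, y)        # surrogate model of the system
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]              # most uncertain point -> next experiment
    y_next = run_experiment(x_next)                  # run only that one expensive measurement
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)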
It just seems like an extension of the use of computers to crunch numbers and do linear algebra on a level humans couldn't manage in a lifetime but on a level heretofore unseen. As he said, the idea of machine learning has been around for a long time. I'm sure we all remember the famous line from T2: "My CPU is a neural net processor, a learning computer".
Jeff! We need TPUv4!
My sincere greetings, thank you very much. Thank you.
machine learning is taking machine learning experts' jobs. :-)
Thank you very much, this is inspiring and eye opening as to what can be done!
Thanks a lot for the talk. The idea of ML automation sounds great.
Very informative and insightful talk. Thank you Google for sharing it with us.
That tree design behind him is pretty cool.
This is good for robotics and computer vision applications.
Are these slides available anywhere?
Expect more developers to share their trained AI models on Model Play.
This is awe-inspiring, I love it. Thank you, Google.
Recommendable!
Best talk of #io2019
Interesting and insightful.
Regarding automobiles, we built the auto interface for humans. We leveraged our built-in sensors (eyes and ears) and designed a bipartite system: vehicle and road. But why are we now trying to shoehorn AI into that human-centric system? If we were designing a system from scratch for AI and machines, would we build it the same way? Would it not make sense to build telemetry into the road, making the road more intelligent and letting it direct vehicles more directly? Do we need vehicles that can go where there are no roads? This would reduce the cost of complex and hackable vehicle-based systems.
Very insightful.
Amazing talk!
"Humans that you plop on the carpet in your living room" _stuff data scientists say_
Nice classes, sir. I need these classes.
amazing talk
Great info, thank you for sharing!
Is there a way to get those slides?
Thank you!
Superb, thank you for uploading.
Can I access the power of a v3 TPU pod on Google Cloud Platform?
Sure, feel free to use all of them.
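For anyone wondering how that looks in practice, here is a minimal TensorFlow 2.x sketch of attaching a Keras model to a Cloud TPU; the TPU name "my-tpu" is a placeholder for your own node, and pod-scale slices are subject to quota and billing.

import tensorflow as tf

# Resolve and initialize the Cloud TPU ("my-tpu" is a placeholder name).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Replicate the model across the TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )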
Please explain how developing artificial intelligence solutions for subsurface data analysis in oil and gas exploration and production is 'socially beneficial'.
11:19 Wrong India map.
I see this map in many places outside India. We need to get it corrected at the source.
cool
How can it solve problems when humanity itself is at stake now?
I train my models on GpuClub com and don't worry about maintaining these huge machines. No investment is the best investment...
Waymo, really?
M(x,y)dx+N(x,y)dy=0
Lidar lol