Came across this in my junior year in college, was really inspired, and I actually decided to do a master's in machine learning because of this. Thank you, Mr. Norvig!
0:11 "you'll have to make do with me". Love the humility and self-deprecating humor
This is so underrated, needs more views
It's funny: the video is amazing, with great insights about data science, but if you turn on the automatic subtitles, they have almost nothing to do with what he's actually saying.
Because they trained the base model using a neutral accent, not an American one. :):)
Very interesting insights into how comp sci is now using inference from data to solve problems previously tackled by rules. Maybe this is closer to how humans learn languages... it sure isn't by learning all that grammar...
Beautiful work!
15:27 predicting the present... very powerful
39:45 You can bet Borges understood exponential growth, given he knew enough mathematics to understand the theory of transfinite numbers.
28:30 the spelling corrector function described
Although it is an enlightening talk, the graph at 52:00 is slightly misleading. If you set the lower limit of the y-axis to zero, you will see what I mean.
Aren't you assuming that there's little value in the difference of ~48% vs ~53% accuracy? Depending upon the scenario, incremental improvement like that can be quite valuable.
Imagine a similar chart for, say, website uptime: What would it look like with 99.9% plotted against 99.999999% when the y-axis starts at zero? Is that a useful visual?
25:50 a bit of levity with word play
19:07 define the function that will read smashed words
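For anyone curious how the "smashed words" demo around 19:07 can work, here's a minimal unigram-based segmenter sketch. The toy word counts here are my own placeholder; the version in the talk draws on Google's N-gram counts.

```python
from functools import lru_cache
from math import prod

# Toy unigram counts (assumption: just for illustration; the real demo
# uses frequencies from a huge corpus).
COUNTS = {'now': 80, 'is': 100, 'the': 200, 'time': 60}
TOTAL = sum(COUNTS.values())

def Pw(word):
    """Unigram probability, with a length-based penalty for unknown words."""
    if word in COUNTS:
        return COUNTS[word] / TOTAL
    return 10.0 / (TOTAL * 10 ** len(word))

@lru_cache(maxsize=None)
def segment(text):
    """Best split of `text` into words, maximizing the product of word probs."""
    if not text:
        return ()
    candidates = [(text[:i],) + segment(text[i:])
                  for i in range(1, len(text) + 1)]
    return max(candidates, key=lambda words: prod(map(Pw, words)))
```

With this toy table, `segment('nowisthetime')` gives `('now', 'is', 'the', 'time')`.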
26:45 harder problems: spelling corrections
21:29 the code for his parser. It looks like Python, but it isn't commented correctly.
32:30 the python code for his spell correction app
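If you don't want to pause the video to read the spell-correction code at 32:30, here's a minimal sketch in the same spirit: pick the most frequent known word within one edit of the input. The tiny inline corpus is my own assumption; the real thing trains on a large text corpus.

```python
from collections import Counter

# Toy corpus (assumption: a real corrector would count words in a big corpus).
WORDS = Counter("the quick brown fox jumps over the lazy dog the the".split())

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correction(word):
    """Most frequent known word within one edit of `word` (else `word` itself)."""
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=lambda w: WORDS[w])
```

For example, `correction('teh')` returns `'the'` with this toy corpus, while a word already in the corpus is returned unchanged.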
12:42 how to do orbital mechanics just using text!
49:00 questions from the audience
The brain behind Google Translate ;)
o_0! Norvig is amazing!
11:22 Google's corpus of N-grams... 13M+ unique words...
4:15 expert system approach to learning
part starting 24:50 is very funny
18:13 let's read Chinese!
5:03 Image models and then text models. Image models first.
10:20 Text models
6:50 first problem is scene completion
8:05 Old-school CS method of solving problems...
4:44 Statistical machine learning; Judea Pearl (see his Wikipedia page)
10:36 props to Frederick Jelinek (see Wikipedia).