Francois Chollet - On the Measure Of Intelligence
- Published 2 Jun 2024
- And this is the Machine Learning Street Talk episode where Dr. Tim Scarfe, Yannic Kilcher and Connor Shorten covered François Chollet's "On the Measure of Intelligence" paper. Chollet thinks that deep learning methods are great for pattern recognition but not the route to AGI: generalisation comes from a high level of abstraction and reasoning capability. He strongly advocates that we start looking at program synthesis methods. He created the ARC dataset and Kaggle challenge to test developer-aware generalisation, and created a formalism for measuring intelligence as a function of generalisation difficulty and priors.
00:00:00 MAIN SHOW FLASHY INTRO
00:09:51 SHOW STARTS
00:11:21 GENERALISATION LEVELS
00:14:21 THE G FACTOR
00:22:20 INCLUDING THE CONTEXT OF INTELLIGENCE i.e. Creators, society, evolution
00:26:51 DERMGAN PAPER - GANS TO HELP US MODEL KNOWN UNKNOWNS(?)
00:37:41 WOZNIAK COFFEE CUP vs AlphaGo and broad intelligence
00:43:11 PRIORS, CORE KNOWLEDGE (DON'T MISS THIS!)
00:46:31 MULTI TASK BENCHMARKS
00:47:01 ARC CHALLENGE (DON'T MISS!)
00:48:51 LEG AND HUTTER, UNIVERSAL INTELLIGENCE
00:54:21 CHOLLET'S FORMALISM OF INTELLIGENCE
01:02:17 SPARSE FACTOR GRAPH TO LEARN RELATIONSHIPS
01:03:31 HOW SMART IS ALPHAGO, DEVELOPER-AWARE GENERALISATION
01:04:41 AUTOML ZERO
01:05:41 THE EXTENDED MIND
01:12:41 ARC CHALLENGE 2
01:20:21 Hofstadter's string analogy problem
01:22:31 HOW WOULD WE SOLVE ARC? (DON'T MISS!)
01:34:31 META LEARNING AND PROGRAM SYNTHESIS (DON'T MISS!)
01:37:21 SIMPLEST SOLUTION TO ARC, CHOLLET MAKES IMPLICIT UNSPOKEN ASSUMPTIONS?
01:40:21 DNNS ARE GLORIFIED HASH TABLES SKIT
01:43:31 MORE ARC CONVERSATION, RULE FINDING, GAN solution
01:47:11 REDDIT COMMENTS
01:51:51 COMMENT RE: MLST FROM REDDIT! (FUNNY)
01:55:31 REDDIT Q CONTINUED - META LEARNING, CONSCIOUSNESS, ALPHAGO
02:14:51 LOOKING AT KAGGLE SOLUTIONS
02:16:41 BACK TO REDDIT COMMENTS
02:17:51 BUILDING A GENERATIVE MODEL
02:23:51 FINAL TAKES ON PAPER
02:32:31 TIM'S FINAL TAKES ON CHOLLET
Paper: arxiv.org/abs/1911.01547
Abstract:
"To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans."
The best channel for staying up to date on AI. Thank you to the whole team.
Very insightful and informative. Hope to see this kind of discussion more often.
Best video you have made! 😃
Awesome discussion
Never been so early! Just started listening....
You guys are fantastic, you should post the audio from these episodes as a podcast so people can listen from podcast apps too. Keep up the great work!
We do! Search for machine learning street talk podcast 😀😀
@machinelearningdojowithtim2898 Nice, will be listening regularly!
The return!
Alright this paper might be the best paper of 2019, but Tim, can we also get more cat footage? I see that little guy back there around 10 minutes 🐈
I second more cat footage 🐈
That is Kina the cat! There is actually an introduction to Kina and we asked her what she thought about Chollet rejecting the universal intelligence theory. Next week we will shout out the first person to find the time index! 😂
A hard task is done by dividing it into smaller tasks.
Knowing how to divide the task is just another task.
Natural language to SPARQL translation is a game changer.
yes, would completely revolutionize enterprise analytics
Is it possible to include guesses with a confidence metric into datasets in order to make extrapolation possible? Give known training data 100% confidence and guesses at the far extremes of the manifold low confidence.
I think GANs are the key to AGI. Human intelligence is predicated on our propensity to forecast and hypothesize everything from body language to intonation, where speech in this case would be the data. A GAN that actively keeps training the neural network with hypotheses, in parallel with the algorithm operating in the wild, could over time create intelligence.
But even then, no matter what, it would always have to ask a human "is this result significant to you humans?", because it is pattern matching "like" intelligence, but its acuity for generalization is always ultimately engineered by the model's structure. Our brains' model was evolved, but there is also quantum mechanics flipping our biochemical switches in a continuum.
Silicon isn't an instantiation of consciousness, it's an expression of consciousness. But to that argument: our consciousness is then the expression of the sun's... lol philosophy is magnificently intricate
Great talk! The sunglasses are extremely rude.
I've been looking for a systematic way to link consciousness and information, and this is the closest thing I've found. Consciousness is what converts information (experience) from the realm beyond language and ideas into the realm of spreadable/communicable information.
Yannic says the only thing (in a system) you can really measure is skill (13:55). Is the ability to gain skills efficiently at a previously unknown task a skill in itself? Or is this actually intelligence? Maybe it is just semantics, but the answer may be both. Chollet states (p.27) that "skill is merely the output of the process of intelligence", so he clearly wants to distinguish between them. By the definition of intelligence given in the paper (also p.27), "The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty" and assuming that Yannic is correct that skill IS all you can measure, then intelligence IS a skill. But, in Chollet's view, it seems intelligence is a unique skill, as it is a meta-skill of ALL skills.
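The definition quoted in that comment can be put in rough mathematical shape. The rendering below is a deliberately simplified schematic of Chollet's Algorithmic Information Theory formalism, paraphrasing the idea rather than reproducing the paper's exact equation (which additionally weights tasks and sums over curricula):

```latex
% Schematic paraphrase only, not the paper's notation:
% intelligence as generalization difficulty covered per unit of
% priors plus experience, averaged over a scope of tasks.
I_{\text{scope}} \;\propto\; \underset{T \,\in\, \text{scope}}{\mathrm{Avg}}
  \left[\, \frac{GD_{T}}{P_{T} + E_{T}} \,\right]
```

Here \(GD_T\) stands for the generalization difficulty of task \(T\), \(P_T\) for the priors brought to it, and \(E_T\) for the experience consumed. Higher intelligence means covering more generalization difficulty per unit of priors plus experience, which is exactly the "meta-skill over all skills" reading the comment arrives at.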
Hey Tim, could you please recommend papers or books like these? Thanks
Geoffrey Hinton: It seemed to me there's no other way the brain could work. It has to work by learning the strength of connections. And if you want to make a device do something intelligent, you’ve got two options: You can program it, or it can learn. And people certainly weren't programmed, so we had to learn. This had to be the right way to go.
And we are still having this debate....
Peter, Chollet is still advocating for learning (programs)
"People certainly weren't programmed". This is laughably false. People are born with countless biases and unlearned subroutines. Some of them, like recognizing faces and language, begin in infancy and can be "switched off" or "broken", as in Broca's aphasia or visual agnosia.
The brain is programmed. And it is programmed, in part, to learn.
@@snippletrap Hinton cites the Baldwin effect as to how learned behaviours become innate. So for him it is still learning.
Dehumanization is the way!
To generalize, that was quite a roasting 😂
Solving those ARC puzzles can be done the same way as winning Atari games. An adversarial network should solve every one. The difference is that the computer has to "design" its own objective function. What we are able to do is guess what the test designer wants, based on having seen similar puzzles. Therefore there is an initial categorization problem: choose a puzzle type and matching objective function from a set of puzzles and objective functions. Once an objective function is selected, run its corresponding GAN to find the solution. One needs the appropriate priors to "recognize" a solution. We need priors, and so does an ML algorithm.
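The pipeline that comment proposes (categorize the puzzle, select the matching objective, then search for a solution that maximizes it) can be sketched in a few lines. Everything here is a toy stand-in: the two categories, both objectives, and the candidate-ranking "solver" are illustrative assumptions, not a real ARC solver or a GAN.

```python
import numpy as np

def objective_symmetry(grid):
    # Score how close the grid is to left-right mirror symmetry (0 = perfect).
    return -np.abs(grid - np.fliplr(grid)).sum()

def objective_fill(grid):
    # Score how few empty (zero) cells remain.
    return -np.count_nonzero(grid == 0)

# The "set of objective functions" the comment describes, keyed by puzzle type.
OBJECTIVES = {"symmetry": objective_symmetry, "fill": objective_fill}

def categorize(train_pairs):
    # Toy categorizer: if every training output is mirror-symmetric,
    # assume a symmetry task; otherwise assume a fill task.
    if all(np.array_equal(out, np.fliplr(out)) for _, out in train_pairs):
        return "symmetry"
    return "fill"

def solve(train_pairs, test_input, candidates):
    # Select the objective implied by the training pairs, then return the
    # candidate output scoring best under it (a stand-in for the GAN-style
    # search the comment proposes).
    objective = OBJECTIVES[categorize(train_pairs)]
    return max(candidates, key=objective)
```

With a training pair whose output is mirror-symmetric, `categorize` picks the symmetry objective and `solve` prefers a symmetric candidate over an asymmetric one. The hard part the sketch glosses over is exactly the comment's point: building the categorizer and the objective library requires the right priors.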
Normalize the sound levels, please
Now my goal will be to build the first AI that has a prior that consists of the entire catalog of IKEA
Do I know symmetry as a concept? Or do I just know a bunch of examples labeled as symmetry? Is that the same thing? I can also give a definition -- which is, of course, just another list, this time of properties. In a CNN, a machine can learn to properly classify something as symmetric or asymmetric. Is that fundamentally different from what I do? Do I exist in a realm of Platonic forms? Or do I simply have a model in my brain that I run my vision through? I really don't know the answer to this, but more and more, I suspect that my own intelligence works in all the same ways ML works and can work -- just with more compute and better priors. Nature doesn't usually reinvent what works well. We see this across diverse species. And make no mistake, ML is as much a natural process as a bird building a nest.
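The two routes that comment contrasts, symmetry as an explicit concept versus symmetry as "whatever my labeled examples look like", can both be written down. This is a minimal illustration with toy stand-ins (a one-line definitional check and a 1-nearest-neighbour lookup), not a claim about how brains or CNNs actually represent the concept:

```python
import numpy as np

def symmetric_by_definition(grid):
    # Route 1: symmetry as an explicit rule; the grid equals its own
    # left-right mirror image.
    return bool(np.array_equal(grid, np.fliplr(grid)))

def symmetric_by_examples(grid, labeled_examples):
    # Route 2: symmetry as a list of labeled examples; return the label of
    # the nearest example under pixel distance (1-NN classification).
    def dist(a, b):
        return np.abs(a - b).sum()
    _, label = min(labeled_examples, key=lambda ex: dist(ex[0], grid))
    return label
```

On grids close to the training examples the two routes agree, which is the comment's suspicion in miniature: from the outside, a good-enough example-based model is indistinguishable from the Platonic definition.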
I thought Francois was actually going to join in
Can you guys work on a better video ranking algo for YouTube so people actually get to see your videos? Haha
If the task is human intelligence (the only known, and completely undefined, intelligence), teach an AI abstraction and logic and it will still hit that barn more often than not.
this dude is just reading lines from his smart-glasses
The two men doeth lie too often
A human child has general intelligence.
An entity has general intelligence if it can make a cup of coffee in an average household.
A human child that has never known coffee cannot make coffee in an average household.
Therefore this human child does not have general intelligence.
This does not make sense.
You will need data, whether little or big, in order to create an actionable model.
Even humans need repeated experience in order to learn, especially on tasks they are still bad at.
If a task is quite similar to another task, or the rules for this particular task are a subset of rules
previously learned, then of course the agent will be able to do the task with few-shot or
even zero-shot samples.
So after 2h 30min talk you realize you should have first discussed your alignment?