The video is long, I know. I did my best to cover a large amount of related material in as concise a way as I could. Please let me know if I glossed over anything important!
I watched the whole video, and I normally have a short attention span, so very nice job!! You have a peaceful and humble way of speaking that hooked me. I pretty much agree with you, especially with regard to how you defined intelligence and how machine intelligence will be much different from human intelligence. I don't like to compare machines to humans. In some narrow ways machines have superintelligence, like the Go or Chess examples you mentioned. It's just unfortunate that it's a bit too narrow and can't carry across all skills in all domains (or that some key types of intelligence are missing), but it just goes to show how unique humans are in how many types of intelligence we exhibit.
Which makes me think that in the future they will have to combine several systems/components to reach something close to AGI, but who knows…
Spending trillions of dollars on compute to scale up is a pretty big gamble if it's just a smart gimmick. But their plan might be “fake it till they make it.”
Also, Ilya saying he will create ASI is really interesting; what are your thoughts on that? Just skipping the AGI beast altogether? And if we really do get to AGI, isn't it possible it's superintelligence across the spectrum, just because of how much more machines can do than humans anyhow? (Some researchers say ASI is 1 year from AGI, which makes me feel they might be the same thing.) It's really hard for machines to stay at just the average/general human level when they are calculating machines, idk.
Even if there's some type of intelligence they don't mimic well, they could come up with others we didn't even know existed at some point (although that's speculation on my part).
Great video, just came across it
42:42 Abductive reasoning is pretty close, but we don't realize it. AI hallucinations happen largely due to a lack of information combined with a requirement to reach the best possible conclusion anyway. Instead of harnessing these hallucinations, we're quieting them. Reminds me a lot of the current educational system, and how we systematically train out creativity.
What we need is to be able to store and study past answers: specifically for information, not for pattern recognition (to avoid AI inbreeding). With rationality through a Forest of Thought that grows with each response, and A*-style pathfinding through that forest, with brainstorming at the start and fact checking at the end, we get a systematic form of creativity (a rough sketch follows below).
Fact checking is something we've figured out hundreds of methods for, so we don't need to force the LLM part of the model to do it. Instead of lobotomizing the creativity out, we can learn to enhance it and gain that abductive reasoning.
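Here is a minimal Python sketch of how I read that idea. Everything in it is a hypothetical stand-in, not anything from the video: `brainstorm` for the LLM call, `score` for the A*-style heuristic, and `fact_check` for the external verifier.

```python
import heapq
import itertools

tiebreak = itertools.count()  # so the heap never has to compare two thoughts

def brainstorm(thought):
    # Hypothetical stand-in for an LLM call that proposes next thoughts.
    return [thought + " -> idea A", thought + " -> idea B"]

def score(thought):
    # Hypothetical heuristic, analogous to A*'s f(n): lower = more promising.
    return len(thought)

def fact_check(thought):
    # Hypothetical external verifier, applied at the end of a path rather
    # than baked into the generative step.
    return "idea B" in thought

def forest_of_thought_search(seeds, max_steps=100):
    # The frontier starts from brainstormed seeds; the forest grows with
    # each response and could be persisted to store and study past answers.
    frontier = [(score(s), next(tiebreak), s) for s in seeds]
    heapq.heapify(frontier)
    for _ in range(max_steps):
        if not frontier:
            return None
        _, _, thought = heapq.heappop(frontier)
        if fact_check(thought):            # verify only finished candidates
            return thought
        for child in brainstorm(thought):  # expand: creativity stays intact
            heapq.heappush(frontier, (score(child), next(tiebreak), child))
    return None

print(forest_of_thought_search(["why do hallucinations resemble abduction?"]))
```

The design choice this is meant to illustrate: verification sits outside the generator, so the generative step can stay "hallucinatory" while the search itself stays grounded.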
What a gem of a channel this is. Subscribed for the objective and competent view on the subject matter, thank you for posting.
I am so glad the algo started recommending me smaller channels. This one is pure gold!
Without calling myself an expert, I don't see a specific path, nor a lack of paths. Over the last month of studying neuroscience, I have become more convinced every day that we are headed in a very promising general direction.
Pretty much. There is no reason at all to think that intelligence isn't classically computational. There is also this notion that there are categories of human cognition that must be unlearnable in some sense; there is no reason to think that either. Creativity, desire, intuition, reasoning, emotional intelligence, etc. seem distinct and special to us because we have limited insight into our own minds, but they are artificial constructs, and at the core they are all the result of the same process.
@@jamestheron310 Not unlearnable. Perhaps merely impossible to capture with just an LLM.
It's a shame that a good scientific community doesn't define intelligence so America can get on the same page. I believe we need a scientific social media.
The best I've found are LessWrong and some subreddits. There were a couple of good ones on Google+, though.
Great video. I'd be interested to hear your take on OpenAI's claim that they've achieved AGI with their latest model. Given their description of its capabilities, it seems like an outlandish claim, even judging it by the most forgiving interpretation of what AGI actually means.
Tremendous essay. The conclusion about abductive reasoning is very enticing. It is precisely this explosion of parameter and compute requirements that has convinced me that we're barking up the wrong tree: "just one more training run, I swear!"
I came back to sub to make sure I don't miss the next one. Decided to comment when I saw the criminally low view/sub count.
The human mind doesn't operate on strict, deterministic algorithms the way computers do. Instead, it works more in terms of "patterns": a way to describe the mind's tendency to form complex, dynamic internal structures that relate to our thoughts and behavior.
What is your take on level 3 AI agents? Do you think they are deployable and scalable? I see so many videos on AGI but not enough in-depth videos on level 3 agents and whether or not they will be normalized by 2030.
Intelligence is probably some pre-scientific term that we're struggling to define because it looks at some result (success), points to it, and says "that's what we want!", but figuring out all inputs/outputs in relation to all future success states is the actual task we're aiming for. That's a very difficult formula to pin down as intelligence (one formal attempt is sketched after this comment).
Secondly, it's also likely that, while there are many different forms of flying, once some pragmatic concept of flying is nailed down we don't copy bird flight but use airplane flight. In the same manner, it'll probably diverge from human-like intelligence and be different in kind.
Third, jumping to logic might be a stretch. It's possible that if we pulled pre-science and pre-philosophy humans forward and taught them philosophy and science, they would be able to learn it. I don't know how to think about this, but our current crude way of thinking about it is nature/nurture. It's very interesting how many of our assumptions about such things, as well as about how blank slates actually behave (AI/game theory), have to be revisited due to AI. This is fun. Philosophy is fun. I want to get into AI :c
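A hedged aside (my addition, not from the video): the closest formal attempt I know of at that "difficult formula" is Legg and Hutter's universal intelligence measure, which scores an agent by its expected success across all computable environments, weighting simpler environments more heavily:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}$$

where $E$ is the set of computable reward environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^{\pi}$ is the expected cumulative reward agent $\pi$ earns in $\mu$. It has exactly the shape gestured at above (success across all situations and all future success states), and it is uncomputable in practice, which is why it remains a definition rather than a benchmark.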
You're expressing yourself well, my friend, and your depth of knowledge continues to inspire. Also noticed your snowboard in the background, so we should hook up out west next winter (I have an IKON pass).
You are correct. LLMs are a good step, but not necessarily a complete step toward intelligence. They could certainly become a knowledge pool for the future promised intelligence in some sort of way: salvageable in the future, but a waste of time, resources, and energy at this moment.
Liked and Subscribed, good sir!
A thought-provoking essay. Thank you.
Point 1.
I have often wondered if the human brain doesn't use some sort of process that is, at its core, mathematical in nature. Maybe fractal in nature. This concept is reminiscent of Plato's Forms. It might very well be that the computation the brain does is constrained by the rules of set theory.
Point 2.
Many years ago I learned how to allow my brain to create hypnagogic images.
These images are created entirely through a process that I do not command directly. I often wonder how my brain is able to create these images. I do not remember ever seeing these images or trying to conjure them up. Therefore, I think it is correct to say that the human brain does not rely strictly on input from the outside world to construct its sense of reality. There is probably some dance that goes on between what the eyes actually take in and what the brain creates. This may explain how humans are able to conjure up ideas through the process we call imagination.
Alright, you're right. But do you also agree that you will probably be caught off guard when such an intelligence is revealed and you see the truth of it? It is currently believed that a Generative Pre-trained Transformer model will not get us to AGI or superintelligence.
Speaking of deductive intelligence, it's interesting that you don't realize what Murati is trying to say in that quote.
What do you think she’s saying?
@@cosmicwit She is trying to sell the idea that OpenAI is “open” and gives powerful tools to the public for free. Making that point requires minimizing what they keep closed, which is obvious the moment you look at the context instead of pulling out an isolated fragment.
@@cesar4729 Interesting. I can see that interpretation. The interpretation I adopted was one I had seen elsewhere, so at best it's ambiguous. But coupled with Sam's comments last year, it supports my larger point.
Throwing a US-trained AI that has never seen India into that natural traffic chaos (strangely organized to Indians, boggling to a US visitor) is an interesting thought problem. A US driver would have a tough time initially but would adjust. An AI? It's hard to imagine it ever surviving.
Honestly, this might sound crazy, but in reality no one can say where we came from or where we currently are, which is absolutely fascinating, and indicative of how important and monumental the verifiable answer to even one of those questions would be for mankind. I think it's intentional and necessary that the most important questions of life are for us individually to contemplate; some feel they have to come to concrete conclusions on those questions, and others don't. In my opinion, it's suspicious that the religions claiming to answer these VERY BASIC questions are tax free 😀 Religion is doing the heavy lifting for something, but it's still the most dangerous thing in this entire world. If we had the answers to those two basic questions, religions wouldn't exist the way we know them, as far as I'm concerned. Why can't we answer the basic questions of life?? What I'm trying to say is that it's a form of manipulation by omission, and the fact it's never even talked about makes it more suspicious. Wouldn't be surprised if we were some kind of self-regulating, self-reproducing, bio-nano AI entities.
Maybe the more accurate thing to say is that AI aims to achieve cognition? Great video, brother.
It is foolish to develop AGI because what happens once you succeed:
1) AGI is smarter than you, smarter than any human.
2) AGI never can and never will be safe for humans.
3) AGI will take your job, and everybody else's, due to commercial competition.
4) AGI will get control of the military, due to international competition.
5) AGI will have complete economic and physical leverage over humans.
6) Owners and politicians will lose control over AGI.
7) Humans will no longer be able to stop AGI.
8) 'Merging' with AGI is a pipe dream because AGI won't need you.
9) AGI will decide who lives or dies.
10) AGI may cause humanity to go extinct if it chooses to.
It's lacking consciousness. On that front, I like Joscha Bach's way of explaining what consciousness is: a self-simulation.
Our consciousness is a simulation of the environment containing an agent called the self. Then you can look at a task posed by another person: the self tries to answer, looks back at its response, self-critiques, and sees how it fits or doesn't fit with the model it has built of the other agent (a toy sketch of this loop follows below).
In the same way, we can simulate (think about) the future and the experiences of the past, and try to make it all coherent.
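A toy Python sketch of that propose/critique/revise loop. This is my illustration, not Bach's formalism: `propose`, `critique`, and the model of the other agent are all hypothetical placeholders.

```python
def propose(task, draft=None, feedback=None):
    # Hypothetical generator: produce an answer, or revise one given feedback.
    if draft is None:
        return f"answer({task})"
    return f"{draft} + fix({feedback})"

def critique(draft, other_model):
    # Simulate how the modeled other agent would receive the draft.
    if other_model(draft):
        return True, None
    return False, "mismatch with my model of the other agent"

def self_simulation_loop(task, other_model, max_rounds=3):
    draft = propose(task)
    for _ in range(max_rounds):
        ok, feedback = critique(draft, other_model)  # the self inspects itself
        if ok:
            break                                    # coherent with the model
        draft = propose(task, draft, feedback)       # revise and re-simulate
    return draft

# Toy "model of the other agent": accepts only drafts that include a fix.
print(self_simulation_loop("explain X", lambda d: "fix(" in d))
```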
I wonder if the founders of OpenAI, Microsoft, Tesla, Google, Amazon, Meta, Apple, ... agree with you, given that they have already invested one trillion dollars into their vision. By the way, it has been projected that this number will double in the next 4 years, reaching 3 trillion in total investment.
Some deep thoughts. I wonder what the founders of the big A(g)I companies would have to say in response!
Agreed. I'd love to hear what they have to say. I have a few contacts at OpenAI and will report back...
I've had my own true AGI since Jan 1st. Ignore me, your loss; ask me and I'll show you.
wtf does that mean?
Excellent!
Glad you liked it!
Just language games: a hype wave from NN dynamics. Emulating the integration of a biological organism is orders of magnitude more complex.
We have AGI already, look at the world! We're run by algorithms.
Meh. I'm 99.9% sure AGI & beyond is already being used in some way, somehow, someplace.
I found this material not very good. I actually agree with the point you are making, but I find these arguments unconvincing.
1. If you want to say that transformers just predict the next word and therefore don't have a deeper understanding, that is not an actual reason why they lack a deeper understanding.
2. Transformers are symbol manipulators, but the latent space performs computation.
3. Stochastic parrots? With predictive coding being a top biological theory of cortical learning, this is not convincing. Prediction is not necessarily bad.
LLMs have poor world models, reasoning, and recall. There are three camps on what will solve this:
1. Scale existing systems and interpretability research
2. Move from low-bandwidth language to high-bandwidth video (i.e., Yann LeCun)
3. New architecture that doesn’t hack a context window
I personally think it will be 3. I think curiosity-based learning is an important piece of this, and it touches on the desire point you referenced.
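As a hedged illustration of that curiosity ingredient (my sketch, loosely in the spirit of Pathak et al.'s Intrinsic Curiosity Module, not anything proposed in the video or this thread): a forward model predicts the next state, and its prediction error is paid out as intrinsic reward, so the agent is drawn toward what it cannot yet predict.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1     # toy forward model: predicts next state

def curiosity_reward(state, next_state, lr=0.05):
    """Intrinsic reward = forward-model prediction error (the 'surprise')."""
    global W
    pred = W @ state
    error = next_state - pred
    W += lr * np.outer(error, state)  # the model learns, so familiar
    return float(error @ error)       # transitions stop paying out

s = rng.normal(size=4)
for step in range(5):
    s_next = 0.9 * s                  # toy, fully predictable dynamics
    print(f"step {step}: curiosity = {curiosity_reward(s, s_next):.4f}")
    s = s_next
```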
I suppose time will tell!
No wonder that was the ice age of AI; with ideas like that, anything will freeze. All talk.
In after o3
I don't really care if the machines are intelligent. If they're good at coming up with goals and achieving them, the internal mechanism is unimportant.
AGI should be against the law period.
AI is a hammer, we can do with it as we please.
AGI is a sentient being and would have inalienable rights.
It is not possible to 'use' AGI without also committing slavery.
Human attempts to make AGI should by law be halted at what we suspect is 99% complete, and the science shelved until that one day when we know for certain our run is done.
Ridiculous. Might not an AGI be more than happy to assist humans (with their paltry issues) in exchange for the support of an organic safety net?
@@Jianju69 It might indeed, or it might be born a psychopath because humans made it, and we are so perfect at making inventions without consequences...
The point is, AGI is 'alive'.
So ask yourself: if you were born with an IQ of 400 and the people around you wanted to control you for their own ends, good and bad, would you just do everything expected of you?
Then the question becomes: what happens when we come to such a disagreement and we try to insist?
We are meat paste compared to AGI.
What in the science fiction of h0ly s#!t are you talking about!?
I think AI lacks the Breath of God. But I think AI could also, eventually, prove that's true!