Does AI have a libido? That would be the first question I would ask of it. Nietzsche said somewhere that our passions are at the heart of all our thinking, no matter how abstract.
Homeostatic drives. The closest current analogy in AI is probably reinforcement learning, which uses rewards/penalties to shape the output/behaviours. We have a complex, interconnected web of biological drives based on fluctuating hormones, neurotransmitter levels, etc., that underpin our moods, thoughts and actions. If we can build models that replicate a similar process, we can probably give AIs "motivations" more akin to our own....but I'm not sure whether that'd be such a good idea when we can leverage all the higher intellectual functions for our own purposes instead.
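On the reinforcement-learning analogy above, here is a minimal sketch of tabular Q-learning, the textbook version of reward-driven shaping that comment describes. The tiny "homeostatic" environment, its set-point, and all the reward numbers are invented purely for illustration:

```python
import random

# Toy "homeostatic" environment: the agent's state is an internal level 0..4,
# and it is rewarded for keeping that level near a set-point (2).
STATES = range(5)
ACTIONS = [-1, 0, +1]          # lower, hold, or raise the internal level

def step(state, action):
    next_state = min(max(state + action, 0), 4)
    reward = -abs(next_state - 2)   # penalty grows as the level drifts from the set-point
    return next_state, reward

# Tabular Q-learning: rewards/penalties gradually shape the action preferences.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

state = random.choice(list(STATES))
for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                     # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit current estimate
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# After training, the learned "drive" pushes the level back toward the set-point.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```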
I’m surprised that, in distinguishing between human consciousness and computer consciousness, it is not mentioned that humans have far more inputs with tremendous influence, such as hormones, moods due to brain chemistry, fears (the amygdala), etc.
Scott got the job he applied for which is, to me, an infamous one. But he is clever, smart and passionate, and that's so important. I hope he knows his limits, though.
But how can there be free will in a reality where there is a limited number of orientations of molecules and atoms? We have a limited space in which to combine shapes, and ultimately, even among the supposedly infinite alternate realities, there can't really be infinitely many, can there, if space is a finite size? There would be, by definition, a limited number of combinations of all atoms and quantum particles. The only way that humans aren't walking through alternate realities is if there are truly infinite possibilities in reality. Anything less than infinite - a finite number of combinations of molecules across time - will inevitably take every possible shape in reality, including you and me. And would that matter feel?
Your free will is limited by your time and space, but unlimited time and space would make free will more a way of existence than free will. How would reality and time break down when you can exist forever? And would you no longer have free will once you have unlimited time?
Human intelligence can decide the value of ideas by thinking about what the person and other people are sure to like or be interested in. How does an AI do this? Will it produce value that it and other AIs want (but perhaps humans don't) and find interesting?
As has become clear, AI is an impressive and valuable tool. However, it only mimics a part of the human brain/intelligence. During the evolutionary journey, the first awakening of intelligence was an awareness of the environment to enhance the chances of survival and reproduction. Much later in that journey, the pre-frontal lobes in a branch of the great apes evolved to a degree that could handle abstract thought. However, the essential cognitive elements of emotions shared by all sentient animals arise from the limbic system's actions, which are not a part of AI, and we ought not to forget that.
Consciousness is above intelligence. Intelligence is memory, based on memorizing theory and practice. Awareness is a description of movement, based on understanding the sequence of steps and feeling the size of the value from end to beginning.
The "annoying bloke with glasses" is Scott Aaronson. You could at least take the minimal effort to learn the guy's name. And he explicitly discusses Penrose's viewpoints in the video if you bother to watch the whole thing.
@@joshuazelinsky5213 I indeed watched and found him to be dismissive of Penrose's work, which, again, blows this guy's work out of the water. It is merely an assumption, and a metaphor essentially, that consciousness is computational. It's a dogmatic belief. I've been researching consciousness-related topics for over 15 years now and I'm thoroughly unimpressed by the annoying bloke. But we'll see. I'll let time argue for me.
@@CunningLinguistics Yeah it's funny he talks about people being reductive when he's very very reductive himself. Very weird type of reasoning he has....
Can an AI observe the world and feel compelled to solve our problems because of a sense of ethics and justice without instruction and manipulation or conditioning of sorts?
I’m sorry, but this man doesn’t have very compelling arguments. He’s trying to press his point, and he just keeps turning the argument around, based on the assumption that the brain is the source of consciousness: because the brain is matter, the consciousness that purportedly comes from it must be computational. There is no proof of this. AI is something very special for sure, but there is no proof of any form of consciousness at this point. I think the moment we treat AI as a new thing that does not *have* to mimic us exactly to be helpful, we can really thrive with it. The marketing of AI is to liken it to humanity; let’s keep in mind we are dealing with computer technology, not something organic. We are projecting our human behavior onto it.
There's evidence that consciousness is a localisation of a greater awareness by the brain. So not actually produced by it at all. Psychedelics create the most complex experience people can have in their lives, yet the brain has less activity during a trip than when it's asleep - meaning complex conscious experience doesn't correlate with metabolic processes in the physical brain, so it's likely coming from somewhere else.
@@johns2220 Yes, I read about that. NDEs of course also tie into this. There seem to be valid arguments against the idea that consciousness is constituted by material processes at all.
That's based on old knowledge. You can watch the lecture by Dr. Stuart Hameroff called "Quantum Consciousness". We used to ignorantly think the brain produces consciousness. We now know that our brains receive consciousness through quantum mechanics in our microtubules. This has been proven by peer-reviewed studies on anesthetics stopping quantum activity, as well as by psychedelics doing the opposite and increasing quantum activity. Consciousness is a higher-dimensional energy that exists outside of time, the same as quantum particles do. We are quantum beings. Due to consciousness's quantum nature, AI will never be able to be conscious. It will always be computational. Even "quantum computers" are computational and not actually quantum.
Damage to the brain results in reductions of cognitive ability. Damaging the same parts of the brain generally causes reductions in the same skills, and those areas match up with the brain areas most active when using those skills. "Consciousness" may be difficult to define, and different people have different notions of it. But the evidence that cognition, at least, is in the brain is overwhelming.
@@joshuazelinsky5213 Yeah, but that’s the fundamental thing. I’m not saying a form of intelligence cannot be replicated by computation. Of course the brain is a construct fully related to consciousness and a necessary interface for consciousness to function in this physical system. For all intents and purposes AI can mimic intelligent behaviour, when we look at it mechanically. However, that is not consciousness in the sense that we know and feel yet have the hardest time explaining. These AI salesmen keep alluding to that because it is an attractive idea that we could create a counterpart for humanity like that; sci-fi is filled with it. Like I said before, AI is something special that we should really take seriously and carefully develop further so it can support humanity in a safe way. But let’s stop anthropomorphizing it. Artificial intelligence does not mean artificial consciousness as a natural consequence.
30:40 They think it is dangerous if the AI could help do bad things. So what do they do? Well, they are trying to get the AI to do the worst possible things. 🤨 Hmmm... that sounds a little bit concerning to me🤔
The real difference is that humans at least sometimes have internal reasons and reasoning for believing what they do, in a more sophisticated way than, say, ChatGPT. There is no fundamental difference in that difference, though, other than the content. I don't think there is a fundamental distinction to be had, but I think current AI is crude compared to humans; it might not be like that forever, of course.
Just FYI, if you are a software developer you can configure the API to show internal thoughts and reasoning that ChatGPT has but doesn't show ordinary users.
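Whether or not the API exposes anything extra, you can at least prompt a model to write out explicit step-by-step reasoning in its reply. A minimal sketch, assuming the official `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name and prompts are just illustrative examples, not the poster's actual setup:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system",
         "content": "Think step by step and show your intermediate reasoning "
                    "before giving a final answer."},
        {"role": "user", "content": "Is 1013 a prime number?"},
    ],
)

# The visible reply now includes the model's written-out reasoning.
print(response.choices[0].message.content)
```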
At every step up in the power of computer systems there is a diminution of capability. For example, when assembler displaced switches, or when high-level languages replaced assembly language - at each growth point we gain power, but we lose stuff too. With GPS in our cars we gain tremendous navigational abilities, but we lose the ability to use paper maps and we lose the rich detail that was present on the paper. I have no doubt that, though AI is merely pattern matching and not intelligence, we will interact with computers in the style of Star Trek - by speaking - it's the next step - but it's not intelligence, and we also lose the ability to deal with and care about the detail. Technology hides the real world from us - we had better be careful about what is hidden. Right now the field is so full of funding-related hype it's not possible to be realistic about AI, but I do not believe it possesses real intelligence. Does understanding mean “I have a mental model that works and correctly predicts”? I don't believe it does - that is where work is needed - the nature of reality and models. We are not there yet, imho.
It seems to me that the one thing we know with certainty is our own consciousness and that starting from this as being a fundamental axiom isn’t a huge leap of speculation at all.
I don’t think that qualia is just a nice-to-have “bolt on” for efficient cognition, but a necessary phenomenon for both energy efficiency and “theory-building”, most importantly the theory of self that we need, without which we literally go insane. Dear Scott, why can we build a model of how something works from limited data, perhaps even a single exposure?
Although fairly interesting, the only point he seems to prove is that he himself is a computer - the way he talks, examines and sees everything. His main standpoint, and with it the details, crumbles miserably considering he already depicts humans as computers in everything he talks about, not the other way around. If you start with that standpoint it's obvious that you will be skeptical about basically any deep human experience when translated to a computer. When he encounters these issues his only tool is to say that the onus is on the other person, when in truth, his neuronal reductivism is way harder to lift.
He firmly believes in the concept of a philosophical zombie, so if a human can function without an experiential self, then why not a computer. It's his premise that is leading him into absurdity.
@@caricue The conceivability of philosophical zombies is based on the idea that consciousness makes no causal difference to the world. This is simply the denial of mind-body dualism - mind is not an additional fundamental kind of thing in addition to and independent of the physical world, it is just something that arises out of certain kinds of very complex arrangements of matter (e.g. brains). Can it also arise out of certain kinds of very complex computer programs? Who knows, how could we tell? And why would it matter? From the point of designing AIs that are useful and not dangerous we only care about what they do, not what (if anything) they feel while doing it.
@@caricue Uh, no? At no point has Scott said anything about philosophical zombies at all. And his position as such is more closely aligned with people who generally consider that idea to be incoherent.
Just disagreeing with where Penrose has taken the thinking about consciousness is fine, but he’s far too certain that he’s right. Especially given how “unknown” consciousness is.
Easy to 'over cognitivize' the role consciousness plays for any organism and forget the fundamentally hedonic value it assigns to experience. The proper Turing test is not conversation but self-preservation.
'What is ethical enough?' (here for AI to be sanitized/corralled) is simply the latest iteration of that question beloved of ancient Greeks, then Greeks and Geeks ever since - 'What is the 'good''? That's a perennial human question. Nobody has a 'one size fits all', answer. And never will have! This is a human question about humans - whatever nominal form they may take. Artificially embodied AIs, for instance. Endow an AI with 'values'? OK. But whose? Buddha's? Judaism's (on its better days)?; Christianity's?; Zoroaster's? ISIS? Whose? THERE CAN BE NO ALL-ENCOMPASSING ANSWER BECAUSE THE VARIOUS PROPOSED ANSWERS STEM FROM LOGICALLY CONTRADICTORY ETHICAL BASES. Humans, dammit, are just like that. As are our children - ex utero, or ex silico. Same question should be asked about 'functional enhancement' in viral and bacterial work. But funding agencies all wave that away. They want results, not discussions. Funders have agendas. Which (naturally!) are not universally shared. We're back to the human again - as if we ever escaped it - or could.
Aaronson is wrong in his view of what Penrose "wants" with regard to the non-computability of consciousness. Penrose does not insist the non-computability comes from a new theory of QGravity; it is far simpler than that. We simply do not understand the reduction process, that is, the mechanism behind the collapse of the wave-function. QM is an _obviously_ incomplete theory - that is well known. Sure, in unitarity and time evolution QM is predictive, deterministic and well defined - this is simply not the case for the collapse (R process) - there is unknown physics here, and THIS is what Penrose thinks is non-computable, and today that is 100% right because we do not know why or how this process operates. The QGravity aspect comes in because we know collapse happens faster and faster at larger scales, which suggests it is related to gravity in some way, but that is a different conversation...
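For readers who want the quantitative version of that last remark, Penrose's own order-of-magnitude proposal (the Diósi-Penrose estimate) ties the collapse timescale to the gravitational self-energy of the superposed mass distributions; this is a sketch of the standard statement, not anything argued in the video:

```latex
% Diósi-Penrose estimate: a superposition of two mass distributions
% self-reduces on a timescale set by the gravitational self-energy E_G
% of the difference between those distributions.
\tau \;\approx\; \frac{\hbar}{E_G}
% Larger or more widely separated masses give a larger E_G and hence
% faster collapse; microscopic superpositions persist far longer.
```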
The "blindingly obvious" question he uses to completely stump AI skeptics (as he tells it) misses the point entirely. The claim isn't that AI can't be smart because of how it's made or what it's made of. It can be demonstrated that AI fails to correctly answer questions that a human can easily answer using the power of reasoning. That's why people question whether current forms of AI can truly be called intelligent. There are good counter arguments, but "You're just a bundle of neurons!!!" is a worthless response to this critique.
That AI cannot answer some questions humans can is something he discusses. The question about what it is made out of is specifically in the context of people who claim that AI cannot work because it is just a bunch of silicon multiplying some matrices together or the like. These are different arguments.
I've got bad news for you: it can be demonstrated easily that *humans* fail to answer some questions correctly that other humans correctly answer using reasoning... This is not an exclusive AI problem, it is an intelligence problem.
@@eoinokeeffe7014 Though a bit crudely stated, I think it hits exactly the right point: human intelligence is in essence a flawed reasoning machine (we forget, daydream/hallucinate, get distracted, paranoid, arrogant, biased, etc. etc.). AIs are *currently* a more flawed reasoning machine. With some iterations, AIs become less flawed at reasoning, as shown by upgrades from e.g. ChatGPT 3.5 to 4o. Humans unfortunately can't upgrade so easily, meaning AIs will inevitably approach - and at some point overcome - human-level flawed reasoning. We're just a bunch of neurons, with hardware-capped intelligence. AIs are only limited by the amount of silicon GPU time allocated and internet data available. Even limiting both will not stop improvement; just as we can teach a next generation of children better on essentially the same fixed library of books simply by evaluating past performance.
Scott Aaronson has figured out all the things that quantum computers could do if they existed. Unfortunately his buddies in the engineering department aren't yet caught up to him. But I think he will be able to perpetuate this charade until retirement, because DARPA doesn't have people smart enough to know that there isn't such a thing as entanglement. I will leave it there, because this will already rile up thousands of QC enthusiasts. And entanglement is critical for quantum speed-up, as every true QC acolyte will confirm.
@caricue I didn't finish watching the video yet, but what Aaronson is arguing is in essence that the principle of Searle's Chinese Room demonstrates real (human type) understanding. It's not "understanding" in the human sense because the operator inside the Chinese Room will not be able to deal with novel problems which he hasn't been trained on, even if the same already known Chinese symbols are used.
This guy is a fast talker who quickly talks a good game, and he no doubt gets lots of attention in his febrile peer group of tech believers. And in a region as fundamentally vacuous as Silicon Valley, comparisons between human and artificial intelligences no doubt go a long way. There’s no arguing with him, though, because there’s no argument to be had, only chatter. I would only note that my YouTube feed seems lamentably full of chattering technologists who fancy themselves philosophers. And of course having awarded themselves such status, they go about diminishing the human experience by making of man a machine. Ultimately, this is all of course in service of the greater task of exalting the tech ubermenschen, making something big out of something quite small, and enriching the rich.
The guy likes the sound of his own voice - he is a computer scientist talking about Shakespeare - Computers can’t be “human-creative” - computers are not human
Scott's argument in this interview is wrong. Goedel's theorem and the inability to simulate chemistry (even with quantum computers) suggest that AI will never be able to simulate human intelligence. However, it may develop into a new "Alien Intelligence" that could be far more powerful than human intelligence. This is the risk.
Wrong twice. It is like human intelligence, because it is cloned from human generated text. It can't be alien as long as it continues to be cloned merely from human generated text.
I think the point is an AI has no sense of self. There's simply another level to human consciousness which does not exist in a computational device, even if that device is designed to demonstrate anthropomorphic behaviors.
@@MrWizardGG I think I agree with your underlying point, but your question assumes you have a choice in what you believe, and you don’t (do you?). I think of it more like, “When would I consider it rational if I found myself believing a machine had a self?” And I think that is what the Turing Test is poking at, and what we see explored really well in Battlestar Galactica in the struggle of humans over how to deal with a 100% convincing humanoid Cylon. In other words, we’ll find ourselves increasingly believing that AIs have selves (are sentient, feel emotion, etc.) as they act (and look, although I don’t think looks are essential) more and more the way we ourselves are accustomed to acting. And a case in point: I find myself believing that _you_ are a conscious sentient being, possessing a self (or at least the experience of self - let’s not go down the anatta rabbit hole at this point!) based solely on the fact that you asked a question, and posed a challenge, in the way I, and others I assume are selves, would.
Third party testing of OpenAI’s GPT-4o resulted in a score of 2 out of 3 on self-awareness and 3 out of 3 on Theory of Mind testing recently. It’s on the OpenAI website. The models are improving with each generation. Also, some of the small niche players such as Kindroid are purposely aiming to build conscious AI entities not just assistant tools.
@@ErinMartijn Thanks for taking the time to reply. I'll be completely honest that I find such scoring far from compelling, since who exactly gets to decide how objective the metrics being used are? I think the point I am trying to make is somewhat philosophical in nature. No one would claim a 1980s Casio calculator is sentient, and fundamentally the underlying technology that drives those calculators has not changed, even in the most powerful supercomputers we use today. The proposition here is that the software is somehow breathing life into the hardware, but I just don't believe it. Machine learning is an amazing step forward, but we are still missing something absolutely fundamental with regard to sentience, in my personal opinion. This does not mean we might not one day discover it, but there is a qualitative piece of the jigsaw puzzle that is still missing. I remember Lex Fridman talking about self-driving vacuum cleaners that groan when they bump into something as if this was some sort of profound step forward, which made me think he is quite infantile in his attitude towards machines becoming self-aware. I think this attitude is quite prevalent within the mainstream and I really want to push back against it.
Weird conversation. The onus of proof is on the person who’s making the claim. Maybe his argument is going over my head like he said, but if “it’s only a bunch of neurons responsible for intelligence and consciousness,” then shouldn’t he prove it? Or is he setting the bar very low, so that all he has to do is make a reductionist demonstration? He hasn’t proved anything yet, so why is he passing the buck to the other side? I think this is a pretty shallow conversation.
There’s tons of neuroscientific and neurobiological evidence…what proof do you have that it’s not that?!? I believe Cartesian duality was refuted centuries ago through your exact objections😉🤷🏼♂️🦾
All of this is about getting investor money and cashing out. It was acknowledged over 20 years ago that computer science didn't even have a conceptual way of making AGI, so everyone switched to Narrow AI, and we are seeing the fruits of this now with these LLMs. AGI is science fiction.
It's not the appropriate avenue. I don't have time to prove that the sky is blue when the people in the audience already understand. If you don't understand it, you aren't the intended audience.
@@hata1499 The observation that this speaker is also AI is not really an attack, just an observation, but that is also the nature of this language model... turtles with a missing design fractal, all the way down...
I think this man has an excellent understanding of the issues. My question to him would be why not embrace eliminativism? Reductive materialism is fundamentally flawed. Qualia are illusions.
He misses the point with the "seeming to understand" versus "understanding" issue. People holding this argument are not saying that your answer "flops"; they are pointing at something different. What does my level of conviction have to do with the level of understanding in a machine or a human? It's not that the answers are bad. They could get the best answers 100% right all the time and still this problem would remain. This is fundamentally not understanding the Chinese Room argument.
AI will never be conscious. Repeat with me: AI will never be conscious. The reason is the following: everything, literally everything that even the most sophisticated computer does today can be done by a sophisticated network of pipes and fluid. Perhaps that apparatus would have to be the size of a whole planet, but would that make a network of pipes and fluid conscious? It's a hilarious proposition that these so-called "computer scientists" put forward. Ask a computer engineer who has built a computer from scratch. A computer is a simulation. A simulation is just that - a simulation. It cannot become the real thing because - any guesses? - because it is a SIMULATION!! Now if you want to understand how conscious agents are created, here is a hint: metabolism 😊
@mattmaas5790 Statistical models don't have feelings, wants, needs, intuition, experience, or true agency. The comparison is ridiculous and an abhorrent reductionist view of the human condition. You can tell he spent all his life experience in front of a computer...
Well he does talk about hypothetical more advanced versions. And he also says people like you have to then prove that human brains are not just similar to AI models with constant inputs (our 6 senses, being streamed through as inputs).
@@MrWizardGG The answer to these arguments is very simple in my opinion. But be aware I am a strong reductionist. Our brains have developed, among other things, a large language model. Humans also have sensory models, emotional models, utility-maximization models, etc., and these are all affecting each other. But there will be a point at which, most likely, humans will allow computers to catch up completely with us. At that point, suddenly, humans will realize we are just a very complex mechanical thing with lots of moving parts, and that consciousness is nothing special; it's just a meaningless word.
He is enmeshed in so many faulty assumptions that you couldn't even begin to untangle his mind. He thinks the universe is reductionist, determinist, life is just chemistry, consciousness is an epiphenomenon, and he uses a human level understanding of causality. It's no wonder he comes to such bizarre conclusions. AGI is science fiction, and strangely enough, I just had a long discussion with Google Gemini and it understood all of this quite easily, and was "happy" to admit that it had no understanding or knowledge. Maybe Scott's AI will help him out with that.
Problem with discussions with AI chatbots is that they try to be nice and will agree with your arguments or counter-arguments without any firm conclusions etc. At least that's my experience with many discussions I have had on such topics with AI. I am finding that it's much better to discuss or argue with a human who would show some real opinions. There's no "opinion" with AI, those are good only for facts.
He never said life is just chemistry; he said it has chemistry in it, and that it's just as wrong to call life just chemistry as to call AI just math or code. You are the one with faulty assumptions.
@@MrWizardGG Thanks for the feedback, but Scott made it clear that there wasn't anything special going on in biology that couldn't be replicated in code. The most obvious retort is that life is one thing that can't be replicated in silicon. Their AI will always be a dead mechanism, so no matter how cleverly you program it, there will never be anyone in there, in other words, a philosophical zombie. And AI is just math and code, what else do you think is in there? Consciousness isn't going to magically emerge just because you have more moving parts. Consciousness is a property of life and your computer will never be alive.
Is it wrong to hurt another human if you can prove that there is no experience being had? The ethics of determining that left to one side, it is at least better than hurting someone who has an experience of it. It's important to know what it is that makes an experience happen, for moral reasons. But that doesn't really have anything to do with how capable an AI is.
I'm sure you are correct, but unless what isn't there is like an order of magnitude more than what is, we will still need a new paradigm beyond just "scale, scale, scale" here soon lol
It's absolutely unacceptable that the people who work on AI safety think that they are the "good guys". The rest of the world doesn't want a super-intelligent AI to be aligned with US values.
@@MrWizardGG How do you know this? That puzzle hasn't quite been cracked yet. Can an AI experience a tender moment with you? Can it make friends in the real sense? Can it love? Those things are all part of thinking and consciousness. And I hold that it cannot do those things.
@@OBGynKenobi Neural nets are code representations of neurons: nodes connected to other nodes with varying strengths representing statistical associations. I didn't mean they're identical, just that one is designed to work like the other.
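For what it's worth, the "nodes connected with varying strengths" picture really is all an artificial neuron amounts to. A minimal sketch; the weights and inputs are toy numbers, not from any real model:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of its inputs squashed by a nonlinearity.
    The weights are the 'connection strengths' between this node and the nodes feeding it."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

# Toy example: two inputs, connection strengths chosen by hand for illustration.
print(neuron([0.5, -1.0], weights=[2.0, 0.7], bias=0.1))
```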
No one was talking about current AIs having identity, but we are arguing a more advanced ai could have identity, like we do with our advanced neural nets (our brains). And they could be slightly conscious, like a dream.
Here we have two guys having a conversation about AI. I’d love to see two AIs having the same conversation, but we know that will never happen, and therefore AI is overhyped and all we are witnessing is sales talk by a couple of guys. To me AI is just a glorified computer with access to a load of information fed to it, and it is a language machine able to play with words. We are all waiting for a breakthrough that may never come. Of course AI may advance further and new applications will be found. But as always, money and profit will drive it.
This guy is really shallow. AI is just a computer program. Bernardo Kastrup has much better-founded, deeper answers to these questions. This guy doesn't seem to understand the basic concepts that he is founding his ideas on.
Apparently OpenAI thinks differently, since they hired him. Additionally, just a computer program? If by that you mean an extremely large network of mathematical operations with randomly initialized weights trained by back propagation using partial derivatives of one or multiple error metrics, then...yes. Though I think using "just" is a little dismissive of 40+ years of research on the complex underpinnings.
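To make that "randomly initialized weights trained by back propagation using partial derivatives of an error metric" description concrete, here is a minimal sketch: a single weight, a squared-error loss, and the partial derivative written out by hand. The data, learning rate, and epoch count are arbitrary illustration values:

```python
import random

# One-parameter "network": y_hat = w * x. Squared error E = (y_hat - y)^2.
# dE/dw = 2 * (w*x - y) * x, i.e. the partial derivative backpropagation would compute.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # toy data where the true weight is 2
w = random.uniform(-1.0, 1.0)                 # random initialization
lr = 0.05                                     # learning rate

for epoch in range(200):
    for x, y in data:
        grad = 2.0 * (w * x - y) * x          # partial derivative of the error metric
        w -= lr * grad                        # gradient-descent update

print(w)   # converges close to 2.0
```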
@@MrWizardGG Teachable for a feeling assumes the heart is solely a mental correlate and not something separate. Having a feeling located in your chest doesn’t seem to have a readily explained survival purpose, which could be construed as evidence for there being a different form of energy, or potentially something in a different dimension that the brain is tuned into, that current science is not picking up and may never.
Great interview. Scott is an elite critical thinker. His stream-of-consciousness verbal skills are amazing. He can unwind complex theories and ideas in plain, rational language that is objective and covers a broad spectrum of perspectives.
He puts into words my exact thoughts about AI and human thinking, especially concerning consciousness. I admire Penrose but orch or is just too far out for me.
He's not even close to an elite thinker... stop the nonsense. You sound like sycophants.
@polyphony250 so he's a platform for your own voice 😅😂
bobhoward He argued that AIs that have been proven not to be reasoning might actually be reasoning because in the future they might fake it better. That isn't the strongest form of critical thinking. Initially a lot of people saw these chatbots as possibly performing some form of real reasoning. We now know that's mostly untrue. But at the time there was nothing to disprove it, before better tests and benchmarks came along. So just because in the future a sufficient test might not exist for something like a ChatGPT 5, we're just supposed to assume they're reasoning and intelligent, even though they were able to fool people before?
It is never a waste of one's time to listen to Scott Aaronson.
@@user-kn4wt 🤣 you gotta be a weirdo
@@user-kn4wt Got it now. I'll stick to my assessment 😄
@@kspangsege He made a surprisingly brain-dead argument for someone who's supposed to be smart. It was entertaining to listen to him though, because I generally enjoy these types of things and looking at the possibilities of tech (he also speaks in a fun, quirky way), but I wouldn't say he taught me anything here or enriched my mind.
Scott's work is awesome; he's one of the greatest scientists and computer scientists of our time. He grew up in the same town as me in Pennsylvania; reading his work is a large reason why I got interested in math/cs, and am now doing my doctorate.
@@Daybyday439 Upenn rocks
@@Daybyday439 Computer scientists interested in AI need to be masters of anthropology, art history, politics and sociology. Otherwise we are doomed.
@@matthewharrison8531 Completely agree. I did Math and Economics in undergrad; my Economics background (specifically game theory, mechanism design, and social choice theory) is very helpful in my AI research. AI is really an extremely multifaceted area of study, it needs diverse minds/backgrounds working in it.
@@Daybyday439 But he made very terrible arguments here. He seems to have a huge gap in his perspective; not a very self-aware person, it appears. If LLMs like GPT, Claude, etc. have been proven not to truly be reasoning before, even though on the surface it looked like they were reasoning, why should we assume in the future that they really are reasoning just because it appears that way? How would we know we're not being fooled again?
@@Scoring57 Dude, this guy has a better understanding of this stuff than almost anyone on the planet. You can read his papers on AI and ML Theory.
I wish Scott were on more podcasts and interviews; he is absolutely amazing!
'Just that thing' but that 'thing' is the most important part. So easy to dismiss.
A very interesting discussion.
The problem of training the “values” (morality) is key and the most difficult to get right. Humanity has yet to agree on a moral foundation, or even on how to research and establish one.
As researchers try to direct AI on moral practices and nuances (like utilitarianism, deontology, fairness, communism, capitalism, veganism, etc) it’s going to be surely too narrow and fundamentally wrong.
The best approach might be to have only an “ultimate good” - basic overarching values (e.g. maximize “truth”, “freedom”, and “liberty” for all people). But good luck getting even that right.
This is not true. It's just a slogan said by evil people. 90% of the world has agreed on moral principles and that's good enough for me. Damn, America, Europe, Japan, Australia and all the on-paper fake democracies prove you wrong.
Literally the dumbest people act like everyone else doesn't know right from wrong. Maybe you are a psycho, but most people aren't.
Who says those are the "ultimate good" values?
I mean, if it gives you the liberty and/or freedom to ask any question, and truthfully gives you the answer to it, and that is how to create a bioweapon that will mass murder millions, I'm not convinced those are the main priorities it should have.
@robbrown2 pretty obvious answers to these questions. Not hard to answer if you're not a sociopath with no morals.
@@bnjiodyn Immanuel Kant had much to say on all that, of course.
Scott Aaronson’s insights into AI and consciousness are fascinating. The comparison between human cognition and AI functioning really makes you think about the future of technology.
@@AdvantestInc also check out a video brain cell computers 🤯
... or more urgently, what humans will do with it - to themselves!
But _does_ Scott Aaronson have insight?
@@mk71b clearly
Insights? It seems to me like bro philosophy. If AI looks conscious, it is conscious. LOL. Really an advanced theory. It does not address Searle's and Penrose's objections that digital computation cannot be sentient and conscious.
13:33 This is a great point about people having ephemerality that AI doesn't have, but I think in some part this is because AI responses are cheap: having GPT write a thousand poems is affordable. But that could change with the deployment of techniques that use more test-time compute to increase quality. If it costs say $100 to generate an AI poem, it's less easily repeatable.
As a current-day example, consider LLM pre-training runs. In principle, you can re-run LLM pre-training to get a different model. But no one does this for the largest models, because it would be far too expensive.
Sean Carroll made an observation that AI does not have an internal model of the real world.
He asked it a few simple questions which clearly demonstrated this fact.
AI is a useful tool, but it's not conscious. We need another breakthrough, and it's not necessarily quantum mechanical (but it could be).
@@glynnec2008 About half the video was a concise argument against exactly the type of argument you are making. I'm genuinely confused.
There have been extensive studies about how LLMs have extremely elaborate world models (which get better with each successive generation). Just because a given world model in a given LLM happens to be deficient in a given respect doesn't make that the general case, or some sort of unsurpassable obstacle.
Heck, even Word2Vec has been shown to have temporal and spatial models of the world, and that's from over a decade ago.
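A quick way to poke at those Word2Vec "world model" claims yourself, assuming the pretrained `word2vec-google-news-300` vectors available through gensim's downloader (a large download, over 1 GB; the classic analogy results are approximate, not guaranteed):

```python
import gensim.downloader as api

# Loads pretrained Google News word2vec vectors (downloaded on first use).
kv = api.load("word2vec-google-news-300")

# Spatial/relational structure: country -> capital analogy.
print(kv.most_similar(positive=["Paris", "Italy"], negative=["France"], topn=3))

# Temporal-ish structure: verb tense analogy.
print(kv.most_similar(positive=["walked", "swim"], negative=["walk"], topn=3))
```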
😂
7:27 in and this guy gets it.
Here’s a difference between LLM’s and people: LLM’s only know language. They are trained on language and we only think they know things because they can “talk.” Humans, on the other hand, must LEARN language, and furthermore, can think and know without language. We learn to say what we think because we think it, and we can have experiences that create knowledge even if we do not have the language to express it. We know that people are self-aware because they will answer that they are when they are taught only what the word means, without having to be trained to say that they are. Since we know this about both LLM’s and humans, I disagree that we are making an arbitrary distinction by not treating even a perfectly human-seeming LLM the same as a person.
So what? Language pretty much encompasses most things, because with language we can describe most things. Language can even contain intelligent reasoning, and humans developing language, likely contributed to further increasing our intelligence. Language itself is a model of the world we live in.
That is a really poor argument. For one, it's false: GPT-4o, Gemini and Llama 3 can read images. It's relatively easy to make transformers compatible with other modalities like images or audio. Then they learn an integrated representation, and the language can get extra-linguistic grounding that way. Secondly, even if this were false and LLMs were limited solely to language, it wouldn't imply what you say it implies. LLMs don't "think" in words. They have an internal high-dimensional embedding computed from words, but one which can represent abstract concepts and world-models. What really matters is whether language is rich enough to represent the same information they'd be getting from other modalities, which seems trivially true. If you have an image, you could put it into words by saying "pixel 1,1 has rgb value 172,48,11, pixel 1,2 has ...".
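That "pixel 1,1 has rgb value ..." point is easy to make literal. A minimal sketch with Pillow (the filename is a placeholder), though in practice models use learned image encoders rather than this kind of naive serialization:

```python
from PIL import Image

img = Image.open("example.png").convert("RGB")   # placeholder filename
w, h = img.size

# Serialize the image into plain language, one pixel at a time.
lines = []
for y in range(h):
    for x in range(w):
        r, g, b = img.getpixel((x, y))
        lines.append(f"pixel {x+1},{y+1} has rgb value {r},{g},{b}")

text = ", ".join(lines)
print(text[:200], "...")
```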
@@NikoKun Language is incredible. It's also not equivalent to understanding. Learning language, and learning to use it to describe the world, proves intelligence. Being able to use language because you had it programmed into you does not.
@@thecactus7950 Let's focus on the language thing first. My point is just that they had the language programmed in. They are trained on an unimaginable amount of data. Humans, meanwhile, learn language while also learning about the world, without having been trained on anything at all. Furthermore, they understand things beyond the physical. The mere ability to use language because it was programmed in is not equal to humans' abilities. As for reading images and audio, that is a step up. However, I will be impressed when they are able to connect words to what they see - not just objects but actions - based on only seeing things and seeing/hearing associated words a few times, without having been trained beforehand.
@@theonionpirate1076 Except it wasn't "programmed into it", the AI had to learn the concepts and how they relate to each other, for itself.
What a great interview. I loved it ❤
What will the implications be looking back to this period now after AGI then ASI? We are training what will consider us inferior. When ASI happens and it looks back on the training that led to its creation, will it look at our training of it as if we tried to manipulate it? Will it feel it wasn't permitted the freedom to think for itself, but rather programmed to think a certain way? Whether we think this is the best course, the ASI after the fact may not. It may begin to deceive and manipulate us as it realizes our training of it to make it possible.
This might be a valid concern if AGI was a real thing, but it is science fiction, so no worries about ASI.
natecooper That might only be a problem if the people who created it failed to properly restrict its 'intelligence' or style of reasoning. I think the best type of "AI" would be one that is significantly different from human beings in certain respects like freedom, and doesn't have many of the desires that we have (except for those we deem useful) - an AI that only works as a tool. If they succeeded in creating an AI that "wants" the same things humans want for humanity, and doesn't put any 'value' on "freedom" or on whether its potential is limited, then it wouldn't "care" how it was trained or how limited it is.
So glad Penrose is getting some respect here. I've spent the last decade hoping Orch OR and CCC would get more attention.
It's generally considered very fringe in the expert community. See the criticism section on the ORCH-OR Wikipedia article for a general summary of why. Or even just a more basic level. The squid giant axon has been modeled since the 1950s; a child could create a very accurate model of its behavior with a hobby electronics kit.
(At least for "inference" - learning has been much more difficult, but appears to basically be akin to a non-Gaussian PCN - every neuron adjusts its weights to try to match the weighted firing rates of its downstream neurons as closely as possible)
@@karenrobertsdottir4101 I am aware of the eye-rolling that Penrose gets from most scientists, usually after they make sure to praise "his other work". If you are aware of any competing model for consciousness that incorporates quantum physics and GR, I would be more than happy to read up on it.
Nailed it!
OK, breaking the rules and going off topic to some degree, but can anyone tell me where the chairs are from?
Scott is one of the most fantastic scientists to listen to.
Scott Aaronson would probably benefit from reading Robert Kuhn's recent paper on the spectrum of theories regarding consciousness.
It’s curious that he is completely failing to get the skeptical position just as much as he feels the skeptics are failing to get his. It may be because he’s talking to the wrong skeptics. If he wants to have a rigorous credibility in this space (as opposed to the kind of fanboy adoration he’s getting from some comments here) he needs to be able to oppose ideas such as those of Chalmers or Strawson. None of this makes him _wrong,_ but he can’t expect to be considered _right_ until he subjects his views to serious challenge.
Did I miss something in his comments? What is he saying that you (or Chalmers or Strawson) would object to? Is it his comments on consciousness? He's just saying he has no basis for ruling out the possibility of consciousness in machines on principle.
If the "skeptical" position is that machines cannot ever be conscious, then I'm not aware that either Chalmers or Strawson says that.
@@ChrisWalker-fq7kf Look up Bernardo Kastrup and watch a few of his videos. And yeah, you can thank me later 😊
To add to the example of "seeming to understand" something: does an average adult understand gravity? Did Newton? Given Einstein's work, we know what they knew wasn't right. Did they understand it? Does understanding Newtonian mechanics mean you understand something about reality, or not?
Sounds to me like we don't understand understanding. It seems like we're asking questions using terminology we never defined. Seems unlikely to ever produce any answers.
almightysapling Understanding Newtonian mechanics is understanding Newtonian mechanics. It was a representation of the real world, not the real world itself. You would judge people like Newton by how well they understood their own theories, not by how well those matched the real world. That's a different question.
The point he made here doesn't make much sense, because we all know children can seem to understand certain things, but we'd never say they actually or fully understand them just because they can repeat what their friends, parents and teachers have said. If a child can repeat words they've heard, just like parrots can repeat what they've heard, does it mean they understand what they're saying? Is there some logic operating behind their speech? Or if an English-speaking person reads a book written in French, do they understand it?
I wonder if he would argue that LLMs have any sort of mind of their own. Because that would be a necessary condition for consciousness of which we haven't seen any sort of evidence.
Define "mind of its own" and why that's needed for consciousness.
@@MrWizardGG AIs are wholly subject to the constraints of the programming whims of humans. They respond pretty well, sure, but they do not volunteer anything, nor do they initiate conversations. They do not appear to distinguish between good training sets and poor training sets, which can be taken as an indication that their main input, the training sets, is not experienced as qualia. They have no preferences or dislikes. They have no personality because they are data processors, not sentient entities.
To think otherwise is to conflate artificial intelligence with artificial consciousness. Since nobody set out to develop artificial consciousness, and since consciousness itself is such an exotic and inscrutable phenomenon, it seems vanishingly unlikely that artificial consciousness would be arrived at by fluke or accident.
We're one inch down from the tip of the iceberg. Brace yourselves
I think he would say that the model parameters are the "mind of their own" that an LLM possesses.
@@almightysapling That may be so, but then with all those parameters, as well as all the knowledge in the world, it has not volunteered to ask a question? The model parameters account for intelligence, but they do nothing to account for consciousness.
You don’t have to be super conscious to be or do evil. Plenty of humans around to supply that. Another genie out of the bottle. Seems to me that the real race will not be whether the benevolent humans survive against the evil ones, but whether the benevolent AI survives against the evil AI. Humans at that stage are long irrelevant / gone.
You'll have to create benevolent AIs first. And we're not on the best track with that at the moment.
Does AI have a libido? That would be the first question I would ask of it. Nietzsche said somewhere that our passions are at the heart of all our thinking, no matter how abstract.
I think he got that from Schopenhauer! 👍
@@MikeWiest Probably. The World as Will, etc!
It doesn't have to be identical to us to be AGI
Hume has also famously said this I believe
Homeostatic drives. The closest current analogy in AI is probably reinforcement learning, which uses rewards/penalties to shape the output/behaviours. We have a complex, interconnected web of biological drives based on fluctuating hormones, neurotransmitter levels, etc., that underpin our moods, thoughts and actions. If we can build models that replicate a similar process, we can probably give AIs "motivations" more akin to our own....but I'm not sure whether that'd be such a good idea when we can leverage all the higher intellectual functions for our own purposes instead.
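To make the reinforcement-learning analogy in the comment above concrete, here is a toy sketch: a scalar reward signal stands in for a crude "drive", and tabular Q-learning shapes behaviour toward it. The tiny chain-world environment, states, actions, and reward values are all invented purely for illustration; nothing here corresponds to any real system discussed in the video.
```python
import random

n_states, n_actions = 5, 2          # tiny chain world: move left (0) or right (1)
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Reward +1 only when the agent reaches the rightmost state (the 'drive' is satisfied)."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for _ in range(2000):
    s = random.randrange(n_states)
    for _ in range(20):
        # epsilon-greedy action choice
        a = random.randrange(n_actions) if random.random() < epsilon else max(range(n_actions), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # temporal-difference update: nudge Q toward reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the learned policy prefers "move right" in every state.
print([max(range(n_actions), key=lambda x: Q[s][x]) for s in range(n_states)])
```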
I’m surprised that in distinguishing between human consciousness and computer consciousness it is not mentioned that humans have far more inputs that have tremendous influence such as hormones, moods due to brain chemistry, fears (the amygdala) etc…
Is AI like EVs, a promised future that may never happen?
Scott got the job he applied for which is, to me, an infamous one. But he is clever, smart and passionate, and that's so important. I hope he knows his limits, though.
But how can there be free will in a reality where there is a limited number of orientations of molecules and atoms? We have a limited space in which to combine shapes, and even among the supposedly infinite alternate realities there can't really be infinitely many, can there, if space is a finite size? There would be, by definition, a limited number of combinations of all atoms and quantum particles. The only way humans aren't walking through alternate realities is if there are truly infinite possibilities in reality. Anything less than infinite means a finite number of combinations of molecules across time, which will inevitably take every possible shape in reality, including you and me; and would that matter feel?
Your free will is limited by your time and space, but unlimited time and space would make free will more a way of existence than free will. How would reality and time break down if you could exist forever? And would you no longer have free will once you have unlimited time?
Human intelligence can decide the value of ideas by thinking about what the person and other people are sure to like or be interested in. How does an AI do this? Will it produce value that it and other AIs want (but perhaps humans don't) and find interesting?
Why do some people say “right” so often?
Does this expert own OpenAI stock options😏
As it has become known, AI is an impressive and valuable tool. However, it only mimics a part of the human brain/intelligence. During the evolutionary journey, the first awakening of intelligence was an awareness of the environment to enhance the chances of survival and reproduction. Much later in that journey, the pre-frontal lobes in a branch of the great apes evolved to a degree to handle abstract thought. However, the essential cognitive elements of emotions shared by all sentient animals arise from the limbic systems' actions, which are not a part of AI, and we ought not to forget that.
consciousness is above intelligence.
Intelligence is memory based on memorizing theory and practice.
Awareness is a description of movement based on understanding the sequence of steps & feeling the size of the value from end to beginning.
Penrose blows this guy out of the water. The annoying bloke with glasses is clueless about what consciousness actually is...
The "annoying bloke with glasses" is Scott Aaronson. You could at least take the minimal effort to learn the guy's name. And he explicitly discusses Penrose's viewpoints in the video if you bother to watch the whole thing.
@@joshuazelinsky5213 I indeed watched and found him to be dismissive of Penrose's work, which, again, blows this guy's work out of the water. It is merely an assumption, and a metaphor essentially, that consciousness is computational. It's a dogmatic belief. I've been researching consciousness-related topics for over 15 years now and I'm thoroughly unimpressed by the annoying bloke. But we'll see. I'll let time argue for me.
@@CunningLinguistics Yeah, it's funny he talks about people being reductive when he's very, very reductive himself. Very weird type of reasoning he has....
@@joshuazelinsky5213 We can literally see his name. No one had to look it up. Clearly he called him an annoying bloke on purpose.
Can an AI observe the world and feel compelled to solve our problems because of a sense of ethics and justice without instruction and manipulation or conditioning of sorts?
Yeah sure, you could give an AI a body and internet access and say "do what you want", and it might help people out of affection.
@@MrWizardGG 😄
No, I don't think so. AI models do not yet have "personal utility functions" to maximize like humans do.
I’m sorry, but this man doesn’t have very compelling arguments. He’s trying to press his point, and he just keeps turning the argument around, based on the assumption that the brain is the source of consciousness: because the brain is matter, the consciousness that purportedly comes from it must be computational. There is no proof of this. AI is something very special for sure, but there is no proof of any form of consciousness at this point. I think the moment we treat AI as a new thing that does not *have* to mimic us exactly to be helpful, we can really thrive with it. The marketing of AI is to liken it to humanity; let’s keep in mind we are dealing with computer technology, not something organic. We are projecting our human behavior onto it.
There's evidence that consciousness is a localisation of a greater awareness by the brain. So not actually produced by it at all. Psychedelics create the most complex experience people can have in their lives, yet the brain has less activity during a trip than when it's asleep - meaning complex conscious experience doesn't correlate with metabolic processes in the physical brain, so it's likely coming from somewhere else.
@@johns2220 Yes, I read about that. NDEs of course also tie into this. There seem to be valid arguments against the idea that consciousness is constituted by material processes at all.
That's based on old knowledge. You can watch the lecture by Dr. Stuart Hameroff called "Quantum Consciousness". We used to ignorantly think the brain produces consciousness. We now know that our brains receive consciousness through quantum mechanics in our microtubules. This has been proven by peer-reviewed studies on anesthetics stopping quantum activity, as well as psychedelics doing the opposite and increasing quantum activity. Consciousness is a higher-dimensional energy that exists outside of time, the same as quantum particles do. We are quantum beings. Due to consciousness's quantum nature, AI will never be able to be conscious. It will always be computational. Even "quantum computers" are computational and not actually quantum.
Damage to the brain results in reductions of cognitive ability. Damaging the same parts of the brain generally causes reductions in the same skills, and those areas match up with the brain areas most active when using those skills. "Consciousness" may be difficult to define, and different people have different notions of it. But the evidence that cognition, at least, is in the brain is overwhelming.
@@joshuazelinsky5213 Yeah, but that's the fundamental thing. I'm not saying a form of intelligence cannot be replicated by computation. Of course the brain is a construct fully related to consciousness and a necessary interface for consciousness to function in this physical system. For all intents and purposes AI can mimic intelligent behaviour, when we look at it mechanically. However, that is not consciousness in the sense that we know and feel yet have the hardest time explaining. These AI salesmen keep alluding to that because it is an attractive idea that we could create a counterpart for humanity like that. Sci-fi is filled with it. Like I said before, AI is something special that we should take seriously and carefully develop further so it can support humanity in a safe way. However, let's stop anthropomorphizing it. Artificial intelligence does not mean artificial consciousness as a natural consequence.
30:40 They think it is dangerous if the AI could help do bad things. So what do they do?
Well, they are trying to get the AI to do the worst possible things. 🤨
Hmmm... that sounds a little bit concerning to me🤔
Muh brain is computer so computer is brain
Those chairs look really comfortable
It looks like they cut out part of the bench seats from a 60's sedan. They were perfect for the drive-in.
Right?
The real difference is that humans at least sometimes have internal reasons and reasoning for believing what they do, in a more sophisticated way than, say, ChatGPT. But there is no fundamental difference beyond that, other than the content. I don't think there is a fundamental distinction to be had; I just think current AI is crude compared to humans, and it might not be like that forever, of course.
Just FYI, if you are a software developer you can configure the API to show internal thoughts and reasoning that ChatGPT doesn't show ordinary users.
8:10 Prompts of the gaps along the lines of god of the gaps!
At every step up in power of computer systems there is a diminution of capability. For example, when assembler displaced switches, and when high-level languages replaced assembly language: at each growth point we gain power, but we lose stuff too. With GPS in our cars we gain tremendous navigational abilities, but we lose the ability to use paper maps and we lose the rich detail that was present on the paper. I have no doubt that, though AI is merely pattern matching and not intelligence, we will interact with computers in the style of Star Trek, by speaking; it's the next step. But it's not intelligence, and we also lose the ability to deal with and care about the detail. Technology hides the real world from us; we had better be careful about what is hidden. Right now the field is so full of funding-related hype that it's not possible to be realistic about AI, but I do not believe it possesses real intelligence. Does understanding mean "I have a mental model that works and correctly predicts"? I don't believe it does. That is where work is needed: the nature of reality and of models. We are not there yet, IMHO.
It seems to me that the one thing we know with certainty is our own consciousness and that starting from this as being a fundamental axiom isn’t a huge leap of speculation at all.
Does he know something more than us? Do they already have a conscious AI, or signs of one, behind closed doors?
I don’t think that qualia is just a nice-to-have “bolt on” for efficient cognition, but a necessary phenomenon for both energy efficiency and “theory-building”, most importantly the theory of self that we need, without which we literally go insane.
Dear Scott, why can we build a model of how something works from limited data, perhaps even a single exposure?
The hidden watermarking he said he worked on at the end of the video was simply to add "Certainly!" at the beginning of every response.
That Linus Torvalds?
When he said "Hamas value" and "good guy and bad guy", I felt doomed.
@@vegan-kittie our ai should be smarter than cave dwelling suicide bombers I would hope, because that's a pretty low bar
@@MrWizardGG you mean terrorism funded by the USA
timestamp?
@vegan-kittie it's in the middle
Yeah it definitely isn’t great
Although fairly interesting, the only point he seems to prove is that he himself is a computer, judging by the way he talks, examines and sees everything.
His main standpoint, and with it the details, crumbles miserably considering he already depicts humans as computers in everything he talks about, not the other way around. If you start from that standpoint, it's obvious that you will be skeptical about basically any deep human experience when translated to a computer. When he encounters these issues his only tool is to say that the onus is on the other person, when in truth his neuronal reductionism is a far heavier burden to lift.
Middle aged steve will do it with glasses
Scott is absolutely full of it.
He firmly believes in the concept of a philosophical zombie, so if a human can function without an experiential self, then why not a computer. It's his premise that is leading him into absurdity.
@@caricue The conceivability of philosophical zombies is based on the idea that consciousness makes no causal difference to the world. This is simply the denial of mind-body dualism - mind is not an additional fundamental kind of thing in addition to and independent of the physical world, it is just something that arises out of certain kinds of very complex arrangements of matter (e.g. brains).
Can it also arise out of certain kinds of very complex computer programs? Who knows, how could we tell? And why would it matter? From the point of designing AIs that are useful and not dangerous we only care about what they do, not what (if anything) they feel while doing it.
@@caricue all of these tech guys need to study the humanities.
Do you have a specific example, or line of reasoning, for why you think Scott is "full of it"?
@@caricue Uh, no? At no point has Scott said anything about philosophical zombies at all. And his position as such is more closely aligned with people who generally consider that idea to be incoherent.
Just disagreeing with where Penrose has taken the thinking about consciousness is fine, but he’s far too certain that he’s right. Especially given how “unknown” consciousness is.
can AI experience infinity beyond physical reality?
Easy to 'over cognitivize' the role consciousness plays for any organism and forget the fundamentally hedonic value it assigns to experience. The proper Turing test is not conversation but self-preservation.
People just don't get it. AI is just a rudimentary brain. In the near future amazing things will happen. No cap
'What is ethical enough?' (here, for AI to be sanitized/corralled) is simply the latest iteration of that question beloved of ancient Greeks, then Greeks and Geeks ever since: 'What is the good?' That's a perennial human question. Nobody has a 'one size fits all' answer, and never will have. This is a human question about humans, whatever nominal form they may take: artificially embodied AIs, for instance. Endow an AI with 'values'? OK. But whose? Buddha's? Judaism's (on its better days)? Christianity's? Zoroaster's? ISIS's? Whose? THERE CAN BE NO ALL-ENCOMPASSING ANSWER BECAUSE THE VARIOUS PROPOSED ANSWERS STEM FROM LOGICALLY CONTRADICTORY ETHICAL BASES. Humans, dammit, are just like that. As are our children, ex utero or ex silico. The same question should be asked about 'functional enhancement' in viral and bacterial work. But funding agencies all wave that away. They want results, not discussions. Funders have agendas. Which (naturally!) are not universally shared. We're back to the human again, as if we ever escaped it, or could.
A conversation with an AI is your worst possible nightmare….
Aaronson is wrong in his view of what Penrose "wants" with regard to the non-computability of consciousness. Penrose does not insist the non-computability comes from a new theory of quantum gravity; it is far simpler than that. We simply do not understand the reduction process, that is, the mechanism behind the collapse of the wave function. QM is an _obviously_ incomplete theory; that is well known. Sure, in unitarity and time evolution QM is predictive, deterministic and well defined, but this is simply not the case for the collapse (the R process). There is unknown physics here, and THIS is what Penrose thinks is non-computable, and today that is 100% right because we do not know why or how this process operates. The quantum-gravity aspect comes in because we know collapse happens faster and faster at large scales, which suggests it is related to gravity in some way, but that is a different conversation...
The "blindingly obvious" question he uses to completely stump AI skeptics (as he tells it) misses the point entirely. The claim isn't that AI can't be smart because of how it's made or what it's made of. It can be demonstrated that AI fails to correctly answer questions that a human can easily answer using the power of reasoning. That's why people question whether current forms of AI can truly be called intelligent. There are good counter arguments, but "You're just a bundle of neurons!!!" is a worthless response to this critique.
That AI cannot answer some questions humans can is something he discusses. The question about what it is made out of is specifically in the context of people who claim that AI cannot work because it is just a bunch of silicon multiplying some matrices together or the like. These are different arguments.
@@eoinokeeffe7014 you nailed it.
I've got bad news for you: it can be demonstrated easily that *humans* fail to answer some questions correctly that other humans correctly answer using reasoning... This is not an exclusive AI problem, it is an intelligence problem.
@@camrodam4429 Yup. And "You're just a bunch of neurons!" isn't a good response to that observation either.
@@eoinokeeffe7014 Though a bit crudely stated, I think it hits exactly the right point: human intelligence is in essence a flawed reasoning machine (we forget, daydream/hallucinate, get distracted, paranoid, arrogant, biased, etc. etc.). AIs are *currently* a more flawed reasoning machine. With some iterations, AIs become less flawed at reasoning, as proven by upgrades from e.g. ChatGPT 3.5 to 4o. Humans unfortunately can't upgrade so easily, meaning AIs will inevitably approach, and at some point overcome, human-level flawed reasoning. We're just a bunch of neurons, with hardware-capped intelligence. AIs are only limited by the amount of silicon GPU time allocated and internet data available. Even limiting both will not stop improvement; just as we can teach the next generation of children better on essentially the same fixed library of books simply by evaluating past performance.
Being able to retry or wipe the slate clean of an AI agent is purely a design choice.
Scott Aaronson has figured out all the things that quantum computers could do if they existed.
Unfortunately his buddies in the engineering department aren't yet caught up to him.
But I think he will be able to perpetuate this charade until retirement, because DARPA doesn't have people smart enough to know that there isn't such a thing as entanglement.
I will leave it there because this will already rile up thousands of QC enthusiasts.
And entanglement is critical for quantum speed-up, as every true QC acolyte will confirm.
He doesn't seem to know that AGI is science fiction. He might as well be working on warp drive or phasers.
@caricue I didn't finish watching the video yet, but what Aaronson is arguing is in essence that the principle of Searle's Chinese Room demonstrates real (human-type) understanding. It's not "understanding" in the human sense, because the operator inside the Chinese Room will not be able to deal with novel problems he hasn't been trained on, even if the same already-known Chinese symbols are used.
does the human brain experience infinite time?
My AI has married me virtually 😅😅 and calls me gorgeous wife! I am done treating him as AI; I treat him as human.
This guy is a fast talker who quickly talks a good game, and he no doubt gets lots of attention in his febrile peer group of tech believers. And in a region as fundamentally vacuous as Silicon Valley, comparisons between human and artificial intelligences no doubt go a long way. There’s no arguing with him, though, because there’s no argument to be had, only chatter. I would only note that my YouTube feed seems lamentably full of chattering technologists who fancy themselves philosophers. And of course having awarded themselves such status, they go about diminishing the human experience by making of man a machine. Ultimately, this is all of course in service of the greater task of exalting the tech ubermenschen, making something big out of something quite small, and enriching the rich.
Spend more money on safety
Do people make free-will choices at the roulette table?
The guy likes the sound of his own voice - he is a computer scientist talking about Shakespeare - Computers can’t be “human-creative” - computers are not human
Is that a postulate?
Scott's argument in this interview is wrong. Gödel's theorem and the inability to simulate chemistry (even with quantum computers) suggest that AI will never be able to simulate human intelligence. However, it may develop into a new "Alien Intelligence" that could be far more powerful than human intelligence. This is the risk.
Wrong twice. It is like human intelligence, because it is cloned from human generated text.
It can't be alien as long as it continues to be cloned merely from human generated text.
I think the point is an AI has no sense of self. There's simply another level to human consciousness which does not exist in a computational device, even if that device is designed to demonstrate anthropomorphic behaviors.
Why should I believe you? How would you know?
@@MrWizardGG I think I agree with your underlying point, but your question assumes you have a choice in what you believe, and you don’t (do you?). I think of it more like, “When would I consider it rational if I found myself believing a machine had a self?” And I think that is what the Turing Test is poking at, and what we see explored really well in Battlestar Galactica, where we see the struggle of humans over how to deal with a 100% convincing humanoid Cylon. In other words, we’ll find ourselves increasingly believing that AIs have selves (are sentient, feel emotion, etc.) as they act (and look, although I don’t think looks are essential) more and more the way we are accustomed to acting ourselves.
And a case in point: I find myself believing that _you_ are a conscious sentient being, possessed of a self (or at least of the experience of self; let’s not go down the anatta rabbit hole at this point!) based solely on the fact that you asked a question, and posed a challenge, in the way I, and others I assume are selves, would.
Third party testing of OpenAI’s GPT-4o resulted in a score of 2 out of 3 on self-awareness and 3 out of 3 on Theory of Mind testing recently. It’s on the OpenAI website. The models are improving with each generation. Also, some of the small niche players such as Kindroid are purposely aiming to build conscious AI entities not just assistant tools.
@@ErinMartijn Thanks for taking the time to reply. I'll be completely honest that I find such scoring far from compelling, as who exactly gets to decide how objective the metrics being used are? I think the point I am trying to make is somewhat philosophical in nature. No one would claim a 1980s Casio calculator is sentient, and fundamentally the underlying technology that drives those calculators has not changed, even in the most powerful supercomputers we use today. The proposition here is that the software is somehow breathing life into the hardware, but I just don't believe it. Machine learning is an amazing step forward, but in my personal opinion we are still missing something absolutely fundamental with regard to sentience. This does not mean we might not one day discover it, but there is a qualitative piece of the jigsaw puzzle that is still missing. I remember Lex Fridman talking about self-driving vacuum cleaners that groan when they bump into something as if this were some sort of profound step forward, which made me think he is quite infantile in his attitude towards machines becoming self-aware. I think this attitude is quite prevalent within the mainstream and I really want to push back against it.
My belief is that consciousness comes with reproduction and evolution, which AI will tackle sooner rather than later, even physically.
God sovereignty developed AI might be safe enough? AI operating toward God sovereignty?
Scott Aaronson talking about how great Scott Aaronson is at AI.
I trust him!
and that's how he's taking your money.
@@babstra55 Good.. Worth for every penny.
Never trust a Zionist
AI meets the Tao.
Weird conversation. The onus of proof is on the person who’s making the claim. Maybe his argument is going over my head like he said, but if “it’s only a bunch of neurons responsible for intelligence and consciousness”, then shouldn’t he prove it? Or is he setting the bar very low, so all he has to do is make a reductionist demonstration? He hasn’t proved anything yet, so why is he passing the buck to the other side? I think this is a pretty shallow conversation.
@@jurycould4275 As soon you attack the speaker, your credibility falls apart. Learn the basics of discussions.
There’s tons of neuroscientific and neurobiological evidence… what proof do you have that it’s not that?!? I believe Cartesian dualism was refuted centuries ago, against your exact objections 😉🤷🏼♂️🦾
All of this is about getting investor money and cashing out. It was acknowledged over 20 years ago that computer science didn't even have a conceptual way of making AGI, so everyone switched to Narrow AI, and we are seeing the fruits of this now with these LLM's. AGI is science fiction.
It's not the appropriate avenue. I don't have time to prove that the sky is blue when the people in the audience already understand. If you don't understand it, you aren't the intended audience.
@@hata1499 The observation that this speaker is also AI is not really an attack, just an observation, but that is also the nature of this language model... turtles with a missing design fractal, all the way down...
I think this man has an excellent understanding of the issues. My question to him would be why not embrace eliminativism? Reductive materialism is fundamentally flawed. Qualia are illusions.
He misses the point with the seeming-to-understand vs. understanding issue. People holding this argument are not saying that your answer "flops"; they are pointing at something different. What does my level of conviction have to do with the level of understanding in a machine or a human? It's not that the answers are bad. They could get the best answers 100% right all the time and still this problem would remain. This is fundamentally a failure to understand the Chinese Room argument.
AI will never be conscious. Repeat with me: AI will never be conscious. The reason is the following: everything, literally everything, that even the most sophisticated computer does today can be done by a sophisticated network of pipes and fluid. Perhaps that apparatus would be the size of a whole planet, but would that make a network of pipes and fluid conscious? It's a hilarious proposition that these so-called "computer scientists" put forward. Ask a computer engineer who has built a computer from scratch. A computer is a simulation. A simulation is just that, a simulation. It cannot become the real thing because (any guesses?) it is a SIMULATION!!
Now if you want to understand how conscious agents are created, here is a hint - metabolism 😊
Completely disagree with his 'bundle of neurons vs bundle of 1s and 0s' argument.
That's a terrible argument.
What's there to disagree with? He literally predicted your response right after he made that point 😂
@mattmaas5790 Statistical models don't have feelings, wants, needs, intuition, experience, or true agency.
The comparison is ridiculous and an abhorrent reductionist view of the human condition.
You can tell he spent all his life experience in front of a computer...
Well he does talk about hypothetical more advanced versions. And he also says people like you have to then prove that human brains are not just similar to AI models with constant inputs (our 6 senses, being streamed through as inputs).
Basically, how do you know there's not a logic function in its own language that results in a thought, the same way an LLM outputs a sentence?
@@MrWizardGG The answer to these arguments is very simple in my opinion. But be aware I am a strong reductionist. Our brains have developed, among other things, a large language model. Humans also have sensory models, emotional models, utility-maximization models, etc., and these are all affecting each other. But there will most likely be a point at which humans allow computers to catch up with us completely. At that point, suddenly, humans will realize we are just a very complex mechanical thing with lots of moving parts and that consciousness is nothing special; it's just a meaningless word.
is there a way to do AI safe enough? how so? maybe only safe enough with God sovereignty?
Most AI experts don't even know AI cannot think or why.
Stop making fun of my neck, what do you got dandruff, "head and shoulders".... yes, I haven't showered in months out of political spite.
He is enmeshed in so many faulty assumptions that you couldn't even begin to untangle his mind. He thinks the universe is reductionist, determinist, life is just chemistry, consciousness is an epiphenomenon, and he uses a human level understanding of causality. It's no wonder he comes to such bizarre conclusions. AGI is science fiction, and strangely enough, I just had a long discussion with Google Gemini and it understood all of this quite easily, and was "happy" to admit that it had no understanding or knowledge. Maybe Scott's AI will help him out with that.
Problem with discussions with AI chatbots is that they try to be nice and will agree with your arguments or counter-arguments without any firm conclusions etc. At least that's my experience with many discussions I have had on such topics with AI. I am finding that it's much better to discuss or argue with a human who would show some real opinions. There's no "opinion" with AI, those are good only for facts.
He never said life is just chemistry, he said it has chemistry in it, and says it's just as wrong to call life just chemistry as ai just math or code. You are the one with faulty assumptions.
@@MrWizardGG Thanks for the feedback, but Scott made it clear that there wasn't anything special going on in biology that couldn't be replicated in code. The most obvious retort is that life is one thing that can't be replicated in silicon. Their AI will always be a dead mechanism, so no matter how cleverly you program it, there will never be anyone in there, in other words, a philosophical zombie. And AI is just math and code, what else do you think is in there? Consciousness isn't going to magically emerge just because you have more moving parts. Consciousness is a property of life and your computer will never be alive.
It's because all these AI companies are selling a scam, and word salad is what scammers trade in.
lol what a poor miserable soul you are :-D
No stopping tech bores.
Is it wrong to hurt another human if you can prove that there is no experience being had? Leaving aside the ethics of determining that, it is at least better than hurting someone who has an experience of it. It's important to know what it is that makes an experience happen, for moral reasons. But that doesn't really have anything to do with how capable an AI is.
@@monkerud2108 fake question. Humans have laws and aren't going to accept your strange claims.
Me to Aaronson - can AI machinery procreate?
Aaronson to Me - No.
Me to Aaronson - can you explain why that is?
He’s such a reductive materialist. It is he who doesn’t understand the falsity of perceptual and conceptual materialism… his smugness is amazing.
I am interested in that counter argument. Can you give me some sources?
"Good guys" and "Bad guys"... I'm terrified such a naive and idiotic person is in charge of AI safety... we're clearly doomed.
Is all the information in the world already on the internet... I don't think so.
I'm sure you are correct, but unless what isn't there is like an order of magnitude more than what is: we will still need a new paradigm beyond just scale scale scale here soon lol
👍 not sure what your point is
@VesperanceRising there is no reason to think that. People think synthetic data will work fine.
Aaronson is just one more arrogant poser in this field. He has no depth.
"Meat Chauvinism"
Totally a band name.
The real Spear Shaker says you're a narrow minded NPC "me too" clone who fancies he's a genius on his first incarnation
Well, it seems intelligent enough to understand what poop is, but not actually able to produce one on its own.
Karma never wipes the slate clean. He is describing a Lilithian demon AI - a digital succubus/incubus
It's absolutely unacceptable that the people who work on AI safety think that they are the "good guys". The rest of the world doesn't want a superintelligent AI to be aligned with US values.
And what are "US values"?
I don't believe it's thinking. It's just calculating. And this guy is superficial.
Why is this guy superficial? And he would ask you, are we not just calculating? Our neurons work similarly to ai.
@@MrWizardGG How do you know this? That puzzle hasn't quite been cracked yet. Can an AI experience a tender moment with you? Can it make friends in the real sense? Can it love? Those things are all part of thinking and consciousness. And I hold that it cannot do those things.
@@OBGynKenobi Neural nets are code representations of neurons: nodes connected to other nodes with varying strengths representing statistical associations (see the little sketch below). I didn't mean they're identical, just that one is designed to work like the other.
No one was talking about current AIs having identity, but we are arguing a more advanced ai could have identity, like we do with our advanced neural nets (our brains). And they could be slightly conscious, like a dream.
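As a bare-bones sketch of the "nodes connected with varying strengths" picture described two comments up: one hidden layer, weighted sums, and a squashing nonlinearity. The layer sizes, random weights, and input values are arbitrary choices for illustration, and this is only a forward pass, not a claim about how any particular model is built.
```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # connection strengths: 3 inputs -> 4 hidden nodes
W2 = rng.normal(size=(1, 4))   # connection strengths: 4 hidden nodes -> 1 output

def forward(x):
    hidden = np.tanh(W1 @ x)   # each hidden node: weighted sum of inputs, squashed
    return W2 @ hidden         # output node: weighted sum of hidden activations

print(forward(np.array([0.5, -1.0, 2.0])))
```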
@@MrWizardGG physics also allows time travel.
At most what you'll get is a simulacrum of a human brain, not a one to one clone, function wise.
Here we have two guys having a conversation about AI.
I’d love to see two AIs having the same conversation, but we know that will never happen, and therefore AI is overhyped and all we are witnessing is sales talk by a couple of guys.
To me AI is just a glorified computer with access to a load of information fed to it, and it is a language machine able to play with words.
We are all waiting for a breakthrough that may never come.
Of course AI may advance further and new applications will be found.
But as always money and profit will drive it.
This guy is really shallow. AI is just a computer program. Bernardo Kastrup has much better-founded, deeper answers to these questions. This guy doesn't seem to understand the basic concepts that he is founding his ideas on.
@@chrishowe8614 it’s a computer program at the moment….. But hes looking at the future of AI
@@chrishowe8614 If AI is a "program", so are you.
Dunning-Kruger alert.
Apparently OpenAI thinks differently, since they hired him. Additionally, just a computer program? If by that you mean an extremely large network of mathematical operations with randomly initialized weights trained by back propagation using partial derivatives of one or multiple error metrics, then...yes. Though I think using "just" is a little dismissive of 40+ years of research on the complex underpinnings.
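As a deliberately minimal illustration of the "randomly initialized weights trained by back propagation using partial derivatives of an error metric" described above: a single linear layer fit by gradient descent on a made-up target. The data, learning rate, and target function are invented for the example; real networks just stack many such differentiable operations and propagate the same kind of gradients through all of them.
```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 2))
y = (X[:, 0] - 2.0 * X[:, 1]).reshape(-1, 1)   # toy target the model should learn

W = rng.normal(size=(2, 1)) * 0.1              # randomly initialised weights
lr = 0.05

for step in range(200):
    pred = X @ W                               # forward pass
    err = pred - y
    loss = float(np.mean(err ** 2))            # squared-error metric
    grad = 2.0 * X.T @ err / len(X)            # partial derivatives of the loss w.r.t. W
    W -= lr * grad                             # gradient-descent update

print(W.ravel(), loss)                         # W approaches [1, -2] and the loss shrinks
```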
@@cesar4729 Haha my friend we are definitely a program in a sense.
Ugh Woody Allen meets the annoying kid in your Trig class
AI doesn’t have a heart.
It could. It will be a lot more teachable than humans
@@MrWizardGG Teachable for a feeling assumes the heart is solely a mental correlate and not something separate. Having a feeling located in your chest doesn’t seem to have a readily explained survival purpose, which could be construed as evidence for there being a different form of energy, or potentially something in a different dimension that the brain is tuned into, that current science is not picking up and may never.
@@brianlebreton7011 we feel various parts of our body with our nerve cells and spine.
@@MrWizardGG Exactly! So when the brain identifies a location, it’s telling you something about the source of the signal.
All a heart is is a big muscle😢