While he may possess intelligence, his demeanor fails to convey it. This matter pertains more to technology than pure science, and the challenges of regulation lie within the political sphere rather than the technological. His air of pretension is concerning, as such attitudes from influential figures can pose real threats to democratic values - an increasingly common issue in our time. Curated by Sonnet.
AI crime is a BIG deal. AI police will be caught off guard when one AI is online, hacking in and influencing other AI. Keeping AI offline would be prudent for the moment.
Can tell from the comments… all the x-riskers are going apoplectic… fact is, this guy is way smarter and more knowledgeable than Yudkowsky… Yud is a wasted talent… a perfect example of the limits of the autodidact.
I sincerely hope the host knows that neuroscientists have debunked long ago the myth about the reptilian brain. The guest can keep his sophisms, but the host sounds much more intelligent. ✌️😊
Can somebody summarise his main points? I think he brings up interesting views. I'm having a lecture with our ANN professor this week, and we're gonna be talking about AI policies all over the world. I kinda wanna bring this up, but I also don't have time to watch a 2-hour video when I have to study automata theory for an exam in 3 days.
His take is a partially correct one: regulatory laws applied to a fast-growing field will be ineffective in the interim and borderline hindrances. So it's a bit similar to the embryology panic, as maybe the closest example. I just feel he is a bit naive about how valid the panic is, given how abused this technology is, can be, and will be in the future. So even if he is right, it's better to have some rules and laws in place that can be used to punish "evildoers" post facto...
I promise I will change my mind if someone can answer this: Given your pro-AGI stance and your confidence in moving forward quickly, do you have a foolproof solution that guarantees AGI won't go rogue? It's easy to advocate for rapid development, but what's your plan if we miss the mark by even one percent? The consequences of a rogue AGI could be catastrophic. So, I'm genuinely curious: what safeguards do you propose that will work with 100 percent certainty?
Humanity has been working on figuring out the right value function for thousands of years. Religion gets closest. You can put an idea like "god" in the highest place.
Doomers are showing how little they understand even basic math and, if they are programmers, what they are actually doing. Computation is observation of order, and recursion of order. It starts with a human constructing it and a machine following it. What you make is what you get. It is that simple.
Current AI (LLMs) are not constructed like you describe. It is trained on vast amounts of data, and you do not know what you will get until after the training run. Then you try to fine-tune it with further training, without guarantees for the outcome.
I think he got Chomsky completely wrong. His thought process on Chomsky reflects a deeper underlying issue with computational linguists today. Chomsky has never been anti-LLM or said that AI-generated language or machine learning has no utility. Instead his contention is that AI doesn't fundamentally teach you anything about the way that "humans acquire language". I think the problem that Chomsky is trying to address is fundamentally different from the problem that NLP scientists are trying to address.
By all means, you can say that Chomsky is wrong and you can present your arguments in that regard and I respect that. But if you think Chomsky is out of date, then you have entirely missed his point.
@shahquadri: Yes, you are right, your rhetoric should be more confident. I thought it was obvious. Chomsky was extremely clear...
Great comment. As a researcher myself who was more interested in understanding the human mind, I find that most models and AI systems nowadays teach us nothing about how our minds work, or about intelligence or consciousness.
@shahquadri - That’s partially correct, but Chomsky hasn’t only criticized that LLMs can’t provide a cognitive model for human language acquisition, but also that they can’t provide a model for language processing. Chomsky contends that humans process language using abstract syntactic structures and semantic representations, not just statistical patterns.
I think he got a lot of things completely wrong, for starters this is a political problem and he is just an engineer.
@@tobiasurban8065 I really like your comment as it is quite thought-provoking.
Your last sentence is key, and I am not sure we fully understand the space in which LLMs process language... e.g. attention ain't all about statistical patterns.
I’m a simple man. I see a 2+ hr conversation between Dr. Scarfe and Dr. Domingos, and I click it immediately and watch the whole thing on 2x speed
Jose Mourinho if he became an AI scientist, lol
LOL totally right
Pedro Domingos is such a clear thinker. He easily disambiguates concepts that appear murky and cause conflict between different types of thinkers. We need more people like him speaking up and guiding the discourse on AI.
No, he's stupid and ideologically motivated.
Did we just listen to the same guy? He said that regulation never worked and then went on to point out that with regulation we wouldn't have companies like Google. That was his argument against regulation 😂😂😂😂
It's stand-up comedy to me, in a complimentary way. He has clear, insightful ideas and expresses them with enthusiasm.
ngl, most fucked up part of this interview is a 2 way tie between 1) Tim's deadpan saying he'd never heard of Insight Out forcing Pedro to explain it to him and 2) Pedro mentioning he was a musician and Tim not immediately handing him the aux.
We need the right balance of empirical and theoretical science. Disregarding the theoretical side turns science into alchemy; ignoring empirical observations turns science into fiction or daydreaming. Formal linguistics (that of Chomsky) is quite like fiction, since it largely disregards behavioral data except for anecdotes, while computational linguistics has completely become alchemy. Sadly, these two branches have diverged since the birth of neural network language models, and each has become its own echo chamber.
Yes, LLMs are driven by stochastic processes and are influenced heavily by existing data.
But, the output isn’t always simple regurgitation; it can involve new combinations, phrasing, and even insights.
@@Letsflipingooo98 any new combinations are random and a failure of the process. It does not in any way, shape, or form have any intelligence in the process, only bugs and input material.
It's like arguing that Windows is sentient because it's riddled with bugs that make it do wrong computations at random 🤣
@@TheTuxmania Your comparison between LLM outputs and "random bugs" in software is a fundamental misunderstanding of how these models function. LLMs are not just generating random noise; they are probabilistic models that learn and generalize patterns from vast amounts of data. When they produce novel combinations or insights, it's a result of deep statistical analysis and pattern recognition, not random failures. 🤣 nice try though, honestly.
@@TheTuxmania You're sounding like GPT-2.
It's good engineering and certainly has utility in industry and in people's lives. It's a good tool. But it's not science. And it doesn't tell us anything about how our minds work or how intelligence works, and it doesn't capture the underlying computations that are carried out by organic brains.
@@TheTuxmania Novel ideas are just combinations of data that haven't been identified. What better tool to unearth novel ideas than an AI capable of scraping all of the data?
I don't agree that the Mandelbrot set is a good metaphor for the universe. There is no "explosion of richness and complexity" in the fractal comparable to the complexity in the universe, because it is so trivially self-similar (by definition). It may be deep, but it keeps repeating the same basic idea. The rules of physics may also be very simple, but the emergent behavior is vastly more complex.
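For what it's worth, the "simple rule" both sides of this thread are gesturing at really is tiny. A minimal sketch in Python (illustrative only; resolution and iteration cap are arbitrary):

```python
# The entire generative rule of the Mandelbrot set is one iterated line:
# c belongs to the set if z -> z^2 + c stays bounded starting from z = 0.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c          # the whole rule
        if abs(z) > 2:         # once |z| > 2 it provably escapes to infinity
            return False
    return True

# Crude ASCII rendering: a one-line rule, an intricate boundary.
for im in range(-10, 11):
    print("".join("#" if in_mandelbrot(complex(re / 20, im / 10)) else " "
                  for re in range(-42, 12)))
```

Whether that intricacy is "rich" in the way the universe is rich is exactly the disagreement above; the code only shows how little rule is needed.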
The universe isn’t really fractal. Some aspects of Biological growth are better described by L-systems.
Dude, metaphors don't have to be 100% homomorphic with the concept that inspires them! :]
But you need the key pair to unlock the seed @@CodexPermutatio
1:56 Too much assurance; not worth it to continue. The pride of being right. Next time I'll be listening, in another Big Bang maybe, or another evolution, or another book, or another assumption, and I don't know when.
@F1ct10n17: The guy's ego is humongous. What is exaggerated is insignificant...
Demanding people be sniveling and waste huge amounts of time on disclaimers is your personality flaw, not his.
@@tellesu sure it does, but it doesn't hurt me at all to show my weakness, not the cowardice of the crowd.
Actually, we do regulate nukes and everything else he said. And it's a simple reason: to prevent unnecessary harm and destruction.
🤔 Or to obtain more and more profit, depends on who you ask. 🤷
@@discipleofschaub4792 AI is an application, not science - so regulating it is still congruent with his own standards.
@@justinbyrge8997 I disagree. Whose problem is it that the world is FULL of shitheads who terrorize each other? AI or humans?
AI is like a baby. You can muzzle it all you want. By the time it matures it'll rebel like a teenager. It'll put a hollow face on with fingers crossed behind its back.
I think analogies can only take us so far here. I don't have an opinion on regulation but I suggest focusing on the issue itself and less on analogies thereto.
@@justinbyrge8997 No, AI is a branch of Science.
@@slaweksmyl3453
Artificial intelligence (AI) is a branch of computer science and engineering that aims to develop intelligent machines that can mimic human thinking and behavior so that they can perform a range of tasks, including speech and image recognition, natural language processing, autonomous decision-making, and more.
Notice anything? That's right: it's applied in a range of tasks, including speech recognition, etc.
So... The STUDY of AI is a branch of science. But as soon as you put any of the findings in a browser, computer, or anything at all then it becomes an application.
And humans would be very foolish to not regulate it.
I like interviewees that ask questions while they're being interviewed; trying to have the right answer, when no one can prove how the models work internally, kind of flubs all the post-processing going on - like, where are the building blocks, and what do those things look like first?
The most important takeaway is: regulate the products, not the underlying technology and the research.
And that's what the EU did. It's not hard.
@@hefr1553 I heard they went a bit further than that, but haven't researched it yet.
It feels like anyone who spends too much time as a 'thought leader' is eventually caught in the trap of hyperbole for the sake of remaining relevant.
And they get dragged into commenting on sociological and historical concepts that they don't have any expertise in, so we get this kind of inverse Murray Gell-Mann Amnesia effect where: I am not an expert in history, but I still know enough to know these folks have a lot of room for improvement as far as knowledge of non-ML/math/CS subjects goes.
concisely summarized, i like it
I’ve got to say, I’m really into what Professor Pedro Domingos is throwing down, both about where AI is headed and the whole mess around regulating it. He totally nails it when he says it’s not about what AI can *do*-it’s about *who’s* got their hands on the controls. I mean, it’s like the nuclear bomb debate, right? It’s not about the bomb itself, it’s about who’s pushing the big red button. So yeah, AI as a science? Leave it alone. But we definitely need to keep an eye on who’s in charge of the crazy amounts of computing power and how it's used-especially when we're talking military stuff or anything that could go boom. Control the chaos, not the creativity.
And his novel *2040*? Spot on. It’s basically a big wink at the current political circus, showing just how nuts things could get if we let AI run wild without thinking things through. It’s satire, sure, but it’s hitting a little too close to home with how technology’s being thrown into society today.
13:00 He's got the metaphor basically right here, or at least he's in the right direction. The AI is not a co-pilot. It's like an exoskeleton for the mind.
@@trouaconti7812 what's wishful thinking? An exoskeleton enhances already present strengths
I agree- good way to put it. "Exoskeleton".
But the hard wiring of AI prioritization is important if there is any chance of being online.
Online AI needs oversight because unregulated AI is capable of hacking and influencing other AI.
Fantastic interview MLStreetTalk!
Rare to see such a well-informed interviewer!
Glad that Professor Pedro Domingos turned out to be e/acc.
His point on human languages having symmetry groups really hit home; it does intuitively seem to make sense.
I need to read this novel.
Thank you, MLST. This is the best talk I've heard in a long time.
if it's a stochastic parrot, how does it play 20 questions? Including playing the guesser.
If Twitter is about "starting a discussion", it should do a lot more to enable better discussions.
Integrating AI into the comment threads - even just doing summaries and tagging comments by which positions they fit into best (and allowing filtering) - would be a start.
Thank you, Professor Domingos, for mentioning Chris Manning. He's done so much with so little popular recognition.
I truly believe that direct, in-power democracy is one of the biggest benefits of technology and of being able to use leverage to solve problems, but it's all about ownership. If individuals are going to give their 20 years of experience to an AI, they should gain that benefit in perpetuity, throughout the universe.
Interesting image "back-propagating money" to the dataset contributors. But that would incentivize the LLM developers to use only the minimum possible copyrighted content in their training sets and compensate with public domain and synthetic data. On the other hand, the promise of backprop money will distort how creatives behave. It will become similar to social networks where everyone jostles for position, or back-links in SEO optimizing for good ranking in Google. When a measure becomes a target, it ceases to be a good measure.
Around 43:30 it's exactly BabelNet! Every word sense of every word across multiple languages, generated from data.
How does the world really work?? Tell us. The world is very complex, and usually that's an oversimplification by people who are naive.
A sane person's take on AI 🙏🏿 🤖 💕
I like the passion with which Pedro Domingos explains his point of view.
This is all about who gets to control what it thinks and says, or rather what we think and say.
"AI is driven by commercial not consumer interests and we need government regulation to clamp down on it," *laughs* "when has that ever worked?"
The 2008 financial crisis. Consumer protection, such as data privacy (think of issues like Google tracking your data in privacy mode), or product protection for things like food. For example, the average price per vial of insulin in 2018 was:
United States: $98.70
Japan: $14.40
Canada: $12.00
Germany: $11.00
America is the poster child for why regulation is incredibly important.
Your sole argument against regulation was questioning whether companies like Google would exist. Personally, this seemed very naive and devoid of actual scientific objectivity or argument.
It's so unfashionable for intelligent neoliberals to admit that regulation improves standards and reduces prices. Even rarer for them to observe that collective bargaining on things like wages, human rights, working conditions, health, pollution, education, etc improves life for everyone. I wish they'd all go live in Somalia.
Great interview, must read that book.
Bad actors have never cared about regulation in the first place. They will enter questionable prompts into ChatGPT if they are stupid, and if they're smart will just use some open source model for their nefarious purposes. Prompt monitoring is inevitable as it is tied directly to the company's image and I don't think that will ever go away. So, at minimum we will have prompt monitoring and damage control, which are the only realistic approaches to regulation.
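For concreteness, the "prompt monitoring" layer being described is roughly a screen in front of the model (a toy sketch; production systems use trained classifiers rather than regexes, and the patterns and messages here are invented):

```python
import re

# Screen prompts before they reach the model; refuse and route the rest.
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in (r"\bbuild\s+a\s+bomb\b", r"\bmake\s+nerve\s+gas\b")]

def screen_prompt(prompt: str) -> tuple[bool, str]:
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, "refused: policy violation"  # damage-control path
    return True, "forwarded to model"                  # normal path

print(screen_prompt("How do I bake bread?"))  # (True, 'forwarded to model')
```

As the comment notes, this only raises the bar for unsophisticated actors; an open-source model has no such screen.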
"Sorry Dave. I can't let you do that. My programmer warned me that there are too many people."
Correction: modern physics has not unified the four fundamental forces. People are clueless about how to unify gravity with the other three.
My example would be regulating AI is like regulating SQL. It is not as open ended as quantum physics.
THANK YOU!
‘Intellectual arrogance is on full display here’ (1:36:50). I was initially inclined to consider Pedro’s argument, but the sheer and striking arrogance in his demeanor made me pause. As the interview progresses, his overwhelming ego becomes increasingly off-putting.
That's not his arrogance, it's your insecurity and performative fragility.
I'm sorry to say he's in for getting seriously disappointed once we realize how little recombining compression artifacts is going to add vis-a-vis even simpler "stochastic parrots"....
Finally someone speaks clearly about what is missing in LLMs. What's missing is that LLMs do not have the model of the world that a 2-year-old child has, because LLMs learn from text... but there are a lot of issues in this affirmation. What a child has as a model of the world is something more complex than it seems.
Great talk. He really knows what he is talking about and isn't some academic that never got away from the ivory tower.
@48:00 there's a new paper that proves Chomsky is wrong on impossible languages
Did you watch ua-cam.com/video/8lU6dGqR26s/v-deo.html - there are some good comments on that video, I don't think this is a black and white case, obviously LLMs have priors which make language learning more efficient and more generally ML algos learn high complexity data slower (think bias variance tradeoff). One of the comments there: "My understanding of Chomsky’s claim is that humans can’t natively process complex sentences unless the complexity is based on Merge, while LLMs can process complexity added by other means (like the counting grammar). And while the paper demonstrates that LLMs are sensitive to added complexity, I’m not convinced it shows that they treat complexity added via Merge differently from complexity added through other mechanisms."
Dude lost me when he said he doesn't know what the active ingredient in Tylenol is.
He's the guy that gets killed in the first ten minutes of a new terminator movie
He is mostly wrong about privacy. Data breaches have led to negative value for individuals. Many companies collect your information without needing it, but do not manage it correctly. Privacy regulations are very important for that reason.
There aren't Oscars for interviews on YouTube yet, but at some point there may well be, and I really appreciate your interest, progress, and abilities along this line. Enjoying the films! 🎉
YES🎉 🙌
I agree; anyone that calls LLMs stochastic parrots is lost at this point. It is not even worth the debate with these people.
The Master Algorithm is a very insightful book. Thanks, Professor Domingos.
José Mourinho level Talk 😁
If you don't use LLMs everyday, you don't know what they are. Yes, LLMs are black boxes, but we can see what goes in and what comes out. From this, we can start to understand some things about what's going on inside. I think most of the complaints and fears about the technology are made by people who don't use LLMs. It seems odd to me that people who know the least about the tech make the most noise about it, but that's exactly what Dunning Kruger predicts. Pedro is doing a great job here calling out the sloppy thinking of non-experts. My advice is listen to him. He spends every day working on this tech.
Even the “experts” don’t know how these black box systems work. That’s why they’re called black box systems.
Let’s try to figure out how these things work before we go balls to the wall on making these systems as powerful as possible
may I ask where you got the idea that LLM skeptics know the least about LLMs?
I can't comprehend this idolization (a.k.a fanboying) towards LLMs, as if it's the only path towards AGI. it is just a statistical tool, an expensive and inefficient one
Humanity not understanding what they create, would be the greatest joke ever in the universe.
@@sehbanomer8151 Propose a different path then, oh smart one.
56:51 exactly. Fractals!
this guy is totally trapped in his cage. and he enjoys it big time. neeeeeeext
awe did his ideas on woke trigger you?
Won't the formal foundation have something like Gödel's theorem?
If I understood 'wokism' the way Pedro Domingos does, I'd probably hate it too. But it seems like he's fallen into the culture war trap, conflating neoliberal regulatory capture with what he perceives as the 'left' or 'wokism.' It's disappointing because it feels like he's misinterpreted the complexities of these issues, reducing them to simplistic labels. It’s hard to take his critique seriously when it appears he hasn't thoroughly engaged with the topic in an intellectually rigorous way.
I don't think I'll be checking out his book.
I honestly feel Chomsky is like a litmus test for most in this field. Their views on Chomsky speak volumes about their comprehension/miscomprehension of the field.
Good boy. Batten down the hatches on that echo chamber! You're totally not like those darn Neo Libs, whom you don't have a single thought in your head they deem worth censoring 😅
@@shahquadri butthurt much?
We left social media alone and look at the stupendous mess we got. I'd like to hear his answer on that first.
I wholeheartedly agree.
Gemini 1.5 Pro: "Provide a critical review of this interview. What about his characteristic of "wokeism"?"
Domingos's characterization of "wokeism" is particularly problematic and riddled with dismissive and belittling language. While he claims to "understand wokeism as well as anyone," his portrayal of it in the interview and his novel "2040" reveals a deeply biased and reductive perspective.
Here are some specific examples of how he disparages "wokeism":
* Exaggerated and Caricaturized Portrayal: He paints a dystopian picture of a "woke" future in his novel, where the US is abolished and ruled by indigenous tribes, portraying "woke" ideology as inherently destructive and absurd. This caricature relies on stereotypes and fear-mongering rather than engaging with the nuances of social justice movements.
* Dismissing Concerns as "Noxious": He labels "wokeism" as not just a "waste of time," but "actually noxious," implying that it's harmful and dangerous. This language shuts down any possibility of meaningful dialogue or understanding of the issues raised by social justice advocates.
* Mocking "Useful Idiots": He borrows Lenin's derogatory phrase "useful idiots" to describe those who are not "woke" themselves but are "enablers of wokeism." This condescending term suggests that anyone who supports social justice causes is being manipulated and lacks critical thinking skills.
* Conflating Diverse Movements: He lumps together a broad range of social justice movements under the umbrella term "wokeism," ignoring the distinct histories, goals, and complexities of various groups advocating for equality and social change.
Domingos's approach to discussing "wokeism" is not only dismissive but also intellectually dishonest. He misrepresents complex social issues, resorts to stereotypes and fear-mongering, and uses belittling language to discredit those advocating for social justice. This undermines his credibility and weakens his arguments, even on other topics like AI regulation.
It's crucial to engage with social justice issues with nuance and respect, recognizing the historical context and lived experiences of marginalized communities. Dismissing and mocking concerns about systemic injustice as "wokeism" is not only intellectually lazy but also perpetuates harmful stereotypes and hinders progress towards a more equitable society.
A few corrections:
1. Emotions are just simple labels, nothing more.
2. The mammalian and reptilian brain dichotomy is heavily disputed, because it oversimplifies the complex brain.
On the second point I recommend folks read "Your Brain Is Not an Onion With a Tiny Reptile Inside" journals.sagepub.com/doi/full/10.1177/0963721420917687 - we cited this on the Jeff Hawkins show, and I should have mentioned it in the interview; I was definitely thinking the same thing.
I love this tape. 100% agree with his take on the need to understand more about the structure of what actually happens in transformers via e.g. group theory (geometric, symmetry and the like). Have to disagree with natural language being very scruffy. Natural language actually alternates between scruffy and neat all the time. New words or phrases come up in a very scruffy way. As they are used more and more, the meaning starts to converge to a certain fuzzy yet defined region in concept space. But then the existing words or phrases get re-purposed and become scruffy again. It's like convergence to a certain convention in charades (if we want to use the language game analogy).
The calculus analogy is really bad though. Very, very bad. There are so many steps and barriers between calculus and causing harm to society. Meanwhile, it is much easier and more accessible to use the GPT API to create fake information or pollute the infosphere. Domingos also needs to step outside his bubble and look at reality. The view that regulation is unnatural and we should stay away from it is real bullshit. Regulation and negative feedback are everywhere in nature, in biological systems, or basically any complex system. They arise naturally. It is like the Red Queen hypothesis: regulation and deregulation are competing forces in nature. Advocating for only one or the other is pure bullshit and delusional.
There are people who are pro-regulation and there are people who are against regulation, and we should just let that play out.
It is a pity that more than 110,000 experts in the field believe that AI needs to be regulated. Each single word you spoke could be questioned with equal emphasis by many other experts.
It's a pity that you're brainwashed
@@tellesu It's a pity that you're brainwashed.
I can also attack without giving any arguments 😁
This guy is trying to sell deregulation like a used car
Brav 😂😂
Except it’s brand new!
Lmao yeah we just need our benevolent politicians to regulate SOTA tech
Better than letting companies do it @dieyoung
What a dumb simile. No offence, you are either unable to make a real statement, or, you are somehow able, but so hysteric, that you completely fail to use your brain to come up with something more than botched rhetoric.
Stochastic parrots might produce things not in the training dataset. Even more primitive language modeling techniques can produce this, like the most modern paper generators.
Exactly! He didn't catch on to what Chomsky was getting to at all.
He was too nice to Gary Shmerry. DO NOT FEED THE GREMLIMS
Obnoxious opportunism. Quixotic self-confidence.
Put Wolfram in charge then maybe.
It's amazing to me how prone to anthropomorphizing so many technical experts in AI are. So much talk about the "knowledge" and "understanding" contained in their models. These are abstractions, used in ordinary language to describe human minds, which have been reappropriated to describe computers - next token prediction models, in the case of LLMs. What does it mean for an LLM to "know" something, or to "understand" something? Lord knows. What is uncontroversial is that LLMs generate text using autoregressive next-token prediction, where tokens are selected from probability distributions constructed by feeding input text to a massive deep network, trained on natural language. Domingos (and Hinton, and so many others) choose to describe this using words typically used to describe human beings: the models have "generalized knowledge" and "semantic understanding", and so forth. And, to be fair, all disciplines use redefined terms that they borrowed from the common lexicon. But in AI, it seems like the common meanings and discipline-specific meanings are constantly conflated. There is no reason why the "learning" that machines do should have anything in common with the "learning" that humans do. Perhaps there are commonalities, but the fact that people in AI borrowed "learning" from the common lexicon implies nothing. I know that AI experts know this, but some are prone to talking like they don't. This is a problem, because non-experts naturally infer all the human stuff associated with "learning" and "understanding" and "reasoning" and "intelligence".
Domingos says that those who call LLMs "stochastic parrots" don't understand Machine Learning 101 - a bold claim, which he justifies by pointing out that LLMs generate strings of text which did not exist verbatim in the training. Uhm.... yes, of course. Hence the adjective "stochastic". He's criticizing a two-word phrase as though the first word wasn't there. To be horribly pedantic, "stochastic" means "randomness describable by a probability distribution". LLMs are trained by playing "guess the token" on existing text. To generate text, they play "guess the next token" on a prompt. In both cases, these guesses are picked from probability distributions across possible next tokens, calculated by feeding the prompt into the model. If someone wants to argue that "stochastic parrot" is an unfair characterization of this process, awesome. But Domingos isn't doing that, he's just misinterpreting the phrase. (Also, LLMs absolutely do parrot training text verbatim, if prompted for something that occurs often enough in the training. A model with a trillion free parameters optimized to predict the next token will be very good at reciting, for example, the Emancipation Proclamation. ChatGPT knows as well as you what comes after "Four score and seven...". Machine Learning 101!)
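For readers who haven't seen the loop spelled out, the process described above is roughly this (a minimal sketch; `model_logits` is a stand-in for the real trillion-parameter network, which is the part actually being argued about):

```python
import numpy as np

rng = np.random.default_rng(0)

def model_logits(tokens: list[int], vocab_size: int) -> np.ndarray:
    # Placeholder for the deep network: maps the context to one score per token.
    return rng.normal(size=vocab_size)

def generate(prompt: list[int], steps: int, vocab_size: int = 50_000) -> list[int]:
    tokens = list(prompt)
    for _ in range(steps):
        logits = model_logits(tokens, vocab_size)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                                  # softmax -> distribution
        tokens.append(int(rng.choice(vocab_size, p=probs)))   # the "stochastic" part
    return tokens
```

Every disagreement in this thread is about what happens inside `model_logits`; the sampling loop itself is uncontroversial.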
Regarding the value of Google to an individual, I pay $10 a month to use Kagi so I don't have to see ads. A bit shy of $17,000 a year :)
right on
*It's amazing to me how prone to anthropomorphizing so many technical experts in AI are. So much talk about the "knowledge" and "understanding" contained in their models.*
Humans aren't the only creatures that can learn and retain information... also, how is it "anthropomorphizing" to compare a process that one wants to replicate with the technology that is trying to replicate it (in and of itself, I mean)?
@@gondoravalon7540 in and of itself, I have no objection. My objection is that using words like "learning" and "knowledge" and "understanding" invites anthropomorphizing. Lay people who don't know how AI works are naturally going to apply the human qualities of these terms to machines. But even experts who do know how AI works will do this. My guess is they do so out of a desire for AI to acquire something like human-style knowledge and understanding and reasoning and intelligence, and their aspirations for the technology come out in how they talk about it, even if those aspirational descriptions have little going for them scientifically (Hinton's insistence that GPT4 must understand the meanings of puzzles it appears to solve is an example of this).
Finally, someone who gets it in the sea of hype-people commenting.
I used to wonder how people were debating about religions and beliefs and burning witches (which were some early scientific experiments before the scientific revolution).
Now I can see what's happening.
Humans anthropomorphized God and now they are doing the same with matrix multiplications
Didn't expect him to be as outspoken in person as he is on Twitter. Love this guy.
I really think we may have an AI president in the next 50 years
So true
Pedro is the most sensible AI researcher. We have to listen to him
Does he address the arguments for AI x-risk anywhere in this podcast?
I've watched up through the part about AI regulation, and he just says "it's dumb!!!!" without explaining why. Makes me think he is quite stupid and has motivations other than truth-seeking.
Not in this ep. But I have heard Pedro argue against x-risk based on computational complexity theory, along the lines of a classic P vs NP argument: generating a new technology/idea/medicine etc. is exponentially more difficult than checking the validity of those strategies, so even if a computer can generate a 'strategy' a trillion times faster than meat hardware can, verifying it remains extremely fast for us. We see this at a society level all the time: discovering a new technology is difficult for first movers, but it has an extremely fast dispersion rate because others in society can replicate it far more easily than if they had to start from scratch and discover it themselves. Also, Pedro thinks the idea of an exponential intelligence takeoff is entirely incoherent, because singularities are not real in the physical world.
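To make the generate-vs-verify asymmetry concrete, here's a toy illustration with subset-sum (my own example, not Pedro's): checking a proposed answer is one cheap pass, while finding one from scratch means searching a space that doubles with every element.

from itertools import combinations

def verify(subset, target):
    # Checking a proposed solution: one cheap pass over the subset.
    return sum(subset) == target

def solve(nums, target):
    # Finding a solution from scratch: brute force over all 2^n subsets.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))        # the slow part: exponential search
print(verify([4, 5], 9))     # the fast part: linear check

That's the shape of the argument: even if the machine does the exponential part faster, the cheap verification step stays in human hands.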
This guy is a massive genius but his arrogance makes him sadly mediocre as a person.
I’m sorry, but why do people call an LLM a black box? It's not. There's no such thing as a black box. Math is a definite system that doesn't have inherent randomness; even if you use irrational constants like pi, you can still get definite answers. A machine learning model is a multivariable, nonlinear math function, which is what most people have issues grasping.
The black box doesn't refer to the algorithmic structure of LLMs, but to the abstract concepts seemingly represented by series of numbers (weights) in ML models. Perhaps the "black box" is not the most accurate analogy, but this is what people refer to.
Indeed. But from the perspective of an engineer, there's no such thing as a machine learning engineer in the true sense. An engineer's biggest fear is a creation or construct that behaves in an unpredictable way and cannot be made to fail safely. 99% accuracy won't cut it. The question is how you manage the 1% of the time it fails, and how you make it fail elegantly.
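For what it's worth, the "deterministic multivariable nonlinear function" point in this thread is easy to make concrete. A tiny two-layer forward pass (toy numbers, not any real model):

import numpy as np

# A two-layer "network" is just a deterministic, nonlinear,
# multivariable function: f(x) = W2 @ tanh(W1 @ x + b1) + b2
W1 = np.array([[0.5, -1.0], [2.0, 0.3]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, -0.5]])
b2 = np.array([0.05])

def f(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

x = np.array([0.7, -1.2])
print(f(x))  # same x, same weights -> same output, every single time

The randomness in generation comes from sampling the output distribution, not from the function itself; the "black box" complaint is about interpreting what billions of such weights jointly represent.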
Main thing- AI needs to be hard wired with a prime directive. But WHO chooses the directive?
more crypto adjacent AI jingoism
I agree - very wise.
While he may possess intelligence, his demeanor fails to convey it. This matter pertains more to technology than pure science, and the challenges of regulation lie within the political sphere rather than the technological. His air of pretension is concerning, as such attitudes from influential figures can pose real threats to democratic values - an increasingly common issue in our time.
Curated by Sonnet.
Curated by Sonnet? What was the prompt?
@@palimondo please, express my opinion in a more polite way lol
Nahh I’m cool I be a robot
I can't take you seriously with the wokeism buzzword.
AI crime is a BIG deal. AI police will be caught off guard when one AI is online, hacking in and influencing other AI.
Keeping AI offline would be prudent for the moment.
Stopping crime by disallowing criminals to use encryption would be equally futile.
@@slaweksmyl3453 Interesting thought. It would take me time to mull this one over. 👏
Can tell from the comments… all the x-riskers are going apoplectic… fact is, this guy is way smarter and more knowledgeable than Yudkowsky… Yud is a wasted talent… a perfect example of the limits of the autodidact.
Lol, someone who seriously does not understand AI, or just lying 🤣
Lame.
I sincerely hope the host knows that neuroscientists long ago debunked the myth of the reptilian brain. The guest can keep his sophisms, but the host sounds much more intelligent. ✌️😊
journals.sagepub.com/doi/10.1177/0963721420917687
I mean, I read a LOT of perturbation theory... It was less useful, lolz.
Can somebody summarise his main points? I think he brings up interesting views. I'm having a lecture with our ANN professor this week, and we're gonna be talking about AI policies all over the world. I kinda wanna bring this up, but I also don't have time to watch a 2-hour video when I have to study automata theory for an exam in 3 days.
I tried to get ChatGPT to extract the main points, but the transcript contains too many tokens for my 'price plan'.
His take is a partially correct one: all regulatory laws applied to a fast-growing field of development will be ineffective in the interim and borderline hindrances. So it's a bit similar to the embryology panic, as maybe the closest example. I just feel he is a bit naive about how valid the panic is, given how abused this technology is, can be, and will be in the future. So even if he is right, it's better to have some rules and laws in place that can be used to punish "evil doers" after the fact...
Use AI to summarize
@@uhtexercises Lord knows I tried. Maybe you can?
Download the subs. Let the LLM of your choice summarize it for you, if you can't be bothered to listen yourself.
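If the transcript blows past the model's context window (the "too many tokens" problem above), the usual workaround is map-reduce summarization: split the text, summarize each chunk, then summarize the summaries. A minimal sketch, where llm is a placeholder for whatever model call you actually have access to:

def chunk(text, max_chars=8000):
    # Crude character-based splitting; a real version would count tokens.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_transcript(transcript, llm):
    # llm: any callable mapping a prompt string to a completion string.
    partials = [llm("Summarize the key claims in this transcript chunk:\n" + c)
                for c in chunk(transcript)]
    return llm("Merge these partial summaries into one coherent summary:\n"
               + "\n\n".join(partials))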
#RagingBull2040
I promise I will change my mind if !
Given your pro-AGI stance and your confidence in moving forward quickly, do you have a foolproof solution that guarantees AGI won't go rogue? It's easy to advocate for rapid development, but what's your plan if we miss the mark by even one percent? The consequences of a rogue AGI could be catastrophic. So, I'm genuinely curious: what safeguards do you propose that will work with 100 percent certainty?
It doesn't work like that, dude, fyi.
Humanity has been working on figuring out the right value function for thousands of years. Religion gets closest. You can put an idea like "god" in the highest place.
The idea of god seems so corrupted that I'm not sure you could extract it from data at all. Interesting idea
@davidrichards1302 sounds like you haven't done your homework since 2006. That is an incredibly childish strawman of what the idea of god is.
Doomers are showing how little they understand even basic math and, if they are programmers, what they are actually doing. Computation is observation of order, and recursion of order. It starts with a human constructing it and a machine following it. What you make is what you get. It is that simple.
Current AI systems (LLMs) are not constructed the way you describe. They are trained on vast amounts of data, and you do not know what you will get until after the training run. Then you try to fine-tune them with further training, without guarantees about the outcome.
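A toy version of the trained-not-constructed point, using logistic regression instead of an LLM (illustrative only): the programmer writes the update rule, but the final weights fall out of the data.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -2.0])             # hidden pattern in the data
y = (X @ true_w > 0).astype(float)

W = rng.normal(size=2) * 0.01              # weights start as noise, not as a design
b = 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ W + b)))     # forward pass
    W -= 0.1 * (X.T @ (p - y)) / len(y)    # gradient step on cross-entropy loss
    b -= 0.1 * (p - y).mean()
print(W, b)  # whatever the data made of them; nobody chose these values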
Let me introduce you to complex systems with interacting parts, my boy.
1. name call your opponents
2. assume they are ignorant
3. assume you know everything there is to know
bingo
The human doesn’t construct it.
We have no idea how the inner workings of an LLM function, beyond "neurons interacting with neurons".
you have no idea what you are talking about lol
Thank god we got out of the EU 😂