God-like AI is Closer Than You Think | Aaron Bastani Meets Ian Hogarth | Downstream

  • Published 20 May 2023
  • Ian Hogarth has invested in more than 50 artificial intelligence companies and is co-author of the annual “State of AI” report. And he's worried. Not only about the immensely disruptive consequences of machine learning for employment, as AI automates potentially millions of jobs, but also about the potential rise of an 'AGI' (an artificial general intelligence). For Hogarth, the prospect of a machine able to augment its own intelligence is of grave concern - and something which, so far, political elites have ignored.
    So what could the emergence of an AGI mean? And how soon before it arrives - if at all? How important is quantum computing? And does the existence of these technologies within the broader framework of capitalism mitigate or amplify risks? Could AI operate more effectively under a different kind of economic system, and will its diffusion herald a break with capitalism as we know it?
    _________________________________________________________
    Support Novara Media for as little as £1 a month:
    novaramedia.com/support
    Buy Novara Media merch here:
    shop.novaramedia.com/

COMMENTS • 828

  • @kashphlinktu
    @kashphlinktu 11 months ago +62

    One thing I think most people don’t understand is that the business world is going to pursue AI as fast as possible and as widely as possible with absolutely zero consideration of consequences. They are putting all their resources into pushing this as quickly as they can. I’ve been at corporate events where they talk about AI and they have dollar signs in their eyes and do not give one single solitary fuck about the risks. You have to understand how these people think. They simply do not care about us at all. If it makes money, they do it, and they never feel guilt about what they’ve done. Maybe they can’t feel guilt. I don’t know. But they’re not going to slow down one iota unless they are forced to, and it would have to be a credible threat of jail time and bankruptcy for them to think twice.

    • @Tomgood1984
      @Tomgood1984 10 months ago +1

      Please tell me which part of "Artificial intelligence" is artificial?

    • @kashphlinktu
      @kashphlinktu 10 months ago +3

      @@Tomgood1984 …the part that runs on silicon hardware? Rather than a squishy organic brain? What are you not understanding?

    • @JustJanitor
      @JustJanitor 8 months ago +2

      This comment is spot on, thank you

    • @danielstansfield7556
      @danielstansfield7556 7 months ago

      And the scary thing is just how incompetent and ignorant the governments of the world are in the face of this! They’re our only hope and they don’t have the skills to see the danger! Basically we’re fucked, if you ask me.

    • @homergee3381
      @homergee3381 6 months ago

      The flat tax system creates the most extreme version of capitalism possible, so why be surprised when there is no limit whatsoever to the amount of money that can be made? Time for a fully progressive tax, don't you think?

  • @scooble
    @scooble 1 year ago +324

    As bad as it is that everyday bad actors will do bad things with AI, one of the biggest threats I see is that the biggest and most powerful corporations will be able to turbocharge wealth-extraction methods, accelerating wealth inequality beyond anything we've seen before.

    • @DavidCodyPeppers.
      @DavidCodyPeppers. 1 year ago +25

      When corporations can fire 90% of lawyers, online/phone customer service, coders, and creatives, they will be rolling in profits.
      In other words, bye bye ALL white-collar professionals.
      Peace!
      \o/

    • @TheIgnoramus
      @TheIgnoramus 1 year ago +9

      Don’t look into BlackRock, then.

    • @madeleineswords704
      @madeleineswords704 1 year ago +9

      Exactly! It's Pinky and the Brain on steroids!
      What will they do? What do they always do?

    • @r0galik
      @r0galik 1 year ago +16

      ​@@DavidCodyPeppers. nobody will be able to buy their products in that case. It's one of the simplest contradictions of capitalism. Also, I'm sure that the US government (and others) are doing everything to have exclusive rights to the best/unconstrained versions of AI models.

    • @DavidCodyPeppers.
      @DavidCodyPeppers. 1 year ago +1

      @@r0galik
      The last time I checked, the US government is run by capitalist corporations.
      But you keep telling yourself white-collar professionals won't be obsolete in 10 years.
      The Fed needs unemployment to get above 6% to bring down inflation, and in case you haven't noticed...
      Blue-collar jobs are bare bones as it is. In addition, workers making under $20 an hour are NOT the reason for inflation.
      Peace!
      \o/

  • @trupinys1979
    @trupinys1979 1 year ago +9

    I am... or, was a translator. Loved my job. Now I realise it's gone forever. I need to work for peanuts to post-edit machine translations, which not only takes away my work, but also my dignity. And it's impossible to survive doing that. Sure, it's technological progress, but there will soon be millions upon millions of people like me. I don't know how countries will cope with so many unemployed professionals all at once. And not only that, but these will be people who will feel lost and heartbroken because they truly loved their job and trained for it for a long time. How will they all survive and find a new purpose?

    • @ferencszabo3504
      @ferencszabo3504 11 months ago

      These people don't give a f. about the others. Their only concern is profit and control! Sorry to tell ya this, but the world is turning upside down and we're hearing nonsense comforting BS talk. The good news is that the millionaires will survive! Rejoice!

  • @nish663
    @nish663 1 year ago +211

    As someone who actually works in the field, I can definitively say that no one knows how close we are to AGI. People like to think they know, but they don't. If only for the reason that we don't know what AGI actually even looks like.

    • @mitsterful
      @mitsterful 1 year ago +10

      I don't agree with the title of this video, but certainly more effort should be made by governments to regulate AI, and they should do so quickly given that we don't know how much of a game changer the current language models will/won't be.

    • @nish663
      @nish663 1 year ago +17

      @@mitsterful I don't disagree with regulation. I disagree with the title, and the sentiment that it portrays.

    • @obibraxton2232
      @obibraxton2232 1 year ago +7

      What’s AGI ?

    • @OneTrueScotsman
      @OneTrueScotsman 1 year ago +1

      @@obibraxton2232 Human level artificial intelligence.

    • @OneTrueScotsman
      @OneTrueScotsman 1 year ago +7

      You're right, we can't really agree upon a definition. Current ChatGPT arguably meets the original standard, the Turing test I believe (at least on a good result; it fluctuates and hallucinates too much to say it meets it every time). But the pace of development is fascinating, and keeps surprising the top experts in the field.

  • @TuringTestFiction
    @TuringTestFiction 1 year ago +23

    I had never heard of Ian Hogarth before but he is quite good at giving an overview of not just AI/AGI but also quantum computing and even related policy issues from a high level investor point of view. This would be a pretty good introduction to a lot of these issues for many people.

  • @fleurramsay8113
    @fleurramsay8113 1 year ago +88

    The third path is unrealistic - the military is the first to take up new technological developments. Thinking that this can remain benevolent is pie in the sky.

    • @ko6el
      @ko6el 1 year ago +6

      Yes, I thought the same. They're already using military digi-dogs in the police force; if that's the first port of call, all other bets are off. Wish the boys would be more realistic about their toys.

    • @bowantoia8536
      @bowantoia8536 1 year ago +2

      Can you imagine, a war where nobody dies on the Battlefield? Would it be machines destroying infrastructure or going after the meaty humans hiding behind their machines?

    • @Mightypi
      @Mightypi 1 year ago +3

      ​@@bowantoia8536 that only lasts until one side runs out of functioning robots

    • @bowantoia8536
      @bowantoia8536 1 year ago +2

      @@Mightypi I know, warfare never really changes. Just becomes more efficient.

    • @squamish4244
      @squamish4244 1 year ago +2

      @@Mightypi On the other hand, an Iraq veteran wrote on a 'war robot' video that he wishes a robot had gotten its legs blown off instead of him. It's easy to intellectualize this stuff when it's not your ass being sent off to war.

  • @brendanlea3605
    @brendanlea3605 1 year ago +37

    Wonderful discussion, thank you. The more I listen to thoughtful, intelligent people, the more I despair for our current political leaders.

  • @HC-xl2en
    @HC-xl2en 1 year ago +17

    This is one of the best conversations on AI and the near future I've heard. Thank you.

    • @AGMI9
      @AGMI9 11 months ago

      How? It's just some guy screaming for regulation from governments that can barely fix potholes and are run by boomers who can barely use the internet. One of the dumbest convos I've wasted my time listening to. People are afraid of their own shadows these days if the media tells them to be.

  • @neilhorsefield7220
    @neilhorsefield7220 1 year ago +20

    The best interviews of this kind make you re-examine your own thinking. This was one of them.

  • @joop5415
    @joop5415 1 year ago +8

    I have a master's in physics with philosophy, specialising in the philosophy *of* physics, and I thought I should point out that the quantum physicist referenced at 38:15 is wrong about the relationship between consciousness and quantum physics. That relationship isn't what is often taught/thought about within the context of physics departments. Quantum physics is such a radical departure from our intuitions about the way the world works that it is often taught as, fundamentally, a theory of "measurement" itself, rather than as a theory about the way the world works that we then go out and measure with instruments and experiments. This might not look like much of a distinction, but it is a subtle and, in the context of quantum physics, very important one. The reason that ideas about the nature of consciousness get mixed up here is that physicists generally don't have a precise definition of "measurement" in the context of quantum mechanics and instead rely on an experimentally reliable but ultimately imprecise and philosophically bankrupt intuitive notion of "measurement" as having to do with the contact between some kind of conscious mind and the external world. This is likely why the quantum physicist being referenced thinks that consciousness has some kind of fundamentally "quantum" nature - that there have to be quantum measurement processes in order for there to be consciousness at all. But there is no real justification for thinking that quantum measurement requires a "consciousness", nor, as this physicist says, that consciousness requires some process of quantum measurement. And there are good reasons to think that both of these ideas are just incoherent.
    As it stands, the answer that Ian landed on is basically correct - consciousness is absolutely not understood well enough to say with any confidence whether large language models or some hypothetical general intelligence is conscious. It may never be. And that isn't just on a scientific/biological level but even on a conceptual level. Basically no one will agree on a definition of what it means to be "conscious" that would allow for consensus-building scientific experiments to be conducted on machine learning models to determine whether they're conscious.

    • @TBOBrightonandHove
      @TBOBrightonandHove 1 year ago +2

      Do you need proof to determine you are conscious? However you care to define it? Surely you can safely say that the entire Universe as you perceive it and as you will perceive it in the future is filtered through your consciousness. No consciousness, no reality, whatever that is. Humbling.

    • @joop5415
      @joop5415 1 year ago +2

      @@TBOBrightonandHove That's not really relevant to the question of whether large language models are/could be conscious. And besides, just because all of our knowledge about the world comes through our conscious experience doesn't mean that the world doesn't exist in some sense independently of any particular consciousness.

    • @bernardofitzpatrick5403
      @bernardofitzpatrick5403 1 year ago +1

      Interesting insights !

    • @kashphlinktu
      @kashphlinktu 11 months ago +1

      Agree 100%, and it is rather frustrating when people insist on saying that quantum physics says things it doesn’t actually say. Quantum physics has a bunch of aspects that are counterintuitive but when we talk about measurement we’re not trying to support your vague statements about consciousness.

  • @odetocycling
    @odetocycling 1 year ago +12

    Aaron, that was absolutely fascinating. I've just been blown away by the intelligent and thoughtful discussion captured in this video. Thank you for pulling this together.
    I subscribe to Novara Media (and some other progressive news outlets) and no longer have a TV or licence. People wonder how I survive without a TV, but I honestly think I'm more informed, receiving more valuable and quality content and avoiding so much noise and dross.
    For such important topics as AI, hearing from those who have first-hand knowledge and are so well connected and immersed in this next industrial revolution (the fourth?) as Ian is makes a huge difference.
    After several plugs about your book I went to wob and secured the only copy they had. Can't wait to read it.
    Keep up this outstanding service to knowledge and true democracy.

  • @loumachugh9641
    @loumachugh9641 1 year ago +4

    I used to identify with Novara Media, but as a leftist working in tech, I find their takes on technology to be sensationalized and fearmongering. Instead, I recommend reading "The Myth of Artificial Intelligence" by Erik J. Larson and "Digital Detox: The Politics of Disconnecting" for a more accurate and balanced understanding of how technology impacts us. Novara Media means well, but there are better sources of information to address these concerns.

  • @tatreides8425
    @tatreides8425 1 year ago +25

    See Chomsky’s comments on AI - this is a hysterical distortion in human priorities. We don’t even understand our own brain/nervous system, and there is no tech in existence close to AI, as it is dependent on what we put in the programming.
    The real problem is the philosophy, hence the ‘economy’, that the tech is being developed in. Capital cannot meet the long-term basic needs of the majority of the population. Any future tech needs a sustainable economic base and scientific outlook to develop from - capital is unable to provide this.
    In fact, science is arguably going backwards due to being misdirected towards corporate profit and due to the creeping mysticism that currently dominates the capitalist class.

    • @Rybo4
      @Rybo4 1 year ago

      Correct

    • @jinxterx
      @jinxterx 1 year ago

      tech.

    • @LHKKKing
      @LHKKKing 1 year ago

      Nice

    • @michaelcole3896
      @michaelcole3896 1 year ago

      Flying machines don’t fly like birds. The brain is a Red Herring potentially.

  • @shadowofmyfutureself
    @shadowofmyfutureself 1 year ago +42

    We need to slow down everything except the rollout of renewables and rewilding

    • @b00ts4ndc4ts
      @b00ts4ndc4ts 1 year ago +10

      If only they invested as much time into saving the planet.

    • @TBOBrightonandHove
      @TBOBrightonandHove 1 year ago +1

      Lol. It is human civilisation that may need saving from its current trajectory. The planet and life in general will do just fine.

    • @shadowofmyfutureself
      @shadowofmyfutureself 1 year ago

      @@TBOBrightonandHove If you consider that, amid mass extinctions of fauna and flora, life on earth in general is doing OK, fair enough.

    • @honkytonk4465
      @honkytonk4465 1 year ago

      Nope!

    • @vsstdtbs3705
      @vsstdtbs3705 1 year ago

      Yes, we need rewilding, but that will not happen in a universal-suffrage democracy. At the beginning of democracy only men voted; it could have happened then. Now most voters are women. Men have lost their rights to put down their subjects, so we are overpopulated. Rewilding means old ladies cannot deforest for their rest home, cannot deforest for roads, cannot burn fossil fuels for transport, and have no landfill for their plastic.

  • @Learna_Hydralis
    @Learna_Hydralis 1 year ago +9

    Very articulate guest. It was a great watch, thank you.

  • @scrampop3353
    @scrampop3353 1 year ago +14

    We desperately needed the mainstream press and media to have had these types of discussion years ago, but they abdicated all responsibility for it, and now we have a general population ignorant of these incredibly fast-moving scenarios, with no skills to adapt. Can't help but think we are heading towards a more polarised and disparate society, as Hogarth mentioned in the closing minutes, both at the macro level of nations and more so at the human scale; altruism and empathy are not a natural language for either technology or capital markets.

  • @johnbwill
    @johnbwill 10 months ago +2

    Ian comes across as a very moral and authentic guy. And the interviewer did a great job at asking and listening. Quality on both sides. Thank you.

  • @rangerCG
    @rangerCG 1 year ago +4

    Something I just thought of is that with an increase in intelligence in animals comes an increase in morality. It's easy to see all the bad that humans do, and we do a lot, but if you think about how things are in the natural world, it's incredibly brutal. The long term trend of humanity is an increase in morality, equality and quality of life.

  • @neilgordon1981
    @neilgordon1981 1 year ago +53

    I enjoyed the interview, but it's a shame the viewpoint of the 'technological left' wasn't more part of the questioning - the idea that large corporations who already have the most powerful versions of this technology are scaremongering and rushing to lock out competition from the public domain (open source).
    They stand to lock in their leading positions by being part of the process of regulation. Is making sure only a few very powerful players get to sell services to everyone else who wants AI more powerful than a toy sensible?
    I quite like Yannic Kilcher's perspective (youtube ai tech communicator who actually plays a part in the open source model game) on these things, but I believe the Electronic Frontier Foundation has a similar perspective.

    • @staninjapan07
      @staninjapan07 1 year ago +2

      Can I ask you about that?
      What do you mean by 'technological left'?
      Do you see a political left/right dichotomy in the AI world?
      I imagine it must exist, more through the people involved than through the tech itself, but I am quite new to this discussion.
      Thanks.

    • @robertweekes5783
      @robertweekes5783 1 year ago +1

      Of course the big players want to pull up the ladder behind them, but they need to be regulated too! No more large scale training farms. AGI is too risky and a path best not taken.

    • @robertweekes5783
      @robertweekes5783 1 year ago

      @@staninjapan07 Well, big tech is already very left leaning because it comes out of Silicon Valley, a massive lefty hub. Social media censorship has proven it time & again

    • @yourlogicalnightmare1014
      @yourlogicalnightmare1014 1 year ago

      This is why the government would never allow anyone to own an anti-gravity device or tons of other tech that humans can't be trusted with 😂

    • @41-Haiku
      @41-Haiku 11 months ago

      I don't think this conspiracy theory holds water.
      - Do you really think AI Safety advocates and at least half of machine learning experts are all claiming the technology is dangerous for cynical reasons, rather than because that is what they really believe?
      - Do you think claiming that your own company needs strict regulation applied to it from the outside is the best way to prevent new companies from joining the industry and catching up?
      - Do you think the technical experts who have been warning about the dangers of AGI since the earliest days of computers were somehow all in on it?

  • @user-ic8rg6ke1w
    @user-ic8rg6ke1w 1 year ago +3

    A great presenter. Bringing the message home with simplicity, but with a hard core to crack. Thank you.

  • @clarewillison9379
    @clarewillison9379 1 year ago +6

    Two questions: 1. Why are a few private investors racing to create a super AI? It’s rhetorical; the real question is: who will be the high bidders they’re competing for? 2. Have none of them read Asimov?
    Since the first human picked up a rock and hit another human with it, our capacity for wiping ourselves out has been demonstrated in countless ways, but they’ve all had limitations or ways to be countered. But innovations like nuclear weapons and plastics show what happens when an idea is set in motion by those assuming they’ll deal with the problem of disposal and clean-up later (meaning someone else will).
    Reaping the harvest (or whirlwind) of a god-level AI might still be a way off, but when you have been seeding more and more AIs, all it takes is one or two iterations to germinate faster than anticipated (and/or cross-pollinate) and we could be on track to be overrun by a Japanese-knotweed scenario.
    Maybe it’s the anxious aunt in me but I always want to know, “have you tested the brakes?” With so much technology of the last couple of centuries the responses might well have been, “oh, we won’t need brakes”. Cue minor apocalypse…

    • @jamesgravil9162
      @jamesgravil9162 1 year ago +1

      "Roads? Where we're going, we don't need roads."
      - Doc Brown, _Back to the Future_

    • @bernardofitzpatrick5403
      @bernardofitzpatrick5403 1 year ago +1

      Well said 👍

    • @41-Haiku
      @41-Haiku 11 months ago +1

      This is an excellent formulation of the problem; I hadn't quite heard it put that way before.

  • @blindinbluesmusic
    @blindinbluesmusic 1 year ago +16

    Thank you, Novara Media, for keeping us informed - absolutely informative, with intelligence and depth.

  • @jamietulacz7742
    @jamietulacz7742 1 year ago +7

    Really great interview Aaron, love this sort of content!

  • @jrvr4538
    @jrvr4538 1 year ago +26

    A definition of "intelligence" would have been helpful (artificial or otherwise). As I understand it none of these language models actually have the capacity to think. Doing a bar exam repeatedly and "learning" the correct answers doesn't mean the model has the ability to apply the learning in a real world context. Normally love these interviews too, but Aaron was absolutely credulous in this one. The tech sector has a huge talent for hype and PR and sucking up public and private money. How is this most recent hype cycle any different from the dozens that have come previously? Self driving cars are right around the corner, really? A few years ago a friend was telling me that he had been at a conference where they had discussed how the trucking industry was on the verge of being disrupted by self-driving vehicles, and that we were needed to prepare for the mass unemployment in the sector that the technology would cause. Nearly a decade later, the vision isn't any closer to becoming reality.

    • @noelmagann1
      @noelmagann1 1 year ago

      It's like we are supposed to continue to think big tech will help the world. But the people in this latest con job are in it for the money just like all the others. It's a short-term stock-pump rug pull waiting to happen, just like with Theranos. All the slimy grifters (like this sweaty, greasy, wide-eyed, lying-through-his-teeth snake) come out of the rocks they live under to make a few quid. I don't really care if individual idiots want to lose money, but countries over-invest state funds into these things and lose all the time, and it's the poor that get fcuked over. Just like the Silicon Valley Bank collapse. Meanwhile all these circle jerkers get a few new houses and cars. In Ireland, where I live, civil servants are having conferences where they have to sit there and listen to these TED-conference douchebags over-promise on "AI", and it is all for stealing public money.

    • @AlbertoGarcia-wd7sc
      @AlbertoGarcia-wd7sc 1 year ago +1

      This is exactly my thought. Personally, I think it's no coincidence that just as the crypto fad dies, AI rises. So-called AI is not real; it's just software that copies and remixes things that already exist.

    • @bashful228
      @bashful228 1 year ago +5

      Trucking FSD isn’t there yet, but mainly because batteries are still in short supply and you can sell 20 or more Teslas from the same amount of cells, with much more markup, and that market is there to be won and lost today. The self-driving part is extremely difficult - not to get pretty good, but to get absolutely infallible in all weather, all driving conditions and all unpredictable behaviour on the road from animals, cars, pedestrians etc., all while moving 200 tonnes at 110 kph.
      Don’t be so confident that it’s all a century away and overhyped; it really isn’t.

    • @_obdo_
      @_obdo_ 1 year ago +9

      It would be really incompetent and/or dishonest if OpenAI were touting GPT-4’s performance on the bar exam if it had merely memorized the exam. And equally so for others to go around quoting the result. Do you really think that poorly of all these people? According to the GPT-4 technical report, “a minority of problems in the exams were seen by the model during training”. And then when they administered the exam to the model, they removed any questions that had been seen during training.
      You’re right that we were over-optimistic about the timeline for autonomous driving, but I think you’re under-appreciating what’s happening with LLMs.

    • @thisusedtobemyrealname7876
      @thisusedtobemyrealname7876 1 year ago +3

      The keywords are "exponential growth." You are ignorant if you think it does not have capacity to "think". It can solve complex problems already on it's own without human input. What is that if not thinking? Before this AI meant a program that had strictly defined boundaries defined by human programmers. An AI program that can learn on it's own has huge effects on the society as a whole since it can and will learn exponentially faster than any human. If lots of people lose their jobs quickly to it en masse what do you think are the ramifications for the society as a whole? Violent riots come to mind since it is not so easy to get another job if the whole field you have been working on is completely replaced by AI.
      Whatever your stance on AI it is going to profoundly change the society we live in much quicker than anyone anticipated. When that happens is anyone's guess. I really hope we are ready for that change.

  • @WilliamThePayne
    @WilliamThePayne 1 year ago +28

    This was great! Do more stuff like this!

    • @noelmagann1
      @noelmagann1 1 year ago +2

      Yes. Give more time to capitalist con men! Totally a good use of everyone's time and attention. 😂

  • @possiblystevo9141
    @possiblystevo9141 1 year ago +5

    there’s a startling similarity between an unconscious AGI and a corporation in the way it’s described here

    • @GungaLaGunga
      @GungaLaGunga 1 year ago

      Yep. Good observation. I am a capitalist. However, we don't have capitalism. We have corporations gone wild and unchecked greed and corruption. Just like the current state of AI. Sam Altman is a liar, and hellbent on finishing what Mark Zuckerberg started. All in the name of greed, pride, profit, ego. All served up with a facade of 'good intentions', but it will bring the end of humanity. Whether the lies are intentional or not doesn't matter.

  • @shanek1195
    @shanek1195 1 year ago +47

    The problem isn't robots becoming more human-like, it's humans becoming more like machines.
    ~ Jean François Lyotard

    • @TheFormerkgbchief
      @TheFormerkgbchief 1 year ago

      We Are the Robots...

    • @underfellgirlsans2243
      @underfellgirlsans2243 1 year ago +1

      It has already happened to some degree, depending on the individual's tendencies.

    • @miguelclarkeottovonbismarck
      @miguelclarkeottovonbismarck 11 months ago

      Thank you for the quote. I have been searching for some time for thinkers who explicate my understanding of the evolution of Homo sapiens to robosapien: the majority of fast adopters, and the minority who are living fossils, resistant to change, and only use digital drugs in an acutely limited way; old-fashioned, of course.

  • @vercingetorix1557
    @vercingetorix1557 1 year ago +15

    Fully automated luxury Armageddon anyone?

    • @frankshailes3205
      @frankshailes3205 1 year ago +1

      Plots from Blake's 7 and Doctor Who, forty to fifty years ago: AI takes over, pandemic wipes everyone out, nuclear war wipes everyone out, biological warfare wipes everyone out, climate change wipes everyone out. This is old, old stuff. Try something new.

    • @forestsunset9617
      @forestsunset9617 1 year ago

      A fully automated luxury society will never exist. If the rich have that level of tech then they don't need the plebs anymore.

    • @AgentsofRush
      @AgentsofRush 1 year ago +1

      @@frankshailes3205 There's a storm coming....

    • @41-Haiku
      @41-Haiku 10 months ago

      @@frankshailes3205 It's not clear what you mean by this. Nuclear war probably wouldn't kill literally everyone, but it could easily kill most humans and destroy civilization. We are incredibly lucky that hasn't happened yet. There have been several very close calls.
      Pandemics are also a very serious problem. It may be possible in the very near future to create pathogens that are extremely virulent and extremely contagious. If such a pathogen is released (intentionally or accidentally), the consequences could be as devastating as a nuclear war.
      Climate change probably won't wipe us all out, but we are fragile there, too. Rising temperatures could collapse ecosystems and lead to global food shortages, water shortages, more frequent extreme weather events, and so on.
      AI is the most dangerous thing on that list. A misaligned superintelligence (i.e. a very advanced version of current ML systems) would be able to influence its environment and choose the future state of the world. We have no idea how to control something smarter than us or design it with robust goals or teach it to care about our existence. World states in which humans and other life exist and are thriving are a very small percentage of possible worlds, so the most likely thing that happens if we create a superintelligence is that everything dies as a side-effect of whatever asinine goal the AI ended up optimizing for.

  • @fori6377
    @fori6377 1 year ago +5

    Amazing, this is the kind of forward looking thought that we need

  • @fungibu7184
    @fungibu7184 1 year ago +10

    The huge danger of a super AGI is its goal. Intelligence isn't measured by how altruistic or moral your goals are, but by how efficiently you can strategise to accomplish any goal asked of you.

    • @41-Haiku
      @41-Haiku 11 months ago

      Bang-on. The danger of a highly capable agentic optimizer (AI system with a goal) is in its capability. A sufficiently capable optimizer can choose any possible future and bring it about. Most possible futures don't contain humans.
      Add to that the fact that we don't know how to instantiate our intended goals into such a system (especially in a way that correctly generalizes), and boy oh boy do we have a problem!

  • @mheiseus
    @mheiseus 1 year ago +10

    The sad part is that humans think robots will think like us but without consciousness... what if robots think in another way?

    • @FrediBach
      @FrediBach 1 year ago +2

      Even humans themselves have wildly different thinking styles. Some, for example, don't have an inner voice. And consciousness is probably not even as important as most think. We have known for a while that decisions can be tracked in the subconscious long before they bubble up to consciousness.

    • @tancreddehauteville764
      @tancreddehauteville764 1 year ago

      You can programme the AI to follow human ethical standards.

    • @41-Haiku
      @41-Haiku 11 months ago

      @@tancreddehauteville764 We can't "program" the AI at all, technically speaking. And if we knew how to get it to reliably follow human ethical standards, the Alignment Problem would already be solved, which it is not.

  • @dalvinderbasi3495
    @dalvinderbasi3495 2 months ago

    Wow fantastic interview. Absolutely wonderful. A very serious intellect being interviewed but very modest. Truly wonderful.

  • @GaryParris
    @GaryParris 1 year ago +4

    A good discussion. This is an important topic for me that most people rarely understand. It definitely brought up some questions as well. But anyway, good to see Novara at least discussing it rationally compared to any other media outlet.

  • @stephenblack8919
    @stephenblack8919 1 year ago +1

    Fascinating interview. Really enjoyed it.

  • @subcynic
    @subcynic 1 year ago +8

    Everything has changed. Nothing has changed.

  • @opanike87
    @opanike87 1 year ago +5

    Absolutely world-class work here again. Thanks, guys - to me this indicates the urgent need for revolution; the window on this is closing daily and exponentially.

    • @frankwhite1816
      @frankwhite1816 6 months ago

      Well said. Sadly we may have already missed the window. Let's hope not. Despite all the terrible things humans have done I feel like there's something worth saving there. Pizza? Good music? Something.

  • @gronkymug2590
    @gronkymug2590 1 year ago +7

    I agree with that - most people's thinking just glitches or freezes as soon as you mention to them that there will be no jobs which have to be done by humans. It's completely over their heads. Even politicians react in the same way. One of them told me that there will be jobs in a Multiverse of a kind (virtual-world jobs). And I said, no jobs means no jobs; AI will do those too. No response after that. They just can't imagine a world in which people can decide not to work :D

    • @LHKKKing
      @LHKKKing 1 year ago +2

      AI will start replacing humans not by type of job, but by the IQ a job requires.
      Today's GPT is around IQ 80. When it evolves to IQ 150 or 200, most people will be obsolete.

    • @LHKKKing
      @LHKKKing 1 year ago +2

      BTW, the average human IQ is only around 100.

  • @valq10
    @valq10 1 year ago +1

    This was simply an amazing discussion

  • @robertweekes5783
    @robertweekes5783 1 year ago +2

    General AI needs to be regulated now! No more large scale training farms. AGI is too risky and a path best not taken.

    • @farmerjohn6526
      @farmerjohn6526 1 year ago

      ChatGPT is not AGI. No amount of training will cause that. It's dumb, it has no will, it can't think, it's not Skynet, so relax. It would need a much bigger brain and much more training data and senses to even remotely be human-level AGI. And we don't know what that would be like, or if it would work. Currently ChatGPT is biased; if it were trained on the network it would be hopelessly confused. 😂

    • @41-Haiku
      @41-Haiku 11 months ago

      Agreed. We are way past the point of deliberation on whether it's dangerous. Arguments on that front are never reassuring. We have to stop now, throw everything we have at solving the Alignment Problem, and then we can reap the unfathomable rewards. We can't afford to jump first and try to invent a parachute on the way down.

  • @thomasswords6837
    @thomasswords6837 1 year ago +3

    It's hard not to see Hogarth as having an agenda to maximize all his investments in AI. I'm very skeptical of everyone talking about AI, but I'm *most* skeptical of those who fearmonger about it as a deceptive/counterintuitive way of driving investor excitement. I wish Aaron had pushed back with harder questions. Unfortunately, I find that he is unwilling to do so when he's ideologically aligned with his interview subjects. The most distasteful one was an interview he did with a China apologist, who basically denied the Uyghur genocide.
    This isn't to say we shouldn't be concerned about AI, but haven't we just seen totally unfounded hype about self-driving cars, cryptocurrency, and NFTs? I'm sorry if I don't automatically trust people who have a major economic incentive.

  • @thomaserandall
    @thomaserandall 1 year ago +7

    Aaron, please put your left hand down when the interviewee is responding - it’s a bit distracting in the camera foreground 😂

  • @gronkymug2590
    @gronkymug2590 1 year ago +1

    I think we should switch from a 1-and-0 kind of law to morality-based law. This would simplify things greatly and make companies more self-aware and watchful about the damage they do or may potentially do.

  • @skinnybreakfast
    @skinnybreakfast 1 year ago

    You did well Ian for the layman. Will try to find you talking to peers now. Thanks for your time.

  • @Saharah253
    @Saharah253 1 year ago

    Best session yet!

  • @JohnSilver-un8qy
    @JohnSilver-un8qy 1 year ago +3

    Competition in AGI research has become out of control, with Chinese institutions producing 4.5 times as many papers as American institutions since 2010, and significantly more than the US, India, UK, and Germany combined. Moreover, China is significantly leading in areas with implications for security and geopolitics, such as surveillance, autonomy, scene understanding, and object detection. We are in an AGI arms race.

    • @bashful228
      @bashful228 1 year ago +1

      You can say that about a lot of science areas - materials science, battery chemistry research. My father was a nuclear physicist and had lots of Chinese post-docs work with him on sabbatical in his lab. I think I read a stat that China and Burma are training thousands of nuclear physicists to master's level every year. Burma doesn't even have nuclear reactors or weapons - where are they working?!?

  • @aramis720
    @aramis720 1 year ago

    Great interview highlighting very very serious challenges ahead

  • @jeremyhadwin1676
    @jeremyhadwin1676 11 months ago

    One of your best, thank you

  • @tekannon7803
    @tekannon7803 1 year ago +1

    I am commenting after only 11:09 into Novara Media's already captivating interview with Ian Hogarth, and I will watch the entire interview later. I am not in the AI field, or business, or finance, or politics, or even the military - I am an artist and singer/songwriter. If I may, I would like to address what Ian said someone told him about AI's future: that the future could be lights out for humanity. What I think needs to happen is for AI people, especially the developers, scientists and technicians, to fully understand one very important thing: AI in all its aspects, in robotics and in sentient silicon beings, must never go past being one thing. Ian, I am sure this will come across as naïve, but I believe we have the ability to make AI the equal of the family dog in its future relationships with humans, in whatever form it (AI) takes. What do I mean by family dog? A husky is a powerful dog most likely bred from the wolf family - although I am only guessing - and yet a husky will treat a human baby with care. I don't know how else to say this, but we must tweak the silicon genes from the get-go so that no matter how powerful or intelligent AI in all its forms becomes, it must always be protective, obedient and friendly - with absolutely no exceptions - like the family dog is to humans. Lastly, I have no problem with AI doing art or composing music; we cannot stop machines from learning, but we can make robots with the sole purpose of helping humans progress. Period. It is to our peril if we ever allow a sentient silicon being to progress to being able, in even the most inconsequential way, to control us, or to be in charge of a company or a government department or, God forbid, to be a commander in the military, etc.

  • @JustJanitor
    @JustJanitor 8 months ago +1

    This is such a fascinating topic. Thank you

  • @nmk5003
    @nmk5003 1 year ago +1

    I will keep pushing for this: an interview with Roberto Unger might be the best thing that could happen for this segment.

  • @midgerhys5627
    @midgerhys5627 1 year ago +1

    In Malta petrol remained at €1.34/litre throughout the last 2 years. Electricity went up only slightly in price. Food has seen a 6% increase in price - nowhere near as much as UK. People care much more for one another here than in the UK. One reason why I left the UK to live here after Brexit.

  • @marktellez3701
    @marktellez3701 1 year ago +1

    Please realize that it is always "authors" and "researchers" who say these things. People who actually understand how LLMs work know AGI isn't here, and isn't close. They will take your job (with the help of people like me who use them to do my job better), but they aren't about to go Skynet on us.

  • @cleonawallace376
    @cleonawallace376 1 year ago +2

    The whole discussion of AI and quantum computing goes a bit above my very low-tech head, but Ian seems like a good person to bring some sense to the whole venture. And it's great to hear someone talking of Europe as a superpower and counterbalance to the US vs China dichotomy. I believe that (with a bit of socialist reformation!) the EU can become a real leader to help us achieve a better world.

  • @positivepolitics1
    @positivepolitics1 1 year ago

    Another brilliant conversation

  • @carysmeredith9909
    @carysmeredith9909 1 year ago

    Educational. Thank you!

  • @vas1900
    @vas1900 1 year ago

    Very cool video, thank you. Was very inspirational.

  • @nwickstead
    @nwickstead 1 year ago +8

    If you got to the end: whilst I don't deny that quantum computers are great, as a chemist with a background in quantum mechanics I'd point out that almost all chemistry is quantum, not just super-magnets. At points he goes a bit too tech bro, with no base knowledge.

    • @mitsterful
      @mitsterful 1 year ago

      He mentioned making drugs too.

    • @41-Haiku
      @41-Haiku 11 months ago

      I had the same thought. "Consciousness is quantum" is pretty much just "[magicky science thing] is related to [other magicky science thing], because they are both weird."

  • @ehalles4335
    @ehalles4335 1 year ago

    Super interesting, thank you!

  • @PhilippaBeale
    @PhilippaBeale 1 year ago +8

    Commenting on my small area of expertise and about something we all take for granted. Every letter of every typeface found in any programme on your computer starts its life by being drawn by hand. Car bodies still start their life in clay. Most CGI starts as a handmade model or drawing. The generation by AI after these primary handmade processes is fantastic, timesaving and marvellous. But, as someone said earlier, Mary Shelley got there first. It is what we do with AI that counts. On one level my vacuum is more intelligent than I am, until I switch him off.

    • @41-Haiku
      @41-Haiku 11 months ago

      That will be the case right up until the moment that AI outstrips human capabilities in all domains at all timescales. What we have on our hands at that point is a superintelligence, not a tool. We could try to turn it off, but... and hear me out here... what if it thought of that?
      When you play chess against Stockfish 15, I can't tell you exactly what moves it will make. If I could do that, I would be as good as Stockfish 15 at chess. However, I can say with complete certainty that you will lose.
      The more complex the domain, the wider a gap there is between human capability and the theoretical limit of capability.
      A superintelligence would not do anything that would cause humans to want to turn it off, until it was certain that it could prevent humans from turning it off.
      If we create a superintelligence without first solving the Alignment Problem, humans will lose control of our future.

  • @TedThomasTT
    @TedThomasTT 1 year ago

    Great guest, great interview 👏

  • @alertbri
    @alertbri 1 year ago

    What a great find, subscribed and off to follow your excellent guest! What a great Brit too! 🇬🇧

  • @davidmcculloch8490
    @davidmcculloch8490 1 year ago +3

    I have to say that Ian Hogarth has more faith in government than I do. Albeit he alludes to the obsession with neoliberalism and the lack of technological and investment leadership by government. Our government is seen to do something; rarely the right thing; never enough. He draws an interesting comparison with the attitude of the German government. Imo, raw greed is driving us to destruction. We need proactive direction and regulation.

  • @andrerios
    @andrerios 1 year ago

    When you start hearing the "you know..." nonstop and you cannot go on anymore...my head is exploding.

  • @lolwalters2936
    @lolwalters2936 1 year ago +3

    The more we "progress", the more our future looks like Soylent Green!

  • @helenswan705
    @helenswan705 11 months ago

    Love the way Hogarth gives lots of examples - makes it much more understandable. Soooo much to think about here.

  • @tomvandelft
    @tomvandelft 1 year ago

    Great interview 👍

  • @jals45
    @jals45 1 year ago +1

    Very interesting, informative, thank you. The comments below have made me smile.

  • @NoraGermain
    @NoraGermain 1 year ago

    Excellent conversation 🙂

  • @DiceDecides
    @DiceDecides 11 months ago +1

    AGI will take off once physical robots are integrated with the next language model; GPT-5 is gonna be huge

  • @ilovehanoivietnam2521
    @ilovehanoivietnam2521 3 months ago

    Absolutely fantastic podcast... I have watched it 3 times... Ian is so credible... powerful word, credible!

  • @KatieBadenhorst
    @KatieBadenhorst 1 year ago +1

    In some ways science fiction is both a blessing and a curse, because we've spent decades speculating about these changes, but now that they're coming people can't take it seriously.

  • @Josh-gp8gz
    @Josh-gp8gz 1 year ago +3

    Honestly quite disappointed in Novara for uncritically having this guy on. He makes money from making AI sound more impressive than it is. And I think the real danger of AI is making secure jobs insecure, meaning what this guy thinks "good AI" looks like is terrible for the rest of us.
    Can't help but feel Aaron was just really excited to have someone on who confirmed something from his book? Which, I mean, fair enough, but be more critical and pick someone who maybe isn't cashing in on blowing AI out of proportion? Just my thoughts.

    • @41-Haiku
      @41-Haiku 11 months ago

      Ah, yes. The "AI Safety advocates are all colluding to make AI sound super dangerous so that people will build it faster for some reason" conspiracy theory.

  • @skylark8828
    @skylark8828 1 year ago +1

    Politicians aren't taking this seriously; only if something catastrophic happens will they really listen. Let's see what happens in the next 2 years...

  • @drazen1972cro
    @drazen1972cro 1 year ago +1

    The AGI will be called Mother, the lab facility will be called Nostromo, and the parent company will be Weyland-Yutani.

  • @pnk2748
    @pnk2748 1 year ago

    Finally a thoughtful interview grounded in facts. I would put Ian Hogarth in charge of an AI control body in a heartbeat.

  • @mjc01
    @mjc01 1 year ago +7

    Perverse incentives: ie just leave it to "the market". There's also a perverse incentive for Water Companies not to give a shit about rivers, land and seas. It's called capitalism.

  • @colinbrigham8253
    @colinbrigham8253 1 year ago

    Thank you Aaron 😊

  • @stephenisienyi7726
    @stephenisienyi7726 3 months ago

    Ian Hogarth. Immensely talented. A true son of his people. God bless.

  • @schr4nz
    @schr4nz 1 year ago +1

    I keep saying to people, I want the machines to take our jobs, I want them to want to serve us, what I don't want is:
    1. for them to obliterate us out of existence or;
    2. for the techno-elites in charge to create a new underclass fighting for the scraps...
    The first might be a problem bigger than us; the second is a question for humanity and the way we structure society/government.

    • @41-Haiku
      @41-Haiku 11 months ago

      Same. Many people act like massive job-loss is the nightmare scenario, and I'm over here like... _"Huh? Isn't that the good ending?"_

  • @nicolasnott2739
    @nicolasnott2739 1 year ago

    Great stuff

  • @komalshah7033
    @komalshah7033 2 months ago

    Wonderful interview

  • @buster6093
    @buster6093 11 months ago

    Brilliant, thank you

  • @pauljenkin297
    @pauljenkin297 1 year ago +1

    The thought of developing and using an AGI without knowing how to control it or turn it off if it becomes a threat to humanity seems like building a rocket-powered racing car without considering how to steer it and, more importantly, how to stop it and/or remotely apply a 'destruct' button. I'm not even close to understanding the issues or the potential issues; however, until science can find a way to have ultimate control over the existence of and direction of travel of AI/AGI, everything is telling me we should pause or stop its development.

    • @ivankaramasov
      @ivankaramasov 1 year ago +1

      I am extremely sceptical about whether it will be possible to safeguard AGI in an absolutely foolproof way

    • @NullHand
      @NullHand 1 year ago +1

      The publicly accessible "Open Source" LLM AIs apparently strategize (display Theory of Mind) at the level of a 9-11 year old human.
      Nobody intentionally programmed this (we think).
      Nobody knew it was doing this until a few months ago (we think).
      Who even knows what the big funders (Google, Tesla, Microsoft, CCP) have under wraps in research labs already?
      If, as it now appears, the traits we loosely bundle as "intelligence” are in fact emergent properties, then we might not even know when AGI, or even ASI occurs until well after it exists.

    • @41-Haiku
      @41-Haiku 11 months ago

      This is really it.
      We don't know how to control a superintelligence or align it with ~human values.
      We have evidence that a superintelligence may be hostile or uncaring by default.
      We are building a superintelligence anyway.
      It's just plain suicide.

  • @philipclarke6826
    @philipclarke6826 1 year ago

    By far the best of MANY treatments on AI. Thanks to both, from Australia!

  • @johnwalter8139
    @johnwalter8139 11 months ago

    Interesting to hear there are 5 years of annual reports of your perspective; I'm going to read them.

  • @tayl1r
    @tayl1r 1 year ago +5

    AI companies talk about this far-off doomsday scenario because it doesn't change their current business model or address the more immediate impacts of AI on jobs and privacy. And it makes the tech seem more impressive than just guessing letters and colours. This interview is really, really annoying, with most of it being Aaron seemingly starstruck while Ian pushes the exact narrative he wants. It is no surprise that almost immediately after this, OpenAI, portrayed here as really caring about AI safety, threatened to leave the EU if AI was regulated.

    • @BMoser-bv6kn
      @BMoser-bv6kn 11 months ago +1

      I don't think ten years is really all that "far-off" : (
      Hopefully you're aware of the rate of doubling and how many parameters they're at, or you're just stumbling around in the dark

    • @41-Haiku
      @41-Haiku 11 months ago

      Yeah.... superintelligence might be 5 years away. I don't consider 5 years to be far off.
      On a snarky note...
      _Somebody better tell MIT to improve their coursework, because it turns out you can ace tests in their STEM programs by just guessing letters and colors._
      _Same goes for the bar exam, IQ tests, literary criticism, mathematics, chess, novel coding tasks, theory-of-mind tasks, logic puzzles, etc._
      The point is... what matters most is what a system can _do,_ not whether we think the mechanism is silly. State-of-the-art language models can do many things more intelligently than either of us can that it was never designed to do. That's pretty impressive for a machine, and pretty dangerous at a modestly higher level of capability.

    • @tayl1r
      @tayl1r 11 months ago +1

      Look, I've got a lot of thoughts on this tech, but I'm running the risk of being sidetracked by the *exact distraction* I was talking about: This is having a negative impact NOW. And tech bros are trying to sweep it under the carpet by talking about a fanciful future.
      All these models are trained on human work and rarely with permission. Stolen artwork and scraped websites make up the majority of training data. You can literally buy pre-trained models of a specific artist's portfolio packaged up so you can clone their style immediately. Created for profit without permission of that artist, and with no compensation for that artist.
      And the large Writers Guild strike happening right now is, in part, about AI being used for scripts. Not GOOD scripts, the tech isn't there yet, but having writers out of the room and contracting them to tidy up an AI script is cheaper and takes away their control.
      This is without getting into the underground of deepfakes and unconsenting "adult entertainment."
      And you're right. This technology is going to get better: in a couple of years the hands won't be backwards, they'll put in the correct number of teeth, and eyes won't look like they've got spider eggs in them. Which is more reason to be sceptical and acknowledge the very real impact on creators and people, rather than going gaga for a tech investor talking about how concerned he is about Skynet.

    • @tayl1r
      @tayl1r 11 months ago +1

      But also (and I realise I'm writing a page of A4 on a month old youtube video), if you want to be snarky, the point of that sentence was that this technology is not doing what it is described as doing. The “artificial” in AI doesn’t mean digital or inorganic; something that is intelligent but not done with a squishy brain - it means it is fake. A magic trick. It is intelligent in the way a videogame character is: curated specifically to resemble something real, but under the hood doing none of the actions we would consider rational thought or learning.
      ChatGPT's innovation wasn't in LLMs but in making them accessible to a layman. The tech goes linearly, one word at a time. It doesn't know what sentence it's writing, it doesn't have ideas; it is a very sophisticated auto-complete that has been curated with the goal of looking like human writing. Which it does very successfully. But in regard to superintelligence or godlike intelligence we still haven't really achieved step one.
      Did you know a lot of these bots actually have a human being between the user and the bot approving what it says? And that's the reason they stopped being super racist?
      So I’m more concerned with the real impact rather than the smokescreen from AI companies who are, to be clear, run by capitalists trying to become trillionaires and not philanthropists concerned with AI safety. Aaron didn't address any of this. He gave them exactly what they wanted and looked like a awestruck child while doing it.
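As a purely illustrative sketch of the "one word at a time" point in the comment above (a toy stand-in, not a description of how ChatGPT is actually built - the bigram table, token names and weights here are invented), an autoregressive text generator repeatedly scores candidate next tokens given what has been emitted so far, appends one, and loops, with no global plan for the sentence:

```python
import random

# Toy "language model": hand-made bigram weights standing in for a neural
# network's next-token scores. All numbers are invented for illustration.
BIGRAMS = {
    "the": {"cat": 4, "dog": 3},
    "cat": {"sat": 5, "ran": 2},
    "dog": {"sat": 1, "ran": 4},
    "sat": {"down": 6, "quietly": 2},
    "ran": {"away": 5, "home": 3},
}

def next_token(prev: str) -> str:
    """Sample one next token given only the current context (here: the last word)."""
    candidates = BIGRAMS.get(prev)
    if not candidates:
        return "<eos>"  # nothing plausible follows; stop generating
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(prompt: str, max_tokens: int = 10) -> str:
    """The autoregressive loop: score, pick a token, append, repeat."""
    out = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(out[-1])
        if tok == "<eos>":
            break
        out.append(tok)
    return " ".join(out)

random.seed(0)
print(generate("the"))  # e.g. "the dog ran home"
```

A real LLM conditions on the whole context window rather than just the previous word, and its scores come from a trained network rather than a lookup table, but the generation loop has the same shape.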

  • @rick4electric
    @rick4electric 11 months ago +1

    I don't see what the rush is to make people obsolete! It seems like we are headed in exactly the wrong direction!

  • @xsandwich
    @xsandwich 11 months ago

    One of the biggest 'problems' is arguably the hyperbole; 'god-like', 'everything that humans can do' - without a body.

  • @harmonizedigital.
    @harmonizedigital. 1 year ago +1

    Things are already starting to break down. Look at the writers strike. The main issue is a guarantee that they won't use AI. Not just better pay.

  • @sterlingarcher4989
    @sterlingarcher4989 11 months ago +1

    A non-sentient hyper-intelligent AI is the scariest version, because it could make unemotional decisions, based on arbitrary information choices, which could affect humanity poorly. At least a sentient AI could consider a loss as an overall win.

  • @fazwazz
    @fazwazz 1 year ago +8

    Aaron asking some excellent questions.

  • @michaelporter6341
    @michaelporter6341 11 months ago

    Memories of 2001: "I'm afraid I can't do that, Dave." - HAL. Humour aside, I found this discussion thoughtful and intelligent. Ian Hogarth is admirable, especially the explanation around exponentials, which for me is the key element and poorly understood. The exponential drop in price for solar panels and batteries will lead to a world of low-cost electricity, which I see as a positive thing, as it has the ability to allow a more decentralized control of that resource. The exponential rise of AI, on the other hand, seems to be centralizing power in the hands of a small chosen elite, which I find worrying. How can the mass of people have any control over something like AGI?
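Since the comment above singles out the exponential as the key and poorly understood element, here is a tiny worked illustration (the starting values and rates are made up, not taken from the video) of how repeated doubling runs away from linear growth:

```python
# Illustrative only: a fixed yearly increment vs. repeated doubling.
linear, exponential = 1.0, 1.0
for year in range(1, 11):
    linear += 1.0        # grows by the same fixed amount every year
    exponential *= 2.0   # doubles every year
    print(f"year {year:2d}: linear = {linear:5.1f}   exponential = {exponential:7.1f}")

# After 10 steps the linear series has reached 11 while the doubling series has
# reached 1024. The same compounding, run as a steady percentage decline, is what
# a sustained exponential price drop (e.g. in solar panels or batteries) looks like.
```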

  • @bulentosmane
    @bulentosmane 4 months ago +1

    "things are never so bad that they could not get worse." lets try and prevent worse!

  • @DaveCharnock
    @DaveCharnock 1 year ago

    Weird how all the AI investors did an about-turn on whether it should be regulated only after Google admitted that proprietary models will never compete with open source. It's almost as if they have a financial interest in restricting its development to big private companies or sommat.

  • @tobiaszb
    @tobiaszb 1 year ago

    Self-awareness requires having a model of oneself and an evaluation of how successful that model is.

  • @frankwhite1816
    @frankwhite1816 6 months ago +1

    Excellent conversation, but you all need to do your metacrisis homework. AGI is not the path forward, and neither is any market-based socioeconomic system. We need a WHOLE NEW system, something based in holism, wisdom and compassion, or it's 'lights out' for the humans. Which, for me, to be honest, sounds about right.

  • @VortexMotiveVision
    @VortexMotiveVision 8 months ago

    Fascinating.

  • @mawkernewek
    @mawkernewek 1 year ago

    A few bits of this made me think he sounded a bit like Keir Starmer, maybe it was just when he mentioned 'missions' 47:53

  • @aaaaaaaaabaaaaaaaaa
    @aaaaaaaaabaaaaaaaaa 1 year ago +2

    Hi, I would like to make one correction re: quantum computers - quantum computers are no more powerful than classical computers in what they are able to compute or what they would be able to simulate. The real power of quantum computers comes from being able to do certain things much faster than classical computers. The example most often used is cracking encryption: classical computers are able to crack encryption, it's just that it's really slow and would take millions of years, whereas on quantum computers we are able to do it in reasonable timeframes. So it is incorrect to say they can do things that classical computers can't; the things they can do are _exactly_ the same as the things classical computers can do, they can just do some of that stuff a great deal faster.

    • @dylancope
      @dylancope 1 year ago

      The fact that they are in the same computational class has no practical bearing on whether they can solve the same problems. I don't think you're making a correction here - you're mistaking "theoretically solvable with infinite resources" for whether or not we could ever actually solve a problem. Reducing this to "just faster" is a vast understatement.

    • @aaaaaaaaabaaaaaaaaa
      @aaaaaaaaabaaaaaaaaa 1 year ago +1

      @@dylancope Well, specifically he said something to the effect of "quantum computers allow you to simulate things in the real world that are quantum, which we have previously not been able to simulate, because we have been using something deterministic", which really gives off the impression that he thinks classical computers are simply unable to compute these things, but quantum computers can. Which is, of course, not true.

    • @john_hunter_
      @john_hunter_ 1 year ago

      Quantum computers aren't following the same algorithm as classical computers. They are fast because they can execute algorithms that are outside the realm of classical computers. They aren't just following the same algorithm but doing it faster.

    • @TheQeltar
      @TheQeltar 1 year ago +1

      @@john_hunter_ Shor's algorithm can be run on a classical computer, but it would be horrendously inefficient.

    • @aaaaaaaaabaaaaaaaaa
      @aaaaaaaaabaaaaaaaaa 1 year ago +1

      @@john_hunter_ The algorithms they run _can_ be run on classical computers; it's just that, at least for the algorithms we care about, their asymptotic complexity is much higher on classical computers than on quantum computers. But to say that we could not translate quantum computer algorithms into algorithms that do the same thing on classical computers is completely false. We can easily simulate quantum computers on classical computers, for instance.
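To make the closing point of this thread concrete, here is a minimal sketch (my own illustration, not something from the video) of simulating a tiny quantum circuit on a classical computer with plain NumPy. A classical machine can compute the same result; the catch is that the statevector has 2**n amplitudes, so memory and time blow up exponentially as qubits are added.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)

def apply_single(state, gate, qubit, n):
    """Apply a single-qubit gate to `qubit` of an n-qubit statevector."""
    op = np.eye(1)
    for q in range(n):  # build the full 2**n x 2**n operator via Kronecker products
        op = np.kron(op, gate if q == qubit else I2)
    return op @ state

def apply_cnot(state, control, target, n):
    """CNOT: wherever the control bit is 1, swap the pair of target amplitudes."""
    new_state = state.copy()
    for basis in range(len(state)):
        if (basis >> (n - 1 - control)) & 1:
            new_state[basis] = state[basis ^ (1 << (n - 1 - target))]
    return new_state

# Prepare a 2-qubit Bell state: H on qubit 0, then CNOT(0 -> 1).
n = 2
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |00>
state = apply_single(state, H, qubit=0, n=n)
state = apply_cnot(state, control=0, target=1, n=n)

probs = np.abs(state) ** 2
print({format(i, f"0{n}b"): round(float(p), 3) for i, p in enumerate(probs)})
# -> {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```

The doubling of the statevector with every extra qubit is exactly the asymptotic gap the commenter describes: nothing here is uncomputable classically, it just stops being feasible long before it stops being possible.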

  • @shadowofmyfutureself
    @shadowofmyfutureself 1 year ago

    The fact that Conservative politicians are still thinking about what happened to British Leyland IS really worth looking at - because that thinking is what brought the state to the state we're in in the UK now