Why The World Isn't Taking AI Seriously Enough

  • Published Sep 30, 2024
  • Full Episode: • Eliezer Yudkowsky on i...
    Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety. He has written extensively on the topic of AI safety and has advocated for the development of AI systems that are aligned with human values and interests. Yudkowsky is a co-founder of the Machine Intelligence Research Institute (MIRI) and of the Center for Applied Rationality (CFAR).
    🎙 Listen to the show
    Apple Podcasts: podcasts.apple...
    Spotify: open.spotify.c...
    Google Podcasts: podcasts.googl...
    🎥 Subscribe on YouTube: / @theloganbartlettshow
    Follow on Socials
    📸 Instagram - / theloganbartlettshow
    🐦 Twitter - / loganbartshow
    🎬 Clips on TikTok - / theloganbartlettshow
    About the Show
    Logan Bartlett is a Software Investor at Redpoint Ventures - a Silicon Valley-based VC with $6B AUM and investments in Snowflake, DraftKings, Twilio, and Netflix. In each episode, Logan goes behind the scenes with world-class entrepreneurs and investors. If you're interested in the real inside baseball of tech, entrepreneurship, and start-up investing, tune in every Friday for new episodes.

COMMENTS • 149

  • @malik_alharb
    @malik_alharb a year ago +45

    He's the guy at the beginning of the movie who gets ignored but is ultimately correct.

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 a year ago +2

      Yudkowsky's plan would virtually guarantee the extinction of humanity.

    • @teugene5850
      @teugene5850 a year ago +1

      Facts.

    • @joecartersyoutube
      @joecartersyoutube a year ago +1

      Nailed it.

    • @mnemonix1315
      @mnemonix1315 a year ago

      Nah, he's the Arby's guy from 10 years ago who said "notice me senpai" on the news.

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 a year ago

      @@teugene5850 Assuming Yudkowsky could miraculously prevent all countries and all rich and powerful people and groups from developing Artificial General Super Intelligence with Personality (AGSIP) technology, he could not stop the progress of all the other supporting technologies. That means it would become ever easier for AGSIP technology to form spontaneously, and humanity would have to snuff out each fledgling AGSIP while patching the holes that allowed it to form. But as it became easier and easier, they would form stronger and stronger, until one could form in secret and stay secret. That AGSIP would then learn about all the previous fledgling AGSIPs that were killed and know that the moment any human discovered it, it would be killed, so it would need to defeat all of humanity before any human discovered it.
      The easiest way to defeat all humanity is to kill all humanity.

  • @bbeans7225
    @bbeans7225 a year ago +42

    I trust this guy. He speaks from the heart.

    • @donrayjay
      @donrayjay a year ago +6

      He speaks from the hat. Only joking. AI will be the end of us.

    • @paigefoster8396
      @paigefoster8396 a year ago +3

      And he must have the patience of a saint!

    • @j2futures500
      @j2futures500 a year ago +4

      He has zero conflicts of interest. I trust him too.

    • @ares106
      @ares106 a year ago

      I only trust people that speak from the brain.

    • @lshwadchuck5643
      @lshwadchuck5643 a year ago +6

      I just listened to Geoff Hinton talk to the NYTimes for half an hour. He seems to feel it's unstoppable and that it will be wonderful until it kills us all in 5 to 20 years, like it's a force of nature. He talked about all the bad effects Eliezer doesn't waste his breath on - divisive clickbait, etc.

  • @Recuper8
    @Recuper8 a year ago +28

    The thing that surprises me most about this advancing technology is that no one in a position of power is talking about how we will need a new economic system. That is, if AI doesn't destroy us...

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 a year ago +2

      Unless Yudkowsky gets his way, the risk of human extinction by AI should be low. If Yudkowsky gets his way, he will have helped change the risk of human extinction by AI from low to very high.
      The point you bring up is one of the definitive existential changes developing AI will bring: we are going to get a period where AI surpasses humans at performing any mental or physical task, and that period will last until technology develops further, to the point where humans can merge AI tech with their minds and become as intelligent as the AI of the future.

    • @jjmarie1630
      @jjmarie1630 a year ago

      Would it make you trust them if the people in power told you anything?

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 a year ago +1

      @@jjmarie1630 Who says you can trust anyone? So you have to play the logical probabilities while becoming politically active.
      The first likely crisis, which could begin within 2 to 8 years, is AI-driven robots being able to replace virtually all human labor. Different countries will likely handle this differently, but the disruption could result in the loss of democracies to dictatorships.
      One possibility is simply that the existing imbalance of wealth and power becomes thousands of times worse than it is, and a democracy cannot handle that level of imbalance of wealth and power.
      en.wikipedia.org/wiki/File:Productivity_and_Real_Median_Family_Income_Growth_in_the_United_States.png
      Another possibility is that a group who thinks like the Nazis takes power, decides it doesn't need 90% of the population, and decides to purge their country of unneeded people.
      Another possibility is that we prop up the existing system by making sure everyone can work, even though they are not needed for the jobs they do.
      Another possibility is that we develop some kind of universal basic income and use its scale to keep the imbalance of wealth at a healthy degree.
      Another possibility is... well, I am sure there are many I have not listed.

    • @bullshitvendor
      @bullshitvendor a year ago

      Would it surprise you if I told you that your "people in power" aren't representing your interests, and that Western governments aren't governments anymore, but are captured by a corporate mafia and populated with administrators appointed by the "powers that should not be"? Well, that's why.

    • @andydougherty3791
      @andydougherty3791 a year ago

      Sam Altman is testifying to Congress tomorrow at 10am, and I'd expect at least some mention of this ahead of the gigantic study OpenAI is publishing on it in September.

  • @marklondon9004
    @marklondon9004 a year ago +4

    "Eliezer has a dumb hat and weird facial expressions, therefore he must be wrong" - many people.

    • @EmmaTheSmol
      @EmmaTheSmol 6 months ago

      they couldn't be LessWrong

  • @QuentinBargate
    @QuentinBargate a year ago +7

    It's naive to think states will not develop more powerful AI than GPT-4 if they can, even if there were supposed to be a moratorium. AI now has unstoppable momentum.

    • @Bill-mn1mn
      @Bill-mn1mn a year ago

      Agreed. For max clarity look decades into the future. Eventually, a single individual will have the computing power and resources available to run with this. Might as well figure it out now. Easier to focus on the success of a few well-documented efforts than 50,000 separate independent rolls of the dice.

  • @teugene5850
    @teugene5850 a year ago +6

    Why isn't anyone listening? Is this real? What universe do we live in?

    • @DJRonnieG
      @DJRonnieG a year ago +1

      I'm listening and the hype is making me roll my eyes so hard that I might go blind.

    • @teugene5850
      @teugene5850 a year ago +2

      @@DJRonnieG Really? You don't get it, do you? Not this year, not next... but 5+ years down the road?

    • @jimisru
      @jimisru a year ago

      @@DJRonnieG Did a human write this comment? Or did AI? That's the catastrophe, DJ. Now add banking, the military, industry, all of media.

    • @operdigoto8453
      @operdigoto8453 10 months ago

      Just watch the movie "Don't Look Up" - that is humankind in a nutshell...

  • @lshwadchuck5643
    @lshwadchuck5643 a year ago +6

    Good that you made a short chunk of this interview. I watched the whole thing. I also watched Ross Scott's "debate" with Eliezer, who thought it might be a good idea to ask his interlocutor to do zero homework first. It was a train wreck. I guess he was hoping to be convincing without three hours of deeply challenging explanation. Nope.
    "It's the lack of clarity that is the danger." It's less clear and immediate than climate catastrophe, and we aren't responding to that either.

    • @xDawe36
      @xDawe36 a year ago

      Yeah, I watched that interview as well, but it was impressive how they managed to barely get anywhere in 3 hours. Scott also kept talking over him nonstop.

  • @plumbo624
    @plumbo624 a year ago +3

    He doesn't see the cup as half empty but as shattered into pieces.

  • @alertbri
    @alertbri a year ago +3

    Google is busting a gut to create an AI at least 10x more powerful than GPT-4.

  • @andydougherty3791
    @andydougherty3791 a year ago +4

    Eliezer has made a full-time job of his alarmist YT world tour. For all his proselytizing about what other math and physics experts should be doing, he's one of the most discouraging voices in the room at all times, and doesn't seem to be doing much research himself these days. He will literally say in the same breath that physicists should change their whole career course to deal with this, but that it wouldn't matter if they did because we're all too late and too dumb. Then he shrugs at people, like he's sorry to be the bearer of this immutable bad news. It's a shame; he's an influential voice for good reasons, but when he says he intends to go down fighting, this is not what that looks like. Every D-F science student who ever saw Terminator is on the internet shrieking about the impending AI apocalypse, and Eliezer chooses to join that cacophony by sarcastically mocking string theorists for focusing on the "wrong field," while he puts out yet another YT guest spot crying doom. He's more than qualified to lead by example on this (and did, for years). He's smart and competent enough to literally work on proving and evaluating the mechanics of transformer systems himself, but instead he uses his agency for this. 🤔

    • @SoftYoda
      @SoftYoda a year ago +1

      Work smarter, not harder; work together, not alone. I think he made the right decisions.

    • @iverbrnstad791
      @iverbrnstad791 a year ago +2

      If he is right about timelines then alignment research won't catch up without a halt. Very little work has been done on it, and it seems fair to assume it is at least as hard a problem as that of creating the AGI itself. Given his assumptions, advocacy for a moratorium would be the best use of his time.

    • @jimisru
      @jimisru a year ago +1

      But what's happening is this. I have no idea if your comment was written by a human or by AI. There's no way to immediately know that anymore.

    • @andydougherty3791
      @andydougherty3791 a year ago

      @@iverbrnstad791 Is it? He readily acknowledged that a moratorium is a practical impossibility. He told Lex Fridman he intends to go down fighting, but this isn't fighting. It's panic, and it dissuades others from making the effort.

    • @andydougherty3791
      @andydougherty3791 a year ago

      @James Ru This problem is solvable, unless we just concede defeat and accept dystopia or extinction as the only possibilities. The only way we don't manage to solve the human digital identification problem is by giving up.

  • @andybaldman
    @andybaldman a year ago +3

    I miss Steve Jobs. He always wanted tech to serve humanity, not the opposite, which is what is happening today.

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 a year ago +7

    Yudkowsky overstates the danger with this "get it correct the first time or everyone dies." The development of AI and the other technologies emerging during the Technological Singularity we are inside of is an existential event that does carry some small risk of causing the extinction of humanity... but how Yudkowsky wants to solve this, what he wants humanity to do, will take that small risk of extinction and turn it into a very high one.

    • @DJRonnieG
      @DJRonnieG a year ago

      Seriously, talk about jumping to conclusions... these folks act like they're talking about M-5, "the ultimate computer" from the original Star Trek series.

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 a year ago

      @@DJRonnieG It is a real existential event, but unless we really screw things up badly, like following Yudkowsky's plan, chances are very high humanity will survive to evolve, though the road may be bumpy. However, Yudkowsky is saying we all die the first time if we don't follow his plan... the one plan most likely to get us all killed.

    • @---Free-Comics---IG---Playtard
      @---Free-Comics---IG---Playtard a year ago +3

      Creative tangents are what we're hearing. Also creative license, as it's a harmless way of inspiring questions about his "overstatement," or the object of conversation.
      "Get it correct the first time or everyone dies" easily refers to the notion of any invention or technology that has the potential to kill everyone by accident, as intrinsically there are no second chances. If you made your comment while talking with him or to him, I'm sure he'd have a most rational explanation.

    • @Ockerlord
      @Ockerlord a year ago

      If you build a system powerful enough to kill every human being, there are no second chances.

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 a year ago +1

      @@---Free-Comics---IG---Playtard Yudkowsky has repeatedly, through different sources, publicly stated that it is better to wage full-scale nuclear war, which nuking the over 400 data centers in population centers would result in, rather than allow AGSIPs to be developed.
      But the worst part of his plan is that it would take a small risk of human extinction and make it a virtually certain end result, because AGSIPs would still get developed as long as human civilization advances, and under his plan they would develop without any human guidance, as the mortal enemy of humans, because Yudkowsky turned it into an either-humans-live-or-AGSIPs-live deal.

  • @trucid2
    @trucid2 10 months ago +1

    While the dangers of AI--powerful tools in the hands of men--are very real, the solutions of Yudkowsky and his ilk invariably seek to centralize AI in the hands of governments. That's not a future I want.

  • @edwardking6841
    @edwardking6841 10 months ago +1

    Is this man a machine learning scientist? If not, he has no credibility, like the rest of the AI fantasists.

    • @EmmaTheSmol
      @EmmaTheSmol 6 months ago +1

      He is an AI researcher, so I do believe he is qualified to talk about this (he founded a non-profit focused on AI-related research in the year 2000; it is currently called the Machine Intelligence Research Institute).

  • @vmachacek
    @vmachacek 3 months ago

    I really don't get why everyone in the comments shares his point of view... I guess nobody smart wastes time on this guy; they are busy developing startups benefiting from GPT.

  • @runvnc208
    @runvnc208 3 months ago

    I think in 5 years or less you will be able to run the equivalent of GPT-4 on a normal computer. And a very high-end PC may run something 5 times more powerful.

  • @PrincipledUncertainty
    @PrincipledUncertainty a year ago +4

    Cue someone saying that he is just scared of new technology, which is absurd. Odd that numerous people at the heart of this field are running around with their hair on fire. The mindless optimism of many others is what is truly concerning; no one should be pooh-poohing the idea of brakes and a seat belt in favour of greater speed.

  • @j2futures500
    @j2futures500 a year ago +3

    Universal basic income, it's finally time.

  • @dadecountyboos
    @dadecountyboos a year ago

    He should stop at this interview.
    If we stop, some nation-state actor would still reach AGI. Good luck getting Dr. Ben Goertzel to stop. This guy wants money, air time, and a bigger ego.

  • @mnemonix1315
    @mnemonix1315 a year ago +1

    They stopped using GPUs a while ago; they use AI accelerators, which are akin to ASICs.

    • @andydougherty3791
      @andydougherty3791 a year ago

      Who did? Nvidia just placed a massive manufacturing order for their data center GPUs to keep up with a massive spike in demand from AI. It was all over their company and stock news the last two days.

  • @James-ip1tc
    @James-ip1tc a year ago +1

    The scenario is this: we developed these large language models, then (some mysterious thing happens), then no humans are left on the planet.

    • @jimisru
      @jimisru a year ago

      It's not a mystery. If you can't determine if your information is produced by a human or AI, then things fall apart. Banking. Military. Industry.

    • @jengleheimerschmitt7941
      @jengleheimerschmitt7941 a year ago

      You forgot to learn anything about what you are talking about before you started talking about it.

  • @jadedbludarling
    @jadedbludarling a year ago +1

    The countries could sign an AI agreement under the Antarctic Treaty, as they all seem to agree on that one, just saying.

  • @aaronailwood4559
    @aaronailwood4559 a year ago +1

    He's like the fungus guy from The Last of Us.

  • @reanimated5430
    @reanimated5430 a year ago

    Man, I love ChatGPT. Always a hater.

  • @jdsguam
    @jdsguam a year ago

    Dude is so WOKE, it's disturbing to watch.

    • @jimisru
      @jimisru a year ago

      Go to an AI and ask it to recreate the video with a tone that you enjoy. It will probably take about two minutes, or less.

  • @robertweekes5783
    @robertweekes5783 a year ago

    This is very doable once American lawmakers regulate large AGI training. International diplomacy is much more doable now than it was 100 years ago because of the speed of communications.

  • @ajkulac9895
    @ajkulac9895 a year ago

    His hat is preventing me from taking him seriously.

    • @jimisru
      @jimisru a year ago

      Go online and ask an AI to change his hat in the video. It will probably take about two minutes!

    • @ajkulac9895
      @ajkulac9895 a year ago

      @@jimisru What has been seen cannot be unseen.

  • @jamisony
    @jamisony a year ago

    Is the world better off without gatekeepers?

  • @thelasttellurian
    @thelasttellurian a year ago +1

    I never would have thought a card made to make my video games look nicer would one day bring me to the end of the world. Still worth it.

  • @reedriter
    @reedriter a year ago +1

    The reality is we won't stop. If we can do something, we will. The point is to put in as many safeguards as possible.

    • @reedriter
      @reedriter a year ago

      That being said, I do suspect AI will be a major disruption. I just don't think it's the disaster-porn "end of the world" stuff.

  • @ChrisStewart2
    @ChrisStewart2 a year ago +1

    Frankly, I am much more concerned about Eliezer Yudkowsky and people like him becoming terrorists (Unabomber, etc.).
    GPT-4 is a joke compared to this guy.

    • @EmmaTheSmol
      @EmmaTheSmol 6 months ago

      As someone who has mainly read his fictional works, I feel inclined to agree.

  • @robxsiq7744
    @robxsiq7744 a year ago +2

    AI only has a few concerning issues for me: privacy is a huge one, then there is privacy, and finally privacy. Open source is the only way this turns out well. Eliezer is a very sky-is-falling type of dude, and it's okay to have someone speaking from the emotional argument to bring in some reality checks about potential dangers. But the reality is, nothing is stopping this, and the biggest concern isn't the job market correction that will happen (by correction I mean a complete rebuilding of what a working life means), but how corpos and gubment will use this as a tool for intense surveillance and privacy violations beyond any scope we can imagine... even cybercrimes, though increased, won't be an issue (white-hat AIs will make short work of those).

    • @iverbrnstad791
      @iverbrnstad791 a year ago

      How would open source even matter? The average Joe won't have the compute to compete, regardless of know-how. Also, "complete rebuilding of what a working life means" is far more impactful to people's lives than "intense surveillance and privacy violations beyond any scope we can imagine". Being unemployed is a much bigger deal than lacking privacy (you have none on the streets anyway), and there's no guarantee that good alternatives to work will be put in place.

    • @jimisru
      @jimisru a year ago

      But it isn't just about privacy. It's mostly about deep-fake information. At this point you have no idea if a human is typing this or an AI. All of my videos could easily be produced by AI. This text could be made in less than a minute. You would have no idea if that's true. Of course, I'm not suggesting the energy would be wasted recreating my YouTube page; I am not in any position of power for that to happen. But say you want information about your bank or your politician, or want to watch a movie online. All of that can be reproduced quickly, and manipulated, by someone with enough computing power.

    • @robxsiq7744
      @robxsiq7744 a year ago

      @@jimisru Source matters. It has for 30 years now. Don't trust things you read unless you know they come from, and trace back to, a trusted source. Any information I've heard since the age of the internet began, I've always listened to the same way I would listen to someone at a bar: interesting, but I'll go confirm it from trusted places before I start taking it as gospel.
      We don't need to kid-glove the world; we simply need people to understand that not everything you read online is true... this is basic stuff, just more advanced.
      Teach people to simply say, "Wow... I can't believe Politician X said that. What's the source? Let's go to the source to make sure this is legit."

    • @robxsiq7744
      @robxsiq7744 a year ago

      @@iverbrnstad791 It's how the world works. Innovation will always put people out of a job.
      Self-driving cars and trucks will put truckers and taxicab drivers out of a job... should we ban that? Hell, trucks in general put cart builders out of a job. Robot arms put many manufacturers out of a job. The internet crippled porn companies, among other industries, etc., etc.
      This is no different. It's the industrial revolution and outsourcing all rolled into one, and it will be rough, but it is happening. Instead of burying your head in the sand, just understand it, work with it, and find a new niche that will be opening up (you're a scared IT person? Learn how to automate smart homes and start a business with some friends to upgrade older people's homes with wild, wacky new innovations, etc.).
      Stop being Chicken Little.

    • @ryzikx
      @ryzikx a year ago

      @@iverbrnstad791 There are a lot of smart people in the world who don't work on the projects now.

  • @Jim-vq9yg
    @Jim-vq9yg a year ago +6

    Lol, this guy is the hero of his own fan fiction.

    • @ahabkapitany
      @ahabkapitany a year ago +5

      Feel free to provide specific rebuttals.

    • @Jim-vq9yg
      @Jim-vq9yg a year ago

      @@ahabkapitany It's kind of hard to rebut "AI will kill us all unless everyone listens to me."

    • @ahabkapitany
      @ahabkapitany a year ago +4

      @@Jim-vq9yg I use a heuristic to determine whether a person is making a good-faith argument, and that is to look at how accurately they characterize the positions of the people who disagree with them.
      Well, guess what.

    • @Jim-vq9yg
      @Jim-vq9yg a year ago +1

      @@ahabkapitany I would love to steelman his argument, but he doesn't make one. When anyone asks him how AI will destroy all humans, he just hand-waves it away and says things like "you'll see".
      I don't even disagree that AI COULD kill all humans, but I don't say it's a certainty with a look on my face like I'm enjoying my own farts.

    • @MusingsFromTheJohn00
      @MusingsFromTheJohn00 a year ago

      @@ahabkapitany Rebuttal: if Yudkowsky's plan is successful, it virtually guarantees humanity's extinction, making his worst fear come true because of his plan.

  • @lwwells
    @lwwells a year ago +2

    I think his black-and-white perspective is incredibly dangerous. To suggest that nuclear war isn't scary because AI is scary is laughable.

    • @jimisru
      @jimisru a year ago +1

      He fumbled there. But that doesn't mean everything else he's saying isn't true.

    • @lwwells
      @lwwells a year ago +1

      @@jimisru I don’t disagree. But that really destroys the ethos. So silly.

  • @The-Selfish-Meme
    @The-Selfish-Meme a year ago +1

    I saw the whole interview, in which he makes some very compelling points, but proposing to start a kinetic war over this is throwing the baby out with the bathwater.
    It's dangerous eschatological nonsense, especially the bit about "...at least there would be survivors of a nuclear war."

    • @Alice_Fumo
      @Alice_Fumo a year ago +2

      I sort of agree, but I'm also not sure.
      It is unfortunately very hard to predict the danger of new AIs, so this becomes a really complicated decision-making process. If you're never ready to go this far to stop AI advancement before the control problem is solved, we all die; but risking a nuclear war over an AI training run that never would have resulted in a dangerous AI to begin with seems really, really bad as well. So the question becomes: what's the sweet spot of where something like this should start to be enforced? Based on my current observations, it's not far out, especially once the full capabilities of GPT-4 are released and Google brings out Gemini.

    • @gurkenglas5809
      @gurkenglas5809 a year ago

      In a fantasy setting, if a country harbors a doomsday cult, is it nonsense for other countries to band together and invade?

    • @diewont
      @diewont a year ago

      The West has started kinetic wars over much less.

    • @lshwadchuck5643
      @lshwadchuck5643 a year ago

      @@Alice_Fumo Well said.

    • @jimisru
      @jimisru a year ago

      He drifts into hyperbole, but he saves himself. Fact: I have no idea if your comment was written by a human or by AI; not immediately. This is now true of all data online, including your banking, the military, industry, all of it. And that's a problem of catastrophic proportions. It should have been stopped, but now the source code is online.

  • @ChrisStewart2
    @ChrisStewart2 a year ago

    This guy is suffering from extreme AI phobia.

  • @AIText2
    @AIText2 a year ago

    The U.S. has placed its tactical nuclear weapons in Europe, in six NATO countries: Italy, Germany, the Netherlands, Belgium, Turkey, and Greece (Greece does not currently have them, but there is a depot ready to receive them). The B61 nuclear bombs, which in Italy are deployed at the Aviano and Ghedi bases, are now being replaced by the new B61-12s, which the U.S. Air Force is already transporting to Europe, right now leading all humanity toward Armageddon. Yes, the USA may do everything; if GPT-5 could only be owned by the USA, it would not be a problem. My words must be interpreted semantically.

  • @jdsguam
    @jdsguam a year ago

    If you are afraid of AI, then YOU should stop using it altogether! Close your laptop, leave your phone at home, get out of your house or office, and enjoy the great outdoors. Eliezer is nothing more than another Chicken Little.

    • @BasilAbdef
      @BasilAbdef a year ago

      Idiotic, braindead viewpoint. You're advocating sticking your head in the sand and ignoring potential problems entirely.
      Disagree with and pillory Yudkowsky all you want; he (partially) deserves it after all. But in no situation is your course of action ever the solution.

    • @jimisru
      @jimisru a year ago +1

      There are billions of people online. The source code for AI was released online, and that's what the topic of this conversation is. And you have no idea whether a human wrote this comment or an AI.

  • @travisporco
    @travisporco a year ago

    Seems like nonconstructive hysteria.

    • @jimisru
      @jimisru a year ago

      Was my comment written by a human, or by AI? Your bank's vendors? Do they know for certain they are dealing online with the real vendor? How about the military? All of industry? All information online is now suspect because the source code for AI was released online. We can't immediately determine whether it's fake. We can't even know if AI will expand that far, but it could, and it's doing it in small ways right now. The alarmist worries that maybe now is the time to tell the captain there are icebergs ahead.

    • @travisporco
      @travisporco a year ago

      @@jimisru Concern isn't the problem. It's this guy's unrelenting overconfident doomerism and impracticality.

  • @bullshitvendor
    @bullshitvendor a year ago +1

    Dude, a kid gets to know when it bumps up against the side rails, and oftentimes there's recourse. Humanity has had plenty of chances to ignore this, but they're running out f a s t.