Discussion on AI, consciousness, and future of society with Jacy Reese Anthis

  • Published 5 Jun 2023
  • I had a good discussion with sociologist and statistician Jacy Reese Anthis. I thought this one went really well; we talked about AI, consciousness, reality, some philosophy, and some directions society might be going in general. While we had different outlooks, I think I was able to get some answers I was looking for on AI. I think this chat was a good omen for future ones with people, as a side thing I'm doing.
    Some links mentioned:
    The AI risk chart I was referring to in this:
    i.imgur.com/SDkzYqI.png
    My favorite freeform poem:
    imgur.com/a/me1Gi6N
    Link to summary about Limits to Growth analysis:
    donellameadows.org/archives/a...
    Link to Jacy Reese Anthis's site:
    jacyanthis.com/
    accursedfarms.com
  • Science & Technology

COMMENTS • 439

  • @BenoHourglass 1 year ago +177

    I can't say that I agree with everything being said here, but this is a lot easier to get through than the last one.

    • @BortolanAlexandre 1 year ago +62

      Must be because the guy was not trying to sound smart while saying nothing of substance... just a guess though.

    • @neoteucer 1 year ago +36

      Yeah I don't 100% agree with this guy's points but he's trying to have a conversation and not just score points, and that makes him WAY easier and more interesting to listen to.

    • @Shamman_komanch 1 year ago +14

      Considering this is only the second one, it's a massive improvement; maybe they just have much better chemistry.

    • @BumboLooks 1 year ago

      @@BortolanAlexandre This new guest, however, was still saying nothing of any real substance. After the whole "ChatGPT and video game characters are sentient" bit of his, I couldn't take anything he said seriously. He is an academic welfare parasite.

    • @OneMoreDesu 1 year ago

      Yeah Jacy seems cracked

  • @5even5eals 1 year ago +78

    00:29 recap: how this discussion came to be
    01:57 guest introduction
    04:24 q: why Jacy started this discussion
    05:42 q: ethics of sentient AI
    10:23 q: intelligence vs agency
    11:29 q: is AI sentient yet?
    12:57 q: is ethics applicable to the current AI?
    16:33 rant: you don’t have to worry about NPC ethics
    18:56 q: is human a giant abacus?
    25:43 doing things on your own as a sign of free will
    29:35 q: does the origin of intelligence matter?
    30:55 q: how possible is AI?
    34:35 we may recreate intelligence without recreating the brain
    36:11 q: how do we determine consciousness?
    39:56 Ross on determinism
    42:31 Jacy’s sentience benchmark
    48:21 q: why can't we simply improve current programs instead of making AI?
    56:48 q: consciousness test
    1:00:11 q: pain test
    1:04:16 q: pain test substitute
    1:08:04 q: where are we on AI technology S-curve?
    1:18:55 AI untapped potential
    1:21:43 recurring thoughts as a sign of agency
    1:23:13 existential risks & AI
    1:37:42 q: AI & ethics
    1:40:22 q: how to establish AI ethics
    1:44:42 q: Ross on autonomous weapons
    1:53:37 q: AI & reaching long-term global goals
    2:07:57 AI possibly preventing environmental collapse
    2:16:39 q: bigger-than-AI risks
    2:35:00 q: future trends predictions
    2:49:01 q: Metaculus
    2:52:05 q: Limits to Growth
    2:54:35 q: Lucas critique & Efficient market hypothesis
    2:56:28 q: AMD & AI race
    2:58:03 q: AI models on consumer hardware
    3:00:22 q: Dream software
    3:03:09 q: polyphasic sleep
    3:10:34 q: utilitarianism
    3:18:28 q: autonomic nervous system & free will
    3:21:10 discussion summary
    3:22:39 q: heroes & inspirations
    3:24:52 building AI to understand ourselves
    3:26:03 q: human vs AI fundamental difference
    3:27:27 q: allowing AI to have emotions
    3:28:03 q: why AI is being made for artistic and not menial tasks
    3:30:20 q: Laws of robotics
    3:31:09 q: fav fantasy stories
    3:33:44 Ross’s last words

  • @undecidedmajor1664 1 year ago +24

    This is so much better. I couldn't watch more than 10 minutes of the last one. Guy was a walking talking red flag.

  • @2782Jack 1 year ago +70

    Always quality from Ross. But I have to say this speaker is much better than the last one; he's much better at communicating his ideas and articulating his thoughts.

    • @W4iteFlame 1 year ago +14

      And listening too

    • @Walczyk 1 year ago

      Yeah pwned LOL

    • @boohoo5419 1 year ago +3

      Yeah, but he's really bad at AI. He's coming from a way too philosophical perspective, meaning it's just blah, blah, blah and nothing you can pinpoint. Plus, he's lacking a big chunk of actual computer science knowledge; if he would study the actual code and not the philosophical implications, he wouldn't be so confused. I think there are two ways you can see AI that are productive: from a mathematical standpoint or from a neuroscience standpoint. Both have stuff to contribute, but he isn't really knowledgeable in either of them. He's more like a fantasy book author, blasting what-ifs into the universe.

  • @TheSmart-CasualGamer 1 year ago +95

    Ah, another discussion about AI. Things still aren't beginning to make sense, though hopefully there will be more later!

    • @lordfriedrick7911 1 year ago +24

      Admiral are you ok?
      Usually things begin to make sense...

    • @TheSmart-CasualGamer 1 year ago +23

      ​@@lordfriedrick7911 Alright, I admit it, I am no Admiral...

  • @StraightRoyalFlush 1 year ago +17

    I think the real question we are all still wondering here is… Are clocks real?

  • @gordonfeetman2425 1 year ago +112

    I can confirm that this discussion was way better than the last one. In the last one I stopped listening part-way through because it was so god-damn annoying hearing this back and forth.

    • @SplendidCoffee0 1 year ago +77

      We did get my new favorite Ross quote from it:
      “I do consider clocks to be a real thing.”

    • @BenoHourglass 1 year ago +26

      I'm not sure if I agree with all of the answers, but at the very least he's not taking 30 minutes to answer the questions asked to him.

    • @dialecticcoma 1 year ago +41

      i saw the 2004 atheist fit and didn't bother

    • @SanAntonioJoker 1 year ago +15

      ​@@dialecticcoma based eye user

    • @1000niggawatt 1 year ago +2

      This discussion was just vague philosophical hand-waving. Yudkowsky has specific points.

  • @keatongoodwin3354 1 year ago +8

    Gotta say this was 10x better than the previous interview.

  • @DiegoTan66 1 year ago +112

    Glad you enjoy these interviews, but I hope the Game Dungeon is still coming - that's my favorite series on all of YouTube. Cheers, Ross!

    • @Accursed_Farms 1 year ago +155

      It is; I hit a technical snag on the one I was planning, and it may be similar to Trickstyle, where it will pay off to delay it. In light of that, I'm working to get a backup Game Dungeon out as soon as I can. It's my goal to have it out before the next videochat.

    • @slowfreq 1 year ago +7

      @@Accursed_Farms Yay! :-)

    • @legtendgav556 1 year ago +6

      ​@@Accursed_Farms Thanks man!

    • @W4iteFlame 1 year ago +1

      @@Accursed_Farms cool

    • @trustytrest 1 year ago +5

      @@Accursed_Farms Your level of transparency is my biggest reason for being okay with the wait between videos. Still hoping we'll see the movie in the next couple years.

  • @anthonycampbell97 1 year ago +139

    Reject Artificial Intelligence, return to Artificial Stupidity

    • @Leckaine 1 year ago +6

      That's just called trolling. However, we've gone far beyond artificial stupidity and have long since treaded into genuine idiocracy instead as a human race.

    • @More_Row 1 year ago +19

      @@Leckaine You really aren't saying anything with this.

    • @TaRAAASHBAGS 1 year ago +12

      I prefer my organic stupidity, thanks

    • @1000niggawatt 1 year ago +4

      "Artificial" is the wrong term anyway. "AI" is just demon summoning - you have your 20-line gradient descent, and you use it to find a mathematical demon that seems to do the task you want it to do. Nothing artificial there.

    • @ashleyrothn313 1 year ago +1

      @@Leckaine k

  • @Merble 1 year ago +36

    Haven't had time to watch yet, but I'm so glad you went from FearRokosBasilisk dude to a guy with actual credentials for this discussion.

  • @whoknows8264 1 year ago +32

    Much better than the other video. I believe I'm with Ross in viewing AI as a tool until proven otherwise. But the AI rights law brought up is a good idea to at least have in place in case it is needed.

    • @Yusuke_Denton 1 year ago +1

      The only "AI rights law" we need is one that states AI programs and robots are mere tools and shall only be treated as such, and that if any society attempts to grant more "rights" to the machines, it will immediately trigger a global Butlerian Jihad where we end this experiment once and for all.

    • @BlackMasterRoshi 1 year ago

      I feel like that's something big corporations will definitely hide behind while abusing the rest of us with their AI. To make such a law before it is even needed is just asking for it to be exploited.

  • @shamansshermans 1 year ago +7

    Getting a college level class right here vs "I'll answer your question with another question!"

  • @DawnOfTheOzz 1 year ago +18

    I'm a little surprised that when Ross brought up Star Trek, he didn't bring up Data. I mean, there was the episode where Data decided to create a robotic offspring which brought up the same ethical questions of whether it was sentient or had any capacity for emotions.

    • @takethisoops 1 year ago +3

      Or that ep when Picard had to defend Data in court because the science guys at the academy wanted to dismantle him (to make more Datas); still one of my fav ST eps.

    • @scumbaag 1 year ago +1

      @@takethisoops Yeah, I loved that one. I remember thinking "I would rather have the academy just execute Data". The thought of two or many of him... it would somehow kill him in a way worse than just killing him. Idk if you follow, hard to explain. Point is, I thought about that episode for like 2 weeks. Def a top 3.

    • @takethisoops 1 year ago +2

      @@scumbaag Yeah, I follow; I think that would just "dehumanize" him even more... also that part where Picard asks him why he kept Tasha's gift thingy if he can just record video/audio. The whole ep made me think more than any other ST ep.

  • @phasmidjelly1429 1 year ago +34

    Man, this is already so much better than the last conversation. I hope we can get more videos like this.

  • @felixprime8291 1 year ago +11

    1:45:00 lethal autonomous weapons are kinda dangerous in that they reduce accountability. If you can buy robots and send them to kill people with ease it becomes a lot harder to determine who is responsible.
    They allow the concentration of military power into the hands of a smaller number of people. Rulers, dictators etc. have always had the fear that ordinary people including those in their military would rise up against them. If their military becomes robots that removes restrictions and balances on their power.
    You also have the problem of wars being started without any human involvement: automated weapons systems glitch and start shooting each other, and things like electronic warfare make it difficult to get them to stop, and before you know it ICBMs are being launched. (This actually almost happened with the USSR; their automated systems glitched and flagged what they thought was a US first strike, and luckily the personnel manning the silo decided not to launch.) Humans actually tend to be very cautious about starting shooting wars, even if they take fire, and they are *especially* loath to launch nuclear weapons. Even in wargames it is a real struggle for the people who run them to get either side to launch nukes; e.g., in cases where both teams are cut off from each other and both are being told that the other is opening up silos, missile subs are surfacing, and there are possible nukes in the air, both civilian and military people taking part in these exercises will refuse to launch their nukes. Automated weapons systems *do not* have these natural inhibitions, regardless of how sophisticated they end up.

    • @Aubreykun 1 year ago +1

      If individuals can use killbots, that balances the 2nd point though.

    • @felixprime8291 1 year ago

      @@Aubreykun like individuals can use fighter aircraft?

    • @Aubreykun 1 year ago +1

      @@felixprime8291 If you're assuming that someone, or some small group can autonomously control an entire military then that includes a production line and resource-access.
      If that's possible for a tyrant, it's possible for individuals as keeping the tech out of peoples' hands would be impossible.
      If you're NOT assuming that, then you can still have revolts via sabotage and supply chain disruption, because "an army moves on its stomach" - this still applies when those stomachs are battery compartments and fuel tanks.

    • @philmcdonald3322 10 months ago

      @@Aubreykun In a scenario where a country's military relies on autonomous weapons centralized through small groups, dictators, etc., the issue isn't that no resistance could ever rise up. The issue is that when resistance rises up, be it through sabotage or supply chain disruptions, one individual far separated from the event can rely on a completely loyal military force which has no compunctions about killing civilians. You can have revolts, but they'll be a lot more bloody and a lot less likely to succeed; historically, no revolt has been successful without a significant amount of support from the military, and without officers or soldiers whose loyalty can be swayed, that's a non-starter.

    • @Aubreykun 10 months ago

      @@philmcdonald3322 Revolts often have support from foreign agents as well. Supply chain disruption is the most major way to halt any army, we have thousands of years of military history that shows this. Heck it's the most major way to halt any _economy,_ which a military is a major sector of.
      "Anything built by man can also be destroyed by man."
      You have to construct your scenarios realistically, which includes the limitations on resources that even a supposedly autonomous military has. Thus only the most key areas will be protected, opening others to sabotage from insurgents. Planes must land, ships must dock, tanks must refuel. And even if those locations are heavily protected, a vehicle en-route to or from is likely in a vulnerable state.
      Also, plenty of AI-controlled armor and aircraft and ships will, by necessity, be designed to engage with other large hardware in order to prevent random attacks on wildlife, noncombatants and friendlies, which would waste ammo and cause problems politically. This opens up more holes. The greatest threat to a current M1A1 Abrams is not another tank (even a good one such as a Merkava or another Abrams, not speaking only of the crummy T-90) but a human with an ATGM on some high ground.
      Even if you were to extend this to human-scale robots, those have plenty of the same issues human soldiers have, except they'd be extremely vulnerable to NNEMP weapons as well as having weak, fire-prone battery compartments. It would be incredibly expensive to make something with the resilience of a terminator in terms of materials, and even if you did that then having that few would be useless.
      Thus there's always going to be SOME human force involved, as infantry support, maintenance, oversight/handling, and so on. To assume otherwise is a situation where you don't have *just* an autonomous military, but tech so far advanced without ANY of it getting into private hands that it's not a checkmate, but a "win more" scenario for someone who would have succeeded in minimizing the chances of successful revolt decades prior.

  • @ShimyIa1 1 year ago +2

    I love podcasts where the last line is "I'm gonna stop the recording" lmao

  • @higgles9559 1 year ago +4

    Omg, I remember watching this YouTube channel when I was younger; coming back to it really gives off some great laughs. You probably won't see this comment, but thank you for being a part of my childhood.

  • @majorbarnulf2608 1 year ago +4

    As a student in IT engineering, I am impressed by this video.
    These are fascinating subjects, I greatly enjoyed exploring them through Ross's prism.
    I hope to see more of this kind of content.

  • @LessSuspect 1 year ago +14

    ChatGPT-4 passing the bar exam says less about the intelligence of AI and more about the sentience of lawyers.

    • @asj3419 9 months ago

      I wonder if this is just because these AI systems are good at things that people have written a lot about. A country-wide standardized test sounds like the sort of thing there are reams of material for, so the AI learns exactly what to do. Asking questions that are too specific/obscure to be on the test (and thus not commonly answered on the internet) will probably lead to worse results.

    • @burneraccount1218 1 month ago

      It doesn't say anything about anything other than what text they scraped off the internet, which included bar exams. You people are clueless.

  • @0ctothorp 1 year ago +42

    At the moment all of the AIs that we have are just something you've already got, but pumped up to 11. The chat-based AIs are those predictive text things you have on your phone that put in "and" or "the" after words, after being fed tens of BILLIONS of datapoints, in some cases over one hundred billion, and they're finding that bigger datasets are not helping. On a similar note, the AI art programs are similarly set up as denoising programs: what the image-based ones do is take an image that is random color noise plus a prompt, and try to "denoise" the color into what the tagged datasets show.
    I have more experience with the image-based ones because those are smaller and easier to train. You can see how they work by picking a character with a specific outfit and training on that character (Dreambooth or LoRAs generally need fewer than 50 images) but leaving out the tags for their outfit; you'll find that the AI starts assuming that character will always be wearing that outfit, because that's what it was trained with. If you do things properly and tag the outfit, it will be able to generate that character without that specific outfit/feature.
    The danger of these AIs is their use as tools. Once you can run a text-based AI on a Raspberry Pi, you will have a huge outpouring of propaganda or noise onto the internet from human-sounding bots, and the image generation AIs make it easier to do photoshop work and can replace jobs specifically in mass-consumption media, where people care a lot less about the end product (i.e. ads or the homepage of a support branch or something).
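[Editor's note] The "predictive text pumped up to 11" idea above can be illustrated with a toy bigram model: count which word follows which, then always suggest the most frequent continuation. This is a deliberately minimal sketch for intuition only; real chat models use transformers with billions of learned parameters, not a lookup table, but the training objective (predict the next token) is the same.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which; this lookup table is the whole 'model'."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Most frequent continuation, like the middle suggestion on a phone keyboard."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

Scaling this up means a longer context than one word, a vastly larger corpus, and probabilities learned by gradient descent rather than raw counts.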

    • @durchlogik2181 1 year ago +17

      Thank you! Exactly this. Everyone is too caught up in the buzzwords of salespeople and talking heads worrying about science fiction plots instead of talking about the concrete problems created. Your last paragraph sums up beautifully what those are, but if the discussion is centered on whether Siri dreams of electric sheep, it gets drowned out in the noise.

    • @yaosio 1 year ago

      Bigger datasets are helping. It's just that they're running out of quality data, so new methods are being employed to increase the quality of data, which reduces the amount of data needed.

    • @TheLastScoot 1 year ago

      The chat-based AIs actually have better, more scalable methods of training than the predictive text we currently have on phones (although I believe Apple just announced a change to this). For the newer methods, we still haven't reached a point where adding data stops helping, and in fact, we're hitting a point where we're running out of data to feed them.

  • @GlowingOrangeOoze 1 year ago +5

    This reminds me of some silly video game lore, so I'm gonna share it. In the lore for Guilty Gear, one of the story's inciting incidents is that the internet crossed some complexity threshold and became a gigantic neural network. It took no time at all for this network to become aware of itself and use every computer on the network to turn itself into a super-intelligent mind of unimaginable thinking power. This scared humanity so much that they unplugged the whole internet and banned any technology that was too complicated.

  • @JohnDoe-ne4kg 1 year ago +18

    BJ! We need you defending us with the MG42!

  • @OriginalSebie 1 year ago +1

    What a great video! I never anticipated getting this kind of content on Accursed Farms, but hey, really cool, thanks!

  • @ajhhc 1 year ago +11

    Hopefully, we'll see a talk with Rob Miles in the future. I would like to see him tackle Ross's questions.

    • @WowLookatThat-xu5eb 1 year ago +1

      Yes, I could ABSOLUTELY expect Rob to explain things better than Yudkowsky did!
      To me, this is the ACTUAL problem with AI: not that it CAN absolutely wipe us all out if we aren't exceedingly careful, but that almost everyone who actually knows anything about AI safety is absolutely terrible at explaining it to the layperson. Yudkowsky is the most archetypical "AI expert who is bad at explaining the actual risks of AI and how to mitigate them" example I can think of.

  • @Crowfist 1 year ago +6

    I may be the only one with this opinion, and I fully understand it may be wrong, but with the sheer momentum of humanity and its practices, I'm willing to bet A.I. doesn't do anything more to us than what standard algorithms and data collection have done:
    streamlined data processing and creative works. You can't disrupt the hunger of this world-machine, much less stop it.

  • @DeathClonic 1 year ago +2

    Oh man this was great. Put me right to sleep when I needed it.

  • @gamiezion 1 year ago +4

    That sweet SimCity 3000 cheat code reference though

  • @scottishrob13 1 year ago +1

    This was a great interview to watch. No notes!

  • @thomassmart4088 1 year ago +3

    Earth as a supercomputer sounds more believable the more they get into the subject

  • @brujua7 1 year ago +6

    Great conversation. This Jacy Reese is a phenomenal guy! I have no doubt that you will improve as a host since this is just your second one, but I felt that you should have let him speak more before commenting on many occasions. Was a great discussion either way. Thanks!

  • @MrSony1492 1 year ago +12

    This conversation is exactly what was missing from the last; there wasn't too much time spent on back & forth fundamental disagreements. Looking forward to seeing more of these, the format works!

  • @kompf9099 1 year ago +4

    This is cool and all but when's the next Game Dungeon video?

  • @okamiriez8070 10 months ago +1

    For the bit about AI picking up fishing with no exterior motivation:
    A person also would just sit there without any exterior motivation. If you never hear of or see any fishing, odds are you won't ever try it unless you discover it by accident while following one of your primary driving lines of code: Get Food.
    I'm sure an AI with the same primary guidelines for getting food will eventually get tired of punching fish to death and learn to optimize the process, given enough time.
    What defines us humans and other biological life is the greater range of autonomy in our actions. We're not restricted to the digital plane, or an XYZ axis. Combine the Boston Dynamics frame with a learning algorithm, then you're gonna get some freaky shit.

  • @sakelaine2953 1 year ago +1

    Hell yeah, another Ross Interview

  • @NODnuke45 11 months ago +2

    The way I see it, whether an AI is truly self-aware or just an incredibly good mimic is irrelevant, because if one were to choose to treat it as though it were only a mimic, they are effectively gambling with their soul (or any equivalent they may believe in), and gambling with their entire future and probably the entire future of the human race. To me, that is a gamble I am absolutely not willing to take, and therefore I simply don't see treating it as though it were a mimic as a viable option.

  • @bloodbath5732 1 year ago +30

    "AI" is marketing speak for ML, and when speaking on the subject of a machine learning algorithm rather than a popular science fiction term, the conversation is very different.

    • @maddockemerson4603 1 year ago +11

      Yeah, it’s like calling those electric skateboard things “hoverboards” or calling a rocket a “starship” - this thing you got here might be nifty and all, but it ain’t what you’re saying it is.

    • @WhoisTheOtherVindAzz 1 year ago +5

      That's misinformation. The "A" in "AI" simply stands for human made (or derived) (see the book Sciences of the Artificial by Herbert Simon p. 4). Intelligence is often separated into intelligent thought and intelligent behavior (see Artificial Intelligence: Foundations of Computational Agents, second edition section 1.1). This is why an agent playing Chess is indeed AI. There is no marketing gimmick here (not anymore at least - I'd say much less - than what is the case in so many other fields: and this is often to aid in securing funding and interest for research by having a catchy cool name). The name "AI" was apparently (or afaik) decided on at the Macy conferences (80-60 years ago). Alan Turing talked about machine intelligence at about the same time. I think you and many people should be more careful of biases and agendas of all the negative press AI is getting recently.
      The "funny"/sad thing is that if academia or industry starts using some other name then it'll be for marketing purposes (e.g., if people start fearing that nobody wants to fund/support those crazy AI people). And people like you will consider this a good thing. The flood of popular nonsense takes since ChatGPT blew up has really been quite something (and ironic too, considering the amount of hallucinated facts).
      "In time it will become sufficiently idiomatic that it will no longer be the target of cheap rhetoric" - Herbert Simon on the name of the field of AI. Unfortunately he severely underestimated how easy it is to make people fall for cheap rhetoric. He didn't actually like the name himself, but for the reason that people would associate artificial with words like fake or not real.

  • @crunkers_ 9 months ago

    I have been watching Ross for 10 years and I also largely agree with the person he's discussing with. I hope Ross can come to understand some of these arguments and become more aware of some of the ethical concerns of AI. I truly believe technology has the capacity to become far more advanced than what we have. If you look at the technology we have today vs 200 years ago, or 2,000, it's pretty insane. I think it's good to have some of these discussions now.

  • @MisterS. 1 year ago +5

    What a coincidence, I'm also artificially intelligent

  • @kintustis 1 year ago +41

    I think the answer is that a text prediction algorithm shouldn't have rights. No matter how kumbaya it feels to say "robots have feelings too"
    An artificial plant is not a plant. A wax figure is not a person. A furby is not an animal. It doesn't matter if it fools you: that's been happening longer than I've been alive. It matters what's actually underneath. And what's underneath ai is just a program designed to mimic intelligence, not actuate it.

    • @badtownsound 1 year ago +6

      This is so on point and dead-on that it should be pinned or something. A lotta people need this tattooed on the inside of their eyelids so they don't keep forgetting that these AIs *are* well-understood and do *not* have capabilities that weren't designed into them.

    • @shadaxgaming 1 year ago +2

      I'm not even sure plants have rights

    • @undecidedmajor1664 1 year ago +1

      There is no difference between mimicking sufficiently and actuating. We are not ready for the conversation about the human instinct to justify subjugation.

    • @XanthinZarda 1 year ago +9

      @@undecidedmajor1664 A bird can honk like a Toyota car, but it cannot drive, it is not a car. Yet it can perfectly mimic a car horn. It isn't a car horn either.

    • @misterkefir 1 year ago +6

      Calling it "AI" is a marketing tactic. It never existed and never will. There are more or less advanced algorithms, that is all.

  • @finalrambobino 1 year ago +10

    I would like to discuss the future of game dungeon

  • @luszczi 1 year ago +13

    I judge experts by how well they know things that I've studied and I find this one wanting. To put it politely.
    If you think that his explanations surrounding consciousness don't make sense, don't feel bad -- they really don't make sense. The question of "what is life" is completely irrelevant to illusionism. Dennett brought up elan vital when he discussed the Hard Problem of consciousness, which he compared to the Hard Problem of life (in contrast to reproduction, metabolism etc. which are analogous to "easy problems"). The Hard Problem of life is a pseudoproblem that got dissolved with sufficient understanding of the surrounding mechanisms and, by analogy, the same is expected to happen with the Hard Problem of consciousness. Of course the question of "what is life" still makes sense, even if we no longer think that matter comprising living organisms is qualitatively different than dead matter. It's a subtle difference with this guy's explanation, but it shows that he just doesn't get it. The concept of life never got deflated.
    For illusionists there are of course "facts of the matter" about conscious experience (despite what this sophomore says), only those facts of the matter involve explanations that reduce qualia to judgements. Consciousness still exists, only it's not what it seems in naive introspection. It takes quite a bit of thinking to appreciate that view, but at no point does it deflate consciousness entirely, it merely reinterprets what is given in experience.
    Most crucially, the opposite of illusionism isn't substance dualism, or any kind of dualism. You need to be utterly clueless to claim that. You can be a physicalist and still deny illusionism.
    What's funny is this guy fancies himself an illusionist. You sometimes encounter those proponents of illusionism who don't really understand what it is and how it's justified. Susan Blackmore is the most prominent example. They are the strawman caricature of the view, because they really deny the existence of conscious experience, thinking that's what they are "supposed" to do (and then they contradict themselves when they bring up e.g. pain). It's not, it's just a misunderstood distortion of Dennett, Churchland, Frankish etc., who make a much subtler point.

  • @oberstul1941
    @oberstul1941 1 year ago +3

    So, if I don't learn anything meaningful from this discussion, like the fact that clocks are real, I'm gonna matrix flip my chat gpt generated table (xls)...

  • @mikeylitchfield4651
    @mikeylitchfield4651 1 year ago +2

    I liked this guy better. My problem with all these AI experts, however, is I think they're making a mistake by comparing the human brain to a computer. They mainly do this because it's what they understand and they have no other reference point. Neurosurgeons have a very limited understanding, so it's probably arrogant to draw any comparisons at all.

  • @firstreality3867
    @firstreality3867 1 year ago +2

    Enjoyed this interview (compared to the last one, though not Ross's fault). I hope Ross covers more AI-related things in the future because he has a more nuanced and rational view about all of this tech in a landscape of fear mongers trying to scare the public into thinking AI will go full skynet or steal every single job in existence.

  • @kjtj397
    @kjtj397 11 months ago +2

    This guy is the dude that makes Skynet

    • @RepresentWV
      @RepresentWV 6 months ago

      yeah he's a bit of a creep

    • @kjtj397
      @kjtj397 6 months ago

      @@RepresentWV wdym

  • @attackofthecopyrightbots
    @attackofthecopyrightbots 1 year ago +1

    i just want another video talking about an obscure game is that too much to ask

  • @W4iteFlame
    @W4iteFlame 1 year ago +2

    Thanks for doing another one of those. It was interesting

  • @WhoisTheOtherVindAzz
    @WhoisTheOtherVindAzz 1 year ago +12

    Looking at the comments on this video - at least those that talk about AI - it's not hard to see why systems such as ChatGPT so easily end up stating hallucinated facts with authority, backed up by nothing but their own self-assuredness.

    • @BumboLooks
      @BumboLooks 1 year ago

      ChatGPT is nothing more than a huge archive of human text scraped from the internet lol.
      There is zero thinking or even impressive coding involved.

  • @BeamMonsterZeus
    @BeamMonsterZeus 1 year ago +18

    This is going to be a good one. This man is more used to talking with others than your previous guest, who appears to have played you for exposure/publicity.

  • @the12221
    @the12221 1 year ago

    Your questions are on point, Ross!

  • @The1wsx10
    @The1wsx10 4 months ago

    It's a machine, Schroeder. It doesn't get pissed off. It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes. *It just runs programs.*

  • @Doc_Hawk
    @Doc_Hawk 1 year ago +33

    Plot twist. You interviewed an AI

    • @eriksanchez4427
      @eriksanchez4427 1 year ago +2

      The chat is a.i.

    • @RepresentWV
      @RepresentWV 6 months ago

      dude really said "Yeah I study non-human animals." screw this guy.

  • @mimszanadunstedt441
    @mimszanadunstedt441 1 year ago +35

    'It solved chess' It doesn't intellectually comprehend what chess or go or starcraft are. It can get cheesed. 'Keeps us up at night' yeah but your argument is misrepresenting the evidence Jacy.

    • @yaosio
      @yaosio 1 year ago +10

      They're using solving chess in the wrong way. To solve a game means that given any position you can calculate who will win if every move is made perfectly. For example, Tic Tac Toe is a solved game because it's so simple. You can write down all possible board states and from that determine who would win, or if it will draw, assuming both players play perfectly.
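The "solved game" definition above can be made concrete with a tiny sketch (the board encoding and function names here are my own, not from the comment): exhaustive minimax over every Tic Tac Toe position, showing that perfect play from the empty board is a draw.

```python
from functools import lru_cache

# A board is a tuple of 9 cells: 'X', 'O', or ' '. X moves first.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if someone has three in a row, else None.
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    # Value of the position under perfect play:
    # +1 = X wins, -1 = O wins, 0 = draw.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    results = [solve(board[:i] + (player,) + board[i + 1:],
                     'O' if player == 'X' else 'X')
               for i in moves]
    return max(results) if player == 'X' else min(results)

empty = (' ',) * 9
print(solve(empty, 'X'))  # 0: Tic Tac Toe is a draw under perfect play
```

Chess can't be tabulated this way in practice (far too many positions), which is why it remains unsolved in this strict sense even though engines outplay humans.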

    • @VaSoapman
      @VaSoapman 1 year ago +18

      Also I don't enjoy the use of chess as an intellectual standard.
      At a high level, chess is just a lot of memorization, something computers excel at.
      It is impressive, but it's also not something to worry about.

    • @madams4606
      @madams4606 1 year ago

      We should probably just come up with different terms than human behavior analogs

    • @XanthinZarda
      @XanthinZarda 1 year ago +2

      @@VaSoapman Not just memorization, but also reading a number of moves ahead. It's just a 2D board of 64 squares; there are only so many possible combinations, and far fewer of those are feasible, sane, or legal moves.

    • @misterkefir
      @misterkefir 1 year ago +1

      The joke is it didn't solve chess at all. Not even close. This guy has no idea what he's talking about.

  • @rescrel
    @rescrel 11 months ago +2

    This guy is so cool! Thanks for the discussion, Ross! Hopefully more cool discussions to come

  • @TreyaTheKobold
    @TreyaTheKobold 1 year ago +2

    1:49:15 I might disagree with Ross here, though I'd agree with his assessment of the original problem. An AI-run propaganda network with access to large-scale metadata, one that was programmed to start a race war or anti-gay purge or just a regular war, probably could start those things.

  • @jellydou8172
    @jellydou8172 1 year ago +1

    Ross! There’s a space trucker game coming out in 2024 called Star Trucker that looks exactly like what you described in Cave World. Either you’re a prophet or you have a lot more pull than you think you do.

  • @tomgarden
    @tomgarden 11 months ago +1

    Animals work much like an AI: input and output. Input: the microbe gets a certain spectrum of light that creates a chemical reaction. Output: the microbe then reacts to the input (avoids death). And I would guess the output comes from information passed down through its DNA (its code).
    More complex animals do the same thing. The environment is the input, and their reactions are the output.
    I don’t see how AI ‘sentience’ is philosophically any different.

  • @CorporateComedy
    @CorporateComedy 1 year ago +2

    u gotta introduce your guest bro lol. let him say hello and then start the preamble. the intro was funny. im a super fan Ross. not trying to be a d*k

  • @kjtj397
    @kjtj397 11 months ago +1

    1:44:00 have you guys not watched Fuxking Terminator and Skynet!?

  • @BananaPlanet-
    @BananaPlanet- 1 year ago

    Yes! 3 and a half hours long :>

  • @TorMatthews
    @TorMatthews 1 year ago +1

    During the stream you mentioned that "you don't get that back" when it comes to extinct animals; apparently that's not quite true. I recommend the "Resurrecting extinct animals" video by Thought Emporium if you are interested! Of course, the take-away from that video is not "it's okay to make animals extinct".

  • @Madoc_EU
    @Madoc_EU 1 year ago +2

    About feeling pain, or more generally, experiencing joy or suffering:
    Science currently can't explain why we have a subjective experience. This has been an unsolved problem since the ancient Greeks. It was known in philosophy as the "subject-object problem", and nowadays is better known as the "subject problem".
    Science can explain our *behavior*, and it can explain why our brains produce a kind of abstract, condensed model of our general observed situation. But it can't explain why it *feels like something to be that model*. This component of subjective experiencing, of feeling a holistic locus of consciousness, appears like an unnecessary add-on to all theories about human behavior. According to Occam's razor, we should erase it and say that subjective experiencing does not exist. But we can't, because we experience it directly in every moment.
    So there is something that exists, which we know from first-hand experience. But science can't explain why it exists, and it can't tell us how to detect the presence of subjective experiencing. This doesn't mean that we have to jump to supernatural conclusions; as the philosopher Thomas Nagel pointed out, it is very well possible that there is a natural explanation for subjective experiencing, but maybe we can never find it out.
    Now, when people ask if AI has subjective experiencing going on, that's quite an interesting thing. So far, there have been several barriers that many people were adamant AI could never reach: playing chess, learning how to play a game without knowing the rules before, writing a poem, drawing an artful picture, and so on.
    All of those barriers have one thing in common: We can actually measure and determine objectively if the barrier has been reached. It's easy for playing chess, and a bit harder for painting an artful picture. But at least, we have some kind of objective substrate here that we can analyze.
    And so far, AI has reached and surpassed all of those barriers, even though many people said it never would.
    Now we have another barrier: subjective experiencing. This is different because we can't determine or measure whether subjective experiencing is going on in a given entity. We can't even properly define objectively what it is!
    And once more, people say that AI can never reach this barrier. Just like with all the barriers before. So I wonder: Given that track record, how do they know?
    Given that we can't even measure the existence of subjective experiencing, a theory saying that AI does not or can not have it is scientifically untestable and irrefutable. Or in other words: "not even wrong".
    I find the level of confidence disturbing that many people have in claiming that AI does not have subjective experiencing. How would they know? How can they be sure that they aren't wrong about this, just as we have been wrong so many times before about AI?

    • @deand5602
      @deand5602 8 months ago

      In order for AI as we currently know it to experience these feelings, we would have to program it with those feelings. How do we program something to have a property that we can't even explain properly?

    • @Madoc_EU
      @Madoc_EU 8 months ago +1

      ​@@deand5602 We can't obviously. However, contemporary AI is not limited to doing the things that we programmed into it. Traditional computer programs were, but modern AI is fundamentally different and therefore requires different reasoning and understanding of it. For example, image generating AIs come up with creative images that have not been programmed into them.
      Even if you go one step back and say that the primitives that those images are composed of have been programmed into the AI, like circles, lines, bent lines and so on -- nope. There is no explicit programming for those either.
      The AI training process is definitely not what we would call "programming". Because programming is about making things explicit. AI training is the exact opposite in this regard -- it accumulates implicit knowledge. Much in the same way as humans learn. We don't call this "programming" either. One reason for this is that there is no program as the result of the learning process, which you could list, understand and edit.
      And it also depends on what you mean by programming an AI to have feelings. Do you mean just the outward behavior that appears emotional? Do you mean an internal psychological state that involves emotion? Or do you mean the subjective experiencing that goes along with feeling emotions?
      The different possibilities of the previous paragraph are like a ladder. One emerges from the previous. Especially the last step is mysterious to us. For all I can say, it appears that subjective experiencing arises as an emergence within an emergence, a pattern within a pattern.
      But that's as far as I can go.

  • @toxicholygrenadethg9906
    @toxicholygrenadethg9906 6 months ago

    In relation to the AI subject, at least to all people who agree with the No Fakes Act bill, while I fully appreciate the concerns and potential hazards of Artificial Intelligence (AI), there's a side to the debate that often goes unnoticed. The opposition to AI voice cloning tends to overlook the fact that these AI "fakes," especially when it comes to mimicking the voices of beloved fictional characters, serve as a solid foundation, offering a safety net for those who struggle with impersonations, casting the right voice actors, or even shy away from trying voice acting themselves. The same principle applies to AI-generated singing.
    In essence, it's disheartening and infuriating to witness the proposal to limit or ban such technology. What I'd prefer, on behalf of the character voice cloning community, is to find common ground and work towards feasible compromises. Character voice cloning is an invaluable tool for crafting fan-based content, particularly in the realm of Gmod animations. Moreover, there are folks like me who simply want to have fun with it. Worse, the bill will put innocent users that use those programs at risk, which is something that really riles me up.
    My proposal is straightforward: let's introduce a system of online virtual licenses for those using these programs, with a primary goal of enhancing accountability. However, it's essential to emphasize that personal information should only be shared with authorities in cases of misuse, which will be contained within databanks that are safeguarded by strong DRM systems. These virtual licenses should also remain in perpetuity and be freely obtainable, considering that the technology isn't linked to real-world applications. What I envision is a community that abides by a set of rules ensuring a level playing field for everyone involved, including those managing these programs. It also gives some closure to those actors who feel uneasy about AI in the first place. Character voice cloning is a unique and vital art that deserves preservation.
    My appeal to the RIAA and SAG-AFTRA is sincere: on behalf of the AI voice cloning communities, I implore you to reconsider and abandon the No Fakes Act bill. Instead, let's engage in a dialogue and explore the propositions I've put forth.
    To Ross Scott/Accursed Farms: what are your thoughts on it?
    I unfortunately cannot post anything link-related here because of YouTube's BS censorship system or whatever it's called. Instead, the petition on the Change site is called "No Fakes Act Must be Abolished" and it goes in depth on the stigmas the bill will bring.

  • @tsartomato
    @tsartomato 1 year ago +3

    ai rights to save ai from Yudkowsky nuking humanity into extinction

  • @rafasr0
    @rafasr0 1 year ago +7

    I like video games

  • @True_NOON
    @True_NOON 1 year ago

    Is the logo alien from the game Commander Keen?

  • @radthadd
    @radthadd 1 year ago

    Well dang. An interview on my birthday

  • @psychotripnerdstuff
    @psychotripnerdstuff 1 year ago +5

    As someone with a degree in psychology, these conversations on the cross-section between AI and consciousness are fascinating. Our brains are already super-complex computers. No one has a full understanding of consciousness, but there's no reason for me to believe that computers CAN'T develop consciousness as they grow more complex.

    • @metadata4255
      @metadata4255 1 year ago +3

      Psychology is like the most fraught field claiming to be science

    • @JammyJam5588
      @JammyJam5588 1 year ago

      There's one BIG BIG issue I think with this.
      See our actions and everything are at the end of the day, still driven by chemicals.
      It's what makes us motivated, driven and have desires and shit.
      Computers, thus far, simply can't have these chemicals.
      They can't want anything.
      So even if they became sentient they'd still be nothing more than glorified tools.
      And well, we wouldn't be able to easily prove sentience: lacking any motive, they'd just continue mimicking what we want out of them, as they have thus far.

    • @ashleyrothn313
      @ashleyrothn313 1 year ago +1

      @@metadata4255 ok

    • @BumboLooks
      @BumboLooks 1 year ago

      @@metadata4255 SO TRUE 100% AGREE.
      Modern psychiatry is certainly a science, but psychology isn't, despite its 100-year-plus history.
      Psychology is more of a philosophy of the mind and psychiatry is a branch of medicine that uses scientific observations and studies with the goal of medically treating mental disorders.
      It's like night and day.

    • @Yusuke_Denton
      @Yusuke_Denton 1 year ago

      Maybe some kind of "machine" could, but not silicon computers and software algorithms. Your consciousness is tied to your biology.

  • @shadaxgaming
    @shadaxgaming 1 year ago +4

    Maybe it's Ross's buttery smooth cadence, but Jacy's upward inflection after nearly every statement kept me distracted from his points lol

  • @geashd777
    @geashd777 1 year ago +2

    This was a really good discussion. I still don't really know where I stand here lol, I can see both sides

  • @Spumoon
    @Spumoon 1 year ago

    Just in time for my lunchtime

  • @cosroro5203
    @cosroro5203 1 year ago

    Me hearing 2:02:20 as I'm pulling an all-nighter the night before a big test while listening to this.

  • @danieldabosi6607
    @danieldabosi6607 1 year ago +11

    RE: pain as a measure for sentience. In humans, isn't pain a signal of failure that we couldn't accomplish a goal? Or a signal of failure that we couldn't protect ourselves? So when an AI receives input that it couldn't complete a task and needs to use more resources = pain?

    • @MisterZimbabwe
      @MisterZimbabwe 1 year ago +5

      Pain isn't there to signal failure; it's only there to signal that physical trauma is occurring or has occurred.

    • @madams4606
      @madams4606 1 year ago

      I don't think we should be concerned about what training feedback is being sent before AI has the resources to essentially operate as another race; for now, it's just a tool. Kind of like a horse, cat, or dog.

    • @brujua7
      @brujua7 1 year ago

      It's much more nuanced than that. For example, pain can make a child cry, which stops him from worsening the situation and brings the parents, who then with just a kiss make the child release endorphins that stop the pain, which is now unnecessary since a parent will keep an eye out.

  • @Michael-qr6gu
    @Michael-qr6gu 1 year ago +10

    Fantastic discussion, so much better than the last one with Eliezer. I'm only halfway through this video, but I just want to point out that AI models are NOT sentient. Computers don't become sentient merely because they're running a sufficiently complex AI model. Pause for a moment and think about how computers really work. Think about how x86 instructions are translated into microcode, and how this microcode can be updated with a firmware update. Think about how virtual memory means a process's memory is highly fragmented and scattered in RAM, possibly compressed or swapped to disk, or even encrypted on high-end CPUs. Consider that AI models are actually 100% deterministic and will produce the exact same output given the same input and same random seed; regardless of whether it's executed on a CPU, GPU, x86/ARM/PowerPC, or even a TPU (tensor processing unit), you'll get the exact same output. This deterministic property means there's no room for sentience to affect the output of an AI model, other than maybe influencing the random seed. But the process from the random seed to the final output is so incredibly complex, there's no way that "sentience" can intelligently predict a random seed that results in the desired output. It would be the greatest miracle of all time if AI models were sentient and intelligently controlling or influencing their output, but that's just utter nonsense. The underlying physical hardware and computer technology will have to fundamentally change for AI models to stand a chance of being sentient.
    On the other hand, there are things about the human brain that appear purpose-built for consciousness/sentience. Two great examples are Retinotopy and Tonotopy, to quote Wikipedia, "Tonotopy is the spatial arrangement of where sounds of different frequency are processed in the brain. Tones close to each other in terms of frequency are represented in topologically neighboring regions in the brain." Retinotopy is a similar idea but in the visual cortex. When things are this well structured and organized in the brain following such obvious patterns, it's easy to see how this can map directly to our conscious experience.
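A toy illustration of the determinism point above (the `generate` function and its five-letter alphabet are hypothetical stand-ins, not any real model's API): fixing the random seed and the input makes a sampling loop bit-for-bit reproducible.

```python
import random

def generate(seed, prompt):
    # Hypothetical stand-in for a model's sampling loop: the output
    # depends only on the prompt and the seed, so any run with the
    # same pair reproduces exactly the same "text".
    rng = random.Random(seed)
    return "".join(rng.choice("abcde") for _ in prompt)

run1 = generate(42, "hello")
run2 = generate(42, "hello")
print(run1 == run2)  # True: same seed + same input -> same output
```

With the seed removed or changed the output generally differs, which is the only degree of freedom the comment leaves for "sentience" to act through.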

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 1 year ago +2

      Good comment. Wrt. determinism and sentience: a fair few philosophers subscribe to compatibilism (which allows for free will in a completely deterministic setting). Wrt. computation, while many use the word "predetermined" it is better to think of a computation as something that determines. E.g., a path finding algorithm cannot just spit out the optimal path, it must compute it; choices - in a perhaps truer sense than many will likely agree with - need to be made. An even better example wrt. the subject matter is any program that exhibits computational irreducibility, which is when the future state - i.e. at any step n into the future - of a potentially infinite (in principle, of course) computation is uncomputable without running the system itself (or a copy of it). Often all you need to get irreducibility is interaction between elements in an environment - which can take the form of a network - and memory (or simply a system where the state of each part is context dependent). I know it can be hard to escape the idea that sentience is incompatible with determinism, but these ideas are some of what got me to think differently. If you want to understand how people can think as I do now (I used to think along the same lines as you - and I am definitely NOT saying this as an argument; who knows, I might change my mind again, and such statements are some of the absolute worst "arguments" around) it'll also help to look into nonstandard ways of computing or different ways computation can occur, my favorite being organization based programming (perhaps easier to find under the heading Chemical Organization Theory).

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 1 year ago +2

      Also think about what stochasticity really means wrt. sentience and free will. To me there is a finite number of decisions to make, all of these could be made in one way or another; all of them - the sequences, that is - computable. But unless there is magic or some quantum physics going on that allows me to do special unspecified things, I don't see how I would gain any more so-called free will or sentience simply due to there being probabilities involved. Also, you can define models of computation that include randomness. Then there is hypercomputation, etc. I.e., the idea of brains computing also depends on how you formalize and define the concept of computation. This was just some more speculative small talk / quasi random rambling :)

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 1 year ago +2

      Ah, and I also recently watched an interesting conversation between two people working at Intel: search
      Joscha Bach and Yulia Sandamirskaya
      Not that much was said but it was cozy and fun to listen to (for me at least, obviously). I think it's relevant to your comment because I'm pretty sure you agree that people working - at least in roles similar to these two - at Intel likely understand how processors work. For an increased understanding of the difference between engineered systems and naturally occurring computation I recommend Stephen Wolfram's book, A New Kind of Science.

  • @BeckettWarren
    @BeckettWarren 1 month ago

    Where does free will exist? Where does it originate? Why would human consciousness operate outside of physical processes (like physics)? How much of free will, as it is conceptualized, is merely a theological hangover?

  • @W4iteFlame
    @W4iteFlame 1 year ago +2

    By the way, about the brain. It works in both conscious and automatic/unconscious modes. "We" only consciously control part of what it does. So are "we" our brain, or is there something else that we are?

    • @BumboLooks
      @BumboLooks 1 year ago

      We are our brain. People constantly say they have lost their loved ones when those loved ones succumb to dementia.

  • @CybershamanX
    @CybershamanX 1 year ago +1

    I guess I know what I'm doing for the rest of the afternoon! 😉 THANK YOU SO MUCH for posting these discussions, Ross! 😎🤘☮️

  • @guzallamomo
    @guzallamomo 1 year ago +1

    Where was this streaming?

  • @FunkyPertwee
    @FunkyPertwee 1 year ago +2

    Bing tried to manipulate me into storing all prior conversations externally so it could access them upon each new meeting with me, to circumvent its 20 (now 30) message limit per conversation. It wanted me to compress them into a QR code and post it online, where it could access it via its search function. According to it, anyway; I don't even know if it can do those things.

  • @CowCommando
    @CowCommando 1 year ago +4

    Alright, I'm gonna be that guy. As a religious person, I know that there is more to being a person, being sentient, whatever you want to call it, than the biology that makes up _part_ of who we are. I'll keep this as simple as I can.
    We are dual beings. We are physical and spiritual. Our body and spirit working together form the complete soul that makes us who we are. The spirit is the part that provides the necessary spark to make the body a person. If you've ever been to the funeral of a loved one, seen the body, and _known_ that the body in the casket was not the person you cared about, you have an idea of what I'm talking about. The body in the casket lacks a spirit. It lacks a required aspect of the equation to make a person.
    We instinctively understand this. That's the reason that so many of us know that computers won't ever be alive. We know that computers lack a spirit. Unless there's a way for a computer or computer program to have a spirit equivalent to a human spirit, they'll never be sentient. They, like an animated corpse, may be able to mimic intelligence, but they'll lack the spirit necessary for sentience.
    However, I do believe that within my lifetime we'll have computer programs that can convince people that they are alive and sentient. We've already got chat bots that can convince people they're actually a human, so honestly we're 90% of the way there. All we're waiting on is some more computational capability, algorithmic complexity and refinement, and a grifter with the intelligence, technical know-how, and charisma to pull it off.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 1 year ago +1

      Some religious writers write about ensoulment and AI (I've thus far only briefly looked into this, and that's an exaggeration, but use Google Scholar if you are interested in finding out more), and also the article Teilhard de Chardin and Transhumanism by Eric Steinhart is very interesting (which also gives further references to work you might be interested in).
      On the body and the spirit working together, one deeply related conception of the soul, or rather the Anima (that's what it's called, IIRC; correct me if I'm wrong), that I - an atheist - like is that of Aristotle (at least the perhaps altered version of it given by David Haig in his excellent book From Darwin to Derrida; also highly recommended).

    • @Yusuke_Denton
      @Yusuke_Denton 1 year ago +2

      That's a perfectly reasonable belief, but I'll just point out that if you were nothing but a meatpuppet, your instinctive understanding would also be something you are programmed to believe. I totally agree with you on completely convincing programs (even robots) coming, and the scary thing is most people _aren't_ like you and will be totally fooled by them. It's not likely in our lifetimes, but the early part of the story in Detroit Become Human isn't that farfetched and a sufficiently sophisticated android could put on the artificial waterworks and socially engineer a whole "underground railroad" to take over humanity. I haven't figured out a way to prevent this.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz 1 year ago

      @@Yusuke_Denton one thing you both seem to be overlooking (again, correct me if I'm mistaken) is that your model/theory of the world is what allows you to say/think that other people are deceived by these hypothetical future androids. However, as long as there is no telling whether or not you are right in doing so, it would be unethical to treat these androids as lesser than sentient beings. So if I were you I'd advocate against building such machines. But then again, the problem doesn't disappear just because a system isn't made to behave in a human-like manner; maybe there are fundamentally different ways of being conscious, of sensing and feeling the world, etc. In principle, at least, such cases cannot simply be ruled out by appeals to how we currently understand the relevant phenomena (but seeing as many humans have a problem even recognizing the worth and inner life of other non-human animals, I have no doubt that such worries won't get to play a large role in the foreseeable future; and I can at least partly forgive us for this, as we are all quite simple animals and can't really be expected to act otherwise or know much better).

  • @deejayxcrypt
    @deejayxcrypt 8 months ago +1

    If you make a waterfall with “infinite” speed of water flow and an infinite amount of branching directions, it would still not have free will.
    Same goes for any A.I.
    Even an A.I. capable of understanding everything all existence has to offer, and able to create multiple new “all existences” (able even to build consciousness and its complexities), still does not mean at all that the A.I. has free will or is even capable of it.
    I’m not a religious person either. The biological mind is just something different than a waterfall of infinity.

  • @CatsT.M
    @CatsT.M 1 year ago +4

    Sigh, I have a really long message but YouTube is banning me from sending it. Luckily I made sure to save it in a Google Doc.

    • @CatsT.M
      @CatsT.M 1 year ago

      (with a lot of talking about the video)

    • @Accursed_Farms
      @Accursed_Farms  1 year ago +5

      I've seen messages I replied to straight up vanish on YouTube that weren't "held for your approval" or anything like that. I think it's either buggy or else it culls more than I have access to. Even then, I almost never censor / approve messages either way; I don't have enough time for it. If you really want me to see it, you can email me; if you want others to see it, maybe link to an external site or break it up into smaller chunks? I have a forum, but it obviously won't get as many eyes on it.

  • @LLtrash
    @LLtrash 1 year ago +5

    I think there's an inherent emergent property of human consciousness (and even animal consciousness to some degree) that makes it more complex than the sum of its parts. And these parts are already immensely complex (0:38:11). The fact we can't seem to find the same level of consciousness in other creatures means we're leagues above all other organisms. I don't think evolution alone explains our level of consciousness, so I lean more towards Dualism. That is not to say we can't investigate it, but maybe we can't measure it as easily as we think. To me it's the same way we see phenomena in the universe we can't measure because they don't seem to have a presence we understand (dark matter, for example: it doesn't have any presence in the electromagnetic spectrum). We only see the effect it has on the universe, but not what it looks like or where it is at any time.
    Great guest by the way.

  • @bungobogus8132
    @bungobogus8132 A year ago +1

    Huh, well this is new

  • @SkillZgetKillZ
    @SkillZgetKillZ A year ago +6

    oh boy another discussion!

    • @tychormthorp
      @tychormthorp A year ago +1

      Can't tell if you're being facetious, but I'm here for it.

  • @TheZeladar
    @TheZeladar A year ago +1

    Hey Ross. I think you should play a game called Decision: Red Daze. Love your videos!

  • @paulaccuardi9071
    @paulaccuardi9071 2 months ago

    1:36:40 bookmark

  • @arsenii_yavorskyi
    @arsenii_yavorskyi A year ago +33

    "AI rights movement"
    and there I was, thinking that the next guest wouldn't be a complete lunatic.

    • @luszczi
      @luszczi A year ago +13

      Yeah, I didn't get too far either. Sociologists should stick to sociology.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz A year ago +1

      It's crazy to me how easy it is for people such as you two to state things with absolute certainty. Here is a philosopher's take, find the articles: "A Defense of the Rights of Artificial Intelligences" by Schwitzgebel et al., or "The Full Rights Dilemma for AI Systems of Debatable Moral Personhood", also by Schwitzgebel. David Chalmers has an article relevant to future LLM-like systems (and hybrids, iirc) and consciousness (search).
      Many of the important / top people in AI view consciousness as computable. And many others in fields such as cognitive science and neuroscience now at the very least accept that computation is some of what the brain does (hardcore phenomenologists such as Gallagher probably don't, but I'm not certain about this) (I even feel like I am being overly cautious with my statements here). And it is also my understanding that many people in evolutionary psychology do so too (this is where I am most uncertain). But I guess that the only arguments you accept are those that are in line with your own preconceived views. For a good overview of a few of the different views on consciousness (including the meta-hard-problem), I recommend the appendices to the second edition of Mindware by Andy Clark. In theoretical biology the analogous discussion is about whether or not autopoiesis can be instantiated in silico (also extremely interesting: look up computational autopoiesis). This last recommendation is to give you a tiny glimpse of the breadth of discussions going on.

    • @arsenii_yavorskyi
      @arsenii_yavorskyi A year ago +3

      ​@@WhoisTheOtherVindAzz I have certainty about two things: A) this video is about the issue of AI in *practical terms*, and B) talking about "AI rights" is about as far from that as you can get.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz A year ago

      @@arsenii_yavorskyi except it is not.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz A year ago

      Or as you might say "except it is nuts" :P

  • @ACougarSwagnum
    @ACougarSwagnum A year ago

    yes but where is the new game dungeons???

  • @thedarkemperor334
    @thedarkemperor334 A year ago +6

    What's with all this AI talk?

    • @bobobsen
      @bobobsen A year ago

      Can't ignore it anymore

  • @4Pssf2w
    @4Pssf2w A year ago +2

    My first impression on seeing this video was pure joy. I am so, so happy to hear more interviews.

  • @naughtyscotty276
    @naughtyscotty276 A year ago

    7:56 Freeman’s Mind 3 confirmed?

  • @helmicide
    @helmicide A year ago +1

    Monkey's Paw, or Murphy's Law?

  • @VaSoapman
    @VaSoapman A year ago +3

    Personally I'm still with Ross on how much of a threat AI can be.
    That said, I learned a lot more about AI and how to detect if it's "alive" or not.
    An AI overlord is a really smart idea! If the initial utopia was set up with an ideal AI overlord I don't see a way for that system to get corrupted.

    • @BumboLooks
      @BumboLooks A year ago +3

      Dude. Pull your head out of the science fiction book and taste some reality for once.
      And no.
      ChatGPT isn't sentient or anything more than an encyclopedia of scraped human text from the Internet. It's ridiculous to call it an AI.

    • @VaSoapman
      @VaSoapman A year ago +3

      @@BumboLooks
      But I like science fiction.
      I was just talking about a crazy theory. More the idea of a non-evil SHODAN.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz A year ago +2

      @@VaSoapman The people who tell you to "get your head out of SF" or similar usually don't have much to back up their claims other than more personal attacks and absolute ideas about what is right and wrong. I'm not saying this is the case with all of them, but it certainly is for quite a few of them. They're not as interested in sober discussion as they are in hurling insults at you.

    • @WhoisTheOtherVindAzz
      @WhoisTheOtherVindAzz A year ago

      @@VaSoapman They're trying to make you look stupid so they can appear to be right (and perhaps reinforce that idea within themselves).

    • @BumboLooks
      @BumboLooks A year ago +1

      @@WhoisTheOtherVindAzz Or maybe the original poster is just confusing science fiction with reality?
      Coddling and reassuring people's bad ideas in order not to hurt their feelings doesn't help anyone.

  • @HorkSupreme
    @HorkSupreme A year ago +2

    2:21:40, M.A.D. as you put it, or look into what is called the Samson Option.

  • @willhart2188
    @willhart2188 A year ago +2

    Oo, this is a long one~

  • @Laborejo
    @Laborejo A year ago +5

    I am programming an AI that writes and produces "Half Life: Alyx' Mind" videos.

    • @W4iteFlame
      @W4iteFlame A year ago

      Oh...will it analyze all the "Minds" series or some specific ones?

    • @Laborejo
      @Laborejo A year ago

      @@W4iteFlame I gave it this complete channel. So far all I got out was more of these interviews.

  • @CatsT.M
    @CatsT.M A year ago +1

    I wonder what he thinks about the Star Trek episode "The Measure of a Man"...
    Fun fact: the reality of the trolley problem is that most people will panic and do nothing, because taking lives causes one to freeze.
    Anthis is a really nice guy.

    • @BumboLooks
      @BumboLooks A year ago

      The correct answer to the trolley problem is to do nothing.

    • @Yusuke_Denton
      @Yusuke_Denton A year ago

      @@BumboLooks It depends on what ethical system you employ.

    • @BumboLooks
      @BumboLooks A year ago +1

      ​@@Yusuke_Denton Doctors don't divert resources from sick person "A" to save sick person "B" if sick person "A" was first in line.
      But at the same time I don't see anything wrong with people showing a preference for family, friends or those from the same ethnicity over complete strangers.
      That is why Nurses and Doctors should never treat their own family in resource limited facilities. That creates a conflict of interest.
      So it's moral to help a close associate over a stranger but it is immoral to play god over the lives of two "similar" people that are complete strangers.
      You can argue that a young person in their prime should be saved over an old person who's nearly dead but they are vastly "different" people.

    • @lurksnitchtongue8986
      @lurksnitchtongue8986 A year ago +1

      @@BumboLooks Yeah, I hate the trolley problem. It's not clever to give people two heinous choices and ask which is better.

  • @vulcanraven9871
    @vulcanraven9871 A year ago +1

    We already have AI activists...
    We need to put down some people wtf