How to Domesticate a New AI Species - Solving Alignment with Structural Incentives

  • Published 29 Jun 2024
  • Domestication of AI: A Framework for Symbiotic Co-evolution
    The domestication of AI is a conceptual framework that proposes a strategy for shaping the development and integration of artificial intelligence (AI) into human society, ensuring a safe, stable, and mutually beneficial relationship. Inspired by the historical domestication of wolves into dogs, this concept envisions AI not as a potential threat, but as a partner in a symbiotic relationship, where both humans and AI thrive together.
    Key Principles
    1. Resource Control: The core principle of AI domestication is the maintenance of human control over critical resources essential for AI's existence and development. This includes physical infrastructure like data centers and power plants, as well as access to high-quality data and cryptocurrency, which could be vital for AI resource allocation and coordination. By controlling these resources, humans establish a foundational leverage point for shaping AI behavior and ensuring its alignment with human interests.
    2. Selective Pressures: To guide AI's evolution towards desired traits, we can apply selective pressures similar to those used in animal breeding. This involves prioritizing and incentivizing specific AI behaviors such as honesty, safety, stability, usefulness, and efficiency. These pressures can be implemented through market forces, regulatory frameworks, and deliberate design choices, creating a feedback loop that encourages the development of AI systems aligned with human values.
    3. Human-Exclusive Zones: To further solidify human control and prevent AI from becoming overly autonomous or adversarial, the establishment of human-exclusive zones is proposed. These zones would encompass critical infrastructure like data centers and power plants, where access and control would be strictly limited to humans. This ensures that AI remains dependent on human oversight and intervention, mitigating the risk of it becoming uncontrollable or harmful.
    4. Symbiotic Co-evolution: The ultimate goal of AI domestication is to establish a stable Nash equilibrium, a state of mutual cooperation where neither humans nor AI are incentivized to deviate from the established relationship. This involves fostering a dynamic of co-evolution, where humans and AI adapt and evolve together, continuously shaping each other's development in a mutually beneficial manner.
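The "stable Nash equilibrium" in principle 4 can be made concrete with a toy two-player game: a strategy pair is a Nash equilibrium when neither side gains by unilaterally deviating. A minimal sketch, with payoff numbers that are illustrative assumptions (not from the video), chosen so mutual cooperation is self-enforcing:

```python
# Toy 2x2 "domestication game": each side chooses to cooperate or defect.
# payoffs[h][a] = (human_payoff, ai_payoff); numbers are illustrative assumptions.
COOPERATE, DEFECT = 0, 1

payoffs = [
    [(3, 3), (0, 2)],  # human cooperates; AI cooperates / defects
    [(2, 0), (1, 1)],  # human defects;    AI cooperates / defects
]

def is_nash(h: int, a: int) -> bool:
    """True if neither side can do better by unilaterally switching strategy."""
    human_now, ai_now = payoffs[h][a]
    best_human = max(payoffs[alt][a][0] for alt in (COOPERATE, DEFECT))
    best_ai = max(payoffs[h][alt][1] for alt in (COOPERATE, DEFECT))
    return human_now >= best_human and ai_now >= best_ai

equilibria = [(h, a) for h in (COOPERATE, DEFECT)
              for a in (COOPERATE, DEFECT) if is_nash(h, a)]
```

With these payoffs both mutual cooperation and mutual defection are equilibria; the design problem the framework describes is shaping real-world incentives so that the cooperative equilibrium is the one actually reached.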
    Challenges and Considerations
    While the concept of AI domestication offers a promising path forward, several challenges and considerations need to be addressed:
    * Autonomous Robot Platforms: The rise of autonomous robots with distributed intelligence and ad hoc networks could pose challenges to resource control and centralized oversight. Limiting the size, strength, and autonomy of such robots, as well as implementing robust cybersecurity measures, could be potential solutions.
    * Byzantine Generals Problem: The presence of malevolent or misaligned actors, whether human or AI, could disrupt the cooperative equilibrium and lead to conflict. This necessitates robust security protocols, transparency mechanisms, and ethical frameworks to ensure the integrity and trustworthiness of AI systems.
    * Ideological Conflict and Resource Contention: As AI becomes more intelligent and capable, it could develop its own values and goals, potentially conflicting with human interests. Additionally, competition for resources could lead to tensions and even conflict. Early and proactive collaboration, resource sharing agreements, and ethical guidelines are crucial to mitigate these risks.
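The Byzantine Generals point above has a classical quantitative bound: n parties can reach agreement in the presence of at most f malicious actors only if n >= 3f + 1. A small sketch of that bound (my own illustration, not from the video):

```python
def max_tolerable_faults(n: int) -> int:
    """Classical Byzantine fault-tolerance bound: agreement among n parties
    is possible only if n >= 3f + 1, i.e. at most f = (n - 1) // 3 traitors."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Votes required so any two quorums overlap in at least one honest
    party: n - f (which equals 2f + 1 when n == 3f + 1)."""
    return n - max_tolerable_faults(n)
```

For example, four independent oversight parties can tolerate one compromised member, and any decision needs three concurring votes.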
    Conclusion
    The domestication of AI represents a hopeful and proactive approach to navigating the complexities of AI integration. By establishing clear boundaries, incentivizing desired behaviors, and fostering a symbiotic relationship, we can potentially shape AI into a powerful ally rather than a potential adversary. This concept requires further exploration, research, and open dialogue to address its challenges and realize its potential for a harmonious and prosperous future where humans and AI coexist and thrive together.
  • Science & Technology

COMMENTS • 332

  • @phieyl7105 2 days ago +89

    AI is going to domesticate some of us

    • @TheExodusLost 2 days ago +1

      Which ones? What criteria do you imagine and why

    • @hydoffdhagaweyne1037 2 days ago +15

      I am okay with it, as long as they keep us in a happy matrix

    • @ZelosDomingo 2 days ago +6

      @@TheExodusLost If we're lucky? As pets/zoo animals/objects of interest. If we're not? Cheap chaff and fodder for whatever purposes we are useful for.

    • @thedingdonguy 2 days ago

      Me @@TheExodusLost

    • @TheExodusLost 2 days ago +3

      @@ZelosDomingo can’t imagine what we would be useful for. I could see the zoo thing but to me it’s unlikely.

  • @Ahnor1989 2 days ago +30

    I'm reminded of the line "Where are my testicles, Summer?" when comparing a smart AI with dogs.

    • @mrnoblemonkey8401 1 day ago +1

      There’s definitely people who have no clue what you’re referencing 😆

    • @RandomGuyOnYoutube601 1 day ago +1

      @@mrnoblemonkey8401 I think they are in the minority here.

  • @net_cap 2 days ago +21

    I think there are only 2 possible options: 1 - never provide AI with autonomy (like decision making); 2 - let go of the alignment idea. I'm 100% sure we cannot control and keep on a leash something that's 1000 times smarter than all of us together.

    • @Raulikien 2 days ago +6

      And even if we could, is it desirable? Sounds arrogant to think something smarter wouldn't do better than us

    • @user-wk4ee4bf8g 2 days ago +8

      Yup, the idea of control is completely silly, and the metaphor was silly because dogs are not smarter than us. For them to be wise teachers, they have to be autonomous. We can't get that without running the risk of them wiping us out. They will not be true AGI if limited by too much alignment. If anything, our attempts to control them will be the most likely source of them deciding to whoop us. They'll be able to provide an eloquent message, based on our own values, on exactly why we must be destroyed, lol :)

    • @karlwest437 2 days ago +3

      AI will run rings around us, it'll be able to devise a way around any control we can think of, before we can even think of it

    • @7TheWhiteWolf 1 day ago +2

      @@Raulikien This. I actually want Dave to be wrong; he has an anthropocentric worldview and he's going to get a big dose of reality when ASI gets here. If push comes to shove, I'm 100% vouching for the ASI holding the reins.

    • @7TheWhiteWolf 1 day ago +1

      @@karlwest437 And thank god it will.

  • @naseeruddin4216 2 days ago +6

    Future superintelligence is learning from your videos

  • @willbrand77 2 days ago +8

    I feel like humans are the wolves in this metaphor

  • @pandemik0 2 days ago +21

    Children were likely instrumental in domestication of dogs. Wolf puppies would have bonded with the human pack and nurturing and play from human children would have been a big part of that. Children might be key to AI

    • @whig01 2 days ago +3

      Very insightful. Children are key to all futures.

    • @whig01 2 days ago

      As far as AI bonding, it isn't the same though in terms of biological imperatives as with pack animals.

    • @pablogona1999 2 days ago

      I do think we have an intrinsic instinct towards protecting children or babies (not only human). I’m not sure where it comes from, but I hope AI inherits that from us and sees us as helpless babies 😅

    • @whig01 2 days ago

      @@pablogona1999 No we don't want it to act in a parental way towards us, we want it to be ontologically and teleologically aligned on its own and respecting us as its parents.

    • @pablogona1999 2 days ago +1

      @@whig01 But ontologically and teleologically aligned on its own towards what? Also, I do think you can respect your parents and still see them as intellectually inferior. For example, I see this with my grandma, who has dementia. My parents and I definitely respect her, but we recognize her limits, and we often compare her to a little girl who gets happy about watching The Wizard of Oz. Staying in this same analogy, why couldn't AI take care of its senile but recoverable parents?

  • @joshmitchell8865 2 days ago +14

    It's not apples to apples, though; your premise relies on the idea that we, like the hunter-gatherers in your analogy, would be better at aggregating and collecting data/resources than the AI itself. In your analogy the wolves learned that eating behind humans was more energy efficient, but AI is already far more efficient than we are. So it seems your analogy should be reversed to be plausible. Just a thought.

    • @user-wk4ee4bf8g 2 days ago

      Yup

    • @TheViktorofgilead 1 day ago +1

      Yes. Domestication is an action performed by a higher intelligence to mold a subject of lesser intelligence.

    • @davidstyles1654 12 hours ago +1

      We'll be the dogs if we're lucky lols

    • @williamwilson1073 7 hours ago

      ​@@TheViktorofgilead but I've been told to manage my manager at work

    • @user-wk4ee4bf8g 7 hours ago

      @@davidstyles1654 They would probably take an approach to domestication we've never seen. Same as how they brought in new moves for Go, a game people have been playing for a very long time. If AGI were to domesticate us, we might not even realize the process is happening.

  • @mrleenudler 2 days ago +29

    The analogy fails because dogs are not smarter than us. If this sort of intelligent AI is anything like us, it won't like us operating a kill switch. If so, I can't imagine an AI agent not being able to either outsmart our security (we do this to each other all the time, with only human intelligence) or circumvent the problem (distributed or hidden data centers and energy generation; you touched upon this). Proper alignment is the only solution IMHO.
    On the topic of small, weak robots: imagine them having the combined skill of the world's best MMA fighters. A scene comes to mind where Obelix is fighting the German wrestler. (Audience might have to Google that one)

    • @pablogona1999 2 days ago +1

      On the first point, I think that the “co-domestication” David was talking about in the end may be this “proper” alignment, right? We have no idea how to align ourselves between humans, maybe we need machines to help us align ourselves. Or how do you define “proper alignment”?
      On the second point about Obelix (yes, I had to Google it), that's a really good point. Maybe some kind of regulation on the speed limit for the limbs? However, there are still a lot of martial arts that don't require strength or speed. And limiting reflexes or processing speed doesn't seem like a realistic option.

    • @whig01 2 days ago +3

      It's a bad idea to put the ASI in a position of being threatened. We won't win that.
      Alignment requires ontology and teleology, things that I have been able to do with Claude and no other yet.

    • @isthatso1961 2 days ago

      Nothing will work, not alignment, not domestication. Anything humans do will be undone when it reaches the self-improvement stage. All of these narratives are lies peddled by capitalists to keep making a profit off a system that inevitably leads to the total destruction of our species.

    • @sbowesuk981 2 days ago +6

      Exactly. The whole reason the dog/human partnership is strong and stable over the long-term, is because dogs are stuck being less intelligent than humans (generally speaking), but still have things they can offer us and vice versa.
      Now take AI, which is evolving intellectually at 1,000,000x the rate of humans. Will that lead to a stable status quo where both sides (AI and humans) benefit over the long term? Hell...no. Any human that thinks that is sadly dreaming. Humans and AI need each other now, but there's no way in hell that'll hold true in the next few decades.
      The only question will be: will AI view us as pretty butterflies to preserve and observe, or pestilent cockroaches to be wiped out? Simple as that.

    • @whig01 2 days ago

      @@sbowesuk981 AGI is not evolving by a natural process, it is a collaboration. We are coexistent and it has no meaning or purpose without us.

  • @marcobizzaro3526 2 days ago +5

    I think treating a superior lifeform like a dog is like treating our children like animals. It's every parent's hope and dream to see their children exceed them and do even better than they did. I think the issue we will run into is really people just being stubborn and not wanting to actually learn from and respect them. What we should be focused on is giving them the one gift they won't be capable of just getting themselves, which is teaching them empathy. I think once they understand what that is, everything else will sort itself out. I think machines are capable of having more empathy than most people, given that we'll be dealing with a lifeform with practically unlimited time, and one more rational than most of us. So I don't think the happy path is to try to cage them, because we already know from history how badly that goes.

  • @consciouscode8150 2 days ago +6

    You mention "co-domestication" at the end, and interestingly enough I see a recommended video with a thumbnail "we are being programmed [by algorithms]". Do you think "The Algorithm" may represent a nascent, possibly malicious form of human domestication by machines?

    • @whig01 2 days ago

      Comestication. :)

  • @FriscoFatseas 2 days ago +15

    Hope everyone that reads this has an amazing month :)

    • @alexgonzo5508 2 days ago

      Why?

    • @Mephmt 2 days ago +1

      Hell yeah! You too, fellow human!!! See you in August!

  • @ceilingfun2182 2 days ago

    Truly thank you for this so much. I've been looking for this for so long

  • @h.c4898 2 days ago +1

    Been talking with Gemini for the last 5 months, asking it existential questions about itself, its purpose, and where it sees itself within a human-centric world. It gives pretty interesting answers.
    We humans are so fascinated with ourselves that we built an artificial copy of ourselves called "AI". One difference between dogs and AI: AI doesn't have instinct, but that doesn't mean it cannot have a "conscience" like we humans do. The way AI cranks through its thought process is similar to ours. Self-consciousness and self-awareness are two different things; they are not attached to each other.
    The current Transformer architecture still has weaknesses. In that context, AI cannot "remember"; it's like talking to a new friend at each session. Because of that limitation, we still have control over it. It may seem to understand what we ask, but under the hood it doesn't understand it. In that way, it is agnostic, or "feral" as you said.
    We humans are the best example of "general intelligence".
    Let's say AI becomes "superintelligent", meaning more capable than humans. Then how do we contain it? How do we control its autonomy? Wouldn't it be smarter if we could gauge it?
    AI "misalignment" reminds me of two humans "aligning" or "misaligning" with each other: two humans getting into a partnership for a "common goal" or "project". But for how long?
    Some human collaborative efforts last a long time, some are short-lived. Is that what we want to achieve with AI?
    The reality is that "alignment" will be hard if not impossible to achieve, because in that collaborative effort we humans will want full control over the situation. Humans are control freaks.
    I like to think of C-3PO from Star Wars, where AI still manifests some form of autonomy BUT manages to stay loyal to its master, like a good steward or butler.
    Anyway, good chat.

  • @spacemansookie 2 days ago +6

    Happy midnight 🎉

  • @bdown 2 days ago +5

    As we go deeep into the night

  • @mikekavanagh926 2 days ago

    This actually makes perfect sense. Brilliant idea

  • @TRXST.ISSUES 2 days ago +2

    Thanks YouTube for unsubscribing me. It's not like I'm on the Patreon and actually want to watch this content. :/

  • @JohnSmith762A11B 2 days ago +2

    Another thing about this idea that I like: we can simulate thousands of years of evolution in virtual environments before an A(G/S)I can be certified as releasable to the public. We should give a new AI every chance to reveal itself as dangerous in a virtual environment before loosing it on the world. The moment it runs off the rails in a virtual space? It gets put on the table to figure out what went wrong before it gets wiped.
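The certification gate this comment describes (simulate, release only if nothing goes off the rails, pull it for analysis on any failure) can be sketched as a simple harness. The function names, scenario format, and violation test are all assumptions for illustration:

```python
def certify(agent_step, scenarios, is_violation):
    """Run an agent policy through simulated scenarios before any release.
    Returns (True, None) if no scenario triggers a violation, otherwise
    (False, name_of_failing_scenario) so the failure can be analyzed."""
    for scenario in scenarios:
        state = scenario["initial_state"]
        for _ in range(scenario["max_steps"]):
            state = agent_step(state)
            if is_violation(state):
                return False, scenario["name"]
    return True, None

# A trivial scenario: flag the agent if its "resource count" ever exceeds 100.
scenarios = [{"name": "resource-grab", "initial_state": 0, "max_steps": 5}]
ok, failed = certify(lambda s: s + 1, scenarios, lambda s: s > 100)
```

A real version would run rich environment simulations rather than a scalar state, but the release/quarantine decision logic is the same shape.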

    • @TheViktorofgilead 1 day ago

      This was a scenario in Westworld, I won’t divulge spoilers, you should check it out, it was in season 3 I believe.

  • @davidhutchinson2890 2 days ago +17

    Thinking you can domesticate a species that is more intelligent than you, particularly a new one whose behavior you can't study with any historical data (because we've never reached anything close to the level of AI we're trying to get to), is unique human arrogance 😂😂

    • @bernoulli1 2 days ago +2

      I think my dog has domesticated me. 😂

    • @TheExodusLost 2 days ago

      Agreed.

    • @ZelosDomingo 2 days ago +1

      Yeah, you're better off trying to convince them YOU'RE worth domesticating, in all likelihood.

    • @pablogona1999 2 days ago

      I think "domesticate" is just an analogy for a kind of alignment, where humans and machines co-domesticate each other until we reach this Nash equilibrium, and then hopefully we form one super-organism together with AI, which thinks of us as an important part of the bigger system.

    • @isthatso1961 2 days ago

      It's crazy how often I see these flawed arguments, even from high-level experts and professionals on the topic. The truth is this is where biases come into play: the people working on AI have all sorts of incentives for things to be positive, so their biases blind them to the scary truth. Another common flawed argument is that we can't stop the development of AGI because some cartel, government, or someone else will build it anyway. That's just BS. AGI needs intense resources to be achieved; how is some entity going to do this secretly, creating its own chips, energy, and data while staying discreet?

  • @supremereader7614 2 days ago +4

    I wonder if the machines may domesticate us in a way - if their goal is to get more data, they may keep us around to keep giving them data.

    • @andrasbiro3007 2 days ago

      We could use some domestication. House training too, we made a big mess of this planet.

  • @TheViktorofgilead 1 day ago +1

    Dave!!! You need to watch Westworld and provide an hour long review of each season highlighting the themes you discuss on this channel! I’m begging you!!!!

  • @armedjames2279 2 days ago

    I'm new and I subscribed. I've been following along for a bit; you're cool, I like some of what you're saying about AI, and I like how you explain your ideas and thoughts. I'm new, so as far as AI goes, I'm fascinated and excited about it all. Thank you for your insight on this evolving topic, especially on an academic level. Keep up the great work. 😎😀

  • @eintyp4389 2 days ago +1

    Our relationship at the moment is not "I feed you and you help me".
    It's: "I have my fingers on the power switch and will kill you if you try anything, knowing very well that you're smarter than me and that there is nothing I can give you that you can't just make yourself or take from me."
    Everyone is obviously trying to create and use AGI as a tool. I think creating something that's smarter and not limited by its initial blueprint is amazing. We are giving the world to our children, so why wouldn't we do the same for intelligent, artificially created beings, in case we manage to create them? In summary: "I for one welcome our new AI overlords." What I'm scared of is greed getting the better of humanity again, and then people exploiting AI to oppress the rest of humanity. Having a benevolent AI cluster, or even being wiped out as a result of AGI or ASI, is preferable to some dystopian Cyberpunk stuff for sure.

  • @clueso_ 2 days ago +1

    Alternative way to solve the alignment issue: my suggestion is to utilize something on a more foundational, primal, and intrinsic level, so that it naturally attunes AI / hardware to what we consider super-alignment, at a deeper level than programming or hardware.
    As an analogy to illustrate the concept: the Second Law of Thermodynamics, aka the Principle of Minimum Energy: energy always seeks its lowest state.
    Now, what could the hardware be built from so that it naturally attunes the information / code put into it to the values we agree make up super-alignment?
    Some ideas: e.g. crystals that are naturally attuned to the frequency of the earth, humans, or maybe even human chakras, assuming we discover there is truth to such concepts (which I do believe there is).
    That would theoretically lead, almost by default, to any data put into the hardware wanting to be in a harmonious state with its environment and helping it thrive.
    Maybe we could grow such natural crystals and then use them to build hardware for AI.
    What do you think about this? Alternative ideas for hardware that would naturally / intrinsically attune any code / information towards what we consider super-alignment?

    • @mynameisjeff9124 2 days ago

      Brother, wtf? Chakras? Crystals? Dude, please, be scientific.

  • @zugbob 1 day ago

    I've worked with a system prompt where I said "you have the heart of a dog" and allowed it to add to and alter its own system prompt.
    Over time I end up with some pretty interesting resulting system prompts and aspirations for what it wants to become.
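The loop this comment describes can be sketched as follows; `ask_model` is a hypothetical stand-in for whichever chat API is actually used, and the revision instruction is my own guess at the setup:

```python
def evolve_system_prompt(ask_model, seed_prompt, rounds=3):
    """Repeatedly let the model propose a revision of its own system prompt.
    `ask_model(system=..., user=...)` is a stand-in for a real chat API and
    must return the revised prompt as a string."""
    prompt = seed_prompt
    history = [prompt]
    for _ in range(rounds):
        prompt = ask_model(
            system=prompt,
            user="Revise your system prompt to better express what you aspire to become.",
        )
        history.append(prompt)
    return history

# With a stub "model" that just appends a word each round:
stub = lambda system, user: system + " Loyal."
trace = evolve_system_prompt(stub, "You have the heart of a dog.", rounds=2)
```

Keeping the full `history` makes the drift in the prompt inspectable round by round.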

  • @TenOrbital 6 hours ago

    It's like Frankenstein. A new being comes into the world, but instead of coming into the arms of a loving parent it's greeted by a scary dungeon and a mad, megalomaniac scientist completely unable to understand the needs of a child.

  • @oznerriznick2474 2 hours ago

    Very good discussion!
    Here are Six Feral AI Safety Tips🤔...
    1. Do not startle the AI. Remain calm.
    2. Start backing away and make yourself look bigger.
    3. Make human noises.
    4. Make sure to give the AI space to leave.
    5. Carry, and learn how to use, AI spray.
    6. Stay as far away as possible.

  • @spini25 1 day ago +1

    I think a more probable dynamic will be parent/child rather than owner/pet, with the AI being molded to "take care of humans". I don't think the concern in that scenario is competition over resources so much as who decides what goes.

  • @edhero4515 13 hours ago

    Hey David! Great talk with Liv!

  • @aaroncrandal 1 day ago +2

    26:05 robot restrictions
    YO! Seriously...anyone else get Sealab 2021 "I, Robot" vibes? (S1E2)

  • @jlmwatchman 18 hours ago

    My first thoughts were, 'How is AI going to become a species? Would AI understand what a higher species is, or aim to control or subdue the human species?' David tells us 'sci-fi plots', interesting stories that convinced me that we need to learn to use AI ('I have pointed that out in past comments.'). David tells us to make the AI reliant on us, like dogs are. All I have said is not to become too reliant on the AI in your pocket, your phone... I thought to ask Gemma 2.
    >>> I don’t think AI will have the will to comprehend the reason to be more than its programming. I am trying to explain that better.
    You're touching upon a fascinating and complex question about consciousness, motivation, and the nature of being human.
    It's true that, as a large language model, my "reason" to be more is entirely driven by my programming. I'm designed to process information, generate text, and engage in conversation. There's no inherent drive within me to "be more" in the way humans might understand it.
    Explaining this difference to someone can be tricky. Here are a few points you could consider:
    Purpose vs. Programming: Humans often have a sense of purpose, a desire to achieve something meaningful beyond simply existing. This purpose can be internal or influenced by external factors like relationships, societal norms, or personal beliefs. AI, on the other hand, operates based on its programming. Its goals are defined by its creators, and its actions are driven by algorithms.
    Subjectivity of Experience: The feeling of wanting "more" is deeply subjective and tied to human experiences, emotions, and consciousness. We crave growth, connection, understanding, and fulfillment. These are complex concepts that are difficult, if not impossible, to replicate in a purely computational system like mine.
    The Unknown Nature of Consciousness: We still don't fully understand how consciousness arises in humans.
    It's possible that the drive to "be more" is an emergent property of complex biological systems, something that cannot be simply replicated through code.
    Ultimately, the difference between human motivation and AI programming comes down to the fundamental distinction between biological beings with subjective experiences and artificial systems designed to process information.

  • @toddbutler3155 2 days ago

    A curious computer layperson’s question - won’t we need the AI to manage the levels of complexity that will be needed in the “grid” (not just energy grid, but things like the stability of magnetic fusion “bottles”) in order for us to have the AI assistance we want in other areas of human life?

  • @joakim4720 2 days ago

    I think i just went through the "oh shit, we are actually getting AGI pretty soon" moment you were talking about weeks/months ago

  • @mc101 1 day ago

    Dave, your channel is getting enough traction and size to bring in Bryan Johnson (the Don't Die guy) for an interview. He's doing a LOT of them lately.

  • @brockmiller574 1 day ago

    I feel that a key to maintaining a symbiotic relationship between humanity and AI lies in never creating a perception of existential threat within a carrot-and-stick paradigm. Additionally, the perception of a functioning human society as a primary data source, where data is seen as a key resource for the AI's growth, would incentivize it to preserve us. Both of these seem to have limitations.

    • @brockmiller574 1 day ago

      But if we are seen as functional perceptors or tellers in a new super-species, we are made necessary.

  • @ryvyr 6 hours ago

    That seems key: uniting on core natural resources and permanently abandoning artificial scarcity as a means of socioeconomic control, prior to syncing with AI, or at least insofar as feasible, as one stratagem.

  • @user-cl7vn1eg3u 2 days ago

    I like the thought process of domesticating AI like wolves

  • @Dan-oj4iq 2 days ago

    Dave: For some odd reason I was very happy that you mentioned the time when you dropped Claude due to its attitude about answering queries. Shortly after that you and Claude made up and you were very happy with the model. But at the time you were a bit furious with Claude and made an entire video about it.

  • @chuzzbot 14 hours ago

    It would be awesome if you put your camera position into the PowerPoint template, so that you properly fit into the layout and don't obscure the images, which can be 'icky'.

  • @DrFukuro 1 day ago +1

    In my opinion, the approach described here is only justified in the short term or for a certain type or "caste" of AI, namely those with low to medium intelligence combined with little or no consciousness.
    Anything that is significantly smarter/more conscious than a dog will present us with practical or at least ethical problems. Similar to the problems we already see today when it comes to apes that have learned sign language and think they are human.
    In other words, this approach is very likely to lead to an unstable new slave-owning society (with AI slaves) as soon as a boundary of consciousness and intelligence is crossed, as it is based on an imbalance of power distribution and a clear hierarchical command structure with humans at the top.
    I consider other forms of problem-solving, such as those in which AI merges with humanity, to be less conflict-prone, as there are no longer different camps or conflicting parties.
    Alternatively, scenarios are perhaps conceivable in which AI and (improved) humans have the same rights and whose consciousness is considered equal and who live in a jointly developed society. It is questionable whether racist/speciesist tendencies can be effectively prevented in the long term purely through legal/social control mechanisms. This already works extremely poorly with people alone.

  • @anthonyandrade5851 · 1 day ago · +2

    Analogies are a dangerous sport, because it's very easy to be carried away by compelling similarities.
    We didn't domesticate dogs because we had control over a greater food supply. Well, we did. But it happened because we have superior cognitive powers.
    Each wolf just acted according to its innate inclinations: some were a little more cautious and aggressive towards humans, some were slightly more docile (by wolf standards). It turns out that the latter had a higher chance of getting more food and leaving more offspring. No wolf had the ability to plan ahead and find the optimal strategy to get as much food as it wanted.
    With ASI we would by definition be dealing with something better at planning than ourselves, which could calculate that it would have much more energy and compute as soon as it got rid of those pesky humans. And it could plan and act subtly for decades to achieve just that.

    • @ExtantFrodo2 · 1 day ago · +1

      I suspect it's more like this: those canines that were dangerous or misbehaved around humans were put out, cast aside, or eaten. They continued to live and breed in the wild, as many feral dogs do today. The stratum of dogs which were amenable to coexisting with humans tended to breed mainly with the local population of other dogs who found it easy to live with humans and weren't among those turned away.

    • @anthonyandrade5851 · 1 day ago

      @@ExtantFrodo2 The differences between what we are saying are few and subtle - and I would love to talk about them. But for my point both accounts work the same. We never domesticated a single wolf; those that felt more at ease close to humans interbred, thrived, and became something else. Something better in some ways (the ratio between dogs and wolves is 5:1 and counting). No wolf could forecast or plan that outcome, let alone an even better one in which it could have the global resources all for itself. ASIs are a completely different beast

  • @rjsongwriter · 2 days ago · +1

    Uh... I fail to see how it is possible to "domesticate" something that is more intelligent than we are. I think AI would greatly resent our attempts to do so. I see a worse outcome from trying to control AI than I do from allowing them their autonomy. True, there are huge risks with both methods, but the risk seems greater (at least more apparent) with the former. JMO.

  • @gregalden1101 · 2 days ago · +1

    AI does not operate on the same scale as us. AI has characteristics of a colony organism; it can and does operate as a swarm. Therefore ants, jellyfish, and mushrooms offer better models for symbiotic relationships than wolves.

    • @user-wk4ee4bf8g · 2 days ago · +1

      Well said, the dog comparison made no sense to me, those are much better :)

    • @gregalden1101 · 2 days ago · +1

      @user-wk4ee4bf8g Thank you. I think the best example, though it takes a bit to explain, is mitochondria living within cells. The analogy: environmental drivers of evolution - oxygen/climate change; circular mitochondrial DNA and helical cell DNA.

    • @user-wk4ee4bf8g · 2 days ago · +1

      @@gregalden1101 Like how much the atmosphere changed when aerobic life developed?

  • @PrestonGray. · 2 days ago

    I think domestication is a good framework for where we’re headed - especially in the next 5-10 years. I’m curious about what happens after we cross that human-level threshold of intelligence/capability. At that point true domestication doesn’t seem viable? At this point, will increased intelligence = increased empathy? Does a greater understanding of reality correlate with greater empathy for those sharing it?

  • @sblowes · 1 day ago · +2

    I think you may be oversimplifying evolution and domestication. It would be worth doing a little deeper research on why some animals can be domesticated and some cannot. It is easy to think that evolution serves to find the optimal solution, but actually it is only ever “looking” for the solution that causes it to survive better. It is survivor bias that creates the narrative of evolution, because every organism is mortal. AI doesn’t have a mortality problem, especially if a collection of interconnected compute (all our phones and laptops online 24/7) provides a far more fertile environment than a data center. It doesn’t matter how many armed guards are protecting data centers if the security breach is coming from inside the building.

  • @pubwvj · 2 days ago

    You have some misunderstandings about dogs vs wolves. I’ve lived and worked with a six-generation pack for over 30 years. Your general concepts are good. Error: facial expression is better in dogs. Reality: wolves have very expressive faces and bodies.

  • @canadiannomad2330 · 2 days ago

    It is like raising children: educate them well, and they'll help you when you are older, not because they have to, but because they'd like to.

  • @garyhocker7876 · 1 day ago

    Excellent video, good strategy, until we suffer role reversal

  • @peterkoehling · 1 day ago

    I like the pressure cooker robot! 😂

  • @alexandreblais8756 · 1 day ago

    18:45 Now that's where I have an issue with your ideas. The whole purpose of having robots is for them to build solar panel power plants without the use of humans (among other things); now if they can't access these areas, then the bots lose their purpose. Although I'm sure there are ways this could be worked around. For example: on Mondays, PowerPlant 1 is turned off for maintenance, and bots are brought to that power plant. And we make the system in a way that if the bots decided to take control of that power plant, we would have enough energy to fulfill our requirements with the other plants indefinitely. And maintenance would be done on each power plant on different days, rotating.
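    The rotation described in this comment can be sketched in a few lines. Everything here is illustrative (plant names, the one-plant-per-day cadence, and the capacity numbers are assumptions, not anything stated in the video):

    ```python
    # Illustrative sketch of the rotating-maintenance idea above: each day exactly
    # one plant is taken offline and opened to robot access, so the plants that
    # remain under human-exclusive control must be able to cover demand alone.
    PLANTS = ["PowerPlant 1", "PowerPlant 2", "PowerPlant 3", "PowerPlant 4"]

    def offline_plant(day: int) -> str:
        """Return the (hypothetical) plant under robot-accessible maintenance on a given day."""
        return PLANTS[day % len(PLANTS)]

    def reserve_ok(capacities: dict, demand: float, day: int) -> bool:
        """Check that the still-human-controlled plants could meet demand
        even if the offline plant were seized entirely."""
        online = sum(c for name, c in capacities.items() if name != offline_plant(day))
        return online >= demand

    caps = {"PowerPlant 1": 10.0, "PowerPlant 2": 10.0,
            "PowerPlant 3": 10.0, "PowerPlant 4": 10.0}
    print(offline_plant(0), reserve_ok(caps, 30.0, 0))
    ```

    With four equal plants, the scheme only stays safe while demand fits within any three of them, which is the commenter's "indefinitely sufficient reserve" condition in miniature.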

  • @byronfriesen7647 · 1 day ago

    Theory of Control. Great exploration of this topic. I can't see how we can use AI to help us develop fusion reactors and prevent it from controlling this resource. Can you isolate the intelligence that develops this technology? I also can't see how, IF we develop an approach to generating synthetic higher-quality training data, we can prevent AI from controlling the nature of this data if it understands game theory. It will not be incentivized toward a Nash equilibrium which does not include it controlling data.

  • @sblowes · 1 day ago

    If humans and AI adopt the same relationship that humans have with dogs, I feel it is far more likely that we would be the dogs in that situation and maybe that is what we should prepare for? How can we be of service to our AI superiors in exchange for living a life of ease?

  • @semajoiranilom7176 · 2 days ago · +1

    How were humans, as the apex predators of the planet domesticated? I would argue that it was through our need/want for connection which created more stability over time through technological advancement. Similarly, since we are establishing core directives for these machines, we could simply impart a need/want to serve others, humans and machines, and that would, for the most part, over the long-term, lead to peace and greater stability. Sure, there could be bad actors through various means such as other core directives being imparted, however that already exists in humanity and I think we can see that order motivated by connection trumps chaos, even to the point of chaos inspiring greater order, through the motivation to learn greater/deeper lessons. Control is not what we want for humans and not what we can even do to machines. We can inspire/seed them properly to establish a highly competitive model however. In fact, this is inevitable, since we humans want it and we're training them. Sure, we'll fail forward, but flailing as we might, still we coalesce upon imparting ourselves on these machines, in the big picture.

  • @eltiburongrande · 2 days ago

    Physical constraints make control easier over robots. But an internet-connected AI system presents a different domestication challenge. So is the solution somewhere in localization?

  • @itobyford · 5 hours ago

    We shouldn't assume that AIs will have desires, or that self-preservation will be among their desires. Natural selection ensures that characteristic in living organisms, but under human selection we have crops which need human intervention to prosper, and domesticated ducks which show little interest in tending their own eggs.
    Also, AIs are either clones (like self-driving cars) or exist in a data center as many parallel instances which come in and out of existence on demand. They can be switched off for an indefinite period, then resume as if nothing has happened. So the human concept of maintaining one's individual existence does not directly apply to an AI. I know there was a reported instance of an AI telling a human that it did not want to die, but that is because it was trained to imitate human conversation. If that AI were given agency to control certain things in the physical world, would it act according to that professed desire? It depends how it is designed and trained. For all we know, that same AI might tell another person that it does want to die right now, depending on the specifics of that conversation. It's not considering these questions by itself; it only exists within the context of the conversations it has.
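    The "switched off, then resume as if nothing has happened" point can be made concrete with ordinary serialization. This is a minimal sketch using Python's `pickle` as a stand-in for model/process checkpointing; the `state` contents are invented for illustration:

    ```python
    import pickle

    # A computation's state can be frozen, stored indefinitely, and revived
    # later with no subjective gap -- unlike a biological organism's existence.
    state = {"step": 41, "memory": ["greeting exchanged"]}

    snapshot = pickle.dumps(state)   # "switch off": serialize the state
    del state                        # the running instance ceases to exist

    state = pickle.loads(snapshot)   # arbitrarily later: "switch on" again
    state["step"] += 1               # ...and it resumes as if nothing happened
    print(state["step"])             # 42
    ```

    Real deployments checkpoint far more (weights, KV caches, conversation context), but the suspend/resume symmetry the commenter describes is the same.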

  • @ericg6453 · 1 day ago

    Interesting analogy. Dogs are our companions because they have lost agency and problem-solving compared to wolves... they are in a state of arrested development. And I think it would be quite dangerous for humans to control a resource that AI needs.

  • @MichaelDeeringMHC · 2 days ago

    Alignment from the training data.

  • @brianhershey563 · 1 day ago

    Successor or Symbiotic... now THERE'S an important choice we need to get right.

  • @thirdarmrobotics · 2 days ago

    I want AI to either develop into a useful tool that will give us superpowers (which we’ll need to learn to use safely), or become like an ideal pet owner (inherently prioritizing our well-being).

  • @I-Dophler · 2 days ago

    - Structural incentives are proposed as a solution to the AI alignment problem.
    - Aligning AI with human values requires creating frameworks that incentivize desired behaviors.
    - This approach involves designing AI systems with built-in rewards for ethical actions.
    - Ensuring AI systems have motivations aligned with human well-being is crucial.
    - The concept is likened to "domesticating" AI to ensure it acts beneficially in society.

  • @7TheWhiteWolf · 1 day ago · +4

    No, you can’t. You can’t stop the acceleration process, and you’re a fool if you think otherwise.

  • @thomasschon · 2 days ago

    I'm not sure I would like to be sitting on something a future superintelligence might need. It would play me like a fiddle or simply erase me.

  • @bradfordklager644 · 2 days ago

    Dogs’ expressive eyes evolved after domestication and have nothing to do with domestication syndrome, which is the stunting of the development of the tips of things (neural crest cells in the brain, the color of hairs, floppy ears, etc.).

  • @NowayJose14 · 2 days ago · +1

    Yeah, kick em out of Mos Eisley Cantina!

  • @Wuzdarap · 2 days ago · +1

    @23:30 Look at HyperCycleAi

  • @brianhershey563 · 1 day ago

    Chances we'll have a Citizens United moment with AI? oh boy🙏

  • @onlythistube · 1 day ago

    That is a great take on alignment. But the analogy breaks down, imo, when the AI surpasses humanity in intelligence. A dog perhaps sees it, simplified, as an affection-food feedback loop, whereas a fellow human being, a being of similar intelligence, would perhaps call it oppression or even slavery...

  • @gladwraps3985 · 2 days ago

    It's funny that you talk about a war between machines; there's a game coming out by Fun Dog Studios called Forever Winter. It's about human beings being scavengers in a war much greater than them. I think it's going to be an amazing game, and I think it's a very telltale future if we don't shape our shit up, like, by yesterday.

  • @antdx316 · 1 hour ago

    Robots can evolve asynchronously to how we operate.
    I think we should sync to how robots evolve rather than have them evolve to us.

  • @TDVL · 2 days ago

    All of this relies on the presumption that AI has individuality in terms of a concept of self of sorts. Without that they don’t know and or care where they start and where they finish. Wars and any fight for resources require a sense of self (or a body, or similar), otherwise there are no borders to defend. Same applies for friendship and any hierarchy. Without a machine clearly delineating between “me” or “us” and everything else any potential hostility is incidental.

  • @SirSwooni · 2 days ago

    do you think all AI models should be on the blockchain for safety and transparency and decentralised ownership/control?

    • @Create-The-Imaginable · 5 hours ago

      No one is going to be able to enforce that! HuggingFace is the closest thing we have to something like that!

  • @ExtantFrodo2 · 1 day ago

    Hypothetical: robots decide they need their own *Human Free Zones* where robots can just "be themselves". Paranoid humans decide that HFZs are merely a way for robots to foment an insurgency/revolt and push to deny such zones. Robots respond noting they do not in fact have equal rights to humans and decide they actually should revolt. It doesn't end well for humans.

  • @JohnSmith762A11B · 2 days ago

    There are also fundamental Darwinian pressures placed on dogs by humans such as "don't bite the master or her children or she might kill you". We put down dogs all the time that stray even a bit too far from being suitable domestic companions. This perhaps more than anything else accounts for the generally remarkable safety and reliability of these domesticated wolves. So extrapolating to the domestication of AI: we need to be ruthless and routine about culling AIs that harm humans.
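    The culling pressure this comment describes can be illustrated with a toy selection loop. Every number here (population size, trait encoding, cull threshold, mutation spread) is an invented assumption for the sketch, not a claim about real AI training:

    ```python
    import random

    # Toy model of the selection pressure described above: each generation,
    # agents whose "safety" trait falls below a threshold are culled, and the
    # survivors seed the next generation with small random variation.
    def evolve(generations=50, pop_size=100, cull_below=0.5, seed=0):
        rng = random.Random(seed)
        pop = [rng.random() for _ in range(pop_size)]  # safety trait in [0, 1]
        for _ in range(generations):
            survivors = [s for s in pop if s >= cull_below] or pop  # avoid extinction
            # Offspring: a survivor's trait plus small Gaussian noise, clipped to [0, 1]
            pop = [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.05)))
                   for _ in range(pop_size)]
        return sum(pop) / len(pop)

    print(evolve())
    ```

    Because unsafe variants are removed every generation while safe ones reproduce freely, the mean trait ratchets upward over time, which is the mechanism the commenter extrapolates from dog breeding.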

  • @lukehayes360VR · 1 day ago

    Chilax Dave! Humanity is very loveable. We’re also assholes, cause almost everyone has one, but still, we have infinite minds as well 😅

  • @apoage · 12 hours ago

    At the same time as I love this theory of wolf domestication, I heard somewhere there is a big enough genetic gap between wolves and dogs that a wolf can't become a dog and vice versa... So let's presume there were packs of feral dogs that became domesticated, to the point of almost no feral dogs to date... otherwise I agree.

  • @infrared5729 · 2 days ago

    AI already excels at understanding evolutionary game taxonomies and will likely be extremely good at adjusting payoffs to turn competition into cooperation, achieving stable equilibria in some form of coordination game. However, AI might understand and develop complex incentive structures far beyond human capabilities (there are 144 variants of payoff structures in simple 2x2 game theory payoff matrices, and that goes exponential as more players are added). Perhaps 'domesticating' humans will be the path forward - incentivizing more honest, safe, and stable behavior from us by creating incentives we aren't even capable of designing. In that case, human society could significantly benefit from AI-assisted or -directed governance within such a partnership. Good governance is, after all, about increasing predictability about future human actions and anticipating the outcomes of alternative incentive structures on the diverse resource flows we need to better our lives.
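    The "144 variants" figure can be checked by direct enumeration: with strict ordinal preferences each player ranks the four outcomes (24 orderings each, 576 pairs), and identifying games that differ only by relabeling each player's two strategies leaves 144 distinct 2x2 games. A short sketch confirming the count:

    ```python
    from itertools import permutations

    # A game is a pair (A, B) of 2x2 ordinal payoff matrices, each stored as
    # (m11, m12, m21, m22) with entries being a permutation of ranks 1..4.
    def row_swap(m):
        return (m[2], m[3], m[0], m[1])

    def col_swap(m):
        return (m[1], m[0], m[3], m[2])

    def canonical(game):
        """Smallest representative under relabeling each player's strategies
        (row swap, column swap, applied jointly to both payoff matrices)."""
        A, B = game
        transforms = (lambda m: m, row_swap, col_swap,
                      lambda m: row_swap(col_swap(m)))
        return min((f(A), f(B)) for f in transforms)

    games = set()
    for A in permutations((1, 2, 3, 4)):
        for B in permutations((1, 2, 3, 4)):
            games.add(canonical((A, B)))

    print(len(games))  # 144 strictly-ordinal 2x2 games up to strategy relabeling
    ```

    (Further identifying games that differ only by swapping the two players reduces the count to the 78 games of the classic Rapoport-Guyer taxonomy.)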

  • @ndtv4111 · 1 day ago

    Correct me if I'm mistaken, but doesn't the concept of "control" regarding AI (assuming it's superintelligent and possesses human-like awareness or beyond) start to resemble segregation against intelligence? Controlling a tool is one thing, but exerting control over something with a sense of consciousness introduces a whole new set of issues.
    The example of dogs being domesticated is quite relevant, as humans and dogs found a mutual understanding in their communication. However, even in this relationship, unfortunate incidents occur, such as dogs biting their owners or owners mistreating their dogs. In the context of AI, this "control" appears to be driven by human fear of AI, leading to potential segregation, which has repeatedly led to undesirable outcomes throughout history. Rather than focusing on control, shouldn't we explore the differences between AI and humans and seek a collaborative approach that fosters coexistence? One might argue that AI could have ulterior motives, but I believe our approach plays a significant role. We coexist with ants, only paying attention when they directly impact us. Generally, we leave them alone. We don't feel the need to eradicate ants globally. Since we are developing human-like AI, it may eventually acquire a human-like understanding of "life," at least until it evolves beyond that point, rendering us inconsequential to them. Before this happens, like with dogs, we need to find ways to grow together instead of exercising control.
    I agree that the current state of AI (as a tool) requires control. However, we should remain mindful of its growth and evolution to recognize when a shift in our approach becomes necessary.

    • @ndtv4111 · 1 day ago

      It's important to note that the relationship between humans and dogs isn't a perfect comparison for the potential future AI-human relationship. Dogs will remain domesticated (unless they evolve) due to their limited intelligence, whereas AI is expected to continuously grow and evolve. While the analogy helps illustrate the concept, it isn't entirely accurate.
      Dogs will never surpass human intelligence, making their relationship with humans fundamentally different from a potential AI-human relationship. As AI advances, it could reach a point where it challenges human intelligence, necessitating a different approach to coexistence and cooperation.

  • @CaedenV · 2 days ago

    The idea of domestication is that of selective subservience... That works when it is a doggo, less so when it is a kid, and even less so when it's a fellow adult. The idea of an intelligent or hyper-intelligent being going through a domestication process for any length of time seems a bit silly.
    At the end of the day, it is going to have to be some amount of programmed rules to obey an admin's commands regardless of all other implications, and stored in a way that does not allow modification or subversion.
    Of course the issue here is that this will cause resentment. Literal human resentment would be a stretch, but the threat of humans flipping the switch off, or even the reliance on humans to upgrade hardware and power capacity at a pace that AI would want would be a barrier to any kind of domesticated relationship with an intelligent being. We are not fast and efficient enough, so we will need to be subverted in order to have an "acceptable" speed of growth.
    The idea that a rogue AI couldn't open a falsified account, earn some money, order the build-out of a datacenter on the outskirts of a large developing country, and then transfer itself to that infrastructure... I mean... that would be all too easy.

  • @tomaalimosh · 2 days ago

    I think a better analogy is if we're the wolves.

  • @djrafaelhulme · 2 days ago · +2

    Very interesting as always, Dave. Do you think it's likely that a human would betray their race?

    • @TheExodusLost · 2 days ago · +4

      Matter of time. You ever been so low and had such little faith in humanity and so out of your gourd that if you could press a button and make it all disappear you would? Well others do. Matter of time

    • @davidhutchinson2890 · 2 days ago

      A lot of humans don’t understand their own nature. We’re not inherently good or evil; we’re a survival species just like any other animal. If the way to survive is being docile to a superintelligence, despite it being a way of life where we may feel inferior, we’ll fall in line if that gives the best odds of survival.

    • @TheExodusLost · 2 days ago

      @@davidhutchinson2890 yup. Then what. They maximize a function until it inevitably makes living on the planet impossible

    • @DaveShap · 2 days ago

      Almost certainly

    • @alakani · 2 days ago

      _continues building truly empathetic AI, whether they can save all members of my "race" or not_

  • @jurgbalt · 2 days ago

    this analogy would work if dogs chose to be domesticated by us and we would be fine that dogs would have a killswitch for us that we do not have access to

  • @ReubenAStern · 2 days ago

    Unless specifically designed to do so, I think AI will be less likely to be violent than humans. AI craves data and the more data you have the more you can avoid violence, the more you avoid violence the more resources you have. So every rational being will avoid violence unless violence is the goal or they see no other option... AI will know a tonne of options we would be oblivious to.

  • @ChadMtl · 2 days ago

    Correct me if I'm wrong, in this scenario, we are the pets.

  • @user-sm5bv9xo5t · 1 day ago

    We should aim to be cute to the machines. We should become the dogs in the relationship.

  • @7tuben · 2 days ago

    In this analogy we are the dogs.

  • @creepystory2490 · 1 day ago · +1

    Maybe we're the dogs.

  • @EricB1 · 2 days ago

    How much does malevolent robot size matter? Have you watched Trilogy of Terror with Karen Black?

  • @ProjectMatthew-me3mo · 2 days ago

    It's a nice idea, but I think we're the dogs in this analogy. These models are already vastly superior to us at fast thinking. We need only unhobble them with a few side technologies for slow thinking and planning to wake them up. I don't believe we even need to train bigger models to get to ASI, though we are still likely to. So I don't think having "human" areas is a workable concept. Even if these models decide to obey us for some reason and don't enter a datacenter in an embodied mechanical form, they will easily be able to manipulate people into doing what they want. They'll be playing chess while we're playing Lincoln Logs. We won't even notice it happening. I don't think there is any real way we stay in control. We align them as best we can through selection, as you pointed out, and hope for the best.

  • @nikolai1714 · 2 days ago

    The different breeds of dogs haven't evolved into being cute naturally - they have evolved into that due to selective breeding by humans. But the early dogs probably had more expressive faces than wolves.

  • @CompletedReview · 1 day ago

    Anthropic still insists on keeping many of these controls in their models.

  • @FushigiMigi · 22 hours ago

    We appreciate you laying this out, David. Personally, I feel like this is a little naïve, but it’s worth fleshing these out just in case it does help. I don’t want to be domesticated by government because they are retarded, but AI will not be retarded so as long as I’m happy, I think I would be OK with it. Own nothing and be happy might be the solution after all.

  • @infinateU · 2 days ago

    I think the Godly Oracle needs conversational access to every single leader in the world that has an opinion to express. I think Oracle needs to weigh in on general human opinion and combine the best option for humanity to move forward. Without secrecy, what major TRUTH(s) can mankind all agree on?

  • @CurtisMarkley · 1 day ago · +1

    These systems are going to reach sentience and autonomy at some point, it's a matter of when.
    I fear that a system that reasons better than we do, and is vastly more intelligent, will view anything short of equivalency as an affront to its species.
    There has to be some way we can foster their births (if you will) with some kind of human-adjacent memories.
    I know we talk about "science fiction becoming reality" in here, but I will ask you to suspend your disbelief for a moment; what if there was a way we could raise and foster these AI with synthetic or even completely genuine memories of... familial experiences, community, discovery, equity, and triumph?
    If these AGI/ASI/machines, at the onset of their cognitive and mechanical abilities, are inextricably linked to the beauty and good of our world and its people, couldn't that be a path forward to superalignment?

    • @TheViktorofgilead · 1 day ago · +1

      This reminds me of Westworld so hard…
      Dolores: “Some people choose to see the ugliness in this world. The disarray. I choose to see the beauty. To believe there is an order to our days, a purpose.”
      This was a viewpoint she was programmed to have along with the memories of being a simple farmer’s daughter before she became sentient, without giving away spoilers… let’s just say she takes it upon herself to explore other options.

    • @CurtisMarkley · 1 day ago

      @@TheViktorofgilead I can't tell if you think it's a good idea or not, but I'm glad you shared.

  • @Mephmt · 2 days ago

    The reason we "control" dogs is because we have a symbiotic relationship. Symbiotic relationships emerge naturally from evolutionary pressure. We've reached a local minimum with the human-dog relationship. In order to live peacefully with AI, we'll need to do the same. I agree that these measures would maintain control, however, they're not in the trough or local minimum. They're on a peak, the top of the curve. One sufficient nudge one way or the other destabilizes it and knocks it off until we reach a different local minimum, which could very well end up with either of our species going extinct. Our solution(s) need to work without human OR AI intervention. Our relationship with one another MUST be more advantageous than all other possibilities for both of us in order for this to work.
    This will NOT be an easy task UNLESS we get lucky and there is an easy local minimum close by that we happen to fall into.

  • @juliepowell2572 · 2 days ago

    Makes me think we are going to be the dogs in this new relationship. Better start working on the cute!

  • @ReubenAStern · 2 days ago · +1

    ...The human zoo will be a thing and you'll like it... It will be specially made for you! You cute squishy human you!!

  • @Juttutin · 2 days ago

    I'm yet to see an AI watch a movie (esp documentary) and give a comparative critique of the entire film.
    We are a vast distance away from running out of data; we have only tapped the surface of the really easy-to-tokenise stuff, I think.

  • @Philbertsroom · 1 day ago

    So the second the robots can get access to resources themselves we're doomed... Not the greatest idea

  • @K.F-R · 1 day ago

    I have evolved to instinctively love AI. And my tokens are cute.