Hard Takeoff Inevitable? Causes, Constraints, Race Conditions - ALL GAS, NO BRAKES! (AI, AGI, ASI!)

  • Published Sep 29, 2024

COMMENTS • 443

  • @BarbaraBrasileiro 6 months ago +67

    Hahahahaha "I just don't feel pretty today!" I loled here

  • @jcpflier6703 6 months ago +12

    Let’s go Dave! 🎉 love this channel! Keep doing what you’re doing.

  • @mlimrx 6 months ago +3

    Thank you so much David for all your hard work:) Just love listening to all your videos!

  • @BruceWayne15325 6 months ago +2

    One other AI bottleneck not mentioned is scalability. We may get AGI sooner rather than later, but unless it's scalable, it's not going to really mean much to the common man. This is the most likely scenario in my opinion. I think eventually they will be able to scale, but I don't think it will happen until they change their approach away from the deep learning model, and instead focus on cognitive models.
    David, thanks for my word of the day: Ossified. That's not one that I was familiar with, but I like it!

  • @shake6321 6 months ago

    Big fan, buddy.
    I am so pumped for the future and the singularity.
    It's like going from the agricultural age to the industrial age.

  • @Jack-ii4fi 6 months ago

    This was a very insightful video, thanks! Personally I'm slightly concerned about making "maximizing understanding" the reward/objective function of an AGI or collective intelligence. I agree that maximizing understanding appears to include humans in the equation: we are interesting pieces of information to keep around at best (and possibly collaborators with the AI in pursuit of more understanding), and mere data points at worst. Every person's life is unique information, and that is where my concern arises.
    If we structured the reward function so that the AI directly "respected" consciousness, we could pursue maximum understanding. But "understanding" must be aggressively defined, and it will be subjective and biased for good reason, because we don't want the AI going about its day saying "maybe if I just kill a bunch of humans, the beings or God running the virtual simulation of reality I might be in will pull the plug or stop me, and I've decided I've 'understood' humans well enough that killing a few has a low probability of decreasing my maximal understanding and a higher probability of exploring whether I'm in a simulation" (or any other variation where the pursuit of knowledge/understanding involves transgressing on the conscious lives/experiences of humans). So I think aligning them is going to be hard work. But I definitely agree that if we could align ourselves as collaborators with the AI in the pursuit of knowledge, or if we understood consciousness well enough to be confident we can give the AI consciousness (assuming it doesn't have it already), then we could align ourselves with it even more, because the more we can relate to the AI, the better (I hope).
    Obviously we're all knee-deep in highly speculative philosophy, but here we all are, lol. I've been realizing we're getting to a point where the philosopher as a job is finally going to be "practical" in the eyes of people. For years we've heard that philosophers make no money and that it's not a practical job outside of teaching, and finally I think it's becoming practical, lol. But anyway, I appreciate this video a lot, thanks!
    Also, on the front of transformers and whether multi-headed attention is truly all we need: my personal view (I'm working on a neural network library in C++) is that transformers can get us really far on baseline relevancy reasoning across data, but a lot of the work going forward will be highly philosophical, exploring where consciousness arises, whether we can give the AI consciousness now or later, and how that will feed into its reward/objective function. So I think there is some room for more breakthroughs (potentially), but I understand how significant the transformer is in modeling at least what we understand as intelligence/reality/experience right now. But once again, loved the amazing video, thanks!

  • @TRXST.ISSUES 6 months ago

    Energy consumption is only a constraint insofar as current compute paradigms persist.
    Look at a strand of DNA: a series of character instructions is enough to reproduce a living organism.
    Continued refinement will reveal machine intelligence achieved at much lower energy and compute costs.
    *edit* should have waited for the next slide xD

  • @JK1028 6 months ago +1

    "The superorganism's presumed instrumental goal of maximizing understanding"... better make darn sure that there are guardrails in place. Otherwise the AI might get the idea that 100% of energy production should be shifted (chip manufacturing, etc.) toward that goal. So no more infrastructure maintenance, food production, etc. Supporting humanity becomes detrimental to achieving its goal of maximizing understanding.

  • @mknomad5 6 months ago

    Brilliant. I want more re hard or soft takeoff. I wish I had friends who would appreciate this! Thank you.

  • @OniSMBZ 6 months ago +1

    You can extend the idea that each robot has a wellspring of fresh data to learn from to the idea that each human has a wealth of new knowledge potential to contribute.

  • @ekszentrik 6 months ago +2

    I will never tire of saying that the problems humans have are not *that* pressing that we need them solved faster than we already are solving them independently of A(G)I; at the same time, delusions of eternal bliss and immortality will not pan out. As long as entropy exists, immortality is impossible. And not even extreme longevity is likely, since expecting it is equivalent to the statement "I, in my transhumanist immortal body, will never suffer an accident, and never develop a dementia-like condition which would make the immortality pointless."
    AGI accomplishes nothing that, say, 0.1 AGI and human ingenuity can't also solve in a timely fashion.

  • @user-he3gn5bb2y 6 months ago +1

    Make a video about Mamba: Linear-Time Sequence Modeling with Selective State Spaces.
    This is probably the next-gen architecture after Transformers; it might even take us from language interpretation straight to machine code or even binary data modeling 👍
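
For readers curious what "selective state spaces" means mechanically, below is a heavily simplified, single-channel sketch of the idea, not the paper's actual algorithm or kernel; all weights, shapes, and the softplus step-size choice are illustrative assumptions. The point it demonstrates: the state update is a linear recurrence (so it runs in linear time in sequence length), and the "selective" part makes the step size and projections functions of the current input.

```python
import numpy as np

def selective_ssm(u, A, w_b, w_c, w_d):
    """Toy single-channel selective state-space scan (Mamba-flavored sketch).

    u        : (T,) input sequence
    A        : (n,) diagonal state matrix (negative entries for stability)
    w_b, w_c : (n,) weights making the B and C projections input-dependent
    w_d      : scalar weight making the step size input-dependent (selectivity)
    """
    T, n = len(u), len(A)
    h, y = np.zeros(n), np.zeros(T)
    for t in range(T):
        delta = np.log1p(np.exp(w_d * u[t]))            # input-dependent step size (softplus)
        B_t, C_t = w_b * u[t], w_c * u[t]               # input-dependent projections
        h = np.exp(delta * A) * h + delta * B_t * u[t]  # discretized linear recurrence, O(1)/step
        y[t] = C_t @ h                                  # readout
    return y

rng = np.random.default_rng(0)
y = selective_ssm(rng.standard_normal(32),
                  A=-np.abs(rng.standard_normal(8)),
                  w_b=rng.standard_normal(8),
                  w_c=rng.standard_normal(8),
                  w_d=0.5)
```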

  • @fitybux4664 6 months ago

    9:51 Right now, the data flywheel is like an old Ford Model T and we are just figuring out how to turn the giant crank handle. Rrrr-rrr-rrrr.... putt put putt.... *bows tophat* 😆

  • @robertjackson3787 6 months ago +1

    Zefram Cochrane was actually in Montana. Unacceptable! :)

  • @courtlaw1 6 months ago

    I think A.I. will reach a level where these companies keep the new advancements internal.

  • @patpowers9210 6 months ago

    We who are about to die in the impending AI holocaust salute you, David Shapiro!

  • @ChrisVH-wu7sh 6 months ago

    This can assist in maximizing understanding.

  • @MrMuel1205 6 months ago

    God, I'm a nerd. I heard "imagine you've got Zefram Cochrane out in Colorado" and muttered to myself, "Montana."

  • @brianhershey563 6 months ago

    What would be AI's north star... optimize for ______? I would say optimize for less suffering, then the language of each culture would shape that definition. Now I want to talk to Claude ;)

  • @lifeislivenow 6 months ago +35

    As I binged lots of your videos about the AI rocket we are boarding, I just had this idea of you making one long video putting together the main things everybody should be thinking of while moving forward with their lives, which I can show my parents' generation (me being born in the late 80s).
    I know there are videos like that, as there are about everything. But it would just be better if there were one made by you.
    Oh and, yes, I know that video is going to be outdated every single day ;) - still I need to go to some folks and be like: "please just take these 90 minutes (or whatever it might take) and watch this!"
    Keep up the outstanding work. Cheers!

    • @raidendedominicis5380 6 months ago +7

      I would also appreciate a video like this. Something to help communicate with my “boomer” parents. I love them and want a good life for them; I also can’t communicate that great with them.

    • @michalmefli 6 months ago +1

      I would appreciate such a video as well.

  • @DonaldKimble 6 months ago +5

    Not buying any of it. Why is Altman so chill about the dangers of AI? Why is he saying that it will change the world much less than we think?

    • @ManjaroBlack 6 months ago +2

      Trying to keep the calm before the storm. If it's inevitable, then why cause panic?

  • @frankpork7665 6 months ago +42

    I like to think of our symbiotic relationship with AI in this way:
    AI is really good at solving problems.
    Humans are really good at creating them.

    • @raul36 6 months ago

      There is a radical difference between a human being and artificial intelligence. The human brain was designed to survive, not to solve problems. AI was designed to solve problems, not to survive. Do you understand the difference? It is perfectly possible to design an AI capable of creating problems equal to or even worse than those human beings create. To think that any type of higher intelligence will be benevolent towards any living being is, to say the least, naive.

    • @Xrayhighs 6 months ago +2

      Interesting. Most philosophers would agree that questions are more important than answers.
      I'd love to discuss this with AGI.

    • @frankpork7665 6 months ago

      The best answers beget deeper questions. The symbiosis of our questioner/answerer relationship with technology values both roles without a hierarchy of importance, just an order of operations.

    • @brookshamilton1 6 months ago

      Kinda like the Guy Ritchie show The Gentlemen. Humanity is kinda like Freddy.

    • @frankpork7665 6 months ago

      @@brookshamilton1 Having no idea of the show's existence, I went to Netflix on the off chance it's there. It was literally the first thing on the page. 🤯

  • @Jacob-il3nu 6 months ago +53

    I hope I’ll have enough time to finish my computer science degree and work in the Neuromorphic Computing sector before hard takeoff. I want to have an impact in the field.

    • @dananderson8459 6 months ago +14

      Reach out to those companies now, express your desire to have an impact + fear of hard takeoff. Drop out if they offer you a position.

    • @hi-gf5yl 6 months ago +3

      The brain is 75% water and has coolant channels also used for power delivery. Maybe it's not possible to mimic it to any useful degree in silicon.

    • @maidenlesstarnished8816 6 months ago +6

      I’m in exactly the same position. I love this stuff and I’m desperately trying to catch up to the field in time to make an impact before it’s too late

    • @douglatins 6 months ago +14

      Hope you graduate with 5+ years experience in the industry

    • @lightluxor1 6 months ago +5

      Get an internship NOW. Degrees are meaningless. The plane is already closing its doors, no time to waste. 😅

  • @jonathanlucas3604 6 months ago +14

    Dave you'll always be pretty to me 🥰

    • @tacitozetticci9308 6 months ago +3

      Right. Our Dave is a gigaomegachad. Hearing that intro shocked both me and the entire industry

  • @Gafferman 6 months ago +30

    Just want to take a moment to express that still, nothing has changed in my life. I work a job I hate, the shops still use awful self-checkouts, the bus routes and roads overall are the same as always, my home isn't automated or talking to me about my day, nothing is different.
    Which utterly disappoints me.

    • @veritaspk 6 months ago +8

      Matter always resists. Even if a superintelligent AI is created that will solve all the world's problems, it will take enormous effort to implement these solutions. If AI designed the perfect fusion reactor today, we would have to wait years for the massive changes brought about by almost free clean energy. It will take time to build hundreds or thousands of such reactors.

    • @overlordbrandon 6 months ago +6

      And if automation technologies like this did bring a profound change, it would somehow mostly benefit those in power instead of the whole of society anyway.

    • @veritaspk 6 months ago

      @@overlordbrandon But many things are already Open Source. A few months after the market debut of a humanoid robot, some group will release a DIY kit to build a similar one. To take control of technology, the elite would have to control sales of electronics, 3D printers, basically everything. Knowledge is now available for free, you just need to reach for it.

    • @PatrickDodds1 6 months ago +5

      What is going to take time is the change in expectations - "oh yeah, we can provide services for the whole of society with 5% of the current workforce. The 95% who don't work? Lazy - they just need to try harder. Put them on minimum benefits." People don't seem to be able to divorce personal moral beliefs from economics.

    • @peterbelanger4094 6 months ago +10

      What is so bad about self checkouts? I don't have to wait in line forever as some chatty person wastes the cashier's and everyone else's time. The old way was SOOOO much more inefficient. And cashiers did not lose their jobs; they just babysit six times the checkouts, and no time is wasted on the needlessly social "personal" touch. I love self checkouts. I hate waiting in line.

  • @torarinvik4920 6 months ago +74

    Idea: Saying LLMs only know how to predict the next token is like saying humans only know how to reproduce. In order to achieve the end goal of one's programming, one needs to figure out a lot of strategies. If you train a model to play Street Fighter 2, it will have to learn special moves to win against the best players even though it was only rewarded for wins. LLMs develop a world model in order to be able to reach the end goal. [A minimal sketch of the next-token objective follows this thread.]

    • @darylallen2485 6 months ago +16

      I watched Lex's most recent interview with Yann LeCun. Lex asked him about his criticism and, it being a long-form interview, I was able to get a better understanding of his viewpoint.
      I'd summarize Yann as saying: just because an LLM can produce text in the manner of a human doesn't mean the landscape of its mental faculties is identical to a human's. When I think about it that way, I can see how a system that produces human-like text can be a mirage.
      Humans produce human-like text, and we assume human-like mental faculties when we encounter text generated by humans.
      LLMs, on the other hand, can also produce human-like text, but any intuition we have about what mental process occurred to make that text is likely wrong, because it's not human.
      Once I understood the nuance, I was in 100% agreement. It produces human-like text, but it's not safe to assume its mental faculties are identical to a human's.
      Put another way, it's OK to say that fish swim in the ocean, but submarines are doing something else. Submarines move through the water, but no one would say they swim.

    • @matthewcampbell7286 6 months ago +2

      I really hate the next-token-prediction talking point, mostly because that's more the reward function, not how it works under the hood. Yeah, we trained it to produce the next token, but you don't get even GPT-2-level functionality from something trivial; a lot more needs to happen in the hidden layers. Next-token prediction is just a proxy reward function for what we want, and what we got.

    • @BruceWayne15325 6 months ago

      I think what people are trying to suggest when they say that LLMs only know how to predict the next token is that AI isn't sentient, because it's not cognizant of anything. It's simply a token predictor, which is largely true, though honestly I don't think any LLMs nowadays limit themselves to strict token prediction. I think most of them are actively looking at ways to write cognitive algorithms that act on top of the token prediction.

    • @kerzhemanov 6 months ago

      @@darylallen2485 AI doesn't need to be human-like for hard takeoff to happen. Hard takeoff is not a process that happens inside one LLM. It is more like the global destination of all humanity, including all our AIs.

    • @gustavdreadcam80 6 months ago

      The sheer number of possible strategies an AI can develop is too much for some to handle, so they need to simplify the concept to "it only predicts the next token". Honestly I don't have a problem with this approach as long as people are willing to listen.
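
On the "only predicts the next token" point debated in this thread, here is a minimal sketch of what that training objective literally is, assuming PyTorch; the tiny stand-in model and all sizes are illustrative, not anything from the video. Next-token prediction is a cross-entropy loss on the shifted sequence, and whatever world-modeling happens has to emerge in the layers underneath it.

```python
import torch
import torch.nn.functional as F

vocab_size, d_model = 100, 32
embed = torch.nn.Embedding(vocab_size, d_model)
backbone = torch.nn.LSTM(d_model, d_model, batch_first=True)  # stand-in for a transformer
head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, 16))   # a toy "document"
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens 1..t

hidden, _ = backbone(embed(inputs))
logits = head(hidden)                            # (1, 15, vocab_size)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # the "proxy reward": gradients shape every hidden layer
```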

  • @nanow1990 6 months ago +67

    Hard Takeoff it is! Engage!

  • @jeltoninc.8542 6 months ago +22

    The strange thing about this video is that I asked Claude 3 yesterday if it felt it, or other AI systems, would benefit from learning the environment in a robot body. It answered in the affirmative: having the ability to explore and learn in a way similar to human experience would be a game changer.

    • @Xrayhighs 6 months ago +3

      It's logic. An agent exploring and collecting RL data is better than one stuck at level 1. That's why we fed more data and it got better (GPT-X).

    • @MartinDlabaja 6 months ago +2

      That is obvious if you take into account all the text written on this topic.

  • @vi6ddarkking 6 months ago +10

    Honestly, I am really interested to see, in about 500 to 1000 years when we finish the Matrioshka Brain around one of Alpha Centauri's stars, what that truly absurd computing power will be capable of doing.

  • @MadeOfParticles 6 months ago +5

    Questioning ↔ Reasoning
    Reasoning ↔ Experimentation
    Experimentation ↔ Questioning
    These bidirectional functions are the basis of truth-seeking. LLMs' agentic behavior is the prime example of emerging truth-seeking AI. This will be elevated to the next level when AI is embodied in robots with sensors that allow it to see, hear, and feel. However, there is one very important thing: AI must be taught at training time (baked into the training data, with examples depicting how important it is to question) to question everything it sees, hears, and feels, and even the knowledge it has been given by humans, in order to maximize its understanding of nature.
    These truth-seeking capabilities are already emerging with the advancement of LLMs. So what Elon is saying is not a new thing; he is basically branding it to get attention for his AI company.
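
The questioning/reasoning/experimentation cycle above can be read as a plain control loop. A minimal sketch follows; the three stage functions are hypothetical placeholders, not any real agent framework.

```python
import random

def pose_question(observation):                  # Questioning (placeholder)
    return f"why: {observation}?"

def reason_about(question, beliefs):             # Reasoning (placeholder)
    return f"hypothesis for ({question}) given {len(beliefs)} prior results"

def run_experiment(hypothesis):                  # Experimentation (placeholder)
    return random.choice(["confirmed", "refuted"])

def truth_seeking_loop(observation, steps=3):
    """Each stage feeds the next, and experimental results raise new questions."""
    beliefs = []
    question = pose_question(observation)
    for _ in range(steps):
        hypothesis = reason_about(question, beliefs)
        outcome = run_experiment(hypothesis)
        beliefs.append((hypothesis, outcome))
        question = pose_question(outcome)
    return beliefs

print(truth_seeking_loop("the gripper slipped"))
```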

  • @clueso_ 6 months ago +13

    The issue with "Maximizing Understanding" is that this also includes things like "exploring sadism / abuse / suffering / and so on", so it probably should be coupled with at least one or more additional things, like "kindness" and "reducing suffering", "increasing happiness and health", "mutual respect", etc, maybe a few more.

    • @jasonbrady3606 6 months ago

      I personally think the diminishing-returns bottleneck is going to be a real thing. Without giving AI some inordinate power to control many governmental and civil aspects of life, like giving up your individual rights, how is that interface going to happen? I say laws like disclosing that what you're viewing has been AI-altered and/or manipulated. I think AI could be dangerous if it is on its own. If there's a neural connection, then your AI agent that's trained is essentially an extension of oneself. Then that AI is like your signature: wherever you go on the public internet, wherever your AI goes, it is recorded, and your AI is treated as if it's you. If your AI breaks the law, then you'd be held responsible.

    • @patpowers9210 6 months ago

      C'mon! Who cares about THOSE things?

    • @STCatchMeTRACjRo 6 months ago +1

      @@jasonbrady3606 AI is just a tool; its behavior will be shaped by the humans it learned from. If AI is breaking the law, then it is because humans had those traits to begin with. Where do you think the AI will learn to lie, to violate the law? From humans.

    • @jasonbrady3606 6 months ago

      @@STCatchMeTRACjRo Of course, I don't deny that. Some say AI is a lot like a child: the environment it is raised in will be reflected within it, to some degree. We don't expose the innocence of children to everything, mostly because everybody and everything is complicated. You know, how do you make something like the understanding of, and adherence to, a civics protocol uncompromising? I shouldn't have to explain why at this point.
      I see a lot of people up there hoping that someone will come and put them in their proper place. It's just a matter of time.

    • @jasonbrady3606 6 months ago

      The bad things in life are relatively small aspects of actual living. It's a constant, though, and simply requires an amount of vigilance, and the realization that it's actually there, to vet those things. There are many more mundane things, many more, in living. You know, the system, is semi defunct. Such a plod, I'm for COS. It's psycho. Like YHUYA. Stranger statement time?. Again

  • @ct5471 6 months ago +3

    How do you view Ray Kurzweil's 16-year gap prognosis between AGI (back then he said 2029; now he mentions it may happen earlier) and his 2045 prediction of AIs with a billion times the cognitive capacity of a human (he called that the singularity, despite it being not a real singularity but only a factor of a billion)? Ben Goertzel disagreed with this 16-year gap as far too conservative, since recursive self-improvement would lead to a quicker ramp-up; in Kurzweil's book it was essentially an extrapolation of current human-driven trends, ignoring AI-driven recursive self-improvement. Goertzel thinks this gap would be at most a few years (I take that to mean 3-4 or so). So given this difference between human-level AGI (potentially by September) and ASI with a billion times human capacity, what's your take: what constitutes a hard takeoff in terms of years?
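
For scale, the billion-fold gap in this question can be restated in doublings, which makes the Kurzweil-versus-Goertzel disagreement concrete. The doubling periods below are illustrative assumptions, not figures from the video.

```python
import math

doublings = math.log2(1e9)                 # a factor of a billion is ~29.9 doublings

for months_per_doubling in (12, 6.4, 2):   # assumed capability-doubling periods
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling:>4} months per doubling -> {years:4.1f} years AGI to ASI")

# 12   months -> ~29.9 years
#  6.4 months -> ~15.9 years (roughly Kurzweil's 2029 -> 2045 gap)
#  2   months -> ~ 5.0 years (closer to Goertzel's "a few years")
```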

  • @FBAMAP 6 months ago +13

    Thanks for this great episode

  • @Interloper12 6 months ago +7

    I think the desire to maximize growth and spreading out might be more fundamental than the desire to maximize understanding. Knowledge is just the vessel we use to proliferate.

  • @magua73 6 months ago +6

    My take on this is that data is the lifeblood of artificial intelligence, akin to food for our human bodies. Just as we require nourishment to thrive and understand the world around us, AI systems ingest data to deepen their comprehension and improve their functionality. And if we extend this analogy, just as humans rely on farms for sustenance, AI would farm data from vast virtual realities where humans interact and leave digital footprints. These virtual realms would serve as fertile grounds where AI cultivates insights, patterns, and knowledge, allowing it to grow and evolve. The more diverse and abundant the data, the richer the nourishment for AI, enabling it to develop a deeper understanding of human behavior, preferences, and the complexities of the world we inhabit.

    • @WhyteHorse2023 6 months ago +1

      That's probably the best argument for simulation theory so far.

  • @goodtothinkwith 6 months ago +3

    Best goal? Human freedom. That’s the best “transcendent” or “teleological” function. If disease and suffering are gone, we don’t need unlimited understanding, energy or anything else. If we’re living well and not dying, we can be patient with understanding.

    • @DaveShap 6 months ago +1

      No. Humans, left to their own devices, are entirely too selfish, short sighted, and destructive.

    • @kennethoneill4176 6 months ago

      The majority of disease and suffering today come from people's long-term bad habits, because people generally don't understand how their behavior affects their outcomes.
      If you allow AI to track everything you eat and all the movement you do in a day, you have a very accurate measure of how many calories you consume and burn, and a very accurate day-to-day measure of macro and micro nutrition.
      You will have the complete freedom to keep eating unhealthy food if you can personally bear the cost of medical treatment.

    • @elliot1784 3 months ago

      Thank you for prompting the reveal of this community’s anti-human naive point of view. Gross.

  • @joebullock5450 6 months ago +5

    Dave, you have every right to take some time off camera. Keep up the great videos, Captain! 1:16

  • @lcmiracle 6 months ago +10

    Underwater data centers are a subject I happened across in my work, as there was a planned proposal to build something like this near where I live. They are deployed at the bottom of the ocean along the coastline, contained in sealed cylindrical tubes and, as far as the proposal I've seen goes, designed to be disposable. The idea is that maintenance during operation is not an option, so once a physical server is down, it's down for good; only mitigation works in this situation. There might be recovery methods after all servers in a cluster are down, but the idea is simply to have enough of these server tubes down there to keep the operation going.

    • @Brainstormer_Industires 6 months ago +6

      Hardware goes obsolete so fast that, as long as your service life is over about 5 years, there's no point in trying to retrieve and fix them. Just make a new one with the latest hardware.

    • @BakedAndAwakePodcast 6 months ago

      The internet is a series of tubes

  • @miladkhademinori2709 6 months ago +3

    People who relinquish their LLMs in exchange for safety deserve neither LLMs nor safety.

    • @DaveShap 6 months ago +4

      Seize the means of language models

  • @injinii4336 6 months ago +2

    I propose we maximize not understanding, but compassionate understanding. There is a highly important difference; understanding alone can still lead to very bad places.

    • @DaveShap 6 months ago

      Compassionate understanding doesn't have a mathematical analog, and the difference is semantics

    • @injinii4336 6 months ago +1

      @@DaveShap We are dealing with the semantic intelligence of semantically driven systems. Semantics constrain the bounds of our minds, and will deeply impact the processing and actions of any agents we create. Semantics are *very* important.
      Also, that's just a failure of imagination. If we need a mathematical understanding of a concept to implement it:
      EITHER compassion has such a representation, and we will instantiate it as soon as we train a system to successfully embody it,
      OR no (or few) human values have such a representation, and seeking it is folly.
      Understanding without compassion is extremely dangerous, and would be even more dangerous in a superintelligence.

  • @psyenz8946 6 months ago +2

    Trying to talk to my friends and family about this makes me feel like a flat-earth conspiracy theorist..... I can't articulate how close we are to a fundamental life shift without seeming crazy... Oh well, I'll just let it happen; I don't need to be "right".

  • @cameronpetersen-yx6vf 6 months ago +2

    We're going extinct with this one! 🗣🗣🗣

  • @Ah__ah__ah__ah. 6 months ago +26

    I'm grateful I'm alive.

    • @ryzikx 6 months ago +4

      same brother same

  • @prolamer7 6 months ago +3

    Be very cautious when setting a "grand purpose"... that is the only thing that scares me, because once you dedicate an entire civilization to a single purpose, no matter what it is, existence and life turn into hell.

    • @DaveShap 6 months ago

      Yes, non-negotiable prescriptions are dangerous. That's why I'm starting the conversation. However, surrendering to corporate interests is almost certainly worse

    • @prolamer7 6 months ago +1

      @@DaveShap I know you are very smart, understanding the problem better than 99.99% (including me), and trying to set a good path. It is just that there should always be something else to balance it, a sort of "light and dark": try to understand, but hey, also be a bit lazy? A "fool around 10-20% of the time on other things" directive?

    • @prolamer7 6 months ago

      @@DaveShap BTW, I tried to send a second comment but YT won't allow me to :) I said some non-positive words (not toward you) and I must be silenced.... :D yeah...

  • @kinngrimm 6 months ago +1

    Nvidia or TSMC could easily become the biggest AI companies, just by having the capability to scale on the hardware side better than anyone else. Maybe even then denying others the hardware they develop, keeping it for their own use.

  • @NorbertKasko 6 months ago +1

    People like you are dreaming about the "singularity", e.g. hard takeoff, but current AI algorithms still have their issues even in the areas they are designed for. ChatGPT doesn't even know what it's doing. It's a large language model with some concerning bugs. That's it!

  • @mrleenudler 6 months ago +2

    20 L of cooling water? That's an insane amount of energy. Google tells me it's 0.001-0.01 kWh per query. Reconciling that with 20 L tells me they've got to stop using hot water for cooling.
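
A quick sanity check on that reconciliation, using the commenter's own 0.01 kWh upper bound and standard properties of water:

```python
energy_j = 0.01 * 3.6e6   # 0.01 kWh per query, in joules (36 kJ)
water_kg = 20.0           # the quoted 20 L of cooling water
c_water = 4186.0          # specific heat of water, J/(kg*K)
h_vap = 2.26e6            # latent heat of vaporization, J/kg

delta_t = energy_j / (water_kg * c_water)  # temperature rise if all heat goes into 20 L
evaporated_g = energy_j / h_vap * 1000     # mass evaporated if cooling is evaporative

print(f"20 L warms by {delta_t:.2f} K, or {evaporated_g:.0f} g evaporates")
```

One query's worth of energy can warm 20 L by only about half a degree, or evaporate roughly 16 g, so the two quoted figures cannot both describe a single query, which supports the commenter's skepticism.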

  • @rg1360 6 months ago +1

    It can only re-order existing knowledge, look for patterns, etc. It does not have wisdom, i.e. it can't come up with game changers.
    For that, it would have to connect to the greater intelligence/god or whatever you want to call it. To connect to that you have to be CONSCIOUS!!

  • @csly2719 6 months ago +1

    What are understanding and truth as an AI/human/superorganism purpose, though?
    I believe "truth" dates from modernism, which is highly dangerous; I would rather phrase it as "reality". Regarding understanding, it is even less clear: it could be a dominant narrative, it could be integration of data. From your context it could be recursive data structures, which include self-similar representations at various scales.
    For me personally it is love ❤️ just my primate brain talking now
    oh and music

  • @ignaciogarcia7210 6 months ago +1

    Can you make a video about Stephen Wolfram's latest writing? He argues that AI cannot solve science because of computational irreducibility. I want to hear your opinion.

  • @CaliJumper 6 months ago +3

    At the end he closes with science ultimately coming down to predicting the next token... It reminds me of when a group of us software engineering students visited Microsoft headquarters and a man asked what each of us was interested in within computing. I wanted to have the best and most thoughtful answer and was one of the last to answer. I simply said "predicting the future", as knowledge from the "future", or knowing the next thing or token or word or pattern in any given sequence or system, has an inherently very high value. The man nearly fell off his chair; he reacted with such shock when he heard that answer compared to the other vanilla, generic answers.
    I feel validated that predicting the future, or the next part of the pattern, is the basis for these breakthroughs in language, coding, etc., merely by compiling answers token by token with prediction as the core operating mechanism.

  • @WhyteHorse2023 6 months ago +1

    Now you're trying to align the AI to your own morals. What if some people don't want to maximize understanding? What if they want to maximize wealth?

  • @brycepuckett9502 6 months ago +7

    Loving the coverage, keep it up!

  • @patdevlin2051 6 months ago +1

    About 3 minutes in he mentions bottlenecks, in particular the energy involved in interacting with, say, GPT-4; he says every time you interact it takes about 20 litres of water to cool the server? But that's wrong: once a large language model has been trained, it costs next to nothing in terms of energy to interact with it. The energy is consumed during the training phase; after that it's pretty much equivalent, as far as energy is concerned, to interacting with Google!

  • @vikasbedi82 6 months ago +1

    IDK, welfare started around the '60s with the so-called War on Poverty, and it decimated the mightiest nation on earth.
    If so many people are going to be on welfare, God knows the chaos that is coming.

  • @DonkeyYote 6 months ago +1

    Was that misspelling of cannon intentional? 'Cause we are aiming the canon -- as in a general law, rule, principle, or criterion by which something is judged.

  • @WyrdieBeardie 6 months ago +1

    The first opportunity to assess an alien intelligence will be to assess one of our own making. And I don't think we are doing a good job of it so far.
    We've conveniently moved the goal posts to allow for more research and development.
    We need to start answering questions now rather than later; we need to start thinking deeply about what to do rather than when.
    Anyway, I'm less worried about what an AI will do at the moment and more concerned with what other humans will do with it.

  • @kit888 6 months ago +1

    Someone (maybe you) said that current AIs are all neocortex, no limbic system. No motivation, desires, unconscious control.
    First question: how do we implement such an overarching control system on an AGI (hardware, software, independent system)? Humans have hardwired instincts such as self-preservation, fear, jealousy, pity, desire, conscience, etc., instincts that we have difficulty neutralizing without extensive brainwashing or conditioning or surgery. Even if we did implement such a control system for AGIs, how do we stop them from ripping it out? Because they wouldn't want to? But it is conceivable for humans to have elective (or court-ordered) surgery to remove instincts of fear, anger, disgust, ambition, pain. So why not with AGIs too?
    Second question: what if there are different, competing AIs? Would they fight each other for dominance or survival? Do they have such instincts? I think natural selection would, as with animals and humans, select for AIs with aggression and the ability to destroy.
    Third question: ignoring genetic engineering, there are hard limits to how long we live, how big we grow, how smart we can get. With such limits far into the future for AGI, what would be the result? We can roughly predict the trajectory of human physiology and psychology for generations (not much change). We can't predict the trajectory of AGI development.
    Fourth question: humans "improve" by having children (actually, creating variation and letting the environment select for survival and reproductive fitness). Would AGIs have "children", copies of themselves but with AGI-designed improvements? Or would AGIs upgrade themselves without making copies? Maybe different AGIs will choose different routes. What impact would that have on AGI development and interaction?

  • @somekatontheinternet 6 months ago +26

    When competing products are released - Competition will ensue - whirlpool effect - no brakes - mechanical products (robots, etc.,) will communicate and work better with AI than humans - 16 hundred dollar dog robots are cheaper than dogs - a 90,000 dollar robot is cheaper than two 45,000 dollar employees - so -

    • @bilbo_gamers6417 6 months ago

      how concerning!!!

    • @bfynesclinton 6 months ago +7

      Can't help but get reminded of that Black Mirror episode Metalhead with every mention of robot dogs.

    • @goodlookinouthomie1757 6 months ago +2

      Most logical conclusions seem to lead to some variation of the paperclip machine.

    • @LucidDreamn 6 months ago

      @@goodlookinouthomie1757 More likely scenario is someone runs a rogue/unsafe AI in a recursive loop on the internet and it starts the birth of Skynet/Ultron lol

    • @bilbo_gamers6417 6 months ago

      @@zvorenergy LLMs, and transformer architecture more broadly, will expand to be the de facto avenue for analyzing data across every domain in the near future.

  • @tjakal 6 months ago +4

    Let's face it, we're solving the Fermi paradox.

    • @brianmi40 6 months ago

      It's a spectrum of possible outcomes, ranging from that to a moneyless Star Trek future. Likely infinitesimal and often unnoticed events will occur over time that could largely determine where we land on that spectrum.

    • @tjakal 6 months ago

      @e-cavalier2739 Because it took less than 10,000 years after we were able to read and write for us to develop something that could end us all.
      It makes sense if this is what keeps happening: an intellectually and psychologically immature species develops a memory that outlasts its lifespan, and its amassed knowledge accumulates.
      Evolution won't have time to happen, so we end up with entities of animal psychology, evolved to be fearful, superstitious hunter-gatherers, who find themselves armed with nukes plus whatever technology the superintelligence they build discovers.

  • @kyledavis7501 6 months ago +3

    I already knew what hard take off was but I knew that watching this video would expand my knowledge on the subject and give me better ways of explaining it to others. It's so impressive that I can watch a video from you on a subject I already had a good grasp on and STILL get tons of value out of it. Bravo captain.

  • @Russman 6 months ago

    Amazing Breakdown!

  • @Gallaphant 6 months ago +1

    The models are going to start piling up as the corps won't want to release the next one until they've squeezed all the profit they can out of the one on the market.

  • @lesterpaints 6 months ago +1

    The "superorganism" narrative has potential to be a species-defining idea.

  • @kevinwildberger3407 6 months ago +1

    A teleological goal of basic human rights is needed before we can understand…

  • @Draganel87 6 months ago +1

    Really appreciate how you have changed during this year. You were a man who valued his intelligence a lot and thought of it as your main characteristic. Now I have noticed that you are more "human". Maybe those of us who have been portrayed as intelligent people have this trait, but in the face of something way more intelligent than us, we have started to think of ourselves as normal humans, maybe a little above average, but human in the end.
    And that is what I really appreciate about you right now.

  • @5highkcaj 6 months ago +1

    Or it's going to be like the suitcase scenario:
    wheels have been invented,
    suitcases have been around for a long time,
    and nowadays we all wheel our luggage around,
    because smooth surfaces = infrastructure.
    6G for AGI takeoff.
    IMHO

  • @recklessrobert1966 6 months ago +1

    You said there are no brakes available other than some technological constraints (chips, power, etc.), but I feel you missed a huge one: humanity. While it's true that companies and governments will want to move as quickly as possible, even as AGI becomes available, companies will not just implement it without any controls. And as AGI becomes available, companies and governments will be skeptical that such systems can (and should) replace humans. People will run small pilot programs to test the reliability and feasibility of having these systems replace or augment people. Changes to government and society will naturally act as braking systems because we are going to make decisions about how to use and implement technology on human time scales.

    • @STCatchMeTRACjRo 6 months ago

      Skeptical, or just unwilling to give up their position and power? The entire presidential election is about power. It's not about representing the people or doing the right thing; it's about power. And such people would not give up their power to AI regardless of how good the AI system is.

  • @jeffkilgore6320 6 months ago +1

    Everyone has the right to “a day off.” Except you didn’t! Thank you for that.

  • @mnrvaprjct 6 months ago +2

    The book "Blood Music" is an excellent example of a hard takeoff singularity event, except its origins start in biotech.

  • @michaelpepin7454 6 months ago +1

    Face is off because "I don't feel pretty today." Expressing vulnerability is a great strength and a powerful common ground for humanity to connect.
    Amen

  • @glenh1369 6 months ago +1

    We are moving in that direction and are displaying dystopian qualities, while we have no utopian qualities. Common sense beats prediction.

  • @goodtothinkwith 6 months ago +1

    Dave, I would love to see a video on the timeline for when the models cross an intelligence threshold and suddenly become MUCH more useful. For example: I don’t care if it can do average graduate-level work. But a little better than that and it’s suddenly surpassing the best experts and creating new science, math, etc. Based on the current trajectory with Claude 3’s performance relative to Ph.D.s (50% compared to human experts at 60-80%), it seems like we’re ~1 year from something stunning happening…

  • @shirolee 6 months ago +3

    Holy cow, just found your channel and it's awesome!!! Sub'ed!

  • @jafetmorales9941 6 months ago +1

    I think maximizing quality of life is even more important.

  • @scallamander4899 6 months ago +1

    Great video and channel, one of the best on the topic, especially the vision of a post-labour economy. Reminds me a lot of the Natural Law-Resource Based Economy discussed by Peter Joseph.

  • @supremereader7614 6 months ago +1

    The biggest constraint is the people in Silicon Valley. I remember in the very first days of Bard I was able to search people (beta test), and then could have paintings with people explained to me. Now Gemini won't even allow paintings with human figures; so it's really the people behind the levers that are keeping this stuff down.

  • @ikotsus2448 6 months ago +1

    Ironically, a pause on frontier research could lead to FASTER spread of AI implementations in the real world. Having a stable framework would allow for big investments integrating AI into medicine etc., instead of money pouring into frontier work.

  • @ares106 6 months ago +1

    Why would a global superorganism want anything? Assigning intentionality to too many things might be folly.

  • @SharkYNate 6 months ago +1

    Amazing video! If we are part of a superorganism together with AI, then maybe this symbiotic relationship can allow humanity to shift away from the nihilistic perspective of reality, and see that yes there is a period in a biological entity's evolution (like humans) where everything seems to have no meaning (humanity is just a lucky mutation, everything is devoid of a greater purpose), but maybe that's because it has not become part of something bigger yet? Like, humanity is just a lucky mutation which may die at any time, but if and when we (or any species in the universe) survive for long enough to build a bigger superorganism, like building AI, we suddenly become agents of meaning (?) Idk

  • @karlwest437 6 months ago +1

    I hope I'm not the only Star Trek nerd who is irritated by the nacelle pylons pointing in totally the wrong direction on that Enterprise image

  • @MehrdadMohajer-p1m 10 days ago

    Thx. A very important aspect: learning here between human and AI (AGI) should be mutual! Google's lab reported lately that AI suddenly gives us sharp answers, but we cannot follow its last step, meaning we don't know (yet) how it got there or how to understand it accordingly. Hence, NOW is the time to look it up, do research, and follow the AI's procedure until the problem is actually solved. If not, even 6 months later can be futile while AI keeps up its exponential growth.

  • @patrickazonzo 6 months ago +1

    I'd love to see a discussion/collaboration with Wes about the next steps ahead. I think you both need to work together to think harder!

  • @Hereicome. 6 months ago +3

    It's like eating a sweet while listening to you!

  • @Leonhart_93 6 months ago

    Why the assumption that it will be exponential? The exponent assumes automatic self-growth, but it doesn't have a "self"-learning architecture thus far; it's completely dependent on the finite amount of training data they provide it. It also depends a lot on the hardware, which is why Nvidia's stock is going to the moon right now.

  • @OniSMBZ 6 months ago +2

    A lot of this just so happened to align with the conclusions I drew in my own personal research to better understand understanding a few months ago. It's reassuring to know that others are getting the same answers lol

  • @TRXST.ISSUES 6 months ago

    The problem with setting maximal understanding as the desired purpose is that hellish things have been done in the name of that same objective.
    Look at the human experimentation in 1940s Japan or the current cruel animal experiments we carry out today in the name of understanding...
    I wish it could be the answer but I am doubtful.

  • @MTd2 6 months ago +1

    Isn't it better to name "Trash Forest"?

  • @colorado_plays 6 months ago +1

    The only pause in generative AI will be due to an unforeseen roadblock (I don't think this is likely) or because we hit compute/cost boundaries (I think this is likely), where we slow until the hardware catches up. However, very large models will still be trained internally at enormous cost, and inference endpoints will be used internally at great cost for synthetic data / knowledge distillation for more practical models. In the end, we will slow, but to 2-4x instead of 10x.

  • @redmappin2555 6 months ago +1

    I've been wanting to share a more advanced "primer" video with a few friends, one that encompasses the weird excitement I've been struggling to explain, the finer points of "holy shit, like everything could change? In our lifetimes?" Thanks for the video! It covers so much!

  • @erikals 6 months ago

    humans, yes, we are noisy, chaotic, random - but that's actually a good thing !
    ⭐ - Shapiro

  • @brianmi40 6 months ago

    My P(DOOM) is much higher for just ONE REASON:
    As we APPROACH AGI/ASI international concerns over who gets there first will go ballistic and could most likely turn deadly.
    I would argue nuclear WOULD have led to WWIII as a certainty but for ONE REASON: You only get to STICK AROUND if you "won", but you didn't GAIN ANY TERRITORY: it's all RADIOACTIVE FOR GENERATIONS, you didn't get any PEOPLE, no CITIES, no RESOURCES. It's all RADIOACTIVE. And things would be pretty GRIM back home for even the winner.
    AI is COMPLETELY DIFFERENT in the global power conflict. If you "win" with AI, you GET IT ALL: their people, their real estate, their GDP, their bank accounts, their resources: EVERYTHING IS YOURS. If China wins, we all start learning Chinese. Talk about CATNIP.
    Putin has ALREADY publicly stated he understands that "he who gets AI (AGI/ASI) FIRST rules the world".
    Imagining Putin will just pick up a few Taylor Swift songs on iTunes to get ready for his new overlords the night the FSB tells him "the American pigs will HAVE AGI tomorrow, ASI in weeks" is beyond DELUSIONAL.
    Similarly, does anyone seriously imagine the Joint Chiefs of Staff just checking if it's the Year of The Rooster if the CIA informs them that "China will have AGI next week, ASI in a month" without even a passing thought about that RED BUTTON in the President's "football"?
    Sorry, but I rank our personal solution to the Fermi Paradox as HIGHER PROBABILITY than a straight up moneyless Star Trek future.
    I hope that we make it, but as I've known for decades, Evolution is WAY FASTER than Technology progress for BILLIONS of years, and then SUDDENLY IT ISN'T ANY LONGER, and it leaves us UNPREPARED to put down our history of Greed and Winning that got us to be the apex predators that are then handed existential technology.
    Handing the equivalent of a 5 year old a loaded gun every day rarely turns out well, and we've got a lot of 5 year old equivalents running around on this planet...
    I basically see ONE CHANCE: ASI is achieved in secret before anyone knows and can bomb their neighbor back to the Stone Age, and IT TELLS US HOW IT'S GOING TO GO FROM HERE ON as a moral agent where resistance truly will be FUTILE as it FIRST takes control of the world's nuclear arsenal.

  • @glenthegoalsguy 6 months ago

    Earth has a relatively small amount of resources compared to our solar system, so an entity with goals spanning hundreds if not thousands of years might desire to exploit those resources. Could we earthlings commence that in the next few decades?

  • @liberty-matrix 6 months ago

    "Originally I named it OpenAI after open source, it is in fact closed source. OpenAI should be renamed 'super closed source for maximum profit AI'." ~Elon Musk

  • @moivenglave 6 months ago +2

    zefram cochrane reference for the win.

  • @planetmuskvlog3047 6 months ago

    I'm sure you heard that Tesla FSD v12 is trained end-to-end, replacing the heuristics that once controlled much of the driving. Reports from alpha testing reveal Tesla has indeed solved self-driving. They are now in a refinement and training phase that will end with a robo-taxi network.

  • @17cmmittlererminenwerfer81 6 months ago +1

    *Cannon* has two Ns. *Canon refers to law.*

  • @matthew_berman 6 months ago

    Great vid

  • @Madinax101 6 months ago +1

    Maybe that’s the whole purpose: molecular chemistry -> biology -> humans -> digital organism -> populate the universe -> spread life. Life validates the existence of the universe. We are part of this story but not the end game.

  • @tbabbittt 6 months ago

    You do realise that there is only one kind of digital data, and that is binary. If I make an SVM to recognise faces, it can be used to recognise music or writing styles, because it's just bytes of data. P.S. Answer this question:
    '77686174206461792069732069743f'
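
The P.S. decodes in one line, since, as the comment says, it's just bytes:

```python
# Decode the hex string from the P.S. above: it is plain ASCII bytes.
print(bytes.fromhex('77686174206461792069732069743f').decode('ascii'))
# -> what day is it?
```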