Artificial Intelligence Isn't Ready for Mass Application || Peter Zeihan

  • Published Jan 19, 2025

COMMENTS • 1K

  • @fritzbloedow29
    @fritzbloedow29 17 days ago +245

    The server farms aren't using consumer-grade GPUs; they're using A100s, H100s, etc. Those are made specifically for data crunching, work very efficiently for AI, and would almost never be in a regular person's computer. We're still in the early days of AI development, with so many unanswered questions about it. Much of the time I think PZ gets things right; this time he is missing too many points on this complicated subject. Peter, please don't take this as an attack; I still get a lot from your videos. Keep up the good work.

    • @jf9593
      @jf9593 16 days ago +1

      lmao "experts" amirite

    • @urlauburlaub2222
      @urlauburlaub2222 16 days ago +5

      While this is true, more computing power doesn't mean better AI results. Keep in mind that most of the data it uses for "better results" is also legally grey, if not outright stolen. So it's sneaky, but even if you have a computer hiding and spotting you everywhere, the AI doesn't know what's right and wrong. So technical advances are more about limiting power waste than actually improving things. GPU chips for computer graphics weren't pushed only because of gaming in the past; gaming was pushed to keep them focused on traditional human actions within the military or in civil security, like aircraft, shipping, etc. Cheaper chips were also pushed 20 years ago for education purposes in Africa or Asia, where AI was also used to teach basic things, to avoid pollution and to fight illiteracy. Laws will soon be made more drastic, if there is a surviving population and if that population understands more about the legality and legitimacy of the data used when they encounter it. With an elderly world population, I don't see those elderly adapting for the sake of selling chips and AI everywhere. Also, the individualism and personal achievements of artists and designers of all kinds are undercut or destroyed if everyone gets the same thing, even if it's partly individualized through AI. This is totally against the whole philosophy of the Western/European-centered world, which made this technology possible and constructed it.

    • @djphat1736
      @djphat1736 16 days ago +5

      These systems use power supplies in the 3+ kW range, and several of them, for redundancy and, well, for the need for power. That says nothing of the heat they produce; the cooling gets out of control. They are not efficient, or even price-effective. Let alone whether you can get one without waiting a while. Efficient would be the ARM chips, specifically Apple M chips. If they could scale up quickly enough, or make a discrete AI/GPU card, they could eat Nvidia's breakfast, lunch and dinner. That is the revolution we need in this space.

    • @thomastaylor648
      @thomastaylor648 16 days ago +4

      Garbage in, garbage out. Not a new concept, but folks don't understand it, so this is the shit we get, fake experts...
      The reality is that we have to provide nearly perfect data to train these models to get acceptable performance from them. This is always going to be a problem. It really doesn't have anything to do with chips, or supply chains, or energy; fixing those issues doesn't change that. That taste of AI you speak of is our cherry-picking, and those are typically academic examples based on hand-picked, highly processed training sets.

    • @TakeshiYoung
      @TakeshiYoung 16 days ago +7

      Yeah, Peter misses the mark here. Training new AI models uses astronomical resources, but once they are trained, they are a lot less resource-intensive. And LLMs do far more than data processing.

  • @williamsteveling8321
    @williamsteveling8321 17 days ago +39

    A few things that are off...
    First, as noted elsewhere, laptop GPUs are made for laptops. The desktop market uses more robust devices, and commercial applications are similarly beefier.
    Second, newer parallel processors (of which GPUs are a subset) are coming out. Power is still an issue, but processing per watt is better.
    Third, they're not using targeted randomness. They're applying logic chains to the input, using weighted values at each stage (see the sketch below this comment), and they're pretty good at following instructions.
    Fourth, there are programmable analog processors being worked on that might radically improve performance (by a factor of 10^6 possible, 10^3 likely), and they're likely to be available a lot sooner than 2040. For setting weights during processing, they'd be a game changer, depending on plausible applications.
    Mr. Zeihan is likely correct that this is further out for practical purposes than some think, but it's also closer than he thinks; the truth is probably somewhere in the middle. My job is very AI-adjacent, so I see some things up close. Long term, a very big worry; short term, a low-to-mid-level worry. NOTE: All of this ignores black swan developments, which by definition are unknown unknowns.
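
    [Editor's note] A minimal sketch of the "weighted values at each stage" idea from the third point above; the layer sizes, weights, and inputs are toy values, purely illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny two-stage "logic chain": each stage multiplies by learned
    # weights, adds a bias, and applies a nonlinearity. Nothing random
    # happens at inference time: same input, same output.
    x = rng.standard_normal(4)                          # input features
    W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)   # stage 1 weights
    W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)   # stage 2 weights

    h = np.maximum(0.0, W1 @ x + b1)   # stage 1 (ReLU nonlinearity)
    logits = W2 @ h + b2               # stage 2 (output scores)
    print(logits)                      # deterministic on every run
    ```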

    • @thailandmalcolm
      @thailandmalcolm 15 days ago +3

      Very informative. Thank you.

    • @smcg2490
      @smcg2490 15 days ago +5

      I agree. I enjoy watching Peter for his broad-brush analysis, but I wouldn't stake too much on his predictions, due to his lack of specialised expertise. His take on the AI space exemplifies this. AI isn't just about chips or raw power; he overlooked the groundbreaking work being done in information processing that directly improves AI's accuracy, and thus its usefulness. For instance, innovations like Liquid AI's architecture and the steady stream of research papers on more efficient computing are reshaping the field on an almost daily basis. I believe AI has been in its early developmental stages, and only now are we approaching its true emergence. When AI can reliably evaluate itself and autonomously refine its own code, we'll witness its full arrival. Personally, I think the most important and interesting area that true AI capabilities will have an impact on is human longevity. When we can effectively research ageing as a disease and get good treatments, what does that mean for the future?

    • @chaselevinson7950
      @chaselevinson7950 15 days ago

      As I understand it, LLMs still have a level of stochastic variability. The "temperature" of a GPT model determines whether the next word in an answer is chosen entirely deterministically (always the highest-probability entry) or with some level of randomness (sampled from the list). That's why you'll see the same prompt give different answers from the same LLM.
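
      [Editor's note] A minimal sketch of that temperature knob, with a made-up three-token vocabulary and toy scores (all values illustrative):

      ```python
      import numpy as np

      def sample_next_token(logits, temperature, rng):
          # Temperature rescales the scores before softmax: near 0 it acts
          # greedily (always the top token); higher values flatten the
          # distribution and let lower-probability tokens through.
          z = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
          p = np.exp(z - z.max())          # numerically stable softmax
          p /= p.sum()
          return rng.choice(len(p), p=p)

      rng = np.random.default_rng(42)
      scores = [2.0, 1.0, 0.2]             # toy logits for 3 tokens
      print([sample_next_token(scores, 0.01, rng) for _ in range(5)])  # all 0s
      print([sample_next_token(scores, 1.5, rng) for _ in range(5)])   # mixed
      ```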

  • @TheSacredOrderOfKnightlyValor
    @TheSacredOrderOfKnightlyValor 17 days ago +344

    This guy needs to get out of the house more often.

    • @Rob_F8F
      @Rob_F8F 17 days ago +14

      😂😂😂

    •  17 days ago +16

      You mean a little less often. 😂
      He needs more data. 😅

    • @macbaryum
      @macbaryum 17 days ago +10

      It really sucks to be chained to a desk all the time.

    • @bonanzatime
      @bonanzatime 17 days ago +10

      Actually, he spends most of his time 'getting out of town' before it's too late. 😂 Selling snake oil is not as easy as you might think... especially with Trump in office. 🤫

    • @davidsingh6944
      @davidsingh6944 17 days ago +2

      😂😂😂

  • @alexbuccheri5635
    @alexbuccheri5635 16 days ago +20

    As a physicist turned scientific software engineer, I'd say Peter's largely on the money with these points, although his timescales for new hardware are a little pessimistic.
    A couple of amendments:
    If one increases an algorithm's performance by >= 10x when porting from CPU to GPU, the GPU can actually be more energy-efficient (see the arithmetic below). At least, this is the figure we throw around in computational physics.
    Large, modern HPCs aren't cooled with fans; they use liquid cooling. Still, (if I recall correctly) the heat given off by Finland's LUMI machine actually powers the neighboring town. Cooling these things is no joke.
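
    [Editor's note] To make that energy point concrete with illustrative numbers (not measurements): energy per job is power times runtime, so a GPU drawing, say, 3x the CPU's power still wins on energy if it finishes 10x faster:

    ```latex
    E = P \cdot t, \qquad
    \frac{E_{\mathrm{GPU}}}{E_{\mathrm{CPU}}} = \frac{(3P)\,(t/10)}{P\,t} = 0.3
    ```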

    • @dixonhill1108
      @dixonhill1108 12 days ago

      Personally, I think supply chains are gonna break down a lot faster than you think. Covid prevented a lot of supply chain interruptions, while the majority of people think it created the interruptions. It's just my opinion, but I think if you look at IQ scores, things make a lot more sense quickly.

    • @josephtaylor6285
      @josephtaylor6285 11 days ago

      @@alexbuccheri5635 Wow! Powering a whole town with the heat of the servers is a mind blower.

  • @tankomanful
    @tankomanful 17 days ago +123

    😅 It's becoming very clear that Peter always has an opinion, even when he lacks the insight. Instead of explaining how AI can transform the workforce landscape, he jumps into his favorite topic of chip scarcity. Anyone watching this should take this commentary with a grain of salt.

    • @cushmfg
      @cushmfg 17 days ago +2

      But the thing is that computing happens on chips. There is only one TSMC.

    • @Stevenpwalsh
      @Stevenpwalsh 17 days ago +3

      @@cushmfg China just trained a SOTA model for $5M; chips might not be everything.

    • @cushmfg
      @cushmfg 17 days ago +2

      @@Stevenpwalsh 1. China. 2. SOTA is a made-up, arbitrary acronym with no basis in reality. 3. So what? There are a lot of existing useless models. It still needs to run on chips, and it still probably doesn't do anything more useful than any of the existing models.

    • @bonanzatime
      @bonanzatime 16 days ago +1

      @@tankomanful a grain of salt. And A Roll Of Toilet Paper!

    • @segalliongaming8925
      @segalliongaming8925 16 days ago +1

      @Stevenpwalsh What's SOTA? The equivalent of ChatGPT 3?

  • @BenGrimm977
    @BenGrimm977 17 days ago +115

    Listening to Peter talk about AI makes it clear that not every topic needs his take. He’s great with geopolitics, but AI clearly isn’t his lane. Being an expert in one area doesn’t mean you can speak with authority on everything, and sometimes it’s better to just stick to what you know.

    • @derekhall4017
      @derekhall4017 17 days ago +19

      Thought you were going to say something.

    • @maartenkranendonk8954
      @maartenkranendonk8954 17 days ago

      @@derekhall4017 Be quiet little boy.

    • @zee9709
      @zee9709 17 days ago +2

      What does he know? Everything is surface-level; we're here just to hear his opinion, regardless of whether it's true or not.

    • @gladius1275
      @gladius1275 17 days ago +7

      Pointless comment, since you make a statement with no elaboration or supporting data that one could refute.

    • @maartenkranendonk8954
      @maartenkranendonk8954 17 days ago

      @@gladius1275 Your comment is pointless. I just looked it up on ChatGPT

  • @mav45678
    @mav45678 17 days ago +90

    A GPU is the size of a postage stamp not because "it was designed to be put into a laptop" (there is a big market for desktop GPUs, which have fewer space limitations), but because the speed of light limits intra-chip communication. If you want parts of the chip to communicate with each other a couple of billion times per second, they can only be so far away from each other for the electrical impulse (travelling at the speed of light) to reach them within the allotted time.
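
    [Editor's note] A back-of-the-envelope version of that limit, at an illustrative 3 GHz clock; the farthest a signal could travel in one cycle, even at the vacuum speed of light, is

    ```latex
    d = \frac{c}{f} = \frac{3 \times 10^{8}\ \mathrm{m/s}}{3 \times 10^{9}\ \mathrm{Hz}} = 0.1\ \mathrm{m}
    ```

    and on-chip signals propagate well below c, so the practical budget per clock is centimeters at best, before accounting for any work done along the way.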

    • @24zer0nd
      @24zer0nd 17 days ago +6

      A big reason why Nvidia chips don't have as much RAM as gamers want is that they're sticking with low memory bandwidth so the chips can be used in laptops. They're also waiting for the new memory that's around the corner to fit higher amounts of RAM on that same smaller memory bus.

    • @oregonhighway
      @oregonhighway 17 days ago +3

      I was a little surprised by his prediction of chips the size of a dinner plate. I don't think that's possible; it sounds like you agree.

    • @TeodorSpiridon
      @TeodorSpiridon 17 days ago +5

      @@oregonhighway You could, if you go with AMD's approach of multiple chiplets on a larger substrate.
      Monolithic chips of that size are impractical, due to yields becoming absurdly low as the size grows.

    • @alex_madeira
      @alex_madeira 17 days ago

      @@oregonhighway Google Wafer-Scale Processors

    • @alex_madeira
      @alex_madeira 17 days ago

      @@TeodorSpiridon Look up Cerebras.

  • @JinKee
    @JinKee 17 days ago +152

    To be fair, my comments aren’t much better than targeted randomness.

    • @MrNicoJac
      @MrNicoJac 17 days ago +8

      You still greatly outperform a cat walking across a keyboard, and that's not even fully random! :)

    • @TimAZ-ih7yb
      @TimAZ-ih7yb 17 days ago +13

      Yes, but your brain consumes 10W of power whereas the GPT needs 100 kW to fling its foolishness. 😊

    • @urlauburlaub2222
      @urlauburlaub2222 16 days ago

      @@TimAZ-ih7yb AI suggests using people with low brains as energy; or was it Agent Smith serving the Matrix? I don't know, what do you suggest?

    • @thecustodian1023
      @thecustodian1023 16 days ago +4

      How many comments do we all see every day now that are grammatically and spelling-wise perfect, but are absolute word salads with no relevance to the conversations they're in? That's real-world AI.

    • @josephtaylor6285
      @josephtaylor6285 11 days ago

      @@JinKee Don't be so hard on yourself. Kudos for your self-deprecation and self-reflection.

  • @CasualRelief
    @CasualRelief 17 days ago +71

    I think you might be wrong on this one. The LLM companies have only just run up against that power/compute wall; they haven't even tried to optimize the algorithms much yet. I would bet that we'll see a crazy improvement in model efficiency in the next 2 years. The physical limitations of power and chips will slow things down for sure, but programmatic improvements to the algorithms, with aid from the models themselves using long-runtime compute, will lead to more efficient models. We will also see a shift away from the "general" model, and we'll see many very specialized models that are super efficient for the task.

    • @diviningrod2671
      @diviningrod2671 17 days ago +2

      More importantly, why couldn't AI tell me how many blueberries are in a pie??
      Is AI a nothing burger?
      IIRC it's 475 berries.

    • @hi117117
      @hi117117 17 days ago +4

      The thing is that there really isn't a path forward for optimizing the software side of this. For many, many years now, even before large language models, the solution has always been to just throw more data at a bigger model, with significant advances on the software side coming only once every 5 years, if that. The only ones I can really think of are things like recurrent neural networks, convolutional neural networks, the stuff behind large language models, and switching the activation function to sigmoids. That represents something like 20 years of advancement in the field of AI.
      The point is basically that while it's not impossible to optimize the software side, it is much harder, and if we're at that point now, it basically puts us in an "AI never" future, much like how there's potential for fusion power but realistically we will never get it.

    • @Stevenpwalsh
      @Stevenpwalsh 17 days ago

      @@hi117117 There is still a lot of low-hanging fruit. We're finding all kinds of ways: some are data techniques, i.e. distillation of larger-model output into smaller models. There are gains being made in parallelization and caching with inference. Some people are playing with fundamental architecture (i.e. being more sophisticated than a simple softmax), there are gains from having better data, etc.

    • @chevgr
      @chevgr 17 days ago +4

      He's wrong on many things, including his specialty subject (he said it was impossible for Trump to win in '24).

    • @crackyflipside
      @crackyflipside 17 days ago +1

      That last sentence of yours is huge. If you use software like Palantir Foundry, you'll see that the low-cost models are waaaaaaay more efficient for the majority of intermediate LLM functions in the data pipeline.

  • @H.G.Wells-ishWells-ish
    @H.G.Wells-ishWells-ish 17 days ago +63

    One thing I've found interesting has been the shift in the drivers of technological innovation. From the 1950s through the 1990s, the military was the main driver of innovation. But in recent years, the gaming and entertainment industries have closed the gap, particularly in high tech. Even small drones seem to have been present for entertainment purposes before they were viewed as an applicable asset on the battlefield.

    • @lord0Fwar93
      @lord0Fwar93 17 days ago +7

      Mankind can advance without war, but some of us still choose war.

    • @AtheismScientism
      @AtheismScientism 17 days ago +8

      False. Drone tech dates back to WW2 and the US military was using small drones in the ‘90s. Civilian innovation has been the reason for certain applications, but small drone tech wasn’t a recent innovation of the video game industry…

    • @thedj3319
      @thedj3319 17 days ago +8

      Yes and no. Military innovation makes it happen, but market innovation makes it affordable.
      Drones are a perfect example of this. The US has been using drones for decades now; that's what Predator drones are, and they are far superior to market drones: staying longer in the air, more deadly ammo, better cameras, etc.
      But the market drones will do in a pinch if you don't have the option of widespread Predators. They are cheap, affordable, and easy to manufacture.

    • @H.G.Wells-ishWells-ish
      @H.G.Wells-ishWells-ish 17 days ago +3

      @@AtheismScientism 1) I never said small drone warfare was a component of the video game industry; they were a part of the leisure entertainment industry. 2) Nor did I ever say the military didn't have some type of drone component. I said it SEEMS like small drone warfare didn't pick up until after civilian drones had become a significant part of the leisure industry. It was an observation, not a definitive 'true or false' assertion.

    • @RuffinItAB
      @RuffinItAB 17 days ago +2

      The Cold War was a serious driver of government R&D.

  • @NickApex
    @NickApex 17 days ago +94

    If you take 5 minutes to look into Nvidia and its offerings you’ll realize how absolutely moronic this video is.

    • @raynash4748
      @raynash4748 17 days ago +14

      Some of his videos are comical.

    • @mastervibes2296
      @mastervibes2296 17 days ago +8

      Trying to pump your Nvidia stock?

    • @WhiskeyJ_TV
      @WhiskeyJ_TV 17 days ago

      What about the software? 😂

    • @michaelcallahan8412
      @michaelcallahan8412 17 days ago +6

      I work in AI, and I use Nvidia products, and nothing Zeihan said was inaccurate. I think you might be confusing GPUs built for AI with chips built for AI? But I'm not really sure what your issue with the video was.

    • @RyanDorough
      @RyanDorough 17 days ago

      This Apple white paper might have some bearing on the topic. pbs.twimg.com/media/GeSeGlQWoAACgUr.jpg

  • @richdurbin6146
    @richdurbin6146 17 days ago +26

    I think letting the news marinate for a week before seeing his takes works pretty well for getting better context.

    • @thecustodian1023
      @thecustodian1023 16 days ago

      Or a month. Way too much of what he says is nonsense that turns out to have been built on propaganda lies that just haven't been publicly exposed all the way yet.

  • @psych0r0gue1
    @psych0r0gue1 17 days ago +11

    One of the things I always wonder when I hear about the waste heat problem with AI is why we can't recapture some of this waste heat to use for secondary power production.

    • @MattsAwesomeStuff
      @MattsAwesomeStuff 17 days ago +10

      Good question. It's because, despite the volume of heat being large, the intensity of the heat is low. Intensity (a large difference in temperature) is what you need to accomplish something with it. For example, if I took a whole swimming pool of cold water and heated it up 2 degrees, that would take a massive amount of energy, but you still couldn't cook a potato with it; it's only 2 degrees warmer than the cold water. If you wanted to use that warmer water to preheat your hot water tank for a bath, you'd barely accomplish anything; you only gained 2 degrees. Entropy only goes in one direction: if you boil a pot of water and dump it into the pool, you can't get boiling water back out of it, even though energy was conserved.
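
      [Editor's note] Rough numbers for the pool example (sizes are illustrative): a 50 m^3 pool holds about 5 x 10^4 kg of water, so warming it by 2 K takes

      ```latex
      Q = m\,c\,\Delta T
        \approx (5 \times 10^{4}\ \mathrm{kg})(4186\ \mathrm{J\,kg^{-1}K^{-1}})(2\ \mathrm{K})
        \approx 4.2 \times 10^{8}\ \mathrm{J} \approx 116\ \mathrm{kWh}
      ```

      a lot of energy, but at a temperature too close to ambient to drive anything useful.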

    • @beans100
      @beans100 16 days ago +2

      @@MattsAwesomeStuff Thank you, good explanation!

    • @denisblack9897
      @denisblack9897 15 days ago

      Cause it's fucking radioactive?

    • @psych0r0gue1
      @psych0r0gue1 15 days ago

      @@denisblack9897 Well, it's not; and besides, we use radioactive stuff to generate power. Thanks for trolling with us today. Feel free to climb back into your hole.

  • @4mb127
    @4mb127 17 days ago +33

    A good example of how Peter's knowledge is as vast as an ocean and as deep as a puddle.

    • @SA2004YG
      @SA2004YG 17 days ago +1

      😂

    • @immortaljanus
      @immortaljanus 17 days ago +4

      He's a generalist; he's said so many times. I imagine he has other people, more specialized, who do the deeper, narrower analyses, and he incorporates their conclusions into his generalist approach afterwards.

    • @mountainmanmike1014
      @mountainmanmike1014 17 days ago +1

      @@immortaljanus His team says they pick the stories for him to rant about, and he doesn't have any experience regarding most of them. He regurgitates statist news.

    • @tkzsfen
      @tkzsfen 16 days ago

      I missed the "deep" part. What would you add to his comments that he misses?

  • @thievingpanda
    @thievingpanda 10 days ago +1

    P vs. NP. That is the real question here: will it be solved? Sadly, I think large portions of the workforce can still be automated by AI even if P vs. NP is not solved.

  • @nicholaidajuan865
    @nicholaidajuan865 17 days ago +29

    As a gamer, I wish Nvidia would refocus on the graphics market instead of building their H100 AI cores that are literally the size of dinner plates (an entire silicon wafer), are liquid-cooled, and are made to be rack-mounted for use in AI server farms. The world you describe is here, and in such a world, chips that are worth several hundred thousand dollars each are flown, and so are not subject to shipping restrictions.

    • @woznotwoz-s8j
      @woznotwoz-s8j 17 days ago +6

      No. Because money.

    • @3rdHalf1
      @3rdHalf1 17 days ago +10

      Gamers complaining that Nvidia doesn't cater to them is like suburban stay-at-home mothers complaining about a lack of products from John Deere.
      Gamers are not the reason Nvidia is the biggest company on earth. Kinda sucks, I know.

    • @AtheismScientism
      @AtheismScientism 17 days ago +2

      I have yet to meet a group of people who complain more than Gamers…

    • @Stevenpwalsh
      @Stevenpwalsh 17 days ago +1

      AI is how we build a world with a diminishing worker population. It's the most important thing we are building today.

    • @JakeWestington
      @JakeWestington 17 days ago

      Lol why would they do that? Look at their earnings on AI vs gaming. It's not even close anymore.

  • @DemetriusTrumpClips
    @DemetriusTrumpClips 10 days ago +2

    He's also wrong about the power needed to supply LLMs globally. LLMs such as Llama from Meta have trained smaller models to be just as efficient/accurate. Once you train a massive LLM, it can be optimized to run on less powerful machines by having big models train smaller models.

  • @tankvibe
    @tankvibe 16 days ago +3

    As someone actively working on agentic frameworks: this feels about 6 months outdated. Low-end functional chips that can run cars are available under $400, at the size of a wallet.
    You have your understanding of the sizes of GPUs and TPUs backwards.
    Long-time viewer, lots of respect.

  • @HomeSlize
    @HomeSlize 16 days ago +6

    Damn, Peter still around? He's one of the best examples of the Dunning-Kruger effect.

    • @OGPressident
      @OGPressident 16 days ago +1

      100%. He speaks far too confidently on too much stuff, and is wrong on so many details that it's actually dangerous to take his point of view too seriously.

    • @HomeSlize
      @HomeSlize 15 days ago

      @@OGPressident exactly.

  • @frankbieser
    @frankbieser 17 days ago +5

    The advantage of a GPU for AI is the same reason they are good for graphics: they are optimized to perform specific hashing functions (particular types of math, if you will). CPUs can do a lot of different tasks at the same time too, but they are generalized processors supporting all kinds of math operations. GPUs are specialized processors for a specific set of math operations, which makes them more efficient and therefore useful for AI and graphics rendering.

    • @briancase6180
      @briancase6180 16 days ago +1

      No, incorrect. They do not "perform specific hashing functions" unless you consider multiplies and additions to be hashing functions (which is one way to look at them, but nobody does).

  • @kev2582
    @kev2582 16 days ago +2

    Peter, there are basically two distinct computation scenarios: training and inference.
    A GPU is general-purpose, so it can do both. A custom inference ASIC is optimized for inference.
    Inference chips are many fold more efficient than GPUs, but not by orders of magnitude.
    Chip supply and the scenarios it enables are two different things.

  • @snarky_user
    @snarky_user 17 days ago +36

    Sixty years ago, computer centers were very large rooms "with a massive heat problem" because of vacuum tubes.

    • @Ayvengo21
      @Ayvengo21 17 days ago +12

      They're still large rooms with a massive heat problem, but for other reasons.

    • @snoomtreb
      @snoomtreb 17 days ago +4

      Indeed this is why bitcoin farms and data centers are often in Iceland. Cooling is way cheaper there ;)

    • @Brent-z2s
      @Brent-z2s 17 days ago +1

      My dad saw one at a large insurance company in the '60s, and I told him recently that his phone has more computing power.

    • @joey199412
      @joey199412 17 days ago

      @@snoomtreb No, it's because of the almost limitless geothermal power generation.

    • @snoomtreb
      @snoomtreb 17 days ago +2

      @@joey199412 it definitely helps. Same reason why aluminum smelters are there.

  • @gregkelly2145
    @gregkelly2145 17 days ago +28

    I'm not an expert, but I do know that Tesla IS using completely custom AI chips right now. The ones used in their cars are made in the US by Samsung; the D1 (Dojo) chips are made by TSMC in Taiwan. But they are both completely custom AI chips.

    • @chevgr
      @chevgr 17 days ago +5

      Peter def isn't an expert either.

    • @nod5770
      @nod5770 17 days ago +2

      Peter's not an expert in anything. He's a front man for an intelligence war. His only skill is acting.

    • @segalliongaming8925
      @segalliongaming8925 16 days ago +5

      Dojo isn't ready yet. That's why xAI is still relying on Nvidia chips.

    • @TrendyStone
      @TrendyStone 16 days ago

      If anyone even mentions Elon Musk or Tesla, Peter goes into a crazy cognitive-dissonance loop. It's odd to see. Smart guy... and I've read all of Peter's books... but he can't handle certain topics objectively.

    • @SignalCorps1
      @SignalCorps1 16 days ago

      So is AWS, and I suspect Azure and GCP are too.

  • @okalov
    @okalov 17 days ago +17

    My favourite thing about AI is all the software developers raving about how it will replace all nuanced professions that require 'soft skills' within the decade, and yet the first thing we're seeing it really replace is junior software developers and programmers...

    • @mav45678
      @mav45678 17 days ago +12

      That's fake news. I'm in the industry, and I haven't seen or even heard of anyone's job being replaced by AI yet.

    • @talideon
      @talideon 17 days ago

      ​@@mav45678 The problem isn't replacement, but hiring freezes on people going into low-level positions. This is deeply misguided, but it's happening.

    • @belava82
      @belava82 17 days ago

      @@mav45678 They're already replacing all sorts of "designers" and "tech writers" whose job was to slightly modify some templates, etc. They're giving those types of work to interns now.

    • @tklarp4735
      @tklarp4735 17 days ago +10

      Wishful thinking. I know people have a hate-boner for programmers because they make a lot of money, so they want to see them lose their jobs, but this isn't happening. Layoffs and slowed hiring are happening because of interest rates, not AI.

    • @tarazieminek1947
      @tarazieminek1947 17 days ago +5

      Yeah, it might replace some of the weaker junior devs, but that just means the senior devs become more productive and fewer people will be able to get entry-level jobs in the software field. So senior devs actually become more valuable.

  • @woznotwoz-s8j
    @woznotwoz-s8j 17 days ago +15

    Will AI replace Peter Zeihan in geopolitical predictions?

    • @mindguru22
      @mindguru22 17 days ago +1

      Both are the same. At least the Artificial part 😂

    • @hardheadjarhead
      @hardheadjarhead 16 days ago

      It is certain to be more accurate.

  • @DogmaticAtheist
    @DogmaticAtheist 17 days ago +20

    "Nanometer" is no longer a technical term but a marketing term.

    • @Apjooz
      @Apjooz 16 days ago

      Who's talking about nanometers in the year 2025?

    • @TrendyStone
      @TrendyStone 16 days ago

      @@Apjooz Peter

  • @JohnDoe-td3xx
    @JohnDoe-td3xx 16 days ago +2

    Zeihan: These chips have the largest supply chains in history
    Anno Players: 🤔

  • @BluegillGreg
    @BluegillGreg 17 days ago +35

    To be fair, this begs the question: What the heck is a "postage stamp?"

    • @mcf8615
      @mcf8615 17 days ago +2

      😂😅

    • @mindguru22
      @mindguru22 17 days ago

      Same as "what is a 'potato chip'?"

    • @dirtydish6642
      @dirtydish6642 17 days ago +7

      *_Raises_* the question. _Begs the Question_ is a phrase referring to a logical fallacy.

    • @sluggo206
      @sluggo206 16 days ago

      It's those things your brother collects.

    • @luciusael
      @luciusael 16 days ago

      Are you Gen alpha or something?

  • @Stevenpwalsh
    @Stevenpwalsh 17 days ago +10

    I use/build AI in the healthcare sector. My team of 2 guys is already finding millions in savings (stuff we used to have a hundred SMEs do); what we're finding with it is crazy. Beyond proof of concept, we're using it in production to find real value for customers.
    Peter is pretty stuck on compute limitations; they're just not a problem for us. One of the biggest boosts is taking the output from a very large model and distilling it into a small model (sketched below). We can do almost all the things we're doing today (minus some of the clinical stuff, which is getting better... but not quite there yet) with a 14B-parameter model, which is super cheap to run. We can run that 14B-parameter model on a low-end GPU; no need for a super-high-end space-age machine. Though honestly, the foundation-model companies are getting so cheap and fast that we don't even need the locally run distilled models either (we mostly keep using them for privacy reasons).
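
    [Editor's note] For readers wondering what "distilling into a small model" looks like mechanically, here is a minimal sketch of the classic softened-softmax distillation loss (Hinton et al., 2015); the shapes, batch size, and temperature are illustrative, and this is one common recipe rather than necessarily this commenter's exact pipeline:

    ```python
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Soften both output distributions with temperature T, then push the
        # student toward the teacher with KL divergence. The T*T factor keeps
        # gradient magnitudes comparable across temperatures.
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        p_teacher = F.softmax(teacher_logits / T, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

    teacher_logits = torch.randn(8, 100)   # frozen large model: batch of 8, 100 classes
    student_logits = torch.randn(8, 100, requires_grad=True)
    loss = distillation_loss(student_logits, teacher_logits)
    loss.backward()                        # gradients flow to the student only
    print(loss.item())
    ```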

    • @FloydThePink
      @FloydThePink 17 days ago +2

      So where in the US is healthcare cost going down because these savings are being passed on to the patient? Cynical me would think you are increasing CEO bonuses and enabling even more stock buybacks. Nothing some Luigis can't fix.

    • @dalehill6127
      @dalehill6127 16 days ago

      @@FloydThePink I think you'll find that in the US, cost savings, especially in the healthcare sector, are basically *never* passed on to the customer. After all, when you're ill you'll pay anything, won't you, and that's all that US companies ever need to know.

    • @romik1231
      @romik1231 16 days ago

      I agree, but why don't we see AI flooding the market? I mean, it should replace all chat-based hotlines/support, maybe even the voice-based ones. It can also do most office work. Still, I don't see it implemented anywhere. Why is that?

    • @FloydThePink
      @FloydThePink 16 days ago

      @@romik1231 AFAIK the cost of AI is prohibitive for small businesses. Nvidia stock has skyrocketed because of an insane sales margin that impresses even Apple. I know of no other chipmaker that is even kinda-sorta close to Nvidia, so there's no competition to drive prices down anytime soon.

    • @Stevenpwalsh
      @Stevenpwalsh 16 days ago

      @@FloydThePink In my eyes it's an upgrade to the bucket we've been using to scoop water out of a boat with a hole in it. AI won't fix this mess; the issue is multi-modal. Payers get a lot of blame because they're the ones saying no, but the issue is far wider than them.

  • @billkemp9315
    @billkemp9315 17 days ago +4

    Peter, you are missing some things in your assumptions. Check out photonics. Most people are unaware of this next level of computing, and it is easier to produce than electronics. Photonic systems work at the speed of light and produce very little heat in the data center. Cooling is 40-50% of the electricity cost of a data center. The industry is finally shifting to liquid cooling instead of air cooling, which is 20 times more efficient. I have designed and built 34 Tier 4 data centers, so I am not just some random guy with an opinion.

    • @bernl178
      @bernl178 17 days ago +1

      1,000,000%. I am as well, and am following photonics.

    • @antonyphipps5671
      @antonyphipps5671 13 days ago +1

      Hooray, someone finally mentions optical solutions (besides me and the folks at Poet Technologies).

  • @texasgermancowgirl
    @texasgermancowgirl 10 days ago

    I work in this field, and the biggest issue is that we don't have the data center and energy infrastructure, or the data management, to run it on a mass industrial scale.

  • @colbysmith6201
    @colbysmith6201 17 days ago +6

    Happy New Year, Peter. I enjoy your point of view and videos. Thanks for giving all of us viewers something to think about.

  • @GregBaker-ifost
    @GregBaker-ifost 16 days ago +2

    The reason Peter is wrong in this video is that he isn't considering the difference between AI *inference* (using existing models) and *training* a new model. Training a new model requires enormous amounts of power and compute (and specialised GPU cards). The world could stop training tomorrow (it won't, but even if it did) and we could continue to use existing models.
    Inference, on the other hand, *doesn't* need massive amounts of power and compute. When you query ChatGPT, you are probably going to interact with gpt-4o, which uses about the same amount of hardware as a 20-person LAN party -- but of course it only uses that for a few seconds before handling the next query.
    A gpt-4o-mini query uses less hardware than a high-end gaming PC.
    We already have more than enough hardware in the world to run inference (see the sizing sketch below).
    That's before we even start talking about the leaps and bounds being made in knowledge-distilled small models: RWKV, Phi-4, the ultra tree models of my PhD, and so on. These are models so small that they run quickly and well on your laptop (or even on the laptop you stopped using because it was too slow to run Office any more).
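
    [Editor's note] A rough sizing check on the "runs on your laptop" claim; the parameter counts and quantization levels below are illustrative assumptions, and this counts weights only (activations and the KV cache add more):

    ```python
    def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
        # Memory for the weights alone: parameters x bits, converted to GB.
        return n_params * bits_per_weight / 8 / 1e9

    for name, n in [("14B-class model", 14e9), ("4B-class model", 4e9)]:
        for bits in (16, 4):
            print(f"{name} at {bits}-bit: ~{weight_memory_gb(n, bits):.1f} GB")
    # A 14B model at 4-bit is ~7 GB -- within reach of an ordinary laptop's RAM.
    ```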

    • @Art-is-craft
      @Art-is-craft 16 days ago

      That's the problem: you are trying to predict; he is just stating what exists now.

    • @mountainmanmike1014
      @mountainmanmike1014 16 days ago

      @@Art-is-craft Peter isn't predicting anything? BS. Is this one of Peter's team, or another delusional fan?

  • @JohnKerbaugh
    @JohnKerbaugh 17 days ago +3

    Why does a president need to decide? Why not let the market decide?

    • @Apjooz
      @Apjooz 16 days ago

      Yeah why not just lose. No biggie.

  • @Pdotta1
    @Pdotta1 13 days ago +1

    Peter mentions Doom! Gave me a giggle.

  • @monkeywrench1951
    @monkeywrench1951 17 days ago +6

    How can [mainland] China disappear economically and at the same time take over Taiwan and deprive the US of AI chips?

    • @mountainmanmike1014
      @mountainmanmike1014 17 days ago +3

      dragons

    • @judewarner1536
      @judewarner1536 16 days ago +4

      Because empires in meltdown often attack third parties either as a means of deflecting a disaffected population or as a scapegoating tactic.

    • @richardpavlov442
      @richardpavlov442 16 days ago

      Nukes

  • @rockydopeydoge6730
    @rockydopeydoge6730 14 days ago

    Spot on with the "where are we going to deploy it" question; humanity hasn't been great at that so far, unfortunately...

  • @I_Lemaire
    @I_Lemaire 17 days ago +4

    Happy New Year, Peter and thank you.

  • @tommoody728
    @tommoody728 17 days ago +28

    I thought Nvidia had already produced multiple generations of chips designed specifically for AI.

    • @Withnail1969
      @Withnail1969 17 days ago +16

      They have; Peter hasn't bothered to do any research, as usual.

    • @navcenter77
      @navcenter77 17 days ago

      @@Withnail1969 The "AI" chips are just gaming GPUs with a different label and a higher price tag. Nvidia has played this game before, advertising to bitcoin miners during the last boom cycle. You can build your own LLM on a standard PC with a generic GPU and have a better experience than ChatGPT.

    • @joey199412
      @joey199412 17 days ago +8

      They are just adapted GPU designs meant for gaming. There currently are *no* actual AI accelerator chips built specifically for training and inferencing transformer-architecture AI. Not even Google's TPU fits that bill.

    • @SirSchmittyX
      @SirSchmittyX 17 days ago

      @@Withnail1969 Which processors are designed specifically for AI? One generation back is Ampere, and those were designed for graphics processing. They did package Ampere chips specifically for data center use, but the chip itself isn't designed to handle just AI. The current-gen Hopper architecture is mostly an enhancement of that same technology, but organized as a multi-chip architecture and packaged for data centers. I do think it's a fine line he is treading with the language, but it is true that Nvidia hasn't rolled out AI silicon specifically designed from the ground up.

    • @Withnail1969
      @Withnail1969 17 days ago

      @@SirSchmittyX They have been making dedicated AI stuff for a year or two now, no?

  • @MatthewSargeant
    @MatthewSargeant 17 days ago +14

    Yeah guys, you should have done some research. This video is incorrect on many levels. Not only are AI edge chips and large server chips already in mass production, but other AI models don't need massive, powerful chips to run; you can run them on your CPU and GPU on a normal PC, in the cloud, etc., wherever. Open-source AI is progressing incredibly fast too. For a lot of tasks you don't even need frontier stuff like the latest o1/o3 models.

    • @dalehill6127
      @dalehill6127 16 days ago +1

      You know more about it than I do, and I know more about it than Mr Zeihan. 😊

  • @Equalzer
    @Equalzer 16 days ago +1

    Necessity is the mother of innovation and constraints are the father of creativity.

  • @conjurermast
    @conjurermast 17 days ago +36

    I think PZ is not qualified to talk about AI being "targeted randomness" and such at all. It's shocking how good it is at certain things, like translation and coding.

    • @Stevenpwalsh
      @Stevenpwalsh 17 days ago +4

      Really makes you wonder: if he's this far off on AI... how far off is he when he talks about all the other stuff he is supposed to be an expert on?

    • @chevgr
      @chevgr 17 days ago +3

      @@Stevenpwalsh like the US presidential election for example

    • @TravisBerthelot
      @TravisBerthelot 17 days ago +6

      AI is still horrible at coding.

    • @Cream-i5u
      @Cream-i5u 17 days ago +2

      @@TravisBerthelot AI is in its infancy; you won't be talking about mundane things like this in 3 years.

    • @TravisBerthelot
      @TravisBerthelot 17 days ago

      @@Cream-i5u AI is purely the result of the amount of compute that you can buy. I don't think you'll be able to afford infinite compute in 3 years any more than I can. So the mundane is here to stay.

  • @JerichoTheBeagle
    @JerichoTheBeagle 17 days ago +9

    They're literally trying to solve their software limitations with raw hardware power and it's futile.

  • @jigglejaggle4732
    @jigglejaggle4732 17 days ago +33

    We already have custom AI-specialized chips; one example is the TPU by Google, which Google has been using for training for a while. Furthermore, Nvidia has been redesigning its chips to be more specialized for AI, to the point where they may not even be called GPUs anymore. You're way off; the singularity is nearer!

    • @audio9849
      @audio9849 17 days ago +6

      I was thinking the exact same thing. Why is Nvidia worth 3 trillion? Because of their AI "GPUs".

    • @bobobricklayer
      @bobobricklayer 17 days ago +2

      Don't confuse poor Peter with anything technical, especially semiconductors.

    • @avocade
      @avocade 17 days ago

      Groq

  • @20thcenturyboy85
    @20thcenturyboy85 16 days ago

    Thank you for your videos.

  • @andrewblain3405
    @andrewblain3405 17 days ago +5

    The first thing we will use a good AI chip for will be to design a better, faster, cheaper, more efficient AI chip.
    Obvs.

  • @sebastianfletcher-taylor1024
    @sebastianfletcher-taylor1024 14 days ago

    I completely agree. I use AI extensively in my work, and part of my job is to research what current models can and can't do effectively, and to engineer prompts to be as effective as possible while ensuring accuracy and correct results.
    I think Zeihan underestimates how much this field of technology is already being utilized by many people in a variety of fields.

  • @michaelcallahan8412
    @michaelcallahan8412 17 days ago +38

    *Me, working in AI for genetics:* Zeihan talks about a million things confidently; now I can finally see if he actually knows what he's talking about.
    *Zeihan:* Doesn't know what he's talking about.

    • @TyrannicG
      @TyrannicG 17 days ago +5

      Comments like this make you look very stupid.
      It's very easy to do this:
      "Zeihan: Doesn't know what he's talking about."
      By quoting him, you'd know the exact part I'm referring to.

    • @Chr1s-fm6bi
      @Chr1s-fm6bi 17 days ago +10

      Can always rely on someone who points to mistakes but can’t say what is wrong or correct it.

    • @richkroberts
      @richkroberts 16 days ago +2

      If you are going to make such a comment, consider providing some examples of where Peter is wrong. Seriously, it sounds as though you might have constructive points to make given you have practical experience with AI.

    • @Broc-e5n
      @Broc-e5n 16 days ago +2

      For those replying to Michael asking why... well, here's an example... Google's own AI chip is in its 6TH GENERATION. Google has been designing its own AI chips for over ten years. It made its cloud TPU available in 2018 and used it internally as early as 2015.

  • @Eric-ue5ed
    @Eric-ue5ed 14 days ago

    It's always hard to pick winners. There's photonic computing, neuromorphic, PAPs, quantum, and graphene-based transistors.

  • @Utoko
    @Utoko 17 days ago +21

    This is so wrong. The current NVIDIA GPUs for servers are highly optimised for AI; the fact that you could design even better chips for AI training doesn't change that.
    And all the issues he is talking about will be helped by AI, so it isn't "do AI or improve the financial sector"; they go together, like all the others.
    Also, AI inference is becoming cheaper and cheaper, fast: old GPT-4 was around $60 per million tokens, versus $0.014 for the DeepSeek model.
    That's roughly 1/4,000 of the cost (60 / 0.014 ≈ 4,300), in not even 2 years.

    • @goukux5908
      @goukux5908 17 days ago +4

      Yes, exactly. Sometimes Peter gets out of his swim lane and it goes horribly wrong.

  • @thailandmalcolm
    @thailandmalcolm 15 days ago +1

    We use the chip to make me Grandmaster in Overwatch 2!!!

  • @KatySei
    @KatySei 17 days ago +8

    Weren't you in New Zealand like 5 minutes ago?

  • @pierredubois9366
    @pierredubois9366 14 days ago +1

    Linear thinking in an exponential world

  • @Thandar324
    @Thandar324 17 days ago +3

    I wonder how quantum computers might affect this in the future?

  • @gyoza6510
    @gyoza6510 15 days ago

    Love this episode!

  • @tinytim71301
    @tinytim71301 17 days ago +3

    Peter has the coolest sunglasses.

  • @jaku5796
    @jaku5796 16 days ago +1

    One of the problems is that we can improve productivity with AI to reduce the issues of a lower population, but AI will not solve the issue of shrinking markets due to a lower population.

  • @porkyfedwell
    @porkyfedwell 17 days ago +6

    Artificial Intelligence is Artificial, but it isn't Intelligent. Anyone who's been "assisted" by an AI assistant already knows this "secret."
    Your jobs are secure for now.

    • @mountainmanmike1014
      @mountainmanmike1014 17 days ago +2

      If one person is 5x more productive, then you don't need to keep hiring the other four. AI is a tool.

    • @markcalhoun8219
      @markcalhoun8219 17 days ago

      AI is a great and expensive way to steal IP and get worse outputs than algorithms we already had. I.e., it's a pump-and-dump scheme.

    • @Stevenpwalsh
      @Stevenpwalsh 17 days ago

      Skill issue

  • @tomrutledge393
    @tomrutledge393 16 days ago

    There is one more angle you might be considering but didn't mention: the odds are pretty good that by the time we are ready for what is expected to be the next state of the art, that line will have moved. We're able to design and improve designs faster than we can get those improvements to market. So how do we jump ahead, to be ready to build the newest and best while it is still... the newest and best?

  • @josephdouglas6260
    @josephdouglas6260 17 days ago +14

    Watching Peter consistently misunderstand technological progress always makes me smile. Companies did major AI-specific tape-outs the year before last, and next-gen ones last year. You're completely missing how AI is reorienting geopolitical priorities and incentives.

    • @nadavshemer
      @nadavshemer 17 days ago +1

      He's correct about dinner-plate-sized chips not hitting the market this year. Or next year. Or ever :D
      Guy's hilarious

    • @alst4817
      @alst4817 17 days ago +2

      Dude, I hope Jensen Huang paid you, cos that is wildly optimistic. Lemme guess: you've never actually tried to use them for anything important?

    • @GMK189-f2k
      @GMK189-f2k 17 days ago +2

      Yep, I can't even believe he does this, as people who understand what is actually happening know he isn't credible. It hurts him to make these videos, as it casts doubt on the other subjects he takes on. He couldn't be more poorly informed. His belief that Blackwell and other AI chipsets are just juiced-up gaming GPUs is embarrassing.

  • @antonyphipps5671
    @antonyphipps5671 13 days ago

    Peter, there are a number of companies working on internalizing optical systems (photons) within GPUs to replace the "wires" that transmit and manipulate electrons (their resistance causes the heat load). Optical computation and transmission systems will enormously reduce the energy demand of cooling systems in server farms. Look at the Canadian firm Poet Technologies, for example.

  • @konrad7492
    @konrad7492 17 days ago

    Love to hear you talking about GPUs and IT in general. One thing I feel I need to say: I don't agree that gaming was the main GPU market driver before machine learning and AI applications. Think of anything that is computer-aided design or CGI; that was the main market driver. Large businesses.

  • @giuseppegiannini3697
    @giuseppegiannini3697 17 days ago +6

    Happy New Year, Mr. Zeihan. 😊

  • @wawaldekidsfun4850
    @wawaldekidsfun4850 17 days ago +1

    While I respect Zeihan's geopolitical insights, his take on AI hardware shows concerning gaps in technical understanding. The claim about AI-specific chips not existing until 2025-2030 ignores already-deployed solutions from Google, NVIDIA, and others, while his "dinner plate sized" chip prediction misses how modern processors actually work. On AI tech, we'd be better served listening to those who work directly in the field.

  • @nicknach
    @nicknach 17 days ago +3

    I'm in the tech industry, and the first time I EVER saw GPUs running non-gaming workloads was at an oil and gas company (back in 2010). They were using the GPUs to process data for geoscience (upstream oil exploration).

  • @Lazarus1095
    @Lazarus1095 17 days ago +2

    The bottom line that I'm seeing here is that the tech bros are not going to help us reduce carbon emissions by using energy more efficiently.
    Instead they'll just hog more of it.

    • @budbin
      @budbin 17 days ago

      Aren’t they investing in nuclear power now?

    • @josephdouglas6260
      @josephdouglas6260 17 days ago

      Wtf is a tech bro? Anyone with a CS degree?

    • @Lazarus1095
      @Lazarus1095 16 days ago

      @budbin Sure they are. And at this rate they will spend every single ounce of benefit on themselves.

    • @Lazarus1095
      @Lazarus1095 16 days ago

      @josephdouglas6260 Do I really need to explain the concept of a tech bro to you? In 2025?

  • @vgernyc
    @vgernyc 17 days ago +3

    So, the moment to quote Consuela from Family Guy: "Noooo, noooo, nooo." The current stock market bubble needs to pop for the technology to develop properly. If Nvidia disappears or fades as a result, all the better.

  • @hasanamin4668
    @hasanamin4668 17 days ago

    Thank you for creating this video; it provided valuable insights into the depth (not deep) of your knowledge about everything you are talking about.

  • @comentedonakeyboard
    @comentedonakeyboard 17 days ago +3

    Given how flawed AI still is, and what happened with "gain of function" in the genome of the coronavirus, I would prefer it if AI were NOT used for genetic research (or anything else that might lead to dangerous results).

  • @ericgregori
    @ericgregori 17 days ago +1

    The Dunning-Kruger effect is when poor performers in many social and intellectual domains seem largely unaware of just how deficient their expertise is. Their deficits leave them with a double burden: not only does their incomplete and misguided knowledge lead them to make mistakes, but those exact same deficits also prevent them from recognizing when they are making mistakes and when other people are choosing more wisely.

  • @gordoncrespo2045
    @gordoncrespo2045 17 days ago +5

    How can a supposedly intelligent guy be so wrong on this topic?

    • @chevgr
      @chevgr 17 days ago

      Easily. I'm not an expert, but I intuitively think he's wrong here. Can you explain why?

    • @dalehill6127
      @dalehill6127 16 days ago

      Because it's a very specialised area, and Mr Zeihan is a generalist, so his skill set doesn't fit.

    • @naomieyles210
      @naomieyles210 16 days ago +1

      @@chevgr here goes:
      - many useful AI models are small, and can run on a PC.
      - training AI models is intense, running them much less so.
      - much of AI progress is about business fit, not raw technological power.
      - AGI is just ridiculous hype, but AI already has many uses.
      - We are not living in a command economy. We can do all these things with AI, but the organisations with the deepest pockets will get the best chips.
      - AI neural networks are just multi-dimensional geometry, so a GPU was a very sensible starting place.

  • @davidsingh6944
    @davidsingh6944 17 days ago +2

    It's ironic that the actual flaw of AI was predicted by 2001: A Space Odyssey.

  • @annekepannekoek8538
    @annekepannekoek8538 17 days ago +4

    There are enough workers; the problem is they study social science instead of something useful that would actually enable them to pay back their student loans.

    • @neondiddle1676
      @neondiddle1676 17 days ago +1

      Social science is extremely useful. It's the area of study that looks at AI, looks at how humans generally behave, looks at our current economic and cultural model, and makes predictions to say "lol, this is about to be a dumpster fire if we don't get some preemptive laws in place".

    • @ivancho5854
      @ivancho5854 17 days ago +1

      And English literature majors (etc.) working in coffee shops or on disability because they're cracking up as they realise that their education is worthless.

  • @markoconnell804
    @markoconnell804 16 days ago

    Well done.

  • @Atiliusmagnus
    @Atiliusmagnus 17 days ago +5

    There are two areas where you should not venture unless you lower your head enough to learn: AI, and China's capacity to show you how wrong you are when you think you understand it (your predictions of the imminent collapse of China go back to when you were a junior at Stratfor, over 20 years ago). As for the rest, and perhaps including your wrong takes on the subjects mentioned, I am a loyal follower, because your analyses are helpful and interesting whether right or wrong. For what it's worth, you have changed my mind on several subjects. Thank you!

    • @Zarrov
      @Zarrov 17 days ago +2

      If you've followed this guy for 20 years, then you should know that he is not making "predictions". He is analysing. His analysis of China is spot on and has been borne out. Look into the method, not into what you want the conclusions to be.

    • @TomTomicMic
      @TomTomicMic 17 days ago

      China's doing great, it's doing this, that and the other, hooray 🎉!!!!... but it's doing great because it has the biggest debt-to-GDP ratio in the world, bigger than the wider West (including Japan!) combined, and its boomer generation is going "away" much faster, so China's access to money by way of its citizens' savings will reduce greatly over the next decade. They have been borrowing their way out of trouble since 2007/08, and unfortunately into more trouble (presently they are borrowing 22% to achieve 5% growth!). There will be a big reset or terminal decline; that's the choice the CCP faces. There is no magic money tree; communists always run out of other people's money!?!

    • @shinymike4301
      @shinymike4301 17 days ago

      @@Zarrov He predicted Trump would not win in 2024. LOL. Petey "predicts" all the time, and often badly. AI is here now and is growing exponentially. Petey is showing himself to be a Luddite.

  • @loadb5985
    @loadb5985 10 days ago

    The biggest problem is the randomness, even with well-organized data sets: the results are non-deterministic.
    P vs NP has not been solved, and providing accurate responses even in “small” problem spaces fails.
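
    The non-determinism is easiest to see at the sampling step: a model produces a probability distribution over tokens, and the output is drawn from it. A toy sketch (made-up three-word vocabulary and logits, not any real model):

    ```python
    # Why identical inputs can yield different outputs: generation *samples*
    # from a distribution rather than picking deterministically.
    import numpy as np

    vocab = ["yes", "no", "maybe"]      # made-up vocabulary
    logits = np.array([2.0, 1.5, 0.5])  # made-up model scores

    def sample(temperature=1.0):
        rng = np.random.default_rng()              # unseeded: varies per call
        z = (logits - logits.max()) / temperature  # numerically stable softmax
        p = np.exp(z)
        p /= p.sum()
        return rng.choice(vocab, p=p)

    print([sample(1.0) for _ in range(5)])   # varies from run to run
    print([sample(0.01) for _ in range(5)])  # near-greedy: almost always "yes"
    ```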

  • @phlyte7
    @phlyte7 17 days ago +3

    "Gamers. Who play fortnite and doom" I'm crying

  • @nicholasarthur
    @nicholasarthur 16 days ago +2

    Quantum computing is 200-300x more energy efficient than classical computing. Quantum is finally starting to be commercialized little by little, with partnerships between AI and quantum companies, including Nvidia. That would be the rocket fuel that AI needs. Both are still in their infancy, but things will take off a lot sooner than 2040. Exciting times. Love Peter, but early AI integration would complicate a lot of his stances and predictions. Which is OK; we can't predict the future on everything, can we? 😂

    • @GregBaker-ifost
      @GregBaker-ifost 16 days ago +2

      None of the serious ML algorithms in use today would be accelerated by quantum computation. Cryptography on the other hand...

  • @marcbotnope1728
    @marcbotnope1728 17 days ago +7

    We will use it to create "content" and propaganda.

  • @k54dhKJFGiht
    @k54dhKJFGiht 17 days ago

    Amen to that! We desperately need to hear politicians OPENLY and PUBLICLY DEBATE how AI should be used! American society deserves to have input on this. The strategic ambiguity around social media was handled EXTREMELY POORLY!

  • @spoddie
    @spoddie 17 days ago +33

    This is bunk. TPUs are already mass-produced.

    • @TimAZ-ih7yb
      @TimAZ-ih7yb 17 days ago +7

      And the result is still 90% hype and 10% useful work. The coming AI “letdown” will be a debacle for the ages. On the positive side we will see new CEOs at Google and Microsoft.

    • @marshallj2415
      @marshallj2415 17 days ago +2

      @@TimAZ-ih7yb WRONG

    • @philbiker3
      @philbiker3 17 days ago +5

      @@TimAZ-ih7yb more like 99% hype and 1% useful work.

    • @markcalhoun8219
      @markcalhoun8219 17 days ago +1

      TPUs are marketing, not technical results.

    • @Stevenpwalsh
      @Stevenpwalsh 17 days ago

      @@TimAZ-ih7yb There might be a "letdown", but it would be more like the 2000 internet bubble, i.e. there was an insanely useful and economically valuable thing, but we traded stocks like it was 2020, not 2000. I'm using AI every day; the value is real....

  • @barrettvelker198
    @barrettvelker198 15 days ago

    Chips don't need to be "designed" for LLMs. LLM architectures will naturally be adapted to better fit current hardware. All computer science algorithms that survive in the long run are specifically adapted to _existing_ hardware. Hardware companies only take notice when there is durable demand (5+ years) for a specific algorithm. Part of what makes machine learning so special is that we will see improvements from both the software/algorithmic layer and the hardware layer. AI tech will continue to improve even if we NEVER get better hardware.
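
    A classic example of an algorithm bent to fit existing hardware is tiling a matrix multiply so the working set stays in cache; the same idea, applied to attention on GPUs, is where kernels like FlashAttention got their speedups. A minimal sketch (the tile size is a made-up tuning knob; real libraries pick it per chip):

    ```python
    # Same O(n^3) multiply, restructured into cache-sized tiles -- the
    # algorithm is reshaped around the memory hierarchy, not the other way.
    import numpy as np

    def tiled_matmul(A, B, tile=64):
        n, k = A.shape
        _, m = B.shape
        C = np.zeros((n, m))
        for i in range(0, n, tile):
            for j in range(0, m, tile):
                for p in range(0, k, tile):
                    # Each small block multiply reuses operands while they
                    # are still resident in cache.
                    C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
        return C

    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((256, 256)), rng.standard_normal((256, 256))
    assert np.allclose(tiled_matmul(A, B), A @ B)  # same result, cache-friendlier order
    ```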

  • @ScentlessSun
    @ScentlessSun 17 days ago +3

    AlphaFold is already revolutionizing medicine.

  • @geofflewis8599
    @geofflewis8599 15 days ago

    Years ago I heard a definition of AI: "AI will become a form of life as different from biological life as biological life is from inanimate objects."

  • @aum1040
    @aum1040 17 days ago +19

    Peter's take in this video is so monumentally ignorant, I don't even know where to start critiquing him.

    • @gladius1275
      @gladius1275 17 days ago +6

      Pointless comment, since you make a statement with no elaboration or supporting data to refute.

    • @scrout
      @scrout 17 days ago +2

      Start with why he knows so little about Nvidia....or Gemini, or Grok....

  • @dennisclapp7527
    @dennisclapp7527 16 days ago

    Thanks Peter

  • @Voxta
    @Voxta 17 days ago +8

    We’re a small team of developers working on some pretty amazing tech, and Peter really gets it. He speaks with a deep understanding of the field, and it’s inspiring to hear someone who truly knows what they’re talking about. Like he said, we’ve only had a taste of what this technology can do, and it’s already mind-blowing.
    Just a couple of years ago, getting a program to produce two coherent sentences felt like a win. Now, we’re working with compact systems that can recognize images, control software, animate 3D models, and even handle long-term memory. There’s no denying we’re still reliant on dedicated hardware, both on the development and consumer side, but the progress with local models has been incredible.
    The next few years promise to be a wild ride. Whether it turns out to be a good thing or not... well, we’ll find out soon enough. (This comment was rewritten by AI) 🤣

  • @rominegroupreCA
    @rominegroupreCA 17 days ago

    How accurate is this timeline?

  • @mrallan8063
    @mrallan8063 17 days ago +13

    My God, you don't know anything about silicon design, manufacturing, and use... let alone the need for software. Stick to China's collapse in the next five years, which you have been predicting for the last 20 years.

  • @hankfowler8194
    @hankfowler8194 17 days ago

    So, how to play this in the stock market?

  • @BluegillGreg
    @BluegillGreg 17 days ago +5

    This technological development is also used for Artificial Stupidity. Remember that.

  • @JamesR-f9l
    @JamesR-f9l 13 days ago

    You are right on the end piece regarding the pace of technology shifting about every 15 years. You are also right about increased power consumption. For the short term there will be a hierarchical tier model for AI needs. Keep in mind that massively parallel GPUs are preferred but not essential: AI algorithms can run on any CPU (see the sketch below). In the future, data farms will have miniature nuclear reactors, which will allow for higher power loads. There is a lot of investment from big tech going into one particular green tech: nuclear.
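
    A hedged sketch of the "preferred but not essential" point, assuming PyTorch and a made-up toy model: the same code runs on a GPU when one is present and falls back to an ordinary CPU when not, just more slowly.

    ```python
    # The same model runs on whatever hardware is available.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = nn.Sequential(                 # toy model, not a real LLM
        nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)
    ).to(device)

    x = torch.randn(1, 128, device=device)
    with torch.no_grad():
        print(model(x).shape, "on", device)  # works either way; CPU is just slower
    ```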

  • @millenniummastering
    @millenniummastering 17 days ago +18

    Ironically, you should have run this past Gemini, Claude, or o1 first to check for accuracy!!! 😅

  • @jimmyolsenband
    @jimmyolsenband 16 days ago

    Peter may not be right on the nanoscale details of chips, but he is absolutely correct about the dynamic of scarcity and costs. We need to make choices and have conversations before being gaslit into over-investing in tech that doesn't benefit us. Plus, the chips aren't the problem; it's the gazillion mini nuclear plants they want to make to power the servers that power the chips.

  • @definty
    @definty 17 days ago +7

    Nvidia has loads of cards dedicated to training AI. These are not gaming cards; they're designed for AI.

    • @joekraska
      @joekraska 17 days ago +2

      These aren't much different from gaming cards, though, and are based on fundamentally the same tech. The earliest versions were barely different from their GPU counterparts, just with the graphics ports/DACs removed. These days they also carry more resources than a gaming GPU needs.

    • @adurpandya2742
      @adurpandya2742 17 days ago

      Ya, they were planning this for a long time. Gamers and cryptocurrency generated most of the money.

  • @jasonhindle4054
    @jasonhindle4054 17 days ago

    So, my take on this is you have two types of player in the software space. You have OpenAI, who look like they're on a headlong fake-it-'til-you-make-it mission with AGI. Hence their $200 per month premium plan. Then, more established companies are playing a longer and more sustainable game. For example, Google says that its latest models can provide improved answers in drastically less time and at half the computing cost. Hold that thought. Writing more efficient software might just be making a comeback after twenty years of writing software however we want because the hardware kept getting better and better.

    • @Stevenpwalsh
      @Stevenpwalsh 17 days ago

      I think there are two companies, Anthropic and OpenAI... but it's more like app vs. API. Anthropic's revenue is mostly derived from API usage, and that mostly comes from businesses. OpenAI's mostly comes from ChatGPT subscriptions, which are mostly personal subscriptions.

  • @Analyst104
    @Analyst104 17 days ago +7

    Actually, the question is: are humans ready for AI? Considering humans are going through a stupid phase, maybe they should hold off on AI until they get their shit together.

    • @mountainmanmike1014
      @mountainmanmike1014 17 days ago +4

      when was our "smart phase", smashing rocks?

    • @romik1231
      @romik1231 16 days ago +1

      @@mountainmanmike1014 Yeah, you are correct. We did, however, seem to be more competent when we were poorer. We had to try harder. Now, with everything available, we have become lazy and somewhat less knowledgeable. I wonder if AI will make us smarter or dumber. If you can get any information from AI, why would you learn or memorize it?

    • @stopdropnroll
      @stopdropnroll 16 days ago

      Humans are often vile, destructive creatures, and always selfish. AI governance is part of levelling up.

  • @halbritt
    @halbritt 16 days ago

    There are already processors being made specifically for AI: the TPUs that Google manufactures, though they aren't available on the retail market. In practice, they tend to offer less peak performance than Nvidia GPUs, but are more power-efficient.
    Other companies are starting to ship now. Tenstorrent, for example, is shipping data-center-class "AI" chipsets, though I'm not sure at what volume. They're using Samsung Foundry and I believe are presently on 3nm, manufactured in Korea.