Get access to global coverage at an exclusive 20% discount at economist.com/moneymacro

Further reading from the Economist:
1. www.economist.com/business/2025/01/20/openais-latest-model-will-change-the-economics-of-software
2. www.economist.com/business/2025/01/27/deepseek-sends-a-shockwave-through-markets
3. www.economist.com/briefing/2025/01/23/chinas-ai-industry-has-almost-caught-up-with-americas
Something to add: people cannot run "Open"AI models (and those of other closed-source organizations) on their own because the weights of these models are not published, not because of a high hardware barrier. In contrast, DeepSeek (and many other open-"source" organizations) release their models' weights, so people can do anything with them, including running them on high-end PCs, or even optimizing inference further to run on modest PCs.
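The hardware barrier the comment mentions comes down to how many bytes the weights occupy. Here is a rough back-of-the-envelope sketch; the 671B and 7B parameter counts are the commonly cited sizes for DeepSeek-R1 and one of its distilled variants, used here purely as illustrative assumptions:

```python
# Rough memory footprint for hosting open-weight models locally.
# Parameter counts and quantization levels are illustrative assumptions,
# not official specs. Ignores KV cache and activation memory.

def weights_gib(params_billions: float, bits_per_param: float) -> float:
    """GiB needed just to hold the weights at a given precision."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

for bits, label in [(16, "fp16"), (8, "int8"), (4, "4-bit")]:
    # The full model only fits in a server rack; a distilled 7B student
    # at 4 bits fits comfortably on a consumer GPU or in laptop RAM.
    print(f"671B @ {label}: {weights_gib(671, bits):8,.0f} GiB")
    print(f"  7B @ {label}: {weights_gib(7, bits):8,.1f} GiB")
```

This is why weight releases matter: once you have the weights, quantization lets you trade precision for a roughly proportional drop in the hardware required.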
@@QuantumConundrum I use 156 GB of RAM with every job I send off to HPC at work, in a company where hundreds of people do the same several times a day.
@@QuantumConundrum That’s common for everything, even for my school’s OS and computer architecture courses. You just use disk storage as virtual memory. Much slower, but it still works.
It's actually a huge win for us Linux fanboys. Open source software is superior in many ways. Remember the CrowdStrike outage that took down much of the world's computers? Only Russia's airports were spared, because we moved to Linux after Microsoft left in 2022. There are still vulnerabilities in the FOSS space, but programmers can react much more quickly and close any backdoors that get snuck in.
Software algorithms have a very low ceiling. (To understand this, take computer science courses like Data Structures and Algorithm Analysis.) Why is AI going through its third wave of hype in the last 70 years? Because of advances in "hardware." DeepSeek used a method called knowledge distillation: take OpenAI's model (the bigger, "teacher" model) and transfer its knowledge onto their own model (the smaller, "student" model). Not much technological advancement from DeepSeek. Also, AI is still impractical and immature in most industries, if not all; it's still in its second inning. There will be many, many more breakthroughs in AI as hardware continues to advance. Open source has been around for more than 40 years, and much of the more advanced (unscalable) semiconductor technology in China is taken from open source. There are also plenty of problems with open source, for example security, continuity of support, etc.
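The distillation idea mentioned above fits in a few lines. Here is a toy Python sketch of the classic soft-label (temperature-scaled) distillation loss, not DeepSeek's actual training code; the logit values are made up for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax. Higher T softens the teacher's
    distribution, exposing how it ranks the wrong answers too."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student): the student is trained to mimic the
    teacher's whole output distribution, not just its top answer."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.2]        # hypothetical logits from the big model
good_student = [3.8, 1.1, 0.3]   # roughly agrees with the teacher
bad_student = [0.2, 1.0, 4.0]    # ranks the answers backwards
print(distill_loss(teacher, good_student))  # small loss
print(distill_loss(teacher, bad_student))   # much larger loss
```

Minimizing this loss is what lets a small student model inherit behavior from a much larger teacher at a fraction of the training cost.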
@@kennethli8 I would say that DeepSeek actually brought quite the technological advancement. It showed that instead of building one big, smart model, you can combine many smaller, specialised models to achieve much higher efficiency. If OpenAI used its resources (data + compute) to recreate that approach, it might build something that actually generates revenue. They got too focused on creating AGI to consider other approaches.
I remember when China said it had a battery that could be charged in 3 minutes, and when China said it had some magical AI chip that could read your mind. Yeah, it was all lies; I'm guessing this is the same. Even Chinese programmers think this is just a lie so the company can do crowdfunding.
This video has a bit of a flaw in not mentioning that, though DeepSeek was trained on whatever chips it could get, it can be run on almost any chips. All of a sudden Jensen isn't the only one selling shovels when people are looking to dig ditches. Also worth pointing out: if the efficiency increases turn out to be real, it means it was a bubble, because if before you needed 100x the compute for some tasks, now you just need 1x. And while R1 may not be the best out there, at 100 times lower operating costs it's dirt cheap by comparison. People always choose the good-enough option. It was a bit of a bubble before because the expected future profits weren't justified, but now it's much worse: not only is the moat not large enough, the barrier to entry got lowered to the point that merely rich individuals can enter, not just the extremely rich.
Great summary. This increased efficiency means doing the same AI tasks with dramatically less energy, slowing down the carbon-fueled race to the bottom (of the planet's health).
For some things, a price drop by a factor 100 leads to a demand increase of more than 100 though. Compare to e.g. books. When medieval monks laboriously copied every book by hand, books were expensive and few. Now you can buy a book for the price of a cup of coffee and build your own personal library that could rival any in existence 1000 years ago, and books are everywhere.
@@beardmonster8051 Yup, but that demand may not be satisfied by Nvidia’s chips alone, so Nvidia’s profit margins could fall, affecting the short-term stock price.
@ Sure, that's possible. But it may also be that demand rises by a factor 200 of which competitors take 70 or something, still leading to profit increase for Nvidia. The equation is changing, and the uncertainty may in itself cause the stock price to waver, but it's really hard to know what all of it will lead to.
Personally I think that the sector will get the competition it so desperately needed. The efficiency upgrade can shift the industry from "buy API calls" to "buy a few cards and rent the model" that every small company can use. The potential is there imo
@@olafsigursons there was not much in the way of competition, no. Mostly because nobody is actually serving a market anywhere near the value of the R&D put into it. These major SOTA models cost something like 10x the revenue they bring in. The models are loss-makers, so nobody has an incentive to gain more users. In fact, everyone wants people to NEVER use their service, unless it means more users/data for R&D and future profits. DeepSeek basically said, "Every single SOTA model can *never* be profitable as of right now." No matter how good their model is, it can never recoup the cost of creating the product when DeepSeek can do 95% of what they can for 5% of the cost. Every single one of these companies now needs to find some way to make their models more than just an LLM. It needs to be something DeepSeek can't replicate for millions of dollars; something you can only do with billions of dollars.
Underrated comment. Price always settles toward value and the most efficient way of delivering it. People were buying ChatGPT subscriptions, but that was basically a monopoly; with time, technology becomes efficient and cheap, and that is what's happening here. Those in the know know that we are nowhere close to achieving AGI any time soon. There was tremendous hype, a built-up fear of what's coming next and of being left out, but clearing the mist: we are going to use sand for what sand is useful for, and gold for what gold can be used for. It's as simple as that. AND your idea/prediction is spot on!
@ maybe? Idk, haha. I'm not the CTO of one of these companies desperately trying to figure out how in the world I am going to recuperate tens of billions of dollars without a product or service to sell. I mean, if they DID figure out how to wash my car and clean my house, that's worth like... $1,200/year? Get 9 million households to buy in for 6 years and you'll make back about $55B of your R&D!
Well, China just demonstrated that yes, you can build an AI at a much lower cost, but that won't stop smaller US or European companies from replicating the same process China did, making the field more competitive over time.
What, by copying their own stuff? The figures don't make sense unless DeepSeek was learning from ChatGPT. But as ChatGPT etc. are learning for free (until the court cases conclude), I'm sure there's some Maxwell-style BS going on, lots of redirected funds...
@@julianshepherd2038 Not true. Just because you make something more efficient doesn't mean we'll use less of it. Compute has been getting twice as efficient every two years for decades, yet we use more of it every year.
@@julianshepherd2038 No. If even mid-size companies can make AI models, and basically anyone with a powerful but still consumer-market GPU can run them, that might very well increase demand for chips. Think of customer service at an internet store that uses AI to automate some answers to questions sent by customers. They could buy an AI made specifically for that by some company that specialises in that application, and then run it from their own office for basically a single, say, $10k purchase. Such things become far more feasible for a lot of companies now, which might very well increase demand for chips. Shit, for all I know they'll end up putting them in cars, so you can get an assistant without an internet connection. And so on.
2:30 2001 ... the market wasn't "wrong about most tech companies", it was wrong about very, very few tech companies. I was there. I was in the middle of it. It was mad!
Above all, this means more competition, and less of the AI power being in American hands only. Considering who's sitting in the White House right now, I'd say China just did us a huge favor.
I think we’re definitely in an AI bubble. AI means LLM right now. All of the “model providers” are losing A LOT of money without having a good plan on how to at least be cashflow positive. They think that “scaling” will keep improving the models. There are definitely some ideas on how the current models can be improved. There’s too much funding going towards machine learning (LLMs are just models) and almost nothing to general AI.
@@krissp8712 Me too; I don’t think anyone is actually taking general AI seriously. What is being taken seriously are deep reasoning models. For example, I have a bachelor’s in Computer Science, but if you sit me down in front of an AMC, AIME, Codeforces, or some competitive programming or competitive math problem, I am going to flop hard. Deep reasoning models could fill the gap of extremely high-skilled labor that might be necessary in certain settings.
@@krissp8712 I think they meant "general AI" as in "AI in general" - meaning technologies other than LLMs, which are hyperfocused on linguistics. That approach is just incapable of providing what OpenAI and others promise ("general AI" or AGI), yet they only invest in LLMs
I'm not trying to be mean, but I was expecting a bit more detailed analysis. This is basically the same surface summary most analysts gave. As others have said, the big thing is that this is an open source AI, and that wasn't mentioned even once in the video, despite the economic implications it could bring. I've also heard that Nvidia makes most of its profit from the higher-end cards, not the "cheap" consumer ones used for gaming. So I was hoping to hear that mentioned in the video: whether it's true, and what the implications are.
Thanks for your analysis. One point is that it seems the goal is not to economically dominate the AI World, rather to make it useful for everybody. Don't only think in monetary terms, please!
This is likely giving little comfort to those whose pay-wall was broken (e.g. the non-open OpenAI). And on the hardware side, if you pay a little bit more attention to the technical aspects, you can see that the implementation took some serious effort in bypassing the CUDA-moat. It could turn out to be a mistake to just give ourselves a pat on the back and say that everything will be fine. After all, the S&P 500 is frothy at a trailing 12 month PE of close to 30, let alone those AI stocks. Of course we can always define everything to be not bubbly.
One thing I hypothesize is that the lack of tangible innovation over the past few decades comes down to two factors. The first is that the low-hanging fruit has already been picked, so improvements are no longer orders of magnitude but often fractional. That said, gradual increases over time should still add up to real change, which is where the second factor comes in. Essentially, any truly innovative sector or idea now requires so much capital, infrastructurally or otherwise, that we are hitting the walls of what capitalism can do. Any technology that would be disruptive is purchased by those who would be disrupted, and either not developed further or developed in a way that is detrimental to us (enshittification). Why build a good game when you can microtransaction and DLC a game endlessly?
Sam Altman said no small company could ever replicate his company's business, hinting at it as a justification for monopolistic practices. DeepSeek just made him eat his words.
Just slightly nitpicking: I wouldn't call it an AI bubble but rather an LLM bubble, and DS definitely burst it somewhat. Also, there is reason to think that DeepSeek was trained using ChatGPT outputs, so possibly some kind of model distillation, which makes training a whole lot cheaper as well. What's also missing is why DS is so powerful: they did not scale via a larger model or more training, but via ingenious engineering. I think the stock market is much happier to scale using bigger models; good engineering is just not as popular.
I find a lot of this hilarious. I'm happy to see companies pushing LLMs and related technologies burn, because they're essentially peddling snake oil. And yet, what makes this even funnier is the selling of Nvidia stock! It reflects an idiotic lack of understanding of where GPU (etc.) manufacturers sit in all of this. Insofar as it isn't snake oil, an LLM like DeepSeek is subject to the Jevons paradox: a cheap, (relatively) effective LLM will trigger broader use, and thus vastly broader use of the technology, meaning it generates demand for what Nvidia provides. AI hype is dumb, but selling off Nvidia is dumber still.
What I think is funniest about all of these discussions: "We first need to understand..." seems like a fundamental place to start. However, 99.99% of the people "influencing" the discussions around AI have no idea what "AI" is, what it does as a technology, and what limitations are inherent to these technologies. People don't even care to differentiate between AI as it has been used, developed and progressed for decades, and generative AI / LLMs. Outside of "ELIZA effect" chatbots and the mass creation of plagiarism-adjacent content slop, there are still no business cases for this technology. And since it is mostly just useful for re-inventing the same basic stuff over and over (an LLM has no intelligence; it doesn't even have memory, nor can it learn: neither a successful answer nor a failed response will improve "the AI LLM" in any way), the real potential lies in replacing middle management and figureheads like Altman himself.
it's hot air. the 'bubble'. AI! AI! AI! AI! Billions are needed, it'll do incredible things like develop human characteristics and take over the world... "does this exist?" well.. no? but it could. one day.
@@olafsigursons Nor does he understand AI. He's just repeating two-year-old factoids anyone can skim off Reddit thread titles, like "an LLM doesn't have memory and can't learn", as if we were still living in the era of 4096-token context windows, and as if test-time compute or Google's Titan models didn't exist. To claim that the software side has stagnated, that we've reached some inherent limitation of the technology, and that we're now just pumping compute at it demonstrates profound ignorance of the field.
There are absolutely business uses for this stuff. Last year I participated in an audit where it was used to identify high-risk transactions from the entire G/L (as opposed to a sample). This year I'm working in tax, and we're using an "AI"-based program to input complex K-1s.
Someone else pointed out other countries are making advancements as well. So it’s not big news what China did, it’s just big news China is one of them doing it
No, everyone blindly followed US companies, thinking they needed giant server facilities. China didn't, because it couldn't, and discovered that you don't need a supercomputer to run high-performance AIs.
Finally some sense. There's so much politics, and so many tech bros trying to pump the models for sweet YT money, that I was almost going insane just looking at the thumbnails. Thanks, econ bro.
You have to differentiate between an industry bubble and a company bubble. Even before DeepSeek, OpenAI's valuation was overstated: discounted cash flows look miserably negative, and OpenAI has never had a profitable year; it's funded by equity raising. Please check OpenAI's financials. The company is a bubble, not the industry.
As far as I know, OpenAI loses money because far more people than expected use the platform, while the rate customers pay is too low to cover the costs of running ChatGPT. Microsoft is patient and gives some money to its little child so it can keep surviving. And I believe this is a good strategy, as AI is the future, and DALL-E 3 and ChatGPT are quite good AI programs that can easily compete with similar services. And yes, companies can lose money for a while before they turn a profit. Nokia lost money for its first 17 years before it started making profits, and Toyota also had a hard time before it started generating profits. Personally, I think it's good that Microsoft has found a new hobby instead of creating one unwanted, crappy operating system after another. Now it's starting to do useful things for humanity instead, and that is good.
This isn't how the term open source is used for models. An open source model just means that you have the weights. It would be ridiculous to give out the initial data for an llm, as only a few companies in the world can run the training for it.
If I understand it correctly, they released both the weights and the method used to train it. But it's not 100% open source to the point that you can reproduce the whole thing because they didn't release the dataset.
@@TL-fe9si in machine learning, you don't release the dataset unless it is extremely small (relative to the size of some datasets which can literally be thousands of hard drives of data). You instead describe the structure of the dataset, like with a sample of data. If you have your own data, then the theory of the mathematical model should result in a unique (though similar) set of weights in the model. This is because the model is supposed to be trained to generalize data, so if they have a sample of the population and you have a sample of the population, the model should generalize well from both samples (given they're of the same population and similar in quality and quantity). Even if you train an AI on the exact same data, exact same math, it can come out as a different set of weights due simply to hardware floating point operations. So an open-source AI model is almost always just the weights and methods.
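The floating-point point in the comment above is easy to demonstrate. A minimal Python illustration (nothing DeepSeek-specific), showing that the order of additions changes the result in the last bits, which is why bit-identical retraining is not guaranteed even on identical data:

```python
import math

# Floating-point addition is not associative: regrouping the same
# three numbers produces two different doubles.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
print(left == right)        # False

# Accumulated rounding error in a naive left-to-right sum, versus
# math.fsum, which computes the exactly rounded result.
xs = [0.1] * 10
print(sum(xs))        # 0.9999999999999999
print(math.fsum(xs))  # 1.0
```

Parallel hardware reduces sums in whatever order is fastest, so two training runs can diverge at the bit level and still converge to equally good (but not identical) weights.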
Consider this: doesn't the Jevons paradox, which you have mentioned in previous videos, apply here? With smaller yet more powerful models like DeepSeek-R1, AI technology becomes more accessible, potentially increasing overall usage and, in turn, driving higher demand on cloud providers and chip manufacturers.
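Whether the Jevons effect dominates comes down to price elasticity of demand. A toy constant-elasticity sketch in Python (the elasticity values are illustrative assumptions, not estimates for the AI market):

```python
# Constant-elasticity demand model: when price falls by factor k,
# quantity demanded scales by k**elasticity.

def demand_multiplier(price_drop_factor: float, elasticity: float) -> float:
    """How much quantity demanded grows when price falls by the factor."""
    return price_drop_factor ** elasticity

def revenue_multiplier(price_drop_factor: float, elasticity: float) -> float:
    """Total spend = price * quantity; it grows iff elasticity > 1."""
    return demand_multiplier(price_drop_factor, elasticity) / price_drop_factor

for eps in (0.5, 1.0, 1.5):
    d = demand_multiplier(100, eps)
    r = revenue_multiplier(100, eps)
    print(f"elasticity {eps}: 100x cheaper -> demand x{d:,.0f}, total spend x{r:,.1f}")
```

So if demand for AI compute is elastic (elasticity above 1), a 100x cost drop grows total chip spending; if it is inelastic, the spending pie shrinks even as usage rises, which is exactly the disagreement running through this thread.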
I don't think you understand. Because people can run these models on consumer computers, Nvidia's whole business model takes a huge hit. They were banking on nuclear-powered data centers that relied entirely on their chips. People aren't going to be buying those any time soon, and even big companies will probably prefer something cheaper if they can.
There's also quite a bit of scepticism about the $5.6 million price tag, since it was a side project of a hedge fund. They could've used models created before, applied them here, and claimed those came at no cost, thereby suppressing the reported costs immensely.
I am no expert so feel free to correct me, but would that mean that now even smaller firms without r&d departments can train and fine tune the model on their own, because it's cheaper and lighter? If so, that would be remarkable.
Well, yes and no. It mainly depends on what you define as a small company. DeepSeek managed to get their training cost down to ~$6 million, which is still a good chunk of money. That is also only the cost of the single successful training run; they spent millions more in compute, plus the time of their AI experts, on fine-tuning and multiple trial runs to get to that final successful training. And it was trained on servers the company already owned, not rented ones. Assuming your small company without an R&D branch is not constantly training new models, it will rent the compute and thus pay more. TL;DR: realistically we are likely talking about costs of $20+ million to train a model of that quality, and to find true economic use for your model it will be significantly more complex to train, i.e. you're still probably looking at $50-100 million minimum for economic use.
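The headline figure debated above is just GPU-hours times an hourly rate. A back-of-the-envelope Python sketch; the ~2.79M H800 GPU-hours at ~$2/hour are the widely reported DeepSeek-V3 numbers, and the 4x overhead multiplier is a crude assumption of my own for trial runs and tuning, in the spirit of the comment above:

```python
# Back-of-the-envelope training-cost arithmetic. The inputs are
# assumptions (reported GPU-hours, assumed rental rate), and the
# headline figure excludes trial runs, staff, and data costs.

def training_cost_musd(gpu_hours_millions: float, usd_per_gpu_hour: float) -> float:
    """Training cost in millions of USD for rented compute."""
    return gpu_hours_millions * usd_per_gpu_hour

headline = training_cost_musd(2.79, 2.0)  # roughly the quoted ~$5.6M
with_overheads = headline * 4             # crude multiplier for failed
                                          # and exploratory runs
print(f"headline: ${headline:.1f}M, realistic floor: ${with_overheads:.1f}M")
```

This makes the disagreement concrete: the single-run figure can be honest while the all-in cost of reaching that run is several times larger.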
How do we know we are in a bubble? My dude, this planet could be called Bubblelon (5, if you wish). It's not like we can't see the nonsense outside our windows.
New models require 10x less compute to run --> next iteration will be 10x larger and use all available compute again. Increasing power efficiency is just another front where the technology is rapidly advancing. I don't see a bubble. I just see the world accelerating steadily towards a new paradigm, where AI/robotics takes over 99% of the job market.
One challenge for Nvidia's dominance is if their closed-source CUDA stops being the de facto language for GPU training. DeepSeek seems to have used lower-level instructions to bypass it.
I personally think there is a bubble, but DeepSeek, technology-wise, is a non-issue as of now. The performance data and financial statements are all based on claims by DeepSeek, not objective analysis. They are not stress-tested, it's not known whether their method uses countermeasures against entropic collapse, there is no industrial application yet, and they used Western graphics cards, i.e. they can be cut off from development technology. Correct me if I'm wrong, but I have not seen an objective analysis of this model.
They don't have to use Western graphics cards. Just because they did doesn't mean it's required. They even hand-wrote some software to bypass Nvidia's software layer for interacting with the hardware. Not sure what you mean by "no industrial application"; it'd be the same as any other LLM, just at something like pennies on the dollar. Entropic collapse is irrelevant to the results of an already-trained model. Loads of people have already tested it locally; seek them out for objective analysis. NONE of the LLMs are "stress-tested" for anything mission-critical. Anyone using an LLM for anything important is already doing something very dumb.
@@connorskudlarek8598 A model is trained continuously, and I doubt you know what entropic collapse is, they are incapable of producing a relevant amount of 7nm processors, asking logical riddles is not stress testing, and industrial application is the entire point of LLMs. Everything except the last sentence is wrong, it's impressive.
I guess you'll also say there is no AI bubble, the US consumer is alright, there is no cost-of-living problem, and everything is going well?
The P/E for Nvidia is around 48.5 right now, compared to around 30 for the S&P 500. Just the other year Nvidia's P/E was over 200, so most of those expected earnings actually materialised. Today the valuation doesn't seem to rely on vastly increased earnings anymore, just a bit more than average.
Do bubbles even exist? And if so, what are the quantifiable factors involved in identifying a bubble? Such that one can backtest using those factors and independently identify "bubbles" and results of those bubbles through market history. Questions adapted from the common ones Eugene Fama asks when people suggest bubbles exist.
There are plenty of applications for ai in the future. The question is how much data processing we will need. Models will get less expensive obviously. And would for example automated factories want to use servers or have their own ones at the factory to allow faster exchange of information.
For the last 40 years or so the world has been developing more powerful computer chips and CPUs and GPUs but suddenly it seems we don't need them anymore?
Like, bruh, dis whole thing got me thinkin what else needs poppin ? Can we pop my neighbor’s bass heavy playlist at 3 AM? Can we pop the bubble wrap I’ve been savin for a rainy day that’s never comin? And don’t even get me started on cherries, fam. Like, why we poppin A.I. when we could be poppin cherries into our mouths like we’re in some 90s rom com montage?
Great point that if the new reasoning models can run on consumer-grade computers, that will mean a bigger market for NVidia than if only supplying for large server farms. It would allow an EXPLOSION of the use of AI in many contexts where the required computer power is now prohibitive.
Yes, it's just a short-term dip. My portfolio is green again, and Nvidia is back to a $3.1 trillion market cap in just 5 days 🗿 In Lockheed Martin and the USA we trust 🗿
The Chinese AI is far cheaper because they didn't do any initial innovation, they just built on what was already on the market. Deepseek is good but it's not original, so I don't understand the panic while other smaller nation players are yet to release theirs.
1. We are definitely in a bubble with AI and the mag7 - a few stocks will be winners and the rest will likely go bankrupt at some point. 2. No the Chinese LLM's didn't pop the bubble and it's hard to know if they will or not
Since it’s open source, we’ll see many other companies building, contributing and improving it to the point where it makes the mag7 and openAI less relevant or at the very least bring in more competition to drive down cost.
Maybe thinking in terms of national economy gives a better idea of identifying bubbles. Does it actually make sense for AI providers to capture a significant % of GDP to deliver software services? At some point real people spending and earning money should enter the equation!
The fall in Nvidia makes no sense. Whatever you can do with cheaper chips, you will always be able to do more and faster with better chips. Do people really think that if the DeepSeek developers had had the cash for more compute that they'd have just said "no thanks, we're good with what we have"???
Yes and no. While you are correct that there will always be a market for Nvidia, two major things are still relevant to the stock drop: 1) The valuation was based on the assumption of pre-DeepSeek model needs, which meant fewer, larger customers for the hardware. Exchanging a few large customers for many smaller ones nearly always leads to commodification, and a drop in the price of the products you sell them. Nvidia will still make money; they will just steer back towards normal profit margins, like in the past. 2) The big one for Nvidia specifically is that DeepSeek's models don't require CUDA. You can toss them onto Intel, AMD, Qualcomm, or whatever other hardware, and they will run at about the same performance. If that spreads (and nearly all AI companies are very interested in it, and had put in work towards it even before DeepSeek arrived), then Nvidia will have multiple direct competitors, which will crater profitability. It will still be profitable, just not at 200-800% profit margins.
The EU is doing a very sh*tty job in my opinion, and it's a shame that it is anti-innovation, because it is against member countries doing infant-industry protection. So we will never be able to compete in this field, and the USA will dominate it by walkover, as usual with all high-tech industries nowadays, because of the EU. And yes, China and even Russia are quite good at AI, which makes all this even more embarrassing. Russia has good image-search programs like Yandex and PimEyes, and the little I have seen of AI image-creation programs from China does a much better job than the bad AI programs we got in the West. However, I still believe that Midjourney, Leonardo AI, Flux and DALL-E 3 are superior. Janus-Pro was not very good when I tried it a few days ago, so I think China has a little more work to do there.
It was never in a bubble, and China doesn't have the capacity to burst one anyway. But when I buy a stock in the future, the bubble will burst and it will come down, because the economy would rather burst the bubble than let me make a profit.
I really don't see why the US AI companies wouldn't just take the openly available code for DeepSeek, improve upon it, and release their own even better version. Since they have such huge amounts of resources, they should be able to hypercharge such a model into becoming far better than the one that Chinese company released.
OpenAI could, but the point of this video is that it has been shown that it is no longer shielded from competition due to capital requirements since DeepSeek was able to train their model for MUCH less, so OpenAI will always be seen as less valuable from this point on.
Economic growth in the US over the last 14 years was linked to tech; without that tech evolution, GDP would have grown only about 0.5%. But that wouldn't be as bad as people imagine, because even with the last 14 years of growth, life is much harder now: the average American works 2 to 3 jobs to pay the bills, so only the tech bros benefited.
I also don't think you could call this a bubble. Just because something drops significantly in value doesn't mean that the market was irrational beforehand. Fact is, nobody saw DeepSeek coming and it took everybody by surprise, even experts.
I'm pretty sure these captions were AI-generated, and they suck; close enough, but no dice. I think generative AI has been overhyped since the beginning; even calling them reasoning models seems like an exaggeration. As long as it doesn't actually know what you're asking it, it isn't that useful, and the hallucinations won't stop as long as it's probabilistic. At least for the mass market; in medicine it's doing great.
Good analysis! It might be an overreaction. DeepSeek did not share the whole story: only post-training on (illegally) distilled data from another company (OpenAI) resulted in the model. As a user I am grateful for their really good models, but yeah, not super good for OpenAI. Still, AI as a whole will continue to grow, with use cases emerging (exponentially so) as AI gets cheaper and better; it's a known paradox that when a resource gets cheaper, it ends up being used for more.
I agree with most of it except for Nvidia, and I highly doubt Nvidia will make a comeback. Even before DeepSeek, I regarded Nvidia as overvalued. 1. Nvidia GPUs are far from the only way to train and run AI: Google has been using its in-house TPU chips since 2015, and Amazon and Microsoft have also been developing their own in-house AI chips in partnership with Intel. 2. Nvidia's gross margin is nearly 50%; such a high premium only further incentivises other companies to join the race. 3. One of the most important aspects of Nvidia GPUs is the CUDA environment, which provides a solution without AI developers needing to do hardware programming themselves; it is supposed to be efficient and effective. While DeepSeek did use Nvidia GPUs, they did not use the CUDA environment. They showed that CUDA is not as efficient or effective as we previously assumed, which suggests to companies buying Nvidia GPUs that the price premium isn't worth it, and that other brands of GPU (AMD, Intel) could also be used to train and run AI.
The success of deepseek's r1 model has only accelerated the race to AGI and ASI. In that future, compute (and energy) is everything. This is not a bubble pop, this is your best buying opportunity IMO
Do you have an example of an AGI or ASI (which is the replacement word for AI, which has become too marketised to be clear in its meaning) in existence? Assuming this theoretical technology does become a reality, what are the applications. If I want to build an AI pilot, I'm using different hardware and training, perhaps in the case of this AGI I'm using hardware that doesn't exist yet -- because AGI's don't exist and what we have created with current hardware and software is not an AGI. // on that pilot, do I need a pilot who can write code, chat, and do poetry? Applications. I agree with you hardware demand will increase.
Popping the financial bubble is good, but it's only half the problem. We're still stuck with the fact that calling it "artificial Intelligence" is 99% BS -- a return to a childish falsehood of the 1940s. An intelligence has to have a self and be aware of that self in some meaningful way. Obviously very, very little of what is paraded as "AI" meets this simple and obvious criterion. Cut it out, OK, fellas? What you're talking about is very advanced, often impressive, and sometimes even useful, computation. There isn't an ounce of intelligence in a boatload of it. The word you're looking for might be "advanced computation" -- assuming, that is, that you haven't figured out the name for what you're talking about and therefore have to fall back on a witless generality. You've been warned: people who use the phrase "artificial intelligence" have proclaimed themselves fakes.
Can we not be so fast in rushing to conclusions? I am deeply skeptical about anything that comes from China but even if it were legit I don't see how this can't be replicated by American companies and be a step forward to the progress of AI technologies.
I think you are missing the point. You basically have an AI model that performs as well or better for **free** because it is open source. As a result, advanced AI is not controlled by a few big US tech companies or oligarchies.
@@wyim3677 Yes, but a distilled model still needs the base model in order to train it, so that is still necessary. Yes, you have something more optimized and accessible to all, and thus more usable, but that doesn't necessarily mean it's the cutting edge where the progress happens. It wouldn't have existed without OpenAI's o1 model. The problem is not the optimization; the problem is getting more breakthroughs and advancements in the first place. There is no point if that stops — that is what the investments are for. Even if you can make a drug for pennies, that doesn't change the fact that researching that drug can cost a lot.
@@adrixshadow I think to most people investments are about returns, not technological development. People were betting on monopolies; now they're not. Edit: Actually, maybe it's a bit bigger than that. Some of these companies have a history of entering markets and operating at a huge loss to drive out competition, and the bet now is that that's not an option here.
@@adrixshadow Every current allegation about their training process could be true, yet that would still leave the significant innovation in inference efficiency: cost per million tokens an order of magnitude lower while achieving comparable complexity and performance. While distillation seems likely, I'm skeptical of people claiming secret Nvidia chips smuggled via Singapore, since Singapore has actual uses for them: it's the main data-center hub of its region, second in Asia by data-center power consumption.
@@philroo1 Not quite. What is really going on is a battle between Microsoft and Google over the future of Google Search and their advertising empire. Every other investor is just a useful idiot funding research for Google's or Microsoft's benefit, or at best gets a paycheck from them if they happen to be useful.
Software algorithms have a very low ceiling. (To understand this, you should take computer science courses like Data Structures and Algorithm Analysis.) Why is AI going through its 3rd wave of hype in the last 70 years? It's because of advancements in "hardware." DeepSeek used a method called knowledge distillation: taking OpenAI's model (the bigger, teacher model) and transferring its knowledge into their own (the smaller, student model). Not much technological advancement from DeepSeek. Also, AI is still impractical and premature in most industries, if not all; it is still in its 2nd inning. There will be many, many more breakthroughs in AI as hardware advances. Open source has been around for more than 40 years, and much of the more advanced (unscalable) semiconductor technology in China is taken from open source. There are a lot of problems with open source, for example security, continuous support, etc.
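For readers unfamiliar with the term, knowledge distillation can be sketched in a few lines. This is a toy illustration, not DeepSeek's actual training code: the student is trained to match the teacher's softened output distribution via a KL-divergence loss.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's: minimizing this trains the student to imitate the teacher."""
    p = softmax(teacher_logits, temperature)   # teacher = soft targets
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss;
# any disagreement makes the loss positive.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0]))  # positive
```

In practice this loss is computed over a neural network's outputs on a large corpus; the temperature exposes the teacher's "dark knowledge" about which wrong answers are nearly right.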
Who told you that? The biggest advancements from DeepSeek are related to efficiency in training: they tuned their training pipeline so that it never has to wait for new data to arrive, something most training has been bottlenecked by. Btw, this is in line with Nvidia's recent statements that most gains would come from software.
Please.. Just keep to what you're knowledgeable about which is economics. Not saying you are entirely wrong here, you're on a slippery youtuber/influencer slope here.
You should make a video on how China is competitive with Amurica in most top industries despite being a semi-socialist economy that jails billionaires, has zero immigration, and executes white-collar criminals. Basically doing the exact opposite of the neoliberal consensus.
@@Merle1987 Go on RedNote and take a look at the cost of living there, and ask around about wages. You will see that most "unemployed" youth were not really unemployed; they are just from financially stable families and unwilling to accept a job that pays below their expectations.
DeepSeek and Llama are not open source. Only the weights are, and that does not allow you to reproduce the model, so it is not fully transparent. The weights alone are pretty worthless for understanding how the model was obtained. The secret is the data, and that is something Facebook and co. will surely never release as truly open source — and that is not open source. The term "open source" in the context of LLMs like DeepSeek or Llama is just BS!
Get access to global coverage at an exclusive 20% discount at economist.com/moneymacro
Further reading from the Economist:
1. www.economist.com/business/2025/01/20/openais-latest-model-will-change-the-economics-of-software
2. www.economist.com/business/2025/01/27/deepseek-sends-a-shockwave-through-markets
3. www.economist.com/briefing/2025/01/23/chinas-ai-industry-has-almost-caught-up-with-americas
Something to add: people cannot run "Open"AI's models (and those of other closed-source organizations) on their own because the weights of these models are not published, not because of a high hardware barrier.
In contrast, DeepSeek (and many other open-"source" organizations) release their models' weights, so people can do anything with them, including running the model on high-end PCs, or even optimizing inference further to run on modest PCs.
I mean, those models with >200B parameters require more memory than is reasonable, but OK.
@@QuantumConundrum I use 156 GB of RAM with every job I send off to HPC at work, in a company where hundreds of people are doing the same several times a day.
@@QuantumConundrum Just buy the $5,000 computer server
@@QuantumConundrum That's common for everything, even for my school's OS and computer architecture courses. You just use disk storage as virtual memory. Much slower, but it still works.
@@AlfarrisiMuammar You mean $50,000?
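The memory question in this thread comes down to simple arithmetic: weights alone need (parameter count) × (bytes per parameter). A back-of-envelope sketch, using a hypothetical 200B-parameter model as in the comment above (real deployments also need room for activations and the KV cache on top):

```python
def model_memory_gb(params_billions, bytes_per_param):
    """GB needed just to hold the weights in memory.
    Ignores activations and KV cache, which add more on top."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# The same 200B-parameter model at different precisions:
for label, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: ~{model_memory_gb(200, nbytes):.0f} GB")
# fp16 needs ~400 GB; 4-bit quantization brings it to ~100 GB,
# which is why aggressive quantization matters for running
# open-weight models on workstations instead of data centers.
```

This is why "just use disk as virtual memory" works but is painfully slow: every token generated touches essentially all the weights.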
DeepSeek is open source. This seems like an important aspect of how it will affect future development of the industry.
It's actually a huge win for us Linux fanboys. Open source software is superior in many ways. Remember the CrowdStrike outage (a faulty update, not a hack) that took down much of the world's Windows machines? Russia's airports were reportedly spared because they moved to Linux after Microsoft left in 2022. There are still vulnerabilities in the FOSS space, but programmers can react much more quickly and close any backdoors that get snuck in.
It's open weights: still closed source, but more like shipping an executable than an API call.
closed source has gotten even more closed now than it used to
@@kennethli8 I would say that DeepSeek actually brings quite the technological advancement. It showed that instead of one big, smart model you can use many smaller specialized models to achieve much higher efficiency.
If OpenAI uses their resources (data + compute) to recreate that approach they might achieve something that is actually generating revenue. They got too focused on creating AGI to actually consider other approaches
I remember when China said they had a battery that could be charged in 3 mins. And when China said they had some magical AI chip that could read your mind. Yeah it was all lies, guessing this is the same. Even Chinese programmers think this is just a lie so the company can do crowd funding.
This video has a bit of a flaw in not mentioning that though DeepSeek was trained on particular chips, it can be run on any chips. All of a sudden Jensen's not the only one selling shovels when people are looking to dig ditches.
Also worth pointing out that if the efficiency increases turn out to be real, it means it was a bubble. Because if before you needed 100x the compute for some task, now you just need 1x. And while R1 may not be the best out there, at roughly 100 times lower operating cost it's dirt cheap. People always choose the good-enough option.
It was a bit of a bubble before because the future expectations of profit were not justified but now it's much worse because not only is the moat not large enough, the barrier to entry got lowered to the point that rich individuals can enter, not just the extremely rich.
Great summary. This increased efficiency means doing the same AI tasks with dramatically less energy, slowing down the carbon-fueled race to the bottom (of the planet's health).
I think it will just spawn another race of training AI, now we just get more training requirements. And everyone needs GPU power
For some things, a price drop by a factor 100 leads to a demand increase of more than 100 though. Compare to e.g. books. When medieval monks laboriously copied every book by hand, books were expensive and few. Now you can buy a book for the price of a cup of coffee and build your own personal library that could rival any in existence 1000 years ago, and books are everywhere.
@@beardmonster8051 Yup, but that demand may not be satisfied by Nvidia's chips alone, so Nvidia's profit margins could fall, affecting the short-term stock price.
@ Sure, that's possible. But it may also be that demand rises by a factor 200 of which competitors take 70 or something, still leading to profit increase for Nvidia. The equation is changing, and the uncertainty may in itself cause the stock price to waver, but it's really hard to know what all of it will lead to.
Personally I think that the sector will get the competition it so desperately needed. The efficiency upgrade can shift the industry from "buy API calls" to "buy a few cards and rent the model" that every small company can use. The potential is there imo
There is already a lot of competition: ChatGPT, Gemini, Claude, Llama.
@@olafsigursons there was not much in the way of competition, no. Mostly because nobody is actually satisfying a market anywhere near the value of the R&D put into it.
These major SOTA models cost something like 10x the revenue they bring in. Their models actually run at a loss, so nobody has an incentive to gain more users. In fact, everyone would prefer people NOT use their service, unless it means more users/data for R&D and future profits.
DeepSeek basically said, "Every single SOTA can *never* be profitable as of right now." No matter how good their model is, it can never recuperate the cost to create the product when DeepSeek can do 95% of what they can for 5% the cost.
Every single one of these companies now needs to identify some way to make their models more than just an LLM. It needs to be something that DeepSeek can't replicate for millions of dollars — something you can only do with billions of dollars.
@@connorskudlarek8598 and what will it do, wash your car and clean your house?
Underrated comment. Pricing always settles on value and the most efficient way something can be made. People were buying ChatGPT subscriptions, but that was essentially a monopoly; with time, technology becomes efficient and cheap, and that is what's happening here.
People who know, know that we are nowhere close to achieving AGI any time soon. There was tremendous hype and built-up fear of what's coming next and of being left out, but once the mist clears, we are going to use sand for what sand is usable for, and gold for what gold can be used for.
It's as simple as that. AND your idea/prediction is spot on!
@ maybe? Idk, haha.
I'm not the CTO of one of these companies desperately trying to figure out how in the world I am going to recuperate tens of billions of dollars without a product or service to sell.
I mean, if they DID figure out how to wash my car and clean my house, that's worth like... $1,200/year? Get 9 million households to buy in for 6 years and you'll make back about $65B of your R&D!
I would like to see the AI bubble pop a bit more...
There is no bubble. All this bubble talk has been going on since at least 2018.
Inference on Huawei GPUs
@secretname4190did you watch the video?
@ Yes. I disagree with it.
@secretname4190 and what is your reasoning?
I am not an AI and this video was great! Good Job keep going {video creator}!
That's exactly what an AI would say.
Well, China just demonstrated that yes, they can make an AI more cheaply, but it wouldn't stop smaller US or European companies from replicating the same process, thus making the field more competitive over time.
What, by copying their own stuff? The figures don't make sense unless DeepSeek was learning from ChatGPT. But as ChatGPT etc. are learning for free (until the court cases), I'm sure there's some Maxwell BS going on, lots of redirected funds....
Still means all that computing power and electricity won't be demanded.
@@julianshepherd2038 Not true. Just because you make something more efficient doesn't mean we won't use more. Computing has been getting twice as efficient every 2 years for decades, yet we use more of it every year.
@@julianshepherd2038 No. If even mid-size companies can make AI models, and basically anyone with a powerful but still consumer-market GPU can run them, that might very well increase demand for chips.
Think of customer service at an internet-based store that uses AI to automate some answers to questions sent by customers. They could buy an AI made specifically for that by some company that specialises in that application, and then run it from their own office for basically a single, say, $10k purchase.
Such things become way more feasible for a lot of companies now, which might very well increase the demand for chips.
Shit, for all I know they'll end up having them in cars, so you can get an assistant without internet connection. And so on.
However, if that happened it wouldn't be useful to inflate financial bubbles. That's what capitalism currently lives on.
2:30
2001 ... the market wasn't "wrong about most tech companies",
it was wrong about very, very few tech companies.
I was there. I was in the middle of it. It was mad!
Usually stock traders are technically and scientifically illiterate, and they invest in hype without understanding.
Above all this means more competition, and less of all the AI power in only American hands. Considering who's sitting in the White House right now, I would say that China just did us a huge favor.
Considering who has run China for years, you are pretty stupid. Trump is a liberal compared to Winnie.
I think we’re definitely in an AI bubble. AI means LLM right now. All of the “model providers” are losing A LOT of money without having a good plan on how to at least be cashflow positive. They think that “scaling” will keep improving the models. There are definitely some ideas on how the current models can be improved. There’s too much funding going towards machine learning (LLMs are just models) and almost nothing to general AI.
You had me in the first half but how are you defining general AI? Are you talking about superintelligences?
@@krissp8712 Me too; I don't think anyone is actually taking general AI seriously. What is being taken seriously are deep reasoning models. For example, I have a bachelor's in Computer Science, but if you sit me down in front of an AMC, AIME, Codeforces, or some competitive programming or math contest, I am going to flop hard. Deep reasoning models could fill the gaping hole of extremely high-skilled labor that might be necessary in certain settings.
@@krissp8712 I think they meant "general AI" as in "AI in general" — meaning technologies other than LLMs, which are hyperfocused on linguistics. That approach is just incapable of providing what OpenAI and others promise ("general AI" or AGI), yet they only invest in LLMs.
Bro, you have no idea how long I been waiting for this video!!, thanks bro, always appreciated your knowledge.
I'm not trying to be mean, but I was expecting a bit more of a detailed analysis. This is basically the same surface summary most analysts gave. As others have said, the big thing is that this is an open source AI, and that wasn't mentioned even once in the video with economic implications that could bring. I've also heard that Nvidia makes most of their profits from the higher end cards, not the "cheap" consumer ones used for gaming. So I was hoping to hear that mentioned in the video, if it was true and what were the implications.
China came out with something faster, cheaper, and more advanced than the American version 😮 I'm SoooOoOO surprised 😂
Yeah, that’s why China has 1/5 the gdp per capita of the US..wait, if it’s always doing better than the US, why are they so behind?
Nice, thank you for doing this,
Simple and yet comprehensive explanation.
Thanks for your analysis. One point is that it seems the goal is not to economically dominate the AI World, rather to make it useful for everybody. Don't only think in monetary terms, please!
This is likely giving little comfort to those whose pay-wall was broken (e.g. the non-open OpenAI). And on the hardware side, if you pay a little bit more attention to the technical aspects, you can see that the implementation took some serious effort in bypassing the CUDA-moat. It could turn out to be a mistake to just give ourselves a pat on the back and say that everything will be fine. After all, the S&P 500 is frothy at a trailing 12 month PE of close to 30, let alone those AI stocks. Of course we can always define everything to be not bubbly.
insightful and easy to understand.
fantastic as always. just posting to boost you in the algo
One thing I hypothesize is that the lack of tangible innovation over the past few decades is due to two factors. First, the low-hanging fruit has already been picked, so improvements are no longer orders of magnitude but often fractional. That said, gradual increases over time should still amount to change, which is where the second factor comes in: any truly innovative sector or idea now requires so much capital, infrastructurally or otherwise, that we are coming up against the walls of what capitalism can do. Any technology that would be disruptive is purchased by those who would be disrupted, and is either not developed further or developed in a way that is detrimental to us (enshittification). Why build a good game when you can microtransaction and DLC a game endlessly?
Amazing analysis!
Sam Altman said no small company could ever replicate his company's business, hinting at it as a justification for monopolistic practices. DeepSeek just made him eat his words.
Just being slightly nitpicking:
I wouldn't call it an AI bubble, but rather an LLM bubble, and DS definitely burst it somewhat.
Also, there is reason to think that DeepSeek was trained using ChatGPT output, so possibly some kind of model distillation, which makes the training a whole lot cheaper as well.
What's also missing is why DS is so powerful: they did not scale via a larger model or more training, but by ingenious engineering. I think the stock market is much happier to scale using bigger models, but good engineering is just not as popular.
I find a lot of this hilarious. I'm happy to see companies pushing LLMs and related technologies burn, because they're essentially peddling snake oil. And yet, what makes this even funnier is the selling of Nvidia stock! It's an idiotic lack of understanding of where GPU (etc.) manufacturers sit in all of this. Insofar as it isn't snake oil, LLMs like DeepSeek are subject to the Jevons paradox: a cheap, (relatively) effective LLM will trigger vastly broader use of the technology, meaning it generates demand for what Nvidia provides. AI hype is dumb, but selling off Nvidia is dumber still.
Deepseek is OPEN SOURCE.
You mentioned it briefly, but it would be cool if you made a video about the Jevons paradox.
The Jevons paradox doesn't say anything about who benefits from the increased demand.
The Jevons paradox also doesn't mention timelines.
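The back-and-forth above can be made concrete with a textbook constant-elasticity demand model. This is a toy sketch, not a forecast: the elasticity values are illustrative assumptions, and whether AI demand is actually elastic is exactly the open question.

```python
def spend_multiplier(price_factor, elasticity):
    """Constant-elasticity demand: quantity scales as price**(-elasticity),
    so total spend scales as price_factor * price_factor**(-elasticity)."""
    return price_factor * price_factor ** (-elasticity)

# Suppose inference cost drops 100x (price_factor = 0.01):
for e in (0.5, 1.0, 1.5):
    print(f"elasticity {e}: total spend x{spend_multiplier(0.01, e):g}")
# elasticity < 1: total spend shrinks (bearish for chip vendors)
# elasticity = 1: total spend is unchanged
# elasticity > 1: total spend grows despite the price cut -- the
#                 Jevons-paradox scenario commenters are debating
```

As the replies note, even in the Jevons scenario the model says nothing about who captures the extra demand, or how quickly.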
The very first thing I'm asking AGI for is a way to completely avoid seeing ads, forever.
What I think is funniest about all of these discussions: "We first need to understand..." seems like a fundamental question to ask. However, 99.99% of the people "influencing" the discussions around AI have no idea what "AI" is, what it does as a technology, and what kinds of limitations are inherent to it. People don't even care to differentiate between AI as it has been used, developed, and progressed for decades, and generative AI / LLMs. Outside of "ELIZA effect" chatbots and mass creation of plagiarism-adjacent content slop, there are still no business cases for this technology. And since it is mostly just useful for re-inventing the same basic stuff over and over (an LLM has no intelligence — it doesn't even have memory, nor can it learn: neither a successful answer nor a failed response will improve the LLM in any way), the real potential lies in replacing middle management and figureheads like Altman himself.
You're funny: you say people don't understand AI, then you go along demonstrating you don't understand business.
it's hot air. the 'bubble'. AI! AI! AI! AI! Billions are needed, it'll do incredible things like develop human characteristics and take over the world...
"does this exist?"
well.. no? but it could. one day.
@@olafsigursons Nor does he understand AI. Just repeating two-year-old factoids anyone can skim off Reddit thread titles, like the claim that LLMs have no memory and can't learn — as if we were still living in the era of 4096-token context windows, and test-time compute or Google's Titans models didn't exist. To claim that the software side has stagnated, that we have reached some inherent limitation of the technology and are now just pumping compute at it, demonstrates profound ignorance of the field.
Yeah most of those people they studied business are not engineers... myself included obviously.
There are absolutely business uses for this stuff. Last year I participated in an audit where it was used to identify high-risk transactions from the entire G/L (as opposed to a sample). This year I'm working in tax, and we're using an "AI"-based program to input complex K-1s.
Thanks
AI companies have been disrupting each other every few months. What happened here is that people only just noticed that China is also doing things.
Someone else pointed out that other countries are making advancements as well. So it's not big news what China did; it's just big news that China is one of those doing it.
No, everyone blindly followed US companies, thinking they needed giant server facilities. China didn't, because it couldn't, and discovered that you don't need a supercomputer to run high-performance AIs.
Finally some sense
It's so much politics and tech bros trying to pump the models for sweet YT money that I was almost going insane just by looking at the thumbnails.
thks econ bro
Thank you for the video, commenting for the algorithm so more people can find you.
So DeepSeek is basically a custom ROM of LLM models.
You have to differentiate between an industry bubble and a company bubble.
Even before DeepSeek, the valuation of OpenAI was overstated: discounted cash flows look miserably negative, and OpenAI has never had a profitable year. It's funded by equity raising.
Please check OpenAI's financials.
The company is a bubble, not the industry.
As far as I know, OpenAI lost money because many more people than expected use the platform, while the rate customers pay is too low to cover the costs of running ChatGPT. Microsoft is patient and gives some money to its little child so it can keep surviving. And I believe this is a good strategy, as AI is the future, and DALL-E 3 and ChatGPT are quite good AI programs that can easily compete with similar services.
And yes, companies can temporarily lose money before they turn a profit. Nokia lost money for the first 17 years of its existence before it started to make profits, and Toyota also had a hard time before it began generating profits.
Personally, I think it's good that Microsoft has found a new hobby instead of creating one unwanted, crappy operating system after another.
Now it is starting to do useful things for humanity instead, and that is good.
Awesome. Minor nitpick: DeepSeek is not open source; for that we would need the initial data to reproduce the model. It is open weight, though.
This isn't how the term open source is used for models. An open source model just means that you have the weights. It would be ridiculous to give out the initial data for an llm, as only a few companies in the world can run the training for it.
@secretname4190 No. It's not how it's used, but it's how it should be.
If I understand it correctly, they released both the weights and the method used to train it. But it's not 100% open source to the point that you can reproduce the whole thing because they didn't release the dataset.
@ The dataset is terabytes of natural language data. They can't just put a zip file on their website.
@@TL-fe9si In machine learning, you don't release the dataset unless it is extremely small (relative to some datasets, which can literally fill thousands of hard drives).
You instead describe the structure of the dataset, perhaps with a sample of data. If you have your own data, the theory of the mathematical model should yield a unique (though similar) set of weights, because the model is supposed to be trained to generalize: if they have a sample of the population and you have a sample of the population, the model should generalize well from both (given they're of the same population and similar in quality and quantity).
Even if you train an AI on the exact same data, exact same math, it can come out as a different set of weights due simply to hardware floating point operations.
So an open-source AI model is almost always just the weights and methods.
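The point above about hardware floating-point operations producing different weights comes down to a basic IEEE 754 fact: floating-point addition is not associative, so summing the same gradients in a different order (as parallel hardware nondeterministically does) can round to different results. A minimal demonstration:

```python
# Floating-point addition is not associative: the same three numbers
# summed in a different order round differently. This is why two
# training runs on identical data and identical math can still drift
# apart -- GPUs accumulate gradients in nondeterministic order.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = (0.3 + 0.2) + 0.1   # 0.6
print(a == b)           # False
```

A one-bit difference in one weight update then compounds over billions of updates, which is why even a full recipe plus data would not reproduce the published weights bit-for-bit.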
Consider this: Doesn't Jevons Paradox apply here which you have mentioned in previous videos? With smaller yet more powerful models-like DeepSeek-R1-AI technology becomes more accessible, potentially increasing overall usage and, in turn, driving higher demand on cloud providers and chip manufacturers.
I don't think you understand. Because people can run these models on consumer computers, Nvidia's whole business model takes a huge hit. They were banking on nuclear-powered data centers that relied entirely on their chips. People aren't going to go out and buy those anytime soon, and even big companies will probably prefer something cheaper if they can.
There's also quite a bit of scepticism about the $5.6 million price tag, since it was a side project of a hedge fund. They could've used models created before, implemented them here, and said those didn't come at any cost, thereby suppressing the reported costs immensely.
Just imagine how many of these companies exist
We need something on U.S Canada trade war.
I am no expert so feel free to correct me, but would that mean that now even smaller firms without r&d departments can train and fine tune the model on their own, because it's cheaper and lighter? If so, that would be remarkable.
Well, yes and no; it mainly depends on what you define as a small company. DeepSeek managed to get their training cost down to $6 million, which is still a good chunk of money. That is also only the cost of the single successful training run; they spent millions more in compute, plus the time of their AI experts, to fine-tune and do multiple trial runs to get to that successful training. And it was trained on servers the company already owned, not rented ones. Assuming your small company without an R&D branch is not constantly training new models, they will rent the compute and thus pay more.
TL;DR: realistically we are likely talking about $20+ million to train a model of that quality, and to find true economic use your model will still be significantly more complex to train, i.e. you're probably looking at $50-100 million minimum for economic use.
How do we know we are in a bubble? My dude, this planet could be called Bubblelon (5 if you wish). It's not like we can't see the nonsense outside our windows.
New models require 10x less compute to run --> next iteration will be 10x larger and use all available compute again. Increasing power efficiency is just another front where the technology is rapidly advancing.
I don't see a bubble. I just see the world accelerating steadily towards a new paradigm, where AI/robotics takes over 99% of the job market.
One challenge to Nvidia's dominance is if their closed-source CUDA stops being the de facto language for GPU training. DeepSeek seems to have used lower-level instructions to bypass it.
I personally think there is a bubble, but DeepSeek, technology-wise, is a non-issue as of now. The performance data and financial statements are all based on DeepSeek's own claims, not objective analysis. They are not stress-tested, it's not known whether their method uses countermeasures against entropic collapse, there is no industrial application yet, and they used Western graphics cards, i.e. they can be cut off from that development technology.
Correct me if I'm wrong, but I have not seen an objective analysis of this model.
They don't have to use western graphics cards. Just be cause they did, doesn't mean it is required. They even hand wrote some software to bypass the need of using Nvidia's software for interacting with the hardware.
Not sure what you mean by no industrial application. It'd be the same as any other LLM, just at something like pennies on the dollar.
Entropic collapse is irrelevant to the results of the already trained model.
Loads of people have already tested it locally. Seek them out for objective analysis.
NONE of the LLMs are "stress tested" for anything mission critical. Anyone using an LLM in anything important is already doing something very dumb.
@@connorskudlarek8598 A model is trained continuously, and I doubt you know what entropic collapse is; they are incapable of producing a relevant volume of 7nm processors; asking logical riddles is not stress testing; and industrial application is the entire point of LLMs. Everything except your last sentence is wrong; that's impressive.
QQQ is still around 520, NVDA is off a bit from mid year but it's not cratering. Where is this massive selloff I keep hearing about?
Did China just poop on US AI? Yeah.
I guess you will also say there is no AI bubble, the US consumer is all right, there is no cost-of-living problem, and everything is going well?
P/E for Nvidia is around 48.5 right now, compared to around 30 for the S&P 500. Just the other year Nvidia's P/E was over 200, so most of those expected earnings actually materialized. Today the valuation doesn't seem to rely on vastly increased earnings anymore, just a bit more than average.
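For readers unfamiliar with the multiple being discussed: P/E is just share price divided by earnings per share. A toy calculation using the figures quoted above (the commenter's numbers, not live market data):

```python
def pe_ratio(price_per_share: float, earnings_per_share: float) -> float:
    """Price-to-earnings multiple: what the market pays per $1 of earnings."""
    return price_per_share / earnings_per_share

# Commenter's figures, not live data.
nvda_pe = 48.5
sp500_pe = 30.0
premium_vs_index = nvda_pe / sp500_pe
print(f"Nvidia trades at {premium_vs_index:.2f}x the index multiple")
# A P/E of 200 implied the market expected roughly 4x today's earnings level.
```

The takeaway matches the comment: a ~1.6x premium over the index is a growth bet, but a far smaller one than the old 200 multiple implied.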
Do bubbles even exist? And if so, what are the quantifiable factors involved in identifying a bubble? Such that one can backtest using those factors and independently identify "bubbles" and results of those bubbles through market history. Questions adapted from the common ones Eugene Fama asks when people suggest bubbles exist.
There are plenty of applications for AI in the future. The question is how much data processing we will need. Models will obviously get less expensive.
And would, for example, automated factories want to use cloud servers, or have their own on site to allow faster exchange of information?
For the last 40 years or so the world has been developing ever more powerful chips, CPUs, and GPUs, but suddenly it seems we don't need them anymore?
You need them for games 😂
There's still time for the AI bubble's top to pop.
Like, bruh, dis whole thing got me thinkin what else needs poppin ? Can we pop my neighbor’s bass heavy playlist at 3 AM? Can we pop the bubble wrap I’ve been savin for a rainy day that’s never comin? And don’t even get me started on cherries, fam. Like, why we poppin A.I. when we could be poppin cherries into our mouths like we’re in some 90s rom com montage?
Hahahaha you made me smile.
Great point that if the new reasoning models can run on consumer-grade computers, that could mean a bigger market for Nvidia than supplying only large server farms.
It would allow an EXPLOSION of the use of AI in many contexts where the required computer power is now prohibitive.
There is no such thing as "intrinsic value". All value is subjective.
The S&P 500 ETF I bought on the 4th of Jan is currently in the green. It literally took 3 days to recover from the crash.
Yes, it's just a short-term dip; my portfolio is also green, and Nvidia is back to a $3.1 trillion market cap in just 5 days 🗿 In Lockheed Martin and the USA we trust 🗿
So DeepSeek helped the stock market "be talked about" (selling and buying produces more turnover than holding)...
Why would deepseek pop the AI bubble? Makes absolutely no sense. Correlation ain't causation.
The Chinese AI is far cheaper because they didn't do any of the initial innovation; they just built on what was already on the market.
DeepSeek is good but it's not original, so I don't understand the panic, while other smaller national players are yet to release theirs.
1. We are definitely in a bubble with AI and the Mag 7: a few stocks will be winners and the rest will likely go bankrupt at some point. 2. No, the Chinese LLMs didn't pop the bubble, and it's hard to know whether they will.
Since it’s open source, we’ll see many other companies building on, contributing to, and improving it to the point where it makes the Mag 7 and OpenAI less relevant, or at the very least brings in more competition to drive down costs.
It’s all about the grief from Wall Street.
It's not an Nvidia crisis, it's an OpenAI crisis 😢
The first clip is entirely too quiet; poor editing, man.
The bubble is still quite inflated considering the knowledge we have now
Maybe thinking in terms of national economy gives a better idea of identifying bubbles.
Does it actually make sense for AI providers to capture a significant % of GDP to deliver software services?
At some point real people spending and earning money should enter the equation!
best comment I saw so far on the deepseek arrival.
Typical case of "The Emperor has no Clothes" - Fort Knox is Empty
The fall in Nvidia makes no sense. Whatever you can do with cheaper chips, you will always be able to do more and faster with better chips. Do people really think that if the DeepSeek developers had had the cash for more compute that they'd have just said "no thanks, we're good with what we have"???
Yes and no. While you are correct that there will always be a market for Nvidia, two things are still relevant to the stock drop:
1) The valuation was built on the assumption of pre-DeepSeek model needs, which meant fewer, larger customers for the hardware. Trading a few large customers for many smaller ones nearly always leads to commodification and a drop in the price of the products you sell them. Nvidia will still make money; it will just steer back toward normal profit margins like in the past.
2) The big one for Nvidia specifically is that DeepSeek's models don't require CUDA. You can toss them onto Intel, AMD, Qualcomm, or whatever other hardware, and they will run at about the same performance. If that spreads (and nearly all AI companies are very interested in it, and had put in work toward it even before DeepSeek arrived), then Nvidia will have multiple direct competitors, which will crater profitability. It will still be profitable, just not at 200-800% margins.
Why did you ask ChatGPT what a bubble is instead of whether Nvidia is a bubble?
China just came to show that american AI companies aren't invulnerable to competition, which is interesting.
The EU is doing a very sh*tty job in my opinion, and it's a shame that it is anti-innovation, since it bars member countries from infant-industry protection. So we will never be able to compete in this field, and the USA will dominate it by walkover, as usual with all high-tech industries nowadays, because of the EU.
And yeah, China and even Russia are quite good at AI, which makes all this even more embarrassing. Russia has good image-search programs like Yandex and PimEyes. And from the little I have seen of AI image-generation programs from China, I think they do a much better job than the bad AI programs we got in the West. However, I still believe Midjourney, Leonardo AI, Flux, and DALL-E 3 are superior. Janus-Pro was not very good when I tried it a few days ago, so I think China has a little more work to do there.
It was never in a bubble, and China does not have the capacity to burst one anyway.
When I buy a stock, the bubble will burst and it will come down, because the economy would rather burst the bubble than let me make a profit.
I really don't see why the US AI companies wouldn't just take the openly available code for DeepSeek and improve upon it, releasing their own even better version. With their huge resources, they should be able to supercharge such a model into something far better than what the Chinese company released.
Sounds like a great idea for a guide on "How to get sued"
For US companies, DeepSeek's advantage probably isn't that big; the real problem is that OpenAI spent so much money and now has to explain that to its backers. But I have noticed that many small countries are also planning to do AI. With DeepSeek lowering the barrier to entry, small countries can now play with small budgets too.
@@pettahify DeepSeek is open source, meaning it's legal to use their code for any reason, including commercial ones.
OpenAI could, but the point of this video is that it has been shown that it is no longer shielded from competition due to capital requirements since DeepSeek was able to train their model for MUCH less, so OpenAI will always be seen as less valuable from this point on.
And what makes you think the Chinese won't do the same? Everyone is gonna do that, which inherently lowers the value of the current ai giants.
Economic growth in the US over the last 14 years was linked to tech; without that tech boom, GDP would have grown only about 0.5%. But that wouldn't be as bad as people imagine, because even with the last 14 years of growth, life is much harder now: the average American works 2 to 3 jobs to pay the bills. Only the tech bros benefited.
I also don't think you could call this a bubble. Just because something drops significantly in value doesn't mean that the market was irrational beforehand.
Fact is, nobody saw DeepSeek coming and it took everybody by surprise, even experts.
Maybe the experts aren't really experts?
next up US tariffs, I hope
No idea I don't speak the AI language techbros do
It has all recovered and my portfolio is green 💚🆙, and Nvidia is again at $3.1 trillion in just 5 days 🗿 In Lockheed Martin and the USA we trust 🗿🗿
In summary: YES
I'm pretty sure these captions were AI-generated, and they suck: close, but no dice. I think generative AI has been overhyped since the beginning; even calling them "reasoning models" seems like an exaggeration. As long as it doesn't actually know what you're asking, it isn't that useful, and the hallucinations won't stop as long as it is probabilistic. At least for the mass market; in medicine it's doing great.
$500 billion v $5.5 million😂
Nvidia is at exactly the same price as a week ago - before the DeepSeek fiasco!!!!!
The last time it was at this price was in October last year, my dude...
The DeepSeek news has knocked 16% off the Nvidia price as of 1 Feb 2025. It will be interesting to see the price at the end of February.
why is your voice pitched up??
If state-of-the-art GPUs are Nvidia's moat, why isn't AMD's stock exploding? Their GPUs are pretty good too...
They didn't invest as much in helping people use GPUs for things other than games, though I bet they're working on it now
Good analysis! It might be an overreaction. DeepSeek did not share the whole story: only post-training on (allegedly illegally) distilled data from another company (OpenAI) resulted in the model. As a user I am grateful for their really good models, but yeah, not super good for OpenAI. But AI as a whole will continue to grow, with use cases emerging (exponentially so) as AI gets cheaper and better; it's a known paradox that when a resource gets cheaper, it ends up being used far more.
I agree with most of it except for Nvidia; I highly doubt Nvidia will make a comeback.
Even before Deepseek, I already regarded Nvidia as overvalued.
1. Nvidia GPUs are far from the only way to train and run AI: Google has been using its in-house TPU chip since 2015, and Amazon and Microsoft have also been developing their own in-house AI chips in partnership with Intel.
2. Nvidia's gross margin is nearly 50%; such a high premium only further incentivizes other companies to join the race.
3. One of the most important aspects of Nvidia GPUs is the CUDA environment: it spares AI developers from doing hardware-level programming themselves and is supposed to be efficient and effective. While DeepSeek did use Nvidia GPUs, they did not rely on the standard CUDA environment. They showed it is not as efficient or effective as we previously assumed, which suggests to companies buying Nvidia GPUs that the price premium may not be worth it, and that other brands (AMD, Intel) could also be used to train and run AI.
I hope people won't forget how Omnius the AI kick-started the whole Dune saga and endless war for thousands of years 😂
Dude, well done on getting sponsored by the economist! Big up!
The success of DeepSeek's R1 model has only accelerated the race to AGI and ASI. In that future, compute (and energy) is everything. This is not a bubble popping; this is your best buying opportunity, IMO.
Do you have an example of an AGI or ASI (the replacement terms for "AI", which has become too marketized to mean anything clear) actually in existence?
Assuming this theoretical technology does become reality, what are the applications? If I want to build an AI pilot, I'm using different hardware and training; perhaps in the case of AGI I'm using hardware that doesn't exist yet, because AGIs don't exist, and what we have created with current hardware and software is not an AGI. And for that pilot, do I need one that can write code, chat, and compose poetry? Applications matter. I agree with you that hardware demand will increase.
Popping the financial bubble is good, but it's only half the problem.
We're still stuck with the fact that calling it "artificial Intelligence" is 99% BS -- a return to a childish falsehood of the 1940s.
An intelligence has to have a self and be aware of that self in some meaningful way. Obviously very, very little of what is paraded as "AI" meets this simple and obvious criterion.
Cut it out, OK, fellas?
What you're talking about is very advanced, often impressive, and sometimes even useful computation. There isn't an ounce of intelligence in a boatload of it.
The word you're looking for might be "advanced computation" -- assuming, that is, that you haven't figured out the name for what you're talking about and therefore have to fall back on a witless generality.
You've been warned: people who use the phrase "artificial intelligence" have proclaimed themselves fakes.
Can we not be so fast in rushing to conclusions?
I am deeply skeptical about anything that comes from China, but even if it were legit, I don't see how this couldn't be replicated by American companies and become a step forward for AI technology.
I think you are missing the point. You basically have an AI model that performs as well or better for **free**, because it is open source. As a result, advanced AI is not controlled by a few big US tech companies or oligarchies.
@@wyim3677 Yes, but a distilled model still needs the base model to train it, so that is still necessary.
Yes you have something more optimized and accessible for all and thus more useable.
But that doesn't necessarily mean it's the cutting edge where the progress happens.
It wouldn't have existed without OpenAI's o1 model.
The problem is not the optimization; the problem is getting more breakthroughs and advancements in the first place.
There is no point if that stops, that is what the investments are for.
Even if you can make a drug for pennies, that doesn't change the fact that the research of that drug can cost a lot.
@@adrixshadow I think to most people investments are about returns, not technological development. People were betting on monopolies; now they're not. Edit: Actually, maybe it's a bit bigger than that: some of these companies have form for entering markets and operating at a huge loss to drive out competition, and investors are betting that's not an option here.
@@adrixshadow Every current allegation about their training process could be true, and that would still leave the significant innovation in inference efficiency: a cost per million tokens an order of magnitude lower while achieving comparable complexity and performance.
While distillation seems likely, I’m skeptical of the claims of secret Nvidia chips smuggled via Singapore, since Singapore has actual uses for them: it is the main data-center hub of its region and second in Asia by data-center power consumption.
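The "order of magnitude less per million tokens" claim a few replies up is simple unit arithmetic. The prices below are hypothetical placeholders, not actual API rates:

```python
# Hypothetical per-token serving prices (USD per million output tokens).
# These numbers are illustrative only; real API prices vary and change.
closed_model_price = 60.00   # assumed frontier-model price
open_model_price = 2.20      # assumed cheap open-weights price

ratio = closed_model_price / open_model_price
tokens = 50_000_000          # e.g. a month of chatbot traffic
savings = (closed_model_price - open_model_price) * tokens / 1_000_000
print(f"{ratio:.0f}x cheaper; saves ${savings:,.2f} on {tokens:,} tokens")
```

At these assumed prices the gap is roughly 27x, which is the kind of difference the thread means by "an order of magnitude" in serving cost.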
@@philroo1 Not quite. What is really going on is a battle between Microsoft and Google over the future of Google Search and their advertising empire. Every other investor is just a useful idiot funding research for Google's or Microsoft's benefit, or at best getting a paycheck from them if they happen to be useful.
People are ignoring the Trump tariffs when explaining the Nvidia crash 😂
What does "pop" mean in this context, community? Take it by the horns? Make fun of it?
It means spontaneous ejaculation, something that can upset the delicate equilibrium of a bubble.
Software algorithms have a very low ceiling. (To understand this, take computer science courses like Data Structures and Analysis of Algorithms.) Why is AI going through its third wave of hype in the last 70 years? Because of advancement in hardware. DeepSeek used a method called knowledge distillation: using OpenAI's model as the bigger "teacher" and transferring its knowledge onto their smaller "student" model. Not much of a technological advancement from DeepSeek. Also, AI is still impractical and premature in most industries, if not all; it is still in its second inning. There will be many, many more breakthroughs in AI as hardware advances. Open source has been around for more than 40 years, and much of the more advanced (unscalable) semiconductor technology in China is taken from open source. There are plenty of problems with open source, for example security, continuous support, etc.
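For readers curious what "knowledge distillation" means mechanically: the student model is trained to match the teacher's softened output distribution rather than hard labels. A minimal, self-contained sketch in plain Python (the logits are made-up numbers; real distillation runs over billions of tokens with gradient descent):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; a higher temperature softens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from softened teacher targets to student predictions.
    Minimizing this pushes the student toward the teacher's behavior."""
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student matching the teacher has ~zero loss; a mismatched one does not.
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
mismatched = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
print(f"matched: {matched:.4f}, mismatched: {mismatched:.4f}")
```

The temperature is the key trick: softening both distributions exposes the teacher's relative preferences among wrong answers, which is where much of the transferred "knowledge" lives.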
Who told you that? DeepSeek's biggest advancements relate to training efficiency: they tuned their training pipeline so it never has to wait for new data to arrive, something most training runs are bottlenecked by. Btw, this is in line with Nvidia's recent statements that most gains would come from software.
Please, just stick to what you're knowledgeable about, which is economics.
Not saying you are entirely wrong here, but you're on a slippery YouTuber/influencer slope.
You should make a video on how China is competitive with Amurica in most top industries despite being a semi-socialist economy that jails billionaires, has zero immigration, and executes white-collar criminals. Basically doing the exact opposite of the neoliberal consensus.
Keep drinking the CCP Kool-Aid, tofu brain.
Their youth unemployment though? Real estate bubble though? Local government financing problems though?
@@Merle1987 Go on RedNote and look at the cost of living there, and ask around about wages.
You will see that most "unemployed youth" are not really unemployed; they are from financially stable families and unwilling to accept jobs that pay below their expectations.
There were never enough jobs for that many people to begin with; is it that hard to just go work abroad? And as for the real-estate bubble you mention, isn't it even better if housing prices come down? Isn't that exactly the right time to buy?
Time to buy some Nvidia?
Deep sink hahahahah
Long live The people's republic of China!
DeepSeek and Llama are not open source; only the weights are. That does not allow you to reproduce the model, so it is not fully transparent. Having only the weights is pretty worthless for understanding how the model was obtained; the secret is the data, and that, Facebook and co. will surely never release as truly open source!
So this is not open source; the term "open source" in the context of LLMs like DeepSeek or Llama is just BS!
Prove it