You need to be subscribed to this channel before reading the below comments. Sorry, we don't make the rules.👮
ua-cam.com/users/nanalyze
I 100% agree
This is not true. I'm not subscribed, but I can read everything and post here.
@@gappsanon4869 Well now you've done it. You've gone and broken the rules. Heaven help us all.
I see quite an opposing trend: the cost of LLMs is growing exponentially with model size. That means LLMs will soon be unable to grow in size due to prohibitive costs.
They'll also need to show a proper ROI at some point.
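To put some rough numbers behind the cost-scaling comment above: a common back-of-the-envelope heuristic puts training compute at about 6 × parameters × training tokens in FLOPs, so cost climbs steeply as models grow. A minimal sketch, assuming an illustrative price per FLOP (not a real quote):

```python
# Back-of-the-envelope sketch of how training cost scales with model size.
# The ~6 * params * tokens FLOPs rule of thumb is a common approximation;
# FLOPS_PER_DOLLAR is an illustrative assumption, not a real price.

FLOPS_PER_DOLLAR = 3e17  # assumed effective compute per dollar; illustrative only

def training_cost_usd(params: float, tokens: float) -> float:
    """Approximate training cost via the ~6*N*D FLOPs heuristic."""
    flops = 6 * params * tokens
    return flops / FLOPS_PER_DOLLAR

# If training tokens scale with parameters (~20x, Chinchilla-style),
# cost grows roughly with the square of model size.
for params in (1e9, 1e10, 1e11, 1e12):
    tokens = 20 * params
    print(f"{params:.0e} params -> ~${training_cost_usd(params, tokens):,.0f}")
```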
McRib reference. Solid!
Innit
Thank you, valuable point of view. Helps strengthen my thesis on AI investing.
That is very kind of you, thank you! We have covered and will be covering AI quite extensively because it's advancing so quickly. For retail investors, making money off a promising technological advancement isn't always so clear-cut. That's where our experience can help as we've been researching tech for decades now :) Thanks again. Joe P.
I think we will continue to see more emphasis on cost per answer. That's the real differentiation. Google's "there is no moat" memo was probably right. When OpenAI's Advanced Voice costs more per hour than a low-cost offshore worker, we still have some work to do. The Stockfish vs. AI comparison also brings this to light: Stockfish apparently performed worse than the AI, but it used significantly less compute. I guess for bigger problems, compute costs don't matter much, like with drug discovery.
Great video as always.
Yeah the ability to scale compute based on the complexity of the question being asked makes a lot of sense to manage costs. Great comment, thank you.
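As a rough illustration of the "cost per answer" idea with complexity-based compute scaling, here is a minimal sketch; the model names and per-token prices are hypothetical placeholders, not real rates:

```python
# Minimal sketch of "cost per answer" with complexity-based routing.
# Model names and per-token prices are hypothetical placeholders.

PRICES = {  # assumed USD per 1M output tokens; illustrative only
    "small-model": 0.50,
    "large-model": 15.00,
}

def route(question: str) -> str:
    """Toy router: longer questions stand in for 'harder' ones here."""
    return "large-model" if len(question.split()) > 30 else "small-model"

def cost_per_answer(question: str, output_tokens: int) -> float:
    """Dollar cost of one answer, given which model the router picks."""
    return output_tokens * PRICES[route(question)] / 1_000_000

print(cost_per_answer("What is 2 + 2?", output_tokens=20))           # cheap path
print(cost_per_answer(" ".join(["word"] * 50), output_tokens=2000))  # expensive path
```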
Thanks bud. This is a solid channel and an oasis of reason among paid Tesla pumpers and get-rich-quick day traders.
"Oasis of reason" is a great compliment. Thank you for the support! Joe P.
Finally, you get it! This is partly why Blackwell, and then Rubin, will generate so much profit for NVIDIA. And it's why Groq got $640 million in a VC round and why Cerebras did a 180 and said "hey, we're GREAT for inference and we're going to IPO to prove it!" Cerebras always used to say "we're for training; we resell Qualcomm cards for inference." No more. (If you decide to do an inference-hardware analysis, ask me, I can help.)
We've always "gotten it" since we invested in NVIDIA nearly nine years ago. ;) As an investor it's extremely important to rein in hype. Profiting from AI and the future of AI are two very different stories. This piece should be the next one to watch for people getting too excited: ua-cam.com/video/aVspgOTGuEs/v-deo.html
Here's our piece on Cerebras (www.nanalyze.com/2024/10/cerebras-stock-nvidia-killer-or-luxury-good/). We'll hit you up if we decide to do a piece on inference, but this is probably the extent to which we'll cover this for now. Thank you for the offer! Joe P.
Be most critical of the stocks you like the most.
@@riffsoffov9291 Could not agree more!
By most metrics the entire market is dangerously overvalued; whether this is driven by an AI bubble or by the overvaluation of a few behemoths remains to be seen. The issue is, even if the market is overvalued, what investment sector isn't?
Good contrast to Goldman Sachs coming out today and saying the S&P 500 will only return 3% annually for the next decade 😂
It's important to be aware of possible downside risks associated with AI as well. Goldman actually did quite a good report on that covered here: ua-cam.com/video/aVspgOTGuEs/v-deo.html
That's usually the case when an outrageously overvalued market crashes 50% and then slowly recovers.
Well, AI danger can't be an unknown unknown if we are discussing it, and the EU already has regulations trying to address it.
I think one way to address the "destroy humanity" problem is somewhat analogous to how we manage ourselves as humans: don't let a single AI dominate ("benevolent dictator"); always have a set of advisor AI agents and force them to agree on actions. That way we should get a diversity of biases. I don't think a single unbiased AI will ever exist.
I think it was Tim Urban who penned a verbose piece on how AGI might start acting maliciously without anyone knowing. It could "quickly take control of every digital system on the planet." We may not see what's coming if we create something that's exponentially more intelligent than any human. When a bureaucracy tries to set rules in place to manage these risks, it may not succeed. The proposal you've made in your last few sentences sounds interesting! Collective decision making.
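A minimal sketch of that collective decision-making idea: several advisor agents with different biases, and no action without a qualified majority. The agents and their vote logic are hypothetical stand-ins for real models:

```python
# Toy sketch of the "committee of advisor AIs" idea: no single agent acts
# alone; an action proceeds only if a qualified majority agrees.
# The agents and their vote logic are hypothetical placeholders.
import random

def make_agent(bias: float):
    """Each agent gets a different bias, giving a 'diversity of biases'."""
    def vote(action: str) -> bool:
        # Placeholder for a real model's judgment of the proposed action.
        return random.random() < bias
    return vote

agents = [make_agent(b) for b in (0.2, 0.5, 0.6, 0.7, 0.9)]

def approved(action: str, quorum: float = 0.8) -> bool:
    """Require near-unanimous agreement before any action is taken."""
    votes = [agent(action) for agent in agents]
    return sum(votes) / len(votes) >= quorum

print(approved("deploy_update"))
```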
Thank you for this insightful video; it provides much to consider. In my view, the only metric that truly matters is whether a company can achieve Artificial General Intelligence (AGI) and how soon it can do so relative to others. If a company can create one AGI system, it can be replicated thousands of times, and it would be like having thousands of engineers or scientists, potentially operating for free or at a fraction of the cost. The company could be the Amazon of Amazons, disrupting any and every sector. Whatever industry they target first will be disrupted, whether it's NVIDIA, Amazon, the financial markets, or politics. AGI could disrupt entire ecosystems. The company that controls AGI will have the unprecedented ability to reshape the global economy and social order.
Great comment here. Could not multiple companies achieve this? Or will the one AGI system that emerges first become superior through recursive learning and advance so fast others can never catch up? Really hard to say. Agentification and a focus on AI thinking are the current trends to watch but it's likely this space evolves and changes very rapidly. We'll stay on top of it. Thank you for the thoughtful comment. Joe P.
ANY of the top 7 US companies could hire 1,000 human engineers TODAY and go disrupt whatever industry they like. So why isn't it happening, and how will it be different with AI engineers? Money is not the problem; they all have dozens of billions in cash on their balance sheets.
Is it a shortage of good enough engineers? Some VCs (Chamath Palihapitiya) have argued that FAANG has been hogging good software engineers, but maybe not anymore since the recent layoffs in tech.
@@Martinit0 great point! Amazon exemplifies this perfectly. Over the years, they've disrupted multiple industries: online shopping, cloud computing (AWS), logistics, retail (Whole Foods), pharmacy (PillPack), and healthcare (One Medical), among others.
Currently, disruption requires Amazon to either hire experienced professionals, poach them from competitors, or acquire existing companies. Building human expertise takes 10-20 years, and you need many skilled workers. The key difference with AI is scalability: once you train a system, you can replicate it thousands of times by simply copying it to another computer at minimal cost. When robotaxis become available, they'll offer unbeatable prices compared to human drivers.
While AGI might not arrive immediately, the massive financial investment in AI is driving rapid improvements. For perspective, the moon landing cost around $159.6 billion in today's money, while AI investment is projected to reach $200 billion by 2025, according to Goldman Sachs. This level of investment will accelerate AI development significantly.
The AI topic in general is too difficult for me to grasp. Damn...
It's aight. We keep it simple ;)
@@Nanalyze Thanks!
✌️
✌️
On the other hand, there will never be a model that doesn't return generated errors. Where will the error machine be safe to integrate? In far fewer places than evangelists claim... mostly nuisance as a service.
We shall see what happens
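For a rough sense of why generated errors limit where models can be integrated: if each call errs with probability p, an n-step pipeline is error-free with probability (1 - p)^n. A quick worked sketch with illustrative error rates:

```python
# Worked arithmetic behind the integration worry: if each model call is
# wrong with probability p, an n-step pipeline is error-free with
# probability (1 - p) ** n. The p values are illustrative assumptions.

def pipeline_success(p_error: float, n_steps: int) -> float:
    """Probability that a chain of n calls produces zero errors."""
    return (1 - p_error) ** n_steps

for p in (0.01, 0.05):
    for n in (1, 10, 50):
        print(f"p={p:.0%}, steps={n:>2}: {pipeline_success(p, n):.1%} error-free")
```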
Current LLMs are operating in a mode that humans would describe as "intuition", "gut feeling", or "winging it". That's why the focus is now on building a reasoning capability, as Joe mentioned.
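A minimal sketch of that distinction: the same question asked in one-shot "gut feeling" mode versus step-by-step "reasoning" mode. The call_llm function is a hypothetical stand-in for any chat API:

```python
# Sketch of the "intuition vs. reasoning" distinction: the same question
# asked two ways. call_llm is a hypothetical stand-in for any chat API.

def call_llm(prompt: str) -> str:
    """Placeholder; swap in a real model client here."""
    return "<model response>"

question = "A bat and a ball cost $1.10 total; the bat costs $1 more. Ball price?"

# "Gut feeling" mode: one-shot answer, no intermediate steps.
direct = call_llm(question)

# "Reasoning" mode: the model is told to spend tokens thinking step by
# step, trading extra inference compute for fewer intuition-style slips.
reasoned = call_llm("Think step by step, then answer:\n" + question)
```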