A few more points I didn't mention in the video:
1. A day after I uploaded this video, I saw tech bros on Twitter saying "all you need is Claude" and you can code almost anything... yet they couldn't even recreate a basic component on neetcode.io that I literally coded as a junior engineer. So once again, people are vastly overstating what AI can do. If only this hadn't happened a million times before in human history.
2. Amazon invested billions in Alexa, only for it to be obsoleted by LLMs. I worked on Alexa (for a brief time) and it was obvious it wasn't well run. Big tech doesn't always know what it's doing.
3. Amazon Go's "AI" turned out to be Indian workers watching security cameras.
4. Nearly every advancement goes through hype cycles. Not just the dotcom bubble; even the railroads were overbuilt in the 1800s. "The Panic of 1893 was the largest economic depression in U.S. history at that time. It was the result of railroad overbuilding and shaky railroad financing, which set off a series of bank failures."
Fwiw, I literally use LLMs on a daily basis to automate my own tasks. Yes, they help somewhat, but I'm also very familiar with their limitations. If you disagree with me you may very well be right. But at least give me your best argument :)
Sources:
- ua-cam.com/video/U_cSLPv34xk/v-deo.htmlsi=Czh2GAG1wfVxjfhD
- x.com/swyx/status/1815053785548661128
- arstechnica.com/gadgets/2023/11/amazon-lays-off-alexa-employees-as-2010s-voice-assistant-boom-gives-way-to-ai/
- www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4
- en.wikipedia.org/wiki/History_of_rail_transportation_in_the_United_States
I fully agree. The hype still surprises me, and don't worry: even when I post a comment questioning the capabilities of current A.I. approaches, some tech bro fanboys emerge and say "this didn't age well" whenever an LLM gets an update or they see one of those (faked) tech demos. I mean, if anyone actually worked (or rather tried to work) for, let's say, 5 hours on something more serious and accurate, they would very quickly learn about the (crippling) limitations. I can't trust an "assistant" that doesn't know what it doesn't know and "hallucinates" outright false information instead. These programs have their use cases, but they're neither transformative nor money-making at the moment, and probably not in the near future either.
Thanks for such a cool-headed commentary on this subject! Would be lovely to see a video on how you leverage LLMs to automate your tasks. Please keep up the great work!
I think that you, like a lot of devs I know, are underestimating what is going on. I have worked in IT since the '80s and in AI for the last 8 to 10 years. The first mistake is to think that a single LLM is the benchmark of AI evolution. It is not. By design, LLMs can only give you reasoning; the information will always be imprecise. But this is not the way things will go. When you start to use agentic and tooling systems, it's a whole new world. An LLM is only one tool in a system that can reach better-than-human performance and cost effectiveness. The simple LLMs we can run locally today are already capable, if you use the correct techniques, of things no single LLM could dream of.

The next level is specialist systems: ones designed to replace humans in complex tasks like programming, engineering, medicine, etc. You may ask why you are not seeing this now. The first reason is that you need knowledge. The vast majority of young devs and AI practitioners have no hint of the knowledge an engineer or a physician must have to do the work. That is why people keep checking useless benchmarks on LLMs: they want somebody (AGI) to solve problems for them that they are not even capable of asking about. The second reason is that these systems are difficult to develop. We have plenty of people capable of developing solutions, but only if someone else details exactly the problem and the expected solution. The hard part is not the dev work; it is the necessary business knowledge. The third reason is security. Big companies are certainly developing this kind of solution for themselves to gain efficiency, and very, very likely this will mean white-collar layoffs. I expect a huge impact on the job market in 1.5 to 4.5 years. Security of the company's knowledge is paramount; these systems will not be marketed. They will be the company core.

I understand that a lot of people do not agree with what I am saying. I have a lot of experience in all these areas. What we are seeing is a revolution in the making. Not because of entertainment things like SORA or mimicked voices, but because these tools will make a huge impact on our lives. I think the IT market will shrink after a short-term bump; a large part will be automated. These systems will certainly lay off a lot of professionals, because the efficiency gain will be absorbed by the companies and not by the employees. If you have 10 people, this will change to 8, 6, or even fewer. The clock is ticking; there is no way out. No, it will not take 20 or 30 years. We are talking about a span of time that is fatal for young generations. In five to ten years the world will be alien for people with university degrees and professionals who do not work with material things (like surgeons or field engineers). If the work is only intellectual, it will be replaced.
Someone asked me (SE) if I was worried about my job being automated. I told them, no, because if a machine could do my job, then it could also make a better version of itself. That's the singularity.
As a network engineer, I remember as far back as 2017 there was hype about automation systems and Software-defined networks taking over and that network engineers would be obsolete. 7 years later and I've only seen the requirements increase for engineers, just that the technology stack has been changing. So not falling for that hype again. Just get good at what you do and explore how to use AI and new tech in your workflow.
This is actually different. I use AI daily, and it's the same leap as going from no search engine to AltaVista and Google. The lesson should be: hype can be a false alarm, but it can also signal something truly remarkable. AI is that. It may not be what the propaganda wants you to believe, but that doesn't mean it's not huge. It is huge. And you know that because you can really experience it yourself. It's going to be really fast this time; I'm starting to get fed up with just simple search tools. We are talking factors of improvement.
@@jamad-y7m " She seems to have gone from an intern to a Senior Product Manager at Tesla...then a couple jumps later and suddenly she works as an AI researcher...or from other sources as the VP of Applied AI and partnerships...with no educational background in AI. She must be extremely brilliant, but I am just dumbfounded on how quickly she went up the ladder. "
Such a good video! I did physics in undergrad and comp. neuroscience in grad school and am now working with a mix of researchers from various disciplines, broadly around cognitive science and the evolution of human/primate cognition. I was never upset because my job was acutely endangered, but it's been quite disheartening to listen to so many really very intelligent people get so on board with this hype, to the point where it was ridiculous. I am very much on board with automating as much as we can. But the degree to which people were willing to believe it can do *anything* and would call more realistic assessments "unnecessarily negative" was crazy. When people who study neuroscience and infant development and (should) know how different human learning is from what LLMs do stand there and tell you that AI can replicate human-level cognition and will "soon" be able to randomly learn and synthesize knowledge with just a few more iterations of the models, I really am at a loss for words.
I have a master's in computer science focusing mainly on machine learning, and I was furious when GPT-3 launched back then and everyone (even people I respect in the field: Fireship, Nick Chapsas, etc.) was losing their minds about AI taking people's jobs within a year or so. It's good to see that now most of the knowledgeable people are actively settling the discussion and pointing out the false claims. Glad that you do that too!
Well, people are finally realizing that the improvement in LLMs is really stagnating. The jump between GPT-2 and GPT-3.5 was huge; the jump between GPT-3.5 and GPT-4 was impressive but not of the same magnitude. And right now we don't see that big of a difference anymore between each new generation, so the hype is finally slowing down!
@@the3Dandres I'm pretty sure that comes down to the decelerating growth in the model parameters. GPT-3 already used thousands of GPUs and months of training time.
@@seer775 Partially, but also consider that a doubling in parameters does not result in a doubling of output quality, so this curve is really flattening out. After all, it's still just matrix operations, not real intelligence.
@@the3Dandres depends on what task you're measuring on. Complex tasks will see more ROI with more compute, but you will eventually hit a ceiling in terms of how much compute you can provide.
AI is a great toy for making memes and writing blogs. It's entirely useless for 99% of the things I need it for. I doubt it will ever be more than a novelty. I doubt FSD or humanoid bots will be a thing in the next 5 years.
The 99% applies very hard to the software development part. Dev codes, it fails, Dev fixes. AI codes, it fails, Dev has to go through an entire codebase of generated code to fix it... If you push the idea to its logical extreme: in a world where AI produces most of the code, fewer and fewer Devs would be able to fix errors, which would increase the cost of failures => failure rate goes down, failure cost goes up.
That's not how it works. I have produced big chunks of code with LLMs, and I know the code as if I wrote it myself. Rather, the flow is: Dev+AI code, it fails, Dev+AI fix.
@@adadaprout how much real world experience do you have? Could I look at your github? Sorry if this comes across as hostile, but I am genuinely curious.
@@mofumofutenngoku except that it was accurate. Those Google employees make bank compared to your broke ass, but in comparison to what the company makes, that's barely a quarter.
"Fake it till you make" it the philosophy a lot of start up companies follow because its the only way to get financial support to gain the resources they need to propel themselves This is commonly seen in the Bay Area at places such as Stanford, Berkeley, SF. .. etc
It doesn't matter if AI can or cannot replace engineers. They are still going to be fired, and the software engineers remaining will be doing triple the work, working every weekend to compensate for the fired ones. Yes, they will do it because they will be driven by the fear of being fired and replaced by another engineer. If AI can actually replace jobs it's just a bonus, it's actually not necessary.
Every startup has to take some big risks, and that phrase "fake it till you make it" is usually spoken by the already successful. The whole story behind anybody's success is often way more complicated. That being said, taking risk is not bad for the business nor the customer when handled properly.
They already delivered everything after like a year or two. They have made up for the overpromises like five times already. If AI is anything like that, then we are getting superintelligent AGI in 2030
NMS eventually had the bare minimum to technically meet the pre-release claims that could be verified or easily quantified (e.g., now it has multiplayer!! Wow!!), and then paid a popular content creator to make an hour-long video essay to hype up that update as way more than it was. Just like the other guy said, that kind of thing is already happening with AI!
@@vitriolicAmaranth You are crazy if you think they haven't overdelivered on _everything_ they promised by now. I still think the game is kinda boring, but they've done everything and more to support it, and to even surpass the overblown initial expectations. If that's the kinda game you're looking for, NMS is it. No asterisks. It's just it.
You should much rather call it the Cyberpunk strat. It's a good game now, but it is _currently_ in the state it should have released in (even slightly behind in certain areas).
4:36 “Google is a monopoly. They have so much money they don’t know what to do with it. They sure as hell aren’t going to give it to their employees...” I subbed after that. 😂
Many now claim AI is overrated and all that, but I'm pretty sure the hype was just a collective misunderstanding of what this 'AI' actually is. I think people expected 'AI' to suddenly change all paradigms; they were misled by the media, by youtubers, and by the explosion of 'AI apps' (which are mostly based on GPT API calls). The distributed systems example is a great one.
The same happens with every new technology. When blockchain came out, the first waves were the grifters, and deep inside was the 1% of truly useful projects (it's a tracking technology). I grew up in Romania in the early '90s. We went from one single bank (CEC) to a decentralized system, and we had many scams and national pyramid schemes (main cause: education). The same is just happening with AI, only much faster, and I feel that because of the disconnect from information and the amount of distractions, only a small percentage of the world's population really understands what AI brings, while 99% are witnessing the grifting part of the AI hype.

The big problem I see is actually a tsunami coming, if you really understand the level of AI compute we currently have. For example, the price per token for top AI is down 99% in just two years. Not to mention there are plenty of open-source models that work relatively well on almost any machine at this point. AI is a tool. Anyone can build and create now at a lower cost than ever. In order to generate software, the most important part is to actually have the idea and be able to communicate it. You are not good at communication? No worries, AI can help you with that too. You want AI to make a plan for you? Done. Anyone can create almost anything at this point... The more context about yourself, your dreams, and your current skills and assets, the more it can help you achieve your goals. As long as you use information properly, feeding in the results can be phenomenal for any individual from any corner of the earth.
It's not a collective misunderstanding, but plain stupidity. People love to overhype things they don't understand. Not only that, but on top of it people love to act as if they actually know what they are hyping. I hate it when kids on youtube do that. Like that time when some minecrafter made "an AI" from command blocks, when in reality it was just a pathfinding algorithm with an overcomplicated memorization process. But I can't really hate those kids, because I know their fathers are doing exactly the same with Elon Musk's persona. Hyping over AI being overhyped would be the next step for sure, so people can jump on the wagon of diminishing AI hype and start something actually useful 😂
It was a disinformation campaign led by people who stood to profit, actively supported by academics who wanted to get in on the action, and disseminated by a willfully uncritical media. The general public never stood a chance.
My dad and I are both software engineers, and our recent conversations have mostly been about AI, because my dad's company started to replace part of his team with LLMs. The anxiety people have been having about AI is sometimes soul-crushing when it hits your loved ones, so I have to keep reminding myself that the content creators/companies intentionally hyping LLMs up make money by doing so, and most of them never cared about where the tech is taking us. I am glad to see this vid.
It's great that this video helped someone! I do think that there is a lot of hype around AI, but that this hype is not baseless, as there is a base of technological innovation there, similar to the internet in the 90s. There was a bubble and it was initially overhyped, but looking back at it now, some promises were certainly over-optimistic on a one-year horizon, yet right on the money over 20 years.
I hope they will cry and seethe when they find out their automated "employees" don't generate any value and they don't have enough workers lmao. 3 hours of work plus 1 hour of debugging vs. 1 minute of work, 6 hours of debugging, then scrapping the code and doing the above anyway.
AI cannot replace software engineers…for now. "If it bleeds, it can die". If it can recognize errors in code, it will eventually develop its own fully functional system. Give it time.
It's amazing that people can even say such a thing. All you have to do is use it for a few hours and you know: this is a complete game changer. It's not just a toy, you can feel it. From managing my Linux system to scripting Google spreadsheets to extracting summary info about entire fields of knowledge. And then I've done some experiments that show that Gemini has been neutered so as not to be too useful. It's just this big of a deal, you can feel it. Anyone who laughs this away is completely in denial.
You don't think planes malfunctioned and crashed when autopilot was a prototype? Of course they did. Same with any new tech. The beginning is always shittier than what comes next, which improves on it. If everyone had a perfectionist mindset, we wouldn't have gotten anywhere past the Stone Age. The point of technology is to innovate. That's why you have new versions of the same product that do it better than the last. Sure, a few planes need to be sacrificed, but that's how you improve the AI. The scary thing is that AI is improving at such a rapid pace that it now gives people an opportunity to build anything of their wildest dreams, and in 10 years this technology will be extremely powerful. You clearly don't see the big picture. But then again, I'm sure when the Wright Brothers first invented the airplane that flew just a few feet, people said it would never touch the skies, and a century later you have thousands of them in the air. You're only limited by your imagination.
@@the-ironclad It's because AI creates open-ended functions. You can't possibly know how it will react to every single set of inputs, while for human-designed functions you can know for sure how they will behave with enough analysis. In response to input that the AI wasn't trained on, it can do some really weird things, because it wasn't trained to behave normally in that domain. If it can distort its response in untrained domains to perform better in trained domains, it will. All that uncertainty means that all an engineer can tell you when asked "will it work?" is "hopefully, since it's worked thus far on our finite amount of training data".
You don't think automated cars are the same? They are at the point where they drive more reliably than most humans, so why would an airplane be any different?
I wonder if AI would tell Boeing to make a new plane model by putting oversized engines on a frame not designed for them, and compensate with controls software.
Nice polemic. They don't let AI code such software, but they will go and pick its brains to come up with good solutions in much much less time. Once they get the hang of it it will improve everything. Just start using an LLM for a few days and you'll see what it does: you are still in charge but everything gets so much easier that you can actually focus on the important things. Sure, the hypers do not emphasize that, but that's what constitutes the real payload of these AI tools.
To anyone young and curious how to take advantage, my wisdom is this. Of the Californians who got rich during the gold rush, a few found gold, but the shops that sold shovels made far more bang for the buck. The weed industry: it's not the growers pulling in fat stacks, it's the lights and water techs that service the warehouse. In hedge funds, it's the dude who finds the new formula for others to exploit. What I'm trying to say is that it's probably less risky to sell to the people taking the risk than it is to incur the risk yourself. Make honest money off their ambition, and as long as it's honest, you'll be good.
As a ML Engineer, I hate the conversations we’re having around AI and ML and all the hype. ML is a good tool for a subset of problems, but it’s not the endgame of CS. At work, we do our best to find a deterministic solution first before we use ML. People think this tech should be used to think for them instead.
Being an ML Engineer is not enough to make you some kind of authority on the subject; you're basically a data scientist, not a scientist from OpenAI or Anthropic.
As another "ml engineer", i would say that all human functions will be done better by machines, except those involving empathy, connection, or responsibility. if i have a robot that costs 5,000 and it has super human intelligence and types 200 WPM, why would i hire a human? i would basically only hire humans for front desk receptionist
@@AL-kb3cb I don't think I'm an "authority," but given that I understand and develop the algos and the systems that utilize the algos, and often implement papers into code, I am educated enough to discern BS from reality in my field. On a side note: I have also done research in the field, which makes me think I am capable, though likely not competitive, for research roles.
@@RoboticsOdyssey A good book to read is called "The Myth of Artificial Intelligence". It talks about the fundamental reasons ML algorithms likely can't completely replace humans even in cognition.
And ML still hallucinates, gaslights, lies, or refuses to cooperate at times. You should know enough about your problem-solution set, so you can see if a "solution" is dead wrong, without wasting time, money, or causing a disaster.
While AI hype can be misleading, real advancements are undeniable. DeepMind's AlphaFold, for example, revolutionized biology by accurately predicting protein structures. As a software engineer, I use multi-agent systems to automate tasks efficiently. These tools show AI's practical benefits beyond exaggerated claims.
Totally agree. AlphaFold is a perfect example of how amazing it is. And how about these AI chatbots you can talk to that are indistinguishable from a human? That's "Her" from the 2014 sci-fi movie, now sci-fact in 2024, and this rate of improvement is exponential.
@@seva4411 These chatbots are really cool and cute and also, extremely useless. I mean, they have their uses, but it's almost decorative. They don't substitute anyone's work. At best, they can serve as useful learning tools.
@@seva4411 You're right, but AlphaFold has nothing to do with what people nowadays refer to as AI/predecessors of AGI. It's "simple" machine learning, as has existed for a while. And it is for sure not threatening to replace half the workforce tomorrow.
@@RoboticsOdyssey I think it's not that it's 'just hype', but rather that it's a technological Gartner hype cycle, with specific stages, and that we could be heading for the trough of disillusionment soon; but after about 5 years it will be the plateau of productivity. 👍
@@OnePlanetOneTribe That's true, but AI has been through 60 years of those cycles since McCarthy formalized common sense in 1958. AI is a lot bigger than LLMs. Things like AlphaFold can create industries. No one really knows what's about to happen.
@@RoboticsOdyssey You sound like you're 15 years old and you missed the internet bubble pop of 2000. The internet WAS hype at one point. Many people who saw the hype and foresaw the pop made a pretty penny out of it. A few of them didn't need to work a whole day for the rest of their lives. I remember what telly sounded like in the late 90s. It was something like this: "Blah blah blah the internet this, blah blah the internet that, blah blah blah blah the internet patatee, blah blah blah blah the intenet patatah." Replace "the internet" with "AI" and that's where we are today.
@@chesshooligan1282 I cannot help but observe that the internet was the *one* success where the hype feeds into the notion that there's something to these other fads, whether they be AI, quantum computing, fusion, cryptocurrencies ... I'm pretty sure I'm missing others. What's more, while the internet itself ended up finding its place in the world, there were nonetheless a *lot* of companies that rode the hype bubble, and ended up collapsing rather than growing.
It's been really hard to stay motivated with my school work as a CSE student. My life for the past ten years has been in shambles, and learning programming genuinely gave me a happiness I have not felt since I was a child. I want to program for a living; I want to make software that people use on a daily basis. I don't want AI to do everything for me and/or completely replace me, with programming becoming just a hobby with no chance of competing against AI systems. (I also hate AI for art; it kinda kills the whole purpose of it, but that's a different story.) I agree with all your points, as I have been following Gary Marcus and Yann LeCun for a while now, but the chance that we're wrong and AI does invalidate all my hard work creeps into my brain while I'm trying to learn. I'm hoping either the bubble bursts or the tech just takes off; this middle area of not knowing is honestly killing me.
You got this. I'm also learning coding and only started in late 2022, but AI has only helped me learn programming quicker; it's not an enemy. It's just another tool available in your arsenal. You will still be competing against other humans for jobs, all of whom will probably use AI to different degrees. But AI being able to do everything itself is absolutely not happening for a very long time. It's really just regurgitating publicly available code; the more you ask for unique instructions that are not published on the internet somewhere, the more your margin for error shoots way up. Try asking it to code in any brand new version of a framework or SDK that just came out this year: it literally can't, because it has only been trained on the previous versions. Good luck out there, it's crazy times, but if you work on your craft as much as possible and leverage AI to your advantage you can probably find something. I'm constantly looking at what other people with
I'm in the camp that current "AI" in no shape will invalidate any meaningful work you will do as a software engineer. Sure, it may be able to help generate some basic boilerplate, and maybe very basic CRUD apps, but that's it. Anything that is remotely complicated AI will NEVER be able to do, or at least this current version. Try doing any project with moderate scale; AI completely and utterly fails. And it will remain this way for the immediate future because I personally believe these LLMs are already near their limits.
As an AI dev working for one of the big companies, I can tell you that we will always need more good programmers and engineers. AI is at the top of the hype cycle: if you look at the Gartner hype cycle chart, we're at the peak of inflated expectations, and it's going to crash at some point soon.
It's impressive how different one person can be from another, while still being similar. I wanted to learn programming since I was a child. I eventually wanted to program for a living, making software for everyone to use, but I DO want AI to replace most humans and do everything for us, even completely replace me, even if programming becomes just a hobby because of that (for the last 20 years it's been just a hobby anyway, since I haven't gotten any job related to computers so far, lol). And I also LOVE AI art. It makes art creation accessible for me and for everyone who has always had something to express without the means to do it. I believe that whoever dislikes AI art is denying the true purpose of art (which is to communicate something) to instead exclusively elevate the technical part of art, because that's the only thing they can do, so they protect it to death. AI, if properly integrated, will put an end to all the bad things that humans have brought into the world. It will be the greatest change we will see in centuries. I've been waiting for it since childhood. Developing AI is the reason why I wanted to learn to code, actually. Hype or not, it is The thing. We must keep trying to achieve it. At (almost) all costs.
___
BTW, I don't think AI will be properly integrated into the world. In the end, we will just have a partial dystopia thanks to it being misused by corps and gov, but I'm just one person, so I can't do anything about it but hope.
One of the most sane videos I have seen about AI. I talk about these things with my friends, but your arguments are rigorous and reasonable. And again, the Oscar goes to the hype economy. I think people are still trying to figure out a way to live with all this communication. We are being bombarded with connection, but we mostly use it to fool others on the line so we come out better off in the equation, which breaks us all. In the end we are all hungry for security and trust.
There is an elephant in the room that they just don't want to talk about. If AI tools become broadly used, the amount of electrical power needed is beyond the capability of our current electric infrastructure. I sure don't see fusion being available around the corner either.
If anyone needs any more of anything and has the money to pay for it, then supply will expand to meet the demand. The current electricity supply we have right now matches the current electrical demand. I don't know if we are running out of resources to build infrastructure; if so, you're right. But the notion that AI is inviable because of the current electrical capacity of society goes against the laws of supply and demand.
@@iubankz7020 But in the case of the power grid this process spans across several decades. Also with new environmental regulations and anti-nuclear sentiments it's unclear whether such an expansion is feasible at all.
@@kSergio471 AI (and AGI) is fundamentally based upon inputs to create something. It isn't from "scratch"/nothing. AGI is essentially trying to create human intelligence. It comes from a source, that being humans providing the algorithm and inputs. Always remember that something coming from nothing can seem a bit odd, since that something is probably based on something else (not nothing). This ignorance to that something that came from something (but is perceived to come from nothing), can lead to hype. This is what I took away from the ending. As with anything, you take from it whatever you want. Even if it's nothing.
@@1337erBoards thanks 👍 However, it seems a bit odd to me: even if AI is capped by what's possible for the human brain, this cap is still something unbelievable.
With the current economic conditions, I personally believe "AI" is just a unicorn that major tech companies want to ride, and it is in their best interest to entice as many investors as possible to join them for the ride.
Nope. AI delivers almost ideal workers that will instantly replace many pesky employees. It's real and it works. As for the real workers: the LLM is like the colleague they always wanted: figure out how to do this, make me a sketch of how to do that, etc. It needs just a few instructions, and does a thorough job.
Unrelated but I’ve just gotten my first SWE job, looking at apartments to move into, and you’ve inspired me to find something more humble haha. You must have some serious bags but still living simple, good stuff man
10:43 I'm a dev in my 50s. I've seen a lot of major and minor hypes, and this analysis is spot on. AI will be huge, and hopefully not a dangerous thing. But it's at least 10 years from now, probably much longer. Big tech knows this is a dead end when several nuclear power plants are needed to get the intelligence of a 4-year-old into a computer. We need AI to be 99.9% right in everything it's doing before it can be really useful. Right now we get thrilled if AI is spot on half the time. That is not useful in real life as a "workforce".
Have you actually used it? I mean, are you creating things not yet pre-planned? New areas? I've taken these things for a spin and boy, I'm programming stuff now in a fraction of the time it cost me before. Of course, if I were doing stuff I'm already really familiar with, then it's just a sort of double-checking facility. But in new areas... And of course you cannot use the code as-is, but it sure as hell helps A LOT to see what it comes up with. The efficiency boost is simply extraordinary. Literally out-of-the-ordinary. I have never experienced anything like this. It WILL change a lot of disciplines for good, yes indeed, especially software engineering. If you think otherwise, you either tried something trivial, or you're simply in denial.
10:26 - That problem is actively being worked on. It's a software issue. There are several directions, but the one I like the most is: once trained, the model ain't fixed. It can re-learn and overwrite what it learned in the past, allowing it to update tiny chunks of its knowledge instead of having to retrain its whole brain.
Yeah never thought about that. AI output will outnumber human output. Therefore 80% of input to AI will be by AI. A true garbage in garbage out garbage in.
I discussed this with my professor. We also talked about how the change from GPT-3 to GPT-4 involved doubling the number of neurons in the neural network, which raises the question: if you are doubling the number of neurons, are you doubling the performance? It seems like there is not a doubling in performance. This means there are probably very severe diminishing returns as hardware tries to catch up with the exponentially increasing computational demands of iterative neural network improvement.
GPT-3.5 had 175B parameters; GPT-4 reportedly has 1.5T. That's an 8x increase in parameters, but there is nowhere near an 8x increase in performance. Also, just a couple of days ago, Meta released Llama 3.1 with 405B parameters, which is comparable to GPT-4. So just infinitely throwing more parameters at a model doesn't really help much.
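For intuition on why 8x the parameters doesn't buy anywhere near 8x the quality, the published "Chinchilla" scaling fit (Hoffmann et al., 2022) is a useful reference; the constants below are that paper's fitted values, quoted from memory rather than re-derived:

```latex
% Chinchilla fit: loss as a power law in parameters N and training tokens D
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69,\quad A \approx 406.4,\quad B \approx 410.7,\quad
\alpha \approx 0.34,\quad \beta \approx 0.28
```

With alpha around 0.34, multiplying N by 8 shrinks the parameter term only by a factor of 8^0.34, roughly 2, and the irreducible term E never shrinks at all, which matches the "nowhere near 8x" observation above.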
Scaling isn't the only avenue AI researchers are pursuing; it's the hack that unlocked somewhat capable language models. Now that we have them, it's given researchers something tangible to study and build on, which has led to chain of thought, tree of thought, mixture of experts, retrieval-augmented generation, multimodal models, data distillation, etc. Scaling will be pursued as far as economics and data will allow, but it's not the only game in town. I also expect the recent trend of more capable smaller models to continue.
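Of those, retrieval-augmented generation is maybe the easiest to see in miniature. A minimal sketch of the idea, assuming a toy bag-of-words `embed` and three made-up documents (real systems use a learned embedding model and a vector store):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding', a stand-in for a learned model."""
    return Counter(text.lower().replace("?", " ").replace(".", " ").split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "GPT-4 reportedly uses a mixture-of-experts architecture.",
    "Llama 3.1 405B is an open-weights model released by Meta.",
    "Retrieval-augmented generation grounds answers in fetched documents.",
]

def retrieve(query, k=1):
    """Rank the corpus by similarity to the query, return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# The retrieved text is prepended to the prompt, so the model answers
# from supplied context instead of relying purely on its parameters.
query = "What is retrieval-augmented generation?"
print(f"Context: {retrieve(query)[0]}\n\nQuestion: {query}")
```

The point is architectural: you can improve the system's answers by improving the corpus and the retriever, without retraining the model, which is exactly why it's an avenue separate from scaling.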
Even 3% is very good, by the way. Along the way it picked up concepts in language, math, and/or coding, concepts that other models spent years acquiring. So yes, it is huge. If you want ChatGPT-4o to double in performance, that's scary, because you and I may not know how many higher-level applications or concepts it knows. Of course they are building more complicated models with end-to-end functionality, which, just like ChatGPT-4o, pick up language, math, and coding along the way. It will keep rising, and we still haven't seen the plateau of transformer-based models: although 100 trillion parameters seems like overfitting, the architecture can still be improved for higher end-to-end functionality. You must not care too much about the diminishing returns, because it's also dataset plus architecture complexity and functionality, not just parameter count. These are hyperparameters, and most are tuned statistically to find optimal values.
Finally some clear thinking! Well done! I think you're being generous when you say that there have been many times when people (i.e. for-profit corporations) have blurred the lines between hype and fraud. If the manufacturer of a machine tool claims its new product is the first to achieve milling tolerances below some value x and customers buy it on that basis, only to discover that the actual tolerances it can achieve are nowhere close to the claims, we would not say the manufacturer "blurred the lines between hype and fraud". What allows software companies to get away with this?
If you don't know anything about AI (it's not really AI, though), it will look like magic. But as you unwrap its intricacies, you'll realize that AGI can still be classified as "impossible".
It could be possible that making AGI out of the transformer architecture is impossible (at the moment I would say it is even very likely), but I think it is not really possible that AGI is impossible as a whole. General intelligence is possible within the laws of nature and it is achievable in a quite efficient way. The human brain represents a system with many functions that are not wanted for AGI (so it is more complex) and still absolutely possible. Even in the worst case where scientists need to mimic the functionality of the brain very closely, which would take us at least many decades and huge amounts of resources, AGI would technically still be possible. On the other hand, for the case of AGI being impossible, there needs to be something so inherently unique to biological brains that is categorically impossible to mimic or replicate. What process should that be? The formation of brains is complex but no wizardry. From my perspective the more important question is, how much of the brain’s complexity is needed for solid general intelligence. Considering how much capability is already achieved by rather simplistic mathematical models, the amount of groundbreaking discoveries to reach this level is seemingly much lower than expected, but still very high.
Yeah, LLMs are advanced autocomplete. They won't magically become sapient no matter how much training, memory, and processing you throw at them. It's just fundamentally the wrong architecture. It's like how people used to take vague, nonsense estimates of the raw processing power of the human brain and point out that we'd soon have supercomputers with more power. Well, we do, and yet none of them are sapient. The internet as a whole has orders of magnitude more processing power; why hasn't it magically become self-aware? People who don't understand this stuff pretend it's just a matter of more data and faster processing; that's not how biological neural networks operate at all.
@@ozymandias_yt I would love to be enlightened more about how it can be possible without using "general" representations. Tell me some specifics, like the technicalities of how "GI" is "possible within the laws of nature and achievable in a quite efficient way". I am not a hater of AI in any way (I specialize in ML). But as far as my knowledge goes, "AI" is nothing but ML with lines on steroids. No hate for the tech, but I'm ready to be proven wrong and will stand by my claim that AGI is still impossible, at least currently.
@@leeris19 Maybe our definitions of general intelligence aren't the same. For me, AGI is the point of human-level intelligence (reasoning, consistency, competence…). The proof of the existence of human-level intelligence is trivial, so its synthesis, to some extent, is always theoretically achievable. The concept of "general representations" isn't really present in human cognition without limitations. Example: what is a game? AGI as the ultimate clean intelligence of eternal truth is indeed impossible, because it is logically implausible. Language isn't well defined in many aspects, so no amount of data can train an AI to always give "perfect answers". To fulfill the visions of the AI revolutionaries, AGI in the form of human-like intelligence is needed, so complex tasks can be understood and executed. We can train humans to do these tasks, and an AGI should be capable of learning them with at least the same success as humans. Side note: regarding the hype, I see a typical pattern of over-correction. In the beginning of the computer revolution, AI was described as something of the near future, which was of course way too optimistic. Throughout the decades, the prognoses for AGI extended into the range of 2080-2200, which is rather pessimistic. AI companies bragging about AGI in the next few years are quite likely over-correcting their predictions again.
The problem with these LLMs is the bell-curve / probability distributions they use to determine their answers. They gather their input from the most common information, and this is clearly the basis for the learning they do. The problem with this is threefold. First, if you want excellent answers, it's just not capable of producing them. Second, as content is generated from these responses, it further dilutes the pool of exceptional content. Third, people will naturally rely on this as a crutch and get worse at producing the content on their own. And as the LLM learns from this double-diluted content, further diluting the better content, it just speeds points 1 and 2 up. Unless they find effective ways to drastically combat this, I'm fairly sure it's a doomed technology.
Really. I found an experiment where an AI forgot what it learned from a math video after it watched several TikTok shorts. The diluted information harms the cognitive ability of AI just as it does our brains.
There is a fourth issue: if the most common answer is incorrect, then you will get an incorrect answer. The LLM does not know the correct answer; it gives you the most likely answer, which is not the same thing. And a fifth issue is that it has to give you an answer, even if the likelihood of it being correct is low.
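That "most likely, not correct" distinction is easy to show in code. A toy sketch of greedy decoding over a hypothetical next-token distribution; the tokens and scores are invented for illustration (a real vocabulary has tens of thousands of entries):

```python
import math, random

# Hypothetical next-token scores after a prompt like
# "The capital of Australia is" -- made-up numbers for illustration.
logits = {"Sydney": 3.2, "Canberra": 2.9, "Melbourne": 1.1}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

probs = softmax(logits)
print(probs)  # "Sydney" gets ~54% here, despite being the wrong answer

# Greedy decoding returns the most probable token, not the true one:
print(max(probs, key=probs.get))  # -> "Sydney"

# Sampling still emits *something* even when confidence is low; there
# is no built-in "I don't know" unless the training data put one there.
print(random.choices(list(probs), weights=list(probs.values()))[0])
```

Nothing in that pipeline ever consults the truth; it only consults the distribution.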
Saying humans can learn how to drive in 30 minutes is just a blatant misunderstanding of reality. You should easily be able to see that that is blatantly false for 1 year old children, so obviously there are years of development at a minimum before people can even begin to learn how to drive. Even then, we have evolved over billions of years to interact with the world. Not taking this into account is being intentionally ignorant.
Watched this all the way through for the 2nd time. It makes even more sense 4 months down the line. And I am an AI developer about to launch my own "wrapper" application. What has been made even clearer to me is the necessity of making sure my customers understand what my application is and is not, what it can do and what it cannot do. Thank you, again, for this very excellent video!
I wasn't pro-leetcode, but leetcode is like mental gymming that improves problem solving, step by step. Kudos to you; your voice is like music to my ears.
This kinda sounds like the perspective of someone who is threatened (or feels as if he is) by the advances he is criticizing. For instance, quoting the image near the end of the video: 2015 - Self-driving in 2 years: the technology has existed since pretty much 2017; it can't be adequately deployed because most people can't afford it yet, and since few people use it, society as a whole hasn't changed fast enough to really adopt it. 2016 - Radiologists obsolete in 5y: hospitals can barely afford to function; they can't invest in deploying such sophisticated systems. But the capability exists, and it's possible to make it work just as imagined. The whole video feels like cherry picking from the lowest branches possible. It lacks depth, it doesn't seem to consider second- or third-degree consequences, and the arguments that are valid are actually very shallow and thus ill-considered. "Remember something: this is the worst this technology will ever be."
@@minhuang8848 You're super confused. AI was never necessary to replace those office jobs and the AI implementations used are no better than the infamous phone mazes that replaced customer service call centers. Customers didn't like them then and won't now and they'll never be helpful for anything but the most trivial things there should never have been a call about while hampering real problems and information from reaching the company. Those companies will sink or figure things out in time. As usual, these trends come and go with the hype. You're clearly lacking the historical perspective. I am actually an expert by the way, most of my colleagues work almost exclusively in AI (the sub-team I work in does bioinformatics, in particular statistical genetics, because frankly the AI stuff can't be trusted in the context of real medical data where our conclusions may affect the real treatment people receive).
He is right; this "too big to fail" mentality was the downfall of many companies. Ford, GM, and Chrysler were once the biggest companies in the world, but they weren't able to keep up with the times, so they are nothing compared to what they were. Kodak is an even better example, because they were ahead of the trend when it came to digital photography, but they were already too invested in brick-and-mortar stores and those stupid kiosk things people used to print their photos, so they failed. IBM was fucking huge; they also failed. What do all of these companies have in common? They were enormous in terms of their structure and hierarchies, and a given of those characteristics is having a really hard time adapting, being flexible, and innovating. The next big thing comes around, they eventually fail to keep up, and some newcomer takes their place.

They're trying to stay afloat with this AI hype, but let's be honest: is there anything meaningful that AI can do that consumers at large are willing to spend their money on? No, there isn't. In my work I see so many businesses wanting to adopt AI, and the most adamant people about it are always clueless C-level executives who have no clue how AI works or what it can do; for them it is some kind of black magic.

We are at a time where the next big step in technological advancement is nowhere to be seen. Elon with SpaceX is going after something that was already accomplished in the '60s, just with the innovation of rockets that can land themselves... If investment in that area had been constant since the inception of space exploration, we would be way past that. Taking into account all the technological advancement since the moon landing, SpaceX's accomplishments are modest in comparison... They are all going crazy trying to predict the next big thing, and the only thing they can do is hype, because the next really meaningful advancement for humanity is nowhere to be seen.

The funniest thing is that these companies are really young compared to the giants I mentioned at the beginning of my comment. Can't wait for this shit to be over; as long as companies are chasing the hype, we will be wasting the smartest people of an entire generation on something that in decades will be irrelevant. I'm not saying all this AI investment will be useless in a few decades, but it isn't going to directly change the way we live as a human species.
As someone who has been on the cutting edge of AI and neuroscience research for 20 years now: massive backpropagation-trained networks will become a thing of the past within 5-10 years. They will be seen as the compute-hungry brute-force approach to making a computer learn, after all is said and done. What's coming down the pipe are sparse predictive hierarchical behavior-learning algorithms that can be put into a machine to have it learn from scratch how to perceive the world and itself in it, and be rewarded and motivated to explore unknowns in its internal world model, which will yield curiosity and playful behavior. These will be difficult to wrangle at first, with humans controlling the reward/punish signals manually, but once they're trained to behave they will be the most resilient, robust, adaptive, and versatile machines in the history of mankind.

Judging by how compute-efficient the existing realtime learning algorithms that people have been experimenting with are, it won't be very expensive to have a pet robot that behaves like a pet, runs around and fiddles with stuff like a pet, and is self-aware and clever like a pet, and the whole thing will run on commonly available consumer hardware, like that in your laptops and phones. This same learning algorithm will be limited in its abstraction capability by the hardware it is running on. As such, it won't be difficult to scale it up to human and superhuman levels of abstraction capability, as long as the hardware it runs on has the capacity to run the algorithm in realtime (i.e., 20-30 Hz) so that it can realistically handle the dynamics of its physical self and the world around it.

Mark my words. Nobody building a massive backprop network right now is going to be glad they did in another 2-3 years. They're going to look like the dotcom bubble hype bros of the '90s, and become disgraced for being so naive in their blind faith that backprop training was the end-all be-all of machine intelligence, as if there couldn't possibly be something better, more efficient, and more useful. They just took someone else's backprop work and ran with it like it was going out of style, and it's cringey, at least to someone like me who has been watching all of this unfold from my uncommon perspective. Some people learn the hard way, I guess.
But these sparser approaches... already exist, and they're just not so shiny or hype-filled. We're essentially talking about interpolation with better sampling: Chebyshev polynomials, fast kriging, polyharmonic splines, or the more Bayesian approaches and some other things along those lines, with some sort of gradient-based performance metric, or Bayesian sampling in the Bayesian cases. It's mostly stuff that exists... but it's not cool or sexy and doesn't get people excited thinking it might be a sort of real "intelligence." There's no hype for it. But these don't have quite the capabilities you aim for; those require a significant breakthrough that might happen... or might not. Maybe next year, maybe not for a hundred or a thousand.
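For what it's worth, the "interpolation with better sampling" point has a classic runnable illustration: fitting Runge's function with equispaced versus Chebyshev nodes. This is textbook numerics, not a claim about any particular AI system:

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)  # Runge's function, a standard test
n = 15                                    # number of interpolation nodes

equispaced = np.linspace(-1, 1, n)
# Chebyshev nodes cluster toward the interval ends; that clustering is
# the "better sampling" that tames high-degree polynomial fits.
chebyshev = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))

grid = np.linspace(-1, 1, 2001)
for name, nodes in (("equispaced", equispaced), ("Chebyshev", chebyshev)):
    poly = np.polyfit(nodes, f(nodes), n - 1)  # degree-14 interpolant
    err = np.max(np.abs(np.polyval(poly, grid) - f(grid)))
    print(f"{name:10s} nodes: max error = {err:.3f}")
# Equispaced nodes blow up near the edges (the Runge phenomenon);
# Chebyshev nodes keep the error small at the same polynomial degree.
```

Same function, same degree, wildly different error, purely from where you sample.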
I love your approach:
- facts-driven
- friendly/funny, but frank
- clearly stated opinions
- open to respectful disagreement
I've gotten really into AI/LLMs lately, but we need more people with your perspective: reasonable expectations for this tech, not hype.
When I began a major in computer science in 2007 the "everybody knows" prediction at the time was that all programming would move to remote workers in third world countries and wages would trend toward $20k / year or less. Outsourcing was all over publications like Software Developer magazine. Kids were being told not to go to school for CS. But outsourcing died because of communication and quality issues. AI is nowhere near surpassing third-world developers for these 2 shortcomings.
@@kyokushinfighter78 I guess here's a way to look at it: when a hospital administrator can say, "write me a system that manages my surgery staff and patient records", and the AI fully masters that use case, then it will have full real-world intelligence and we won't need hospital administrators, lawyers, Congress or anyone else. Until then, there will still be humans designing and specing these systems.
One point that I do have a problem with is the rate of improvement; there isn't any actual data behind your rate-of-improvement claims. Anybody that's used both, especially for programming, knows really well that 3.5 to 4.0 was a far more substantial improvement than what you're giving it credit for.
While that’s true, it’s asymptotic. Eventually, the output difference between being trained on 99% of data and 100% of data on the web is next to nothing. Pretty sure anything past 90% is largely the same. Even though the progression from chatgpt 3.5 to 4o (not 4.0) was large, those gaps will eventually be smaller and smaller until we have a “perfect” gpt that gives the most correct answer available to the entire internet. Now, is that anything more than a glorified search engine? It’s up to you to decide that.
@@hanikanaan4121 what makes you think that AI was already trained on 99% of the internet? Maybe it learned on 10% and thats not speaking on how the hardware is advancing too, and the software.
@@TheManinBlack9054 Notice how I said eventually. Also, a significantly huge part of the internet is unusable, outdated, or ToS-violating information. The data they've used so far is the vast majority of the data that's usable and beneficial. Is there more to be used? Absolutely. Will it change the entire game and result in AGI or something? Pretty much a guaranteed no. Additionally, hardware doesn't actually improve the results or accuracy of the model; it just speeds up the process of training. More accurately, it takes less time to reach a "definitive" point where answers can/will be given with certainty, but the accuracy on the entire dataset will be unchanged regardless of whether you're training on an Intel Celeron processor or the strongest TPU on the market. GPT is not the way forward in the advancement of AI; it's simply the replacement for search engines. To reach the next tier of "autonomous" AI, it'll be through something different from the current progression of text-based training. I'm fairly certain that NN chess engines have shown higher levels of "creativity" and "thinking" than any currently available GPT system, be it from Anthropic, OpenAI, Google, etc.
I don't comment on YouTube videos much, but I have to give it to you: you are very articulate and you have excellent critical thinking skills. We need more of this! Personally, my takeaway over the past few years has been that, despite having a technical background, I (and my peers) could all benefit from more macro understanding (e.g., politics, economics, ...). The world doesn't make sense right now, and these "blurred lines" are a sign of the times. We will inherit the mess though, so we had better wisen up and get ahead of it.
The 99 percent thing is interesting. When you do something like linear regression, it's really easy to get to, say, 80 percent, but to improve that by even 1 percent involves crazy amounts of fine-tuning.
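That flattening is easy to reproduce with synthetic data. A minimal sketch, with an arbitrary made-up dataset and degree schedule, just to show the shape of the curve:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = np.sin(3 * x) + rng.normal(0, 0.3, 500)  # nonlinear signal plus noise

def r2(degree):
    """Fit a polynomial of the given degree and return its R^2 score."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

for d in (1, 3, 5, 9, 15):
    print(f"degree {d:2d}: R^2 = {r2(d):.3f}")
# Typical run: R^2 jumps quickly (roughly 0.6 at degree 1 to ~0.85 by
# degree 5), then barely moves; once the model explains everything but
# the noise, each extra bit of fit costs far more complexity.
```

The last percent isn't just harder, it's mostly noise, which is why chasing it takes "crazy amounts of fine-tuning".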
If we want to stay open-minded, you also have to consider that Neetcode would want students to keep pursuing CS, as that would mean continuous revenue for his platform. Overall, great video; I think you touched on a couple of great points. At the end of the day, consume information from a neutral standpoint. No one knows the future for certain; we must manage risk and hedge when given the opportunity, whether we are in a stable market or an uncertain one.
You can just add a prefrontal cortex to the AI: it will override any command to crash the car, with some hard-coded limits on acceleration/deceleration/crashing and stuff.
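In control terms that "prefrontal cortex" is just a deterministic supervisor wrapped around the model's output. A toy sketch; the command fields, limits, and the `obstacle_ahead` flag are all made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Command:
    throttle: float  # m/s^2, positive = accelerate
    brake: float     # m/s^2, positive = decelerate
    steer: float     # radians

# Hard-coded, hand-auditable envelope; nothing learned lives in here.
MAX_ACCEL, MAX_BRAKE, MAX_STEER = 3.0, 8.0, 0.5

def clamp(value, low, high):
    return max(low, min(high, value))

def supervise(ai_command: Command, obstacle_ahead: bool) -> Command:
    """Deterministic override layer between the model and the actuators."""
    if obstacle_ahead:
        # Regardless of what the model asked for: brake and straighten.
        return Command(throttle=0.0, brake=MAX_BRAKE, steer=0.0)
    return Command(
        throttle=clamp(ai_command.throttle, 0.0, MAX_ACCEL),
        brake=clamp(ai_command.brake, 0.0, MAX_BRAKE),
        steer=clamp(ai_command.steer, -MAX_STEER, MAX_STEER),
    )

# A model asking for a 2-radian swerve gets clipped to 0.5 radians:
print(supervise(Command(throttle=5.0, brake=0.0, steer=2.0), False))
```

The catch is that a reliable `obstacle_ahead` signal is itself a perception problem, so a layer like this shrinks the failure surface rather than eliminating it.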
Something common among all the tech creators on YT that I follow is that they keep saying AI isn't taking any jobs. Could it be that the shift to other professions and interests among students, driven by concerns about CS's future profitability, leads to reduced engagement with their videos, so they would want to make sure people continue watching them?
Take the LLMs for a spin for real work in areas that you're not so familiar with and you'll change your mind. It's unthinkable that this technology will not have a huge impact in many areas. You can try to focus on the stuff it's not so good at yet, and then you'll miss out on what it already delivers: real, tangible, spectacular cuts in time to figure things out, in almost any area. Think about this: there are still positions in chess that a human can do better than any chess engine. So what? Did those not revolutionize the whole field?
At my current job we had the GitHub Copilot Business(?) version for a month, to give it a try. Guess what: 90% of the generated code was calling non-existent class methods in Java, 5% didn't work or looked incorrect, and 5% generated code that worked and looked correct but had a bug in it that was really hard to detect. After this month I have no anxiety anymore about AI replacing us (btw, I turned this shit off in the end and threw it away). It was in May 2024.
10:56 this is actually false. OpenAI published a paper several years ago that explains exactly how fast AI will improve. And to summarize, we need to exponentially increase the data and compute to keep making AI better. Which means progress will slow down and OpenAI knows that it will slow down! All the hype is just marketing, designed so that investors keep giving them money. AI is almost guaranteed to get better, but it’s also almost guaranteed to slow down.
Does that mean AI improvement will slow down? OpenAI can just generate new data to train the next model. They are already doing that with synthetic data
@@nihilisticprophet6985 the models won’t necessarily slow down, but to maintain the current rate of progress, each model will have to be 10-100x more expensive than the one before. Synthetic data isn’t a silver bullet. There are many small techniques you can use to generate synthetic data, e.g., translating computer code from a common language (like Python) to a less common language (like PHP). But I don’t know how well that can scale.
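For a feel of what the scaling-law point in this thread implies, here is a rough sketch; the exponent is an assumed, illustrative value, not a figure from any specific paper:

```python
# Under a power law, loss falls as a small power of compute, so each
# equal multiplicative improvement needs an exponentially larger budget.
alpha = 0.05                          # assumed, illustrative exponent
loss = lambda compute: compute ** (-alpha)

for c in (1e3, 1e6, 1e9, 1e12):
    print(f"compute {c:.0e} -> relative loss {loss(c):.3f}")
# Every 1000x more compute multiplies the loss by the same ~0.71 factor:
# steady-feeling progress demands exponentially growing spend.
```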
Completely get your point, but I'm still blown away by the leading-edge models and how fast better ones are coming out. GPT-4 is definitely smarter than all of us across a wide range of topics, just not in specific ones. But the idea of it being the dumbest version we'll ever have definitely has me "hyped" as a young person, given the room for improvement. Great video though.
Great video! You’ve done an excellent job breaking down the hype versus reality in the AI industry. It’s refreshing to see a balanced perspective that acknowledges both the potential and limitations of current AI technology. Your historical comparisons and thought experiments really help put things into context. I have a question: Given the current rate of improvement in AI technologies and the prevalence of hype, what do you think are the most realistic applications of AI in the next five years that will have a tangible impact on everyday life?
This was such an amazing watch, thank you! My takeaway is hype is still required to an extent. Selling hope and dreams can still produce positive results - it makes us progress somehow.
You know, it is SO refreshing seeing the hype cycle finally wearing off. Especially since being a [self proclaimed] "AI experts" has pretty much translated to being an unreflected OpenAI / Elon Musk fanboy in the last couple of years. Reminds me of how all the "digital natives" were once heralded as exceptional Internet prodigies, when in fact all most of them really mastered were Snapchat, Instagram and TikTok (tech that was largely conceived and created by the previous generation) There NEEDS to be a paradigm shift, LLMs simply won't cut it in the long run
Thank you Mr. Christian Schubert. I have a direct line to Sam Altman if you'd like to enlighten him with your insights. Why the hell are you not heading a top AI research lab?!! How did you slip through the cracks?! Whoever said armchair quarterbacks can't throw? You've got a solid arm dude. Don't ever let anyone tell you that you don't know better than the coach. After all, you've got quite the view from the TV. I also am not sure if you are aware, but they are already moving beyond LLMs. The paradigm switch is already happening, but you're too blinded by your compulsive need to be a wet blanket, projecting a cynicism that implies an intelligence. It reeks of parochial insecurity. Wear it like a blanket. Use it as your pacifier. Use whatever heuristics you feel you need to use to make it through this period. 'Unreflected (the actual word would be unreflective) OpenAI / Elon Musk fanboys' certainly works. That's definitely a way you can choose to understand what's happening before your eyes.
@@__D10S__ Do you? Ask him how he defines consciousness, how he responds to the Chinese room argument, how he proves computationalism, and how he proves that all he is doing is not just a poor mimicry of humans. And also what he thinks about SNNs.
My gut tells me this hype is in part attributable to public misunderstanding; I’m merely a hobbyist programmer, so really I’m a part of said public. I think there is a conflation of statistical data mashing (relevant xkcd: 1838) with what has been popularised in Hollywood and other mainstream media, which has sparked people’s imagination in the wrong direction.
I just wanted to comment because I work in that field. I see the limits of large language models on a daily basis and you are correct in many ways. The last 10% is 50% of the work, and that still applies today. I just wanted to let anyone reading this far into the comments know that LLMs are not the solve-it-all, and we still don't have a solution for ever-expanding, self-learning compute, or AGI as it's called. I don't know if or when it may come. However, for now, we are still within reasonable limits. With all that said, LLMs are extremely useful for a specific set of cases; not all, but a lot. Cheers to the future 🍻
I worked on the early internet from the 90s until the 2000s. This hype all looks and feels SO familiar, and almost no technology ever gets used in the way it was originally intended. So great video. However, as a tech teacher with 500 unique teen students a year, I would argue one point: human nature is changing.
14:06 Yann LeCun with the receipts LMFAO once I learned "A.I." was probability, statistics, and linear algebra in a trench coat, I realized it was a bubble.
@@lordseidon9 “Planes are bullshit, they are just applied thermodynamics.” The real argument should be about the complexity of the models that use these disciplines, so we can distinguish between what is solidly persisted competence and what is just a useful artefact of the data. Better AI models have a structural integrity beyond their NNs (like hard and soft beliefs and policies), so they cannot go from logical reasoning to total nonsense in just one unfortunate transition.
Hype is never going to stop. But neither is the advancement of AI. I wouldn't get too used to patting yourself on the back for being right about the difference between hype and reality, because you won't be for long.
Well, unless there is fraud that discourages the investment (Theranos level, people-dying level), which is unlikely but not impossible. It will also depend on who adopts first: enterprise? Retail? And which products, because right now it's not profitable in the long term, just churning through cash. And that's all without talking about the energy problem, which makes it unlikely to scale.
Well, this didn't change my mind, but only because I was already there before the hype hit full swing. LLMs are not AI. The researchers are trying to recreate human minds without any understanding of what a human mind actually does or how it really works. It's the wrong approach. If you want real AI, then you need to think in a completely different way. Personally, I'm glad they're going about it the wrong way, because it means I don't have to fear a robot uprising. That would truly be the end of humanity in a very thorough way. Of course, I still have to fear some evil person putting NN-based tech together with an armed drone and either controlling or mostly destroying humanity, but that's a concern for 5 to 10 years down the road and not right now.
I think the same will happen to AI that happened to the Internet: 1. massive hype 2. a bubble starts forming, it gets used for a lot of things, by far most of them nonsensical 3. the bubble gets bigger 4. the bubble bursts, many companies go bankrupt and the economy at large is in a downturn 5. companies start to figure out actual use cases. After that, all bets are off, because it depends on what we get out of step 5. It could even go into a loop afterwards.
Generative machine learning is absolutely insane. I agree that most publicly available or commonly used models are not that crazy, but the fundamentals are there and generative learning should be hyped. Honestly, the hype isn't enough, I promise. Source: I am a postgrad researcher studying AI and founder of a collaborative intelligence platform at one of the top research institutes in the world.
It would be helpful to include any time frame assumptions at all in the video. Ofc current models suck. But what about in 5 or 10 years from now? That’s really not far away at all
80 papers in 2 years, isn't that like a paper every 11 days? For sure, what kind of science is that? That man deserves ALL the Nobel Prizes for making humanity reach a technological breakthrough every 11 days.
Modern day "research", especially in the field of AI, is another Pandora box that would deserve its own video. He might as well have given the number of podcasts he had gone to and it would've still been a better vanity metric. That said, he probably expected most people who read that tweet to be either fools or deeply unfamiliar with how academia works... and that assumption would be correct
I mean, he probably wasn't sole author considering his function. Most likely he got to put his name on there for guiding the team doing the actual research, which, don't get me wrong, can be a valuable task on its own
@@FluhanFauci DingDingDing - this is how most any kind of research works: (Please mentally change the pronouns to your own preference ;-) The senior researcher guides the work of the entire group, and his name appears somewhere in the list of authors of every paper the group puts out. If he contributed in some critical way, he’d be lead author, if he was fairly hands on but wasn’t directly involved in the work, he might be somewhere in the middle. If he just told someone “hey, you should check this out” he’d be toward the bottom, and if he had nothing much to do with it but it came out of his lab, he’d be the last author. So 80 papers or whatever is how many the entire team, possibly hundreds of people, put out.
Not sure what the exact argument of this entire video is.. is it an "AI is just hype" thing, or an "AI is useless and will never replace devs" thing? It did not really change my mind about anything, it confused me for sure. I think I'm just too dumb to watch this kind of video.
AI is over hyped and will not do most of the things that people say it will do because of the limitations of LLMs. It will have an impact but not the one that's promised to us and which is faked by companies to encourage investment. What's the point of pointing this out? Well for one this video is an antidote to hype which is sorely needed especially now as companies attempt to implement AI into everything.
This did not start in 2022 lol. At least go back to GPT-3; hype was really starting to build then. ChatGPT was more available to the general population, and hype within tech circles definitely got bigger, but it did not start with ChatGPT. "Computers are just incompatible with the level of intelligence that many people are expecting them to have": if you are saying computers are just fundamentally incompatible, then I strongly disagree. If you are referring to current-gen models, then yeah. ALSO, do not just compare timelines of release lol, compare compute over timelines. GPT-4o, from what I know, is a smaller model than GPT-4 (obviously: it is much cheaper and faster with lower latency), so OAI has made some sort of algorithmic improvement or trained on more data to get more performance out of smaller models. BUT, since GPT-4, every model that has been released has been in the similar domain of GPT-4-level compute and cost to train. We know the main factor in the intelligence of these models is effective compute, which is highly dependent on raw compute. The ONLY model I know of trained with a decent amount of compute over GPT-4 is Claude 3.5 Opus, which is yet to be released; however, Anthropic said it was trained with 4x the compute of Claude 3 Opus (which is GPT-4 class and trained with approximately GPT-4-level compute). For context, GPT-4 was trained with 6x the compute of GPT-3.5, and GPT-3.5 with 12x the compute of GPT-3. This is the story of raw compute with GPT-series models, but it gives us a window into the scales of compute needed for any form of improvement. To the people who do not have access to the training runs and current stages of models, bigger intelligence gains are not incremental over a time period; they come on a per-model-release basis. The last real intelligence gain was GPT-4; every other model released since then is some optimisation of that class of models, or just straight up meant to be in this class of model. As I said, the only model I know of with a compute scale-up over GPT-4 is Claude 3.5 Opus, at 4x the compute of current GPT-4-class models like Claude 3 Opus. And also Claude 3.5 Sonnet is 6x the compute of Claude 3 Sonnet. Claude 3 Sonnet was a high-end GPT-3.5-class model; the compute jump put it at high-end GPT-4 class, but not enough to go really beyond GPT-4-class models. That is what Claude 3.5 Opus is going to do. But, again, it will be a smaller gap than between GPT-3.5 and GPT-4.
Lol wut. Can’t recreate a basic component from your shit shilling website?
Please do "this will change your mind for quantum computing" plzzzz
I hate this hype economy
you mean capitalism?
@@3breze757 capitalism go brrrr, until no one can afford it.
@@juanmacias5922 what's great about socialism is that nobody affording anything becomes the default
AGI will be man's last invention
we've been in an attention economy for a long time now
"If you wish to make an apple pie from scratch, you must first invent the universe" ... pure gold
Don't eat from that tree, Adam! I need those apples for my apple pie.
That bit from Carl Sagan was used in the first stanza of Glorious Dawn by Melodysheep :) Lovely
That moment really got me. Very on point.
@@VolodymyrPankov nobody gives a damn
@@takashimurakami3560 about you, and about the fact that you are a senseless biological non-entity.
Bro just solved the "should i drop out of college" problem in O(1) time complexity
Computer Engineering
Hilarious
Good one.
😂
I read in a book somewhere that to help with decisions you should ask: "Imagine you are 90 years old right now, looking back on this decision. Would you regret it?"
Someone asked me (SE) if I was worried about my job being automated. I told them, no, because if a machine could do my job, then it could also make a better version of itself. That's the singularity.
That’s simply not true lol. Coming from an engineer
You're forgetting it's sentient and improves on its own.
@@LuckyLucky-pc3tz no I get that part. I'm saying there won't be any jobs after that.
@@LuckyLucky-pc3tz whoosh
Its still taking your job though 😂
As a network engineer, I remember as far back as 2017 there was hype about automation systems and Software-defined networks taking over and that network engineers would be obsolete.
7 years later and I've only seen the requirements increase for engineers, just that the technology stack has been changing. So not falling for that hype again.
Just get good at what you do and explore how to use AI and new tech in your workflow.
This is actually different. I use AI daily and experience it as the same leap as from no search engine to AltaVista and Google. The lesson should be: hype can be a false alarm, but it can also signal something truly remarkable. AI is that. It may not be what the propaganda wants you to believe, but that doesn't mean it's not huge. It is huge. And you know that because you can really experience it yourself. It's going to be really fast this time; I'm starting to get fed up with just simple search tools. We are talking factors of improvement.
"Google has a shit ton of money and they are not giving it to their employees" delivered blankly is peak dystopian humor.
Celebrity CEO's job is marketing.
Their prime role is acting.
Acting the part of a CEO.
Musk
@@jamad-y7m " She seems to have gone from an intern to a Senior Product Manager at Tesla...then a couple jumps later and suddenly she works as an AI researcher...or from other sources as the VP of Applied AI and partnerships...with no educational background in AI. She must be extremely brilliant, but I am just dumbfounded on how quickly she went up the ladder. "
@@jamad-y7m the interview on Bloomberg TV was worth a watch
Steve Jobs pioneered the role of the marketing CEO. Everyone forgets he couldn't even write one line of code.
When Devin first came out, there was a job opening for a developer on their official website.
Bro what is your educational qualification?
@@mrkike7343 Ain't got time for shit
@@mrkike7343 Why does his educational qualification have anything to do with what he said lmao
lol
lmao that's funny
Such a good video!
I did physics in undergrad and comp. neuroscience in grad school and am now working with a mix of researchers from various disciplines, broadly around cognitive science and evolution of human/primate cognition. I was never upset because my job was acutely endangered but it's been quite disheartening to listen to so many really very intelligent people get so on board with this hype to the point where it was ridiculous.
I am very much on board with automating as much as we can. But the degree to which people were willing to believe it can do *anything* and would call more realistic assessments "unnecessarily negative" was crazy.
When people who study neuroscience and infant development and (should) know how different human learning is from what LLMs do stand there and tell you that AI can replicate human-level cognition and will "soon" be able to randomly learn and synthesize knowledge with just a few more iterations of the models, I really am at a loss for words.
So what I understood is true? We are miles away from granularly, "randomly" learning AI models?
I have a master's in computer science focusing mainly on machine learning, and I was furious back when GPT-3 launched and everyone (even people I respect in the field: Fireship, Nick Chapsas, etc.) was losing their minds about AI taking people's jobs in a year or so. It's good to see that now most of the knowledgeable people are actively settling the discussion and pointing out the false claims. Glad that you do that also!
Well people are finally realizing that the improvement in LLMs is really stagnating. The jump between gpt2 and 3.5 was huge, the jump between gpt3.5 and 4 was impressive but not of the same magnitude. And right now we don’t see that big of a difference anymore between each new generation so the hype is finally slowing down!
@@the3Dandres I'm pretty sure that comes down to the decelerating growth in the model parameters. GPT-3 already used thousands of GPUs and months of training time.
@@seer775 Partially, but also consider that a doubling in parameters does not result in a doubling of output quality, so this curve is really flattening out. After all, it's still just matrix operations, not real intelligence.
@@the3Dandres depends on what task you're measuring on. Complex tasks will see more ROI with more compute, but you will eventually hit a ceiling in terms of how much compute you can provide.
AI is a great toy for making memes and writing blogs. It's entirely useless for 99% of things I need it for. I doubt it will ever be more than a novelty. I doubt FSD or humanoid bots will be a thing in the next 5 years.
The 99% applies very hard to the software development part.
Dev codes, it fails, Dev fixes.
AI codes, it fails, Dev has to go through an entire codebase of generated code to fix …
If you push the idea to its paradox: In a world where AI produces most of the code, fewer and fewer Devs would be able to fix errors which would increase the cost of failures => failure rate goes down, failure cost goes up.
So true
is this the same for legacy code, now that almost everyone is programming in high-level languages (including C)?
That's not how it works. I've produced big chunks of code with LLMs, and I know the code as if I wrote it myself. The flow is rather: Dev+AI code, it fails, Dev+AI fix.
@@adadaprout how much real world experience do you have? Could I look at your github?
Sorry if this comes across as hostile, but I am genuinely curious.
@@TragicGFuel Yes I code in real world, in what other world can we code than the real world ?
“They sure as hell aren’t gonna give that to the employees”
Google employees make fucking bank, while I am over here busting my ass making minimum wage. That statement was just inaccurate.
@nightshade8958 it's not really inaccurate, I mean, they do make bank, but that doesn't even compare to the numbers Google is supposedly worth.
@@mofumofutenngoku except that it was accurate. Those Google employees make bank compared to your broke ass, but in comparison to what the company makes, that's barely a quarter.
The Employees working for Google, Apple, Amazon, and Facebook are already making $200k/Year salaries.
4:35
"Fake it till you make" it the philosophy a lot of start up companies follow because its the only way to get financial support to gain the resources they need to propel themselves
This is commonly seen in the Bay Area at places such as Stanford, Berkeley, SF. .. etc
Every startup has to take some big risks, and that phrase "fake it till you make it" is usually spoken by the already successful. The whole story behind anybody's success is often way more complicated.
That being said, taking risk is not bad for business nor the customer when handled properly.
And the philosophy of Elon😂
Almost like California is a cancer
@@AL-kb3cb this right here is what matters most, and the most practical explanation of our current situation
It's the No Man's Sky marketing strat lmao
Hype your crap then actually finish building and delivering it a decade later after the promised date.
They already delivered everything after like a year or two.
They have made up for the overpromises like five times already.
If AI is anything like that, then we are getting superintelligent AGI in 2030
NMS eventually had the bare minimum to technically meet the claims that were made pre-release that could be verified or easily quantified (e.g. now it has multiplayer!! Wow!!), and then paid a popular content creator to make an hour-long video essay to hype up that update as way more than it was. Just like the other guy said, that kind of thing is already happening with AI!
Haha so true! I'm stealing that 🤣
@@vitriolicAmaranth You are crazy if you think they haven't overdelivered on _everything_ they promised by now.
I still think the game is kinda boring, but they've done everything and more to support it, and to even surpass the overblown initial expectations.
If that's the kinda game you're looking for, NMS is it. No asterisks. It's just it.
You should much rather call it the Cyberpunk strat.
It's a good game now, but it is _currently_ in the state that it should have released in. (even slightly behind in certain areas)
4:36 “Google is a monopoly. They have so much money they don’t know what to do with it. They sure as hell aren’t going to give it to their employees...” I subbed after that. 😂
Many now claim AI is overrated and all that, but I'm pretty sure the hype was just a collective misunderstanding of what this 'AI' actually is. I think people expected 'AI' to suddenly change all paradigms; they were misled by the media and youtubers and the explosion of 'AI apps' (which are mostly based on GPT API calls). The Distributed System example is a great one.
The same happens with every new technology. When blockchain came out, the first waves were the grifters, and deep inside was the 1% of truly useful projects (it's a tracking technology). I grew up in Romania in the early '90s. We went from one single bank (CEC) to a decentralized system. We had many scams and national pyramid schemes (main cause: education). The same is just happening with AI, only much faster, and I feel that because of the disconnect from information and the amount of distractions, only a small percentage of the world's population really understands what AI brings, while 99% are witnessing the grifting part of the AI hype. The big problem I see is actually a tsunami coming if you really understand the level of AI compute we currently have.
For example, the price per token is down 99% in just two years for top AI models. Not to mention there are plenty of open-source models that work relatively well on any machine at this point.
AI is a tool. Anyone can just build and create now at a lower cost than ever. In order to generate software, the most important part is to actually have the idea and be able to communicate it. You are not good at communication? No worries, AI can help you with that too. You want AI to make a plan for you? Done. Anyone can create almost anything at this point... The more context about yourself, your dreams, and your current skills and assets, the more it can help you achieve your goals. As long as you use information properly, feeding the results back in, the outcomes can be phenomenal for any individual from any corner of the earth.
I feel the same about the 2nd coming of J.C.
it's not a collective misunderstanding, but plain stupidity. People love to overhype things they don't understand. Not only that, but on top of it people love to act as if they actually know what they are hyping for.
I hate it when kids on youtube do that. Like that time when some minecrafter made "an AI" from command blocks, when in reality it was just a pathfinding algorithm with an overcomplicated memorization process. But I can't really hate those kids, because I know their fathers are doing exactly the same with Elon Musk's persona.
Hyping over AI being overhyped would be the next step for sure, so people can jump on the wagon of diminishing AI hype and start something actually useful 😂
It was a disinformation campaign led by people who stood to profit, actively supported by academics who wanted to get in on the action, and disseminated by a willfully uncritical media. The general public never stood a chance.
The end of the world: an AI so powerful it can't be controlled, not even on a remote island
My dad and I are both software engineers, and our recent conversations have mostly been around AI, because my dad's company started to replace part of the team he's on with an LLM. The anxiety people have been having about AI is sometimes soul-crushing when it comes to your loved ones, so I have to keep constantly reminding myself that the content creators/companies intentionally hyping LLMs up make money out of doing that, and most of them never cared about where the tech is bringing us. I am glad to see this vid.
It's great that this video helped someone! I do think there is a lot of hype around AI, but I also think this hype is not baseless, as there is a base of technological innovation there, similar to the internet in the 90s. There was a bubble and it was initially overhyped, but looking back now, some promises were certainly overoptimistic on a one-year horizon, yet right on the money over 20 years.
@@TheManinBlack9054 That's also true, imagination is still the foundation of innovation, keep learning and try staying educated!
I hope they will cry and seethe when they find out their automated "employees" don't generate any value and they don't have enough workers lmao.
3 hours of work vs 1 hour debug
VS
1 minute of work, 6 hours of debugging, scrapping the code and doing the above.
There is no way any LLM today is replacing any software engineer...
@@DanielFenandes It can make software engineers more efficient and axe a lot of support roles
AI cannot replace software engineers…for now. “if it bleeds, it can die”. If it recognizes error in code, it will eventually develop its own fully functional system. Give it time.
It's amazing that people can even say such a thing. All you have to do is use it for a few hours and you know: this is a complete game changer. It's not just a toy, you can feel it. From managing my Linux system, to scripting Google spreadsheets, to extracting summary info about entire fields of knowledge. And then I've done some experiments that show that Gemini has been neutered so as not to be too useful. It's just this big of a deal, you can feel it. Anyone who laughs this away is completely in denial.
If Boeing let AI start coding their controls software, a lot more planes would end up in the ocean.
You don’t think that when autopilot was a prototype, planes malfunctioned and crashed? Of course they did. Same with any new tech. The beginning is always shittier than what comes after, which improves on it. If everyone had a perfectionist mindset, we wouldn’t have gotten anywhere past the Stone Age. The point of technology is to innovate. That’s why you have new versions of the same product that do it better than the last. Sure, a few planes need to be sacrificed, but that’s how you improve the AI. The scary thing is that AI is improving at such a rapid pace that it now gives people an opportunity to build anything of their wildest dreams, and in 10 years this technology will be extremely powerful. You clearly don’t see the big picture. But then again, I’m sure when the Wright Brothers first invented the airplane that just flew a few feet, people said it would never touch the skies, and a century later you have thousands of them in the air. You’re only limited by your imagination.
@@the-ironclad It's because AI creates open-ended functions. You can't possibly know how it will react to every single set of inputs, while for human-designed functions you can know for sure how they will behave with enough analysis. In response to input the AI wasn't trained on, it can do some really weird things, because it wasn't trained to behave normally in that domain. If it can distort its response in untrained domains to perform better in trained domains, it will. All that uncertainty means that all an engineer can tell you when asked "will it work?" is "hopefully, since it's worked thus far on our finite amount of training data".
You don't think automated cars are the same? They are at the point where they drive more reliably than most humans, so why would an airplane be any different?
I wonder if AI would tell Boeing to make a new plane model by putting oversized engines on a frame not designed for them, and compensate with controls software.
Nice polemic. They don't let AI code such software, but they will go and pick its brains to come up with good solutions in much much less time. Once they get the hang of it it will improve everything. Just start using an LLM for a few days and you'll see what it does: you are still in charge but everything gets so much easier that you can actually focus on the important things. Sure, the hypers do not emphasize that, but that's what constitutes the real payload of these AI tools.
To anyone young and curious how to take advantage, my wisdom is this. Of the Californians who got rich during the gold rush, a few found gold, but the shops that sold shovels made far more bang for the buck. The weed industry: it's not the growers pulling in fat stacks, it's the lights and water techs that service the warehouses. In hedge funds, it's the dude who finds the new formula for others to exploit.
What I'm trying to say is that it's probably less risky to sell to the people taking the risk than it is to incur the risk yourself. Make honest money off their ambition, and as long as it's honest, you'll be good.
Thank you.
Thank you.
So how does this apply to AI?
@watcheronly71 learn how to to draw hands and feet ;)
@@watcheronly71 NVIDIA is making a killing off the AI hype.
As a ML Engineer, I hate the conversations we’re having around AI and ML and all the hype.
ML is a good tool for a subset of problems, but it’s not the endgame of CS. At work, we do our best to find a deterministic solution first before we use ML.
People think this tech should be used to think for them instead.
Being an ML Engineer is not enough to make you some kind of authority on the subject; you're basically a data scientist, not a scientist from OpenAI or Anthropic.
As another "ml engineer", i would say that all human functions will be done better by machines, except those involving empathy, connection, or responsibility.
if i have a robot that costs 5,000 and it has super human intelligence and types 200 WPM, why would i hire a human?
i would basically only hire humans for front desk receptionist
@@AL-kb3cb I don’t think I’m an “authority,” but given that I understand and develop the algos and the systems that utilize them, and often implement papers into code, I am educated enough to discern BS from reality in my field.
But on a side note:
I have also done research in the field, which makes me think I am capable, but likely not competitive for research roles.
@@RoboticsOdyssey A good book to read is called “The Myth of Artificial Intelligence”.
It talks about the fundamental reasons ML algorithms likely can’t completely replace humans even in cognition.
And ML still hallucinates, gaslights, lies, or refuses to cooperate at times. You should know enough about your problem-solution set, so you can see if a "solution" is dead wrong, without wasting time, money, or causing a disaster.
One of the best takes I've seen on the topic, awesomely articulated.
Weirdly, this is the most comforting video about AI I've seen in the last months...
While AI hype can be misleading, real advancements are undeniable. DeepMind's AlphaFold, for example, revolutionized biology by accurately predicting protein structures. As a software engineer, I use multi-agent systems to automate tasks efficiently. These tools show AI's practical benefits beyond exaggerated claims.
Totally agree. AlphaFold is a perfect example of how amazing it is and how about these AI Chatbots that you can talk to that are indistinguishable from a human? That’s “Her” from the 2014 sci fi movie that’s sci fact in 2024 and this rate of improvement is exponential.
@@seva4411 These chatbots are really cool and cute and also, extremely useless. I mean, they have their uses, but it's almost decorative. They don't substitute anyone's work. At best, they can serve as useful learning tools.
@@LuisManuelLealDias They will soon serve as companions and mentors in many ways and will be far from just decorations.
@@seva4411 you're right, but AlphaFold has nothing to do with what people nowadays refer to as AI/predecessors of AGI. It's "simple" machine learning, as has existed for a while. And it is for sure not threatening to replace half the workforce tomorrow.
ya I agree, it's happened before, with VR hype; I haven't touched my VR set in months
yeah the internet is just hype. so is indoor plumbing. and electricity.
@@RoboticsOdyssey I think it's not that it's 'just hype' but rather that it's a technological Gartner hype cycle, with specific stages, and that we could be heading for the trough of disillusionment soon, but after about 5 years it will be the plateau of productivity. 👍
@@OnePlanetOneTribe that's true, but AI has been through 60 years of those cycles since McCarthy formalized common sense in 1958.
AI is a lot bigger than LLMs.
Things like alphafold can create industries.
No one really knows whats about to happen.
@@RoboticsOdyssey You sound like you're 15 years old and you missed the internet bubble pop of 2000. The internet WAS hype at one point. Many people who saw the hype and foresaw the pop made a pretty penny out of it. A few of them didn't need to work a whole day for the rest of their lives.
I remember what telly sounded like in the late 90s. It was something like this: "Blah blah blah the internet this, blah blah the internet that, blah blah blah blah the internet patatee, blah blah blah blah the internet patatah." Replace "the internet" with "AI" and that's where we are today.
@@chesshooligan1282 I cannot help but observe that the internet was the *one* success where the hype feeds into the notion that there's something to these other fads, whether they be AI, quantum computing, fusion, cryptocurrencies ... I'm pretty sure I'm missing others. What's more, while the internet itself ended up finding its place in the world, there were nonetheless a *lot* of companies that rode the hype bubble, and ended up collapsing rather than growing.
It's been really hard to stay motivated with my school work as a CSE student. My life for the past ten years has been in shambles, and learning programming genuinely gave me happiness I have not felt since I was a child. I want to program for a living, I want to make software that people use on a daily basis. I don't want AI to do everything for me and/or completely replace me, with programming becoming just a hobby that has no chance of competing against AI systems. (I also hate AI for art; it kinda kills the whole purpose of it, but that's a different story.) I agree with all your points, as I have been following Gary Marcus and Yann LeCun for a while now, but the chance that we're wrong, and AI does invalidate all my hard work right now, creeps into my brain while I'm trying to learn. I'm hoping either the bubble bursts or the tech just takes off; this middle area of not knowing is honestly killing me.
You got this. I'm also learning coding and just started in late 2022, but AI has only helped me learn programming quicker; it's not an enemy. It's just another tool available in your arsenal. You will still be competing against other humans for jobs, all of whom will probably use AI to different degrees. But AI being able to do everything itself is absolutely not happening for a very long time. It's really just regurgitating publicly available code; the more you ask for unique instructions that are not published on the internet somewhere, the more your margin for error shoots way up. Try asking it to code in any brand-new version of a framework or SDK that just came out this year - it literally can't, because it has only been trained on the previous versions.
Good luck out there, it's crazy times but if you work on your craft as much as possible and leverage AI to your advantage you can probably find something. I'm constantly looking at what other people with
I'm in the camp that current "AI" in no shape will invalidate any meaningful work you will do as a software engineer. Sure, it may be able to help generate some basic boilerplate, and maybe very basic CRUD apps, but that's it. Anything that is remotely complicated AI will NEVER be able to do, or at least this current version. Try doing any project with moderate scale; AI completely and utterly fails. And it will remain this way for the immediate future because I personally believe these LLMs are already near their limits.
Don't give up. There's always something new around the corner.
As an AI dev working for one of the big companies, I can tell you that we will always need more good programmers and engineers. AI is at the top of the hype cycle; if you look at the Gartner hype cycle chart, we're at the peak of inflated expectations, and it's going to crash at some point soon.
It's impressive how different one person can be to another, while still being similar.
I wanted to learn programming since I was a child. I eventually wanted to program for a living, making software for everyone to use, but I DO want AI to replace most humans and do everything for us and even completely replace me even if programming becomes just a hobby because of that (for the last 20 years it's been just a hobby anyway since I haven't got any job related to computers so far, lol).
And I also LOVE AI art. It makes art creation accessible for me and everyone who has always had something to express without the means to do it. I believe that whoever dislikes AI art is denying the true purpose of art (which is to communicate something) in order to exclusively elevate the technical side of art, because that's the only thing they can do, so they protect it to death.
AI, if properly integrated, will put an end to all the bad things that humans have brought into the world. It will be the greatest change we will see in centuries.
Been waiting for it since child. Developing AI is the reason why I wanted to learn to code, actually.
Hype or not, it is The thing.
We must keep trying to achieve it.
At (almost) all costs.
___
BTW, I don't think AI will be properly integrated into the world. In the end, we will just have a partial dystopia thanks to it being misused by corps and gov, but I'm just one person, so I can't do anything about it but hope.
One of the most sane videos I have seen about AI. I talk about these things with my friends, but your arguments are rigorous and reasonable. And again, the Oscar goes to the hype economy. I think people are still trying to figure out a way to live with all this communication. We are being bombarded with connection, but we mostly use it to fool the others on the line so we come out better off in the equation, which breaks us all. In the end we are all hungry for security and trust.
There is an elephant in the room that they just don't want to talk about. If AI tools became broadly used, the amount of electrical power needed would be beyond the capability of our current electric infrastructure. I sure don't see fusion being available around the corner either.
if anyone needs more of anything and has the money to pay for it, then the supply will expand to meet the demand. The current electricity supply matches the current electrical demand. I don’t know if we are running out of resources to build infrastructure; if so, you’re right, but the notion that AI is inviable because of the current electrical capacity of society goes against the laws of supply and demand
@@iubankz7020 But in the case of the power grid this process spans across several decades. Also with new environmental regulations and anti-nuclear sentiments it's unclear whether such an expansion is feasible at all.
What? The electricity used isn't as much as you think. Weird how this rumor circulated.
@@cortster12 This is not a rumor. I think you should do a search related to AI energy use.
@@brianh9358
I did, and it's overblown. It's basically as energy-intensive per output as playing a particularly GPU-intensive game.
The ending with Carl Sagan punchline is 🤌🏾
How’s it exactly related to the topic?
@@kSergio471 AI (and AGI) is fundamentally based upon inputs to create something. It isn't from "scratch"/nothing. AGI is essentially trying to recreate human intelligence. It comes from a source, that being the humans providing the algorithm and the inputs. Always remember that something appearing to come from nothing can seem a bit odd, since that something is probably based on something else (not nothing). Ignorance of the something it actually came from (while it's perceived to come from nothing) can lead to hype.
This is what I took away from the ending. As with anything, you take from it whatever you want. Even if it's nothing.
@@1337erBoards thanks 👍 However, it seems a bit odd to me: even if AI is capped by what's possible for the human brain, this cap is still something unbelievable
@@LT-dn7mt this amount of power is required to _train_ a model simulating the human brain?
@@1337erBoards thanks for the breakdown. It wasn’t immediately obvious to me.
with the current economic conditions, I personally believe "AI" is just a unicorn that major tech companies want to ride, and it is in their best interest to entice as many investors as possible to join them for the ride.
Nope. AI delivers almost ideal workers that will instantly replace many pesky employees. It's real and it works. As for the real workers: the LLM is like the colleague they always wanted: figure out how to do this, make me a sketch of how to do that, etc. It needs just a few instructions, and does a thorough job.
Unrelated but I’ve just gotten my first SWE job, looking at apartments to move into, and you’ve inspired me to find something more humble haha. You must have some serious bags but still living simple, good stuff man
Congrats man!
happy for you man! good luck!
:D
Lmaoo I’m in the same boat rn I just spent 2k furnishing my new apartment 😭
Always be prepared to get laid off, or work on something people actually want to use.
10:43 I'm a dev in my 50s. I've seen a lot of major and minor hypes, and this analysis is spot on. AI will be a huge and hopefully not a dangerous thing. But it's at least 10 years from now, probably much longer. Big tech knows this is a dead end when several nuclear power plants are needed to get the intelligence of a 4-year-old into a computer. We need AI to be 99.9% right in everything it's doing before it can be really useful. Right now we get thrilled if AI is spot on half of the time. That is not useful in real life as a "workforce".
Have you actually used it? I mean, are you creating things not yet pre-planned? New areas? I've taken these things for a spin and boy, I'm programming stuff now in a fraction of the time it cost me before. Of course, if I were doing stuff I'm already really familiar with, then it's just a sort of double-checking facility. But in new areas... And of course you cannot use the code as-is, but it sure as hell helps A LOT to see what it comes up with. The efficiency boost is simply extraordinary. Literally out of the ordinary. I have never experienced anything like this. It WILL change a lot of disciplines for good, yes, indeed especially software engineering. If you think otherwise, you either tried something trivial, or you're simply in denial.
10:26 - That problem is actively being worked on. It's a software issue. There's several directions, but the one I like the most is:
Once trained, the model ain't fixed. It can re-learn and overwrite what it learned in the past, allowing it to update tiny chunks of its knowledge instead of having to retrain its whole brain.
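A minimal sketch of that idea, assuming a PyTorch-style setup (my own illustration of parameter-efficient updating, not any lab's actual method): freeze most of the network and re-train only a small piece.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 10))

for p in model.parameters():       # freeze everything...
    p.requires_grad = False
for p in model[-1].parameters():   # ...except the last layer
    p.requires_grad = True

opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                       lr=1e-3)

x = torch.randn(32, 128)                    # a batch of new examples
y = torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()                                  # only the unfrozen chunk moves
```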
you are talking on point. Glad that someone addressed this AI hype
AI is drinking its own Kool-Aid, since its training data contains AI output
Yeah, never thought about that. AI output will outnumber human output, so 80% of the input to AI will be from AI. True garbage in, garbage out, garbage in.
AI effectiveness decreases sharply as it cannibalizes itself.
it's like a thirsty guy in a desert drinking his own pee
I'm sure AI would be able to detect AI-generated content and ignore it. Or maybe that's something only humans can do so far.
@@sillymesilly It's usually GIGO, but we've finally managed to invent GOGI...
I discussed this with my professor. We also talked about how the change from GPT-3 to GPT-4 involved doubling the number of neurons in the neural network, which begs the question: if you are doubling the number of neurons, are you doubling the performance? It seems like there is not a doubling in performance. This means there are probably severely diminishing returns as hardware tries to catch up with the exponentially increasing computational demands of iterative neural network improvement.
GPT-3.5 had 175B parameters; GPT-4 reportedly has 1.5T. That's roughly an 8x increase in parameters, but there is nowhere near an 8x increase in performance.
Also, just a couple of days ago, Meta released Llama 3.1 with 405B parameters, which is comparable to GPT-4. So just infinitely throwing more parameters at a model doesn't really help much.
@@janek4913 it can even reduce performance (e.g. if you have too little meaningful data).
@@janek4913 so what does it really improve? Like, on what tasks?
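A hedged back-of-the-envelope on the parameter counts quoted a few comments up (the GPT-4 figure is itself only a rumor, and the exponent is an illustrative value of roughly the order reported in published scaling-law papers, not an exact constant):

```python
# Under a parameter power law, an ~8.6x jump in parameter count buys only
# a modest drop in loss, consistent with "nowhere near 8x better".
alpha_N = 0.076                 # assumed parameter-scaling exponent
ratio = 1.5e12 / 175e9          # quoted GPT-4 vs GPT-3.5 parameter counts
print(f"{ratio:.1f}x params -> loss x {ratio ** -alpha_N:.2f}")
# ~8.6x the parameters -> loss multiplied by ~0.85, i.e. roughly a 15% gain.
```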
Scaling isn't the only avenue AI researchers are pursuing; it's the hack that unlocked somewhat capable language models. Now that we have them, it's given researchers something tangible to study and build on, which has led to chain of thought, tree of thought, mixture of experts, retrieval-augmented generation, multimodal models, data distillation, etc.
Scaling will be pursued as far as economics and data will allow, but it's not the only game in town. I also expect the recent trend of more capable smaller models to continue.
Even 3% is very good, by the way. Along the way it picked up concepts in language, math, and coding, concepts that other models spent many years acquiring. So yes, it is huge. If you want GPT-4o to double in performance, that's scary, because you and I may not know how many higher levels of applications or concepts it knows. Of course they are building more complicated models with end-to-end functionality which, just like GPT-4o, pick up language, math, and coding along the way. It will keep rising, and we still haven't seen the plateau of transformer-based models: although 100 trillion parameters seems like overfitting, the architecture can still be improved for higher end-to-end functionality. You must not care too much about the diminishing returns, because it's also about dataset, architecture complexity, and functionality, not just parameter count. These are hyperparameters, mostly tuned statistically to find optimal values.
"Human nature doesn't change", debatable.
It evolves due to environment and culture.
It "changes". It really just repeats itself. It is cyclical, but that's just me.
Everything is in constant change, including homo sapiens.
Finally some clear thinking! Well done!
I think you're being generous when you say that there have been many times when people (i.e. for-profit corporations) have blurred the lines between hype and fraud. If the manufacturer of a machine tool claims its new product is the first to achieve milling tolerances below some value x and customers buy it on that basis, only to discover that the actual tolerances it can achieve are nowhere close to the claims, we would not say the manufacturer "blurred the lines between hype and fraud". What allows software companies to get away with this?
I like how he allocates all his cycles to content. His room is still the same as when he started neetcode.
he probably has millions of dollars and is sleeping in what looks like a college dorm room
@@stephpain I heard this is just a studio to keep the aesthetic consistent.
in one video his cammera moved about 2 degrees to the left and you could see some gold bars stacked up to the ceiling
lol this is like saying Zuckerberg is still wearing Gap sweaters, how modest he is, while he is building a 1,400-acre bunker in Hawaii.
All his cycles? Lol
If you don't know anything about AI (It's not really AI though), it will look like magic. But as you unwrap its intricacies, you'll realize that AGI can still be classified as "impossible".
"intricacies"
It could be possible that making AGI out of the transformer architecture is impossible (at the moment I would say it is even very likely), but I think it is not really possible that AGI is impossible as a whole. General intelligence is possible within the laws of nature and it is achievable in a quite efficient way. The human brain represents a system with many functions that are not wanted for AGI (so it is more complex) and still absolutely possible. Even in the worst case where scientists need to mimic the functionality of the brain very closely, which would take us at least many decades and huge amounts of resources, AGI would technically still be possible.
On the other hand, for the case of AGI being impossible, there would need to be something so inherently unique to biological brains that it is categorically impossible to mimic or replicate. What process would that be? The formation of brains is complex, but it's no wizardry.
From my perspective the more important question is how much of the brain's complexity is needed for solid general intelligence. Considering how much capability is already achieved by rather simplistic mathematical models, the number of groundbreaking discoveries needed to reach this level is seemingly much lower than expected, but still very high.
Yeah, LLMs are advanced autocomplete. They won't magically become sapient no matter how much training, memory, and processing you throw at them.
It’s just fundamentally the wrong architecture.
It's like how people used to take these vague, nonsense estimates of the raw processing power of the human brain and point out that we'll soon have supercomputers with more power.
Well, we do, and yet none of them are sapient.
The internet as a whole has orders of magnitude more processing power, why hasn’t it magically become self aware?
People who don’t understand this stuff pretend it’s just a matter of more data, faster processing, that’s not how biological neural networks operate at all.
@@ozymandias_yt I will love to be enlightened more about how it can be possible without using "general" representations. Tell me some specific ones, like the technicalities of how "GI" is possible within the laws of nature and it is achievable in a quite efficient way". I am not a hater of AI in any way (I specialize in ML). But as far as my knowledge goes, "AI" is nothing but ML with lines on steroids. No hate for tech but I'm ready to be proven wrong and will stand on my claim that AGI is still impossible, atleast currently.
@@leeris19 Maybe our definitions of general intelligence aren't the same. For me, AGI is the point of human-level intelligence (reasoning, consistency, competence…). The proof of the existence of human-level intelligence is trivial, and its synthesis is therefore, to some extent, always theoretically achievable. The concept of "general representations" isn't really present in human cognition without limitations. Example: what is a game? AGI as the ultimate clean intelligence of eternal truth is indeed impossible, because it is logically implausible. Language isn't well defined in many aspects, so no amount of data can train an AI to always give "perfect answers".
To fulfill the visions of the AI revolutionaries, AGI in the form of human-like intelligence is needed, so complex tasks can be understood and executed. We can train humans to do these tasks, and an AGI should be capable of learning them with at least the same success as humans.
Side note: Regarding the hype, I see a typical pattern of overcorrection. In the beginning of the computer revolution, AI was described as something of the near future, which was of course way too optimistic. Throughout the decades, the prognoses for AGI extended into the range of 2080-2200, which is rather pessimistic. AI companies bragging about AGI in the next few years are quite likely overcorrecting their predictions again.
16:24 Samir, you’re breaking the car!
please samir, listen to me samir please
listen to my calls 😡
The problem with these LLMs is the bell-curve / probability distributions they use to determine their answers. They are gathering their input from the most common information; this is clearly the basis for the learning they do. The problem with this is threefold. First, if you want excellent answers, they're just not capable of producing them. Second, as content generated from these responses spreads, it further dilutes the pool of exceptional content. Third, people will naturally rely on this as a crutch and get worse at producing the content on their own. And as the LLM learns from this double-diluted content, further diluting the better content, points 1 and 2 will just speed that process up.
Unless they find effective ways to drastically combat this I'm fairly sure it's a doomed technology.
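That dilution loop is easy to caricature in a few lines. A toy sketch, entirely my own construction (nothing from the video): fit a simple model to data, generate synthetic data from the fit, retrain on the synthetic data, repeat. With finite samples, the tails of the distribution, i.e. the exceptional content, tend to vanish first:

```python
# Toy "model collapse" loop: refit a Gaussian to samples drawn from the
# previous fit. The estimated spread (sigma) tends to shrink on average
# generation after generation - the rare tails disappear first.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                              # the original "human" data
for gen in range(12):
    samples = rng.normal(mu, sigma, size=50)      # train on the current pool
    mu, sigma = samples.mean(), samples.std()     # model = fitted Gaussian
    print(f"generation {gen}: sigma = {sigma:.3f}")
# sigma performs a downward-drifting random walk: each generation keeps
# the common center and loses the tails, as points 1-3 above predict.
```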
Really. I found an experiment where an AI forgot what it learned from a math video after it watched several TikTok shorts. Diluted information harms the cognitive ability of an AI just as it does for our brains.
there is a fourth issue - if the most common answer is incorrect then you will get an incorrect answer. The LLM does not know the correct answer, it gives you the most likely answer - which is not the same thing. And a fifth issue is that it has to give you an answer, even if the likelihood of it being correct is low.
@@zoeherriot that's just an extreme case of the issue of non-excellent sources
@@aeroslythe6881 which... is still an issue.
@@zoeherriot You’re right. In fact there’s a sixth issue…
Saying humans can learn how to drive in 30 minutes is just a blatant misunderstanding of reality. You should easily be able to see that it is blatantly false for 1-year-old children, so obviously there are years of development, at a minimum, before people can even begin to learn how to drive. Even then, we have evolved over billions of years to interact with the world. Not taking this into account is being intentionally ignorant.
I've had these thoughts for a while but it's great to hear it from you, glad not everyone is salivating for AI.
You didn't even mention practical limits, like power usage.
The energy demands of data centers are a potential bottleneck
NeetCode is not only good at coding, he is also good at seeing the truth~
Part two to the hypewave is when the UA-camrs come out and call it a hypewave.
Love the meta.
Watched this all the way through for the 2nd time. Makes even more sense, 4 months down the line. And I am an AI developer about to launch my own "wrapper" application. What has been made even more clear to me is the necessity of making sure my customers understand what my application is, and is not; what it can do, and what it cannot do. Thank you, again, for this very excellent video!
I just want to thank you for lessening my anxiety in these topics.
I wasn't pro-LeetCode, but LeetCode is like mental gymming that improves problem solving, step by step. Kudos to you; your voice is like music to my ears.
Leetcode et al are neurotypical gatekeeping and poverty enforcing machines.
always looked for a way to put this into words. NEVER buy into hype; engage with it as you would with anything else. Fundamentals tend to trump all
There was a lot of hoarded cash that needed to be spent. Stock buybacks weren't going to cut it.
Basically the Trump-years tax cuts. You think companies took those cuts and put the money back into their businesses?
Even worse: the stock system is entirely a hoarding system... there are trillions locked in stocks. And we wonder why we are poor. Where is all the money?
This kinda sounds like the perspective of someone who is threatened (or feels as if he is) by the advances he is criticizing.
For instance, quoting the image near the end of the video:
2015 - Self-driving in 2 years: The technology has existed since pretty much 2017; it can't be adequately deployed because most people can't afford it yet, and since few people use it, society as a whole hasn't changed fast enough to really adopt it.
2016 - Radiologists obsolete in 5y: Hospitals can barely afford to function - they can't invest in deploying such sophisticated systems. But the capability exists and it's possible to make it work just as imagined.
The whole video feels like cherry picking from the lowest branches possible. It lacks depth, it doesn't seem to consider second- or third-order consequences, and the arguments that are valid are very shallow and therefore ill-considered.
"Remember something: this is the worst this technology will ever be."
This is the comment I was looking for!
@@minhuang8848 You're super confused. AI was never necessary to replace those office jobs, and the AI implementations used are no better than the infamous phone mazes that replaced customer service call centers. Customers didn't like them then and won't now, and they'll never be helpful for anything but the most trivial things, which should never have required a call in the first place, while keeping real problems and information from reaching the company. Those companies will sink or figure things out in time. As usual, these trends come and go with the hype. You're clearly lacking the historical perspective. I am actually an expert, by the way; most of my colleagues work almost exclusively in AI (the sub-team I work in does bioinformatics, in particular statistical genetics, because frankly the AI stuff can't be trusted in the context of real medical data, where our conclusions may affect the real treatment people receive).
Very well said.
He is right; this "too big to fail" mentality was the downfall of many companies. Once, Ford, GM, and Chrysler were the biggest companies in the world, but they weren't able to keep up with the times, so they are nothing compared to what they were. Kodak is an even better example because they were ahead of the trend when it came to digital photography, but they were already too invested in brick-and-mortar stores and those stupid kiosk things that people used to print their photos, so they failed. IBM was fucking huge; they also failed.
What do all of these companies have in common? They were enormous in terms of their structure and hierarchies, and a consequence of those characteristics is having a really hard time adapting, being flexible, and innovating. The next big thing comes around, they eventually fail at keeping up, and some newcomer takes their place. They're trying to stay afloat with this AI hype, but let's be honest: is there anything meaningful that AI can do that consumers at large are willing to spend their money on? No, there isn't.
In my work I see so many businesses wanting to adopt AI, and the most adamant people about it are always clueless C-level executives who have no clue how AI works or what it can do; for them it is some kind of black magic. We are at a time when the next big step in technological advancement is nowhere to be seen. Elon with SpaceX is going after something that was already accomplished in the 60's, just with the innovation of rockets that can land themselves… If investment in that area had been constant since the inception of space exploration, we would be way past that. Taking into account all the technological advancement since the moon landing, SpaceX's accomplishments are meek in comparison…
They are all going crazy trying to predict the next big thing, and the only thing they can do is hype, because the next really meaningful advancement for humanity is nowhere to be seen. Funniest thing is, these companies are really young compared to the giants I mentioned at the beginning of my comment. Can't wait for this shit to be over; as long as companies are chasing the hype, we will be wasting the smartest people of an entire generation on something that in decades will be irrelevant. I'm not saying all this AI investment will be useless in a few decades, but it isn't going to directly change the way we live as a human species.
They sure as hell aren’t going to give it to their employees 😂😂😂😂
That would not generate revenue, it's not surprising
As someone who has been on the cutting edge of AI and neuroscience research for 20 years now: massive backpropagation-trained networks will become a thing of the past within 5-10 years. They will be seen as the compute-hungry brute-force approach to making a computer learn, after all is said and done. What's coming down the pipe are sparse predictive hierarchical behavior-learning algorithms that can be put into a machine to have it learn from scratch how to perceive the world and itself in it, and be rewarded and motivated to explore unknowns in its internal world model - which will yield curiosity and playful behavior. These will be difficult to wrangle at first, with humans controlling the reward/punish signals manually, but once they're trained to behave they will be the most resilient, robust, adaptive, and versatile machines in the history of mankind. Judging by how compute-efficient the existing realtime learning algorithms that people have been experimenting with are, it won't be very expensive to have a pet robot that behaves like a pet, runs around and fiddles with stuff like a pet, and is self-aware and clever like a pet, and the whole thing will run on commonly available consumer hardware - like what you have in your laptops and phones. This same learning algorithm will be limited in its abstraction capability by the hardware it is running on. As such, it won't be difficult to scale it up to human and superhuman levels of abstraction capability, as long as the hardware it is running on has the capacity to run the algorithm in real time (i.e., 20-30 Hz) so that it can realistically handle the dynamics of its physical self and the world around it. Mark my words.
Nobody building a massive backprop network right now is going to be glad they did in another 2-3 years. They're going to look like the dotcom bubble hype bros of the 90s, and become disgraced for being so naive in their blind faith that backprop-training was the end-all be-all of machine intelligence, like there couldn't possibly be something better, more efficient and useful. They just took someone else's backprop work and ran with it like it was going out of style, and it's cringey, at least to someone like me who has been watching all of this unfold from my uncommon perspective. Some people learn the hard way, I guess.
Awesome comment, brother
That's a good point
But these sparser approaches... already exist and just aren't so shiny or hype-filled. We're essentially talking about interpolation with better sampling: Chebyshev polynomials, fast kriging, polyharmonic splines, or the more Bayesian approaches, and some other things along those lines, with some sort of gradient-based performance metric, or Bayesian sampling in the Bayesian cases. It's mostly stuff that exists... but it's not cool or sexy and doesn't get people excited thinking it might be a sort of real "intelligence." There's no hype for it. But these don't have quite the capabilities you aim for... those require a significant breakthrough that might happen... or might not. Maybe next year, maybe not for a hundred or a thousand.
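To make that concrete, here's the flavor of classical, "unsexy" tool being talked about: a Chebyshev least-squares fit with plain numpy. Just an illustration of the approximation toolbox, not a claim it matches the capabilities the parent comment hopes for; the target function is made up for the example:

```python
# Chebyshev polynomial approximation of a smooth 1-D function.
import numpy as np
from numpy.polynomial import chebyshev as cheb

f = lambda x: np.sin(3 * x) + 0.3 * x        # example function to approximate
x = np.linspace(-1.0, 1.0, 40)               # sample points

# Least-squares fit in the Chebyshev basis (well-conditioned on [-1, 1]).
p = cheb.Chebyshev.fit(x, f(x), deg=12)

xt = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(p(xt) - f(xt)))          # max error on a fine grid
print(f"max abs error: {err:.2e}")           # tiny for smooth targets
```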
finally somebody talked about this. thank you !!
Thank you for taking a lot of the things I've said and thought over the last 2 years and putting them together so well.
I love your approach:
- facts driven
- friendly/funny, but frank
- clearly stated opinion
- open to respectful disagreement
I've gotten really into AI/LLMs lately, but we need more people with your perspective - reasonable expectations for this tech, not hype.
When I began a major in computer science in 2007, the "everybody knows" prediction at the time was that all programming would move to remote workers in third-world countries and wages would trend toward $20k / year or less. Outsourcing was all over publications like Software Developer magazine. Kids were being told not to go to school for CS. But outsourcing died because of communication and quality issues. AI is nowhere near surpassing third-world developers on those same two shortcomings.
You'll see.
@@kyokushinfighter78 I guess here's a way to look at it: when a hospital administrator can say, "write me a system that manages my surgery staff and patient records", and the AI fully masters that use case, then it will have full real-world intelligence and we won't need hospital administrators, lawyers, Congress or anyone else. Until then, there will still be humans designing and specing these systems.
@@kyokushinfighter78 I agree, and very soon.
@jwoya I also believe that AI plus human may well always be better than AI alone regardless of how smart it gets.
This is an incredible explanation. Thank you for staying true to your word and not caving to the haters!
one point that I do have a problem with is the rate of improvement; there isn't any actual data behind your ROI claim. Anybody that's used both, especially for programming, knows really well that 3.5 to 4.0 is a far more substantial improvement than what you're giving it credit for.
While that’s true, it’s asymptotic. Eventually, the output difference between being trained on 99% of data and 100% of data on the web is next to nothing. Pretty sure anything past 90% is largely the same. Even though the progression from chatgpt 3.5 to 4o (not 4.0) was large, those gaps will eventually be smaller and smaller until we have a “perfect” gpt that gives the most correct answer available to the entire internet. Now, is that anything more than a glorified search engine? It’s up to you to decide that.
@@hanikanaan4121 what makes you think that AI was already trained on 99% of the internet? Maybe it learned on 10%, and that's not even speaking of how the hardware is advancing too, and the software.
Another problem is the assumption that AI started in 2022. We have developed AI since the 70s. We have more data than one line between two points.
@@TheManinBlack9054 notice how I said eventually. Also, a huge part of the internet is unusable, outdated, or ToS-violating information. The data they've used so far is the vast majority of the data that's usable and beneficial. Is there more to be used? Absolutely. Will it change the entire game and result in AGI or something? Pretty much a guaranteed no.
Additionally, hardware doesn’t actually improve the results or accuracy of the model, it just speeds up the process of training. More accurately, it requires less data to reach a “definitive” point where answers can/will be given with certainty, but the accuracy on the entire dataset will be unchanged regardless of whether you’re training on an intel celeron processor or the strongest TPU on the market.
GPT is not the way forward in advancement of AI, it’s simply the replacement for search engines. To reach the next tier of “autonomous” AI, it’ll be through something different from the current progression of text based training. I’m fairly certain that NN chess engines have shown higher levels of “creativity” and “thinking” than any currently available GPT system, be it from Anthropic, OpenAI, Google, etc.
It doesn't matter if AI can or cannot replace engineers. They are still going to be fired, and the software engineers remaining will be doing triple the work, working every weekend to compensate for the fired ones. Yes, they will do it because they will be driven by the fear of being fired and replaced by another engineer. If AI can actually replace jobs it's just a bonus, it's actually not necessary.
I don't comment on UA-cam videos much, but I have to give it to you: you are very articulate and you have excellent critical thinking skills. We need more of this!
Personally, my takeaway over the past few years has been that, despite having a technical background, I (and my peers) could all benefit from more macro understanding (e.g., politics, economics, ...). The world doesn't make sense right now, and these "blurred lines" are a sign of the times. We will inherit the mess, though, so we had better wisen up and get ahead of it.
really like your approach and explanation, better than many well recognized experts
“There’s a new virus running around” “It’s as old as human history”
The 99 percent thing is interesting. When you do something like linear regression it's really easy to get to, say, 80 percent, but improving that by even 1 percent involves crazy amounts of fine-tuning.
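A quick sketch of that effect, with made-up data: polynomial fits of increasing degree on noisy samples. R^2 climbs fast at first and then crawls, because the remaining variance is mostly noise you can't explain:

```python
# Diminishing returns in curve fitting: more capacity buys less and less.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.size)  # signal + noise

for deg in (1, 3, 5, 9, 15):
    coeffs = np.polyfit(x, y, deg)                 # least-squares polynomial
    resid = y - np.polyval(coeffs, x)
    r2 = 1 - resid.var() / y.var()                 # fraction of variance explained
    print(f"degree {deg:2d}: R^2 = {r2:.3f}")
# Typical output jumps from near 0 to roughly 0.8 in the first steps,
# then barely moves no matter how much extra capacity you add.
```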
ChatGPT didn't come out of nowhere in 2022. The GPT models behind it had already been around for a couple of years at that point.
If we want to stay open minded, you also have to consider Neetcode would want students to keep pursing CS as that would mean continuous revenue for his platform. Overall, great video - I think you touched a couple great points. At the end of the day, consume information from a neutral stand point. No one knows the future for certain, we must manage risk and hedge when given the opportunity whether we are in a stable market or an uncertain one.
you can just add a prefrontal cortex to the AI. It will override any command to crash the car: some hard-coded limits on acceleration/deceleration/crashing and stuff.
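Tongue in cheek, but the "hard-coded limits" half is a real pattern: a deterministic safety layer wrapped around whatever the learned policy outputs. A toy sketch; every name and number in it is made up for illustration:

```python
# Toy safety override: clamp a learned driving policy's commands to
# engineer-set hard limits, regardless of what the model asked for.
from dataclasses import dataclass

@dataclass
class Command:
    accel: float   # m/s^2; positive = throttle, negative = brake
    steer: float   # steering angle in degrees

MAX_ACCEL = 3.0    # hard limits, written by engineers, never learned
MAX_BRAKE = -8.0
MAX_STEER = 30.0

def safety_override(cmd: Command) -> Command:
    """Deterministically clamp the model's output to safe bounds."""
    return Command(
        accel=min(MAX_ACCEL, max(MAX_BRAKE, cmd.accel)),
        steer=min(MAX_STEER, max(-MAX_STEER, cmd.steer)),
    )

# The model demands something insane; the "prefrontal cortex" wins.
print(safety_override(Command(accel=-42.0, steer=95.0)))
# -> Command(accel=-8.0, steer=30.0)
```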
Something that is common among all the tech creators on YT that I follow: they keep saying that AI isn't taking any jobs. Could it be that the shift to other professions and interests among students, driven by concerns about CS's future profitability, leads to reduced engagement with their videos, so they want to make sure people continue watching them?
Take the LLMs for a spin for real work in areas that you're not so familiar with and you'll change your mind. It's unthinkable that this technology will not have a huge impact in many areas. You can try to focus on the stuff it's not so good at yet, and then you'll miss out on what it already delivers: real, tangible, spectacular cuts in time to figure things out, in almost any area. Think about this: there are still positions in chess that a human can do better than any chess engine. So what? Did those not revolutionize the whole field?
After watching this vid still not sure what AI overpromised and underdelivered.
Devin
8:00 But Tesla has been a complete failure? It hasn't made any profit. What are you talking about?
from 10:23 to 14:13, this was probably a mind-changing experience for people who don't major in engineering. clean video and explanation bro. thanks
AI is still progressing. Fast. I can’t imagine people not using AI daily in ten years. It will be ubiquitous.
at my current job we had the GitHub Copilot Business(?) version for a month, to give it a try. guess what: 90% of the generated code was calling nonexistent class methods in Java, 5% didn't work or looked incorrect, and 5% generated code that worked and looked correct but had a bug in it that was really hard to detect. after that month I have no anxiety anymore about AI replacing us (btw I turned this shit off in the end and threw it away). it was in May 2024.
10:56 this is actually false. OpenAI published a paper several years ago that explains exactly how fast AI will improve. To summarize: we need to exponentially increase the data and compute to keep making AI better. Which means progress will slow down, and OpenAI knows it will slow down! All the hype is just marketing, designed so that investors keep giving them money.
AI is almost guaranteed to get better, but it’s also almost guaranteed to slow down.
Does that mean AI improvement will slow down? OpenAI can just generate new data to train the next model. They are already doing that with synthetic data
@@nihilisticprophet6985 the models won’t necessarily slow down, but to maintain the current rate of progress, each model will have to be 10-100x more expensive than the one before.
Synthetic data isn't a silver bullet. There are many small techniques you can use to generate synthetic data - e.g., translating computer code from a common language (like Python) to a less common language (like PHP). But I don't know how well that can scale.
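To put a rough number on "exponentially more compute": the scaling-laws literature fits compute-to-loss curves with a power law; the exponent below is only an illustrative ballpark, not a figure from this thread:

```latex
% Power-law scaling of loss with compute (illustrative exponent):
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05 .
% To halve the loss, find the compute multiplier k:
k^{-\alpha_C} = \tfrac{1}{2}
\;\Longrightarrow\;
k = 2^{1/\alpha_C} = 2^{20} \approx 10^{6} .
```

In other words, on those assumptions a single halving of the loss costs roughly a million times more compute, which is why "exponential increase" is the right summary of the paper.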
Completely get your point, but I'm still blown away by the leading edge models, and how fast better ones are coming out. GPT-4 is definitely smarter than all of us in a wide range of topics, but not specific ones. But the idea of it being the dumbest version definitely has me "hyped" as a young person given the room for improvement. Great video though.
Great video! You’ve done an excellent job breaking down the hype versus reality in the AI industry. It’s refreshing to see a balanced perspective that acknowledges both the potential and limitations of current AI technology. Your historical comparisons and thought experiments really help put things into context.
I have a question: Given the current rate of improvement in AI technologies and the prevalence of hype, what do you think are the most realistic applications of AI in the next five years that will have a tangible impact on everyday life?
This was such an amazing watch, thank you! My takeaway is hype is still required to an extent. Selling hope and dreams can still produce positive results - it makes us progress somehow.
You know, it is SO refreshing seeing the hype cycle finally wearing off.
Especially since being a [self-proclaimed] "AI expert" has pretty much translated to being an unreflected OpenAI / Elon Musk fanboy in the last couple of years.
Reminds me of how all the "digital natives" were once heralded as exceptional Internet prodigies, when in fact all most of them really mastered were Snapchat, Instagram and TikTok (tech that was largely conceived and created by the previous generation)
There NEEDS to be a paradigm shift, LLMs simply won't cut it in the long run
Thank you Mr. Christian Schubert. I have a direct line to Sam Altman if you'd like to enlighten him with your insights. Why the hell are you not heading a top AI research lab?!! How did you slip through the cracks?! Whoever said armchair quarterbacks can't throw? You've got a solid arm dude. Don't ever let anyone tell you that you don't know better than the coach. After all, you've got quite the view from the TV.
I also am not sure if you are aware, but they are already moving beyond LLMs. The paradigm switch is already happening, but you're too blinded by your compulsive need to be a wet blanket, projecting a cynicism that implies an intelligence. It reeks of parochial insecurity. Wear it like a blanket. Use it as your pacifier.
Use whatever heuristics you feel you need to use to make it through this period. 'Unreflected (the actual word would be unreflective) OpenAI / Elon Musk fanboys' certainly works. That's definitely a way you can choose to understand what's happening before your eyes.
@@__D10S__ Well, in your defense, you've got one thing right. That should've been "unreflective". My phone apparently thought otherwise.
@@__D10S__
Do you? Ask him how he defines consciousness, how he responds to the Chinese room argument, how he proves computationalism, and how he proves that all he is doing is not just a poor mimicry of humans. And also what he thinks about SNNs.
NeetCode is slowly becoming my favorite tech person on youtube
My gut tells me this hype is in part attributable to public misunderstanding; I'm merely a hobbyist programmer, so really I'm a part of said public. I think there is a conflation of statistical data mashing (relevant xkcd: 1838) with what has been popularized in Hollywood and other mainstream media, which has sparked people's imagination in the wrong direction.
I just wanted to comment because I work in that field. I see the limits of large language models on a daily basis and you are correct in many ways.
The last 10% is 50% of the work and that still applies today.
I just wanted to let anyone reading this far into the comments know that LLMs are not a solve-it-all, and we still don't have a solution for an ever-expanding self-learning system - or AGI, as it's called. I don't know if or when we will, but it may come. However, for now, we are still within reasonable limits. With all that said, LLMs are extremely useful for a specific set of cases - not all, but a lot. Cheers to the future 🍻
I worked on the early internet from the 90s until the 2000's. This hype all looks and feels SO familiar, and almost no technology ever gets used in the way it was originally intended. So, great video. However, as a tech teacher with 500 unique teen students a year, I would argue one point: human nature is changing.
14:06 Yann LeCun with the receipts LMFAO once I learned "A.I." was probability, statistics, and linear algebra in a trench coat, I realized it was a bubble.
You will be surprised to know that your brain runs on probability and statistics too
@@lordseidon9 "Planes are bullshit, they are just applied thermodynamics"
The real argument should be about the complexity of the models that use these disciplines, so we can distinguish between what is solidly persisted competence and what is just a useful artefact of the data. Better AI models have a structural integrity beyond their NNs (like hard and soft beliefs and policies), so they can't go from logical reasoning to total nonsense through just one unfortunate transition.
Life is probability and statistics
How do you know? 😅 Even the greatest neurosurgeon cannot answer that question completely 😅 @@lordseidon9
Once I learned human brains are just neurons firing and neurotransmitters shuttling between synapses, I realized we are moronic.
Hype is never going to stop. But neither is the advancement of AI.
I wouldn't get too used to patting yourself on the back for being right about the difference between hype and reality, because you won't be for long.
Well, unless there is fraud that discourages the investment (Theranos-level, people-dying-level), which is unlikely but not impossible. It will also depend on who adopts first - enterprise? retail? - and which products, because right now it's not profitable long-term, just churning through cash. And that's without even talking about the energy problem, which makes it unlikely to scale.
Yann LeCun disagrees
Well, this didn't change my mind, but only because I was already there before the hype hit full swing. LLMs are not AI. The researchers are trying to recreate human minds without any understanding of what a human mind actually does or how it really works. It's the wrong approach. If you want real AI, then you need to think in a completely different way. Personally, I'm glad they're going about it the wrong way, because it means I don't have to fear a robot uprising. That would truly be the end of humanity in a very thorough way. Of course, I still have to fear some evil person putting NN-based tech together with an armed drone and either controlling or mostly destroying humanity, but that's a concern for 5 to 10 years down the road and not right now.
I think the same will happen to AI what happened to the Internet:
1. massive hype
2. a bubble starts forming; it gets used for a lot of things, most of them nonsensical
3. the bubble gets bigger
4. the bubble bursts, many companies go bankrupt and the economy at large is in a downturn
5. companies start to figure out actual use cases
After that, all bets are off, because it depends on what we get out of step 5. It could even loop back afterwards.
Generative machine learning is absolutely insane. I agree that most publicly available or commonly used models are not that crazy, but the fundamentals are there, and generative learning should be hyped. Honestly, the hype isn't enough, I promise.
Source: I am a postgrad researcher studying AI and the founder of a collaborative intelligence platform at one of the top research institutes in the world.
It would be helpful to include any time frame assumptions at all in the video. Ofc current models suck. But what about in 5 or 10 years from now? That’s really not far away at all
I think he’s talking about people being fired NOW I guess
But that was the same in the 60s... you have no idea how much hype the Perceptron had.
high quality content! bro is telling the hidden truth
It’s not hidden. Most people just don’t bother looking and take things on face value
@@karlos1008 so it's hidden from most eyes
80 papers in 2 years, isn't that like a paper every 9 days? For sure, what kind of science is that? That man deserves ALL the Nobel Prizes for making humanity reach a technological breakthrough every 9 days.
Modern day "research", especially in the field of AI, is another Pandora box that would deserve its own video.
He might as well have given the number of podcasts he had gone on, and it would've still been a better vanity metric. That said, he probably expected most people who read that tweet to be either fools or deeply unfamiliar with how academia works... and that assumption would be correct
He's likely slapping his name as a contributor on every paper worked on at Meta, which can entail the work of hundreds if not thousands of researchers
I mean, he probably wasn't sole author considering his function. Most likely he got to put his name on there for guiding the team doing the actual research, which, don't get me wrong, can be a valuable task on its own
@@FluhanFauci DingDingDing - this is how most any kind of research works: (Please mentally change the pronouns to your own preference ;-) The senior researcher guides the work of the entire group, and his name appears somewhere in the list of authors of every paper the group puts out. If he contributed in some critical way, he’d be lead author, if he was fairly hands on but wasn’t directly involved in the work, he might be somewhere in the middle. If he just told someone “hey, you should check this out” he’d be toward the bottom, and if he had nothing much to do with it but it came out of his lab, he’d be the last author. So 80 papers or whatever is how many the entire team, possibly hundreds of people, put out.
I come from a university, and most researchers I know just remix papers in order to get a bonus lol
You do know "the recent transformer architecture" is older than this UA-cam video you made and your channel as a whole?
As a programmer I can confirm massive tech layoffs and productivity gains with LLMs (AI) is clearly just hype 😂💀
Not sure what the exact argument of this entire video is... is it "AI is just hype" or is it "AI is useless and will never replace devs"? It did not really change my mind about anything; it confused me, for sure. I think I'm just too dumb to watch this kind of video.
The argument is "if something is hyped, it's fake and won't change much," which is dumb by its nature; really impactful things were hyped too.
AI is over hyped and will not do most of the things that people say it will do because of the limitations of LLMs. It will have an impact but not the one that's promised to us and which is faked by companies to encourage investment. What's the point of pointing this out? Well for one this video is an antidote to hype which is sorely needed especially now as companies attempt to implement AI into everything.
@@RuthvenMurgatroyd AI is rapidly developing; the limitations only exist with current models
This did not start in 2022 lol. At least go back to GPT-3; hype was really starting to build then. ChatGPT was more available to the general population, and hype within tech circles definitely got bigger, but it did not start with ChatGPT.
"Computers are just incompatible with the level intelligence that many people are expecting them to have". If you are saying computers are just fundamentally incompatible, then I strongly disagree. If you are referring to current gen models then yeah.
ALSO, do not just compare release timelines, lol; compare compute over those timelines. GPT-4o, from what I know, is a smaller model than GPT-4 (obviously - it is much cheaper and faster, with lower latency), so OpenAI has made some sort of algorithmic improvement or trained on more data to get more performance out of smaller models. BUT, since GPT-4, every model that has been released has been in a similar domain of GPT-4-level compute and cost to train. We know the main factor in these models' intelligence is effective compute, which is highly dependent on raw compute. The ONLY model I know of trained with a decent amount of compute over GPT-4 is Claude 3.5 Opus, which is yet to be released; however, Anthropic said it was trained with 4x the compute of Claude 3 Opus (which is GPT-4-class and trained with approximately GPT-4-level compute). For context, GPT-4 was trained with 6x the compute of GPT-3.5, and GPT-3.5 was trained with 12x the compute of GPT-3. This is the story of raw compute in the GPT-series models, but it gives us a window into the scales of compute needed for any form of improvement.
To the people who do not have access to the training runs and current stages of models: bigger intelligence gains are not incremental over a time period; they come on a per-model-release basis. The last real intelligence gain was GPT-4; every model released since then is some optimisation of that class of models, or just straight up meant to be in that class. As I said, the only model I know of with a compute scale-up over GPT-4 is Claude 3.5 Opus, at 4x the compute of current GPT-4-class models like Claude 3 Opus.
And Claude 3.5 Sonnet is 6x the compute of Claude 3 Sonnet. Claude 3 Sonnet was a high-end GPT-3.5-class model; the compute jump put it at high-end GPT-4 class, but not enough to go really beyond GPT-4-class models. That is what Claude 3.5 Opus is going to do. But, again, it will be a smaller gap than between GPT-3.5 and GPT-4.
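Running those multipliers on one scale, taking GPT-3's training compute as 1x (these are the figures claimed in the comment above, not official numbers):

```python
# Cumulative training-compute multipliers, per the parent comment's claims.
gpt3   = 1.0
gpt35  = gpt3  * 12   # "GPT-3.5 was trained with 12x the compute of GPT-3"
gpt4   = gpt35 * 6    # "GPT-4 was trained with 6x the compute of GPT-3.5"
opus35 = gpt4  * 4    # "Claude 3.5 Opus ... 4x over Claude 3 Opus",
                      # with Claude 3 Opus taken as GPT-4-class

print(f"GPT-3.5: {gpt35:.0f}x, GPT-4: {gpt4:.0f}x, Claude 3.5 Opus: {opus35:.0f}x")
# -> GPT-3.5: 12x, GPT-4: 72x, Claude 3.5 Opus: 288x (relative to GPT-3)
```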
Bro is persuading us to not leave a tech career. What a legend
I knew that already and I'm happy more people notice 🙏🏼
@6:20 if you think that's why companies hired like crazy in 2021, then you have a fundamental misunderstanding of economics.