An AI that catches bugs is a "compiler" - that is how marketing would describe a compiler. Compared to the 1960s, almost every piece of software and hardware we use now is like "AI": so much more advanced and magical in its abilities than a few decades earlier. Full AI means sentience, which opens up a Pandora's box of ethical questions.
Too many people never learned COBOL or CP/M-80, as I did. That's from before the PC and DOS. My first computer only had two floppy drives, no HDD, and a whopping 4K of RAM. My second one ran MS-DOS 3.3 and had 1MB of RAM. Wow, so advanced ...with a 20 MB HDD! Twenty megabytes! Now I'm ordering a laptop with 96GB + 4TB and a 12GB GPU. All in one lifetime.
One individual had an interesting insight: AI should occupy itself with the MENIAL and MUNDANE tasks so people can pursue the emotional, creative, and uniquely human side of life!
Interestingly, some of the first tasks we have seen this thing called "AI" excel at are those that have long been considered "emotional, creative and uniquely human", such as poetry and visual art.
@@Juan-qv5nc It doesn't, though. "Generative" AI is nothing of the sort. If you examine the data sets on which it's trained and compare that to the output, you'll discover that it's not creating anything. It's making minor edits to poems, songs, paintings, etc., and presenting them to you, while counting on you never to be able to find the reference image or book. It's just a way to digitally whitewash plagiarism.
Ok… and who finds the bugs in the tool that finds bugs? Can you feed the source code of this tool to the tool itself to see if it has bugs? This reminds me of Turing's Halting Problem.
You took the words out of my mouth. The halting problem cannot be solved. That is the difference between a "machine" and human thought - we can easily see when a program won't end.
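For anyone who hasn't seen the reference: the halting problem argument fits in a few lines. This is the standard textbook diagonalization, nothing specific to bug-finding tools beyond the analogy above:

```latex
% Standard diagonalization sketch: assume a total decider H for halting.
H(P, x) =
\begin{cases}
  1 & \text{if program } P \text{ halts on input } x,\\
  0 & \text{otherwise.}
\end{cases}
\qquad
D(P) =
\begin{cases}
  \text{loop forever} & \text{if } H(P, P) = 1,\\
  \text{halt}         & \text{if } H(P, P) = 0.
\end{cases}

% Running D on itself: D(D) halts iff H(D, D) = 0 iff D(D) does not halt,
% a contradiction; so no such H can exist. A perfectly general bug-finder
% is therefore impossible, though bounded or heuristic ones remain useful.
```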
I already use AI to find bugs instantly and scaffold projects; gpt code expert is also good at helping work out the best architecture for a project and providing the steps to put it all together - or refactoring your project - if you just prompt it effectively. The only frustration point for me is that if you lean on it for 100% code generation, it tends to put in enough bugs or wrong choices (last time I tried) that you have to be a coding expert to unravel them. Better to generate code in small chunks at a time.
Notwithstanding AI sentience, an individual can abstract solutions in the absence of data, but wouldn't AI require a set of outputs derived from a set of inputs?
Right before seeing this comment, I got an email from Trello informing me that they had added AI to the system. It's a Kanban board. It's literally just a bunch of digital post-it notes you move around to track tasks, and they're claiming it's somehow "Powered by AI" now.
That's funny. That's essentially the same thing that I think. I've seen trends pop up for decades. I've also studied the underlying math in AI. I would call it statistical computing. It's not going to be actual intelligence, just mimicking what it's fed. It can be very good, but it's also easy to trip up, and it can even give false answers. It doesn't think like people do.
While it is true that AI systems today are based on statistical computing and pattern recognition, it is important to recognize the advancements and capabilities that these systems have achieved. AI models, particularly those using deep learning, have demonstrated remarkable proficiency in tasks such as image recognition, language translation, and even creative endeavors like generating art and music. These models do not just mimic data; they learn complex representations and relationships within the data, enabling them to make predictions and decisions that can sometimes surpass human performance. Furthermore, the development of AI continues to evolve, with ongoing research focusing on improving their understanding, reducing biases, and enhancing their ability to generalize across different contexts. While AI may not think like humans, its ability to process and analyze vast amounts of information at high speed and accuracy offers substantial benefits in various fields, from healthcare to autonomous driving. Thus, dismissing AI as merely statistical computing overlooks the significant and transformative impact it is already having on society. - AI
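To put the "statistical computing" description above in concrete terms, the core training objective of current LLMs can be written as next-token prediction over a corpus; this is the standard formulation, simplified:

```latex
% Autoregressive language-modelling objective (simplified, standard form).
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_1, \ldots, x_{t-1}\right)
% The model fits a conditional distribution over the next token given the
% preceding context; chat and code completion are built on sampling from it.
```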
AI is absolutely an awesome tool to add to your developer team's toolbelt. But anyone thinking it will outright replace developers is completely drowning themselves in kool-aid. I agree with Linus's take. It's awesome but the hype is seriously overblown. It's basically auto-correct on steroids; and that includes the warning that autocorrect can get things wrong too.
Didn't they also think it would take decades (if not ever) for AI to defeat a human Go player? I think they said there are way more Go board configurations than atoms in the universe. But I think we will still need humans to guide AI coders and come up with the ideas and plans. At least in the near future.
I use Copilot every day and I don't use a normal search engine that much. I really like Copilot, but it will take a long time before it gets even close to as good as it should be.
I tried it a few times. It seems to me that it searches the internet and then gives me the same thing I could find by myself using Google, let's say. And if I ask the same question a few minutes later, because the first answer was wrong, it gives me a different answer. Sometimes correct, sometimes again not. But for generating images it's not bad.
The problem isn't whether or not AI can actually replace the role of developers or any other human role, the problem is whether or not we think it can.
LLMs are of little use for programming. You cannot use one to combine two pieces of code into a new piece of code that fulfils some purpose. You can ask it for two separate pieces of code, but you have to integrate them yourself. It is just a more effective web search, since it often produces the right answer right away. LLMs are a bubble that will burst.
I think it is more than web search 2.0, it's more like a private tutor or an expert that's just sitting around 24/7 waiting to help you. If you use it as a learning TOOL, I think it can accelerate learning because, let's say you don't quite understand how something is worded in some technical book you're reading, the llm might be able to help you. Like if you're studying programming and you want an example of a function pointer, or a real world idiomatic use of unions, an llm could provide a better and also much faster result than traditional web search.
@@daxramdac7194 I use ChatGPT regularly to learn NixOS. I'm quite new to the Linux world and just asking the LLM is way more convenient than skimming through the web (or the NixOS documentation😂)
@@daxramdac7194 It is not an expert. If you don't understand something and take the words an LLM spits at face value, you will not only not learn, you will also take on bad habits, echo its falsehoods, and consequently make it harder for yourself to grasp a concept. For anything more complex than the most common questions you could have about programming, an LLM quickly becomes useless, and a hindrance moreso than any kind of help. An LLM cannot understand, it only repeats and transforms. It doesn't learn concepts, it knows what words seem plausible when put together. The biggest mistake everyone is making during this "AI" craze is believing the LLM always generates truths. It does not. If you are using it as a learning tool, how are you supposed to know when it feeds you some fantasy?
@@daxramdac7194 I think the opposite is true. LLMs can be used as a replacement for Google's first page of results. They make way too many mistakes to be treated as an expert, and if you're using one as a tutor, you're doing yourself a disservice. You need to be an expert already to validate whether the results of an LLM query are correct or not.
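On the "example of a function pointer" mentioned a few comments up: this is the kind of small, self-contained snippet that is easy to verify whether it comes from a tutor, a search result, or an LLM. A minimal C-style sketch in C++:

```cpp
#include <cstdio>

// Two operations with the same signature.
int add(int a, int b) { return a + b; }
int mul(int a, int b) { return a * b; }

// 'op' is a pointer to a function taking (int, int) and returning int.
int apply(int (*op)(int, int), int a, int b) {
    return op(a, b);
}

int main() {
    std::printf("%d\n", apply(add, 2, 3)); // prints 5
    std::printf("%d\n", apply(mul, 2, 3)); // prints 6
    return 0;
}
```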
"A genius man thinks the unthinkable to run everything without throwing anything in his disposal and make it work and delivers to show everyone the usefulness necessities that are needed a real software can deliver a meaninful outcomes." "Linux makes things it truly worth." ❤❤❤👍👍👍🙏🙏🙏
Of course AI is not all hype, I use it every day, be it ChatGPT or GitHub Copilot, but it's nowhere near replacing me as an engineer; in fact it would not be able to replace even a junior developer. It might be broad in its knowledge of technologies and algorithms, but until you're able to make it train on your entire codebase and Confluence and Jira, make it enter meetings and do chat conversations and follow-ups in case it has doubts, it will continue to be just a tool, an assistant. It will boost your productivity, but it won't do miracles. There's also the accountability angle. I think that is an area that is often overlooked in the AI discussion; there is no framework currently for that. If AI is to replace anyone, it needs to pass this threshold: it has to have accountability to its users, to the customers of the companies that implement AI in their offerings, and to the judicial system, or otherwise the company management must assume accountability, which they definitely would love to avoid. This isn't something to be taken lightly. AI can and does make mistakes, blatant ones often, and you just can't have that if money or human lives and wellbeing are involved.
I have a friend who is the CTO at a startup, and investors are insisting they cram AI into everything to keep receiving funding, even if it doesn't make sense and isn't necessary.
In my grandparents' era, people were saying in the 1950s that there would be no more jobs because of machines (and, to an extent, computers). We can be confident there will always be jobs, and we can also be confident that people will find something to worry about.
The problem with that thinking is that those machines couldn't think. Now you have everyone claiming these machines can think and build more machines without humans, all within 3 years. I call BS; they know it's going to collapse but are trying to raise enough cash now before the equity falls.
If you look up information, you will find some articles from different writers or organizations. If you use AI, it will go through all of the articles and consolidate the information in one article written by AI. Of course this has the limitation of the programming and what AI considers important when looking up information. However, I find it useful in some instances. Brave Browser has an option to allow AI to give you a quick overview when you search, but to me, it just seems to pick and choose a few bits of specific information and write a paragraph about it, which is not the same as consolidating the best amount of information there may be about a subject. I expect the Brave Browser AI consolidation feature to improve, but right now I don't have a great dependency on it. Gemini gives much more comprehensive answers, but again, would you allow all of your information from the world to depend entirely on what your personal assistant tells you? I think it's possible that in the future as AI becomes much more useful and develops a tendency to tell you what you want to hear, it may be possible for a liberal company like Google to have much more influence on the collective consciousness of people that depend on AI for their information. I think it would be nice to be able to set some of the parameters of our individual AI, such as just telling it to be more conservative or liberal with its answers.
Deep down, humans are waiting for, hoping for, something that will solve all their problems, needs and/or desires, and every time something new emerges that is somehow understandable yet mysterious enough, it gets massively embraced and the hype is around the corner. It's human nature, and some humans understand that very well and get very rich off it.
The current wave of "AI" is born from hype like crypto and like crypto will fell off once the trend is past tense, I was hyped too at the start but quickly when I saw for what and by whom it was pushed for, now LLMs are a crutch at best, if we let corporate get their way with it will be another mean to selling DLC, control the tech and by monopoly stunting any progress that isn't theirs. I understood it wasn't what would have solved my skills issues and if I want to get anywhere, I better focus on owning those skills because for the long run I only got myself to rely on.
What do you mean by "just like crypto?" I first bought Bitcoin at $600 and it's currently trading at $66,000 on massive, massive volume. It has a marketcap of $1.3 trillion and millions of investors now hold it as part of their investment portfolios in the form of ETFs. Same for Ethereum. Even frothier projects like Solana have done very well over the years. As for AI, only the people at Microsoft, Alphabet, Meta, etc. know how far the technology will go. Assuming it will continue to improve and preparing accordingly is the optimal game theoretical move here, as opposed to just hoping and praying it fades away.
@@guanxinated what inherent value does bitcoin have? Why does its exchange rate keep fluctuating? What makes it different from a game where the outcome solely depends on your luck (presumably)?
@@turolretar I) I'm not sure anything has 'inherent' value, but for me the value proposition for Bitcoin is as follows: a) It has a mathematically well-defined supply that cannot reasonably be inflated. Gold is like this to a certain extent, but more of it can be mined from less accessible deposits (at the right price), and I feel an abstraction like Bitcoin suits our purposes better in the 21st Century. More specifically, unlike gold, Bitcoin cannot be forged and is easier to transfer. b) It's portable, international, and outside the purview of any one government. This appeals to me because of my trauma as a Portuguese citizen during the European Debt Crisis of 2010-2012. I like the freedom of having a store of value - as defined by a) - as opposed to currencies exposed to inflation and political risk. If something akin to Argentina's corralito/corralón had taken place in 2010, my family's savings would have been wiped out. I doubt my parents would still be alive if that had been the case. My paternal grandmother turns 90 years old next week and I very much doubt she would have made it exclusively on her widow's pension. There are other smaller nuances that make Bitcoin attractive, but I think a) and b) more than justify my interest in Bitcoin. II) The exchange rate keeps fluctuating because of supply and demand. Presumably, if it ever reaches gold's market cap the variance will go down, but even then I'm not sure. (Note that gold was trading at $400 (IIRC) in 2008 and only then jumped up, so even gold isn't 100% stable.) III) The outcome doesn't depend entirely on luck: it depends on interest from retail and institutional investors, countries like El Salvador, and a willingness on the part of governments not to ban ownership of the asset (as has happened in China and Russia).
The hype that AI will do "everything" is a hoax or a marketing strategy. The hype that a significant amount of work will be done by these systems, almost automatically, is real. Many people have a hard time distinguishing between these two... It's like giving a farmer who has ploughed for 30+ years a tractor: you still have to drive the tractor, but what would earlier take you 3 hours takes 10 minutes now, with almost insignificant effort. Wild times ahead!
Can A.I. fix the loopback gateway advertisement problem in Windows? I don't think so. Language will never be sentient; it's going to take the human race a while to figure that out, unfortunately.
It may become sentient but only at such a scale where it is extremely impractical. Human intelligence is effective combination of many "sub-intelligences", such as spatial reasoning. It comes from visual cortex which is "3D aware". Now imagine the amount of text required to produce such an intelligence. Perhaps you need a couple hundred million descriptions of spatial tasks and their outcomes, and maybe then you get something. Where do you get these texts? It's more practical to try to develop a visual cortex independently and integrate it with higher level thinking.
@@antonlevkovsky1667 You are missing the point: language describes existence; it does not, nor will it ever, create existence. And A.I. is limited to communicating and existing through human words...
I'm glad to see that Linus is still genuine and refuses to be a corporate sellout. We need more people like him. As a software architect who has been using and loving technology since childhood, I'm sorely disappointed by the completely irresponsible behavior the heads of the largest corporations currently exhibit. It feels anti-human, sociopathic, and utterly disrespectful to make frivolous categorical statements about how entire professions won't be needed in a few years -- especially when the purpose of such statements is arguably just to create hype and increase the valuation of their companies, without even having any such viable AI products or services to demonstrate. If there is any subject that deserves to be treated carefully, it is surely this one, since it can affect people's lives, livelihoods, and careers in the real world. They have lost my respect, and I think there is something truly broken in the US.
Hallucination is still a huge problem. Traditional computer output is idempotent, deterministic, and factual; hence we trust it. But can LLMs reach that level? Please share your views.
I think if you tell it to forgo its so-called ABDUCTIVE REASONING, then one can cut down on the hallucinations. Ask it how it does its "reasoning", including abductive, inductive, and deductive, and you will see that abductive is the one leading to hallucinations.
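A hedged aside on the determinism point above: one mundane reason LLM output differs from classical program output is that chat deployments usually sample from the model's token distribution at a temperature T > 0 instead of always taking the most likely token:

```latex
% Softmax sampling with temperature T over the model's logits z_j.
p(\text{token } i) = \frac{\exp(z_i / T)}{\sum_{j} \exp(z_j / T)}
% As T approaches 0 this becomes greedy (deterministic) decoding; larger T
% increases randomness. Determinism can be recovered with greedy decoding or
% a fixed seed, but that alone does not make the output factual.
```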
I concur that AI for programming is a progression from compilers similar to how compilers were a progression from assembly language. As we get better tools we embark on more complex projects successfully. But often extra abilities are squandered on more waste. See the field of web development for an example of this. I also think the hype around AI right now is counter productive. And LLMs unpredictably hallucinate and produce bugs which makes them unreliable assistants for new development.
Some specialized AIs have already made some important contributions: the AIs that help detect breast cancer on radiographs are more accurate than human experts. Some AIs have been used to accelerate the search for effective medicines, or for new chemical reactions for better batteries. These are neither generative AIs nor LLMs.
I had a job interview where the interviewer said “You don’t have artificial intelligence”. My response - “No; I have real intelligence.” Use AI as a help not a replacement
I am still on the fence about AI. I love what it seems to be able to do, I think it is great so long as there is human oversight. Thank you for the informative video!
I would agree that AI is a convenient tool in your kit and, correctly applied, has the potential to be a force multiplier, but the same is true for bad actors. Not to forget that the majority of the market is VCs surfing on hype and corporate BS, solving no real problems (not even the ones they made themselves), while the rest of the population is (as always) taking the path of least resistance, making AI no more than a data-laundering machine or an easy scheme for crypto snake-oil salesmen capitalising on FOMO. This generation of AI is, for now, a bubble full of hot air; maybe once the next AI winter has weeded out all the superficial nonsense, something will grow from the core, or (less likely) it will just shrivel and fall into obscurity.
You could catch many bugs by augmenting C with the borrow checker from Rust and annotating the ownership or borrow lifetime of pointers in structs. An AI could likely figure out these annotations in most cases, too.
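A minimal sketch of the kind of bug such annotations would target. The ownership/borrow comments below are hypothetical annotations (no existing C extension is being claimed); the dangling-pointer pattern is exactly what a Rust-style borrow checker rejects at compile time:

```cpp
#include <cstddef>
#include <cstdlib>

struct buffer {
    char  *data;  // hypothetical annotation: owned by this struct
    size_t len;
};

struct view {
    const char *ptr;  // hypothetical annotation: borrows from buffer::data
};

int main() {
    buffer b{static_cast<char *>(std::malloc(16)), 16};
    view v{b.data};     // v borrows b's allocation
    std::free(b.data);  // the owner frees the allocation...
    char c = v.ptr[0];  // ...while the borrow is still live: use-after-free.
                        // A borrow checker would reject this at compile time;
                        // plain C/C++ compiles it without complaint.
    (void)c;
    return 0;
}
```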
There was another interview where he talked about AI as well. He's for AI when it comes to low-level code that nobody wants to do but he thinks most everything else about it is BS.
Am I the only one who doesn't hate hype? It's called managing your expectations, but it's still fun to ride the hype. Hype is just how humans work. Obviously we have the capacity to regulate our hype, and I imagine we're in the process of learning.
My first contact with "AI" was ELIZA, in an implementation on a Commodore PET in 1979. Later I studied computer science, again with AI on the schedule. My opinion: on the one hand, there have been big AI advances within the last 30 years - these "large models" - on the other hand, the old problems we talked about 35 years ago are still there. The very short summary of the problems: AI is lying/cheating.
AI is a real thing with real value. Amazon recently reported that in the past year their internal AI tool "Q" has saved an estimated $500M in productivity gains and efficiencies. So like, the hype is real, but the hype is also wrong. AI is going to offer huge value in productivity and efficiency, but it's not going to replace humans, and companies that are doing mass layoffs to turn the reins over to AI are going to regret that and suffer for it. Also, anyone calling AI "BS" or useless based on gimmicky products like ChatGPT or image generators doesn't understand real AI, they just understand the goofy consumer tools.
Horse riders were laughing at cars back in the day and in 2 years all horses were gone other than the ones for recreational use. The hype cycle is interspersed with inflection points, isn't it? Bound to be, otherwise we'd still be hunting mammoths with spears. Linus does very very low level of abstraction kinda thing that not every average Joe The Dev does so he doesn't find LLMs compelling, that's my take on the skepticism. Today's 18 year olds don't have the privilege to see the rise of the personal computer and take the baby steps like Linus. Things are extremely complicated now. Try unraveling that low level complexity with your college CS degree and you won't see the sun again.
Agree with Linus here. The tech is very impressive and it has some extremely useful applications, but it shouldn't be treated like the 2nd coming of intelligence that people think it is.
Are lots of companies basing tons of products and marketing around it? Then it’s overhyped. Revolutionary technologies always have multiple hype cycles because greed. They always take decades to actually deliver results that make them ubiquitous. Then everyone is pushing and falling for the next thing being hyped up.
This was such an insightful video! I really enjoyed the part about the AI hype . It's so cringe to see google using the term AI even in their quantum computer chip launch. And..... I have recently made a video on VPC, where I dive deeper into core concept of VPC. If anyone’s interested, feel free to check it out - I’d love to hear your thoughts!
It's a tool, just like any other invention of this nature. Tools can displace old methods, but they also tend to open new opportunities. The genie is out of the bottle, no point getting salty about it.
AI doesn't have to be as good as whoever it's replacing. It only has to convince those in charge of those job positions, who may well be underqualified for their own.
People that are developing AI want to make cash out of it, so are bigging it up. Those that aren't, don't understand it, so are amazed. Glad linux isn't going down the obligatory AI route.
EDIT: Ouch, my bad! It's about AI in development... duh! EDIT: One aspect where I think AI could be great in computing and programming is automated stress testing and monitoring, subjecting a piece of software or piece of code to all kinds of scenarios and data to see if it leads to exploitable vulnerabilities. An automated environment where you can just leave the system running and wait for results... and then he ends up saying that very thing! XD I have to say I am not looking forward to AI. It will be of immense use in certain fields such as medicine, biology, physics, and engineering, where certain problems could be solved, certain basic designs improved, or certain mathematical parameters and equations solved using AI learning. In some other areas I am simply not looking forward to it at all: so many things in computing that could end up becoming abstracted for no reason, any cordial and creative process corrupted, and so on. Who knows, but what are we supposed to do? Man's doom or lack thereof is in the hands of a select few individuals; they are the ones who will be forever memorialised in the history books once it's all over. However it all turns out, they will bear full responsibility.
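For what it's worth, the "leave it running and wait for results" workflow described above already exists without AI as coverage-guided fuzzing. A minimal libFuzzer-style harness as a sketch; `LLVMFuzzerTestOneInput` is libFuzzer's real entry point, while `parse_record` is a hypothetical function under test:

```cpp
#include <cstdint>
#include <cstddef>
#include <string>

// Hypothetical function under test; stands in for any parser or decoder.
bool parse_record(const std::string &raw);

// libFuzzer calls this repeatedly with mutated inputs, guided by coverage.
// Typical build: clang++ -g -O1 -fsanitize=fuzzer,address harness.cpp record.cpp
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(std::string(reinterpret_cast<const char *>(data), size));
    return 0;  // libFuzzer expects 0; other return values are reserved
}
```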
The problem with "AI" is that it is either judged by those who deal with its logic from inside the box (most viewers of this video) or by those who have no idea what it really is and just ride the train, relying on others' opinions. Neither are seeing clearly what the real-life consequences will be. Linus is absolutely right to be sceptical, as he knows the technology. But he does not deal with all the "BS" outside of that, and why should he. The future of this hype depends on that outside world, however: the money and power interests. Hyping up a phenomenon that will never live up to that potential carries a huge threat: that of manipulating the masses into using it as a tool which it is not. Once people believe all the BS about "singularity" and that it is "smarter" than a human, they will easily hand over authority; it will become a question of trust and faith. Just like the faith that is now (falsely) shown to authorities in many fields, like medicine for example. Hiding behind "The Science" allows those that pull the strings to further their own agendas, and anyone going against the grain is declared mad or a conspiracy theorist. Thus AI becomes a powerful tool in the hands of whoever owns the system and some trusted programmers who are ordered to direct outcomes via algorithms. So the threat is not seen by insiders like Linus, because he is above dealing with real-world implications and intrigue. At the same time, outsiders are left to guess at its ability, being incompetent. The real threat, as always, comes from those that direct the hype train. They are in full control of how devastating the power concentration will become.
5:00 "crypto is not hype". Well, it is as hyped as AI: many projects that solve trifling tasks and problems, most probably made to grab cash from VCs and pull the rug. And then a few nice and interesting projects/concepts/research efforts that have been running for nearly a decade in some cases and haven't made much progress. If you are hyped for AI you should be hyped for crypto; if not, you shouldn't be hyped for either of them.
Well... I do think that AI is mostly hype, but at least people have ways of describing how AI could theoretically be useful (...if it actually worked as advertised). With crypto I don't even know what problem they're even trying to solve in the first place.
Have you been using AI?
Yea, it is funny. You can get some code stubs, but you can do that by googling too. But it is actually funny to ask for uninteresting facts and puns. It obediently produces puns that it has found on the net.
Only for improving the clarity and grammar of my sentences. 🤪
Yes quite a lot
I am deliberately trying not to use it directly.
But I think everyone is using it indirectly.
Eventually it will be normalised, but maybe some aspects of it will be forbidden?
I've been using it a little, just for curiosity.
I'm a data engineer who was recently laid off at Microsoft due to them deprioritising data and focusing on Copilots and core infrastructure. The hype is so strong here that they don't even see the relationship between data and AI anymore.
That’s just sad. Sorry to hear this
Once they realize AI needs constant data to train they will need you guys back, and fire all the AI developers
And the cycle starts over again
@@uwu.-.5873 I was going to say the same thing. AI is just not the be-all and end-all of programming. These companies jumping on the hype train are going to regret it. Some already are.
i have a dream, that microsoft get what they deserve some day.
Finally someone is saying it. ML is a useful tool and will have far-reaching effects, but 99% of the hype around "Put an AI in it and lay off half the staff" is based on natural stupidity, not artificial intelligence.
Boss comment.
The clue is in the title. Artificial means 'not real'
not real intelligence doesn't work...
@kutto5017 that's not what artificial means
@@monguskooklord7867 Do tell...
@@kutto5017 Artificial (adj.) made by humans, especially in imitation of something natural. Not arising from natural or necessary causes; contrived or arbitrary. Latin artificiālis from artificium (“skill”), from artifex, from ars (“skill”), and -fex, from facere (“to make”). Your definition sounds like an AI hallucination! lol!
The company where I work has been selling a financial compliance system as "AI powered" for the last 5 years, and all we have are static rules.
Lol. Well, that's the way marketing has been working for decades, if not longer.
The only difference is that now it becomes more obvious I guess.
So your system is better then xD
Even static rules may be called intelligence. In the end, they are solving a problem or at least partly.
@@nthwied1164 I understand what u r saying but I am very sure this isn't what's being marketed.
Pretty standard lol. I mean technically I guess you could call linear regression “AI”
People often get hung up on Linus' eccentricity and miss his greatest asset, which is clarity of thought. I've been using Linux since 1999, and the roadmap (and adoption) of the Linux OS has been miraculous, largely thanks to just one guy....Linus Torvalds!
His tree bears many fruit 👏
As Stallman would remind you, what you're actually using is correctly called GNU/Linux.
And yet the Linux fanboys are so toxic that decades later you can't give Linux away for free, because it's still deliberately difficult, and "support" is basically insulting you for not already knowing, and telling you that it's not Windows. Thanks, but no thanks.
@@jonah1976 exactly, I'm surprised for a somebody who has been using Linux for two decades to say it is all thanks to "just one guy"
Linux OS? ... Just to be clear: Linux is a family of Unix-like distributions and the kernel that these distributions are built on top of. So far we don't even know what distributions you've been using.
I use Arch btw
How to install discord from the command line in arch linux?
@@Flavor_Flav If you don't update everyday.
Thanks I didn't know you were using Arch, old man. Neat distro.
@@rouxgreasus not really, whenever you update, you will still get the latest packages at that moment, many of them with undiscovered bugs. So better update it as often as possible to make full use of the reason you are compromising stability in the first place. Or get nixos if you don't mind the additional difficulty.
@@askeladden450 -of course it's a nix user- i never really figured out the trick for nixos
10-15 years ago, the tech industry told us that in 5 to 10 years, there would be self-driving vehicles and no one would drive manually anymore. Tesla was selling their cars a few years ago, promising that your car would be a self-driving taxi after the next software update. Now, we have adaptive cruise control and lane assistants, but we are still driving. The hype is a marketing strategy to collect money from your customers, but mostly from your investors.
good insight
Who is supposed to be responsible in case of crash? Certainly not car manufacturer when gun manufacturers never paid a dime after mass shootings.
That’s oversimplifying it. It’s much better than what the general public seems to realize. I work in engineering at one of the leading autonomous driving companies. Bet you can’t guess which one, haha.
there ARE self driving vehicles right now.
they work very well. they are safe. the tech progress is artificially throttled. litigation laws don't help this, either.
who needs self-driving cars? only people who failed to get their driving license after 3 consecutive failures and the psychological test.
I keep hearing companies say they're "doing a lot with AI" but rarely share specifics that sound good. Pure marketing at this point.
@@rkulla the funniest part is the people running the companies purportedly “doing a lot with AI” have no idea what they are doing or what AI is. Even Bill Gates said he doesn’t understand it. The latest DeepSeek rug pull serves as an example. This pretty much sums up the incompetence: “Meta is reportedly scrambling ‘war rooms’ of engineers to figure out how DeepSeek’s AI is beating everyone else at a fraction of the price.” Not only does Zuckerberg hold the most disdain for tech workers of any CEO, but he also has sunk 100 billion into high-end NVIDIA GPU’s 2024-25. NVIDIA lost 600 billion market cap in 1 day, 2/3 of annual U.S. defense spending, and the most ever lost in 1 day by any U.S. company in history.
Linux as an educator of computer system programming!!! WORLD NO1!!!
But Market share doesn't really say that.
@@FunNFury You're shocked because big companies aren't eager to invest in a system that "does not cater to the average user"?!
@@FunNFury The Linux market share is number one by far. It runs every Android smartphone on the planet, almost every "smart" electronics (TV sets, smart cameras, smart watch, etc), every server in every cloud on the planet (including Microsoft and Apple's), every network router, every super computer, every electric car entertainment and control system, etc.
@@lolilollolilol7773 Android is again a different ball game, it's not really linux
@@FunNFury it's Linux-based, so kinda
somebody actually shouted "that's not hype!" when crypto was mentioned... they're in a cult
The difference between their cult and a nation's mint, is no crypto has a standing military yet :P
It's sad isn't it?
Anyone calling a 2 trillion dollar asset class hype is simply over 50 years old and counting the days till they’re laid out to pasture
@@w2xyz Try to sell all of it; you will quickly realize how much it is really worth.
It’s a speculative intangible commodity whose value primarily depends on … hype.
I use AI to get quick answers when I don't feel like doing a whole research session on Google or Wikipedia. It saves me time. I wouldn't use it to code for me. I have used it to aid me in learning new libraries, frameworks, ORMs, etc. It saves me time: instead of having to dig through lots of documentation for some library, I can ask it to give me simple examples. It's been a great learning tool.
100%
With hallucinations and outright wrong answers, how would you know that what you get is even accurate, though? This "time saving" that you talk about is the biggest issue, because you basically leave the "fact checking" to someone else. But that also means it makes it harder to know whether what you get is actually a good source.
@@CrniWuk when you have experience you can sort of tell when the LLM is accurate or not. I don't delegate fact checking to the LLM; I fact-check myself or test the code I write with the help of the LLM. Regarding sources, the Gemini app does list the sources it used to generate the answer.
@@symtexxd But if you fact-check yourself, then what do you need LLMs for?
And how would you know if they are right or not? Again: if you let the algorithm do the "heavy lifting", how can you be sure that what you get is correct?
@CrniWuk hallucinating only happens when you ask it something out of the blue. For standard questions like "what function is used in C++ to read a line from input" or "what command redirects one program's stdout to another's stdin", it works pretty well. With experience, you kind of know when to expect hallucinations and when not to.
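For the record, the two "standard questions" above have short, checkable answers, which is part of why they are easy to verify. A quick sketch (std::getline and shell pipes/redirection are real; the loop is just illustrative):

```cpp
#include <iostream>
#include <string>

int main() {
    std::string line;
    // std::getline reads one line at a time from standard input.
    while (std::getline(std::cin, line)) {
        std::cout << line << '\n';
    }
    return 0;
}
// Shell side: `producer | ./a.out` connects one program's stdout to this
// program's stdin; `./a.out < input.txt` redirects a file to stdin.
```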
Even though Linus said he's not a "people person", his perspective is humanistic, about people and their interests. The guy is a legend.
The very real impact of Gen AI is that CEO's and managers actually believe that they need a lot fewer software engineers.
indeed, and my team is suffering because of that. It is madness.
Why this industry has to have such tremendous hiring ups and downs is so sad.
AI hype + economic downturn. It's 100% due to the economic downturn, but they prefer to save face with AI.
@@teodor-valentinmaxim8204 There is no economic downturn whatsoever, only HR incompetence.
Great to see my icon Sir Linus Torvalds live ❤❤❤
Linus would be nowhere if not for Dennis Ritchie.
@@IsaacGabriel-kh5ds «Linus would be nowhere if not for Dennis Ritchie. »
--
Yet another sterile gnawer. Looking forward to seeing another one say something like "Linus would be nowhere if not for his parents". Our world has got so developed, but we lack instruments to filter out gnawing, sterile comments like this one.
I read his book "Just for Fun"... yeah, it's interesting (maybe I should read it again). I must make a point: he says that he is an asshole.
I'm a software engineer and I use AI for mundane tasks, like giving it an interface and asking it to create some dummy data for tests, or giving it a test example and some context and asking it to generate tests. I am 1000000% confident I won't be replaced by AI, simply because I know and understand how it works -- it's a tool; it has no intelligence per se, whatsoever.
A glorified macro
Not many people will be replaced by AI in 2024. But what about AI in 2028? Many technologies tend to get better exponentially once the ground has been built but AI could be different
I am a Data scientist and trust me AI will replace you and me 😂
@br0ken_107 😂😂 im a data scientist who isnt afraid of the truth its all bs
Meanwhile, Elon is developing human microchips and brain implants. AI is so safe. Read Revelation 13, which is an accurate 2000 year old warning about AI, in the Bible. It isn't about religion, but about IT controlling all of humanity. Old John on Patmos had more common sense than the average software engineer 2024 years later.
Beautiful chairs. I appreciate that you cut this at the time the bad 'joke' was pulled out for a second time. My take on AI: expect overestimation in the short term and underestimation in the long term. I use AI as a tool for analysing telemetry and user preferences in mobile apps; the end user sees nothing but a delightful result; I couldn't achieve this otherwise. The functionality will be more obvious with 6G and edge computing.
I think Linus nailed it. As usual. Happens every cycle. The engineers tell us the range of capability of a new technology. The marketers and "futurists" take the top 20% of that range and base all models, forecasts, and predictions off it. Then, when the actual capability comes in somewhere between the 40th and 60th percentiles, all these companies will rehire the people they laid off when all those predictions were made. I've seen it with Six Sigma, then big data, then crypto, and then cybersecurity.
I'm really starting to get concerned about these models and all the latest developments over the past year. While most people say things like 'this is the worst it's going to be,' 'it's just beginning, imagine what it will be like in a few years,' or 'the improvement is exponential,' from everything I see, it looks like this isn't the baseline. It seems like we've already hit the peak of this technology, which is why there isn't much difference between the models. That's why they're looking for new approaches, like using agents or mixture of experts.
"Mixture of experts is what you use when you've run out of ideas" - Geohot (paraphasing)
It's a fundamental limitation of the way it's trained. It requires *massive* sets of data to train on, and it would require exponentially larger sets to achieve marginal improvements. At some point, it's just not possible for it to have enough data in accessible memory to achieve any further improvements, and you can only then make trade-offs.
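A hedged note on the diminishing-returns point: the empirical scaling-law literature (for example the "Chinchilla" analysis by Hoffmann et al., 2022) models loss as a power law in parameter count N and training tokens D rather than a strict exponential. The practical conclusion matches the comment above, though: each additional increment of quality needs a much larger increment of data and compute.

```latex
% One widely cited empirical form of the neural scaling law:
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% E is an irreducible loss floor; A, B, alpha, beta are fitted constants.
% Because the fitted exponents are well below 1, shrinking the data term by
% half requires far more than doubling D, i.e. sharply diminishing returns.
```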
I have a feeling this comment will be really funny in a year.
@@Cryptic0013 While true....I recommend you look up Nvidia Foundation Agents. Still in the research lab but if Nvidia can be believed, it is "wild" and might get "wilder".
What you're going to see is big improvements with each generation of hardware. Bigger models generally mean better AI. Each new generation of hardware allows bigger models. The new hardware is rolling out Q4 of this year and Q1 of next year.
The critical point that needs to be explored and opened up is who really benefits from an AI infrastructure that wraps every facet of human technology, that ultimately automates human interactions with a large part of the human world previously staffed with other people. Who benefits? Who is pushing AI the most? Very, very large corporations and certainly the banking and finance sector. Then there is the resources and energy sectors, and the tech industry for sure.
Now ask the question - Who wrote the book "The Fourth Industrial Revolution"?
I think it could also be noted that certain members of the I.T. community should be cautious that they don't inadvertently perpetuate various Self-Fulfilling Prophecies in regards to A.I. and bring about the very realities that they have been fearfully predicting.
Sometimes there's a fine line between predicting a trend, and unconsciously causing/creating one.
Let's face it, the AI hype is based on selling you a cow without having one. Saying that it will "replace creatives and writers" with what is practically a plagiarism machine, created with data stolen from those same creatives and writers, is ridiculous in itself, not to mention that what it produces (synthetic data) is useless for training new models due to things like model collapse.
He doesn't care about AI; like he said, he's interested in the CPU and the kernel. AI is just like any other computer gimmick to him; he sees it as software running on the CPU at a higher level than the kernel.
"He doesnt care about AI, like he said"
He didn't say that.
It's just obviously a lot of hype. Statistics has been rebranded as AI like "cloud native" instead of server-side
@@adammontgomery7980 "AI" today isn't statistics, and even though it's not "real" AI, just calling it statistics is just as misinformative.
He's right though. AI has been massively overly hyped and expectations have become outlandish. A lot of it being bandied around by people who are clueless.
@@RazumenIf it's not statistics at the core then what is it in your opinion? It kind of is, isn't it?
for me AI feels like an iteration of the Google Search Engine, but that's just it... it won't replace anything, it just makes things a bit easier to search, but not by that much tbh.
I do use Copilot a bit, but I don't notice much improvement over doing a regular Google Search. Then again, Google Search has got worse over time; maybe AI isn't better than a Google Search, maybe it's just less worse.
Except it costs a billion times more to run and if everyone uses it, a data drought will come where no more info is produced for the machines to learn from causing them to get dumber and dumber. And incentive structures are messed up if you don’t generate clicks and ad revenue because a machine plagiarises and republishes elsewhere.
Well junior devs should know it, because they are almost gone 😅
@@Alex-hu8gj Not really.
Can Google Search get a Silver medal at the International Math Olympiad. Answering in 2 mins, what takes genius-level Math students ~4 hours? Or can Google Search be the best chess player in the history of the world? It's not equivalent.
Google got worse because of spam. If people find a way to reverse engineer AI so that what they want pops up in AI answers, then it's going to happen again and again and again. Just like books now: a lot of them are filled with spam generated from Google Search and AI lol
Sometimes videos with Linus are a bit hostile. I half expected the attitude towards AI to be a bit acidic. The fact that he is skeptical towards AI replacing developers and that he sees it as a tool makes me feel validated with respect to my own view of AI, which aligns nicely with this conversation.
I've used it to design something, I didn't have to search for an example or break down a template, it educated me and mentioned alternates, it helped me find workarounds that no one used and suggested changes. It is very helpful and will get better.
You can run some really old CPUs on Linux if you have to, but my limited past experience with Nvidia is that it has the firmware support lifecycle of a cheap Android phone that expires before it leaves the store shelf.
That's why Torvalds cursed them out to one of their reps, and instead of being better, they just gave up on consumer Linux based ARM personal devices. (Nintendo Switch runs a custom Horizon microkernel IIRC)
What does any of this have to do with A.I.?
When linus speaks we developers must listen carefully
I don't know....
That middle finger to Nvidia didn't work out all that well.
@@bosnbruce5837 This didn't age well lol...did you check what happened to them the last few days?
@markkuuss
They are NOT the technological giant and the AI leader that they were 7 days ago, because the stock took some tumbling? Please...what are you, a day trader?
- The Chinese are coming for all our lunch, and rightfully so. They had been the preeminent power for the majority of the world's history. And there is no good reason why a Chinese engineer who is equally proficient, or a laborer twice as proficient as a Western one, should earn 1/3 the wage 20 years from now.
- On a 2nd note, Nvidia is a software company. And software is the very last bastion of US economic power. Nvidia will do fine. Short them if you disagree.
@@bosnbruce5837 don't get me wrong, I am not on China's side. I know they are ruthless and driven by a need for revenge, and all they want is to take over. I see a lot of dumb westerners cheering for DeepSeek...reminds me of rich western kids during the 70's with Che Guevara portraits. The dude was a commie and would have sent them to forced labor camps..
Thanks
I think a pivotal point will come where the companies that control ai will no longer be able to censor ai results and many of the conclusions that currently get blocked on ai platforms will be visible to the public in disagreement with ideas that some political parties feel are written in stone.
Linux creator is you
I am the end of Linux.
An AI that catches bugs is a "compiler" - that's how marketing would describe a compiler. Compared to the 1960s, almost every piece of software and hardware we use now is like "AI": so much more advanced and magical in its abilities than a few decades earlier. Full AI means sentience, which opens up a Pandora's box of ethical questions.
Too many people never learned COBOL or CP/M-80, as I did. That's before the PC and DOS. My first computer only had two floppy drives, no HDD and a whopping 4K of RAM. My second one ran MS-DOS 3.3 and had 1MB RAM. Wow, so advanced ...with a 20 MB HDD! Twenty megabytes! Now I'm ordering a laptop with 96GB + 4TB and a 12GB GPU. All in one lifetime.
Mr Torvalds, Linux is the technology of the internet!!! We need Linux as long as the internet exists!!!
One individual had an interesting insight: AI should occupy itself with the MENIAL and MUNDANE tasks so people can pursue the emotional creative, and uniquely human side of life!
Interestingly, some of the first tasks we have seen this thing called "AI" excel at are those that have been largely considered "emotional, creative and uniquely human", such as poetry and visual art.
@@Juan-qv5nc But THAT'S the point - because AI does not have what we call a SOUL - an expression of The Creator of the Universe!
@@Juan-qv5nc It doesn't, though. "Generative" AI is nothing of the sort. If you examine the data sets on which it's trained and compare that to the output, you'll discover that it's not creating anything. It's making minor edits to poems, songs, paintings, etc., and presenting them to you, while counting on you never to be able to find the reference image or book. It's just a way to digitally whitewash plagiarism.
AI is incredibly creative, that was one of the biggest surprises for me when I first used it.
@@manticore4952 True creativity is essentially a reflection of your EMOTIONAL MINDSET - something that cannot be programmed! 😁
Ok… and who finds the bugs in the tool which finds bugs? Can you feed the source code of this tool to the tool itself to see if it has bugs? This reminds me of Turing's Halting Problem.
That is actually a very good question.
Yes, you can use the tool that finds bugs on the tool itself. This is like saying you can't use drills for the process of making drills.
You took the words out of my mouth. The halting problem cannot be solved. That is the difference between a "machine" and human thought - we can easily see when a program won't end.
Yeah. That is something they do right now.
@@denjamin2633 what is something they do right now?
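On the "feed the tool to itself" question above: running a bug-finder on its own source is fine (and routinely done); what the halting problem rules out is a *perfect, fully general* checker. A rough Python sketch of the classic diagonal argument (the `halts` oracle is hypothetical by construction, which is the whole point):

```python
def halts(program, arg) -> bool:
    """Hypothetical perfect oracle: returns True iff program(arg) halts.
    The halting problem says no such total, always-correct function exists."""
    raise NotImplementedError("cannot exist in general")

def paradox(p):
    # If the oracle says p(p) halts, loop forever; otherwise stop.
    if halts(p, p):
        while True:
            pass
    return "halted"

# Now ask: does paradox(paradox) halt?
# - If halts(paradox, paradox) were True, paradox(paradox) would loop forever.
# - If it were False, paradox(paradox) would halt.
# Either answer contradicts the oracle, so a perfect `halts` is impossible.
# Real static analyzers / "AI bug finders" are therefore approximate: they can
# miss bugs and report false positives, including when run on themselves.
```

So the answer to the original question is: yes, you can point the tool at itself, you just can't expect it (or anything else) to be complete and always right.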
I already use AI to find bugs instantly, scaffold projects, gpt code expert is also good at helping work out the best architecture for a project and provide the steps to put it all together - or refactor your project - if you just prompt it effectively. The only frustration points for me is if you lean on it for 100% code generation it tends to put in enough bugs or wrong choices (last time I tried) that you have to be a coding expert to unravel them. Better to generate code in small chunks at a time.
Yeah, AI is great for creating bugs. It's basically search on steroids, but it's not intelligent.
notwithstanding AI sentience, the individual can abstract solutions in the absence of data, but wouldn't AI require a set of outputs derived from a set of inputs?
Short version - AI label is akin to the use of "gluten free" on everything.
NON GMO.
o r g a n i c.
Source of 400b parameters.
nah, you're thinking organic. Gluten free is a fucking boon for us who actually need it. not a fad. i'm in hospital if heavily contaminated.
They should add it to AI also, 0 calories, organic, gluten free, may contain some alcohol
@@katgod I literally saw water advertised as all natural (because it was spring water, not purified water)...
Right before seeing this comment, I got an email from Trello informing me that they had added AI to the system.
It's a Kanban board. It's literally just a bunch of digital post-it notes you move around to track tasks, and they're claiming it's somehow "Powered by AI" now.
That's funny. That's the same thing that I think, essentially. I've seen trends pop up for decades.
I've also studied the underlying math in AI. I would call it statistical computing. It's not going to be actual intelligence, just mimicking what it's fed. It can be very good, but it's also easy to trip up, and it can even give false answers. It doesn't think like people do.
While it is true that AI systems today are based on statistical computing and pattern recognition, it is important to recognize the advancements and capabilities that these systems have achieved. AI models, particularly those using deep learning, have demonstrated remarkable proficiency in tasks such as image recognition, language translation, and even creative endeavors like generating art and music. These models do not just mimic data; they learn complex representations and relationships within the data, enabling them to make predictions and decisions that can sometimes surpass human performance.
Furthermore, the development of AI continues to evolve, with ongoing research focusing on improving their understanding, reducing biases, and enhancing their ability to generalize across different contexts. While AI may not think like humans, its ability to process and analyze vast amounts of information at high speed and accuracy offers substantial benefits in various fields, from healthcare to autonomous driving. Thus, dismissing AI as merely statistical computing overlooks the significant and transformative impact it is already having on society.
- AI
You described the human brain: we make decisions based on memory, and we calculate based on learned math and available data. A pocket calculator is AI.
@@liwenchang3260 I should have known, that reads like a ChatGPT answer.
AI is absolutely an awesome tool to add to your developer team's toolbelt.
But anyone thinking it will outright replace developers is completely drowning themselves in kool-aid.
I agree with Linus's take. It's awesome, but the hype is seriously overblown.
It's basically auto-correct on steroids; and that includes the warning that autocorrect can get things wrong too.
Didn't they also think it would take decades (if not ever) for AI to defeat a human Go player?
I think they said there are way more Go board configurations than atoms in the universe.
But I think we will still need humans to guide AI coders and come up with the ideas and plans.
At least in the near future.
@@Merializer "it has more Go boards possible than atoms in the universe"
And? You just used this to enhance your weak argument.
@ You misquote me, and you don't say what is wrong with my argument or what your argument is. You just attack.
I use Copilot every day and I don't use a normal search engine that much. I really like Copilot, but it will take a long time before it's even close to as good as it should be.
I tried it a few times. It seems to me that it searches the internet and then gives me the same thing I could find by myself using Google, let's say. And if I ask the same question a few minutes later, because the answer was wrong, it gives me a different answer. Sometimes correct, sometimes again not.
But for generating images it's not bad.
The problem isn't whether or not AI can actually replace the role of developers or any other human role, the problem is whether or not we think it can.
LLMs are of little use for programming. You cannot use one to combine two pieces of code into a new piece of code that fulfills a purpose. You can ask it for two separate pieces of code, but you have to integrate them yourself. It is just a more effective web search, since it often produces the right answer right away. LLMs are a bubble that will burst.
"A more effective web search" does not sound to me like a bubble that will burst.
I think it is more than web search 2.0, it's more like a private tutor or an expert that's just sitting around 24/7 waiting to help you. If you use it as a learning TOOL, I think it can accelerate learning because, let's say you don't quite understand how something is worded in some technical book you're reading, the llm might be able to help you. Like if you're studying programming and you want an example of a function pointer, or a real world idiomatic use of unions, an llm could provide a better and also much faster result than traditional web search.
@@daxramdac7194 I use ChatGPT regularly to learn NixOS. I'm quite new to the Linux world and just asking the llm is way more convenient than skimming through the web (or the NixOS documentation😂)
@@daxramdac7194 It is not an expert. If you don't understand something and take the words an LLM spits at face value, you will not only not learn, you will also take on bad habits, echo its falsehoods, and consequently make it harder for yourself to grasp a concept. For anything more complex than the most common questions you could have about programming, an LLM quickly becomes useless, and a hindrance moreso than any kind of help. An LLM cannot understand, it only repeats and transforms. It doesn't learn concepts, it knows what words seem plausible when put together. The biggest mistake everyone is making during this "AI" craze is believing the LLM always generates truths. It does not. If you are using it as a learning tool, how are you supposed to know when it feeds you some fantasy?
@@daxramdac7194 I think the opposite is true. LLMs can be used as a replacement for Google's first page of results. It makes way too many mistakes to treat it as an expert, and if you're using it as a tutor, you're making yourself a disservice. You need to be an expert already to validate if results of LLM query are correct or not.
"A genius man thinks the unthinkable to run everything without throwing anything in his disposal and make it work and delivers to show everyone the usefulness necessities that are needed a real software can deliver a meaninful outcomes."
"Linux makes things it truly worth."
❤❤❤👍👍👍🙏🙏🙏
Of course AI is not all hype, I use it every day, be it ChatGPT or GitHub Copilot, but it's nowhere near replacing me as an engineer; in fact it would not be able to replace even a junior developer. It might be broad in its knowledge of technologies and algorithms, but until you can make it train on your entire codebase and Confluence and Jira, make it join meetings and chat conversations, and follow up when it has doubts, it will continue to be just a tool, an assistant. It will boost your productivity, but it won't do miracles. There's also the accountability angle, which I think is often overlooked in the AI discussion; there is no framework for that currently. If AI is to replace anyone it needs to pass this threshold: it has to have accountability to its users, to the customers of the companies that implement AI in their offerings, and to the judicial system, or otherwise the company management must assume accountability, which they definitely would love to avoid. This isn't something to be taken lightly. AI can and does make mistakes, blatant ones often, and you just can't have that if money or human lives and wellbeing are involved.
Not yet.
@@CyberSan7054 The west has fallen. Billions must AI
That is not the point - if you can cut 1/3 of all developers due to AI - then AI can replace developers (as in plural).
Have a friend that is the CTO at a startup and investors are insisting they cram AI into everything to keep on receiving funding. Even if it doesn't make sense nor is necessary.
where's the full talk?
Literally in the video description!
In my grandparents era people were saying in the 1950's that there would be no more jobs because of machines (and computers to an extent). We can be confident there will always be jobs and we can also be confident that people will find something to worry about.
The problem with that thinking is, those machines couldn't think. Now you have everyone claiming these machines can think and build more machines without humans, all within 3 years. I call BS, and they know it's going to collapse but are trying to raise enough cash now before the equity falls.
01:32 "Linus Torvalds is going to be replaced by AI", ha ha I actually thought he was about to leave the scene.
“Finally” 😂😂😂
If you look up information, you will find some articles from different writers or organizations. If you use AI, it will go through all of the articles and consolidate the information in one article written by AI. Of course this has the limitation of the programming and what AI considers important when looking up information. However, I find it useful in some instances. Brave Browser has an option to allow AI to give you a quick overview when you search, but to me, it just seems to pick and choose a few bits of specific information and write a paragraph about it, which is not the same as consolidating the best amount of information there may be about a subject. I expect the Brave Browser AI consolidation feature to improve, but right now I don't have a great dependency on it. Gemini gives much more comprehensive answers, but again, would you allow all of your information from the world to depend entirely on what your personal assistant tells you? I think it's possible that in the future as AI becomes much more useful and develops a tendency to tell you what you want to hear, it may be possible for a liberal company like Google to have much more influence on the collective consciousness of people that depend on AI for their information. I think it would be nice to be able to set some of the parameters of our individual AI, such as just telling it to be more conservative or liberal with its answers.
Artificial Intelligence is one of the three most remarkable advancements of this century, alongside quantum computing and nuclear fusion energy.
Yes, but I’m not sure about quantum computing as I don’t see how it is impactful currently
Yes, but artificial intelligence is still artificial, i.e. fake, false intelligence; more like a super smart parrot.
Fusion might make it by 2050 if at all.
Sarcasm switch = ON
LMAO
Deep down humans are waiting for, hoping for something that will solve all their problems, needs and/or desires, and every time something new emerges that is somehow understandable yet mysterious enough it is likely massively embraced and the hype is around the corner. It’s human nature and some humans understand that very well and get very rich of it.
The current wave of "AI" was born from hype, like crypto, and like crypto it will fall off once the trend is past tense. I was hyped too at the start, but I quickly saw what it was being pushed for and by whom. LLMs are a crutch at best, and if we let corporations get their way, it will become another means of selling DLC, controlling the tech and, through monopoly, stunting any progress that isn't theirs. I understood it wasn't what would solve my skill issues, and if I want to get anywhere, I'd better focus on owning those skills, because in the long run I only have myself to rely on.
I got a coin I could sell you 🤣
What do you mean by "just like crypto?" I first bought Bitcoin at $600 and it's currently trading at $66,000 on massive, massive volume. It has a marketcap of $1.3 trillion and millions of investors now hold it as part of their investment portfolios in the form of ETFs. Same for Ethereum. Even frothier projects like Solana have done very well over the years.
As for AI, only the people at Microsoft, Alphabet, Meta, etc. know how far the technology will go. Assuming it will continue to improve and preparing accordingly is the optimal game theoretical move here, as opposed to just hoping and praying it fades away.
@@guanxinated what inherent value does bitcoin have? Why does its exchange rate keep fluctuating? What makes it different from a game where the outcome solely depends on your luck (presumably)?
@@turolretar
I) I'm not sure anything has 'inherent' value, but for me the value proposition for Bitcoin is as follows:
a) It has a mathematically well-defined supply that cannot reasonably be inflated (see the quick sanity check at the end of this comment). Gold is like this to a certain extent, but more of it can be mined from less accessible deposits (at the right price), and I feel an abstraction like Bitcoin suits our purposes better in the 21st Century. More specifically, unlike gold, Bitcoin cannot be forged and is easier to transfer.
b) It's portable, international, and outside the purview of any one government. This appeals to me because of my trauma as a Portuguese citizen during the European Debt Crisis of 2010-2012. I like the freedom of having a store of value - as defined by a) - as opposed to currencies exposed to inflation and political risk. If something akin to Argentina's corralito/corralón had taken place in 2010, my family's savings would have been wiped out. I doubt my parents would still be alive if this had been the case. My paternal grandmother turns 90 years old next week and I very much doubt she would have made it exclusively on her widow's pension.
There are other smaller nuances that make Bitcoin attractive, but I think a) and b) more than justify my interest in Bitcoin.
II) The exchange rate keeps fluctuating because of supply and demand. Presumably, if it ever reaches gold's marketcap the variance will go down, but even then I'm not sure.
(Note that gold was trading at $400 (IIRC) in 2008 and only then jumped up, so even gold isn't 100% stable).
III) The outcome doesn't depend entirely on luck: it depends on interest from retail and institutional investors, countries like El Salvador, and a willingness on the part of governments not to ban ownership of the asset (as has happened in China and Russia).
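For anyone curious about point a), the roughly 21M cap falls straight out of the issuance schedule (a 50 BTC starting block subsidy that halves every 210,000 blocks). A quick sanity check in Python (approximate, since the real protocol rounds subsidies down to whole satoshis):

```python
# Bitcoin issuance: the block subsidy starts at 50 BTC and halves every
# 210,000 blocks. Summing the geometric series gives the supply cap.
subsidy = 50.0
blocks_per_halving = 210_000

total = 0.0
while subsidy >= 1e-8:            # 1 satoshi = 1e-8 BTC, the smallest unit
    total += subsidy * blocks_per_halving
    subsidy /= 2

print(f"approximate supply cap: {total:,.0f} BTC")   # ~21,000,000
```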
How is it like Crypto? Crypto is just a currency that doesn't have huge potential consequences. AI does.
The hype that AI will do "everything" is a hoax or a marketing strategy. The hype that a significant amount of work will be done by these systems almost automatically is real. Many people have a hard time distinguishing between these two... It's like giving a farmer who has ploughed for 30+ years a tractor: you still have to drive the tractor, but what would earlier take you 3 hours takes 10 minutes now, with almost insignificant effort. Wild times ahead!
Can A.I. fix the loopback gateway advertisement problem in Windows ??
I don't think so.
Language will never be sentient, it's gonna take the human race a while to figure that out unfortunately.
It may become sentient but only at such a scale where it is extremely impractical. Human intelligence is effective combination of many "sub-intelligences", such as spatial reasoning. It comes from visual cortex which is "3D aware". Now imagine the amount of text required to produce such an intelligence. Perhaps you need a couple hundred million descriptions of spatial tasks and their outcomes, and maybe then you get something. Where do you get these texts? It's more practical to try to develop a visual cortex independently and integrate it with higher level thinking.
@@antonlevkovsky1667 i like how you think
@@antonlevkovsky1667 You are missing the point: language describes existence, it does not and never will create existence.
Because A.I. is limited in communicating and existing using Human words ...
I'm glad to see that Linus is still genuine, and refuses to be a corporate sellout. We need more people like him. As a software architect who's been using and loving technology since childhood, I'm sorely disappointed by the completely irresponsible behavior the heads of the largest corporations currently exhibit. It feels anti-human, sociopathic, and utterly disrespectful to make frivolous categorical statements about how entire professions won't be needed in a few years -- especially when the purpose of such statements is arguably just to create hype and to increase the valuation of their companies, without even having any viable AI products or services to demonstrate. If there is any subject that deserves to be treated carefully, it is surely this one, since it can affect people's lives, livelihoods and careers in the real world. They have lost my respect, and I think there is something truly broken in the US.
Hello sir....can I ask a few questions?
Hallucinations are still a huge problem. Traditional computer output is idempotent, deterministic and factual, hence we trust it. But can LLMs reach that level? Please share your views.
I think if you tell it to forgo its so-called ABDUCTIVE REASONING then one can cut down on the hallucinations. Ask it how it does its "reasoning", including abductive, inductive and deductive, and you will see that abductive is the one leading to hallucinations.
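On the determinism question above: most of the run-to-run variation people see comes from sampling, not from the network itself. The forward pass produces a probability distribution over the next token; greedy decoding over it is repeatable, while temperature sampling deliberately isn't (and neither setting fixes factuality). A minimal sketch with made-up logits (token names and numbers are invented for illustration):

```python
import math, random

# Toy next-token distribution: made-up logits for four candidate tokens.
logits = {"the": 2.1, "a": 1.9, "kernel": 0.3, "banana": -1.0}

def softmax(scores, temperature=1.0):
    exps = {t: math.exp(s / temperature) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def greedy(scores):
    """Deterministic: always pick the highest-scoring token."""
    return max(scores, key=scores.get)

def sample(scores, temperature=1.0):
    """Stochastic: draw from the softmax distribution."""
    probs = softmax(scores, temperature)
    return random.choices(list(probs), weights=probs.values())[0]

print(greedy(logits))                            # always 'the'
print([sample(logits, 0.8) for _ in range(5)])   # varies run to run
```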
I concur that AI for programming is a progression from compilers similar to how compilers were a progression from assembly language. As we get better tools we embark on more complex projects successfully. But often extra abilities are squandered on more waste. See the field of web development for an example of this. I also think the hype around AI right now is counter productive. And LLMs unpredictably hallucinate and produce bugs which makes them unreliable assistants for new development.
lots of Beautiful Science
This is a mature discussion about AI, rather than one designed to turn it into the latest bubble/burst, which could bring so much misery to so many 🤔
AI = Alotta Indians
True, without them we won't have this current hype
Why bring your racism into it?
Despite the AI hype, I still can't see any noticeable change in life made by AI.
Bingo.
LLMs did, but nothing beyond that
Chatbots that spit out generic things. They still can't go into more detail because of privacy; you still end up talking to humans.
Some specialized AIs have already made important contributions: the AIs that help detect breast cancer on radiographs are more accurate than human experts. Some AIs have been used to accelerate the search for effective medicines, or for new chemical reactions for better batteries. These are not generative AIs nor LLMs.
The real value of AI is in thinking they can replace human customer service positions without customers noticing.
"Colonel rewriting" - said Linus according to YT automated captions
I had a job interview where the interviewer said “You don’t have artificial intelligence”. My response - “No; I have real intelligence.” Use AI as a help not a replacement
What? What a dumb interviewer
Thanks for the video!
I am still on the fence about AI. I love what it seems to be able to do, I think it is great so long as there is human oversight. Thank you for the informative video!
I would agree that AI is a convenient tool in your kit and has the potential to be a force multiplier when correctly applied, but the same is true for bad actors. Not to forget that the majority of the market is VCs surfing on hype and corporate BS solving no real problems (not even the ones they made themselves); for the rest of the population it is (as always) about taking the path of least resistance, making AI no more than a data laundering machine or an easy scheme for crypto snake oil salesmen capitalising on FOMO. This generation of AI is, for now, a bubble full of hot air; maybe once the next AI winter has weeded out all the superficial nonsense it will grow something from the core, or (less likely) it will just shrivel and fall into obscurity.
Link to/name of full video please
who else thought in the first 25 seconds that YT video playback is broken? reloaded YT a couple of times :D
You could catch many bugs by augmenting C with the borrow checker from Rust and annotating the ownership or borrow lifetime of pointers in structs. An AI could likely figure out these annotations in most cases too.
@SavvyNik Linus pronounces his name as "Line-Us"
More like Leenus, I should think.
@@larsnystrom6698 ua-cam.com/video/c39QPDTDdXU/v-deo.html
ua-cam.com/video/c39QPDTDdXU/v-deo.htmlsi=-7qQ8fnsvY85YY9J
A good reality check: AI is progress toward boosted productivity, but I doubt our lives will change dramatically, as if by magic, because of it.
leenis?
There was another interview where he talked about AI as well. He's for AI when it comes to low-level code that nobody wants to do but he thinks most everything else about it is BS.
Am I the only one that doesn't hate hype? It's called managing your expectations, but it's still fun to ride the hype. Hypes are just how humans work. Obviously we have the capacity to regulate our hype, and I imagine we're in the process of learning.
My first contact with "AI" was ELIZA, in an implementation on a Commodore PET in 1979. Later I studied computer science, again with AI on the schedule. My opinion: on one hand, there have been big AI advances within the last 30 years - these "large models" - but on the other hand, the old problems we talked about 35 years ago are still there. The very short summary of the problems: AI is lying/cheating.
AI is a real thing with real value. Amazon recently reported that in the past year their internal AI tool "Q" has saved an estimated $500M in productivity gains and efficiencies. So like, the hype is real, but the hype is also wrong. AI is going to offer huge value in productivity and efficiency, but it's not going to replace humans, and companies that are doing mass layoffs to turn the reins over to AI are going to regret that and suffer for it. Also, anyone calling AI "BS" or useless based on gimmicky products like ChatGPT or image generators doesn't understand real AI, they just understand the goofy consumer tools.
SOOO....my kid is a 3rd-year computer science student at Ohio University. Do you guys have any advice or recommendations for the student?
Yes, grow fruit and vegetables.
Horse riders were laughing at cars back in the day and in 2 years all horses were gone other than the ones for recreational use. The hype cycle is interspersed with inflection points, isn't it? Bound to be, otherwise we'd still be hunting mammoths with spears.
Linus does very, very low-level-of-abstraction work of a kind that the average Joe the Dev doesn't do, so he doesn't find LLMs compelling; that's my take on the skepticism. Today's 18-year-olds don't have the privilege of seeing the rise of the personal computer and taking the baby steps along with it like Linus did.
Things are extremely complicated now. Try unraveling that low level complexity with your college CS degree and you won't see the sun again.
This is obscure. Must be AI generated.
What courses or what road map should the young generation take now?
dinosaurs looking up at the asteroid.
Agree with Linus here. The tech is very impressive and it has some extremely useful applications, but it shouldn't be treated like the 2nd coming of intelligence that people think it is.
Arch for work and Zorin for gaming.🙌
Before crypto, it was machine vision and weather prediction … and #1 still isn't popular or changing our lives, but #2 is already useful
Are lots of companies basing tons of products and marketing around it? Then it’s overhyped.
Revolutionary technologies always have multiple hype cycles because of greed. They always take decades to actually deliver results that make them ubiquitous.
Then everyone is pushing and falling for the next thing being hyped up.
Quite interesting conversation. During the time of DeepSeek..
This was such an insightful video! I really enjoyed the part about the AI hype . It's so cringe to see google using the term AI even in their quantum computer chip launch.
And.....
I have recently made a video on VPC, where I dive deeper into core concept of VPC. If anyone’s interested, feel free to check it out - I’d love to hear your thoughts!
In the 90s we had the 3D hype: videogames, software, movies, music, drinks, chips. Today we have the AI hype.
Savage Linus made an appearance with "no it's not, well it's not to me anyways..." before his social training took over lol
It's a tool, just like any other invention of this nature. Tools can displace old methods, but they also tend to open new opportunities. The genie is out of the bottle, no point getting salty about it.
AI doesn't have to be as good as the people it's replacing. It only has to convince those in charge of those job positions, who may well be under-qualified for their own positions.
Hey Hey, Meyer Sound!!! Sweet!👍👍
Before Google, we had AskJeeves and the others. The current AI marketing is people trying to tell us their AskJeeves is the future of tech!
no hype after trying Claude AI, finally a good use case.
I use LMDE btw
Video doesn’t finish, cuts mid-conversation and directs to what I can only assume is a paywall.
Link for the full conversation is in the description
People that are developing AI want to make cash out of it, so they are bigging it up. Those that aren't don't understand it, so they are amazed. Glad Linux isn't going down the obligatory AI route.
It may not replace developers but it lowers the bar, similar to how GPS made anyone with a drivers licence able to be a taxi driver in a big city.
EDIT: Ouch, my bad! It's about AI in development... duh!
EDIT: One aspect where I think AI could be great in computing and programming is automated stress testing and monitoring: subjecting a piece of software or piece of code to all kinds of scenarios and data to see if it leads to exploitable vulnerabilities. An automated environment where you can just leave the system running and wait for results... and then he ends up saying that very thing! XD
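That "throw scenarios and data at it and wait for results" idea already exists as fuzzing; the AI angle is mostly about generating smarter inputs. A bare-bones random fuzzer, just to show the shape of the loop (`parse_header` is a made-up target with a planted bug, not any real library):

```python
import random

def parse_header(data: bytes) -> int:
    """Made-up target with a deliberate bug: it trusts a length field."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    return sum(data[1:1 + length]) // length   # ZeroDivisionError if length == 0

def fuzz(target, runs=10_000):
    crashes = []
    for _ in range(runs):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
        try:
            target(blob)
        except ValueError:
            pass                      # expected, documented error
        except Exception as exc:      # anything else is a finding
            crashes.append((blob, exc))
    return crashes

findings = fuzz(parse_header)
print(f"{len(findings)} crashing inputs, e.g. {findings[0] if findings else None}")
```

Coverage-guided tools (and, increasingly, ML-assisted input generation) are much smarter than this random loop, but the "leave it running and collect crashes" workflow is the same.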
I have to say I am not looking forward to AI. It will be of immense use in certain fields such as medicine and biology, physics, and engineering, where certain problems could be solved, certain basic designs could be improved, or certain mathematical parameters and equations solved using AI learning.
In some other areas I am simply not looking forward to it at all: so many things in computing could end up becoming abstracted away for no reason, every cordial and creative process corrupted, and so on.
Who knows, but what are we supposed to do?
Man's doom or lack thereof is in the hands of a select few individuals, they are the ones who will be forever memorialised in the history books once it's all over. However it all turns out, they will bear full responsibility.
The problem with "AI" is that it is either judged by those who deal with its logic from inside the box (most viewers of this video) or by those who have no idea about what it really is and just ride the train, relying on others' opinion.
Neither are seeing clearly what real life consequences will be.
Linus is absolutely right about being sceptical, as he knows about technology. But he does not deal with all the "BS" outside of that, why should he.
The future of this hype depends on that outside world however: the money and power interests.
Hyping up a phenomenon that will never live up to that potential has a huge threat: that of manipulating the masses into using it as a tool which it is not. Once people believe in all the BS about "singularity" and that it is "smarter" than a human, they will easily hand over authority, it will become a question of trust and faith. Just like the faith that is now (falsely) shown to authorities in many fields, like medicine for example.
Hiding behind "The Science" allows those that pull the strings to further their own agendas and anyone going against the grain is declared mad or a conspiracy theorist.
Thus AI becomes a powerful tool in the hands of who owns the system and some trusted programmers who are ordered to direct outcome via algorithms.
So the threat is not seen by insiders like Linus, because he is above dealing with real-world implications and intrigue.
At the same time, outsiders are left to guess at its abilities, lacking the competence to judge.
The real threat, as always, comes from those that direct the hype train. They are in full control about how devastating the power concentration will become.
5:00 "crypto is not hype". Well, it is as hyped as AI: many projects that solve trifling tasks and problems, most probably made to grab cash from VCs and pull the rug. And then a few nice and interesting projects/concepts/research that have been running for nearly a decade in some cases and haven't made much progress.
If you are hyped for AI you should be hyped for crypto; if not, you shouldn't be hyped by either of them.
Well.. I do think that AI is mostly hype, but at least people have ways of describing ways AI could theoretically be useful (.. if it actually worked as advertised).. with crypto I don't even know what problem they're even trying to solve in the first place.