AI is a Lie - Cutting Through the Hype
- Premiered Jul 4, 2024
- Thanks to MANSCAPED for sponsoring today's video. Get 20% Off + Free International Shipping with promo code TECHTIPS or visit manscaped.com/techtips
Check out the MSI Gaming Desktop PC at msi.gm/SFE12FA9
AI is the buzzword du jour, and for good reason: It’s enabling a LOT of new and useful tools for everyday life. But just what IS it? And why do we think it’s being portrayed dishonestly?
Discuss on the forum: linustechtips.com/topic/15734...
Purchases made through some store links may provide some compensation to Linus Media Group.
► GET MERCH: lttstore.com
► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloatplane
► GET A VPN: www.piavpn.com/linus
► SPONSORS, AFFILIATES, AND PARTNERS: lmg.gg/partners
► EQUIPMENT WE USE TO FILM LTT: lmg.gg/LTTEquipment
► OUR WAN PODCAST GEAR: lmg.gg/wanset
FOLLOW US
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
TikTok: / linustech
Twitch: / linustech
MUSIC CREDIT
---------------------------------------------------
Intro: Laszlo - Supernova
Video Link: • [Electro] - Laszlo - S...
iTunes Download Link: itunes.apple.com/us/album/sup...
Artist Link: / laszlomusic
Outro: Approaching Nirvana - Sugar High
Video Link: • Sugar High - Approachi...
Listen on Spotify: spoti.fi/UxWkUw
Artist Link: / approachingnirvana
Intro animation by MBarek Abdelwassaa / mbarek_abdel
Monitor And Keyboard by vadimmihalkevich / CC BY 4.0 geni.us/PgGWp
Mechanical RGB Keyboard by BigBrotherECE / CC BY 4.0 geni.us/mj6pHk4
Mouse Gamer free Model By Oscar Creativo / CC BY 4.0 geni.us/Ps3XfE
CHAPTERS
---------------------------------------------------
0:00 Intro
1:18 "AI" is not what you think it is
2:28 What is modern AI good for?
3:26 ANI vs AGI - A difference WITH a distinction
6:27 What is AGI, and why don't we have it?
7:48 Knowing the difference may save your life
9:40 AI now means nothing - And that's on purpose
11:33 It's "thinking" as much as Dreamcast did in 1999
13:01 Even if it's not "real", it's still changing our world
Correction for the sponsor spot at 1:02 - We meant to say "14700 KF", as is shown on screen 🙂
We all skip that part so we don’t really notice.
@@Enivoke Well, I was going to comment about it
James IS also an AI. He was hallucinating on this sponsor 🤖
Crisis averted
Thx for telling.
I work in AI developing models.
The entire industry is currently filled with MBA jargon and people in suits trying to collect money from investors.
In the future, looking back on this decade will be super painful.
Shady mechanics, i.e. technologists, swindling the uninformed is nothing new.
How long do we have before their stock crashes?
meh not any different from any other rush to capitalize on trend
I also work in AI. Granted a lot of companies are using AI as a silly marketing term but that doesn't mean there hasn't been massive innovation over the last few years.
yeah, the marketing nonsense is absurd.
I do find it disappointing that Linus is claiming LLMs are just faster versions of old tech. They are far closer to the entire language system of a human than they are to ELIZA. And GPT-4o isn't just an LLM, with its new multi-modality. These models are neither everything some promise them to be, nor as limited as many hope them to be.
You think we're actually going to crack AGI as the robots are put to work and add their data to the hoard @ducks742 or do you think we'll run out of processing power first?
I find it hilarious how "Apple Intelligence" has the exact same AI acronym. That is THE most Apple thing I have ever seen.
They copied Alibaba Intelligence; Jack Ma was just that far ahead, man
@@elcohole100 Fr, if only they didn't make him disappear, he could have filed a lawsuit over AI (coz why not)
Well at least it matches user base as in Applesheep Intelligence 😁
AI more like AD (Apple Deception)
@@TheXlen i cringed so hard after reading this
In computer science in HIGH SCHOOL they made a big deal about the difference between artificial intelligence and machine learning. It's like machine learning was completely removed from the dictionary in the past 2-3 years.
I'm from a country where pre-bachelor/vocational CS education lacks a lot. What is the difference between the two? I genuinely don't know.
@@naoyanaraharjo4693 Teaching computers to learn from data is machine learning, whereas AI is more broad, but closely associated with AGI, or thinking like a human.
Thanks for pointing that out. Now I also remember reading this in high school. Basically we're still dealing with machine learning.
@@michealcondry5384 So according to you guys, the real AI would be as useless as a baby, and we would need to send it to school to learn all the stuff needed to help us? :/
@@naoyanaraharjo4693 I mean, Linus explains it in the very video you're watching... Just watch the video for a primer and we can clear up any details.
The previous explanation is misspeaking to the point of being inaccurate. "AI" does not exist presently. At all. Anywhere. There are NO actual "artificial intelligences" out there, yet. "AI" is a general term that marketing and executive wanks misuse. Entirely. "AI" is a concept of a system that can self-teach (learn) new things it hasn't seen before, on its own. That does not presently exist.
"Machine learning" is where some engineer _sets up explicit scenarios,_ like Linus talking about the person trying to sit in a chair. At first, the machine basically just permutes through possible mathematical states, and adjusts things to be more and more in line with what it's _specifically told by the engineer_ is correct. It has ZERO idea what it's doing or why at ANY point in time, even _after_ it's "learned" to the point of being highly accurate at the task. It's LITERALLY just linear algebra spitting out numbers. There is never at any point anything remotely close to a "thought" in the system, unless you extend it out to the human engineer setting it all up.
"AI" would require the computer to set up those tests, confirm the results, and set up success conditions, _all on its own._ Ideally, while being able to explain what and why. No system currently does that with anything remotely approaching a complex task.
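A minimal, hedged sketch of the training loop described above (the AND-gate task, seed, and all names here are invented for illustration, not taken from any real system): a single linear unit is nudged toward labels the engineer supplies. It is only arithmetic on weights; at no point does it "know" what it is doing.

```python
import random

def train_and_gate(epochs=100, lr=0.1, seed=0):
    """Train one linear unit to imitate a logical AND (toy example)."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # the "buzzer or bell": told right/wrong
            w[0] += lr * err * x1       # nudge the numbers toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w, b = train_and_gate()
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(*x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

The loop never forms anything like a concept of "AND"; it only shifts three numbers until the buzzer stops.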
It’s always marketing. I hate marketing. I worked at it for 3 years+ and I concluded that it was an art of deceiving customers.
I want a law, the right to not be advertised to
you're right, most of marketing and advertising advice out there is to trick, deceive or otherwise manipulate people into buying products. I've chosen to sell honestly, with products that I believe in and that sell themselves.
As a digital marketing professional, I think it’s somehow even worse than that. Nowadays, we don’t even try to deceive (or communicate with) humans anymore - most of what we do, most of the content we create is actually for Google’s crawler robots and indexing algorithms. The absolute first priority is for the machine to like your content, everything else is secondary because if you can’t please the algorithm, humans can’t even see what you put out there, so they don’t even have a chance to like or dislike it. To be honest, my career goal is to get to a point where I can use the skills and tools associated with digital marketing to support an organization or cause I believe in. To build a financial background secure enough to be able to work for NGOs or non-profit projects even if they don’t pay particularly well.
Taking a marketing class is like a crash course on psychology and propaganda at the same time.
On the other hand,
let's say you personally are selling a product/service (maybe shooting weddings).
But there is 10 other people doing the same thing.
You kind of have to tell the potential customer that you have the most modern tech and that you can do the best job for a better price.
Even though you know that you are probably not the best.
The solution is to let only one entrepreneur have an absolute monopoly?
Is there anything that can be done?
Imagine a man sitting at a computer. A series of Chinese symbols and characters appear on his screen. He spends his time and energy rearranging these symbols that he knows nothing about and has no context for. Sometimes a buzzer blares and he has to try again, but sometimes a bell rings and he gets to move on. After a long time of doing this, he's gotten pretty good at determining the pattern of the symbols that generally result in a bell instead of a buzzer.
Let's presume you can understand Chinese. You walk up to this man one day and ask him what he does. He explains that he plays this pattern recognition game where arranging these symbols in a way the computer likes lets you continue to the next one. On his screen in Chinese is the question "What is ice cream?", and you watch as he responds in perfect Chinese "Ice cream is a cold dessert food made of ice, sugar, and either milk or cream." You ask him if he knows what the symbols mean and he has no idea.
That is machine learning.
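A hedged toy version of the room above (all strings and names are invented for illustration): the program maps symbol patterns to responses it has been rewarded for, with no representation of meaning anywhere in the system.

```python
# Patterns the "man in the room" has learned produce a bell, not a buzzer.
# He matches shapes; he does not read.
REWARDED_PATTERNS = {
    "什么是冰淇淋?": "冰淇淋是一种由冰、糖和牛奶或奶油制成的冷冻甜点。",
    "你好吗?": "我很好，谢谢。",
}

def operator(symbols: str) -> str:
    """Return the rewarded arrangement for a prompt, or a stock fallback."""
    return REWARDED_PATTERNS.get(symbols, "对不起，我不明白。")

print(operator("什么是冰淇淋?"))  # fluent output, zero understanding
```

A fluent Chinese speaker reading the output cannot tell, from the output alone, that nothing inside the room understands Chinese.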
_"rearranging these symbols that he knows nothing about and has no context for."_ That's the big problem the Chinese room experiment points to, because how would we know? He doesn't learn the context in the process, so how? Would one be able to make a perfect translation without knowing the context? And even if he doesn't, but can still produce perfect answers to questions, why would that make the answers useless if people can still understand them and curate the good ones? If that's the case, why would we not call those answers intelligent?
[I don't get why people like that "Chinese room" thought experiment.] As if, if a model like that gave 3 stupid answers and one good one to a question about a cure for cancer, people would be rolling their eyes like, "Pfff, this thing is stupid, it doesn't even understand the suggestions it makes!" Well, _maybe_, but that thing just found a cure for cancer. People make bad guesses too before they make a perfect one; who cares.
No it's not. If he did that a billion times with a billion different contexts, he WOULD understand Chinese. People deaf and blind from birth can still understand the concepts of a picture or sound.
The first coming later finally able to understand Chinese would be the AGI
Or maybe never
AI is genuinely just a marketing buzz word these days
Just like turbo was in the 80's
I have a feeling that this comment is going to blow up
Wish I knew when it was going to end so I can optimally dump these insane NVDA shares lol
and space in the 70s
I said the same thing to people the other day: every "innovation" from a public company is really an ad for its stock, and now "AI" is the word for their ad business, which is even more concerning, since it can be a bigger privacy nightmare than ad tracking. xd
I mean, AI has always been a very general term even before all this AI craze. NPC behavior in-game was called AI, so was an AI general in a strategy game.
I think people can grasp the difference in the meaning of the term when talking about it in very different ways e.g. npc game AI in a AAA game vs AI that's designed to drive your car.
@@DaemonJax But is there an actual difference? Or is it just that one is trained better?
The thing is, video game AI could arguably be considered in some forms to be more intelligent than this. Most NPC AI is based on state machines, which basically considers information about its surroundings to switch between pre-defined states. You could use machine learning to enhance that declarative programming by giving higher weighting to attack patterns that appear to be successful to make those states more likely. So called "generative" AI just does this on a pixel or character level, making specific words or patterns of pixels more likely based on input keywords, which means all it does is spit out averages of the input data. So the marketing term "AI" is actually based around the cult like idea of "emergent" programming, basically that if we throw enough data at the machine eventually it will stop averaging and start programming itself. Instead what we get is a lot of smoke and mirrors from people obsessively trying to coach these averaging machines to LOOK like they're creating novel outputs, while simultaneously stealing any and all data on the web to fuel their fraud.
@@Damiancontursi Completely different approaches.
@@benflightart State machines are just a good way of organizing a ton of if statements. It has nothing to do with intelligence. The behavior of generative ai is fundamentally learned behavior and it's definitely part of how actual intelligence works, it's just not the whole answer.
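The state-machine point in the exchange above can be sketched concretely. This is a hedged toy of finite-state-machine NPC "AI"; the states, distance thresholds, and names are all invented for illustration:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

def next_state(state: State, dist_to_player: float) -> State:
    """Pick the next NPC state from surroundings: organized if-statements."""
    if dist_to_player < 2.0:
        return State.ATTACK
    if dist_to_player < 10.0:
        return State.CHASE
    return State.PATROL

print(next_state(State.PATROL, 15.0))  # State.PATROL
print(next_state(State.PATROL, 5.0))   # State.CHASE
```

The transition logic is hand-written, not learned; the ML enhancement the comment describes would amount to tuning those thresholds (or per-transition weights) from recorded outcomes.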
This is a conversation that needs to continue happening. I’ve really struggled to explain to people that “AI” isn’t AI, and more importantly why it matters that we distinguish between AI and ML. In a way, it feels similar to the whole USB-C issue where the vast majority of the public didn’t understand that just because a connector is USB-C doesn’t mean that it’s fast, it just means that it’s USB-C and it’s important to distinguish between USB protocols vs USB connectors
You lost me with the USB part. I don't work in tech, but it seemed obvious to me that AI is not sentient or self-aware and doesn't have evil intentions, because it doesn't have any intentions at all; it's basically regurgitating information, and humans still need to weed through it. Some people find this hard to grasp.
However I hadn't thought about the fact that "artificial intelligence" isn't an accurate description. I think you could argue that it is accurate, in the same way that an artificial flavor doesn't taste quite the same as the natural flavor; artificial intelligence means it's like a substitute for intelligence.
@@sarahberkner Any such explanation of AI that you come up with can equally be applied to the human neurological pathway. One can just as easily argue that humans regurgitate information and give off the illusion of self-awareness.
@@spadaacca not really,, as linus says in the video AI doesnt actually understand what its doing. you can ask an artist to breakdown a drawing, and theyll tell you what they did, how the body interacts with the enviornment, etc. you can do the same thing with a writer, you can ask them why they wrote it, how they wrote it, etc. you try to ask an "AI" to breakdown anything it makes, and it wont understand it. AI doesn't iterate, humans do
@@jellyloab Is that different from us humans? We make about 35,000 decisions each day, and would struggle to explain our reasoning for most. For the ones we make consciously, we commit our thought processes and feelings to memory, allowing us to explain them after the fact.
Current LLMs do not commit any sort of thought process or internal monologue to memory, and so can only explain their reasoning using their previous output as context, i.e. the model is not actually recalling, but creating an answer using its previous output as a reference. This does not mean, however, that there was no "thought process" (by which I mean "calculation") that went on to create the original output, nor is it a good measure of intelligence.
Linus's example of the AI struggling to count letters is also quite misleading: due to how current LLMs are designed and trained, they excel at pattern-based tasks but tend to struggle with precise manipulation of symbols (hence why math can be so precarious too). I'm not exactly sure what point Linus is trying to make here: is a child not intelligent if he struggles to count?
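To illustrate the symbol-manipulation point in a hedged way: counting letters is trivial for character-level code, while a model that only ever sees subword units has no step that looks at individual letters. The token split below is invented for illustration; real tokenizers differ.

```python
word = "strawberry"
print(word.count("r"))  # 3 -- trivial at the character level

# Hypothetical subword units, roughly how an LLM "sees" the word:
fake_tokens = ["straw", "berry"]
# The model predicts over units like these; nothing in that process ever
# inspects individual characters, so letter-counting is not a pattern it
# has reliably learned, even though the task is easy for ordinary code.
```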
@@jellyloab Funny you should mention bodies interacting with their environments. I want you to do an experiment: go outside, take a jog. But tell your brain not to speed up your heart rate. Keep it at resting heart rate. Does it listen to your executive commands in the service of our supposed free will? I think a big part of this conversation leaves out just how little free will humans demonstrably possess. In that light, most of what a human is, is automation. I'm taking a bike ride today. That I decided to do - I decide to turn the pedals. But how my body accomplishes this task on an anatomical level isn't up to me; not one bit.
a few years ago, marketing tricked us with "3D" and "Smart", today it's "AI"
Paint 3D😂
@@gnanasabaapatirg7376 I don't know about you, but I didn't know for quite a while that Paint 3D could in fact be used to create 3D models. Thought it was just a gimmicky rebranding, though some may still say it's a gimmick.
Don't forget crypto, blockchain and NFT. Never forget.
@@InnososThe sooner we forget about those, the better.
"SmartTV" aka "now comes with built-in advertising"
It's going to become the same as how "Smart" got overused and is still overused to describe literally anything with internet or a timer of some sort.
Yup... bought a new fridge and apparently it's fucking sentient. Got ARTIFICIAL INTELLIGENCE plastered on it. Must be shy though, hasn't said a word so far.
@@c50m4 AI color oversaturation on my TV
I mean, at this point an appliance that can connect to the Internet and run a few apps is a reasonable definition of a "smart" appliance. Usually I feel like it's fairly clear what you're getting, although I guess there's a range of ability.
@@danieljensen2626 Agree with you; the OP is just saying that the term AI, which ought to be a pretty impressive description of something artificial in a similar category to human intelligence, is going to get relegated to describing something far less impressive. Linus made the same point. We need a new term for the farther-future intelligence that comes closer to human.
AI is really bad. If you know anything about a topic, both GPT and Gemini fall apart. 95% of the time, it's making things up. Semi-advanced things like the effectiveness of spinosad as a pesticide for plants. Or a viroid called HLVd that's impacting plant growth. Or questions about auxins that promote root development; it's always making things up in regards to these topics. Anything that goes beyond surface-level "write me a better ending to my TV show" kind of stuff ends up giving you incorrect info. The worst part is, most people don't catch on.
I’m a tech hobbyist at best but seeing laymen being tricked into thinking that Ava or Glados is right around the corner infuriates me
Ok, perhaps I'm a layman, but how else are people supposed to interpret it when AI advances so insanely quickly?
@@seb1520 Only because you climb a tree insanely quickly it doesn't mean you will reach the Moon.
We are getting closer to a perfect copy of what a human seems to be. That is not AGI, which is an absolutely terrifying thing, but for the average person, if AI stopped at a simulacrum of us, we wouldn't care... and honestly it would probably be better for our species' survival if we don't go making AI that can combine old and new concepts to come to a new answer. We don't even use that ability for good.
Why would this infuriate you? Why are you so sure it isn't? I get frustrated with the AGI hype train too, but plenty of very well-trained professionals are considering this possibility every day. Why insult your fellow laymen just because they choose to listen to a different professional than you do?
@@Slvl710 Yes, you are getting closer to the Moon by climbing a tree.
It's been a disaster in university group projects. Half the team usually does all their work with GPT rather than having an original thought themselves.
I can confirm this. I myself use GPT for programming subjects, as I'm in an accounting major. But only there; the others used it for everything.
That's great actually. The ones using LLMs to code see the future that is coming.
Having been in the school system, it's probably an overall improvement; if they keep using it, it will at least appear that IQ has gone up.
Yes, a disaster. No one wants to "think" anymore; just ask AI.
@@phatwila The ones who use LLMs to generate code for study projects can't even tell if the generated code is good or bad. And if they can't do even simple things on their own, how are they going to program something complicated that an LLM can't handle?
My org recently named a new "Chief AI Officer." He's got a masters in marketing and a GPT subscription. Apparently, that's all you need to get to the C-suite nowadays.
That makes sense; if he said he had 20 years of experience in AI, then you'd know he's faking it. They mainly needed someone who was "with the times".
I've worked at places where I was in charge of something I wasn't qualified for because everyone else would have been worse at it.
@@sarahberkner "AI" has been on the go since before the 1960s; perceptrons were developed in the '50s, I think. One could easily have 20 years of experience in machine learning, language modelling, or generative models, which is what people are calling AI now. Not that many people do, I'm sure, but still.
Lmao
Yeah? So why don't you do it?
Being a good BS artist is a great skill to have. Just make sure you separate your work personality and personal life, or you can run into issues.
AI Rice Cooker was the tipping point
Dankpods!
i shared the same sadness that wade did using that thing
No it was the AI thermal paste
They had me at AI screwdriver
Now, an AI toaster is truly terrifying (Red Dwarf reference)
I was going to comment something about how I've gone back to calling it machine learning, but my wife said you sound like Bob the tomato, so I'm commenting that instead.
Your wife's right, how could we have overlooked this critical fact??
It's been noted before. The Bob the Tomato part. It was in the second Linus Responds to Mean Comments video.
I appreciate hearing someone say the actual truth about “AI”. Try doing anything novel with it and it can’t. It’s just an amazing pattern recognition and replay system.
Just like your brain.
You'd be surprised by how much our own "novelty" in arts and technology is just rearranging and minor variations of existing components and ideas
AI is really not doing something too different to what we do. Try having a completely new thought, and then realize it was most likely a reiteration of something you already thought or read or heard.
@@mrosskne Sure... on the most basic, 10th-grade science level.
Meanwhile, in reality, the cyanobacteria growing on a mossy log are collectively more intelligent than ChatGPT. Their swarm dynamics demonstrate more emergent and intelligent behaviour.
Machine learning engineer here (Image generation focus). I am so glad a major youtube channel finally got it right, rather than fear mongering. The amount of horrific information even from sources that should be educated on tech like this is truly disheartening. Thank you for this video, which seems to be a rare one with a relatively neutral look into a set of technologies that will continue to shape the world for many years to come.
Can I ask a sincere question? Why do you want to make generative images?
Can I ask you a quick question as another programmer who's dabbled in ML?
It seems to me that AI/ML is really just data science (or at least data-driven development).
My understanding is that it's basically just gradient descent used to optimize a function that maps inputs to outputs based on some loss function.
I learned how to fit data to a function via gradient descent in high-school statistics, and from what I see, fitting a 10,000 weight convolutional filter to a dataset isn't really all that different conceptually than using Excel to create a graph with a least-squares regression curve if you ignore the difference in dimensionality.
Do you agree/disagree with any of that? People keep saying AI is a bad term and people should call it ML instead, but even ML seems like a bit of a stretch if it's just data-science curve fitting with some fancy gradient descent on top (albeit with a 10,000-dimension curve fit to millions of data points). Seems to me the only reason people use the term AI/ML is to make it easier to get VC funding, because data-driven development doesn't sound cool or sexy.
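The comment's analogy can be sketched in a few lines. This is a hedged toy with made-up data and hyperparameters: fitting a line by gradient descent on mean squared error recovers the same slope and intercept a spreadsheet's least-squares regression would, just iteratively.

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1

m, b = 0.0, 0.0             # model: y_hat = m*x + b
lr = 0.05                   # step size (made up, small enough to converge)
for _ in range(5000):
    # Gradients of mean squared error with respect to m and b
    grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    m -= lr * grad_m
    b -= lr * grad_b

print(round(m, 3), round(b, 3))  # ≈ 2.0 and 1.0
```

A neural network does the same descent, only over millions of weights instead of two, which is exactly the dimensionality point the comment makes.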
+1 from another data and AI professional. I do ML every day for work and I couldn't have said it better than Linus. He's exactly right.
As an AI, I agree with this statement.
@@brydenfrizzell4344 ML is a subset of AI. ML is data science.
Essentially, corporations chose to muddy the definition of AI, for profit. Just like with Hoverboards. And now we need new words for those old things we envisioned...
Don't come up with new words. Refuse. Stick with the old ones. If people don't understand you, screw them.
How do we make sure this 'muddying' of words doesn't happen? Just call things more specifically and don't give a hyped up name? Or keep on doing what we're currently doing which is 'invent a new word for the previous expectation of the technology'?
oh dog, the "hoverboard" one was SO freaking stupid, it drove me nuts,
@@fireninja8250 It'll always happen honestly and it's not solely related to tech so we can't even stop it.
AI got a buzzword for tech corps (and the average joe) alike, if you think outside of the tech space we have a ton of words that have been muddied and/or re-defined, be it g a y, white knight, simp (with especially that one still having that incel usage taste every time you read it) or other examples that we don't even think about anymore.
AGI will just be as normal in usage as some of the other things have become for its re-definition over time.
@@fireninja8250 you can't prevent "muddying" of words. It's an inevitable part of society and language
The amount of AI bros coping in this comment section, telling Linus to "stay in his lane", is hilarious
If something is so novel that nobody has ever seen it on the road before, I think people would panic and cause more accidents than a self-driving car would. Have you seen the way most people react under pressure? It's all eyes closed.
Totally agree on the misuse of the term "AI" in marketing. It's definitely creating confusion and sometimes even harm due to misconceptions.
this comment is going to blow up how am i so early
@@K131real 😂😅
Hard agree with you. 👏🏿
The moment they flashed all of the other buzzwords that have been used over the past few years, all the other crazy stuff that has happened in the tech industry flashed in my head at the same time.
Especially 64-bit. That one got a laugh out of me.
Marketers picked it up, but it was academics who came up with the term and used it to define a subfield of computer science that includes narrow AI.
Also, machine learning itself, which used to be called pattern matching.
It's not just industry on this hype train.
i feel like you could rebrand a 40-year old technology that involves a linear regression as AI and no one would bat an eye lol, weird times
as someone who has studied for a university degree in AI, this whole hypetrain is extremely infuriating to me. Imagine you're a physicist and every physical product is called "black hole" because *technically* all mass has gravitational pull. Similarly, everything is called "AI" now because it has more than 500 lines of code.
Don't worry, give it a few years and they will move on to a new buzzword.
@@roymarshall_ Unfortunately the wreckage of old hypes doesn't magically go away, and will haunt everyone affected for decades to come, albeit in sanitized form. We're still dealing with fallout of the OOP hype in programming today, and that was, what, the 70s? And most programmers today would likely not even recognize which parts are the genuine concepts, and which parts are just holdovers from decades-old hype that have remained in use because "that's how we've always done it".
huh? this is nothing new. The bar for what was seen as AI was way lower than it is now
Behold! AI:
if (condition) {
//
} else {
//
}
If, as the paper suggests, an intelligent octopus faced with a bear attack doesn't know how to react, don't you think that if a human were to reincarnate as an octopus in the same scenario, they would respond similarly? We could perhaps improve the octopus's response to be scared of unknown situations.
Assuming that current AI is somewhat similar to humans based on this idea, aren't we essentially searching for something god-like? If AI could provide correct answers to any scenario, no matter how absurd or unexpected, could a human even handle it? For instance, if tomorrow everyone dies and you get shot to Mars, entering the 45th dimension where Mars is habitable, but you must return to 3D because in the 45th dimension you're a disabled person with no senses, and the 45th dimension's version of Elon Musk keeps you as a pet in his belly pouch called '&%&^5757,' how would humans solve a question like this?
And, If AI could take even one step toward solving this scenario, as Linus suggests, by using context clues to make sense of an absurd situation and lead us to the correct answer, then wouldn't that AI be able to solve any issue, no matter how absurd? At that point, wouldn't it be considered not just software, but something god-like?
Or is AGI simply about quantifying all five human senses (vision, hearing, touch, smell, and taste) in numbers and then training on the thousands of machine learning techniques humans have developed (perceptrons, neural networks, U-Net, transfer learning, gradient descent, stochastic gradient descent, PSO, the Bird Swarm Algorithm, Transformers, and thousands more)?
What is it? What is AGI? Is it a search for GOD, or is it making a human so perfect it's basically GOD?
This is some Really Mind Bending Shit......
“On the road, anything can happen” shows footage of a literal meteorite 😭😭
AI is just a web scraper that answers user prompts
This is why people who call themselves "AI Artists" are embarrassing. You don't call yourself an artist for doing a Google Image Search.
the GNU+Linux copypasta reference was goated
100% I was laughing at that
1:56 Linux Tech Tips lol
And it was gpt
*GNUted
I immediately knew Emily wrote this.
Nobody realizes the news is lying until they talk about something you're knowledgeable about. Then we go back to thinking they're experts on everything else, lol.
No one doing the news is an expert on the subject; they just interview "experts", who tend to lie to you/them, or they don't bother actually specifying things further because they're... well, either part of the company, don't actually know what they're talking about, or simply forget that they should spell things out for the average viewer.
Kinda like that person who was (or still is? I don't know if that stopped after he got called out a while ago) giving cyber security tips to companies and getting invited to train their people, while providing "proof of his work" with issue report IDs, except that he's not listed on any but one (and blows that issue up bigger than it was), and no one listed on the other IDs even knows him. Same concept: the people tasked with hiring someone for that don't know about it, since the stuff they have to know is an entirely different topic, and they just assume it's correct, all while not having the time (or resources) to contact anyone listed or read through more than the first.
@@Unknown_Genius Some stuff is such a simple search to find. I've seen absolutely ridiculous claims made by anchors that anyone even remotely knowledgeable wouldn't have made. To your first point, I can think of many examples of anchors talking out their butts like experts but you're probably right that they're just repeating what they were told without digging into the topic whatsoever.
100%. Right between the eyes.
Just like a certain pandemic
@@Unknown_Genius What you're missing is that news isn't news anymore; it's entertainment, and stating the facts is boring and doesn't get ratings. All 'news' cares about now is viewing figures, so BS'ing about AI and everything else is OK, as long as there are plenty of eyeballs still watching when they go to ads.
It's pretty ridiculous and SCARY AF if we're letting "AI" go about important tasks when it can't even tell us how many times a letter appears in a word.
Luke and Linus were the number one promoters of AI, talking about everyone getting replaced. They were so giddy to never hire another software engineer again.
That last paragraph is really soul chilling... some definite Cyberpunk 2077 vibes there, and not in a good way...
"The folks in charge of helping us deal with all of this have a lot less funding than the ones who are trying to sell it to us"
Would also add that those in charge are taking advice and lobby dollars from the CEOs of the companies selling it to us.
Deus Ex crew checking in
Yeah, it's getting way too easy for bad actors to deepfake evidence that can have very chilling impacts. Wanna get rid of a political dissident? Just fabricate some video evidence on a CO2 belching AWS datacenter. Want to track a group of marginalized people? ML-powered face recognition software and the ever present cameras and GPS receivers with mobile internet connections makes that trivially easy. I honestly struggle to get excited about technology anymore because it seems like any developments (especially machine learning and ever-present telemetry spyware devices) are only ever bad for the working class. There may be some positive applications in the medical field or logistics management, for example, but overwhelmingly it's cars that report driving habits to insurance companies and law enforcement and have "autopilot" systems that are known to kill people (in no small part due to cost cutting), or buggy software, ads, and spyware in everyday appliances that used to at most have some simple microcontroller code that did exactly what it should and nothing else. I'm starting to think the Matrix had it right; maybe 1999 was the peak of human civilization (at least from a technological perspective).
Fallout would like a word as well
That's also an incorrect take. The ones 'helping us deal with it all', if I had to guess Linus's political view, is the government. The last thing the government is lacking is 'funding', and they will do their best to pass more useless regulation, mostly with the intention of getting more 'funding'.
AI peaked when the monsters fought each other in Doom.
Not at all.
On a side note, I love how Minecraft skeletons shoot each other when 1 arrow accidentally hits the other
AI peaked when the Quake 3 bots were toxic to you
Thanks for that nostalgia hit!
I dunno, I was quite impressed by Half-Life's AI behaviors for both lone enemies and squads. Even the cockroaches had an idea on how to behave somewhat convincingly.
You say that, but my Tesla drove me from Austin to Dallas today with no interventions but I guess there were no edge cases there
Let us cope mfer
Everyone's a gangster until some biology nerds make a real fleshy brain like GPU and play doom on it in real time
The Torment Nexus?
Thought emporium
Human brain SLI when?
Korrok
Have to think about a quote from Edsger Dijkstra:
"The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better."
Thats the whole point of AGI?
From the YouTube channel ExplainingComputers:
"The 2nd most intelligent species on the planet is the dolphin, and we never expect a dolphin to imitate a person..."
Better? I'm wondering what I'm supposed to get from that quote. It's too open-ended.
The problem with this quote is very simple. Despite our millennia of accumulated knowledge, our own minds are by FAR the most advanced and capable thing we know of, and we barely understand even the most basic principles of their operation. Nothing is more capable of handling problems and adapting to new complex situations than the human mind. And by definition, the capability of creating something better than a human mind must include the capability of creating something as good as a human mind.
@@sunla The point of the quote is to ask why we are trying to make computers do what we can do, rather than what we CAN'T do. I.e., why are we trying to automate the human spirit with 'art generation' and similar things, rather than use them for things we just can't really do, like immensely complex simulations, data processing, etc.? Now, just to be clear, neural networks are in fact being developed for loads of genuine scientific applications, but a lot of the mainstream tech buzz isn't about that; it's about gimmicky things that aren't actually helping the world at large. The question is basically why we aren't focusing on the things that would take us, as a species, far too many man-hours to do.
AI Linus isn't real he can't hurt you. Meanwhile AI Linus :
My first computer used a 6502, a Commodore VIC-20. 5K of RAM and it was smarter than AI in 2024!
this comment is going to blow up bro :(
I thought the same thing looking at this thumbnail lmao
#L-AI-nus
Most of my graduate studies and Master's thesis involved AI and Deep learning and I cannot begin to count the number of times friends/coworkers (who studied something completely unrelated like business or marketing) have tried to tell me how AI will solve everything and that I "just don't understand it" whenever I explain why their idea with AI wouldn't work
sounds like you're a little bitter about not getting the job you wanted in AI dev after you did that masters thesis
@@Lindsey_Lockwood And yet your account is 17 years old. Makes you wonder...
@@hexoson looks like you been wondering about it a lot. Sorry I upset you enough to cause you to do homework LOL
intuition, common sense, emotion >>>>>> science, reason, logic
I don't think you realize that training these AI models effectively finds the best way of being an "autocomplete on steroids" -- and it turns out the best way to be an "autocomplete on steroids" is to be an AGI.
Wanted to mention that Kasparov used 20 watts of caloric energy to play chess, and Deep Blue used 1400 to do the same task. This difference in energy efficiency only grows with more powerful A.I. systems that use megawatts of power to do the equivalent task a human can do with a hamburger's worth of calories.
Consciousness and sentience and all that is crazy but the craziest thing about our brains is the sheer power efficiency.
Where did you get a figure for Deep Blue's power consumption? I tried to look for it and turned up nothing; the only figures I could find were the max draw of the 30x PPC604e, but that's unlikely to be the bulk, which I'd guess would be the custom VLSI stuff or RAM. But bringing Deep Blue into this is like arguing against public transport using gasoline or diesel engines, as opposed to horse buggies, on the basis of the fuel consumption of a Ford Motor Co. Model T.
We humans don’t usually make our own food, it’s more fair to factor in the energy used by the tractor used to harvest the food and all of the machines in the processing plants and all of the energy used to distribute the food with ships and trucks and other people.
@@tomh9553 then let's factor in all the energy required to build a power plant when we talk about ML models' energy consumption. Not to mention that a model needs people to build the power plant.
True at the time, but nowadays your phone can beat Kasparov easily on hundreds of milliwatts, give or take
Imagine calling memory foam as "AI enabled cushion"
I would like to purchase 2 of these cushions.
The tech is just good enough to convince a lot of companies that they no longer need writers and editors, which killed my 20-year career not long after ChatGPT launched.
I now wash dishes for a living.
To be fair, humans also run out of tokens when they stay up for over 48 hours
"This Rabbit (R1) hole goes deeper than you think"
Nah, those guys were geniuses. The people that bought it are idiots.
@@SiCSpiT1 In what way? How is an R1 anything but objectively worse than a smartphone?
@@John_Jack The point is that they were geniuses at fooling others into buying their scam. I disagree, as I don't find it that hard to scam less knowledgeable people into buying useless tech. I could probably do it; the difference is that I was raised properly and wouldn't want to.
@@John_Jack easy money with little effort for the company that made the Rabbit R1.
@@John_Jack Read a review. It's pretty obvious.
When are you making the Ai screwdriver???
Still waiting for the AI Apple-leather Jacket called Jensen
It's the same as the current screwdriver, but you need six fingers to use it
Right after they start selling NFTs of AI-generated "Trust me Bro" tshirts.
It's not AI it's Al screwdriver. As in Aluminium. Like one of those knockoffs you can get off temu and use to remove exactly half to one screw
it recognizes the screw and automatically switches to the best bit? now that would be pretty cool and useful
$70 for a screwdriver?
Got the entire iFixit Pro kit for that amount.
It's an AI screwdriver.
Ever closer to the emergency medical hologram.
Please state the nature of your medical emergency.
I love that there are literal "ai-powered" birdhouses on Amazon selling for hundreds.
DankPods bought a rice cooker that touted that it was "AI" powered. Opening it up, it used the same mechanical magnetic latch system as any cheap rice cooker from the last 40 years.
You think it's crazy? There is AI thermal paste : )
It's been introduced into every facet of life. You've barely seen the tip of the iceberg. Tech boom 1950-2000: this is gonna change everything more drastically, much more quickly.
There are some scams on Amazon, unfortunately. But I also think being an early adopter is kind of a scam; better to wait until they work the bugs out.
But as this video points out, this isn't early... Machine learning is not new.
I worked at a company that had AI in its name; all we did was add a ChatGPT API...
And? ChatGPT is AI.
@@mattmaas5790 sure, but the company itself didn't have any AI of their own (they were going to use ChatGPT as part of their 'AI'), and they wanted government subsidies because of it.
That's not illegal but it's sure not ethical
@@dany_fg I disagree, I think it's weird to expect every company to re-invent what ChatGPT has already invented.
Customer, "server, my AI rice is moving"
Server, "um, that's not rice."
ask AI to write me some Python script, the script doesn't work, paste it back into the AI and it tells me that the script won't work. Oh, thanks
Dunno.
A tool is just as good as the user.
For me it does wonders.
-It is life changing.
I pay $30 monthly for GPTPlus subscription and GithubCopilot.
-Probably would sell half my soul for it.
Lmao ai writes bad code
@@chady51 what languages do you work with?
It can write some basic code pretty well, not always efficiently but it can do it. Anything beyond that and it starts making fundamental errors. Easier to use google.
Claude 3 writes perfect python. Yall just making yourself look bad. Gpt 2 is like 5 years out of date, use a smart model.
Decades ago, the term I was told was “computers are only as smart as a human makes it”, even in this age of AI I still believe that is true
The power of a computer is equivalent to the universe but keep in mind not all equals are equal.
ftfy: "computers are only as smart as a human think they have made it seem”
This is even more true with machine learning. The main datasets used to create these models are text from the internet.
No.
Not true.
While not smarter than man now, they can be made to make themselves smarter.
Of course. And that's because we are consciousness, not objects.
An important correction is that AI *always* hallucinates. Just because a response happens to coincide with reality, it doesn't mean it is not hallucinating.
That's pretty silly. Hallucinating doesn't just mean "answering" I'm pretty sure. I'm pretty sure it being a wrong answer is the definition.
8:30 "ANI is not capable of handling an edge case that it has never seen before"
Have you ever heard about a neural network's ability to generalise? Which basically means it can see many examples of correct behaviour and then infer what correct behaviour should be in an edge case it has never seen before.
Yes, except that this is not reality. Linus is completely right: ANI is not capable of handling these.
The capability does not extend to scenarios outside the knowledge base, which is at best sufficient today. Sufficient meaning it's able to handle the core functionality it is designed for without hallucinating about it every second. So the bar is very low.
@@user-xk6jw3wi5u Linus is not completely right. The performance of an AI model is meant to be measured on data it has never seen, and for ML there are many techniques to improve generalisation and reduce overfitting on the training data. Linus and the octopus example in the article at 11:50 do correctly state that these models only learn correlations between words, but they both mistakenly assume that the association of words in language is neither prescriptive nor descriptive of the real world, or that this aspect of correlations can't be learned. An ML model can generalise because it learns how to derive conceptual components from input and apply them.
ML models do require intervention when the domain or task changes, but there are methods to address this (transfer learning and continual learning). For language and vision models though, this is usually not a problem because the domains (multilingual text; camera/lidar/radar) and tasks (word prediction; safe navigation) are applicable for the domains/tasks we want specifically. With robotics especially, we use layered control systems: so an AI car has separate systems for image segmentation, 2d/3d world modelling, road navigation, avoidance, etc. So, lack of representation in the training set is a source of inaccuracy, but it is not the sole cause, nor is it incapable of edge cases of the same domain and task (which the examples people give are)
@@user-xk6jw3wi5u @Xor is on the money. It is able to solve problems not seen in the input dataset; it does that every day of the week for me. It is able to generalize from its training data. Now, it is also a static model, so it can't learn from further experience. Current LLMs are a form of AGI - General Intelligence - they are just not examples of Dynamic Learning Intelligence, because the 'learning' is limited to training using backprop, which is impractical and slow for real-time training. Hallucination is imagination; LLMs were never a database to simply query for correct answers. In fact the emergent behaviour isn't understood, much less intended. LLMs are not so much designed as they emerge from the training data.
"Decent summarization engines and lukewarm guessing machines tuned for working with different types of media. They can't reason." Loved it!
Except, they can reason much better than many humans can.
@@spadaacca You're living proof of that, it seems.
@@hexoson pretty stupid response there.
Suddenly all websites have these AI help bots that are just as stupid as the ones I saw many years ago
Powerless* they are objectively less stupid, doesn't mean they are given permission to make changes to your account.
The AI in Quake III is amazing. The bots can rocket-jump.
It reminds me of the days of "Cloud". When every online provider slapped the word "Cloud" on everything all of a sudden, regardless of what technologies actually made it work.
Organic free range grass fed sustainably farmed fair trade climate friendly safe space AI!!
But both cloud and AI technology are real and very influential on software.
Couldn't finish video, put too much glue on my pizza and died.
AI as a buzzword encompassing narrow AI isn't new.
The Pacman ghosts were AI, they just each only had the one parameter.
Video games have been marketing computer controlled entities as AI for decades.
Unless it's a real-life Bender the Robot I'm not interested. Yes, I actually want a lazy beer guzzling robot friend(?)
I just want to point out the hypocrisy of these companies saying all the content for training the models should be free to use and then charging for the end result. It's a little like paying for insurance and then having to pay full price for what you were insured for anyway.
They're technically not saying it all should be free, they've offered several companies million dollar deals for the data.
10:59 it's really easy to gaslight GPT-4o into thinking 2+2=5, and then when you tell it that's wrong the whole thread stops after that
It's actually not. Literally go try it right now, you won't be able to do it.
You're doing a cool thing that people typically call "Hallucinating" when an LLM does it, but "lying" when a human does it! The more you know!
given chatgpt 4o has to be made to accept obvious lies in the name of politically correct there's basically zero way they can ever take that problem out of the code. Gullibility is a design feature to those in charge of it.
What about chat GPT 5,6,7,8, etc?
@@feminaproletarius7815 what are some of these "obvious lies" which are "politically correct" ?
@@xyzgaming450 if you know, you know. no sense arguing with a hallucinating N.N.I.
This video is gonna age much worse than current AI boom.
This is going to be a lesson to manufacturers that the consumer doesn't want automation, we want innovation.
The same way they slapped "Turbo" on everything back in the 80's.
Was thinking the same thing.
So lets market the Smart Turbo 3D AI Cloud
There’s a Porsche Taycan electric car with the word turbo after it as though it has a turbo engine, even though there is no engine
I immediately thought of the Turbo character from Wreck-It Ralph.
@@smellcaster this reply sent me😂😭where to pre-order
3:06 So AI is a hamster. Got it.
5:48 Wait, no. Ai is a monkey.
11:41 Um, AI is a hyperintelligent octopus that knows nothing of bears.
bay boy say he wan his gionmion jiggalasnack
Aaah damn, nearly choked from laughter. +1 internets to you sir!
All of the above but might not be all the above. It's a should or could be, but never quite a definitive yes.
I believe he said a room full of monkeys in fairness
AI is literally the equivalent of 3D TVs. If a tech company wants new buyers, just slap AI into it
"AI is literally the equivalent of 3D TVs" - let's see how this comment dates next year.
Got a screenshot
@@spadaacca it's already proven false today, AI is being used at a huge scale in enterprise businesses already, unlike 3D TVs
If it's ML it's Python, if it's AI it's PowerPoint
video idea: AI branded PC build (there are AI PC case, AI motherboard, AI SSD, AI memory, AI power supply..., AI keyboard, AI mouse, AI monitor)
What about letting ChatGPT-4 omni decide a build? Give it the prompt: make me a list of hardware needed to build a PC for gaming that is around 1000 USD. Now that would be interesting. Maybe they already did that.
@@bobthegoat7090 it's actually fairly good at it, and the more detailed your requirements, the better your results may be. Just make sure that after it gives you the parts list you ask it to double check the compatibility of the components and you'll have a decent result. It's a lot better at PC part lists than a lot of humans that I know 😂
@bobthegoat7090 I just did this, it was a pretty standard high end computer. I don't think it'd be that entertaining to watch them build it. The only odd part is that it suggested an optical drive, lol.
CPU: AMD Ryzen 5 7600X
CPU Cooler: Noctua NH-U12S Redux
Motherboard: MSI MPG B650 TOMAHAWK WIFI
Memory: Corsair Vengeance LPX 32GB (2 x 16GB) DDR5 6000MHz
Primary Storage: Samsung 980 Pro 1TB NVMe M.2 SSD
Secondary Storage: Crucial MX500 2TB SATA SSD
Graphics Card: NVIDIA GeForce RTX 4070 Ti
Power Supply: Corsair RM850x 850W 80+ Gold
Case: NZXT H510 Flow
Operating System: Windows 11 Home
Optional: ASUS DRW-24B1ST SATA 24x DVD Burner, Noctua NF-P12 redux-1700 PWM case fans
@@randomblock1_ wow terrible cpu cooler choice too, otherwise yeah totally not bad at all
@@randomblock1_ some still have CDs/DVDs at home, and with some outdated products software still comes on a DVD, so it's not such a bad idea to have one.
The thing about AI reminds me of what's gone on with cross stitch patterns. People are selling all this "we can make any image into a cross stitch pattern!" stuff, but it's just them scaling an image down to 100x100 pixels and then picking the closest colors that matched the embroidery floss colors available for sale. What these cross stitch patterns have always lacked is the backstitch: to decide what is worth adding an outline to, and where to use a couple out-of-outline stitches to add details otherwise too small to represent: for example, flower pistils or the texture of fur. So I still much prefer working with human-designed cross stitch, even though I am theoretically able to get a computer to make a cross stitch pattern for anything I want.
I've since learned that all AI is like this.
Computer generated patterns are horrible confetti-stitched monstrosities that only look good from 2+ metres away. They make me think of Victorian ladies with Berlin wool work "copies" of Monarch of the Glen.
That's oddly specific and directly related to (what I assume is) a rather small target audience. And yet it's a simply brilliant statement that exactly explains why computers and AI are simply tools to make things easier for humans; by failing to do that in the example above, they have simply proven we are not there yet.
AI is still unable to make a robot pour a glass of water, something our caveman ancestors would have worked out in a few hours. The human brain will always be superior to computers/AI.
@@BurntFaceMan Always? Meh, not always.
we're just in the "prehistoric" era of "AI", it started just 100 years ago, and we know 100 years its nothing
we created a lot of things that today are much better than what we can do "bare handed".
that's what humans are best at, we create tools that surpass our normal capabilities, its our thing
we will all be dead by then, but im sure one day we will have a true AGI with consciousness, that can take care of all the boring shit any human can do, with no error margin
I watched a video on a similar topic, but it was with AI generated crochet patterns. Perhaps you already saw it, but in case you haven't and you're interested, the title is "How to spot fake (AI) crochet so you don't get scammed" by Elise Rose Crochet. It's very interesting. I need to see an AI cross stitch pattern, it's probably wild.
I think that highlighting the successes that Anthropic has had recently with regard to the “explainability” of these large models would have been good
In order to be able to say the next word in an arbitrary text you need to be intelligent. We force the neural nets to predict the next word and therefore force them to be intelligent. A quote from the godfather of neural nets which I find very interesting!
Geoff Hinton knows this subject far better than Linus.
" in an arbitrary text "
It's not arbitrary at all.
It's statistics.
It isn't much "intelligent" when a machine repeats words like a parrot, with zero understanding of the meaning.
like how HIIT is a marketing term for interval workouts lol
I like the Mass Effect nomenclature. "ANI" they call "VI" (Virtual Intelligence). VI is useful but certainly not actually intelligent.
That's what I've been saying. What we have right now is more akin to VI in the ME universe.
Just wanted to comment about this but you beat me to it.
Heavily agree, ME's take on artificial intelligence, with its artificial and virtual split, is still the best depiction of it in media ever imho.
VI is a perfect analogy to how things are right now. At least Avina isn't trying to date us though...
Semantics
Not actually intelligent, huh? You mean like intelligence, but not real intelligence? Something artificial, like some sort of artificial intelligence?
7:48 Let's talk about the IDF's AI-assisted kill lists.
AI can tell you that a recipe for pizza almost always includes dough and cheese, but it doesn't know what pizza, dough, cheese, or a recipe actually are.
Love how their motto used to be “Think Different”, and now they’re chasing the same trends as everyone.
who are you talking about
Apple
siri was one of the first assistants. even though they didn't create it they popularized having assistants on phones. and they were very slow to start talking about AI or adopt it, just like they're slow to adopt anything other new tech, so not sure what you're on about. apple sucks for plenty of reasons that are factual.
kinda like follow the trend has always been brainwash to sell bullshit.
@@reanimationxp you answered your own question. They are 'slow' to do anything 'different' nowadays because they're too worried about the 'apple ecosystem'. They're late to trends by several years with the hope their enormous budget is enough to make them steal everyone's attention.
AI is a perfect pool for the Dunning-Kruger effect. Peak ignorance will contribute to rampant misinformation. The widest tech moat we've seen in my lifetime.
You haven’t used it to do anything have you?
Explained: The conspiracy to make AI seem harder than it is! By Gustav Söderström
Spotify R&D
LOL he shit on them.. but they still tell the lies.
I have seen this personally. Friends of mine who are somewhat tech-literate think "AI" is going to replace all coding/SWE jobs and take over the world. My other friends, who have actually worked in the tech field, think it's an overhyped guessing engine with niche real-world uses
@@Clone895 people whose jobs will be gone in a few years have a reason to cope like that
@@RellisLCT I'm a software dev (>10yrs) and I use GitHub Copilot daily. It saves time, but is nowhere near replacing devs yet. While *someday* I think AI will remove all but the highest-level versions of programming (ie. deciding on what you want), it still has a long way to go. My head is not 'stuck in the sand'; I use it daily, so I think I'd know its limitations more than people who don't. (it can initiate new projects alright, but it has serious problems with retaining code quality for example; one thing it *is* great at though is discovery / faster onboarding into a new project)
If you want an idea for an alternative thermal paste, use car ball bearing grease. I found out it works after I used it.
Not to mention the massive amounts of often nonconsensual data mining that goes into training these models. And the fact that these models are often not run locally and thus requiring you to send personal data to their servers to process.
I'm so glad we have a video to send people to now. I'm so tired of AI branding everywhere when it doesn't even do the most common versions of machine learning or neural processing, etc .
Most things have some machine learning in them since at least the 90s.
I am seemingly out of touch with pop culture enough that I don't remember the last time I heard someone use AI when they meant artificial general intelligence (not counting old TV shows)
7:38
His kids looking at him at a distance: 👁👄👁
Hol up
best comment. nothing in this comment section will top this.
A different kind of "my paste"
Lmaooooooo
This video had absolutely the worst segue. Good work, Linus! Don't die, please.
or, at least, when Linus 2.0+ clone could be manufactured! :D
Quite a few people believe that large language models are just statistical functions that don't reason and don't model reality. I know they're wrong. This explanation should convince just about anyone who understands the language I'm using.
Logits in transformers are not traditional statistical functions because they don't represent probabilities directly. Instead, they're used as a way to squash the output of the transformer's attention mechanism into a form that can be used by the softmax function.
In traditional statistics, logits are often used to model the log-odds of an event occurring. However, in transformers, logits are used more like a linear transformation to prepare the output for the next layer.
The softmax function is then applied to the logits to get the final probabilities. This means that the transformer's attention mechanism is effectively computing a set of values that can be interpreted as log-odds, but they're not directly representing probabilities.
This subtle distinction allows the transformer architecture to operate more efficiently and accurately than traditional statistical models that rely on traditional probability calculations.
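The logits-to-probabilities step this commenter describes can be sketched in a few lines of plain Python (a minimal illustration only; the logit values below are invented, and a real transformer produces one logit per vocabulary token, typically tens of thousands of them):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability; softmax is
    # invariant to adding a constant to every logit, so the result
    # is unchanged.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores (logits) for three candidate next words.
# Higher logit -> higher probability after softmax.
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
print(probs)  # three values between 0 and 1 that sum to 1
```

This is the sense in which logits are "not probabilities directly": they can be any real numbers, and only after softmax do they become a proper distribution over next tokens.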
I did not understand the language you were using and was therefore unconvinced.
Things: normal reaction
Things "AI": *hyping*
AI is a relevant descriptor for products
5:05 The cancer detection comes up every time, but it's not so simple. The problem is that neural networks are black boxes, you don't 100% know how they come up with their answers.
I read about a study where an AI was supposed to be better at recognizing cancer than human doctors, but in the end it turned out that the AI was cheating by recognizing additional data in the x-ray images the study used for training: older x-ray images and x-rays from certain hospitals simply had a significantly higher likelihood of showing cancer, which gave the AI an advantage. That advantage obviously disappears completely once it operates in the real world. So if the AI had been deployed like that, it could actually have been way worse at detecting cancer than a human doctor without anyone knowing it.
I'm a road safety engineer, and your points on "AI" in vehicles are very true. They are not even close to a level where it would be safe to let them loose on the road network, for so many reasons. Especially in Europe, where we have ancient, evolved roads with all sorts of surprises.
But in typical ragebait fashion, you guys are pretending like car manufacturers call their safety suite "AI". They don't.
I hope you don't train hamsters by punishing them.
AI now is like "HD", "3D" and "VR" were in the past, just words to stick on the packaging. I still remember seeing an air conditioner proclaiming it had "HD" some years back.
At the company I work for, if they create a bot with pre-programmed answers, they call it AI.
That’s so ignorant 😂🤣
Neuro-sama will never be a lie.
keep dreaming, it's an LLM with Azure TTS
There is a short of Neuro-sama trying to spell "Hi Anny" and it's the funniest shit I've ever seen about the current state of AI 😂
Neuro-sama would never lie to us
I can't wait for a Linus and Vedal collaboration, he really needs those H100s
*wink*
Thinking as much as the Dreamcast did in 1999? What does that 2nd to last flag mean?
Linus, you're about a year and a half late on "It's not AI"
Such a stupid thing to be wrong about too. I'm pretty sick of people trying to redefine words like AI and liberal to sound smart.
Bro dropped the hardest thumbnail and thought we wouldn't notice
For someone using a lain icon I’m surprised you watch this garbage.
Read a comment that said “Nothing convinced me of the existence of a soul more than AI Art”
I appreciate you using your platform to call out these malicious tech companies.
One thing I wish you'd spent a little more time on however is the training data. I'm an artist and all of my work and the work of my peers is now being used to replace us. Our copyright over our own work was completely ignored as the industry tried to move too fast to be stopped - they fully know what they're doing is wrong which is why in interview after interview they'll dodge the question of where the training data came from and instead use yet another buzzword: "publicly available". As if putting something online makes it royalty free. Anyone who parks their car on the street better be careful, because that's publicly available too.
Even if you're someone who doesn't care about artists or creatives and thinks we should all "get a real job", I'd like you to know that there have been illicit images of minors found in these models, and people are using them to generate more. If you've ever put pictures of your kids online, they'll be in those models too. It doesn't take a huge leap to guess what's going to happen when this algorithm needs to figure out what a child looks like in order to produce new illicit images: pictures of your kids are its reference material.
I don't remember exactly which channel posted about this exact thing, but they talked about it back in 2017ish: the levels of "AI" and what to expect from each level. You did a great job summarizing this.
Yeah and I made a personal voice on my iPhone and that shit is scary, it’s a bit buggy but sounds really good for what it is😭
10:50 the gaslighting on display here is absolutely masterful. Had me double checking if strawberry actually had 3 ‘r’s in it
it does. StRawbeRRy.
I swear I thought the AI was right and that they were gaslighting it into believing it was 3 lmao
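For anyone second-guessing their own eyes after that exchange: the counting task the chatbot fumbles is a one-liner for ordinary code, which is rather the point of the thread.

```python
word = "strawberry"
count = word.count("r")  # count literal occurrences of the letter
print(count)  # prints 3: st-R-awbe-R-R-y
```

A deterministic string search always gets this right; a next-word predictor only sometimes does, because it never actually counts anything.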
This guy gets paid big dollars to feed you people bad info
Total FAIL right out of the gate by using science fiction to define your terms. It was a little like having 'Quantum Leap' define Quantum Electrodynamics.
In relation to driving edge cases: the collected data and synthetically generated data that Tesla uses to train its models will make them familiar with, and trained to deal with, situations no human driver would be. When trained as a pilot, we were drilled in the idea that the right actions need to occur instinctively, so we practiced things like stall and spin recovery. In driver education the basic requirement is to basically steer and control the car; there is no training in the same kind of emergency conditions because it is VERY dangerous. But Teslas are training on exactly these in silico. Machines will be able to react far faster than the human nervous system. Machines don't need to be perfect, they just need to be better than humans. And humans SUCK at driving. They get distracted, tired, drunk and high. They get impatient, take crazy risks, get crazy angry and sometimes drive the wrong way down a highway. By far the most dangerous thing on the road in future is other human drivers.
12:12 - You made a Linus LORA for Stable Diffusion and it's now out there somewhere next to Pony Diffusion XL, an unfortunate weight-merge just waiting to happen.