in computer science in HIGH SCHOOL they made a big deal about the difference between artificial intelligence and machine learning. its like machine learning was completely removed from the dictionary in the past 2-3 years
@@michealcondry5384 So according to you guys the real AI would be as useless as a baby and we would need to send them to school to learn all the stuff needed to help us? :/
@@3dcomrade I mean, Linus explains it in the very video you're watching... Just watch the video for a primer and we can clear up any details. The previous explanation is misspeaking to the point of being inaccurate. "AI" does not exist presently. At all. Anywhere. There are NO actual "artificial intelligences" out there, yet. "AI" is a general term that marketing and executive wanks misuse. Entirely. "AI" is a concept of a system that can self-teach (learn) new things it hasn't seen before, on its own. That does not presently exist. "Machine learning" is where some engineer _sets up explicit scenarios,_ like Linus talking about the person trying to sit in a chair. At first, the machine basically just permutates through possible methematical states, and adjusts things to be more and more in line with what it's _specifically told by the engineer_ is correct. It has ZERO idea what it's doing or why at ANY point in time, even _after_ it's "learned" to the point of being highly accurate at the task. It's LITERALLY just linear algebra spitting out numbers. There is never at any point anything remotely close to a "thought" in the system, unless you extend it out to the human engineer setting it all up. "AI" would require the computer to set up those tests, confirm the results, and set up success conditions, _all on its own._ Ideally, while being able to explain what and why. No system currently does that with anything remotely approaching a complex task.
I work in AI developing models. The entire industry is currently filled with MBA jargon and people in suits trying to collect money from investors. In the future, looking back on this decade will be super painful.
I also work in AI. Granted a lot of companies are using AI as a silly marketing term but that doesn't mean there hasn't been massive innovation over the last few years.
yeah, the marketing nonsense is absurd. I do find it disappointing that Linus is claiming LLMs are just faster versions of old tech. They are far closer to the entire language system of a human than they are to ELIZA. And GPT4-o isn't an LLM with its new multi-modality. These models are neither everything some promise them to be, nor as limited as many hope them to be. You think we're actually going to crack AGI as the robots are put to work and add their data to the hoard @ducks742 or do you think we'll run out of processing power first?
you're right, most of marketing and advertising advice out there is to trick, deceive or otherwise manipulate people into buying products. I've chosen to sell honestly, with products that I believe in and that sell themselves.
As a digital marketing professional, I think it’s somehow even worse than that. Nowadays, we don’t even try to deceive (or communicate with) humans anymore - most of what we do, most of the content we create is actually for Google’s crawler robots and indexing algorhythms. The absolute first priority is for the machine to like your content, everything else is secondary becuase if you can’t please the algorhythm, humans can’t even see what you put out there, so they don’t even have a chance to like or dislike it. To be honest, my career goal is to get to a point where I can use the skills and tools associated with digital marketing to support an organization or cause I believe in. To build a financial background secure enough to be able to work for NGOs or non-profit projects even if they don’t pay particularly well.
On the other hand, let´s say you personally are selling a product/service (maybe shooting weddings) But there is 10 other people doing the same thing. You kind of have to tell the potential customer, that you have the most modern tech and that you can do the best job, for a better price ratio. Even though you know that you are probably not the best. The solution is to let only one entrepreneur have an absolute monopoly? Is there anything that can be done?
I read "The Worlds I See" by Fei Fei Li and shared her sadness/ambivalence at the fact that further developments in AI now rests with big corporations with lots of money, access to data and computing power, and no longer the passion projects of curious scientists. Professors in universities just can no longer keep up with big corporations and ended up working for them. She worked on a project which uses computer vision AI to guide doctors/nurses to properly wash their hands and perform other hygiene procedures correctly. One sentence from the doctor who participated in this project strikes me, he said that the CEOs only talk about AI replacing people; whilst scientists like Fei Fei Li actually use AI to help, not replace him
I said the same thing the other day to people is that every "Innovation" from all public companies think of it as an ad for their stock market, and now we have the "AI" words for their ad business, which kinda more concerning since can be more privacy nightmare than ads tracking. xd
My org recently named a new "Chief AI Officer." He's got a masters in marketing and a GPT subscription. Apparently, that's all you need to get to the C-suite nowadays.
That makes sense, if he said he had 20 years of experience in AI then you know he's faking it. They mainly needed someone who was "with the times". I've worked at places where I was in charge of something I wasn't qualified for because everyone else would have been worse at it.
@@sarahberkner "AI" has been on the go from even before the 1960s, Perceptrons were developed in the 50s I think, one could easily have 20 years of experience in machine learning, language modelling, generative models -- which is what people are calling AI now. Not that many people do I'm sure, but still.
Most of my graduate studies and Master's thesis involved AI and Deep learning and I cannot begin to count the number of times friends/coworkers (who studied something completely unrelated like business or marketing) have tried to tell me how AI will solve everything and that I "just don't understand it" whenever I explain why their idea with AI wouldn't work
Imagine a man sitting at a computer. A series of Chinese symbols and characters appear on his screen. He spends his time and energy rearranging these symbols that he knows nothing about and has no context for. Sometimes a buzzer blares and he has to try again, but sometimes a bell rings and he gets to move on. After a long time of doing this, he's gotten pretty good at determining the pattern of the symbols that generally result in a bell instead of a buzzer. Let's presume you can understand Chinese. You walk up to this man one day and ask him what he does. He explains that he plays this pattern recognition game where arranging these symbols in a way the computer likes lets you continue to the next one. On his screen in Chinese is the question "What is ice cream?", and you watch as he responds in perfect Chinese "Ice cream is a cold dessert food made of ice, sugar, and either milk or cream." You ask him if he knows what the symbols mean and he has no idea. That is machine learning.
_"rearranging these symbols that he knows nothing about and has no context for."_ , that's the big problem the Chinese room experiment is pointing to. Because how would we know? He doesn't learn the context in the proces, how so? Would one be able to make a perfect translation without knowing the context? And even if it is the case that he doesn't, but he would still be able to produce perfect answers to questions, why would that make the answers useless if people would still be able to understand the answers; to curate the good ones? If that's the case, why would we not call those answers intelligent?
[I don't get why people like that "Chinese room" thought experiment] As if, if a model like that would make 3 stupid answers and one good one to a question about a cure for cancer, people would be rolling their eyes like: "Pffff this thing is stupid, it doesn't even understand the suggestions it makes1", well, _maybe_ , but that thing just got a cure for cancer. People make bad guesses too before they make a perfect one, who cares.
No it's not. If he did that a billion times with billion different contexts, he WOULD understand chinese. Deaf and blind people from birth can still understand the concepts of a picture or sound.
Yup... bought a new fridge and apparently it's fucking sentient. Got ARTIFICIAL INTELLIGENCE plastered on it. Must be shy though, hasn't said a word so far.
I mean, at this point an appliance that can connect to the Internet and run a few apps is a reasonable definition of a "smart" appliance. Usually I feel like it's fairly clear what you're getting, although I guess there's a range of ability.
@@danieljensen2626agree with you, just the OP is saying that the term AI which ought to be a pretty dang impressive description of something artificially in a similar category to human intelligence, is going to get relegated to being defined as something far less impressive. Linus made the same point. Need a new term to represent the farther future of intelligence that comes closer to human.
AI is really bad. If you know anything about a topic, both GPT and Gemini fall apart. 95% of the time, its making things up. Semi advanced things like the effectiveness of spinosad as a pesticide for plants. Or a viroid called HLVD thats impacting plant growth. Or questions about auxins that promote root development, its always making things up in regards to these topics. Anything that goes beyond surcafe level "write me a better ending to my tv show" kind of stuff ends up giving you incorrect info. The worst part is, most people dont catch on.
Essentially, corporations chose to muddy the definition of AI, for profit. Just like with Hoverboards. And now we need new words for those old things we envisioned...
How do we make sure this 'muddying' of words doesn't happen? Just call things more specifically and don't give a hyped up name? Or keep on doing what we're currently doing which is 'invent a new word for the previous expectation of the technology'?
@@fireninja8250 It'll always happen honestly and it's not solely related to tech so we can't even stop it. AI got a buzzword for tech corps (and the average joe) alike, if you think outside of the tech space we have a ton of words that have been muddied and/or re-defined, be it g a y, white knight, simp (with especially that one still having that incel usage taste every time you read it) or other examples that we don't even think about anymore. AGI will just be as normal in usage as some of the other things have become for its re-definition over time.
Term term "AI" has been thoroughly ruined now, being applied to everything. My clothes washer has "AI". The more often you pick a program, the higher in the list in the shows up. ARTIFICIAL INTELLIGENCE.
This is a conversation that needs to continue happening. I’ve really struggled to explain to people that “AI” isn’t AI, and more importantly why it matters that we distinguish between AI and ML. In a way, it feels similar to the whole USB-C issue where the vast majority of the public didn’t understand that just because a connector is USB-C doesn’t mean that it’s fast, it just means that it’s USB-C and it’s important to distinguish between USB protocols vs USB connectors
You lost me with the USB part. I don't work in tech but it seemed obvious to me that AI is not sentient or is self-aware and doesn't have evil intentions because it doesn't have any intentions at all, it's basically regurgitating information and humans still need to weed through it. Some people find this hard to grasp. However I hadn't thought about the fact that "artificial intelligence" isn't an accurate description. I think you could argue that it is accurate, in the same way that an artificial flavor doesn't taste quite the same as the natural flavor; artificial intelligence means it's like a substitute for intelligence.
@@sarahberkner Any such explanation of AI that you come up with can equally be applied to the human neurological pathway. One can just as easily argue that humans regurgitate information and give off the illusion of self-awareness.
@@spadaacca not really,, as linus says in the video AI doesnt actually understand what its doing. you can ask an artist to breakdown a drawing, and theyll tell you what they did, how the body interacts with the enviornment, etc. you can do the same thing with a writer, you can ask them why they wrote it, how they wrote it, etc. you try to ask an "AI" to breakdown anything it makes, and it wont understand it. AI doesn't iterate, humans do
@@jellyloab Is that different from us humans? We make about 35,000 decisions each day, and would struggle to explain our reasoning for most. For the ones we make consciously, we commit our thought processes and feelings to memory, allowing us to explain them after the fact. Current LLM's do not commit any sort of thought process or internal monologue to memory, and so can only explain its reasoning using its previous output as context, i.e. it is not actually recalling, but creating an answer using its previous output as a reference. This does not mean, however, that there was no "thought process" (by which I mean "calculation") that went on to create the original output, nor is it a good measure of intelligence. Linus's example of the AI struggling to count letters is also quite misleading: due to how current LLM's are designed and trained, they excel at pattern-based tasks, but tend to struggle with precise manipulation of symbols (hence why math can be so precarious too.) I'm not exactly sure what point Linus is trying to make here--is a child not intelligent if he struggles to count?
@@jellyloab Funny you should mention bodies interacting with their environments. I want you to do an experiment: go outside, take a jog. But tell your brain not to speed up your heart rate. Keep it at resting heart rate. Does it listen to your executive commands in the service of our supposed free will? I think a big part of this conversation leaves out just how little free will humans demonstratively posses. In that light, most of what a human is, is automation. I'm taking a bike ride today. That I decided to do - I decide to turn the pedals. But how my body accomplishes this task on an anatomical level isn't up to me; not one bit.
Hard agree with you. 👏🏿 The moment they flashed all of the other Buzz words that have been used over the past few years in the tech industry, all the other crazy stuff that has happened in the tech industry Flashed in my head at the same time. Especially 64-bit. That one got a laugh out of me.
Marketers picked it up, but it was academics who came up with the term and used it to define a subfield of computer science thst includes narrow AI. Also, machine learning itself, which used to be called pattern matching. It,'s not just industry on this hype train.
Machine learning engineer here (Image generation focus). I am so glad a major youtube channel finally got it right, rather than fear mongering. The amount of horrific information even from sources that should be educated on tech like this is truly disheartening. Thank you for this video, which seems to be a rare one with a relatively neutral look into a set of technologies that will continue to shape the world for many years to come.
Can I ask you a quick question as another programmer who's dabbled in ML? It seems to me that AI/ML is really just data science (or at least data-driven development). My understanding is that it's basically just gradient descent used to optimize a function that maps inputs to outputs based on some loss function. I learned how to fit data to a function via gradient descent in high-school statistics, and from what I see, fitting a 10,000 weight convolutional filter to a dataset isn't really all that different conceptually than using Excel to create a graph with a least-squares regression curve if you ignore the difference in dimensionality. Do you agree/disagree with any of that? People keep saying AI is a bad term and people should call it ML instead, but even ML seems like a bit of a stretch if it's just data science curve fitting with some fancy gradient descent on top (albeit with a 10,000 dimension curve fit to millions data points). Seems to me the only reason people use the term AI/ML is to make it easier to get VC funding, because data-driven development doesn't sound cool or sexy.
I appreciate hearing someone say the actual truth about “AI”. Try doing anything novel with it and it can’t. It’s just an amazing pattern recognition and replay system.
You'd be surprised by how much our own "novelty" in arts and technology is just rearranging and minor variations of existing components and ideas AI is really not doing something too different to what we do. Try having a completely new thought, and then realize it was most likely a reiteration of something you already thought or read or heard.
I mean, AI has always been a very general term even before all this AI craze. NPC behavior in-game was called AI, so was an AI general in a strategy game.
I think people can grasp the difference in the meaning of the term when talking about it in very different ways e.g. npc game AI in a AAA game vs AI that's designed to drive your car.
The thing is, video game AI could arguably be considered in some forms to be more intelligent than this. Most NPC AI is based on state machines, which basically considers information about its surroundings to switch between pre-defined states. You could use machine learning to enhance that declarative programming by giving higher weighting to attack patterns that appear to be successful to make those states more likely. So called "generative" AI just does this on a pixel or character level, making specific words or patterns of pixels more likely based on input keywords, which means all it does is spit out averages of the input data. So the marketing term "AI" is actually based around the cult like idea of "emergent" programming, basically that if we throw enough data at the machine eventually it will stop averaging and start programming itself. Instead what we get is a lot of smoke and mirrors from people obsessively trying to coach these averaging machines to LOOK like they're creating novel outputs, while simultaneously stealing any and all data on the web to fuel their fraud.
@@benflightart State machines are just a good way of organizing a ton of if statements. It has nothing to do with intelligence. The behavior of generative ai is fundamentally learned behavior and it's definitely part of how actual intelligence works, it's just not the whole answer.
@@gnanasabaapatirg7376 I don't know about you, but I didn't know for quite a while that Paint 3D could in fact be used to create 3D models. Thought it was just a gimmicky rebranding, though some may still say it's a gimmick.
That last paragraph is really soul chilling... some definite Cyberpunk 2077 vibes there, and not in a good way... "The folks in charge of helping us deal with all of this have a lot less funding than the ones who are trying to sell it to us"
Yeah, it's getting way too easy for bad actors to deepfake evidence that can have very chilling impacts. Wanna get rid of a political dissident? Just fabricate some video evidence on a CO2 belching AWS datacenter. Want to track a group of marginalized people? ML-powered face recognition software and the ever present cameras and GPS receivers with mobile internet connections makes that trivially easy. I honestly struggle to get excited about technology anymore because it seems like any developments (especially machine learning and ever-present telemetry spyware devices) are only ever bad for the working class. There may be some positive applications in the medical field or logistics management, for example, but overwhelmingly it's cars that report driving habits to insurance companies and law enforcement and have "autopilot" systems that are known to kill people (in no small part due to cost cutting), or buggy software, ads, and spyware in everyday appliances that used to at most have some simple microcontroller code that did exactly what it should and nothing else. I'm starting to think the Matrix had it right; maybe 1999 was the peak of human civilization (at least from a technological perspective).
That's also an incorrect take. The ones 'helping us deal with it all', if I had to guess Linus political view, is the government. The last think the government is lacking is 'funding', and they will do their best to pass on more useless regulation, mostly with the intention to get more 'funding'.
It's pretty ridiculous and SCARY AF if we're letting "AI" go about important tasks when it can't even tell us how many times a letter appears in a word.
Remember to add glue to your cheese so it sticks to your pizza better. Oh, and also eat one rock per day. (those two examples were given as genuine recommendations by google AI search help)
as someone who has studied for a university degree in AI, this whole hypetrain is extremely infuriating to me. Imagine you're a physicist and every physical product is called "black hole" because *technically* all mass has gravitational pull. Similarly, everything is called "AI" now because it has more than 500 lines of code.
@@roymarshall_ Unfortunately the wreckage of old hypes doesn't magically go away, and will haunt everyone affected for decades to come, albeit in sanitized form. We're still dealing with fallout of the OOP hype in programming today, and that was, what, the 70s? And most programmers today would likely not even recognize which parts are the genuine concepts, and which parts are just holdovers from decades-old hype that have remained in use because "that's how we've always done it".
If, as the paper suggests, an intelligent octopus faced with a bear attack doesn't know how to react, don't you think that if a human were to reincarnate as an octopus in the same scenario, they would respond similarly? We could perhaps improve the octopus's response to be scared of unknown situations. Assuming that current AI is somewhat similar to humans based on this idea, aren't we essentially searching for something god-like? If AI could provide correct answers to any scenario, no matter how absurd or unexpected, could a human even handle it? For instance, if tomorrow everyone dies and you get shot to Mars, entering the 45th dimension where Mars is habitable, but you must return to 3D because in the 45th dimension you're a disabled person with no senses, and the 45th dimension's version of Elon Musk keeps you as a pet in his belly pouch called '&%&^5757,' how would humans solve a question this? And, If AI could take even one step toward solving this scenario, as Linus suggests, by using context clues to make sense of an absurd situation and lead us to the correct answer, then wouldn't that AI be able to solve any issue, no matter how absurd? At that point, wouldn't it be considered not just software, but something god-like? Or is AGI simply about quantifying all five human senses-vision, hearing, touch, smell, and taste-in numbers and then training on the thousands of ways humans have developed machine learning techniques (perceptrons, neural networks, U-Net, transfer learning, Gradient Descent, Stochastic gradient descent, PSO, Bird Swarm Algorithm, Transformers, and Thousands more)? What is it what is AGI is it a search for GOD or is it making a Human so perfect its basically GOD ? This is some Really Mind Bending Shit......
This video underestimates transformer models' potential. Key points: 1. Massive scaling (100x) could lead to AGI-like abilities (see the paper on GPT-3, Chinchilla scaling laws). 2. Tokens aren't just text - they work for vision (ViT), audio (Wav2Vec 2.0), and more. Omni can handle multiple variants at once and their voice model is held back until autumn due to safety reasons. 3. Robot learning with LLMs is promising (PaLM-E research). 4. Efficient fine-tuning (LoRA) enables quick adaptation. 5. Consumer hardware advancements (e.g., 4090 GPU can run local post-training with fine-tuning) making robots that learn offline more feasible. Some people overestimate what AI is today. I think most people overestimate it today but underestimate what it will be in 5-10 years. Recent work on reasoning (chain-of-thought) and memory (RETRO) addresses limitations. Current approaches may be closer to AGI than implied. Happy to provide paper links if interested.
The thing is bottlenecks. The layman AGI example starts with "Imagine you have infinite processing power, a perfect model of the world, and want some stamps...", and that's the tell. Yes, advancements have been made. And impressive ones, at that. Yet the major bottlenecks (data and processing power) are still there. Hallucinations are still fundamentally tied to how the system works. There are biases in what even _can_ be used as training data. Example: how do you give the machine the concept of "sad"? With words? With pictures? With an equation? Even if you mix all three, it's still incomplete. There are stuff that cannot be losslessly compressed into data. And any data-driven system will also have that limitation. However- those gradual, discrete advancements you've listed are not characteristic of exponential growth. They're characteristic of a technology that's maturing into a plateu. That's why I don't buy the "it's gonna get better in 5 years!". It already _has_ been 5 years, and GPT4o is not Hal 9000. And that's just judging the technology by itself. If you start talking about the market, then it's another can of worms. OpenAI is not profitable, it survives off investor money. If the hype stops, everything breaks down. Sam Altman makes bonkers out-there promises that are straigh up fiction. Google's search AI was embarassing. The training data for most models are built on copyright theft. And it goes on. That's why I don't buy the " what it will be in 5-10 years". Thanks for coming to my TED talk.
Have to think about a quote from Edsger Dijkstra: "The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better."
From UA-cam channel Explaining Computer. "The 2nd most intelligent specie on the planet is the dolphin, and we never expect dolphin to imitate a person..."
The problem with this quote is very simple. Despite our millennia of accumulated knowledge our own minds are by FAR the most advanced and capable thing we know of. And we barely understand just the most basic principals of their operation. Nothing is more capable of handling problems and adapting to new complex situations that the human mind. And by definition the capability of creating something better than a human mind, must include the capability of creating something as good as a human mind.
@@sunlaThe point of the quote is to ask why are we trying to make computers do what we can do, rather than what we CAN'T do? I.E., why are we trying to automate the human spirit with 'art generation' and similar things rather than use it for the things that we just can't really do like immensely complex simulations, data processing, etc? Now just to be clear neural networks are in fact being developed for loads of genuine scientific applications, but a lot of the mainstream tech buzz isn't about that but about gimmicky things that aren't actually helping the world at large. The question basically is why aren't we focusing on doing the things that would take us as a species far, far too many man hours to do.
Nobody realizes the news is lying until they talk about something you're knowable about. Then we go back to thinking they're experts on everything else, lol.
no one doing the news are experts on that subject - they just interview "experts" which tend to lie to you/them or don't bother actually specifying it further because they're... well either part of the company, don't actually know what they're talking about - or simply forget that they should specify things for the average viewer. kinda like that person that was (or still is? don't know if that stopped ever since he got called out a while ago) giving cyber security tipps to companies and gets invited to train their people, while literally providing "proof of his work" with issue report ids - with the exception that he's not listed on any except for one (and blows the issue up bigger than it was) and no one listed on the other ids even knowing him - same concept, the people tasked with hiring someone for that don't know about it as the stuff they have to know about is an entirely different topic and just assume that it's correct all while not having the time (or resources) to contact anyone listed/read through more than the first.
@@Unknown_Genius Some stuff is such a simple search to find. I've seen absolutely ridiculous claims made by anchors that anyone even remotely knowledgeable wouldn't have made. To your first point, I can think of many examples of anchors talking out their butts like experts but you're probably right that they're just repeating what they were told without digging into the topic whatsoever.
@@Unknown_Genius what you are missing is that news isn't news anymore, it is entertainment, and saying the facts is boring, and doesn't get ratings. All 'news' cares about now is viewing figures, so BS'ing about AI and everything else is ok, as long as when they go to ads there are plenty of eyeballs still watching.
Wanted to mention Kasparov used 20 watts of caloric energy to play chess and deep blue used 1400 to do the same task. This difference in energy efficiency only grows with more powerful A.I. systems that use megawatts of power to do the equivalent task a human can do with a hamburger worth of calories.
Where did you get a figure for Deep Blues power consumption? Tried to look or it and turned up nothing, the only figures I could find were the max draw of the 30xPPC604e but that's unlikely to be the bulk which I'd guess would be the custom VLSI stuff or RAM. But bringing Deep Blue into this is like arguing against public transport using gasoline or diesel engines as opposed to horse buggies on the basis of the fuel consumption of a Ford Motor co. Model T.
We humans don’t usually make our own food, it’s more fair to factor in the energy used by the tractor used to harvest the food and all of the machines in the processing plants and all of the energy used to distribute the food with ships and trucks and other people.
@@tomh9553then let’s factor in all the energy required for building a power plant when we talk about ML models energy consumption. Not to even mention that a model needs people to build the power plant
I dunno, I was quite impressed by Half-Life's AI behaviors for both lone enemies and squads. Even the cockroaches had an idea on how to behave somewhat convincingly.
I don't remember exactly which channel posted about this exact thing but they talked about it back in 2017ish about the levels of "AI" and what to expect from each level. you did a great job in summarizing this.
Been a disaster in university with group projects. Half the team usually doing all their work with gpt rather than having an original thought themselves
@@phatwila the ones who use LLMs to generate code for study projects can't even tell if generated code is good or bad. Also if they can't do even simple things on their own how they gonna program something complicated that LLM can't handle ?
It's actually not. Literally go try it right now, you won't be able to do it. You're doing a cool thing that people typically call "Hallucinating" when an LLM does it, but "lying" when a human does it! The more you know!
It reminds me of the days of "Cloud". When every online provider slapped the word "Cloud" on everything all of a sudden, regardless of what technologies actually made it work.
Luke and Linus were the number one promoters of AI, talking about everyone getting replaced. They were so gitty to never hire another software engineer again.
we are getting closer to a perfect copy of what a human seems to be, that is not a agi which is an absolute terrifying thing, but for the average person, if AI stopped at a simulacrum of us, we wouldnt care...and honestly it would probably be better for our species survival if we dont go making AI that can combine old and new concepts to come to a new answer, we dont even use that ability for good
Why would this infuriate you? Why would you be so sure it isn't? I get frustrated with the AGI hype train too but plenty of very well trained professionals are considering this possibility every day. Why would you insult your fellow laymen because they choose to listen to a different professional than you is misguided?
@@John_JackThe point is that they were geniuses at fooling others to buy their scam. I disagree, as I do not find it that hard to scam less knowledgeable people into buying useless tech. I could probably do it, the difference is that I was raised properly and wouldn't want to.
I was a computer science student for several years, and I learned a lot of ins and outs of AI. I eventually left, at least in part, because I couldn't reconcile what the tech can do with what people were speculating about. Every time I hear someone speculating on the future of AI it makes me want to pull my hair out because they just don't get it. IT DOESN'T KNOW FACTS. IT CAN'T. THAT'S NOT HOW IT WORKS. The idea of "knowledge" doesn't really even apply. It's all about training it through iteration to come to a conclusion based on the information it is given. It doesn't know "facts" as much as it can recognize words arranged factwise. This isn't me saying it's all junk. A friend was working on reading medical scans with AI to identify cancers. There is a real future that the tech has. It already does a ton of cool stuff. I've worked with the kind of handwriting identification tools they use to sort mail. A different friend was working on different industrial applications like product defect finding. But it cannot replace human intelligence, and it must not replace human agency. People will defer personal accountability to automatic systems and wash their hands of the consequences.
5:05 The cancer detection comes up every time, but it's not so simple. The problem is that neural networks are black boxes, you don't 100% know how they come up with their answers. I read about a study where an AI was suppose to be better at recognizing cancer than human doctors, but in the end it turned out that the AI was cheating by recognizing additional data on the x-ray images in the training data the study used, older x-ray images and x-rays from certain hospitals just simply had a significant higher likelihood of having cancer which gave the AI an advantage. This advantage obviously completely disappears once it operates in the real world. So if the AI was deployed like that it could've actually been way worse at detecting cancer than a human doctor without people knowing it.
I'm so glad we have a video to send people to now. I'm so tired of AI branding everywhere when it doesn't even do the most common versions of machine learning or neural processing, etc .
Most things have some machine learning in them since at least the 90s. I am seemingly out of touch with pop culture enough that I have I don’t remember the last I heard someone use AI when they meant artificial general intelligence (not counting old TV shows)
DankPods bought a rice cooker that touted that it was "AI" powered. Opening it up, it used the same mechanical magnetic latch system as any cheap rice cooker from the last 40 years.
It’s been introduced into every facet of life. You’ve barely seen the tip of the iceberg. Tech boom 1950-2000 . This is gonna change everything more drastically much more quickly.
The modern world feels like everything is at least 60% a scam. The impact of these models seems to have been pretty heavy in displacing programming jobs though, right? Honestly, there is so much interference BS information out there now (lots of "AI" generated content no doubt) that it is hard to know what's real anymore. Like the four text messages I get from random numbers every day trying different versions of "hello," or "I'm worried can you please let me know you're ok?"
I just want to point out the hypocrisy of these companies saying all the content for training the models should be free to use and then charging for the end result. It's a little like paying for insurance and then having to pay full price for what you were insured for anyway.
You guys should translate this video! I wanted to show it to my spanish speaking parents, but I can't! I guess UA-cam subtitles will have to do, for now
Just download the video, upload its audio track into GPT-4o, have it translated to Spanish, done. That's exactly what state-of-the-art AI is extremely good at.
The thing about AI reminds me of what's gone on with cross stitch patterns. People are selling all this "we can make any image into a cross stitch pattern!" stuff, but it's just them scaling an image down to 100x100 pixels and then picking the closest colors that matched the embroidery floss colors available for sale. What these cross stitch patterns have always lacked is the backstitch: to decide what is worth adding an outline to, and where to use a couple out-of-outline stitches to add details otherwise too small to represent: for example, flower pistils or the texture of fur. So I still much prefer working with human-designed cross stitch, even though I am theoretically able to get a computer to make a cross stitch pattern for anything I want. I've since learned that all AI is like this.
Computer generated patterns are horrible confetti-stitched monstrosities that only look good from 2+ metres away. They make me think of Victorian ladies with Berlin wool work "copies" of Monarch of the Glen.
@@BurntFaceMan Always? Meh, not always. we're just in the "prehistoric" era of "AI", it started just 100 years ago, and we know 100 years its nothing we created a lot of things that today are much better than what we can do "bare handed". that's what humans are best at, we create tools that surpass our normal capabilities, its our thing we will all be dead by then, but im sure one day we will have a true AGI with consciousness, that can take care of all the boring shit any human can do, with no error margin
I watched a video on a similar topic, but it was with AI generated crochet patterns. Perhaps you already saw it, but in case you haven't and you're interested, the title is "How to spot fake (AI) crochet so you don't get scammed" by Elise Rose Crochet. It's very interesting. I need to see an AI cross stitch pattern, it's probably wild.
Just wanted to comment about this but you beat me to it. Heavily agree, MEs take on artificial intelligence with it's artificial and virtual split is still the best depiction of it in media ever imho.
I was going to comment something about how I've gone back to calling it machine learning, but my wife said you sound like Bob the tomato, so I'm commenting that instead.
A very simple litmus test for AI is if it can tell you when it doesn't know something. If it hallucinates the answer rather than tell you that something was not in its dataset then it is nowhere close to AGI.
12:12 - You made a Linus LORA for Stable Diffusion and it's now out there somewhere next to Pony Diffusion XL, an unfortunate weight-merge just waiting to happen.
Dunno. A tool is just as good as the user. For me it does wonders. -It is life changing. I pay $30 monthly for GPTPlus subscription and GithubCopilot. -Probably would sell half my soul for it.
It can write some basic code pretty well, not always efficiently but it can do it. Anything beyond that and it starts making fundamental errors. Easier to use google.
What about letting ChatGPT-4 omni decide a build? Give it the prompt: make me a list of hardware needed to build a PC for gaming that is around 1000 USD. Now that would be interesting. Maybe they already did that.
@@bobthegoat7090its actually fairly good at it, and the more detailed your requirements the better your results may be, just make sure that after it gives you the parts list you ask it to double check the compatibility of the components and youll have a decent result, its a lot better at pc part lists than a lot of humans that i know 😂
@bobthegoat7090 I just did this, it was a pretty standard high end computer. I don't think it'd be that entertaining to watch them build it. The only odd part is that it suggested an optical drive, lol. CPU: AMD Ryzen 5 7600XCPU Cooler: Noctua NH-U12S ReduxMotherboard: MSI MPG B650 TOMAHAWK WIFIMemory: Corsair Vengeance LPX 32GB (2 x 16GB) DDR5 6000MHzPrimary Storage: Samsung 980 Pro 1TB NVMe M.2 SSDSecondary Storage: Crucial MX500 2TB SATA SSDGraphics Card: NVIDIA GeForce RTX 4070 TiPower Supply: Corsair RM850x 850W 80+ GoldCase: NZXT H510 FlowOperating System: Windows 11 HomeOptional: ASUS DRW-24B1ST SATA 24x DVD Burner, Noctua NF-P12 redux-1700 PWM case fans
@@randomblock1_some still have CDs/DVDs at home and with some outdated products, software still comes on a DVD, so it's not such a bad idea to have one.
siri was one of the first assistants. even though they didn't create it they popularized having assistants on phones. and they were very slow to start talking about AI or adopt it, just like they're slow to adopt anything other new tech, so not sure what you're on about. apple sucks for plenty of reasons that are factual.
@@reanimationxp you answered your own question. They are 'slow' to do anything 'different' nowadays because they're too worried about the 'apple ecosystem'. They're late to trends by several years with the hope their enormous budget is enough to make them steal everyone's attention.
I appreciate you using your platform to call out these malicious tech companies. One thing I wish you'd spent a little more time on however is the training data. I'm an artist and all of my work and the work of my peers is now being used to replace us. Our copyright over our own work was completely ignored as the industry tried to move too fast to be stopped - they fully know what they're doing is wrong which is why in interview after interview they'll dodge the question of where the training data came from and instead use yet another buzzword: "publicly available". As if putting something online makes it royalty free. Anyone who parks their car on the street better be careful, because that's publicly available too. Even if you're someone who doesn't care about artists or creatives and thinks we should all "get a real job". I'd like you to know that there have been illicit images of minors found in these models and people are using them to generate more. If you've ever put pictures of your kids online, they'll be in those models too. It doesn't take a huge leap to guess what's going to happen when this algorithm needs to figure out what a child looks like in order to produce new illicit images - pictures of your kids are it's reference material.
I work in the creative industry as a voice actor. I can confidently say that I haven't met a single artist who wants ElevenLabs or Suno. It's a cool new shiny AI that can replicate human speech very well, but it's demoralizing to artists that spend years honing their craft. Businesses and advertisers are selling their integrity for the sake of generating quantity over quality and saving money. The silver lining is that the culture is starting to shift before the AI bubble has even fully popped. People are already starting to move away from AI content in preference for more obvious human made content. I have a friend who lost their job because the agency they worked for trained a generative AI on their artwork. I'm thankful for all the lawsuits going on right now and I'm thankful the cultural ethos is starting to shift. In the end, I think AI might be one of the best things to ever happen to humanity. No because of what it gives us, but because of how it ultimately reminds us of our how precious our humanity is.
7:17 the "Exponential Growth of Computing" chart looks intuitive but isn't just already being proven wrong but relies on a very bad obviously disprovable assumption: that exponential growth lasts. It's much more likely the curve will flatten out. At least for a while until in some far future there's a new modality in computing.
it means that every small machine is connected to the web, not only computers. like everything (toaster, your doorbell etc.). all things are connected and communicate, can be remote controlled yadayada. its a bit like in cyberpunk. so yeah... everything can be hacked too. its one of the reasons why ipv6 was needed, because there arent enough public ip adresses to connect everything.
@@seigeengine You mean "the use of the internet by useless objects mainly to participate in DDoS attacks" given how often their security is absolute lackluster and the fact that they've been utilized in attacks for a while now. Kinda weird to buy a light bulb just to question yourself if it's infected and currently participating in a try to take the steam servers down honestly.
@@Unknown_Genius very small groups of people have the specialty knowledge or even the mental capacity to pull that off or even come up with a good reason and scenario to do so. 😏
I tried to get multiple art AI's to make a picture of a Centaur for my DnD campaign, not one of them could create anything even close to a centaur. They always created a picture of someone riding a horse.
If you want same thing from a human probably you will have nothing because centaur is not a common knowledge, thus AI do not have enough data to prepare a decent centaur. Yes, AI models have flaws, but this is not an important one.
THIS! yeah I ended up finding a stable diffusion model that can handle centaur-like bodies. Though I understand why ai generators can't do it. It knows what a centaur looks like- but it also knows what a person riding a horse looks like even more. so when its diffusing the image it'll naturally just make it a person riding a horse.
You just didn't use a decent model, or didn't knew how to correctly use the tool, that's all. The better ones are paid. You should also use opposite filters, or whatever they are called, to reduce the weight on some stuff, and increase the weight of others. There's almost nothing the top stable diffusion models can't generate, you just need to play around.
BRUH. That only shows that you don't know how to use an image generator, it's not just putting something and that's it, it's putting the learning before and writing until you get the result. Use other people's LORAS or train your own using a model that can be used for that. This also shows how well you know how to read and research.
I did my master thesis on machine vision in 2004. At that time, same neural networks based approach was in use that LMMs are using nowadays. Worst problem was overfitting (learning was performed too long). It increases validation error and breaks the model in long run. Nowadays there are new methods to tackle this, but something like AGI would be a huge leap from current LMMs.
Slapping AI on a product is similar to whrn everything had "gamer" slapped on it's name and was painted black and red, it doesn't mean it's better, but sure is more expensive
Ye, and the funny part about it is that nothing aside of the better (but still cheaper than absolute greatness level) office chairs really sold. apart of maybe gamer sups for the sole reason that it.. is a cheaper alternative to some other energy drinks that does taste pretty good depending on flavor.
Dan specifically said that he's not doing AI right now because he doesn't want, in his words, "a moving target". So we'll probably need to wait for the bubble to crash for that one.
Most of the AI hype right now is based on *transformer models*. You give the model a sequence of symbols, and it makes a reasonable guess as to what the output symbols are. Transformer: input symbols to output symbols. Those symbols are mostly text, each symbol is a fragment of language ranging from a single letter to short common phrases (these are tokens). The model isn't doing deductive reasoning or things like that. Input symbols to output symbols. This is part of why the models hallucinate. Let's say the model gets one of the first 100 output symbols a bit wonky. Well now its next output symbols are based on some wonky symbols. Then it gets weirder and weirder. This is why the nonsense answers like "glue on pizza" tend to exist. There ARE cases where they put glue in pizza: food photography. It makes for a better "cheese" pull... nevermind that it's not cheese. So the model hears "stick my toppings on", and the "stick" symbol(s) are there, and there's another text passage it was trained on about sticky glue in pizza, and well... Bob's your uncle, I guess. Anyway the hype is indeed vastly exuberant. Are the models pretty good at producing realistic looking output text? Yes! But are the models designed for getting all the details right? No! And even worse, *the core mechanics of the model can't easily be ammended to make it work.* There's no obvious way to say "oh, and also APPLY LOGIC to the output to make sure it makes sense. Check your sources and such." Nope. Side note: this is why the AI companies HATE the idea of being required to do attribution. Because they can't. These are statistical models that go symbol-by-symbol, with probabilities from vast mounds of training data. It's not tagged, the model can't say "I printed 'th' next because of training example number 3838382934792374 in particular from this specific source." It's more like that source, along with a million other observations, was used to nudge the model weights slightly to the left to be more accurate. Each output symbol is the amalgamation of billions of examples and trying to replicate the output. At best you could do a "jackknife" estimation where you say something like "had I not seen training example 3838382934792374 I would have been roughly 0.001% less likely to output that symbol." Side note: I'm sure people are trying to work on improvements to the models. But even ChatGPT and other top-tier models are easily confused on details and get things wrong constantly. Another several transformer-level breakthroughs will be required. First there were neural networks, then convolutional neural networks when I was in grad school as all the rage. Then recurrent neural networks for learning sequence transformations... but they were slow. Then transformers show up with a vast improvement to training speed and here we are. But to combat nonsense I think we need more logic like "if X and Y, then Z" type deductions as opposed to just big statistical models. Needs to be something more there. Final note: these AI models are basically a giant compression algorithm. They were trained on many many terabytes of data. You put in a reasonable query, and it spits out a PAGE of reasonable looking data. One way of looking at transformers is as a super juiced up lossy compression algorithm. The words aren't stored in the model, it computes them on the fly from input. Very neat.
That's super useful info. More needs to be said about attribution: and that's a nice turing test (or at least a test for intelligence and reasoning abilities) actually: get it to explain its "reasoning". That said I have seen it solve leetcode problems incredibly well and it does make me wonder WHY it's so good at that? I like your analogy of a compression algorithm. I kinda thought of it more as a sophisticated search engine that's searching an infinite library (look up "the library of Babel"). Somewhere in the infinite library is every word and every combination of words, it just needs to be found.
The other way is that it is a relativistic information system. It stores relative data. And depending on the "temperature" crap, it produces more varied probabilistic outputs. Garbage. 0 intelligence.
The pizza thing was apparently from a reddit shitpost, with Google's AI literally unable to double-check information. Much less interesting than the interesting divide between food photography and food safety. It doesn't understand the context of either, just regurgitating the input data, which includes reddit shitposts.
It's important to not conflate Generative AI and LLMs with AI in general. WIth the former there is a gold rush mentality at the moment with little concern over environmental impact or copyright issues. What's needed is some type of framework for sustainable growth in this rapidly growing field!
@@MrWizardGG sure, but the company itself didn't have any AI of their own (they were going to use chatgpt as part of their 'AI') and they wanted the government to get subsidies because of that. That's not illegal but it's sure not ethical
@@MrWizardGG not re-invent, just create locally with their own data Because otherwise it's like I'm creating a revolutionary social app but in reality it's just a twitter or reddit client
@@MrWizardGG What is potentially unethical here is what was subsidy granted for. Through an example, let's say that subsidy is given for providing new source of water, and it's given for someone extending pipes so current water facility can reach new place.
In relation to driving edge cases; the collected data and synthetic generated data that Tesla uses to train its models will make it familiar with and trained to deal with situations no human driver would. When trained as a pilot we were drilled into the idea that we need to have the right actions occur instinctively, so we practiced things like stall and spin recovery. In driver education the basic requirement is to basically steer and control the car. There is no training in the same kind of emergency conditions because it is VERY dangerous. But Tesla's are training on exactly these in-silico. Machines will be able to react far faster than the human nervous system. Machines don't need to be perfect, they just need to be better than humans. And humans SUCK at driving. They get distracted, tired, drunk and high. They get impacient, take crazy risks, get crazy angry and sometimes drive the wrong way down a highway. By far the most dangerous thing on the road in future is other human drivers.
The section that you mention how simple the ML models generate images are overally underestimated and you are under valuing the huge leap of technological changes these tools are doing especially with image generative tools.
"It takes the keywords from your prompt and it starts compositing filling in the image until it hits for example a certain percentage of computer and desk"
I still don’t understand why the Techsphere won’t adopt the monikers that Mass Effect nailed, Virtual Intelligence and Artifical Intelligence. AI referred to species like the Geth and the Reapers that were actually self aware. Virtual Intelligence referred to the narrow focused systems like ChatGPT and the like.
I still remember AI as the programs that control non player characters in computergames. Most often called out on real time strategy games for bad wayfinding and on shooter games for not seeking coverage.
Correction for the sponsor spot at 1:02 - We meant to say "14700 KF", as is shown on screen 🙂
We all skip that part so we don’t really notice.
@@EnivokeWell I was going to comment about it
James IS also an ai. He was hallucinating on this sponsor 🤖
Crisis averted
Thx for telling.
in computer science in HIGH SCHOOL they made a big deal about the difference between artificial intelligence and machine learning. its like machine learning was completely removed from the dictionary in the past 2-3 years
Im from a country where pre bachelor/vocation CS education lacks a lot. What is the difference between both, i for real dont know what is it
@@3dcomrade Teaching computers to learn from data is machine learning wheras ai is more broad but closely associated with agi or thinking like a human
Thanks for pointing that out. Now I also remember reading this in high school. Basically we're still dealing with machine learning.
@@michealcondry5384 So according to you guys the real AI would be as useless as a baby and we would need to send them to school to learn all the stuff needed to help us? :/
@@3dcomrade I mean, Linus explains it in the very video you're watching... Just watch the video for a primer and we can clear up any details.
The previous explanation is misspeaking to the point of being inaccurate. "AI" does not exist presently. At all. Anywhere. There are NO actual "artificial intelligences" out there, yet. "AI" is a general term that marketing and executive wanks misuse. Entirely. "AI" is a concept of a system that can self-teach (learn) new things it hasn't seen before, on its own. That does not presently exist.
"Machine learning" is where some engineer _sets up explicit scenarios,_ like Linus talking about the person trying to sit in a chair. At first, the machine basically just permutates through possible methematical states, and adjusts things to be more and more in line with what it's _specifically told by the engineer_ is correct. It has ZERO idea what it's doing or why at ANY point in time, even _after_ it's "learned" to the point of being highly accurate at the task. It's LITERALLY just linear algebra spitting out numbers. There is never at any point anything remotely close to a "thought" in the system, unless you extend it out to the human engineer setting it all up.
"AI" would require the computer to set up those tests, confirm the results, and set up success conditions, _all on its own._ Ideally, while being able to explain what and why. No system currently does that with anything remotely approaching a complex task.
I work in AI developing models.
The entire industry is currently filled with MBA jargon and people in suits trying to collect money from investors.
In the future, looking back on this decade will be super painful.
shady mechanics, ie technologists, swindling the uninformed is nothing new.
How long do we have before their stock crashes?
meh not any different from any other rush to capitalize on trend
I also work in AI. Granted a lot of companies are using AI as a silly marketing term but that doesn't mean there hasn't been massive innovation over the last few years.
yeah, the marketing nonsense is absurd.
I do find it disappointing that Linus is claiming LLMs are just faster versions of old tech. They are far closer to the entire language system of a human than they are to ELIZA. And GPT4-o isn't an LLM with its new multi-modality. These models are neither everything some promise them to be, nor as limited as many hope them to be.
You think we're actually going to crack AGI as the robots are put to work and add their data to the hoard @ducks742 or do you think we'll run out of processing power first?
It’s always marketing. I hate marketing. I worked at it for 3 years+ and I concluded that it was an art of deceiving customers.
I want a law, the right to not be advertised to
you're right, most of marketing and advertising advice out there is to trick, deceive or otherwise manipulate people into buying products. I've chosen to sell honestly, with products that I believe in and that sell themselves.
As a digital marketing professional, I think it’s somehow even worse than that. Nowadays, we don’t even try to deceive (or communicate with) humans anymore - most of what we do, most of the content we create is actually for Google’s crawler robots and indexing algorhythms. The absolute first priority is for the machine to like your content, everything else is secondary becuase if you can’t please the algorhythm, humans can’t even see what you put out there, so they don’t even have a chance to like or dislike it. To be honest, my career goal is to get to a point where I can use the skills and tools associated with digital marketing to support an organization or cause I believe in. To build a financial background secure enough to be able to work for NGOs or non-profit projects even if they don’t pay particularly well.
Taking a marketing class is like a crash course on psychology and propaganda at the same time.
On the other hand,
let´s say you personally are selling a product/service (maybe shooting weddings)
But there is 10 other people doing the same thing.
You kind of have to tell the potential customer, that you have the most modern tech and that you can do the best job, for a better price ratio.
Even though you know that you are probably not the best.
The solution is to let only one entrepreneur have an absolute monopoly?
Is there anything that can be done?
I read "The Worlds I See" by Fei Fei Li and shared her sadness/ambivalence at the fact that further developments in AI now rests with big corporations with lots of money, access to data and computing power, and no longer the passion projects of curious scientists. Professors in universities just can no longer keep up with big corporations and ended up working for them. She worked on a project which uses computer vision AI to guide doctors/nurses to properly wash their hands and perform other hygiene procedures correctly. One sentence from the doctor who participated in this project strikes me, he said that the CEOs only talk about AI replacing people; whilst scientists like Fei Fei Li actually use AI to help, not replace him
AI is genuinely just a marketing buzz word these days
Just like turbo was in the 80's
I have a feeling that this fcomment is going to blow up
Wish I knew when it was going to end so I can optimally dump these insane NVDA shares lol
and space in the 70s
I said the same thing the other day to people is that every "Innovation" from all public companies think of it as an ad for their stock market, and now we have the "AI" words for their ad business, which kinda more concerning since can be more privacy nightmare than ads tracking. xd
I find it hilarious how "Apple Intelligence" has the exact same AI acronym. That is THE most Apple thing I have ever seen.
they copied alibaba inteligence, jack ma was just that far ahead man
@@elcohole100 Fr, if only the didnt make him diappear he could have put a lawsuit on ai (coz why not)
Well at least it matches user base as in Applesheep Intelligence 😁
AI more like AD (Apple Deception)
@@TheXlen i cringed so hard after reading this
My org recently named a new "Chief AI Officer." He's got a masters in marketing and a GPT subscription. Apparently, that's all you need to get to the C-suite nowadays.
That makes sense, if he said he had 20 years of experience in AI then you know he's faking it. They mainly needed someone who was "with the times".
I've worked at places where I was in charge of something I wasn't qualified for because everyone else would have been worse at it.
@@sarahberkner "AI" has been on the go from even before the 1960s, Perceptrons were developed in the 50s I think, one could easily have 20 years of experience in machine learning, language modelling, generative models -- which is what people are calling AI now. Not that many people do I'm sure, but still.
Lmao
Yeah? So why don't you do it?
Being a good BS artist is a great skill to have. Just make sure you seperate your work personality and personal life or you can run into issues.
Most of my graduate studies and Master's thesis involved AI and Deep learning and I cannot begin to count the number of times friends/coworkers (who studied something completely unrelated like business or marketing) have tried to tell me how AI will solve everything and that I "just don't understand it" whenever I explain why their idea with AI wouldn't work
sounds like you're a little bitter about not getting the job you wanted in AI dev after you did that masters thesis
@@Lindsey_Lockwood And yet your account is 17 years old. Makes you wonder...
@@hexoson looks like you been wondering about it a lot. Sorry I upset you enough to cause you to do homework LOL
intuition, common sense, emotion >>>>>> science, reason, logic
can you give me an example of a situation where a friend thought this and you thought otherwise?
@ClayChapman0
Imagine a man sitting at a computer. A series of Chinese symbols and characters appear on his screen. He spends his time and energy rearranging these symbols that he knows nothing about and has no context for. Sometimes a buzzer blares and he has to try again, but sometimes a bell rings and he gets to move on. After a long time of doing this, he's gotten pretty good at determining the pattern of the symbols that generally result in a bell instead of a buzzer.
Let's presume you can understand Chinese. You walk up to this man one day and ask him what he does. He explains that he plays this pattern recognition game where arranging these symbols in a way the computer likes lets you continue to the next one. On his screen in Chinese is the question "What is ice cream?", and you watch as he responds in perfect Chinese "Ice cream is a cold dessert food made of ice, sugar, and either milk or cream." You ask him if he knows what the symbols mean and he has no idea.
That is machine learning.
_"rearranging these symbols that he knows nothing about and has no context for."_ , that's the big problem the Chinese room experiment is pointing to. Because how would we know? He doesn't learn the context in the proces, how so? Would one be able to make a perfect translation without knowing the context? And even if it is the case that he doesn't, but he would still be able to produce perfect answers to questions, why would that make the answers useless if people would still be able to understand the answers; to curate the good ones? If that's the case, why would we not call those answers intelligent?
[I don't get why people like that "Chinese room" thought experiment] As if, if a model like that would make 3 stupid answers and one good one to a question about a cure for cancer, people would be rolling their eyes like: "Pffff this thing is stupid, it doesn't even understand the suggestions it makes1", well, _maybe_ , but that thing just got a cure for cancer. People make bad guesses too before they make a perfect one, who cares.
No it's not. If he did that a billion times with billion different contexts, he WOULD understand chinese. Deaf and blind people from birth can still understand the concepts of a picture or sound.
The first coming later finally able to understand Chinese would be the AGI
Or maybe never
It's going to become the same as how "Smart" got overused and is still overused to describe literally anything with internet or a timer of some sort.
Yup... bought a new fridge and apparently it's fucking sentient. Got ARTIFICIAL INTELLIGENCE plastered on it. Must be shy though, hasn't said a word so far.
@@c50m4 AI colot oversaturion on my TV
I mean, at this point an appliance that can connect to the Internet and run a few apps is a reasonable definition of a "smart" appliance. Usually I feel like it's fairly clear what you're getting, although I guess there's a range of ability.
@@danieljensen2626agree with you, just the OP is saying that the term AI which ought to be a pretty dang impressive description of something artificially in a similar category to human intelligence, is going to get relegated to being defined as something far less impressive. Linus made the same point. Need a new term to represent the farther future of intelligence that comes closer to human.
AI is really bad. If you know anything about a topic, both GPT and Gemini fall apart. 95% of the time, its making things up. Semi advanced things like the effectiveness of spinosad as a pesticide for plants. Or a viroid called HLVD thats impacting plant growth. Or questions about auxins that promote root development, its always making things up in regards to these topics. Anything that goes beyond surcafe level "write me a better ending to my tv show" kind of stuff ends up giving you incorrect info. The worst part is, most people dont catch on.
Essentially, corporations chose to muddy the definition of AI, for profit. Just like with Hoverboards. And now we need new words for those old things we envisioned...
How do we make sure this 'muddying' of words doesn't happen? Just call things more specifically and don't give a hyped up name? Or keep on doing what we're currently doing which is 'invent a new word for the previous expectation of the technology'?
oh dog, the "hoverboard" one was SO freaking stupid, it drove me nuts,
@@fireninja8250 It'll always happen honestly and it's not solely related to tech so we can't even stop it.
AI got a buzzword for tech corps (and the average joe) alike, if you think outside of the tech space we have a ton of words that have been muddied and/or re-defined, be it g a y, white knight, simp (with especially that one still having that incel usage taste every time you read it) or other examples that we don't even think about anymore.
AGI will just be as normal in usage as some of the other things have become for its re-definition over time.
@@fireninja8250 you can't prevent "muddying" of words. It's an inevitable part of society and language
Same with Web3, Blockchain, Big Data, Internet-of-Things and other stuff turned into buzzwords
Term term "AI" has been thoroughly ruined now, being applied to everything. My clothes washer has "AI". The more often you pick a program, the higher in the list in the shows up. ARTIFICIAL INTELLIGENCE.
the pinnacle of intelligence.
This is a conversation that needs to continue happening. I’ve really struggled to explain to people that “AI” isn’t AI, and more importantly why it matters that we distinguish between AI and ML. In a way, it feels similar to the whole USB-C issue where the vast majority of the public didn’t understand that just because a connector is USB-C doesn’t mean that it’s fast, it just means that it’s USB-C and it’s important to distinguish between USB protocols vs USB connectors
You lost me with the USB part. I don't work in tech but it seemed obvious to me that AI is not sentient or is self-aware and doesn't have evil intentions because it doesn't have any intentions at all, it's basically regurgitating information and humans still need to weed through it. Some people find this hard to grasp.
However I hadn't thought about the fact that "artificial intelligence" isn't an accurate description. I think you could argue that it is accurate, in the same way that an artificial flavor doesn't taste quite the same as the natural flavor; artificial intelligence means it's like a substitute for intelligence.
@@sarahberkner Any such explanation of AI that you come up with can equally be applied to the human neurological pathway. One can just as easily argue that humans regurgitate information and give off the illusion of self-awareness.
@@spadaacca not really,, as linus says in the video AI doesnt actually understand what its doing. you can ask an artist to breakdown a drawing, and theyll tell you what they did, how the body interacts with the enviornment, etc. you can do the same thing with a writer, you can ask them why they wrote it, how they wrote it, etc. you try to ask an "AI" to breakdown anything it makes, and it wont understand it. AI doesn't iterate, humans do
@@jellyloab Is that different from us humans? We make about 35,000 decisions each day, and would struggle to explain our reasoning for most. For the ones we make consciously, we commit our thought processes and feelings to memory, allowing us to explain them after the fact.
Current LLM's do not commit any sort of thought process or internal monologue to memory, and so can only explain its reasoning using its previous output as context, i.e. it is not actually recalling, but creating an answer using its previous output as a reference. This does not mean, however, that there was no "thought process" (by which I mean "calculation") that went on to create the original output, nor is it a good measure of intelligence.
Linus's example of the AI struggling to count letters is also quite misleading: due to how current LLM's are designed and trained, they excel at pattern-based tasks, but tend to struggle with precise manipulation of symbols (hence why math can be so precarious too.) I'm not exactly sure what point Linus is trying to make here--is a child not intelligent if he struggles to count?
@@jellyloab Funny you should mention bodies interacting with their environments. I want you to do an experiment: go outside, take a jog. But tell your brain not to speed up your heart rate. Keep it at resting heart rate. Does it listen to your executive commands in the service of our supposed free will? I think a big part of this conversation leaves out just how little free will humans demonstratively posses. In that light, most of what a human is, is automation. I'm taking a bike ride today. That I decided to do - I decide to turn the pedals. But how my body accomplishes this task on an anatomical level isn't up to me; not one bit.
Totally agree on the misuse of the term "AI" in marketing. It's definitely creating confusion and sometimes even harm due to misconceptions.
this comment is going to blow up how am i so early
@@K131real 😂😅
Hard agree with you. 👏🏿
The moment they flashed all of the other Buzz words that have been used over the past few years in the tech industry, all the other crazy stuff that has happened in the tech industry Flashed in my head at the same time.
Especially 64-bit. That one got a laugh out of me.
Marketers picked it up, but it was academics who came up with the term and used it to define a subfield of computer science thst includes narrow AI.
Also, machine learning itself, which used to be called pattern matching.
It,'s not just industry on this hype train.
i feel like you could rebrand a 40-year old technology that involves a linear regression as AI and no one would bat an eye lol, weird times
Machine learning engineer here (Image generation focus). I am so glad a major youtube channel finally got it right, rather than fear mongering. The amount of horrific information even from sources that should be educated on tech like this is truly disheartening. Thank you for this video, which seems to be a rare one with a relatively neutral look into a set of technologies that will continue to shape the world for many years to come.
Can i ask a sincere question? Why do you want to make generative images?
Can I ask you a quick question as another programmer who's dabbled in ML?
It seems to me that AI/ML is really just data science (or at least data-driven development).
My understanding is that it's basically just gradient descent used to optimize a function that maps inputs to outputs based on some loss function.
I learned how to fit data to a function via gradient descent in high-school statistics, and from what I see, fitting a 10,000 weight convolutional filter to a dataset isn't really all that different conceptually than using Excel to create a graph with a least-squares regression curve if you ignore the difference in dimensionality.
Do you agree/disagree with any of that? People keep saying AI is a bad term and people should call it ML instead, but even ML seems like a bit of a stretch if it's just data science curve fitting with some fancy gradient descent on top (albeit with a 10,000 dimension curve fit to millions data points). Seems to me the only reason people use the term AI/ML is to make it easier to get VC funding, because data-driven development doesn't sound cool or sexy.
+1 from another data and AI professional. I do ML every day for work and I couldn't have said it better than Linus. He's exactly right.
@@brydenfrizzell4344 ML is a subset of AI. ML is data science.
@@brydenfrizzell4344 I usually refer to it as "linear algebra programming"
I appreciate hearing someone say the actual truth about “AI”. Try doing anything novel with it and it can’t. It’s just an amazing pattern recognition and replay system.
Just like your brain.
You'd be surprised by how much our own "novelty" in arts and technology is just rearranging and minor variations of existing components and ideas
AI is really not doing something too different to what we do. Try having a completely new thought, and then realize it was most likely a reiteration of something you already thought or read or heard.
@@nwoDekaTsyawlAThis is a very superficial view.
@@FireF1y644 To which you decided to reply with an even more superficial comment that did nothing to provide evidence counter to my statement.
@@nwoDekaTsyawlA it gave you a reason to rethink this matter just a little deeper, the rest is up to you.
I mean, AI has always been a very general term even before all this AI craze. NPC behavior in-game was called AI, so was an AI general in a strategy game.
I think people can grasp the difference in the meaning of the term when talking about it in very different ways e.g. npc game AI in a AAA game vs AI that's designed to drive your car.
@@DaemonJax but is there and actual difference? Or is it just one is trained better?
The thing is, video game AI could arguably be considered in some forms to be more intelligent than this. Most NPC AI is based on state machines, which basically considers information about its surroundings to switch between pre-defined states. You could use machine learning to enhance that declarative programming by giving higher weighting to attack patterns that appear to be successful to make those states more likely. So called "generative" AI just does this on a pixel or character level, making specific words or patterns of pixels more likely based on input keywords, which means all it does is spit out averages of the input data. So the marketing term "AI" is actually based around the cult like idea of "emergent" programming, basically that if we throw enough data at the machine eventually it will stop averaging and start programming itself. Instead what we get is a lot of smoke and mirrors from people obsessively trying to coach these averaging machines to LOOK like they're creating novel outputs, while simultaneously stealing any and all data on the web to fuel their fraud.
@@Damiancontursi Completely different approaches.
@@benflightart State machines are just a good way of organizing a ton of if statements. It has nothing to do with intelligence. The behavior of generative ai is fundamentally learned behavior and it's definitely part of how actual intelligence works, it's just not the whole answer.
a few years ago, marketing tricked us with "3D" and "Smart", today it's "AI"
Paint 3D😂
@@gnanasabaapatirg7376 I don't know about you, but I didn't know for quite a while that Paint 3D could in fact be used to create 3D models. Thought it was just a gimmicky rebranding, though some may still say it's a gimmick.
Don't forget crypto, blockchain and NFT. Never forget.
@@InnososThe sooner we forget about those, the better.
"SmartTV" aka "now comes with built-in advertising"
That last paragraph is really soul chilling... some definite Cyberpunk 2077 vibes there, and not in a good way...
"The folks in charge of helping us deal with all of this have a lot less funding than the ones who are trying to sell it to us"
Would also add that those in charge are taking advice and lobby dollars from the CEOs of the companies selling it to us.
Deus Ex crew checking in
Yeah, it's getting way too easy for bad actors to deepfake evidence that can have very chilling impacts. Wanna get rid of a political dissident? Just fabricate some video evidence on a CO2 belching AWS datacenter. Want to track a group of marginalized people? ML-powered face recognition software and the ever present cameras and GPS receivers with mobile internet connections makes that trivially easy. I honestly struggle to get excited about technology anymore because it seems like any developments (especially machine learning and ever-present telemetry spyware devices) are only ever bad for the working class. There may be some positive applications in the medical field or logistics management, for example, but overwhelmingly it's cars that report driving habits to insurance companies and law enforcement and have "autopilot" systems that are known to kill people (in no small part due to cost cutting), or buggy software, ads, and spyware in everyday appliances that used to at most have some simple microcontroller code that did exactly what it should and nothing else. I'm starting to think the Matrix had it right; maybe 1999 was the peak of human civilization (at least from a technological perspective).
Fallout would like a word as well
That's also an incorrect take. The ones 'helping us deal with it all', if I had to guess Linus political view, is the government. The last think the government is lacking is 'funding', and they will do their best to pass on more useless regulation, mostly with the intention to get more 'funding'.
It's pretty ridiculous and SCARY AF if we're letting "AI" go about important tasks when it can't even tell us how many times a letter appears in a word.
Remember to add glue to your cheese so it sticks to your pizza better. Oh, and also eat one rock per day.
(those two examples were given as genuine recommendations by google AI search help)
AI Rice Cooker was the tipping point
Dankpods!
i shared the same sadness that wade did using that thing
No it was the AI thermal paste
They had me at AI screwdriver
Now, an AI toaster is truly terrifying ( red dwarf reference )
the GNU+Linux copypasta reference was goated
100% I was laughing at that
1:56 Linux Tech Tips lol
And it was gpt
*GNUted
I immediately knew Emily wrote this.
as someone who has studied for a university degree in AI, this whole hypetrain is extremely infuriating to me. Imagine you're a physicist and every physical product is called "black hole" because *technically* all mass has gravitational pull. Similarly, everything is called "AI" now because it has more than 500 lines of code.
Don't worry, give it a few years and they will move on to a new buzzword.
@@roymarshall_ Unfortunately the wreckage of old hypes doesn't magically go away, and will haunt everyone affected for decades to come, albeit in sanitized form. We're still dealing with fallout of the OOP hype in programming today, and that was, what, the 70s? And most programmers today would likely not even recognize which parts are the genuine concepts, and which parts are just holdovers from decades-old hype that have remained in use because "that's how we've always done it".
huh? this is nothing new. The bar for what was seen as AI was way lower than it is now
Behold! AI:
if (condition) {
//
} else {
//
}
If, as the paper suggests, an intelligent octopus faced with a bear attack doesn't know how to react, don't you think that if a human were to reincarnate as an octopus in the same scenario, they would respond similarly? We could perhaps improve the octopus's response to be scared of unknown situations.
Assuming that current AI is somewhat similar to humans based on this idea, aren't we essentially searching for something god-like? If AI could provide correct answers to any scenario, no matter how absurd or unexpected, could a human even handle it? For instance, if tomorrow everyone dies and you get shot to Mars, entering the 45th dimension where Mars is habitable, but you must return to 3D because in the 45th dimension you're a disabled person with no senses, and the 45th dimension's version of Elon Musk keeps you as a pet in his belly pouch called '&%&^5757,' how would humans solve a question this?
And, If AI could take even one step toward solving this scenario, as Linus suggests, by using context clues to make sense of an absurd situation and lead us to the correct answer, then wouldn't that AI be able to solve any issue, no matter how absurd? At that point, wouldn't it be considered not just software, but something god-like?
Or is AGI simply about quantifying all five human senses-vision, hearing, touch, smell, and taste-in numbers and then training on the thousands of ways humans have developed machine learning techniques (perceptrons, neural networks, U-Net, transfer learning, Gradient Descent, Stochastic gradient descent, PSO, Bird Swarm Algorithm, Transformers, and Thousands more)?
What is it what is AGI is it a search for GOD or is it making a Human so perfect its basically GOD ?
This is some Really Mind Bending Shit......
This video underestimates transformer models' potential. Key points:
1. Massive scaling (100x) could lead to AGI-like abilities (see the paper on GPT-3, Chinchilla scaling laws).
2. Tokens aren't just text - they work for vision (ViT), audio (Wav2Vec 2.0), and more. Omni can handle multiple variants at once and their voice model is held back until autumn due to safety reasons.
3. Robot learning with LLMs is promising (PaLM-E research).
4. Efficient fine-tuning (LoRA) enables quick adaptation.
5. Consumer hardware advancements (e.g., 4090 GPU can run local post-training with fine-tuning) making robots that learn offline more feasible.
Some people overestimate what AI is today.
I think most people overestimate it today but underestimate what it will be in 5-10 years.
Recent work on reasoning (chain-of-thought) and memory (RETRO) addresses limitations. Current approaches may be closer to AGI than implied. Happy to provide paper links if interested.
Most people underestimate today, not overestimate it.
The thing is bottlenecks. The layman AGI example starts with "Imagine you have infinite processing power, a perfect model of the world, and want some stamps...", and that's the tell.
Yes, advancements have been made. And impressive ones, at that. Yet the major bottlenecks (data and processing power) are still there. Hallucinations are still fundamentally tied to how the system works. There are biases in what even _can_ be used as training data. Example: how do you give the machine the concept of "sad"? With words? With pictures? With an equation? Even if you mix all three, it's still incomplete. There are stuff that cannot be losslessly compressed into data. And any data-driven system will also have that limitation.
However- those gradual, discrete advancements you've listed are not characteristic of exponential growth. They're characteristic of a technology that's maturing into a plateu. That's why I don't buy the "it's gonna get better in 5 years!". It already _has_ been 5 years, and GPT4o is not Hal 9000.
And that's just judging the technology by itself. If you start talking about the market, then it's another can of worms. OpenAI is not profitable, it survives off investor money. If the hype stops, everything breaks down. Sam Altman makes bonkers out-there promises that are straigh up fiction. Google's search AI was embarassing. The training data for most models are built on copyright theft. And it goes on.
That's why I don't buy the " what it will be in 5-10 years". Thanks for coming to my TED talk.
@@BZero3 👏👏👏
Have to think about a quote from Edsger Dijkstra:
"The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better."
Thats the whole point of AGI?
From UA-cam channel Explaining Computer.
"The 2nd most intelligent specie on the planet is the dolphin, and we never expect dolphin to imitate a person..."
Better? I'm wondering what I'm supposed to get from that quote. It's too open-ended.
The problem with this quote is very simple. Despite our millennia of accumulated knowledge our own minds are by FAR the most advanced and capable thing we know of. And we barely understand just the most basic principals of their operation. Nothing is more capable of handling problems and adapting to new complex situations that the human mind. And by definition the capability of creating something better than a human mind, must include the capability of creating something as good as a human mind.
@@sunlaThe point of the quote is to ask why are we trying to make computers do what we can do, rather than what we CAN'T do? I.E., why are we trying to automate the human spirit with 'art generation' and similar things rather than use it for the things that we just can't really do like immensely complex simulations, data processing, etc? Now just to be clear neural networks are in fact being developed for loads of genuine scientific applications, but a lot of the mainstream tech buzz isn't about that but about gimmicky things that aren't actually helping the world at large. The question basically is why aren't we focusing on doing the things that would take us as a species far, far too many man hours to do.
Nobody realizes the news is lying until they talk about something you're knowable about. Then we go back to thinking they're experts on everything else, lol.
no one doing the news are experts on that subject - they just interview "experts" which tend to lie to you/them or don't bother actually specifying it further because they're... well either part of the company, don't actually know what they're talking about - or simply forget that they should specify things for the average viewer.
kinda like that person that was (or still is? don't know if that stopped ever since he got called out a while ago) giving cyber security tipps to companies and gets invited to train their people, while literally providing "proof of his work" with issue report ids - with the exception that he's not listed on any except for one (and blows the issue up bigger than it was) and no one listed on the other ids even knowing him - same concept, the people tasked with hiring someone for that don't know about it as the stuff they have to know about is an entirely different topic and just assume that it's correct all while not having the time (or resources) to contact anyone listed/read through more than the first.
@@Unknown_Genius Some stuff is such a simple search to find. I've seen absolutely ridiculous claims made by anchors that anyone even remotely knowledgeable wouldn't have made. To your first point, I can think of many examples of anchors talking out their butts like experts but you're probably right that they're just repeating what they were told without digging into the topic whatsoever.
100%. Right between the eyes.
Just like a certain pandemic
@@Unknown_Genius what you are missing is that news isn't news anymore, it is entertainment, and saying the facts is boring, and doesn't get ratings. All 'news' cares about now is viewing figures, so BS'ing about AI and everything else is ok, as long as when they go to ads there are plenty of eyeballs still watching.
Wanted to mention Kasparov used 20 watts of caloric energy to play chess and deep blue used 1400 to do the same task. This difference in energy efficiency only grows with more powerful A.I. systems that use megawatts of power to do the equivalent task a human can do with a hamburger worth of calories.
Consciousness and sentience and all that is crazy but the craziest thing about our brains is the sheer power efficiency.
Where did you get a figure for Deep Blues power consumption? Tried to look or it and turned up nothing, the only figures I could find were the max draw of the 30xPPC604e but that's unlikely to be the bulk which I'd guess would be the custom VLSI stuff or RAM. But bringing Deep Blue into this is like arguing against public transport using gasoline or diesel engines as opposed to horse buggies on the basis of the fuel consumption of a Ford Motor co. Model T.
We humans don’t usually make our own food, it’s more fair to factor in the energy used by the tractor used to harvest the food and all of the machines in the processing plants and all of the energy used to distribute the food with ships and trucks and other people.
@@tomh9553then let’s factor in all the energy required for building a power plant when we talk about ML models energy consumption. Not to even mention that a model needs people to build the power plant
True at the time, but nowadays your phone can beat Kasparov easily on hundreds of milliwatts, give or take
The AI in Quake III is amazing. The bots can rocket-jump.
Soddenly all websites have these AI help bots that are just as stupid as ones I saw many years ago
Powerless* they are objectively less stupid, doesn't mean they are given permission to make changes to your account.
AI peaked when the monsters fought each other in Doom.
Not at all.
On a side note, I love how Minecraft skeletons shoot each other when 1 arrow accidentally hits the other
AI peaked when the Quake 3 bots were toxic to you
Thanks for that nastogia hit!
I dunno, I was quite impressed by Half-Life's AI behaviors for both lone enemies and squads. Even the cockroaches had an idea on how to behave somewhat convincingly.
AI Linus isn't real he can't hurt you. Meanwhile AI Linus :
My first computer used a 6502, a Commodore VIC-20. 5K of RAM and it was smarter than AI in 2024!
this comment is going to blow up bro :(
I thought the same thing looking at this thumbnail lmao
#L-AI-nus
I don't remember exactly which channel posted about this exact thing but they talked about it back in 2017ish about the levels of "AI" and what to expect from each level. you did a great job in summarizing this.
Been a disaster in university with group projects. Half the team usually doing all their work with gpt rather than having an original thought themselves
I can confirm this. I myself use gpt on programming subjects as im in an accounting major. But only there, the others used it for everything
That's great actually. The ones using LLMs to code see the future that is coming.
having been in the school system, its probably an overall improvement, if they keep using it, it will appear that IQ has gone up
yes a disaster, No-one wants to "think" anymore, just ask AI.
@@phatwila the ones who use LLMs to generate code for study projects can't even tell if generated code is good or bad. Also if they can't do even simple things on their own how they gonna program something complicated that LLM can't handle ?
10:59 it's really easy to Gaslight gpt-4o to think 2+2=5 and then tell it that's wrong the whole thread stops after that
It's actually not. Literally go try it right now, you won't be able to do it.
You're doing a cool thing that people typically call "Hallucinating" when an LLM does it, but "lying" when a human does it! The more you know!
What about chat GPT 5,6,7,8, etc?
@@feminaproletarius7815 what are some of these "obvious lies" which are "politically correct" ?
@@xyzgaming450 there are more 2 gender, i guess..
@@kenta326find me any peer reviewed paper that supports that conclusion.
It reminds me of the days of "Cloud". When every online provider slapped the word "Cloud" on everything all of a sudden, regardless of what technologies actually made it work.
Organic free range grass fed sustainably farmed fair trade climate friendly safe space AI!!
But both cloud and ai technology is real and very influential on software.
Luke and Linus were the number one promoters of AI, talking about everyone getting replaced. They were so gitty to never hire another software engineer again.
I’m a tech hobbyist at best but seeing laymen being tricked into thinking that Ava or Glados is right around the corner infuriates me
Ok perhaps I’m a layman but how else are people supposed to interpret it when AI advances so insanely quickly?
@@seb1520 Only because you climb a tree insanely quickly it doesn't mean you will reach the Moon.
we are getting closer to a perfect copy of what a human seems to be, that is not a agi which is an absolute terrifying thing, but for the average person, if AI stopped at a simulacrum of us, we wouldnt care...and honestly it would probably be better for our species survival if we dont go making AI that can combine old and new concepts to come to a new answer, we dont even use that ability for good
Why would this infuriate you? Why would you be so sure it isn't? I get frustrated with the AGI hype train too but plenty of very well trained professionals are considering this possibility every day. Why would you insult your fellow laymen because they choose to listen to a different professional than you is misguided?
@@Slvl710 Yes, you are getting closer to the Moon by climbing a tree.
"This Rabbit (R1) hole goes deeper than you think"
Nah, those guys were genius'. The people that bought it are idiots.
@@SiCSpiT1In what way? How is an R1 anything but objectively worse than a smartphone?
@@John_JackThe point is that they were geniuses at fooling others to buy their scam. I disagree, as I do not find it that hard to scam less knowledgeable people into buying useless tech. I could probably do it, the difference is that I was raised properly and wouldn't want to.
@@John_Jack easy money with little effort for the company that made the Rabbit R1.
@@John_Jack Read a review. It's pretty obvious.
I was a computer science student for several years, and I learned a lot of ins and outs of AI. I eventually left, at least in part, because I couldn't reconcile what the tech can do with what people were speculating about. Every time I hear someone speculating on the future of AI it makes me want to pull my hair out because they just don't get it. IT DOESN'T KNOW FACTS. IT CAN'T. THAT'S NOT HOW IT WORKS. The idea of "knowledge" doesn't really even apply. It's all about training it through iteration to come to a conclusion based on the information it is given. It doesn't know "facts" as much as it can recognize words arranged factwise.
This isn't me saying it's all junk. A friend was working on reading medical scans with AI to identify cancers. There is a real future that the tech has. It already does a ton of cool stuff. I've worked with the kind of handwriting identification tools they use to sort mail. A different friend was working on different industrial applications like product defect finding. But it cannot replace human intelligence, and it must not replace human agency. People will defer personal accountability to automatic systems and wash their hands of the consequences.
Just needs to be pointed at the right things perhaps.
5:05 The cancer detection comes up every time, but it's not so simple. The problem is that neural networks are black boxes, you don't 100% know how they come up with their answers.
I read about a study where an AI was suppose to be better at recognizing cancer than human doctors, but in the end it turned out that the AI was cheating by recognizing additional data on the x-ray images in the training data the study used, older x-ray images and x-rays from certain hospitals just simply had a significant higher likelihood of having cancer which gave the AI an advantage. This advantage obviously completely disappears once it operates in the real world. So if the AI was deployed like that it could've actually been way worse at detecting cancer than a human doctor without people knowing it.
When are you making the Ai screwdriver???
Still waiting for the AI Apple-leather Jacket called Jensen
It's the same as the current screwdriver, but you need six fingers to use it
Right after they start selling NFTs of AI-generated "Trust me Bro" tshirts.
It's not AI it's Al screwdriver. As in Aluminium. Like one of those knockoffs you can get off temu and use to remove exactly half to one screw
it recognizes the screw and automatically switches to the best bit? now that would be pretty cool and useful
Decades ago, the term I was told was “computers are only as smart as a human makes it”, even in this age of AI I still believe that is true
The power of a computer is equivalent to the universe but keep in mind not all equals are equal.
ftfy: "computers are only as smart as a human think they have made it seem”
This is even more true with machine learning. The main datasets used to create these models are text from the internet.
No.
Not true.
While not smarter than man now, they can be made to make themselves smarter.
Of course. And that's because we are consciousness, not objects.
AI is just a web scraper that answers user prompts
This is why people who call themselves "AI Artists" are embarrassing. You don't call yourself an artist for doing a Google Image Search.
Glorified search engines.
@@Pneumanon Nah, there is a term for the artists who use Google Image search and stock foto, they called Graphic designer.
That not how it works at all, if that was the case, hallucinations wouldn't be a thing.
yeah bruv sure that's exactly what it is, now put your mcdonalds hat back on and go serve those hungry customers.
I'm so glad we have a video to send people to now. I'm so tired of AI branding everywhere when it doesn't even do the most common versions of machine learning or neural processing, etc .
Most things have some machine learning in them since at least the 90s.
I am seemingly out of touch with pop culture enough that I have I don’t remember the last I heard someone use AI when they meant artificial general intelligence (not counting old TV shows)
Couldn't finish video, put too much glue on my pizza and died.
I love that there are literal "ai-powered" birdhouses on Amazon selling for hundreds.
DankPods bought a rice cooker that touted that it was "AI" powered. Opening it up, it used the same mechanical magnetic latch system as any cheap rice cooker from the last 40 years.
You think its crazy? There is AI thermal paste : )
It’s been introduced into every facet of life. You’ve barely seen the tip of the iceberg. Tech boom 1950-2000 . This is gonna change everything more drastically much more quickly.
There are some scams on Amazon unfortunately. But I also think being an early adapter is kind of a scam, better to wait until they work the bugs out.
But as this video points out, this isn't early... Machine learning is not new.
The modern world feels like everything is at least 60% a scam.
The impact of these models seems to have been pretty heavy in displacing programming jobs though, right? Honestly, there is so much interference BS information out there now (lots of "AI" generated content no doubt) that it is hard to know what's real anymore. Like the four text messages I get from random numbers every day trying different versions of "hello," or "I'm worried can you please let me know you're ok?"
"Decent summarization engines and lukewarm guessing machines tunned for working with different type of medias. They can't reason." Loved it!
Except, they can reason much better than many humans can.
@@spadaacca You're living proof of that, it seems.
@@hexosonpretty stupid response there.
I just want to point out the hypocrisy of these companies saying all the content for training the models should be free to use and then charging for the end result. It's a little like paying for insurance and then having to pay full price for what you were insured for anyway.
They're technically not saying it all should be free, they've offered several companies million dollar deals for the data.
The same way they slapped "Turbo" on everything back in the 80's.
Was thinking the same thing.
So lets market the Smart Turbo 3D AI Cloud
There’s a Porsche Taycan electric car with the word turbo after it as though it has a turbo engine, even though there is no engine
I immediately thought of the Turbo character from Wreck-It Ralph.
@@smellcaster this reply sent me😂😭where to pre-order
You guys should translate this video! I wanted to show it to my spanish speaking parents, but I can't!
I guess UA-cam subtitles will have to do, for now
Just download the video, upload its audio track into GPT-4o, have it translated to Spanish, done. That's exactly what state-of-the-art AI is extremely good at.
Imagine calling memory foam as "AI enabled cushion"
I would like to purchase 2 of these cushions.
The thing about AI reminds me of what's gone on with cross stitch patterns. People are selling all this "we can make any image into a cross stitch pattern!" stuff, but it's just them scaling an image down to 100x100 pixels and then picking the closest colors that matched the embroidery floss colors available for sale. What these cross stitch patterns have always lacked is the backstitch: to decide what is worth adding an outline to, and where to use a couple out-of-outline stitches to add details otherwise too small to represent: for example, flower pistils or the texture of fur. So I still much prefer working with human-designed cross stitch, even though I am theoretically able to get a computer to make a cross stitch pattern for anything I want.
I've since learned that all AI is like this.
Computer generated patterns are horrible confetti-stitched monstrosities that only look good from 2+ metres away. They make me think of Victorian ladies with Berlin wool work "copies" of Monarch of the Glen.
@@BurntFaceMan Always? Meh, not always.
we're just in the "prehistoric" era of "AI", it started just 100 years ago, and we know 100 years its nothing
we created a lot of things that today are much better than what we can do "bare handed".
that's what humans are best at, we create tools that surpass our normal capabilities, its our thing
we will all be dead by then, but im sure one day we will have a true AGI with consciousness, that can take care of all the boring shit any human can do, with no error margin
I watched a video on a similar topic, but it was with AI generated crochet patterns. Perhaps you already saw it, but in case you haven't and you're interested, the title is "How to spot fake (AI) crochet so you don't get scammed" by Elise Rose Crochet. It's very interesting. I need to see an AI cross stitch pattern, it's probably wild.
3:06 So AI is a hamster. Got it.
5:48 Wait, no. Ai is a monkey.
11:41 Um, AI is a hyperintelligent octopus that knows nothing of bears.
bay boy say he wan his gionmion jiggalasnack
Aaah damn nearly chocked from laughter, +1 internets to you sir!
All of the above but might not be all the above. It's a should or could be, but never quite a definitive yes.
I believe he said a room full of monkeys in fairness
"I never expected my paste to be sentient..."
"…until my wife turned it into children."
Magic wife
I like the Mass Effect nomenclature. "ANI" they call "VI" (Virtual Intelligence). VI is useful but certainly not actually intelligent.
That's what I've been saying. What we have right now is more akin to VI in the ME universe.
Just wanted to comment about this but you beat me to it.
Heavily agree, MEs take on artificial intelligence with it's artificial and virtual split is still the best depiction of it in media ever imho.
VI is a perfect analogy to how things are right now. At least Avina isn't trying to date us though...
Semantics
Not actually intelligent, huh? You mean like intelligence, but not real intelligence? Something artificial, like some sort of artificial intelligence?
I was going to comment something about how I've gone back to calling it machine learning, but my wife said you sound like Bob the tomato, so I'm commenting that instead.
Your wife's right, how could we have overlooked this critical fact??
It's been noted before. The Bob the Tomato part. It was in the second Linus Responds to Mean Comments video.
like how HIIT is a marketing term for interval workouts lol
A very simple litmus test for AI is if it can tell you when it doesn't know something. If it hallucinates the answer rather than tell you that something was not in its dataset then it is nowhere close to AGI.
12:12 - You made a Linus LORA for Stable Diffusion and it's now out there somewhere next to Pony Diffusion XL, an unfortunate weight-merge just waiting to happen.
ask AI to write me some python script, script doesn't work, paste it back into AI and it tells me that script won't work. Oh thanks
Dunno.
A tool is just as good as the user.
For me it does wonders.
-It is life changing.
I pay $30 monthly for GPTPlus subscription and GithubCopilot.
-Probably would sell half my soul for it.
Lmao ai writes bad code
@@chady51what languages do you work with?
It can write some basic code pretty well, not always efficiently but it can do it. Anything beyond that and it starts making fundamental errors. Easier to use google.
Claude 3 writes perfect python. Yall just making yourself look bad. Gpt 2 is like 5 years out of date, use a smart model.
video idea: AI branded PC build (there are AI PC case, AI motherboard, AI SSD, AI memory, AI power supply..., AI keyboard, AI mouse, AI monitor)
What about letting ChatGPT-4 omni decide a build? Give it the prompt: make me a list of hardware needed to build a PC for gaming that is around 1000 USD. Now that would be interesting. Maybe they already did that.
@@bobthegoat7090its actually fairly good at it, and the more detailed your requirements the better your results may be, just make sure that after it gives you the parts list you ask it to double check the compatibility of the components and youll have a decent result, its a lot better at pc part lists than a lot of humans that i know 😂
@bobthegoat7090 I just did this, it was a pretty standard high end computer. I don't think it'd be that entertaining to watch them build it. The only odd part is that it suggested an optical drive, lol.
CPU: AMD Ryzen 5 7600XCPU Cooler: Noctua NH-U12S ReduxMotherboard: MSI MPG B650 TOMAHAWK WIFIMemory: Corsair Vengeance LPX 32GB (2 x 16GB) DDR5 6000MHzPrimary Storage: Samsung 980 Pro 1TB NVMe M.2 SSDSecondary Storage: Crucial MX500 2TB SATA SSDGraphics Card: NVIDIA GeForce RTX 4070 TiPower Supply: Corsair RM850x 850W 80+ GoldCase: NZXT H510 FlowOperating System: Windows 11 HomeOptional: ASUS DRW-24B1ST SATA 24x DVD Burner, Noctua NF-P12 redux-1700 PWM case fans
@@randomblock1_ wow terrible cpu cooler choice too, otherwise yeah totally not bad at all
@@randomblock1_some still have CDs/DVDs at home and with some outdated products, software still comes on a DVD, so it's not such a bad idea to have one.
Customer, "server, my AI rice is moving"
Server, "um, that's not rice."
10:50 the gaslighting on display here is absolutely masterful. Had me double checking if strawberry actually had 3 ‘r’s in it
it does. StRawbeRRy.
I swear I thought the AI was right and that they were gaslighting it into believing it was 3 lmao
This guy gets paid big dollars to feed you people bad info
7:38
His kids looking at him at a distance: 👁👄👁
Hol up
best comment. nothing in this comment section will top this.
A different kind of "my paste"
Lmaooooooo
Love how their motto used to be “Think Different”, and now they’re chasing the same trends as everyone.
vvho are you talkinb about
Apple
siri was one of the first assistants. even though they didn't create it they popularized having assistants on phones. and they were very slow to start talking about AI or adopt it, just like they're slow to adopt anything other new tech, so not sure what you're on about. apple sucks for plenty of reasons that are factual.
kinda like follow the trend has always been brainwash to sell bullshit.
@@reanimationxp you answered your own question. They are 'slow' to do anything 'different' nowadays because they're too worried about the 'apple ecosystem'. They're late to trends by several years with the hope their enormous budget is enough to make them steal everyone's attention.
I appreciate you using your platform to call out these malicious tech companies.
One thing I wish you'd spent a little more time on however is the training data. I'm an artist and all of my work and the work of my peers is now being used to replace us. Our copyright over our own work was completely ignored as the industry tried to move too fast to be stopped - they fully know what they're doing is wrong which is why in interview after interview they'll dodge the question of where the training data came from and instead use yet another buzzword: "publicly available". As if putting something online makes it royalty free. Anyone who parks their car on the street better be careful, because that's publicly available too.
Even if you're someone who doesn't care about artists or creatives and thinks we should all "get a real job". I'd like you to know that there have been illicit images of minors found in these models and people are using them to generate more. If you've ever put pictures of your kids online, they'll be in those models too. It doesn't take a huge leap to guess what's going to happen when this algorithm needs to figure out what a child looks like in order to produce new illicit images - pictures of your kids are it's reference material.
Yeah and I made a personal voice on my iPhone and that shit is scary, it’s a bit buggy but sounds really good for what it is😭
Bro dropped the hardest thumbnail and thought we wouldn't notice
For someone using a lain icon I’m surprised you watch this garbage.
@@nerobaal6655 he's average at best
The thought of AI being in charge of my...manscaping is horrifing and hilarious :D
It has studied literally billions of pictures on the darkest reaches of the internet to perfect its craft.
I work in the creative industry as a voice actor. I can confidently say that I haven't met a single artist who wants ElevenLabs or Suno. It's a cool new shiny AI that can replicate human speech very well, but it's demoralizing to artists that spend years honing their craft. Businesses and advertisers are selling their integrity for the sake of generating quantity over quality and saving money. The silver lining is that the culture is starting to shift before the AI bubble has even fully popped. People are already starting to move away from AI content in preference for more obvious human made content. I have a friend who lost their job because the agency they worked for trained a generative AI on their artwork. I'm thankful for all the lawsuits going on right now and I'm thankful the cultural ethos is starting to shift. In the end, I think AI might be one of the best things to ever happen to humanity. No because of what it gives us, but because of how it ultimately reminds us of our how precious our humanity is.
1:00 ah yes my favorite processor, the core i7 1400kf
Did they notice that mistake in editing?
Why would someone even care about such a little mistake
Neuro-sama will never be a lie.
keep dreaming it's a llm with azure tts
There is a short of Neuro-sama trying to spell "Hi Anny" and it's the funniest shit I've ever seen about the current state of AI 😂
Neuro-sama would never lie to us
I can't wait for a Linus vedal collaboration, he really needs those h100's
*wink*
Things: normal reaction
Things "AI": *hyping*
AI is a relevant descriptor for products
7:17 the "Exponential Growth of Computing" chart looks intuitive but isn't just already being proven wrong but relies on a very bad obviously disprovable assumption: that exponential growth lasts. It's much more likely the curve will flatten out. At least for a while until in some far future there's a new modality in computing.
The one computer buzzword I still don’t understand is “Internet of Things”
'Everything is connected to the internet' is how I've always understood it. Still a stupid phrase though.
it means that every small machine is connected to the web, not only computers. like everything (toaster, your doorbell etc.). all things are connected and communicate, can be remote controlled yadayada. its a bit like in cyberpunk. so yeah... everything can be hacked too. its one of the reasons why ipv6 was needed, because there arent enough public ip adresses to connect everything.
@@seigeengine You mean "the use of the internet by useless objects mainly to participate in DDoS attacks" given how often their security is absolute lackluster and the fact that they've been utilized in attacks for a while now.
Kinda weird to buy a light bulb just to question yourself if it's infected and currently participating in a try to take the steam servers down honestly.
“Internet of Things” === not cloud, but cloud you can throw at the wall
@@Unknown_Genius very small groups of people have the specialty knowledge or even the mental capacity to pull that off or even come up with a good reason and scenario to do so. 😏
Everyone's a gangster until some biology nerds make a real fleshy brain like GPU and play doom on it in real time
The Torment Nexus?
Thought emporium
Human brain SLI when?
Korrok
I tried to get multiple art AI's to make a picture of a Centaur for my DnD campaign, not one of them could create anything even close to a centaur. They always created a picture of someone riding a horse.
If you want same thing from a human probably you will have nothing because centaur is not a common knowledge, thus AI do not have enough data to prepare a decent centaur. Yes, AI models have flaws, but this is not an important one.
THIS! yeah I ended up finding a stable diffusion model that can handle centaur-like bodies. Though I understand why ai generators can't do it. It knows what a centaur looks like- but it also knows what a person riding a horse looks like even more. so when its diffusing the image it'll naturally just make it a person riding a horse.
@@thedogank I appreciate what you're trying to say but you're so off the mark in this case that it's rough to read
You just didn't use a decent model, or didn't knew how to correctly use the tool, that's all. The better ones are paid. You should also use opposite filters, or whatever they are called, to reduce the weight on some stuff, and increase the weight of others. There's almost nothing the top stable diffusion models can't generate, you just need to play around.
BRUH. That only shows that you don't know how to use an image generator, it's not just putting something and that's it, it's putting the learning before and writing until you get the result. Use other people's LORAS or train your own using a model that can be used for that. This also shows how well you know how to read and research.
I did my master thesis on machine vision in 2004. At that time, same neural networks based approach was in use that LMMs are using nowadays. Worst problem was overfitting (learning was performed too long). It increases validation error and breaks the model in long run. Nowadays there are new methods to tackle this, but something like AGI would be a huge leap from current LMMs.
That GNU head slowly coming in was hilarious
Such a funny video, lots of great add-ins during the editing 😅
I didn't get the joke... Would someone like to explain?
Slapping AI on a product is similar to whrn everything had "gamer" slapped on it's name and was painted black and red, it doesn't mean it's better, but sure is more expensive
Ye, and the funny part about it is that nothing aside of the better (but still cheaper than absolute greatness level) office chairs really sold.
apart of maybe gamer sups for the sole reason that it.. is a cheaper alternative to some other energy drinks that does taste pretty good depending on flavor.
I’m so glad someone finally made a video on this topic. My friends and family all get scared when they hear AI in everything
Can't wait for Folding ideas to make a video on AI called something like "It only does" that becomes the next "Line goes up"
Honestly I thought that's what his next one was gonna be and then we all got James Rolf'd
Dan specifically said that he's not doing AI right now because he doesn't want, in his words, "a moving target". So we'll probably need to wait for the bubble to crash for that one.
@@BZero3 Yeah, that's fair. Putting all that work into the video only for it to be outdated by the time it comes out wouldn't be ideal.
Most of the AI hype right now is based on *transformer models*. You give the model a sequence of symbols, and it makes a reasonable guess as to what the output symbols are. Transformer: input symbols to output symbols. Those symbols are mostly text, each symbol is a fragment of language ranging from a single letter to short common phrases (these are tokens).
The model isn't doing deductive reasoning or things like that. Input symbols to output symbols. This is part of why the models hallucinate. Let's say the model gets one of the first 100 output symbols a bit wonky. Well now its next output symbols are based on some wonky symbols. Then it gets weirder and weirder. This is why the nonsense answers like "glue on pizza" tend to exist. There ARE cases where they put glue in pizza: food photography. It makes for a better "cheese" pull... nevermind that it's not cheese. So the model hears "stick my toppings on", and the "stick" symbol(s) are there, and there's another text passage it was trained on about sticky glue in pizza, and well... Bob's your uncle, I guess.
Anyway the hype is indeed vastly exuberant. Are the models pretty good at producing realistic looking output text? Yes! But are the models designed for getting all the details right? No! And even worse, *the core mechanics of the model can't easily be ammended to make it work.* There's no obvious way to say "oh, and also APPLY LOGIC to the output to make sure it makes sense. Check your sources and such." Nope.
Side note: this is why the AI companies HATE the idea of being required to do attribution. Because they can't. These are statistical models that go symbol-by-symbol, with probabilities from vast mounds of training data. It's not tagged, the model can't say "I printed 'th' next because of training example number 3838382934792374 in particular from this specific source." It's more like that source, along with a million other observations, was used to nudge the model weights slightly to the left to be more accurate. Each output symbol is the amalgamation of billions of examples and trying to replicate the output. At best you could do a "jackknife" estimation where you say something like "had I not seen training example 3838382934792374 I would have been roughly 0.001% less likely to output that symbol."
Side note: I'm sure people are trying to work on improvements to the models. But even ChatGPT and other top-tier models are easily confused on details and get things wrong constantly. Another several transformer-level breakthroughs will be required. First there were neural networks, then convolutional neural networks when I was in grad school as all the rage. Then recurrent neural networks for learning sequence transformations... but they were slow. Then transformers show up with a vast improvement to training speed and here we are. But to combat nonsense I think we need more logic like "if X and Y, then Z" type deductions as opposed to just big statistical models. Needs to be something more there.
Final note: these AI models are basically a giant compression algorithm. They were trained on many many terabytes of data. You put in a reasonable query, and it spits out a PAGE of reasonable looking data. One way of looking at transformers is as a super juiced up lossy compression algorithm. The words aren't stored in the model, it computes them on the fly from input. Very neat.
That's super useful info. More needs to be said about attribution: and that's a nice turing test (or at least a test for intelligence and reasoning abilities) actually: get it to explain its "reasoning".
That said I have seen it solve leetcode problems incredibly well and it does make me wonder WHY it's so good at that?
I like your analogy of a compression algorithm. I kinda thought of it more as a sophisticated search engine that's searching an infinite library (look up "the library of Babel"). Somewhere in the infinite library is every word and every combination of words, it just needs to be found.
The other way is that it is a relativistic information system. It stores relative data. And depending on the "temperature" crap, it produces more varied probabilistic outputs. Garbage. 0 intelligence.
wrong - "PAGE of reasonable looking data. "
The pizza thing was apparently from a reddit shitpost, with Google's AI literally unable to double-check information. Much less interesting than the interesting divide between food photography and food safety. It doesn't understand the context of either, just regurgitating the input data, which includes reddit shitposts.
When they started advertising Ai-powered thermal paste, I knew it was way overblown as a buzzword.
I was half expecting 'Blockchain powered paste' during the last tech hype season 😅
It's important to not conflate Generative AI and LLMs with AI in general. WIth the former there is a gold rush mentality at the moment with little concern over environmental impact or copyright issues. What's needed is some type of framework for sustainable growth in this rapidly growing field!
I worked in a company that had AI in it's name, all we did was to add a chat GPT api...
And? Chat gpt is ai.
@@MrWizardGG sure, but the company itself didn't have any AI of their own (they were going to use chatgpt as part of their 'AI') and they wanted the government to get subsidies because of that.
That's not illegal but it's sure not ethical
@@dany_fg I disagree, I think it's weird to expect every company to re-invent what chat gpt has already invented.
@@MrWizardGG not re-invent, just create locally with their own data
Because otherwise it's like I'm creating a revolutionary social app but in reality it's just a twitter or reddit client
@@MrWizardGG What is potentially unethical here is what was subsidy granted for.
Through an example, let's say that subsidy is given for providing new source of water, and it's given for someone extending pipes so current water facility can reach new place.
Companies over use AI so much so becomes almost obnoxious hearing AI
Linus: generate an image of a computer
AI: YES
I had it do all kinds of things that humans can’t do.
In relation to driving edge cases; the collected data and synthetic generated data that Tesla uses to train its models will make it familiar with and trained to deal with situations no human driver would. When trained as a pilot we were drilled into the idea that we need to have the right actions occur instinctively, so we practiced things like stall and spin recovery. In driver education the basic requirement is to basically steer and control the car. There is no training in the same kind of emergency conditions because it is VERY dangerous. But Tesla's are training on exactly these in-silico. Machines will be able to react far faster than the human nervous system. Machines don't need to be perfect, they just need to be better than humans. And humans SUCK at driving. They get distracted, tired, drunk and high. They get impacient, take crazy risks, get crazy angry and sometimes drive the wrong way down a highway. By far the most dangerous thing on the road in future is other human drivers.
I can imagine the meeting now:
"The term 'smart device' is now too old. We need a new term."
(And the rest is history)
1:56
I'd just like to interject for a moment...
"I never expected my paste to be sentient, anyways"
😏
The section that you mention how simple the ML models generate images are overally underestimated and you are under valuing the huge leap of technological changes these tools are doing especially with image generative tools.
"It takes the keywords from your prompt and it starts compositing filling in the image until it hits for example a certain percentage of computer and desk"
I still don’t understand why the Techsphere won’t adopt the monikers that Mass Effect nailed, Virtual Intelligence and Artifical Intelligence. AI referred to species like the Geth and the Reapers that were actually self aware. Virtual Intelligence referred to the narrow focused systems like ChatGPT and the like.
The main difference is that we don’t actually have a real artificial intelligence, so that distinction isn’t necessary yet.
@@CanIHasThisName we have the ttech just its keept hiddden from the public
@@midnightblue3285 Stop watching conspiracy nonsense.
"I never expected my paste to be sentient anyway"
Heh.
It "could" be with nanotech/microfluidics... but it doesn't mean it should!
I still remember AI as the programs that control non player characters in computergames. Most often called out on real time strategy games for bad wayfinding and on shooter games for not seeking coverage.
I seriously cannot think of a way this whole "AI" thing ends well.
It depends on which sides you are on in the bigger picture.
As long as I get my AI waifu I dont care sorry