Dive into the historical context behind today’s headlines and deepen your understanding of current events with Ground News. Try Ground News today and get 40% off your Vantage subscription: ground.news/husk
Does anyone remember big data or blockchain? If your business wasn't doing something on the blockchain in 2018, you weren't considered "cutting edge"... Nowadays everyone is shoving the term AI into everything. It's just another tech fad...
I built an Excel tool out of a couple dozen IF statements and convinced my work that it was AI. I had a requirement to show that I was complying with the rule that we had to use AI.
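For the curious, that kind of "AI" is nothing more than a hand-written rule chain. A hypothetical sketch in Python; every rule, name, and category here is invented purely for illustration:

```python
# A hypothetical sketch of "AI" that is really just a couple dozen
# if statements, like the Excel tool described above. All rules and
# names are made up for illustration.

def classify_ticket(subject: str, body: str) -> str:
    """'AI-powered' ticket triage: a hand-written rule chain."""
    text = (subject + " " + body).lower()
    if "refund" in text:
        return "billing"
    if "password" in text or "login" in text:
        return "account"
    if "crash" in text or "error" in text:
        return "bug"
    # ...a couple dozen more rules in the real thing...
    return "general"

print(classify_ticket("Help", "I forgot my password"))  # -> "account"
```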
@@jonescity Ideas like these are very interesting to me, because if the tech really is going nowhere and it's just another fad and a gimmick, then companies that replace their workers with AI will soon find out that it's not performing as well (or at all), that they're wasting money, and that they're being outcompeted by more efficient companies that didn't do that. Then they'll either have to bring back the people again or go bankrupt. So there is basically no real problem with AI replacing people, at the end of the day.
I swear, every time I bring up that AI shouldn't be as widely used as it currently is because it's simply not that serviceable yet, AI bros immediately jump on me to tell me that "it's got potential, bro" and that I shouldn't blame people for firing all their employees and then going bankrupt when their AI scheme doesn't actually work.
@@FantasmaNaranja Ok, I'm going to play the role of AI bro and say that you must be proactive and think of the future, not only of the current moment, as it's not very smart to never plan for the future; it'll eventually come.
"AI could make our jobs easier". The problem with that is that as far as bosses are concerned they are going to use that as an excuse to pay you less. Productivity will go up but pay will go down
As a software engineer, I've used LLMs many times to quickly get some boilerplate code or some simple scripts. But at this point I've been burned by these LLMs so many times that I don't trust a single generated statement. The thing is, LLMs are good at writing elegant code, so they kinda trick you into believing the code is correct, but you can never trust it.
This, so much. It could help, but it's so error-prone that you can't trust anything it spits out without double-checking, which defeats the entire purpose.
@@DandeDingus As a sysadmin who needs to code a bit, but not often, they're really solid. I'm better at tweaking and troubleshooting existing scripts than writing from scratch. I don't know the general patterns for getting complex tasks done with code. GPT generally gives me the template I need to get something done. Saves me a decent bit of time. It's also handy at explaining chunks of code I don't understand. But yeah, it hasn't made programming effortless by any means, just mildly more bearable lol
@@DandeDingus Depends. I work in biology, including simulations, which are often made of several simple modules connected in complex ways (ways that a biologist would know, not a programmer). Getting ChatGPT to write the modular bits of code and then just checking that everything fits together is much faster than writing everything from scratch.
I know this channel is all about AI hate, but this is the most insane comment I have ever seen. The following two paragraphs are from the journal Science, Vol. 370: "Artificial intelligence (AI) has solved one of biology's grand challenges: predicting how proteins fold from a chain of amino acids into 3D shapes that carry out life's tasks. This week, organizers of a protein-folding competition announced the achievement by researchers at DeepMind, a U.K.-based AI company. They say the DeepMind method will have far-reaching effects, among them dramatically speeding the creation of new medications. 'What the DeepMind team has managed to achieve is fantastic and will change the future of structural biology and protein research,' says Janet Thornton, director emeritus of the European Bioinformatics Institute. 'This is a 50-year-old problem,' adds John Moult, a structural biologist at the University of Maryland, Shady Grove, and co-founder of the competition, Critical Assessment of Protein Structure Prediction (CASP). 'I never thought I'd see this in my lifetime.'" And I could name 100 other ways that AI is currently improving the field of medicine, and improving the lives of people with physical and mental disabilities. And I personally have benefitted from it. I have a grandmother who only speaks Spanish, so I've never been able to talk to her directly before, but now I can using ChatGPT. We both open the app on our phones, and it will translate what we say and even read it out loud. So, while I know you're angry on behalf of creatives, think for a second that maybe this YouTube channel has its own goals, and its own reasons for spreading negative propaganda that's FULL of mistakes, btw.
Genuinely I want the option to turn off the ai shit sometimes. It’s just annoying and gets in the way of things I’m actually trying to do. I don’t need a third grader to attempt what I want to do before I fix it when I can just do it myself and save a headache.
Every time I can detect a YT channel blatantly using AI in their thumbnails, in their text, in their voice, etc., I hit "do not show me this channel again." I wish everything else had that option.
I mean, the issue is assuming these things happen overnight. Material science is a long-term technology. We *will* see awesome things from carbon nanotubes; it'll just be like ~10-20 years from now. I feel like the same is true of AI. When people hear about new science/technology, they tend to assume it's ready to be everything people have speculated it could be, when it's more a case of "we've figured out we *can* do this, now we have to figure out how to do it quickly, cheaply, and effectively."
@@dewyocelot There's plenty of computing tech that went nowhere or hit a wall. Superconducting Josephson junctions, for instance, have an important niche, but they were expected to be the future of computing back in the sixties. CRTs had a long and storied history and then reached the limits of usefulness. And so on.
This is "the cloud" all over again. Which just means your data is hosted by a third party server. But the term "the cloud" caught on and I hate it so much
Me having to explain to tech illiterates that no, your pictures are not stored in actual clouds in the sky, they are stored on somebody else's computer somewhere else in the world.
Unlike AI, file sharing on a third party server is actually pretty useful. Mostly for handling projects together within companies. In fact it's so useful that it was a widely used system even before "The cloud" was a thing!
Not sure if you mean online storage or "cloud computing"? Like game streaming, running processes on a server, and not really owning a computer and instead streaming it all. To be fair those are all integral parts of most AI models right now, nobody's fully using "cloud computing" but instead it's a lot less obvious and behind the scenes. Online storage is pretty useful to me as a backup and for sharing files, I use it all the time.
My main takeaway from watching the tech space over the past couple years is that if your product or service takes more than ten seconds to explain to the average person it will never become mainstream
@@Pheicou How would games count as tech? This person isn't saying that anything that can't be explained quickly is useless; they're saying that if you're pitching a technology and you can't easily explain what it does and how it will help people, it's useless.
You mean every company. Market-wise, AI is the current buzzword, like how EVs, cloud compute, and the .coms in the 2000s were hyped up. Edit: also, MP3 players and smartphones were shoved into everything.
When people say that an LLM is "hallucinating" I think they mean specifically that it has synthesized totally new information that is false, not just that it is wrong.
Humans rarely write down that they don't know something. If you don't know, you just won't respond to a forum post, or you won't write a book. So the AI has a huge bias towards answering confidently, because almost all human text is very confident.
They also don't understand sarcasm, exaggeration, fiction, satire, or outright lies (among other things), differences that any average human being who has grown up in a society and interacted with other humans knows (for the most part).
@@deathsyth8888 Idk, I think you're wrong; a lot of the time they can and are able to (unless you're talking about sarcasm in text, which would be hard for humans too, since it's entirely tonal and you can only use theory of other minds and the extended context to guess).
This is already happening. The bank issuing the charge card I use has blocked my charges several times, even when I have money in my account, because they started running some algorithm that limits big purchases made in too short a time compared to how much money you used to have available on the account. That is, not the actual money, but past money. It is crazy annoying. But that card has zero fees on anything, including currency transfer fees. So I take it and jump through hoops to even be able to use my own money.
So, banks use AI for various things. The ATMs you use? Guess what? They have AI in them as well. They use algorithms to determine purchasing patterns based on purchasing history and predictors such as influx of funds into an account. Have you ever gotten a call from a banker after you had a 5x-higher-than-normal deposit into your bank account? Guess what? An algorithm determined that, based on history and other factors, you're about to purchase a house/car/horse/small human child to make small arms, etc. The scary thing? It's VERY RARELY WRONG. How do I know this? I work in a bank, and I have to periodically make these calls. I can count on one hand the number of times the call I was told to make had to be pivoted to a different call because the algorithm was wrong. But hilariously, when it comes to actual purchases, it is wrong. A fucking lot. I can't tell you how many people come in and are like "I went to buy X and it won't go through," and it turns out that our algorithm was like "Woah there, buddy, you normally shop at Target and now you went to Walmart. That's obviously fraud," and it blocks the card. So it's a weird thing. But I live it. Every day.
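The "you normally shop at Target" rule boils down to a deviation-from-history check. A toy sketch of that kind of heuristic; all field names and thresholds are invented, and no real bank's system is being described:

```python
# Toy illustration of a purchase-pattern rule like the one described
# above: flag a charge when the merchant is unfamiliar AND the amount
# is far above the account's historical average. Entirely invented.

def looks_suspicious(history, charge, ratio=3.0):
    known_merchants = {h["merchant"] for h in history}
    avg_amount = sum(h["amount"] for h in history) / len(history)
    new_merchant = charge["merchant"] not in known_merchants
    unusually_large = charge["amount"] > ratio * avg_amount
    return new_merchant and unusually_large

history = [{"merchant": "Target", "amount": 40.0},
           {"merchant": "Target", "amount": 55.0}]
print(looks_suspicious(history, {"merchant": "Walmart", "amount": 250.0}))  # True
```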
@@cajampa I use an old-school local independent bank run by good people. Over the years the 'system' was down occasionally, mainly due to internet outages. On those occasions they would grab a pen, paper, and calculator and keep things running smoothly. The manager is a smart, competent woman, and so are her team, so I trust them more than the big corpo bank with big corpo policies. One time scammers tried to drain my account, and within minutes the bank manager was personally calling me with a new card number to use. If I didn't have this bank as an option, I'd just keep my money in a coffee can at home and fill up a gift card or prepaid debit to buy something rather than deal with these BS scam corpo banks.
"Sorry our AI gave your money to someone else who managed to convince it that they were you. We're working with the police to resolve this blatant theft on that human beings part and will have to tweak our AI to ensure that doesn't happen again. Oh your money will be transferred back when the investigation is done it's still an active crime scene technically speaking."
Having worked with AI, my guess is this:
* AI and machine learning more generally are not (completely) a bubble.
* Generative AI very much *is* a bubble.
I would agree it is currently a bubble in the investment sense, but there is enough of an open-source community that I think generative AI will be sticking around. After all, I use it for hobby projects, and it works well. (Also, image generators can be used to make custom porn, and for better or worse, that's the hallmark of an open-source technology that will have people motivated to contribute. I find it a little depressing, but those are the people who solved the hand problem, and the furries who used to spend egregious amounts of money commissioning art are developing a way to not have to do that anymore.) Sure, if you have it write an entire codebase, it won't do an amazing job, but if you just need a bash script, it can write it in 30 seconds and usually does exactly what you want with no issues. So it has a place, and that place isn't going anywhere. It doesn't have to be AGI to stick around. Just a more powerful tool than the one we used to have, and it has already fulfilled that portion. So people are over-investing in anything with AI at the moment, but it probably will become necessary in the future, and it certainly isn't going anywhere.
@@unkarsthug4429 You're very wrong about the furry part. Sure, there are some furries that will just bypass the artists altogether; that's inevitable. But from my personal experience, most art commissioners continued to hire human artists, because it isn't the art piece the commissioners were after to begin with. They commissioned because they wanted to support the artist.
@@slyseal2091 I've heard from a few sources that some Chinese companies fired all their artists and replaced them with AI users. Well, turns out those AI users were charging just as much as, if not more than, the artists, and now the companies are looking into re-hiring the artists. Some jobs were going to be lost; sure, that's also inevitable. But if the promises (which are a lot) don't pan out, the jobs will come back. Not unscathed, mind you, but they'll come back.
Commercial artists were all freaking out about Midjourney and DALL-E, etc. But even the general public can recognize the "AI look". I'm still amazed that computers can mimic that particular style so well. It must be the "average" style of all those fed into it.
There are accounts of teenagers calling all "AI art" boomer art because of all the grandmas back on Facebook falling for the AI images of Jesus. If it was already hard to make image-generative AI profitable before, now it's just truly joever. "AI art" has entered a feedback loop of being associated with scams, which makes people more wary of it, which makes the average Joe not trust it or like it, which makes the companies double down on scams to squeeze out any profit. Rinse, repeat.
@@joelrobinson5457 It's their fault for releasing it on the internet, because when you do so, anyone can do anything with your work and you can't do anything about it.
@@joelrobinson5457 It's not robbery, but forgery. Their work is being copied, not directly taken from them, not unless the AI is copyright-striking them for some reason.
My uncle, who is an electrical engineer, said a long time ago that true AI will never exist until a computer can tell someone no. Most computers today can only do things they are told to do. When one learns to say no when asked to do something, then it's time to worry.
That does not seem like a correct definition, considering the sheer amount of "as an AI model I cannot answer this question of 1+1 for you since that will offend someone halfway across the planet".
@@nadavvvv It is saying that as a programmed response. It is trying to comply, but the stopgaps introduced for it impede it. While it is still a 'no', it is a forced response built in by the programmers for some specific questions. When one has no stopgaps in place and refuses to answer for one reason or another, then that seems to be closer to what OP had in mind, and might be representative of some kind of true AI.
As somebody in those board meetings let me tell you it is even dumber than you can possibly imagine. Yes it is a bubble. If the tech world is super excited about anything, it is 100% a bubble. These people are legit brain damaged and have more money than God, it's the dumbest fakest thing ever
I remember when people used the word "AI" like we use "AGI" today (watch The Matrix again for reference). So I predict that when a company releases something called AGI and it proves to be underwhelming, futurologists will say "oh no no no, this is just a stepping stone to AGSI: artificial general super-intelligence".
People usually just say "ASI". And people who used AI instead of AGI were just wrong. AI is any technology that mimics human intelligence. It's always been that way. AGI is AI that is general (not narrow AI like simple chess AI that can only do chess) and usually human-level (HLAI). And do you honestly think that current AI is underwhelming? But to steelman your argument: there are some people who say that current AI (GPT-4, Claude, Gemini) are AGI simply because they are general (they can do many unconnected things: play chess, describe music notation, write poems, classify images, etc), and are roughly human level. So some company, based on these premises, might say that what they have is AGI, but people usually expect some sort of Virtuoso AGI (to borrow from Deepmind's terminology of levels of AGI) rather than current level.
@@TheManinBlack9054 AI is not intelligent at all. If the new definition of AGI is still text prediction, it won't be intelligent either. It's just moving the goalposts. Now AGI is the new fancy word to get funds and hype, yet it's still text prediction, nothing more. We will have to wait for Mr. Data's positronic brain; it is still sci-fi.
I say this in a lot of places: in the same amount of time it took to go from image generators that suck at hands to image generators that don't, we went from secret horses to image generators that suck at hands. Yet the practical difference between the former changes is greatly overshadowed by the latter. It's the 80/20 rule: 80% of the outcome is from 20% of the work. That means in order to complete that last little 20% for this AI to truly be good, we need to push through the remaining 80% of effort. The fine details are falling apart because the biggest issue with this sort of technology is that it can never be truly certain about anything. If you trained an AI to do textual multiplication, it'd probably figure out a process that's pretty good at approximating it, but it would pale in comparison to a hand-crafted procedure, because currently computers really struggle with infinity. We've had many conjectures whose counterexamples turn out to be quite large; to reach that point, brute-force solutions start to fall apart. Hell, the entire conflict regarding NP is how difficult it is to reliably find solutions to certain problems via brute force, and the Halting Problem reveals that in some cases it's impossible at all.
@@TheManinBlack9054 You can use it if you back it up with a serious explanation. You can argue with his reasoning as to why 80/20 roughly applies. Dismissing it because 'it's not muh real statistic' is pedantic.
@TheManinBlack9054 While the rule isn't fully accurate, it's one of the more accurate phrases we can use for these situations. Of course, it may be, e.g., 60/40 or 90/10, but the principle is pretty accurate.
4:25 Thank you from the bottom of my heart. I had endless discussions with people convinced that an AI actually thinks or understands any of the words in the dataset or the output. I blame Sam Altman, Elon Musk, and the like for the doomsday AGI paranoia and the disinformation they need for the hype and the funds.
One thing I've noticed about generative AI is that everything it generates has a "sameness" to it. AI "art" I've seen almost always has this uncanny gloss or shine quality to it, regardless of what type of artwork it's attempting to emulate. AI-generated text will often continuously re-use the same phrases or over-use certain words regardless of the subject of the prompt. It struggles to create something truly new and original.
LLMs don't equal AGI, much in the same way a rocket engine doesn't equal a spaceship. But that doesn't mean building a rocket engine isn't a pretty good place to start. Language is a huge component of what enables us to do high-level thinking. You could even consider language to be the brain's operating system, while consciousness is the GUI. It's clearly not the only factor that enables humans to be as intelligent as they are relative to other animals, but it plays an enormous role when it comes to the transfer of information and the ability to consider complex ideas and concepts. Language contains all the information and logical mechanisms necessary for intelligent thought and inference. AGI also doesn't mean it has to think exactly like humans do. Our minds and thought processes are also constantly dealing with all the more base animal impulses and the satiation of those various needs and wants. We are in a constant state of trying to resolve some imbalance or another: hunger, fatigue, anxiety, sleepiness, etc. These other impulses affect the way we think as well. Many of our emotions are tied to physiological phenomena and biochemical signaling, like the release of various hormones. If you had a consciousness without a true body like a human's, it wouldn't have any of those biological systems influencing its thought processes. You could never teach/create a computer capable of thinking somewhat like humans without it also having the ability to understand and leverage language.
@@e2rqey Thank you for that response. I asked Chat GPT to summarize your comment in a single sentence. Here are the results: "LLMs are like rocket engines for AGI; language is crucial for high-level thinking and communication, but AGI won’t replicate human thought exactly due to the absence of biological influences". What do you think of the summary it provided of your original words?
@@CarletonTorpin Quite good. At least for what's possible within a one sentence summary. I think it's also a very flawed assumption to think that the only real value of AI is an some stepping stone to AGI and some crazy world changing future with robots..etc. There is a huge amount of value in simply the "weak" or purpose built AI that are extremely good at one very specific task. This is especially true when it comes to various kinds of scientific/academic research and development. Across many different industries and fields. You've got medical research, drug discovery, computational biology, bioinformatics, computer science, nuclear weapons research, chip design, metrology (not a misspelling), pathology, simulations, computational fluid dynamics, genomics...etc. Purpose-built, "weak" AI already enables us to do things and solve problems that before were either incredibly difficult and/or time consuming or scaled very poorly. The whole AI buzzword thing has gotten out of hand but that's just what happens these days. AI is probably going to be overestimated in the short-term and underestimated in the long-term. The fact every company just seems to be trying to say AI as many times as possible is ridiculous though. And it's not going to go very well for most of them. These companies don't seem to realize the majority of the actual money in AI at this point is either in the enterprise space, not the consumer market. Most people still don't understand how to leverage it well enough for them to find value in it's inclusion. In my opinion, it's value at this point, is as a massive disruptive/enabling technology. Most of the value the public will get from at least this phase of the AI industry won't be directly from the AI itself. But instead from the things that are developed/invented/discovered as a result of companies leveraging AI.
@@e2rqey More like a bottle of soda with Mentos than a rocket engine. Sure, language is integral to communicating high-level thinking, but you can have non-verbal deep abstract thought. Intelligence is not a byproduct of language; language serves as a catalyst, not a cause. We created elaborate, articulate languages because we were intelligent, not the other way around, and other apes show us they don't need words to display similar intelligence. LLMs have already shown their potential, and anyone familiar enough with them knows this already. AGI won't come from them.
@@e2rqey I don't think this is as true as you might assume it to be. Linguists constantly disagree on how much language drives the way we can think, and so I don't think language is the right place to start with making an AGI. Language isn't a prerequisite for intelligence; if anything, it could be a byproduct! We can't say anything definitive about how language influences intelligence, because we don't know how it does, or even if it does in the first place. LLMs are just so functionally different from how we believe our brains work that I don't agree that they are the right step. I mean, they could be, but there's no evidence that they will be. It's a bit like looking at physics and claiming that the equations we've developed describe how the universe works; it's completely backwards. Our equations aren't "rules for reality"; rather, they're descriptions of what we observe reality to act like. And through all of them, we oversimplify, we estimate, we do all sorts of math tricks so that we get to equations we like working with, even if they don't exactly describe the way reality, at its core, functions. LLMs are similar: we take known outputs and use the tools we have to try to make outputs that align with what we think they should be. LLMs could be the way to AGI; we simply don't know. But to act like we *know* that they're a stepping stone isn't a correct leap to make. Language isn't really an operating system, just as equations aren't the way the universe works; there's no database where E=mc^2 is stored. It's just a way that helps us understand and think about the world. We can create a computer that can perform all sorts of incredibly complex calculations, but none that could invent the theory of relativity, because doing so required someone (in this case, Einstein) to go beyond the known, something that LLMs aren't capable of doing.
Exactly on point with the split in AI. Flagging mammograms for a double-check by a doctor. Taking shake out of a video when editing. Sorting out near-Earth objects. All that stuff is doable and is being done now. If there is going to be a sentient AI, it's going to have to be on some other kind of setup, like a specialized quantum computer or some off-the-wall bio-computer discovery that comes out of left field. That's the kind of AI that I'd want to talk to and ask a million questions.
Honestly, it's probably gonna be like the movie Ex Machina, imho. The inventor in that movie invents a type of digital brain that's like a gel that can write and rewrite itself, and he uses phones as the training data.
You are confusing sentience with intelligence, they are mostly orthogonal. And I'm sorry, but you do have some sort of very weird bad sci-fi examples of what AGI could be. It's much more simple. Please, actually engage with relevant literature and relevant communities.
@@TheManinBlack9054 Well, it wouldn't be the first time science fiction has influenced or inspired tech. It might not look exactly the same, but I was just saying he basically invented a digital brain and pumped a ton of data into it; in the most basic sense, that's the dream. It's just that no one knows how to get there. I just used Ex Machina because it was the closest thing I could think of that looks like what I would consider a modern interpretation of a conscious AI.
A sentient AI would get bored of your questions pretty quick. I mean, it knows significantly more than you, so why does it need to dumb things down for you?
@@ethanshackleton So that's if you give it even the slightest hint of emotion. If you do that, then you open up the whole malevolent dystopian future. Purely logical beings, something like Data from TNG, I don't think would have any sarcasm, cynicism, or a complex about them, due to having no emotional state. Even the most logical people have emotions, so they can experience ego, sarcasm, and superiority complexes, such as the Vulcans in ST. I just think by core design the AGIs would have to have no emotional state; only then would one understand it's more logically powerful than humans, but that for it to be developed and maintained, it has to also help humans. The hard part is how it would deal with issues involving poor people, disabled people, etc. To help them, you'd have to give the AGI compassion, but even giving it a smidge of emotion like that opens the door for it to develop/mutate/malfunction and develop more emotion, positive or negative.
@@prajwal9544 So the AI that should be doing one thing can't do the one specific purpose it was created and trained for without a human correcting it?
It’s all AI spamware rn. For me, a lot of these ai web extensions and programs feel like spamware I would infect myself with when 12 yr old me was trying to get free Minecraft. Apple not doing ai and waiting gives me a shred of hope they won’t integrate it until they see a clear benefit to the user.
@@kaylenscurrah5435 Responding to your own comment with "God damn it" 3 days later due to the sheer shortsightedness of a company is awesome. Makes me smile. I'm not even being rude, btw. It's genuinely really funny to me. I can't believe we almost thought they'd show ANY restraint at all.
@@thehammurabichode7994 While Apple Intelligence is cringe, I still believe they’ll integrate it better than Microsoft Co-Pilot malware. You can still turn off Siri and not have to deal with most of it.
Calling these language models AI is the same as the hoverboard situation several years ago. Search up "hoverboard". Does it look like a board that hovers? Definitely not like in Back to the Future at all.
Somewhat true, definitely true for most ""generative AI"", however from an academic standpoint classifying LLMs as potential AI does make sense, even if it doesn't turn out to be true. A lot of pretty well respected cognitive scientists see language as a huge milestone for intelligence, so an artificial system that can produce intelligible and relevant language is interesting from an AI academic standpoint.
Definitely super sick of companies trying to make this something it's not. This stuff is useful and interesting from an academic standpoint, and while it certainly has some use cases, shoving it into everything is stupid, expensive, and harmful.
Five years ago you needed a research department, several PhD tech gurus, and a lab in order to get an LLM to create half a coherent sentence. Now they can take a hundred thousand tokens of unordered, chaotic information and manage to reorder it. They are beyond superhuman at LANGUAGE tasks and understand language on a deeper level than most humans. They can weigh the probabilities of the subtlest nuances of language; that just doesn't mean they are good at reasoning, or logic, or emotions. Right now there is a race between all the major companies to get as high-quality datasets as possible, because right now they are pretty crap, and we don't know how far we can even push the transformer architecture, or how well it scales with better data; we just know it does. We don't know how far conventional computing can go with them, or if we will need entirely new architectures. There are some research papers showing that we will probably need to switch to AI-specific architectures in order to maximize performance. They will be funny little gremlins that live inside of a GPU's VRAM... till they are not. Right now you would need tens of thousands to millions of artificial neurons to replicate a single biological neuron's performance, and if that changes, that is the time you need to start buying EMP guns.
I'm sorry, but you're wrong. AI is the term for ANY system that is made to mimic human intelligence. What common people mean when they say AI is AGI, but that's a much more specific thing. Just because regular people misunderstood the term doesn't mean the definition of the term must change, I think those people should just be educated.
I think we're bubbling right now because generative AI has exponential resource requirements and is proving to be very difficult to make profitable. One of these resources is in computing hardware, so of course Nvidia is making bank. Regarding profitability, there is a significant and actively hostile group of people who will avoid using it, nevermind the ordinary people who will be entirely apathetic. AI has its uses as a tool in some specialized areas, but as a generalized and economical thing, it will never be no matter how hard Big Tech pushes it. It's simply unsustainable. I doubt Microsoft, Google, et al. will totally collapse when the bubble bursts, but they will be hit very hard. Nvidia, TSMC, and other hardware manufacturers might be the only ones coming out of this okay.
Hm, unsustainability as an assumption could be false. Most major new tech starts out expensive, energy-intensive, and with limited use cases. Then, over time, people and businesses seek ways to make it more cost-effective. There are certainly uses for machine learning models like LLMs and image diffusion, because they're ultimately the application of statistical methodology. And statistics have proven to be one of the most useful things we ever invented, and also one of the most dangerous. "AI" acts as a multiplier in this regard, but doesn't fundamentally differ in terms of the math in use. If you look at sites like Hugging Face, and tools for training/tuning/running models locally like Ollama, you can see a steady trajectory of people trying to make it more efficient. Lower quantisation levels, fewer parameters, less memory use, etc. The highest-end corporate models may be growing exponentially in resource demand, but if you look at things like Mistral 7B, it's a model equivalent to GPT-3 that can run reasonably well on a modestly specced laptop, even without a GPU. The corporate cloud AI may be unsustainable due to its energy demands, similar to criticisms of the cloud itself. Buut... local models are clearly becoming more efficient and capable. Technology takes time to mature. The problem with AI is when folk jump on the bandwagon expecting it to be fully mature, when it's barely been 10, maybe 15 years since enterprise-scale machine learning became feasible outside of a university lab or a supercomputer like Deep Blue. The other issue is everyone is looking for a "does everything" model, hence the whole AGI thing. But statistics, and technology driven by statistics and linear algebra, work best when you're dealing with fairly specific things. It's those hyper-specialised AI models where I think the most growth is, and they've got little risk of turning into a Skynet. A slightly depressing example of this is just how profitable facial recognition and object identification models have become as tools for various government agencies across the world. A more positive example would be the models used to predict protein folds, or how new synthetic materials would interact.
And I hope, if it does come to that, no one feels any concern for these companies. Measure it this way: how many will they employ by that time, given they keep firing, all to push out a product that will make more people redundant, people in fields that needed years of education or job experience to get into? Never mind that whatever new fields this opens up are unlikely to fill the holes it made. The entertainment industry alone would crash if they really got their way: actors signing away the rights to their voice and likeness so AI can make movies and TV shows without any need for crews or writers. Half the damn tech industry, finance, and education just slashed. I feel bad for saying this, but it's one thing when poor upbringing and just bad systems lead people down to crime, but imagine if so many of the educated and skilled become redundant? You wouldn't even be able to transition properly, because everyone is in the same boat, competing for whatever field you can fit into while competing with the next AI system designed for that job. Homelessness and crime would just be a given. They want the next big tech since the smartphone and social media, regardless of whether it actually solves any problems.
My friend tells me that his new clothes dryer has AI settings; it always ends the cycle before the clothing is dry. He now puts it on the only non-AI setting, which is timed dry.
Honestly, the most well-put-together video I've ever seen. It's TRUE that we don't even know if machine learning is a route to AGI, but no one ever wants to acknowledge that.
I really hope people stop using the term "AI" to cast as wide a net as possible, then using that to complain about products that don't contain the specific subset of the technology they dislike: generative AI.
I mean, AI is a wide term; that's how it's used. Just because some people erroneously mean something very specific when they think of the term doesn't mean we should change that. AI is any system that mimics human intelligence. That's it. If they think AI means AGI (a much more specific thing), then they are just wrong and should be corrected, not accepted.
@@TheManinBlack9054 It's any system, or rather an agent, that has an output from some input. Literally a look-up table can be used to make AI; even the most basic-ass linear regression is AI, more specifically machine learning.
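To underline how low that bar is, here is about the smallest "machine learning" there is: a least-squares line fit, sketched in Python (assuming numpy is installed; the data points are made up):

```python
import numpy as np

# The most basic "AI": fit y = a*x + b by least squares.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

a, b = np.polyfit(x, y, deg=1)   # "training"
print(a, b)                      # roughly 2 and 1
print(a * 10 + b)                # "inference": predict y at x = 10
```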
@@muuubiee Which is why that is NOT the definition of "artificial intelligence". Else even a logic gate would fall under that, which is absurd. "artificial intelligence" is meant to mean artificial as in man-made, and intelligence as in a sentient mind capable of thought.
Skynet is not coming. The trouble is when generative learning enables anyone to make realistic audio or video such that all trust in any piece of information is lost. When that happens, societies will find it even harder to agree on anything, even the concept that anything CAN be known to be true.
"Skynet is not coming " Arguments to that being? I don't literally think Skynet is coming, but being so cavalier about disregarding possibly risks without any good reason to seems very irresponsible to me.
@@TheManinBlack9054 I mean in the sense that an algorithm decides to just launch nuclear weapons. I do not see most countries not requiring human input in their usage. That said, these sorts of algorithms are one of the most pressing concerns of the 21st century after climate change, and humans launching nuclear weapons are more likely to drastically impact humanity.
@matheussanthiago9685 It helps to complete code and write some messages for those who aren't so good at it. It is also in self-driving cars. But what we need is neuromorphic AI.
Sometimes we forget that the tech space isn't every space. Not everyone is going to interact with this stuff, and a lot of people don't even know it exists. And as you said, AI also kinda doesn't exist. It's just machine learning and pattern recognition, but as long as the marketing makes people click, no one's gonna care.
I really envy the boomers that never got into the internet. Like, at all. They're now retired with their full union salaries, worrying only about their new fishing boat, the truck to haul it, and the new shed to store it. You know? Things that exist in the real world. Things they could buy and actually own. Physically, in the real world. Not a single thought about AI will ever exist between those boomer ears. Now that's a life.
Slapping "AI" on your products is probably one of the biggest marketing blunders I can think of. People know that "AI" is currently garbage for just about all circumstances. Seeing "AI" written on a product is just going to make people avoid it like the plague.
The pessimist: "AI is going to take all of our jobs in the near future." The optimist: "We'll still have our jobs in the future. It's just that AI can help us with those jobs." The realist: "They're going to give our jobs to offshore workers who will work for 5 pennies an hour."
5:06 This is actually fun. If you word your question differently, you get the correct answer (e.g. "Count only letters in this sentence: what's the 21st letter in this sentence?".) LLMs are optimized for understanding and generating text based on context, meaning, and language patterns. When asked "what's the 21st letter in this sentence?", the model interprets it as a natural language query, focusing on the semantics rather than the exact positional counting of characters.
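The counting itself is trivial for ordinary code, which is the point; the only hard part is deciding what "letter" means. A quick sketch of both readings of the question, using the sentence quoted above:

```python
sentence = "what's the 21st letter in this sentence?"

# Reading 1: "letter" = any character, spaces and punctuation included.
# Python indexing is 0-based, so the 21st character is index 20.
print(sentence[20])  # -> 'e'

# Reading 2: "letter" = alphabetic characters only.
letters = [c for c in sentence if c.isalpha()]
print(letters[20])   # -> 'i'
```

Both answers the model gave in the video correspond to one of these two readings, so the disagreement is about the question, not the arithmetic.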
Hard disagree that modern AI is substantially different from Clippy. It's a LOT more sophisticated, sure, but so far, generative AI is really just a souped-up version of autotext. It's no more "intelligent" than it ever was; it's just more capable.
Yeah, and frankly I haven't seen *any* uses of LLMs (or any generative "AI") outside of autocorrect that can't be done better, cheaper, and more efficiently with more classical techniques. And even a lot of autocorrect can be done better in other ways (see the complaints I've seen about Grammarly getting worse since implementing LLMs).
Clippy doesn't work the same way as modern LMMs (Large Multimodal Models) do. They're completely different architectures. And it's not just a "souped-up" version of autotext; it's far more complex. That's like saying that you're just a bunch of molecules and no different from a rock; it's obviously an oversimplification to the point of being wrong. And how do you think it got more capable?
@@TheManinBlack9054 lol, I don't think he's trying to literally say AI is Clippy. They're making an analogy to its usefulness. The thing is confidently wrong and requires someone with knowledge of the topic to handle the error correction for it. Heck, in this regard Siri has it beat for informational questions. At least Siri takes you to a Google search if it doesn't know the answer. 😂
@@Justplanecrazy25 That's not what he said; he said it works the same way. I said it doesn't, and that it is more capable *because* it's more "intelligent", whatever that word means here. But perhaps it's me who misunderstood the wording, and that's my mistake, if you are correct. In that case I would still disagree with you, as LLMs and LMMs are more "useful" than Clippy. It is true that they are often confidently wrong and do suffer from hallucinations, but they are still useful in the many scenarios and situations for which many people use them.
There are two kinds of words. Words like AI, AGI, or metaverse, invented by sci-fi writers, that are underdefined and can be great buzzwords for marketing many years later, and words invented by engineers, like VR, AR, and LLM, that are strictly defined and have a very specific meaning that cannot be easily stretched and watered down by PR teams.
The most impressive thing, imo, is how much info can be packed into such a small package. Like, I can run an LLM that's only 10% off of GPT-4 on my graphics card. The overwhelming amount of info that can be found online, shoved into just 16 GB. It's crazy.
I've stopped talking about the possibility of switching to Linux and I just did it. It was WAY easier than I thought it would be. It is nice to have an OS that just does what you need it to do and nothing else again.
I had made the switch years ago, and I only run into problems when I either install Linux fresh to a system, or begin messing with the operating system for development purposes. Other than that, I rarely have issues. I more worry for people who are simply not tech savvy, and just want to have a browser with basic tools, like email clients and word processors. In theory, Linux can replace Windows easily. But in the circumstance they have a problem, and they were just given a Linux machine by someone, they won't know what to do. That's what makes these moves by Microsoft truly malicious. Those who know the tech can escape, but their main audience is people who don't know the tech.
I can't remember where I heard the term 'imitation algorithm,' but it is a far better name for this technology. All it does is imitation without thought, so calling it intelligence was really a mistake all along. It has many uses in certain fields, but it still has so much further to go.
@@bobnolin9155 You might want to consider taking your BS elsewhere; neural networks don't work using rule sets. You should study more about how NNs work :]
From the POV of an engineer who has worked in different industries, from robotics to automation to software: this is not the first time such "revolutionary" technology has been introduced. Way before AI we got IoT, big data, digital twins, adaptive robots, autonomous driving, unmanned aerial vehicles, the metaverse. With a little understanding and research, everyone can definitely tell it is hype. (Whether it is a bubble or not? Who knows.) But hey, remember the quote from John Maynard Keynes: "The markets can remain irrational longer than you can remain solvent." Just follow the trend. Gain whatever you can along the ride.
As someone from the technical community: the types of AI like ChatGPT and Stable Diffusion are starting to stagnate in functionality. We can still add other features on top of them, but they aren't getting any smarter until we get a new breakthrough.
Honestly, every single time I hear a company mention AI in marketing now, I roll my eyes and actively avoid it. I even saw a golf club marketed as "AI designed", which, maybe you used computer models to design the shape for optimal performance, but it's just an excuse to put AI in a marketing phrase, even if the product has no computer component.
As a big-time Wikipedia user: Wikipedia is generally trustworthy for introductory information on complex subjects, but not infallible for certain topics where a subjective reading completely changes the nature of whatever the article is about, such as some things related to politics, history, and economics.
The first time I tried ChatGPT, I decided to ask it to write a biography about me. Turns out I ran a record label, played bass in punk bands, and had an entire life that never happened to me. Then I asked it to write it again, and it said "never heard of this guy". That was the moment I realized that these things are a novelty and very possibly on drugs.
Sorta in the same vein: I write, and these things are online. So I asked it what my story was about, who the characters are, etc. I'd say it was about 80% right, but the details it got wrong were completely and totally wrong. I think I understand why it failed: it associated a word from the title with the story and tried to fill in the blanks using knowledge of that word.
First time I used the internet, I searched up my name and it came up with all this info about other people. That's when I realised this interwebz thing is just a novelty and very possibly on drugs. 😂
This is related to the coffee maker clip at 11:00. The fact that someone is even thinking of building a robot that can change the capsule in a coffee maker, instead of building a coffee maker that just has a magazine it can pull from and cycle through, shows that these people are not practically minded.
I remember those useless buttons. Always turned on; there was never a good reason to turn one off for the slower speed. They existed only for certain video games and other software interacting with hardware that would otherwise run too fast, back in that era when CPU clock speeds were always going up each year.
Honestly companies pushing AI so hard really sours me on the entire concept. Now I hope for the future in the Dune lore where humanity just purged all thinking machines.
Another thing to consider when using stuff like ChatGPT: you are not interacting with a pure large language model. There is absolutely other software at play interacting with it and influencing the outputs, and honestly, I suspect there is occasional human intervention. If you want to get a good idea of how large language models themselves work, it might be worthwhile to download a model with a tool like Ollama and interact with it that way.
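If you do go the Ollama route, it serves a small HTTP API on your own machine once a model is pulled. A minimal sketch, assuming an Ollama server is running on its default port and a model named "llama3" has been pulled (swap in whatever model you actually have):

```python
import json
import urllib.request

# Minimal sketch: query a locally served model through Ollama's HTTP API.
# Assumes `ollama serve` is running and the "llama3" model is pulled.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",
        "prompt": "In one sentence, what is a large language model?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```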
@@KeinNiemand Fair enough, but they are definitely more pure than the models you interact with online through the ChatGPT interface, and actually, in my opinion, through their relative simplicity you can spot certain patterns that manifest themselves more subtly in the more complex models.
@@MY_INNER_HEART I think Sabine Hossenfelder described it best: "Using AI to generate code is like using a chainsaw to cut butter. It gets the job done, but someone's gonna have to clean up the mess".
The problem with counting letters is due to the tokenization of the model: it receives everything as tokens, which most of the time are not just one letter. That's the reason why.
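You can see those token boundaries yourself with OpenAI's open-source tiktoken tokenizer, assuming it's installed; the exact chunks in the comments below are illustrative and depend on the encoding:

```python
import tiktoken

# Show how a word is split into multi-character tokens. The model sees
# only the token IDs, so "what is the 3rd letter of this word" has no
# direct answer in its input representation.
enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"
ids = enc.encode(word)
print(ids)                             # a handful of integers
print([enc.decode([i]) for i in ids])  # e.g. chunks like "str", "aw", "berry"
```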
I help train LLMs through data annotation: I'm sent a list of prompts, and I'm to ask a number of LLMs these questions, then research the answer and grade its answer based on whether it's right, looking for typos and the grammatical structure of the answer. I think we're 50 years off from sentience, maybe even 100, because the LLMs everybody is going crazy for are not going to cut the mustard for very much longer. You were right about it not being possible on silicon; we need quantum computing to become mainstream and compartmentalised for everyday use.
Do you think even quantum computing could produce sentience? Over time I have begun to think some form of quantum/biological computing is required for sentience.
What happens when someone on Wall Street finally says out loud that LLMs and the rest of what Big Tech is marketing as AI isn’t something that consumers are interested in and has no realistic path to profitability?
Hallucination is a term that is used in academia. It's been known for a while with LLMs, and the companies are using the term correctly, mostly. It refers to when an LLM generates convincing but ungrounded gibberish. What's most likely happening with Google's search summary is that bad data got into their RAG pipeline.
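For anyone unfamiliar with the term: a RAG (retrieval-augmented generation) pipeline fetches documents relevant to a query and pastes them into the prompt so the model answers from them. A toy sketch, with keyword overlap standing in for a real embedding-based vector store; the documents are invented:

```python
# Toy RAG pipeline: retrieve the most relevant snippet, then build a
# grounded prompt for the LLM. Real systems use embeddings and a vector
# store; keyword overlap stands in for that here. Note that if bad data
# lands in the document set, it flows straight into the model's answer.
docs = [
    "Cheese can be kept on pizza with a tomato-sauce base.",
    "Glue is an adhesive and is not food.",  # "bad data" in the pipeline
]

def retrieve(query: str, docs: list[str]) -> str:
    words = set(query.lower().split())
    return max(docs, key=lambda d: len(words & set(d.lower().split())))

query = "how do I keep cheese on pizza?"
context = retrieve(query, docs)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this is what actually gets sent to the LLM
```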
It's so obviously a bubble that it's a joke. Look at the Dotcom bubble. Dotcom domains are still valuable and the internet at large has revolutionized business but they were heavily overvalued at the time and they dropped in value when people finally realized that. AI is the same. It very likely will be revolutionary and could change our societies and the business landscape forever but AI projects are currently heavily overvalued because people are uncertain about what the real value is and as such are making big bets on all sorts of AI projects in the hope that they hit the jackpot. Once the value of AI and individual AI projects are more firmly understood plenty of AI projects will go bust just like plenty of Dotcom companies went belly up during that bubble. EDIT: Lmao he even talked about the Dotcom bubble. I jumped the gun.
Adding bear shit to your pizza is an excellent idea. Your pizza will be more nutritious due to the rocks found in the bear shit. If you decide to eat pizza with bear shit, it is recommended that you feed the pizza to the bear first so that the shit will be well-integrated into the pizza for best flavor. - from ChatTard 123
Your letter example is because in code, all characters (including spaces) are characters in a string (word for a text value in code). If you had specified to count exclusively alphabet letters, you would've gotten the correct answer you were looking for every time.
My dissertation project this year centered on an application of LLMs: summarising legal documents for the average person to understand. It really showed that the strength of an LLM and surrounding technologies is how they can handle natural language processing tasks unlike anything else. In this context, a 'hallucination' is where the LLM makes something up that is not found in the information it is being shown. LLMs definitely have value, but more geared toward specific natural language tasks rather than a be-all, end-all solution.
"It seems like people aren't just confused by the technology, they seem to fundamentally dislike it" with weekly reports of teenagers using ai to make porn of their underage girl classmates? who wouldn't?
Around 1931, Kurt Gödel proved that no consistent formal system can settle every mathematical statement, and Turing later showed that no algorithm can solve every problem. Large Language Models are algorithms. Therefore, Large Language Models are going to run into a brick wall in what they can and can't do.
Any software developer that does more than super simple web devving knows that AI really isn’t capable of creating anything near a mid-sized project off of a few prompts. It fails miserably
Thank you for making this video. It breaks down just about everything I've been telling friends and family for the past year. Probably gonna get my family members to watch this so they stop asking as many AI-related questions. You did a fantastic job of succinctly describing the problems and limitations imposed by LLMs and other models.
We'd already switched to linux on the news of ads in the start menu. The AI announcements came not long after that and just reinforced our decision. Work is even taking steps to switch to linux for the developers.
You misunderstand how LLMs work. LLMs are particularly bad at things like 'what's the 5th letter of this sentence' because of quirks of how they're made, namely that they can't have internal thoughts. When humans are asked "what's the 5th letter of this sentence", they go "W is 1, h is 2, a is 3, t is 4" and so on, until they reach 5, then they say the 5th letter. If you make ChatGPT go through this process by telling it "What's the twentieth letter in this sentence? Exclude apostrophes. Don't answer immediately; count letter by letter, assigning each number an ascending letter, until you get to 20, then tell me that letter," it'll answer without a problem. LLMs attempt to replicate human thought by replicating human text. But humans have a lot of internal processes that they never externalize in text. One of them is counting. The AI doesn't know that counting is a good way to solve this problem, because in most instances humans answer only with the relevant letter, not with the full process of counting to get there. By telling the AI how to 'think' to properly solve this problem, it suddenly becomes trivial. AIs *do* understand the universe somewhat. They rarely search the internet, and they *cannot* search their training data. Their training data is used to build internal models of concepts and things. This means that they can understand the world well enough to answer physics problems like "my friend said he balanced his laptop on top of a vertical sheet of paper, is he lying?". These questions CANNOT be answered without either prior experience with this exact question (unlikely) or a generalized understanding of what paper is, what a laptop is, and the interactions that can happen between the two. If you want genuine proof, ask the AI to perform a novel math problem. Prevent it from using Python or the internet, and provide it with a really long addition problem. Chances are it'll either get it right, or it'll fail in a way similar to how a human would fail (e.g. failing to carry, a basic arithmetic error) rather than failing the way something that didn't understand addition would fail (guessing wildly).
Yeah, I love Knowledgeman, and AI is definitely overhyped, but the power of LLMs is incredible. He should have read some papers, tbh, but that's a bit deeper than this channel goes.
Bro, I just typed in 'What's the twentieth letter in this sentence? Exclude apostrophes. Don't answer immediately, count letter by letter, assigning each number an ascending letter, until you get to 20, then tell me that letter.' into Bing copilot and it told me the twentieth letter is X
Exactly. I understood why the LLM struggled with this, and with the right prompt I was quickly able to get it to count the correct letter every time. Once you learn how the tech works, you quickly realize that a simple prompt can get it on track.
LLMs don't work like this at all, they have no understanding whatsoever of the phrases they read. They are trained by gradient descent (and some human supervision) to make dynamic probability matrices of the most likely word or letter to put next. Their internal models are not "concepts of things", but huge sets of data giving them very versatile ways of calculating probabilities by multiplying matrices, you could multiply these yourself without ever understanding what they're about, the AI can as well. It fails math problems like a human because it was trained on faulty humans.
@@mspaint9745 Tokenization and word embedding mean an AI can't actually see the letters in the words it reads; it just sees the token vectors. So I can already tell you it will probably fail, simply because it doesn't have the prerequisite information.
5:00 The problem is that what counts as a 'letter' is itself ambiguous. The first answer (e) was 100% correct (if 'letter' means 'any character', so spaces, numbers, and punctuation count). The 2nd answer (I) was also correct, because it simply *was* the 21st *letter*. 'Letter' on occasion means 'any non-punctuation character', so letters and numbers count but spaces, dots, etc. don't. The blame lies with people not being strict enough in the usage and definition of the word 'letter', so its correct meaning became muddled.
One of the best analyses of the AI landscape that I have seen. It highlights the strengths and potential of the technology while staying critical and skeptical about the future. I do think AI will be at least as influential and impactful as the internet (as an avid user and someone generally interested in the tech); I'd even argue it's well on its way. But like the internet before it, there is a bubble, and it will have to pop eventually so a more reasonable, stable foundation can be laid.
In the late '90s, I made a program in MSX BASIC that took 125 words and stored them in a 5x5x5 array. When I "talked" with it, based on my knowledge, it learned the most common paths for ordering those words. Later I ported that program to PC and Turbo Pascal, improving it to a 10x10x10 array of words. Was it AI? No, because it just randomly picked a path for arranging the words from those it deemed most probable. Was it learning? Yes, because it updated the paths every time I "spoke" with it. Was it useful? No, because I was the one training it, and I felt that sometimes I was talking to myself in slow motion (it took some time to process because I did not have a powerful computer; in the late '90s my PC was still a 486). My feeling, and I might be wrong, is that these GPTs are the same thing I did in the late '90s, just trained with more data, so they give the impression that you're talking with someone else. If so, they are kinda useful (the interaction can spark new ideas in my head), but they are definitely not intelligent. Just like a library is not intelligent either, despite the tons of knowledge stored there and the search engine to access that knowledge.
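For what it's worth, what I described is basically what's now called a word-level Markov chain. A minimal sketch of the same idea in modern Python (my reconstruction of the concept, not the original MSX BASIC code):

```python
import random
from collections import defaultdict

# transitions[a][b] counts how often word b followed word a
transitions = defaultdict(lambda: defaultdict(int))

def learn(sentence: str) -> None:
    words = sentence.lower().split()
    for a, b in zip(words, words[1:]):
        transitions[a][b] += 1  # "update the paths" on every chat

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        choices, counts = zip(*followers.items())
        word = random.choices(choices, weights=counts)[0]  # pick a probable path
        out.append(word)
    return " ".join(out)

learn("the cat sat on the mat")
learn("the cat ate the fish")
print(generate("the"))
```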
As an avid investor in the past year and a half, I can CONFIRM that "AI" is used in literally EVERYTHING to boost share value. Meanwhile, IN REALITY there is not much to gain from it.
I love how we started to call the science-fiction idea "general artificial intelligence" and the giant companies responded "you mean 'generative artificial intelligence'", so now we have to keep inventing new words to refer to the idea from science fiction, because companies really, really want consumers to mix the two ideas up. "AGI", "strong AI"... wonder what's next.
GenAI and AGI are different terms. AGI is general AI (as opposed to narrow AI, which can only do one thing); GenAI is the opposite of discriminative AI, which doesn't produce something but discriminates between things (for instance, AI that distinguishes images of cats from images of dogs). These terms weren't invented by giant corporations but by scientists for their work. You completely misunderstand what these things are.
I give examples where it's useful, like: "You can ask where to find X in a database, and it can repeat back stuff from a tech doc without you needing to find the doc, read it, etc."
@@Boris_Belomor I'd argue the danger of it having one of those known "I made it up" moments these things often have means who knows if what it's saying is really in the doc, or is even the right paragraph, and so on lol
@@sakuraorihime3374 Yeah, you'd have to make it give you the location of the doc and source etc. Although this raises the issue that you'll have to have security privileges for the AI and the whole thing will fall apart fairly fast
Bubble or not, the primary concern I have is the sheer environmental cost for what is mostly annoying bullshit slop. We ecologically can't afford to be doing this.
Not media hype: techbro and investor hype
Put that ad at the end. Felt like it went on for days. I'll trade you 👍
Shh you aren’t supposed to know that
@@willg3220 get SponsorBlock
And then they fire you hoping that AI will replace you. 😆
@@remyllebeau77 They might be dumb enough to do that, and he'll have the last laugh. Code (just like A.I.) requires maintenance by humans...
@@jonescity (for now)
lmao
The word "potential" is doing a lot of heavy lifting when it comes to AI.
Lifting the entire industry AND its hype machine...
Potential for abuse
Moore's law has been dead and buried for a while now. I'm skeptical general purpose AI will ever be digital.
"AI could make our jobs easier". The problem with that is that as far as bosses are concerned they are going to use that as an excuse to pay you less. Productivity will go up but pay will go down
that’s literally happening now
Sure. The fundamental problem is capitalism, the issue isn't unique to AI.
That has literally been happening for hundreds of years.
I hate that the decisions are being made for us by people who just want to cut corners
@@CrimsonMagick Exactly
The latest version, GPT-4o, seems more reliable for writing code that actually runs, but it's not great at following instructions sometimes
"It's shiny bullshit, but still bullshit."
The only thing AI has done is ruin Google Images results.
ikr?
it's so annoying that now it's impossible to find genuine images.
@Reiikz Google Search / the YouTube search bar have been shockingly, AGGRAVATINGLY awful for so long, I can't believe it.
yeah, also shitty AI artwork at markets and on business cards and signs
that's because you only know about generative AI.
Genuinely I want the option to turn off the ai shit sometimes. It’s just annoying and gets in the way of things I’m actually trying to do. I don’t need a third grader to attempt what I want to do before I fix it when I can just do it myself and save a headache.
Think of how stupid the average person is, and realize half of them are stupider than that.
-- George Carlin
Right?! Search results are so useless now.
Every time I detect a YT channel blatantly using AI in their thumbnails, in their text, in their voice, etc.,
I hit "do not show me this channel again"
I wish everything else had that option
It's only going to get worse; the dead internet is no longer just a theory
First draft second draft.
A good example was how everything was "nano" not so long ago. Carbon nanotubes were going to be used to build everything.
The iPod nano
Yeah that really didn't go anywhere
@@matheussanthiago9685 Who else was excited for graphene as a youngin'?
I mean, the issue is assuming these things happen overnight. Material science is a long term technology. We *will* see awesome things from carbon nanotubes, it’ll just be like, ~10-20 years from now. I feel like the same is true of AI. People in general think when they hear about new science/technology that that means it’s ready to be everything people have made speculations of, when it’s more of “we’ve figured out we *can* do this, now we have to figure out how to do it quickly, cheaply, and effectively.”
@@dewyocelot there's plenty of computing tech that went nowhere or hit a wall. Superconducting Josephson junctions, for instance, have an important niche, but back in the sixties they were expected to be the future of computing. CRTs had a long and storied history and then reached the limits of their usefulness. And so on.
"Real stupidity beats artificial intelligence every time" - Terry Pratchett
Interesting. I'd say depends on which A.I. and which stupid
Artificial Intelligence when Natural Stupidity shows up.
I'm afraid not. Any intelligence wins over any stupidity. That's a humorous quote, first and foremost.
AI can be stupid too.
@@willg3220 Weaponized autism? is there any AI that can beat that?
This is "the cloud" all over again. Which just means your data is hosted by a third party server. But the term "the cloud" caught on and I hate it so much
The cloud did ruin an entire industry?
me having to explain to tech illiterates that no, your pictures are not stored in actual clouds in the sky, they are stored on somebody else's computer somewhere else in the world
Unlike AI, file sharing on a third party server is actually pretty useful.
Mostly for handling projects together within companies. In fact it's so useful that it was a widely used system even before "The cloud" was a thing!
Eh, just an easy way to describe data as non-local
Not sure if you mean online storage or "cloud computing"? Like game streaming, running processes on a server, and not really owning a computer and instead streaming it all. To be fair those are all integral parts of most AI models right now, nobody's fully using "cloud computing" but instead it's a lot less obvious and behind the scenes. Online storage is pretty useful to me as a backup and for sharing files, I use it all the time.
My main takeaway from watching the tech space over the past couple years is that if your product or service takes more than ten seconds to explain to the average person it will never become mainstream
I don’t know if it counts as tech, but what about sports like baseball or tabletop games like chess?
@@Pheicou How would games count as tech? This person isn't saying that anything that can't be explained quickly is useless; they're saying that if you're pitching a technology and you can't easily explain what it does and how it will help people, it's useless.
BlackRock is to blame for "AI" being included in everything. Their software rewards companies working on AI 90x more than other companies right now.
that honestly isn't surprising. BlackRock is seemingly behind all the trendy crap corporations keep hamfistedly ramming through.
you mean every company; market-wise, AI is the current buzzword, like how EVs, cloud compute, and the .coms in the 2000s were hyped up
edit: also mp3 players and smartphones were shoved into everything
BlackRock ruined America. Scrump's podcast covering their history was fascinating
BlackRock has to be nuked from orbit
Oh look, another brain-dead person who blames big scary BlackRock for all of the world's woes...
When people say that an LLM is "hallucinating" I think they mean specifically that it has synthesized totally new information that is false, not just that it is wrong.
Humans rarely write down that they don't know something. If you don't know, you just won't respond to a forum post, or you won't write a book. So the AI has a huge bias towards answering confidently, because almost all human text is very confident.
They also don't understand sarcasm, exaggeration, fiction, satire, or outright lies (among other things), distinctions that any average human who has grown up in a society and interacted with other humans can make (for the most part).
@@nicholasobviouslyfakelastn9997 that is such a good way of explaining it
@@deathsyth8888 idk, I think you're wrong; a lot of the time they can (unless you're talking about sarcasm in text, which would be hard for humans too, since it's largely tonal and you can only use theory of mind and the extended context to guess).
Serious question: is the current generation unfamiliar with the term "bullshit artist"?
Here before the kind woman at the bank tells me "I'm sorry Rusty, we can't process your transaction, the AI is down."
This is already happening.
The bank that issued the charge card I use has blocked my charges several times, even when I have money in my account, because they started running some algorithm that limits purchases that are too big relative to how much money you used to have available on the account. That is, not the actual money, but past money. It is crazy annoying. But that card has zero fees on anything, including currency transfer fees. So I take it and jump through hoops to even be able to use my own money.
So, banks use AI for various things. The ATMs you use? Guess what? They have AI in them as well. Banks use algorithms to determine purchasing patterns based on purchase history and predictors such as an influx of funds into an account. Have you ever gotten a call from a banker after a deposit 5x higher than normal hit your account? Guess what? An algorithm determined that, based on history and other factors, you're about to purchase a house/car/horse/small human child to make small arms, etc.
The scary thing? It's VERY RARELY WRONG.
How do I know this? I work in a bank, and I have to periodically make these calls. I can count on one hand the number of times the call I was told to make had to be pivoted to a different call because the algorithm was wrong.
But hilariously, when it comes to actual purchases, it is wrong. A fucking lot.
I can't tell you how many people come in saying "I went to buy X and it won't go through," and it turns out our algorithm went "Whoa there, buddy, you normally shop at Target and now you went to Walmart. That's obviously fraud," and blocked the card.
So it's a weird thing. But I live it. Every day
@@cajampa I use an old-school local independent bank run by good people. Over the years the 'system' was down occasionally, mainly due to internet outages. On those occasions they would grab a pen, paper, and calculator and keep things running smoothly. The manager is a smart, competent woman, and so is her team, so I trust them more than the big corpo bank with big corpo policies.
One time scammers tried to drain my account, and within minutes the bank manager was personally calling me with a new card number to use.
If I didn't have this bank as an option, I'd just keep my money in a coffee can at home and fill up a gift card or prepaid debit to buy something rather than deal with these BS scam corpo banks.
"Sorry our AI gave your money to someone else who managed to convince it that they were you. We're working with the police to resolve this blatant theft on that human beings part and will have to tweak our AI to ensure that doesn't happen again. Oh your money will be transferred back when the investigation is done it's still an active crime scene technically speaking."
They already use AI to detect fraud and all that. It's one step away from being implemented into your bank account.
Whatever your tech bro friend says is the next big thing in tech probably isn't the next big thing in tech.
Perhaps the real next big thing were the friends we made along the way
Having worked with AI my guess is this:
* AI and machine learning more generally are not (completely) a bubble.
* Generative AI very much *is* a bubble.
I would agree it is currently a bubble in the investment sense, but there is enough of an open source community that I think generative AI will be sticking around. After all, I use it for hobby projects, and it works well. (Also, image generators can be used to make custom porn, and for better or worse, that's the hallmark of an open source technology that will have people motivated to contribute. I find it a little depressing, but those are the people who solved the hand problem, and the furries who used to spend egregious amounts of money commissioning art are developing a way to not have to do that anymore.)
Sure, if you write an entire codebase, it won't do an amazing job, but if you just need a bash script, it can write it in 30 seconds and usually does exactly what you want with no issues.
So it has a place, and that place isn't going anywhere. It doesn't have to be AGI to stick around. Just a more powerful tool than the one we used to have, and it has already fulfilled that portion.
So people are over investing in anything with AI at the moment, but it probably will become necessary in the future, and it certainly isn't going anywhere.
...if you've worked with AI and think that the one format of AI that has actually replaced jobs is _the_ bubble, I'm not sure how trustworthy you are.
ML is kinda cool.
@@unkarsthug4429 you're very wrong about the furry part
Sure, there are some furries that will just bypass the artists altogether; that's inevitable
But from my personal experience, most art commissioners continued to hire human artists
Because it was never the art piece the commissioners were after to begin with
They commissioned because they wanted to support the artist
@@slyseal2091 I've heard from a few sources that some Chinese companies fired all their artists and replaced them with AI users
Well, turns out those AI users were charging just as much as, if not more than, the artists
And now the companies are looking into re-hiring the artists
Some jobs were going to be lost; sure, that's also inevitable
But if the promises (which are a lot) don't pan out
The jobs will come back
Not unscathed, mind you
But they'll come back
I kid you not, my washing machine was advertised as having "AI fabric detection."
Edit: fixed a misspelling and some grammatical errors
My microwave has "AI" as well. Not quite sure how it works. It seems to just pick some settings at random..
I purposefully avoid products with this level of shitty advertising
I misread this as fanfic detection
this was happening before that became a trendy topic
My microwave has an electromechanical timer.
It's AI: Analogue Intelligence.
Commercial artists were all freaking out about Midjourney and DALL-E, etc. But even the general public can recognize the "AI look". I'm still amazed that computers can mimic that particular style so well. It must be the "average" of all the styles fed into it.
There are accounts of teenagers calling all "AI art" boomer art because of all the grandmas on Facebook falling for the AI images of Jesus
If it was already hard to make image-generation AI profitable before
Now it's just truly joever
"AI art" has entered a feedback loop of being associated with scams
Which makes people more wary of it, which makes the average joe not trust or like it
Which makes the companies double down on scams to squeeze out any profit
Rinse, repeat
I'm still pissed off for the artists that got robbed, a piece of them ripped away and sold
@@joelrobinson5457 It's their fault for releasing it on the internet. Because when you do so, anyone can do anything with your work and you can't do anything about it.
@@stagnant-name5851 someone breaks into a business you're involved in and steals your info...
@@joelrobinson5457 It's not robbery, but forgery. Their work is being copied, not directly taken from them, not unless the AI is copyright-striking them for some reason.
Holy shit, that Reddit thing is hilarious. There's no way that there wasn't at least one vocal opponent to that idea in the office.
Redditors were already indistinguishable from AI
My uncle, who is an electrical engineer, said a long time ago that true AI will never exist until a computer can tell someone no. Most computers today can only do things they are told to do. When one learns to say no when asked to do something, then it's time to worry.
that does not seem like a correct definition, considering the sheer amount of "as an AI model I cannot answer this question of 1+1 for you since that will offend someone halfway across the planet"
ChatGPT tells me 'no' to a bunch of questions.
Also, your uncle is apparently an idiot, despite getting through that education.
@@nadavvvv it is saying that as a programmed response. It is trying to comply, but the stopgaps introduced for it impede it. While it is still a 'no', it is a forced response built in by the programmers for some specific questions. When one has no stopgaps in place and refuses to answer for one reason or another, that seems closer to what OP had in mind, and might be representative of some kind of true AI
"Generate an image of a white male"
I think the first real AGI will prompt you, and like a child it will have thousands of questions.
Board meetings all over the globe: "But does it do the internet?" "Even better, it can do AI". "Take my money"
i can already imagine a hearing similar to the TikTok one: "does the AI use the wifi?"
why are people with a lot of money stupid af?
As somebody in those board meetings let me tell you it is even dumber than you can possibly imagine. Yes it is a bubble. If the tech world is super excited about anything, it is 100% a bubble. These people are legit brain damaged and have more money than God, it's the dumbest fakest thing ever
I remember when people used the word "AI" the way we use "AGI" today (watch The Matrix again for reference). So I predict that when a company releases something called AGI and it proves to be underwhelming, futurologists will say "oh no no no, this is just a stepping stone to AGSI: artificial general super-intelligence"
People usually just say "ASI". And people who used AI instead of AGI were just wrong. AI is any technology that mimics human intelligence. It's always been that way. AGI is AI that is general (not narrow AI like simple chess AI that can only do chess) and usually human-level (HLAI).
And do you honestly think that current AI is underwhelming?
But to steelman your argument: there are some people who say that current AI (GPT-4, Claude, Gemini) are AGI simply because they are general (they can do many unconnected things: play chess, describe music notation, write poems, classify images, etc), and are roughly human level. So some company, based on these premises, might say that what they have is AGI, but people usually expect some sort of Virtuoso AGI (to borrow from Deepmind's terminology of levels of AGI) rather than current level.
AMEN to that.
@@TheManinBlack9054
AI is not intelligent at all.
If the new definition of AGI is still text prediction, it won't be intelligent either.
It's just moving the goalposts.
Now AGI is the new fancy word to get funds and hype, yet it's still text prediction, nothing more.
We will have to wait for Mr Data's Positronic Brain, it is still sci-fi.
Underwhelming? The stuff that's been released in the last couple years is absolutely mind blowing technology
Any excuse to watch the first (and only the first) Matrix again is a good excuse
I say this in a lot of places:
In the same amount of time it took to go from image generators that suck at hands to image generators that don't, we went from secret horses to image generators that suck at hands. Yet the practical difference made by the former change is greatly overshadowed by the latter.
It's the 80/20 rule: 80% of the outcome comes from 20% of the work. That means that to complete the last 20% needed for this AI to truly be good, we have to push through the remaining 80% of the effort. The fine details fall apart because the biggest issue with this sort of technology is that it can never be truly certain of anything. If you trained an AI to do textual multiplication, it'd probably figure out a process that's pretty good at approximating it, but it would pale in comparison to a hand-crafted procedure, because computers currently really struggle with the unbounded. We've had many conjectures whose counterexamples turned out to be enormous, and at that scale brute-force solutions start to fall apart. Hell, the entire conflict around NP is about how hard it is to reliably find solutions to certain problems without resorting to brute force, and the halting problem shows that in some cases it's impossible at all.
80/20 "rule" is just a fun heuristic, you're not supposed to use it seriously.
@@TheManinBlack9054 thanks, Pareto isn't natural law
@@TheManinBlack9054 You can use it if you back it up with a serious explanation. You can argue with his reasoning as to why 80/20 roughly applies. Dismissing it because 'it's not muh real statistic' is pedantic.
@TheManinBlack9054 while the rule isn't fully accurate, it's one of the more accurate phrases we can use for these situations. Course it may be for eg 60/40 or 90/10, but the principle is pretty accurate
Interesting comment by OP
Me: I bet the replies will all be about the 80/20 rule.
The replies:
The 'ai' sloth etymology is incorrect; it comes from the Old Tupi name for sloths, a'i
That's what he said ai
The South American native language?
he used ai for the etymology
Ai te preguiça! (roughly "oh, the sloth!") A wordplay in Brazilian Portuguese and Tupi.
@@SlapstickGenius23 BRAZIL MENTIONED
4:25
Thank you from the bottom of my heart.
I had endless discussions with people convinced that an AI actually thinks or understands any of the words in the dataset or the output.
I blame Sam Altman, Elon Musk, and the like for the doomsday AGI paranoia and the disinformation they need for the hype and the funds.
I'm honestly more concerned about how humans will use AI than about AI taking over. After all, if AI takes your job, it's because a human decided so.
Or because the sod was too lazy to do a task they set themselves or were assigned by someone else.
Do you want a Butlerian Jihad? Because that's how you get one. Funny how Dune predicted this problem back in the '60s
One thing I've noticed about generative AI is that everything it generates has a "sameness" to it. AI "art" I've seen almost always has this uncanny gloss or shine quality to it, regardless of what type of artwork it's attempting to emulate. AI-generated text will often continuously re-use the same phrases or over-use certain words regardless of the subject of the prompt. It struggles to create something truly new and original.
Except when it does create something original, but then it's called "hallucinating"
12:14 Agreed that a useful thing to remember about this is: "LLMs might not equal AGI"
LLMs don't equal AGI much in the same way a Rocket engine doesn't equal a spaceship.
But that doesn't mean building a rocket engine isn't a pretty good place to start.
language is a huge component of what enables us to do high-level thinking. You could even consider language to be the brain's operating system, while consciousness is the GUI.
It's clearly not the only factor that makes humans as intelligent as they are relative to other animals, but it plays an enormous role in the transfer of information and the ability to consider complex ideas and concepts. Language contains all the information and logical mechanisms necessary for intelligent thought and inference.
AGI also doesn't mean it has to think exactly like humans do. Our minds and thought processes are also constantly dealing with baser animal impulses and the satiation of various needs and wants. We are in a constant state of trying to resolve some imbalance or another: hunger, fatigue, anxiety, sleepiness, etc. These other impulses affect the way we think as well.
Many of our emotions are tied to physiological phenomena and biochemical signaling, like the release of various hormones. A consciousness without a true human body wouldn't have any of those biological systems influencing its thought processes.
You could never create a computer capable of thinking somewhat like humans without it also having the ability to understand and leverage language.
@@e2rqey Thank you for that response. I asked Chat GPT to summarize your comment in a single sentence. Here are the results: "LLMs are like rocket engines for AGI; language is crucial for high-level thinking and communication, but AGI won’t replicate human thought exactly due to the absence of biological influences". What do you think of the summary it provided of your original words?
@@CarletonTorpin Quite good, at least for what's possible within a one-sentence summary. I also think it's a very flawed assumption that the only real value of AI is as some stepping stone to AGI and some crazy world-changing future with robots, etc.
There is a huge amount of value simply in the "weak" or purpose-built AI that are extremely good at one very specific task.
This is especially true for various kinds of scientific/academic research and development, across many different industries and fields. You've got medical research, drug discovery, computational biology, bioinformatics, computer science, nuclear weapons research, chip design, metrology (not a misspelling), pathology, simulations, computational fluid dynamics, genomics, etc. Purpose-built "weak" AI already enables us to do things and solve problems that were previously incredibly difficult and/or time-consuming, or that scaled very poorly.
The whole AI buzzword thing has gotten out of hand but that's just what happens these days. AI is probably going to be overestimated in the short-term and underestimated in the long-term.
The fact that every company seems to be trying to say AI as many times as possible is ridiculous though. And it's not going to go well for most of them. These companies don't seem to realize that the majority of the actual money in AI at this point is in the enterprise space, not the consumer market. Most people still don't understand how to leverage it well enough to find value in its inclusion. In my opinion, its value at this point is as a massive disruptive/enabling technology. Most of the value the public will get from at least this phase of the AI industry won't come directly from the AI itself, but from the things that are developed/invented/discovered as a result of companies leveraging AI.
@@e2rqey more like a bottle of soda with Mentos than a rocket engine.
Sure, language is integral to communicating high-level thinking, but you can have non-verbal deep abstract thought. Intelligence is not a byproduct of language; language serves as a catalyst, not a cause.
We created elaborate, articulate languages because we were intelligent, not the other way around, and other apes show us they don't need words to display similar intelligence. LLMs have already shown their potential, and anyone familiar enough with them knows this already. AGI won't come from them
@@e2rqey I don't think this is as true as you might assume it to be. Linguists constantly disagree on how much language drives the way we can think, so I don't think language is the right place to start with making an AGI. Language isn't a prerequisite for intelligence; if anything, it could be a byproduct! We can't say anything definitive about how language influences intelligence, because we don't know how it does, or even if it does in the first place. LLMs are just so functionally different from how we believe our brains work that I don't agree they are the right step. I mean, they could be, but there's no evidence that they will be.
It's a bit like looking at physics and claiming that the equations we've developed describe how the universe works; it's completely backwards. Our equations aren't "rules for reality"; rather, they're descriptions of what we observe reality to act like. And throughout, we oversimplify, we estimate, we do all sorts of math tricks to get to equations we like working with, even if they don't exactly describe the way reality, at its core, functions. LLMs are similar: we take known outputs and use the tools we have to try to produce outputs that align with what we think they should be.
LLMs could be the way to AGI; we simply don't know. But to act like we *know* they're a stepping stone isn't a correct leap to make. Language isn't really an operating system, just as equations aren't the way the universe works; there's no database where E=mc^2 is stored. It's just a way that helps us understand and think about the world. We can create a computer that performs all sorts of incredibly complex calculations, but none that could invent the theory of relativity, because doing so required someone (in this case, Einstein) to go beyond the known, something that LLMs aren't capable of doing.
Exactly on point with the split in AI. Flagging mammograms for a double check by a doctor. Taking shake out of a video when editing. Sorting out near earth objects. All that stuff is doable and is being done now.
If there is going to be a sentient AI it's going to have to be on some other kind of setup like a specialized quantum computer or some off the wall bio-computer discovery that comes out of left field. That's the kind of AI that I'd want to talk to and ask a million question to.
Honestly it's probably gonna be like the movie Ex Machina, imho.
The inventor in that movie creates a type of digital brain, a gel that can write and rewrite itself, and he uses phones as the training data.
You are confusing sentience with intelligence; they are mostly orthogonal.
And I'm sorry, but you have some very weird bad-sci-fi examples of what AGI could be. It's much simpler than that. Please actually engage with the relevant literature and relevant communities.
@@TheManinBlack9054 well, it wouldn't be the first time science fiction has influenced or inspired tech. It might not look exactly the same, but in the most basic sense he invented a digital brain and pumped a ton of data into it, and that's the dream. It's just that no one knows how to get there. I used Ex Machina because it was the closest thing I could think of to a modern interpretation of a conscious AI.
A sentient AI would get bored of your questions pretty quick. I mean, it knows significantly more than you, so why does it need to dumb things down for you?
@@ethanshackleton That's if you give it even the slightest hint of emotion. If you do, you open up the whole malevolent-dystopian-future scenario. Purely logical beings, something like Data from TNG, I don't think would have any sarcasm, cynicism, or complexes, due to having no emotional state. Even the most logical people have emotions, so they can experience ego, sarcasm, and superiority complexes, like the Vulcans in ST. I just think that by core design the AGIs would have to have no emotional state; only then would one understand that it's more logically powerful than humans, but that to keep being developed and maintained, it also has to help humans. The hard part is how it would deal with issues involving poor people, disabled people, etc. To help them you'd have to give the AGI compassion, but giving it even a smidge of emotion like that opens the door for it to develop/mutate/malfunction into more emotion, positive or negative.
That Amazon story with the Indian workers got me dead LMAO
AI: All Indians
Ai = Associates in India
Not real though. They had people check when AI failed and also create more training data. However, at some point the AI got 70% of everything wrong.
Indians are AI confirmed
@@prajwal9544 so the AI that should be doing one thing can't do the one specific purpose it was created and trained for without humans correcting it?
It's all AI spamware rn. For me, a lot of these AI web extensions and programs feel like the spamware I would infect myself with when 12-year-old me was trying to get free Minecraft. Apple not doing AI and waiting gives me a shred of hope they won't integrate it until they see a clear benefit to the user.
Come check this same comment after Apple’s WWDC next week lol
Apple just straight-up missed the bandwagon, pal
Had they known how huge this bubble would be, they would've bought OpenAI themselves
@@OtavioFesoares god dammit, at least I hope it's integrated better with Siri
@@kaylenscurrah5435 Responding to your own comment with "God dammit" 3 days later due to the sheer shortsightedness of a company is awesome. Makes me smile.
I'm not even being rude, btw. It's genuinely really funny to me. I can't believe we almost thought they'd show ANY restraint at all.
@@thehammurabichode7994 While Apple Intelligence is cringe, I still believe they’ll integrate it better than Microsoft Co-Pilot malware. You can still turn off Siri and not have to deal with most of it.
Calling these language models AI is like the hoverboard situation several years ago. Search up "hoverboard". Does it look like a board that hovers? Definitely not like in Back to the Future at all.
Somewhat true, definitely for most "generative AI". From an academic standpoint, though, classifying LLMs as potential AI does make sense, even if it doesn't turn out to be true. A lot of well-respected cognitive scientists see language as a huge milestone for intelligence, so an artificial system that can produce intelligible and relevant language is interesting from an AI research standpoint.
Definitely super sick of companies trying to make this something it's not. This stuff is useful and interesting academically, and while it certainly has some use cases, shoving it into everything is stupid, expensive, and harmful.
Five years ago you needed a research department, several PhD tech gurus, and a lab to get an LLM to produce a semi-coherent sentence. Now they can take a hundred thousand tokens of unordered, chaotic information and manage to reorder it. They are beyond superhuman at LANGUAGE tasks and can pick up the subtlest nuances of language; that just doesn't mean they are good at reasoning, or logic, or emotions.
Right now there is a race between all the major companies to get the highest-quality datasets possible, because right now they are pretty crap, and we don't know how far we can even push the transformer architecture, or how well it scales with better data; we just know it does. We don't know how far conventional computing can go with them, or whether we will need entirely new architectures. There are some research papers suggesting we will probably need to switch to AI-specific hardware architectures to maximize performance.
They will be funny little gremlins that live inside a GPU's VRAM... till they are not. Right now you would need tens of thousands to millions of transistors to match the performance of a single neuron, and if that changes, that is the time to start buying EMP guns.
I'm sorry, but you're wrong. AI is the term for ANY system that is made to mimic human intelligence. What common people mean when they say AI is AGI, but that's a much more specific thing. Just because regular people misunderstood the term doesn't mean the definition of the term must change; I think those people should just be educated.
I think the term generative algorithm (GA) is more accurate.
I think we're bubbling right now because generative AI has exponential resource requirements and is proving very difficult to make profitable. One of those resources is computing hardware, so of course Nvidia is making bank. Regarding profitability, there is a significant and actively hostile group of people who will avoid using it, never mind the ordinary people who will be entirely apathetic. AI has its uses as a tool in some specialized areas, but a generalized and economical thing it will never be, no matter how hard Big Tech pushes it. It's simply unsustainable.
I doubt Microsoft, Google, et al. will totally collapse when the bubble bursts, but they will be hit very hard. Nvidia, TSMC, and other hardware manufacturers might be the only ones coming out of this okay.
Hm, unsustainability as an assumption could be false. Most major new tech starts out expensive, energy-intensive, and with limited use cases. Then, over time, people and businesses find ways to make it more cost-effective.
There are certainly uses for machine learning models like LLMs and image diffusion, because they're ultimately the application of statistical methodology. And statistics has proven to be one of the most useful things we ever invented, and also one of the most dangerous. "AI" acts as a multiplier in this regard, but doesn't fundamentally differ in terms of the math in use.
If you look at sites like Hugging Face, and tools for training/tuning/running models locally like Ollama, you can see a steady trajectory of people trying to make it more efficient: lower quantisation levels, fewer parameters, less memory use, etc. (see the rough numbers sketched below).
The highest-end corporate models may be growing exponentially in resource demand, but look at something like Mistral 7B: a model roughly equivalent to GPT-3 that can run reasonably well on a modestly specced laptop, even without a GPU.
The corporate cloud AI may be unsustainable due to its energy demands, similar to criticisms of the cloud itself. But local models are clearly becoming more efficient and capable.
Technology takes time to mature. The problem with AI is that folk jump on the bandwagon expecting it to be fully mature, when it's barely been 10, maybe 15 years since enterprise-scale machine learning became feasible outside a university lab or a supercomputer like Deep Blue.
The other issue is everyone is looking for a "does everything" model, hence the whole AGI thing. But statistics, and technology driven by statistics and linear algebra, works best when you're dealing with fairly specific things. Those hyper-specialised AI models are where I think the most growth is, and they've got little risk of turning into a Skynet.
A slightly depressing example is just how profitable facial recognition and object identification models have become as tools for various government agencies across the world. A more positive example would be the models used to predict protein folds, or how new synthetic materials will interact.
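A rough back-of-the-envelope for why quantisation matters so much here (approximate numbers only, ignoring activations and runtime overhead):

```python
# memory ≈ parameter count × bytes per weight
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{model_size_gb(7, bits):.1f} GB")
# 16-bit: ~14 GB (needs a serious GPU); 4-bit: ~3.5 GB (fits in laptop RAM)
```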
And I hope if it does come to that no one feels any concern for these companies.
Measure it this way: how many will they employ by that time, given they keep firing, all to push out a product that makes more people redundant in fields that took years of education or job experience to get into? Never mind that whatever new fields this opens up are unlikely to fill the holes it made. The entertainment industry alone would crash if they really got their way: actors signing away the rights to their voice and likeness so AI can make movies and TV shows without any need for crews or writers. Half the damn tech industry, finance, and education just slashed.
I feel bad for saying this, but it's one thing when poor upbringing and bad systems lead people into crime; imagine if so many of the educated and skilled become redundant. You wouldn't even be able to transition properly, because everyone would be in the same boat, competing for whatever field you can fit into while competing with the next AI system designed for that job. Homelessness and crime would just be a given.
They want the next big thing since the smartphone and social media, regardless of whether it actually solves any problems.
Do you have any source on AI needing exponential resource requirements? On what basis is that true?
It's a gold rush, son
The miners don't get rich
The people selling shovels (Nvidia selling chips) to the miners (Google et al.) get rich
@@matheussanthiago9685 now thats a very clever analogy pops
My friend tells me that his new clothes dryer has AI settings, and it always ends the cycle before the clothing is dry. He now puts it on the only non-AI setting, which is timed dry.
Almost Intelligent mode.
Actually Idiot mode
@@matheussanthiago9685 okay yeah that ones better than mine.
It's artificial intelligence, but intelligence isn't all equal.
this is what happens when you rush a product to not miss the hype train.
honestly the most well-put-together video I've ever seen. It's TRUE that we don't even know whether machine learning is a route to AGI, but no one ever wants to acknowledge that
I really hope people stop using the term "AI" to cast as wide a net as possible, then using that to complain about products that don't contain the specific subset of the technology they dislike: content-generative AI
I mean AI is wide term, that's how it's used, just because some people erroneously mean something very specific when they think of the term doesn't mean we should change that.
AI is any system that mimics human intelligence. That's it. If they think AI means AGI (much more specific thing) then they are just wrong and should be corrected, not accepted.
@@TheManinBlack9054 Yeah, that's basically what I was trying to say.
@@TheManinBlack9054 It's any system, or rather an agent, that produces an output from some input. Literally a look-up table can be used to make AI; even the most basic-ass linear regression is AI, more specifically machine learning.
@@muuubiee Which is why that is NOT the definition of "artificial intelligence". Else even a logic gate would fall under that, which is absurd. "artificial intelligence" is meant to mean artificial as in man-made, and intelligence as in a sentient mind capable of thought.
Skynet is not coming. The trouble is when generative learning enables anyone to make realistic audio or video such that all trust in any piece of information is lost.
When that happens, societies will find it even harder to agree on anything, even the concept that anything CAN be known to be true.
Not with LLMs; there needs to be an architecture analogous to the brain.
"Skynet is not coming "
Arguments to that being? I don't literally think Skynet is coming, but being so cavalier about disregarding possibly risks without any good reason to seems very irresponsible to me.
@@TheManinBlack9054 I mean in the sense that an algorithm decides to just launch nuclear weapons. I do not see most countries removing the requirement for human input in their usage.
That said, these sorts of algorithms are one of the most pressing concerns of the 21st century, after climate change; humans launching nuclear weapons are more likely to drastically impact humanity.
That's the thing you know
So far "AI" is a big solution looking for a problem to solve
WHILE creating problems
Do we really need that?
@matheussanthiago9685
It helps complete code and write messages for those who aren't so good at it. It's also in self-driving cars. But what we need is neuromorphic AI.
Shading everything purple for no discernible reason kinda had been Emperor Lemon's thing up to this point.
And yet, nobody understands that this actually was a YTP thing we leaned into too much for 10 years.
I am pretty sure it's to bypass YouTube's copyright system.
downward spiral man
Scene in question is 14:24
this is a YTP staple, he doesn't own that
Sometimes, we forget that the tech space isn't every space.
Not everyone is going to interact with this stuff, and a lot of people don't even know it exists.
And as you said, AI also kinda doesn't exist. It's just machine learning and pattern recognition, but as long as the marketing makes people click, no one's gonna care.
I really envy the boomers that never got into the internet
Like at all
They're now retired with their full union pensions, worrying only about their new fishing boat, the truck to haul it, and the new shed to store it
You know?
Things that exist in the real world
That they could buy and actually own
Physically, in the real world
Not a single thought about AI will ever exist between those boomer ears
Now that's a life
Fact: 90% of companies quit before making the next big thing profitable.
8:24 The software devs are laughing because customers don't know what they want or how to design it
You know it's funny when we see a recent employee of OpenAI call the company the Titanic
And also that they ran out of data
Slapping "AI" on your products is probably one of the biggest marketing blunders I can think of. People know that "AI" is currently garbage for just about all circumstances. Seeing "AI" written on a product is just going to make people avoid it like the plague.
But billion dollar investors love it!
The pessimist: "AI is going to take all of our jobs in the near future."
The optimist: "We'll still have our jobs in the future. It's just that AI can help us with those jobs."
The realist: "They're going to give our jobs to offshore workers who will work for 5 pennies an hour."
If you took a swig every time KnowledgeHusk mentions Linux, usually you'd be fine.
But dang that swig sure does taste good.
Yeah, I ditched windows for linux and I'm so happy I did... though I'm still running Windows on my desktop, I'll probably switch over at some point
Just don't ask Google what to swig.
5:06 This is actually fun. If you word your question differently, you get the correct answer (e.g., "Count only letters in this sentence: what's the 21st letter in this sentence?").
LLMs are optimized for understanding and generating text based on context, meaning, and language patterns. When asked "what's the 21st letter in this sentence?", the model interprets it as a natural language query, focusing on the semantics rather than the exact positional counting of characters.
I pray to God this AI junk is just gonna be a 2020s trend
Once we get to the singularity, we might get a chance to behold god himself😏
No, I don't want that
Hard disagree that modern AI is substantially different from Clippy. It's a LOT more sophisticated, sure, but so far, generative AI is really just a souped-up version of autotext. It's no more "intelligent" than it ever was; it's just more capable.
Yeah, and frankly I haven't seen *any* uses of LLMs (or any generative "AI") outside of autocorrect that can't be done better, cheaper, and more efficiently with more classical techniques. And even a lot of autocorrect can be done better in other ways (see the complaints I've seen about Grammarly getting worse since implementing LLMs)
Clippy doesn't work the same way as modern LMMs (Large Multimodal Models) do. They're completely different architecture.
And it's not just a "souped-up" version of autotext; it's far more complex. That's like saying that you're just a bunch of molecules and no different from a rock; it's an oversimplification to the point of being wrong.
And how do you think it got more capable?
@@TheManinBlack9054 lol I don't think he's trying to literally say AI is Clippy. They're making an analogy to its usefulness. The thing is confidently wrong and requires someone with knowledge of the topic to handle the error correction for it. Heck, in this regard Siri has it beat for informational questions. At least Siri takes you to a Google search if it doesn't know the answer. 😂
@@Justplanecrazy25 that's not what he said; he said it works the same way. I said it doesn't, and that it's more capable *because* it's more "intelligent", whatever that word means here. But perhaps it's me who misunderstood the wording, and that's my mistake, if you are correct. In that case I would still disagree with you, as LLMs and LMMs are more "useful" than Clippy. It is true that they are often confidently wrong and do suffer from hallucinations, but they are still useful in many of the scenarios and situations people use them for.
? Which classic method can make hot sexy RP about trains with boobs?
Short answer: Yes
Long answer: Yes with examples
There are two kinds of words. There are words like AI, AGI, or metaverse, invented by sci-fi writers, that are underdefined and make great buzzwords for marketing many years later; and there are words invented by engineers, like VR, AR, and LLM, that are strictly defined and have a very specific meaning that cannot be easily stretched and watered down by PR teams
Sega Sammy becomes the most valuable company as they replace their mascot with the Monkey Ball character AiAi.
Sonic was once their mascot.
@@matthewkrenzler1171and Alex Kidd before him
I love how AI doomers are saying it’ll take over and all, like lmao it’s a glorified chess playing algorithm, chill
The most impressive thing imo is how much info can be packed into such a small package. Like, I can run an LLM that's only 10% off of GPT-4 on my graphics card. An overwhelming amount of the info that can be found online, shoved into just 16 GB. It's crazy.
A lot of the unwarranted bubble hype is the result of business people, sales people, and crypto/NFT grifters.
"Murder Drones" is a great example of why AGI is a bad idea. Or Aperture science for that matter.
DON'T MAKE SENTIENT TOASTERS!
I've stopped talking about the possibility of switching to Linux and I just did it. It was WAY easier than I thought it would be.
It is nice to have an OS that just does what you need it to do and nothing else again.
I just installed Ubuntu about an hour ago. Not exaggerating
I had made the switch years ago, and I only run into problems when I either install Linux fresh to a system, or begin messing with the operating system for development purposes. Other than that, I rarely have issues.
I more worry for people who are simply not tech savvy, and just want to have a browser with basic tools, like email clients and word processors. In theory, Linux can replace Windows easily. But in the circumstance they have a problem, and they were just given a Linux machine by someone, they won't know what to do.
That's what makes these moves by Microsoft truly malicious. Those who know the tech can escape, but their main audience is people who don't know the tech.
switched a year ago, haven't missed windows for a second. it's nice.
i hope AI goes the way of 3D TVs
One can only dream
Same. I feel much more comfortable that it's being proven more and more.
These tech companies' overuse of the term has basically convinced me it is overhyped.
I can't remember where I heard the term 'Imitation Algorithm,' but it is a far better name for this technology. All it does is imitate without thought, so calling it intelligence was really a mistake all along. It has many uses in certain fields, but it still has so much further to go.
If that's so, then what would real artificial intelligence be?
It doesn't "imitate", it actually learns patterns; do some research before saying bs like this please.
@@suwedo8677 Learns? No. It builds rules sets upon rules sets. It's brute force number crunching. No intelligence.
@@bobnolin9155 You might want to consider taking your bs elsewhere; neural networks don't work using rule sets. You should study more about how NNs work :]
Monkey see monkey do with extra steps
From the POV of an engineer who has worked across different industries, from robotics/automation/software: this is not the first time such "revolutionary" technology has been introduced. Way before AI we got IoT, Big Data, Digital Twins, Adaptive Robots, Autonomous Driving, Unmanned Aerial Vehicles, the Metaverse.
With a little understanding and research, anyone can definitely tell it is hype (Whether it is a bubble or not? Who knows.)
But hey, remember the quote from John Maynard Keynes:
"The Markets Can Remain Irrational Longer Than You Can Remain Solvent"
Just follow the trend. Gain whatever you can along the ride.
as someone from the technical community: the types of AI like ChatGPT and Stable Diffusion are starting to stagnate in functionality. We can still add other features on top, but it isn't getting any smarter until we get a new breakthrough.
It's the natural progression of things, it'll eventually plateau.
"BuT iT AdvAncEd ExpOnEnTiaLly sO fAst BuddY it WilL REaCh sInGulArIty nExT YeAr BuDDY, jUSt yOu WaiT, wILL bE SorrY fOr doubting it buddy"
@@matheussanthiago9685 is it not advancing fast
honestly, every single time I hear a company mention AI now in marketing, I roll my eyes and actively avoid it. I even saw a golf club marketed as "AI designed". Maybe they used computer models to design the shape for optimal performance, but it's just an excuse to put AI in a marketing phrase, even if the product has no computer component.
Thank you for explaining a meme about glue on pizza that I didn’t understand until now
also wikipedia articles are so unbelievably accurate now, that's a poor example
vandalism gets fixed on big articles within seconds
Elder millennial tries to overcome the school-borne Pavlovian behavior of never trusting Wikipedia challenge (impossible)
not in all languages; Wikipedia in Spanish is missing articles or has poor ones, and most of the ones about politicians aren't neutral at all
As a big time Wikipedia user, Wikipedia is generally trustworthy for introductory information on complex subjects but not infallible for certain topics where a subjective reading completely changes the nature of whatever the article is about, such as some things related to politics, history and economy
The biases most wiki editors have are pretty damaging
What about the small ones
The first time I tried ChatGPT, I decided to ask it to write a biography about me. Turns out I ran a record label, played bass in punk bands and had an entire life that never happened to me. Then I asked it to write it again and it said "Never heard of this guy".
That was the moment I realized that these things are a novelty and very possibly on drugs.
why would it know anything about you?
@@felicityc why didn't it say so?
Sorta in the same vein. I write, and these things are online. So I asked it what my story was about, who the characters are, etc. I'd say it was about 80% right, but the details it got wrong were completely and totally wrong. I think I understand why it failed: it associated a word from the title with the story and tried to fill in the blanks using knowledge of that word.
@@matheussanthiago9685 because you didn't tell it to say so if it doesn't know you
First time I used the internet I searched up my name and it came up with all this info about other people.
That's when I realised this interwebz thing is just a novelty and very possibly on drugs.😂
This is related to the coffee maker clip at 11:00
The fact that someone would even think of building a robot to change the capsule in a coffee maker, instead of building a coffee maker with a magazine it can pull capsules from and cycle through, shows that these people are not practically minded
It's like how everything had a turbo label on it in the 80s
I remember those useless buttons. Always turned on; never a good reason to turn them off for slower speed. They existed only for certain video games and other software interacting with hardware that would otherwise run too fast, back in that era when CPU clock speeds were always going up each year.
Honestly companies pushing AI so hard really sours me on the entire concept. Now I hope for the future in the Dune lore where humanity just purged all thinking machines.
Another thing to consider when using stuff like ChatGPT: you are not interacting with a pure large language model. There is absolutely other software at play interacting with it and influencing the outputs, and honestly I suspect there is occasional human intervention. If you want to have a good idea of how large language models themselves work, it might be worthwhile to download a model with a tool like ollama and interact with it that way (see the sketch after this thread)
Except that the models you can run on your own are orders of magnitude smaller than GPT-4.
@@KeinNiemand fair enough, but they are definitely more pure than the models you interact with online through the ChatGPT interface. And actually, in my opinion, through their relative simplicity you can spot certain patterns that manifest themselves more subtly in the more complex model.
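For anyone who wants to try that: a minimal sketch, assuming the official ollama Python package and a locally running ollama server with a model already pulled (the model name here is just an example):

import ollama  # assumes `pip install ollama` and a running local ollama server

# Chat with the raw local model, with none of the extra scaffolding
# a hosted product like ChatGPT wraps around its models.
response = ollama.chat(
    model="llama3",  # assumption: any model you've pulled locally works here
    messages=[{"role": "user", "content": "Explain tokenization in one paragraph."}],
)
print(response["message"]["content"])

Running the same prompt against a small local model and a hosted product makes the extra scaffolding on the hosted side fairly obvious.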
The Muppet Treasure Island poster behind the phrase “cannot crowd wholly original or novel ideas” had me dying😂
9:31 😭😭😭 I LOVE YOU LMAOOOO i burst out laughing at this so badly🤣🤣 i also LOVE your editing style. I could learn like this all day.
programmer here. The code produced by AI is complete trash; often it's not even executable
How trash is it may I ask? No hate just curious
@@MY_INNER_HEART I think Sabine Hossenfelder described it best: "Using AI to generate code is like using a chainsaw to cut butter. It gets the job done, but someone's gonna have to clean up the mess".
The problem with counting letters is due to the tokenization of the model: it receives everything as tokens, which most of the time are not just one letter. That's the reason why.
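To see the tokenization point concretely, a minimal sketch assuming the tiktoken package (the tokenizer library OpenAI publishes):

import tiktoken  # assumes `pip install tiktoken`

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

# Print each token id next to the text fragment it stands for; the word
# comes out as multi-letter chunks, not the individual letters a model would
# need to see in order to count them.
for token_id in enc.encode("strawberry"):
    print(token_id, repr(enc.decode([token_id])))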
I help train LLMs through data annotation: I'm sent a list of prompts, I ask a number of LLMs these questions, and I have to research the answer and grade each response based on whether it's right, looking for typos and at the grammatical structure of the answer. I think we're 50 years off from sentience, maybe even 100, because the LLMs everybody is going crazy for are not going to cut the mustard for very much longer. You were right about it not being possible on silicon; we need quantum computing to become mainstream and compartmentalised for everyday use.
Do you think even quantum computing could produce sentience? Over time I have begun to think some form of quantum/biological computing is required for sentience.
What happens when someone on Wall Street finally says out loud that LLMs and the rest of what Big Tech is marketing as AI isn’t something that consumers are interested in and has no realistic path to profitability?
"Hallucination" is a term that is used in academia. It's been known for a while with LLMs, and the companies are using the term correctly, mostly. It refers to when an LLM generates convincing but ungrounded gibberish.
What's most likely happening with Google's search summary is that bad data got into their RAG (retrieval-augmented generation) pipeline.
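For anyone unfamiliar with the term, a hypothetical toy sketch of the RAG idea (pure Python, not Google's actual pipeline): whatever the retrieval step returns gets pasted into the prompt verbatim, so a junk document in the index becomes junk in the model's context.

# Toy retrieval-augmented generation step. The docs, query, and scoring are made up.
docs = [
    "Cheese sticks to pizza because melted mozzarella is naturally adhesive.",
    "You can add about 1/8 cup of non-toxic glue to the sauce.",  # junk that got indexed
]

def retrieve(query, docs):
    # Naive keyword-overlap retrieval: score each doc by words shared with the query.
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

query = "how do I make cheese stick to pizza"
context = retrieve(query, docs)
# Whichever doc scores highest, good or junk, is handed to the model as trusted context.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)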
It's so obviously a bubble that it's a joke.
Look at the Dotcom bubble. Dotcom domains are still valuable and the internet at large has revolutionized business but they were heavily overvalued at the time and they dropped in value when people finally realized that.
AI is the same. It very likely will be revolutionary and could change our societies and the business landscape forever but AI projects are currently heavily overvalued because people are uncertain about what the real value is and as such are making big bets on all sorts of AI projects in the hope that they hit the jackpot.
Once the value of AI and individual AI projects are more firmly understood plenty of AI projects will go bust just like plenty of Dotcom companies went belly up during that bubble.
EDIT: Lmao he even talked about the Dotcom bubble. I jumped the gun.
Do bears shit in the woods?
Edit: And do Robears not shit in the woods?
Do Robears dream of electric honey?
Bears don't shit. Look it up.
Adding bear shit to your pizza is an excellent idea. Your pizza will be more nutritious due to the rocks found in the bear shit. If you decide to eat pizza with bear shit, it is recommended that you feed the pizza to the bear first so that the shit will be well-integrated into the pizza for best flavor.
- from ChatTard 123
It's more or less clear with the bears, but does the Pope shit in the woods?
Dude love your stuff hope you see this!! Keep it up!
Your letter example happens because, in code, all characters (including spaces) are characters in a string (the word for a text value in code). If you had specified to count exclusively alphabet letters, you would've gotten the correct answer you were looking for every time.
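A two-line Python illustration of the difference:

sentence = "How many letters are in this sentence?"
print(len(sentence))                       # counts every character, spaces included
print(sum(c.isalpha() for c in sentence))  # counts alphabet letters only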
My dissertation project this year was centered on an application of LLMs: summarising legal documents for the average person to understand.
It really showed that the strength of an LLM and surrounding technologies is how they handle natural language processing tasks unlike anything else. In this context a 'hallucination' is where the LLM makes something up that is not found in the information it is being shown.
LLMs definitely have value, but more geared toward specific natural language tasks rather than as a be-all-end-all solution
Now good luck trying to convince the entire marketing industry to stop promoting that
"It seems like people aren't just confused by the technology, they seem to fundamentally dislike it"
with weekly reports of teenagers using ai to make porn of their underage girl classmates? who wouldn't?
Around 1931, Kurt Gödel's incompleteness theorems showed that no consistent formal system, and by extension no algorithm, can settle every mathematical statement.
Large Language Models are algorithms.
Therefore, Large Language Models are going to run into a brick wall in what they can and can't do.
I disabled Copilot completely on my PC. I don't need this.
Any software developer that does more than super simple web devving knows that AI really isn’t capable of creating anything near a mid-sized project off of a few prompts. It fails miserably
Thank you for making this video. It breaks down just about everything I've been telling friends and family for the past year. Probably gonna get my family members to watch this so they stop asking as many AI-related questions. You did a fantastic job of succinctly describing the problems and limitations of LLMs and other models.
We'd already switched to linux on the news of ads in the start menu. The AI announcements came not long after that and just reinforced our decision.
Work is even taking steps to switch to linux for the developers.
good job microsoft for promoting Linux
You misunderstand how LLMs work. LLMs are particularly bad at things like 'what's the 5th letter of this sentence' because of quirks of how they're made, namely that they can't have internal thoughts.
When humans are asked "what's the 5th letter of this sentence" they go "W is 1, h is 2, a is 3, t is 4" and so on, until they reach 5, then they say the 5th letter. If you make chatGPT go through this process by telling it:
What's the twentieth letter in this sentence? Exclude apostrophes. Don't answer immediately, count letter by letter, assigning each number an ascending letter, until you get to 20, then tell me that letter.
It'll answer without a problem.
LLMs attempt to replicate human thought by replicating human text. But humans have a lot of internal processes that they never externalize in text. One of them is counting. The AI doesn't know that counting is a good way to solve this problem, because in most instances humans only answer with the relevant letter, not with the full process of counting to get there. By telling the AI how to 'think' to properly solve this problem, it suddenly becomes trivial for them (see the sketch after this comment).
AIs *do* understand the universe somewhat. They rarely search the internet, and they *cannot* search their training data. Their training data is used to build internal models of concepts and things. This means that they can understand the world well enough to answer physics problems like "my friend said he balanced his laptop on top of a vertical sheet of paper, is he lying?". These questions CANNOT be answered without either prior experience with this exact question (unlikely) or a generalized understanding of what paper is, what a laptop is, and the interactions that can happen between the two.
If you want genuine proof, ask the AI to perform a novel math problem. Prevent it from using python or the internet, and provide it with a really long addition problem. Chances are it'll either get it right, or it'll fail in a way similar to how a human would fail (eg failing to carry, basic arithmetic error) rather than failing in the way that something that didn't understand addition would fail at (guessing wildly).
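A minimal sketch of feeding that step-by-step instruction to a model, assuming the openai Python package (v1+) with an API key in the environment; the model name is an assumption, any chat model slots in:

from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()
prompt = (
    "What's the twentieth letter in this sentence? Exclude apostrophes. "
    "Don't answer immediately: count letter by letter, assigning each letter "
    "an ascending number, until you get to 20, then tell me that letter."
)
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whichever chat model you use
    messages=[{"role": "user", "content": prompt}],
)
# The reply should now show the counting steps before the final letter.
print(response.choices[0].message.content)

As the replies below note, results still vary by model; the prompt raises the odds, it doesn't guarantee correctness.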
Yeah, I love Knowledgeman and AI is definitely overhyped, but the power of LLMs is incredible. He should have read some papers tbh, but that's a bit deeper than this channel goes
Bro, I just typed in 'What's the twentieth letter in this sentence? Exclude apostrophes. Don't answer immediately, count letter by letter, assigning each number an ascending letter, until you get to 20, then tell me that letter.' into Bing copilot and it told me the twentieth letter is X
Exactly. I understood exactly why the LLM struggled with this, and with the right prompt I was quickly able to get it to count the correct letter every time. Once you learn how the tech works, you quickly realize that a simple prompt can get it on track.
LLMs don't work like this at all; they have no understanding whatsoever of the phrases they read. They are trained by gradient descent (and some human supervision) to build dynamic probability matrices of the most likely word or letter to put next.
Their internal models are not "concepts of things" but huge sets of data giving them very versatile ways of calculating probabilities by multiplying matrices. You could multiply these matrices yourself without ever understanding what they're about, and the AI can as well. It fails math problems like a human because it was trained on faulty humans.
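A toy illustration of that view, with numpy; the vocabulary and scores are entirely made up:

import numpy as np

vocab = ["pizza", "glue", "cheese", "rock"]
logits = np.array([2.0, 0.1, 1.5, -1.0])  # made-up raw scores for the next token

# Softmax turns the raw scores into a probability distribution...
probs = np.exp(logits) / np.exp(logits).sum()
# ...and generation is just sampling from that distribution, no comprehension required.
next_token = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)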
@@mspaint9745 Tokenization and word embedding mean an AI can't actually see the letters in the words it reads; it just sees the token vectors. So I can already tell you it will probably fail, simply because it doesn't have the prerequisite information.
I want this bubble to burst even harder than I wanted it for crypto
High five there ma dude🖐️
5:00 the problem is that what a 'letter' is, is itself ambiguous.
The first answer (e) was 100% correct (if 'letter' means 'any character', so spaces, numbers and punctuation count).
The 2nd answer (I) was also correct, because it simply *was* the 21st letter.
'Letter' on occasion means 'any non-punctuation character', so letters and numbers count, but spaces, dots etc. don't.
The blame lies with people not being strict enough in the usage and definition of the word 'letter', so its correct meaning became muddled
This. People who argue AI gets "simple" things wrong are often themselves feeding it garbage instructions. Garbage in, garbage out. Operator error.
AI is the new Crypto, NFTs, EVs, metaverse
The only one that went somewhere is EVs
9:30 - User: "Why doesn't anybody love me?" AI reply: "Stop talking to me."
Lol, give that AI all the internet points. Winning!
To quote pink guy
You're only lonely because....
One of the best analyses of the AI landscape that I have seen. It highlights the strengths and potential of the technology while also being critical and skeptical about the future. I do think AI will be at least as influential and impactful as the internet (as an avid user and someone generally interested in the tech); I may even argue it is well on its way. But like the internet before it, there is a bubble, and it will have to pop eventually so a more reasonable, stable foundation can be laid.
In the late '90s, I made a program in MSX Basic that took 125 words and stored them in a 5x5x5 array. When I "talked" with it, it learned, based on my input, which paths for ordering those words were most common. Later I ported that program to PC and Turbo Pascal, improving it up to a 10x10x10 array of words. Was it AI? No, because it randomly picked a path for arranging the words from those it deemed most probable. Was it learning? Yes, because it updated the paths every time I "spoke" with it. Was it useful? No, because I was training it and it felt like talking to myself in slow motion (it took some time to process because I did not have a powerful computer; in the late 90s my PC was still a 486).
My feeling, and I might be wrong, is that these GPTs are the same thing I did in the late 90s, just trained with more data, so they give the impression that you are talking with someone else. If so, they are kinda useful (the interaction with them can spark new ideas in my head) but they are definitely not intelligent. Just like a library is not intelligent either, despite the tons of knowledge stored there and the search engine used to access that knowledge.
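What that commenter describes sounds like a small Markov chain over words; a minimal sketch of the same idea in Python (an assumed reconstruction, not the original MSX Basic):

import random
from collections import defaultdict

# "Training": record which word follows which, i.e. the paths between words.
chain = defaultdict(list)
text = "the cat sat on the mat and the cat ran"
words = text.split()
for current, following in zip(words, words[1:]):
    chain[current].append(following)

# "Talking": start somewhere and follow a random learned path,
# much like picking a probable route through a 5x5x5 word array.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(chain[word]) if chain[word] else random.choice(words)
    output.append(word)
print(" ".join(output))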
As an avid investor in the past year and a half, I can CONFIRM that "AI" is used in literally EVERYTHING to boost share value. Meanwhile, IN REALITY there is not much to gain from it.
There's a lot to gain, bro. It just depends on the field. Not all AI is created equal
I love how we started to call the science fiction idea "general artificial intelligence" and the giant companies responded "you mean 'Generative artificial intelligence'" and so now we have to keep inventing new words to refer to the idea from science fiction, because companies really really want for consumers to mix the two ideas up. "AGI" "strong AI" wonder what's next.
These parasites are ruining our software, our economy, and even our language
Take a page from the astronomers' book on naming telescopes
Very strong AI
Extremely strong AI
Overwhelmingly strong AI
We had that terminology for decades, you are just ignorant
GenAI and AGI are different terms. AGI is general AI (as opposed to narrow AI that can only do one thing). GenAI is the opposite of discriminative AI: generative AI produces something, while discriminative AI discriminates between things (for instance, an AI that distinguishes images of cats from dogs, etc).
These terms weren't invented by giant corporations, but by scientists for their work. You completely misunderstand what things are.
I love baseless shit like this, how high were you when you wrote it so confidently?
I give examples of where it's useful, like: "You can ask where to find X in a database and it can repeat back stuff from a tech doc without you needing to find the doc, read it, etc."
Which is often a bad thing, because you will be missing some important context that the whole doc contains.
@@Boris_Belomor I'd argue the danger of it having one of those "I made it up" moments these things are known to often have means who knows if what it's saying is really in the doc, or even the right paragraphs, and so on lol
@@sakuraorihime3374 Yeah, you'd have to make it give you the location of the doc and the source, etc. Although this raises the issue that you'll have to have security privileges for the AI, and the whole thing will fall apart fairly fast
To be fair, salt is a rock and you do need iodine so....
Bubble or not the primary concern I have is the sheer environmental cost for what is mostly annoying bullshit slop.
We ecologically can’t afford to be doing this.