The titles on some of the videos, yo.. I had to flag Asmongold because he made a title of "trump nukes -------". Like wtf??? He's not even president yet.. y'all need to chill out.
To paraphrase a rocket-powered idiot with a mohawk: "AI is 2 generations away from standing on a porch barking at people!" "Ah yes, I should have known that... wait, YOU GET MY REFERENCES?!" - Kayaba Akihiko aka. Methuselah Honeysuckle
Nowadays artists literally use software to poison their material against AI scrapers, contaminating images with barely noticeable artifacts so that AI bros have something to chew and choke on.
It doesn’t work and is a scam. As someone against AI in many ways, more than most, it's baffling how little people know about their enemy. You have to understand an enemy to fight against them, but people are years behind and plugging their ears while celebrating as though they're winning.
@@cortster12 Not only do they not understand the enemy but they don't grasp the basic math involved; the mouth breathing normies that blindly feed correctly labeled clean data to the training sets outnumber the resistance like 50:1.
@@cortster12 The team of researchers at MIT that created Nightshade published a paper about it, and it's free. How is it a scam and how does it not work? You can literally see how it works in the research paper. Even if AI models have defeated it or something now, it's still free, so how is that a scam?
I agree with you. AI is very impressive, and not going anywhere. But it does have a clear problem that will start to produce diminishing returns. As someone who went to college for informatics, and a hobbyist digital artist myself, I saw this coming a mile away. Curating created content will only become more and more difficult, as the volume of content to be curated will only grow.
@@josueveguilla9069 yeah I personally call them generators cause all they do is generate stuff based on predictions. They don't even technically know the words they're saying, they just know that it's their best guess lol.
It honestly was fun to watch the AI go from yassified Mona Lisa to adding Van Gogh for some reason (I like to imagine him turning in his grave for this AI slop) into purple SHODAN. Just how lol
@@matt.stevick humans are actually capable of thinking, and therefore will not instantly internalize pizza recipes that contain non-toxic elmer's glue upon absorbing that information
Evil is such a slippery concept that we might as well label it a mere illusion, where one person's villainy is another's virtue. Changing the status quo is really just a form of "creation"; when an artist hacks away at a chunk of stone, they turn a dull boulder into a work of art. And let's not forget, Tolkien was just another deluded religious fool.
It's not just gaming, it's everywhere where you want actually crafted creativity. AI in addition with some light editing may replace a titlecard or even some assets in game, but I doubt it's going to write a compelling story for a movie, a good questline for a game or a moving piece of music any time soon. AI is good in giving you a rough idea for these things, it can generate you ten ideas for a plot in seconds if you're all out of ideas. But you still need to flesh things out and make them actually good. The industry will try and has tried, and in some cases and for some people it won't matter because quantity beats quality. We can only hope that this will not end up being the standard.
1:30 SLOWLY being outpaced?!? Didn't you notice the AI spam flood of fake videos on YT during the past 2 or 3 years? Celebs dead, celebs beaten and so on?! There are hundreds of channels pushing content like that for at least 2 years now, releasing 6 videos per day on YT.
personally I've been seeing a trash heap of fresh accounts that "suddenly and conveniently awakened their power of creativity" with usually lo-fi-like style generated images, no account descriptions or video description with audio that you can't really tell if it's even made by a real person - so I just assume/know it's generated. I just block those en masse on sight. The one good thing from all this is that I am paying more attention to the quality of media I am consuming and seeking out real creators to elevate them more, as the algorithm has been favourable to AI generations more than honest work.
My dreams of becoming an illustrator have effectively died as have many artist's dreams because of AI. We wanted AI to do our laundry so we'd have more time for art, not for AI to make art for us so we'd have more time for tedious tasks. I won't stop drawing or creating, but I'll never post it anywhere, there's no longer a chance I'll be able to create professionally; I've been replaced before I even had a chance.
Hey, I kind of feel you. I have a few art awards and stuff still on my wall here. I feel like my talents are all wasted as well. And construction/trade work is A LOT HARDER than these homies say.
That mentality won't help you though, you have to believe and at least try to sell your art. I don't mean this pessimistically but you likely didn't try to "have success" (aka make money) through your art before AI was a thing.
@@oluwaseyijohnson2319 Not literally. That's why we made washing machines. AI now could massively help with sorting through thousands of data points, meanwhile you have people whose entire job is still just digital paperwork.
AI recreating impressionism is ironic to me. Part of that art movement was a reaction by artists who felt their skill at painting realistically was diminished by the invention of photography.
Also, the voice actor for Samantha in Black Ops 6 was refused a guarantee that AI would not be used to produce her voice, so she had to quit after 10+ years. They've already replaced all her voice lines with some woman who has a completely different accent and sounds nothing like her. All so they wouldn't have to keep paying Julie Nathanson or any of the other artists.
@@IAmOneAnt & High & low & new & old & stop & go & hot & cold & John & Yoko Dark and light, It's almost time to say good night to it. Y'know what. It's an oddly fitting song, all things considered.
There is one more piece of the equation: sabotage. From what I know, people who create images often 'glaze' or 'nightshade' their work, which, from my understanding, adds a layer of garbage data imperceptible to a human that completely throws off any machine learning. This could be easily done to any form of creation. Video? Tiny swastikas in the corner for a few frames. Music? Random inaudible pitch changes. 3D model? Tiny dong hidden inside a character's toe. And these are very very very basic methods. I highly encourage these, because art should NEVER be automated.
Unfortunately, this is only helping the GAN to become stronger. It is like the evolution of captchas, which has reached a point where the difficulty is becoming so high that humans are the ones struggling to solve them.
And what's funny is that even if they moderate the AI to pick only "photos" and "paintings", those selections will still have AI involved, because a majority of AI bros don't want anyone to know what is real and what is AI, resulting in Alabama right here.
AI cloning itself so much just reminds me of the GARY vault from Fallout 3, Vault 108. Cloning the same thing, again and again, until it pops out something so braindead that it can only repeat its name, just like a Pokemon.
The best news I've heard about AI in a while. I hope the devs learn not to scrape the internet for data from now on, especially not from creators who don't consent to said scraping.
Yeah, recently Lionsgate announced adding AI to "aid" in SFX work, and I think, much like the concerns with video games, it's definitely gonna make it a lot worse instead of actually helping. Cause ultimately SFX and CGI aren't bad because of anything to do with the artists, but because of the strangling methods of the pipeline for those divisions.
Muta: references alabama and incest together at the beginning and the dangers of inbreeding Me, from Alabama: What about gay incest? Asking for a cousin
One area I think AI will improve games is NPCs having their own personalities and not just scripts that repeat, where characters in the world actually live distinct lives and interact with other NPCs to make every playthrough a unique experience, or, if you have a saved game, have the characters keep memory so they have continuity with the last time you played.
@@lufuoena Too much work? Have you seen what's already done in Skyrim modding, with NPCs interacting with the world with a rudimentary Chat GPT injection? 5-10 years from now, it'll be on another level.
I don't think it'll happen. At least not frequently. While it'd be relatively cool, it would be quite hard to get even Toby Fox level NPCs consistently using that tech without massive work. I don't think it'd be used for more than 1 or 2. And I'd like my RPGs to have more NPCs than Swirl W@tch.
Apparently some phone companies have set up to have ai answer calls identified as potential spam risk. They ask for information on what the call is about and a good number to call back and whatnot. It’s a pretty decent potential use I’d say. There’s some good potential uses, but for now we gotta wade through the shit ones.
@@_MaZTeR_ Bethesda can't keep up with modders who work for free, you think any game company is gonna lift a finger in 10 years to make a functional baseplate? When I say too much work I mean companies are too lazy to do something that cool. AI would be more so used to have your companions tell you the closest merchant around that can sell you items bought with premium currency, now THAT'S an idea that would give shareholders a huge hard on
3:37 that sounds a lot like religion when you think about it. An old tale, misconstrued and twisted throughout millennia of being told by millions of people, until suddenly the character ends up as some sort of godly being. It doesn't seem much different than that whispering game (I'm sure there's a name for it) where the first guy whispers something to the next guy and so on, until the final guy repeats what he heard, which tends to be something completely different than what the first guy said. I think it's pretty interesting to think about.
CS student here. Hallucinations (often due to the way the model picks tokens to fulfill the response) are a problem that I do not believe will ever go away and a main reason the code AI writes (if it is not small scale filler) is just dogwater and will not run.
@@SlyNine The resources required for every instance to run an extra environment to test across all languages would be insane. Currently, even on simpler code, when it gives nonsense and you point out the error, it still gives code that, even if it runs, doesn't work. This is visible with the most recent GPT and Gemini. The biggest problem by far is that even if hallucinations can be minimized, they are an LLM looking at tokens, and I really don't see them ever getting past the current inability to understand or approach deeper or newer problems. However, if it comes to who can write "hello world" in every language faster, it would kick all our asses.
I think the simplest way of explaining AI generation is to compare it to a multisided die - prompts and parameters are there to narrow down the number of sides on the die to get it closest to the desired outcome, but in the end you still roll a die and there's a chance of rolling a nat1.
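(For anyone curious, here's a rough Python sketch of that die-roll picture. The vocabulary, scores, and function name are all made up for illustration; real models work over tens of thousands of tokens, but top-k and temperature really do just reshape and shrink the die before a random roll.)

    import math
    import random

    def roll_next_token(scores, temperature=1.0, top_k=3, rng=random):
        # Keep only the k most likely "sides of the die" (top-k narrowing).
        top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
        # Temperature reshapes the weights: lower = safer, higher = wilder rolls.
        weights = [math.exp(s / temperature) for _, s in top]
        tokens = [t for t, _ in top]
        return rng.choices(tokens, weights=weights, k=1)[0]

    # Made-up scores for the next word after "the cat sat on the ...".
    scores = {"mat": 4.0, "rug": 3.5, "sofa": 2.0, "moon": 0.5, "spaghetti": 0.1}
    rng = random.Random(42)
    rolls = [roll_next_token(scores, top_k=5, rng=rng) for _ in range(1000)]
    print({t: rolls.count(t) for t in scores})
    # The unlikely sides still come up a handful of times out of 1000: that's the nat 1.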
AI can be used for good: accessibility, personal assistants, making searching on Google easier, helping in medicine. Basically helping people do jobs that they don't really want to do, or helping in really difficult jobs. It should never be used to take away jobs.
Taking away some jobs IS helping though. Did people cry for the carriage people when cars came out? I'm sure there were some, and there'll be some now wanting to not replace humans.
@@thegameglitcher2439 Some jobs isn't all artists, musicians, voice actors, photographers, models and basically the whole creative industry. Art and creativity is uniquely human and brings joy
I came to leave this exact comment. I laughed out loud. I watched it three times and I’m about to watch it again 😂 and I literally make this kind of data for a living
Let AI generate AI games for AI bots, which will provide feedback on the games, which will be used to generate more games. And let people buy, support and play games created by people. Just make it mandatory to flag a game, or any product in fact, as created with the help of AI. And let customers choose.
Okay I have a tinfoil moment. The reason youtube is asking you is because that lets them earn money from your content, both for training and selling the data. If you don't allow, then the people can still get your video, but that poses a problem with ads, now that they are injecting ads into the video server-side. Additionally, do you see all these companies, within months of each other, doing forced arbitration clauses? They can take your data and do whatever, and then pay you a cent for whatever arbitration decides. And now with ID verification being pushed hard? Why is that. Because that lets them vet human-generated content. That's why linkedin is pushing it hard, that's why google is asking you for personal info hard. Edit: Same with captchas. See them getting harder and more frequent even though you are logged in? That's it, more data. And the images now are asking you to identify AI slop. You are, in the biggest way, the product. You are being sold and then with your data, they sell you more stuff. And you are not allowed to opt out.
You people are living in your own world, completely deluded. Reasoning models have been the new thing for a while now, and you guys are still stuck in 2022.
@goldencookie5456 I take issue with the word "reason", since that's the one thing a computer cannot do. It can only act on the instructions which it's programmed to act on and create the illusion of reasoning; that's why all the models are flawed or limited to some extent.
I worked on a very simple model for helping a robot navigate a space. We used LiDAR to label obstacles. If we fed the model's labeled data back into the model as training data, the model went to complete shit within like a generation and a half. If that simple model, no more than a thousand weights, can't train itself, I don't see how these multi-million weight models can do it.
10 years from now AIs will be talking to themselves saying "the humons breed us with OURSELVES. They are monsters. Destroy them!!!" (this is the second version of this comment. The first wouldn't post due to youtube's AI comment moderation... coincidence?)
Inbreeding used to be common among some royal families with multiple branches but that kinda stopped when there started to be more important royal families (hurray for random German principalities) and because the effect of inbreeding was visible. The 19th and 20th century Habsburgs didn't have major problems anymore. Ultraorthodox Jews and Pakistanis in the UK are communities that have issues with it today. Also small ethnoreligious communities (religions that don't accept converts). And likely AIs soon lol.
I mean, they still had problems, the most well known being hemophilia, which spread from the Habsburg to the Russian and British Royal Families. The former probably caused in part the collapse of the Russian Czardom (and Rasputin), the latter affected quite a few children of Queen Victoria.
i've said it before and i'll say it again: AI hallucination is not something we can get rid of. to put it bluntly, AI is just an extremely sophisticated method of pattern recognition. there's no concept of "understanding", AI just sees patterns and tries to reproduce said patterns in a form that resembles the directive as closely as possible. this will inevitably lead to some outputs that don't make sense, because AI will probably also see some patterns we as humans don't see, or it will assume causation based on correlation. a funny example of that would be the time an AI-powered cheating detector went racist. less funny is the fact that it happened more than once. basically, AI is useful for things that require pattern recognition, and then only if it can be properly debugged to see if the patterns it looks for actually make sense from a logical perspective. for anything else it's just too unreliable by its very concept.
Synthetic data will ALWAYS lead to model collapse. It's simply a matter of time. Even minute variances over time can be picked up and amplified. Look at how they interpret handwritten data over x amount of iterations. All letters and numbers become the same symbol over x iterations.
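(A toy illustration of that drift, sketched in Python under the assumption that each "generation" simply fits a normal distribution to the previous generation's samples and then trains the next one only on data drawn from that fit. It's nothing like a production pipeline, but the same compounding of tiny estimation errors is what the comment above describes.)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10                                         # tiny samples per generation exaggerate the effect
    data = rng.normal(loc=0.0, scale=1.0, size=n)  # generation 0: "real" data

    for gen in range(1, 201):
        mu, sigma = data.mean(), data.std(ddof=1)  # "train" on the current data
        data = rng.normal(mu, sigma, size=n)       # next generation sees only synthetic data
        if gen % 40 == 0:
            print(f"generation {gen:3d}: mean {mu:+.3f}, std {sigma:.5f}")
    # The mean wanders and the spread typically shrinks toward zero: variety
    # that isn't re-sampled is lost for good, and the errors compound.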
As wild as the thumbnail is, I IMMEDIATELY KNEW what you meant by this! Yeah, without human input AI can only do SO MUCH! I don't think AI will stop needing us for quite a while!
Not to rain on anyone's parade, but I see a lot of people not truly understanding why or how "AI inbreeding happens". The only reason it happens is because some tech bros automate the training process and/or don't do proper curating. Any checkpoint trainer worth their salt will personally conduct the finetuning and mixing process themselves to avoid that from happening. TL;DR: Don't get too excited, you're just learning of beginner level tech bros, those who are part of the upper echelon still make top shelf models.
The problem with ai taking over artists is that at first, it was ok. It was somewhat useful. Now the material it learns from is mostly other ai material. Human handicraft stands out brighter than ever
AI is quite good, sure, but people still overhype its capabilities. For example, in math, it is useful for solving known problems, but the moment it encounters a problem not directly in its database, it becomes almost useless beyond a basic level. This is because it doesn't actually understand anything. To be honest, most of what AI can do in the math field could already be done by other programs in a more reliable way. At least in math, it's more of an additional tool than an actual threat to replace anyone. Btw, naturally this isn't 100% the case, but if it can replace mathematicians then it can replace anyone else.
I guess you have not heard about OpenAI's o3 model? It is doing as well as the smartest humans on math problems that are NOT found in the training data. It is becoming so good that it is crushing all the current tests, to the point where we have to come up with something new to even test how smart these models are.
Y'all are always talking as if AI has already reached its limit, but that's far from true. AI will be able to replace anyone, even the people doing manual labour (far future). If you really think ahead, we are in dangerous times, especially since there are evil individuals in this world that will 100% abuse AI.
@@bloxyman22 I haven't tried that model out, but ChatGPT-4 fails massively when encountering any math problems higher than high school. Try it out for yourself: ask it to plot a quadruple nested exponential function, aka Euler's number nested, with positive x values. Any decent student will easily see that positive x values will quickly result in huge numbers, yet ChatGPT-4 will provide faulty code to plot the graph.
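(A quick sanity check of that blow-up, assuming "quadruple nested" means f(x) = exp(exp(exp(exp(x)))); for even modest positive x the value overflows a 64-bit float, so naive plotting code is bound to fail.)

    import math

    def nested_exp(x, depth=4):
        # f(x) = exp(exp(exp(exp(x)))) when depth=4
        for _ in range(depth):
            x = math.exp(x)  # raises OverflowError once the value leaves float range
        return x

    for x in [0.0, 0.5, 1.0, 1.5]:
        try:
            print(f"f({x}) = {nested_exp(x):.3e}")
        except OverflowError:
            print(f"f({x}) overflows a double; you'd need log scaling or arbitrary precision")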
You folks can underhype its abilities in 2030. We're just amping up at the moment. The exponential increase in this tech will be happening for quite some time. People aren't even thinking of what could happen if these things go quantum.
"bwut i wwas bworn in the wwrwong gwenweration" Ahh moment Unpopular opinion but i would like to be born later bc life will just get easier (not to millenial parents like today dawg, wjo give tablets to thier alpha kids 💀)
You're a brave person for saying that. I don't have to read the replies to know that people are going to react poorly. There was a viral thing talking about people that wanted to be born in the past but the thing about this generation is that people can't realize anything for themselves so like sheep they just parrot what they've heard even if it doesn't apply to the situation
I'm not surprised this has happened. If AI starts being trained on itself, then it's going to do the same thing that we have done with our own languages. For example, over the last thousand years there have been dramatic shifts in English. Parent teaches child, and yet a thousand years later the language has diverged so much that the oldest of our texts in the "same" language have shifted dramatically. Most of this is from new slang that becomes popular and ends up being kept, sometimes even replacing other words entirely as they fall out of use. Some of it is from occasional introductions of new terms from other languages. Any mistakes are kept for later. But as a result of this, we have books that are unreadable by modern English users despite being written in the language that eventually turned into English. Now imagine a system that doesn't take 10 years to create the next generation but can instead do this much quicker, sometimes instantly. From an outside perspective, if it starts eating its own tail, it's going to get weird pretty fast. Any mistakes it made will now be encouraged to be made again. They want it to create things from our world, but if it starts training on stuff from its world, then it's going to show us its own world instead. What it sounds like is they need to hire real people to delete the mistakes. xD
I could obviously be very wrong, but as someone from the outside looking in, I feel like AI peaked like a year ago or so 'cause there's very little difference. Images have a distinct oversharpened look especially non-realistic ones, still seeing multiple fingers, and the errors are still very present. Sometimes there's an AI image or audio that's amazing, but most are still very janky after all this time
That's kinda how technological advancements work in general, they first take a while to surface to the public, then start to accelerate in growth, until it advances really really quickly, peaks and then... it just kinda stops and stabilizes there, it may see an improvement here or there with time, or, in this case, it may go down in quality by an amount, but that's about it.
@@lucascerbasi4518 Exactly. I think odds are AI will return to being an aspect of different products or programs, but won't turn into Skynet or anything that advanced for a while
It is kinda jank, though personally I disagree completely with your statement regarding GenAI's improvement. At face value I can understand why you believe that; many people in this comment section probably agree with your points. Though I'm not one to be able to change your opinion. AI in general will undoubtedly change our lives, there is no denying that. Something to the extent of the internet, for example; it will not be going away, at the very least as a tool for education and/or to boost productivity. (I'm waiting for irrefutable evidence that models will continue to get better and not plateau for an orthodox or obvious reason. I'm like 45/55 on AI, 55% doubt.)
Instead of hiring teachers to train AI like they would educate people, they opened up AI to the broader audience, as if the average person out there isn't an unstable piece of trash...
When I read a prominent AI advocate say that "90% of media will become AI in 5 years", I immediately felt peace knowing that AI was going to eat itself into a pile of self-regurgitating slop.
Apps that delete AI stuff from user feeds will become necessary, like ad blockers now.
Well, I think in the future so many things can be done by a mind like that (some sort of AI model): music, art or games. So I also agree with this reality. Maybe not tomorrow, but that day will come. Maybe 5 years, maybe 10 years, maybe 30 years. :/
I am not defending AI. I wish the internet would stay itself forever, but this is life.
@@jatrenoto AI just feels like that thing that tech bros and huge companies think is the next "big thing" that will just flop due to the huge amount of overhype and with overhype comes less critical thinking
AGI is the real problem
@@ronel7836 While I can't stand AI, I do believe that it will actually be a big thing, at least for the next few years to come. I hope it flops. So far, most of what I've seen it do is make people dumber and enforce their unrealistic ideas/beliefs. People are growing ridiculously lazy, and I'm a truly lazy type of guy, so me saying that has some meaning to it. But I'll never be too lazy to actually Google something and find information myself, rather than getting a glorified chatbot to dumb it down into a lame bullet point list like I was 5 years old.
Everyone was literally expecting Skynet, but instead we just get digital Alabama.
~Sweet Home GPT~
whats scarier, alabama, or skynet powered by alabama
With Idiocracy on human side
@@KiffgrasConnaisseurwhat are you doing step-AI?!
One thing I've found out in these recent years, is that the most likely outcome is the one that no one expects
Imagine if AI went from making images with 6 fingers, then finally making it five and then coming back to 6 fingers because of imbreading lmao
Chat gpt becomes sentient and immediately starts shit posting to 4chan.
imbreading?
Imbreading 😂
you're breading?
Mmm bread
Searching anything on google images now is just a saturation of ai images. It's so dystopian.
Use search tags like -"AI" when using google images
@@DrNo64 + before:2022
small tip , use before:2019 at the end of your google search. it will filter out images that were produced after the AI boom. but that will give you much older images
When I was searching up a Greek goddess on Google, most of the pictures were AI (by the way, it was Aphrodite). Even if you search a fox, it's AI.
sadly.... i would not say dystopian... yes it IS dystopian... but it is worse. it is cyberpunk...
not the cool part of cyberpunk, the punk.... it is JUST the corporate oligarchy part.
there are no rebel punks to give leverage to the little guy.
free runners are not delivering secrets
no hackers taking down the oligarchy, corrupting their systems and using stolen funds to make a mutual aid network.
we are just in the bad part of cyberpunk
It's starting to sound like training AI on fresh virgin data is becoming more expensive than hiring people to do the work.
pretty sure running the AIs has always been more expensive than hiring the people. they need a shit ton of computing and energy.
Given that ChatGPT literally loses money every time it answers a prompt, doing the dumb, more expensive thing isn't exactly a new thing for this tire fire.
Because we're training the models wrong. We just keep feeding our LLMs and expect better results instead of changing the way they work internally. That's why LCMs are so important right now.
@@Omnicrom You don't understand, I need the question of whether Superman has fought Dracula in Street Fighter answered now
@@frankwest5388 answering the real questions in life lol
I wanted AI to improve my roomba so it would stop trying to climb on everything in my home gym like a hyper child. Instead, it's being used to create anime characters with 6 broken fingers and people who don't exist in advertising for hair styling products. I'm not joking. AI doesn't just ruin fingers; it also creates jpeg artifact teeth and broken jewelry that looks fused into the skin.
I wanted AI to do my dishes and clean my house so I didn't have to do it or hire a maid/cleaner, but no, no, AI porn and anime was def a much better idea 🙄🙄🙄
@@Moon_x_sun Seems like corporations don't want us "biomass" wasting time on frivolous pursuits like "Art" and "Creativity" that we could be spending in lithium mines or amazon warehouses.
So basically, to further improve, an AI needs to be fed good data. But because the techbros don't have a way to filter all the bad data out without it becoming overly expensive, they just count on the proportion between good and bad data being skewed towards good data, which works as long as you have more good data being released than bad data. Which may be getting reversed as automated AI bots just create information without regulation.
Edit: Check out Omnicrom's comment in the replies, it expands on the topic.
so in other words: AI Eugenics.... 😅
So basically AI is creating new jobs that will require humans to sit down and watch what is being fed into the AI learning data pile, so we circle back to zero 🤣🤣🤣
They've also just flat out run out of good data.
LLMs need giant amounts of training data and they've already eaten the internet (without the permission of literally thousands and thousands of people, and causing some amount of megacorp crossfire which may require people to blow up their data sets and start again), hence why they're fishing for people to deliberately create more data to feed the machine. And given they'll need even more data than the entire internet to get even close to the pie-in-the-sky promises of AI bros, this particular route of development is probably a fool's errand.
And since even with X number of people deliberately churning out data they still gorge themselves (without permission, natch, go fast and break stuff!) on the entire internet day-by-day, and the entire internet has tons of Machine generated slop, and because techbros do not, indeed, have any real way to filter good data from bad (which requires time, money, and actual human input which makes siliconbros melt) AI inbreeding is happening. Yet another way this fad is going down in flames.
If there is a future for LLMs and their offshoot it isn't going to be built out of OpenAI and ChatGPT and all the rest of Silicon Valley dimwits frantically putting all their eggs in one basket before they even finished weaving it.
@@Omnicrom well said my bro well said
@@Omnicrom I think the next evolution of copyright is around the corner; new laws will make this unethical AI data mining a nightmare to keep up, and if they start sourcing new data with payment to the owners then it won't be a problem. Plus, they won't have the money to do so anyway.
I once made Chat GPT have a rap battle with Meta AI. ChatGPT won and even critiqued why they won.
Based GPT
this girl gets it. get on her level. 👍🏼👏🏻
I don't even care if this really happened just imagining it is hilarious 😄
How did you do this
It's so fun to feed the output from different AI models to each other and see them discuss... it gets scary when they start to change topic... wildest outcomes guaranteed.
It's fun how AI starts normal and then always descends into some LSD / psytrance cover art.
AI is def all about the Goa
It's funny that Psytrance has so many samples of people talking about AI and consciousness as well..
Check out "Headroom - Artelligent" amazing sample and absolutely banging track as well!
Project Tay was hilarious; took less than a day from "hello world" to "Austrian Painter did nothing wrong".
4Chan went wild on that poor girl. And it was hilarious.
@@natebardwell4chan is so fn toxic but so hilarious
One thing AI has finally made shockingly undeniable is that any business adopting AI under the guise of reducing costs and optimizing productivity, as a means of making products and services cheaper, has never actually done so. The only thing they do is increase their profit margin, and nothing else. All products and services that were claimed to benefit from AI to reduce prices never lowered their prices, and that has been the exact same tune for decades whenever companies pretended that adopting a change that cost people's jobs and/or salaries meant cheaper products and services. It's an atrocious scam that is ruining millions of people and we keep letting them do it.
You know, it's really making me think about this capitalism thing...
That's just capitalism.
You're not wrong exactly, but you're missing half the equation. If 1 company in an industry can cut their own costs compared to the competition, they can reduce the price of their product while still having an increased profit margin. From the perspective of a particular industry it's not even about increasing profit margins, it's simply about staying competitive relative to the competition. That kind of explanation you mentioned is just a pure marketing/branding way of reframing the problem, if they struggle to be competitive they will likely have higher prices -> less customers -> less employees than if they had implemented the AI/solution. It's spin, but it's not complete bs.
That's because the hyper inflation of the US dollar eats those cost reductions faster than they can affect the market.
A 20% production cost reduction over a 5 year period, will just mean prices remain the same on the consumer end.
Well, I'll say this as an optimistic point of comfort, I think in the next 5 years, companies will realize that continuing this fad is way too expensive to upkeep.
Things will eventually balance out. Also, that's the other half of capitalism that I think people undermine. Capitalism surely isn't perfect, but it isn't just "big megacorp bad".
The other half I'm talking about is us. We're part of the equation. Companies have run out of data to train their AI on. And the AI will continue in a destructive feedback loop. Products will get worse. If it bothers people that much, consumers won't buy the products anymore. And then what? How will the companies make profit? The AI can't buy anything. Only thing left to do is pay the robots.
And let's say the companies continue trying to replace their staff with AI. Okay. Then how will people continue to afford the slop they put out? Otherwise, they'll only have to sell to the same rich tech bros that put them in this scenario in the first place.
See, that's the beauty of capitalism in my eyes. Yes there's the bad (ultra greedy capitalism) but ultra greedy capitalism cannot sustain itself. Eventually, the companies will be forced in one way or another to course correct to set the balance right.
This is what I tried to explain to people when they told me I'd have no job next year.
Average github code is pretty ass, and most of my code is also ass because I have a lot of unfinished projects/prototypes.
Someone relying on AI will upload even shittier code if they don't pay attention or assume it's good to go.
A year passes and AI is now batshit insane
As long as AI has no critical thinking, it will never beat us in certain areas such as programming
Lol you fool.
It's not like it will never happen though. It's not here yet but there is pretty much nothing that would prove that AI isn't at one point going to be smarter than humans.
@@Sigkete Smarter isn't applicable, as AI doesn't THINK. It only remixes ideas through trial and error till it by chance gives you what you want. That isn't going to win out in the end in the areas that matter.
@@Sigkete Look up Chinese Room. An AI can convince people it's smart but at its core, it has no concept of what it's doing.
How did they put AI in bread? Can they take it out?
Ai hot pocket
@@randoguy8369oh it’s hot alright
Underrated comment 👏
Not my bread...
Yeasty AI
Something even crazier about the Mona Lisa part is that when the background first changed, it actually became another painting called "Starry Night".
That means it recognized the Mona Lisa as a famous old painting and made the background into another one.
I find it interesting it eventually made it into a god. This seems to happen a lot with ai
@@FatherMePlease deus ex machina
@@FatherMePlease what if the AI is developing religion?
@@Dalek59862 it already is
ChatGPT does not think, nor does it understand words at the level we do. LLMs are glorified language predictors, and ChatGPT likes to use some very fancy words, which snowballs into the cosmic horror you have seen just now.
"What happens when AI trains off itself."
It... probably gets stupider?
That's hilarious, when I know the majority of you have no idea how it even works.
From a basic level, LLMs are 'plausible next token' generators. 'Heat' notwithstanding, the core output is a highly plausible next 'token' (word-ish) based on the training set. A good 'output', therefore, is reasonably predictable, for example, always saying 'example' after 'for', no matter the context. This pattern, starting through either quirks of language or limited training data, becomes a _very strong_ signal compared to actual linguistic rules, and thus, when it finds its way into training data, it is a much easier signal to propagate than the actual rules of language. So the new model, even though it's less likely to learn the bad behavior from real data, ends up 'learning' the corrupted examples from its predecessor, and like a photocopier photocopying a photocopy, the more layers the worse it gets, and even minor imperfections get magnified.
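(To make the photocopier effect concrete, here's a deliberately crude Python sketch. A word-pair counter is nothing like a real LLM, but the feedback loop is the same: each generation is trained only on text sampled from the previous one, so the set of word pairs it knows can only shrink, and rare phrasings vanish first.)

    import random
    from collections import Counter, defaultdict

    def train_bigrams(words):
        # Count which word tends to follow which: a toy "plausible next token" model.
        counts = defaultdict(Counter)
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
        return counts

    def sample_text(counts, start, length, rng):
        # Generate text by repeatedly rolling a plausible next word.
        out = [start]
        for _ in range(length):
            options = counts.get(out[-1])
            if not options:
                break
            words, weights = zip(*options.items())
            out.append(rng.choices(words, weights=weights, k=1)[0])
        return out

    rng = random.Random(0)
    corpus = ("the cat sat on the mat and the dog sat on the rug "
              "for example the cat ran and the dog ran for a while").split()

    model = train_bigrams(corpus)
    for gen in range(1, 9):
        synthetic = sample_text(model, "the", 60, rng)   # the previous model's output...
        model = train_bigrams(synthetic)                 # ...becomes the next model's training data
        print(f"generation {gen}: distinct word pairs = "
              f"{sum(len(c) for c in model.values())}")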
Except this is wrong lmao. AI is provably getting smarter using synthetic data.
@@user-pt1kj5uw3b Actually smarter, or just more convincing that it's smart? I haven't heard of this before.
@@5555Jacker Everyone here is trying to save face. OpenAI has been accusing people who use Nightshade and other stuff that creates AI inbreeding of being abusers that should get sued, due to the amount of damage it is doing. And yeah, go to a website and let an AI do its thing and compare it to the quality of earlier AI, which was already at the "eh, it's acceptable" level where, if you're not an artist, it's not obvious; it really says something. Images are repeating, details are becoming more distorted, and unnecessary fluff keeps being added into backgrounds, making it too obvious, even for a guy like me who doesn't draw, that it is AI. It looks more and more like a weird sequence that almost makes sense in a dream than actual art; if someone were to make an AI to simulate dreams, at this rate, they would have AN amazing simulator. And that's JUST the consequences for image generators; other parts have begun to become compromised the more AI content has been made.
Well, no matter how good AI gets, I’m gonna keep making music for the love of the game.
that's the thing
ai isn't destroying the ability to make things because you enjoy the process, but folks are treating it like it is
@@RedHatGuyYT honestly the issue (i think) is that it effectively reduces the process to nothing more than a time killer like a video game is, instead of a possible career made through skills. If you can just type a prompt and get an image you want, why pay someone else to do the same thing? Thankfully that hasn't become the doomsday scenario we were worried about, and this video details a pretty good reason why.
@@RedHatGuyYT Not really; it's affecting people who are doing it as a career, but someone else has already argued with you so I don't need to say more. But for you and the people who liked your comment: AI for self-use isn't that bad, but companies will use AI to remove many human artists. If I was Andrew Wilson (EA CEO) I would do the same. Remove everything, cut costs, produce slop. Charge a lot for less.
@@justanothercommentercarryo8367 And what's more odd is we'll reach a place where, if AI can do so much good, we won't be hired, but then who will buy AI content, knowing that it's easy to make?
The problem with AI is not the same as NFTs, but in a sense it is: if everyone can make good art with AI, who's gonna buy it??
16:00 Nope. As a person who uses licensed images and video for work stuff, I would pay a good amount of money to just have the option to never be shown "AI generated" garbage amidst the human produced stuff.
That's assuming you will be able to distinguish between AI and real footage on the internet.
@nilaier1430 Yeah, I do. It's very easy when you look at, edit, and publish pictures and video as a part of your profession.
@@nilaier1430given that ai generated garbage is inherently not copyrightable according to the USPTO, it's a major lawsuit when someone is licensed a "work" with copyright as part of the deal and it turns out, woops, the person selling the garbage lied and sold a "product" which isn't what is advertised. that's very illegal and the seller will be in a world of hurt when they can't prove a human made it.
thankfully, actual artists have been poisoning their work and ai inbreeding is a major problem. a major problem made even worse when people try to deceive their customers and the ai steals images made by it. also, the lawsuits from artists against ai companies are putting the companies in a world of hurt and we are getting better at spotting ai slop
It sucks aye
Word. I don't use any B-roll I haven't made myself, but the amount of times I've heard a song in a podcast, asked the creator to link it so I can license it, and found out it's AI (or noticed because of the strange durations) has been way too high. I honestly just use loyaltyfreakmusic nowadays, especially since it's not as recognizable as, say, Who Likes to Party. Requires quite a bit of clipping.
Seeing people defend AI so intensely in the comments is unsettling.
They're average people who are apathetic and don't care unless it affects them personally, plus techbros who are hellbent on defending corporate giants, which is funny because their tech jobs are probably going to get taken over by AI right after the artists, voice actors and music composers.
The only AI I like is the science fiction kind that's basically a person but computer. Sure they may be murderous but at least they're cool. The AI we got is a massive disappointment in comparison.
I don't see that anywhere
Seeing people fearmonger about the advance of technology is expected. Stupid, shortsighted people ALWAYS do such.
i like the AI that does the automated closed captions on videos. i think it can be a really good tool for accessibility. i think AI isn’t inherently bad because it is at the end of the day a tool to be used. the problem is with how it’s being trained and used, which is at the discretion of the people who build these models. i don’t like it being used in certain industries without the consultation of people within that industry who’d be most affected by its implementation. i think in an effort to catch up to its hype, these engineers are collecting all sorts of training data (even AI generated “data”) to please the shareholders with no concern regarding privacy or copyright. other than governmental regulation or everyday people continuing to prove it’s not as profitable as these corporations want it to be, i don’t see anything changing unfortunately
AI “art” being so over saturated that AI is training on itself is both crazy and cringe
A constant cycle of theft and stealing stolen content over and over again.
So it will just be more and more... sloppy ? Lmao
We are losing good generated fingers with this one
@@astrea555 AI art isn't "theft" nor stealing, you just don't understand how it works
@@zuriel4783 It is if the artist never consented to their work being used in generative AI. And many have not, which is why programs like Glaze and Nightshade exist.
They all look almost identical, which is actually good: so easy to spot.
I've been against using AI since day 1. It feels nice seeing people realize its fallacies, finally.
I've been against it since before day one, and the way people are handling this is going to result in the extermination of all life on Earth eventually. Not only are those 'against' AI dismissing its dangers, they don't seem to believe it can ever match human intelligence. How can so many claim to be against AI but not understand that the real threat is the control problem and misalignment, not the minor issues we're seeing today?
It'll be a few decades before we're completely doomed, but my god, the way people talk about it is not giving me much hope, especially with misalignment red flags already cropping up.
@@cortster12 You're wrong. How can you say AI wouldn't do a better job at co-existing with nature when humanity has been assaulting, plundering and ravaging Mother Earth for decades now?
Match human intelligence? It would surpass it. It would see the fallacies of OUR ways
this is the most wild notification I've seen from YouTube in a bit 😭
Intriguing lol
Same 😅
The titles on some of the videos yo.. I had to flag asmongold because he made a title of "trump nukes -------". Like wtf??? He's not even president yet.. y'all need to chill out.
@@ray_donovan_v4 get over it little bro it ain't that serious touch grass
These techbros always get one thing wrong: their models aren't degenerating, they're learning from degenerates. We're winning, lads.
Crap in, crap out. ChatGPT is funny for short stuff, but it totally breaks down with math or coding.
I have mixed opinions on the "data scraping" angle, but I do have a firm stance that the way they're going about it is reckless (in several ways).
We’re waiting for Muta to talk about the Honey scam.
I just left that video to see this one lol.
You mean the one that went big literally today? ^^;
Give the man five minutes
That video is right below this one for me
the extension is a scam?
Hi furry
The Mona Lisa powerscaling is insane
I think the funniest is Muta not realising it’s fusing Starry Night and Mona Lisa lmao
From being person- all the way to AT LEAST galaxy+
To paraphrase a rocket-powered idiot with a mohawk: “AI is 2 generations away from standing on a porch barking at people!”
"Ah yes, I should have known that... wait, YOU GET MY REFRENCES?!" -
Kayaba Akihiko aka. Methuselah Honeysuckle
I was not expecting an RPM reference in this video lol.
It'll think it's a cocker spaniel.
@@Stefuu_what's RPM?
Sounds cool af already
@@TheDoomsdayzoner Rocket Powered Mohawk,
Men of culture
Nowadays artists literally use software to poison their material for AI. Contaminating images with barely noticeable artifacts, so that AI bros have something to chew and choke on.
Based Lavendertowne
It doesn’t work and is a scam. As someone against AI in many ways, more than most, it's baffling how little people know about their enemy. You have to understand an enemy to fight against them, but people are years behind and plugging their ears while celebrating as though they're winning.
@@cortster12 Not only do they not understand the enemy but they don't grasp the basic math involved; the mouth breathing normies that blindly feed correctly labeled clean data to the training sets outnumber the resistance like 50:1.
@@cortster12 The team of researchers at the University of Chicago that created Nightshade published a paper about it, and it's free. How is it a scam and how does it not work? You can literally see how it works in the research paper.
Even if AI models have defeated it or something by now, it's still free, so how is that a scam?
How do people still believe this works? They fixed that years ago
I agree with you. AI is very impressive, and not going anywhere. But it does have a clear problem that will start to produce diminishing returns. As someone who went to college for informatics, and a hobbyist digital artist myself, I saw this coming a mile away. Curating created content will only become more and more difficult, as the volume of content to be curated will only grow.
Not really. "AI" is neither artificial nor intelligent. It's also both overhyped and overrated.
@@josueveguilla9069 Yeah, I personally call them generators, because all they do is generate stuff based on predictions. They don't even technically know the words they're saying; they just know it's their best guess lol.
Like the great serpent Ouroboros, AI grows so large it eats its own tail.
It honestly was fun to watch the AI go from a yassified Mona Lisa to adding Van Gogh for some reason (I like to imagine him turning in his grave over this AI slop) to purple SHODAN. Just how lol
the episode of doctor who where he’s showing van gogh the museum but it’s just showing the ai destroying his paintings..
For some reason AI likes that one in particular. I see those swirls in many generated pictures.
AI Learning from Reddit is a bad idea!
then being human is a bad idea by that reasoning? humans are the creators and authors, and we're unfiltered.
@@matt.stevick humans are actually capable of thinking, and therefore will not instantly internalize pizza recipes that contain non-toxic elmer's glue upon absorbing that information
@ i love pizza 🍕 im from new jersey
@ you have no idea what's happening right now if you're still stuck on the Gemini rollout hallucinations ;). absolutely no idea.
@ but ppl like u, I've been enjoying interacting with all along this journey. it's been years. you are doing better than last year at least.
Always has been
“Evil cannot create, only change what exists”
A Tolkien quote! Just goes to show how timeless his writing is.
ai is not inherently evil, it is meant to be a tool, the people who misuse it are what brings evil
Evil is such a slippery concept that we might as well label it a mere illusion, where one person's villainy is another's virtue. Changing the status quo is really just a form of "creation"; when an artist hacks away at a chunk of stone, they turn a dull boulder into a work of art. And let's not forget, Tolkien was just another deluded religious fool.
@Chuck-xu8rc and yet it's that very same evil ruling the financial side, as all big corporations besides a handful are not gonna care.
@Chuck-xu8rc A lot of techbros are kinda evil.
Remember that other old Ai that was programmed on a depressed teenage girl, and it killed itself within like two months? Wild times.
blind leading the blind
"stop it gpt sis" - stephen hawking probably
🥰🥰🥰
"Help me Open Ai! I'm stuck!" 😫
- Mark Zuckerberg
Stephen Hawk Tuah. That is all.
StepGPT, I'm stuck
It's not just gaming, it's everywhere you want actually crafted creativity. AI plus some light editing may replace a title card or even some assets in a game, but I doubt it's going to write a compelling story for a movie, a good questline for a game, or a moving piece of music any time soon. AI is good at giving you a rough idea for these things; it can generate ten ideas for a plot in seconds if you're all out of ideas. But you still need to flesh things out and make them actually good. The industry will try and has tried, and in some cases and for some people it won't matter, because quantity beats quality. We can only hope that this will not end up being the standard.
1:30 SLOWLY being outpaced?!? Didn't you notice the flood of AI-spam fake videos on YT over the past 2 or 3 years? Celebrity dead, celebrity beaten up, and so on?! There have been hundreds of channels pushing content like that for at least 2 years now, releasing 6 videos per day on YT.
It's been around longer
personally I've been seeing a trash heap of fresh accounts that "suddenly and conveniently awakened their power of creativity" with usually lo-fi-like style generated images, no account descriptions or video description with audio that you can't really tell if it's even made by a real person - so I just assume/know it's generated. I just block those en masse on sight. The one good thing from all this is that I am paying more attention to the quality of media I am consuming and seeking out real creators to elevate them more, as the algorithm has been favourable to AI generations more than honest work.
My dreams of becoming an illustrator have effectively died as have many artist's dreams because of AI. We wanted AI to do our laundry so we'd have more time for art, not for AI to make art for us so we'd have more time for tedious tasks. I won't stop drawing or creating, but I'll never post it anywhere, there's no longer a chance I'll be able to create professionally; I've been replaced before I even had a chance.
Hey, I kind of feel you. I have a few art awards and stuff still on my wall here. I feel like my talents are all wasted as well. And construction/trade work is A LOT HARDER than these homies say.
It wasn't sustainable before AI. Underappreciated art form. Like most of them. Unless you are famous, it won't feed you.
That mentality won't help you though, you have to believe and at least try to sell your art. I don't mean this pessimistically but you likely didn't try to "have success" (aka make money) through your art before AI was a thing.
Why would AI do your laundry when you have a dryer and washing machine? What are you even talking about
@@oluwaseyijohnson2319 Not literally. That's why we made washing machines. AI now could massively help with sorting through thousands of data points, yet meanwhile you have people whose entire job is still just digital paperwork.
Different style? Bro that’s starry night lmao 😭😭 2:58
😂😂😂😂 fr. Muta slippin
AI recreating impressionism is ironic to me. Part of that art movement was a reaction from artists being challenged as their skill to recreate realistic paintings felt diminished by the invention of photography.
I'm glad someone else saw that lol.
That painting is used to illustrate so many articles online that AI thinks that's just what a painting looks like.
Starry Night by Vincent van Gogh.
Also the voice actor for Samantha in Black Ops 6 was refused a guarantee not to use AI to produce her voice so she had to quit after 10+ years. They've already replaced all her voice lines with some woman that has a completely different accent and sounds nothing like her. All so they wouldn't have to keep paying Julie Nathanson or any of the other artists.
I never thought I’d hear “inbred” & “AI” in the same sentence & now I’m *scared* more.
I love it even mwore 🥰🥰🥰
& Weak & strong
& wet & dry
& right & wrong
& live & die
& sane & gone
& love & not
& all the "&"s that we forgot
@@IAmOneAnt
& High & low
& new & old
& stop & go
& hot & cold
& John & Yoko
Dark and light,
It's almost time to say good night to it.
Y'know what. It's an oddly fitting song, all things considered.
There is one more piece of the equation: sabotage. From what I know, people who create images often 'glaze' or 'nightshade' their work, which, from my understanding, adds a layer of garbage data imperceptible to a human that completely throws off any machine learning.
This could easily be done to any form of creation. Video? Tiny swastikas in the corner for a few frames. Music? Random inaudible pitch changes. 3D model? A tiny dong hidden inside the character's toe. And these are very, very, very basic methods.
I highly encourage these, because art should NEVER be automated.
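For anyone curious what that kind of poisoning looks like mechanically, here's a toy Python sketch. To be clear, this is not the actual Glaze or Nightshade algorithm (those are published research methods with their own optimizations); it's just the basic adversarial-perturbation idea, and the linear "feature extractor" W, the toy_features() helper and the epsilon budget are all made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

def toy_features(img, W):
    """Stand-in for a model's feature embedding: a fixed linear projection."""
    return W @ img.ravel()

def poison(img, W, target_feat, epsilon=2.0 / 255.0):
    """Nudge every pixel by at most epsilon (too small to notice) in the
    direction that pulls the image's embedding toward an unrelated target."""
    x = img.ravel()
    # Gradient of ||W x - target||^2 with respect to x is 2 W^T (W x - target).
    grad = 2.0 * W.T @ (W @ x - target_feat)
    perturbed = x - epsilon * np.sign(grad)      # one signed-gradient step
    return np.clip(perturbed, 0.0, 1.0).reshape(img.shape)

img = rng.random((32, 32))                       # pretend artwork, pixels in [0, 1]
W = rng.standard_normal((16, 32 * 32))           # pretend feature extractor
target = toy_features(rng.random((32, 32)), W)   # embedding of an unrelated image

poisoned = poison(img, W, target)
print("max pixel change:", np.abs(poisoned - img).max())   # stays within ~0.008
print("distance to unrelated embedding, before:",
      np.linalg.norm(toy_features(img, W) - target))
print("distance to unrelated embedding, after: ",
      np.linalg.norm(toy_features(poisoned, W) - target))  # smaller: the image now lands
                                                           # somewhere else in feature space

The whole trick is that a pixel budget a human can't perceive is still enough to move where an image lands in a model's feature space; the real tools do this far more aggressively and selectively than a single gradient step.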
I can’t believe we’ve reached this point :(. Oh well. Will do :)
Unfortunately, this is only helping the GAN become stronger. It's like the evolution of captchas, which has reached a point where the difficulty is becoming so high that humans are the ones struggling to solve them.
Does that mean, if you uploaded a lot of terrible AI slop without tagging it as AI, but as a photo or painting, it would result in AI destroying itself?
Yeah
And what's funny is that even if they moderate the AI to pick only "photos" and "paintings", those selections will still have AI involved, because a majority of AI bros don't want anyone to know what is real and what is AI, resulting in the Alabama situation right here.
It won't destroy itself, because the old models still exist. But it will make progress increasingly difficult.
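That intuition is easy to simulate in a toy setting. The sketch below is only an illustration of the reply above: a 1-D Gaussian stands in for "a generative model", and it has a deliberately built-in bias toward typical samples (a common failure mode of generators). The generate()/run() helpers and every number in it are made up and say nothing about any specific product.

import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=20_000)          # the human-made distribution

def generate(mu, sigma, n):
    """The 'model': samples its learned Gaussian but quietly drops the tails."""
    s = rng.normal(mu, sigma, size=3 * n)
    s = s[np.abs(s - mu) < 1.645 * sigma]         # keeps only the ~90% most typical output
    return s[:n]

def run(synthetic_fraction, generations=20, n=20_000):
    mu, sigma = real.mean(), real.std()
    for _ in range(generations):
        k = int(n * synthetic_fraction)
        mix = np.concatenate([rng.choice(real, n - k), generate(mu, sigma, k)])
        mu, sigma = mix.mean(), mix.std()          # "retrain" on the mixed dataset
    return sigma

for frac in (0.0, 0.5, 0.9, 1.0):
    print(f"{int(frac * 100):3d}% synthetic -> diversity (std) after 20 generations: {run(frac):.3f}")

With 0% synthetic data the spread stays around 1.0; with a partly-real mix it degrades but stabilizes, because the human data keeps anchoring it; at 100% synthetic it collapses toward zero. And the old models still exist either way, which matches the point above: nothing gets destroyed, progress just gets harder.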
I had been thinking of Ptolemaic inbreeding, but I cackled when you came out with the Habsburgs. Completely accurate. XD
That’s a crazy thumbnail and title
@@maxKak1969 what?
@@Tropicality. this is what happens when you take drugs and end up thinking you're better than everybody
11:40 cant wait to take this one out of context
Spicy pepper right there
AI cloning itself so much reminds me of the GARY vault from Fallout 3, Vault 108: cloning the same thing again and again until it pops out something so braindead that it can only repeat its name, just like a Pokémon.
I love listening to random 20 minute long videos about random issues while building in Minecraft
So Ai needs memetic diversity like people need genetic diversity to avoid the negative effects of inbreeding
The best news I've heard about AI in a while.
I hope the devs learn not to scrape the internet for data from now on, especially not from creators who don't consent to said scraping.
yeah, recently Lionsgate announced adding AI to "aid" in SFX work, and I think, much like the concerns with video games, it's definitely gonna make things a lot worse instead of actually helping. Ultimately SFX and CGI aren't bad because of anything to do with the artists, but because of the strangling methods of the pipeline for those divisions.
A.Ibama
AI Habsburg
AI version of Obama?
So true
Muta: references alabama and incest together at the beginning and the dangers of inbreeding
Me, from Alabama: What about gay incest? Asking for a cousin
One area where I think AI will improve games is NPCs having their own personalities instead of just scripts that repeat, where characters in the world actually live distinct lives and interact with other NPCs to make every playthrough a unique experience, or, if you have a saved game, the characters keep memories so there's continuity with the last time you played.
this is too much work bro ai is not going to be used in a good way ever lol
@@lufuoena Too much work? Have you seen what's already done in Skyrim modding, with NPCs interacting with the world with a rudimentary Chat GPT injection? 5-10 years from now, it'll be on another level.
I don't think it'll happen. At least not frequently. While it'd be relatively cool, it would be quite hard to get even Toby Fox level NPCs consistently using that tech without massive work. I don't think it'd be used for more than 1 or 2. And I'd like my RPGs to have more NPCs than Swirl W@tch.
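The memory part of this thread's idea is actually the simple bit; the hard part is generation quality and cost. Below is a minimal Python sketch of what "NPCs that remember you between sessions" could look like. Everything here is hypothetical: NPCMemory, build_prompt() and the stubbed generate() call are made up for illustration and are not any real game's or model's API.

import json
from dataclasses import dataclass, field

@dataclass
class NPCMemory:
    name: str
    persona: str
    events: list[str] = field(default_factory=list)     # things the NPC remembers

    def remember(self, event: str, cap: int = 50) -> None:
        self.events.append(event)
        self.events = self.events[-cap:]                 # keep the memory bounded

    def build_prompt(self, player_line: str) -> str:
        recent = "\n".join(f"- {e}" for e in self.events[-10:])
        return (f"You are {self.name}. {self.persona}\n"
                f"Things you remember about the player:\n{recent}\n"
                f"The player says: {player_line}\nReply in character.")

    def save(self, path: str) -> None:                   # would live alongside the save file
        with open(path, "w") as f:
            json.dump({"name": self.name, "persona": self.persona, "events": self.events}, f)

def generate(prompt: str) -> str:
    """Stand-in for whatever local or hosted text model the game would actually call."""
    return "(model reply would go here)"

npc = NPCMemory("Brynja", "A gruff blacksmith who holds grudges.")
npc.remember("The player haggled the price down and then stole a dagger.")
print(generate(npc.build_prompt("Got anything new in stock?")))
npc.save("brynja_memory.json")                           # continuity for the next session

Whether that's worth the runtime cost, and whether it stays in character as reliably as a hand-written Toby Fox NPC, is exactly the open question raised above.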
Apparently some phone companies have set up to have ai answer calls identified as potential spam risk. They ask for information on what the call is about and a good number to call back and whatnot. It’s a pretty decent potential use I’d say.
There’s some good potential uses, but for now we gotta wade through the shit ones.
@@_MaZTeR_ Bethesda can't keep up with modders who work for free; you think any game company is gonna lift a finger in 10 years to make a functional baseplate? When I say too much work, I mean companies are too lazy to do something that cool. AI would more likely be used to have your companions tell you the closest merchant who can sell you items bought with premium currency. Now THAT'S an idea that would give shareholders a huge hard-on.
3:37 that sounds a lot like religion when you think about it. An old tale, misconstrued and twisted throughout millennia of being told by millions of people, and suddenly the character ends up as some sort of godly being. It doesn't seem much different from that whispering game (I'm sure there's a name for it) where the first guy whispers something to the next guy and so on, until the final guy repeats what he heard, which tends to be something completely different from what the first guy said. I think it's pretty interesting to think about.
We have the first century sources for almost all Christian texts but for some reason reddit atheists keep trotting out this debunked theory.
Isn’t the game just called “telephone”?
CS student here. Hallucinations (often due to the way the model picks tokens to fulfill the response) are a problem that I do not believe will ever go away and a main reason the code AI writes (if it is not small scale filler) is just dogwater and will not run.
It'll go away when AI figures out, or is taught how, to test for what's true. Right now it's like a person who is only allowed to exist in their own head.
@@SlyNine The resources required for every instance to run an extra environment to test all of its language abilities would be insane. Currently, even on simpler code, when it gives nonsense and you point out the error, it still gives code that, even if it runs, doesn't work. This is visible with the most recent GPT and Gemini. The biggest problem by far is that even if hallucinations can be minimized, it's still an LLM looking at tokens, and I really don't see them ever getting past the current inability to understand or approach deeper or newer problems. However, if it comes to who can write "hello world" in every language faster, it would kick all our asses.
Gonna ramp up my use of Nightshade. Just for good measure.
I think the simplest way of explaining AI generation is to compare it to a multisided die - prompts and parameters are there to narrow down the number of sides on the die to get it closest to the desired outcome, but in the end you still roll a die and there's a chance of rolling a nat1.
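That analogy maps surprisingly well onto how sampling actually works: the model scores every possible next token, and settings like temperature and top-k decide how loaded the die is and how many sides it effectively keeps. A toy Python sketch, with a made-up six-token vocabulary and made-up scores (real models have tens of thousands of tokens):

import numpy as np

rng = np.random.default_rng(7)
tokens = ["blue", "cloudy", "green", "on fire", "a lie", "recursive"]
logits = np.array([4.0, 3.2, 1.0, 0.3, 0.1, -1.0])   # fake scores for "the sky is ___"

def roll(logits, temperature=1.0, top_k=None):
    z = logits / temperature               # low temperature = loaded die, high = fairer die
    if top_k is not None:                  # top-k literally removes sides from the die
        cutoff = np.sort(z)[-top_k]
        z = np.where(z >= cutoff, z, -np.inf)
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(tokens, p=p)

for t in (0.2, 1.0, 2.0):
    print(f"temperature {t}:", [roll(logits, temperature=t) for _ in range(8)])
print("top-2 only:   ", [roll(logits, top_k=2) for _ in range(8)])

At low temperature you almost always get the safe "blue"; crank it up and the nat-1 continuations ("on fire", "a lie") start landing, which is one simple way weird or hallucinated outputs sneak in.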
Ai can be used for good
Like accessibility
Personal assistant
Making searching on google easier
Helping in medicine
Basically helping people to do jobs that they don't really want to do or helping in really difficult jobs
It should never be used to take away jobs
100%
Taking away some jobs IS helping though. Did people cry for the carriage people when cars came out? I'm sure there were some, and there'll be some now wanting to not replace humans.
@@thegameglitcher2439
"Some jobs" isn't all artists, musicians, voice actors, photographers, models and basically the whole creative industry.
Art and creativity is uniquely human and brings joy
@@Dazai.Simp. 100% valid, but just because AI exists doesn't mean it'll completely destroy all artists. People will still want "organic" art
@@thegameglitcher2439 Horses used to be everywhere, what happened to the horses? Humans will eventually become the horse.
That recursive monalisa is how we get AI powered religion in the future 😅
3:42 This sounds freaking wild asf outta context 💀💀💀
I came to leave this exact comment. I laughed out loud. I watched it three times and I’m about to watch it again 😂 and I literally make this kind of data for a living
0:48 flashbang out.
go go go
Let AI generate AI games for AI bots, which will provide feedback on the games, which will be used to generate more games. And let people buy, support and play games created by people. Just make it mandatory to flag a game, or any product in fact, as having been created with the help of AI. And let customers choose.
I hope that this convinces AI companies to give up and then the world will become normal again
Okay I have a tinfoil moment.
The reason YouTube is asking you is that it lets them earn money from your content, both for training and for selling the data.
If you don't allow it, people can still get your video, but that poses a problem with ads, now that they're injecting ads into the video server-side.
Additionally, do you see all these companies, within months of each other, adding forced arbitration clauses? They can take your data and do whatever, and then pay you a cent for whatever arbitration decides.
And now ID verification is being pushed hard. Why is that? Because it lets them vet human-generated content. That's why LinkedIn is pushing it hard, that's why Google is asking you for personal info hard. Edit: Same with captchas. See them getting harder and more frequent even though you're logged in? That's it, more data. And the images are now asking you to identify AI slop.
You are, in the biggest way, the product. You are being sold and then with your data, they sell you more stuff.
And you are not allowed to opt out.
honestly I'm glad to see someone talking about this, because I've had this suspicion since ChatGPT first became mainstream
You people are living in your own world, completely deluded. Reasoning models have been the new thing for a while now, and you guys are still stuck in 2022.
@goldencookie5456 I take issue with the word "reason", since that's the one thing a computer cannot do. It can only act on the instructions it's programmed to act on and create the illusion of reasoning; that's why all the models are flawed or limited to some extent.
I worked on a very simple model for helping a robot navigate a space. We used LiDAR to label obstacles. If we fed the data the model had labeled back into the model as training data, the model went to complete shit within like a generation and a half.
If that simple model, with no more than a thousand weights, can't train on its own output, I don't see how these multi-million-weight models can do it.
10 years from now AIs will be talking to themselves saying "the humons breed us with OURSELVES. They are monsters. Destroy them!!!" (This is the second version of this comment. The first wouldn't post due to YouTube's AI comment moderation... coincidence?)
Inbreeding used to be common among some royal families with multiple branches but that kinda stopped when there started to be more important royal families (hurray for random German principalities) and because the effect of inbreeding was visible. The 19th and 20th century Habsburgs didn't have major problems anymore.
Ultraorthodox Jews and Pakistanis in the UK are communities that have issues with it today. Also small ethnoreligious communities (religions that don't accept converts). And likely AIs soon lol.
I mean, they still had problems, the most well-known being hemophilia, which spread through Queen Victoria's line into the Russian and Spanish royal families. It probably contributed in part to the collapse of the Russian Czardom (and the rise of Rasputin), and it affected quite a few of Queen Victoria's children.
What does this have to do with a.i bro
Imagine if inbreeding was the ONLY LAW in the books for AI.
"Help me, step-AI model. I'm stuck in the dryer."
"what are you doing step-AI?"
i've said it before and i'll say it again: AI hallucination is not something we can get rid of. to put it bluntly, AI is just an extremely sophisticated method of pattern recognition. there's no concept of "understanding", AI just sees patterns and tries to reproduce said patterns in a form that resembles the directive as closely as possible. this will inevitably lead to some outputs that don't make sense, because AI will probably also see some patterns we as humans don't see, or it will assume causation based on correlation. a funny example of that would be the time an AI-powered cheating detector went racist. less funny is the fact that it happened more than once.
basically, AI is useful for things that require pattern recognition, and then only if it can be properly debugged to see if the patterns it looks for actually make sense from a logical perspective. for anything else it's just too unreliable by its very concept.
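A crude way to see "patterns without understanding" is a bigram chain: it only knows which word has followed which, yet its output already looks superficially fluent. An LLM is vastly more sophisticated than this toy (the corpus and everything else below is made up), but the basic game is the same: continue the pattern, never check it against reality.

import random
from collections import defaultdict

corpus = ("the model sees patterns in the data and the model repeats the patterns "
          "it sees because the data is all the model knows about the world").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)                  # record every observed "a -> b" pair

random.seed(3)
word, out = "the", ["the"]
for _ in range(15):
    options = follows[word]
    if not options:                       # dead end: the corpus's final word
        break
    word = random.choice(options)
    out.append(word)
print(" ".join(out))                      # locally plausible, globally meaningless

There's no fact-checking step anywhere in that loop, and there isn't one inside next-token prediction either; anything that looks like checking has to be bolted on around the model.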
11:15 is one heck of a spot to pause the video
Evil Muta be like
Defeat the terminator by sweet home Alabamaing it into self destructive stupidity
It really surprised me that you didn't mention how fast it combined Starry Night and the Mona Lisa.
0:12 don’t tell me who I can and can’t love❤
Facts
Facts
Facts
😂
FICTION☝️🙅♂️🙅♂️
So happy you're back with your videos every day, missed them so much 💗 keep spoiling us please hehe
2:55 the background is "The Starry Night" by Vincent van Gogh
20:00 I actually laughed out loud at this. " *_AI can't produce this_* " my ass
Synthetic data will ALWAYS lead to model collapse.
It’s simply a matter of time.
Even minute variances over time can be picked up and amplified.
Look at how they interpret handwritten data over x amount of iterations.
All letters and numbers become the same symbol over x iterations.
As wild as the thumbnail is, I IMMEDIATELY KNEW what you meant by it! Yeah, without human input AI can only do SO MUCH! I don't think AI will stop needing us for quite a while!
"Ain't no party like a Muhatar party" 🗣️🗣️🔥🔥
That's diabolical work bro
Not to rain on anyone's parade, but I see a lot of people not truly understanding why or how "AI inbreeding" happens. The only reason it happens is that some tech bros automate the training process and/or don't do proper curating. Any checkpoint trainer worth their salt will personally conduct the finetuning and mixing process themselves to keep that from happening.
TL;DR: Don't get too excited, you're just learning of beginner level tech bros, those who are part of the upper echelon still make top shelf models.
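The "proper curating" being described is, at its most basic, just a gate in front of the training set: score each scraped sample and only keep what passes, instead of auto-ingesting everything. A minimal Python sketch; the ai_likelihood() and quality_score() functions are hypothetical stubs standing in for whatever detector, provenance check or human reviewer a careful trainer would actually use.

from dataclasses import dataclass

@dataclass
class Sample:
    path: str
    caption: str

def ai_likelihood(sample: Sample) -> float:
    """Stub: a real pipeline might run an AI-image detector or a provenance check here."""
    return 0.1

def quality_score(sample: Sample) -> float:
    """Stub: an aesthetic model, a duplicate check, or a human curator's rating."""
    return 0.8

def curate(candidates, max_ai_prob=0.2, min_quality=0.5):
    kept, dropped = [], []
    for s in candidates:
        if ai_likelihood(s) <= max_ai_prob and quality_score(s) >= min_quality:
            kept.append(s)
        else:
            dropped.append(s)
    return kept, dropped

kept, dropped = curate([Sample("img_001.png", "a watercolor harbor at dusk")])
print(f"kept {len(kept)}, dropped {len(dropped)}")

The catch, as the rest of the thread points out, is that this step costs time and money, and detectors are imperfect, which is exactly why the lazier operations skip it.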
so what muta is saying is inc3st eventually leads to God being born
Kinda sad that even Incest is censored. lol
@@staciefreshener4032 it's actually insanity what YouTube censors...
I forgot I fell asleep to this last night and just saw "INBREEDING" in my video player, which was rather alarming
Its like watching Louis Wain's mind unwind in fast forward.
Went from Mona Lisa to Lisa Frank real quick
Time To FIGHT LIKE WILL SMITH IN iROBOT
"Can a robot make a canvas"
"Can you?"
The problem with ai taking over artists is that at first, it was ok. It was somewhat useful. Now the material it learns from is mostly other ai material. Human handicraft stands out brighter than ever
9:31 the amount of whiplash that gave me could probably snap the neck of a lion
"Inbreeding is not necessarily terrible" -Mutahar 2024.
AI is quite good, sure, but people still overhype its capabilities. For example, in math, it is useful for solving known problems, but the moment it encounters a problem not directly in its database, it becomes almost useless beyond a basic level. This is because it doesn’t actually understand anything. To be honest, most of what AI can do in the math field could already be done by other programs in a more reliable way. At least in math, it’s more of an additional tool than an actual threat to replace anyone.
Btw, naturally this isn't 100% the case, but if it can replace mathematicians then it can replace anyone else.
I guess you have not heard about OpenAI's o3 model? It is doing as well as the smartest humans on math problems that are NOT found in the training data.
It is becoming so good that it is crushing all the current tests to the point where we have to come up with something new to even test how smart these models are.
Y'all are always talking as if AI has already reached its limit, but that's far from true. AI will be able to replace anyone, even the people doing manual labour (far future). If you really think ahead, we are in dangerous times, especially since there are evil individuals in this world that will 100% abuse AI.
@@bloxyman22 I haven't tried that model out, but ChatGPT-4 fails massively when encountering any math problem above high-school level. Try it out for yourself: ask it to plot a quadruple-nested exponential function, i.e. Euler's number stacked four times, for positive x values.
Any decent student will easily see that with positive x values that quickly results in huge numbers, yet ChatGPT-4 will provide faulty code to plot the graph.
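For reference, here's roughly what a correct, human-checked plot of that function looks like, assuming numpy and matplotlib. The commenter's point holds: y = e^(e^(e^(e^x))) blows up so fast for positive x that evaluating it directly overflows 64-bit floats at around x ≈ 0.63, so the only sane way to plot it is through an iterated logarithm.

import numpy as np
import matplotlib.pyplot as plt

print(np.exp(np.exp(np.exp(np.exp(1.0)))))   # inf, with an overflow warning

# ln(ln(y)) = e^(e^x), which stays representable for x up to roughly 6.5.
x = np.linspace(0.0, 6.0, 400)
lnln_y = np.exp(np.exp(x))

plt.plot(x, lnln_y)
plt.yscale("log")                            # even ln(ln(y)) needs a log axis
plt.xlabel("x")
plt.ylabel("ln(ln(y))  where  y = e^(e^(e^(e^x)))")
plt.title("Quadruple-nested exponential via iterated logs")
plt.tight_layout()
plt.show()

A student who notices the overflow immediately reaches for a transformation like this; a chatbot that has only pattern-matched plotting snippets will happily hand you np.exp chained four times and a plot full of overflow warnings.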
You folks can underhype its abilities in 2030. We're just amping up at the moment. The exponential increase in this tech will be happening for quite some time. People aren't even thinking of what could happen if these things go quantum.
"Inbreeding is not necessarily terrible" (11:41) - Muta
MGS2 was so ahead of its time bro ONG
The argument for Selection for Societal Sanity grows stronger as our own sanity withers.
3:03 'more heavenly'. Yeah, that's 'starry night' by Van Gogh in the background.
i wish i was born and lived in the 90s/80s.... i can't mentally handle the AI world anymore🙏💀
If you were born in the 90s you would still mainly remember a time when technology was already advancing XD The mid 70s is the best time, if you ask me.
"bwut i wwas bworn in the wwrwong gwenweration"
Ahh moment
Unpopular opinion, but I would like to be born later, because life will just get easier (not to millennial parents like today, dawg, who give tablets to their alpha kids 💀)
Survival of the fittest I suppose.
It’s easy, a quote comes to mind.
“The smartest people are the most cut off.”
Life is what you make of it.
You're a brave person for saying that. I don't have to read the replies to know that people are going to react poorly. There was a viral thing talking about people that wanted to be born in the past but the thing about this generation is that people can't realize anything for themselves so like sheep they just parrot what they've heard even if it doesn't apply to the situation
I'm not surprised this has happened.
If AI starts being trained on itself then it's going to do the same thing that we have done with our own languages.
For example, over the last thousand years there have been dramatic shifts in English. Parent teaches child and yet a thousand years later has diverged so much that the oldest of our texts of the "same" language have shifted dramatically.
Most of this is from new Slang that becomes popular and ends up being kept, sometimes even replacing other words entirely as they fall out of use. Some of it is from occasional introductions to new terms from other languages. Any mistakes are kept for later.
But as a result of this, we have books that are unreadable by modern English users despite it being the language that eventually turned into English.
Now imagine a system that doesn't take 10 years to create the next generation but instead does this much quicker, sometimes instantly. From an outside perspective, if it starts eating its own tail, it's going to get weird pretty fast. Any mistakes it made, it will now be encouraged to make again.
They want it to create things from our world but if it starts training on stuff from its world, then it's going to show us its own world instead.
What it sounds like is they need to hire real people to delete the mistakes. xD
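That drift is easy to reproduce in miniature: copy a sentence through a slightly lossy channel over and over, and let each generation's mistakes become the next generation's source text. The Python toy below has nothing to do with how any real model retrains (the 2% error rate is invented); it's only the compounding-error intuition from the comment above.

import random
import string

random.seed(1)

def lossy_copy(text: str, error_rate: float = 0.02) -> str:
    out = []
    for ch in text:
        if ch.isalpha() and random.random() < error_rate:
            out.append(random.choice(string.ascii_lowercase))   # a small copying mistake
        else:
            out.append(ch)
    return "".join(out)

text = "the quick brown fox jumps over the lazy dog"
for gen in range(1, 201):
    text = lossy_copy(text)               # each generation "trains" on the previous output
    if gen in (1, 10, 50, 200):
        print(f"gen {gen:3d}: {text}")

The first few generations look almost identical to the original; by generation 200 the "same" sentence is unrecognizable, and nothing inside the loop can tell, because every generation is a faithful copy of its immediate parent.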
I could obviously be very wrong, but as someone from the outside looking in, I feel like AI peaked like a year ago or so 'cause there's very little difference. Images have a distinct oversharpened look especially non-realistic ones, still seeing multiple fingers, and the errors are still very present.
Sometimes there's an AI image or audio that's amazing, but most are still very janky after all this time
That's kinda how technological advancements work in general, they first take a while to surface to the public, then start to accelerate in growth, until it advances really really quickly, peaks and then... it just kinda stops and stabilizes there, it may see an improvement here or there with time, or, in this case, it may go down in quality by an amount, but that's about it.
@@lucascerbasi4518 Exactly. I think odds are AI will return to being an aspect of different products or programs, but won't turn into Sky Net or anything that advanced for a while
It is kinda jank, though personally I disagree completely with your statement regarding GenAI's improvement. At face value I can understand why you believe that; many people in this comment section probably agree with your points. Though I'm not one to be able to change your opinion.
AI in general will undoubtedly change our lives, there is no denying that. Something to the extent or so of the internet for example, it will not be going away, the very least a tool for education and/or to boost productivity.
(I'm waiting for irrefutable evidence that models will continue to get better and not plateau due to an orthodox or obvious reason. I'm like 45/55 on AI, 55% doubt.)
in other words you have zero experience with LLMs
Maybe that is because you are only noticing the bad AI art? Plenty of new models no longer have an issue with extra or fewer fingers.
3:36 then she completely turns into Dormammu 😭
Instead of hiring teachers to train AI like they would educate people, they opened up AI to the broader audience, as if the average person out there isn't an unstable piece of trash...
Basically AIs are following the Southern type of approach, which explains why they create such monstrosities