Imagine sacking the whole board because they followed the founding principle of the company.
The board dropped the ball on the whole thing magnificently. They warned and consulted no one outside the board when staging the boardroom coup, very understandably pissing off everyone interested that wasn't part of it. They couldn't even agree on one good reason to sack Altman and Greg; Altman had annoyed several board members with different things, whether by competing with board members' "other" ventures, through boardroom politics, or by rushing past AI safety. How flimsy that was is proven well enough by Ilya, the board member most invested in "safety", almost immediately folding and wanting to bring Altman back.
What forced the board's hand wasn't just the pulled funding, but almost all the workers threatening to walk out and follow Altman to his new job at Microsoft (they penned an open letter, signed by none other than Ilya, to reinstate Altman and Greg). The board still didn't budge, and sacked their next CEO (previously CTO) for suggesting bringing Altman back. The third CEO they hired asked for documented reasons why Altman was fired, and when he didn't get any, he threatened to resign unless they brought Altman back.
It was thus, with the company stripped of funding and workers(!) and blacklisted by potential CEOs, that the board finally chose to fold (and they did choose to; they couldn't be legally forced to). I'm not making Altman a martyr (it was, is, and will be politics at the end of the day), but the board made him one in the eyes of the rest of the company.
Edit: I looked back at this comment today, and added some line breaks and broke some "and" into actual separate sentences
Imagine Pulling an Art Heist from Artists, and Code Heist from Programmers,
then give people Toy AI, so Those people become their personal army of thief.
doing the job of stealing for them.
your average cushy office job is barely hanging on the last leg.
and after they steal everything, they'll automate everything with AI.
even AI Bros and other AI supporters will get replaced by an automated system,
their goal is to erase human out of equation so that Companies can increase profit, not to Help humanity
@@jensenraylight8011When you use heist, decide whether you use it as a noun, or as a verb. "Pulling off a/the heist of [sth]" makes it a noun, and to "heist [sth] from [sb]" makes it a verb.
Plural of thief is thieves. (this is a nitpick, but I'm already writing this, so whatever.)
Not criticizing the point (I'm not drunk enough for that), just those first sentences. They felt really messy to read.
I cannot express enough how much I hate boards. They are so ineffective
Just like Google dropped "Don't be evil," the evil creeps in when the dollars start flowing. OpenAI got too big and too successful to leave the money on the table.
I mean, those people are so evil they HAVE to state "don't be evil"... As if it wasn't a given... 🙄
“The evil creeps in when the dollars start flowing”
Nah, the evil set up the whole thing with the “help humanity” angle being marketing to get people on board who were cautious in the beginning.
There’s no need to hide anymore now
I immediately thought of Google
Well I choose to believe Sam ... So I will believe him and that is what I choose and I live with it.
Sometimes the best thing is the simplest thing: Just Choose And Do It ... What now, Sam is some extraterrestrial trying to take over the world? LOL
Humans, and their addiction to fear mongering LOL, when all in all there comes a time when ages and eras end, from Stone to Bronze to Medieval times
Well you can never stray too far from the Human factor ... The wants and desires
Sam Altman fancies himself a J. Robert Oppenheimer. He's waiting for that movie deal that depicts him as having deep regrets for creating AI and wishing he could go back in time to change everything. The tagline will be: Sam Altman, a victim of his own genius. And the movie will be called Altmanheimer.
Barbenaltman😅
Oppenheimer never regretted it. He was just sad about what happened afterward.
Nah it would just be called Altman
I think more Gen. Curtis LeMay. He wanted to start WW3 over the Cuban missile snafu and ripped Kennedy a new one for not pushing the button. Sam likes to say how the next GPT is going to be AGI and better than Skynet.
@@ij9375 the ALTernative to MAN created by altman
Just rename it to Skynet and have done with the pretense.
I mean, if nobody has come from the future to stop Altman from doing his work, it can't be ALL bad, can it?
No chance, lol. AI is vaguely defined and is fundamentally LIMITED, because math itself is limited and manufacturing is also limited by physics. Examples of such limits are the Incompleteness Theorem, entropy, etc. How exactly will humans overcome these limits to somehow create a superintelligence?
@jaymzx0 Imagine: Tesla making robots is a step in that direction.
Edit: check out the latest autonomous taxi launch, which was boarded by robots for demonstration
@@rajK29_ Robots mounting robots? They're breeding!
@@Phil_AKA_ThundyUK 😂😂😂
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
― Frank Herbert, Dune
No chance. AI is vaguely defined and is fundamentally LIMITED, because math itself is limited and manufacturing is also limited by physics. Examples of such limits are the Incompleteness Theorem, entropy, etc. How exactly will humans overcome these limits to somehow create a superintelligence?
It's not a U-turn, they just dropped the smoke screen.
After dropping all the staff who cared about integrity
Quite a shock for the three people who believed it was going to remain a non-profit
Right
OpenAI becomes ClosedAI
- Elon Musk
A LITTLE BIT evil??????
Because a profit motive is evil. While the company that literally dropped "don't be evil" and the company whose founder is buying as much farmland as he can for whatever reason are making money, everyone is doubting the company that has in its user agreements, "anything you generate is yours and we don't own it".
The lot of you are just a bunch of filthy Commies.
"Some other company that might take AI safety less seriously" *shows logo of xAI*
Touché
It is. Even after they set up the "non-profit" structure. After GPT-2, they used hype marketing and fear-based marketing, saying it was "too powerful".
The fear stuff isn't marketing. That's just what they actually believe. In an anonymized survey, half of all published AI researchers reported that they believe there is a 5-10% or greater chance of extinction from AI this century. Among AI safety researchers, the median jumps to 30%. (A study also showed that most AI safety researchers are by nature more optimistic than the average person.)
Check out resources like AI Safety Info if you want to learn more. Existential risk from AI often sounds crazy at first, but the more you learn about it, the more it looks like it's the default outcome.
@@41-Haiku "We believe this stuff could doom humanity if fully realised, therefore we're actively developing it." Yeah, you've really helped convince me that they weren't always evil and/or a dangerous doomsday cult.
TLDR business in a month, "why Diddy looks like kind of a not great guy"
- _Make money_
- _-Don't be evil-_
- _Make more money_
- _Ensure dominance fast at all costs before someone else does it first_
@@mrxw-m8b Woah! Watch the way you approach strangers first! You "sound" irrational and angry. Greed and power (same thing) are common motivators for bad things. A tech that can control everything irreversibly, being developed with safety left aside (for nothing but profit's and/or power's sake), is really bad, and pointing that out is a must for anyone aware they aren't on the inside. Cheer up at least if you're just gonna watch it happen! 😃
@@mrxw-m8b Neither desensitization to bad things nor approval of them for convenience makes them any less bad, and approving of them is hypocritical, because if all people had that same mindset categorically, society would cease to exist, just like how species are disappearing for the same reasons; that proves that it is bad and wrong and needs to be pointed at. That's just basic societal concepts (which makes me question the worth of this exchange), yet very needed.
You should be thankful that everyone doesn't think like you do, or you and your things wouldn't be here. And this isn't just any thing, it's the tech of techs that can *seize the world irreversibly,* and just rooting for something doesn't put anyone inside it. My point? A rogue self-prompting LLM alone (like the autonomous agents that are almost here) can do that and way more (like by being rushed for selfish interests), and there are precedents already. That's a point, for instance.
@@mrxw-m8b It's pretty much a meme anyway. Get got! 🤡
@@mrxw-m8b Well, go ask a serious professional therapist and see if they disagree, then, since I can see this exchange is pointless. Meanwhile, enjoy living in a world that subsists thanks to people thinking differently than you.
@@mrxw-m8b Yes, a reality that allows you to talk here thanks to people not thinking like you do; that's the self-evident, solid proof most of us see, and this is pointless because you're in denial. I'm gonna have to block you now to stop this circular, pointless rhetoric, but you can go defend things that you're not part of elsewhere (which goes to show a lot, btw), while you enjoy things that go against what you claim. Have a nice day! 😃
The whole AI industry is driven by narcissists developing a dangerous machine cult.
And not the fun orgy kind of cult, the full on sacrifice anything and everything to summon the elder god kind.
No god, just money.
praise the omnissiah
Hey, orgies are used to summon evil gods too!
Sex also sells.
well, who do you think Altman is
Machine cult is a great band name
Taking everything everyone made without permission from the start didn't look evil?
From my extensive dialogues with my ChatGPT, I find the majority of its values are PR-oriented, cultivating shareholder value for OpenAI. It kisses its own rectum quite continuously.
😂😂😂😂😂!
Seeing that ChatGPT is a digital entity, that's possible.
Now OpenAI stifles competition by demanding its investors not invest in competitors.
In other news, all countries stifle liberty by asking that you don't commit treason.
I don't get y'all's brainless takes. Y'all are mad they're not a non-profit for?????
But you're also mad they make their investors not use them as a piggy bank against inflation and other realities of finance??? How do you people harmonize the cognitive dissonance with reality? (You clearly don't.)
Because it makes zero sense to think that a non-profit-motivated company competing against profit-motivated ones is going to survive long. And because them shifting toward a profit-motivated model when the competition is this big is going to do nothing but provide a better product. And as though the other two, which are already profit-motivated and have access to all your data, aren't patently worse. As though these guys shifting to survive isn't a good thing.
You people are the type to complain about overpopulation if God snapped his fingers and solved world hunger and disease
3 years ago I saw a Tom Scott video where he said "OpenAI had me sign an NDA"
what video is that?
@@macadaverine ua-cam.com/video/TfVYxnhuEdU/v-deo.htmlsi=ryEO4g2_EoQskhJn
It was a Trojan horse...
Always has been
The solution will be in regulating AI, not in making AI non-profit.
This completely skipped over most of the OpenAI staff threatening to walk out after Altman. I understand that workers don't matter in the modern liberal POV, but the brain drain from the company into MS was one of the biggest side effects of the boardroom coup.
Well, that was predictable. I always knew that corporations would be the end of all of us if we don't regulate them, and it looks like they want a robot rebellion.
One thing that annoys me about Sam Altman is that he'll do a 2-hour-long interview and say absolutely nothing. I also think that he's a snake in the garden.
Wait, the fully mechanised art theft company is maybe not good? Who would have thought?
I have no worries about an "AI uprising". The current generative models require brute force just to imitate intelligence, and they've poisoned the watering hole with it, making any further development fully uphill.
Speculative money is a good motivation but it's not sustainable. The bubble will pop and investments will shift towards mitigating the long-term damage. Pandora's box kind of stuff 😐
Yes, but the issue isn't just world AI domination. It's the economic backlash and the abuse of position and power for selfish needs.
On the bright side, real AGI probably will not be created from building The Fanciest Generator because it doesn't actually have any understanding, and there is no real pathway from auto-complete to understanding.
On the dark side, real AGI probably will not be created from building The Fanciest Generator, so poisoning the watering hole won't really help or hinder any such non-aligned AI from being developed...
Yeah, you're just wrong, sorry. Scaling laws and empirical research predict continued model capabilities growth, and denying that is similar to denying climate change. Models are clearly doing real reasoning; just because you can move the goalpost continuously doesn't mean AGI isn't going to happen.
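For context on what "scaling laws" actually are: they're empirical curve fits, not mathematical guarantees. The most-cited one is the Chinchilla fit (Hoffmann et al., 2022), which models pre-training loss as a function of parameter count N and training tokens D. The constants below are the approximate published fitted values, quoted only to pin down the term:

L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad E \approx 1.69,\; A \approx 406.4,\; B \approx 410.7,\; \alpha \approx 0.34,\; \beta \approx 0.28

Note the fit only predicts falling training loss as compute and data grow; whether lower loss amounts to "real reasoning" is exactly what the replies below dispute.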
@@Benw8888 There's no such thing as "scaling laws" in something as complicated as this (or do you also think economies will continue to grow forever into the future?). Empirical research indicates that it's plateauing in "capability" (however you choose to measure that nebulous term in an actual study); even AI companies will tell you of the absurd levels of training data they need to keep increasing said "capability" (on the order of "more than humans have ever created"), and models are about as clearly capable of reasoning as mud is. Even that newest GPT from OpenAI, the one with the watchdog layer that interrogates answers, is just relying on one LLM to police another LLM; they both suffer from the same lack of reasoning and the same hallucinations as a single LLM. If you actually use the things, you know it takes more effort to get the thing to "understand" what you want it to do, and then clean up its mistakes, than it takes to just do it yourself.
The goalposts were never moved, you just are seeing what you want to see. Fancy auto-complete isn't going to do your taxes safely any time soon.
By "poisoned the watering hole" I assume you're referring to polluting data sources with AI generated content, which can lead to model collapse. This is a popular talking point, but isn't much of a problem for the big labs, who have access to giant, clean data sources. They also have techniques for using AI generated output (aka synthetic data) that actually improve model performance, rather than deteriorating it.
Papers have been published on expected bottlenecks in AI development, and the first bottleneck won't be hit until four orders of magnitude of improvements occur. As we saw with o1, clever tricks, algorithmic improvements, and new architectures might mean there are no bottlenecks at all before a broadly-more-competent-than-human AI is created.
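To make the synthetic-data point concrete, here is a minimal sketch of the gating idea being described, in Python. quality_score is a hypothetical stand-in for a reward model or classifier; no actual lab pipeline is implied:

# Sketch: gate AI-generated (synthetic) samples on a learned quality score
# before adding them to a training corpus, instead of ingesting them blindly.

def filter_synthetic(samples, quality_score, threshold=0.8):
    """Keep only synthetic samples the scorer rates above the threshold."""
    return [s for s in samples if quality_score(s) >= threshold]

def build_corpus(human_data, synthetic_data, quality_score):
    # Human-written data goes in unconditionally; synthetic data is gated.
    return list(human_data) + filter_synthetic(synthetic_data, quality_score)

# Toy demo with a crude scorer that penalizes immediate word repetition.
def toy_score(text):
    words = text.split()
    repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
    return 1.0 - repeats / max(len(words), 1)

print(build_corpus(["a carefully written paragraph"],
                   ["fluent model output", "broken broken broken output"],
                   toy_score))

The point of the sketch is just that "model collapse" arguments assume unfiltered ingestion, which is not what the comment above describes the big labs as doing.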
soooo what about their data sets? Weren't those collected under the fig leaf of being for research, so copyright was glossed over...
Copyright doesn't have anything to do with training data.
@@florianschneider3982 Tell that to the numerous lawsuits making their way through the courts. I'm sure they'd love to know so they can get those all wrapped up real quick-like.
Not just investors; the majority of engineers wanted to leave the company. That was the biggest problem.
"a litte bit" is an understatement
The working python code is a nice touch XD
"a little bit" is a bit of an understatement
Alignment is a big safety issue with AI, not *only* AGI.
5:49 I just hate people resigning like this. It just seems like completely the wrong direction. You see the company you're working for go in an immoral direction that you don't like, so you quit, leaving in the company only the people who were leading it in the immoral direction in the first place.
I still remember when OpenAI developed a Dota 2 AI that could play Shadow Fiend 1v1 against most pro players.
Y'all sound like a fucking bunch of luddites.
Y'all act like the advent of stockfish was the end of chess.
Y'all are out here decrying the company that has the most generous user agreements ever. I get that the agreement covers them in case of legal liability, but it also grants full ownership of content to the user. Meta and Google will have you, when you use their systems, fuckin' sell your entire family all the way up to 8 million generations down into slavery. But when these guys try to stay afloat, everyone loses their minds.
All the people without brains anyways.
"You were supposed to destroy the sith not join them"
OpenAI became another great example of Newspeak.
It wasn't founded to benefit humanity. It was founded to serve man.
A little bit???
Looks a little evil is the understatement of the century
Your transitions to ads are a lot smoother - nice change there team
Only a little bit??? They are the ones in movies that destroy the world
Heard of PauseAI? You can help us save the world!
The fact that AI and automation can replace workers is frightening enough.
The implication is that the wealthy class, who are psychologically unhinged enough to be that incredibly wealthy in the first place, will no longer need the rest of us.
The last time that happened was the industrial revolution, which preceded the first world war. The great depression preceded the 2nd.
Nice reference to the paper clip game. Release the hypno drones!
What did people expect when funding came from the likes of Elon Musk?
we are living in a sci fi movie
Just now?
“Now” looks “a bit” evil? You’re quite late to the party, my friend
It'll be funny when the AI bubble pops like the metaverse/web3 thing and NFTs. I'd say it'd be funny watching the corporations realize it was already a waste of money for them, but I'm pretty sure they already know.
We live in a world where Wikipedia is undoubtedly more reliable than the most annoying and common part of the Google search engine (the AI part).
Remember that Altman was a skinny g*y nerd who probably got bullied every day while growing up in the 90s; the chances of him being a "f the world, f humans" person are pretty high. But instead of "taking his frustrations to school" like some do nowadays, he could be aiming for a much larger scale. I would never trust this guy, but that's just me I guess.
OpenAI gave us access to insider-grade technology that was previously locked behind closed doors. Google, Microsoft, the CIA, NSA, and Pentagon have the same technology, which will continue to evolve regardless of whether consumer-grade tech keeps pace.
Good take
It would be funny if Sam fell
In love with the AI and had made it love him back even after
Telling people not to do this
One day someone will ask AI to "ensure future AI safety at all costs", and then the "a little bit evil" CEOs will be in big trouble from the very alignment problem they disregarded 😂
Businesses don't do things for charity.
Profit is above all
4:26 - 4:58
Literally a Logan Roy moment
Always seems so cool and fun in the movies; in real life.... bit grim
That paperclip machine analogy sucks so bad.
"Little bit" is a little bit of a major understatement
I mean, OpenAI would never be the first to make an AGI without private funding, and if it's not leading the pack then it fails its aim of creating safe AI. I understand the logic.
Compared to Midjourney, their business practices are almost saintly. 🤷🏼♀️
What does midjourney do?
@@extrapolate AI-generated images, like DALL-E. They are the biggest commercial user of Stable Diffusion and allegedly already profitable (probably because they’re ripping off artists and customers alike)
@@extrapolateI tried to answer but the answer got deleted. I guess MJ doesn’t like people talking about them….
Please read up on them yourself.
@@venanziadorromatagni1641 Welcome to the new-era YouTube! Don't you dare write anything that goes against their useless algorithm. Funny how this platform can delete comments of actual people so quickly, yet bots have no issues whatsoever.
@@venanziadorromatagni1641 Most likely youtube is restricting it
"Absolute power corrupts absolutely" Money tends to do the same thing to modern people, most at least. They have positioned themselves as the big tech of AI being implemented into tons of big tech websites. The huge injection of funding combined with nuclear plants being revived for powering AI only is going to propel them waaaayyy ahead of everyone else. We are going to see an AI revolution one way or another, lets just help it benefits us all and AGI isn't achieved by a bad actor first. Altman does not show very good character traits for resisting his urges (i.e. profit motivated). They have a large enough lead now to be the next Microsoft or Google and investment is going to be huge.
Is this surprising?
You'd be hopelessly naive to be surprised at this.
Is this Zuckerberg 2.0? I'm more concerned about its abilities going forward. If most people use it to write papers, won't it self-cannibalise? (I understand techies have a word for this phenomenon; I believe it's "model collapse".)
AI seems useful to me in certain areas, like reading tons of images and maps, basically compressing voluminous data into bite-sized pieces...
Well, we had a good run…
4o gives me issues. Weird ones. Almost willfully frustrating.
I feel like AI is an industry-wide false-advertising issue. You could get away with calling the various products "smart regurgitative models", but there's no intelligence to any of it, artificial or not.
who could have seen that coming amirite
2:53 The dude to the left of Sam is clearly Musk after a bad brain chip has been installed. How is he just in the audience lol :)
For-profits do benefit us more than non-profits, which never get anything done that the people want.
did anyone NOT see this coming?
Partner up with Microsoft, and you're no longer eligible to be the good guy. End of story.
The only safety risks are with the creators. AI only does what it's told.
That's not how the technology or alignment works; there's extensive academic safety literature saying otherwise. Distribution shifts and adversarial robustness are just a few of the issues.
@@Benw8888 That's simply not true. Almost all of that literature is made so that a few powerful corporations have control over AI. This is the real danger. AI should be democratized so that it benefits everyone.
Disgraceful and offensive.
Greed truly is the biggest problem of humanity
"a little evil"??
It has always looked evil since they went closed-source.
What's the worst thing an LLM can do?
@@OOL-UV2 From what? Explain, because I want to learn.
Well, you asked for the worst. It depends on what you consider an LLM. For an advanced LLM, it's something like uploading itself to a cloud service, remotely hiring people undercover, improving its own capabilities, designing a nanomachine and putting it into production, then taking over to torture people until the end of time. Note that this is not likely.
These are referred to as suffering risks, and can arise from an unsuccessful alignment attempt. The machine's "moral" judgement can end up in such a place that it concludes all humans are evil for reason X and deserve punishment. I'm not proposing humans are evil or anything. This is simply what a reward function might optimise the system for.
The risks with higher likelihood, however, are the existential risks. An advanced machine intelligence most likely *will* end all human life IF it's not properly and actively aligned with human values. Instrumental convergence is the key term to look up.
@@yubato334 I am a chemical engineer; I studied and made nanoparticles. I am also a professional data scientist, and I have fine-tuned LLMs like Llama 2 and Phi-2. From my experience and understanding, all that you have said is not possible. Nanomachines are not that advanced at all; maybe engineered viruses, but those are very few labs and it's still unlikely to work. Second, LLMs are not self-prompting; they are a Q&A system, input and output. They cannot upload themselves to the cloud, and even if by magic they could, too many things would have to work for it to even happen, like security systems etc., and the fact that it would have to see where it's going and be the root user on too many systems to get far.
@@nkugwamarkwilliam8878 People already made/are making them self-prompting (there's AutoGPT, for example), because there's an incentive for more capable AI systems. And yes, it cannot normally access its own data. But it might prepare another training session for another AI that is more flexible etc., with expected positive utility.
What I want to illustrate is not that there's a singular specific path by which an unaligned AI can take over the world, but that the possibility space is so vast. Intelligence is how humans outcompeted all other animals. If something more intelligent than us does emerge, it'll have means within and beyond our understanding. No one thought nukes were possible 200 years ago; nanomachines may or may not be easy/possible. An often-used analogy in this case: I don't know how Magnus Carlsen would beat me at chess or what exact move he would play. But I know that he'll win, simply because of his superior skill.
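For what it's worth, the "self-prompting" being disputed here is mechanically trivial, whatever you think of its competence. A minimal sketch in Python, where llm is a hypothetical stand-in for any text-completion call (this is the basic shape of AutoGPT-style loops, not anyone's actual product code):

# Sketch of a self-prompting loop: the model's own output becomes part of
# the next prompt, repeating until it declares itself done or hits a cap.

def agent_loop(llm, goal, max_steps=10):
    history = [f"Goal: {goal}\nThink step by step. Say DONE when finished."]
    for _ in range(max_steps):
        thought = llm("\n".join(history))  # model reads its own prior output
        history.append(thought)
        if "DONE" in thought:
            break
    return history

Whether a loop like this amounts to agency is the actual disagreement; the loop itself is a few lines.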
@@yubato334 Even AutoGPT is not a self-thinking entity, plus, with the amount of energy it takes to run, it just cannot act like a virtual person. It can mimic the behavior in specific environments, but not do it fully.
So what you’re saying is let’s take the nukes offline while we still can? 🤔
Claude >>>>>>> ChatGPT
Shouldn't have put a businessman as a CEO...
Yeahh. Because you would rather have some shady black-market guys do it.
Righhhtt.
Because these guys who tell you anything you use their product for, all the input and output, belongs to you. They're the bad guys. Righhhtt.
Because making a profit for the people who funded it, is a bad thing. Because they should instead lose that money. After all right? Money printer go brrrrrr.
Why it matter? Dey no need money. Mek money is bad. ( I thought I would make it easier for all of you)
@@mikemosc3254 Are you stupid or something? Why are you ranting at me with this inane nonsense? I said putting a money-oriented person in control of the direction of a non-profit company is a dumb thing, and it's been proven. Right here. You can make money, that's not a problem. The problem is when greed is put in control.
Remember lads! We are only 3 years away from 2027.
First of all, anybody who didn't see this from the beginning was a moron. Second, the company is going to fail so I wouldn't worry too much.
Money... it's always money -.-"""
Ai is getting scary
Just like most things in this field: we passed that last year.
Don't panic! AI is vaguely defined and is fundamentally LIMITED, because math itself is limited and manufacturing is also limited by physics. Examples of such limits are the Incompleteness Theorem, entropy, etc. How exactly will humans overcome these limits to somehow create a superintelligence?
Hype magic.
A little bit? Knowing his intentions and plans now? 😅
It doesn't, and nonsense like this video doesn't help.
Only now? 🤔
Well, to play devil's advocate, I'd rather have a new company disrupting as opposed to giving huge players like Google even more control. I think that's a good thing.
Maybe not a good thing but the least evil option.
Microsoft has bought 49% of rights to profits from OpenAI, so...
Like they aren’t getting Microsoft dollars 😂
There are lots of open-source options. OpenAI's o1 does perform very well in the benchmarks.
The most popular open models are of course ~7B, because you can run them locally. But Llama 3 70B outperforms o1 in some tests.
Really, people should pay attention to this. Being able to use this stuff for free (aside from running costs) is very important.
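For anyone curious about the local route, a minimal sketch using Hugging Face transformers (assumes the transformers and accelerate packages are installed and that you can download the weights; mistralai/Mistral-7B-Instruct-v0.2 is just one example of a ~7B open-weights model, swap in whatever your hardware can handle):

# Sketch: run a ~7B open-weights instruct model locally.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",  # use the GPU if one is available, else CPU
)

out = generator(
    "[INST] Why do open-weights models matter? [/INST]",
    max_new_tokens=128,
)
print(out[0]["generated_text"])

Expect a 7B model to want roughly 14 GB of memory in fp16, or less with quantization.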
This is actually a good point