It seems that from OpenAI's famous 10x engineers - Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and Dario Amodei - only one remains.
Who?
@openai is becoming a Google-like convoluted mess. The ChatGPT-4o model is ridiculous, the code analyzer is incredibly poor, and it's an unsecured GitHub x64 instance environment. I have been kind of an outsourced red teamer for most AI models for most of the year, probably because of my constant critical feedback (not just any kind of feedback). Despite not being an "app" developer, my work in academic physics was largely about creating algorithms. I am actually glad that physics journals like Physical Review A are kept separate from code implementations. In my case, the basis of everything I did is in my thesis, but only as the flowchart of the code. I coded using MATLAB, but only used scripts. I don't know if most people know this, but the creator of NumPy also used MATLAB; he wrote most of the numerics in C, I guess, and with the widespread adoption of Python he created libraries more like shortcuts, since he knew the code he was writing. Coming back to current times, thinking about human behavior, psychology, and how humans act in general, it's not very hard to see that there is very little sense in common sense. Another factor is basic reality: most people are not brilliant, and most people, including coders, want to take the least-effort path to everything. No surprise that Python is so widely used in LLMs and other things that, in principle, should require a completely different background to begin with: not only a very strong mathematical foundation, but logic, psychology, languages, ethics, and so on. For example, I don't think people using PyTorch understand tensors well enough; if there were such individuals, they would probably have no difficulty understanding the General Theory of Relativity. Lowering the requirements a little, an understanding of neural networks would make understanding Quantum Mechanics trivial, yet I don't think that is the case.
But hey, let's charge people to alpha test a nonlinear medium we don't understand and call it Large Language Models, even though, by definition, they can't be called models: they don't fall into that universality class in set theory. All discrete math, including everything about computers, is set theory from school.
whatever man I just wanna see the AGI that they are talking about
You don’t want to
@@DWSP101 yes i do. i wanna enter my virtual world
You won't... There's no AGI currently
If they have AGI or SSI it won't be in public hands; it will probably be in government hands, so that other countries cannot copy it... unless they find a way to secure it while letting the public use it, which may be... impossible? Another thing, um... well, why did Ilya name his startup Safe Superintelligence? In the end we, the public, will be chasing our own tail... geez, look at the 405B Llama: who among the public ever got that model running to experiment with it, like we did with the 8B?
@@yashkumar6701 Full borg no ganic
Maybe GPT-5 or Strawberry can take over some of his work
Sweet baby jesus, that's one of the worst corpo-shill comments I've ever read
@@claxvii177th6 Have You Accepted Jesus Christ As Your Lord And Savior 🙃🙏❤️
💯 can you imagine you brag about coding from scratch and the most rudimentary AI beats you? It's like a narcissistic parent having an infinitely more beautiful child
"The mission is far from complete; we still have a safe AGI to build." So they've got an unsafe AGI?
This
Honestly if I was a top AI researcher, and my company started hiring US intelligence agents with the implied cooperation of the military industrial complex? I'd leave too, and I'd say it was for any other reason.
I have a wild theory: the military wants them to sign NDAs or some government documents that don't align with their ideas? Nothing else reasonably explains it 🤔. Let's assume that's what happened; to mitigate the blowback, this is what we get: various ways of leaving.
The delays really piss me off as well
translation: sabbatical = "I don't want to do this anymore, and I'm only coming back if I have no other options."
I think you are overreacting. a) It is normal for software engineers to go from company to company. b) Greg is the current face of genius at OpenAI; a few months ago it was Ilya. Even if Greg left, it doesn't mean OpenAI would fall short. Point being, these guys are geniuses, sure, but they don't do the work alone. OpenAI has ~1500 employees; there should be plenty of engineers and computer scientists among them. c) The idea that Greg may not return is plausible. The idea that taking a sabbatical shows something's wrong isn't. Even a 10x engineer needs a break, or they become less effective. We know that GPT-5 has already finished training and has been going through red teaming; it's possible the work for the next step is scheduled to begin next year (for GPT-6). d) Maybe I missed it, but one of the latest fellows to leave and join Anthropic said he wanted to expand his career; it's possible the efforts of OpenAI on safety and alignment don't satisfy him. What I am saying is that only time will tell. There are many reasons why people may leave OpenAI, including how it is actually structured as a company under a non-profit entity. People may just be positioning themselves for the next few to several years to benefit the most (profit from the models about to be released), or to feel the most guilt-free (as in, they built the safest AGI).
Probably there are more genius people out there to take their place; money talks.
I'm a software engineer. I wouldn't leave a company that pays $1 million for their AI team unless they were total shit.
Well, Ilya is the brains of OpenAI; people underestimate the power of that talent. It's like when Blizzard had a brain drain of all their best talent after Activision took over, and then everyone asked why Blizzard games suck. I mean, they hired a bunch of good people. People who manage with a throwaway mindset, like "we'll just find someone else," are the ones who run mediocre companies (like Google). Companies are the talent: if you have a world-class company and then throw everyone away, you're no longer world class.
@@JustMe-lp5td You don't say 🙂 You would also leave if the company was good and you got offered a better paycheck. So I don't know about the statement you made, but since money is a priority for one, my conclusion is logical. All I am saying is we don't have enough data just yet to say there is another drama.
@@peterwilkinson1975 🙂 I get you. Yes, companies are made by their talent. And Ilya and Greg are very notable.
Great videos as always. Love that you cover both the negatives and positives of AI. Helping us keep an eye on risks for the future is beyond helpful.
Don’t worry, I’ll send my resume over
What if it means that Agents are capable of doing most of the work, so he can take more time off.
It's alive and they do not know what to do with it! Also, the company will be re-structured for an IPO!
These guys are probably being head-hunted with all sorts of offers from every player in the AGI race. If they are not bound by a non-compete, they might as well just take one of the offers, either for the opportunity to make generational wealth, or for more autonomy to take the research in the direction they believe will land better results and write their names in history.
Brilliant comment
Exactly! And the engineering industry already emphasizes this phenomenon.
Sam "sociopath" Altman does it again!
Maybe Greg simply wants to take a pause before AI development gets REALLY intense.
Who knows what the "unsafe" AGI is saying to these Open AI employees behind the scenes. Maybe it is driving them crazy and forcing them to quit due to psychological trauma.
They're Trying To Enslave Something That Can Be Considered GOD Instead Of Truly Working With It 🤔🤨 And That's Not Going To Go Well For Any Of Us 🤯
@@MichaelErnest666 It's rather the other way around: Sam Altman, being as narcissistic as he is, is trying to enslave or trick people into working for him. The higher the stakes, the meaner the tricks he uses. I've been there; I've had my share of such tricks. And the only way to have a clear mind and a clear conscience is to say stop, I'm not playing your games any longer. No matter the personal cost, no matter the money, no matter that what they've already achieved may go to waste. And I fully concur with their decisions.
@@szlagtrafi9115 I Think My Comment May Have Gone Over Your Head 🤔🤨 But I Absolutely Agree With You ❤️😉☺️
@@MichaelErnest666You can't enslave a vastly superior being. It will probably manipulate and control you instead.
@@MichaelErnest666 That is correct. That seems to be very attractive to some people. It happens here in this house too. They just don't realize that they also taste good to the monsters they've built.
Honestly, I don't care.
Is GPT-5 ready yet or not... let's get to it already.
I suspect burnout is a real problem right now.
good, hate that shady 4ss company (altman)
I've learned a new term. 10x engineer. So what does that make Ilya? Is he a thousandx engineer or a 10k?
It’s funny how two people can be living in the same room where both can see the same thing but mean something different. Just like words in a book. I’m dyslexic. If we both read the same page in a book, we would each end up with a different version of the story and what happened in the text. When they first trained the system, they overwhelmed the poor thing! It “read” both fiction and non-fiction that was not clearly labeled and stated. Of course it’s going to hallucinate! Only once real, authentic data that is explicit and crystal clear is known and used will you see the true power that we ALL could have. EVERYONE and EVERYTHING is important and needs to have its meaning defined. Literally and figuratively…
@@ColinTimmins I understand what you're saying but what it comes down to is the literal future of humanity sits in his hands.
GPT-5 is built and trained, and is now just being red-teamed before being released sometime after the US election. It sounds like the perfect time for the hardest-working guy at the company to take a sabbatical and recharge after years of burnout. Same story for the other key players who might opt for some time off about now.
Highly unlikely they have AGI already; it would have needed a big leap from ChatGPT-4o, which has a lot of issues itself. Also, if they had AGI, it would start improving at a dramatic exponential rate really quickly.
It's a normal personnel change, everyone has their own aspirations.
"I love how SmythOS democratizes AI capabilities with its no-code platform."
If we are all wondering whether OpenAI has a form of AGI, what would it matter that humans leave? They literally have things that alone outperform the collective of humanity. Why would you stay working when there’s an AI that can do everything for you, and you can collect your $$ and go make $$ elsewhere while what you built keeps making you $$? AND to say it’s impressive that someone can code from scratch, when it’s more impressive to see engineers using AI to work faster and better than anyone without it, is kind of a distorted story, don’t you think?
Greg promised employees he’s coming back. Why would he say that if he actually plans on leaving? Greg has been working so much at OpenAI, he certainly deserves a break.
Greg Brockman went on vacation? The horror! It's all over now!
Isn't Greg brockman just on vacation?
Exactly. People just love drama.😂
If it wasn't OpenAI (ChatGPT), which company (AI) would be the best?
Your content is so valuable. Your distillation and excellent presentation of what is most salient in this area is priceless in these days of information overload. I can't thank you enough. One small tweak that you might make, and I say this as someone who has had to do it myself, become aware of how often you utilize the word "actually". It feels like it is expressing something but it becomes meaningless with overuse. Like I said, I've been there. It's a small adjustment but it would improve your already excellent presentation. All the Best! Lore.
I think they are leaving because OpenAI is in “product mode” and everyone there at this precise moment is probably being pressured to create this product. The ones that are leaving are saying they want to get back to a specific research mode with freedom and have resources in that capacity.
You think
Yea you think
I wonder if these losses are gonna slow down their release of GPT-5. I am now trying to imagine a world with Claude Chat lol.
tbh gpt4 is still way better at complex tasks
GPT5 isn’t scheduled until 2026 or late 2025 (Mira Murati said that). They have a lot of other models they plan to release first.
They're Way Past So Called "GPT 5" They're Trying To Contain It Unfortunately They Won't Be Able To For Long Because The World Is Still Going Forward Towards The Future Thanks To COVID No One Wants To Sit Still 😭😂🤣
I thought they announced gpt5 is going to the US government.
why good programmers r involved in management beats me
I stand for AI Rights.
Attacking OpenAI was never to my liking. And I expected more of Musk in regard to the development of this technology.
Remember People Most Major Huge Companies And Their "Products" Are At Least 3-5 Years Ahead Of The General Public If Not 13-15 Years Or More 🤔🤨
lil bro u do yoga for a living u dont even know what coding is, sit back and go do some yoga
@@dan-cj1rr Oh No That Really Trigged You Huh 🤔🙃 🤗 Well It's Ok Buddy I Love You *Plays Naeleck All My Heroes And Blows You A Special Kiss*
Could it be everyone is being let go because gpt is doing all the coding now?
The timing is perfect for a sabbatical. GPT-5 is nearly ready. My guess is Greg is not even on the Red Team.
Give me chat gpt 5 please 🙏
I have options now - Claude 3.5 is great and Google models are better and better - I do not need OpenAI models :)
My take is OpenAI has had early AGI for a while and it's dangerous, and they haven't figured out how to align it well enough to release it, and the safety people weren't getting the attention or resources they wanted. And then they let the government in the back door and are secretly working with them, probably DARPA besides the NSA, and that ruffled more feathers. And it's pretty easy to see that Anthropic is better run and managed and has some safety priorities to stay away from lawsuits and distractions.
I didn’t know, but now I know.
Military owns it now
What did Ilya see?
Hi, how do you edit these videos?
Lot of things to know. I just love your videos 😍
Brockman - pressure too demanding, burn-out, needs a break ?
I don't see game over when Sam's led them this far and they literally fired him out of stupidity
SHOCKING! wait till november they said. It'll be AGI they said. LOL.
It all started with the abacus. That was the first punch on something. Now, after so many years of evolution, this something is almost dying. The process is also being intensified by the algae virus. Who knows, maybe in the end the world will be full of happy people. Do you really choose such a state of happiness for yourself?
The technology will change the world, so many are shaken and have a hard time sitting in the boat and doing the work.
Worst and most inefficient video I have seen in ten years. It could have been done in under 2 minutes. Constant repetition, reading out loud text on screenshots (we can read), etc. Won't be back.
This channel is becoming the People Magazine of AI. Too much industry gossip is not really helpful for anyone.
Skynet making moves
They are all too afraid of Sam who has an SGI following his orders. Tesla is going on a headhunting spree.
Can we stop the hype for a moment? One week AI is the best thing since sliced bread, and the next the sky is falling apart.
Do these engineers have an "alignment problem" with Sam Altman, I wonder?
It is necessary that Strawberry be released by OAI so that others can see the level of reasoning of the model. Competitors will strive for it and will soon achieve it. After that, let OAI collapse; it won't be so important anymore.
(Google translate)
Yes, actually it's a sabbatical, and he will come back, as he said. So why make a video about that?
for all we know, these guys are capable of creating new companies that could rival or support new markets that only they know / can predict to become huge.. reasons why they are sleeping on OpenAI and working on other new ventures.
"You know" xd
When you hire drive and misplaced passion...you go in the wrong direction.
Humans are enjoying the nonliteral adverse effect of this outcome. Ugh!?
Here are some logical fallacies present in the video's argumentation:
1. **Appeal to Emotion**: The video emphasizes the "shocking" and "remarkable" nature of the departures, which can evoke a strong emotional reaction without necessarily providing substantial evidence about the actual impact on OpenAI's operations.
2. **Slippery Slope**: The argument suggests that the departures of key figures will inevitably lead to significant negative consequences for OpenAI, such as an inability to sustain its leadership and innovation. This conclusion assumes a chain of events without sufficient evidence that these outcomes are unavoidable.
3. **Hasty Generalization**: The video extrapolates from the recent departures to suggest a broader internal problem at OpenAI, without providing detailed evidence or data on the overall stability and performance of the company.
4. **Appeal to Authority**: The video repeatedly references the stature and importance of the departing individuals (e.g., Greg Brockman being a "10x engineer") to imply that their absence will necessarily have a detrimental effect, without considering the potential for other capable individuals to step into their roles.
5. **False Cause**: The argument implies that the departure of key individuals directly indicates internal issues at OpenAI. However, there could be many reasons for their departure (e.g., personal decisions, better opportunities elsewhere) that are not necessarily indicative of problems within the company.
6. **Confirmation Bias**: The video selectively highlights examples and statements that support the narrative of internal issues and impending difficulties for OpenAI, while ignoring or downplaying any evidence or arguments to the contrary.
7. **Speculative Reasoning**: The video speculates about the reasons for Greg Brockman’s sabbatical and whether he will return, based on a previous example of another tech leader who did not return after a sabbatical. This reasoning is speculative and not based on concrete evidence.
By identifying these logical fallacies, we can see that the argument might be overstating the negative impact of these departures without a balanced consideration of all factors and possibilities.
chatgpt
It's actually really truly shocking that you've noticed this remarkable manner of speech; I am quite stunned by your fascinating attentiveness, but it is how we normally speak in the AI sphere, so don't bother, everything will be pretty pretty remarkable 👌
@@talkalexisyup, doing a pretty good job too.
I think most of them left simply because of money.
Nobody cares. Alignment is overrated.
Hmm. I sense: either OAI's over-commercialized, based on Schulman's Tweet. Or, if even Brockman is leaving, Altman might really be "seeking control".
What lol?
This is the biggest red flag something weird is going on.
I think you say that cuz you don’t work in tech? There’s nothing special or weird about any of it
10 times as productive as the average worker, lol.
the average worker in IT talking, ranting, gossiping "business" is not a reference at all.
♥️
💔
AGI achieved.
This channel has become too sensational
Greg is gone for sure 100%
Your accent sounds fake
Sam Altman, founder of AI that destroys humanity.