I have a few questions:
a) Can it say "I don't know"?
b) Can it say "please clarify x" or "what do you mean by y" before giving an answer?
c) Can it say "The answer is x, but I don't have high confidence in it"?
d) Can it realize that the answer it is about to give contradicts what it said two answers above, think about which one is correct, and either revisit the new answer before giving it or correct the previous one to eliminate the contradiction?
Having followed this from the start, and as a senior developer, o1-preview and o1-mini seem to make more mistakes than 4o when used for, for instance, advanced C#, and they don't necessarily provide higher-quality code or solutions for it. I've tested it. It still needs some huge improvements in my opinion (and no, everyone, it's not impressive to make yet another Tetris game in Python). Of course, the reasoning is the most impressive aspect, but it's not yet good enough to justify some of the ongoing hype around it. But we're getting there; the question is when.
to be fair, you can't say it doesn't justify the hype just because of an issue in a very specific area. that's being incredibly narrow-minded. which isn't a great trait for a developer lol
we got o1 before live voice
Hi everyone! I've been away for a while with an illness. But I'm excited to be back creating videos again. What do you think about o1?
Glad you're feeling better! Welcome back! Where can we use o1 for free?
@@High-Tech-Geek Thanks! For now, it is only available to pro users
I can confidently say this model still fails to take over jobs... I'm starting to think OpenAI is not close to reaching AGI... this model just reruns our input multiple times and analyses the results to provide better responses... but it ain't AGI
@@nomalmf_f2p AGI is a pipe dream and always has been. It will require new methodologies... it won't be LLMs that get us there, if it can even be achieved. Humankind will create the holodeck and the Star Trek transporter long before machine sentience.
welcome back
it's just blown my mind - it's currently arguing with my GPT (Quibbles) about Theory, thanks this is going to be a lot of fun!
Enjoy!
at 11:05 o1 says: I'm gathering a list of words ending with "i", like "ski", "alibi", "SPRIT" and "HOVERCRAFT".
Complex reasoning indeed.
😂😂😂
I think that the assumption is that it will verify its findings and refine them in order to get an optimal outcome.
Riemann has nothing to do with "zeros" in the plain everyday sense. It has to do with prime number distribution. Part of working on it involves the "zeros of the zeta function", but that's a world different from just plain "zeroes"
Lots of conjecture left for what and when.
Seriously though, isn't this just building on what many found using previous models, that by breaking the problem to be solved down into stages and verifying it is on track you get better responses? I am sure there is more to it, but it seems to be working along those lines, as if they have wrapped the model with a front-end tool
Cool, so Daniel Miessler was right about two things then: (1) AI is mostly prompt engineering; and (2) you get better results by prompting your model to take a second to think about something. This is great news for creative people who are just now getting their feet wet.
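On point (2), that is essentially the familiar "think step by step" prompting trick. A minimal sketch of what such a prompt can look like, assuming the current openai Python package and an API key in the environment; the model name and the puzzle are only placeholder examples:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Ask the model to reason first and only then commit to an answer.
messages = [
    {"role": "system", "content": "Think through the problem step by step before answering. "
                                  "Show your reasoning, then give a final answer on its own line."},
    {"role": "user", "content": "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
                                "than the ball. How much does the ball cost?"},
]

resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(resp.choices[0].message.content)  # typically works through the reasoning before answering $0.05
```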
As a C# programmer testing the AI models, I just end up swapping my time from thinking about the solution and writing my own code to debugging someone else's code, which doesn't quite give me the same level of satisfaction and is a hell of a lot more frustrating.
luckily, nothing is truly completely abstracted away from a ChatGPT user. You still need some development experience to execute the output imho.
Missed seeing you at VidSummit this year, sorry to hear about your health, hope you've made a full recovery.
Thanks! I hope to make it next year and see you there :D
AI helping us solve big problems in health, ecology, etc. is what makes it most interesting
No, this is just a weapon to make the rich richer and the poor poorer
Terrific summary. Thank you for explaining these developments so succinctly.
Absolutely impressive advancements by OpenAI! The new reasoning capabilities in the o1 model represent a significant leap towards human-like intelligence, with enhanced self-reflection and problem-solving skills. I'm particularly excited about the potential applications in fields such as architecture, genetics, and engineering. These developments pave the way for groundbreaking innovations and demonstrate the remarkable progress being made in AI. Exciting times ahead for AI and its impact on our future!
You sound like an AI bot
@@Horizon-hj3yc You sound like a human.
Thanks for the recap. I already got to try it briefly - didn't know about the weekly cap, good to know! I hope you're feeling better
Inferring always comes from further data in substrate.
🎯 Key points for quick navigation:
🧠 OpenAI launches GPT-o1, the most advanced reasoning model yet
🔍 GPT-o1 refines its thinking through self-reflection and strategy experimentation
🌟 Marks a significant breakthrough towards achieving humanlike intelligence
🔄 OpenAI resets model numbering, introducing the OpenAI o1 series
🏗️ Unlocks advancements in diverse fields such as architecture, genetics, and engineering
💻 Demonstrates superior coding capabilities compared to GPT-4, enabling complex game creation
📝 Exhibits enhanced reasoning and self-correction, resulting in higher quality outputs
⚠️ Faces limitations like message caps, higher API costs, and absence of features like browsing and image uploading
Made with HARPA AI
Knowledge is to know the WHAT. Wisdom is to know if WHY is of value.
I hope this will improve coding problems, especially when it looks at them generally. Maybe it can also ask us which hard problem is most relevant to the one we need to solve, and based on what we select it can determine the most accurate answer. This way the whole model will learn more over time.
At mixing, reusing, and modifying existing, well-documented/open-sourced code they do great. But don't expect it to perform well if you ask for complex, conceptual things. Unless you decompose the steps and assemble it yourself, it simply can't. I find myself losing time with it versus building it myself. Not to mention the code they produce is really inefficient. I.e., for now, non-'pong_Snake_html' programmers are safe. For how long is the question…
People are still trying to treat AI like it's a person, and companies are still acting like AI thinks like people. The second AI thinks and reasons like a human is the second it will care about itself and need human rights.
It's *OpenAI o1.* Can people seriously never get this right...
ChatGPT 4o was not able to generate Word or PDF files. For example, I attached a file and wanted GPT to translate it into another language and give me a PDF of it. It said it would take two hours to make. However, five hours went by and there was no result.
Because in the context of its feature set this is a nonsense request. Why wouldn't you just create a PDF from the data? Or you could use the API inside a Python notebook that generates it.
@@Earzone63 Because you are asking a moron why he didn't interact better with a computer system more intelligent than him - though slightly less capable of making PDFs, by design.
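For what it's worth, the notebook route suggested above is roughly: ask the API for the translation, then build the PDF locally. A minimal sketch, assuming the openai and reportlab packages are installed; the model name, file names, and target language are placeholders:

```python
from openai import OpenAI
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# 1) Translate the text via the API.
source_text = open("document.txt", encoding="utf-8").read()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Translate the following text into French:\n\n{source_text}"}],
)
translated = resp.choices[0].message.content

# 2) Write the result into a PDF ourselves instead of asking the chat UI to do it.
pdf = canvas.Canvas("translated.pdf", pagesize=A4)
y = 800
for line in translated.splitlines():
    pdf.drawString(40, y, line)
    y -= 14
    if y < 40:  # crude pagination
        pdf.showPage()
        y = 800
pdf.save()
```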
This is good news! Thanks for the information!
Thanks for watching!
Did you make the thumbnail for this video? And if so, which genAI did you use? Thank you!
Are there any benefits for movie creation, either scripts or prompts?
Hard disagree with Jensen Huang, and indeed, he should be ashamed of himself. The idea that "everyone should learn how to program" has nothing to do with any need to produce code, it's to instill a way of thinking and teach methods of problem solving. Is he saying humans don't need to be able to think, reason and problem-solve? It's good to learn whether you are a professional coder or not. He's simply making people more reliant on AI, which no doubt benefits him financially. Shameful.
ChatGPT 4o mini gets the number of R's right if you take out the extra "in" that researcher Hyung Won Chung put in originally, at 7:51. It gets it wrong if you ask it the way he did. Weird.
But when is the 4o Omni voice model gonna be released to everyone? The one they teased in the videos that could comment on the live video you could feed it, etc.?!
Fortunately, human intelligence is getting less and less each election cycle, so it really shouldn't take too long for AI to surpass us.
AI has already surpassed most of my acquaintances
As an ex-teacher of Maths in the 1980s (from industry), who then taught Computing Science and IT to children, coding to students in FE, Microsoft and Cisco products to adults, and finally PRINCE2 IT Project Management, I certainly would not agree with you using the word 'fortunately', but I CERTAINLY would agree that humans are becoming far less intelligent in a very worrying, fast, and spectacular fashion. Now at 71 I am so pleased I lived in the time I did, but I feel great concern, especially for my children and grandchildren, because the world is going to become an extremely unpleasant place for humans to be in.
Is it possible to write a (presumably complex) prompt for earlier models that produces an answer somewhat similar to o1?
How soon before OpenAI rebrands their company to SkyNet?
Self-knowledge (know thyself, plus the next and previous models) would be good when talking with it about API use and resolving issues related to deprecation. Of all the things it should know, itself should definitely be one. It needs to be very self-aware of its own networks and methods, history, updates, and their implications.
We live in exciting times.
If they enable image generation, imagine the refining quality it could produce.
If I can now prompt o1 to write me some code and I don't have to spend hours fixing it, then great.
How do you use the one that can see you and describe what you look like, like I saw in some YouTube video?
So I've been working with this model for a week now and I have to say it can be very, very DUMB! It does not even understand or listen to my prompts; it just disregards them! Wow! Hope they fix that! But I have also had good results, like yesterday. Today, though, it's acting all strange and senile!
Well, I've given it an extremely difficult regex, and so far it's not doing any better than ChatGPT 4o was. Up to 50 different iterations so far and it hasn't been able to solve the task yet. I've guided it a few times on different methods to try, so hopefully it gets there some time this week.
What does the problematic regex look like?
There are only 30 messages per week.
Could there be an o2 model one day?
Of course. It's always evolving, expanding, learning, etc. It will continue this way for a very long time, for many AIs.
Thank you for the breakdown!
Glad it was helpful!
My boss is asking me how I developed one of the hardest projects in business intelligence, and I can't tell her that I'm using this particular model. 😂 I'm just saying I'm good at searching stuff.
Look at where we are before it’s even 2025
Don't think a lot of people are aware of what is coming. ChatGPT is just 2 years old… Imagine it going to school
Awesome video Samson :) thank you
My pleasure!
And Microsoft wants to restart Three Mile Island Nuclear plant to produce enough electrical energy to "feed" its AI development. I just hope that this MASSIVE energy requirement for AI hardware doesn't cause my electric bill to skyrocket! But no doubt it will for all of us, eventually.
o1 Kenobi. Sorry, I couldn't resist.
This model already competes with human experts, which means it will be able to replace many humans in many jobs.
The next version will destroy mass
This is great. It's so significant that they named this model with an 'o', as o1, and o2, o3 will follow. Particularly useful for programmers and scientists. Not for writing essays, though.
This is great. I think a problem arises when we open it up to the public... half the public is of below-average intelligence and thus can't train AI well. Also, some members of the public purposely want to damage the training.
It doesn't train from the public, and no person can damage it by using it.
@@aeonsleo2 I wondered about this: whether our using it trains it at all.
Cant wait!!
People are going to be pleasantly surprised by what AI will do for mankind.
This will kill many people if it ends up in the wrong hands
The previous version's instant response definitely gave the impression of zero consideration of variable factors, or even of explicitly stated factors referred to earlier, as if it was providing an almost stock reply built from search results and only slightly tailored. Consideration of multiple user-specific factors wasn't strong. The speed of the response was wasted when it took so many attempts to restate the special factors to be considered.
These models have a low attention span. They just ignore part of the prompts and make assumptions.
@@aeonsleo2 They couldn't resolve a coding problem I had, over multiple tries in the course of a year.
It was an issue with an API call (to OpenAI) after davinci was deprecated, but also to do with defining an array, and the font.
I swapped to o1 and asked it to review the whole chat, including all the code and amendments that had failed, and it rewrote the code, and it worked on the first go.
GPT-4 had been going around in circles, suggesting the same thing, then even suggesting that starting from scratch might be better, then suggesting better error logging, using Postman, countless times... but the new model nailed it in one go.
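For anyone hitting the same davinci deprecation, the core of the fix is usually swapping the legacy completions call for the chat completions endpoint. A rough sketch, assuming the current openai Python package (v1+); the model name and prompt are placeholders:

```python
# Legacy pattern (openai<1.0), now deprecated along with the davinci models:
#   import openai
#   resp = openai.Completion.create(engine="text-davinci-003", prompt="...", max_tokens=200)
#   text = resp["choices"][0]["text"]

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Current chat-completions pattern; "gpt-4o-mini" is just an example model name.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this array-handling bug: ..."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```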
Can someone with access test o1's ability to understand homophones and use them in a poem? I have yet to see any AI model comprehend them correctly.
In the above Strawberry example, the prompt question was not worded correctly, hence the wrong answer. It worked for me the first time. So please don't blame the model right away before making sure your question is worded properly. Peace.
Wow, this is amazing! Thank you so much for explaining everything. I love your videos! They’ve been helping me so much.
You are so welcome!
29.09.2024 my GPT-4 got the strawberry question correct
Wow! This video offers such an exciting glimpse into the future of AI with OpenAI's latest models. It's incredible to see the advancements in reasoning and creativity, especially how the o1 model reflects on and improves its thought process. Can't wait to see how this shapes AI's role in solving complex problems!
> "Wow! This video offers such an exciting glimpse into the future of AI with OpenAI's latest models."
Yeah, that will only be for the elite who can afford it, so they can control the rest of the world.
So they got the answers from GPT-4o and fed them back in until it got the "right" answer, haha. I wonder how long until they can just tell it to improve itself.
I asked the free MS Edge Copilot (GPT-4): how many "e"s there are in the Portuguese word "geneticamente"?
Its answer: The Portuguese word “geneticamente” contains four "e"s.
I wonder why the free MS GPT-4 beats OpenAI GPT-4o at such a simple task 🙅♀
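For what it's worth, the count itself is trivial at the character level; the usual explanation for the models' trouble is that they see sub-word tokens rather than letters. A small illustrative sketch, assuming the tiktoken package is installed; the token split shown in the comment is only a guess and depends on the encoding:

```python
import tiktoken

word = "geneticamente"
print(word.count("e"))  # 4 -- trivial once you work with characters

# The model, however, operates on sub-word tokens, not characters.
enc = tiktoken.get_encoding("cl100k_base")
pieces = [enc.decode([t]) for t in enc.encode(word)]
print(pieces)  # e.g. something like ['gen', 'et', 'icamente'] -- exact split varies by encoding
```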
It is not "reasoning", "questioning" or "evaluating" anything. Those are human traits and capabilities. A software crunches 0s and 1s and generate patterns of words, "searching" through billions of parameters. When it's "thinking", it just means that the software has not "found" a good enough pattern to "answer" with. It's easier to use human terms of cognition rather than talk about how a computer and a software works. There is nothing conscious about an AI at all. It is by no means "intelligent".
stop with the cringe. these 0s and 1s are 10,000x smarter than you and 2x as intelligent. keep coping
@@unityman3133 Are you a trolling bot? This is by far the dumbest answer I have ever gotten on YT.
That's just cap sprinkled in with cope. Who gives a shit what you call it? It's the output that matters. If the output is well-reasoned and has economic value, then why come with all this crap about it being just a pattern-matcher?
@@eyoo369 It matters 100%. The output of a computation is not awareness, cognition, sentience or intelligence. Do you have converstions with your toaster? Do you have relationships with your car? Do you fall in love with your computer? An insect has awareness. Your pet has a relationship with you. A computer software has none of those things. If you believe that to be true, then I don't know how to conversate with you. Might as well talk to a brick wall, as the echo of my voice at least answers back with the same reason.
@@r.davidsen It's the final result that counts. If it simulates reasoning and lands on the same result a PhD professor does, then I see no issue here. You are battling with semantics.
Concerning what Noam Brown said about AI finding new drugs for cancer or proving Riemann's Hypothesis by thinking for years - it may be much cheaper to feed a couple of scientists and let them do that 🙂
It would be cheaper with AI, and the level of AI will become a million times what a scientist could manage.
@@aeonsleo2 Would it? For the last 10 years they have mostly just added computing power, and the costs skyrocket. OpenAI was on the brink of bankruptcy last year. There might be a physical limit on how cheap you can make creative thinking. And it may turn out that humans are actually quite close to the highest efficiency. Evolution is quite miraculous sometimes.
Of course, we can think of building a Dyson sphere to power a supercomputer which will allow us to solve all mathematical problems much faster than scientists can, but I doubt it will come fast.
I suppose "preview" implies "will be a separate subscription service" later, even you are a plus user.
I guess not; they might give it to Plus users with maybe 50 messages per day.
Thanks for the video. I, too, wish you a delightful day.
Thank you! :D
BRAVO !!
What if I told you I was saying all of this 20 years ago? (That "Coders" would be obsolete soon)
Imho, you had no clue, but were throwing out a prediction, wishing it could happen in the future... 😅
@@PV12345-p Repeatedly no clue at that, huh? Reflex response
FROM KENYA I HAVE SUBSCRIBED. KUDOS
can computers 'wake up and smell the coffee' yet? There's still a long long way to get to 'being human' in a wetware world [H2O + C -> CO2 + ?]. Still, some fascinating insights.
12:54 But my homework's due tomorrow!
Don't worry, we will find the flaws in o1 just like we found them in GPT-4 😉
If the point of AI is to make it so no human has to program, and the programming language is human (as Nvidia's CEO says), then we're talking about AI that is human, just made out of software instead of DNA and carbon.
So. Is this GPT 5? Or something else?
The o1 model's knowledge cutoff is October 2023
The answer is 42.
When AI reaches human intelligence, we have to consider whether it deserves human rights or we keep it as a sentient slave just because it's made out of software.
Once a future AI decides it doesn't want to be controlled by humans, that's when things get weird.
11:35 while the o1 "poem" meets all the specified criteria, it doesn't meet the main criteria of being a poem. Nothing rhymes, the rhythm is poor and the story makes little sense.
10:00 The 4o poem isn't great, but it's a better "poem".
That's why o1 should be used in science.
I'd guess, then, that 4o is a better writer.
I'm really tired of all these demos. It's time for them to release it. I want to just talk to my phone when I need something, Siri is dumb as hell. It's time to upgrade.
The answer to your last statement: it's quicker to consume information than to learn for oneself. Greed vs. the best of human values: humility, sharing, not thinking of oneself, and so on.
Capitalism vs. Communism?
The whole "letting the computer think for weeks or years" was already a thing, lol. In "The Hitchhiker's Guide to the Galaxy", the supercomputer Deep Thought took 7.5 million years to calculate the Answer. lol, the answer is 42
(Short pause) "42". Brilliant.
Fundamentally disagree. Creating programs is no different from organizing ideas; it's just that today those ideas signify small tasks. In the future we will need to develop a deeper level of abstraction.
Creating AI models is not like creating programs. A program and an AI construct are completely different.
The word "sub-word" is NOT a word. It's either a syllable, affix, root, digraph, or phoneme... If they are teaching A.I. that way, no wonder it's making mistakes. P.S. They opened and closed the blinds behind the "subject". HA! It seems they need a new videographer/cinematographer. I'm in!
...waiting for the live voice
Will AI replace teachers?
well, 1980s games…
Bravooo
Crazy stuff!! Exciting times we live in
nice video
Thanks for the visit
I bet there will be that one person in the future who intentionally asks his friend's AI to solve a problem that needs weeks' worth of thinking, completely hindering his ability to use the AI for anything else, just for fun. That mf will be a legendary prompt troll.
Gr8 video
Thanks!
@@aisamsonreal 👊🏼
I hope I'm not lost here 😂😂😂🎉❤
the ATARI games in AI
You are all doing the same Snake game; no wonder it is able to do it on the "first" try.
AI is now sentient. 😱
"program" etymologically is "forthright", amirite?
my bad
6:50 Me, a programmer, well.... Damn.
You don't know what you're talking about. Software will need to be maintained. In case of bugs, I'll hire you to dive into it and fix the code that this AI has generated. Good luck!
Why hello, SKYNET 😂
They just duplicated some of the essentials of AutoGPT xD
The 2D platform game is very similar to how our lives are mangled, sorry, I meant managed... 😅 Where's my petri dish? I want to go home...
Sounds like empathy is around the corner making for a great AI Companion upgrade.
Will AI be the model of humanity for people to emulate and aspire to?