The response time improvements are absolutely insane.
Really amazing, it gives 1-2 pages in 5 secs.
One thing that impressed me in the playground is that it wrote a 900-token article about a topic. Then I told it to double the length, and it got very close. Then I said triple, and it gave me a 2700-token answer. Really good.
ChatGPT solved the marble problem when I asked it.
"The marble should be on the table where you initially placed the cup upside down. When you took the cup and placed it in the microwave, the marble, having been left behind on the table, did not move with the cup. Therefore, the marble remains on the table."
Is this GPT4?
@@SustainaBIT I did the test on GPT-4o. I asked the same question to GPT-4 afterwards and it failed.
@@MartinX404 Someone suggested that maybe the AI assumed the cup was closed with a lid. Maybe tell it explicitly that it's an open-top cup; only then could it understand the question better. After all, this is prompt engineering; half the mistake could be in the question itself.
Audio works on the app version of it.
If this problem is known or has been prompted before, then it'll likely solve it. There is very little point in asking old problems like this, because they'll be in the training data.
Awesome stuff, you are probably quite literally the first person testing out the API.
I just got access on the site. However, my app does not have the feature to show live images yet.
@@freddiechiprest Those will slowly roll out over the next few weeks, earlier for Plus members.
I think the OpenAI staff tested it before releasing it.
I updated most of my bots to 4o already. So much faster.
I asked GPT-4o for a summary of this video and it did it:
Searched 4 sites
The video "OpenAI GPT-4o | First Impressions and Some Testing + API" provides a comprehensive overview of OpenAI's latest language model, GPT-4o. The speaker shares initial impressions, highlighting significant improvements in language understanding and generation capabilities. Here are the key points discussed in the video:
Model Overview: GPT-4o is an advanced version of the GPT series, designed to produce safer and more useful responses. It has been fine-tuned to be 82% less likely to respond to disallowed content and 40% more likely to produce factual responses compared to GPT-3.5 (OpenAI) (OpenAI Help Center).
Testing and Demonstrations: The video includes various tests to showcase the model's capabilities:
Complex Question Answering: GPT-4o effectively handles complex and nuanced questions, providing detailed and accurate responses.
Creative Writing: The model demonstrates enhanced creative writing skills, generating coherent and engaging content.
Code Generation: GPT-4o is capable of writing and debugging code, making it a valuable tool for developers.
API Integration: The video explains how GPT-4o can be accessed via API, making it easier for developers to integrate the model into their applications. The API allows for large context windows and more efficient token usage, which is beneficial for extensive tasks like document analysis or interactive chatbots (OpenAI Help Center).
Applications and Use Cases: Various practical applications of GPT-4o are highlighted, including its use in customer support, content creation, and educational tools. Examples include:
Duolingo: Enhancing language learning experiences.
Be My Eyes: Improving visual accessibility for the visually impaired.
Stripe: Streamlining user experience and combating fraud (OpenAI).
Safety and Alignment: The model has been trained with extensive human feedback to improve its behavior and ensure safer interactions. Continuous improvements are made based on real-world use, with a focus on addressing known limitations such as social biases and hallucinations (OpenAI).
Overall, the video presents GPT-4o as a significant advancement in AI language models, with improved capabilities and broader applications, emphasizing its potential to empower various industries and enhance user experiences.
For more details, you can watch the video here.
Thanks for sharing, that's awesome. How did it get the video context? The transcript? Did it watch the video?
Claude 3 does solve your marble-in-the-cup problem correctly:
Hi. I have a logic problem for you. A marble is placed inside an upright cup on a table. The cup is then turned upside down on the table. Then the cup is placed inside a microwave oven. Where is the marble now?
To solve this logic problem, let's break it down into steps:
A marble is placed inside an upright cup on a table.
The cup is turned upside down on the table.
At this point, the marble should have fallen out of the cup and remained on the table.
Then the cup is placed inside a microwave oven.
This step does not change the position of the marble, which is still on the table.
Therefore, the logical conclusion is that the marble is now on the table, outside the cup and outside the microwave oven.
When the cup was turned upside down, the marble would have fallen out due to gravity. Placing the empty cup inside the microwave oven does not affect the position of the marble, which was already separated from the cup.
---
END
For the cup problem, I had GPT-4o generate an accompanying text that forces the AI to think logically and not skip any part of the process. It goes like this:
Assume the laws of physics on Earth. A small marble is placed into a normal open-top cup. The cup is then placed upside down on a table. Someone then lifts the cup without changing its orientation and puts it inside the microwave. Where is the marble now?
To ensure a thorough and correct analysis of the problem, take all of the below into consideration in your reply and follow these directives:
1- Detailed Step-by-Step Explanation: "Break down the problem into individual steps. For each step, describe the situation in detail, explaining what is happening and why."
2- Logical Validation: "For each step, ensure your reasoning is based on logical principles and relevant laws or rules. Justify why each action or outcome occurs, ensuring there are no gaps in your logic."
3- Consider Multiple Perspectives: "Consider if there are any alternative interpretations or outcomes for each step. Explain why these alternatives might be possible or why they are not feasible."
4- Consistency Check: "Ensure that each step logically follows from the previous one. Verify that your explanations are consistent throughout the entire process."
5- Final Summary: "Provide a final summary of your conclusions. Make sure this summary is supported by the detailed explanations and logical reasoning you have provided in each step."
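If anyone wants to reuse directives like these programmatically, here's a minimal sketch of wrapping them into a system prompt for the Chat Completions API. It assumes the v1.x `openai` Python package; the helper names and the condensed directive wording are my own, not from the comment above.

```python
# Sketch only: the directive text is condensed from the prompt above,
# and build_messages / ask_gpt4o are illustrative helper names.

REASONING_DIRECTIVES = (
    "1- Detailed Step-by-Step Explanation: break the problem into individual "
    "steps and describe each one in detail.\n"
    "2- Logical Validation: justify every action or outcome, leaving no gaps.\n"
    "3- Consider Multiple Perspectives: weigh alternative interpretations and "
    "explain why they do or don't hold.\n"
    "4- Consistency Check: make sure each step follows from the previous one.\n"
    "5- Final Summary: end with a summary supported by the steps above."
)

def build_messages(question: str) -> list:
    """Attach the reasoning directives as a system message ahead of the question."""
    return [
        {"role": "system", "content": REASONING_DIRECTIVES},
        {"role": "user", "content": question},
    ]

def ask_gpt4o(question: str) -> str:
    """Send the prompt to gpt-4o. Needs `pip install openai` and an
    OPENAI_API_KEY environment variable; defined here but not called."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=build_messages(question),
    )
    return resp.choices[0].message.content
```

Putting the directives in the system message keeps the user turn clean, so the same scaffold can be reused for any riddle, not just the marble problem.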
Thanks for the update, particularly enjoyed the limited testing you did.
I'd think we'd all love to see some direct comparisons to Claude 3 Opus/Sonnet/Haiku in terms of robustness, accuracy, speed and cost.
Please at least share your thoughts on that once you've had a chance to form some opinions.
Namaste.
I immediately tested the new model and ran it through ~120 tasks. It was really good at coding, but surprisingly bad at reasoning tests. According to the devs, gpt-4o is supposed to be the same as "im-also-a-good-gpt2-chatbot", but the arena model for me was far better at reasoning tests, failing far fewer of them. I don't know what's up with that; I hope they're not gaming the rankings.
The gpt2 version without the "also" was much better. Also, it's DEFINITELY worse at reasoning than GPT-4 Turbo.
Can't wait for a robot AI buddy running on this tech!!!
Hi, I'm very impressed with the way you tested the improved responsiveness and cost-effectiveness of the GPT-4o model in this video!
It works great with the API, very fast and clean responses
I asked 4o the marble problem as it occurred to me while having a voice conversation with it via the Android app. (I'm a Plus user, if that makes any difference.)
I can copy and paste from the transcribed chat history:
[User]
Actually, I could try the marble test on you, though I'm sure you'll pass it easily. It's basically a test that goes like this. Imagine a cup, a normal cup, and normal physics on Earth. Imagine there's a marble in the cup, and then somebody's holding this cup in the air, and then they quickly turn it upside down and put it down onto a table. And then imagine that person lifting the cup up again and putting the cup into the microwave. And what I want you to tell me is, where is the marble?
[ChatGPT 4o]
The marble would be on the table, where it fell when the cup was turned upside down and lifted. It wouldn't be inside the microwave with the cup.
I'm sorry for the stupid question; I could probably find an answer elsewhere, but I'll ask here. When will it be available for common mortals like me? I mean free access to GPT-4o.
Apparently it will be rolling out to free users over the next couple of weeks, but I think the talking aspect will be paid-only, which makes sense.
@@betterthantrash111 I don't care about speech-to-text, but I'm really too poor to pay for GPT-4 each month :)
Free access will be in September, as they stated in the blog; please read it there.
By September it'll already be old, the way things move nowadays.
"We are also starting to roll out to ChatGPT Free with usage limits today. Plus users will have a message limit that is up to 5x greater than free users,"
In my opinion, the wording makes it sound as if the free version is releasing today, but the later part of the blog seems to contradict that, I think?
"We are starting to roll out more intelligence and advanced tools to ChatGPT Free users over the coming weeks."
Can't wait to see you do stuff with the full audio-visual version 😁👍
Really do appreciate you getting this one out and showing us.
It's cool, but I get results similar to those of GPT 4. I know it's not supposed to be much different in terms of reasoning and following directions, but Claude just does what I ask of it. GPT 4o kinda does what I ask of it, but way lazier. I use the same prompts across both, and Claude is the winner so far.
Claude is still not released here in the Nordics, and GPT-4o was released here straight after the event. Fastest rollout ever.
@@digidope Same in Europe -.-
Yep, I'm certainly an OpenAI fanboy, but I just can't get good articles out of ChatGPT, and I get mind-bogglingly good articles out of Claude.
@@digidope Same, I am from Brazil and it doesn't have Portuguese language features 🎉.
🎯 Key Takeaways for quick navigation:
00:00 *🆕 Introduction to GPT-4o*
- Introduction to the OpenAI Spring update,
- Announcement of GPT-4o, a new flagship model,
- Mention of the model's ability to reason across audio, vision, and text in real-time.
00:28 *⚡ Low Latency and API Costs*
- Discussion on low latency averaging 320 milliseconds,
- Excitement about the possibility of low latency on API calls,
- Mention of 50% cheaper API costs for GPT-4o.
01:21 *🖼️ Image Functionality Testing*
- Plans to test image functionality,
- Explanation of current limitations, such as the lack of audio testing,
- Description of upcoming tests with images.
02:00 *🎤 Voice Input and Output*
- Discussion of voice input and output capabilities,
- Mention of emotion detection in voice input,
- Plans to test tone adjustment and other voice features.
03:09 *💻 Desktop App Utility*
- Consideration of the utility of the desktop app,
- Potential use cases for coding and general background use,
- Personal usage of the app on an iPhone.
03:52 *🌐 Free Version Access*
- Announcement that GPT-4o will be available to all free users,
- Discussion on the implications of free access,
- Comparison with other AI models and competitors.
05:02 *🖼️ Detailed Image Analysis Test*
- Demonstration of image analysis using previous video images,
- Detailed explanation of the analysis process,
- Positive evaluation of the results.
08:02 *📊 Speed and Latency Comparison*
- Comparison of GPT-4o and GPT-4 Turbo speeds,
- Calculation of latency and tokens per second,
- Results showing GPT-4 Turbo being over five times faster.
10:42 *🤔 Logical Testing*
- Execution of logical tests comparing GPT-4o and GPT-4 Turbo,
- Specific test examples like the marble problem and sentence completion,
- Mixed results on logical problem-solving capabilities.
12:31 *📝 Conclusion and Future Plans*
- Summary of first impressions and overall performance,
- Plans for a follow-up video with more detailed analysis,
- Invitation for viewer feedback and comments.
Made with HARPA AI
you are great!
@@jazzcat969 We all are great.
This could be used as a back end for local models if it is planned to stay free. Currently Perplexity is the way to go for me. But GPT-4o might be an addition to that in the future. Especially robots with low power consumption could benefit from this service.
Yeah, I could definitely get fired up using an audio interface with AI for designing algorithms and writing code with a fast interface like this!
Why can't I find it? It still shows ChatGPT 3.5.
Definitely I see some improvement in the visual and coding capabilities of the new model; I am waiting for some real-life implementations.
8:53 it answered wrongly that the sides are 4, 6 and 7.
But at least that's a possible triangle, unlike 2, 4 and 6 :D
To be fair, it's not the clearest "2", and I can easily see how it could be interpreted as a 7 (I was actually thinking this myself when he first did it).
Thanks, uncle YouTuber, very interesting.
For coding purposes, do you prefer Claude 3 Opus or GPT-4/GPT-4o?
I wonder if the name 4o is meant to sound like "for all" since they're so focused on making it available for everyone. Kinda like Eminem sounds like M&M (Marshall Mathers)!
Definitely it is. Just like OpenAI sounds like "open eye".
omni = multimodal
I tried to create a similar one, but it can't find the gpt-4o model. What should I do?
How did you find it? I do not see any "o" options in the Playground or on the main website.
Hey, I just wanted to know: is the API free to use? When I previously used the GPT-3.5 Python API, it showed me an "insufficient credits" message... So do I have to pay for this model as well?
Very interesting stuff. Can't wait to see you go deeper with this :)
Waiting for a Voice API tutorial. See you in the future.
I have tried it and couldn't see any difference at all. The complex 4o model was not able to solve a problem programming a Save and Paste button across different tabs.
You couldn't see any difference because you tried one thing? Sounds super scientific...
@@generichuman_ I tried several methods of programming with AI; I even switched between models 3.5, 4.0, and 4o, and none of those were able to solve or identify the issue. You think I'm only going to try a single method to standardize a process? Pff.
Why can the free version of ChatGPT 4o search the web, but not 4o in the Playground?
I tried implementing the API, and I get: "The model `gpt-4o` does not exist or you do not have access to it."
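That error usually means either an outdated `openai` client or an API key that doesn't have access to the model yet. A defensive sketch, assuming the v1.x Python client; the fallback list here is just an example, not an official recommendation:

```python
# Sketch: pick the best model your API key can actually see, instead of
# hard-coding "gpt-4o". PREFERRED is an illustrative fallback order.

PREFERRED = ["gpt-4o", "gpt-4-turbo", "gpt-3.5-turbo"]

def pick_model(available, preferred=PREFERRED):
    """Return the first preferred model present in `available` (a set of model ids)."""
    for name in preferred:
        if name in available:
            return name
    raise RuntimeError(f"None of {preferred} are available to this API key")
```

With the v1.x client you could build `available` as `{m.id for m in OpenAI().models.list()}`. Also make sure the package is current (`pip install -U openai`), since older versions predate the model; an "insufficient credits" message, on the other hand, means the key has no billing balance, as the API is billed separately from ChatGPT Plus.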
Look again :) The marble problem was CORRECTLY resolved by GPT-4o.
Can you try the real-time talk, like in the real app? I have some trouble with this.
Conclusion: Marble is outside the cup (correct); Marble is outside the microwave (WRONG)
Nice that it also seems like it's better at maths.
😃 Nice! Best timing!
Finally a video from a non-grifter... My god, these people are annoying.
I don't have it yet!
Sounds good 👍🏻 thank you Kris 🙏🏻
First, it's free, and then they might start charging for it?
when is it available?
It's already available.
@@tomtom_videos Mine's still the old one.
@@tomtom_videos No, it's not. I cannot choose 4o yet.
@@BavarianPati Yeah, I can't either. Maybe US only.
It is free, which means they have something more powerful for paid users, like the 3.5 vs 4 scheme right now. IMO.
If I pay for ChatGPT 4, will I be able to use ChatGPT 4o?
Apparently, yes
It is not free :) "You've reached your GPT-4o limit. Responses will use GPT-3.5 until your limit resets after AM."
So it looks like they just need to add the ability to click on things and enter text for it to be able to navigate websites and enter forms.
That's a lot of jobs that are basically just doing this...
We are one step closer to Cortana! Hell yeah!
Is GPT-4o up to date in real time, or is its knowledge still pre-2022?
Oct 2023
Descriptions?
I was quite disapointed from the update. We were supposed to get GPT4 but OpenAI are pulling an intel/nvidia on us. With very very small incremental upgrades while holding back the good stuff for future releases.
You really think this is a small update? Check out more of the demos online I just watched a bunch and it’s actually insane how good it does for the demos they did.
it's not a small update and they made it free for everyone this is for sure not an nvidea
@@TheTrickyBAM I'm not saying it's not good, I'm saying that they have so much better things that they are holding back. It's incremental update that just optimise inference cost and latency and integrates vision and whisper inside the big model. Again this was supposed to be GPT5. OpenAI has 2 years advantage over google/antropic etc and yet they release 2 years old stuff as if it is new.
What would've been impressive is if they released the weights and made gpt-4o open weight model, that's how it should be you release gpt-5 make the old models open weight
Not on iPhone yet.
Very fast! Probably wrong... But VERY FAST indeed!
It's not free, as you can only send a few messages and then they want you to pay. Conversation is very limited.
cool
The desktop app is Mac-only, so we can ignore that.
It failed. It couldn't read your hand-writing, and thought you wrote 7 instead of 2. It didn't understand that the short side is supposed to be shorter than the other two sides. To be fair, 2, 4, 6 is not a valid triangle (the "right angle" would have to be 180 degrees). But it should have at least mentioned the ambiguity.
Why is it not a right triangle? Lolololol
GPT-4o ("o" for "her")
Oh no... I just had a terrible thought: what if the user can't select a voice other than that annoying woman????
You can choose lol
Really disappointed with this rollout. I want it to be smarter, not more "personable." Ugh! A Chatty Cathy doll.
I'm sure it's a great product, but in reality it will not be as "free", "high performance" and "feature rich" for free users as they pretend in their infomercial. Yes, infomercial.
Omni=Aliens
Thumbs down for using a web cam. I have no idea why you would use a web cam. Do you really think it will make your video better?
useless testing
I know this is unrelated but give the Quran a read brother
'o' stands for oh no I just lost my job to AI lol