😂😂😂 Thank you DeepSeek, this would have never happened without you
Or it will cost us $$$$
I think ChatGPT is worth no more than $5 per month 😅😅😅
Absolutely! Competition is good
@@nutt_n1434 Depends. They are losing billions maintaining it, along with paying for research and development. You also need PhD students and graduates to make a PhD-worthy model.
So, let's think about this for a moment. DeepSeek spent $5.6 million on the FINAL training RUN. This doesn't include R&D, engineer salaries, managing huge datasets, and compute. I won't factor in hardware, because that is something they had before this.
So either they used a distilled ChatGPT model, or they used straight-up synthetic data provided by o1 (which would be cheaper than scouring the web), or they are hiding the cost of pre-training, R&D, and total compute for the entire process except for its final training run.
I personally think ChatGPT is worth more considering the tools they are adding along with the power of their models. It creates much more value than the model alone because there is more autonomous work behind it. You can ask DeepSeek for knowledge and still get hard stuck from a lack of true understanding, but ChatGPT is getting closer and closer to the point where most of the manual work is done for you.
$20 a month is fine. I think $200 a month for Pro features is excessive, and it is the only point I would give you.
What OpenAI needs to do is find a way to make compute much more affordable for these recent models. I would love it if DeepSeek truly found a way to do it, but I am skeptical. This isn't the first time China has found a way to make the market crash on our end, or has done something to stall progress by discouraging investors.
I think it's time people get red-pilled and actually focus instead of praising a competitor who could upend our lives and bend them to their vision if they win the A.I. race.
I could be wrong, but it's worth looking around and trying to understand how they did this, not just that they did it, before coming to a conclusion on what's fair for ChatGPT's cost.
DeepSeek is a great competitor 😂
They are adding abilities copied from DeepSeek but not reducing the price!
What abilities are you speaking of? DeepSeek doesn't do Tasks.
DeepSeek doesn't have something like Operator.
OpenAI has had Canvas with artifacts and previews.
DeepSeek doesn't have Projects.
What are you referring to? The Think button?
@@diliupg Give it a few weeks; there will be Janus integration and everything in DeepSeek. Exciting times. With the possibilities that localised R1 workstations offer, new and more efficient businesses based on AI will pop up everywhere.
@@georgemontgomery1892 DALL·E 3, Advanced Voice Mode, voice mode with vision... And you guys are acting like DeepThink copied nothing from ChatGPT 😂
Qwen AI does
YOU ARE A GENIUS. I have watched you from the very beginning; I have watched you grow.
For a long time you gave sped-up information in TikTok tips and Shorts with a lot of self-promotion in between.
NOW you post YouTube videos with valuable tech news, and you do something other YouTubers are not smart enough to do...
Instead of promising info, asking people to like, and doing a lot of self-promotion, YOU JUST CUT STRAIGHT TO THE VALUABLE INFO and take your time explaining it to the viewer.
Your editing is so smooth, and I can tell it's a lot of work. You really go out of your way to make the viewer feel valuable and appreciated.
YOUR NEXT LEVEL OF SUCCESS WILL BE ASTOUNDING. I am so proud of you.
Remember us little guys when you get your own TV show or something.
This comment means more to me than you’ll ever know - I appreciate it ❤️
I like your channel a lot. You get right to the point; you're not trying to sell us on tutorials and sign-up plans or anything like that. Straight to the point. Thanks!
My mind is blowing up daily. This stuff is insane.
All of a sudden! Thank you DeepSeek for scaring them out of gouging us. I still feel more comfortable giving my info to the CCP.🤣🤣🤣
Rob gets excited about everything. His excitement makes you say "hell yeah, I want that." He could sell you the same Apple iPhone for the next 5 years doing the exact same thing, just by the way he presents it.
Cool! Thanks much for sharing.👍👍
Don't really care for the new Think mode. It's still rate-limited, so it's still a capital tactic. But I mean, the new Canvas change is exactly what I wanted when Canvas first dropped.
Honestly, OpenAI just needs to be WAY more certain with timelines and release dates. I hate having a cool-looking feature get announced, then getting shut down by months of delay or outright zero access due to a plan difference.
Pro feature. Maybe inform users about this at the beginning of the video so we don't waste time watching…
Great to see PORSCHE - Stuttgart in the background, my hometown where I grew up :-). Besides, it's really excellent information, succinct, and no-nonsense, thanks!
thank you! I love my porsches!
You win my sub. Great video!
Incredibly informative and well-presented!
Does Think mode count towards your o1 weekly limit, or does it count against the regular 4o limit, which is something like 40 prompts every 3 hours?
Have the same question
It uses the o1 weekly limit; also, the feature has already been removed lol
Well done!
Is this a feature for the two-hundred-dollar tier and only on PC? I don't have it on Android.
Thank you, DeepSeek. Thanks to you, ChatGPT has gotten so much better and more efficient with all the traffic you diverted away from the platform.
Are you sure Think mode is rolled out for everyone? I am on a plus plan and it does not show (non-US user)
me too :( I was all excited and then realized I'm a loser
I am a US Team user and don't have it yet.
@@lawsplaining I'm a US Plus user and don't have it either
Had same question. I am doing some work at @AlienCantina where I have access to a Teams and Plus account and neither have it.
I don't see the thinking option. Is this an update for everybody?
they removed the feature after 1 day xD
FR? shiz, I was wondering what was going on
Great, man. True knowledge is here. Really, thanks.
Thanks! Do you mic your keyboard too?! 😂
Thanks again bro YOU Rock!!!
thank you!
@@realrobtheaiguy Collectively, there is not another presenter as obnoxious as you.
1) You talk way too fast. Yes, I could slow it down, but why should I have to?
2) Nobody thrashes around with their hands like you do.
3) What's up with the echo? You can't find a larger closet?
4) I don't want to hear your keyboard.
5) If I was in the room, do you think we would be as close as you are to the camera?
You transfer all of this stress energy to me.
Can this be adapted to custom GPTs that have already been built?
I don't see Think mode on my paid version... is this a Pro feature?
Yup Sigh...
No they removed it
Same here.
Nope, I have Pro and don't see the option. Speaking of OpenAI, they were supposed to release o3-mini, so I'm completely confused about what is going on.
@XwO-mk You are correct, it was supposed to be today at 10 AM PST. Idk what happened, but they are saying it's definitely tomorrow now.
I have the Pro version and do not have the Think option. I can choose ChatGPT o1, but no Think option... any idea why?
o1 is in think mode by default.
make sure you have the most up-to-date version of the app! You should definitely see it / begin seeing it over the next few days!
@@realrobtheaiguy Not on my app either; I just updated to the newest version.
Nice work!
thanks!
DeepSeek to the rescue! 👏🏽👏🏽🙏🏽. Another good use case video. I'm subscribing 👍🏽
Very, very useful video. One tip: ditch the typing sounds. They are pretty annoying (at least I think so)...
That's odd, Think mode is not showing up in the desktop Version 1.2025.021. What version are you running and which subscription option?
Version? I access it through the browser.
@@mas5867 I thought OpenAI was encouraging people to use the app over the browser by adding more features to it, but I guess the opposite is true.
It's not on desktop, mobile, or browser.
I strongly miss a speech-to-text function in the desktop version of ChatGPT. I don't know why they haven't implemented it yet.
Impressive yet terrifying!
Thanks, this really helps
Glad it helped
How many additional gallons of water are consumed at the data center by using the thinking option?🤔
great vid man......
Incredibly informative and well-presented! Well done!👍
I dont have that think feature inside my account
They took it away
@@MustardGamings If they took it away already, they are very bipolar.
Hmm I wonder if it’s just querying itself 2-3 times with “enhance your answer” 🤔
Probably.
Do you think deepseek does this differently?
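For anyone who wants to play with that guess, here's a minimal sketch of such a self-refinement loop. `toy_model` is a hypothetical stand-in for a real model call; nothing here is confirmed about how Think mode actually works:

```python
def self_refine(ask_model, prompt, rounds=3):
    """Naive self-refinement: ask once, then repeatedly feed the model
    its own previous answer and ask it to improve it."""
    answer = ask_model(prompt)
    for _ in range(rounds - 1):
        answer = ask_model(
            f"Question: {prompt}\nDraft answer: {answer}\nEnhance your answer."
        )
    return answer

# Toy stand-in "model" so the loop can run without any API:
# it returns "v1" on a fresh question and appends "+" per refinement pass.
def toy_model(prompt):
    if "Draft answer: " in prompt:
        draft = prompt.split("Draft answer: ")[1].split("\n")[0]
        return draft + "+"
    return "v1"

print(self_refine(toy_model, "What is AI?", rounds=3))  # → v1++
```

With a real model in place of `toy_model`, each pass costs another full inference, which would also explain extra latency and rate limits.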
I thought Operator was $200 per month?
correct!
They gave it to Pro users for early access. They will provide it to us Plus users later.
Solid content
I’m sure there’s some AI audio tools you can use to fix the sound level of your keyboard.
Tasks are presently available only on Teams and Enterprise level subscriptions.
Great video 🙂! Sadly, "Operator" is not yet available in Germany :/
can't you VPN?
They need to make the API "stateful." Right now it is "stateless"; that is, it does not remember any context the way the browser interface does. Making the API behave the same way the browser does would be a game changer. By the way, if anyone knows of a "stateful" API, please let me know. Thanks for the video.
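For what it's worth, the usual workaround for a stateless chat API is to keep the state client-side and resend the whole message history on every turn. A minimal sketch; the `gpt-4o` model name and the message format follow the common chat-completions convention, and the actual network call is omitted:

```python
# The chat API is stateless: each request must carry the whole history.
# A thin wrapper that keeps the "state" on the client and resends it.

class StatefulChat:
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def build_request(self, user_text):
        """Append the user turn and return the full payload the API expects."""
        self.messages.append({"role": "user", "content": user_text})
        return {"model": "gpt-4o", "messages": list(self.messages)}

    def record_reply(self, assistant_text):
        """Store the model's reply so the next request includes it."""
        self.messages.append({"role": "assistant", "content": assistant_text})

chat = StatefulChat("You are a helpful assistant.")
payload = chat.build_request("Remember the number 7.")
chat.record_reply("Got it: 7.")
payload2 = chat.build_request("What number did I ask you to remember?")
# payload2["messages"] now contains all prior turns plus the new question
```

This is exactly what the browser interface does under the hood, which is why it "remembers" and the raw API appears not to.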
Nice keyboard sound ⌨️
So they're adding a free DeepSeek feature to the Pro plan...
ChatGPT's intelligence needs a new measure. With browsing turned on, when I asked it to compare Cubase 14 and Luna, it said that as of 25th January both DAWs were not available yet, which was nonsense, as both were released some time ago! This happened last night (30 Jan 2025).
Your content is good but I turn off when I hear that keyboard noise. Horrific.
Needs to get himself some Gateron creamy yellows, or maybe a good tactile
What I posted to him in another thread above.
Collectively, there is not another presenter as obnoxious as you.
1) You talk way too fast. Yes, I could slow it down, but why should I have to?
2) Nobody thrashes around with their hands like you do.
3) What's up with the echo? You can't find a larger closet?
4) I don't want to hear your keyboard.
5) If I was in the room, do you think we would be as close as you are to the camera?
You transfer all of this stress energy to me.
Is this an updated version of the Tasks feature that was introduced 3 weeks ago? That version was rather buggy.
If you want to duplicate some HTML page, just copy it and change the classes slightly; the page source is literally 2 clicks away in the browser menu 😂
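If you'd rather script the "change the classes slightly" step, a regex does it in a few lines. A small sketch; the inline HTML string and the `my-` prefix are just made-up examples:

```python
import re

# Page source as copied from the browser's "View Source" (inlined here)
html = '<div class="card card-blue"><p class="lead">Hello</p></div>'

# "Change the classes slightly": prefix every class name with "my-"
tweaked = re.sub(
    r'class="([^"]*)"',
    lambda m: 'class="' + " ".join("my-" + c for c in m.group(1).split()) + '"',
    html,
)
print(tweaked)
# → <div class="my-card my-card-blue"><p class="my-lead">Hello</p></div>
```

For anything beyond a quick tweak, a real HTML parser would be the safer choice than a regex.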
Think isn't available on the ChatGPT mobile app 😢
I'm a plus user and still not seeing this option yet.
Is this with the free version Rob?
some of them yes!
I am not seeing the think icon
I don't get how this is a new feature; it's just using the o1 model with a better GUI toggle button for normies who don't know the difference between models. Using the o1 model in the past always let you view its CoT too. And the coding canvas feature came out months ago.
I don't want to listen to typing.
I figured Chatgpt would catch up quickly without storing our data in China.
Use Perplexity... that's DeepSeek running on a US server.
Why is Justin Trudeau saved as a bookmark?
You lost me at Yankees 😂
The input widths are block-level... it didn't do it perfectly. I've had this same issue with several projects, but I'm a real developer, so I can fix it!
AI Legend ❤
Is it just for Gmail?
It still seems you can't put your chats into folders…
Think Mode no longer appears for me. 🤷♂
Will the US stock market now go up because of new features from ChatGPT?
Stop that, man. DeepSeek has made this useless.
Emails from Tai Lopez?
Hopefully it is free and open source.
$200 a month is the one that idiot is promoting, because he's being paid.
I don't see any of this on the app or web site
I did try the screenshot in Canvas, but yeah, the reasoning with each model seems to be a Pro-exclusive feature. He probably didn't realize this.
You just gave a cloud-based AI full access to your email account? 😬
So they added the DeepSeek R1 model into ChatGPT, renamed it to o1, and charge you money for it? Yeah, no thanks.
DeepSeek has way better engineers; they will innovate into the future.
I was all on board until I noticed he didn't correct the word crisis. smh!
not on mine
? You didn't provide much context here for me to try to help, my friend.
That is the worst keyboard noise ever... it made me stop watching 😮
Who do you think you are telling everyone what folks should or shouldn't do?
Operator is the most useless thing I've ever seen. If I was that busy, I wouldn't have time to be here 😂
At this point I don't see any point in learning how to code. I mean what's the point if AI can do it all?
I luv it, just love it. All the times I spent hours looking for code on Google and was seldom completely successful, because coders weren't giving it up. I hope it CRUSHES that industry.
Your pupil and retina are flat, and you only see a flat image: light goes straight to your eye, and your brain processes it into 3D. We don't actually see 3D images or objects, and 3D objects don't get thrown at the eye, or that would hurt. It's like color: we make up color. We only see the wavelengths of the spectrum that weren't absorbed. Our brains make up 3D. Some women have an extra cone and have better depth perception; to them we look color-blind compared to their normal vision. Anyway, we still take in a flattened image. How light hits your eye is very similar to a projector: imagine you're standing where the screen is. The image that comes at you hits the flat part of your eye, so it is just as flat as the light that travels toward the projector screen. The light hits the flat screen and bounces to our eyes if we're not standing where the screen is. Our brain has to turn this into 3D. Most of the information is in the light itself: the wavelength, the spectrum, the angle, and how the light is stretched. We don't actually see shape.
ChatGPT? Hmmm, it sounds familiar… is it the company that charged the government $500 billion for something that was done for just $5 million? And is it run by someone who's looking for a job at McDonald's now? Greed was his downfall.
Only $200 a month to use this feature 😂
@Rob The Ai Guy hi there, love your videos. I'm hoping you could get into being more specific with your cursor location please? For example instead of saying "come over here" you could be precise and say exactly where 'here' is. Same with "click this", it, that, here etc...... I sometimes can't follow where you are. This is a genuine question and I'm respectfully asking. Cheers friend.
sure I can try to adjust this! Thanks for the feedback and I appreciate it!
Canvas in Projects has been available since Projects launched 😅 I mean, I've had it since day one... Also, you forgot to mention that Operator is not only for Pro users but also US-only. It will probably be broadly available by March; I think it's mentioned vaguely in OpenAI's Operator presentation video. Also, Tasks has been available for a few weeks now 😅 Cool video, nice presentation, but you remind me of a friend of mine who is super excited about news and sends me links to news I already saw weeks and months ago.
Man you need to dial down the enthusiasm.
Think mode is gone, btw; they removed it because it was stupid and rushed. Also, telling it to write any given number of words isn't going to work: LLMs use tokens, not words, so they have no idea how many words they're using. It seems you're not the best at AI and shouldn't be doing comparisons.
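The tokens-vs-words point is easy to illustrate. Real tokenizers differ per model; the ~4-characters-per-token figure below is only a commonly cited rough heuristic for English, not an actual tokenizer:

```python
# LLMs don't count words; they count tokens, and tokenizers vary by model.
# Assumption for illustration: roughly 4 characters per token in English.

def rough_token_estimate(text):
    """Crude heuristic, not a real tokenizer: ~4 chars per token."""
    return max(1, len(text) // 4)

text = "antidisestablishmentarianism is internationalization"
print(len(text.split()), rough_token_estimate(text))  # 3 words vs. ~13 tokens
```

Since long words cost many tokens and short ones may cost less than one each, a model asked for "exactly 500 words" is really steering by a fuzzy token budget.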
How much does it cost😂😂😂😂😂😂😂😂😂😂😂😂😂
8:58 Or maybe go back to knowledge, skill, experience, and using your brain.
Who copied whom?
I see that... today, haha, it's gone! Now wtf...
Still plays laughably bad chess.
How much did this mob pay you 😢
A long time ago there was a brain experiment on a guy who had severe mental problems. They disconnected the two hemispheres of his brain, and during experiments they tested his vision. Because the two sides couldn't communicate, if an object appeared on one side of his visual field he could not see or visualize it, but he could see it on the other side. Why? They never solved this. Also, he couldn't visualize it but could draw it, like when you remember something without a picture in your head. Why? Do you really want to picture your mother's face every single time you remember what she looks like, constantly, with everything you've seen? This is so you can remember without it being visual, without picturing it in your head. It also relates to imagination, like when you close your eyes and picture something someone asked about. One side of the brain handles picturing something and the memory of the picture; the other side handles remembering without picturing it all the time, such as what something looked like. They complement each other; it is not error correction at all. It was odd that the guy knew something was there but couldn't see it, yet could draw what was there. Remember, your right and left eyes are controlled by the opposite hemispheres too. Somewhere there is a documentary about this brain science, along with newer work on making fingers move by influencing electrical signals; it has to be trained by listening first, then producing the signal itself.
Underwhelming compared to Deepseek
Gimmicks. Lower the price for Plus.