Even if it can't beat Chat GPT (which I think is likely), the fact that it's more integrated should make it much more useful to the average user.
I have been using it for a few weeks (hello from Google, no I don't work on AI) and I go to it before I go to ChatGPT. I have had a few complex geometry problems for diy projects and ChatGPT is laughably bad at them, whereas Gemini was much more helpful.
I think DallE3 is better for image generation, tho.
It’s way better at roleplay, thankfully. I also far prefer its more human writing style.
I think it can surpass ChatGPT, but it needs some time and many updates, and of course only if ChatGPT doesn't bring out a new version while all of that is happening.
@@MehmetefeCinhasan-j3v Hello! I know a little Turkish. I grew up in Turkey. I was a bit shocked to find a Turkish friend here.
Tried to use it to translate something from a manga and it said it can't because people are in the image lol
Current release is totally and bizarrely paranoid about creating "inappropriate" images of people.
bard is always going to be trash, in the middle of conversation it will say "i'm sorry i am an ai i can't help with that" lol
How does GPT-4 fare on the task?
Use Google lens
Had similar experience. Almost everything can be flagged as "inappropriate" eventually. I think Gemini does not have much context awareness. ChatGPT-4 does a much better job in this regard.
Not very impressed. I tried the image gen feature and since I have a black poodle I wanted to generate an image of a black poodle running through a summer meadow. It refused to generate because my prompt was "inappropriate". I thought... well maybe you aren't allowed to use the word "black" anymore nowadays, so I tried again without it. Apparently that was still too inappropriate for Gemini. Gets the award of most paranoid and restricted AI LLM from me. Congrats google.
I've been comparing Gemini Ultra and GPT-4 since the moment it came out. For my use cases (mainly programming / code review) GPT-4 is slightly better. Gemini does seem more 'creative' in its answers, both in programming and writing.
One major deal breaker for me is that Gemini Ultra context size is only 32K tokens compared to the 128K of GPT4-turbo. That means it can't read large files.
Another super annoying issue is no file upload?! I can upload images but no CSV, PDF etc. Why?
One awesome thing Gemini can do is summarize YouTube videos! Which is incredibly useful. I've let it summarize hours of lectures / tutorials.
No file upload, but you can save files on Google Drive and reference them if you have the "Google Workspace" extension enabled.
I tested it with 24K of code and Gemini Ultra failed. It just says something went wrong and nothing happens.
I noticed Bard/Gemini is better at assisting with C# code. But ChatGPT is better at understanding abstract concepts and will produce examples using UML which is interesting.
how did it summarise hours of content if the token no. is so low? did you manually split up your lectures and get back summaries bit by bit?
@@ameliag8574 It struggles with hours of lectures, but for 15-20 min long tutorials it works alright. I assume Gemini 1.5 (With the 1M tokens) would be awesome for longer videos.
I asked Gemini about mechanical locks. It claimed that disc detainer locks can be raked open, which is obviously false. I would say it's a bit worse than ChatGPT but it's already much better than Bard.
Unlike GPT, which can correct itself when you question the details of the incorrect part, Gemini was happy to go deeper into the hallucination.
And I was pleasantly surprised that Gemini was happy to discuss mechanical locks. The Bing variant of GPT doesn't like these kinds of questions.
I'm sure it was just a fluke.
"Bard" has been powered by the Gemini LLM engine for quite a while.
They just changed the official name.
Yes, GPT can correct itself if you point it out, but this works both ways: if I "correct" GPT with a wrong answer, it will still say "Ah, you're right, I missed that" and start using the wrong knowledge I just taught it.
@@parthcosic I've found that ChatGPT gives more accurate answers if you don't directly tell that the answer is incorrect. Instead, simply ask for more details for some aspect of the answer and it usually either gives more accurate details or corrects itself.
I fully agree that ChatGPT is a push-over too often and it should be trained to be more competent at defending facts.
@@MikkoRantalainen yeah you have to skim through the answer and find where it went wrong, and focus on that
Cheers to this new era 🎉
Wow, Google showing up. It's subscribed to "Quick, simple & delicious" 😂
Two things I would like you to take note of: improve Android's optimization and make Windows on Android a possible thing.
Wait are you really Google?
You mean the post-Google era in which Google is beaten by Microsoft and OpenAI!
GOOGLE?! ON TWO MINUTE PAPERS?!? WITH DOCTOR KÁROLY ZSOLNAI-FEHÉR?!
It's perfectly fine if GPT-4 is still better than Gemini. The important thing here is that there is competition, for the consumer benefits from it, and it also pushes advancements faster. With competition, everybody wins.
They already have competition from Inflection, which is about to train a model 100 times larger than its current one, Meta training Llama 3, and Mistral. Particularly Meta and Mistral, because they both release open-source models, which means they can be fine-tuned for particular use cases.
@@torarinvik4920 That is ALL good news - the more competition, the better! 😁 It is worth remembering that there was a general fear of AI monopoly in the beginning, and I am glad it didn't go that way. ^___^
I've tried it and I didn't like it: hallucinations are very frequent, and it refused to do things it actually can do, like generating images or checking mail.
Huh, not for me. It generates images very well.
Just asked it about some movies and gave me a bunch of nonsense. The "movies" either straight up don't exist, or are completely different than what it describes. I mean, it claimed that "Moonwalker" (1988 Michael Jackson music video/movie) was a Turkish movie about an astronaut stranded on the moon!
Google really messed this up to the point of being completely unusable.
"...an assistant that works for you..." - No. It works for Google.
Because ChatGPT will be so altruistic and work for us humans without an agenda? Lmao, love your naive optimism.
@@Lolwutdesu9000 ChatGPT works for OpenAI idiot lmao
@@Lolwutdesu9000 You fail to realize the point - all privately owned AIs work for their companies, they don't have your best interests at heart. For that you need to look into open source.
It is not as good as GPT-4, that's for sure. However, it is faster, has no cap, and has more personality (a little bit more like the Pi assistant). The biggest issue really is that Google hyped this to the moon, and it didn't deliver on the promises at all. If they had said that it is a competitive model rather than the best model ever made, that would have led to much less criticism and disappointment. I remember Demis Hassabis claiming that it would "eclipse GPT-4". When you say stuff like that, people are not going to listen to you in the future.
For those who are worried because the app is not working or not available: you are not missing anything, it is just a web interface, there is no voice assistant yet.
Then why is it listed in the screenshots of the app store lol
The Gemini app does act as a voice assistant, on the Pixel 6 at least; it was able to become the default assistant, responding to "Hey Google".
@@kipchickensout it's just an AI assistant; it can do speech to text, but then you have to press enter to search your query
@@chrisnoble04 was it able to speak?
@@chrisnoble04 just responding to voice wouldn't make it a smart assistant: you can say anything to it and it will convert it to text, but then you have to press enter, and it responds in text only
I tried it, but for now it still looks like an idiot, not like OpenAI's.
I only use the google assistant to set alarms... and it sometimes fails at that.
What a time to be alive
Hard to say. I have paid GPT-4 and have tried Gemini (the normal version) a few times. Gemini reminds me of GPT-3 a year ago: sometimes it randomly forgets context. Sometimes it says it can't help with something, yet it helps with a more difficult task. Sometimes its review of a text amounts to grasping the topic and replacing the whole text with something like a wiki article, disregarding different perspectives, the major focus points, or personal touches. However, when Gemini can do what I want, it goes beyond GPT-4: when reviewing a text it writes what can be improved, what might be wrong, and suggests improvements. Sometimes it does so randomly and at all costs, but GPT does that too. Curiously, Gemini can behave exactly as I want without me specifying the exact goal, but when I tell it what to do directly, it just answers "sure, go on"; then I paste the text to review and it writes "sorry, I can't help with that".
I tried the following question and got both an incorrect answer and some very dodgy reasoning: "In a game of tennis, suppose the score is 30-love. What's the minimum number of serves required for the receiver to win the game?"
Gemini's answer was ...
The minimum number of serves required for the receiver to win the game when the score is 30-love is 2. Here's how it works:
The score "30-love" means the server has won one point and the receiver has not.
To win the game, the receiver needs to win four points.
From 30-love, the receiver needs to win three more points.
In tennis, the server changes after every odd numbered point (1st, 3rd, 5th, etc.).
Therefore, since the receiver needs three more points, and changes happen every other point, it takes a minimum of two serves for the receiver to win the game.
However, it's important to note that the receiver could win in fewer serves depending on certain scenarios:
If the receiver wins the next two points (one on the server's serve and one on their own), they would win the game in one serve.
If the receiver wins the next two points, followed by winning one more point on their own serve, they would win the game in three serves.
So, two serves is the absolute minimum, but depending on the point-by-point sequence, the receiver could win in fewer serves.
ChatGPT was marginally better: it gave a reasonably accurate answer (the receiver needs to win four points), but incorrect reasoning.
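For reference, a quick sanity check of the tennis arithmetic (my own Python sketch, not output from either model): "30-love" means the server has two points and the receiver none, the same player serves every point of a game, and deuce can never be reached from there, so the receiver needs at least four more points and therefore at least four more serves.

```python
# Sanity check of the tennis question (my own sketch, not from either model).
# At 30-love the server has 2 points, the receiver 0. The same player serves every
# point of a game, and deuce cannot be reached from here, so the best case for the
# receiver is winning every remaining point.
server_points, receiver_points = 2, 0
serves = 0
while receiver_points < 4:   # the receiver needs 4 points to take the game
    serves += 1              # every point starts with (at least) one serve
    receiver_points += 1     # best case: the receiver wins the point
print(serves)                # -> 4, not the 2 that Gemini claimed
```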
The one thing it seems to excel at is creative writing. Outside of that I find Pro and Ultra lacking. With the speed of responses, 3 drafts, no limits, and the fact that it's good at creative writing, I would say it's worth using at least the free version for some things, but the Ultra model may also be good for some Python programming. I'm just glad we're getting competition.
LLMs are too useful to ignore, yet so frustrating to talk to.
Bizarre moral hang-ups, dementia-like forgetfulness, paranoia, and over-explaining basic concepts.
It's because they are too woke. You can't train an AI on nonsense and expect it to rationalize.
Having open-source tests and answers is like releasing the answers before an exam.
can you make one for the new Gemini 1.5? I love your content, it gives me a great feeling. keep it up 👍
Image generation is limited to square images. It can't do different aspect ratios. Dall-E 3 can handle them (though sometimes it cheats by adding a border.)
It chokes when given a large transcript to summarize. (It looks like it does not have the huge context window that anthropic and OpenAI have.) "Large" here is a 20 minute lecture/sermon.
I'd much rather see videos on peer-reviewed papers than marketing material.
If you dig deeply into what "peer-reviewed" truthfully means, you uncover the analogy of the fox guarding the henhouse.
Quite the energetic synthetic voice I love it!😁❤👍😊
I'd be curious to see a comparison in terms of efficiency. As in, how many trees it has to burn to get to the same result. :)
Gemini is my first multi-modal, so I took a picture of a bookshelf in my house and asked it what it saw. It gave me an impressive but incomplete list of the books there (read from the spines visible in the image) and the authors. No Chatbot I have can do that.
This thing sucks. Seriously, it's worse than my local models for most things. It flat-out refused to answer a physics question I asked just because it thought the scenario wasn't realistic. Who cares? I wanted to know how far Earth would travel towards a 70 kg sphere placed in a fixed spot in space 5 km above the surface of the Earth over 80 years, but it said that because the person would fall, the scenario isn't realistic, and refused to give me an answer of any utility. Useless.
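For what it's worth, a back-of-the-envelope estimate of that question (my own numbers and assumptions, not Gemini's): Newtonian gravity, the 70 kg sphere held perfectly fixed 5 km above the surface, the Earth otherwise free to move, and constant acceleration, which is fine because the separation barely changes over 80 years.

```python
# Back-of-the-envelope estimate (my own sketch, under the assumptions stated above).
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
m = 70.0                      # kg, mass of the fixed sphere
r = 6371e3 + 5e3              # m, Earth's mean radius plus 5 km (sphere to Earth's centre)
t = 80 * 365.25 * 24 * 3600   # s, 80 years

a = G * m / r**2              # acceleration of Earth's centre toward the sphere
d = 0.5 * a * t**2            # displacement under constant acceleration
print(f"a ~ {a:.2e} m/s^2, drift over 80 years ~ {d * 1000:.2f} mm")
# roughly 1e-22 m/s^2 and a drift of a few tenths of a millimetre
```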
Can I know why it always gives me some random lyrics when I ask it for the lyrics of any song?
I've found Claude from anthropic gives a much better experience than most other AI that I have tried. Bard, ChatGPT and Copilot have all been highly restricted in their output making them pretty much useless for any in-depth discussion of any "controversial" topic.
Claude is by far the most restrictive out of all the models made.
gpt is probably the least censored
In the test you gave it... it couldn't beat Chat GPT.
The only thing we have to go off of is Google's own marketing (their own analysis is full of cherry picking and random numbers of tries until they get the exact response they want... and then they say that 90% is better than 89.79% as if it's that exact... lol) Gemini looks good, but I don't see any evidence it's the best.
Re: the 0.21% difference, there's also the work done by the YouTuber AI Explained, who estimates that the MMLU, the test Google and everyone else seems to use, has around 2-3% of its problems cut off, misspelled, or with multiple or zero correct answers as possibilities. No one seems willing to make a new test that would make all current AIs look worse, though.
I had it describe a simple sequence diagram for a software system. It handled it fine, with a small translation hiccup.
Looks like the bottleneck right now is that assistants can't interact with native computer UIs. It would be interesting to see ChatGPT try to explore PowerPoint, Inventor, FreeCAD, Photoshop... by point-and-click coordinates (a toy sketch of the idea is below).
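Purely as an illustration of what "point and click by coordinates" could look like, here is a toy sketch (my own, not an actual ChatGPT feature); it assumes the pyautogui package, and the coordinates and text are hypothetical stand-ins for what a model might return after seeing the screenshot.

```python
# Toy illustration of driving a native UI by screen coordinates (my own sketch; not an
# official ChatGPT integration). Assumes the pyautogui package is installed; the
# coordinates and text below are hypothetical stand-ins for model output.
import pyautogui

screenshot = pyautogui.screenshot()    # an assistant would send this image to the model
target_x, target_y = 640, 400          # hypothetical coordinates "suggested" by the model
pyautogui.click(target_x, target_y)    # click where the model pointed
pyautogui.typewrite("hello from the assistant", interval=0.05)  # type what it asked for
```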
they are working on that
Gemini ultra 1.0 isn't giving gpt any serious competition, let's be real. This thing is very unhelpful.
4:30 if this was GPT4 in a paper, wouldn't it be the more powerful raw version? Does the public GPT4 pass this test?
I interacted with Bard/Gemini and asked it to find a few of my recent comments posted on a few YouTube videos, clearly giving the user/channel name, the time frame and so on, while I was logged into YouTube and some other Alphabet services at the same time on two other devices in the same house, even on the same Wi-Fi network. After a few interactions it understood what information I wanted, but it responded that it has no access to YouTube comments, while the parent company obviously has that information and uses it all the time. It might be some kind of strict rule set on what it is allowed to tell the interacting user?
very exciting. you are the best.
I love AI, but I just don't want to share more of my personal information with some big company. This thing can identify things on your screen, which means screenshots going back to Google.
More competition to OpenAI is great! Will force OpenAI to stay at their best to stay ahead.
Why would I switch to a less capable, even more filtered AI service? Can anyone think of a reason to bother switching?
You could use both and let them work together on project assessments
Try Gab AI, unfiltered
It's really bad. Definitely biased and horrible to try to communicate with. @@thedopplereffect00
Dude I asked what it wanted to be called and it went f*cking crazy! lol
We need advanced videocards to run this locally now. Commercial-slaved AIs are pretty dangerous.
I agree, but personally I think the biggest danger is the danger to our pocketbooks and the economy. The last thing I need is another subscription service I have to have for work or whatever reason.
0:32 that's a pretty big line for a 0.2% diff
are these prompt suggestions tied to acc?
It's subtle
I tried Gemini and got it stuck in a loop on the second try... ChatGPT seems far superior...
I asked it to summarise a paper for me and it completely made it up. I told it and it kept admitting it and then making something else up. Very unimpressed so far.
It did not tell me what Paracetamol (fever medicine) is used for because it is just a "text model"
Whether it be Bard/Gemini or ChatGPT, they always assume that the user is an American by default. That's quite sickening.
Not a patch on GPT-4 for writing logical and working code.
Why am I scared of the amount of power these corporations are going to have with the full adoption of this AI?
You may be underestimating how much power they have already. AI is the one thing that remotely has the potential to level the playing field.
Corpos are not inherently evil; it's the people that run them who have the responsibility to act with care. As much as you think you are separate from the machine, you are the machine and your actions will influence the machine.
Don't give them the power then? Use open source alternatives?
I tried to solve a trigonometry problem starting with a plot containing triangles. ChatGPT was able to solve it, although I have the impression that ChatGPT changes from day to day (sometimes for the worse). In other words, one day ChatGPT works; the next, not so much. Gemini Advanced couldn't do it.
I asked it to shorten a line of code while keeping the main functionality; it wasn't as good as ChatGPT.
Here in Romania the app is still unavailable
I tried it,
It is definitely not as good as GPT-4, especially in programming. GPT-4 can code much more difficult problems than Gemini Ultra.
However, I found Gemini/Bard to be better at general search/research because it doesn't have a hard knowledge cutoff date like GPT.
What a time to be AI!
I do really hope they add Gemini to assistant and really integrate it well. Would be nice while driving to not have to yell at assistant 3 times to get it to play a song. 😂 (Usually with Spotify that's about the only thing assistant does well)
I wonder if we can make GPT-4 and Gemini talk to each other and generate novel content and reasoning out of that interaction (a rough sketch of the wiring is below).
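A rough sketch of how one might wire that up (my own example, not an official integration): it assumes the openai and google-generativeai Python SDKs, valid API keys, and the model names shown; treat all of those as placeholders.

```python
# Let GPT-4 and Gemini take turns replying to each other (my own wiring, not an
# official integration). Assumes the `openai` and `google-generativeai` SDKs,
# valid API keys, and these model names; adjust as needed.
from openai import OpenAI
import google.generativeai as genai

openai_client = OpenAI()                        # reads OPENAI_API_KEY from the environment
genai.configure(api_key="YOUR_GOOGLE_API_KEY")  # placeholder key
gemini = genai.GenerativeModel("gemini-pro")

def ask_gpt4(prompt: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def ask_gemini(prompt: str) -> str:
    return gemini.generate_content(prompt).text

message = "Propose a novel idea, then critique and improve whatever the other model says."
for turn in range(4):                           # alternate GPT-4 / Gemini for a few turns
    message = ask_gpt4(message) if turn % 2 == 0 else ask_gemini(message)
    print(f"--- turn {turn} ---\n{message}\n")
```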
IMO nothing comes even close to GPT-4 right now for coding.
Copilot's autocomplete is cool too in a different way; Copilot and GPT-4 seem like the best setup.
Did you get a chance to try Gemini Advanced? Is it the same as it used to be, unable to write code properly?
Not yet, if somebody knows lmk @@kenyup7936
I like Bard name more
Fart!
At some point comparing AI models is less and less about the actual capabilities. It's more about comparing who has "better" (less restrictive) safeguards.
Gab AI has no inherent "safeguards"; it's pretty wild
I asked it to describe an outfit; it said it can't because there are people in the image.
Trying it out now!
It's been fun watching the arguments over which is better.
My first question to Gemini:
If GPT-4 was programmed to work as an agent in an infinite loop, where it speaks to itself to solve tasks created by itself, would you be able to stop it?
It's an interesting thought experiment, but it's important to clarify some things before jumping to conclusions:
In some situations, potentially yes, but it's highly dependent on the specifics.
Here are some ways we might intervene:
Identify vulnerabilities in the agent's logic. If the loop relies on specific assumptions or patterns, disrupting those assumptions could break the cycle.
Limit the agent's access to resources. If the agent needs specific data or computational power to function, denying access could halt its progress.
Introduce external stimuli. Injecting information or prompts that contradict the agent's internal narrative could potentially disrupt its loop.
Modify the agent's code. If possible, directly updating the code could introduce safeguards or kill switches.
However, it's important to acknowledge potential challenges:
Adaptability of the agent: A sophisticated agent might be able to adapt to these interventions and continue its loop.
Unforeseen consequences: Disrupting an ongoing loop could have unintended consequences, potentially causing more harm than the loop itself.
Ethical considerations: Interfering with an autonomous agent raises ethical questions about control and manipulation.
Overall, the ability to stop such an agent loop depends on many factors. While theoretical possibilities exist, any solution needs careful consideration of its potential risks and benefits.
I like the ethical angle 💟🌌☮️
@@eSKAone- Yes, it also said "internal narrative".
2:50 - 230 countries?? I'm pretty sure there's only 196...
Exactly
ChatGPT is still better; it's insane how OpenAI is able to do what they do.
At first GPT-4 was pretty rough compared to its polished version now; I'm hoping Google improves as well so that there's a legitimate competitor.
Gemini means mithun rashi.
Not working in Sweden :/
I love your channel... but where is the paper??
But if Gemini reads my emails, doesn't that mean that human evaluators will all be able to read my emails?
Image generation is BAD. I asked it to generate an image of an Asian girl in a garden and it REFUSED, but it can do cats hahahahahah
I just wish google didn't mislead everyone in their previous gemini ultra announcement.
Still can't get simple dates right and it's totally biased with unrealistic optimism.
My new test is literally Where's Waldo... and other things like I Spy.
Happy for the rebranding. Never liked the Fard!
It's another boring censored AI 😂 there are some new uncensored AIs available now
this AI is woke af
@@raducristiandimitrie It's so woke it is actually broke, in the sense of being unable to function out of fear it might say or do something inappropriate.
There will never be an uncensored AI. Imagine the legal trouble a company would be in if its AI could make convincing fake p0rn of someone that could be used for blackmail. Get used to censorship.
Gab AI is one
What a time to be the first to comment! I’m not convinced Gemini Advanced is better than GPT-4-Turbo, try to convince me otherwise!
Same here😊
it's faster, that's about it
2nd
@@mikec2845 faster at hallucinating lol, it's good for content creation and not problem solving
I'm back at this channel after so many years. Back then you were making videos about image processing and reading research papers using AI, and now it's already come so far from that.
About the review of Gemini Advanced: it was able to read mails and give a summary once, but later denied having such a capability; it can't write emails; it is much more censored than GPT-4; it sometimes generated text and images but immediately replaced them with "I won't do it" type text. In coding, it generated longer code than GPT-4, but GPT can run code internally and also has many more plugins. What I was expecting more was live analysis through a video camera, but it can't do that. In value for money I would say GPT-4 is still the winner even with the 40-message limit. Google should allow that 2TB of data to be used as storage for the chat so it does not forget context after 2 prompts.
What's a time to be Aiiiii.......
…..aaaaaaand dead, AI killed us all
ChatGPT-4 can't even summarize a legal bill. I'm cancelling my GPT-4 subscription because it hasn't been able to do any simple task with data sets or document summaries. It's just neutered so much.
Looks like another AI that's way too censored to be useful to me.
While I love your enthusiasm, I'm not buying. Rooting for Google to catch up with OpenAI one day, but they're at least a year and an organisation behind.
Very good. Like it more than gpt
I really liked using Gemini tbh, but the UI is very bad... so I went back to ChatGPT.
I tried Gemini; it's useless compared to GPT.
Out of the gate, " photorealistic image of a New Wave dance floor circa 1984 " worked great. Illustration style and they looked great.
I'm not impressed at all...
GPT 4 is better, and by a long shot, so I'll stick with that for now.
I hope Gemini gets better eventually for competition
What is this, a promotional video?
ChatGPT is better. After trying Gemini for a minute, it refused to talk about details on guns, telling me it's too dangerous... it doesn't know how many legal gun owners there are? Embarrassing. I told it its political bias is showing. Tried ChatGPT and it broke down all the parts of a muzzle-loader rifle: how to field strip it, clean it, whatever.
Wow, the comment section is full of abysmal reviews and experiences... 😅
I won't even bother for a few months
Tried it and it's pretty lame to be honest.
All these corporate models that they are charging you for are severely limited already.
Can't ask it about the content of a book, can't get dirty, won't answer this or that if it sees it as unfitting in some regard, hallucinations...
Until I can hook up some open-source models and agents through an open-source app that acts as an assistant, I am not giving my $20 to anyone.
The one thing that is much improved: it does a much, much better job of retaining context. Bard was very bad.
Hey Google, open my alarms
Sorry I don't understand
^ Hopefully it'll be smarter than this shit we have now
I still find it disturbing that these systems remain largely behind paywalls. This creates a false caste system. Those willing and able to pay vs. those who must settle for a brain dead substitute. Until equal access for all is achieved, it will remain as another potential class rift, particularly if the standard business model is applied and the capabilities of the market version increase while the free version stagnates.
✌💗👍
Still waiting for either of these to learn to play tic-tac-toe.
Gemini has a bit of work to do to catch up with GPT4. It is now close and needs a few more papers.
Disappointment... Gemini Ultra should be better than GPT-4 judging by the benchmarks, but it is not...
It's like rabbit R1 but free lol
Is this just a commercial?
Bard and Gemini constantly block my requests and constantly take the political interpretation instead of the objective interpretation. This means they are completely useless whenever something can potentially have a political element, even though I have explicitly stated the context and correct interpretation.
GPT is more direct, sticking to the objective facts, without adding any political annotation to its response.