Sounds like Cortana from Halo. Glad to see smart glasses are making a comeback. Google Glass was way ahead of its time.
It said in small print that the output was audio only. :(
4:05
way way ahead
It got done so dirty considering Meta has basically the same product out without visual output now.
@@IanFitzwilliam It's not too late for Google to get into the game.
Thanks to all Gemini team. Multilingual capabilities are amazing.
Thank you! 🫶
@@Google no, thank you! For selling my data to data brokers and advertisers.
Coming only to a selected few countries :D
@@ozordiprince9405 That's not this specific team though
I totally agree 💯👍🏾
we are living in the days we all imagined as a kid. we got ai talking glasses, a companion to help us out, our computer can now do tasks for us, humanoid robots like figure 02 and tesla's robot. insane!
Seems like the Knight Rider movies are getting real.
And this is all happening really quick which is the more insane part. In 15 more years it's going to be unrecognizable from today's standards. Real magic is on the way
This is a very cool tool if it works the way you're advertising it.
It does
@@prod.bykbrd5723 You’ve tried it?
If you take the bus you will end up in china.
@@prod.bykbrd5723 shocking after 1.0
I tried, it works indeed
The multilingual part is magical
Glad you're as excited about it as we are! 😁
honestly the least impressive part. Every major llm can do this
@@lodomir5519 Really? Which one can read the environment and also switch languages in one package? Not sure ChatGPT works with voice this way.
As a Trilingual person it's going to be amazing 🎉🎉
@@alexludwig_music Well, those are different features.
Amazing for us childhood gamers. We experienced this only in games; never thought it would be a reality, but now it is.
We're happy you're as excited as we are. 😁
@@Google ❤
It’s like you are in a game with these, I imagine 😮
That’s just Wow✨
I've just tested the realtime streaming in AI Studio (both audio and video) and this is simply wow... Thanks for the chance to probe the AI future, Google! I really hope to see this level of modality understanding in all your products soon, starting with the Google Maps assistant and YouTube automatic subtitles and summaries.
What about the latency?
We'll be updating everyone as more of this rolls out, but for now we're excited you're interested!
@@googlyguy3289 Actually, it reacts faster than is comfortable for me. A real person would wait a bit longer before replying :) I think future AI assistants will also be in visual contact with the person and reply only when they see that the human has finished talking.
this is more impressive than Apple Intelligence
There’s no intelligence in Apple “intelligence”
@@d33zknots88 Apple Intelligence involves passing off the intelligence to ChatGPT. They should have chased AI with the same vigor as Google when ChatGPT dropped.
What is apple intelligence? 💀
1st-generation Galaxy AI, launched with the S24 Ultra in Jan 2024, is far ahead of current Apple Intelligence capabilities.
Apple isn't into AI; they just use OpenAI, and maybe some small AI models they made, but those are probably based on open-source models.
This is how I use Google lens. I'm glad to see this progress..amazing times
We're so excited, Jason! 🙌
@@Google BULLSHIT, YOU'RE EXCITED ABOUT GETTING MORE INFORMATION AND ANOTHER DEVICE TO SPOUT YOUR HATE AND MISINFORMATION.
@@Google yeahhhhhh
Love it!! Now this looks sweet, we need personal directions assistants
You're excited? We're excited to see it! 😁
@@Google Any thoughts about integrating Maps with Astra, where it would use AR and Maps directions to guide its users? 😆
Absolutely incredible work by the team at Google.
Aw, shucks - we love to hear it. ❤️
We got Jarvis and Edith before GTA 6 😂
😂😂😂
Hardly, the glasses only had audio output.
😂
Okay, Jarvis I guess is that fictional AI system from Iron Man. But what is Edith?
@@harshitbhatt3243 watch the movie first
These glasses might be a winner. If Google can keep the processing power on the phone and the glasses as a camera with minimal compute to save cost, then it will beat Meta, Apple Vision, and Snap's easily. The other three are shoving so much into the glasses that it's overly expensive for the consumer.
@@KieferNguyen These glasses have no UI/display overlay; it's audio only, so it doesn't beat Meta. Seems on par.
It’s a completely different product category, it doesn’t really compete with them.
@@Geeksmithing You misunderstood my "beat them" comparison. What I mean is you only have two eyes, and people are only going to wear one pair of glasses. They're not going to wear Meta's or Snap's, and definitely not Apple Vision in the street, when a cheaper pair does the basic stuff that was just demonstrated. So while the others are stuffing all the compute into the glasses, Google's compute is on the phone.
@@lamsmiley1944 Saying these Google glasses and Meta's, Apple's, and Snap's are different categories that don't compete is like saying movies/TV and video games don't compete. You only have one set of eyes, one attention span. Are you going to tell me you'd carry a pair of Google glasses and a pair of, say, Meta's at the same time on the street? Or are you going to buy one and use that? And guess what, people are price-sensitive. They'll buy the cheapest one, and the cheapest one here may be Google's, because they're not pushing all that compute into the glasses but into your phone, which you're already carrying anyway. So yes, they are competing for the same pair of eyeballs, the same attention span.
I agree. They will become the new ear buds - as long as it has a camera, microphone, and speakers, it doesn't need an overlay.
I've had Gemini from when it released lol. Keep it up Google!
I freaking love multimodal video input and audio output in real-time 🤩🔥
Keep it coming Deep Mind and Google Staff! Don't get stagnant. You're doing great stuff for humanity! No time to slack! Gemini 1.5 Pro is a blast! I tried the Deep Research. Very impressed!
Wow, so far this and Gemini Flash 2 is far more impressive than the OpenAI "Shipmas" announcements. Playing with AIStudio, but can't wait for the Astra app and the integration with glasses.
That's just crazy, I'm so hyped for when it comes to the public.
So excited for this version of the future! What a time to be alive 🎉❤
Gemini Live is already blowing my mind. I can't wait to have this AI and the glasses.
Game changing for the blind! 🌟
Super 💥 good work google 👏
Every human will become a superhuman with this technology... Amazing...
Or a super idiot...
This is absolutely insane!
Honestly, that’s impressive! I’m really enjoying it.
This is super useful for a good number of people. An AI assistant will help in a meaningful way.
We can't wait to share more ❤
The multilingual switching was so... smooth. And the hands-free glasses are definitely the most interesting - super cool and, most importantly, a useful idea with real-world applications I can see myself using :).
We're so happy to hear you're excited about it! 🙌
@@Google LIES LIES LIES FROM GOOGLE
Thanks for sharing 👍
The thing that surprises me most is how fast it is for quick tasks. The bus example at 2:35 was perfect for showcasing this. Had it been a slower model, the response would have been completely useless.
It is very likely sped up and cut.
The bus drove past him
@@Feel_theagi the responses, not the video
"Can you check if it's going to rain anytime soon?" "It's London. It's always going to rain soon."
Holyyyyy smokessss I was waiting for this dayyy!!!!! I am totally excited to check this out
You, Google, completely blew us away 🤯
Great jorb Google! You're almost as good as the previous version of chat gpt
YES! Finally the time has come for smart glasses!
Now do the software well and this is going to lead us right into the next everyday-device revolution!
Don't forget the brain tumor from all that radiation you'll be getting.
AI is taking the world!
Technological singularity soon. omg
Loving this. Personally, I want to try both Astra and the glasses. Can't wait to see what else is in store and coming in the near future.
Half of a Yellow Sun... a wonderful book
We need this on all phones
And maybe glasses with cameras
It should come to all Android phones soon, maybe 5-6 months to fully roll out.
I want these pixel glasses! They look amazing. I'm ready to give Google my money. When can I get them?
Awesome! Especially for older people who can't see that well anymore.
I look forward to utilizing Project Astra. The phone and glasses are very user-friendly. I use mostly Apple devices, but I would be happy trying out and testing these products since I also use a lot of Google software.
We're so happy to hear it! You can learn more about Project Astra and how to get involved here: deepmind.google/project-astra/
Amazing and blown away by what google did. 🎉
We can't wait to share more. ✨
It looks cool! If only products in reality performed the way the demos portray them. :-)
Thanks for the code ! :)
I have been using Gemini to tell me how to handle my laundry based on the symbols on my clothes, and I realized my wife is spending unnecessary money to send winter coats for dry cleaning when we can just use the washing machine with a cold-water wash.
😅😅😅😅
Hey Google! When will it be available, and will the voice mode be more human-like?
Multilingual Support in Tamil is Amazing ❤😂🎉
Can't wait to get my hands on those glasses; hopefully there will be support for prescription lenses.
Super cool, it's so smooth. Google always surprises us.
Aw, shucks - we're so glad you liked it!
I'm sure the latency between responses is longer. Usually they edit it for brevity.
Yeah, or it mishears a word then starts rambling about something else.
Can’t wait to get that experience
Can't wait to have it live
We can't wait, either. 🙌
@Google 🥰
When riding the bike with the glasses and google maps, it should show a blue outline on the road guiding you where to go. That would be surreal
The world is changing guys ....
We can't wait for you to see what's coming!
Nice idea ❤
Wow, really seamless multilingual access is awesome 😍 Tamil ❤
We're so glad you like it, Vijay! ❤️
This is why I love Google! ❤
So much time saved, and didn't miss the bus!
The next 5 years is going to see huge societal and technological change.
I'm impressed. Can't wait to integrate it with my project Apsara 2.0 😍 I've got amazing tools that will boost its power 💪 🙌 Hats off to Google 😄 very creative
Where can I find more info about the glasses (with Astra enabled)? Not a product yet?
That's gnarly! Rock on Google! Can't wait to get my hands on this.
Thanks for the support, Oscar! More details to come.
@@Google Google bullshit lies and hate.
“Can you surveil society for me?”
The AI responses seem to have been edited in afterward to appear instant. There is no lag between user request and agent response; that's just not believable. Would love to hear I'm wrong about this. Using voice with the top models, even at top speeds there is a noticeable lag for uploading and processing.
Even humans have a "noticeable lag when responding." The thing is it's still responding faster than we can respond. Love your comment though. It was articulated very well.
To be fair their 2.0 flash demo opened on Gemini Advanced today and I was rather shook at how damn fast it is
I just wanted to know if someone who took the glasses or any of these devices has access to the stored codes/pins/passwords.
As far as I know, it works on your voice. I set up a Google-based smart home for my mother, and once we had it all up and running, the lights turned on and off with her voice. I tried it and it worked fine. But then I asked it to do something like send a message, which you can do to broadcast to the house, and it refused. It won't make calls or do anything personal, and just insists that it doesn't recognise my voice. I couldn't even change the color of the lights, but my mother's voice can. So you might be able to steal the glasses and phone and ask whether it's going to rain, but anything personal like passwords or contacts, it's going to refuse. That's as it currently is. What's on show here is the next-generation AI, so I imagine it's even better at keeping other people away from your personal info.
Wow. This is incredible.
We're so glad you like it, Alper! 😁
Oh man Google is killing it 🥵
We're so glad you're as excited about it as we are! 🙌
I love Google ❤
It’s getting there. I think the problem becomes, to have an AI assistant that can truly help you and be completely frictionless it basically has to always be present in your life and your affairs. Have access to everything, listen.. even see everything going on around you. Know every intimate conversation. Basically be a secret third person that is ever present. Only then can you truly have real awareness. It’s going to have to know everyone else’s affairs with as much context as possible… only then can you get something like the OS from the film “Her”. I know we say tech firms already know everything about us, and they do to an extent. But this will be orders of magnitude more invasive if it’s going to work as dreamed, and not just be a nuisance that you constantly need to manually fill in context and details when you put in a request that it can’t resolve.
Good points and if data is compromised in any way it becomes a major security concern. Especially if you're utilizing it to remember security codes as displayed at the beginning of the video.
This looks awesome
We're glad you think so.
If the features and the latency are anything like the video, that's impressive. Totally makes up for the underwhelming Day 5 at OpenAI.
Also funny that the video was shot in my old neighbourhood in London (which makes sense, Google HQ being in St Pancras).
Also, this is going to be totally inappropriate but you - lucky Google employee and Astra user in the video - you are seriously cute.
Its recognition capabilities are quite good; it's like magic assistance.
Alr those glasses are 🔥.
It sounds like you've got good taste to us, Vebius. 😉
How would this deal with hallucinations? This is still a big issue regarding LLM assistants
This was quite impressive.
We love to hear that you liked it, Prince!
Message to all AI developers:
Individual memory is, to me, the most significant function of artificial intelligence - trustworthiness its most valuable currency!
If this really works as advertised, then it's so good it's uncanny. But I have some reservations, because it's just too good to be true.
well I really hope you are wrong about that, I really want this.
@@tk_kushal Just that the real world is very complicated. These demonstrations may be intentionally chosen to show the strengths of the AI while avoiding what the AI is not good at.
@@BA-ul7rl The washing machine image alone, if it's true, will make it better than all the OpenAI models.
This is the ultimate solution: just general AI, not thousands of specialized startup AIs.
Think about when a completely blind person attaches their phone and it speaks like a friend, saying what's in front of them and guiding them home safely ❤
What can I say, I love everything in this demonstration. I've got to memorize everything and use it to formulate new ideas. Everything is so surreal; it gives the "WOW 😮" factor.
We simply love to hear it, Ali!
As someone with Constructional Apraxia, this is going to change my life immensely.
Thank you so much. We're so grateful.
We're so happy to hear you're excited for it, Echo! 🙌
Maybe we could use a real-time video processor and a display on the inner side to alter the video so it looks like it's coming through a lens of a specific power. A universal spectacle with the features of Edith, and maybe Apple Vision Pro, in glasses.
I'm still pretty sure that you're going to make it better.
this is super cool.🤯
It’s fake bruhhh
Imagine people in London doing a tour visiting these exact same places, with Gemini holding a colorful umbrella below ahaha!
What a fun thought, António! ☂️
It's a fascinating topic... How should I experience art? Is it a cloudy day? Can I remember a door code? How can I avoid thinking about my friend's relationship to literature in an authentic way? And what's up with these festive lanterns at the entrance of Chinatown? Oh, OK... so they're festive? ...and at the entrance of Chinatown? ...got it... and in French!
Absolutely riveting... and how did I ever get along without these stunning glasses?
And how much per month for the subscription? And the glasses? And the latest phone? ...and the complete surrender of my identity?
So lemme get this straight: you'll train me to be lazier, and more afraid to do anything without permission from Project Astra?
And I get to pay you XXX$$$ for this? ...Where do I sign up?
Just joshing, guys! I was just pissed that my favorite bakery is overrun with dudes with Buddy Holly glasses and they're completely out of SCONES!!! SKONES!!! SKONNES!!!
Amazing capabilities.
We're so happy to hear you like it - stay tuned for more updates.
This will be extraordinary for Alphafold.
Could you elaborate? How would Project Astra benefit Alphafold?
This is really really cool 😎 ❤
We couldn't agree more! 🙌
This is what Google Lens wanted to be.
Impressive! 🏆
Amazing piece of technology!
We simply can't wait to share more, Torda! ✨
Oh Google, if this actually happens, well... we are talking about revolution and evolution at the same time. Good job, people.
Impressive ❤❤😮😮
Imagine two people on a date, guided by these things.
This is all great and exciting, but some of these examples fundamentally reduce human interaction (ask the bus driver directly, ask your friend directly). I hope the team and others focus on improving human interaction rather than reducing or removing it.
It’s getting there. I think the problem becomes, to have an AI assistant that can truly help you and be completely frictionless, it basically has to always be present in your life and your affairs. Have access to everything, listen.. even see everything going on around you. Know every intimate conversation. Basically be a secret third person that is ever present. Only then can you truly have real awareness. It’s going to have to know everyone else’s affairs with as much context as possible… only then can you get something like the OS from the film “Her”. I know we say tech firms already know everything about us, and they do to an extent. But this will be orders of magnitude more invasive if it’s going to work as dreamed, and not just be a nuisance that you constantly need to manually fill in context and details when you put in a request that it can’t resolve.
We are getting closer and closer to the movie Her becoming reality 😄 Kind of scary and exciting 😅
The washing machine trick got me hooked lolz
It's one of our new favorite party tricks, we've gotta be honest. 😏