This is just next level, it’s much beyond intelligence, the reasoning, structured and flows are so precise and natural, not to mention how creative and unpredictable the way it expresses the idea, a big WOW
Bard's here to take things to the next level!
Power of Flutter + AI 🔥🔥
Bard, what sweater is Palash wearing? I need that fit.
Billy Reid :)
This is the next phase of generative ai, real time interfaces. Such a great demonstration, Google is so back.
It won't be real time
@@_mobasshir_ It will
@@mijaelviricocheaparra7474 Not yet. But yes it will be
I've had an AI Chip in my Pixel phone for the past two years. Google just didn't market it that way...but Magic Eraser? Reading numbers in photos, pulling text from images? Of course, it was AI. Pixel Pro 8 is even more AI savvy.
This is so cool! Like Bing Chat but 10x better
Now THAT is interesting! I've only watched a few of the Gemini demos so far, but this one is comfortably in the lead.
So glad you're enjoying our demos!
@@Google You've done such a great job!
Awesome, this is the next step in AI - more than a simple single question and answer format. It also plays to the strengths of Google and will integrate very well in their products.
With Google extensions, Bard with Gemini Pro can integrate into your apps and services! ✨
It would be amazing to have a Google HQ in the Louisville Metro area of KY! The jobs, revenue, and economy burst that it would bring I can't wrap my head around it! Just WOW! And the architecture in the area is beautiful, we have some really amazing historical buildings locally right in the river that would be perfect and could use a touch of love NOT MODERNIZED AND REMODELED, but restoration and renovations would just be so lovely!
worst example ever.
You're giving away personal information about your daughter because you failed as a father? I mean which father needs a machine to know how to make his kids happy?
If finding ideas to make our kids happy is given to a machine, what is left to us as parents?
If these generated app cards support forwarding, then in the future, apps can be directly forwarded and used through chatting, which will change the way apps are distributed!
Wow. The integrated modalities and the interactivity with parts of the generated content are a big step up from GPT-4.
The most impressive thing is it's generating UI on the fly! That's insane!
That day when apps become nonexistent is quite close
Google was waiting to strike when we all thought OpenAI was ahead of the curve; turns out it was Google all along. Amazing stuff.
This demo looks amazing. Google stepped up its game by bringing AI to consumers, while OpenAI is oriented more toward developers and research. I do think GPT-4 could achieve a similar feat if that were their focus, but I hope they focus on making GPT-5 amazing.
They should not focus on consumer products; let Google handle that. GPT should be a professional, high-level tool rather than a dumbed-down consumer product.
I asked Bard about Gemini Pro, and it redirected me to a cryptocurrency wallet.
Is that UI in Flutter? I can see the debug banner in the top-right corner.
He specifically says it is at one point.
Commented before he mentioned it. It's amazing how it's generating and rendering the UI. Looks like some server-side rendering.
Is the working time shown in the video an accurate representation of how long it takes Gemini to complete the first interface generation and population?
@iandandforth GPT-4 can certainly do this, though it usually requires multiple prompts. I think the advancement here is the contextual multimodality of the response, in addition to what appear to me to be better reasoning policies.
So yes, the details matter, and they really shine in this case.
@@hebiasohj3802 I can't find any evidence anywhere that GPT-4 can create real-time UIs to answer user queries for day-to-day tasks, only UIs made specifically for a "create a UI for..." query.
As someone who is just getting started in CS: what do I learn? It seems that everything I try to learn, AI will do better. What do I do?
Welcome to the Matrix...
Is the content being generated inside Notion here?
I am impressed with the new integration and way of communicating with humans, it's lively and very dynamic. I hope this becomes the standard for the new Google going forward. I found it extremely efficient and relevant!!!😮😮😮😮
As a UI designer, this is such a great step in the right direction... if ChatGPT needs to learn anything from the Gemini release, it's this.
Are you not worried about losing your career and job?
@@entertainmentyoutube3606 UI/UX is not just what many of us sometimes assume it is limited to; in my opinion, it spans much further than what is being demonstrated here. It still has a way to go. Until then, I think I'm safe, feeding the AI and training it, I guess, lol.
Eventually, yes, I think just about everything will become obsolete and be considered either a waste of time or too much hard work, a thing of the past, thanks to AI (that is, once they get it to be as trustworthy and efficient as calculators). Something like the difference between hand-made and factory-made. The invention of trends, and of what humans like, can still only be decided by humans. That is, until we unlock the algorithm for creativity.
@advaitbhore I'm in UX/UI too, and I don't want to be jobless. Copywriters and translators are already suffering; how will they pay next month's bills? There are serious, real things happening because of this.
@@entertainmentyoutube3606 Even if we push back and get them to halt the advancement officially, so that people stop legal use and development completely, we only slightly slow down the process. It's too late to put the genie back in the bottle. The advancement will continue despite the backlash, in open source or individual development; it is too useful a tool for people to neglect and ignore. Our best bet is to focus on what still remains irreplaceable: keep shifting to skills the AI hasn't yet developed in the field, climbing the skill ladder one rung at a time. The top of the ladder is hidden by the dense fog of the future. It may be very close, or a little far, but once a direct AI lift to that top floor is built, everyone will reach it in an instant, and we'll remain the only ones doing it manually. Going against it shallowly might win us some headlines and sympathy, and if we try hard enough to shut it down, just a consolation prize with an expiry date.
man you're such a bot
If the real version is 1% as good as this demo then it will still be incredible
WHICH Gemini is capable of this?
I'm sure this was done for presentation reasons, but I'm surprised it didn't ask what age she was, or your location (to predict the weather for an outdoor party), or how many people would attend, or how much money you want to spend. Lots of variables! ;P
Nor did it create a "You are a top children's party planner. Collaboratively review these children's party ideas with the client and help them choose the best one." AI persona.
This is mental, great job guys!
As a natural language model, this is great.
ChristopherRobin, nostalgic , nice! haha
Does this mean more Google layoffs?
It's staggering that these aspects didn't make it into the main demo video. If your goal is to dethrone ChatGPT and persuade users to stop paying for it, then show this. Instead, you went with whatever makes the most PR and showed how Gemini knows it's a "rubber duck".
I had no idea the experience would look and feel like this until someone sent me this video.
Shareholders will look at user subscription data, not just headlines. Short-term thinking for a long-term brand.
Can it generate AI images like GPT-4?
That’s exactly what it did in this video
Now GPT-4 has a really serious contender. Enough with the CEO-firing drama; get back to work, OpenAI!
Flutter is used for the UI 🎉
This is a great step toward doing AI the right way. You have to put varied intent at the forefront of the returned data: the who, what, and why inform the how and when.
"inspiration for birthday party for my daughter"
This is sad. This is not something you outsource; it stops being genuine.
What's Google up to? It seems Google is slowly integrating Flutter into all their new technologies.
Can we skip to the bit where a humanoid robot goes shopping and prepares everything? To a budget... 🙃
Crazy, I just want these ad hoc interfaces for data science.
This looks like the earlier version of a chatbot interface, a.k.a. the boring interface 🤷🏼♂️👀
Next generation of UX/UI for AGI! 🙌 🎨
Those multiple modalities make text only look so 2022.
🎯 Key Takeaways for quick navigation:
00:00 🌟 *Introduction to Gemini's capabilities*
- Gemini is a multimodal AI model designed to understand and reason about user intent.
- It can generate bespoke user experiences beyond text responses.
00:30 🎨 *Creating a bespoke interface*
- Gemini generates a visually rich, interactive interface without any coding.
- It makes reasoning decisions from broad to high resolution.
01:00 📝 *Writing a product requirement document (PRD)*
- Gemini creates a PRD based on user input and requirements.
- It plans the functionality of the user experience.
01:57 💻 *Designing the user experience*
- Gemini designs a user journey, including list and detail layouts.
- It generates Flutter code and retrieves data to render the experience.
02:28 🍰 *Customizing the experience*
- Users can interact with the interface to get specific information.
- Gemini generates new UIs based on user requests and preferences.
Made with HARPA AI
Savage
Can we try Harpa?
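The broad-to-specific pipeline the takeaways above summarize (intent → PRD → user journey → generated UI) can be sketched in a few lines. Everything below is hypothetical: the function names, data shapes, and spec format are illustrative assumptions, not Gemini's actual API.

```python
import json

# Hypothetical sketch of the pipeline the demo describes:
# user intent -> PRD -> user journey -> renderable UI spec.
# None of these function names come from a real Gemini API.

def write_prd(intent):
    """Stand-in for the model drafting a product requirements doc."""
    return {"goal": intent, "features": ["browse ideas", "view details"]}

def design_journey(prd):
    """Stand-in for planning list -> detail screens from the PRD."""
    return [{"screen": "list", "shows": prd["features"][0]},
            {"screen": "detail", "shows": prd["features"][1]}]

def generate_ui_spec(journey):
    """Stand-in for emitting a declarative spec a client could render."""
    return json.dumps({"screens": journey})

intent = "inspiration for a birthday party for my daughter"
spec = generate_ui_spec(design_journey(write_prd(intent)))
print(spec)
```

In the real demo each of these stand-ins would be a model call, with the final spec rendered as Flutter widgets rather than printed JSON.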
Demo looks good but I'll be over here, not holding my breath. Remember their big reveal of Google Duplex?
Still in the realm of fantasy, last I checked.
You must not have checked recently, then. Duplex has been launched for a while (a couple of years at least), and I have used it myself a few times. They just never officially announced its release, but it's there.
Wow, this is actually interesting
So we are limited to learning from what it generates for us. No new outside-the-box experiences. This system makes us dependent on the world it creates for us. Sorry, I like my freedom of choice, which isn't on the internet anyway.
0:51
Where do I access this?
Please write an article on the Mauritius financial center.
Hmm
I'm preparing to be so disappointed!!! The best stuff is going to be paywalled, of course. And even a priest would curse the censorship. And it will never work perfectly for your extremely specific application anyway.
I would love to see the entire Android experience replaced with AI runtime generation: video and images with interactive buttons and animations. It could learn over time, and the UI could be tailored to each individual user and each individual scenario.
I love how there are no stupid conspiracy theorists in the comments
Copycat Google, copying from Greg Brockman.
This is really impressive
Has anyone been able to test this feature yet? I would love to get my hands on this and get a feel for a truly futuristic development process that I could only dream of.
You should ask it how to solve Israeli-Palestinian problem, not birthday parties! 😊
bespoke interfaces 🤯
I don't like this UI thing. Honest text makes me feel like it's filled with more valuable info.
I don't need a UI. I need a more stable version of GPT-4 with much longer output.
It's multimodal, it's generative, it's explainable. Checks all the important boxes.
Multimodality. Spectacular!
Internationalization and Localization for Customization and Personalization is an actual competitive advantage when comparing the different existing models and I hope Google / Deepmind recognizes this on time.
Gemini can go beyond chat interfaces!!! Except the first thing he does is use a chat interface.
THIS IS JUST IMPRESSIVE! Dynamic UI WOW
Is it just me or does this voice sound kind of "smoothed" by some audio filter /AI?
When and where is this coming?! Excited 🎉🎉
When are we getting this feature? It is not currently available in Gemini
If it ain't for free, it ain't for me, so I guess I'll go back to ChatGPT!
Why do I feel the person presenting this video is not human but an AI?
What about the other AI, Bard? Is Google dropping it or what?
Where is this Bespoke UI interface that is being demoed in this video?
So Google is the Google killer 🤣. Well done, Google; the AI race is so on.
Wow, the frontend is using Flutter; look in the top-right corner.
Did it pull those cupcake images from somewhere or generate them?
Tomorrow we find out the people were also generated by AI.
Is there any link to try this Flutter app online?
Wow, I hope Google will not add censorship into this
Can’t wait to use Gemini
Check out Bard with Gemini Pro at bard.google.com to be a part of the magic!
AI will reason in JSON. Let that sink in.
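A "reasoning in JSON" trace, as hinted at by the DEBUG views glimpsed in the video, might look something like the following. The step names and fields are invented for illustration, not the actual format:

```python
import json

# Illustrative only: a made-up "reasoning trace" serialized as JSON,
# in the spirit of the DEBUG output glimpsed in the video.
trace = [
    {"step": 1, "action": "infer_intent", "output": "plan a kids' birthday party"},
    {"step": 2, "action": "draft_prd", "output": "list + detail experience"},
    {"step": 3, "action": "emit_ui", "output": "render themed idea cards"},
]

# Unlike free-form prose, a JSON trace survives a round trip intact
# and can be inspected or validated step by step by other software.
encoded = json.dumps(trace)
decoded = json.loads(encoded)
print(len(decoded), decoded[-1]["action"])
```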
At the top right it says 'Data Collector'.
But where did you get your sweater? It's lit.
Chubby Chamath 😂 Ded
I always knew Google was going to come up with something big. That pile of cash is certainly very useful. I'm not surprised they let OpenAI go first; they were up to something, and this is that something.
Sundar said he was always skeptical of AI. That's why they kept it in a vault until OpenAI showed up with the big guns. Now it's waking up.
Incredibly innovative, can't wait to try it out.
Innovation awaits when you're using Bard with Gemini Pro!
I just asked Bard if it had Gemini, and it said no.
I only hope it will be like that in practice.
In Spanish, please.
Google is here
Flutter was there
Google is officially back to the Game💥💥💥
Looks good, but it says "Bespoke UI". Can we see examples from Bard, please?
Yes, the whole point is this generates a bespoke UI to present multiple answers to the user's request. I suspect it's all taking place in an HTML window generated by the Flutter toolkit. Bard can't do any of this.
Awesome, but how do we use the bespoke UI?
Sorry, there is no such word as "interactable"; the word is "interactive". Cheers!
Wow, GPT has her work cut out for her.
This is the end of human race
What tool is being used here? It doesn't look like Bard. The "Data Collector" interface looks incredibly interesting; I'd be keen to try this out.
No, Bard can't do that. It's the "Bespoke" prompt, probably part of Gemini.
I think this is just the text interface to the Gemini UltraSuperMega model, but you can see there's a DEBUG badge in some of the screenshots that gets Gemini to show its underlying JSON for the different steps it takes in building the dynamic UI to answer the prompt. Other LLMs output HTML so they can show tables and images and figures, this is building an interactive "app" on-the-fly. Amazing!
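The "JSON in, interactive app out" idea this comment describes can be sketched as a tiny renderer that walks a declarative spec. The spec format and widget names below are invented for illustration only; the demo reportedly renders real Flutter widgets, not text:

```python
# Toy renderer turning a declarative spec into a text "UI", to illustrate
# the spec -> widget-tree idea. The spec format and widget names are
# invented; they do not come from Gemini or Flutter.

def render(node):
    kind = node.get("type")
    if kind == "column":
        # Lay out children vertically, one per line.
        return "\n".join(render(child) for child in node["children"])
    if kind == "card":
        return f"[{node['title']}] {node['subtitle']}"
    if kind == "button":
        return f"<{node['label']}>"
    raise ValueError(f"unknown widget type: {kind}")

spec = {
    "type": "column",
    "children": [
        {"type": "card", "title": "Unicorn party", "subtitle": "rainbow cupcakes"},
        {"type": "button", "label": "See more ideas"},
    ],
}
print(render(spec))
```

The model only has to emit the spec; a fixed client-side renderer (Flutter, in the demo's case) turns it into live widgets, which is why the output can be interactive rather than plain text.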
When do we get to play with it?
This is embarrassing, Google.
A good generative AI.
1:38
Wow, this is amazing! 🌟