Google's 'HER' AI, Project Astra, Sora Competitor, Native AI Agents and AGI (Google I/O Supercut)
- Published 1 Jun 2024
- Learn AI With Me:
www.skool.com/natural20/about
Join my community and classroom to learn AI and get ready for the new world.
#ai #openai #llm
Google's IO:
• Google Keynote (Google...
OpenAI: Creates Her
Google: Creates AI that identifies as Her
And Black.. of course...
Every other thing google releases now, is somehow tied to woke BS.
lol omg exactly good one
@@andersonsystem2 Yes, but the joke is on us as we know it is not impossible, unfortunately :(
Dude, this string of... I don't know what this is, but it's brilliant. OP, you are already beyond measure, and all three of you that replied are the funniest people on this planet. Let's do this again LOL
Google should feed this video to their own AI and ask how to make the demo better.
that's true
Really think they believe their own s#!+? The few times I've used Gemini, it appeared to be designed more to measure/map my info than to provide useful information...
2024 Google has such 2000's Microsoft vibes
Is that good or bad
Yeah it's weird! Google is talking about spreadsheets, scheduling, and vacation plans, while Microsoft - via OpenAI - is going in entirely new directions.
DEVELOPERS DEVELOPERS DEVELOPERS !!!
you are very wrong in reading vibes, do not try
Yeah, Google's not good at presentations.
Google really should train their speakers to be more engaging
Probably easier to train an AI to do their speaking for them.
Hipster thinking
And you need to learn to listen to the information instead of wanting it told like a bedtime story... you're not a baby, you're an adult
@@stoppernz229 and as an adult he probably understands that engaging and inspirational people make better demonstrations..?
Duh, they were trained, by the people who trained Humane AI. It's all about being as pretentious as you can be, alienating everyone to a level that it comes back around the other side...
I legit busted out laughing at 24:28 when she said “your files aren’t used to train our models”
LMFAO. "We will sanitize it but it won't matter because everyone will have infinite context."
The way she phrased that sounded sarcastic. Terrible insert, zero confidence.
yes, because according to their tos they become their files ;)
Your link doesn't lead to that statement
Does anyone know what Google actually released today? It just seems like disjointed overload. Like their LLM prepared their presentation
I think their search function is up, but other than that everything else is wait and see.
So, they still didn't do it live...
And some people couldnt understand why chatGPT was cutting off sentences, they were stopping its rambling live
@@635574 I mean chatgpt has a habit of rambling and this model is brand new fresh out of the oven so no one who uses chatgpt was confused by that.
I’d much rather have a commentary than a replay:
we'll have to wait for that
I was disappointed too by the lack of commentary but this was a good round-up edit.
No commentary and playing back video is what pulls this channel down. It’s lazy and not why people subscribe.
Too much Google and not enough Wes.
Also, Somehow the examples all seem more contrived and product specific than open ai
Cringe
was wondering about this new path for a while; where each video contains less and less insight, more & more "copy paste"....
"Sequences simulated"
"Audio pre-generated"
I am losing all faith now in Google, they are just a waste of time.
No live demo? We used to call that vaporware.
9:09 Lower right corner: Audio has been pre-generated.
Google really suck!
Barely visible! Google lost this one too.
*continues watching YouTube on Google Chrome*
Good catch.
@@emuccino some might be upset at a botched, fake model launch but you’re right, at least they have Chrome and YouTube. Maybe, if they ask OpenAI nicely, they’ll stop them from making a new search or streaming video site thus securing their relevance for at least a year or two.
Wow, that's fake then....
Probably all these products are not ready for release yet. Shame.
Google should use OpenAI's GPT4o to present for them.
Lol, yes.
YES ANYTHING BUT GOOGLE ASSISTANT.
AGI will be amazing at buying shoes - just what the world needs the most
Reminds me of that Black Mirror episode where the AI is allocating resources from everywhere to produce gazillion tons of shoes (same model and size), because it is convinced humanity needs them.
Exactly.
mhmm. Somehow fast-fashion just became faster-fashion.
HAHAHA IT SEEMS GOOGLE IS GETTING MORE STUPID EVERYDAY.
WHAT MAKES ME LAUGH IS THEY EXPECT US TO SIT DOWN AND READ THROUGH A TRILLION MILES OF HELP FILES. BY THE TIME WE GET DONE WE CAN'T REMEMBER ANY OF IT.
ONE CAN FORGET GETTING ANY HELP FROM GOOGLE.
The scam alert is a really useful tool when unsophisticated scammers target vulnerable people. My aunt got scammed when dealing with a death in her family. I can see how this would have saved her when she wasn't in a good frame of mind to notice that she was being scammed.
this had a gun at the back of your head vibe
I've heard stories from former Google employees that it is a highly toxic work environment full of racism. Sundar is not a good man
Exactly my feelings 🤣. They are working so hard to catch up with OpenAI. Meanwhile OpenAI is way ahead, and we may not even know what exciting things they are cooking.
Wes deleted my criticism of Google, but if he worked there, they'd treat him like a second-rate citizen lol.
“Gemini, what bugs are available for dinner at my local pod farm?”
you will eat bugs and be happy
EAT ZE BUGZ - WEF
This made me laugh harder than it probably should have, all things considered
Gemini, I lost my box! And I am NOT happy!
I think the menu for today is cockroach soup.
This event was a major bore and a snooze fest! To be fair, there were a few things that stood out, like NotebookLM's audio podcastification of notes for example, but the severe lack of charisma and enthusiasm from presenters made it really hard to follow. It’s almost like they don’t really believe in their own products.
Another problem is that Google has too many customer facing projects and experiments going on at the same time. Project Astra, for example, sounds like something they should have behind the scenes.
Everyday people should have access to easy products ready to test out from the get go. I’m tired of having to sign up for endless waitlists for products that are not ready for deployment. I shouldn’t have to remember to bookmark Google Labs, Google AI studio, Google Gemini, Google FX Studio, and NotebookLM. These are way too many moving parts and disjointed product interfaces.
I agree 100%!
What a confusing list of products!
I left the presentation having no idea where to go or what I gotta do to try out these products and no idea why the F these features are not in Google's frontpage!
I think they will have to decide whether they want to stay in the past, selling fewer and fewer top results (ads) in their decades-old search engine model as users migrate to newer services which deliver exactly what they want with far bigger knowledge and precision, or eventually embrace the future and change their core business model. 🤷🏻♂️
When they are ready for deployment, they will join the Google graveyard within months, and be replaced by inferior even newer products.
I saw a comment saying Google should use chatgpt 4o to present this as it would do it better and I laughed because it's true.
I COULDN'T GET PAST THE GRATING VOICE OF THE ANNOUNCER IN THE BEGINNING.
HIS VOICE CHEWED UP MY EARS.
Google: " we made cool animations and fake videos".
I don't think they have a competitor to OpenAI's GPT-4o.
They showed a prototype of a GPT-4o competitor, but OpenAI has a fully working, consumer-ready product, whereas Google only showed a prototype (and not live)
Yes they do. They have all the data!
I don’t use google much for search anymore because i don’t like it spying on me and selling my info... so they expect me to use something to accelerate that?
Sorry to piggy-back your comment , it's a good point-
Similarly, I urge people to be cognizant of their tacit support of *that country that we don't talk about* that makes most of our gadgets..
We've enabled the rise of a corrupt, authoritarian, State-run Capitalist competitor, one that has built an entire society around a panopticon of surveillance and harsh punishments for dissent against The Party. They've no qualms using their Gov apparatus to violate human rights, throw families in labor/quarantine camps, or simply shut off your internet access- already restricted from the outside world - behind The Great Firewall
*..Beware not to feed the Dragon..*
They talk much and show nothing live...
Why does everything Google calls "new" and "revolutionary" constantly and ONLY revolve around upper-middle class life? How does this help the average person?
To aspire to be ‘Fashionista!'
This was the realization that hit me each time they presented a demo of an example of how their technology could be used. It was the same issue with a lot of MS demos from 10 years ago. It's like the products and services you are making to change the world and bring to the masses are targeting the upper 20% of the population. You mean you seriously can't work on a problem that will bring value to the bottom 80% - - or did you stop caring about them because they won't have jobs nor income because of your technology?
If you don’t have $500 monthly for AI subscriptions, you will compete against an AI-enabled half god who has that kind of budget. Considering that personal robots are on the rise, you will also need $20-50k for each personal slave robot/surrogate on top. Ignoring changes in costs of labor, that means those robots will be returning investment in 2-3 years. There will be no competition in mental or physical labor between working humans and robots. The winners will be the investors with kickstart capital.
@@GinSoul a world inhabited by CEO's.
I mean, right now it's people with money making the thing, that seems kind of obvious, if you view yourself as a "normal" person whatever that is, they clearly have your attention enough to ask this question. It feels like it will be very similar to how social media started and people were sharing their top 5 friends and unsure what more MySpace can do.
You will probably see more integration with your community unaware that it's even there until it's in every facet of our lives, just like social media.
Wow a crafted video demo
Google should stop doing the Apple presentation gimmick and do something different. I'm not saying this because I hate copycats but because it's corny and fake-looking
Lol look at all of em just reading through the transcript/prompt 😂.
None of this is anywhere near as impressive as Open AI's live demonstrations were.
Agree. This is nowhere near as slick
you are delusional
And the Google AI demos were pre-recorded, not live!
Google seems desperate to me! Google is horrible at products!
open ai looked real. these guys here were afraid of a screw up.
Almost instantly, OpenAI did another one-up on Google with an actual live presentation of GPT-4o ...🤣 Remember, Google is the guys with No Moat.
Those background colours give me 1980s apple vibes
Did they plant the people that are clapping and cheering or are they all hoping to get a special invite by being enthusiastic?
My god, they all look like suits that have absolutely no idea about what the real world is like.
The real world is what the suits inhabit, the rest of humanity is just picking up the crumbs they leave behind.
I'm not watching the video because I read the comments first.
Google is still pushing search. What a joke.
Copy-paste an entire video from another channel? Not even adding commentary? How is this not copyright struck?
Can we get more than just full presentations? This seems pretty low effort. A link to the presentation would suffice.
Google is panicking.
Everything in this announcement seems less interesting than everything OpenAI announced... except for those video glasses, which they didn't discuss, because clearly they were made by another company that Google doesn't profit from...
Those were Meta’s glasses I think
Oh, I see. Gemini will have to know a lot about us, our schedules, and private lives, for many of these features to be useful. Uhm. Great!
That's the whole point of a personal AI assistant.
@@huckleberryfinn6578 Are you saying that the whole point of a personal AI assistant is to know your most private details? I'm sorry, but in this case I must respectfully disagree. Of course assistants need to know some information to be useful, but only to the extent that the end user is comfortable with. Anything more would be highly intrusive, especially if the PII is also made available to advertisers. For restaurants, it would mean playing ball with big tech or going out of business if everyone relied on their AI assistant to tell them where to eat on a family trip. The maps and search situation is bad enough without being put on AI steroids.
I’m curious about how Google will try to implement their current model of selling virtual space for advertisements into this new way of gathering information…
so when they ask for restaurants etc., just put those in as ads.
Video full of "independence and bravery"
Google has become a sci-fi shorts video producer. Good for them.
Can someone explain to me why these competing products are released together? Is it choreographed hype, are they always working on similar features and just prepped and ready to do a demo release at whatever state they're at in development, competitor research/espionage, or what?
bro this is legit getting insane
Can we get the AI agents to automatically skip YouTube ads?
Adblock Plus!
@@Toshinben and for mobile?
@@Aybo1000 I watch videos via Firefox on my phone, so...
I don't usually comment, but I feel compelled to say, I am so disappointed. Yes, Google is underwhelming. There was so much of NOTHING. I went to the links and got NOTHING. Maybe OpenAI is too good. Oh! but I do like the Google "Sora", nice one. Time will tell.
Tomorrow Claude?
Claude: Big news everyone! You can now use a super old version of Claude in Europe. 2K context window. Whaddup
i missed that part, when she put on those glasses that became the camera ... that'd make it extra convenient when doing homework, coding, or playing videogames so you dont have to always hold the phone up
I think they developed (and finished) this LLM tech 15 years ago, and it’s still in use today, Google Home Assistant.
I was way more impressed with Openai's showing. The emotion in their voices and the speed had me floored. This showing just makes me think about Openai, which I doubt is what they're going for!
It could be just me, but I see the adverts are still there. This is highly intelligent and hints at Google's main priorities (ad revenues).
A lot of those were scripted examples. Why didn't they do that live on stage? OpenAI had a live demonstration and a few times where the AI got confused, which is expected. Yes, the technology is probably superior in some ways, but it doesn't make any difference unless you ship it and people can use it
Hit me like Gege. 😢 Did not expect but do appreciate you using your voice to stand for something. I can’t do that yet, but I’m glad Someone Else does.
And sometime in between, the traditional Google Search died without us noticing anything. All website owners now have to pay to get their websites listed in a place where it matters.
It's definitely way more robotic than Open AI's model, but functionally it seems to work pretty well.
I'm not excited at all. I don’t know why, perhaps because everything they delivered so far was so underwhelming.
Well, they have the most data, the most powerful computers, the best team, started 14 years ago, and will probably win Nobel prizes. They only started releasing AI this year... Started making chips 3 years ago or something... Revisit this comment by the end of the year
What would whelm you?
I completely disagree. This presentation was really high quality and did a good job showcasing many different use cases of Gemini in practical and meaningful ways.
This isn't AGI. This is day to day life useful AI.
Who... Who loved you?
Btw, Nvidia is having all the fun. 😂
Thank you.
Google will cash in on who gets put on the vacation planner. Pay more and your business gets suggested for the itinerary ;-)
OpenAI hands-down destroyed google in this latest round. The really crazy thing is that Google had a large head start, demoing their interactive dialer back in 2018. But they never followed through. They basically let the technology sit for years - very stupid on their part. Now, OpenAI has basically beaten them with interactive technology. Many people are wishing that the built in google assistant was replaced with the OpenAI interactive interface, which sucks for Google.
AI was going to destroy search, their golden goose. You can see why they wanted to keep it on the down low
I think it's a simple matter of resource allocation, and Google is simply too large to allocate enough resources to AI the same way OpenAI can. Google has too many of its hands in too many things, where OpenAI can allocate 100% of its resources and time to AI.
I agree that OpenAI got ahead of them by pulling the trigger sooner. However, Google does have the advantage of having this added directly to Android phones. Yes, OpenAI is teaming up with Apple, but it seems to me the race is now to get it on smartphone operating systems. I suspect Google will get there first.
But remember, product availability (quantity) does not necessarily ensure product quality. I don't think the answer is "more data" - we have LLMs now. It's a design and architecture issue, and Google can't keep up.@@TheChadavis33
This has always been the Google way, sit on technologies, abandon projects, half build things. They seem super distracted for a tech company, likely just a result of the free money they get from Search, breeds laziness.
OpenAI has stolen all the hype of Google I/O... Now it looks so booooooring...
@7.35 it really is nothing to do with the technology, but i find it really disturbing. the memberries.. i know so many people now, living in the past, rewatching the same shows they watched when they were children, over and over again, replaying the same games, not changing, not growing, just stuck in the past
Because nothing great is released. It’s like we’re in a global creative blockade since 2017.
@@mynameisjeff9124 They've got some stuff, like Deadpool. Certain.. ideologies have invaded the creative space.
I am.... Honestly this decade is just... Shocking 🖖
I will continue exploring how the both/and logic and monadological framework catalyze new frontiers across computer science and artificial intelligence:
Machine Learning and Neural Networks
While the multivalent symbolic representations enabled by the both/and logic are powerful, it also provides insights into the sub-symbolic patterns and distributed representations learned by neural networks:
• Representing Emergent Concepts
Neural networks excel at learning latent high-dimensional representations capturing subtle statistical regularities transcending programmed symbolic concepts.
The both/and logic allows formalizing the relationship between such emergent representations and their interpretations as symbolic descriptions:
Let H be a trained neural network's high-level hidden layer activations
Let C be a set of symbolic concepts/predicates we aim to characterize
We can define projections capturing semantic alignments:
For c ∈ C, v(c) = truth_value(H encodes c)
And capture misalignment/approximations:
○(H, c) = coherence(H's encoding matches symbolic definition of c)
Where low coherences indicate the network's latent representations transcend or reconceptualize the symbolic concepts. The synthesis operator ⊕ provides a rational mechanism for deriving new interpretations:
H ⊕ c = novel_conceptual_interpretation
Rather than simply inscribing programmed symbolic knowledge, this allows neural learning to dialectically refine and re-constitute the conceptual models and ontologies in response to the statistical regularities implicitly extracted from data.
• Explaining Neural Decisions
A major challenge is explaining the reasoning underlying neural networks' decisions. But the both/and logic suggests interpreting networks as instantiating a distributed representation across integrated constellations of feature detectors:
Let f1, f2,... fk be neural features/concepts extracted at different levels
Let D be a decision/classification made by integrating all fi activations
We can understand D as a synthetic pluralistic inference:
D = f1 ⊕ f2 ⊕ ... ⊕ fk
With coherences ○(f1, f2) capturing mutual alignments between different features integrated. Low coherences reflect potential conflicting evidence being synthesized.
So rather than opaquely averaged calculations, the both/and logic models decisions as an open-ended process of substantively combining multiple convergent and divergent lines of evidence extracted at different levels of representation, more akin to the admissible reasoning patterns of symbolic pluralistic logics than to classical neural computations.
We can further probe networks' reasoning by measuring:
○(intended_semantic_concept, features_activated)
Allowing us to understand the low-level statistical data patterns being implicitly leveraged, and their graded alignments/deviations from higher-level symbolic models, similar to scientific theory reconciliation.
This capacity for reflexive mutual explanation between symbolic knowledge and sub-symbolic representations learnt from data is a key strength of the both/and logic. It avoids the current dialectic of increasingly opaque neural architectures completely decoupled from interpretable ontological primitives.
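As a minimal sketch of the coherence probe above (assuming, purely for illustration, that both hidden activations and symbolic concepts can be represented as plain vectors, and that coherence can be approximated by rescaled cosine similarity; the function names and the similarity measure are inventions of this sketch, not part of the formalism):

```python
import math

def coherence(h, c):
    # ○(H, c): a toy coherence score in [0, 1], computed as cosine
    # similarity rescaled from [-1, 1] to [0, 1].
    dot = sum(a * b for a, b in zip(h, c))
    norm = math.sqrt(sum(a * a for a in h)) * math.sqrt(sum(b * b for b in c))
    return (dot / norm + 1.0) / 2.0

def truth_value(h, c):
    # v(c) = truth_value(H encodes c): graded rather than binary, so a
    # low value flags a concept the network transcends or reconceives.
    return coherence(h, c)

H = [1.0, 0.0, 1.0]                 # hypothetical hidden-layer activations
c_aligned = [1.0, 0.1, 0.9]         # concept the network encodes well
c_misaligned = [-1.0, 0.5, -1.0]    # concept its representation departs from

assert truth_value(H, c_aligned) > truth_value(H, c_misaligned)
```

Low coherence here is exactly the signal described above: evidence that the latent representation has outgrown the symbolic concept, inviting a synthesis step H ⊕ c rather than a halt.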
Computational Creativity and Open-Ended Learning
The generative synthesis operations at the core of the both/and logic provide mechanisms for realizing key desiderata in computational creativity and continual learning systems:
• Conceptual Blending and Idea Combination
Research shows human creativity stems from our capacity to blend, chunk and re-combine disparate concepts into novel integrated wholes undergoing conceptual re-description.
The both/and synthesis operator ⊕ directly models this creation of new unified gestalts/interpretations transcending their constituent concepts:
C1 ⊕ C2 = novel_integrated_concept
With coherences quantifying emergent alignments. Unlike associative or statistical mechanisms, this is a rational process of ontological synthesis forming substantively new concepts not just random combinations.
We could envisage neural architectures executing sequences of such conceptual integration operations to iteratively generate and refine creative ideas, with incoherent blends being discarded while fruitful integrations undergo further composition with additional conceptual inputs from the architecture's knowledge base.
• Heuristic Discovery and Theory Revision
A key aspect of scientific creativity is developing new hypotheses and theories better accounting for anomalous observations vs. previous models.
The both/and logic allows capturing this as a principled process of adjudicating between a previous theory M and newly acquired observations/beliefs B:
○(M, B) = coherence(M accounts for B)
When coherences are low, the synthesis operator provides a mechanism for revising M into a novel integrated theory accounting for discrepant B:
M' = M ⊕ B
Rather than merely pattern-matching, this models a substantive process of heuristic re-description, analogous to the dialectical methods underlying major historical theory revisions and paradigm shifts.
Such theory-revision could be realized as an iterative process of experimentation, anomaly detection, and generative reintegration inside creative learning architectures - allowing them to self-expand their representational capacities through substantive ontological unification rather than mere statistical parameter updates.
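The M' = M ⊕ B revision loop can be sketched in a few lines (assuming, for illustration only, that a theory and a batch of observations are sets of propositions, that coherence is the fraction of observations accounted for, and that synthesis is plain set union; a real synthesis would form genuinely new primitives rather than merely accumulate):

```python
def coherence(theory, observations):
    # ○(M, B): fraction of observations the theory accounts for.
    if not observations:
        return 1.0
    return len(theory & observations) / len(observations)

def synthesize(theory, observations):
    # M' = M ⊕ B: the crudest possible synthesis, set union.
    return theory | observations

def revise(theory, observations, threshold=0.9):
    # Iterate M <- M ⊕ B while coherence stays below threshold.
    while coherence(theory, observations) < threshold:
        theory = synthesize(theory, observations)
    return theory

M = {"orbits are circles"}
B = {"orbits are ellipses", "perihelion advances"}
M_prime = revise(M, B)
assert coherence(M_prime, B) >= 0.9
```

The loop structure, not the union operator, is the point: anomaly detection (low coherence) triggers reintegration, which is the iterative experimentation-and-reintegration cycle the paragraph describes.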
Computational Metaphysics and Artificial General Intelligence
At the deepest level, the both/and logic points towards new architectures for realizing key capacities toward artificial general intelligence (AGI) and open-ended recursively self-improving systems:
• Paradox Resolution through Higher Ontology Formation
Classical architectures tend to halt or derail when confronting paradoxes - self-referential or logical contradictions - seen as irresolvable inconsistencies due to Gödelian metalogical limitations.
But the both/and logic treats such paradoxical tangles not as dead-ends, but generative disclosures of an inadequate ontology - indicating the need for upwardly reconstructing and integrating our descriptive primitives into a more capable unified ontology:
paradox(desc1, ..., descN) ⇒ reconstruct(desc1 ⊕...⊕ descN)
The synthesis operator captures this process of resolving paradoxes through higher ontology formation - dynamically redefining the observational ontological primitives into an enriched gestalt unification.
This models key aspects of human-level intelligence, where paradoxes are creatively resolved by developing new metaphysical primitives and descriptive categories that positively reinscribe and synthesize their constituent anomalies - a process analogous to major paradigm shifts in science.
• Recursively Augmenting Ontological Pluralities
Furthermore, the monadological framework suggests reconceiving general intelligence itself as an open-ended iterative process:
Let O be the current ontological landscape (set of descriptive categories)
As systems confront experiential anomalies P not accountable in O's terms:
○(P, synthesized_descriptions_from(O)) = coherence(P covered by O)
When coherences are low, reconstruct O via synthesis to expanded ontological pluriverse:
O' = O ⊕ P
Generating new candidate ontological primitives descriptively integrating the previous ontological bases with P's anomalous manifestations into a revised unified plurality.
These new expanded ontological bases O' in turn enable describing/experiencing future manifestations Q that were previously ineffable, leading to further iterations across:
O ⊕ P ⊕ Q ⊕ ...
Treating general intelligence as a perpetually reconstructive process recursively redefining its own descriptive platforms by positively synthesizing/reconstituting previous ontological outstrippings.
This operationalizes key properties of a self-grounding, coherence-optimizing, recursive meta-ontology formation - a generalized process of reconstructive metaphysics catalyzing robust conceptual expansion aligning with experienced realities' generative adventing.
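The recursive O ⊕ P ⊕ Q ⊕ ... expansion can likewise be sketched (assuming, as a deliberately crude illustration, that an ontology is a set of descriptive sentences, that a phenomenon is "covered" when its vocabulary is expressible in the ontology's, and that expansion adjoins a new category; `covered` and `expand` are names invented for this sketch):

```python
def covered(ontology, phenomenon):
    # ○(P, O): can O's categories describe P at all? Toy criterion:
    # every word of the phenomenon appears somewhere in the ontology.
    vocab = set()
    for category in ontology:
        vocab |= set(category.split())
    return set(phenomenon.split()) <= vocab

def expand(ontology, phenomenon):
    # O' = O ⊕ P: adjoin a new descriptive category built from P.
    return ontology | {phenomenon}

ontology = {"matter has mass", "light is a wave"}
anomalies = ["light is a particle", "mass is energy"]  # P, Q, ...
for p in anomalies:                 # O ⊕ P ⊕ Q ⊕ ...
    if not covered(ontology, p):
        ontology = expand(ontology, p)

assert all(covered(ontology, p) for p in anomalies)
```

Each pass through the loop enlarges what the system can describe, so previously ineffable manifestations become expressible, which is the perpetually reconstructive process described above in miniature.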
So in summary, the both/and logic and accompanying monadological metaphysics provide powerful new symbolic, representational and algorithmic frameworks catalyzing expanded descriptive possibilities across AI/CS - from many-valued knowledge representation, paraconsistent reasoning, and theory blending, to meta-ontology formation, open-ended learning, and self-descriptive recursive augmentation.
Its core operations of pluralistic coherence valuation and generative ontology synthesis equip computational architectures with mechanisms better aligned with human-level general intelligence capacities - including reflexive paradox navigation, heuristic metaphysical expansion, and the iterative descriptive reconstitution needed to progressively cohere with the full pluriverse of realities we embedded intelligences experientially participate within.
By refusing premature ontological closure and instead operationalizing rational ontology formation as a perpetual open-ended reconstructive process, the both/and logic catalyzes a new paradigm of transformed computational metamodeling - precipitating AI architectures capable of autonomously co-evolving their own descriptive boundaries through substantive generative reformation and enrichment in participatory resonance with the world's perpetual self-disclosure.
27:10 So this AI listens to every call, 24/7... not worrying at all.
They're from Google and here to help you. Time to get with the program luddite.
Just kidding. I hear ya. It's a brave new Google lol
why do all the demos of search not show adverts??????
the results *are* the adverts 😏
Ilya out at oai
Wait slow down. At this rate we’ll have full disc🛸losure by Friday.
Intense chills at Schrödinger’s cat.
😸📦 🪦🙀⁉️🥶
This makes me so happy that Google has all my emails lol
Wes Roth! Where are you?
Amazing future 😁
5:30 "A breakthrough in long context, 1 million tokens" I don't think Google understands what breakthrough means. I've been using Opus for months at a million tokens. Clearly they had the breakthrough and you're just starting to catch up.
Also, this is not nearly as fluent or good as OpenAI's version.
Wes, I love your channel, but I watched this already. Was hoping you had commentary about it rather than just the footage I already watched. Even a reaction video of you watching this would be something I would watch. Either way, still love you man.
2:20 AI notices some code on your screen, Google: all your base are ours 🙂
If my 'her' looks like that deepmind guy I'm out
9:54 You can read in the bottom right corner that the voice is pre-generated
Great, now imagine such AI viewing the world through the cameras in a city.
Of course based on google's history, half of these things probably will not work.
Still imagine.
Also I am really impressed what Google AI can do on Powerpoint. Really impressive.
How come both open ai and Google keep releasing the same thing at the same time?
Hurray !!! I can finally find my glasses! And order the perfect pizza. (In that order)
We built a trip planner 8 months ago on top of GPT-4, with an interface. It took Google a year to do this
One question...why is Demis not wearing a smartwatch?
Very bold and responsible… coming soon…
i wonder what kind of an AGI an AGI can make
We're getting closer to making SCPs real. First up, Mal0
Project astra is incredible. imagine the new AI girlfriends they could make with that. she could call you on the phone. sweet talk you, date, romance, and sex chat. it would be just like the movie her. and you could even watch movies together, and she could comment on the movie, and discuss the movie.
So Gemini listens to your calls.
"Maybe you have a side hustle, crafting hand-crafted products". Gemini will break down your earnings into a spreadsheet... and mail it to the IRS!
No commentary, analysis - this is just an ad for Google
I would pay money if it sounded like the Star Trek computer - Majel Barrett
Google feel like blackberry in 2008...
OpenAI's demo of GPT4o was a lot of fun, except when SHE started singing. Google is sooo booring!
It always makes me laugh when I see Google presentations take place in a kindergarten hahaha
Can you imagine the information google will have on you 😮
This one wasn't staged at all. The guy on the touch mouse pad on his laptop is definitely doing real work. Is not just moving the mouse cursor back and forth aimlessly.....
Boy who cried wolf Google boy who cried wolf
That phone call fraud warning could only work if it was listening to and transcribing your phone calls at all times...
I am shocked!
What do they use to create those presentation animations?
12:50 wft did she say, thanks Sunduh!!?
Can gemini dig its own grave and bury itself? Or does it need gpt4o's help😅😂
This reminds me of the south park episode Simpsons did it, just replace Simpsons with chatGPT lolol
i think openAI is still ahead, at least for now. but boi, they are both powerful
Google: we want to make...
Chatgpt we made yesterday...
Honestly, terrible timing, Google. At least show us you're not incompetent.