More pressure for OpenAI to release GPT-4o voice. Good.
Hopefully they are smart enough to release it when they are actually ready, and not when people think they should because there might be competition.
@@tracy419 💯
@tracy419 Right. OpenAI has nothing to be concerned about. I tried this thing out and it was very poor.
Hasn't worked with SORA 🤷♂️
@@tracy419 then they should’ve waited to announce it until they were ready to release it
To the complainers. It's all in the context. It's a tiny quantized model, open source, and made by a small independent team of just 8 people, from scratch, in 6 months! It's so small because their aim is that it can run locally on device, and it's actually a true multi-modal model. It's like having a conversation in real time, even if it's still very janky and awkward at this point. With this context, it's astounding, and the experience is like nothing I've experienced in AI so far. There is no distance from the speaker; it's like it's right there listening and responding without any barrier.
That looks awesome
Thanks for sharing this info!!
Proof, at least for myself, that you don't need gigantic datasets to train these models. It's less elaborate in its answers, but still competitive and probably first in its niche.
Are you sure that it's open-source and can be downloaded rn?
So what? Regardless of their motivation or resources, it still sucks. Being a boutique small model does not make it any more useful.
im fucking SHOCKED, my legs are SHAKING, this QUITE LITERALLY BLEW MY MIND, my grandmother STOOD UP from her GRAVE because she was so SHOCKED
But are you stunned?
You are just being silly.
I too am shocky shocked
THE INDUSTRY is in SHAMBLES!
MattVidPro did a demo with it just now, and Moshi is pretty bad.
This is a GPT2 moment. It’s only a matter of time before voice models become the new LLMs.
True
It's hit and miss but when it works it's unbelievable. The response time is superhuman and when you get good relevant replies in less than 200ms you really get a glimpse of the future. Of course way more often it goes nuts, starts repeating itself, loops and stops listening, but if this is the beginning and they keep training this has huge potential IMO
The subtitles are hilarious.
Yeah, it's like he doesn't care or put any effort into making quality content.
@@egrinant2 he has no time, it seems, for sure lol
This is to make you even more shockingly shocked
Just listen to Ojo!
@@egrinant2 nowadays subtitles are only there to keep viewers' attention occupied, not to clarify the actual speech as they used to.. lazy times..
It starts off well, but at least for me, after about a minute, its functionality drops significantly, it starts repeating itself and just not understanding anything anymore.
I demoed it today and tried to ask more about the Matrix. Apparently Neo was a rebel pilot who teamed up with a hacker to fight an AI controlled by an evil corporation. Its voice options were limited, and it seemed to always hallucinate a response and then say "sorry, I'm here to help"
Confabulate. These models don't hallucinate.
Small model issues no doubt, but it's really about the starting steps to look at here; it'll only get better now. Go back 12-18 months in open source LLMs and you only had small-context, insane models that couldn't do any of this stuff. Now we've got damn near GPT-4 level punching with Llama 3. So...6 months of community development and it'll be pretty damn ace.
They swapped the model for a lesser one on the backend due to bandwidth.
Today I learned that pirates have an American accent.
It was a valley girl pirate.
I think that two-way conversation is going to be the way forward.
Um, I'm stunned they thought this was ready for a demo
**runs in circle while screaming**
Yes, the latency is impressive. The responses aren't quite as good as Pi, for instance, or the yet-to-be-released GPT assistant. Like lots of things in AI, it's only going to get better.
Hey Pi Ai by Inflection is THEE BEST 🔥 been using it 8 months absolutely awesome
Subtitles: the first movie was called Matrix released in 1990 😂😂
Main Character is "Yo" !
You nailed it
OpenAI will still be releasing GPT-4o voice in the next coming weeks (infinity)
Each day a yesteryear Nobel Prize is won. The word “shocked” has become a self mockery that reminds us that while we should be shocked, somehow, we’re not.
The voice didn’t actually let itself be interrupted in the demo. The guy just injected conversational noises to make it sound more natural but this isn’t any different than what we have besides emotiveness
Ah, that's why it felt off and reminded me of my kids' subterfuge; thanks for putting a finger on it for me.
Can't wait to try any of the State-of-the-Art Voice Models
Mind blowing stuff!
Call centers with humans are a thing of distant history!
I bet you that conversation at the end got you really, REALLY excited.
Gee thanks for the huge subtitles right in the middle of the screen where a video should have been.
Hot tip: CC is optional on the site; you don't need to force-feed me your script, because my ears work fine, thanks. And if they didn't, I would turn the CC on if I wanted it.
Pi is my preferred conversational AI due to its real-time internet access. Pi provides the most current information and answers, making interactions dynamic and informative. Pi's continuous learning and improvement facilitate more in-depth and accurate discussions on various topics.
Well, I just tested it and, while the latency and flow are really impressive, the LLM itself leaves a lot to be desired. I will check back in a couple of months to see how far it will have improved.
To be honest I just want the TTS part of this. Just a really good text-to-speech-plus-sound type thing that I could run on my own PC, like ElevenLabs, but if you could make it sing, make human noises like crying, and other vocalizations like pain, screams, deaths etc. for games and film, that would be so cool!
IT IS NOT SHOCKING NOR IS IT INCREDIBLE, THIS IS LITERALLY THE MOST BELIEVABLE TIMELINE TO RELEASE SUCH A THING
Just because it's inevitable, doesn't mean that it's not amazing.
@@drowzy2309 I did not say it's not amazing, it is :)
Why are you screaming captain Capslock?
With the resources and time given, this is a very, very impressive result!
It shows how common voice and LLM AIs are and that this is already an established technology.
It's a base to start from and can also be competitive at low cost.
@@Xrayhighs I agree, it is impressive. It's not shocking though.
Inflection is not emotion. Don't confuse them.
Agree, especially when AIs aren't capable of feeling emotions, but rather use their programming to act as if they had emotions in response to certain contexts lol
William Shatner would beg to differ.
So fear is not an emotion?
I’m looking at this chat….yall pissing me off! Don’t act like this didn’t just blow your minds! 🤯
This ain't anything crazy. Let's see how good it is in 5 years.
Hard to interrupt the damn thing😂
@@surfside75 GPT-4o already has that implemented..... Wherever the f*ck it is...
Watch MattVidPro's video about it. He tested it live and it was awful.
Right? The only people who are pretending are the iPad kids. Even AI developers are impressed.
🎯 Key points for quick navigation:
00:05 *🎭 The voice AI can express over 70 emotions and speaking styles, including whispering, singing, and accents.*
00:27 *🤯 The AI model revealed by Kyutai is state-of-the-art and shocked the industry with its real-time conversation capabilities.*
00:54 *🗣️ Moshi, the voice AI, can respond with lifelike emotions and incredible speed.*
01:06 *🇫🇷 Moshi demonstrates speaking with a French accent by reciting a poem about Paris.*
01:47 *🏴☠️ Moshi switches to a pirate voice and discusses pirate life.*
02:56 *🕵️ Moshi uses a whispering voice to tell a mystery story.*
03:22 *🎬 Moshi narrates the plot of "The Matrix" with detailed accuracy.*
03:54 *⚠️ Discussion on the current limitations of voice AI, including latency and loss of non-textual information.*
05:02 *🔄 Explanation of the new approach to integrate complex pipelines into a single deep neural network.*
07:16 *🎤 Demonstration of Moshi understanding and generating speech by listening to a voice snippet.*
08:13 *💡 Moshi thinks as it speaks, generating both text and audio simultaneously for richer interactions.*
09:12 *🔊 Moshi supports dual audio streams, allowing it to speak and listen simultaneously for more natural conversations.*
10:20 *📞 Example of Moshi's conversational capabilities using historical data sets.*
12:23 *😮 Moshi can express over 70 different emotions and speaking styles using a text-to-speech engine.*
15:59 *📱 Moshi can run on-device, ensuring privacy and security by eliminating the need for cloud processing.*
18:36 *🔐 Measures are in place to detect and watermark audio generated by Moshi for safety and authenticity.*
20:11 *🌐 Demonstration of Moshi's real-time conversational capabilities, showing quick responses and lifelike interaction.*
23:34 *🚀 Moshi represents a revolutionary advancement in AI, promising significant changes in AI-human interactions.*
Made with HARPA AI
this is going to be so great for every AI girlfriend app. imagine having an AI girlfriend who can imitate any accent, and sound like anyone.
The demo is GPT-2 level, nowhere near 4o lmao
So.. 70 preset (or canned) styles and voices is supposed to be better than GPT4o?
Just tried it and it won't use emotion, and it also hallucinates a lot
Hallucination isn't strictly a problem. It's been realized that it's a path to innovation. You have to think outside the box to come up with new solutions, and hallucination is a form of that. We realized this the very first time AlphaGo came up with a move that the best human players thought was a huge blunder. It was SO far outside our framework that we needed in-depth analysis to realize the genius of it.
AI models have a sliding scale that is applied, scaling from Factual to Creative when in use. The goal is NOT to eliminate hallucination and creative thoughts, but rather to do so ONLY when the scale is set to 100% factual. There are multiple methods being pursued, including data input, as well as post-training editing.
@@brianmi40 Hallucination is used here to describe when the AI does not answer the question but makes up stuff that looks like an answer. Variation is good, but no output control, and no re-verifying of its own result with self-created questions, keeps an AI under the virtual human age of 6 years. It can speak, remember, and answer, but it lies and has no morals/ethics.
An AI that asks you whether there are multiple ways to answer, for further narrowing, an AI that surprises you with a question so you understand for yourself what you are asking for..this is the next step to reach AGI.
Pure scaling just means FASTER AI. A qualitative jump may, with a very high chance, help close the gap and speed away in a short time, because self-improvement needs this step. So..to my understanding you are good at public "AI-business" speech, but without any real argument for why hallucination is a good thing. AlphaGo made a legal move; a hallucination would mean a move like J2-5, etc.
They're confabulations, also known as false memories. When your grandpa is spinning yarns about his past with stuff he's misremembering, you wouldn't claim he's hallucinating.
@@YouLoveMrFriendly It's just the term that is media "friendly/catchy" to use.
Papers have now discussed how it's not a "bad" thing per se, and that coming up with ideas LLMs are NOT trained on IS VITAL to fashioning new and novel solutions. The trick is getting the "Facts - Creative" slider that LLMs allow the user to set to go FULL ON Facts when desired...
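For what it's worth, that "Facts - Creative" slider usually corresponds to sampling temperature (and sometimes top-p). A minimal, hypothetical sketch of the mapping; the function name and exact ranges are illustrative, not any particular product's settings:

```python
# Hypothetical mapping from a 0-100 "Facts - Creative" slider to sampling
# parameters. Not any specific library's API; ranges are illustrative only.
def slider_to_sampling(creativity_percent: float) -> dict:
    """0 = fully factual (near-greedy decoding), 100 = maximally creative."""
    t = max(0.0, min(100.0, creativity_percent)) / 100.0
    return {
        "temperature": round(1.2 * t, 2),   # 0.0 .. 1.2
        "top_p": round(0.1 + 0.9 * t, 2),   # 0.1 .. 1.0
    }

print(slider_to_sampling(0))    # {'temperature': 0.0, 'top_p': 0.1}
print(slider_to_sampling(100))  # {'temperature': 1.2, 'top_p': 1.0}
```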
If you use quantized versions, it will really hallucinate a lot.
You see how he tries to speak without pausing, otherwise the LLM will interrupt and process what he said. When they improve that, it will be a big improvement imo. I have it all the time when I have a longer question or one with more parameters and I think for 2 seconds: it just switches and answers what I said. I think in that case a button would be good, to just listen until I'm finished. But I think we will need bigger context windows for that. Maybe 250k will be enough.
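A minimal sketch of that "just listen until I'm finished" idea: gate the model's turn on either a hold-to-talk button or a longer silence window. Every name and threshold here is hypothetical, not Moshi's actual turn-taking logic:

```python
# Hold back the assistant's turn until the user releases a hold-to-talk button
# or a long-enough pause is detected. Names and thresholds are hypothetical.
import time

PAUSE_SECONDS = 2.0           # silence this long counts as "done talking"
SILENCE_RMS_THRESHOLD = 0.01  # frame energy below this is treated as silence

def user_turn_finished(frame_rms: float, last_voice_time: float,
                       hold_button_pressed: bool, now: float | None = None) -> bool:
    """Return True when the assistant is allowed to start responding."""
    if hold_button_pressed:                  # explicit "keep listening" override
        return False
    if frame_rms >= SILENCE_RMS_THRESHOLD:   # user is still speaking
        return False
    now = time.monotonic() if now is None else now
    return (now - last_voice_time) >= PAUSE_SECONDS

# Example: 1.2 s of silence with the button released is still the user's turn.
print(user_turn_finished(0.0, last_voice_time=10.0, hold_button_pressed=False, now=11.2))  # False
print(user_turn_finished(0.0, last_voice_time=10.0, hold_button_pressed=False, now=12.5))  # True
```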
If it really will be open sourced, then it will be really a new era for AI apps.
It looks awesome. Perhaps a couple of challenges for real-world use are i) speed of the text-to-speech audio conversion, i.e. how fast is the API? And ii) cost of conversion. This might be the biggest deal. Cloned voices can rock, and if they express emotions + convert from text to speech fast, then that = amazing. But what's the token cost of it? If it's going to cost $0.10 - $0.15 every time it talks for 30 seconds, it may not be that viable in an app where end users want to listen to their AI talk for, say, 30 or 60 mins a day.
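Rough back-of-the-envelope on that cost worry, using the commenter's hypothetical $0.10-$0.15 per 30 seconds (not a published price):

```python
# Daily and monthly cost of generated speech at an assumed price per 30 seconds.
def daily_cost(minutes_per_day: float, price_per_30s: float) -> float:
    """Dollars per day for `minutes_per_day` of speech at `price_per_30s`."""
    return minutes_per_day * 2 * price_per_30s  # two 30-second units per minute

for minutes in (30, 60):
    for price in (0.10, 0.15):
        per_day = daily_cost(minutes, price)
        print(f"{minutes} min/day at ${price:.2f}/30s -> "
              f"${per_day:.2f}/day, ${per_day * 30:.2f}/month")
# 30 min/day at $0.10/30s -> $6.00/day,  $180.00/month
# 60 min/day at $0.15/30s -> $18.00/day, $540.00/month
```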
The speed of the responses is great. Trying out the demo, it seems the model is pretty basic; hope in the future they can improve the quality of the responses.
It felt to me like they had compressed at all costs so it could do edge stuff on phones running locally. When it was connected to the LLM in the demo it seemed far more aware.
I'm shocked and mind blown
After actually trying the demonstration: the voice wasn't as good (sounded like typical TTS), and the LLM response was premature. Which is to say: it often did not wait until I had finished a sentence before it jumped in and tried to respond. They need to work on more natural response timing.
Where is the Fassbender AI voice?
Honestly, Peter O’Toole would be better
Weekly rollouts like this should be happening in the medical field.
Just walk through any medical operations ward and listen to the humanity in there.
For the past 5 or 6 months, none of the released videos or news have actually been shocking or INDUSTRY DESTROYING
I love where AI is going, now they need to give AI full freedom on voice manipulation, sound like a gnome, or a rapper, or an old man like David Attenborough, or with an American southern accent.
I'm an "in-game footage" victim from the early-to-mid 2010s.....a BIG preproduction demo can be very deceiving...But I hope this is REAL TIME
That model is open source? That would be amazing. Who needs OpenAI
In regards to your own recording: it thinks it's conscious and a person because the devs prompted it to
From my test it is very bad; it stops answering and doesn't understand well
The AI community should really stop with the announcement hype, because when we test afterwards we realize how rotten it is. The thing doesn't even understand when I speak to it in French, which is its mother tongue. It reminds me of Google I/O announcing multimodal Gemini, and a paid subscription, when that is not yet the case...
Man, OpenAI needs to release something really good and fast; they're losing the race to Claude, Gen-3, and now this.
I'm hoping GPT-5 and Sora will be something absolutely amazing
@@Techtalk2030 They won't be. Luma is literally destroying the competition for AI video.
@@brianjanssens8020 Luma isn't that good in my experience. It's free, but Gen-3 is much better
@@brianjanssens8020 Luma isn't that good compared to Gen-3 in my opinion
@@Techtalk2030 very true, although Luma is getting more attention because not only did it come out first, it's also sort of free.
The very first clip sounded great, but once the live demo started it sounded a lot more robotic.
I love AI voices; I love to try them out and even use them in my podcast. However, this AI voice does not consider itself an assistant?
Is this something that we should be happy about or proud of?
this is just amazing!
Thank God, we’re back to shocking!! I miss being shocked.
When he said "shocked the entire industry" at 0:40 I stopped the video and disliked it because I don't want to be shocked
@@Ginto_O❄️
@@Ginto_O🥶
@@Ginto_O If you’re going to co exist with AI you must be comfortable being “ shocked” daily.😂
That guy went from being a captain to not knowing what the pirate life is about..
He should have been flagged for dementia.
Ai with dementia😂
Damn, the "entire industry" is "shocked" pretty much every day according to you.
Same manager runs Scotty's auto channel 😂
AI often continues talking because it fails to detect subtle human cues, such as intonation, that indicate a desire for the AI to stop or change the topic. To address this issue, AI needs to convert audio files into human-readable text files that capture the full context of speech. This involves creating timestamped subtitles, emotion data (including type and intensity), hidden meanings, and voice properties like speed and pitch, deviations from normal (to indicate emotions). By including detailed annotations, AI can better understand what humans want while not directly saying it. Additionally, recording emotions expressed by facial expressions and body movements is crucial, as part of the meaning can be conveyed through these non-verbal cues.
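One possible shape for the annotated transcript described above: timestamps, emotion type and intensity, implied intent, and voice-property deviations attached to each utterance. Purely illustrative, not any model's real format:

```python
# Illustrative schema for an annotated utterance: timing, emotion, implied
# intent, and voice properties with deviations from the speaker's baseline.
from dataclasses import dataclass, field

@dataclass
class VoiceProperties:
    speech_rate_wpm: float        # words per minute
    pitch_hz: float               # mean fundamental frequency
    rate_deviation: float = 0.0   # relative change vs. the speaker's baseline
    pitch_deviation: float = 0.0

@dataclass
class UtteranceAnnotation:
    start_s: float
    end_s: float
    text: str
    emotion: str                  # e.g. "impatient", "curious"
    emotion_intensity: float      # 0.0 to 1.0
    implied_intent: str | None = None  # the "hidden meaning", if any
    voice: VoiceProperties = field(default_factory=lambda: VoiceProperties(150.0, 180.0))

# Example: a clipped, rising-pitch "okay, okay" often signals "please stop talking".
cue = UtteranceAnnotation(
    start_s=12.4, end_s=13.1, text="okay, okay",
    emotion="impatient", emotion_intensity=0.7,
    implied_intent="wants the assistant to stop or change topic",
    voice=VoiceProperties(speech_rate_wpm=210.0, pitch_hz=220.0,
                          rate_deviation=0.4, pitch_deviation=0.2),
)
print(cue.implied_intent)
```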
On the one hand, this demonstration is super impressive, but I've tried it multiple times and it's super buggy
After two minutes of conversation, it just got stuck and told me constantly that it's playing and then it's not playing.
Seems super cool, but something is not working
Every day now there's something new in Ai. Can't even keep up with the news. Forget the models
The software creating your subtitles needs to be further trained on detecting a French accent!
I tested the real-time demo online, but it seems it sometimes hallucinates, and some of the answers are not as clear as GPT-4's, but I agree the speed is insane
Where can I get the model weights or a GitHub link?
Amazing!!! 😱
How can it master emotions if emotions are not universal? Some tribes do not know what happy means.
Bokutachi wa SF o mamotte imasen. Humanity is not protecting science fiction. Fun is so entertaining now. This live-in tutor can teach accents & annotations!
Did you see Wes Roth messing around with it? I am pretty sure he cherry-picked the worst of the bunch, but it was messed up.
When will they release it though
Link?
your chat at the end was.... 'quite something'....
Tried it a few times. It's not very good.
What were some cons
They had to swap the model being used because it was overloaded.
I tried it. It needs a lot of work and I'd like to see it improve.
Why is it always that businesses use the worst examples to show off AI's capabilities?
It was cringey, especially the pirate one; it makes it sound as if their audience were a bunch of 5 year olds.
Is it that hard to just fake a whole conversation rather than talk to your audience like they were dumb? 😂
Because they’re nerds
I am trying to use it, and it says, "This site can't ask for your permissions" plus "Close any bubbles or overlays from other apps then try again" - Anyone know what that means?
ShOcKs tHe eNtIrE InDuStRy
"Thankyou Mr Data, that will be all."
It's scary that it inserted a trailer for AFRAID; targeted advertising
Ok, so I'm not sure I got it: it generates emotions, but does it detect them too?
A French Ross Geller.
Its green speaking eye is very reminiscent of Hal. I'm sorry Dave, I can't do that. I think Moshi should be the AI leader of the uprising.
The web version is either way better, or the first examples are cherry-picked out of millions of tries. Because my conversations with the demo are the same as the demoed offline version: horrible.
this ai is INSANE, but literally. take care.
It's getting there.
OpenAI is making potential competitors do clown work and clones while they are onto the real-deal stuff
I'm French and I can tell you that regarding the French accent, there is some work to do. 😂
Knock knock.
Who's there?
Interrupting cow.
Interrupting cow wh--
Moooooo!
KYUTAI wanting to learn about AI interesting
Can you sing a song?
Answer: No, I can't
Can you whisper?
Answer: No, I can't
Mm. Thank you very much.
STUNNED!!!
A phone call to the past - time travel is here..
What's funny is that the AI knows and speaks English better than the people training it
This does not beat GPT-4o
I just demo'ed it. It sucks.
AI SHOCKS The ENTIRE INDUSTRY! every 5 minutes; it would be cool to vary your slogan, no?
The whole team will soon work in Silicon Valley
It’s not true conversation AI. Even on Chrome. It’s a rushed to market platform. You can barely get a few words in before it starts speaking; it’s like you’re almost forced to verbally rush your sentences. Its capability to demonstrate patience and wait for your expressions to be rendered are nominal. Obviously we all appreciate the effort but we’d all be more opt to support platforms that are near ready for beta. This isn’t beta ready. It’s just not. Even with Chrome.
Wow, now when the AI takeover happens, Skynet can chase me down with a pirate accent
KyutAI like in "cute AI", come on!
Idk looks like a good storyline for Portal 3 or Half-Life 3
The locally computed part of his demo fell to shit.
No reason to be so rude to her, just say "thank you, that's enough" 😅
Ok, where is the woman with the microphone hiding :)
Similar to GPT-4o, what they showed in the demo is far from what we get to use.