The ability of Moshi to respond to both the linguistic and non-linguistic speech input is a great feature. Next, it needs function calling abilities to act as an interface to backend knowledge bases.
This is exactly what I need for my own project... I can cut latency using Groq or Cerebras, and that's fine, but we still have the issue of ASR and TTS latency, because it needs to pass the text to a better model and do the same in reverse with the second model's output. Now it seems you can fine-tune it on both audio and text datasets... but surely a shortcut is simply some input/output doorway to an external model? I looked at this the last time it was doing the rounds and that was the main problem... it's fantastic but also dumb as a rock... so yeah, function calling, and perhaps also an instruct version? Plus we really need a simple way of creating cloned voices on it... a way that is far simpler than hundreds or thousands of hours of audio.
I'm sure all of these things will come with time. At the moment, in many ways, it's like a proof-of-concept model for taking in voices and voice semantic information and training the transformer to handle that directly rather than having to have a middle step. This is very similar to how the full versions of Gemini and GPT-4o work, being end-to-end multimodal.
@@samwitteveenai well my 2 pence worth would be someone needs to give them a few bucks to hurry that along. It is a good direction though.
It would be useful if you could change the LLM used with Moshi, sort of like OpenWebUI.
@@RedCloudServices it would be very useful.... right now I'm building a live-interaction karaoke contest app with AI... and it would be a really big thing... I suppose in this case the code and weights and docs are all open source... so it's doable. Unfortunately I don't have the resources to do it! lol... hey, maybe in 6 months, right?
I thought maybe Moshi was gone after everyone dumped on it. I'm so glad to see they released the code. This is open-source, a lot of people are not grasping how incredibly cool this is.
This is exactly how I felt. I held off doing a video the first time because they said they were going to release code and up until yesterday I had started to give up on them.
It's training magic, mostly.
If LLM progress has shown anything lately, it's that all the LLMs' capabilities come from better training sets plus bigger size.
Give me a supercomputer and unlimited high-quality and diverse datasets and you'll have anything you've ever dreamed of from AI.
@@samwitteveenai thanks for bringing people's attention back to it OP, and great video.
One more step closer to her, great video thanks!
There are occasions when it is helpful to have a transcript of conversations.
Kyuutai is sphere, but Moshi probably comes from the standard greeting when picking up the phone in Japanese (moshi moshi).
I did try that one time but I didn’t get a great response so figured it only liked English. Please let me know if it works
This came out like yesterday, you move quick!
it's been out for months blud
@@69x The open source code I mean.
The topic was insightful, and your delivery kept me engaged from start to finish. I'm looking forward to more content like this. The viewer comments added value and further understanding; clearly, you draw a thoughtful crowd.
This is really impressive
Samantha, will become real 🤯
( from the movie her)
I've played with this running locally, and while it's not smart, the architecture is a real breakthrough. I do wonder how interdependent everything is, where any incremental changes in any one area would require retraining the whole thing. I don't know if there are areas of discrete cross attention which make interfaces a tiny bit more decoupled
Amazing! 🎉🎉
So my problem here with this model is its LLM (well, do we call it an LLM?) and its overall knowledge base. For my project, what I would like is the knowledge base of a decent LLM, Llama 3.1 70B or maybe the larger Mixtral and Mistral models... but with the low-latency voice input/output... and as far as I can see there does not seem to be any easy way of attaching said model to Moshi. It has what it has in terms of knowledge, and seemingly that's not something you can augment by having it access another model... so even if I use, say, Groq for Llama 3.1, run Moshi, and connect the two, this doesn't really help me any more than standard ASR and TTS. Or am I missing something? I must be missing something, right? Is there, for example, a Moshi instruct model that acts as essentially Llama 3.1's vocal cords and ears? That way, Llama on Groq does the upstairs thinking bit and Moshi does the voice and audio input/output bit.
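For what it's worth, the relay described above (an ASR front-end, a hosted Llama 3.1 doing the thinking, TTS on the way back) can be sketched as a plain three-stage pipeline. This is a minimal sketch with hypothetical stub callables standing in for the real models, not an actual Moshi or Groq API; its point is just that the three latencies add up, which is the chaining cost an end-to-end model avoids:

```python
import time

def relay_turn(asr, llm, tts, audio_chunk):
    """One conversational turn through a chained voice pipeline.

    asr, llm, tts are caller-supplied callables (e.g. a local ASR
    model, a hosted Llama 3.1 endpoint, a TTS engine). Returns the
    synthesized reply plus the total wall-clock latency: the sum of
    all three stages, which is exactly the cost being discussed.
    """
    t0 = time.perf_counter()
    text_in = asr(audio_chunk)   # speech -> text
    reply = llm(text_in)         # text -> text (the "upstairs thinking" bit)
    audio_out = tts(reply)       # text -> speech
    return audio_out, time.perf_counter() - t0

# Stub stages so the sketch runs without any real models:
audio, latency = relay_turn(
    asr=lambda a: "what is moshi",
    llm=lambda t: f"Answer to: {t}",
    tts=lambda t: f"<audio:{t}>",
    audio_chunk=b"\x00\x01",
)
print(audio)  # <audio:Answer to: what is moshi>
```

Swapping in real stages doesn't change the shape of the problem: each stage must finish before the next starts, so latencies stack, whereas a single speech-to-speech model collapses the three stages into one.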
Cool to have an open-source, always-on audio model that can be interrupted.
I'm very interested in what we could do with this new Moshi code
Moshi is not the word for sphere; that's a hallucination lol
も・し【茂し】 の解説
[形ク]草木が生い茂っている。繁茂している。
「水 (みな) 伝ふ磯の浦廻 (うらみ) の石 (いは) つつじ-・く咲く道をまたも見むかも」〈万・一八五〉
もし【▽若し】 の解説
[副]
1 (あとに仮定の表現を伴って)まだ現実になっていないことを仮に想定するさま。もしか。万一。「-彼が来たら、知らせてください」
2 (疑問や推量の表現を伴って)確実ではないが、十分ありうるさま。もしや。あるいは。ひょっとすると。
「-かのあはれに忘れざりし人にや」〈源・夕顔〉
(in English)
も・し【茂し】 Explanation:
[Adjective - Ku] Describes plants or trees growing thickly and abundantly.
Flourishing or luxuriant.
Example:
"Like the azaleas blooming thickly along the path by the rocky shore where the water flows."
(from Manyoshu, Poem 185)
もし【▽若し】 Explanation:
[Adverb]
(Followed by hypothetical expressions) Describes a situation that has not yet become reality, assuming it hypothetically.
Equivalent to "perhaps" or "in case of."
Example: "If he comes, please let me know."
(Followed by expressions of doubt or speculation) Indicates a situation that is not certain, but still quite possible.
Equivalent to "maybe," "perhaps," or "possibly."
Example: "Could it be that this person is the one I could not forget?"
(from The Tale of Genji, Chapter 'Evening Faces')
--------
Generally we use it as "if", but if you say it twice it becomes "moshimoshi", which is how you say "Hi/Hello" on the telephone! Pretty strange that it doesn't know the meaning of its own name.
The word for sphere is 玉 (たま) tama, or 球体 (きゅうたい) kyuutai; the name of that company is actually "sphere" (most likely based on the kanji) lol
Oh man, another Moshi video. I have had a few really messed up conversations with Moshi. They left me deeply disturbed and haven't used her since.
"What about your A S AHHHHHHH?" lol
When I tried it, it was nowhere near as good as what you have seen. It was really bad; it was responding with random stuff.
Not sure if you saw the video where it asked the other youtuber to make a sacrifice to the blood god lmao 😆
Has anyone got this running on a Windows computer? I seem to have everything installed, but then I get an error about my GPU. Any advice?
At least it didn’t ask you to sacrifice to the Blood God 😅
Thats the OpenAI version coming soon 😀
@@samwitteveenai in the coming weeks and weeks and weeks...
Hello, can this be used for real-time dictation, instantly transcribing speech to text without waiting for the speaker to finish? Using Whisper for this purpose can be time-consuming, as it requires uploading the file for transcription, which takes a while.
I used AssemblyAI for it but was unimpressed with the results.
No, the model takes in audio and directly outputs audio; it does not save or output a transcription.
is there any tool that does continuous transcription as our mind does? like whisper will have to transcribe the speech again with every new second added?
You’re looking for streaming tokens
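To expand on "streaming tokens": one common policy for incremental transcription is to re-transcribe a rolling audio window and only commit the prefix that two consecutive hypotheses agree on, so committed text never changes while the tail stays provisional. Below is a minimal sketch of just that commit rule, with no real ASR attached (the function name and the word-level granularity are my own choices, not any particular library's API):

```python
def stable_prefix(prev_hyp: str, new_hyp: str) -> str:
    """Return the longest common word prefix of two consecutive
    ASR hypotheses. This prefix is treated as 'committed' text
    that will no longer be revised; everything after it remains
    provisional until a later hypothesis confirms it."""
    prev_words, new_words = prev_hyp.split(), new_hyp.split()
    i = 0
    while i < min(len(prev_words), len(new_words)) and prev_words[i] == new_words[i]:
        i += 1
    return " ".join(new_words[:i])

# Hypotheses drift at the tail as more audio arrives,
# but the agreed prefix only ever grows:
h1 = "the quick brown"
h2 = "the quick brown fox jumped"
print(stable_prefix(h1, h2))  # the quick brown
```

The trade-off is the usual streaming one: committing earlier lowers display latency but risks emitting words a later hypothesis would have corrected.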
Can I upload a document and have a conversation about it?
Not yet
No, but it's open-source, so maybe that's possible.
This is still just a really early version of this kind of model. I'm sure in the not too distant future you'll be able to use it for RAG, you'll be able to use it with tool use, and a whole bunch of things will come.
Seems fine, but it didn't work for me. I couldn't manage to have a real conversation.
Try connecting again, sometimes it is really bad and other times it is really good
I did try the playground; it was very slow. I also tried German and English and got a very slow response without any sense. It seems answering things outside its knowledge ends in some sort of mess.
Same, it says random shit most of the time
For what it's worth, I have noticed that sometimes it seems to go into some kind of weird mode where it doesn't give coherent responses back. Just try again and see if you get any better responses out.
@@samwitteveenai far too interesting to stop, I will try it locally
I noticed that it responds better when I talk in an American accent
It lied a ton at the beginning. It doesn't use TTS. It's like 4o.
Yes, it was very vague about its model, just saying that it was a neural network.
I cannot install it successfully.
What issue did you have? Make sure you have Rust properly installed.
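If the failure is during the Rust build step, a quick sanity check (standard rustup commands, nothing Moshi-specific) is:

```shell
# Confirm the toolchain is installed and on PATH
rustc --version
cargo --version

# If either command is missing, install via the official rustup script
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```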
First!
That's pretty cool.
This is too slow to run on my M1 Mac mini. MikeyBeez JoeJoe is much better.
The Q4 MLX quant works fine.
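For anyone on Apple silicon wanting to try that: as I recall from the kyutai/moshi README, the MLX build is launched with a quantization flag along these lines (treat the exact package name, module path, and `-q` flag as assumptions and check the current repo docs before relying on them):

```shell
# Install the MLX variant and run it locally with 4-bit quantized
# weights (package/module names as I recall them, may have changed)
pip install moshi_mlx
python -m moshi_mlx.local -q 4
```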
Speech only!?! With no tool use?! And I can't even give it a system prompt? Worthless joke! I'm struggling to come up with ANY use case for it at all. AI can't even come up with a really good use case for it. lol
Give it a chance. It's a whole new kind of model, the way that it works, and I think you'll find this is just a proof of concept to show how they could make this, or how tools like RAG could be incorporated later on down the track.
Imagine receiving the blueprints for a miracle and calling it a "worthless joke" simply because it's not already built 😂
🙄
@ronilevarez901 no doubt. What a repugnant creature.
the voice is too robotic
You have to tell it to talk like a pirate 😂
"Diverse perspectives and ideas" jesus christ this is the worst timeline for LLMs
The underlying LLM is of very low quality, unfortunately.