Moshi The Talking AI

  • Published 20 Sep 2024

COMMENTS • 68

  • @johnkintree763
    @johnkintree763 2 days ago +10

    The ability of Moshi to respond to both the linguistic and non-linguistic speech input is a great feature. Next, it needs function calling abilities to act as an interface to backend knowledge bases.

    • @mickelodiansurname9578
      @mickelodiansurname9578 2 days ago +2

      This is exactly what I need for my own project... I can cut latency using Groq or Cerebras and that's fine, but we still have the issue of ASR and TTS latency... because it needs to pass the text to a better model, and do the same in reverse with the second model's output. Now, it seems you can fine-tune it on both audio and text datasets... but surely a shortcut is simply some input/output doorway to an external model? I looked at this last time it was doing the rounds and that was the main problem... it's fantastic but also dumb as a rock... so yeah, function calling and perhaps also an instruct version? Plus we really need a simple way of creating cloned voices on it... a way that is far simpler than hundreds or thousands of hours of audio.

    • @samwitteveenai
      @samwitteveenai  2 days ago +2

      I'm sure all of these things will come with time. At the moment, in many ways, it's like a proof-of-concept model for taking in voices and voice semantic information and training the transformer to do that, rather than having to have a middle step. This is very similar to how the full version of Gemini and the full version of GPT-4o work, being end-to-end multimodal.

    • @mickelodiansurname9578
      @mickelodiansurname9578 2 days ago +1

      @@samwitteveenai well, my two pence worth is that someone needs to give them a few bucks to hurry that along. It is a good direction though.

    • @RedCloudServices
      @RedCloudServices 1 day ago +1

      it would be useful if you could change the LLM used with Moshi, sort of like Open WebUI

    • @mickelodiansurname9578
      @mickelodiansurname9578 1 day ago

      @@RedCloudServices it would be very useful... right now I'm building a live interaction karaoke contest app with AI... and it would be a really big thing... I suppose in this case the code and weights and docs are all open source... so it's doable. Unfortunately I don't have the resources to do it! lol... hey, maybe in 6 months, right?

  • @EDLR234
    @EDLR234 2 days ago +5

    I thought maybe Moshi was gone after everyone dumped on it. I'm so glad to see they released the code. This is open source; a lot of people are not grasping how incredibly cool this is.

    • @samwitteveenai
      @samwitteveenai  2 days ago +4

      This is exactly how I felt. I held off doing a video the first time because they said they were going to release code, and up until yesterday I had started to give up on them.

    • @ronilevarez901
      @ronilevarez901 1 day ago +1

      It's training magic, mostly.
      If LLM progress has shown anything lately, it's that all of an LLM's capabilities come from better training sets + bigger size.
      Give me a supercomputer and unlimited high-quality and diverse datasets and you'll have anything you've ever dreamed of from AI.

    • @EDLR234
      @EDLR234 1 day ago

      @@samwitteveenai thanks for bringing people's attention back to it, OP, and great video.

  • @chunlingjohnnyliu2889
    @chunlingjohnnyliu2889 2 days ago +3

    One more step closer to her, great video thanks!

  • @johnkintree763
    @johnkintree763 2 days ago +4

    There are occasions when it is helpful to have a transcript of conversations.

  • @まさしん-o8e
    @まさしん-o8e 2 days ago +7

    Kyuutai means sphere, but Moshi probably comes from the standard greeting when picking up the phone in Japanese (moshi moshi).

    • @samwitteveenai
      @samwitteveenai  2 days ago

      I did try that one time, but I didn't get a great response, so I figured it only liked English. Please let me know if it works.

  • @thetagang6854
    @thetagang6854 2 days ago +9

    This came out like yesterday, you move quick!

    • @69x
      @69x 2 days ago +1

      it's been out for months, blud

    • @thetagang6854
      @thetagang6854 1 day ago

      @@69x The open-source code, I mean.

  • @donconkey1
      @donconkey1 1 day ago +1

    The topic was insightful, and your delivery kept me engaged from start to finish. I'm looking forward to more content like this. The viewer comments added value and further understanding; clearly, you draw a thoughtful crowd.

  • @xenoaiandrobotics4222
    @xenoaiandrobotics4222 2 days ago +2

    This is really impressive

  • @dhruvgandhi5796
    @dhruvgandhi5796 2 days ago +2

    Samantha will become real 🤯
    (from the movie Her)

  • @nickludlam
    @nickludlam 1 day ago

    I've played with this running locally, and while it's not smart, the architecture is a real breakthrough. I do wonder how interdependent everything is, where any incremental change in any one area would require retraining the whole thing. I don't know if there are areas of discrete cross-attention which make the interfaces a tiny bit more decoupled.

  • @MeinDeutschkurs
    @MeinDeutschkurs 2 days ago

    Amazing! 🎉🎉

  • @mickelodiansurname9578
    @mickelodiansurname9578 2 days ago +2

    So my problem here with this model is its LLM (well, do we call it an LLM?) and its overall knowledge base. For my project, what I would like is the knowledge base of a decent LLM, Llama 3.1 70B or maybe the larger Mixtral and Mistral models... but with the low-latency voice input/output... and as far as I can see, there does not seem to be any easy way of attaching said model to Moshi. It has what it has in terms of knowledge, and seemingly that's not something you can augment by having it access another model... so even if I use, say, Groq for Llama 3.1 and run Moshi and connect the two, this doesn't really help me any more than standard ASR and TTS. Or am I missing something? I must be missing something, right? Is there, for example, a Moshi instruct model that acts as essentially Llama 3.1's vocal cords and ears? That way Llama on Groq does the upstairs thinking bit and Moshi does the voice and audio input/output bit.

  • @jakobpcoder
    @jakobpcoder 1 day ago

    cool to have an always-on, open-source audio model that can be interrupted

  • @superresistant0
    @superresistant0 1 day ago

    I'm very interested in what we could do with this new Moshi code

  • @phen-themoogle7651
    @phen-themoogle7651 2 days ago +1

    Moshi is not the word for sphere, that's a hallucination lol
    も・し【茂し】 の解説
    [形ク]草木が生い茂っている。繁茂している。
    「水 (みな) 伝ふ磯の浦廻 (うらみ) の石 (いは) つつじ-・く咲く道をまたも見むかも」〈万・一八五〉
    もし【▽若し】 の解説
    [副]
    1 (あとに仮定の表現を伴って)まだ現実になっていないことを仮に想定するさま。もしか。万一。「-彼が来たら、知らせてください」
    2 (疑問や推量の表現を伴って)確実ではないが、十分ありうるさま。もしや。あるいは。ひょっとすると。
    「-かのあはれに忘れざりし人にや」〈源・夕顔〉
    (in English)
    も・し【茂し】 Explanation:
    [Adjective - Ku] Describes plants or trees growing thickly and abundantly.
    Flourishing or luxuriant.
    Example:
    "Like the azaleas blooming thickly along the path by the rocky shore where the water flows."
    (from Manyoshu, Poem 185)
    もし【▽若し】 Explanation:
    [Adverb]
    (Followed by hypothetical expressions) Describes a situation that has not yet become reality, assuming it hypothetically.
    Equivalent to "perhaps" or "in case of."
    Example: "If he comes, please let me know."
    (Followed by expressions of doubt or speculation) Indicates a situation that is not certain, but still quite possible.
    Equivalent to "maybe," "perhaps," or "possibly."
    Example: "Could it be that this person is the one I could not forget?"
    (from The Tale of Genji, Chapter 'Evening Faces')
    --------
    Generally we use it as "if", but if you say it twice it becomes moshimoshi, which is how you say "Hi/Hello" on the telephone! Pretty strange that it doesn't know the meaning of its own name.
    The word for sphere is 玉 (たま) tama, or 球体 (きゅうたい) kyuutai; the name of that company, Kyuutai, actually means "sphere" (most likely based on the kanji) lol

  • @mitchellmigala4107
    @mitchellmigala4107 2 days ago

    Oh man, another Moshi video. I have had a few really messed-up conversations with Moshi. They left me deeply disturbed and I haven't used her since.

  • @ceaderf
    @ceaderf 1 day ago

    "What about your A S AHHHHHHH?" lol

  • @darthvader4899
    @darthvader4899 1 day ago

    when I tried it, it was nowhere near as good as what you've seen here. It was really bad. It was responding with random stuff.

  • @randomlettersqzkebkw
    @randomlettersqzkebkw 2 days ago

    Not sure if you saw the video where it asked another youtuber to make a sacrifice to the blood god lmao 😆

  • @WillJohnston-wg9ew
    @WillJohnston-wg9ew 1 day ago

    anyone get this running on a Windows computer? I seem to have everything installed, but then I get an error about my GPU. Any advice?

  • @TheRemarkableN
    @TheRemarkableN 2 days ago +1

    At least it didn’t ask you to sacrifice to the Blood God 😅

    • @samwitteveenai
      @samwitteveenai  2 days ago +2

      That's the OpenAI version, coming soon 😀

    • @EDLR234
      @EDLR234 1 day ago

      @@samwitteveenai in the coming weeks and weeks and weeks...

  • @yurijmikhassiak7342
    @yurijmikhassiak7342 2 days ago +2

    Hello, can this be used for real-time dictation, instantly transcribing speech to text without waiting for the speaker to finish? Using Whisper for this purpose can be time-consuming, as it requires uploading the file for transcription, which takes a while.

    • @piotrnowakowski8904
      @piotrnowakowski8904 2 days ago +1

      I used AssemblyAI for it but was unimpressed with the results

    • @SinanAkkoyun
      @SinanAkkoyun 2 days ago +3

      No, the model takes in audio and directly outputs audio; it does not save or output the transcription

    • @yurijmikhassiak7342
      @yurijmikhassiak7342 2 days ago

      is there any tool that does continuous transcription the way our minds do? Like, does Whisper have to re-transcribe the speech with every new second added?

    • @superresistant0
      @superresistant0 2 days ago

      You’re looking for streaming tokens

  • @kai_s1985
    @kai_s1985 2 days ago

    Can I upload a document and have a conversation about it?

    • @hitlab
      @hitlab 2 days ago

      Not yet

    • @EDLR234
      @EDLR234 2 days ago

      No, but it's open-source, so maybe that's possible.

    • @samwitteveenai
      @samwitteveenai  2 days ago +2

      This is still just a really early version of this kind of model. I'm sure that in the not-too-distant future you'll be able to use it for RAG, you'll be able to use it with tool use, and a whole bunch of things will come.

  • @itblood
    @itblood 2 days ago

    Seems fine, but it didn't work for me. Couldn't manage to have a real conversation.

    • @samwitteveenai
      @samwitteveenai  2 days ago +1

      Try connecting again; sometimes it is really bad and other times it is really good

  • @svenandreas5947
    @svenandreas5947 2 days ago

    I did try the playground, very slow. I also tried German and English and got a very slow response without any sense. It seems answering stuff outside its knowledge ends in some sort of mess.

    • @thenextbigthing1393
      @thenextbigthing1393 2 days ago

      Same, it says random shit most of the time

    • @samwitteveenai
      @samwitteveenai  2 days ago +1

      For what it's worth, I have noticed that sometimes it seems to go into some kind of weird mode where it doesn't give coherent responses back. Just try again and see if you get any better responses out.

    • @svenandreas5947
      @svenandreas5947 2 days ago +1

      @@samwitteveenai far too interesting to stop, I will try it locally

    • @thenextbigthing1393
      @thenextbigthing1393 2 days ago +1

      I noticed that it responds better when I talk in an American accent

  • @adamholter1884
    @adamholter1884 2 days ago

    It lied a ton at the beginning. It doesn't use TTS. It's like 4o.

    • @samwitteveenai
      @samwitteveenai  2 days ago

      Yes, it was very vague about its model, just saying that it was a neural network.

  • @AngusLou
    @AngusLou 2 days ago

    Cannot install successfully

    • @samwitteveenai
      @samwitteveenai  2 days ago

      What issue did you have? Make sure you have Rust properly installed

  • @AshWickramasinghe
    @AshWickramasinghe 2 days ago +1

    First!
    That's pretty cool.

  • @dr.mikeybee
    @dr.mikeybee 2 days ago

    This is too slow to run on my M1 Mac mini. MikeyBeez JoeJoe is much better.

    • @nickludlam
      @nickludlam 1 day ago

      The q4 MLX quant works fine

  • @irbsurfer1585
    @irbsurfer1585 2 days ago +1

    Speech only!?!?! With no tool use?! And I can't even give it a system prompt? Worthless joke! I'm struggling to come up with ANY use case for it at all. AI can't even come up with a really good use case for it. lol

    • @samwitteveenai
      @samwitteveenai  2 days ago +3

      Give it a chance. It's a whole new kind of model in the way that it works, and I think you'll find this is just a proof of concept to show how they could make this, and how things like RAG could be incorporated later on down the track.

    • @ronilevarez901
      @ronilevarez901 1 day ago +2

      Imagine receiving the blueprints for a miracle and calling it a "worthless joke" simply because it's not already built 😂
      🙄

    • @anubisai
      @anubisai 1 day ago

      @ronilevarez901 no doubt. What a repugnant creature.

  • @pondeify
    @pondeify 2 days ago

    the voice is too robotic

    • @AmazingArends
      @AmazingArends 1 day ago

      You have to tell it to talk like a pirate 😂

  • @UrbanLetsPlay
    @UrbanLetsPlay 2 days ago +4

    "Diverse perspectives and ideas"... Jesus Christ, this is the worst timeline for LLMs

  • @dievas_
    @dievas_ 1 day ago

    The underlying LLM is of very low quality, unfortunately