The Secret to Instant Meeting Summaries: Whisper Diarization Revealed

  • Published 19 Jan 2025

COMMENTS • 15

  • @nexuslux 10 months ago

    Very cool channel, being so responsive to the comments. Going to check this out in more detail in the coming days.

  • @shaytrequesser4482 9 months ago +4

    Is there any way to run the transcription with diarization locally?

    • @ano2028 9 months ago +1

      If you have a machine with a strong GPU and plenty of memory, you can clone the model to your local machine, follow its README for setup, and run it locally (e.g. via a Python subprocess) instead of through Replicate. Replicate is basically a cloud API that "lends" a GPU compute engine to those who don't have the budget to buy one.
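
      A minimal sketch of the "python subprocess" approach mentioned above. The script path and the `--audio` flag are hypothetical placeholders; check the README of whichever diarization repo you actually clone for its real entry point and arguments.

```python
import subprocess
import sys

def diarize_locally(audio_path, script="whisper-diarization/diarize.py"):
    """Run a locally cloned diarization script as a subprocess.

    Both `script` and the `--audio` flag are placeholders; substitute
    the entry point and flags documented in the repo you cloned.
    """
    cmd = [sys.executable, script, "--audio", audio_path]
    # capture_output/text collect stdout and stderr as strings
    return subprocess.run(cmd, capture_output=True, text=True)
```

      Using `sys.executable` instead of a bare `"python"` makes the call run under the same interpreter (and virtualenv) as the calling script.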

    • @ai-for-devs 7 months ago

      Yes, you can find the models used on huggingface.co, including instructions on how to run them locally.

  • @Michaelhajster 10 months ago +1

    Great video, thanks for the tutorial! I just subscribed to your channel. What great times we live in!

  • @AngusLou 6 months ago +2

    Thank you for the impressive video; it would be even better if there were an on-premise solution.

  • @truckfinanceaustralia1335 4 months ago

    This vid is awesome! Thanks :) I just subbed.

  • @boooosh2007 7 months ago

    This is great. Did you automate the final version of the meeting notes as well, or did you clean it up yourself? If automated, please show that as well.

    • @ai-for-devs 7 months ago

      This is done in part 2 of the course (see course preview here: ua-cam.com/video/_C-boIci0C8/v-deo.html). If you are interested, please put yourself on the waiting list at ai-for-dev.com.

  • @st.3m906 8 months ago

    What would you do if the transcript is past the token limit for the LLM?

    • @ai-for-devs 8 months ago +1

      If the transcript exceeds the token limit for the LLM, I would break it into smaller, manageable chunks and process each one sequentially.
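
      A minimal sketch of that chunking step. The 4-characters-per-token ratio is a rough heuristic, not an exact count; swap in a real tokenizer (e.g. tiktoken) for precise limits.

```python
def chunk_transcript(text, max_tokens=3000, chars_per_token=4):
    """Split a transcript into chunks that fit under an LLM token limit.

    Token counts are approximated as len(text) / chars_per_token.
    Splits happen only at line boundaries, so a speaker turn is never
    cut mid-word; a single line longer than the limit stays whole.
    """
    limit = max_tokens * chars_per_token  # approximate character budget
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > limit:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks
```

      Each chunk can then be summarized sequentially, and the per-chunk summaries merged in a final pass.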

  • @kryptonic010 10 months ago +1

    The presentation was great; however, instead of sending data off to AWS, I'd rather process most if not all queries in-house on my own data servers. Privacy is paramount.

    • @HyperUpscale 10 months ago +1

      I was thinking the same

    • @ai-for-devs 10 months ago +1

      Absolutely, prioritizing privacy by processing data in-house is a smart move. Leveraging open-source solutions and hosting LLMs on your own servers offers both control and security.