Advancing AI - Fine Tuning LLMs with Azure OpenAI

  • Published 6 Aug 2024
  • With all the excitement around large language models, one question has been repeatedly asked throughout - "How do I make my own?". There are plenty of techniques to pass additional context and reference material into a generic, pre-trained model, but if you want a model that is super fast and already understands your context, you need to fine-tune a model.
    We recently saw announcements from Microsoft that the Azure OpenAI service now supports both training a fine-tuned model and hosting that model at the click of a button!
    In this video, Simon and Gavi take a look at the concept of fine-tuning LLMs, then review a super quick demo of how to pass some data to the gpt-3.5-turbo model, which is what powers the public ChatGPT service!
    For more details around the recent announcements, check out: techcommunity.microsoft.com/t...
    And as always, if you need an accelerator behind your large language model aspirations, get in touch with Advancing Analytics!
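
The demo described above passes training data to gpt-3.5-turbo in the JSON Lines chat format that Azure OpenAI fine-tuning expects. A minimal sketch of building such a file — the file name and example conversations are hypothetical, invented for illustration:

```python
import json

# Hypothetical example records in the chat format used for fine-tuning
# gpt-3.5-turbo: each line is one JSON object holding a "messages" list.
examples = [
    {"messages": [
        {"role": "system", "content": "You are an assistant for Advancing Analytics."},
        {"role": "user", "content": "What does fine-tuning give me?"},
        {"role": "assistant", "content": "A model that already understands your context."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are an assistant for Advancing Analytics."},
        {"role": "user", "content": "Which base model is used in the demo?"},
        {"role": "assistant", "content": "gpt-3.5-turbo, the model behind the public ChatGPT service."},
    ]},
]

# Write one compact JSON object per line -- this .jsonl file is what you
# upload to Azure OpenAI as the training dataset for a fine-tuning job.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```
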

COMMENTS • 10

  • @drummerboi4eva
    @drummerboi4eva 8 months ago

    Thanks for sharing, Simon and Gavita!! Great video

  • @rahulcn1314
    @rahulcn1314 1 month ago +1

    When our model is fine-tuned, is a new endpoint created?
    If yes, how can we use that endpoint for our use case outside the Azure environment?
    btw great video 👍
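
On the endpoint question above: once a fine-tuned model is deployed, it sits behind the resource's HTTPS endpoint like any other deployment, so it can be called from outside Azure with the resource key. A stdlib-only sketch of building such a call — the resource name, deployment name and api-version are placeholders, not values from the video:

```python
import json
import urllib.request

# Placeholder values -- substitute your own resource, deployment and key.
RESOURCE = "my-aoai-resource"        # Azure OpenAI resource name (hypothetical)
DEPLOYMENT = "my-finetuned-gpt35"    # deployment name of the fine-tuned model (hypothetical)
API_VERSION = "2024-02-01"           # example api-version; check the current one
API_KEY = "<your-azure-openai-key>"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completions request against the fine-tuned deployment.

    Because this is a plain HTTPS API, the same call works from code
    running anywhere, not just inside the Azure environment.
    """
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="POST",
    )

req = build_request("Hello from outside Azure")
# urllib.request.urlopen(req) would send it; omitted here because it
# needs a real resource and key.
```
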

  • @hasanabbas1555
    @hasanabbas1555 5 months ago

    intro was nice

  • @rushankpatil
    @rushankpatil 8 months ago +1

    How to convert JSON format into JSONL format with PySpark?
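
On the JSONL question above: PySpark's native writer already emits JSON Lines, e.g. `spark.read.option("multiline", "true").json("input.json").write.json("out_dir")`. For a small file, a plain-Python equivalent is simpler; a minimal sketch, with hypothetical file names and sample records:

```python
import json

def json_array_to_jsonl(src: str, dst: str) -> int:
    """Read a top-level JSON array and write one compact object per line."""
    with open(src, encoding="utf-8") as f:
        records = json.load(f)
    with open(dst, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")
    return len(records)

# Hypothetical demo data to show the round trip.
sample = [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}]
with open("input.json", "w", encoding="utf-8") as f:
    json.dump(sample, f)

n = json_array_to_jsonl("input.json", "output.jsonl")
```
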

  • @victorheineken7363
    @victorheineken7363 19 days ago

    I have a question: is it possible to build it with Python?

  • @restashoremusic
    @restashoremusic 5 months ago

    You should tell us how to use the fine-tuned model via the API

  • @shalinigoel13
    @shalinigoel13 8 days ago

    Very superficial explanation, just giving out the steps. Please also explain what is happening to the parameters of the base LLM with these 20 lines of training data.

  • @DollyBastard
    @DollyBastard 9 months ago +1

    Interesting, but no concrete answers to your questions.