Fine-tuning 101 | Prompt Engineering Conference

  • Published 15 Oct 2023
  • 🎁 Join our Skool community: www.skool.com/entry-point-ai
    Intro to fine-tuning LLMs (large language models) from the Prompt Engineering Conference (2023)
    Presented by Mark Hennings, founder of Entry Point AI.
    00:13 - Part 1: Background Info
    -How a foundation model is born
    -Instruct tuning and safety tuning
    -Unpredictability of raw LLM behavior
    -Showing LLMs how to apply knowledge
    -Characteristics of fine-tuning
    06:25 - Part 2: When to use it
    -Examples of specialized tasks that fine-tuning benefits
    -Reasons to fine-tune a model
    -Speed and cost benefits
    -Prompt length before and after fine-tuning
    -Fine-tuning in the team environment
    -LLM workflow from prompt engineering and fine-tuning to production
    -Size of dataset for fine-tuning
    11:27 - Part 3: No-code Demo
    -Demo of no-code fine-tuning on Entry Point AI
    Learn more at www.entrypointai.com
  • Science & Technology

COMMENTS • 10

  • @gianni4302
    @gianni4302 1 month ago +1

    great video mate cheers

  • @brAIncraft916
    @brAIncraft916 7 months ago +2

    Thanks for the upload!

  • @matijamattlaznik1652
    @matijamattlaznik1652 7 months ago +2

    Loved this one Mark! Super informative, I was wondering what the workflow would be like for longer-form completions.

  • @engage-meta
    @engage-meta 6 months ago

    Congrats! Very nice software

  • @LisaQiyaLi
    @LisaQiyaLi 11 days ago

    Thanks for sharing! Curious - can you fine tune the model by providing images? For example, one use case is resumes. What if I'd like to upload resume examples that are in PDF or JPEG format?

    • @EntryPointAI
      @EntryPointAI  7 days ago

      Multi-modal fine-tuning is definitely going to be a thing, but it's not available right now (at least, readily). For the time being, you would run an image through some OCR model to extract the text, and then you could use that in a fine-tuned language model.
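A rough sketch of that OCR-then-fine-tune pipeline in Python. The OCR call, system prompt, resume text, and label here are all made up for illustration; the record shape matches OpenAI's chat fine-tuning format.

```python
def build_resume_example(ocr_text: str, label: str) -> dict:
    """Wrap OCR-extracted resume text as one chat-format
    fine-tuning record (the shape OpenAI's chat fine-tuning expects)."""
    return {
        "messages": [
            {"role": "system", "content": "Classify this resume by department."},
            {"role": "user", "content": ocr_text},
            {"role": "assistant", "content": label},
        ]
    }

# In practice `ocr_text` would come from the OCR step mentioned above, e.g.
#   pytesseract.image_to_string("resume.jpg")   # for a JPEG
# or a PDF text extractor for PDF resumes.
sample = build_resume_example("Jane Doe, 5 years of Python experience", "Engineering")
```

Many such records, one per resume, would then form the JSONL training file for the fine-tune.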

  • @trackerprince6773
    @trackerprince6773 6 months ago +1

    Another question: can you fine-tune an existing fine-tuned GPT-3.5 model with more training data, or do you have to start from scratch every time? Especially for feeding the outcomes back into training.

    • @EntryPointAI
      @EntryPointAI  5 months ago

      With their new API you have to fine-tune from scratch each time, but they might add fine-tuning a fine-tuned model back at some point; they used to have it. Most of the fine-tunes I run are under $1, though.
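"From scratch each time" means re-training with new data involves re-uploading the combined dataset (old examples plus new ones) and creating a new job. A minimal sketch; the example content is hypothetical, and the commented-out SDK calls reflect the openai Python SDK as of the time of writing:

```python
import json

# Hypothetical training examples in OpenAI's chat fine-tuning format:
# each JSONL line is one {"messages": [...]} record.
examples = [
    {"messages": [
        {"role": "user", "content": "Write a one-line tagline for ACME Widgets."},
        {"role": "assistant", "content": "ACME Widgets: small parts, big ideas."},
    ]},
]

# To add data later, extend `examples` and rebuild the full file;
# the new job does not inherit anything from the previous fine-tune.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Uploading and launching the job would then look roughly like this
# (requires an API key, so not run here):
#   client = openai.OpenAI()
#   f = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=f.id, model="gpt-3.5-turbo")
```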

  • @trackerprince6773
    @trackerprince6773 6 months ago

    Can embeddings improve the model's ability to write press releases? If so, what type of corpus of data would that include? Trying to understand how embeddings with RAG + fine-tuning can be used. Additionally, I really like the feedback loop of improving the fine-tuning with its results.

    • @EntryPointAI
      @EntryPointAI  5 months ago

      This other video I made might help
      ua-cam.com/video/YVWxbHJakgg/v-deo.html
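To illustrate the embeddings + RAG idea from the question above: retrieval ranks documents by vector similarity to the query, and the best match is prepended to the fine-tuned model's prompt as context. The documents and vectors below are toy values; real embeddings would come from an embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: past press releases and other documents, each with a
# (made-up) embedding vector.
docs = {
    "press release about product launch": [0.9, 0.1, 0.0],
    "quarterly earnings summary": [0.1, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.05]  # embedding of the user's request

# Retrieve the most similar document; its text would be added to the
# prompt before calling the fine-tuned model.
best = max(docs, key=lambda d: cosine(docs[d], query_vec))
```

Fine-tuning then teaches the model the *style and structure* of a press release, while retrieval supplies the *facts* for this particular one.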