Fine-Tune GPT-4o Model Step by Step

  • Published Jan 9, 2025

COMMENTS • 21

  • @arijitpaul3968
    @arijitpaul3968 1 month ago +5

    Please make a video on how to fine-tune with images and text. I want to use it in my thesis, but I'm not quite getting how to apply it. TIA

  • @aaasthaa
    @aaasthaa 4 days ago

    Thanks, and please make a video on how to fine-tune with images and text.

  • @giladlior5821
    @giladlior5821 1 day ago

    Thanks for the great explanation! One thing that's unclear to me: this seems to cover just one message per user. What if I want to fine-tune an entire conversation flow?

    • @FutureSmartAI
      @FutureSmartAI  1 day ago

      You could, since the model takes the conversation as the expected output. If you want to train on a conversation, you could do it this way:
      example 1: conversation user message 1 -> expected AI message 1
      example 2: conversation user message 1 + expected AI message 1 -> expected AI message 2
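The two-example split described in that reply can be sketched in the JSONL chat format that OpenAI's fine-tuning endpoint expects. This is a minimal sketch; the system prompt and message contents are invented for illustration:

```python
import json

# A short conversation we want the model to learn (contents are made up).
system = {"role": "system", "content": "You are a helpful support assistant."}
user_1 = {"role": "user", "content": "My order hasn't arrived."}
ai_1 = {"role": "assistant", "content": "Sorry to hear that. Could you share your order ID?"}
user_2 = {"role": "user", "content": "It's order 12345."}
ai_2 = {"role": "assistant", "content": "Thanks! Order 12345 is out for delivery today."}

# Example 1: user message 1 -> expected AI message 1
example_1 = {"messages": [system, user_1, ai_1]}
# Example 2: user message 1 + expected AI message 1 (+ next user turn) -> expected AI message 2
example_2 = {"messages": [system, user_1, ai_1, user_2, ai_2]}

# One JSON object per line, as the fine-tuning data format requires.
with open("conversation_train.jsonl", "w") as f:
    for ex in (example_1, example_2):
        f.write(json.dumps(ex) + "\n")
```

Each training example always ends with the assistant turn the model should learn to produce; earlier turns act as context.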

  • @rajashekhar7211
    @rajashekhar7211 2 months ago +3

    Yes, please explain with images and text.

  • @alexramos587
    @alexramos587 2 months ago +2

    Please make a video to demonstrate how to fine-tune with images and text.

  • @yazanrisheh5127
    @yazanrisheh5127 2 months ago

    Yes, please do one with images and text. Also, can you do a video on RAG where, for example, it runs over all company documents, but someone like a junior has access only to specific documents, a manager has access to more documents, the CEO has access to all documents, etc.?

    • @FutureSmartAI
      @FutureSmartAI  2 months ago

      I could, but that is part of access control and specific to RAG.

  • @vsnew4644
    @vsnew4644 2 months ago

    Amazing video.
    Can fine-tuning be done for NL2SQL tasks too?

    • @FutureSmartAI
      @FutureSmartAI  2 months ago

      Yes, absolutely! Fine-tuning can definitely be applied to NL2SQL tasks. However, for the best results, it’s important to ensure that your training data includes examples covering each column and table at least once. This will help the model learn to handle the full range of potential queries.
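The advice above — cover every table and column at least once — can be sketched as NL2SQL training examples. This is a hypothetical sketch: the `employees` schema, questions, and queries are invented for illustration:

```python
import json

# Hypothetical schema, embedded in the system prompt so the model sees
# the tables and columns it must learn to query.
schema = "CREATE TABLE employees (id INT, name TEXT, salary INT, dept TEXT);"

def nl2sql_example(question: str, sql: str) -> dict:
    """Build one chat-format fine-tuning example for an NL2SQL task."""
    return {"messages": [
        {"role": "system", "content": f"Translate the question to SQL. Schema: {schema}"},
        {"role": "user", "content": question},
        {"role": "assistant", "content": sql},
    ]}

# Together these examples touch every column of the schema at least once.
examples = [
    nl2sql_example("Who earns more than 50000?",
                   "SELECT id, name FROM employees WHERE salary > 50000;"),
    nl2sql_example("How many people are in each department?",
                   "SELECT dept, COUNT(*) FROM employees GROUP BY dept;"),
]

with open("nl2sql_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```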

  • @MMSakho
    @MMSakho 1 month ago

    Hi, thanks for the tutorial. I have a question: does OpenAI charge an additional cost (besides the fine-tuning cost) for inference? And is there a storage cost for saving the fine-tuned model?

    • @FutureSmartAI
      @FutureSmartAI  1 month ago

      Yes, there is an inference cost, similar to how we pay for other models. There is no storage cost.

  • @praveenbhandari2516
    @praveenbhandari2516 2 months ago

    Hey, I recently joined a company whose data work involves BigQuery and Looker Studio, so I want to learn and upskill. I have an idea of ML and DL from a certificate course I completed. Can I go with a deep learning engineering profile right now, or something else? Please suggest.

  • @KumR
    @KumR 2 months ago +2

    I did not quite get what we are trying to achieve. You gave sample data which extracts text and creates JSON format. Then you tested with a line, and that also extracted JSON data. So what is the next step? What exactly does this mean? How is this fine-tuning? How are you training the last layer's weights and biases? Was this an example of LoRA or QLoRA?

    • @FutureSmartAI
      @FutureSmartAI  2 months ago +4

      It seems there might be some confusion between fine-tuning an open-source LLM (Large Language Model) and fine-tuning a closed-source model like OpenAI's API-based LLMs. With OpenAI models, we don’t have direct control over specific layers or weights. Instead, we fine-tune by submitting our data to their platform, which processes it and provides us with a new model ID. This is different from methods like LoRA or QLoRA, which are typically used for open-source models and involve directly adjusting specific layers or weights. Let me know if you'd like more details on either approach!
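The workflow the reply describes — submit data, get back a new model ID — can be sketched with the `openai` Python client. This is a minimal sketch, assuming a tiny invented training file; the validation helper is not part of OpenAI's API, and the API calls themselves are shown commented out because they require a key and incur cost:

```python
import json

def validate_jsonl(path: str) -> int:
    """Sanity-check that each line is a chat example ending with an
    assistant message; return the number of examples."""
    n = 0
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            assert ex["messages"][-1]["role"] == "assistant"
            n += 1
    return n

# Tiny invented file for demonstration.
with open("train.jsonl", "w") as f:
    f.write(json.dumps({"messages": [
        {"role": "user", "content": "hi"},
        {"role": "assistant", "content": "hello"},
    ]}) + "\n")

n = validate_jsonl("train.jsonl")

# Once the file validates, submit it (requires OPENAI_API_KEY):
#
# from openai import OpenAI
# client = OpenAI()
# upload = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# job = client.fine_tuning.jobs.create(training_file=upload.id,
#                                      model="gpt-4o-mini-2024-07-18")
# # When the job completes, its fine_tuned_model field holds the new model ID,
# # which you then pass as model= in chat.completions.create.
```

No layers or weights are touched on our side; the platform handles training and simply returns a new model ID to call.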

    • @KumR
      @KumR 2 months ago

      @FutureSmartAI Yes please, I think that clarity was important to know. So in your example, what are you trying to achieve by giving the model JSON data? Can you tell me a real-world example of where this can be used?

    • @FutureSmartAI
      @FutureSmartAI  2 months ago

      Many use cases involve getting structured data from the model as a response rather than just plain text. In our example, we aimed to extract structured information from medical reports. We've observed that LLMs often struggle to consistently provide structured output, which can create issues in downstream processes when it fails. We usually include examples and instructions in the prompt, but sometimes that's not enough. That is why we fine-tune the model: to let it learn from more examples.
      The example provided is a simplified version of a real case we did for a client. The goal wasn’t to provide actual data but to demonstrate the methodology so you can apply it to your own data. In the video, I show that the model can learn the structure from examples alone, without explicit instructions.
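What "learning the structure from examples alone" looks like can be sketched as a training example whose assistant turn is pure JSON, with no formatting instructions in the prompt. The report text and field names here are invented for illustration, not actual data:

```python
import json

# Invented report snippet and the structured output we want the model
# to learn to emit -- note there are no instructions in the user turn.
report = "Patient John Doe, age 45, BP 140/90, diagnosed with hypertension."
target = {"name": "John Doe", "age": 45, "bp": "140/90",
          "diagnosis": "hypertension"}

example = {"messages": [
    {"role": "user", "content": report},
    {"role": "assistant", "content": json.dumps(target)},
]}

# After fine-tuning on enough examples like this, the model returns the
# JSON directly, and downstream code can parse it reliably.
parsed = json.loads(example["messages"][-1]["content"])
```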

    • @reserseAI
      @reserseAI 1 month ago

      @KumR Real-world example: I'm developing a police investigator AI assistant. The goal is to specialize the AI to help analyze cases based on court decisions and my country's criminal law. The problem with every AI model is that it's built for universal tasks, so it's biased and sometimes hallucinates a lot when facing a specific domain. Another problem is that the legislation is always being updated, which a model may not have learned. So implementing fine-tuning and RAG helps me a lot in achieving my goal. Why did I choose fine-tuning OpenAI rather than an open-source model? An open-source model is free but costs a lot of money to deploy on a virtual server. OpenAI's GPT-4o mini is not free, but I think OpenAI's capabilities in the legal domain are much more advanced than any other model's, and again, GPT-4o mini is cheap 😅. Still working on collecting court decisions to build a better dataset.

  • @ChetanAnnam
    @ChetanAnnam 2 months ago +1

    It would be better to show how the model was performing before and after fine-tuning.
    Everything else is good 💯.