Build Specialized Fine-Tuned AI Agents | No Code

  • Published 19 Sep 2024

COMMENTS • 24

  • @muhammadrebal8114
    @muhammadrebal8114 1 day ago +3

    dude never fails to deliver. Ben, you've outdone yourself!

  • @SwedishPower89
    @SwedishPower89 1 day ago +3

    The king is back 👑

  • @Nadia-AIInsiders
    @Nadia-AIInsiders 1 day ago +2

    It looks like we're on the same wavelength :) I've been experimenting with a similar setup and fine-tuning for LinkedIn posts for quite some time. The quality of the 4o fine-tunes is excellent! In my experience, even a small set of 70 items works well, doesn't hallucinate much, and with a higher temperature of 0.7-0.9 it generates fascinating perspectives that stay on point and feel authentic.

    • @BenAI92
      @BenAI92  17 hours ago

      Interesting, Nadia! I hadn't tried increasing the temperature; I assumed it would hallucinate too much. But now that you say it, I can imagine combining a small dataset with a higher temperature would be interesting. Thanks for the insight, will try it out! (A sketch of this setup follows below this thread.)

    • @ward_jl
      @ward_jl 15 hours ago

      That's interesting. I should start experimenting with fine-tuning as well. Do you also use Relevance, or would you recommend another platform to create agentic workflows?
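
The exchange above comes down to one API call: a fine-tuned model plus a higher-than-default temperature. Below is a minimal sketch using the OpenAI Python SDK, assuming a fine-tuned GPT-4o model; the model ID, system prompt, and user prompt are placeholders, not the exact setup from the video.

```python
# Minimal sketch (placeholders, not the video's exact setup): call a fine-tuned
# GPT-4o model with a higher temperature, as discussed in the thread above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    # Hypothetical fine-tune ID; yours appears in the OpenAI dashboard once the job finishes.
    model="ft:gpt-4o-2024-08-06:your-org:linkedin-writer:abc123",
    temperature=0.8,  # 0.7-0.9 per Nadia's comment; the API default is 1.0
    messages=[
        {"role": "system", "content": "You write LinkedIn posts in the brand's voice."},
        {"role": "user", "content": "Write a post about lessons from our latest product launch."},
    ],
)

print(response.choices[0].message.content)
```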

  • @ward_jl
    @ward_jl 15 hours ago

    Super interesting to see the value of using fine-tuned models, thanks for sharing Ben!

  • @moefayed
    @moefayed 18 hours ago +1

    Loving your work Ben, super quality!!

    • @BenAI92
      @BenAI92  17 hours ago

      Thanks Moe, appreciate it!

  • @florianrolke
    @florianrolke 1 day ago +1

    Cracked 5k, onwards, Ben! Congrats :)

    • @BenAI92
      @BenAI92  17 hours ago

      Thanks man!

  • @daanjonkman
    @daanjonkman 1 day ago

    Thanks again Ben!

  • @MKTGGiova
    @MKTGGiova 1 day ago

    Super useful stuff! TY

  • @motlalepulasello8775
    @motlalepulasello8775 16 hours ago

    This is amazing. Do they always need specific training data, such as documents, links, or scraped content? Or can you simply throw in a prompt and expect it to improvise?

  • @ygoesit9898
    @ygoesit9898 4 hours ago

    Hey, thanks for this resource!
    Where can we find the LinkedIn Writer Agent?
    I signed up for the tools and recreated everything, but I didn't see the LinkedIn Writer Agent specifically to clone and pass the fine-tuned model through to test the results on it. I didn't see a link to it in the Gumroad or anything.

  • @marcc0183
    @marcc0183 1 day ago

    combine this with n8n and boom

  • @jonathanbarber3004
    @jonathanbarber3004 1 day ago

    Thanks for the video... very informative. Most videos I watch are about AI creating content for blog posts and things like that. Are you able to show any other use cases, such as checking a human's work? I run a small CAD team that makes drawings from the clients' text documents. It would be great if the AI model could cross-reference the client's instructions against the CAD drawings, such as dimensions and text. Is this something that AI could do? Thank you

    • @BenAI92
      @BenAI92  17 hours ago

      Thanks Jonathan! Should be possible, but I'd need more context to help you.

  • @onkelzaufsendung-dervideop2943
    @onkelzaufsendung-dervideop2943 9 hours ago

    Which small business has more than 600 LinkedIn posts? Or what do you mean by data points?

  • @ajaykumarporeddiwar9226
    @ajaykumarporeddiwar9226 22 hours ago

    Outstanding work, as always. Thank you!

  • @onkelzaufsendung-dervideop2943
    @onkelzaufsendung-dervideop2943 8 hours ago

    How do you know all this about Relevance?

  • @olivierbennaim3145
    @olivierbennaim3145 1 day ago

    When you upload the dataset, is the LLM fine-tuned on the style and tone of voice, or on the type of content it could write about? If the answer is both, how can we make it so the LLM is fine-tuned with one dataset for tone and style, and another dataset for the type of content?

    • @BenAI92
      @BenAI92  17 hours ago

      As you assumed, it's both, yes. Ideally, you train it on a combination of the type of content you like and the tone of voice. But if you don't have that, you can use a smaller dataset and you should still get the tone of voice carried over to "unrelated" types of content, so you stay flexible in the type of content you can generate with it. You could experiment with fine-tuning 2 different models and adding them both as tools in Relevance, but I haven't tried this, so I can't tell you what the performance would be.
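
As a rough illustration of the answer above, here is a minimal sketch of what a combined dataset can look like in OpenAI's chat-format JSONL for fine-tuning: each example pairs a short instruction with one of your real posts, so the model absorbs both the tone of voice and the type of content in the same pass. The file name, system prompt, and example texts are placeholders, not the video's actual data.

```python
# Minimal sketch (placeholder texts): build a fine-tuning dataset in OpenAI's
# chat JSONL format, where each example teaches both tone and content type.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You write LinkedIn posts for an AI automation agency."},
            {"role": "user", "content": "Write a post about a recent client win with an AI agent."},
            {"role": "assistant", "content": "We just shipped an agent that saves our client 10 hours a week..."},
        ]
    },
    # ...one entry per historical post; a few dozen can already be enough for tone,
    # more if you also want the model to learn specific content types.
]

# Write one JSON object per line, which is the format the fine-tuning API expects.
with open("linkedin_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```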

  • @onkelzaufsendung-dervideop2943
    @onkelzaufsendung-dervideop2943 8 hours ago

    And for "no code", that was a lot of coding.