Create Your "Small" Action Model with GPT-4o

  • Published 12 Jul 2024
  • Create Your "Small" Action Model with GPT-4o
    👊 Become a member and get access to GitHub and Code:
    / allaboutai
    🤖 Great AI Engineer Course:
    scrimba.com/learn/aiengineer?...
    🔥 Open GitHub Repos:
    github.com/AllAboutAI-YT/easy...
    📧 Join the newsletter:
    www.allabtai.com/newsletter/
    🌐 My website:
    www.allabtai.com
    I try to create my own "small" action model based on Python and the GPT-4o API. Will it work? Let's find out. (A rough sketch of the flow is included below the timestamps.)
    00:00 Small Action Model GPT-4o Intro
    01:48 GPT-4o Action Model Code
    05:54 Testing the Model
  • Science & Technology
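
A rough sketch of the flow described above, assuming Pillow for screen capture and the OpenAI Python SDK for the GPT-4o call; the prompt text, function names, and frame count are illustrative, not the video's exact code:

```python
# Sketch: record a few screenshots while the user performs a task, then ask
# GPT-4o to write a pyautogui script that reproduces the same action.
import base64
import time

from PIL import ImageGrab      # pip install pillow
from openai import OpenAI      # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def capture_frames(n=5, interval=2.0):
    """Grab n full-screen screenshots, one every `interval` seconds."""
    paths = []
    for i in range(n):
        path = f"frame_{i}.png"
        ImageGrab.grab().save(path)
        paths.append(path)
        time.sleep(interval)
    return paths

def frames_to_content(paths):
    """Package the screenshots as data URLs for the chat completions vision API."""
    content = [{"type": "text", "text": (
        "These screenshots show a user performing one task on screen, in order. "
        "Write a Python script using pyautogui that reproduces the same action. "
        "Return only the code.")}]
    for p in paths:
        b64 = base64.b64encode(open(p, "rb").read()).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"}})
    return content

def generate_action_script(paths):
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": frames_to_content(paths)}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    script = generate_action_script(capture_frames())
    with open("last_action.py", "w") as f:
        f.write(script)
    # Review last_action.py before executing it - it will drive mouse and keyboard.
```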

COMMENTS • 26

  • @ShpanMan
    @ShpanMan 1 month ago +10

    This is actually really impressive. GPT-4o watches you act and understands what was done, then writes code to reproduce it, which can then be run and automated.
    Very clever flow. OpenAI should definitely hire you.

    • @MilkGlue-xg5vj
      @MilkGlue-xg5vj 1 month ago

      Anyone can do better than this with a powerful language model; it's not much. It's just that the Rabbit is overrated.

  • @clumsy_en
    @clumsy_en 1 month ago +2

    Cool experimental project and idea 👍 The whole process could be scripted further to continuously store the most recent screenshots at 2-second intervals to VRAM using PyTensor, and a call could be triggered at any time with a keyword via mic input or a keyboard shortcut to send them to GPT-4o, retrieve a "replay last action" script, and then automatically execute it to save time on mundane tasks 👍👍
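
A minimal sketch of this rolling-buffer idea, assuming plain RAM (rather than VRAM/PyTensor) for the frames and the `keyboard` package for the shortcut; buffer size, hotkey, and the hand-off to GPT-4o are placeholder assumptions:

```python
# Keep the most recent screenshots in a rolling buffer and, on a hotkey,
# hand them to GPT-4o to get a "replay last action" script.
import threading
import time
from collections import deque

from PIL import ImageGrab   # pip install pillow
import keyboard             # pip install keyboard (needs root on Linux)

BUFFER = deque(maxlen=10)   # last 10 frames ~= 20 seconds at 2-second intervals

def capture_loop(interval=2.0):
    """Continuously append the latest screenshot to the rolling buffer."""
    while True:
        BUFFER.append(ImageGrab.grab())
        time.sleep(interval)

def on_hotkey():
    frames = list(BUFFER)
    print(f"Sending {len(frames)} frames to GPT-4o...")
    # Encode the frames and call the chat completions API as in the sketch
    # under the video description, then review and run the returned script.

if __name__ == "__main__":
    threading.Thread(target=capture_loop, daemon=True).start()
    keyboard.add_hotkey("ctrl+alt+r", on_hotkey)   # "replay last action" trigger
    keyboard.wait()                                # keep the process alive
```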

  • @georgestander2682
    @georgestander2682 1 month ago +4

    Thanks, this is interesting. I was wondering about this as well and had a thought about adding log data of user interactions to give the model more telemetry. So it's not just vision but also the actual logs of all the interactions happening in the background.
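
A small sketch of that telemetry idea, assuming `pynput` for the mouse/keyboard hooks; the JSON event-log format is an illustrative assumption, meant to be sent to GPT-4o alongside the screenshots:

```python
# Log mouse clicks and key presses with timestamps so the model gets an
# interaction log in addition to the vision input.
import json
import time

from pynput import mouse, keyboard   # pip install pynput

EVENTS = []

def on_click(x, y, button, pressed):
    if pressed:
        EVENTS.append({"t": time.time(), "type": "click",
                       "x": x, "y": y, "button": str(button)})

def on_press(key):
    EVENTS.append({"t": time.time(), "type": "key", "key": str(key)})

if __name__ == "__main__":
    # The listeners run in background threads while we record for a fixed window.
    with mouse.Listener(on_click=on_click), keyboard.Listener(on_press=on_press):
        time.sleep(15)
    with open("events.json", "w") as f:   # attach this log to the GPT-4o prompt
        json.dump(EVENTS, f, indent=2)
```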

  • @ewasteredux
    @ewasteredux 1 month ago +5

    Are there any local LLMs this might work with?

  • @mikew2883
    @mikew2883 1 month ago +3

    This is awesome!

  • @cyc00000
    @cyc00000 1 month ago

    So good to see you getting on board with the Rabbit R1. It's seriously going to change lives. Enjoyed the video, man.

  • @TTOnkeys
    @TTOnkeys 1 month ago

    I can think of so many uses for this. Great work.

  • @nic-ori
    @nic-ori 1 month ago +1

    Useful information. Thank you!👍👍👍

  • @user-yw9us2qo6g
    @user-yw9us2qo6g 1 month ago +1

    Looks great

  • @NetHyTech
    @NetHyTech 1 month ago +3

    Bro, please create a video on real-time vision and response

    • @lokeshart3340
      @lokeshart3340 1 month ago

      Whoa whoa, look who's here. Bro, do you know me, or do you remember me?

  • @BThunder30
    @BThunder30 1 month ago

    Interesting project as always.

  • @gnosisdg8497
    @gnosisdg8497 1 month ago +2

    So where is the code for this project? Looks fun

  • @ibrahimaba8966
    @ibrahimaba8966 1 month ago

    Very interesting. I think it could also be useful to provide it with the mouse positions between different frames.
    To go further, we could create multiple actions and then implement a RAG that allows the model to choose the correct snapshot and execute it.
    Thanks for this video.
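
A minimal sketch of that retrieval idea, assuming OpenAI embeddings over short descriptions of previously saved action scripts; the script names and descriptions below are made up for illustration:

```python
# Pick the stored action script whose description best matches a request,
# using embedding similarity as a tiny RAG step.
import numpy as np            # pip install numpy
from openai import OpenAI     # pip install openai

client = OpenAI()

# Hypothetical library of previously recorded action scripts.
ACTIONS = {
    "open_browser_and_search.py": "open the browser and search for a term",
    "export_report_to_pdf.py":    "export the current report to PDF",
    "send_daily_email.py":        "send the daily status email",
}

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def pick_action(request):
    """Return the stored script whose description is closest to the request."""
    vecs = embed(list(ACTIONS.values()) + [request])
    docs, query = vecs[:-1], vecs[-1]
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    return list(ACTIONS.keys())[int(np.argmax(sims))]

if __name__ == "__main__":
    print(pick_action("please make a pdf of this report"))
    # -> export_report_to_pdf.py, which could then be executed after review
```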

  • @Soft_Touch_
    @Soft_Touch_ 1 month ago +1

    I've been thinking Recall and omni screenshots were ways to create large practical data sets to train LAMs. Do you think that is what's happening? You seem to be doing a smaller version of this

  • @carstenli
    @carstenli 1 month ago

    Great start. What's the GH URL for subscribers?

  • @futureworldhealing
    @futureworldhealing 1 month ago +2

    Learning how to be a data scientist, 80% from you bro haha

  • @avi7278
    @avi7278 1 month ago +2

    Honestly, more legit than scammer Jesse Lyu and the Rabbit R1 garbage hardware scam, after his NFT game scam.

  • @darthvader4899
    @darthvader4899 1 month ago

    How does it know where to click though? Does

  • @kalilinux8682
    @kalilinux8682 1 month ago

    Humane and Rabbit watching this and raising another round of funding

  • @lokeshart3340
    @lokeshart3340 1 month ago +1

    Hello sir, can you recreate the Gemini vision fake demo in real life?

  • @JNET_Reloaded
    @JNET_Reloaded 1 month ago +1

    The GitHub link is always the same repo btw; it'd be easier to make a new repo for each project and put the project link in the description

    • @wurstelei1356
      @wurstelei1356 1 month ago

      I think you can link to Git subfolders. The repo is pretty messy, but keep in mind this is free. Though I'm also not able to find the code for some projects in that repo.

  • @spencerfunk6697
    @spencerfunk6697 1 month ago +2

    So literally Open Interpreter…