GPT-4O-Mini + Qwen2 + ContinueDev : This FAST & CHEAP Coding Copilot BEATS Github Copilot & Others!

COMMENTS • 45

  • @jackflash6377
    @jackflash6377 A month ago +5

    That's what I like about this channel.
    Always something useful, well explained on how to make it work and typically at the best cost / performance ratio.
    Thank you sir!

    • @AICodeKing
      @AICodeKing  A month ago

      Thank you very much!

    • @jackflash6377
      @jackflash6377 A month ago

      @@AICodeKing Welcome. I tried to send a Thanks in the past but failed. I finally got it to work, so I went ahead and sent two to make up for the earlier attempts.
      Best channel for real, useful AI knowledge. Thanks again.

    • @AICodeKing
      @AICodeKing  A month ago

      Thanks a lot. I really appreciate the help! I'm glad that my tutorials help people like you. Thanks!

  • @orthodox_gentleman
    @orthodox_gentleman A month ago +1

    The Continue dev copilot is absolutely incredible. You can actually execute shell commands via the Continue extension.

  • @jayv_tech
    @jayv_tech A month ago +1

    This is what I was looking for. Thanks for sharing.

  • @siddharthduggal
    @siddharthduggal A month ago +1

    Thanks!

  • @ScorgeRudess
    @ScorgeRudess A month ago

    This is insane, thanks!

  • @user-jj4ef6dn6z
    @user-jj4ef6dn6z A month ago

    Thanks for your work!

  • @mrlogo3733
    @mrlogo3733 A month ago

    Thanks for the video.
    I'd be glad if you could help me with a question: if I add context using the chat interface, will it use all of those context tokens in every prompt?
    And if I add the context in a custom GPT's settings and then use that custom GPT's API, will it work?

  • @MrMoonsilver
    @MrMoonsilver A month ago +1

    Hey, what would the config file be if my Ollama instance is running on another server in my LAN? Can I just put in the API endpoint?

    • @AICodeKing
      @AICodeKing  A month ago +2

      Yes, you can add an OpenAI model to Continue, then change the baseURL in the config file to Ollama's endpoint.
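The reply above can be sketched as a config snippet. A minimal sketch, assuming Continue's `config.json` format with its `ollama` provider; the model name and LAN address are placeholders, not from the video:

```json
{
  "models": [
    {
      "title": "Qwen2 on LAN Ollama",
      "provider": "ollama",
      "model": "qwen2:7b",
      "apiBase": "http://192.168.1.50:11434"
    }
  ]
}
```

Alternatively, as the reply suggests, an OpenAI-style model entry can point its base URL at the Ollama server's OpenAI-compatible `/v1` endpoint.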

  • @LofiHarmony-gj2by
    @LofiHarmony-gj2by A month ago

    I replaced gpt-4o with gpt-4o-mini for my ebook generator. It doesn't work at all; gpt-4o-mini doesn't even generate outlines.
    So there are big limitations with gpt-4o-mini.

  • @jaradaty88
    @jaradaty88 A month ago

    Can you cover Codeium? Compare it with local models as well... it's free and good.

  • @BadreddineMoon
    @BadreddineMoon A month ago

    Thank you 🎉

  • @hsmptg
    @hsmptg A month ago

    Great video (as always)!
    Can you give an idea of how many input/output gpt-4o-mini tokens it takes to create a snake game, or a MERN full-stack boilerplate, for instance?

    • @AICodeKing
      @AICodeKing  A month ago +1

      For a snake game, it could cost about $0.10 - $0.20.
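The figure above can be sanity-checked with simple arithmetic. A minimal sketch, assuming GPT-4o-mini's pricing of roughly $0.15 per 1M input tokens and $0.60 per 1M output tokens (verify current rates); the token counts are illustrative guesses, not measurements from the video:

```python
# Rough cost estimate for a GPT-4o-mini coding session.
# Assumed pricing (verify current rates): $0.15 / 1M input tokens,
# $0.60 / 1M output tokens.
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_rate: float = 0.15, out_rate: float = 0.60) -> float:
    """Return an estimated USD cost for one generation session."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# An iterative snake-game session might resend context many times,
# e.g. ~400k input tokens total and ~150k generated output tokens:
print(round(estimate_cost(400_000, 150_000), 2))  # 0.15
```

With these assumed numbers the session lands at about $0.15, consistent with the $0.10 - $0.20 range in the reply; repeated context resends dominate the input side.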

  • @BenAbdallahWalid
    @BenAbdallahWalid A month ago

    Thanks for the good video.
    What do you think is better, Qwen or Codestral?
    From my point of view, Codestral is much better at coding, especially if it's run locally.

    • @AICodeKing
      @AICodeKing  A month ago +1

      I prefer Qwen but Codestral can also be good.

  • @yoannthomann
    @yoannthomann A month ago +2

    Isn't DeepSeek better at coding than GPT-4o Mini?

    • @AICodeKing
      @AICodeKing  A month ago +2

      Yes, much better (if you use their API). Their API is also cheaper than GPT-4O Mini.

    • @yoannthomann
      @yoannthomann A month ago +1

      @@AICodeKing It is so difficult to find the best model. Do you know a place to compare all of them?

    • @BadreddineMoon
      @BadreddineMoon A month ago

      @@AICodeKing Is the DeepSeek Coder V2 API slow, just like their free chat?

    • @silentspy6980
      @silentspy6980 A month ago

      @@AICodeKing Are the website's capabilities the same as the API's?

    • @MrMoonsilver
      @MrMoonsilver A month ago +1

      It's slow

  • @paulyflynn
    @paulyflynn A month ago

    We also need local models to debate with each other to come up with the best solution.

    • @paulyflynn
      @paulyflynn A month ago

      Also, have the local models review your GitHub PRs.

    • @AICodeKing
      @AICodeKing  A month ago

      Sure

    • @simion415
      @simion415 A month ago

      @@paulyflynn Which local model do you prefer for reviewing your GitHub PRs?

  • @brianrowe1152
    @brianrowe1152 A month ago

    What if you have Ollama on a different machine locally, and you want to specify the URL instead?

    • @AICodeKing
      @AICodeKing  A month ago

      You can do that by specifying the baseURL.

    • @simion415
      @simion415 A month ago

      @@AICodeKing Can you please share any existing video you have on this? How do I use a baseURL, api_key, api_version, and a model name hosted on a separate machine?
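The fields asked about above map onto a single model entry. A minimal sketch for an OpenAI-compatible server on another machine, assuming Continue's `config.json` format; the address, key, and model name are placeholders, and `apiVersion` is mainly relevant when the remote endpoint is Azure OpenAI:

```json
{
  "models": [
    {
      "title": "Remote OpenAI-compatible model",
      "provider": "openai",
      "model": "my-model-name",
      "apiBase": "http://192.168.1.50:8000/v1",
      "apiKey": "YOUR_API_KEY",
      "apiVersion": "2024-02-01"
    }
  ]
}
```

For a plain local server (Ollama, vLLM, etc.) the `apiKey` and `apiVersion` fields can usually be omitted.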

  • @birolyildiz
    @birolyildiz A month ago

    🎉❤🙏

  • @TawnyE
    @TawnyE A month ago

    E

  • @paulyflynn
    @paulyflynn A month ago

    Thanks!

  • @jackflash6377
    @jackflash6377 A month ago

    Thanks!