Gemini 1.5 Flash + ContinueDev + ShellGPT: FREE & FASTEST Copilot that Outperforms GitHub Copilot

COMMENTS • 30

  • @TechnicalParadox
    @TechnicalParadox 2 months ago

    I'm on Ubuntu; we can't use pip install system-wide anymore, we need to install into virtual environments. Will this still work?
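
    Installing shell-gpt into a virtual environment should still work; you just run the venv's copy of sgpt afterwards. A minimal sketch of that route (the venv path below is an arbitrary example):

        # Create a venv and install shell-gpt into it, which sidesteps Ubuntu's
        # externally-managed-environment (PEP 668) restriction on the system pip.
        import subprocess, sys
        from pathlib import Path

        venv_dir = Path.home() / ".venvs" / "sgpt"   # arbitrary example location
        subprocess.run([sys.executable, "-m", "venv", str(venv_dir)], check=True)
        subprocess.run([str(venv_dir / "bin" / "pip"), "install", "shell-gpt"], check=True)
        # The sgpt executable then lives at ~/.venvs/sgpt/bin/sgpt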

  • @songxian7989
    @songxian7989 2 months ago

    Hi King, great video on copilot alternatives; I just watched some of your videos on this topic.
    Just wondering: in your Claude 3.5 Sonnet copilot video you use Qwen2 for autocomplete,
    but in this video you suggest deepseek-coder, which is an older model.
    Any reason why?
    Because my Mac only has 16GB of memory,
    I think I want to try a smaller model, either Qwen2 or deepseek-coder,
    while deepseek-coder-v2, which I think is better, will use more memory.
    What do you think?
    Thanks, King

  • @thailoitra6383
    @thailoitra6383 3 months ago

    I'm struggling with local LLM options when using gpt-pilot. Do you happen to know which is the best local model for gpt-pilot? Many thanks. 🙏

  • @aryindra2931
    @aryindra2931 3 months ago +1

    First like, my friend

  • @imranhrafi
    @imranhrafi 3 months ago +1

    Hey King,
    I have a question about integrating LLMs (Large Language Models) with databases.
    Imagine I have a massive database, and I want to use an LLM to answer questions directly from that data. Ideally, the LLM would access the database for each query, find the relevant information, and provide an answer.
    My concern is, wouldn't I need to transfer the entire database every time I ask a question? That seems inefficient and costly. Is there a better way to achieve this?

    • @AICodeKing
      @AICodeKing  3 months ago +2

      I believe the database would be SQL or CSV or something like that. So you can just ask the LLM for a query, then run it on the database and fetch the data, instead of giving the LLM the data and asking about it every time.
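
      A minimal sketch of that text-to-SQL pattern, assuming a SQLite database and a hypothetical ask_llm() helper that wraps whatever chat API you use (only the schema goes into the prompt, the rows never leave your machine):

          import sqlite3

          def ask_llm(prompt: str) -> str:
              # Hypothetical helper: call whatever chat API you use (Gemini Flash,
              # DeepSeek, etc.) and return the model's text response.
              raise NotImplementedError

          def answer_from_db(question: str, db_path: str = "data.db") -> list[tuple]:
              conn = sqlite3.connect(db_path)
              # Send only the schema, never the data, so the prompt stays tiny.
              schema = "\n".join(row[0] for row in conn.execute(
                  "SELECT sql FROM sqlite_master WHERE type='table'"))
              sql = ask_llm(
                  f"Given this SQLite schema:\n{schema}\n"
                  f"Write one SQL query (no explanation) that answers: {question}")
              return conn.execute(sql).fetchall()  # run the generated query locally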

    • @imranhrafi
      @imranhrafi 3 months ago

      @@AICodeKing It's the same; it will increase tokens, so it will be costlier. I want to know whether there is any way to feed the data to the LLM once and then use it without limit, so that the cost goes down.

    • @TouchGr8ss
      @TouchGr8ss 3 months ago

      Hello brother, what you do is embed the question and create a short algorithm that builds a query or filters out only the matching part related to that question, so you only send the small part of the database that is relevant to the question.
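
      A minimal sketch of that retrieval idea: a hypothetical embed() helper stands in for whatever embedding model you use, and cosine similarity picks the few rows worth sending to the LLM:

          import numpy as np

          def embed(text: str) -> np.ndarray:
              # Hypothetical helper: call whatever embedding model or API you use
              # and return its vector for this text.
              raise NotImplementedError

          def top_matching_rows(question: str, rows: list[str], k: int = 5) -> list[str]:
              # Embed the question, score it against each row, keep only the k best.
              q = embed(question)
              scored = []
              for row in rows:
                  r = embed(row)  # in practice, precompute and cache row embeddings
                  score = float(np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r)))
                  scored.append((score, row))
              return [row for _, row in sorted(scored, reverse=True)[:k]]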

    • @mitchellmigala4107
      @mitchellmigala4107 2 months ago

      Look up RAG. It may be beneficial. There are other ways to add context to your prompts without blowing out your context window or skyrocketing your costs. You can use a local model to create your database queries to try to focus the results. Look up ways to add context. Good luck!

  • @aniketpande7549
    @aniketpande7549 3 months ago

    Also, where can I locate my sgpt config file on Windows 10?

    • @AICodeKing
      @AICodeKing  3 months ago

      I believe it's in the main user directory.
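
      One way to check: ShellGPT's usual default is ~/.config/shell_gpt/.sgptrc resolved under your user profile, but the path may differ by version, so treat it as an assumption and confirm against the sgpt docs:

          from pathlib import Path

          # Assumed default ShellGPT config location; verify for your version.
          candidate = Path.home() / ".config" / "shell_gpt" / ".sgptrc"
          print(candidate, "exists" if candidate.exists() else "not found")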

  • @nazarmohammed5681
    @nazarmohammed5681 3 months ago

    How do I convert a Figma design to a website in Angular?

  • @anasghgyc68
    @anasghgyc68 3 months ago

    Hi, can you please try Plandex?

  • @ryanscott642
    @ryanscott642 3 months ago

    I gotta say, Cursor is better. Merging changes with existing code is necessary.

    • @AICodeKing
      @AICodeKing  3 months ago

      Money

    • @TouchGr8ss
      @TouchGr8ss 3 months ago

      @@AICodeKing free trial

    • @AICodeKing
      @AICodeKing  3 months ago +1

      After 30 days, money.

    • @TouchGr8ss
      @TouchGr8ss 3 months ago

      @@AICodeKing make a new mail ;)

  • @neo1482
    @neo1482 3 months ago

    First

  • @aniketpande7549
    @aniketpande7549 3 months ago +1

    According to you, of all the models you have used till now, which is the best-performing and ALSO FREE copilot method I can use on my laptop? FYI: my system is a potato, so I think I should go with an API, not a local model.

    • @AICodeKing
      @AICodeKing  3 months ago

      An extremely low-cost option can be the DeepSeek Coder V2 API or this Flash model. You can use the 1.3b model for autocomplete, which can run even on potato computers, while using any API you prefer for chat.

    • @aniketpande7549
      @aniketpande7549 3 months ago

      @@AICodeKing Thanks for the input. I think I will go with the Gemini Flash API for chat and maybe a local DeepSeek 7b for autocomplete.
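
      A minimal sketch of what that split setup could look like in Continue's config.json, printed from Python. The key names ("models", "tabAutocompleteModel"), provider ids ("gemini", "ollama") and model tags below are assumptions based on Continue's docs at the time and may differ in your version, so check the Continue documentation before using them:

          import json

          # Assumed Continue config shape; merge the output into ~/.continue/config.json.
          config = {
              "models": [
                  {
                      "title": "Gemini 1.5 Flash",
                      "provider": "gemini",             # chat goes through the Gemini API
                      "model": "gemini-1.5-flash-latest",
                      "apiKey": "YOUR_GEMINI_API_KEY",
                  }
              ],
              "tabAutocompleteModel": {
                  "title": "DeepSeek Coder 1.3B",
                  "provider": "ollama",                 # small local model served by Ollama
                  "model": "deepseek-coder:1.3b",
              },
          }

          print(json.dumps(config, indent=2))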

  • @Brainiac5
    @Brainiac5 3 months ago

    With your recent videos, which one is actually the best copilot without consideration of cost? 🤔

    • @AICodeKing
      @AICodeKing  3 months ago +2

      Claude 3.5 Sonnet combined with Codestral for Autocompletion. (Not considering the cost)

    • @CapitiStudios
      @CapitiStudios 3 months ago

      @@AICodeKing I'm following your videos but couldn't get an API key for Claude 3.5.
      Won't I be able to find my API key in Claude 3.5 as a regular paying user?

    • @Brainiac5
      @Brainiac5 3 months ago

      @@AICodeKing Thanks so much for the response, my man. You are a lifesaver.