Boost Productivity with FREE AI in VSCode (Llama 3 Copilot)

  • Published 27 Oct 2024

COMMENTS • 77

  • @jeffwads
    @jeffwads 6 months ago +4

    Very impressed with the 8B L3 with regard to coding. Amazing how much progress they have made.

  • @carta-viva
    @carta-viva 4 months ago

    This is pretty cool as a starting point. I think AI will do many other things in the future to help us achieve more productivity.

  • @martin22336
    @martin22336 6 months ago +4

    I am using this and it is insane 😮 I think full stack developers will not like their future, holy crap.

  • @kesijack
    @kesijack 4 months ago

    It works on a MacBook Air M3 with 16 GB RAM; a little slow, but usable. Thank you Mervin.

  • @G3TG0T
    @G3TG0T 6 months ago +5

    Great video! How do I connect to my own local Ollama server running on my local machine with this?

    • @sefasozer4359
      @sefasozer4359 1 month ago

      Bro, did you find a solution?

    • @G3TG0T
      @G3TG0T 1 month ago

      @@sefasozer4359 no, never figured it out!
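For the unanswered question in this thread: Ollama listens only on 127.0.0.1:11434 by default, so reaching it from another machine (or pointing a client at a remote box) usually means rebinding the server and overriding the client's API URL. A minimal sketch, assuming a stock Ollama install; the IP address is a placeholder for your own server:

```shell
# On the machine hosting the models: bind Ollama to all interfaces
# instead of loopback, then start the server.
export OLLAMA_HOST=0.0.0.0:11434
ollama serve

# From the client machine, confirm the server is reachable
# (replace 192.168.1.50 with the server's actual address).
curl http://192.168.1.50:11434/api/tags
```

The VSCode extension would then need its Ollama endpoint changed from http://localhost:11434 to that address; whether the CodeGPT build shown in the video exposes such a setting is not confirmed here, so treat that part as an assumption.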

  • @MaorAviad
    @MaorAviad 6 months ago +1

    Amazing content! Maybe you can create a long video where you use this to create a full stack application.

  • @joseeduardobolisfortes
    @joseeduardobolisfortes 6 months ago +4

    Very good tutorial. You don't speak about the platform; can I assume it will work on both Windows and Linux? Another thing: what's the recommended hardware configuration to install Llama 3 locally on our computers?

    • @shivpawar135
      @shivpawar135 1 month ago

      I have an RTX 3050 laptop; it worked fine for me.

  • @djshiva
    @djshiva 5 months ago +1

    This is amazing! Thanks so much for this tutorial!

  • @Techonsapevole
    @Techonsapevole 6 months ago +2

    very nice, and it's just an 8B parameter model

  •  6 months ago +2

    Thanks for sharing.
    I host the Ollama server on a remote machine. How do I make it connect to the remote machine instead of localhost?

  • @m12652
    @m12652 6 months ago +1

    The buttons don't do anything... note I'm working offline. The 4 buttons at the bottom of the add-in's panel just copy the code to the chat window. They don't do anything else, and once clicked, the AI stops responding to questions. When I asked it what was wrong with "explain selected code", the AI responded "nothing, it's only meant to copy the code". Anyone know if this is broken for me, or is it simply an incomplete add-in...?

  • @kannansingaravelu
    @kannansingaravelu 5 months ago +1

    Do we need both llama3:8b and instruct? Can we not work only with instruct? Also, I see your code works faster - could you specify your PC / system specs and config? It takes a good amount of time on my iMac 2017.

  • @prestonmccauley43
    @prestonmccauley43 6 months ago

    This was a great quick lesson. One thing I was wondering if anyone figured out: I often need to refer to very new documents on APIs etc. Has anyone tied this into a RAG structure, so we are always looking at the latest documents?

  • @ah89971
    @ah89971 6 months ago +1

    Thanks, can you make a video of Pythagora using Llama 3?

  • @alinciocan5358
    @alinciocan5358 6 months ago +2

    Does it slow down my laptop if I run it locally? Would I be better off running Haiku in the cloud? What would you recommend? I'm just getting into code.

    • @allxallthetime
      @allxallthetime 6 months ago +1

      I have 8 GB of VRAM, and when autocomplete is on for the Cody AI copilot, my fans turn on full blast on my laptop. I have 64 GB of RAM so it doesn't slow my PC down, but if it was running on your CPU and not your GPU it might slow your computer down. I don't think it will slow your computer down if you have enough VRAM or a ton of RAM, but it could, depending on your computer's specs.
      There is also an extension called "Groqopilot" on VSCode that requires you to supply a Groq API key; when you do, it will create code for you lightning fast with Llama 3 70B, which is of course the better model than Llama 3 8B. It doesn't autocomplete, but it behaves very much like the tutorial we just watched.

  • @rmnilin
    @rmnilin 6 months ago +20

    SPOILER ALERT: this is not amazing, but you'll be able to make scrambled eggs on your laptop while it writes you a CRUD service that actually doesn't work

    • @ragtop63
      @ragtop63 2 months ago +5

      SPOILER ALERT: Stop trying to run language models on a laptop.

    • @shivpawar135
      @shivpawar135 1 month ago

      @@ragtop63 I did, and it worked fine. My laptop temp is like 50-60; normally it's 90 when I play games.

    • @bigsmoke4568
      @bigsmoke4568 15 days ago

      It depends on the laptop

  • @manojkr7554
    @manojkr7554 3 months ago

    ollama : The term 'ollama' is not recognized as the name of a cmdlet, function, script
    file, or operable program. Check the spelling of the name, or if a path was included,
    verify that the path is correct and try again.
    At line:1 char:1
    + ollama pull llama3:8b
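The PowerShell error above simply means the shell cannot find the ollama executable. A minimal troubleshooting sketch, assuming a standard install (/usr/local/bin is the usual Linux/macOS location; the Windows installer typically uses %LOCALAPPDATA%\Programs\Ollama, and opening a fresh terminal after installing is often all it takes):

```shell
# Check whether the ollama binary is visible on PATH.
command -v ollama || echo "ollama is not on PATH"

# If missing, add the install directory for the current session
# (adjust the path to wherever Ollama was actually installed).
export PATH="$PATH:/usr/local/bin"

# Retry the command from the error message.
ollama pull llama3:8b
```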

  • @thebudaxcorporate9763
    @thebudaxcorporate9763 6 months ago +1

    Waiting for an implementation on Streamlit, keep it up bro

    • @maryamarshad871
      @maryamarshad871 3 months ago

      Can you guide me step by step? It doesn't work on my laptop.

  • @m12652
    @m12652 6 months ago +1

    Does CodeGPT require me to be logged in? I'm all set up, but if I ask it to explain something it just says "something went wrong! Try again." Then I have to either quit and restart VSCode or disable and then re-enable the extension...

  • @m12652
    @m12652 6 months ago +3

    Does anyone else get the feeling that the way AIs answer questions is based on the old Microsoft "Clippy" assistant... annoyingly eager, and unable to answer much of anything without wrapping it in a paragraph or so of irrelevance... Very annoying to get 6 or 7 line answers where the only relevant bits are a number or a few words.

    • @Fonzleberry
      @Fonzleberry 6 months ago

      If you're using ChatGPT you can change that in settings. I think in things like Ollama, you can also change your settings so that it gets straight to the point.

    • @m12652
      @m12652 6 months ago +1

      @@Fonzleberry I know, thanks... just haven't had much luck though lol. At one point I got fed up and added an instruction to "only answer boolean questions with a yes or a no"; I had to restart the model (BakLLaVA) to get it to start answering properly again, as it was answering all questions with "yes" or "no". I don't get why the default mode is to bury all answers in information not requested. I guess someone redefined the word "conversational". Can't even ask what's 2+2 without an explanation lol

    • @Fonzleberry
      @Fonzleberry 6 months ago

      @@m12652 It will improve with time and use cases. A model fine-tuned with Meta's Messenger/WhatsApp data would have a very different feel.

  • @JohnSmith762A11B
    @JohnSmith762A11B 6 months ago +2

    Excellent and useful tutorial! 👍

  • @Fonzleberry
    @Fonzleberry 6 months ago +1

    Any ideas about how this works on large scripts? What's the context length?

  • @yagoa
    @yagoa 6 months ago +1

    How do I use another computer running Ollama on my LAN?

  • @bhanujinaidu
    @bhanujinaidu 6 months ago +1

    Super video. Thanks

  • @ErfanKarimi-ep7ie
    @ErfanKarimi-ep7ie 4 months ago

    Guys, I installed it according to the vid but I can't run the AI. I saw somewhere that I need to put it in PATH, but I don't know where the files are installed.

  • @m12652
    @m12652 6 months ago +1

    This app looks like a good idea, but it's a long, long way from finished. The buttons (refactor, explain, document and fix bug in selected code) don't do anything but copy the selected code to the chat. If you use the clear button, it clears the selected model etc. but not the history. I just asked it to write a basic API call for SvelteKit and it wrote some pure garbage, based on assuming the previous selection was part of the current question. I'm using a 2019 MBP with 32 GB RAM and it's too slow to add any value so far... for me at least.

  • @SelfImprovementJourney92
    @SelfImprovementJourney92 6 months ago +1

    Can I use it to write any code? I am a beginner and do not know anything about coding, just starting from zero.

    • @MervinPraison
      @MervinPraison  6 months ago

      Yes, you can write in most popular programming languages

    • @konstantinrebrov675
      @konstantinrebrov675 6 months ago

      You would need to know at least the basics of coding, and how an application is designed and structured. This writes the code for you, but if you cannot read the code or at least understand what it's doing at a high level, then it's too early for you. It gives you 2/3 of the finished product; you just need to know how to integrate that code into your application. You need to know how to create an application, what the different parts of an application are, and how to deploy and run an application.

  • @sillybilly346
    @sillybilly346 6 months ago +1

    It only gives the option for CodeLlama and not Llama instruct, please help

    • @AlexMelemenidis
      @AlexMelemenidis 6 months ago

      I have the same issue. In the CodeGPT menu I only see the options "llama3:8b" and "llama3:70b", but not "llama3:latest" or "llama3:instruct", which I have available (when I go to a command line and do ollama list). When I select llama3:8b and enter a prompt, nothing happens. When I choose another model I have installed, like "mistral", it works just fine...

    • @AlexMelemenidis
      @AlexMelemenidis 6 months ago

      Ah okay, so it seems to be the name, and CodeGPT has a set list of compatible model names? I did another "ollama pull llama3:8b" and now it works.

    • @sillybilly346
      @sillybilly346 6 months ago +1

      @@AlexMelemenidis yes same here, thanks
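The fix this thread converges on can be checked from the command line; a short sketch, assuming the standard Ollama CLI:

```shell
# Show which model tags are actually installed locally.
ollama list

# CodeGPT matches on exact tag names, so if its menu offers
# "llama3:8b" but only "llama3:latest" is installed, pull the
# matching tag before selecting it in the extension.
ollama pull llama3:8b
```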

  • @McAko
    @McAko 5 months ago

    I prefer to use the Continue plugin

  • @Mr76Pontiac
    @Mr76Pontiac 6 months ago

    I'm really not impressed with Llama 3 8B. I decided to skip Python and go to Pascal. I asked it to create a tic-tac-toe game, and have had nothing but problems with it. It CONSTANTLY forgets that Pascal requires declarations and leaves out the variable definitions, especially the loop variables. When I asked it to revisit, this last time it decided to rewrite the function that draws the board using console.log instead of writeln. I mean, it rewrote the WHOLE function to be completely useless.
    I tried running the 70B, but the engine just kept prioritizing my GTX 970 instead of my RTX 3070. The documentation on the site, as well as the GitHub repo, just doesn't explain well enough how to control which device the engine should use for the calculation.
    I could pull the 970 out, but, meh.

  • @red_onex--x808
    @red_onex--x808 6 months ago +1

    awesome info

  • @harikantipudi8668
    @harikantipudi8668 5 months ago

    Latency is pretty bad when I'm using llama3:70b in VSCode with CodeGPT. I am on Windows. I guess it's the underlying machine. Can anything be done here?

    • @ragtop63
      @ragtop63 2 months ago

      Get a better GPU. Preferably something with a lot of VRAM. If you’re attempting to do this on a laptop like many others here are, you’re setting yourself up for failure. If you’re serious about running large language models, don’t run them on a laptop.

  • @Adrian-mu8gg
    @Adrian-mu8gg 2 months ago

    If I have a project with code separated into different files, can it read them all and debug?

  • @m12652
    @m12652 6 months ago +1

    Nice one 👍

  • @dorrakallel5303
    @dorrakallel5303 6 months ago +1

    Thank you so much for this video. Is it open source, please? Can we find the weight files and use them?

    • @ragtop63
      @ragtop63 2 months ago

      Yes, it's released under the MIT License. It's open-source and free to use.

  • @alirezanet
    @alirezanet 4 months ago

    I prefer Continue; it is much more fun to work with

  • @abhijeetvaidya1638
    @abhijeetvaidya1638 5 months ago

    Why not use CodeLlama?

  • @programmertelo
    @programmertelo 5 months ago

    amazing

  • @srinivasyadav7448
    @srinivasyadav7448 6 months ago

    Does it work for React Native code?

  • @MeinDeutschkurs
    @MeinDeutschkurs 6 months ago

    All I can see is: "Something went wrong, try again."

  • @MosheRecanati
    @MosheRecanati 6 months ago

    Any option to use it with the IntelliJ IDE?

  • @tumadrezocl
    @tumadrezocl 3 months ago

    Why doesn't "ollama pull llama3:8b" work?

    • @mmmommm237
      @mmmommm237 1 month ago

      You have to put Ollama's install directory in your system PATH variable

  • @haricharanvalleru4411
    @haricharanvalleru4411 6 months ago +1

    very helpful tutorial

  • @beratyilmaz7951
    @beratyilmaz7951 6 months ago

    Try the Codeium extension

  • @mafaromapiye539
    @mafaromapiye539 6 months ago

    AI technologies are making things easier, as they boost one's vast human general intelligence capabilities...

  • @JohnDoe-ie3ll
    @JohnDoe-ie3ll 5 months ago +1

    Why are you guys using third-party plugins which have limits and then claiming it's free? Would be nice to see one which doesn't require that shit

  • @amitkumarsingh4489
    @amitkumarsingh4489 6 months ago

    could not see the screen @ 2:17 in my VSCode

  • @krishnak3532
    @krishnak3532 6 months ago +1

    If I run Llama 3 locally with Ollama, will it require a GPU to see faster performance? @mervin

    • @ragtop63
      @ragtop63 2 months ago

      Yes. All language models see vast performance improvements when using a supported GPU.
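A quick way to verify that claim on your own machine: newer Ollama releases can report whether a loaded model landed on the GPU or fell back to CPU. A sketch, assuming your Ollama version includes the ps subcommand:

```shell
# Run a throwaway prompt so the model gets loaded...
ollama run llama3:8b "Say hi" > /dev/null

# ...then inspect placement. The PROCESSOR column reports a split
# such as "100% GPU" or "100% CPU".
ollama ps
```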