Sonnet just got updated with caching, and it will reduce the cost significantly. Brilliant timing!
Their API is insane now, it's so cheap it almost feels like cheating haha
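For anyone curious what the caching actually looks like: here's a minimal sketch with the Anthropic Python SDK, assuming the beta header and model name from around the feature's launch (check the current docs if they've changed). The idea is to mark a big reusable prompt prefix as cacheable so repeat requests bill it at the cheaper cached rate:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical large, reusable context (e.g. a repo map or style guide);
# caching only pays off for long prefixes that repeat across requests.
big_context = open("project_notes.txt").read()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": big_context,
            # marks this prefix as cacheable for subsequent requests
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Refactor the parser module."}],
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)
print(response.content[0].text)
```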
This progress in coding AI makes me want to learn to code. I feel like the creation potential is expanding exponentially.
Please do an aider setup video, and also show it using different models, images, and how to run it in a browser. Great to see things picking up with these kinds of open source projects. If you do the video, make sure you're in a dimly lit room with green light, frantically typing like movie portrayals of hacking. Thanks again for the video!
Will do!
Dude's giving him prompts for the next video. AI is getting crazy 🤣🤣🤣
I’ll be back for that video 💎
@@r66p6r AI must be getting more lifelike as well, because it's also making typos in the comment section
I would also like to see this!!
Gemini is also a very good coding model, but it really shines in chemistry! It's unbelievable when you use it for chemistry work.
What kind of chem / computational chem have you used it for?? I actually considered doing a PhD in comp chem :)
aider is by far the most useful AI tool that I use
What kind of coding do you generally use it for?
So you just read documentation, bravo! 🎉
I also use LLMs a lot for building dictionaries or formatting stuff the way I need it in my code!
Great vid! Would love to see more about how you're using LLMs as a SWE. My experience so far has been GitHub Copilot through work, but I'm hoping to run more local LLMs when the M4 Macs come out.
Thanks! I can't wait until the M4 MacBooks are released. I NEED to finally upgrade my M1 Max MacBook haha
To infinity and beyond ! 👋👍
Thanks!
Wanna see more live coding!
Thanks for the feedback!
This is awesome! I also use Ollama 😮😊
Ollama is one of my favorite tools! Right up there with LM Studio :)
I also like LM Studio, mostly for my M1 Mac
@@aifluxchannel You're running DeepSeek Coder with Ollama? But you can't with LM Studio, right?
I have heard about this. Let's see it in use.
Going to do this in the livestream next week ;)
I just started getting comfortable with Cody AI in VS Code. Intrigued by aider. How do you get generated code into your coding files? A copy/paste from the terminal to your IDE?
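From what I understand, no copy/paste is needed: aider edits the files you add directly on disk and git-commits each change. Here's a rough sketch using its Python scripting interface (names per its docs; the file and prompt here are made up):

```python
# pip install aider-chat  -- normally you'd just run `aider myapp.py` in a git repo
from aider.coders import Coder
from aider.models import Model

coder = Coder.create(
    main_model=Model("gpt-4o"),  # or any other model aider supports
    fnames=["myapp.py"],         # files aider is allowed to edit in place
)
# This edits myapp.py on disk and commits the change with git:
coder.run("add input validation to the signup form")
```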
For me, the "completion" assistants always get in the way. I tend to prefer prompting to get specific functions or basic boilerplate to start with. I will say, my favorite plugin uses DeepSeek V2 to automatically format / correct formatting in my code.
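Not sure which plugin that is, but here's a hedged sketch of what such a formatting pass might look like under the hood, calling DeepSeek's OpenAI-compatible API (the base URL and model name follow their docs at the time; treat both as assumptions):

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)

messy = "def add (x ,y):\n  return x+y"  # toy example of badly formatted code

resp = client.chat.completions.create(
    model="deepseek-coder",
    temperature=0,  # formatting should be deterministic
    messages=[
        {"role": "system", "content": "Reformat the user's Python code to PEP 8. Return only code."},
        {"role": "user", "content": messy},
    ],
)
print(resp.choices[0].message.content)
```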
Continue buttons in front ends are trivial to make when you control the model through something like vLLM or llama.cpp in Python, but API builders seem to think we don't need them. It might be because models aren't trained to produce more than n output tokens, but idk.
They're a bit annoying, but I agree.
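They really are trivial when you own the generation loop. A minimal sketch with llama-cpp-python (the model path is hypothetical): if the response stopped because it hit the token limit, append the partial output to the prompt and keep sampling:

```python
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=8192)  # hypothetical local model

prompt = "Write a Python function that parses a CSV file:\n"
out = llm(prompt, max_tokens=256)
text = out["choices"][0]["text"]

# The "continue button": finish_reason == "length" means the output was
# truncated, so feed prompt + partial output back in and keep generating.
for _ in range(4):  # cap the number of continues so this can't loop forever
    if out["choices"][0]["finish_reason"] != "length":
        break
    out = llm(prompt + text, max_tokens=256)
    text += out["choices"][0]["text"]

print(text)
```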
I'd love to see a video on how you use aider to code.
Coming soon!
I like aider's browser UI, but am I mistaken, or is it true that you can't use many of the useful commands like /ask or /help, because those are aider terminal commands?
More live coding to learn your process
I tried to use this with deepseek-coder-v2 lite, the 16B, but it's too weak to do anything right past the first prompt. I kinda wish there was a 70B version on Ollama. Should I try another model, like Llama 3.1 70B? Would it work much better than DeepSeek lite?
No, you should use DeepSeek V2 through the API; it's a 236B model. Llama 3.1 can't even follow aider's format and needs to be reprompted multiple times, not to mention that it's quite weak at coding.
@@sorenkirksdjfk7310 I was looking for something to experiment with without spending money.
I do agree the 16B model is basically only useful for things like basic JavaScript and beginner bash scripting. APIs are really cheap, though, if you don't want to run something locally.
If you want the best results possible, then use Claude 3.5 Sonnet, followed by GPT-4o and DeepSeek Coder V2. Different LLMs are built for different use cases, and you're wasting your time trying to code with these locally hosted small LLMs. The technology just isn't there yet, so APIs are your best bet as of now.
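If anyone wants to try both routes with aider, here's a rough sketch of switching between a hosted API and a local Ollama model (the model names and env vars follow aider/LiteLLM conventions and may drift, so treat them as assumptions; the CLI equivalent is e.g. `aider --model deepseek/deepseek-coder`):

```python
import os
from aider.coders import Coder
from aider.models import Model

# Hosted DeepSeek V2 (the 236B model) through its API:
os.environ["DEEPSEEK_API_KEY"] = "sk-..."  # your key here
hosted = Coder.create(main_model=Model("deepseek/deepseek-coder"), fnames=["app.py"])

# Local 16B lite model served by Ollama:
os.environ["OLLAMA_API_BASE"] = "http://127.0.0.1:11434"
local = Coder.create(main_model=Model("ollama/deepseek-coder-v2"), fnames=["app.py"])

hosted.run("fix the failing unit test in app.py")
```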
I have already tested a few models myself and am really impressed by some of them. However, they all reach their limits. But something else is currently bothering me: I don't want to have to explain to the coding assistant the context in which everything is happening. I want to give it my entire project and work in a larger context. E.g., for a Python program with 10 files, it should know all 10 files.

Say it's a program that stores people in a database and displays them as a webpage. If I then say that I want to save a date of birth for each person, it should suggest how the database should be adapted, how the dataclass files should be changed, how the input should be validated, and how the UI should be adapted. This task is actually not that difficult, but it requires a much larger context, and none of the models I've tried offer that; I really miss it. I currently use the CodeGPT plugin for JetBrains IDEs. Does anyone have a good idea how I can better realize my wish?
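That's roughly what aider's repo map is for: it summarizes your whole git repo for the model, and you can add all the relevant files to the chat so a cross-cutting change touches every layer at once. A sketch using its scripting interface (names per its docs; a sketch, not a recipe):

```python
from pathlib import Path
from aider.coders import Coder
from aider.models import Model

# Add all ten project files so the model sees the DB layer, dataclasses,
# validation, and UI together:
fnames = [str(p) for p in Path(".").rglob("*.py")]
coder = Coder.create(main_model=Model("gpt-4o"), fnames=fnames)

coder.run(
    "Add a date_of_birth field for each person: update the DB schema, "
    "the dataclass files, the input validation, and the web UI."
)
```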
I am still putting print statements in my code to debug it. Sorry, I'm not jumping on this bandwagon until I'm pulled on kicking and screaming.
Well, people said the same thing when electricity and the telephone first came out 😂
Friend, I'm trying my best, but your videos are so rambly that I still don't fully understand what this video is about.
Use Perplexity: paste the video URL into it and ask whatever you want.
Aider is a sort of pair programming with an AI tool; he just goes over the feature set.
It's pronounced "A-dir".