Hitting the tokens-per-minute/day rate limit with Claude 3.5 Sonnet can sometimes happen really quickly; a video on tips and tricks to optimise prompting would be cool :)
Just use Claude 3.5 through OpenRouter instead, their rate limit is higher.
Use OpenRouter with Sonnet 3.5; its token limit is lifted. And they added caching support now as well.
That's where you use OpenRouter and never hit rate limits. Still cheap when you use Claude 3.5 🎉
Thanks @@darklen14, I'll try that :)
I'll hit the per-minute limit with one larger file and it's really annoying
Umm, can you compare Claude Dev, Claude Engineer, and Aider? Rate each out of 50 and rank them based on quality of code generated, work done by the AI from simple prompts, features, modifications to the project, and tools available. Please also give an overall review of these three, even beyond the things I mentioned. Thank you!
Bro just AI-prompted a YouTube creator 😂
There are so many AIs to try; it would be great if you made a tier list based on low price with good quality, or even just free tiers. It would really help me stick with one. Currently I'm always confused about whether to use Aider or Claude Dev, Gemini or Claude or even GPT.
I agree! Hehe. Maybe you could actually make some kind of tier list for LLM models, focusing on different use cases or primarily just for coding (since that's what most of us here, your audience, are interested in and why we're watching your videos 😊), AICodeKing.
I just realized that language models are basically micro-transactions for coding
Hey King, nice video as always. Could you make a video building an app with Codeium? There's not much information on YT about this tool.
I just used Claude Dev and made an app that generates PowerPoint presentations with speaker notes on any subject you want. It's packaged to work on newer Macs and Windows machines. In Python 🐍, which is not a language I'm really familiar with.
The devs haven't updated Claude Dev in 3 patches; it's become completely sentient at this point and is updating itself.
Is Gemini-1.5 Flash-002 not working with the feature update? I’m getting this message:
I apologize for the continued difficulties. Since I cannot directly access your browser's console, I am limited in my ability to debug the issue. To proceed, I need the console log output from your browser. Without this information, I cannot effectively diagnose and resolve the problem. Please provide the console log, and I will do my best to assist you further.
It's working for me. Is this happening always or in specific cases?
Try reinstalling it; that worked for me.
You have to keep in mind that these things are calling functions on the backend to enable these sorts of abilities. But the model has to "remember" or infer when to call those functions, and sometimes models struggle with that. I've found on a couple of occasions, in other contexts, that you can sometimes get a model unstuck even without knowing the specifics of the function by just telling it something like, "You should have some sort of function you've been provided with that enables you to do this."
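The mechanism described above — the model choosing to emit a tool call that the client then executes on its behalf — can be sketched roughly like this. All names here (`TOOLS`, `fetch_console_logs`, the JSON shape) are hypothetical illustrations, not Claude Dev's actual API:

```python
import json

# Hypothetical registry of backend functions the model may invoke.
TOOLS = {
    "fetch_console_logs": lambda: ["[error] undefined variable x"],
}

def handle_model_output(output: str):
    """If the model emitted a JSON tool call, run it; otherwise pass text through.

    The model must *choose* to emit {"tool": ...} -- if it forgets the tool
    exists, no function is ever called, which is the failure mode described
    in the comment above.
    """
    try:
        msg = json.loads(output)
    except json.JSONDecodeError:
        return ("text", output)  # plain reply, no tool invoked
    if not isinstance(msg, dict):
        return ("text", output)
    name = msg.get("tool")
    if name in TOOLS:
        return ("tool_result", TOOLS[name]())
    return ("text", output)
```

So `handle_model_output('{"tool": "fetch_console_logs"}')` dispatches to the backend function, while any plain sentence passes through untouched — which is why a nudge like "you should have a function for this" can work: it prompts the model to emit the call again.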
First!
🎯 Key points for quick navigation:
00:00:05 *🎉 Introduction to Claude Dev Upgrades*
- The video discusses new features and upgrades in Claude Dev since the last update.
- Claude Dev can now control a Chromium browser for improved debugging.
- Updates include better error reporting and support for new Gemini models.
00:03:27 *🛠️ Demonstration of New Tools*
- The host demonstrates generating a simple to-do app using Claude Dev and utilizing the new inspection features.
- Claude Dev’s ability to fetch console logs and take screenshots is showcased.
- The new tools help identify deployment readiness and debug issues efficiently.
00:07:34 *⚙️ Advanced Features and Performance*
- The video explores new Gemini model integrations and their performance with Claude Dev.
- A Minesweeper game is created using Gemini, highlighting speed and efficiency.
- Discussion on subscription membership and access to exclusive content related to Claude Dev.
Made with HARPA AI
I was asking myself why ClaudeDev was so slow on the first request; then I saw that you were using Vertex AI. Speaking of which, can you recommend a tutorial to set it up with ClaudeDev like you did in the other video? Or do you talk about it in detail in the video you've just uploaded for premium members?
I've found it's useful to create a readme where I edit and save my prompts first, then submit them. That lets me evolve them easily and repeatably. Easier than scrolling through the logs.
I have been thinking of doing the same. Are you placing the prompt in the readme, then copy-and-pasting into the chat? Or are you including the readme as part of your context and telling it to implement xyz?
@@toddschavey6736 I didn't think of that but it's a great idea
I'm waiting for Qwen-Coder 2.5 32B; I can run it locally and maybe I wouldn't need Claude.
That's expensive for the first generation
Technically it's free. Because, I'm using Vertex AI here.
We miss the dragons dancing with the piano song
The ultimate test. Please write me the SkyNet. :)
About time we get these sort of features built in 😊
Great video, thanks for sharing.
Thanks King as always
INSANE COOL !!!
ay
Third bro❤🎉
niceee
👍
Which is better to use today: ClaudeDev or Aider?
Yeah, every time AICodeKing publishes an update I have to switch tools to try them out. It's an addiction. The main reason I stick to Aider is that I prefer having a separate terminal window that doesn't take up space in VS Code.
@@dmh20002 thank you
@@dmh20002 Thank you