The strength of these applications (and bolt.new) is not building something with one command; it is interacting with a codebase.
I really appreciate your feedback @chadjones4255! I agree. On a real project, how often do we spin up a brand new code repo? Not all that often? So, even though sure, these assistants can help us quickly create these simple apps or get complex ones scaffolded really fast, the REAL value lies in what they can help us do AFTER a real production-bound app is scaffolded.
But these exercises do provide a critical data point in determining just where we are with what these coding assistants are capable of. And spinning up a new, functional app during a short video is WAY easier to explain and understand - especially for folks who are new to software dev.
Just as it's important, when we're architecting/designing new software systems, to have multiple "views" of the proposed system from various perspectives (e.g., component relationships, deployment, etc.), it's helpful to conduct many different types of experiments.
You might want to retry by giving it docs for ChromaDB (or at least the LangChain subset) and bs4. The fact that they all had similar errors indicates it was more an error of outdated source knowledge than of the IDEs themselves. (Although I didn't read the docs he provided, so maybe they do mention how it should work.)
I really appreciate your feedback @imcool3357! You are 100% correct. Had I gone to the trouble of first identifying the up-to-date API docs that the current version of the LangChain community library depends on and loaded those docs into the context as well, it's likely none of the assistants would have had any errors - or maybe just one error vs 3-5.
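In case it's useful to anyone reading along, here's roughly the kind of setup those up-to-date docs would have pointed the assistants toward - a minimal sketch, assuming the current LangChain package layout (the URL, CSS class, chunk sizes, and embedding model are illustrative, not the exact code from the video):

```python
# Minimal RAG ingestion sketch, assuming the current LangChain package layout.
# The URL, CSS class, chunk sizes, and embedding model are placeholders.
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma  # newer package; the old
# "from langchain_community.vectorstores import Chroma" import is deprecated

# Load only the article body so the context stays small
loader = WebBaseLoader(
    web_paths=("https://example.com/some-article",),
    bs_kwargs={"parse_only": bs4.SoupStrainer(class_="post-content")},
)
docs = loader.load()

# Split into overlapping chunks and embed them into a local Chroma store
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = splitter.split_documents(docs)
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())
```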
Windsurf is by far the best AI IDE. The only missing feature is image upload. Other than that, it is great.
I feel like Cursor's "Tab" autocomplete is much better and faster than Windsurf's. I tried Windsurf for a day and switched back to Cursor right after because the Tab autocomplete felt too slow and less context-aware than Cursor's.
It’s there now lol
New update fixed that.
Great content, and very easy to follow and then try all the tools shown.
Thank you so much for your kind words @MaxZapara! I hope you find at least one of them helpful to you.
Sir, which Theme are you using for your IDEs?
OMG this is awesome! Great demo, thank you for saving me the time of doing this myself... and money.
This video clearly shows that, given the same base model, they all perform similarly.
I've replicated and expanded this experiment to also parse PDF and Markdown and to improve the UI. I have to say I'm impressed! (And aider is open source.)
Nice video! Thanks for teaching us so much! Have you thought about doing a video exploring aider's architect feature?
Everybody overuses Claude 3.5 Sonnet, but I have great experience using o1-mini. Have o1-mini do the heavy lifting and then create a summary + instructions for Claude or another model, and the results are way better.
Fantastic use case, thank you. Let me try to implement this strategy in my app.
Could you define what you mean by heavy lifting?
I also would like to know what the heavy lifting is. Could you go into detail about what you mean and describe your whole process?
I really appreciate your feedback @boardsontt1756! I won't deny that folks are having success using o1 mini. It's best at "planning" (I mean, reasoning is kinda its "reason for existing"), but according to the vast majority of folks I network with at least, it's not as good at generating/editing code. I could be wrong, because I'm basing this on hearsay, as I haven't really put o1 mini to the true test myself.
Not that leaderboards are the "gold standard" by any means, but I've found that the aider team really does pretty much get it right with theirs, which is based not just on the LLM, but on the LLM when used with aider: aider.chat/docs/leaderboards/. This point is critical. I never trust general LLM benchmarks.
Also, in case you haven't already used aider's "architect" mode or seen this, here's aider's comparison of various LLMs combined with its architect and editor modes: aider.chat/2024/09/26/architect.html.
If you watch tutorials on my channel, you'll know I kinda harp on this notion of "better" and "best" 😉 I put those notions in the same category as Santa Claus 🎅
Could you provide an example of a code gen/editing task you tried using Claude that didn't work and then, with the exact same prompt and process, you switched to o1 and it just worked? I really do want to understand.
@@ontheruntonowhere Critical thinking / planning out your file structure, etc. Think of it as laying the foundation. I found that I can get amazing results using o1 compared to Sonnet, usually in the first shot. I usually have to go back and forth or upload extra context with Sonnet. Many people shy away because of the price, but if you're building something you care about, that shouldn't be an issue. And if you're only using it to get a foundation, it's pretty inexpensive compared to using Sonnet.
Extra tip: Brainstorm in Perplexity if you have a pro membership. Create a space for whatever project it is you're trying to build (spaces in Perplexity are like custom GPTs), upload as much context as you want, and talk to Sonnet via Perplexity. To make the most of your context window, edit out responses that don't really need to stay. Regenerate answers if you feel you're not getting the results you want, or approach your prompt differently (make sure to edit the bad/poor original prompt).
Most YouTubers overhype AI tools; you show the actual comparison between them without any hype. I like this type of content.
One thing that really bothers me is pasting screenshots instead of direct text errors. It just doesn't make sense. It's way slower, way more expensive in API and way less precise.
On the other hand, I'd love to see a comparison with Wingman-AI, which is a VS Code extension. :)
Agreed, for non-UI-related errors it's useless.
Also, you can use a screenshot-to-OCR app to extract text from an instant screenshot and paste it into the AI chat.
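If you'd rather script that tip than use a separate app, here's a minimal sketch, assuming Tesseract is installed and pytesseract/Pillow are available (the file name is made up):

```python
# Rough sketch of the screenshot-to-text idea using Tesseract via pytesseract.
# Assumes Tesseract is installed on the system; the file path is illustrative.
from PIL import Image
import pytesseract

screenshot = Image.open("error_screenshot.png")
error_text = pytesseract.image_to_string(screenshot)
print(error_text)  # paste this text into the assistant instead of the image
```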
100%. And you need to host it and send it to Windsurf as a URL.
I really appreciate your feedback @BleedingDev! I don't disagree with your point about using images when all the same error info is available as text right in the dev env.
I truly only intended to demonstrate an important feature of aider and most AI coding assistants.
Breakages aside, there absolutely will be situations in which taking a screenshot is by far the best way to provide context to the assistant. For instance, I have a screenshot of a mockup or wireframe I created that I want to use to advise the assistant on what I'm planning to build. That will GREATLY reduce my prompting load. Or, maybe I notice something isn't being rendered correctly? Can't get that from any error logs.
I know that's not your point. Just wanted to call out that the "screenshot, copy/paste" is a really critical coding assistant capability.
Also, I measured with aider and the most recent version of Claude 3.5 Sonnet. I pasted a very detailed and "busy" screenshot and asked aider to explain it. It nailed it and my Anthropic cost was one penny.
@@CodingtheFuture-jg1he thanks for the reply! :) I would go with something like "it's possible, but this time it's better to just go the text way".
I think it is important to ask correctly. That was true in the past and it's true now. If you know how to ask people, Google, or AI, you are way more powerful, not just as a developer. :)
Also, for a fair comparison it would be better to compare the same input. I know it doesn't make sense here, but in some edge cases it would be critical!
Just so you know, this video potentially solved an issue I had with my codebase,
where I didn't know I had to use embeddings; instead, I was passing a whole long text to the bot, which compromised a whole lot of things, including time.
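For anyone hitting the same thing, the change looks roughly like this - a rough sketch, assuming a LangChain-style stack like the one in the video (the file name, chunk sizes, embedding model, and question are placeholders):

```python
# Instead of stuffing the entire document into the prompt, embed chunks once
# and retrieve only the few most relevant ones per question.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma

long_text = open("big_document.txt").read()

# Split and embed the document once...
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).create_documents([long_text])
vectorstore = Chroma.from_documents(documents=chunks, embedding=OpenAIEmbeddings())

# ...then, per question, retrieve only the handful of chunks that matter
question = "What does the document say about pricing?"
relevant = vectorstore.similarity_search(question, k=4)
context = "\n\n".join(doc.page_content for doc in relevant)  # this goes to the LLM
```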
Building the detailed prompt could be an app by itself tbh
Aider runs locally? If I don't have a powerful PC, I can't use it?
You can use it with hosted models from providers like Anthropic, OpenAI, and others. Aider has a leaderboard for AI models on its site.
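In other words, the heavy lifting happens on the model provider's servers; your machine only runs the aider CLI (or its scripting API). A minimal sketch, assuming aider's documented Python scripting interface and an ANTHROPIC_API_KEY set in your environment (the file name and prompt are made up):

```python
# Minimal sketch using aider's documented Python scripting interface.
# Assumes ANTHROPIC_API_KEY is set; all model computation runs on Anthropic's servers.
from aider.coders import Coder
from aider.models import Model

model = Model("claude-3-5-sonnet-20241022")      # a hosted model, not a local one
coder = Coder.create(main_model=model, fnames=["app.py"])
coder.run("add a /health endpoint that returns 200 OK")
```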
I was kinda rooting for Windsurf, but the lack of vision (attachments) is a dealbreaker for me.
I would expect them to add it soon, since the rest of their competitors already have it and it doesn't seem like it requires some sort of engineering breakthrough.
Chill bro, it just released a couple of days ago. They will surely add more. But currently it's already good with the initialization.
The flows work well on small interactions, but I just wrote a library that puts all interactions into a DB, as well as the success or failure of the request. I kept on having a chat go gray and having to reload windows. I lost interest in a few hours, but I was also growing tired of vs code forks...
You know, that was one of my initial reactions @SoloJetMan! Then I reminded myself that this, as well as the ability to add reference docs like aider and Cursor allow, are far simpler features for the Codeium team to add than the core Cascade capabilities (which are a considerably higher bar). If they can achieve what we're seeing in Cascade so far, it should be trivial for them to add the other features.
It would be good with Bolt or OttO, and I wish you had expanded the exercise so that you could draw conclusions at the end.
I really appreciate your feedback @augmentos! I hear you. But, as I stated, I intentionally do not want to draw the conclusion for anyone as to "which assistant is best". All 3 are quite capable of this kind of task. The choice of which to use is likely based on your personal preferences and style of working.
My advice: the moment anyone tells you "this tool is the best", without saying clearly "best at what" and "why is that particular strength most important", respectfully nod and then immediately purge your memory of every bit of "advice" that person gave you. I've been in software engineering a long time and I can tell you that people abuse the terms "best", "best practice" and "antipattern" to add the perception of authority to what is nothing more than their personal opinion. It's a human thing, but we need to be aware. Gotta take it all in and decide for ourselves 😀
Otto?
Very interesting. All three AI coding assistants (Aider, Cursor, and Codeium Windsurf) successfully built the RAG app, but needed different amounts of hand-holding:
Codeium Windsurf: Only needed 2 fixes
Aider: Needed 3 fixes
Cursor: Needed 4 fixes
They all hit similar bumps (mostly API and config stuff) and ended up with working apps.
Yep @puremajik! Now, honestly, I wouldn't read too much into the number of fixes per assistant. Maybe I should've said that. The reason is that part of the performance of these tools is how their developers have designed them, but a major part is the backend LLMs. Since LLM output is totally non-deterministic, I likely could have repeated this exact exercise with each assistant 5 times, and during some runs one assistant would outperform another one that previously outperformed it.
I think the big takeaway is that ALL 3 are quite capable of this kind of task. Right now at least, the choice of which to use is likely based on your personal preferences and style of working.
@@CodingtheFuture-jg1he I found your video after seeing half a dozen that tried to convince me this IDE is better than another one. I was skeptical because they all use the same LLM. And I can see they all seem to be converging on providing the same features, especially now with Cursor agent. You've just confirmed with your experiments what I was thinking. Very reasonable conclusion without any hype. Thank you so much! You've got one more subscriber.
P.S.: Have you tried Cline before? I guess it would be the same too, though. The real breakthrough comes with new LLMs, not IDEs.
Yes Yes Yes, great video!
Thank you so much @hannespi2886!
I would love for you to add a small follow-up video with Cline as well (formerly Claude Dev). It's pretty cool how it can use computer use to debug the code itself, and I think it would be a great addition to this trio.
I've been using Cline for a month or two and it's great, but I've been ringing up quite a tab in it with Claude Sonnet
windsurf for the win
Aider - inconvenient
Cursor - waste of money, overhyped
Windsurf - high potential and well coordinated
Would you say developers will be obsolete? If so, when?
Juniors will be obsolete.
I read the description. What LLM was used with Aider? I assume the fine-tuned models from Cursor and Codeium were used, respectively.
All models were Claude 3.5 Sonnet
Yes. Even though Aider, Cursor, and Windsurf all completed the task successfully,
Cursor and Windsurf are way cheaper than Aider, and Windsurf is half the price of even Cursor.
So Windsurf is the winner here if you want the most bang for your buck.
I'm a newbie looking to start with this AI coding. I thought Aider was free (naturally you have to add your LLM), but you say it is the most expensive. I am very lost here in this sea of AI coders after only looking for the last 7 days, and there seem to be so many contradictions everywhere you look. But Aider the most expensive? I may have looked at the wrong thing then. Wow, this won't be easy, I guess. Rgds.
@@joeking5211 Well, if there's any consolation, I still stand by what I said. I'm relatively new myself and Windsurf does what I need it to do. But if you are super new,
you can also use something called "Google AI Studio" that allows you to share your screen with an LLM, and it can explain to you how to use the other LLMs you're looking at.
Well then... I will go with free, haha. Thanks for the video!
For the Cursor test I'd say the first 2 issues were with Sonnet, not Cursor. Cursor did all the edits correctly, but Sonnet made the actual errors in coding. Which was a bit surprising to me since you mentioned it was supposed to be the same model as used in Aider?
Well, if you knew anything, Mr. Keyboard Warrior... Cursor is responsible for setting up the system prompt,
which greatly affects Sonnet's logic.
The AI agent is the one responsible for setting up the system prompt. That's why there is a difference between them. So it's Cursor's fault, not Sonnet's, because they are all using Sonnet with exactly the same prompts, which are then enhanced by the agent itself. So it's Cursor's fault.
@@MeowEngineer Don't be a dick. Everyone's level of expertise is different.
I really appreciate your feedback @tobitege! IMHO, you're partially correct. Although as some others have stated, you do have to take into account that the performance of any coding assistant is the combination of the backend LLM's capabilities AND the way the coding assistant devs have designed and implemented the assistant. BTW, it's not just the prompts they've designed - it's also all the CODE they've written around all this stuff. If you'd like to get a sense of what I mean, browse aider's codebase: github.com/Aider-AI/aider. You'll find it's WAY more than aider just taking your prompts, wrapping them with aider's prompts and passing it all through to the LLM.
Also, if you pay attention to the errors each assistant made (other than the one API key error with Cursor, which I clarified was my fault not Cursor's), you'll notice a pattern: the errors were related to either a ChromaDB API issue or a BeautifulSoup API error. Now, I used Claude 3.5 Sonnet with all 3 assistants.
What does that tell you? Well, it's pretty clear that the root cause of those errors was in fact Claude. That's why I say that you're kinda correct. But it's important to also note that the exact same can be said of both aider and Windsurf.
Ok Mr. Keyboard Warrior
There is not that much between all the big assistants ('the many faces of Claude'), so it'll be interesting to see which one(s) emerge from the Darwinian stew they're all in at the moment. They surely won't all survive. Many will merge into others. A few will disappear.
I've tried them all and could feasibly go with any of them and be happy, but Windsurf has had the least amount of friction for me, as was seen in this test, and their pricing is the most competitive. Cursor has massive brand awareness, though. Aider with a large local LLM would be best of all, but my potato machines don't like that.
I really appreciate your feedback @GregDowns! I agree. There are far too many tools, with more and more coming out on a daily basis. The market will eventually winnow them down to a few - and the rest will be minor niche players.
The one we don't talk much about - mainly because, from a quality perspective, it's way behind these other coding assistants so far - is GitHub Copilot. Even though it's unlikely to be as top-notch as the Cursors and Codeiums and aiders of the world, there's just this tradition of C-levels and engineering leads at companies buying such "corporate-friendly" solutions. They see them as lower risk. In the long run, Copilot will likely win the day within the enterprise. That will take a bite out of the market as well. In my experience, it's the very rare CTO who asks the engineers "have you experimented with tools to do XYZ? Which have you found to be the most productive?" ☹
Which one is free to use?
Windsurf
@sambhavkhur
Ok thanks bro
You sound extremely intelligent, even if you're reading from a teleprompter. Amazing
Thank you for that @antoniofuller2331! Teleprompter?! What's a teleprompter 😉
Looks like there's a new kid on the block. Val Town has built a coder...called Townie.
It's not that impressive though. New, but underwhelming.
For some time, we're going to have some "new kid" every couple of weeks. Have you tried Townie? And other coding assistants? If so, what are your impressions so far?
Imagine actually learning how to code
I imagine spending two minutes creating this app but charging for two days, which I spend with your mom
I really appreciate your feedback @berserkerrxii5776! Oh, you GOTTA learn to code. And... you gotta still write some code and keep those skills sharp.
I didn't get into that in this video (every video must have a core message or two), but if you watch my other videos on this channel, you'll see that I STRESS this point. Just can't revisit every concept in every video.