I am guessing that in the future, oTToDev will simply pull all feature requests from the community, decide which ones to add first, and then build them on its own somewhere on some server 😂 where it runs as a dev contributor to its own project
DeepSeek, OpenAI, and Anthropic didn't work; there's no response at all (I entered the API key in the UI). On the other hand, Google and Groq worked. Does anyone know why? And does anyone have recommendations on the best LLM to use?
I believe you commented this on another video and I replied, but: Yeah sometimes LLMs just hallucinate and give bad commands/code. My favorite local LLM to use is Qwen-2.5-Coder-32b!
@@ColeMedin That's a great choice; I did look at the scores of this LLM and it's really impressive. But it won't run on my M1 Mac. Is there a way I could run it on it? Otherwise I need to run it on a dedicated server. Great work from your side too man, keep it up 💯
I've tried many times to create apps with bolt.new, but coming from a Python background I have a lot of trouble linking the frontend to the backend. I'm wondering if someone can share a tip or a video that could help me improve my development process, thx
@@ColeMedin Actually guys, I have used Cursor to do just that. I actually did it on day 1 of downloading the ottodev repo (when it was still called any-new-llm), and I managed to make it add the project upload feature since I needed it so badly. I can send you the codebase or just the files by email, Discord, or whatever community channel if that can make the process of getting this feature to everyone much faster. Even though I am not a contributor, if I can contribute and help for free I will. Answer if you want me to send the code over, Cole.
@northloo that would be absolutely fantastic! Would you want to connect over email or Discourse? My email is cole@dynamous.ai if you want to connect there! This is a much needed feature and I would give you a huge shoutout if it works great :D
I'm also seeing this behaviour. I have Ollama running on my desktop and bolt.new.any-llm running in a Docker container. When I select Ollama from the dropdown it shows the models I have access to in the other dropdown, but I consistently run into an error whenever I prompt the model.
@@ColeMedin Thanks for replying. It WAS giving me a popup error in oTToDev saying "Error Processing Request" (which I can now no longer replicate for some reason, but it's similar to issue #297 that's currently raised). That has miraculously fixed itself and the model will now initiate. However, the issue I have now (using Ollama and Qwen 2.5 32b) is that it doesn't want to actually code anything; I just get a standard LLM response back, saying something like "to build a basic inventory system you should consider the following things...". It doesn't initiate a build or install packages or anything lol
That's super weird - it's an issue we had with Ollama at one point but not anymore because we changed the config to increase the context limit for Ollama. Are you sure you have the most up to date code and have the container rebuilt?
Which .sh file are you referring to? Bindings.sh? That should remain unused for Windows. I install on Windows myself without issue! If you could share the error that would be great.
I'd like an easy way to interact with my local storage and an autofix feature that works with ottodev. I'd also like you to show a list of the best models that work fantastically with it, like Claude.
@@ColeMedin Sometimes the dependencies don't install properly, and sometimes it says I have lock files. I don't really know why, whether it's from my PC or the files. It's really frustrating me.
No, the title is wrong; we are facing a lot of problems, please check the community as well. 😔 We are unable to use Qwen 3.2, and there are other problems like not getting a response.
I'm definitely engaged in the community along with our maintainers to figure out all of this! A lot of the issues are just because LLMs hallucinate though, not because the platform is broken. But still things we are looking into!
Have you noticed how weekly, sometimes daily, there are new releases and new adaptations? It's literally chasing a plateau or a peak now. I could be wrong, but by the end of this week there will probably be something better at our fingertips to compare it to. A couple of days ago Qwen 2.5 was new.
Guys, don't get caught up in this never-ending loop of releases or you will never get to work. Pick what works best for your workflow and stick to it. It's all marketing and companies trying to top the last one with no real work put into it. 3/4 of these apps don't even work as advertised; it's like when video games come out half-baked... Windsurf looks great, but have you all actually used it? It's a great concept, but it does not work as advertised at all. Lots of stupid and very annoying bugs.
I agree with @EduardsRuzga and @northloo, though I have had a good experience with Windsurf too, so I think it's awesome! But Bolt.new offers an unparalleled experience, being able to develop right in the browser.
Hey bro, get back to me, I sent you an email. I have the code ready for the drag-and-drop-on-prompt feature on Bolt you are working on, with the readme file.
Can anyone help me? While trying to build with pnpm I get this error:
bolt@ build C:\Users\warsh\Desktop\Bolt
> remix vite:build
The command "remix" is either misspelled or could not be found.
ELIFECYCLE Command failed with exit code 1.
WARN Local package.json exists, but node_modules missing, did you mean to install?
I'm quite new to this. I use Bolt.new to build apps and have now set up oTToDev with the Qwen 2.5 7b model, but the output I get is really basic compared to Bolt.new, which even builds good-looking UIs. Is there any way to handle this?
Well a 7b parameter model is pretty small so it's hard to expect it to perform nearly as well as Claude 3.5 Sonnet! The main advantage is it's free and fast, but the results won't be the best for bigger apps!
BOLT has added a DIFF feature to rewrite only the part of the code that needs to be modified, rather than rewrite the entire page's code, which uses a lot of tokens. This could be interesting. They say it's 80-90% faster and cheaper.
That's a fantastic suggestion. Would help develop projects at cheaper cost even with frontier models like Claude 3.5 sonnet.
Is this in prod already? They were testing it with an alternative url
Targeting and/or locking files is very important, that will end up on the roadmap as a priority for sure. Join the community to suggest and keep track of these desired features. :)
Yes this is something they didn't share with their open source version unfortunately, but we definitely want to add it in asap to oTToDev!
@@ColeMedin What I'll tell you and the community is to fine-tune some good local model for the specific tool calls that work well with ottodev and upload it to Ollama, rather than relying on prompting. This way you can teach the model how to approach problems well and which tool to use at any given moment.
Thanks a ton @cole and all the contributors for making oTToDEV better and better 👏🏻👍🏻
You bet!!
For better access for non-coder users, it would be very interesting if the developer compiled the code into an .exe file.
I will definitely be contributing monetarily to the open source project bounty!
Would be cool to integrate with VSCodium!
Thanks so much Marc, that means a lot and I appreciate your support! Yes an integration with VSCodium would be sweet!!
I so appreciate that Cole is going through the contributions and highlighting them along with the devs. This is what open source is about!
Thank you! :D
Can't believe this project expanded to this extent. Been following this project from the start. Just imagine it entering Kickstarter!
Hello, I wanted to thank you, and the whole community, for your commitment and your open source work.
I'm French and I'm sending you lots of strength and love from the old continent.
Peace.
For anyone wanting to pull more models the newbie way 😅
Access the running container:
Use the docker exec command to open a shell inside the container, then pull, list, and run models from there:
$ docker exec -it ollama bash
$ ollama pull <model-name>
$ ollama list
$ ollama run <model-name>
Might help; wish I'd known this sooner. I didn't know how to do it via Portainer, updating the docker-compose file every time and bringing the stack down and up. Found out my 4060 runs 7b models at 93%, so I got a bit carried away with model testing. Also a quick tip: get the smaller models to build things out, then use Claude to pimp it 😁
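As an aside, the same pull can be scripted without entering the container at all, since Ollama exposes an HTTP API on its default port. A minimal Python sketch, assuming a stock Ollama install (the request shape follows Ollama's `/api/pull` endpoint; the model name is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port

def build_pull_request(model: str) -> urllib.request.Request:
    """Build the POST /api/pull request that asks Ollama to download a model."""
    body = json.dumps({"name": model}).encode("utf-8")
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/pull",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Actually sending it requires a running Ollama server:
# with urllib.request.urlopen(build_pull_request("qwen2.5-coder:7b")) as resp:
#     for line in resp:  # the endpoint streams JSON progress lines
#         print(line.decode().strip())
```

Handy if you want to pull a batch of models from a script instead of typing them one by one in a shell.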
If you hit the 404 that Cole specifically encounters while building a React app, most of the time it boils down to index.html missing at the root, and after that ensuring that there's a script element in index.html pointing to the root JavaScript include. Thanks everyone for their support of the project ❤
Okay yeah that makes sense, thanks Chris!
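To make that concrete, here is a minimal sketch of the index.html described above, assuming a Vite-style React project (the entry path src/main.jsx is the common convention and may differ in your scaffold):

```html
<!-- index.html at the project root -->
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>App</title>
  </head>
  <body>
    <div id="root"></div>
    <!-- script element pointing at the root JavaScript entry -->
    <script type="module" src="/src/main.jsx"></script>
  </body>
</html>
```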
I'm still a Bolt fan because it's web-based, so I can work anywhere. Plus, sharing my projects for feedback is a breeze with its simple deployment feature.
Yes that's totally fair! Bolt.new is incredible for convenience!
I'm loving the direction of creating an open source IDE like ottodev. My only reservation is the relevancy and speed at which the open source community can keep up with the advancing tech and fast-evolving architecture.
Windsurf is demonstrating the power of an agentic system (ie, achieving AUTONOMOUS development). These are areas of AI that I am currently focusing on because of the demonstrable performance increases. But this leads to a larger question of vision. I've worked at too many organizations where the thread is lost and the product becomes just a mashup of features instead of a thoughtful product with intent.
For the record, Cole, you're doing an amazing job rallying the dev community to participate in these projects. Just providing some feedback and food for thought. Keep up the great work!
I really appreciate your thoughts and kind words here, thank you very much! I totally agree that there is always a risk that the competition beats us out or the project just becomes a mash of a bunch of PRs.
Though things like Windsurf are closed source, so if it's the best we can still be the best open source and there is a lot of value in that for people. Also I am doing a lot of planning and making some partnerships to make sure we can really drive this project forward for the long haul. More updates on that coming soon!
I'm following this community and can't wait for the file lock implementation 🥳
Great job Cole! You and the community are doing Gods work! Thank you 🙏
Jason
Thank you so much! :D
OpenRouter's model selection dropdown should support a regex model search as there are so many models. See what Cline does for an example.
This is a fantastic suggestion, thank you! I totally agree.
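The filtering logic behind such a search is tiny. A Python sketch of the idea (the real dropdown would be TypeScript in the app; the model names below are illustrative):

```python
import re

def filter_models(models: list[str], query: str) -> list[str]:
    """Case-insensitive regex filter over model names, falling back to a
    plain substring match when the query is not a valid regex."""
    try:
        rx = re.compile(query, re.IGNORECASE)
    except re.error:
        return [m for m in models if query.lower() in m.lower()]
    return [m for m in models if rx.search(m)]

models = [
    "anthropic/claude-3.5-sonnet",
    "qwen/qwen-2.5-coder-32b",
    "meta-llama/llama-3.1-70b-instruct",
]
print(filter_models(models, r"qwen|llama"))
# ['qwen/qwen-2.5-coder-32b', 'meta-llama/llama-3.1-70b-instruct']
```

The substring fallback matters for a UI: a half-typed regex like `claude[` shouldn't blow up the dropdown mid-keystroke.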
Please make a Straico integration too; it's like OpenRouter, you can access many models using that API. Thanks!
I haven't looked at the bolt code at all, so I don't know how difficult it would be to do this, but I'd love to see the ability to select multiple models to be used for different situations. ie. use a local model for simple LLM requests and only go to something like Claude Sonnet when necessary.
I love this idea Matthew! It wouldn't be super easy but also it's definitely doable!
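For a rough sense of what that routing could look like: a Python sketch where a trivial heuristic decides between a cheap local model and a frontier one (the model names and the heuristic are placeholders; a real router might use a small classifier instead):

```python
def pick_model(prompt: str, length_threshold: int = 200) -> str:
    """Route a request to a cheap local model or a frontier model.

    Hypothetical model names; the heuristic (prompt length plus a few
    keywords) is a stand-in for whatever a real router would use.
    """
    complex_markers = ("refactor", "architecture", "debug", "full app")
    looks_complex = (
        len(prompt) > length_threshold
        or any(marker in prompt.lower() for marker in complex_markers)
    )
    return "claude-3.5-sonnet" if looks_complex else "qwen2.5-coder:7b"
```

Simple requests stay local and free; only the heavier ones spend frontier-model tokens.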
@MatthewChowns You can set up a swarm inside a RAG and then have each member of the swarm run a model. Then, as a "model", create an API or link the RAG to the API doc on ottodev. I think I explained it. I use n8n to create this flow, then use the flow as a model. I haven't 100% tested it, but it looks doable. I used Cole's local-ai-packaged and turned that into a model.
Cole, is oTToDev capable of connecting to an existing folder on your machine, or at least a repository on GitHub where you can alter the project files using a LLM? ... this would be life changing ...
I asked the same question, and the answer is: not yet.
It's on the list of features to be implemented or in progress, but not ready yet.
Yeah as others said it's a high priority item we are working on developing!
Amazing as always! You and everyone are doing an awesome job.
Thank you so much!
When you change the logo, post a how-to video on updating. I will join in on the funding round to help keep the momentum going on this project. Super exciting, great job. Thanks everyone for the hard work. This could be like Blender/MakeHuman someday with your enthusiasm.
Thank you so much! I appreciate it a ton! :D
Many use cases start with a screenshot, so adding images should be supported as a priority.
Noted, we want to add that. A bit challenging, as not all models support it.
@@EduardsRuzga I'm looking forward to Mixture of Agents support, so the image recognition could be done with Llama 3.2 and the code development by, say, Qwen2.5-coder:30b.
@@vannoo67 You can already do it somewhat manually now by switching models.
It's just that for models that support image inputs, we don't yet support sending the image.
Otherwise, it's better to have a model that both understands images and writes code as a starter for such things. Why?
Because all an image model can do is export a textual description of the image to pass to the text-to-code model.
A lot of information is lost in that transformation.
I would rather use Sonnet 3.5 first to write the first version of the code based on the image, and then let Qwen continue from there. That will produce better results.
@@EduardsRuzga However, conversely, if each specialty (UX, UI, front-end coding, back-end coding, code validation, ...) is handled by an optimized agent (not necessarily, but possibly, different models) with well-defined interfaces between them (like specifications passed between human specialists), the final result will probably be more robust and could conceivably be generated on lesser hardware.
I suggest that as we add new features to the project, developers either record or provide videos of their work on each part, which can then be linked to a project timeline hosted on a centralized platform like the OttoDev community. This approach would create a single, organized space where developers receive well-deserved recognition and promotion while also serving as a valuable resource for newcomers. They can learn from the steps we've taken, understand the thought process behind each feature, and quickly get up to speed with the project.
I like this. But not everyone can or wants to be on video and make videos. We'd need to communicate that well.
Likely won't happen. You want the bar for contribution to be as low as possible, as it's already a fair bit of work to learn and contribute to a project.
I am doing features and videos on updates and am open to helping those who want to be on video. I would love to feature the people contributing more.
@@EduardsRuzga I follow your work and it's really appreciable.
All I think is that in such an open source project, the only thing a developer wants, apart from their own use, is recognition, and it helps a lot with confidence. And as it's helpful for the community too, it's a good step.
Although GitHub serves the same purpose, as a moderator, when you include parts of work done by others (I understand you have to go through all the forks, which is a hectic part), you have to be very precise about whether to include something in the project or not. Hence the final product, which is awesome, needs to be served to everyone as learning material as well.
It helps learners save time. Who knows, if you solve this problem it could be something new.
@@NeoNerdDeveloper I actually want to feature the people behind features more. Currently they don't have channels, profiles, or anything; all I can share is their GitHub profiles
and links to PRs.
And I need to put in additional work to reach out to them and hope they answer in time for recording, if they want to be featured in some other way.
I like your thinking and share it; I was thinking about it too. I'm just operating under the tight time I have for working on an open source project and making videos while having kids and a full-time job :)
I need help from contributors in terms of how they want to be featured.
I will start asking for that in PRs that are merging.
Can you do a video on the best way to upgrade? Would love a way to add documentation from a URL as context. It would give it more up-to-date info when building things out.
You mean the best way to add features through PRs in GitHub? I will certainly be making a video on that soon!
@ColeMedin Exactly 💯
Nice video! Work on the LMStudio option; it's a good alternative to Ollama.
Thanks and yes that's important!
can you set up a donation page so we can support the project??
Coming soon! Couple options for funding we are working through right now actually!
I just discovered your channel and your work ! thank you continute like this ! I follow from France :D
Awesome! Thank you!
AHHHHH THANKS SO MUCH!!! I'VE USED THIS AND IT'S GREAT!!!
Wondering what your take is on Windsurf and how you could incorporate their agent-based flow into the coding style? Have you had a chance to try it?
I tried Windsurf, and it looks insane on video and on paper, but after using it I find it doesn't run nearly as well as advertised. Lots of bugs, infinite processing loops, unnecessary interpretation and processing... and making Cascade create and run bash commands to create files and such rarely works 100%; at least it never did for me, not even once. Although I have to say we are on the right track. If Windsurf gets a few updates and irons out some bugs and features, it can easily blow literally everything else out of the water.
@@northloo Perhaps for that use case, but for me (after doing the project scaffold myself because it was hanging up on that), it was super easy with Svelte and Tailwind. Amazing speed to get things done.
@@jsward17 What do you mean by scaffolding it yourself? You created the folder structure and then used Windsurf, or you used Windsurf first and then scaffolded on top of it?
I've tried out Windsurf a bit and I like it! I'm surprised @northloo has had some trouble with it to an extent, but LLMs are still prone to hallucinating, so to some degree it comes down to the model no matter how good the platform is.
@@northloo the initial project start where you install modules.
use boilerplates and templates to save compute
Is there a difference between the prompt engineering techniques of the main bolt.new and the open source bolt.new? I consistently get much better results with the original bolt.new website vs the open source project, even if I use the new Claude Sonnet 3.5 v2 model through OpenRouter.
Interested to understand why too
Yes, there is a difference. StackBlitz is not merging changes back to the open source version anymore.
Yes, there are major differences. All their updates to the smaller but crucial details, like prompt engineering, token usage, and project manipulation, are closed source unfortunately. As for the differences themselves, I don't work for them and have only watched their livestreams, so I can't say exactly everything that is different. I know that they have more efficient prompts, an updated coding methodology, and better token usage efficiency, which is why you get better results when using the original version right now.
Yeah, this is one of the secret benefits of Bolt: they often create really great UIs etc. that others can't, and it's not just the LLM.
@@tradingwithwill7214 WebSim is also good at it; we will get around to adding a prompt library.
Best vibes! Congrats on starting this off!!!
Bro I swear to god that I was sure you were using a hair net when the video started, I had to look three times to be sure I wasn’t hallucinating or something ❤
Haha why is that? lol
Wow, the extensions are going to be really good.
I agree - I can't wait!!
Congrats Cole on this hard journey.
Any dev around who could tell which LLM is doing the best with this? Claude 3.5 Sonnet? Any other with the same results but cheaper? What about Llama 70B, which is IMO the best that can be run at home?
Thank you!
Llama 70b hasn't done the best for me. I've had good luck with Qwen-2.5-Coder-32b and DeepSeek-Coder-V2-236b!
Work on a diff feature to rewrite or update only the necessary part. You'd just need to play around with the system prompt.
Yeah I think for a v1 of this you could just work with the system prompt, but the diff features in the commercial Bolt.new are definitely more complex!
@@ColeMedin
It probably reads all the code and then uses a tool to insert the specific part of the code at a specific line.
I once tried to create such a tool, but it's in Python.
Can you make a Python bridge to Bolt so that we can leverage the power of Python in this powerful assistant?
That might be challenging but also you could use AI to convert it to JS!
@@ColeMedin I created a simple agent in Python that uses Qwen 2.5 Coder and works entirely on diffs. It is performing amazingly. It wasn't working properly with Aider, Cline, or Bolt, so I tried to write my own agent, and it seems it broke the record.
Working with diffs is not that complex and is pretty smooth if we also use AI for that part.
Bolt with diffs would be kind of amazing.
Now I am trying to implement chain of thought.
Woah that's amazing! Is that something you are planning on sharing?
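For anyone curious what the diff/search-and-replace approach described in this thread can look like, here is a minimal JavaScript sketch. All names are illustrative and not taken from oTToDev or the commenter's agent; a real system would also handle fuzzy matching and multiple edits per file.

```javascript
// Minimal sketch of diff-style editing: instead of regenerating a whole file,
// the model emits a "search" block and a "replace" block and we patch in place.
// All names here are illustrative, not taken from oTToDev or any real agent.
function applyEdit(source, searchBlock, replaceBlock) {
  const index = source.indexOf(searchBlock);
  if (index === -1) {
    // The model's search text must match the file exactly, or we reject the edit
    throw new Error("search block not found in source");
  }
  return (
    source.slice(0, index) +
    replaceBlock +
    source.slice(index + searchBlock.length)
  );
}

// Example: change one line without touching the rest of the file
const file = "function greet() {\n  console.log('hi');\n}\n";
const patched = applyEdit(file, "console.log('hi');", "console.log('hello');");
console.log(patched.includes("hello")); // true
```

The big token savings come from the model only emitting the two small blocks rather than the whole file, which matches the 80-90% speed/cost claim for the commercial diff feature.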
Not really sure why 'Ability to revert code to earlier version' is not higher up the dev list. :)
Yeah fair, I know a lot of people want it and we are looking into how to do it best!
Can we have a browser version of it, so that we can get it auto-updated automatically? Because running locally, we might have to reinstall it again and again.
Yes I am planning on doing this soon!
It's really good.
But does it support custom API endpoints 😊
There is an OpenAI-like provider option.
Include image upload so we can upload a UI image and have it reflected in the code. 😊
Interesting to not see a reaction to the marketplace idea yet. Obsidian has done a good job with their plugin marketplace. Not as mature, but the Open WebUI Tools and Functions are another example. I could also imagine a different marketplace for low-code developers like me who get stuck and will pay to get unstuck.
Hi, not sure if anyone else has had this problem, but if you want to run oTToDev from another machine you have to change this line near the beginning of the package.json file to expose it to other network interfaces. I use ZeroTier and connect from another machine.
"dev": "remix vite:dev --host"
With all of the new tools and features coming out with AI, how do you guys navigate security? Do you use any tools to scan etc.?
Is this somewhat focused on web development? Or is it more of a generalist development app?
Node.js only, no Python or anything else yet.
There are some ideas for how to expand.
I get the message 'prompt is too long: 206920 tokens > 200000 maximum' with Anthropic after building a large app. Any way to fix this?
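One common workaround for this class of error (not a built-in oTToDev feature, just a sketch) is to drop the oldest chat messages so the conversation stays under the model's context limit. This example uses a crude characters-per-token estimate; a real implementation would use the provider's tokenizer, and would usually always keep the system prompt.

```javascript
// Sketch of trimming older chat messages to stay under a model's context limit.
// Uses a rough ~4 characters-per-token estimate; names are illustrative.
function trimHistory(messages, maxTokens) {
  const estimateTokens = (text) => Math.ceil(text.length / 4);
  const kept = [];
  let total = 0;
  // Walk from the newest message backwards, keeping as many as fit
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (total + cost > maxTokens) break;
    kept.unshift(messages[i]);
    total += cost;
  }
  return kept;
}

const history = [
  { role: "user", content: "a".repeat(400) },      // ~100 tokens
  { role: "assistant", content: "b".repeat(400) }, // ~100 tokens
  { role: "user", content: "c".repeat(400) },      // ~100 tokens
];
console.log(trimHistory(history, 250).length); // keeps the 2 newest messages
```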
I am guessing that in the future, oTToDev will simply pull all feature requests from the community, decide which ones to add first, and then build them on its own on some server 😂 where it runs as a dev contributor to its own project.
Which Ollama model is best for this kind of performance, qwen2.5-coder? If so, what is the minimum parameter count required?
Yeah for local LLMs, Qwen-2.5-Coder-32b seems to be doing the best right now! Generally you want a 3090 to run it.
DeepSeek, OpenAI, and Anthropic didn't work, there's no response (I entered the API key in the UI). On the other hand, Google and Groq worked. Anyone know why? Does anyone have recommendations on the best LLM to use?
I believe you commented this on another video and I replied, but:
Yeah sometimes LLMs just hallucinate and give bad commands/code. My favorite local LLM to use is Qwen-2.5-Coder-32b!
@@ColeMedin That's a great choice, I looked at the scores of this LLM and it's really impressive. But it won't run on my M1 Mac, is there a way I could run it? Otherwise I need to run it on a dedicated server.
Great work from your side too man, keep it up💯
I would use OpenRouter if you can't run it on your computer! Thanks for the kind words man :D
Can anyone help with how to use it with LM Studio?
Is there any possible alternative to Chrome Canary browser for Linux users?
You don't need to use it if you are not developing.
Use the build + start combo. It works in any browser.
I'd say Firefox is good too!
I tried Bolt.new many times to create apps, but with a background in Python I have a lot of trouble linking the front end to the back end, and I'm wondering if someone can help me with tips or a video that can help me improve my development process, thx.
Need the ability to save states or versions of a project and be able to revert to them when Bolt eventually fucks everything up.
Yes that is one of the things on our list of priorities!
Cursor really messes up with making web front end for my python scripts that work well in terminal. What is a good platform to use for UI front ends ?
oTToDev/Bolt.new is the best for that honestly! Otherwise you could try V0 as well.
Is there a discord channel to join the community?
We have a Discourse community!
thinktank.ottomator.ai
So once oTToDev can load local files, could we use oTToDev to extend oTToDev?
Haha yes!!
@@ColeMedin skynet inc
Yeah, I wanted to try that for a long time )))
@@ColeMedin Actually guys, I have used Cursor to do just that. I did it on day 1 of downloading the oTToDev repo (when it was still called any-new-llm), and I managed to make it add the project upload feature since I needed it so badly. I can send you the codebase or just the files by email, Discord, or whatever community channel, if that can make the process of getting this feature to everyone faster. Even though I am not a contributor, if I can contribute and help for free I will. Answer if you want me to send the code over, Cole.
@northloo that would be absolutely fantastic! Would you want to connect over email or Discourse? My email is cole@dynamous.ai if you want to connect there! This is a much needed feature and I would give you a huge shoutout if it works great :D
Val Town AI has brought in some mean competition... making Sonnet 3.5 free, apparently.
Is the project code hosted in the stackblitz server ?
Nope, you can run it all locally!
Could you or somebody please make a tutorial about exporting a project from the commercial Bolt to the local Bolt? Thank you.
That isn't possible yet but I certainly will make that tutorial once it is! That is something we are working on with loading projects into oTToDev.
Can you make oTToDev compile code for MQL5? Just to make sure it doesn't have errors.
Right now it's just for Node since that is what Bolt.new is, but that would be cool to implement at some point!
@ another lucky day of cole answering me lol
Good luck on your mission
Haha thank you very much!
I tried to set it up as a Docker container and no matter what I try I cannot get it to work with my Homebrew install of Ollama.
I'm also seeing this behaviour. I have Ollama running on my desktop and bolt.new-any-llm running in a Docker container. When I select Ollama from the dropdown, it shows the models I have access to in the other dropdown, but I consistently run into an error whenever I prompt the model.
I'm sorry! What is the error message you are getting?
@@ColeMedin Thanks for replying. It WAS giving me a popup error in oTToDev saying "Error Processing Request" (which I can now no longer replicate for some reason, but it is similar to issue #297 that's currently raised). That has miraculously fixed itself and the model will now initiate. However, the issue I have now (using Ollama and Qwen 2.5 32b) is that it doesn't want to actually code anything. I just get a standard LLM response back, saying something like "to build a basic inventory system you should consider the following things...".
It doesn't initiate a build or install packages or anything lol
That's super weird - it's an issue we had with Ollama at one point but not anymore because we changed the config to increase the context limit for Ollama. Are you sure you have the most up to date code and have the container rebuilt?
@ hey! I’m av1155 from the GitHub Issue #353 haha
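For others hitting Ollama connection errors from inside Docker: a frequent cause is the container trying to reach localhost, which points at the container itself rather than the host machine where Ollama is running. A sketch of the fix, assuming the OLLAMA_API_BASE_URL variable name from the repo's .env.example:

```
# .env in the bolt.new-any-llm project
# host.docker.internal resolves to the host machine from inside Docker Desktop;
# on plain Linux you may need --add-host=host.docker.internal:host-gateway instead
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
```

After changing this, the container needs to be rebuilt or restarted for the new value to take effect.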
When can I use it and how?
You can use it now but pulling the git repo I linked in the description! Instructions are there in the repo too.
I'm having trouble installing on Windows because there's a .sh file dedicated to Linux environments. How do you solve this problem? Thanks in advance!
Which .sh file are you referring to? Bindings.sh? That should remain unused for Windows. I install on Windows myself without issue! If you could share the error that would be great.
Great work! But you do need funding to ensure this project continues. Do a Kickstarter + post an ERC20 address for people to donate.
Thank you! Yes we have a couple options for funding right now that I am working through, what you suggested is certainly on the table!
I would like to be able to import GitHub repos and edit them with oTToDev. How is this possible?
This is a feature we are working on adding in!
Can I have OttoDev generate the files and then move them to Cursor? Can OttoDev build a PHP app?
Yes you can build an app with oTToDev, export it, then move it into Cursor! It doesn't do PHP unfortunately though.
I'd like to interact with my local storage easily, and I need autofix to work with oTToDev. Also, I want you to show a list of the models that work fantastically with it, like Claude.
Yeah I love the idea of having a running list of the models that work the best!
@@ColeMedin Mistral has a newly updated model with canvas, web search, and image generation.
Yeah it's fantastic!
Excellent!
Is there a link to the discord?
We have a Discourse community!
thinktank.ottomator.ai
Is there an option to import a repository from GitHub?
That is something we are looking into adding!
Great stuff
You need to change the logo of Bolt or add something next to it to tell viewers you use oTToDev and not Bolt :)
Thanks! Yes I agree, there's a lot I need to rebrand still haha
Can we run oTToDev online?
Yes you can self-host it if you want, similar to how you would run it locally!
Does ottodev have a discord community?
We have a Discourse community!
thinktank.ottomator.ai
@@ColeMedin Thank you for your response Master.
So this doesn't support o1 use yet?
Not yet! o1 doesn't support things like system messages that are used in Bolt.new/oTToDev.
Please, who can help me figure out why it's not working on my laptop? I really need help getting this running on my PC.
What problem do you get? I have some videos on my channel.
I'm curious what your error is as well :)
@@ColeMedin Sometimes the dependencies don't install properly, sometimes it will say I have lock files. I don't really know why, whether it's from my PC or the files. It's really frustrating me.
@@EduardsRuzga I just watched your video, and the error you had is the same one I'm having.
Seems like maybe that is the LLM hallucinating bad commands?
I have an idea
My idea is to make an affiliate link or something like that.
Give users more prompts when they share the link with their friends.
Interesting... could you expand on that?
@ColeMedin
Yes. But tell me or explain how to do that.
Do it for Android mobile users.
Why are you trying to code on a phone man.... get a real computer.
keep it up
No, the title is wrong, we are facing a lot of problems, please check the community as well. 😔 We are unable to use qwen 3.2, and we also have other problems, like not getting a response.
I'm definitely engaged in the community along with our maintainers to figure out all of this! A lot of the issues are just because LLMs hallucinate though, not because the platform is broken. But still things we are looking into!
🤯😮💨
i absolutely love what you do man but Windsurf is taking over the AI IDE game
Have you noticed how weekly, sometimes daily, there are new releases and new adaptations? It's literally chasing a plateau or a peak now. Could be wrong, but by the end of this week there's probably something better at our fingertips to compare. A couple of days ago Qwen 2.5 was new.
@YusufEbr yes it's a rapidly evolving field, AI is now what the internet was in the early 90s
Windsurf is targeting developers. Bolt potentially targets non-developers.
Guys, don't get caught up in this never-ending loop of releases you will never get to work with. Pick what works best for your workflow and stick to it. It's all marketing and companies trying to top the last one with no real work put into it. 3/4 of these apps don't even work as advertised; it's like when video games come out half baked... Windsurf looks great, but have you all actually used it? It's a great concept, but it does not work as advertised at all. Lots of stupid and very annoying bugs.
I agree with @EduardsRuzga and @northloo, though I have had a good experience with Windsurf too, so I think it's awesome! But Bolt.new offers an unparalleled experience, being able to develop right in the browser.
With all these new changes lol, do we need a new how-to-install video???😂
Honestly yeah I probably will make that at some point haha
Awesome
Hey bro, get back to me, I sent you an email. I have the code ready for the drag-and-drop-on-prompt feature in Bolt you are working on, with the readme file.
Sorry I am swamped with emails right now, I'll take a look soon!
Can anyone help? While trying to build with pnpm I get this error: bolt@ build C:\Users\warsh\Desktop\Bolt
> remix vite:build
Der Befehl "remix" ist entweder falsch geschrieben oder konnte nicht gefunden werden. (i.e., the command "remix" is either misspelled or could not be found)
ELIFECYCLE Command failed with exit code 1.
WARN Local package.json exists, but node_modules missing, did you mean to install?
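The "remix" not-found error together with the WARN line about node_modules missing suggests the dependencies were never installed, so the local remix binary doesn't exist yet. A likely fix (a sketch, assuming a standard pnpm setup):

```
# Install dependencies first so the local "remix" binary exists in node_modules
pnpm install

# Then run the build again
pnpm run build
```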
I'm quite new to this. I use Bolt.new to build apps and just set up oTToDev with the Qwen 2.5 7b model, but the output I get is really basic compared to Bolt.new, which even builds good-looking UIs. Is there any way to handle this?
Well a 7b parameter model is pretty small so it's hard to expect it to perform nearly as well as Claude 3.5 Sonnet! The main advantage is it's free and fast, but the results won't be the best for bigger apps!
I seem to be unable to use my OpenAI or Anthropic API key. Both the .env and the UI return an error. Not sure why.
Hmmm, what is the error you get?
@@ColeMedin There was an error processing your request: An error occurred.
Thanks so much @cole and @wonderWhy
You bet!
Incredible work 👍🤍 How can we set this up locally?
Thank you! I cover that in this video:
ua-cam.com/video/31ivQdydmGg/v-deo.html
Ok, for some reason I am getting errors saying "error processing request", and I am using Ollama.
Hmmm... generally there is a more helpful error message in the terminal where you started the site. Do you see anything there?
@ Well, I fixed it, apparently it was something with mismatching ports.