Join the bolt.diy community over in the oTTomator Think Tank!
thinktank.ottomator.ai/
I hope I can find an answer there, because for the love of God, I can't run the damn thing. It's always "There was an error processing your request:" in the bottom right; even the console just shows "ERROR". No network logs, no console logs, sigh. Tried both the pnpm and Docker instructions to no avail, .env.local saved... nothing works.
My Ollama is WORKING 1000000% using pinokio/open-webui, so it's really on the bolt.diy side... I wanna try bolt so baaaadd.. :(
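When the UI only shows a generic error like this, one quick thing to rule out is whether bolt.diy can actually reach Ollama at the base URL it was given. A minimal sketch of the idea (the default port 11434 and the Docker hostname rewrite are assumptions about a typical setup, not bolt.diy's actual code):

```typescript
// Resolve the Ollama base URL bolt.diy should use. Inside Docker,
// "localhost" points at the container itself, not the machine running
// Ollama, so it has to be swapped for host.docker.internal.
function resolveOllamaBase(envUrl: string | undefined, inDocker: boolean): string {
  const base = envUrl ?? "http://127.0.0.1:11434"; // Ollama's default port
  return inDocker
    ? base.replace(/localhost|127\.0\.0\.1/, "host.docker.internal")
    : base;
}

// Running bolt.diy directly on the host: the .env.local URL is used as-is.
console.log(resolveOllamaBase("http://localhost:11434", false));
// Running bolt.diy in Docker: localhost gets rewritten.
console.log(resolveOllamaBase("http://localhost:11434", true));
```

Whichever URL comes out, requesting `<base>/api/tags` from the same context (host shell, or inside the container) should list your installed models; if that fails, the "error processing your request" is a connectivity problem rather than a model problem.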
Dude, you are reinventing the wheel, just make this thing a VS Code extension!
Boom - yes
It would certainly be cool to do this! Though as far as reinventing the wheel, I don't know of another open source AI coding assistant that runs right in the browser.
or even better as a PyCharm extension
Your enthusiasm is infectious.
I can't get it to work. I've tried multiple iterations of this but keep getting: "There was an error processing your request: An error occurred." I have renamed .env to .env.local and added my keys, but I still get this error in all browsers.
You are not alone! I have also tried to make it work; unfortunately, it has not.
Same thing bro, and I also get a "vite plugin" error, and the browser page is white, not even that cool purple color.
Me too, I tried 20 times!!!
We want a simple install setup; I can't make it work.
I agree, I struggle a lot even when following the steps; as a non-dev it's exhausting. Do you think it's possible for them to release something that verifies what your PC needs, plus an .exe of the software, without going through "rename to .env.local" and "pnpm run dev"? lol
@@pandamanou definitely, because even when it's installed it's a pain in the ass to update it.
One of my favorite YouTubers. Keep going man
Thanks Ramon!
Hey Cole, don't forget that the main thing is the coding power of the assistant in the end. Also, it needs to be easy for non-programmers.
Make it so non-programmers don't have to debug anything.
On Windsurf you can set context settings for the ultimate goal of your coding. Please include this, but the easy way, where you can give a chunk of context to make the AI assistant focus on the bigger picture and never lose track of it.
Please figure out the way, the modifications, and the best free LLM that suits bolt.diy and can build a full complex app or program, because at the end of the day, I don't think anyone right now can use it to make a complete thing. So this is the biggest obstacle, I guess.
Be strong and finish it, thank you for your work ❤
Images, yes, thanks. WRT the reasoning step, it could be good to have a pause there, in which the user can read the plan & either go ahead or modify either the prompt or the plan.
All I want to know is how your background looks so realistic. Tell us what sort of green screen, lighting, and app you use to make it look this good 😊
Haha good question! I use Nvidia Broadcast!
How did you know it wasn’t real? The chroma-keying is nearly perfect.
I had no idea 😂@@agentred8732
amazing stuff ! Thanks Cole
npm run dev doesn't work; it won't allow my program to run. npm is installed already. What am I doing wrong?
try yarn. worked for me after npm and pnpm failed
@@roscoevanderboom8449 will try mate cheers
Looking forward to using this. Love the energy! I think you meant \KAHN-bahn\. If you haven't read David Anderson's little blue book, highly recommend!
Haha you are right! I'm going to have to check out that book!
make sure you get a good deal. open source is pushing it forward.
The prompt library seems like something that has to be continuously developed and is not a goal with a finish line. Don't dilute your focus straight out of the gate; you need to set achievable goals. Pick the single best LLM, or the most popular, and work around that. Then, as niche users spend their credits, you can analyze the outputs and work on the library.
When I import a folder, unfortunately it does not import more than 1,000 files. How do I solve this?
Nice work! The token calculation is a great addition; I would love to see that also translated into a currency amount (i.e. Total tokens 6520 (~$0.27)).
Also to have it as VS Code extension would be amazing.
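The currency conversion suggested above would be a small calculation on top of the token counts bolt.diy already tracks. A sketch with placeholder rates (real per-million-token prices differ per provider and model and change often, so the numbers here are purely illustrative):

```typescript
// Illustrative placeholder rates in USD per million tokens.
// These are NOT real prices; check the provider's pricing page.
const pricePerMillion: Record<string, { input: number; output: number }> = {
  "example-model": { input: 3.0, output: 15.0 },
};

// Convert input/output token counts into an approximate dollar cost.
function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const rate = pricePerMillion[model];
  if (!rate) throw new Error(`No pricing configured for ${model}`);
  return (inputTokens * rate.input + outputTokens * rate.output) / 1_000_000;
}

// e.g. 5000 input + 1520 output tokens at the example rates → ~$0.04
console.log(`~$${estimateCost("example-model", 5000, 1520).toFixed(2)}`);
```

The only real work in such a feature is keeping the pricing table current, since providers revise rates frequently.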
I am using qwen2.5-coder:3b and cannot access the preview because my system has low specs. 🖥 The only model I can run is qwen2.5-coder:3b. Could you please help me create a bolt.diy setup on Google Colab? 🙏
Can you already ask questions and have Qwen answer? Because I'm having a problem with mine; it's always the "There was an error processing your request:" error.
May I ask what other things you did that are not part of the guide on GitHub? Because the pnpm and Docker approaches don't work for me; I think I'm missing something.
I have a suggestion. When writing code with bolt.diy, it always starts from the package.json file, but I only want it to edit a single file.
Hi, what's the difference with Lindy AI, please?
LM Studio is not loading models; in fact, there is nothing showing up in the dropdown... please look into the issue.
Brother, I am very grateful for bolt.diy, but there are many bugs for now. I can't even get it to run npm install or run the application 😢.
try yarn. worked for me after npm and pnpm failed
I'm constantly testing it to make sure everything is working after new features - and right now it is! What is the error you are running into with your setup?
I am having issues running on both Windows and Mac. For some reason, the API keys don't show up on either, even though the .env.local file is updated correctly. Secondly, it's inconsistent: the same project on Windows says the project is too large with 1000+ files, but on Mac it loads the project. However, it doesn't work on either. It just doesn't reply, even if I manually add the API key on top of adding it in .env.local.
Sorry you are running into issues! For the Windows versus Mac issue, is that for importing local projects? And when you say it doesn't reply, is there an error message you get with that?
@@ColeMedin Yes when I am importing a folder and no, nothing happens. It looks like it's thinking but its just stuck there.
@@Jamesdean16 Gotcha - usually this means the LLM hallucinated bad code or commands. And with that there is usually an error in either the bolt terminal or in the developer console in the browser (F12 or fn + F12 to view that).
You are nailing it bro
Why don't you try an LLM network? By connecting different LLMs through roles, you could create some sort of LLM panel that would collaborate and check as it generates. We could try prompt pushing: a decentralised mechanism for LLM linking in terms of information analysis and prompt generation. That's what I think about the last part.
Can you add an "All Auto" option so it automatically takes a screenshot of the web app after each step and sends it to the LLM as the result? Also, could you include a mechanism to test the web app with Claude AI computer use?
So it is not an open source version; it is an open source fork where the closed source version doesn't contribute?
You guys rock!❤
We need a tool similar to a screenshot, but specifically for selecting elements on a page, like in v0.
Great idea! I may add something like this soon.
@@sqyer Thank you
Please fix the lm studio integration.
@colemedin I believe it would be possible to have an Ollama model that's hosted locally, where you can get a web API to add to your model, and call Llama models from your localhost, which you can host on a VM. I am talking about using LM Studio, my VS IDE for AI 😋
Is this better than Cursor? Or is better even something that makes sense?
I love using bolt.diy together with Windsurf/Cursor! Build the initial app in bolt.diy and then bring it into Windsurf/Cursor to finish up the frontend and build out the backend!
@@ColeMedin that's interesting. What would be the purpose of using Bolt, and then another IDE?
@Human_Evolution- I'm doing the same thing. Bolt is quick and easy for getting your project started with all your files. It's also really good at the beginning, on smaller projects, for doing your UI. Bolt runs into problems as you get to medium-sized projects. Download it, open it in Windsurf, continue.
@@ColeMedin why?
One-click Supabase or Google Firebase backend integration would be great.
Running on my new SSD 🤩
Great work bro, been following for a long time...
Thanks so much for the support, I really appreciate it!
The key is stability. Bolt.new eats credits with errors, so bolt.diy requires stability. Ensuring stability, plus a guide to a system's capability, is key.
How do I install it?
I'm all about what you're doing, but every time I try to use the tool it's not functional. The code wasn't even writing to the IDE last time. I feel like making it function should be more of a focus.
It's always functional for me! We do extensive testing for each release and after bringing in PRs. What is the error you are seeing right now? And for blank previews, that is an issue with the LLM rather than the application. Of course we can continue to work on the prompting, and we are! But at the end of the day we are at the mercy of the LLM, which can be tough when using smaller ones!
I tried it. The preview didn't work. It worked, then it didn't.
What is the difference from their private GitHub?
To be honest, if StackBlitz owns it now, is the team just free labor? I really hoped that this could have truly been open source and owned by the community. Nothing against StackBlitz (they made the original open source) nor the great work by the community, but I'm not sure this makes sense for many contributors. Hope I am wrong.
StackBlitz doesn't own it! It's still open source owned by the community just as much as before. The only thing that changed was the location of the repo, but that was just to have their stamp of approval as the official open source bolt.new, not a transfer of ownership in any way!
Regardless of the location of the repo all the contributions are still public, so it's not like this partnership is allowing them to tap into us more or anything. Unless they start to direct what we create, which they are not :)
Nice work ColeMedin! I hope you create the best IDE and the best extension for VS Code too, so all developers will convert to your tools instead of the Windsurf IDE. And don't forget to add a donate button!
Can you add Flowise as a provider? Then we can use that to call the n8n RAG. Why do you keep avoiding adding a RAG agent to bolt? Then all these things can be done in n8n. I don't see why you won't; it makes sense to have your own RAG as a model. Flowise has the API calls that Bolt needs for the base URL. You do a lot with n8n and RAG agents, but you don't let us connect our n8n RAG to bolt.
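If Flowise exposes an OpenAI-compatible chat endpoint, one path that may already work is a generic "OpenAI-like" provider that only needs a base URL. A sketch of what that request construction might look like; the base URL, model name, and Flowise's endpoint compatibility here are assumptions, not confirmed behavior:

```typescript
// Shape of a chat message in the OpenAI-compatible wire format.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the URL and body an OpenAI-like provider would POST.
// Any backend serving /chat/completions under the base URL could
// in principle be plugged in this way.
function buildChatRequest(baseUrl: string, model: string, messages: ChatMessage[]) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    body: { model, messages, stream: true },
  };
}

// Hypothetical local endpoint and flow name, purely for illustration.
const req = buildChatRequest("http://localhost:3000/v1", "my-flowise-flow", [
  { role: "user", content: "hello" },
]);
console.log(req.url);
```

The appeal of this pattern is that one provider implementation covers every backend that speaks the same wire format, instead of needing a bespoke integration per tool.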
I suggest indirect integration with VS Code or Cursor.
Cole, super cool, but you need a bunch of interns. The internet is not keeping up: Gemini 2.0 Flash, HeyGen interactive avatars. The tech is outpacing the educational base. How do we fix that? Thanks. Happy Holidays!
I appreciate it! And I know it's pretty hard to keep up haha
But we do have Gemini 2.0 flash integrated and I think in general we are doing a good job! Always an opportunity to get more people involved of course... one of the main goals of these videos :)
Beat me by a minute!!!!
Can we make MOBILE APPS???
First priority should be to implement MCP.
What MCP you want to connect?
Many Features are ready and waiting to be merged.
A debug agent
It doesn't work; it stops creating applications. It's very hard to work with bolt.diy.
First!!!!
👏🏽
It's a complete waste of time. I spent an hour trying to build an app and many more hours debugging it. Then I started something on my own and built it in just 3 hours without any issues. 😂😂😂
Wow! I'm watching the video 8 mins after it was posted!! :o) Congrats on all the advances, I'm getting ready to use Bolt.diy soon!!!
Awesome! Thank you!
I think Windsurf with any LLM would be even better than Bolt with any LLM... what do you think, my friend @ColeMedin?
Windsurf is not open source so it can't be forked like bolt.new to add in any LLM!
What do you like about windsurf in comparison to bolt?
@@EduardsRuzga Bolt is for beginner devs; Windsurf is for advanced ones...
@@svexbankiks2381 Hmm, that does not answer anything. Just curious: if you had to pick 3 concrete features that bolt misses, what would they be?
I would rather say that for me, bolt-like products are where I start, and Windsurf is where I modify larger projects. But I don't see a future for Windsurf and Cursor in their current form.
Because they do not work on phones.
I do think that cloud variants will win.
Like Microsoft and Adobe bringing their products online.
But that is my bet, as a staff engineer with 20 years of experience in the field.
I could still be wrong.
I do agree that for more complex projects, Windsurf and Cursor are the way to go for the moment.
But the future is way weirder than you think. At work we are starting to see product managers and UX designers using these tools to prototype apps and go to developers with validated prototypes.
I can’t agree more. Windsurf is the best so far. I also like lovable.
Hey Cole, I'm doing exactly the same thing you showed on your roadmap: I'm building my own version of bolt.new by forking your bolt.diy repo. Does anybody want me to contribute, or should I keep it to myself?