The fact that it rewrites all the code every time kinda sucks. It also gets stuck on errors it can't fix itself, and the editing experience kinda sucks. It will also just start implementing changes merely from asking the LLM a question. It would be nice to fix these aspects. For this reason the Cursor and Replit AI agents have been a lot less frustrating.
I have both Replit and Bolt... and I'm trying to create a slightly more complex app. Replit is too slow and often doesn't do what I ask. Bolt is more responsive, but the problems you highlighted appear as the app becomes more complex; they start to get boring and repetitive, and I have to ask Claude or ChatGPT for help to get it working. Cursor I haven't tried yet.
@@rouges666 Cursor is a lot more flexible, but won't just go and build the entire app out for you in one go like Replit and Bolt. But once you get a minimal version of the app going, it allows a lot more flexibility and doesn't just overwrite all your code each time with changes. I have the paid version of Replit and hit the limits each day. I haven't hit any limits yet on Cursor and I'm on the free plan. Cursor will force you to learn more as well, which I think is better in the long term. Today with Replit I just couldn't get it to fix its own errors; I had to roll back, then hit usage limits.
I totally agree with you that the rewrites are a downside to using Bolt.new! As @rouges666 mentioned though, the Bolt.new experience is often better than Replit, and Cursor, as you mentioned, can't build out an entire app in one go. So in my mind, there is a time and place for both Bolt.new (and potentially Replit, it's still awesome) and Cursor: Bolt.new when you want to build something from scratch without coding in an IDE, and then Cursor to take your application further.
Great work! Just a question: did you contact the creators of Bolt? Maybe you shouldn't fork it but join their development. Together you could truly build something even more awesome.
Thank you and good question! I have not at this point, but I may well consider it! I am also probably going to be making some pull requests into the main open source repo once we build this out more.
@@ColeMedin Amazing! Just like you said, the power of open source is building together. Their team obviously has some marketing and design skills :)
Hey Cole, this is awesome. Can you please add or suggest how this can be improved or trained for blockchain development, e.g. Rust for Solana? Not many code generators currently focus on blockchain. Thanks.
I've been trying for months to do really relevant things with small LLM models like Llama 3.1 7B, Llama 3.2, Mistral, Gemma2. Of the many possible ways, Langchain JS, Python with Langchain, Node JS with and without frameworks, Python with and without frameworks and none of them can do anything relevant and useful without using GPT or another paid model. Does it make more sense in this incredible project (Bolt) to use models with fine-tuning instead of "super" prompt at least for local models? 🤔
I've been running into the same thing honestly! So many times smaller models fail to even create single functions that a model like Claude 3.5 Sonnet or o1 can knock out of the park. But yes with this fork of Bolt.new it's a step forward to being able to do these things with smaller models. Still work to be done for sure! A fine-tuned local LLM is for sure a great approach!
Thank you for your support! Agents in the backend is on the list and it'll be so incredible once we have that implemented! More to come on that for sure.
This would be great! Instead we get all those wait times, limitations, and credits these other developers put on the accounts we pay for... you'll be in the middle of building an app, then bam! You have to wait 1 hour, then 2 hours, then 3-4. Absolutely crazy.
Can you add a feature to give it any documentation link (OpenAI, CrewAI, AutoGen, LangChain, etc.), so that when we want to code using these frameworks, the LLM will go to the documentation and create correct code using that knowledge?
Personally I have an interest in learning C# and Blazor. From my very very limited understanding of web development, I assume this fork and the original bolt.new are centered around javascript / typescript frameworks. Would something similar for Blazor be possible? I’m currently studying C# (beginner level) and would love to have something like this to have AI coach me and review stuff I write so that I don’t have to Google everything myself.
I've done a bit of C# + Blazor myself a couple years ago and I think it's a great ecosystem! You are correct in assuming that Bolt.new (and this fork) is meant for JS/TS. But yes something similar with Blazor would definitely be possible! It would probably have to be a whole separate project though since the whole concept of a webcontainer in Bolt.new is centered around running a Node instance in the browser. A tool like Claude Dev/Cursor could still help you a ton with learning C# though! It just couldn't build out an entire app with a single prompt like Bolt.new.
@@ColeMedin Thank you! I studied the webcontainer thing a bit and understand a bit more what it does. It seems there is indeed no Blazor alternative for that. Keep going with this bolt fork, I’m learning a lot from it and I love how a community formed around this. Great work!
I love the idea, thank you! This would fit well within an agentic workflow which I am looking into adding support for with this platform. That would be awesome!
Cline + Supermaven over Cursor. 👌 But there is some secret system prompt sauce I feel StackBlitz is keeping from us. They can just get amazing one-shot results that are functional.
That's a more complicated implementation for sure... but I absolutely love it! That Pythagora demo has got me curious for sure. Do you have a link to that? 👀
We can go far with this project. It was an idea I had that someone implemented. His own AI IDE involves ticking checkboxes next to files to decide which files or folders to include in the chat, in order to save tokens and, consequently, money. Here's the YouTube video for that project, with many great ideas to make the Bolt project optimal with as few tokens as possible, along with other ideas for potential prompts. m.ua-cam.com/video/ikn7JSUflTI/v-deo.html
This is fantastic - thank you so much for sharing! From what I could tell though, it seems his solution only works on Apple unfortunately, and I want something that can be used on all platforms like this Bolt.new fork. But the concepts from it could definitely be used here, so I love it!
This is awesome, I have tried it and it worked. But I have an issue: I am not able to give follow-up instructions, it's stuck in "Running command" although the command has run.
Fantastic, I'm glad it is working for you! This is a small bug with the open source version of Bolt.new that I've noticed myself and am looking into fixing. It has actually completed your request though, so you can move on to follow-up prompts without any issue!
Just wanted to say great work, but I am having an issue. I am hosting my local instance of your port of Bolt on ngrok. Unfortunately, when accessing the site from its public URL, the Ollama option lists no models. This prevents me from remotely accessing my Bolt instance. Any help would be greatly appreciated.
Thank you for the suggestion! I am working on ways to make this entire process easier to run yourself and then I am planning on making a video on this after that!
This project is really taking off - thank you so much everyone for your suggestions, contributions, and support! ❤
Couple of things:
1. If you make a contribution to this fork of Bolt.new, I WILL feature your change in a video!
2. I am certainly planning on opening up a PR to the original Bolt.new repo at some point! First though I want to make this more mature as a community. Fleshing out features, making it possible to set the API keys in the frontend somewhere, etc. so that it could be added into the main Bolt.new repo seamlessly.
3. If you are still having issues with the smaller models not opening up the code container on the right side, I address that 12 minutes into the video! This is something I am working on that will be a huge improvement for local LLMs!
This is how open source should be, stating and even citing the main branch in the video description... your integrity is touching my heart. Thank you.
You are so welcome!
Damn, bro, this is a total time-saver! What a time to be alive, bless you!
Thank you! It sure is a time to be alive 🎉
@@ColeMedin Can you add an option for users to begin their development process using an existing project as a starting point, rather than always starting from scratch?
@@DavidFernandez-zg1cr @ColeMedin that could be amazing!!! 😮
Kudos for such a fantastic initiative, thank you so much. Loading a local project or one from GitHub will make this fork THE one to rule them all.
Love seeing this! One thing I really appreciate is that you're not just keeping it to yourself, but you're allowing others to join in. That’s huge! 🤟
Thanks for not being like other YouTubers, man. So many in the AI space have decided to monetize their subscribers. I get that they need to make money to support their content, but I'd much rather 'buy you a coffee' than spend money on some scammy course about making a billion-dollar business, like certain other YouTubers with around 120,000 subscribers are trying to sell.
I do agree with you, and you are referring to David.
@@shay5338 Yes and may his channel RIP.
Thank you so much! My goal is to be much more giving, collaborative, and value packed than the average AI content creator, so I appreciate you calling that out a lot!
I certainly don't blame the other YouTubers for what they do, and I of course will have to monetize in some ways myself, but I'm working hard to do that in a way that doesn't involve wasting your time, selling scammy courses, or anything like that!
Very exciting! Glad to see the community taking off. Keep up the good work!
Thank you very much Jared!! I sure will!
thank you. More local use and open utilities = better.
You bet! I absolutely agree!!
This is fucking fire bro, thank you, keep it up! I built a whole app in 12 hours that I've been trying to build for 6 years. This is a game changer. No coding experience, just constantly updating and fixing the errors via natural language prompts.
Did you build the app with this new fork version?
Thank you Chris!! And that's amazing! What did you build?
You and the community did an awesome job!! 🙌🙌
On behalf of all of us, thank you! 😁
Will spread this video around to improve its performance!
Wow I appreciate it a ton - thank you!!
Dude comes in with the gratitude, workarounds, fixes, and project plan.
Great, you can always surprise us! I think you're a very thoughtful person. I'll learn from you!
Thank you for the kind words - I appreciate it a lot!
Love to see this continuing! Could you please do more videos of use cases? I'd love to see it try web design or something, just to see the full process with Ollama. I'm still having a lot of trouble getting the LLM to move to the console and not just reply like a regular chatbot.
Thank you! Yes - I will be doing content around specific use cases with this in the near future!
What an amazing project. I am truly excited to test it out.
Thank you very much!
Thanks for the hard work!
I have a suggestion: please add proper instructions so it first runs a command to initialize a project and then plans and edits files based on those plans.
This can make it a lot more robust😊
You bet!! Could you expand a bit more on what you are looking for here? I have instructions for running things in the README, so I am curious what kind of follow-up you are keen on.
@@ColeMedin I am talking about a feature like Cursor rules, web commands like @web, and a special folder where it creates plans as markdown files (current task, roadmap, plans, improvements, etc.) in a boltdocs folder.
And it should always use terminal commands like npx create-react-app (or the SvelteKit/Svelte equivalent) to scaffold the project, and then edit the necessary files according to the plans.
Awesome progress. I think the cherry on top, once things settle, would be getting it into a Docker container... That would be awesome.
Thank you and yes that is definitely the plan!
Hello Cole. Could you make a video on how to set up, build, and run the app on Windows? Thank you for a great video.
I certainly can! Or maybe even a short!
@@ColeMedin Same for a Mac please. Also, I pay for Claude, and I'm not sure how to use that subscription with this. (Or if that's even necessary.)
His README file on GitHub is super well written if you follow it closely :) Even me, a non-coding dummy, could figure it out :)
Yes, and please a Mac too! I keep having an error.
@@TheDandonian I agree, I would love you to do a macOS setup video please... Great job. You got me into coding. I'm so excited.
All salute and thanks to you Mr. Cole. Really so helpful to the growing AI community ;)🥰👍
I'm glad you think so - it's been my pleasure to start this up for all of us!
With all the negative news about open source .i.e. Wordpress recently in the press 🤣 ... and OpenAI no longer being Open. You are restoring the heart of Open Source. Well done!!!
That's the goal - thank you!! ❤
LM Studio and MSTY (my preferred GUI for LLMs) both use an OpenAI compatible API. Same as Ollama. I'd be willing to bet just pointing the Ollama provider at LM Studio's URL has a decent chance of working.
I believe you are right! And I for sure know that setting it up like I set up Groq would work since that just changes the OpenAI baseUrl like you are saying essentially!
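To illustrate the point above with a minimal sketch (this is not part of the fork, just a hedged demonstration of the idea): because LM Studio, Ollama, and MSTY all expose the same OpenAI-style REST shape, switching providers is mostly a matter of changing the base URL. The port numbers below are the common defaults for LM Studio and Ollama, and the model name is a placeholder:

```python
import json
import urllib.request

# Assumed default ports: LM Studio serves on 1234, Ollama on 11434.
# Adjust these to wherever your local server is actually listening.
LM_STUDIO_URL = "http://localhost:1234/v1"
OLLAMA_URL = "http://localhost:11434/v1"

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request.

    Only base_url (and the model name) changes between providers;
    the payload shape is identical, which is the whole point of
    an OpenAI-compatible API.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Same code, different server -- "qwen2.5-coder" is a placeholder model name.
req = build_chat_request(LM_STUDIO_URL, "qwen2.5-coder", "Write a haiku about forks.")
# with urllib.request.urlopen(req) as resp:  # uncomment with a server running
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

So pointing the fork's Ollama provider at LM Studio's URL should, in principle, just work, as long as the model name matches one the server has loaded.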
Getting image upload will be key to quickly iterating on POCs. On the Bolt paid plan I can dump in a screenshot of an app I like or one that is similar, and boom, instantly I have the design, buttons, etc. in place. Then I just reprompt with Bolt to make the buttons do the things I want.
Image upload is more powerful than people think at times. +1 vote for this.
Either way amazing job so far on the fork!! Thank you
Thank you very much!! And I agree that image uploading should be one of the top priorities for this fork!
Good stuff 🔥 excited to see where it goes!
Thank you very much, I am as well! 😃
Load local projects!! I have been trying to do something like this! You cannot do that with the paid version either. Yes, I am subscribed to the paid version also.
Yes this is on the list - I would absolutely love this too!!
+1 for this feature, maybe the git integration can be done in both directions
Amazing! Thanks for your work
You bet! Thank you!
This is amazing!
Thank you Cole (and others) for your hard work. I'm so grateful for living on a planet where smart people like you provide dumb people like me with things like this!
One huge thing I would love to see implemented is something to prevent Bolt from rewriting code when you add a new function. There's a lot of one step forward, two steps back in Bolt, where when you have added functions A and B and are about to add function C, it removes the work you did with function A. Very frustrating!
It's my pleasure! Thank you for the kind words!
I agree that the rewrites are pretty frustrating! This would be a pretty fundamental change to how Bolt.new works, but a few others have raised this concern as well, so I am certainly keen on fixing it and will add it to the list of improvements!
That's great work, thanks a lot.
Looking forward to the improvements.
Love this video! ❤️ I haven't gone through the project yet but I'll definitely try and make a contribution! ✨
Thank you so much for your work!😊
You are so welcome!
Very impressive mate, well done. I’m going to try this. Thank you very much..
Thanks man ❤ much needed features, and thanks for your contribution ❤
1. Much needed: opening existing projects
2. Agent support like Cline, if possible
You are so welcome! Both of your suggestions are on the list and I can't wait to get them added!
The ability for it to have other project context windows, so when you start a new chat it still has memory of the other chats and projects you did, will be a game changer.
Also, being able to directly talk to the AI rather than typing would be amazing!!
Fantastic suggestions - thank you!
I can see this exploding over the next few months with all the features being added. Great work
Another feature that would be HUGE is the ability to deploy to Netlify
Thank you very much! Fingers crossed it continues to grow!
Fantastic suggestion too - I will add this to the list!
Great work! Just saw a video saying Fragment is better than Bolt. They have the functionality your fork has of being able to select other models, so the Medin fork is probably the best already 😊 Another thing from Fragment that could be an improvement for the Medin fork is the ability to select and choose a persona for a session. Almost like scaffolding the session, but not doing it in the prompting.
Thank you! I've been checking out Fragment as well and I appreciate your thoughts here! What kinds of personas would you see being useful for this?
@@ColeMedin I've tried to build some cross-platform apps, i.e. both for mobile and web, and my experience is that the suggested tech stack for front- and back-end differs a lot between the different models chosen. But it could be that I should be better at prompting, of course.
You can do one thing: instead of listing future updates open for everyone, distribute them to different people to build. You can run a poll and take a vote on who wants to develop which feature. Each individual has to provide a plan for tackling the problem, and then you decide who will develop which feature. Otherwise multiple people end up doing the same thing. For example, I have also implemented the Gemini integration.
I appreciate this suggestion a lot! The only problem is it would take a good amount of time + effort to organize this. So I might need to rely on people to check the pull requests and the checked off features to know what is left that needs to be implemented. But I love your suggestion so I will be thinking about how to do that efficiently!
I agree with the OP to an extent. There is another repo that is also a fork, and they went to the other extreme with around 480 open issues. But at least we have a way to communicate with each other.
Discord with polling, maybe. I like that better than YouTube comments and GitHub issues.
Being able to load local projects on it would be insane!
Great content and a great way to engage the open source community!
Potential huge improvement: One challenge with these tools (Bolt.new, Cursor, v0, etc.) is that they start hallucinating and breaking existing code (or creating new errors) when generating new code, especially when the context becomes large (needle-in-the-haystack problem). I think I have a solution for how to mitigate this problem. How would you like me to proceed in order to see if it could be part of this project?
Thank you very much!
I appreciate you being willing to contribute to fix this problem. I agree it's a huge issue for really any AI coding assistant! The fact that you have a solution is incredible and you have me so curious haha
You can absolutely feel free to make a pull request with your solution! Or if you want to contribute in any other way, please let me know! Regardless - I would make an entire video on your contribution if you tackle this!
Hello Cole,
Perplexity integration is missing from that list.
Great to see all the progress in the making. 🚀
Yes you are right - I will add that now!
Please add docker support directly
Fantastic suggestion - thank you! I will add it to the list.
Another suggestion: implement a mixture-of-agents mechanism to get better quality responses from even small models.
When I see my pull request in the YouTube video, I am more than happy!
I'm glad!! Thank you so much for contributing to this! ❤
Can you please, please do an in-depth tutorial on how to install everything and set it up? Thank you!
Imagine creating an agent that can create fine tuned agents automatically based on a URL of documentation of a particular library, framework or API... So every time you want to use a new library it creates a specialized sub agent who goes away scrapes all the documentation and examples for a library, creates the fine tuned agent and then the master agent has a specialist in that particular library or framework available to him. Imagine all that happening automatically in the backend.... 10-20 specialist agents fine tuned on all the working parts of your codebase. I bet the results would be infinitely better!
Wow I love your thoughts here! Yes - this would be INCREDIBLE!
Just subscribed to your channel and loving what you've done so far with this project. These are 3 things that would turn the tide when it comes to code generation: image to code, running agents, and importing projects.
Thank you so much! All three of your suggestions are on the list in my repo and I agree would be a game changer!!
@@ColeMedin If the agents could run autonomously until the goal is achieved... if this could be done, wow. Thanks for putting in the time and effort to create this upgrade.
Gemini integration - just what I wanted!
Fantastic!! 😃
keep up the great content, fan of you!
Viva the community. Open source forever ♾️♾️♾️♾️♾️♾️♾️♾️♾️
Amazing work! I wonder if it could be integrated well with VS code?
Thank you! At this point it can't be, but once we add the ability to load in local projects that would be a logical next step!
12:50 When an LLM thinks, it uses fewer tokens than user input queries or LLM inferences. In other words, the reason why it helps is that the LLM uses fewer tokens to process the request because it's already in its context.
Nice yeah that makes a ton of sense!!
Freakin love it bro.
I would monetize it though. Just a little.
Just enough so you can hire help for bugs, support, updates, new features. Even if its $5-10 month or something.
Absolutely love it man! Is there a chance that you could do a guide to self host this fork?
Thank you! I am planning on containerizing this to make it easier to run yourself, and then yes I will make a video on how to self-host it!
@@ColeMedin Amazing, thanks for the reply, keep up the good work :D
Phidata for agents. I'm not a seasoned dev, but I've been playing with it today and it's bloody great. Much simpler than LangChain etc.
I haven't heard of Phidata until now but it looks amazing! I will definitely check it out further!
@ColeMedin It's so damn intuitive. Love it. I hadn't heard of it either until this week.
i'll come back in a month to see the new changes 👍
Great video bro!!
Thanks man!
Awesome work
Thank you!!
Bless you brother
This is incredible!
Thank you!!
Hugging Face pls
I'll add this to the list in the morning!
Nice work! Awesome recruiting a team. Are you doing external pull requests?
Thanks and yes I am! This is all community based - I haven't recruited anyone specifically!
I see you are and it looks great! This will take over.
@@hope42 Thank you! 😁
Add prompt caching
Great suggestion - I appreciate it!
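For context on the suggestion above: Anthropic's API supports prompt caching by marking large static blocks (like Bolt's big system prompt) with `cache_control`, so repeated requests reuse the cached prefix instead of paying full input-token price each time. A minimal sketch of a helper that prepares a system prompt in that block format (the exact integration point in this fork is an assumption):

```typescript
// Sketch: wrap a large static system prompt in Anthropic's content-block
// format, marked as ephemeral-cacheable so repeated requests can reuse
// the cached prefix. Where this hooks into the fork is hypothetical.
type SystemBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
};

function withPromptCaching(systemPrompt: string): SystemBlock[] {
  return [
    { type: "text", text: systemPrompt, cache_control: { type: "ephemeral" } },
  ];
}
```

Since Bolt's system prompt is identical on every request, it's exactly the kind of long static prefix that caching is designed for.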
Does Bolt use shadcn/ui?
Maybe it could use it (optionally) for a better UI?
Or maybe we can just ask for it in our prompt without adding it to the system prompt.
Good question! Bolt.new can use a few different component libraries from my experience, and ShadCN is one of them! So yeah if you ask it to use ShadCN for the components it will 😎
The fact it rewrites all the code every time kinda sucks. It also gets stuck on errors it can't fix itself, and the editing experience kinda sucks. It will also just start implementing changes merely from asking the LLM a question. It would be nice to fix these aspects. For these reasons, Cursor and Replit AI agents have been a lot less frustrating.
I have both Replit and Bolt... and I'm trying to create a slightly more complex app. Replit is too slow and often doesn't do what I ask; Bolt is more responsive, but the problems you highlighted appear as the app becomes more complex. They start to get tedious and repetitive, and I have to ask Claude or ChatGPT for help to get it working.
Cursor I haven't tried it yet
@rouges666 Cursor is a lot more flexible, but it won't just go and build the entire app out for you in one go like Replit and Bolt. But once you get a minimal version of the app going, it allows a lot more flexibility and doesn't just overwrite all your code each time with changes. I have the paid version of Replit and hit the limits each day. I haven't hit any limits yet on Cursor and I'm on the free plan. Cursor will force you to learn more as well, which I think is better in the long term. Today with Replit I just couldn't get it to fix its own errors, and I had to roll back, then hit usage limits.
I totally agree with you that the rewrites are a downside to using Bolt.new! As @rouges666 mentioned though, the Bolt.new experience is often better than Replit, and then Cursor, as you mentioned, can't build out an entire app in one go.
So in my mind, there is a time and place for both Bolt.new (and potentially Replit, it's still awesome) and Cursor. Bolt.new when you want to build something from scratch without coding in an IDE, and then Cursor to take your application further.
I'll love to contribute
Fantastic - I look forward to seeing your contributions!
This is awesome!
Thank you :D
haha I am literally doing the same thing.. it has so much potential :D
Do share if you are making it open source!
cool! amazing job 💯👍
Thank you very much!
Great work.
Thank you!!
Great work! Just a question
Did you contact the creators of bolt? Maybe you shouldn’t fork it but join their development , together you truly can build something even more awesome
Thank you and good question! I have not at this point, but maybe I would consider doing it! I am also probably going to be making some pull requests into the main open source repo once we build it out more.
@@ColeMedin amazing, I think just like you said, the power of open source is building together, their team obviously have some marketing and design skills :)
FEATURE REQUEST: upload existing projects; right now I need to go to Cursor and back.
Thank you for the suggestion - this is indeed in the list of improvements!
Fantastic fork. I'm not a developer, so I would appreciate a simple program flow to install.
Thank you! Yes - I will be working on making this even easier to install and then creating a guide on that!
Hey Cole, this is awesome. Can you please add or suggest how this can be improved or trained for blockchain development, e.g. Rust for Solana? Not many code generators currently focus on blockchain. Thanks.
Yes, HuggingFace and Replicate
Great suggestions - thank you! I will add them to the list.
Remarkable!🎉
Nice community and collab, don't mind if I join, soon tm
Thank you very much and I appreciate you wanting to contribute!
I've been trying for months to do really relevant things with small LLM models like Llama 3.1 8B, Llama 3.2, Mistral, and Gemma 2. I've tried many possible approaches: LangChain JS, Python with LangChain, Node JS with and without frameworks, Python with and without frameworks, and none of them can do anything relevant and useful without using GPT or another paid model.
Does it make more sense in this incredible project (Bolt) to use fine-tuned models instead of a "super" prompt, at least for local models? 🤔
I've been running into the same thing honestly! So many times smaller models fail to even create single functions that a model like Claude 3.5 Sonnet or o1 can knock out of the park.
But yes with this fork of Bolt.new it's a step forward to being able to do these things with smaller models. Still work to be done for sure! A fine-tuned local LLM is for sure a great approach!
Can we have a simple GUI to enter the API keys pls 🙏
This is a great suggestion! It would require a lot of setup in the backend but it would make things a lot easier! I've got it added to the list!
Thank you. One thing missing: agents running in the backend.
Thank you for your support! Agents in the backend is on the list and it'll be so incredible once we have that implemented! More to come on that for sure.
This would be great instead of all those wait times, limitations, and credits these other developers put on the accounts we pay for. You'll be in the middle of building an app, then bam! You have to wait 1 hour, then 2 hours, then 3-4. Absolutely crazy.
It should also include Python as a backend.
Do you mean making Bolt.new work better with creating Python backends for the full stack apps? I certainly agree!
Great work.....👏👏👏
Can you add a feature to give any documentation link (OpenAI, CrewAI, AutoGen, LangChain, etc.), so that when we want to code using these frameworks, the LLM will go to the documentation and create correct code using that knowledge?
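One possible shape for this feature (a sketch, not part of the fork): fetch the docs page, strip it down to plain text, and trim it to a character budget before injecting it into the LLM's context. The HTML stripping here is deliberately naive; a real implementation would want a proper parser and chunking.

```typescript
// Naive sketch: reduce a fetched docs page to plain text and cap its size
// before injecting it into the prompt context. maxChars is a stand-in for
// a real token budget.
function docsToContext(html: string, maxChars: number): string {
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop inline scripts
    .replace(/<style[\s\S]*?<\/style>/gi, "")   // drop stylesheets
    .replace(/<[^>]+>/g, " ")                   // strip remaining tags
    .replace(/\s+/g, " ")                       // collapse whitespace
    .trim();
  return text.slice(0, maxChars);
}
```

The trimming matters as much as the scraping: full framework docs would blow out the context window of most local models, so only a budgeted slice (or a retrieved-relevant slice) should ever reach the prompt.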
awesome
You did an amazing job. Congratulations. Can I use OPENAI_API_KEY without ANTHROPIC_API_KEY?
Suggestion: Add the option for the model to automatically start debugging if the produced code throws an error.
Personally I have an interest in learning C# and Blazor. From my very very limited understanding of web development, I assume this fork and the original bolt.new are centered around javascript / typescript frameworks. Would something similar for Blazor be possible? I’m currently studying C# (beginner level) and would love to have something like this to have AI coach me and review stuff I write so that I don’t have to Google everything myself.
I've done a bit of C# + Blazor myself a couple years ago and I think it's a great ecosystem! You are correct in assuming that Bolt.new (and this fork) is meant for JS/TS. But yes something similar with Blazor would definitely be possible! It would probably have to be a whole separate project though since the whole concept of a webcontainer in Bolt.new is centered around running a Node instance in the browser.
A tool like Claude Dev/Cursor could still help you a ton with learning C# though! It just couldn't build out an entire app with a single prompt like Bolt.new.
@@ColeMedin Thank you! I studied the webcontainer thing a bit and understand a bit more what it does. It seems there is indeed no Blazor alternative for that. Keep going with this bolt fork, I’m learning a lot from it and I love how a community formed around this. Great work!
It would be awesome if you could please add a documentation scraper like in Cursor. It would significantly reduce the number of errors 😄
I love the idea, thank you! This would fit well within an agentic workflow which I am looking into adding support for with this platform. That would be awesome!
Thank you - my pleasure :)
"Awesome work"!
Could we get the Nemotron model to work in this?
Great question! And the answer is yes! You just have to pull it from Ollama and it'll be available to use here.
ollama.com/library/nemotron
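For reference, Ollama exposes installed models over its local REST API (`GET http://localhost:11434/api/tags`), which is how a frontend like this fork can populate its model dropdown. A small sketch of checking whether a pulled model such as `nemotron` appears in that response (the `OllamaTags` interface below mirrors only the fields needed here):

```typescript
// Sketch: check whether a model appears in Ollama's /api/tags response.
// Tag names include a variant suffix (e.g. "nemotron:latest"), so we
// match the exact name or the base name followed by ":".
interface OllamaTags {
  models: { name: string }[];
}

function hasModel(tags: OllamaTags, model: string): boolean {
  return tags.models.some(
    (m) => m.name === model || m.name.startsWith(model + ":")
  );
}
```

So once `ollama pull nemotron` finishes, the model shows up in `/api/tags` and any UI built on that endpoint can offer it automatically.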
Great video, I have one question, is it possible to take all these request and run it on Cursor composer? If yes then why not trying to do it?
Thank you! This is something I would have to look into more to be able to answer for sure - but I am planning on diving into this exact thing soon!
Cline + Supermaven over Cursor. 👌
But there is some secret system prompt sauce I feel StackBlitz is keeping from us. They can just get amazing one-shot results that are functional.
Please make a full step-by-step tutorial on how to set it up and run it, and also how to add an API key, especially for Ollama.
Please add support for using Hugging Face's API. You made a masterpiece!!!! Thx
I will add this to the list in the morning - thanks for the suggestion!
love this ❤🍻
How about showing diffs and accepting or rejecting changes? I recently saw a Pythagora demo that uses agents, which looks like one of your list items.
That's a more complicated implementation for sure... but I absolutely love it!
That Pythagora demo has got me curious for sure. Do you have a link to that? 👀
@@ColeMedin they have their own channel Cole, but I watched this on another channel: ua-cam.com/video/spsG4G2sbrw/v-deo.htmlsi=dPPvv9TUlgsQFj0U
THANK YOU A LOT, from Brazil. My Ollama was installed but doesn't work.. =(
You are so welcome!! What is the error you are getting with Ollama?
We can go far with this project. It was an idea I had that someone implemented. His own AI IDE involves ticking checkboxes next to files to decide which files or folders to include in the chat, in order to save tokens and, consequently, money. Here's the YouTube video of the project, with many great ideas to make the Bolt project optimal with as few tokens as possible, along with other ideas for potential prompts.
m.ua-cam.com/video/ikn7JSUflTI/v-deo.html
This is fantastic - thank you so much for sharing! From what I could tell though, it seems his solution only works on Apple unfortunately, and I want something that can be used on all platforms like this Bolt.new fork. But the concepts from it could definitely be used here, so I love it!
This is awesome, I have tried it and it worked. But I have an issue: I am not able to give follow-up instructions; it's stuck on "Running command" although the command has run.
Fantastic, I'm glad it is working for you! This is a small bug with the open source version of Bolt.new that I've noticed myself. Something I am looking into fixing. Though it has actually completed your request, so you can move on to follow-up prompts without any issue!
@ColeMedin It didn't respond to my follow-up questions. When I input my next question and hit return, nothing happened.
LM Studio please!
It is on the list of improvements in the repo!! 🔥
Can this work with the free version of ChatGPT? My computer sucks too bad to run an LLM locally, but I'm hooked on using this to help me program!
OpenRouter has some free models you can use remotely, IIRC.
With the latest updates in this fork and others... you can pick from a number of cloud LLMs for free (Gemini 1.5 Flash).
Add settings to save the API key, so we don't need to open the source code to put in the API key.
I have added this to the list of improvements to be made - thank you for the suggestion!
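A minimal sketch of how frontend key storage could look (names are hypothetical; a storage-like object is passed in so the logic isn't tied to the browser's `localStorage`):

```typescript
// Sketch: persist per-provider API keys in a storage-like object
// (e.g. window.localStorage in the browser) instead of hardcoding
// them in source files or .env.
interface KeyStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function saveApiKey(store: KeyStore, provider: string, key: string): void {
  store.setItem(`apiKey:${provider}`, key);
}

function loadApiKey(store: KeyStore, provider: string): string | null {
  return store.getItem(`apiKey:${provider}`);
}
```

In the browser, `window.localStorage` satisfies this interface. Note that keys stored this way stay on the user's machine but are readable by any script on the page, so this is a convenience over editing source files, not a security boundary.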
Just wanted to say great work, but I am having an issue. I am hosting my local instance of your port of Bolt on ngrok. Unfortunately, when accessing the site from its public URL, the Ollama option lists no models. This prevents me from remotely accessing my Bolt instance. Any help would be greatly appreciated.
Please create a video on how to set it up, step by step.
Thank you for the suggestion! I am working on ways to make this entire process easier to run yourself and then I am planning on making a video on this after that!
Make it really easy finally, especially for local deployment! THE DEPLOYMENT is the hard part.
PS: Please create a new, better Cursor with this ;)