@@ColeMedin you are 💯 correct! After using the exact name, it works like a charm. The default pull was using the :latest tag, which was causing an error on my end. Thanks a ton! 😊
Thanks Cole. This is awesome because you have integrated local models. I'm using Agent Zero but can't fully write full stack apps with it. Now I want to try Bolt. So which Ollama model is best for full stack development?
@@ColeMedin CodeLlama 13b and 34b are not working properly. They don't follow the instructions or use the tools at all. They can't work with "artifacts". I'll keep trying something else...
@@ColeMedin I found one interesting fact. I edited the system prompt for deepseek-coder-v2 16b in Ollama, and when I ask it to do some stuff it works like a simple chat bot, but when I tell it to use artifacts it starts working properly. Another interesting thing: when Bolt uses Anthropic models (Claude Sonnet or others), it sends instructions in the system prompt while connecting to the API. That system prompt can be read from the terminal where Bolt was started (cmd/PowerShell). I copied that prompt, but the Ollama model could not understand it directly, so I rewrote it as plain text and changed the formatting, and it took on the role of the Bolt coder, understood the tools to use, and started working properly. So could you add the same functionality when loading an Ollama model from the local API - give it instructions on load like Bolt does for Claude? I think that would fix the issue, and then we could test all the Ollama models and choose the best one, because their system prompts would be set correctly.
This is super fucking cool. You're telling me that if you have a fast PC the local models work better? Time to install VS Code and give this a try on my gaming PC haha
Awesome, really awesome - #1 quality content. If I may ask: I downloaded your fork, ran pnpm install, and everything works well, but I can't see the created files on the right side. There is nothing there. I created something from the sample todo, but no files appear on the right side. Do you know how I can fix this? Thanks! PS: using Win 11 and Ollama with deepseek-coder-v2 to try it out.
Thank you very much man! I assume you are using the 16B param version of DeepSeek Coder? The smaller models sometimes don't work very well with Bolt.new's prompt, so they won't open up a WebContainer on the right side and it'll be more like a regular chat. Still helpful, but obviously not really what we're looking for. If you are able to, I would try a larger 30b+ param model like CodeLlama 34b or CodeBooga 34b. Otherwise it might be possible to change up the Bolt.new system prompt to work better with smaller models. That is something I am still researching!
@@ColeMedin Thank you for the fast answer! I found that if I say "create me a todo list app" it doesn't write to the container, but if I say "build me a bla bla" it does create it there. And I really want to ask: for a React/Next.js + Tailwind combo, which LLM gives the best results in your opinion? Thanks again for this awesome work!
Of course! Interesting! So you're saying even small changes to the prompt can help the smaller models interact with the webcontainer properly? I've been having a lot of fun and success with DeepSeek-Coder 236b from either Ollama (though you have to have a really good machine!) or OpenRouter (super cheap). It doesn't do the best with styling but it corrects itself really easily when you ask and the functionality is super good.
@@ColeMedin Dang I guess I can't run this then... Now I see why my models aren't opening the editor lol. Guess I have to wait until newer models come out that can handle it.
Great questions! I've had a LOT more luck with Bolt.new compared to Cursor. I like both but Bolt.new has given me a better experience overall. Bolt is more focused on the frontend even though it is full stack, so I wouldn't necessarily use it for creating AI agents. But you could certainly try and see what it can put out for you!
@@ColeMedin I tried your repo. I am having issues using Groq. It just says "There was an error processing your request". I tried all the various Groq models you added, too.
man this thing works great! Solid work my friend. Quick question: is there a way to paste a screenshot in the fork you created so that it can interpret it, or is that only in the original one? thanks
Thank you so much, I'm glad it's working well for you! Unfortunately Bolt.new doesn't provide this feature in the open source version. I guess they have to keep some things closed source so people are willing to pay for what they offer in the cloud. But I am considering adding support for this in my forked version!
@davidbraun7356 I'm guessing it will be fairly complicated... and also not all models will support it so I'll have to figure out how to make that a good experience too. But it would be freaking awesome to have in the fork!
Cofounder/CEO of StackBlitz (creators of bolt.new) here- just wanted to say this is *fucking awesome*. Great work man!
Thank you so much Eric! I appreciate it a ton!!
@@ColeMedin Should probably pin this endorsement. 😊
I have now pinned it - thanks for the suggestion @antkin608!
@@ericsimons4497 I love that you love that he loves customizing your product to be what everyone else loves. Open source software and marijuana bring people together.
@colemedin what if instead of creating, we want to adjust and develop existing code that we have in a local git folder?
I'd rather give you 20 bucks a month to keep being this fucking awesome and helping me do all of this stuff locally than pay a whole bunch of different soon-to-be-outdated services various amounts per month. You absolutely rock, man!
Thank you so much dude, that means a lot to me! And you're so right that so many of these AI services are becoming outdated so quick haha, it's hard to keep up!
@@ColeMedin where's the link to your community to learn local llm/automation dev? :)
Thanks for asking! I have something I am building in the background right now that I will be releasing soon... not just your typical Skool community ;)
same. i was about to get cursor or bolt... haha count me in!
Cole let me tell you... You are definitely the best I've found on YouTube so far. If you continue like this you will smash everyone else in your niche!
So much detail, straight to the point, making anyone able to reproduce the same thing in no time.
And... above all, inspiring!
Good job man, very good!
Wow thank you very much - that seriously means a lot to me!! :D
You are doing awesome work, and quickly becoming my favorite AI-coding YouTube channel! Thanks for sharing!
@@StarTreeNFT Thank you very much, that means a lot!
I'm truly blown away by what you've created here. Making this kind of technology accessible is a huge deal, and I can't thank you enough for your hard work and dedication. This is going to make such a positive impact!
Thank you so much for the kind words! I appreciate it a ton!
Hey Cole! YouTube suggested your videos 3 days ago... I now have a pitch-ready AI application based on n8n :) If you ever come to Switzerland: your beers are on me - ALL OF THEM...
That's amazing man!! Sounds good, I'll let you know if I ever come to Switzerland 😎
Love it dude. Only thing I would have added (initially) is that if the API key isn't there, then it doesn't show those options in the drop-down. But a simple fix really. Keep up the great work!
Thank you and you're totally right, that would be a fantastic addition! I would have to have the backend communicate to the frontend somehow which API keys are present, but that could be done with another API endpoint that is hit when the site is loaded.
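A minimal sketch of what that endpoint could look like, as a hypothetical Remix resource route - the file name, env access, and key names are illustrative assumptions, not code from the fork:

```ts
// app/routes/api.keys.ts - hypothetical resource route, not part of the fork (yet)
import { json, type LoaderFunctionArgs } from '@remix-run/cloudflare';

// Report which provider keys are configured, without ever sending the keys
// themselves, so the frontend can hide models it has no key for.
export async function loader({ context }: LoaderFunctionArgs) {
  // adjust to however your deployment exposes environment variables
  const env = ((context as any).cloudflare?.env ?? process.env) as Record<string, string | undefined>;
  return json({
    Anthropic: Boolean(env.ANTHROPIC_API_KEY),
    OpenAI: Boolean(env.OPENAI_API_KEY),
    Groq: Boolean(env.GROQ_API_KEY),
    OpenRouter: Boolean(env.OPEN_ROUTER_API_KEY),
  });
}
```

The frontend would hit this once on page load and filter the model dropdown accordingly.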
open source fully used in the right way, love it, keep up the good work Cole 👍🙏
That's the goal - thank you very much Martin!!
Maybe an idea: the last program I made would search the web and then pass the results to an agent. But I noticed these models are also very good at coming up with the best keywords to search Google for…
Maybe you should add an automatic model-choosing AI - a local model or API you can set that decides which model to use.
You give it the project idea/file tree, the user prompt, and all the info about each model. Then it decides, for example: this is just a form to generate, so we can use a simple local model.
But when asked to make the form better or nicer, it reasons that it must use a more advanced model.
Maybe after the user submits their prompt it could even show which model it has chosen, and the user confirms by pressing Enter - or, if the user thinks the chosen model still isn't good enough, they can change the selected model with the arrow keys and then press Enter.
That takes the weight off your shoulders and saves on expenses.
Great video❤️❤️❤️
This is an absolutely amazing idea, thanks for sharing!! I love the concept of having an initial router agent that determines the complexity of the task. I'm sure that's very doable!
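For what it's worth, a minimal sketch of such a router with the Vercel AI SDK - the model names and the SIMPLE/COMPLEX rubric are assumptions for illustration, not a definitive design:

```ts
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

// A cheap model classifies the request; the answer picks the model tier.
async function pickModel(userPrompt: string): Promise<string> {
  const { text } = await generateText({
    model: openai('gpt-4o-mini'),
    system: 'Classify the coding request as SIMPLE or COMPLEX. Answer with one word.',
    prompt: userPrompt,
  });
  return text.trim().toUpperCase().includes('COMPLEX')
    ? 'claude-3-5-sonnet-20240620' // stronger, pricier model for hard tasks
    : 'deepseek-coder-v2:16b';     // cheap local model via Ollama for easy ones
}
```

The chosen ID could then be surfaced in the UI for the user to confirm or override, as suggested above.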
I'm about to burst with excitement, just from the intro.. this is what keeps me up at night
Haha I love it, this kind of stuff is what keeps me up too!
Oh, you solved my problems instantly, and let me just say this, I love you, thank you so much.😭
I'm so glad!! It's my pleasure!
I made my own version of bolt that uses Ollama last night, I see that you've already progressed much further with it so I'm planning to contribute to your solution instead :)
Awesome, thanks man! I look forward to seeing your contributions!
This is impressive. If the majority of your content is like this I’m definitely subscribing. I’m a fan of open source tutorials.
Thank you very much! Yes, a majority of my content is on creating cool stuff with open source!
Fantastic job Cole, respect! 🙌 Instant subscription.
I'd like to see you integrating Perplexity next... it probably sounds silly to you, but I'm a total non-developer.
Thanks in advance.
Thank you so much! 😃
Not silly - thanks for the suggestion! I've got a running list of things I want to do to improve this fork and I'll add Perplexity/something similar!
@@ColeMedin Fantastic. Looking forward to it... 🚀
I'm not a programmer and didn't know about Bolt a few hours ago, but now I know I can easily build my concept by using your version. Thanks a lot dude, highly appreciated ❤️✨
Glad I could help! You bet man!
@@ColeMedin can you please add something like "I have an app or website project that's partially built, but it was created outside of Bolt AI. I'd like to continue its development within Bolt AI."? It would be really helpful
Sorry could you clarify what you are saying here?
@@ColeMedin I mean add something like importing a local project from your PC into Bolt and continuing its development 😄.
When I see someone who provides real value, I subscribe. So, keep it up! 😊 By the way, LM Studio would also be nice to have.
Thank you so much - I appreciate the support and kind words a lot!
I am looking into adding LM Studio! There isn't direct support for it in the Vercel AI SDK, but it looks like LM Studio supports OpenAI-compatible APIs, so I should be able to set it up similarly to how I set up Groq by just overriding the base URL for the OpenAI instance.
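A sketch of what that could look like, following the Groq pattern - LM Studio's default local server address is assumed here:

```ts
import { createOpenAI } from '@ai-sdk/openai';

// LM Studio serves an OpenAI-compatible API (default: http://localhost:1234/v1),
// so the existing OpenAI provider works with just a different base URL.
export function getLMStudioModel(modelId: string) {
  const lmstudio = createOpenAI({
    baseURL: 'http://localhost:1234/v1',
    apiKey: 'lm-studio', // LM Studio ignores the key, but the SDK requires one
  });
  return lmstudio(modelId);
}
```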
Dude this is awesome!
Nice... I was thinking of doing something like this myself but here it is now. haha. Thanks for your efforts. I'm going to go through your archives now. Cool stuff.
Haha that's awesome - my pleasure man! Thank you!
thanks Cole
really.
For a non-coder like me this was fun to watch, and with your next steps showing how we can do something similar, I think I can now experiment with adding other service providers
thanks Cole
My pleasure!! I'm glad it was easy to follow as a non coder!
@@ColeMedin definitely friend.
If you are able to make something even crazier, please do share. I love testing things out
Sounds great haha, will do!
What I would like to see is:
- make the models configurable from a file instead of getting a massive list
- provide an easy way to download all the generated files instead of copy / pasting everything
- include a dockerfile to easily get going
Love these suggestions, thank you! All three I totally agree would be much needed additions to this fork.
Thanks Cole, this is really inspiring! I am a business user wanting to play around with some use cases before going the MVP route. I felt the step-by-step guide was more of a hop-and-skip guide (I mean this well), as most tech-literate people would know what happens next. I think if this guide were more foolproof for folks like myself :) you would get a whole lot more adoption of the fork. If I could get the first few steps right on Windows, I would contribute to your .md. Do you have a link to a Windows-only step-by-step setup?
Oh yeah, it found all my Ollama models perfectly. Thanks for this amazing fork.
Awesome!! My pleasure!
Fan-darn-tastic! Spectacularly generous on your part and Bolt.new's part! 👏👏🥳🎉
Thank you very much! It's been my pleasure to get this up and running for everyone!
Wonderful job. It really helps to test different LLMs' coding capabilities. Thank you for your time and Good Health.
Thank you very much! My pleasure :)
Hey Cole, you are great. I have never seen this type of work done and explained on YouTube. Thanks man. God bless you ❤
Thank you very much, that means a lot! 😃
Thanks very much for doing the fork & posting the info. I shall definitely try it out.
You are so welcome!
Cole, you are the man! Great work on the forking of bolt.new, your content is top notch...
Thanks so much Luis, I appreciate it a lot!
I’m commenting again to thank you one more time. You’re awesome brother. Thank you for your work
Haha my pleasure! Thanks man!
Epic, great improvements. I hope the changes will also get merged upstream
Thank you very much! I hope so too haha, it would be a win for everyone
Excellent work! Especially figuring out the Vercel chat interface and all.
Thank you very much! Yeah that was the trickiest part!
This is great Cole. I'm really enjoying your YT videos on n8n use cases, as those are the most comprehensive tutorials on YT. Keep up the great work! Do you by any chance have a Docker Compose file to run this with Ollama?
Thanks so much for the kind words! I will certainly keep it up!
I don't have a Docker compose for this right now, but this is something I want to do in the near future!
Really cool video, and I love how you explained everything.
Thank you very much!
Hi Cole! Thanks for the video. Somewhere in your videos you mentioned a Discourse forum for discussions. Is that working now?
You are welcome! The Discourse community is coming next Sunday, actually!
I think it's generally good if local LLMs can also be used. However, it doesn't work as well as it could (yet). Your work is great, and I'm excited to follow its further development. There is currently a problem with the code being executed and generated automatically - it's precisely the core functionality that makes Bolt so exciting. But I don't want to make any demands here, I just want to share my ideas and thoughts. Maybe at some point there will be a native app that will implement all of Bolt's functionality. Keep up the good work and thanks to you, Cole. 👍
Yes I agree it doesn't work as well as it could with local/smaller LLMs right now. Lot of opportunities for prompt engineering and agents behind the scenes which we are indeed working on right now! Thanks for your thoughts!
Awesome project! Does bolt.new use the sonnet 3.5 model (the newest model)?
Thank you! I believe so!
v0's new speed update made me move back to it after using Bolt for a bit. I'll have to take a look at your fork, sounds exciting for local coding!
Super interesting, I didn't know about that! Thanks for taking a look at the fork too!
Thanks so much for all the work you put in and for sharing the how-to with us so we didn’t have to do it as well. This is a significant improvement to this already amazing world of AI assisted coding, especially with respect to Bolt.new. I can’t wait to download your fork.
You bet - thanks so much for the kind words! I hope it all works well for you when you try it out later!
Amazing! Maybe later some support for LM Studio with MLX for Apple silicon? ;)
Yes that is on the list of improvements to be made!!
I'll have to grab it this evening. Hopefully you set up OpenRouter. But having Ollama is fantastic on its own.
Yes I did set up support for OpenRouter yesterday! Enjoy!!
This content is very well thought out, engaging, and helpful. I am going to be checking out this code. Have you ever thought about streaming while you code? I would be curious about your thought process, using AI prompts in development, and showing the development. Group programming!
Thank you very much! And I have thought about it but not too much yet - thanks for mentioning that! It's very different than recording because if I get stuck on something the stream might get boring for a bit... haha
But like you said the thought process behind everything could still be valuable - so I do want to do it in the future for sure!
Love your desk
Thanks man!
this is awesome! Thanks for sharing Cole. Do you know if there's a way to still upload files/screenshots using this version? I am a UI designer and want to share wireframes; I find it helps a lot in the build. thanks!
Thank you very much - you bet! Uploading files is a feature that Bolt.new doesn't include in their open source version, unfortunately. I like being able to upload wireframes too so I really wish it was included. But I guess Bolt.new needs to have some closed source features so people have a reason to pay them.
Maybe I'll have to add this in as an extension to what I've made here!
just tested this and it works great! you're a legend
hi, I can get the Bolt page to appear on my local server, but every request returns "There was an error processing your request". Struggling with this.
@@DeepakSuresh-te8xq make sure you've added the API keys
That's awesome - you bet!!
Cole, is there any way the output code can be connected to a GitHub repo? This could add a lot of value to the work that you are doing. Keep up the good work!!!
Thank you and I appreciate the suggestion! Right now the open source version of Bolt.new doesn't support this, so I would have to implement it myself. But I am considering doing that because a lot of others have suggested it!
This is one of the best (most useful) AI videos I have seen in a long time. And that's saying something.
Thank you very much, that means a lot to me! 😄
legend mate! love your content, always the best!
Much appreciated, thank you!! :D
Great work @ColeMedin!! But I don't know if this happens only to me: the chats always lose the previous context - I mean every conversation needs to be a new one, adding the previous chat manually - and the code only appears in the chat, not in the Workbench "code view". Do you know why?
Thank you Roberto!
I believe the conversation history issue is a limitation of the open source version of bolt.new. Something I am looking into!
And then for the second issue - a lot of the smaller models have issues using the WebContainer (code view). So you'll only see a chat output, which is still useful, but obviously not ideal. I would first try a different model, especially if you're using a really small one.
@@ColeMedin Oh :(
Thanks for your answer ;)
You bet! Sorry it isn't quite the answer you are looking for! Hopefully that solution for getting the smaller models to work in the webcontainer can work for you though!
this is fantastic. amazing. the best AI youtuber I have ever seen.
One more thing: could you please add OpenRouter? It's a great option that includes all the models. Thanks a lot.
Thank you so much!! OpenRouter is available now!
Great, thank you! Can you please make a video on how to install on a Mac? Up to the keys, no problem... from there on I don't understand how to continue!
My Hero. I almost started to code bolt myself ;) Thanks
Haha my pleasure! Coding something like Bolt.new would be fun but yeah a LOT of work!
Thank you for taking the time to make this video and share this with all of us.
Is there any way you can make this use my local file system, or even better, use Visual Studio Code?
My pleasure!
Good question - so Bolt.new doesn't have this functionality at all so I would have to make it entirely from scratch. Which I am considering doing because a few people have requested exactly this already! Or at least to include the ability to download locally what Bolt.new creates.
The attachments button is missing. But nice work!
Thank you! And yes it is unfortunately - that is something not included in the open source version of Bolt.new, I guess so they can have some proprietary stuff so people will pay for their platform. But that is something I am looking to implement!
Nice work! your modifications seem so obvious, I wonder why they weren't already integrated.
Thank you and I agree! Maybe the Bolt.new team will see my changes and implement the same thing themselves if my video gets enough traction. It would be a win for everyone!
Excellent man, was able to get this up and running quickly. Only issue is my llama3.2:1b model is just spitting out the code in the same chat window lol. Probably a me problem or a model issue - going to try some of the models you listed.
Thank you - glad you have it running yourself too!
Yeah I've noticed as well that the smaller models sometimes don't work very well with Bolt.new's prompt, so they won't open up a WebContainer on the right side and it'll be more like a regular chat. Still helpful, but obviously not really what we're looking for.
If you are able to, I would try a larger 30b+ param model like CodeLlama 34b or CodeBooga 34b. Or try DeepSeek-Coder through OpenRouter, that model kicks butt.
Otherwise it might be possible to change up the Bolt.new system prompt to work better with smaller models. That is something I am still researching!
@@ColeMedin Awesome thanks for the suggestions!
Of course!!
Just added Google Gemini via the Vercel AI SDK. Incredibly simple, but it doesn't seem to be as capable as the other models. It seems the in-browser code/preview canvas needs some prompt engineering.
That's awesome you got Gemini added in! Nice job!
Too bad it isn't performing well though... you're right - for many of the not as powerful models there should be an opportunity to tune up the Bolt.new prompt to make it work better. That is something I am looking into!
@@ColeMedin just an update - it was something with my environment, and now the canvas is working. Just wondering if there's a way to automatically save the files
@guerra_dos_bichos Awesome, glad it is working for you! There isn't a way to save files right now since the open source version of Bolt.new doesn't support that unfortunately, but I am looking into making it myself for my fork since it is a highly requested feature!
Preview on same page is on-point
hey is preview possible???
This can happen sometimes depending on the model you use - which models are you trying to use?
Please include Azure OpenAI API support as well
I will add this to my list of improvements to make to the platform!
Thank you Cole truly valuable content.
You bet! Thank you!
Nice work! And with that, you got yourself one more subscriber 😊
Thank you very much, I appreciate the support a lot!
Thanks Cole, you got yourself a new subscriber
My pleasure - thank you very much!
Glad you showed what to change, so I can add LMStudio much faster 😁👍
Always happy to help!! 😃
Thanks for all the explanations and hard work. Local LLMs can sometimes be slow, but with Flowise and 2 or 3 agents it gets fast and doesn't use the CPU and GPU as much - if we could integrate that with Bolt and your system...
Nice work
My pleasure, thank you! Could you expand a bit more on your idea here? Sounds interesting!
@@ColeMedin I don't know how to write in a programming language. But I have found that when I use a single AI alone, it gives very slow and poor answers. When I create different agents with Flowise, give them tasks, and run the system piece by piece, I get both more accurate and more efficient answers. If we could use this in your system, something great would come out. Actually, it's like splitting the prompt entered by the user into chunks. I don't know if I explained myself well. Still, you have produced something very nice, congratulations.
Thanks for the kind words and yeah I see what you mean now!
This kind of thing where you have agents running in the background to produce the final result for the Bolt.new frontend is certainly doable! It would take extending the platform quite a bit, but I do love the idea!
Awesome - which Ollama model has been your best for this: DeepSeek, Mistral, Qwen?
DeepSeek has been my favorite!
Thank you for a great fork!!! I enjoy your videos. I wish there was an option to only write an HTML, CSS, JS site instead of it always being built with a stack like Vue or Next.js.
Thank you Michael! I've actually had luck getting it to only write HTML, CSS, and vanilla JS. I just have to specifically ask for only that in my prompting. Sometimes it still likes to create a package.json file but I think that can be fixed by tuning the Bolt.new prompt for the LLM.
@@ColeMedin Thanks Cole!! I tried that and that worked, no framework files :)
I appreciate you!
Awesome man!! You bet!
Awesome video! Could you share a Docker version of this fork?
Thank you! I haven't containerized this yet but I like the suggestion! I will certainly consider doing that especially if I add any other services to this fork like agents in the backend.
Thanks, excellent work! Any way to import projects I built in bolt.new to continue building them with your fork?
Oh, and @cole - the AI-enhanced prompt seems to be hard-coded to Anthropic, so since I have no available credits it won't work. Can we have the enhance prompt point to the chosen LLM?
Thank you so so much for your support and kind words!!
Right now this isn't possible because Bolt.new doesn't include the import feature in their open source version. I guess they have to keep some things closed source so people have a reason to pay them for their cloud offering.
This is something I am looking into adding though! But it will certainly be a good amount of work to set up!
@@ColeMedin bolt.new keeps losing my projects anyway, so their implementation wouldn't be the right direction!
Really great work!!
The chat works and answers questions,
but using Ollama models my preview and code view are empty!
With GPT-4o (using OpenRouter) it works.
Maybe there are some preferred Ollama models to use?
Thank you man!
Yeah I've noticed as well that the smaller models sometimes don't work very well with Bolt.new's prompt, so they won't open up a WebContainer on the right side and it'll be more like a regular chat. Still helpful, but obviously not really what we're looking for.
If you are able to, I would try a larger 30b+ param model like CodeLlama 34b or CodeBooga 34b. Or try DeepSeek-Coder through OpenRouter, that model kicks butt.
Otherwise it might be possible to change up the Bolt.new system prompt to work better with smaller models. That is something I am still researching!
Superb work. Just one thing I would ask you: how do I push the code from the interface to GitHub? And where can I find all the code and projects?
Thank you and good question! Unfortunately this is something that Bolt.new didn't include in their open source version. So I would have to add it entirely myself - which I am considering doing since a lot of people have requested it!
@@ColeMedin most awaited feature update. Hope you will do it soon.
I'm planning out my content for the next month and including this, so it will be reasonably soon!
This is awesome! Can I use my fine-tuned/base GPT model deployed in Azure AI Studio?
Thank you! You sure can! You would just need to create an OpenAI instance for the model provider where you override the baseURL to point to your GPT hosted in Azure AI Studio. Or I believe they have direct support for what you are looking for here (correct me if I'm wrong if the studio is different from this):
sdk.vercel.ai/providers/ai-sdk-providers/azure
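A sketch of the direct route via that provider - the resource and deployment names are placeholders for your own:

```ts
import { createAzure } from '@ai-sdk/azure';
import { generateText } from 'ai';

// resourceName is the Azure OpenAI resource; the id passed to azure()
// is your *deployment* name, not the underlying model name.
const azure = createAzure({
  resourceName: 'my-ai-studio-resource', // placeholder
  apiKey: process.env.AZURE_API_KEY!,
});

async function smokeTest() {
  const { text } = await generateText({
    model: azure('my-finetuned-gpt'), // placeholder deployment name
    prompt: 'Say hello from the bolt.new fork!',
  });
  console.log(text);
}
```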
@@ColeMedin exactly! That's what I was looking for. I tried implementing it using the same approach from the link you shared but encountered an error. As a .NET backend developer, I'm still relatively new to the Node.js stack and learning it as I go. If you could assist in adding this functionality, it would be incredibly helpful. Thanks in advance!
Yeah I would love to help! What is the error you ran into?
Hey Cole, like many have said here, thanks for putting out some of the best straightforward, hands-on content.
I have a question that no one has been able to answer. I'm hoping that with your experience in this field, you'll be able to finally put it to rest:
What AI coding tool would you use to work with large files in a codebase? I have a JS file with 40k lines of code, and none of the popular tools out there has been able to handle such a large context.
My pleasure, thank you for the kind words!
This probably isn't the answer you are looking for, but I put a lot of thought into the second paragraph so hopefully it helps! I would suggest against having any single file in source code be that many lines of code. Typically for a JS project, you would split the code into separate components and have all of those in different files. Traditionally recommended for readability + reusability of components, but even more important now for being able to have LLMs come in and help with the code more easily.
Now, I don't know what your codebase looks like and I'm sure there is a good reason you have a file that big! If you really do want to handle files that big, you'd probably have to develop a custom system that splits the file up and then feeds chunks one at a time to the LLM to process and do whatever you need it to do like update sections of the code. So basically you summarize each piece of the code so the LLM can navigate between chunks and make the necessary updates in a multi-step agentic workflow.
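A rough sketch of that chunk-and-summarize idea - the chunk size, model, and prompt are arbitrary choices for illustration, not a definitive implementation:

```ts
import { readFileSync } from 'node:fs';
import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Split a huge source file into fixed-size line chunks.
function chunkByLines(source: string, linesPerChunk = 500): string[] {
  const lines = source.split('\n');
  const chunks: string[] = [];
  for (let i = 0; i < lines.length; i += linesPerChunk) {
    chunks.push(lines.slice(i, i + linesPerChunk).join('\n'));
  }
  return chunks;
}

// Summarize each chunk; the summaries become the "map" an agent would use
// to decide which chunk to load and edit in a later step.
async function indexFile(path: string): Promise<string[]> {
  const chunks = chunkByLines(readFileSync(path, 'utf8'));
  const summaries: string[] = [];
  for (const [i, chunk] of chunks.entries()) {
    const { text } = await generateText({
      model: openai('gpt-4o-mini'),
      prompt: `Summarize in two sentences what chunk #${i} of this JS file does:\n\n${chunk}`,
    });
    summaries.push(text);
  }
  return summaries;
}
```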
@@ColeMedin thanks for taking the time to reply in detail. Your answer makes complete sense, especially after researching and testing the AI coding tools in the current meta.
The large JS file is the output of a Vue build that was uglified and then beautified, and I don't have access to the source files. That's my only issue. I guess I'm gonna have to try and dissect it into multiple files based on my intuition.
Anyway, thanks again. Cheers
Ah okay that makes sense! That certainly does make it tougher. Good luck splitting it up, I hope that works out well for you and makes it possible to use LLMs to assist more!
You bet!!
Thanks for this! Works "out of the box"
You are so welcome! That's a strange issue - Bolt.new is specifically prompted not to do that and I haven't run into it myself. Which model are you using? I would also specify in your prompt to include all the code in each file it rewrites!
thank you so much! quality work! thanks for being you!!
I appreciate it a ton, thank you!! :)
Looks dope! Slap OpenRouter support on this and it's gold
Thank you! And I already did yesterday! 😎
If you can make a video about how you edited the source code - like using AI or manually - it would be a banger. Great presentation, subscribed!!!
Thank you very much, I appreciate the support a lot!
I do show at the end of the video how I edited the source code to make this happen. I didn't actually use AI for this since the changes were between so many different files. Or is there something more specific you were wondering about me making a video on related to this?
Hi Cole, Great Job!
Thank you very much! 😀
Hey Cole, thanks so much!
You bet!!
Huge thumbs up for this - subscribed, cloned, now a follower :)
Thanks so much, the perfect trifecta! haha 😀
Great vid man! Question: if I'm using an API provider like Azure which requires an API key and a 'resourceName', do I need to include the 'resourceName' in the api-key.ts switch statement too, or just the apiKey? (apiKey and resourceName are both environment variables)
Any help would be greatly appreciated!
Thank you very much!
You wouldn't have to include the resourceName in the api-key.ts switch! You would just need to include process.env.AZURE_RESOURCE_NAME or whatever you call the environment variable in the call to createAzure in models.ts just like you include the apiKey there.
I assume you saw the docs for this, but in case you didn't:
sdk.vercel.ai/providers/ai-sdk-providers/azure
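In other words, something roughly like this in models.ts - a sketch mirroring how the other providers are wired up, not the fork's actual code:

```ts
import { createAzure } from '@ai-sdk/azure';

export function getAzureModel(apiKey: string, model: string) {
  const azure = createAzure({
    apiKey, // resolved by the api-key.ts switch, like the other providers
    resourceName: process.env.AZURE_RESOURCE_NAME, // read directly from env here
  });
  return azure(model); // model = your Azure deployment name
}
```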
@@ColeMedin appreciate the response! Yes I’m following the vercel docs closely. Last question if you don’t mind, what about the anthropic-vertex community reference which doesn’t include an api key at all? Does that get left out of api-key.ts completely similar to the local ollama models? Thanks again!
I don't mind at all!
And yes that is the correct understanding!
Did you try Gemini 1.5 Flash? Nice work.
@@hope42 Thank you! And I did not yet - I've got a huge list of models I want to try and that is one of them!
that's my thoughts !!
Exactly what I needed, without realizing I needed it! Thank you! Could you clarify - you say that the Ollama models should be installed before use. I ran "ollama run deepseek-coder-v2", it's installed, and I can see it when I list Ollama models, but when I then try to use the model in bolt.new I get the error "responseBody: '{"error":"model \\"deepseek-coder-v2:16b\\" not found, try pulling it first"}'". Am I missing something?
Solution: Double-check the model names by running "ollama list" - then ensure the names in the .env match! For me, some of the downloads from Ollama are saved as model:latest instead of model:16b.
Glad you figured it out! Your solution as a reply to your own comment is exactly what I was going to say!
How much would it cost me per message on average if I use the Claude 3.5 Sonnet API?
Great question! Around ~$0.02 on average, I would say, given Claude 3.5 Sonnet is $3 per million input tokens and $15 per million output tokens.
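To sketch the math with illustrative numbers (actual token counts vary a lot with the prompt and project size): a message that sends ~3,000 input tokens and gets back ~1,000 output tokens costs
3,000 × $3/1M = $0.009 for input, plus
1,000 × $15/1M = $0.015 for output,
so roughly $0.024 total.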
What's better at coding your projects, GPT-4o or Claude 3.5?
They are pretty close so sometimes I'll actually use both when one encounters an issue! But typically I found Claude 3.5 Sonnet to be slightly stronger.
@ColeMedin Thanks. I'm going through 10M tokens a day on Bolt. Is the downloadable LLM the same for your local fork or can you not download Claude 3.5
Sorry could you clarify your question?
I love this. However, we can't install packages/libraries to run projects in the preview. How do we do that?
Thank you! Could you clarify your question? You should be able to install packages and get a preview just like the commercial version of Bolt.new
Cole, do you have any video of full end-to-end automation over voice?
For example, connecting Siri to an LLM and an n8n workflow. Or it could be a self-hosted IP telephone where you can dial in and speak to an LLM to execute some action
+1
I do not yet but this is in my pipeline to make content on! Especially for a personal assistant that you can just have access to on your phone!
@@ColeMedin yes please!
I'm wondering how I would use an open source model from LM Studio, since it seems like there is no provider built for that. Thanks for the great content!
You bet! And I believe LM Studio supports OpenAI compatible endpoints so you can set up LM Studio just like I did with Groq in this video!
lmstudio.ai/docs/basics/server#openai-like-api-endpoints
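A rough sketch of what that could look like in models.ts (assuming LM Studio's default local server on port 1234 — the model name is whatever you have loaded in LM Studio):

import { createOpenAI } from '@ai-sdk/openai';

// LM Studio exposes an OpenAI-compatible server, so we point the
// OpenAI provider at localhost instead of api.openai.com.
const lmstudio = createOpenAI({
  baseURL: 'http://localhost:1234/v1',
  apiKey: 'lm-studio', // LM Studio ignores the key, but one must be set
});

// e.g. lmstudio('deepseek-coder-v2') for whatever model you loaded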
Hi, thanks for this amazing project. My inquiry: I have an OpenAI API key but I am unable to run it, although I installed the Canary browser and followed the main commands. Could you make a video, if possible, explaining how to set the OpenAI API key step by step? Thanks!
I will be making a step by step guide on this soon here!
Absolutely fantastic, thanks a lot for that!!!! Can we use the Gemini Flash 1.5 (free) API? Also, kudos for the suggestion to export a zip file which includes all the files
Thanks Leonardo! You can use Gemini Flash 1.5 through OpenRouter right now! This version from OpenRouter is totally free:
google/gemini-flash-1.5-exp
Great, I will try using this fork with LM Studio
Sounds great, good luck! :D
Amazing work!! Thanks so much for sharing! Is there a way to fiddle with this app so it could write files locally like Cursor on Windows?
Thank you and fantastic question! That isn't available in Bolt.new but I am looking into how I could extend my version to make that possible! Or at least make it so you can directly download the project that it generates.
@@ColeMedin Amazing! Thanks so much for all your work.
My pleasure!!
Great stuff Cole. I want to give this a go specifically for the Ollama bit. However, where/what would I need to change if I have Ollama running on another computer in my house? So rather than running locally on the machine where I run your fork of Bolt.new, it's on another machine in the house. Could it be that on line 35 of models.ts I can simply put a baseURL: line there?
Thank you! And yes, you should be able to just override the baseUrl as long as that IP is accessible from your machine (i.e. no firewalls blocking or anything like that)!
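Something like this, for example (a sketch assuming the ollama-ai-provider package and a made-up LAN IP):

import { createOllama } from 'ollama-ai-provider';

// Point at the Ollama instance on the other machine instead of localhost.
// Note: Ollama binds to 127.0.0.1 by default, so on that machine you may
// need to set OLLAMA_HOST=0.0.0.0 for it to accept LAN connections.
const ollama = createOllama({
  baseURL: 'http://192.168.1.50:11434/api', // assumed IP; 11434 is Ollama's default port
});

// Then ollama('codellama:34b') etc. works as before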
@@ColeMedin For some reason, none of the Ollama-based models are working on my system. Bolt.new throws an error in the console about the model not being found. I had already ensured that Ollama is up and running on the default port 11434.
Can you help with what might be going wrong? I'm on a Mac.
@curiousturtle8190 Make sure you run the ollama pull command for the exact same model ID that you are using within Bolt.new!
So if you want to use codellama 34b, for example, you would first have to run the command:
ollama pull codellama:34b
All the model IDs can be found in the app/utils/constants.ts file that I show in the video!
@@ColeMedin you are 💯 correct! After using exact name, it works like a charm. The default pull was using the latest tag name which was causing an error on my end. Thanks a ton! 😊
You bet!!
Can this work with Nvidia's new 3.1 70b model? That would be amazing
Great question! And the answer is yes! You just have to pull it from Ollama and it'll be available to use here.
ollama.com/library/nemotron
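So something along these lines (double check the exact tag on that library page):

ollama pull nemotron:70b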
Thanks Cole. This is awesome because you have integrated local models. I'm using Agent Zero but can't fully write full-stack apps with it. Now I want to try Bolt. So which Ollama model is best for full-stack development?
My pleasure! Out of all the Ollama models, I would give codellama a shot first!
@@ColeMedin thanks i will try it.
Sounds great! You bet!
@@ColeMedin codellama 13b and 34b are not working properly. They don't follow the instructions or use the tools at all. They can't work with "artifacts". Going to keep trying something else...
@@ColeMedin I found one interesting fact. I set the system prompt of DeepSeek-Coder V2 16B in Ollama, and when I ask it to do some stuff it works like a simple chat bot, but when I tell it to use artifacts it starts working properly.
Another interesting thing: when Bolt uses Anthropic models like Claude Sonnet, it passes instructions through the system prompt when connecting via the API key. The system prompt can be read from the terminal where Bolt was started (cmd/PowerShell). I copied that prompt, but the Ollama model could not understand it directly, so I rewrote it as plain text and changed the formatting, and it took on the role of the Bolt coder with its tools; it started to understand.
So could you add the same functionality when loading an Ollama model from the local API, giving it the instructions on load like it does for Claude?
I think that would fix the issue, and we could test every model from Ollama and choose the best one, because their system prompts would be set correctly.
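Something like this sketch could express the idea (assuming the fork streams chats through the Vercel AI SDK's streamText — the names here are just illustrative):

import { streamText } from 'ai';
import { createOllama } from 'ollama-ai-provider';

const ollama = createOllama();

// Hypothetical: Bolt's artifact/WebContainer instructions as plain text.
const boltSystemPrompt = '...';

// Pass the system prompt on every request, the same way it's passed
// to Claude, instead of relying on the model's Ollama Modelfile.
const result = await streamText({
  model: ollama('deepseek-coder-v2:16b'),
  system: boltSystemPrompt,
  messages: [{ role: 'user', content: 'Build me a todo app' }],
});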
thanks! keep up the good work
Thanks so much Hugo! Your support means a ton to me! 😄
This is super fucking cool. You're telling me that if you have a fast PC the local models work better? Time to install VS Code and give this a try on my gaming PC, haha
Haha thanks man! Not all local models work super well but some do, especially the bigger ones like DeepSeek-Coder V2. Hope it works well for you!
Your explanation is really detailed. I really like your style. Please keep at it
Thank you very much! I definitely will keep at it!
Great video. Can you explain how to install any LLM into an existing Bolt.new? 🙏🏼
Thanks!
Thanks! I'll be doing a followup video on this!
Awesome, really awesome, #1 quality content you have. If I may, I want to ask: I downloaded your fork, then ran pnpm install, and everything is working well, but I can't see the created files on the right side. There is nothing... I created something from the sample todo but no files were created on the right side. How can I get this working, if you know? Thanks
PS: Using Windows 11 and Ollama with deepseek-coder-v2 to try it out.
Thank you very much man!
I assume you are using the 16B param version of Deepseek Coder? The smaller models sometimes don't work very well with Bolt.new's prompt so they won't open up a WebContainer on the right side and it'll be more like a regular chat. Still helpful but yeah obviously not what we are looking for mostly.
If you are able to, I would try a larger 30b+ param model like CodeLlama 34b or CodeBooga 34b.
Otherwise it might be possible to change up the Bolt.new system prompt to work better with smaller models. That is something I am still researching!
@@ColeMedin Thank you for the fast answer. I found something like this: if I say "create me a todo list app" it doesn't write to the container, but if I say "build me a bla bla" it does create it there. And I really want to ask: for a React/Next.js + Tailwind combo, which LLM gives the best results in your opinion? Thanks again for this awesome work!
Of course!
Interesting! So you're saying even small changes to the prompt can help the smaller models interact with the webcontainer properly?
I've been having a lot of fun and success with DeepSeek-Coder V2 236b from either Ollama (though you have to have a really good machine!) or OpenRouter (super cheap). It doesn't do the best with styling but it corrects itself really easily when you ask, and the functionality is super good.
@@ColeMedin Dang I guess I can't run this then... Now I see why my models aren't opening the editor lol. Guess I have to wait until newer models come out that can handle it.
Amazing job
So how does Bolt compare with Cursor? And can you make AI agents within Bolt? Like, does Bolt replace VS Code?
Great questions! I've had a LOT more luck with Bolt.new compared to Cursor. I like both but Bolt.new has given me a better experience overall.
Bolt is more focused on the frontend even though it is full stack, so I wouldn't necessarily use it for creating AI agents. But you could certainly try and see what it can put out for you!
@@ColeMedin Thanks for your reply.
Of course!!
@@ColeMedin I tried your repo. I am having issues using Groq. It just says "there is an error processing your request". I tried all the various Groq models you included too.
Man, this thing works great! Solid work my friend. Quick question: is there a way to paste a screenshot in the fork you created so that it can interpret it, or is that only in the original one? Thanks
Thank you so much, I'm glad it's working well for you!
Unfortunately Bolt.new doesn't provide this feature in the open source version. I guess they have to keep some things closed source so people are willing to pay for what they offer in the cloud.
But I am considering adding support for this in my forked version!
@@ColeMedin +1 for image input! Is this complicated to add? I'm curious about how it works in the cloud version (and in V0, Replit, and others..).
@davidbraun7356 I'm guessing it will be fairly complicated... and also not all models will support it so I'll have to figure out how to make that a good experience too. But it would be freaking awesome to have in the fork!