Thank you for your time and effort. And thanks for sharing 🤘👍
Ouh thanks for your generous support! Revisit my humble establishment! :D
What's new on your side? What features do you look forward to in Bolt?
@@EduardsRuzga Nothing in return 👍... but depending on what others might want, I'm the Docker guy. Getting this to run in a container allows for a repeatable experience, both local and hosted.
I ran into a roadblock implementing it due to wrangler. But seeing your recent local setup I may be able to get going. But if someone beats me to it, awesome
@@toddschavey6736 Nah, I failed so far to make it work on Linux... like you, with wrangler.
There is github.com/cloudflare/workerd
It's Cloudflare's runtime that is used to run Cloudflare Workers...
For now I deployed to Cloudflare.
If someone makes it work in containers, awesome...
I considered rewriting Bolt to just work with Node in production, but it's a large change that would make it incompatible with the upstream repo. So I postponed this question for now.
I want to be a part of this amazing revolution in technology development. Much love from New Orleans!
thank you for this info! just the video i was looking for
Thank you! Glad it was of help. What exactly were you interested in? There are multiple topics covered here :)
Keep going bro don't stop🎉❤
Thanks! I am, just some slowdowns but not stopping ;)
bro these videos are slept on
Awesome video and thanks for keeping up with this project
Braaaavo mate for your effort. It seems you care! Thankssss
Thank you! Well, I have been interested in and covering this topic for a long time. I mean coding with AI. I blog on Medium too and have articles about ChatGPT and coding as far back as April 2023.
Now we have open source bolt :) Of course I care :)
@EduardsRuzga I wish I could have a developer like you in my team. We are heading for the MVP of a big web app. Anyway... you never know. You are valuable!
@georgezorbas9036 thank you! What MVP are you guys doing?
@@EduardsRuzga I am sending you messages and they don't show up!!!
@@georgezorbas9036 that's youtube, find me on LinkedIn for DMs.
They also don't support their customers. They essentially told me to use a different service. I wanted them to answer questions. I was lied to and treated like a burden. I was a paying customer with a broken product.
I feel your pain. I was burned by Replit like that before. They did not have a free test tier; I was excited, paid $25, tried 5 things, all failed, and was bitter.
All of them should state upfront that it is an alpha/beta with issues.
As for StackBlitz, there are some things they have over others, for which I do give them more credit:
1. They have a free tier to try before you buy
2. They have an open source version
I still agree that it was premature for them to start charging...
And being in the AI space is tricky... I mean, when you paid them, they paid AI providers, so they do not have your money anymore...
Their choice is to go bankrupt by refunding you or do what they did... Not new in the startup world... Not good, and it should not be that way...
8:22 bro you are literally crazy, Thank you so much for this
Hah, "crazy" - thanks for the compliment :D
@@EduardsRuzga I’m crazy but can we integrate groq ai models here? And go crazy with it?
@@Epirium I think yes. I did not look into Groq, but I heard that they also have a free tier. Currently it's OpenRouter, but I want to add Google Gemini, Groq, Anthropic, and OpenAI next. Groq and Gemini have free tiers. There are also others. I want to add many providers so people can use the keys they have, plus have more free options to choose from. I think I will add it over the next 1-2 weeks.
Amazing! The only issue I'm experiencing is the LLM hallucinating on large projects, forcing you to refresh your browser and ultimately losing your work. Wish I knew a way around this issue so I could create large, robust websites.
@@brianbayarea510 Yeah, the solution is selective updates. It's coming to commercial Bolt; open source will get there too
1:04 Coleman is doing separation based on API keys in an env file, pretty simple, but your rebuild is awesome because it has a lot of features
@@Epirium We may merge with Coleman; I'm communicating with him in his fork
@@EduardsRuzga Yes, that would be great for the project; you should contact him to collaborate 😁. Actually I am following him, and today I saw your video. Really great job!
Can you do a separate video showing us a step-by-step guide to installing Bolt in a local folder? What you did, and how, was a little too fast for us.
What did you miss? It is literally: download the zip, install Node, open a terminal, npm install, npm run build, npm run start.
I may consider doing another one if you can share more details on where I lost you.
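For anyone following along, the steps listed above as terminal commands (a sketch only; the zip name and folder are placeholders for whichever fork you downloaded, and Node.js is assumed to be installed already):

```shell
# Unpack the downloaded zip and enter the project folder (names are placeholders)
unzip bolt.new-main.zip
cd bolt.new-main

npm install     # install dependencies
npm run build   # build the app
npm run start   # start the local server
```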
@@EduardsRuzga I was able to download the zip file from GitHub. I have already installed Node.js and npm as well. But after that I do not know what to do, or where. By the way, I am from Sri Lanka and 55+, not in tech. So you know what I mean!
@@EduardsRuzga Can you do it for windows?
@@humanbodyinsigh Yes. I recorded it yesterday already, but my recording software failed, so I lost my work. Will try again tomorrow.
Thank you .. very good work. Please keep it up.
Many thanks! I like the auto backup to folder idea - I'll have a look at your fork again. What I did notice a few days back was that it looked as if the jsZip code would download everything, including node_modules, logs etc. Is/was that the case?
Bolt by default hides such folders, so no, it does not sync node_modules
@@EduardsRuzga Thanks for the answer to my idle and incorrect speculation! I'd assumed they would be in the project directory even if hidden by the bolt interface, I know there are some, such as .bolt, that are hidden.
@@theoilybeard3287 They hide it so it's not sent to the AI, if I understand it correctly.
I haven't dug into how it works )
@@EduardsRuzga No, not enough time for that! I'm interested to know where they do live now though - I'm going to poke around and see if I can find where they are. And then see if I can get your port to run locally, just for fun!
@@EduardsRuzga Ah, so node_modules _is_ in the ~/project directory, just checked it with ls. I guess there's a filter that hides it from the AI. Anyway.... I'll stop distracting you!
Thank you, much appreciated.
Hi, can you help me? I created a system that is 90% complete, and data already saves to Firebase, but there is an issue:
if I log in, the saved data is not the same - the data is not synced. Tell me what I need to do; if I log in on a different device, the saved data is not the same.
@@amancarseat92 I would need to take a look somehow. Maybe share it on GitHub, or share a video.
I followed the whole procedure to the letter, but when the site opens an error comes up! And I can't use bolt.new.
What kind of error?
@@EduardsRuzga Good morning, I also contacted you on LinkedIn. Anyway, as soon as I get home I will send you a photo on LinkedIn of the error I get; I think it's due to the Node file, because it seems incompatible.
love this, much appreciated
It would be great to have the option to upload images or files
It is coming; people have already tried. By images, do you mean allowing it to generate UI from images, or adding images to the code to use in the website/app?
Also, a way to retain all conversation history and keep extremely large context windows to build out full stack apps.
@@huntermatthewricks That second part is complex. Cursor is the only product on the market I know of that does that. Maybe GitHub workspaces + Copilot can do it, but not as well as Cursor.
thank you for the video and your effort!
the prompt enhance option isn't working
btw, which application do you use for edit you videos?
There are bugs here and there.
I use combinations of:
1. Infogram.com (camera + animations for the first part of this video)
2. OBS for some of the screen recordings
3. iMovie for some editing, especially Audio
4. For this video I bought Screen Studio for a month, it has some issues but it's good for such hands-on demos with screen recording. The second half of this video with the flying camera is done with that
If you want more info find me on LinkedIn, connect, DM, I will share screenshots there for what and how I do
Actually...
Here is Infogram template
infogram.com/1p9zzjm5vy7w5ku752qppdygmys37lvzw3r?live
Inside of editor, you can have a camera that you can also resize and animate
GIT support, PLEASE!! Thanks brother!
@@huntermatthewricks It is on my roadmap, but not at the top.
First more AI providers and file upload; file upload should then feed into git.
OK, many people might have downloaded Ollama on their PC, or some other LLM, so they would want to use it when not connected to the net. In such a scenario there should be a way to connect and work, right? I have Ollama on my PC, but it still asks me to download Ollama beside the key icons??? If our main purpose is to avoid charges, then this approach would really help, right? Or, how can we connect a locally downloaded LLM with Bolt? These two approaches could be done, right?
@@--INDIAN--TRADER It does not yet work from the hosted version.
You need to download and run it locally too.
But I have some ideas for how to connect the hosted website variant to local models. Not at the top of my list yet.
@@EduardsRuzga Yes sir, I have bolt.new downloaded and installed locally and it now runs, and I also have Ollama, which was already on the PC. So that is my question: if we already have both, why do we need to connect to another LLM online when we can use the Ollama LLM that is already downloaded? That was my question. Sorry if it was not to the point...
Awesome, thank you. How do we get chat history like in bolt.new so we don't lose our work?
Not sure what you mean. You want to return to the website and still see the chat history?
In theory I could export chat history to files too.
But currently it's already saved in the browser cache, so on the same machine you should see it.
As for storing it server-side across devices, with user login, that is something to work on.
The command npm run start gives an error
@@nocopyrightsongandmusics9705 Copy the error into the chat and ask it to fix it.
Auto/manual fixing will be added later.
"There was an error processing your request"
I am having trouble with rate limits; I will be looking into it over the next days and will get back to you. Sorry for the bad experience. I do speak about that in the stability part, and the current advice is to get your own free key from OpenRouter.
Thank you. So if I get an OpenRouter API key, I can then use your Bolt site without this error?
@lancemarchetti8673 Yes, try that. There are 200 free requests per key for free models. I plan to add more providers, which should help.
I am encountering an error in the terminal: "'bindings' is not recognized as an internal or external command, operable program or batch file." Can you help?
Oh right, this is a Linux/Unix thing.
As far as I know, it should work under a WSL (Windows Subsystem for Linux) terminal.
I think I need to get around to making a small video about that part.
Will explore over the weekend what the easiest way is, and make a small video next week.
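A guess at the cause (an assumption, not verified against the repo): the project's npm start script likely sets an environment variable Unix-style (something like `bindings=... wrangler ...`), and cmd.exe parses that leading assignment as a command named `bindings`. Running the same steps inside WSL sidesteps this:

```shell
# Hypothetical workaround: run the project from a WSL shell instead of cmd.exe.
# Assumes WSL and Node.js inside WSL are installed; the path is a placeholder.
wsl                           # enter the WSL shell from Windows
cd /mnt/c/path/to/bolt        # navigate to wherever you unpacked the project
npm install
npm run start                 # Unix-style "VAR=value command" syntax now works
```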
@@EduardsRuzga sure thing ! make asap
What recording software are you using?
I use a combination. At the start of the video it's an Infogram presentation with a camera, plus OBS to record the screen. After that I use Screen.Studio.
What's the best combination with Bolt locally on an M1 Mac with 16 GB of RAM? I was able to run DeepSeek Coder 16B, but it had issues generating the files
Yeah, my experience with local coding models so far has been so-so. GPT and Sonnet are just too good in comparison.
Local ones run slower and work worse.
I did try DeepSeek before and it was so-so.
I heard recently some good things about Mistral's Codestral; I downloaded it with LM Studio but had no time to test.
Great video. Thank you. Do you know if there's a way to tweak bolt.new's functionality so that when I give it various commands to tweak my app, instead of writing every single piece of code again, it only changes and rewrites the file(s) that need to change? This would really speed up development and save lots of API credits. Thanks. (P.S. I am a no-code / non-developer.)
@@g13jon In theory yes, but it's hard in practice. I have been playing with this for a while, since summer 2023, and it's not an easy thing to do.
Even the new ChatGPT Canvas struggles with it.
@@EduardsRuzga Ah I see. Well hopefully one day it becomes a reality for the bolt service. Thanks again for great videos and your online demo hosted version of the open source model. 🙌👍🏽
Fragments is good , please try that also
The e2b thing? I tried it. It uses server-side containers to run server-side code, which is more challenging to host and works slower. But it has the bonus of running more than JavaScript. I wonder if one could add that to the WebContainers that Bolt uses.
Overall, Bolt is simpler to expand, develop, host, etc.
So I put my efforts behind that.
@@EduardsRuzga hmm yeah you are right
When I host the Bolt website, I can't host the generated website preview at the same time
You mean you can't share a link to a site you made? If yes, then you are right; I am thinking about how best to solve it. You would need to deploy.
Local file sync doesn't seem to be working on your web version?
@@benjamenharper Which browser and operating system do you use? I will test.
@@EduardsRuzga Windows 11, Chrome
Can you do a Windows install?
@@humanbodyinsigh yes i will tomorrow
Give Bolt the prompt to add the features to itself!
@@thomasschoko3527 You mean allow it to work on its own codebase?
I do want to get there )))
Already trying out the tired meme: so I heard you like Bolt? Well, I put Bolt in Bolt so that Bolt can develop Bolt )
OpenRouter has a very short rate-limit window. Please add support for custom APIs like OpenAI, Gemini
Yes, I am thinking of adding the Gemini API next as one of the fallbacks.
Want to add it over the next few days.
Is there anyway to upload code?
@@huntermatthewricks Not yet; second on my list of priorities. A bit later.
Nice video, but once again?? Windows and Windows conda: neither even gets close to working. Please do a Windows install video or provide a Docker image etc. Thanks
I guess I can do a small one at some point, I will get back to you if I get to it. Thanks for dropping by.
and code upload also needed
Second place on my list, after more free model providers
image upload plz
@devangk3899 Image upload to the project, or giving design ideas to the AI?
Add ollama please
Local models are not at the top of my list; they may move higher if more people want them, but my goal is to do it in ways that still allow hosting it online.
I am also interested in it working in a browser on a phone, which is the opposite of the Ollama approach.
@@EduardsRuzga I am not a developer, only a sysadmin, but I think instead of spending money on an API it is easier to buy a powerful PC and use 70B models forever, and create as much content and as many apps as you need for free... But it is only my opinion.
@theNotLogo Well, that is a complex topic with an article's worth of answers. Currently, I do not recommend local models, for many reasons. Server hardware is cheaper per token, but it's an upfront investment of 10-20k that is going to go obsolete next year. You probably will not get your investment back. It's like renting/leasing a car vs. buying upfront in a market where machines become more fuel-efficient every year.
Now, a consumer-grade RTX 4090 with 24 GB of VRAM costs 2.5k where I am. It will not run a 70B model, and 70B is not as good as Sonnet or even GPT-4o in general. So you will get less quality. Let's say 2x less quality. That means you just spent the equivalent of 5k on tokens for 1-2 years. I plan my spending on tokens to be 10x less while using frontier models from providers.
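The back-of-envelope math in this comment can be written out explicitly (a sketch; every number below is this comment's rough assumption, not measured data):

```python
def effective_token_value(hardware_cost: float, quality_penalty: float) -> float:
    """If local output is `quality_penalty` times worse than a frontier model,
    buying the hardware is roughly like prepaying
    hardware_cost * quality_penalty worth of comparable-quality API tokens."""
    return hardware_cost * quality_penalty

# Assumed figures from the comment: ~2.5k for an RTX 4090, local output ~2x worse.
prepaid_equivalent = effective_token_value(2500, 2)   # 5000 "token-equivalents"
# The commenter plans to spend ~10x less on frontier-model tokens over the
# same 1-2 year window in which the hardware stays competitive.
planned_api_spend = prepaid_equivalent / 10           # 500
```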
@@EduardsRuzga Maybe you are right... Which provider is best and cheapest? I want to try using an API. I am interested in how many tokens I will spend in one day. I am working on one project and it seems very difficult to get to production.
@@EduardsRuzga however thank u for your responding, and spending your time for me. Thanks so much..🎉
Cost is very high
Well, that is why I am interested in open source, other models, and other ways to work with it. But StackBlitz is also working to improve that.
What other products do you use?
Hello I would like to reach you about a collab. What is the best way to reach you? Email?
Find me on LinkedIn or Discord
www.linkedin.com/in/eduardruzga?