Yo. You turn the tedious little details into a juicy episode, which is great for the YouTube algorithm in the long run.
Thank you for your contributions, Mervin! Including a repo link in the description would be helpful, just saying.
Many thanks for the great video.
One question: where is the LLM downloaded to? I want to free up the space to try another LLM. How can I delete the 4 GB again?
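In case it helps: Ollama keeps downloaded models in its own store (by default ~/.ollama/models on macOS/Linux, under your user folder on Windows), so there is no loose 4 GB file to hunt down. To free the space, list your models and remove one by name, e.g. (the model tag here is the one from the video, swap in yours):
ollama list
ollama rm qwen2.5-large:7b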
Can this also generate the backend, or only the frontend? If it's frontend only, it's a waste of time.
I like this approach. Thank you Mervin
Love your videos - nice work on showing warts and all
Just can't get Ollama models to appear under Ollama. Two hours of diagnosing with Claude and still nothing. Everything appears to be running.
Can you show how we can use other LLMs, not only qwen2.5-large:7b?
Thank you very much.
How do I know the max context length for each model?
Great job, well done!
Why the hell does it not ask for an API key when you install it, but when I do, it does?
This is awesome!!!
What is the best LLM for coding apps?
Can I also use the OpenAI API for this application?
Do we need a graphics card for this?
For some reason my instance defaults to Anthropic even when I select Ollama. I discovered this because the Anthropic key was not set, and it complained about it in the error output even when an Ollama model was selected.
Where can I find the modelfile?
I've done everything several times, but I'm not getting the actual code and files.
Did you try increasing the context length?
Also try various models.
@@MervinPraison Thanks for the reply. Do I need to change the modelfile or any of the commands if I change the model? Also, should I increase it more than you have set?
Is bolt.new using Claude by default?
I have used gemini-1.5-pro, but it is not working either. Gemini has a context length of 2M, so the problem must be with the software itself.
Well, why not use an OpenRouter API key to test the local bolt? There you can find the big models for free, with bigger context lengths as well.
How do you make the modelfile? What kind of file does it have to be? I have no clue how to create this from my CMD prompt (Windows computer).
Right-click and create a new file. Name it modelfile.
@MervinPraison Thanks. But what file format do I need to make? Just a folder, a txt, ...?
@@mikevanaerle9779
Create modelfile.txt and run the command below:
ollama create qwen2.5-large:7b -f modelfile.txt
Doc: mer.vin/2024/11/bolt-new-ollama/
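For anyone wondering what actually goes inside modelfile.txt, here is a minimal sketch, assuming the goal is the bigger context window from the video (the FROM base model and the num_ctx value are just examples, adjust them to whatever you pulled):
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768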
@@MervinPraison Thank you
@@MervinPraison For me it does not show the preview or the code on the right.
Good video as always... but I think you skipped the creation of the .env.local file.
He assumed that you already know how to use Ollama and the env tweaks.
Thanks
Hey Mervin, you forgot to tell your viewers to change the .env file to use the Ollama local API.
Yes, I want to know how to set the .env for Ollama.
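If it helps, a minimal .env.local sketch for Ollama (the variable name is the one from the fork's .env.example, if I remember it right, and the URL is Ollama's default local endpoint, so change it if yours differs):
OLLAMA_API_BASE_URL=http://127.0.0.1:11434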
Does this work on Windows?
Yes
To run more than 7B, we will need more RAM, right? 64 GB or more?
Yep
25 GB of RAM.
Can I install this from within the VS Code terminal?
Yes, you can use any terminal.
@ Oh great, this really looks interesting, I will try to install it. Thanks.
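In case it saves someone a search: once you are in any terminal and inside the cloned repo, the rough sequence is the one below, assuming pnpm, which the fork's README uses (npm should work too):
pnpm install
pnpm run dev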
It works perfectly, but a GPU is a must for speed. For me, when I asked, it corrected some files.
You should have given him credit!
Nice :D
I think this is a good start, but it is still not that powerful, and for someone who already codes fairly quickly, it feels much slower at the moment. Give it a couple of years and I reckon it might be worth it.
What are we talking about here? An API?
This fork of bolt.new enables the use of any provider, including local (on-machine) inference via Ollama, as in this example.
Could you please do a proper video explanation again? This is not a good explanation, sir.
Hey, I saw your videos. They're great and informative, but your thumbnails are not appealing enough. I think you should hire a professional thumbnail artist for your videos to increase your view count, because every impression matters. I can improve your CTR from 2-3% to 15%. Please acknowledge and share your contact details to get your thumbnail.
I don't understand anything from the start... You say "in your terminal", but what terminal? Dude, you can't start a video tutorial by assuming certain things.
The Visual Studio Code terminal, or your preferred IDE. You are cloning the GitHub repository.
You can't expect the tutorial to start with an explanation of how to turn your computer on.
@@zipaJopa You must be the funniest person at home... This YouTuber didn't even share the git clone command, as he claimed in his video...
@@AnimalCentral-l7p but it's bolt.new-any-llm?
Terminal / Shell / Console / Command Line all mean the same thing.
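And for anyone still hunting for it, the clone command would be something like the line below (the repo path is my assumption based on the fork name mentioned above):
git clone https://github.com/coleam00/bolt.new-any-llm.git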