How to Use Bolt.new for FREE with Local LLMs (And NO Rate Limits)

  • Published 22 Nov 2024

COMMENTS • 328

  • @ColeMedin
    @ColeMedin  15 days ago +17

    I'm building something BIG behind the scenes:
    ottomator.ai/
    ALSO
    Check out the FlexiSpot C7 ergonomic chair at bit.ly/4fBz8R3 and use the code "24BFC7" to get $50 off the C7 ergonomic chair! Like I said in the video, investing in a good chair is crucial for any developer, and FlexiSpot is certainly a good option.

    • @arun0191
      @arun0191 15 days ago

      When are you going to launch? Soon to the moon

    • @TableFlipFoundry
      @TableFlipFoundry 14 days ago +2

      I've been watching you closely with this! I have also been working on my own thing. I really want to share it with you because I think there is a chance you can add it to oTToDev. I don't know how to reach out, but I'd love the chance to show you the idea.

    • @arzanpowvalla1318
      @arzanpowvalla1318 12 days ago

      Perhaps you can start adding the new features now. There are many open PRs, so maybe you could begin merging them.

  • @infinitywork7069
    @infinitywork7069 7 days ago +9

    You have to make another video explaining all the installation steps and commands through to the end. Personally, I could not install it. You said there is a link to a reference video, but I did not see anything. Not everyone is an expert, so please make a detailed step-by-step tutorial, because we also want to take advantage of this technology. Thank you very much.

    • @ColeMedin
      @ColeMedin  4 days ago +1

      I do cover how to run it in this video!
      ua-cam.com/video/31ivQdydmGg/v-deo.html
      Or is there something you think is missing from the steps? Let me know!

  • @ScottLahteine
    @ScottLahteine 15 days ago +8

    This is exciting stuff. Just being able to call out to an LLM from your code is exciting stuff. But then, so is being able to ask the LLM to help reorganize some text into JSON, and choosing code paths based on what you get back. And also, allowing LLMs to write code for execution or generate new helper agents on the fly. This technology adds an entirely new loose dimension to the craft. It’s hard to know where to even begin. But an agentic tool that builds multi-file applications is an amazing proving ground for LLMs to balance creativity, knowledge, and control, and I can’t wait to put it to the test!

  • @tdadelaja
    @tdadelaja 15 days ago +12

    Hi Cole, you are a miracle, and I want to thank you. Please, I have a request. Could you please make a video for a five-year-old walking us through how to install the whole thing? I am not a programmer, and I have no coding knowledge. I just have ideas I want to build.

    • @u.a3
      @u.a3 15 days ago +5

      Till he makes a video and walks you through, here's what you can do:
      • Find a video on YouTube that explains how to get a GitHub project onto your device.
      • Go to the GitHub link in the description of this video to get the project onto your device. Everything will be set up; you just have to do with the model files what Cole did.
      If you want to understand Ollama and getting its models, YouTube is again your friend. That's how I learned, and I have zero coding knowledge. Just break this down into simple steps and you'll be on your way.

    • @ColeMedin
      @ColeMedin  15 days ago +3

      You are so welcome! I actually already have a video on my channel for getting it up and running super easily:
      ua-cam.com/video/31ivQdydmGg/v-deo.html

    • @edgedesignslu
      @edgedesignslu 15 days ago

      I have been waiting for him to show the step-by-step process to install local Bolt, etc. Sigh!

    • @ColeMedin
      @ColeMedin  9 days ago

      It's there in the video I linked above!

    • @trilloclock3449
      @trilloclock3449 2 days ago

      Same bro, same!

  • @TurePappa
    @TurePappa 15 days ago +2

    Thanks, I subscribed and will watch more videos, because I'm trying to build an app with AI for my community on my own, and this was useful.

  • @leex7776
    @leex7776 13 days ago +2

    It would be very nice if you could show how to use it installed on a server instead of locally. I think this is not working at the moment, and a lot of people have problems with it because of cross-origin errors and the like. Other than that, thanks for your videos and your work on it :)

    • @ColeMedin
      @ColeMedin  10 days ago +1

      Yeah I will be making a video on this in the future!

  • @PixelFrontier-channel
    @PixelFrontier-channel 15 days ago +2

    Dude hell yes! I have really wanted this because my smol LLMs aren't working well. God bless.

  • @themax2go
    @themax2go 15 days ago +17

    here's my op prompt: "make it powerful and look awesome. do not skimp on feature richness or looks. it's gotta rock my socks off. you get $1,000,000 if you get it right on the first go" - trust me, that totally works 😎

    • @ZukunftBilden
      @ZukunftBilden 15 days ago +3

      Nice of you to already go into debt to fund the expansion of the future overlords' reich ;) Curious, when do you think they will come to collect?

    • @humbleonyenma
      @humbleonyenma 9 days ago

      Where is the repo, and how do I get it running on my computer?

    • @Arewethereyet69
      @Arewethereyet69 2 days ago

      Thanks now I got subpoenaed by my own device for failing to pay.

  • @JJ-tr8cu
    @JJ-tr8cu 15 days ago +6

    Can't wait for the n8n tutorial too!

  • @coollobsterr
    @coollobsterr 15 days ago +3

    Ollama now supports vision!!! Can't wait to see images in oTToDev

    • @chind0na
      @chind0na 9 days ago

      Ollama LLaVA for vision

  • @awakenedsoul3501
    @awakenedsoul3501 15 days ago +3

    This is great Cole, keep the improvements coming. Awesome update :) Thank you for all your hard work.

    • @ColeMedin
      @ColeMedin  15 days ago +1

      Thank you! You bet!!

  • @lazetrader9160
    @lazetrader9160 15 days ago +2

    Hey Cole, I am impressed by your content, and kudos to you for the hard work of contributing. I have two Pro 200 Bolt subscriptions and both run into the 200k token limit. Then I tried using your forked version and encountered many bugs; I couldn't deploy my project or preview it either. Is it possible to do a tutorial on how to build and deploy a project? I don't mind paying for your consulting call if needed.

    • @ColeMedin
      @ColeMedin  15 days ago +2

      Thank you very much! Which model are you trying to use? Sometimes smaller ones if you're running locally will struggle with that kind of thing.

  • @sidharthrout728
    @sidharthrout728 15 days ago +3

    Thank you for the video, but could you kindly explain how to configure the Ollama models? Thank you.

    • @ColeMedin
      @ColeMedin  15 days ago

      You are welcome! What do you need help with as far as the configuration?

  • @climateireland7546
    @climateireland7546 15 days ago +1

    Bravo sir!
    To keep your back strong you should do the quadraplex exercise, good mornings, etc.

    • @ColeMedin
      @ColeMedin  15 days ago +1

      Thank you! Yes, I've been doing a lot of exercises to help over the last year and it's been working great - I appreciate it!

  • @codigomovil
    @codigomovil 14 days ago +3

    Great video!! 🎉
    I have a question... If I create a local LLM model with a larger context window, can it be used in Cline or Continue for code generation?

  • @SouthbayCreations
    @SouthbayCreations 15 days ago +1

    Hey Cole, fantastic video! You mentioned at the beginning of the video that you were going to link the n8n workflow video, and towards the end you mentioned linking another video, but forgot the links. Not complaining, just wanted to let you know. Thanks! Jason

    • @ColeMedin
      @ColeMedin  11 days ago +1

      Thanks for the heads up! I try so hard to not forget those links haha, I'll add it into the description!

  • @CashuzDaGeneral
    @CashuzDaGeneral 7 days ago +1

    This is great stuff. 😂😂Just subbed and looking forward to more. This is great for what I am looking to bring out. ❤❤😂😂

  • @findingfoodi
    @findingfoodi 7 hours ago

    Hello Cole, where is the README which you are referring to in the video???

  • @ToddWBucy-lf8yz
    @ToddWBucy-lf8yz 15 days ago +1

    If you haven't yet, you really should check out the Granite 8b dense model. Fast, like really fast, and pretty damn good at knocking out the boilerplate.

    • @ColeMedin
      @ColeMedin  15 days ago

      I will, thank you for the suggestion!

  • @zebcode
    @zebcode 15 days ago +1

    Ah, you already built the agent in n8n! I did some very basic exploratory stuff in code with Ollama. I had thought of this idea, but I'm looking forward to pulling and trying it when I get a moment.

  • @omarnahdi3380
    @omarnahdi3380 13 days ago +1

    If we use a local LLM from Ollama, does the model need to be 128k context length?
    Btw, great project from an individual YouTuber 🤟

    • @ColeMedin
      @ColeMedin  10 days ago +1

      Anything above 8k context length is good! Thank you!

  • @kuyajon
    @kuyajon 15 days ago +3

    More power, Cole! Solid fan from Manila here.

    • @threathunter369
      @threathunter369 15 days ago +1

      hey kabayan (fellow countryman), where in Manila are you?

    • @kuyajon
      @kuyajon 15 days ago +1

      @threathunter369 Quezon City

    • @threathunter369
      @threathunter369 14 days ago +1

      @@kuyajon wow nice, have you tried installing it locally on your machine? The new update won't work for me; I get errors when I send a prompt to Anthropic and the others. Only Ollama worked, and even that was super slow, haha :)

    • @kuyajon
      @kuyajon 14 days ago

      @@threathunter369 hello, I haven't gotten to use something like this locally yet, just watching for now. All I've done is run Ollama plus its web UI, so it's like I have a local ChatGPT. With small models, 7b and below, it's not that slow on my laptop's 1660 Ti 6GB. I haven't tried anything bigger, though; I'm sure there's no hope there. I'm not too serious about it yet since I don't have decent hardware, but I'm enjoying it so far.

  • @piyushlamsoge6007
    @piyushlamsoge6007 8 days ago +1

    This is just amazing!!!!
    Exactly what I was looking for.
    But I have a request: if you're reading this, will you please make a full tutorial video on how to create an API endpoint and connect it to any code using a webhook, just like you did in this video? It was really helpful to me.
    I'm looking forward to your response.
    Thank you once again!!!!!!!🥰🥰🥰🥰🥰🥰🥰🥰🥰🥰🥰

    • @ColeMedin
      @ColeMedin  7 days ago

      I'm glad - thank you!! Could you expand a bit more on what you are looking for? :D

    • @piyushlamsoge6007
      @piyushlamsoge6007 4 days ago +1

      @@ColeMedin I'm working on a project where I want to create the backend completely in n8n and connect it to a frontend designed in Next.js.
      But I lack knowledge of webhooks and how APIs work with n8n.
      I'm hoping you could help me by making a tutorial or project where I can find something relevant to my work!
      And of course, thank you for your reply 🥰🥰🥰🥰

  • @wasimdorboz
    @wasimdorboz 15 days ago +1

    Why are you the best? Because everything is free. And thanks for the prompt!

  • @jassingh8717
    @jassingh8717 14 days ago +1

    Hi Cole, what Mac machine do you recommend for development with bolt.new running local models? It would be nice if you could tell us which is best on a budget and which is best overall.

    • @ColeMedin
      @ColeMedin  10 days ago

      To be honest I'm not an expert on Mac machines, but I would ask in the new Discourse community (thinktank.ottomator.ai), I know there has been discussion on this already!

  • @ChrisMateo-q6s
    @ChrisMateo-q6s 15 days ago +2

    This is awesome! Does this have the ability to go back to a checkpoint yet, or no?

    • @ColeMedin
      @ColeMedin  15 days ago

      Not yet but that is a much needed feature for sure!

  • @timothymaggenti717
    @timothymaggenti717 15 days ago +1

    Thanks for this. But I can't get Docker to work; tried and tried, and even Claude can't make it work. Thanks for the CTX fix.

    • @ColeMedin
      @ColeMedin  15 days ago

      You are welcome! What is the error you are getting in Docker?

  • @QuizzyQuestX
    @QuizzyQuestX 14 days ago

    I run it locally with llama3.1:8b and it works well. I don't know why, but the preview function doesn't seem to work reliably; first it worked, and then it's just fully white.
    My wishes:
    1. Editing previous messages.
    2. Attaching files (e.g., images for context). It would be nice for the new Llama vision model to gather all the info, then switch to a code model and implement the analysis.
    Just tested it for an hour.
    Thanks for this!

    • @ColeMedin
      @ColeMedin  10 days ago

      You are welcome! Yes, sometimes smaller LLMs don't give the right commands to start the preview.
      Love the suggestions! :D

  • @clausladefoged7347
    @clausladefoged7347 13 days ago

    Love your work, Cole! Thanks a lot. I have oTToDev running locally now. All is good, but quite often the preview is just blank. Is that a known issue, or more likely related to my Mac?

    • @ColeMedin
      @ColeMedin  10 days ago

      You bet!! Sometimes the LLMs will hallucinate the wrong commands, which stops the preview from working. I'm guessing that is what is happening here.

  • @im_the_Arka
    @im_the_Arka 15 days ago +22

    Claude is a joke fr
    You give it a prompt, it gives you 20 errors; it fixes 20 errors and gives you 30 new errors; it fixes them aaaaand... you're out of tokens...

    • @themax2go
      @themax2go 15 days ago

      🤣😭

    • @ashwinash3284
      @ashwinash3284 15 days ago +1

      It has become a commercial model these days

    • @CauseOfDeath27
      @CauseOfDeath27 15 days ago +1

      so true

    • @ColeMedin
      @ColeMedin  15 days ago +4

      EXACTLY haha, this is why I wanted to build something without rate limits or tokens that get eaten up.

    • @im_the_Arka
      @im_the_Arka 15 days ago

      @@ColeMedin thank you again
      I'm still trying to get Ollama to work, because Claude has so far given me bugs that it can't fix itself and just eats up millions of tokens, and I get nothing in return.

  • @LeonGustin
    @LeonGustin 15 days ago +1

    Man, I would really love to see you get this working on a phone, because I'm really busy throughout the day and at certain times I would like to be able to just hop on my phone and try to crank out an app. Don't know if it's possible.

    • @ColeMedin
      @ColeMedin  15 days ago

      It would certainly be possible by deploying the fork as a website like the commercial Bolt.new!

  • @build.aiagents
    @build.aiagents 15 days ago +1

    Phenomenal! Create a version for n8n, prompt and deploy n8n workflows 😏

  • @mpsmanger4713
    @mpsmanger4713 14 days ago +1

    Can you recommend someone to help me (a newbie) set this up on my home PC for a project I am working on?

  • @pranavshil419
    @pranavshil419 15 days ago +1

    How do I make it stateful, so I can save a version of my app and build on top of it? The problem I face is that it forgets what it built; when I ask it to update something, it starts building from scratch again and the UI looks different.

  • @cyrusjameskhan
    @cyrusjameskhan 10 days ago +1

    Great stuff, thank you for sharing!

  • @gslezendyt9344
    @gslezendyt9344 14 days ago +1

    Hi Cole, can you fork Cofounder? It could be useful; Cofounder is better than Bolt.

    • @ColeMedin
      @ColeMedin  11 days ago +1

      There might actually be an opportunity to bring Cofounder into oTToDev! I think there is a time and place for both really

    • @gslezendyt9344
      @gslezendyt9344 10 days ago +1

      @@ColeMedin thanks Cole, waiting for it.

  • @TheGateofZion
    @TheGateofZion 15 days ago +2

    I have oTToDev installed on my PC from the first time you launched it. How do I get these new updates added to it?

    • @ColeMedin
      @ColeMedin  15 days ago

      Good question! You can do a git pull to get the latest changes from the repo and then restart the containers!
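      For example, a minimal sequence would look something like this (a sketch assuming a standard Docker Compose setup; your compose file and profile names may differ):
      git pull
      docker compose down
      docker compose up --build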

  • @andrinSky
    @andrinSky 15 days ago +3

    For me, when I start bolt.new-any-llm with OpenRouter using DeepSeek-Coder V2 236B, it will not open the code and there is no preview. It only shows output on the left side, but there is no code. What can I do now?

  • @jparkr
    @jparkr 15 days ago +1

    Where can I ask questions? I'm getting warnings about API keys. I want to try Ollama only. If Discourse is ready, could you please paste its link here?

    • @ColeMedin
      @ColeMedin  15 days ago +1

      Discourse is going to be launching this Sunday! Those warnings about API keys when you build the container can be ignored though!

  • @alin.gabriel
    @alin.gabriel 10 hours ago

    Any help with how to install on Coolify? I get "Google Generative AI API key is missing. Pass it using the 'apiKey' parameter. Environment variables is not supported in this environment." Am I doing something wrong?

  • @SejalDatta-l9u
    @SejalDatta-l9u 14 days ago

    @ColeMedin Great video and cool project! I added Haiku 3.5 to your list of LLMs, but no previews are being generated. Any ideas why? Am I supposed to have pre-installed any pip libraries ahead of time?
    Also, other than React/Tailwind CSS, do you know any other tech stacks that can be generated in the Bolt.new/oTToDev preview?
    Keep up the good work!

    • @ColeMedin
      @ColeMedin  10 days ago

      Smaller LLMs sometimes don't give the right commands to start the preview unfortunately; I'm guessing that is what is happening there with Haiku 3.5.
      oTToDev also does a great job with Next.js! Really anything that is within the Node environment.
      Thank you!

  • @khangvutien2538
    @khangvutien2538 5 days ago +1

    I liked and I subscribed 😇

  • @EpicEnigma7800
    @EpicEnigma7800 15 days ago +2

    Thanks for the Ollama fix, it was about to send me to a psychiatric asylum.

  • @أهدافاليوم-ط1ط
    @أهدافاليوم-ط1ط 14 days ago

    Hi @Cole, thanks for the valuable share. What if I need to use a custom API with my own LLM model?

    • @ColeMedin
      @ColeMedin  10 days ago

      You are welcome! A custom API will work if it is OpenAI compatible! You can use the "OpenAI like" integration.
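      For example, the env entries would look something like this (the variable names are my assumption based on the repo's .env.example, so double-check against your copy):
      OPENAI_LIKE_API_BASE_URL=https://your-server.example.com/v1
      OPENAI_LIKE_API_KEY=your-key-here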

  • @VaibhavShewale
    @VaibhavShewale 13 days ago +1

    Well, it would be awesome if minimum system requirements were listed in the README.

    • @ColeMedin
      @ColeMedin  10 days ago

      It really depends entirely on the models you want to use! If you are using APIs, so you aren't running models locally, you can use almost any machine. It's good to at least have some GPU when using small local LLMs like Qwen 2.5 Coder 7b, and then a bigger GPU like a 3080/3090 to run the larger 30b+ parameter models.

  • @TheSalto66
    @TheSalto66 5 days ago

    Hello Cole, if you run the command "ollama show qwen2.5-coder" you see that the context length is already set to 32768.

    • @TheSalto66
      @TheSalto66 5 days ago

      Also, after the procedure to set num_ctx to 32768, the code page and preview page are still empty.

    • @ColeMedin
      @ColeMedin  4 days ago

      Yes, true! Which model are you using? Usually a blank preview means the LLM is hallucinating bad commands or is too small. 3b parameter models are too small no matter which one.

    • @TheSalto66
      @TheSalto66 4 days ago +1

      @@ColeMedin I installed it on Windows with Docker as required. Then I use Ollama with qwen2.5-coder:7b. Sometimes it starts, sometimes not. The terminal, preview, and code windows never work.

    • @TheSalto66
      @TheSalto66 4 days ago +1

      If I use "ollama run qwen2.5-coder-extra-ctx:7b" inside PowerShell, the response is slow but good. Bolt probably fails because the response from the LLM is too slow: the program does not wait for the complete response.

    • @ColeMedin
      @ColeMedin  2 days ago

      Yeah slow but good is still acceptable in my mind for coding tasks!

  • @karamjittech
    @karamjittech 14 days ago

    Awesome video. I installed using Docker with Ollama, but when I try to generate something it says "There was an error processing your request", and the Docker logs say "authentication_error".

    • @ColeMedin
      @ColeMedin  10 days ago

      Thank you! That is strange, is there anything more on the authentication error?

  • @regallux6973
    @regallux6973 15 days ago +1

    This may sound silly to others, but I have a question I hope you can help me answer.
    I use Bolt.new a lot for backend integrations, like setting up APIs and auth with databases; I work with Supabase now.
    Here's my question: does oTToDev have any limitations? I know it's a fork of Bolt, but does it have limitations in regards to handling package installations? Does it install all required dependencies just like Bolt does?
    I hope you can explain more about this: can it install all npm packages without any issues, or will I need Bolt.new for that?
    Does oTToDev have any advantage over Bolt?

    • @ColeMedin
      @ColeMedin  15 days ago

      Great question! So oTToDev is going to be able to handle everything that the open source version of Bolt.new can since we didn't change the core functionality that would make it perform worse in any way. Of course with new models being available, some might not perform the best if you're running smaller models with Ollama, for example.
      We're working on a bunch of features with oTToDev that make it useful over Bolt. You can use local LLMs so you have unlimited usage for free, you can push directly to GitHub, etc. A lot of things not available in the open source version of Bolt.new and some not even available in the commercial version like using other LLMs. And then we are also working on new features like being able to load in local projects to continue the development of them in oTToDev!

  • @wasimdorboz
    @wasimdorboz 15 days ago +1

    Please, one more question: is there any way to use Ollama through an API key and not localhost? Because I don't want my PC to explode.

    • @ColeMedin
      @ColeMedin  15 days ago

      If you want to use open source LLMs but not run them yourself I'd suggest using OpenRouter!

  • @motivation_guru_93
    @motivation_guru_93 13 days ago

    Hi. I am interested in learning how to build n8n workflows. What's the best channel for learning n8n from scratch? Can someone please advise?

    • @ColeMedin
      @ColeMedin  10 days ago

      n8n has their own YouTube channel that is worth checking out!

  • @hwdazk9157
    @hwdazk9157 6 days ago

    Adding an OpenAI API key doesn't work; it's like I didn't add one. I get a request error.

  • @1-chaz-1
    @1-chaz-1 15 days ago +1

    Way cool. Nicely done!

  • @humbleonyenma
    @humbleonyenma 9 days ago

    Please, how do I get the updated repo and how do I get it running on my computer?

    • @ColeMedin
      @ColeMedin  9 days ago

      You can do a "git pull" in the command line or GitHub Desktop to get the newest changes. Then rebuilding is just following the same steps you used to set it up initially! But since the container is already on your machine the rebuilding process goes a lot faster!

  • @amf1013
    @amf1013 15 days ago +1

    This project is amazing, but for some reason every time I reload the page I lose most of my data, rolling back to my first prompt, and that's all I see in the preview. Any fixes?

    • @ColeMedin
      @ColeMedin  15 days ago

      I appreciate it, sorry you are running into that though! It seems to be a glitch in the original Bolt.new repo that happens sometimes. I haven't experienced it yet but I know others have. So it is an issue we have tracked and are looking into.

  • @mofeidyousifhassan7625
    @mofeidyousifhassan7625 7 days ago

    I've set up everything exactly as in the documentation, but I still get an error: "There was an error processing your request". Please help!

    • @ColeMedin
      @ColeMedin  4 days ago

      What is the error message you see in the terminal where you ran the site or in the developer console in the browser?

  • @edgardneto2015
    @edgardneto2015 10 days ago

    Hi, can you help me? I installed it locally and tested it by creating an app. It creates the folders, but when it finishes it stays on "Run command", running infinitely and never completing. I keep waiting to be able to send a command, and the "Run command" spinner never stops.

    • @ColeMedin
      @ColeMedin  9 days ago

      Generally this means the LLM hallucinated a bad command, which prevented the preview from starting. But we recently made a fix so this happens less, so I would try it again!

  • @CB-yc6wz
    @CB-yc6wz 9 days ago

    I have made it to the point where I launch the app via localhost, but nothing happens after I type in a prompt and hit the go button.

    • @ColeMedin
      @ColeMedin  9 days ago

      Interesting... is there any error you get in the terminal where you started the site? Or in the developer console in the browser?

  • @marckeelingiv9405
    @marckeelingiv9405 15 days ago +2

    How well does oTToDev work with existing code bases?

    • @ColeMedin
      @ColeMedin  15 days ago +2

      Great question! Right now you can't use oTToDev with existing code bases because you can't import a project, but there is a PR out for that right now that just needs some touching up. Then you'll be able to import a project and the LLM will understand all the files that you added in instantly.

    • @kaos4011
      @kaos4011 15 days ago +1

      @@ColeMedin How? Please, a tutorial would be great.

  • @elizabethkirby1782
    @elizabethkirby1782 15 days ago +1

    So cool, thanks so much

  • @remedyreport
    @remedyreport 15 days ago

    How do we change the default limit for bolt.new? 8K seems rather small.

    • @ColeMedin
      @ColeMedin  15 days ago

      Totally fair! You can change that in this file:
      github.com/coleam00/bolt.new-any-llm/blob/main/app/lib/.server/llm/constants.ts
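      As a rough sketch, the constant in that file looks something like the following TypeScript (the exact name and value may have changed since, so verify against the linked file):
      // app/lib/.server/llm/constants.ts
      // raising this allows longer LLM responses per request
      export const MAX_TOKENS = 8192;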

  • @guix69
    @guix69 1 day ago

    Is there a replay of the November 10th live somewhere?

    • @ColeMedin
      @ColeMedin  1 day ago +1

      Yes it's in the live tab on my channel. Here's the link too!
      ua-cam.com/video/_YzTntvUWN4/v-deo.html

    • @guix69
      @guix69 16 hours ago

      @@ColeMedin Thank you very much for the quick answer :)

  • @adriangpuiu
    @adriangpuiu 13 days ago

    Why can't we select the free Llama 405B from OpenRouter? It's free, but I can't see it in the OpenRouter dropdown list. Is there a way to modify this?

    • @ColeMedin
      @ColeMedin  10 days ago

      We could certainly add this to the list! There are some huge rate limits from what I've heard with the free models on OpenRouter though.

  • @davidbenisty6785
    @davidbenisty6785 7 days ago

    Hi, thanks very much. However, I don't understand why I am receiving this message: "There was an error processing your request: No details were returned," and nothing is working. It's like my LLM doesn't work.
    Thanks

    • @RealNoobifies
      @RealNoobifies 7 days ago

      same here

    • @ColeMedin
      @ColeMedin  4 days ago

      What is the error message you see in the terminal where you ran the site or in the developer console in the browser?

  • @omarezzat85
    @omarezzat85 14 days ago

    When I use Ollama it gives me "x-api-key is required". How can I solve that?

    • @ColeMedin
      @ColeMedin  11 days ago

      I've had this happen before, and restarting the container/pnpm has actually helped! Really not sure why, but try that out first.

  • @Ndrivot1980X
    @Ndrivot1980X 12 days ago

    Can you add images? I have this installed but can't find how to add them to the prompt.

    • @ColeMedin
      @ColeMedin  10 days ago

      That isn't part of the open source Bolt, so we have to add it ourselves, which we are working on!

  • @vextorfx6243
    @vextorfx6243 14 days ago

    Why doesn't it show up for me at 6:39? I followed the steps exactly as mentioned, but I'm still unable to see the modified model with the larger context. It seems the model is not updated as I expected.

    • @ColeMedin
      @ColeMedin  11 days ago

      It actually isn't necessary to do this anymore because of a new pull request that added this extra context length as a default option to Ollama models!

  • @themax2go
    @themax2go 15 days ago

    Question (ok, actually 2): does this take context from the last message alone, the last n messages, or the text (code, comments, text docs, ...) from the uploaded files - or everything? I doubt the latter and suspect the first... the problem is the context window, so how is that being managed?

    • @ColeMedin
      @ColeMedin  15 days ago

      Good questions! It actually takes in everything but then cuts off around ~8000 tokens once the conversation gets long.
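      A minimal TypeScript sketch of that kind of cutoff (illustrative only, not the repo's actual code; the real logic works on tokens rather than characters):
      // keep the newest messages that fit within a rough character budget
      function truncateHistory(messages: string[], maxChars: number): string[] {
        const kept: string[] = [];
        let total = 0;
        for (let i = messages.length - 1; i >= 0; i--) {
          total += messages[i].length;
          if (total > maxChars) break; // older messages no longer fit
          kept.unshift(messages[i]);
        }
        return kept;
      }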

  • @realziyad
    @realziyad 4 days ago

    Hey, can this be installed on a Railway or DO server? The installation has to be simple enough, as I'm no coder; with very simple installation instructions I can get it done.

    • @ColeMedin
      @ColeMedin  2 days ago

      Yes you can - I am thinking of making a video on this soon!

    • @realziyad
      @realziyad 2 days ago +1

      @@ColeMedin I'm waiting for it, please post the video ASAP. 👍

  • @LLMResearch-g4r
    @LLMResearch-g4r 12 days ago

    Why do I need login authorization with Cloudflare when it starts?

    • @ColeMedin
      @ColeMedin  9 days ago

      You shouldn't! What is the error you are seeing?

  • @HectorDiabolucus
    @HectorDiabolucus 15 days ago

    You said "production-ready application", so how would you deploy this application?

    • @ColeMedin
      @ColeMedin  15 days ago +1

      More content around this coming soon! But I do have a video out already for deploying apps to the cloud in general that is generic enough to be applied to something like this!
      ua-cam.com/video/259KgP3GbdE/v-deo.html

  • @GiovanniSereni-zz4fo
    @GiovanniSereni-zz4fo 10 days ago

    I have seen various videos on the LLM Qwen 2.5 Coder 32B; is it possible to integrate it into Bolt.New-Any-LLM? Is there a minimal tutorial for getting the free API key? Please integrate file uploads.

    • @ColeMedin
      @ColeMedin  9 days ago

      Tonight's video is actually on Qwen 2.5 Coder 32B, and using that with oTToDev is a part of it! File uploads are a high-priority feature I hope to have implemented soon!

  • @yuangamin6067
    @yuangamin6067 6 days ago

    Is there a new video on how to install step by step?

    • @ColeMedin
      @ColeMedin  4 days ago

      I cover it in this video!
      ua-cam.com/video/31ivQdydmGg/v-deo.html

  • @A43-i7v
    @A43-i7v 15 days ago

    Thank you for the video.
    I just have a question: from your experience, what is the best free way to use an AI assistant on a project I already have? Most of the ones I know allow 5-10 files, while the project can be more than that. If you know of any that can work with a big number of files together, please let me know.

    • @ColeMedin
      @ColeMedin  10 days ago

      I would try Cursor with local AI!

    • @A43-i7v
      @A43-i7v 10 days ago

      @ColeMedin sorry, do you mean localhost alone, or connecting localhost with Cursor?

    • @ColeMedin
      @ColeMedin  9 days ago

      Connect localhost with Cursor!

    • @A43-i7v
      @A43-i7v 9 days ago

      @@ColeMedin is there a specific way or video you recommend to do that? I'm not good with local hosting yet 😅

  • @hope42
    @hope42 15 days ago

    I get a blank white screen when using any model. Any clues? I did a git pull on your fork.

    • @ColeMedin
      @ColeMedin  10 days ago

      Which model are you using? Sometimes LLMs hallucinate and don't give the right commands to start the preview.

  • @aladagemre
    @aladagemre 9 days ago

    I tried using the Docker container. I had trouble with environment variables, but somehow fixed it. Then it got stuck at the "Creating file..." stage; it kept adding new files but nothing was finalized. I deleted it. It's way too slow. I tested OpenAI gpt-4o-mini and DeepSeek Coder.

    • @ColeMedin
      @ColeMedin  9 days ago

      Weird, I haven't had it get stuck there before... is there any error you are seeing?

    • @aladagemre
      @aladagemre 9 days ago

      @ColeMedin no errors. It just stays stuck at "Creating..." with no file populated. Then it adds another "Creating file2..." and then "Creating file3...",
      all waiting for completion but never finishing.

    • @ColeMedin
      @ColeMedin  4 days ago +1

      And it does this repeatedly for you? I haven't seen this before.

  • @mk030166tube
    @mk030166tube 14 days ago

    I followed the instructions in the video to expand the context, but I'm getting this error:
    C:\Users\marco\bolt.new-any-llm\modelfiles>ollama create -f Qwen2.5Coder qwen2.5-coder-ottodev:3b
    transferring model data
    pulling manifest
    Error: pull model manifest: file does not exist
    I should note that inside the modelfiles folder I don't have all the files shown in your video, only the one I created as you described. However, the video is not very clear.

    • @sofiane4823
      @sofiane4823 13 days ago +1

      Pull the model first: ollama pull model_name

    • @ColeMedin
      @ColeMedin  11 days ago

      That's right @sofiane4823, thank you!
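      For anyone hitting this, the full sequence looks roughly like the following (the model tag and custom name are just examples):
      ollama pull qwen2.5-coder:7b
      Then create a plain text file (e.g. named Modelfile) containing:
      FROM qwen2.5-coder:7b
      PARAMETER num_ctx 32768
      And build the custom model from it:
      ollama create -f Modelfile qwen2.5-coder-extra-ctx:7b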

  • @PhuPhillipTrinh
    @PhuPhillipTrinh 15 days ago +1

    Wow, 20 times cheaper than Claude! I should give it a shot!

    • @ColeMedin
      @ColeMedin  15 days ago +1

      Yeah it's crazy! :D

  • @manojpranith6253
    @manojpranith6253 21 hours ago

    I know this is very basic stuff to ask in this chat box.
    Can I build full-stack web apps with this model?
    Do let me know.
    Thanks

  • @Munixx
    @Munixx 2 days ago

    When I use commands like npm install in the Bolt terminal, it says I don't have the permissions.

    • @ColeMedin
      @ColeMedin  1 day ago

      Hmm... did you run the application in a terminal as administrator?

  • @jochemvangalen9464
    @jochemvangalen9464 8 days ago

    I get an error. I did all the steps, but it says "There was an error processing your request".

    • @ColeMedin
      @ColeMedin  7 days ago

      What is the error you get in the terminal where you started the site or in the developer console in the browser?

  • @c0sti495
    @c0sti495 14 days ago

    How can we make the Google AI Studio API work? Simply adding the key to the .env file doesn't work.

    • @ColeMedin
      @ColeMedin  11 days ago

      What is the error you are getting?

    • @c0sti495
      @c0sti495 10 days ago

      @@ColeMedin thanks for the reply.
      Docker returns this: "APICallError [AI_APICallError]: Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API." plus other details, but there's no point in pasting everything here, I suppose.
      Do I need to do some extra things in my Google AI Studio besides creating an API key? I don't know why I get this "permission denied" error.

    • @ColeMedin
      @ColeMedin  9 days ago

      To be honest I'm no expert in Google AI Studio API, so I would do some research to see if there is more setup you need to do in the platform to get the API key working. Maybe even just getting it working within a simple Python script and then trying it again in oTToDev?
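      In case it helps: the key normally goes into the env file as something like the following (the variable name is my assumption based on the repo's .env.example, so verify against your copy):
      GOOGLE_GENERATIVE_AI_API_KEY=your-key-here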

  • @StockTraderyt
    @StockTraderyt 15 days ago

    Hey Cole,
    Is there any feature to use existing projects, or a load-project feature, in oTToDev? Can you help me with it?

  • @jonathanwhittingham9345
    @jonathanwhittingham9345 14 days ago

    I set this up with Llama locally, added the Ollama URL, and it kept saying it needs an API key. Groq worked OK, but I was hoping to use a local LLM. I use Bolt currently and would like to develop Flutter apps. Any ideas how to fix the API key error?

    • @ColeMedin
      @ColeMedin  10 days ago

      Are you running with Docker or without? Sometimes I get this error (it's rare) and restarting my container actually fixes it. It's weird I know haha

  • @mr.gk5
    @mr.gk5 4 days ago

    For some reason, the canvas is not automatically opened when I prompt. Instead it just gives me the results right on the main screen.

    • @ColeMedin
      @ColeMedin  3 days ago

      Which model are you using? Sometimes that'll happen with smaller models because they just aren't strong enough unfortunately.

    • @mr.gk5
      @mr.gk5 2 days ago

      @ I used Qwen Coder 2.5 14b on a 4090. The original download from Ollama works fine; this only happened after I changed the context length. I also noticed that it used 8 GB of shared VRAM, which it didn't before. Maybe because I have the models saved on a non-system disk? I added OLLAMA_MODELS to the system variables to save them on my other drive, since the C drive is running out of space.

    • @ColeMedin
      @ColeMedin  1 day ago

      Could you clarify what you mean by the original download from Ollama?

    • @mr.gk5
      @mr.gk5 23 hours ago

      @@ColeMedin yeah, I meant that the Qwen Coder 2.5 14b version I downloaded worked fine. But the custom version with the context length modified via the Modelfile didn't pop up the canvas, and it was using shared VRAM, so it was really slow too. The same modded model run from the terminal didn't consume shared VRAM. It only seems to be buggy when I run it with bolt.new.

  • @passportmarc
    @passportmarc 15 days ago

    Why does it only do Next.js and similar packages? Why can't we have this generate a Laravel project? Is this because of the preview generator?
    I'd love to use this in a PHP environment.

    • @ColeMedin
      @ColeMedin  15 days ago +1

      That's just a limitation of Bolt.new since the webcontainer is for Node.js! But we certainly want to support other environments in the future!

    • @passportmarc
      @passportmarc 15 days ago +1

      @ I asked ChatGPT and got pretty much the same answer… and while there are some things that could help, using Docker with Bolt would probably be the only solution, unless you assume Herd and let bash/Python write to the disk directly.

  • @carlosborges5303
    @carlosborges5303 3 days ago

    I am getting an error: "There was an error processing your request: No details were returned".

    • @ColeMedin
      @ColeMedin  2 days ago

      Hmm... what are the errors you are seeing in the terminal/developer console in the browser? Those are way more helpful.

  • @irokomause8311
    @irokomause8311 15 days ago

    What configuration or specs are needed for the local LLMs, and what is the configuration of your system?

    • @prashantkachhawaha3707
      @prashantkachhawaha3707 15 days ago

      A lot of RAM. I have 256GB of RAM in my system; I'm going to try DeepSeek V2.5 soon...

    • @ColeMedin
      @ColeMedin  15 days ago +1

      My system is two 3090 GPUs (24GB of VRAM each) with 128GB of RAM. What you need depends a lot on the local LLM you want to use! For example, a smaller model like Qwen 2.5 Coder 7b can run on almost any computer with a graphics card!

    • @raxxel2830
      @raxxel2830 3 days ago

      @@ColeMedin Hi, I'm thinking about getting an Alienware laptop with 64GB of RAM and a 4090 with 16GB of VRAM. Could I get good results with that setup?

  • @shijimasorce
    @shijimasorce 15 days ago

    I can't seem to get Bolt to use either llama 3.2 or qwen 2.5-coder after following the steps. Neither of them will open the web interface and create or edit files.

    • @ColeMedin
      @ColeMedin  15 days ago

      Even after creating a version of the model with the num_ctx parameter set to something bigger? What size of either model are you using? It could be that the model size is too small. If you're using Llama 3.2 3b for example, it probably isn't capable enough since from my experience you typically want to use at least a 7b parameter model.

  • @MrInnovativeEnergy
    @MrInnovativeEnergy 15 days ago

    Any reason everything is Node.js and web/cloud based? Can we not make local C++, C#, Python, etc. apps using this tool? What would be the best tool for developing applications for, say, an MMO backend server, where you need it to understand the source tree and make changes based on the current codebase?

    • @ColeMedin
      @ColeMedin  15 days ago +1

      Bolt.new is meant more to make a full stack app from scratch in the browser, for something like what you are looking for with an existing backend codebase I would try something like Cursor!

    • @MrInnovativeEnergy
      @MrInnovativeEnergy 15 days ago +1

      @@ColeMedin Thanks

  • @lancemarchetti8673
    @lancemarchetti8673 15 days ago

    Is it true that only desktops and laptops with a GPU can run Ollama? I have an Acer Aspire i5 with 24GB RAM on Win11. Is that enough for a local Qwen Coder setup?

    • @ColeMedin
      @ColeMedin  15 days ago

      It's true for anything but the smaller local LLMs! I would try using Qwen 2.5 Coder 7b as I do in the video. With your machine there is a good chance it'll run pretty well!

  • @llouis8081
    @llouis8081 9 days ago

    Why don't I have the option to use my local model?

    • @ColeMedin
      @ColeMedin  9 days ago

      You aren't able to select Ollama and pick a model from the dropdown?

  • @vinik2224
    @vinik2224 15 days ago

    Great tool, but I can't get it to work. I added an OpenAI key and tried to create a blog, but the system runs without being able to create the files and run commands (I'm using it with Docker Compose).

    • @ColeMedin
      @ColeMedin  15 days ago

      I'm sorry you're having trouble! Which model are you using? It sounds more like it is hallucinating and not giving the right commands/code.

    • @vinik2224
      @vinik2224 15 days ago

      @@ColeMedin GPT-4o and 4o-mini start to reply but do not process the commands; the other OpenAI models crash and do not give any answer.

    • @ColeMedin
      @ColeMedin  10 days ago

      Strange... is there an error message you are getting?

    • @vinik2224
      @vinik2224 10 days ago

      @@ColeMedin no, I have no error message.
      I deployed the app with Docker and access it from a remote device rather than from localhost, like http://dockerhostip:5173

      @ColeMedin  9 days ago
      @ColeMedin  9 днів тому

      Hmmm, it's pretty hard to tell what's wrong without an error message! You've checked both the terminal where you ran the app and the developer console in the browser?

  • @matrix01mindset
    @matrix01mindset 15 days ago +1

    Could we have, in the future, a Cofounder plugin module for Bolt?

    • @ColeMedin
      @ColeMedin  15 days ago

      Yeah a lot of people are looking into that actually!

  • @YASAAR
    @YASAAR 9 days ago

    I get a memory error; my machine has 16GB. Is there a way around this?

    • @ColeMedin
      @ColeMedin  9 days ago +1

      Which model are you trying to use? I'd suggest using a smaller model if you are getting a memory error, or using an API like OpenRouter so you aren't running the models on your machine.

    • @YASAAR
      @YASAAR 9 days ago

      @ I actually used Qwen. I'll look into how to use OpenRouter 🙏🏿 Do you have any videos on that?

    • @ColeMedin
      @ColeMedin  4 days ago

      It's one of the providers available in oTToDev so you can just select it and add in your API key!

  • @AC-pr2si
    @AC-pr2si 5 days ago

    The oklama models are not working. I had them working a few days ago but now I am getting error messages

    • @AC-pr2si
      @AC-pr2si 5 days ago

      I meant ollama models

    • @ColeMedin
      @ColeMedin  4 days ago

      What is the error message you are getting?

    • @AC-pr2si
      @AC-pr2si 4 days ago

      @ "There was an error processing this request", and then the terminal shows an error when I try to connect to Ollama. It only shows up when I try to use an Ollama model. I followed your steps to add the Modelfile and everything; I had it working one time, then when I redownloaded the files the Ollama version stopped working.

    • @ColeMedin
      @ColeMedin  2 days ago

      Hmmm... what is the error message you see in the terminal/developer console in the browser?

  • @shay5338
    @shay5338 15 days ago +3

    Well well, first one to comment! Love your video, and it would be really cool if you were to make a video on Cursor.

    • @ColeMedin
      @ColeMedin  15 days ago +1

      Thank you very much! I will certainly be making a video on Cursor in the future.

    • @shay5338
      @shay5338 15 days ago +1

      @@ColeMedin really? Please do, I'll be waiting.

    • @ColeMedin
      @ColeMedin  15 days ago +1

      Sounds good!

    • @nl4260
      @nl4260 15 days ago

      How to use read-only rules files for the best possible outputs, and a real focus on some backend stuff, would be amazing for a Cursor video; that's unique IMHO.

  • @protanopia
    @protanopia 12 days ago

    Hey Cole, your GitHub and the video show different instructions for getting qwen2.5-coder set up... in your video it looks like you made a folder and put a file named qwen2.5-coder in there, but the GitHub says to make a text file called Modelfile. I'm pretty lost and can't proceed. What should I do?

    • @ColeMedin
      @ColeMedin  10 days ago

      Actually this isn't necessary anymore after a recent PR that sets the context limit correctly for Ollama within oTToDev!

    • @protanopia
      @protanopia 9 days ago

      @@ColeMedin so what's the workflow now? :D sorry, I'm confused

    • @ColeMedin
      @ColeMedin  9 days ago

      Now you can use Ollama out of the box without changing anything!

  • @alex_great23
    @alex_great23 15 days ago

    Why don't other Ollama models work? What is the reason?

    • @ColeMedin
      @ColeMedin  15 days ago

      Smaller LLMs will sometimes struggle with the larger Bolt.new prompt and not open up the webcontainer because of that. If that is what you mean!

    • @alex_great23
      @alex_great23 14 days ago

      @@ColeMedin I just use Cline, and other models don't work in it either; only Qwen 2.5 works with tools. I became interested: is it possible to adapt other models, for example deepseek-coder-v2? What are their differences, and how would you adapt one for these tasks?

    • @ColeMedin
      @ColeMedin  10 days ago

      Adapt as in create different prompts to work with different models? We are actually planning this out!

  • @Warowo
    @Warowo 12 days ago

    I am struggling to get this up and running on my Mac. Could anyone help?

    • @ColeMedin
      @ColeMedin  9 days ago

      What is the error you are running into? There have been some updates to the repo too which might help!

  • @jasonbauer6441
    @jasonbauer6441 11 days ago

    Congrats man, but running Bolt.new locally has to be painfully slow.

    • @ColeMedin
      @ColeMedin  10 days ago

      It depends on your specs and what models you want to run! It's been fast for me!