I Forked Bolt.new and Made it WAY Better

  • Published Nov 14, 2024

COMMENTS • 840

  • @ericsimons4497
    @ericsimons4497 29 days ago +234

    Cofounder/CEO of StackBlitz (creators of bolt.new) here- just wanted to say this is *fucking awesome*. Great work man!

    • @ColeMedin
      @ColeMedin  29 days ago +9

      Thank you so much Eric! I appreciate it a ton!!

    • @antkin608
      @antkin608 25 days ago +6

      @@ColeMedin Should probably pin this endorsement. 😊

    • @ColeMedin
      @ColeMedin  23 days ago +8

      I have now pinned it - thanks for the suggestion @antkin608!

    • @BirdManPhil
      @BirdManPhil 17 days ago

      @@ericsimons4497 i love that you love that he loves customizing your product to be what everyone else loves. Open source software and marijuana bring people together.

    • @luiginica
      @luiginica 5 days ago +1

      @colemedin what if instead of creating, we want to adjust and develop existing code that we have in a local git folder?

  • @GravLabs-to6xy
    @GravLabs-to6xy 1 month ago +92

    I'd rather give you 20 bucks a month to keep being this fucking awesome and helping me do all of this stuff locally than pay a whole bunch of different, soon-to-be-outdated services various amounts per month. You absolutely rock man!

    • @ColeMedin
      @ColeMedin  1 month ago +12

      Thank you so much dude, that means a lot to me! And you're so right that so many of these AI services become outdated so quickly haha, it's hard to keep up!

    • @piercemooney8750
      @piercemooney8750 1 month ago +2

      @@ColeMedin where's the link to your community to learn local llm/automation dev? :)

    • @ColeMedin
      @ColeMedin  1 month ago +9

      Thanks for asking! I have something I am building in the background right now that I will be releasing soon... not just your typical Skool community ;)

    • @jerryGolddd
      @jerryGolddd 1 month ago +4

      same. i was about to get cursor or bolt... haha count me in!

  • @fabriziomainardi
    @fabriziomainardi 1 month ago +12

    Cole, let me tell you... you are definitely the best I've found on YouTube so far. If you keep this up you will smash everyone else in your niche!
    So much detail, straight to the point, making anyone able to reproduce the same in no time.
    And... above all, inspiring!
    Good job man, very good!

    • @ColeMedin
      @ColeMedin  1 month ago +2

      Wow thank you very much - that seriously means a lot to me!! :D

  • @StarTreeNFT
    @StarTreeNFT 1 month ago +12

    You are doing awesome work, and quickly becoming my favorite AI coding related YouTube channel! Thanks for sharing!

    • @ColeMedin
      @ColeMedin  1 month ago +1

      @@StarTreeNFT Thank you very much, that means a lot!

  • @wilowleo
    @wilowleo 20 days ago +3

    I'm truly blown away by what you've created here. Making this kind of technology accessible is a huge deal, and I can't thank you enough for your hard work and dedication. This is going to make such a positive impact!

    • @ColeMedin
      @ColeMedin  17 days ago

      Thank you so much for the kind words! I appreciate it a ton!

  • @mandraketupi5
    @mandraketupi5 29 days ago +11

    Hey Cole! YouTube suggested your videos 3 days ago... I now have a pitch-ready AI application based on N8N :) If you ever come to Switzerland: your beers are on me - ALL OF THEM...

    • @ColeMedin
      @ColeMedin  28 days ago +1

      That's amazing man!! Sounds good, I'll let you know if I ever come to Switzerland 😎

  • @drjonbear7517
    @drjonbear7517 1 month ago +5

    Love it dude. Only thing I would have added (initially) is that if the API key isn't there, then it doesn't show those options in the drop-down. But a simple fix really. Keep up the great work!

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you and you're totally right, that would be a fantastic addition! I would have to have the backend communicate to the frontend somehow which API keys are present, but that could be done with another API endpoint that is hit when the site is loaded.
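
The endpoint Cole describes, hit on page load so the frontend can hide providers whose keys are missing, could be sketched roughly like this. Everything here is illustrative, not the fork's actual code; the provider names and env-var names are assumptions:

```typescript
// Illustrative mapping of provider names to env vars; the fork's real
// variable names may differ.
const PROVIDER_ENV_KEYS: Record<string, string> = {
  OpenAI: "OPENAI_API_KEY",
  Anthropic: "ANTHROPIC_API_KEY",
  Groq: "GROQ_API_KEY",
};

// Pure helper: list providers whose key is set, without leaking the values.
function availableProviders(env: Record<string, string | undefined>): string[] {
  return Object.entries(PROVIDER_ENV_KEYS)
    .filter(([, envKey]) => Boolean(env[envKey]?.trim()))
    .map(([provider]) => provider);
}

// In a Remix-style route this could back the endpoint the frontend hits on load:
// export const loader = () => Response.json({ providers: availableProviders(process.env) });

console.log(availableProviders({ OPENAI_API_KEY: "sk-test", GROQ_API_KEY: "" }));
// → ["OpenAI"]
```

The model dropdown would then simply filter its list to the providers the endpoint returns.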

  • @martin_BKK
    @martin_BKK 1 month ago +8

    open source fully used in the right way, love it, keep up the good work Cole 👍🙏

    • @ColeMedin
      @ColeMedin  1 month ago

      That's the goal - thank you very much Martin!!

  • @chriskingston1981
    @chriskingston1981 1 month ago +9

    Maybe an idea: last time, I made a program that would search the web and then pass the results to an agent. I've noticed models are also very good at coming up with the best keyword to search on Google for…
    Maybe you should add an automatic model-choosing AI - a local model or API you can set that decides which model to use.
    You give it the project idea/file tree, the user prompt, and all the info about each model, and it decides, e.g., this is just a form to generate, we can use a simple local model.
    But when asked to make the form better or nicer, it reasons that it must use a more advanced model.
    Maybe after the user submits their prompt it even shows which model it has chosen, and the user confirms by pressing enter - or, if they think the chosen model still isn't good enough, they can change the selected model with the arrow keys and then press enter.
    That lifts the weight off your shoulders and saves on expenses.
    Great video❤️❤️❤️

    • @ColeMedin
      @ColeMedin  1 month ago +5

      This is an absolutely amazing idea, thanks for sharing!! I love the concept of having an initial router agent that determines the complexity of the task. I'm sure that's very doable!
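
The router idea above could start as something as simple as a heuristic pre-classifier that runs before any LLM call. A rough sketch; the tiers, thresholds, and keyword list are invented for illustration, and a real router agent would likely ask a small LLM to classify instead:

```typescript
type Tier = "local-small" | "hosted-large";

// Toy complexity router: cheap requests go to a small local model,
// multi-file or reasoning-heavy requests go to a larger hosted one.
function chooseModelTier(prompt: string, fileCount: number): Tier {
  const complexHints = /refactor|architecture|optimi[sz]e|debug|redesign/i;
  if (fileCount > 5 || prompt.length > 800 || complexHints.test(prompt)) {
    return "hosted-large";
  }
  return "local-small";
}

console.log(chooseModelTier("Generate a simple signup form", 1));             // local-small
console.log(chooseModelTier("Refactor state management across the app", 12)); // hosted-large
```

The confirm-before-run step the commenter describes would just surface this choice in the UI and let the user override it before the request is dispatched.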

  • @philamavikane9423
    @philamavikane9423 1 month ago +5

    I'm about to burst with excitement, just from the intro.. this is what keeps me up at night

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Haha I love it, this kind of stuff is what keeps me up too!

  • @VTFLab
    @VTFLab 1 month ago +6

    Oh, you solved my problems instantly, and let me just say this, I love you, thank you so much.😭

    • @ColeMedin
      @ColeMedin  1 month ago +2

      I'm so glad!! It's my pleasure!

  • @zebcode
    @zebcode 19 days ago +4

    I made my own version of bolt that uses Ollama last night, I see that you've already progressed much further with it so I'm planning to contribute to your solution instead :)

    • @ColeMedin
      @ColeMedin  17 days ago

      Awesome, thanks man! I look forward to seeing your contributions!

  • @naturallydope247
    @naturallydope247 20 days ago +1

    This is impressive. If the majority of your content is like this I’m definitely subscribing. I’m a fan of open source tutorials.

    • @ColeMedin
      @ColeMedin  17 days ago +1

      Thank you very much! Yes, a majority of my content is on creating cool stuff with open source!

  • @miselgpt
    @miselgpt 27 days ago +3

    Fantastic job Cole, respect! 🙌 Instant subscription.
    I'd like to see you integrate Perplexity next... it probably sounds silly to you, but I'm a total non-developer.
    Thanks in advance.

    • @ColeMedin
      @ColeMedin  26 days ago +1

      Thank you so much! 😃
      Not silly - thanks for the suggestion! I've got a running list of things I want to do to improve this fork and I'll add Perplexity/something similar!

    • @miselgpt
      @miselgpt 26 days ago +1

      @@ColeMedin Fantastic. Looking forward to it... 🚀

  • @Nothing41i
    @Nothing41i 15 days ago +2

    Not a programmer, and I didn't know about Bolt a few hours ago, but now I know I can easily build my concept using your version. Thanks a lot dude, highly appreciated ❤️✨

    • @ColeMedin
      @ColeMedin  15 days ago +1

      Glad I could help! You bet man!

    • @Nothing41i
      @Nothing41i 15 days ago

      @@ColeMedin can you please add something like: "I have an app or website project that's partially built, but it was created outside of Bolt AI. I'd like to continue its development within Bolt AI." It would be really helpful

    • @ColeMedin
      @ColeMedin  12 days ago

      Sorry could you clarify what you are saying here?

    • @Nothing41i
      @Nothing41i 12 days ago

      @@ColeMedin I mean add something like importing a local project from your PC into Bolt and continuing its development 😄.

  • @PaulMoody-d2s
    @PaulMoody-d2s 1 month ago +4

    When I see someone who provides real value, I subscribe. So, keep it up! 😊 By the way, LM Studio would also be nice to have.

    • @ColeMedin
      @ColeMedin  1 month ago +2

      Thank you so much - I appreciate the support and kind words a lot!
      I am looking into adding LM Studio! There isn't direct support for it in the Vercel AI SDK, but it looks like LM Studio supports OpenAI-compatible APIs, so I should be able to set it up similarly to how I set up Groq by just overriding the base URL for the OpenAI instance.
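
For reference, LM Studio's local server speaks the OpenAI chat-completions protocol (by default at http://localhost:1234/v1), so the Groq-style base-URL override Cole describes is all that's needed. A hedged sketch of the request shape, built by hand here rather than through the Vercel AI SDK; the model name is a placeholder:

```typescript
// Build an OpenAI-compatible chat request against an arbitrary base URL.
// With the Vercel AI SDK the same idea is createOpenAI({ baseURL: ... }).
function buildChatRequest(baseURL: string, model: string, prompt: string) {
  return {
    url: `${baseURL}/chat/completions`,
    body: { model, messages: [{ role: "user", content: prompt }] },
  };
}

const req = buildChatRequest("http://localhost:1234/v1", "local-model", "Hello");
console.log(req.url); // http://localhost:1234/v1/chat/completions

// Actually sending it requires LM Studio's server to be running locally:
// await fetch(req.url, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(req.body),
// });
```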

  • @WhatLurksBeneath
    @WhatLurksBeneath 18 hours ago +1

    Dude this is awesome!

  • @AaronBlox-h2t
    @AaronBlox-h2t 1 month ago +3

    Nice... I was thinking of doing something like this myself, but here it is now, haha. Thanks for your efforts. I'm going to go through your archives now. Cool stuff.

    • @ColeMedin
      @ColeMedin  1 month ago

      Haha that's awesome - my pleasure man! Thank you!

  • @Afrasayabful
    @Afrasayabful 1 month ago +4

    Thanks Cole,
    really.
    For a non-coder like me this was fun to watch, and with your next steps showing how we can do something similar, I think now I can experiment with adding other service providers.
    Thanks Cole

    • @ColeMedin
      @ColeMedin  1 month ago +2

      My pleasure!! I'm glad it was easy to follow as a non coder!

    • @Afrasayabful
      @Afrasayabful 29 days ago +1

      @@ColeMedin Definitely friend.
      If you are able to make something even crazier, please do share. I love testing things out

    • @ColeMedin
      @ColeMedin  28 days ago

      Sounds great haha, will do!

  • @nombable
    @nombable 1 month ago +2

    What I would like to see is:
    - make the models configurable from a file instead of getting a massive list
    - provide an easy way to download all the generated files instead of copy / pasting everything
    - include a dockerfile to easily get going

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Love these suggestions, thank you! All three I totally agree would be much needed additions to this fork.

  • @anwarbousetta7316
    @anwarbousetta7316 1 hour ago

    Thanks Cole, this is really inspiring! I am a business user wanting to play around with some use cases before going the MVP route. I felt the step-by-step guide was more of a hop-and-skip guide (I mean this well), as most tech-literate folks would know what happens next. I think if this guide were more foolproof for folks like myself :) you would get a whole lot more adoption of the fork. Were it not for getting the first few steps right on Windows, I would contribute to your .md. Do you have a link to a Windows-only step-by-step setup?

  • @tokyoscooterguy
    @tokyoscooterguy 28 days ago +2

    Oh yeah, it found all my Ollama models perfectly. Thanks for this amazing fork.

    • @ColeMedin
      @ColeMedin  28 days ago

      Awesome!! My pleasure!

  • @dctrex
    @dctrex 1 month ago +1

    Fan-darn-tastic! Spectacularly generous on your part and Bolt.new's part! 👏👏🥳🎉

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much! It's been my pleasure to get this up and running for everyone!

  • @sakarsr
    @sakarsr 1 month ago +1

    Wonderful job. It really helps to test different LLMs' coding capabilities. Thank you for your time, and good health.

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much! My pleasure :)

  • @shakeelhshah11
    @shakeelhshah11 1 month ago +2

    Hey Cole, you are great. I've never seen this type of work done and explained on YouTube. Thanks man. God bless you ❤

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much, that means a lot! 😃

  • @elizabethkirby1782
    @elizabethkirby1782 19 days ago +1

    Thanks very much for doing the fork & posting the info. I shall definitely try it out.

  • @LuisYax
    @LuisYax 1 month ago +2

    Cole, you are the mAn! Great work on forking bolt.new, your content is top notch...

    • @ColeMedin
      @ColeMedin  1 month ago

      Thanks so much Luis, I appreciate it a lot!

  • @encoretoicarlito
    @encoretoicarlito 29 days ago +1

    I’m commenting again to thank you one more time. You’re awesome brother. Thank you for your work

    • @ColeMedin
      @ColeMedin  28 days ago

      Haha my pleasure! Thanks man!

  • @Techonsapevole
    @Techonsapevole 1 month ago +2

    Epic, great improvements. I hope the changes will also get merged upstream

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much! I hope so too haha, it would be a win for everyone

  • @henrijohnson7779
    @henrijohnson7779 1 month ago +1

    Excellent work! Especially figuring out the Vercel chat interface and all

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much! Yeah that was the trickiest part!

  • @PiotrKwidzinski
    @PiotrKwidzinski 26 days ago +1

    This is great, Cole. I'm really enjoying your YT videos on N8N usage, as those are the most comprehensive tutorials on YT. Keep up the great work! Do you by any chance have a Docker compose to run this with Ollama?

    • @ColeMedin
      @ColeMedin  26 days ago

      Thanks so much for the kind words! I will certainly keep it up!
      I don't have a Docker compose for this right now, but this is something I want to do in the near future!

  • @kodcx9654
    @kodcx9654 16 days ago +1

    Really cool video, and I love how you explained everything.

  • @jparkr
    @jparkr 15 days ago +1

    Hi Cole! Thanks for the video. Somewhere in your videos you mentioned Discourse for discussions. Is that working now?

    • @ColeMedin
      @ColeMedin  12 days ago +1

      You are welcome! The Discourse community is actually coming next Sunday!

  • @n4botz
    @n4botz 23 days ago +1

    I think it's generally good that local LLMs can also be used. However, it doesn't work as well as it could (yet). Your work is great, and I'm excited to follow its further development. There is currently a problem with the code being executed and generated automatically - and it's precisely that core functionality that makes Bolt so exciting. But I don't want to make any demands here; I just want to share my ideas and thoughts. Maybe at some point there will be a native app that implements all of Bolt's functionality. Keep up the good work, and thanks to you, Cole. 👍

    • @ColeMedin
      @ColeMedin  23 days ago

      Yes, I agree it doesn't work as well as it could with local/smaller LLMs right now. Lots of opportunities for prompt engineering and agents behind the scenes, which we are indeed working on right now! Thanks for your thoughts!

  • @ricardocnn
    @ricardocnn 15 days ago +1

    Awesome project! Does bolt.new use the sonnet 3.5 model (the newest model)?

    • @ColeMedin
      @ColeMedin  15 days ago

      Thank you! I believe so!

  • @PathLink-fk3cp
    @PathLink-fk3cp 1 month ago +1

    v0's new speed update made me move back to it after using Bolt for a bit. I'll have to take a look at your fork - sounds exciting for local coding!

    • @ColeMedin
      @ColeMedin  1 month ago

      Super interesting, I didn't know about that! Thanks for taking a look at the fork too!

  • @q_u_a_d_r_a_b_y_t_e
    @q_u_a_d_r_a_b_y_t_e 1 month ago +1

    Thanks so much for all the work you put in and for sharing the how-to with us so we didn’t have to do it as well. This is a significant improvement to this already amazing world of AI assisted coding, especially with respect to Bolt.new. I can’t wait to download your fork.

    • @ColeMedin
      @ColeMedin  1 month ago

      You bet - thanks so much for the kind words! I hope it all works well for you when you try it out later!

  • @electroheadfx
    @electroheadfx 20 days ago +1

    Amazing - maybe later, any support for LM Studio with MLX support for Apple silicon? ;)

    • @ColeMedin
      @ColeMedin  17 days ago +1

      Yes that is on the list of improvements to be made!!

  • @SiliconSouthShow
    @SiliconSouthShow 1 month ago +1

    I'll have to grab it this evening. Hopefully you set up OpenRouter. But having Ollama is fantastic on its own.

    • @ColeMedin
      @ColeMedin  1 month ago +2

      Yes I did set up support for OpenRouter yesterday! Enjoy!!

  • @AnthonyGalati
    @AnthonyGalati 24 days ago +1

    This content is very well thought out, engaging, and helpful. I am going to be checking out this code. Have you ever thought about streaming while you code? I would be curious about your thought process, using AI prompts while developing, and showing the development. Group programming!

    • @ColeMedin
      @ColeMedin  23 days ago

      Thank you very much! And I have thought about it but not too much yet - thanks for mentioning that! It's very different than recording because if I get stuck on something the stream might get boring for a bit... haha
      But like you said the thought process behind everything could still be valuable - so I do want to do it in the future for sure!

  • @eaglebirdiepar
    @eaglebirdiepar 19 days ago +1

    Love your desk

  • @williammurphy9055
    @williammurphy9055 1 month ago +2

    This is awesome! Thanks for sharing, Cole. Do you know if there's a way to still upload files/screenshots using this version? I am a UI designer and want to share wireframes; I find it helps a lot in the build. Thanks!

    • @ColeMedin
      @ColeMedin  1 month ago +2

      Thank you very much - you bet! Uploading files is a feature that Bolt.new doesn't include in their open source version, unfortunately. I like being able to upload wireframes too so I really wish it was included. But I guess Bolt.new needs to have some closed source features so people have a reason to pay them.
      Maybe I'll have to add this in as an extension to what I've made here!

  • @zensajnani
    @zensajnani 26 days ago +1

    just tested this and it works great! you're a legend

    • @DeepakSuresh-te8xq
      @DeepakSuresh-te8xq 26 days ago

      hi, I can get the Bolt page to appear on my local server, but any request returns "There was an error processing your request". Struggling with this

    • @zensajnani
      @zensajnani 26 days ago +1

      @@DeepakSuresh-te8xq make sure you’ve added the api keys

    • @ColeMedin
      @ColeMedin  26 days ago +1

      That's awesome - you bet!!

  • @prakashj3193
    @prakashj3193 29 days ago +1

    Cole, is there any way the output code can be connected to a GitHub repo? This could add a lot of value to the work that you are doing. Keep up the good work!!!

    • @ColeMedin
      @ColeMedin  28 days ago +1

      Thank you and I appreciate the suggestion! Right now the open source version of Bolt.new doesn't support this, so I would have to implement it myself. But I am considering doing that because a lot of others have suggested it!

  • @mrgyani
    @mrgyani 1 month ago +1

    This is one of the best (most useful) AI videos I have seen in a long time. And that's saying something.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thank you very much, that means a lot to me! 😄

  • @DanHumphreys-bi8hk
    @DanHumphreys-bi8hk 29 days ago +1

    legend mate! love your content, always the best!

    • @ColeMedin
      @ColeMedin  28 days ago

      Much appreciated, thank you!! :D

  • @RobertoSilvaZuniga
    @RobertoSilvaZuniga 28 days ago +1

    Great work @ColeMedin!! But I don't know if this happens only to me: the chats always lose the previous context - I mean, every conversation needs to be a new one, adding the previous chat manually - and the code only appears in the chat, not in the Workbench "code view". Do you know why?

    • @ColeMedin
      @ColeMedin  28 days ago

      Thank you Roberto!
      I believe the conversation history issue is a limitation of the open source version of bolt.new. Something I am looking into!
      And then for the second issue - a lot of the smaller models have issues using the webcontainer (code view). So you'll only see a chat output, which is still useful, but obviously not ideal. I would first try a different model, especially if you're using a really small one.

    • @RobertoSilvaZuniga
      @RobertoSilvaZuniga 28 days ago +1

      @@ColeMedin Oh :(
      Thanks for your answer ;)

    • @ColeMedin
      @ColeMedin  26 days ago

      You bet! Sorry it isn't quite the answer you are looking for! Hopefully that solution for getting the smaller models to work in the webcontainer can work for you though!

  • @RCCarl-i3r
    @RCCarl-i3r 23 days ago +1

    This is fantastic. Amazing. The best AI youtuber I have ever seen.
    One more thing: could you please kindly add OpenRouter? It is a great one, including all models. Thanks a lot.

    • @ColeMedin
      @ColeMedin  23 days ago

      Thank you so much!! OpenRouter is available now!

  • @marcoatmac
    @marcoatmac 7 hours ago

    Great, thank you! Can you please make a video on how to install on a Mac? Up to the keys, no problem... from there on I don't understand how to continue?!

  • @jxhannes3209
    @jxhannes3209 1 month ago +1

    My Hero. I almost started to code bolt myself ;) Thanks

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Haha my pleasure! Coding something like Bolt.new would be fun but yeah a LOT of work!

  • @ErnestGWilsonII
    @ErnestGWilsonII 1 month ago +1

    Thank you for taking the time to make this video and share this with all of us.
    Is there any way you can make this use my local file system, or even better, use Visual Studio Code?

    • @ColeMedin
      @ColeMedin  1 month ago

      My pleasure!
      Good question - so Bolt.new doesn't have this functionality at all so I would have to make it entirely from scratch. Which I am considering doing because a few people have requested exactly this already! Or at least to include the ability to download locally what Bolt.new creates.

  • @QuranLiveRecitation
    @QuranLiveRecitation 25 days ago +1

    The attachments button is missing. But nice work!

    • @ColeMedin
      @ColeMedin  25 days ago

      Thank you! And yes it is unfortunately - that is something not included in the open source version of Bolt.new, I guess so they can have some proprietary stuff so people will pay for their platform. But that is something I am looking to implement!

  • @rousabout7578
    @rousabout7578 1 month ago +1

    Nice work! Your modifications seem so obvious, I wonder why they weren't already integrated.

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you and I agree! Maybe the Bolt.new team will see my changes and implement the same thing themselves if my video gets enough traction. It would be a win for everyone!

  • @PixelFrontier-channel
    @PixelFrontier-channel 1 month ago +1

    Excellent man, I was able to get this up and running quickly. Only issue is my llama3.2:1b model is just spitting out the code in the same chat window lol. Probably a me problem or a model issue; going to try some of the models you listed.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thank you - glad you have it running yourself too!
      Yeah I've noticed as well that the smaller models sometimes don't work very well with Bolt.new's prompt so they won't open up a WebContainer on the right side and it'll be more like a regular chat. Still helpful but yeah obviously not what we are looking for mostly.
      If you are able to, I would try a larger 30b+ param model like CodeLlama 34b or CodeBooga 34b. Or try DeepSeek-Coder through OpenRouter, that model kicks butt.
      Otherwise it might be possible to change up the Bolt.new system prompt to work better with smaller models. That is something I am still researching!

    • @PixelFrontier-channel
      @PixelFrontier-channel 29 days ago +1

      @@ColeMedin Awesome thanks for the suggestions!

    • @ColeMedin
      @ColeMedin  28 days ago

      Of course!!

  • @Jonathan-o1h9x
    @Jonathan-o1h9x 1 month ago +2

    Just added Google Gemini from the Vercel AI SDK. Incredibly simple, but it doesn't seem to be as capable as the other models. It seems the in-browser code/preview canvas needs some prompt engineering

    • @ColeMedin
      @ColeMedin  1 month ago

      That's awesome you got Gemini added in! Nice job!
      Too bad it isn't performing well though... you're right - for many of the not as powerful models there should be an opportunity to tune up the Bolt.new prompt to make it work better. That is something I am looking into!

    • @guerra_dos_bichos
      @guerra_dos_bichos 29 days ago +1

      @@ColeMedin just an update - it was something with my environment, now the canvas is working. Just wondering if there's a way to automatically save the files

    • @ColeMedin
      @ColeMedin  28 days ago

      @guerra_dos_bichos Awesome, glad it is working for you! There isn't a way to save files right now since the open source version of Bolt.new doesn't support that unfortunately, but I am looking into making it myself for my fork since it is a highly requested feature!

  • @mikebennett5559
    @mikebennett5559 20 days ago +1

    Preview on same page is on-point

    • @harish.mnaidu3519
      @harish.mnaidu3519 19 days ago

      hey is preview possible???

    • @ColeMedin
      @ColeMedin  17 days ago

      This can happen sometimes depending on the model you use - which models are you trying to use?

  • @quadratetechsolutionspriva3669
    @quadratetechsolutionspriva3669 26 days ago +2

    Please include Azure OpenAI API support as well

    • @ColeMedin
      @ColeMedin  25 days ago

      I will add this to my list of improvements to make to the platform!

  • @zkiyyeller3525
    @zkiyyeller3525 1 month ago +2

    Thank you Cole, truly valuable content.

  • @mehmetnaciakkk3983
    @mehmetnaciakkk3983 1 month ago +1

    Nice work! And with that, you got yourself one more subscriber 😊

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much, I appreciate the support a lot!

  • @WhyBacon
    @WhyBacon 1 month ago +1

    Thanks Cole, you got yourself a new subscriber

    • @ColeMedin
      @ColeMedin  1 month ago

      My pleasure - thank you very much!

  • @1brokkolibaum
    @1brokkolibaum 1 month ago +1

    Glad you showed what to change, so I can add LM Studio much faster 😁👍

    • @ColeMedin
      @ColeMedin  1 month ago

      Always happy to help!! 😃

  • @MeTuMaTHiCa
    @MeTuMaTHiCa 1 month ago +1

    Thanks for all the explanations and hard work. Sometimes a local LLM alone is sluggish, but with Flowise and 2 or 3 agents it gets fast and doesn't use the CPU and GPU nearly as much - if we can integrate that with Bolt and your system...
    Nice work

    • @ColeMedin
      @ColeMedin  1 month ago

      My pleasure, thank you! Could you expand a bit more on your idea here? Sounds interesting!

    • @MeTuMaTHiCa
      @MeTuMaTHiCa 1 month ago

      @@ColeMedin I don't know how to write in a programming language. But I have found that normally, when I use an artificial intelligence alone, it gives very late and bad answers. But when I create different agents with Flowise, give them tasks, and run the system piece by piece, I get both more proper and more efficient answers. If we can use this in your system, something great will come out. It's basically like separating the prompt entered by the user into chunks. I don't know if I explained myself well. Still, you have produced something very nice - congratulations.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thanks for the kind words and yeah I see what you mean now!
      This kind of thing where you have agents running in the background to produce the final result for the Bolt.new frontend is certainly doable! It would take extending the platform quite a bit, but I do love the idea!

  • @DanielCleggZA
    @DanielCleggZA 19 days ago

    Awesome - what has been your best Ollama model for this: DeepSeek, Mistral, Qwen?

    • @ColeMedin
      @ColeMedin  17 days ago

      DeepSeek has been my favorite!

  • @michaelandersonse
    @michaelandersonse 1 month ago +1

    Thank you for a great fork!!! Enjoying your videos. I wish there was an option to only write an HTML, CSS, JS site instead of always having it built in a stack like Vue or Next.js.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thank you Michael! I've actually had luck getting it to only write HTML, CSS, and vanilla JS. I just have to specifically ask for only that in my prompting. Sometimes it still likes to create a package.json file but I think that can be fixed by tuning the Bolt.new prompt for the LLM.

    • @michaelandersonse
      @michaelandersonse 1 month ago +1

      @@ColeMedin Thanks Cole!! I tried that and it worked - no framework files :)
      I appreciate you!

    • @ColeMedin
      @ColeMedin  28 days ago

      Awesome man!! You bet!

  • @techtodayAI
    @techtodayAI 28 days ago +1

    Awesome video! Could you share a Docker version of this fork?

    • @ColeMedin
      @ColeMedin  28 days ago +1

      Thank you! I haven't containerized this yet but I like the suggestion! I will certainly consider doing that especially if I add any other services to this fork like agents in the backend.

  • @erdtyfgjy
    @erdtyfgjy 1 month ago +2

    Thanks, excellent work. Any way to import projects I built in bolt.new to continue building them with your fork?

    • @timrobinson5235
      @timrobinson5235 1 month ago

      oh and @cole the AI-enhanced prompt seems to be hard-coded to Anthropic, so since I have no available credits it won't work - can we have the enhance prompt point to the chosen LLM?

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thank you so so much for your support and kind words!!
      Right now this isn't possible because Bolt.new doesn't include the import feature in their open source version. I guess they have to keep some things closed source so people have a reason to pay them for their cloud offering.
      This is something I am looking into adding though! But it will certainly be a good amount of work to set up!

    • @erdtyfgjy
      @erdtyfgjy 16 days ago

      @@ColeMedin bolt.new keeps losing my projects anyway, so their implementation wouldn't be the right direction!

  • @MagieCannizzaro
    @MagieCannizzaro 29 days ago

    Really great work!!
    The chat works and answers questions,
    but using Ollama models my preview and code view are empty!
    With GPTO (using OpenRouter) it works.
    Maybe there are some preferred Ollama models to use?

    • @ColeMedin
      @ColeMedin  28 days ago +1

      Thank you man!
      Yeah I've noticed as well that the smaller models sometimes don't work very well with Bolt.new's prompt so they won't open up a WebContainer on the right side and it'll be more like a regular chat. Still helpful but yeah obviously not what we are looking for mostly.
      If you are able to, I would try a larger 30b+ param model like CodeLlama 34b or CodeBooga 34b. Or try DeepSeek-Coder through OpenRouter, that model kicks butt.
      Otherwise it might be possible to change up the Bolt.new system prompt to work better with smaller models. That is something I am still researching!

  • @souvickdas5564
    @souvickdas5564 29 days ago +1

    Superb work. Just one thing I would ask you: how do I push the code from the interface to GitHub? And where can I find all the code or projects?

    • @ColeMedin
      @ColeMedin  28 days ago

      Thank you and good question! Unfortunately this is something that Bolt.new didn't include in their open source version. So I would have to add it entirely myself - which I am considering doing since a lot of people have requested it!

    • @souvickdas5564
      @souvickdas5564 28 days ago

      @@ColeMedin The most-awaited feature update. Hope you will do it soon.

    • @ColeMedin
      @ColeMedin  28 days ago

      I'm planning out my content for the next month and including this, so it will be reasonably soon!

  • @m.f.mfazrin8720
    @m.f.mfazrin8720 26 days ago +1

    This is awesome! Can I use my fine-tuned/base GPT model deployed in Azure AI Studio?

    • @ColeMedin
      @ColeMedin  25 days ago +1

      Thank you! You sure can! You would just need to create an OpenAI provider instance for the model provider where you override the baseURL to point to your GPT hosted in Azure AI Studio. Or I believe they have direct support for what you are looking for here (correct me if I'm wrong if the studio is different from this):
      sdk.vercel.ai/providers/ai-sdk-providers/azure
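      A rough sketch of what that provider wiring could look like, following the Vercel AI SDK Azure docs linked above (an illustration only, not the fork's actual code; `AZURE_RESOURCE_NAME` and `getAzureModel` are made-up names):

```typescript
// Sketch only: wiring an Azure AI Studio GPT deployment via the
// Vercel AI SDK's Azure provider. AZURE_RESOURCE_NAME is an assumed
// env var name; match it to whatever you define in .env.
import { createAzure } from '@ai-sdk/azure';

export function getAzureModel(apiKey: string, deploymentName: string) {
  const azure = createAzure({
    resourceName: process.env.AZURE_RESOURCE_NAME, // e.g. "my-ai-studio-resource"
    apiKey,
  });
  // The id passed here is your Azure *deployment* name,
  // not the underlying base model name.
  return azure(deploymentName);
}
```

      (Provider wiring like this only runs against live Azure credentials, so treat it as a configuration template.)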

    • @m.f.mfazrin8720
      @m.f.mfazrin8720 23 days ago

      @@ColeMedin Exactly! That's what I was looking for. I tried implementing it using the same approach from the link you shared but encountered an error. As a .NET backend developer, I'm still relatively new to the Node.js stack and learning it as I go. If you could assist in adding this functionality, it would be incredibly helpful. Thanks in advance!

    • @ColeMedin
      @ColeMedin  23 days ago

      Yeah I would love to help! What is the error you ran into?

  • @T33KS
    @T33KS 1 month ago

    Hey Cole, like many have said here, thanks for putting out some of the best straightforward, hands-on content.
    I have a question that no one has been able to answer. I'm hoping that with your experience in this field, you'll be able to finally put it to rest:
    What AI coding tool would you use to work with large files in a codebase? I have a JS file with 40k lines of code, and none of the popular tools out there has been able to handle such a large context.

    • @ColeMedin
      @ColeMedin  1 month ago

      My pleasure, thank you for the kind words!
      This probably isn't the answer you are looking for, but I put a lot of thought into the second paragraph so hopefully it helps! I would suggest against having any single file in source code with that many lines. Typically for a JS project, you would split the code into separate components and have all of those in different files. Traditionally recommended for readability + reusability of components, but even more important now for being able to have LLMs come in and help with the code more easily.
      Now, I don't know what your codebase looks like and I'm sure there is a good reason you have a file that big! If you really do want to handle files that big, you'd probably have to develop a custom system that splits the file up and then feeds chunks one at a time to the LLM to process and do whatever you need it to do, like update sections of the code. So basically you summarize each piece of the code so the LLM can navigate between chunks and make the necessary updates in a multi-step agentic workflow.
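      The chunk-splitting step of that idea can be sketched in a few lines (a minimal illustration; the 500-line default and the function name are arbitrary assumptions, and a real pipeline would also summarize each chunk for the agent):

```typescript
// Minimal sketch of the chunking step: break a huge source file into
// fixed-size line chunks small enough to fit an LLM context window.
// A real pipeline would also attach a summary to each chunk so an
// agent can decide which chunk to load and edit.
function splitIntoChunks(source: string, linesPerChunk = 500): string[] {
  const lines = source.split('\n');
  const chunks: string[] = [];
  for (let i = 0; i < lines.length; i += linesPerChunk) {
    chunks.push(lines.slice(i, i + linesPerChunk).join('\n'));
  }
  return chunks;
}
```

      For a 40k-line file this yields 80 chunks of at most 500 lines each, which can then be summarized and fed to the LLM one at a time.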

    • @T33KS
      @T33KS 1 month ago

      @@ColeMedin Thanks for taking the time to reply in detail. Your answer makes complete sense, especially after researching and testing the AI coding tools in the current meta.
      The large JS file is the output of a Vue build that was uglified and then beautified, and I don't have access to the source files. That's my only issue. I guess I'm going to have to try and dissect it into multiple files based on my intuition.
      Anyway, thanks again. Cheers

    • @ColeMedin
      @ColeMedin  1 month ago

      Ah okay that makes sense! That certainly does make it tougher. Good luck splitting it up, I hope that works out well for you and makes it possible to use LLMs to assist more!
      You bet!!

  • @zepposprojects3205
    @zepposprojects3205 19 days ago

    Thanks for this! Works "out of the box"

    • @ColeMedin
      @ColeMedin  17 days ago

      You are so welcome! That's a strange issue: Bolt.new is specifically prompted not to do that, and I haven't run into it myself. Which model are you using? I would also just specify in your prompt to include all the code in each file it rewrites!

  • @kevinmolina6692
    @kevinmolina6692 1 month ago +1

    thank you so much! quality work! thanks for being you!!

    • @ColeMedin
      @ColeMedin  1 month ago

      I appreciate it a ton, thank you!! :)

  • @MarcusNeufeldt
    @MarcusNeufeldt 1 month ago +1

    Looks dope! Slap OpenRouter support on this and it's gold

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you! And I already did yesterday! 😎

  • @Flutterbro12
    @Flutterbro12 1 month ago +1

    If you could make a video about how you edited the source code (using AI or manually), it would be a banger. Great presentation, subscribed!!!

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much, I appreciate the support a lot!
      I do show at the end of the video how I edited the source code to make this happen. I didn't actually use AI for this since the changes were between so many different files. Or is there something more specific you were wondering about me making a video on related to this?

  • @mercadolibreventas
    @mercadolibreventas 1 month ago +1

    Hi Cole, Great Job!

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much! 😀

  • @TomPane-pl4lk
    @TomPane-pl4lk 1 month ago +2

    Hey Cole, thanks so much!

  • @simonsaysboo
    @simonsaysboo 29 days ago +1

    Huge thumbs up for this - subscribed, cloned, now a follower :)

    • @ColeMedin
      @ColeMedin  28 days ago

      Thanks so much, the perfect trifecta! haha 😀

  • @sillybilly346
    @sillybilly346 1 month ago +1

    Great vid man! Question: if I'm using an API provider like Azure which requires an API key and a 'resourceName', do I need to include the 'resourceName' in the api-key.ts switch statement too, or just the API key? (apiKey and resourceName are both environment variables)
    Any help would be greatly appreciated!

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much!
      You wouldn't have to include the resourceName in the api-key.ts switch! You would just need to include process.env.AZURE_RESOURCE_NAME or whatever you call the environment variable in the call to createAzure in models.ts just like you include the apiKey there.
      I assume you saw the docs for this, but in case you didn't:
      sdk.vercel.ai/providers/ai-sdk-providers/azure

    • @sillybilly346
      @sillybilly346 1 month ago

      @@ColeMedin Appreciate the response! Yes, I'm following the Vercel docs closely. Last question, if you don't mind: what about the anthropic-vertex community provider, which doesn't include an API key at all? Does that get left out of api-key.ts completely, similar to the local Ollama models? Thanks again!

    • @ColeMedin
      @ColeMedin  1 month ago

      I don't mind at all!
      And yes that is the correct understanding!

  • @hope42
    @hope42 1 month ago +1

    Did you try Gemini 1.5 Flash? Nice work.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      @@hope42 Thank you! And I did not yet - I've got a huge list of models I want to try and that is one of them!

    • @thomasheinzm.9094
      @thomasheinzm.9094 1 month ago +1

      That's my thought too!!

  • @floydpaul5851
    @floydpaul5851 1 month ago +1

    Exactly what I needed, without realising I needed it! Thank you! Could you clarify: you say that the Ollama models should be installed before use. I ran "ollama run deepseek-coder-v2" and it's installed, and I can see it when I list Ollama models, but when I then try to use the model in Bolt.new I get the error "responseBody: '{"error":"model \\"deepseek-coder-v2:16b\\" not found, try pulling it first"}'". Am I missing something?

    • @floydpaul5851
      @floydpaul5851 1 month ago +1

      Solution: double-check the model names by running "ollama list", then ensure the names in the .env match! For me, some of the downloads from Ollama are saved as model:latest instead of model:15b.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Glad you figured it out! Your solution as a reply to your own comment is exactly what I was going to say!

  • @cashdoo
    @cashdoo 9 days ago +2

    How much would it cost me per message on average if I use the Claude 3.5 Sonnet API?

    • @ColeMedin
      @ColeMedin  8 days ago

      Great question! Around ~$0.02 on average I would say. Given Claude 3.5 Sonnet is $3 per million input and $15 per million output tokens.
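      The arithmetic behind that estimate can be checked directly (the ~4,000 input / ~1,000 output token counts below are assumed "typical message" sizes, not measured values):

```typescript
// Back-of-the-envelope Claude 3.5 Sonnet cost per message, using the
// published $3 / $15 per-million input/output token prices.
function claudeSonnetCostUSD(inputTokens: number, outputTokens: number): number {
  const INPUT_USD_PER_MILLION = 3;
  const OUTPUT_USD_PER_MILLION = 15;
  return (
    (inputTokens / 1_000_000) * INPUT_USD_PER_MILLION +
    (outputTokens / 1_000_000) * OUTPUT_USD_PER_MILLION
  );
}

// An assumed "typical" Bolt.new message: ~4k tokens in, ~1k tokens out.
console.log(claudeSonnetCostUSD(4000, 1000).toFixed(3)); // prints 0.027
```

      So a message lands in the $0.02-$0.03 range, dominated by output tokens at 5x the input price.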

  • @reedickyaluss
    @reedickyaluss 15 days ago +1

    What's better at coding your projects, GPT-4o or Claude 3.5?

    • @ColeMedin
      @ColeMedin  15 days ago +1

      They are pretty close so sometimes I'll actually use both when one encounters an issue! But typically I found Claude 3.5 Sonnet to be slightly stronger.

    • @reedickyaluss
      @reedickyaluss 15 days ago

      @ColeMedin Thanks. I'm going through 10M tokens a day on Bolt. Is the downloadable LLM the same for your local fork, or can you not download Claude 3.5?

    • @ColeMedin
      @ColeMedin  12 days ago

      Sorry could you clarify your question?

  • @eanbri8940
    @eanbri8940 10 days ago +1

    I love this. However, we can't install packages/libraries to run projects in the preview. How do we do that?

    • @ColeMedin
      @ColeMedin  8 days ago

      Thank you! Could you clarify your question? You should be able to install packages and get a preview just like in the commercial version of Bolt.new

  • @sauliusjuozaitis15
    @sauliusjuozaitis15 1 month ago +1

    Cole, do you have any video of full end-to-end automation over voice?
    For example, connecting Siri to an LLM and an n8n workflow. Or it could be a self-hosted IP phone where you can dial in and speak to an LLM to execute some action

    • @TheWiiZZLE
      @TheWiiZZLE 1 month ago

      +1

    • @ColeMedin
      @ColeMedin  1 month ago +2

      I do not yet but this is in my pipeline to make content on! Especially for a personal assistant that you can just have access to on your phone!

    • @TheWiiZZLE
      @TheWiiZZLE 1 month ago +1

      @@ColeMedin yes please!

  • @muldurksk
    @muldurksk 1 month ago +1

    I'm wondering how I would use an open-source model from LM Studio, since it seems like there is no provider built for that. Thanks for the great content!

    • @ColeMedin
      @ColeMedin  1 month ago

      You bet! And I believe LM Studio supports OpenAI compatible endpoints so you can set up LM Studio just like I did with Groq in this video!
      lmstudio.ai/docs/basics/server#openai-like-api-endpoints
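      Concretely, that could look something like this (a sketch under assumptions: 1234 is LM Studio's default server port, and LM Studio ignores the API key, so any placeholder value works):

```typescript
// Sketch: LM Studio serves an OpenAI-compatible API, so it can be
// registered like any other OpenAI-style provider in the fork.
import { createOpenAI } from '@ai-sdk/openai';

export function getLMStudioModel(modelId: string) {
  const lmstudio = createOpenAI({
    baseURL: 'http://localhost:1234/v1', // LM Studio's default local server
    apiKey: 'lm-studio', // ignored by LM Studio, but the SDK expects a value
  });
  return lmstudio(modelId);
}
```

      (This only runs against a live LM Studio server, so treat it as a configuration template.)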

  • @Mohd15021
    @Mohd15021 19 days ago +1

    Hi, thanks for this amazing project. My inquiry: I have an OpenAI API key but I'm unable to run it, although I installed the Canary browser and followed the main commands. Could you make a video, if possible, explaining how to set the OpenAI API key step by step? Thanks

    • @ColeMedin
      @ColeMedin  17 days ago

      I will be making a step by step guide on this soon here!

  • @leonardobetti8811
    @leonardobetti8811 1 month ago +1

    Absolutely fantastic, thanks a lot for that!!!! Can we use the Gemini Flash 1.5 (free) API? Also, kudos for the suggestion to export a zip file which includes all the files

    • @ColeMedin
      @ColeMedin  1 month ago

      Thanks Leonardo! You can use Gemini Flash 1.5 through OpenRouter right now! This version from OpenRouter is totally free:
      google/gemini-flash-1.5-exp

  • @Yoko-0x0
    @Yoko-0x0 1 month ago +1

    Great, I will try to use this fork with LM Studio

    • @ColeMedin
      @ColeMedin  1 month ago

      Sounds great, good luck! :D

  • @AI_Creatives_Toolbox
    @AI_Creatives_Toolbox 1 month ago +1

    Amazing work!! Thanks so much for sharing! Is there a way to fiddle with this app so it could write files locally like Cursor on Windows?

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you and fantastic question! That isn't available in Bolt.new but I am looking into how I could extend my version to make that possible! Or at least make it so you can download directly the project that it generates.

    • @AI_Creatives_Toolbox
      @AI_Creatives_Toolbox 1 month ago +1

      @@ColeMedin Amazing! Thanks so much for all your work.

    • @ColeMedin
      @ColeMedin  1 month ago

      My pleasure!!

  • @screwf4ce1
    @screwf4ce1 29 days ago

    Great stuff, Cole. I want to give this a go, specifically for the Ollama bit. However, what would I need to change if I have Ollama running on another computer in my house, rather than locally on the machine running your fork of Bolt.new? Could I simply put a baseURL: line at line 35 of models.ts?

    • @ColeMedin
      @ColeMedin  28 days ago

      Thank you! And yes, you should be able to just override the baseUrl as long as that IP is accessible from your machine (i.e. no firewalls blocking or anything like that)!
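      For example, the override could look like this (a sketch: 192.168.1.50 is a placeholder for the LAN machine running Ollama, and `createOllama` is assumed to come from the `ollama-ai-provider` package the fork uses):

```typescript
// Sketch: pointing the Ollama provider at a different machine on the
// local network instead of localhost. Port 11434 must be reachable
// from the machine running the fork (no firewall blocking it).
import { createOllama } from 'ollama-ai-provider';

export function getRemoteOllamaModel(modelId: string) {
  const remoteOllama = createOllama({
    baseURL: 'http://192.168.1.50:11434/api', // placeholder LAN address
  });
  return remoteOllama(modelId);
}
```

      (This only runs against a live Ollama server, so treat it as a configuration template.)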

    • @curiousturtle8190
      @curiousturtle8190 28 days ago +1

      @@ColeMedin For some reason, none of the Ollama-based models are working on my system. Bolt.new throws an error in the console about the model not being found. I have already ensured that Ollama is up and running on the default port 11434.
      Can you help with what might be going wrong? I'm on a Mac.

    • @ColeMedin
      @ColeMedin  28 days ago +1

      @curiousturtle8190 Make sure you run the ollama pull command for the exact same model ID that you are using within Bolt.new!
      So if you want to use codellama 34b, for example, you would first have to run the command:
      ollama pull codellama:34b
      All the model IDs can be found in the app/utils/constants.ts file that I show in the video!

    • @curiousturtle8190
      @curiousturtle8190 26 days ago +1

      @@ColeMedin You are 💯 correct! After using the exact name, it works like a charm. The default pull was using the latest tag name, which was causing an error on my end. Thanks a ton! 😊

    • @ColeMedin
      @ColeMedin  23 days ago +1

      You bet!!

  • @h.w7141
    @h.w7141 26 days ago +2

    Can this work with Nvidia's new 3.1 70b model? That would be amazing

    • @ColeMedin
      @ColeMedin  25 days ago

      Great question! And the answer is yes! You just have to pull it from Ollama and it'll be available to use here.
      ollama.com/library/nemotron

  • @theNotLogo
    @theNotLogo 29 days ago +1

    Thanks Cole. This is awesome because you have integrated local models. I'm using Agent Zero but can't fully write full-stack apps with it. Now I want to try Bolt. So which Ollama model is best for full-stack development?

    • @ColeMedin
      @ColeMedin  28 days ago +1

      My pleasure! Out of all the Ollama models, I would give codellama a shot first!

    • @theNotLogo
      @theNotLogo 28 days ago

      @@ColeMedin Thanks, I will try it.

    • @ColeMedin
      @ColeMedin  28 days ago

      Sounds great! You bet!

    • @theNotLogo
      @theNotLogo 21 days ago

      @@ColeMedin CodeLlama 13b and 34b are not working properly. They don't follow the instructions or use the tools at all. They can't work with "artifacts". I'll keep trying something else...

    • @theNotLogo
      @theNotLogo 21 days ago +1

      @@ColeMedin I found one interesting fact. I set the system prompt for DeepSeek Coder V2 16b in Ollama, and when I ask it to do some stuff it works like a simple chat bot, but when I tell it to use artifacts it starts working properly
      Another interesting thing is that when Bolt uses Anthropic models (Claude Sonnet or others), it passes instructions in the system prompt while connecting with its API key. The system prompt can be read from the terminal where Bolt is started (cmd/PowerShell). I copied that prompt, but the Ollama model could not understand it directly, so I rewrote it as plain text and changed the formatting, and then it took on the role of the Bolt coder, understood the tools to use, and started working
      So could you add the same functionality when loading an Ollama model from the local API into Bolt: give it the instructions while loading, like it does for Claude?
      I think that would fix the issue, and we could test every model from Ollama and choose the best one, because their system prompts would be set correctly.

  • @hugo_bart
    @hugo_bart 1 month ago +2

    thanks! keep up the good work

    • @ColeMedin
      @ColeMedin  1 month ago

      Thanks so much Hugo! Your support means a ton to me! 😄

  • @Bob-huisbaas
    @Bob-huisbaas 1 month ago +2

    This is super fucking cool. You're telling me that if you have a fast PC the local models work better? Time to install VS Code and give this a try on my gaming PC haha

    • @ColeMedin
      @ColeMedin  1 month ago

      Haha thanks man! Not all local models work super well but some do, especially the bigger ones like DeepSeek-Coder V2. Hope it works well for you!

  • @MINLAN1231
    @MINLAN1231 1 month ago +1

    Your explanation is really detailed. I really like your style. Please keep at it

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much! I definitely will keep at it!

  • @ana446lpd2
    @ana446lpd2 25 days ago

    Great video. Can you explain how to add any LLM to an existing Bolt.new? 🙏🏼
    Thanks!

    • @ColeMedin
      @ColeMedin  23 days ago

      Thanks! I'll be doing a followup video on this!

  • @k2an
    @k2an 1 month ago +1

    Awesome, really awesome, #1 quality content. If I may ask: I downloaded your fork, then ran pnpm install, and everything is working well, but I can't see the created files on the right side; there is nothing. I created something from the sample todo, but no file was created on the right side. Do you know how I can fix this? Thanks
    PS: using Win 11 and Ollama with deepseek-coder-v2 to try it out.

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much man!
      I assume you are using the 16B param version of Deepseek Coder? The smaller models sometimes don't work very well with Bolt.new's prompt so they won't open up a WebContainer on the right side and it'll be more like a regular chat. Still helpful but yeah obviously not what we are looking for mostly.
      If you are able to, I would try a larger 30b+ param model like CodeLlama 34b or CodeBooga 34b.
      Otherwise it might be possible to change up the Bolt.new system prompt to work better with smaller models. That is something I am still researching!

    • @k2an
      @k2an 1 month ago

      @@ColeMedin Thank you for the fast answer. I found this: if I say "create me a todo list app" it doesn't write to the container, but if I say "build me a bla bla" it does create it there. And I really want to ask: for a React/Next.js + Tailwind combo, which LLM is the best for results in your opinion? Thanks again for this awesome work!

    • @ColeMedin
      @ColeMedin  1 month ago

      Of course!
      Interesting! So you're saying even small changes to the prompt can help the smaller models interact with the webcontainer properly?
      I've been having a lot of fun and success with DeepSeek-Coder 236b from either Ollama (though you have to have a really good machine!) or OpenRouter (super cheap). It doesn't do the best with styling but it corrects itself really easily when you ask and the functionality is super good.

    • @PixelFrontier-channel
      @PixelFrontier-channel 1 month ago

      @@ColeMedin Dang I guess I can't run this then... Now I see why my models aren't opening the editor lol. Guess I have to wait until newer models come out that can handle it.

  • @sirusThu
    @sirusThu 11 days ago +1

    Amazing job

  • @exileofthemainstream8787
    @exileofthemainstream8787 1 month ago +2

    So how does Bolt compare with Cursor? And can you make AI agents within Bolt? Like, does Bolt replace VS Code?

    • @ColeMedin
      @ColeMedin  1 month ago

      Great questions! I've had a LOT more luck with Bolt.new compared to Cursor. I like both, but Bolt.new has given me a better experience overall.
      Bolt is more focused on the frontend even though it is full stack, so I wouldn't necessarily use it for creating AI agents. But you could certainly try and see what it can put out for you!

    • @exileofthemainstream8787
      @exileofthemainstream8787 1 month ago +1

      @@ColeMedin Thanks for your reply.

    • @ColeMedin
      @ColeMedin  1 month ago

      Of course!!

    • @exileofthemainstream8787
      @exileofthemainstream8787 1 month ago

      @@ColeMedin I tried your repo. I am having issues using Groq. It just says "there is an error processing your request". I tried all the various Groq models you listed too.

  • @freelancellc
    @freelancellc 1 month ago

    Man, this thing works great! Solid work, my friend. Quick question: is there a way to paste a screenshot in the fork you created so that it can interpret it, or is that only in the original one? Thanks

    • @ColeMedin
      @ColeMedin  1 month ago +2

      Thank you so much, I'm glad it's working well for you!
      Unfortunately Bolt.new doesn't provide this feature in the open source version. I guess they have to keep some things closed source so people are willing to pay for what they offer in the cloud.
      But I am considering adding support for this in my forked version!

    • @davidbraun7356
      @davidbraun7356 1 month ago

      @@ColeMedin +1 for image input! Is this complicated to add? I'm curious about how it works in the cloud version (and in V0, Replit, and others...).

    • @ColeMedin
      @ColeMedin  1 month ago +1

      @davidbraun7356 I'm guessing it will be fairly complicated... and also not all models will support it so I'll have to figure out how to make that a good experience too. But it would be freaking awesome to have in the fork!