TechXplainator
How to Set Up and Run ComfyUI on Lightning AI for FREE!
In this video, I'll guide you through the complete process of running ComfyUI on Lightning AI. You'll learn how to sign up for Lightning AI, install ComfyUI and ComfyUI Manager, download Stable Diffusion models, and run ComfyUI efficiently. Plus, I'll provide a walkthrough of the ComfyUI interface. Whether you're new to ComfyUI or looking to optimize your workflow, this tutorial has everything you need to get started.
___________________________________________________________________________________________________
👉🏻 Written tutorial with links and code: techxplainator.com/how-to-set-up-and-run-comfyui-on-lightning-ai
🔗 FREE studio template: lightning.ai/techxplainator/studios/comfyui-stable-diffusion-starter-kit
Chapters:
----------------
00:00 Intro
01:18 FREE studio Template
01:42 ComfyUI Installation Step 1: Sign up to Lightning AI
03:50 ComfyUI Installation Step 2: Install ComfyUI
05:29 ComfyUI Installation Step 3: Install ComfyUI Manager
07:35 ComfyUI Installation Step 4: Run ComfyUI
09:55 ComfyUI Installation Step 5: Connect ComfyUI
10:55 ComfyUI Installation Step 6: Add ComfyUI to studio startup
11:20 ComfyUI Installation Step 7: Rename the Studio
11:56 ComfyUI Installation Step 8: Verify Studio Configuration
13:47 Download Checkpoints Option 1: ComfyUI Manager
14:49 Download Checkpoints Option 2: Via Terminal
16:35 Download Checkpoints Option 3: Manual Upload
17:06 ComfyUI Walkthrough
#AI #stablediffusion #comfyui #lightningai
Views: 48

Videos

Forget Colab: Lightning AI Is the FREE Dev Tool I Wish I Knew Sooner!
200 views · 14 days ago
Discover why Lightning AI is the best FREE alternative to Google Colab for AI development! Learn how to bypass installs, avoid dependency issues, and easily access GPUs. 👉🏻 Written tutorial with links and code: techxplainator.com/lightning-ai-best-alternative-for-colab 🔗 Lightning.AI website: lightning.ai/ Chapters: 00:00 Intro 01:01 Why Even Bother? 02:14 My issues with Colab 03:33 Lightning A...
Llamafile: The Easiest Way of Running Your Own AI Locally and for Free!
487 views · 28 days ago
Learn how to run powerful AI models directly on YOUR computer with Llamafile! It's FREE, private, and SUPER EASY to use - no programming skills required! Just download ONE file and use it completely OFFLINE. This video is your step-by-step guide to getting started with Llamafile. 👩‍🎓 My Free Tutorials: 🔗 How to get started with Llamafile: techxplainator.com/the-easiest-way-of-running-your-own-a...
How to Set Up Ollama and Open WebUI for Remote Access: Your Personal Assistant on the Go!
3.6K views · 1 month ago
Unlock the power of Open WebUI from anywhere with Ollama! Learn how to access this powerful AI chatbot platform on your phone, tablet, or any device. Get started with remote access, no coding required, and unleash the potential of large language models for free. In this tutorial, I'll show you how to set up Open WebUI on your local machine and connect it to Ollama for seamless remote access usi...
How to Build Free AI Agents 🤖 with Llama3 & Colab
586 views · 1 month ago
Unleash your inner mad scientist! This video teaches you to build AI agents with Google Colab and the free LLM Llama3 using Ollama. Code your own Minions for fun (and maybe world domination - who am I to judge). And all with open source and free tools! 🔗 Jupyter Notebook: github.com/TechXplainator/Tutorials/blob/main/ai-agents/how-to-build-free-ai-agents-minions/world_domination.ipynb 👩‍🎓 My...
Ollama on Google Colab: A Game-Changer!
2K views · 2 months ago
Struggling to run large language models locally due to limited GPU resources? Discover how to effortlessly execute Ollama models on Google Colab's powerful cloud infrastructure. In this tutorial, I'll guide you through the entire process, from setting up your Colab environment to running your first Ollama model. No need for expensive hardware - let Colab do the heavy lifting! 🔗 How to run Ollam...
Get Your Own Local AI Coding Copilot for Free!
331 views · 2 months ago
Local Coding Power: How to Set Up a Free AI Coding Assistant in Minutes! In this video, I'll show you how to set up your own local LLAMA3 copilot using Code-GPT and Ollama in Visual Studio Code. We'll install and configure the tools together, then explore exciting features that boost productivity and code quality. 🔗 How to run Ollama as coding assistant: techxplainator.com/unlocking-local-codin...
New Era of Image Generation in Leonardo AI: Gen V2 Walkthrough
543 views · 3 months ago
Upgrade Your Creations with Leonardo AI's Gen V2 Features Get ready to take your image generation skills to the next level with Leonardo AI's new and improved text-to-image generation UI, 'Image Generation V2'! In this walkthrough, I'll show you how to switch to legacy mode for familiar controls, as well as dive deep into the new features that make V2 a major upgrade. From presets to ControlNet...
How to Import open source LLMs from Huggingface and run them locally
234 views · 3 months ago
A tutorial on how to import open-source models into Ollama! 🤖 In this video, I'll demonstrate how to import any large language model from Huggingface and run it locally on your machine using Ollama, specifically focusing on GGUF files. As an example, I'll use the CapybaraHermes model from "The Bloke". The process is straightforward, and I'll guide you through each step in a clear and concise ma...
How to Build a Custom Local Version of LLAMA3: Simply Explained
3.3K views · 3 months ago
Customize Any Large Language Model with Ollama: A Simple Guide! In this video, I'll show you how to customize any large language model and run it locally on your machine using Ollama. As a demonstration, I'll use LLAMA3 to create a custom version that behaves like Yoda from the Star Wars movies. The process is straightforward, and I'll walk you through each step, making it easy for anyone to fo...
Say Goodbye to ChatGPT and Run Your Own AI Locally
712 views · 4 months ago
In this tutorial, I'll show you how to get started with Ollama WebUI, an open-source solution for running large language models locally. 🔗 Installation guide for Open WebUI: techxplainator.com/say-goodbye-to-chatgpt-and-run-your-own-ai-locally 🔗 Open WebUI Github: github.com/open-webui/open-webui?tab=readme-ov-file#open-webui-formerly-ollama-webui- 🔗 Docker Website: www.docker.com/products/dock...
How to Get Started with Ollama
509 views · 4 months ago
Discover how to operate your private large language model for free using Ollama. 🔗 Installation guide for Ollama: techxplainator.com/run-your-own-ai-locally-a-guide-to-using-ollama 🔗 www.ollama.com Udemy Courses: 🎓 Prompt Engineering: www.udemy.com/course/prompt-engineering-elevate-your-interactions-with-chatgpt/?referralCode=8789D25BBF5DD6243E34 🎓 Leonardo AI: www.udemy.com/course/your-ultimat...
Say Goodbye to Backgrounds: Leonardo AI Tutorial
506 views · 4 months ago
Transparent Magic: Generate Images with Leonardo AI Explore Leonardo's latest innovation enabling image generation with a transparent background, and gain insights into its applications and constraints. 💙 Loving Leonardo AI? Grab your FREE account today! 🎉 Thinking about a paid upgrade? Use my affiliate link (it won't cost you a dime extra, but it gives me a boost!) 👉🏻 app.leonardo.ai/?via=Tech...
Organize Your Personal Feed with Leonardo AI's Collections
321 views · 4 months ago
Get Organized with Leonardo AI Collections! Discover how to efficiently organize your generated images into folders with Leonardo's newest feature, "Collections." 💙 Loving Leonardo AI? Grab your FREE account today! 🎉 Thinking about a paid upgrade? Use my affiliate link (it won't cost you a dime extra, but it gives me a boost!) 👉🏻 app.leonardo.ai/?via=TechXplainator Udemy Courses: 🎓 Leonardo AI:...
The Power of Outpainting in Leonardo AI
736 views · 5 months ago
Outpainting 101 with Leonardo AI Extend and enhance your images with Leonardo AI in a few simple steps. Master out-painting effortlessly! 💙 Loving Leonardo AI? Grab your FREE account today! 🎉 Thinking about a paid upgrade? Use my affiliate link (it won't cost you a dime extra, but it gives me a boost!) 👉🏻 app.leonardo.ai/?via=TechXplainator Udemy Courses: 🎓 Leonardo AI: www.udemy.com/course/you...
Top 5 SDXL Models for AI Image Generation
1.5K views · 5 months ago
The Power of Inpainting in Leonardo AI
394 views · 5 months ago
The Best Ways for Running SDXL on Mac
3.2K views · 6 months ago
OpenAI’s Sora: A Breakthrough in AI Video Generation
186 views · 6 months ago
Leonardo AI's Universal Upscaler: How-To Guide
3.7K views · 6 months ago
Create an Awesome Valentine's Post for Free using Leonardo AI Realtime Canvas
268 views · 7 months ago
Fastest Way to Switch Windows on Mac!
401 views · 7 months ago
How to Level up your ChatGPT Prompt Game
229 views · 7 months ago
Instant Images: A Guide to Using Realtime Gen in Leonardo AI
483 views · 8 months ago
Stable Diffusion Crash Course for Beginners!
1.4K views · 8 months ago
Easy Image Animation with Leonardo Motion!
441 views · 8 months ago
Understanding Fine-Tuned Models in Leonardo AI
2.7K views · 8 months ago
The Magic of Leonardo AI's Realtime Canvas and Free AI Image Generation
1.3K views · 8 months ago
Elevate your Stable Diffusion Prompts - Part 2
860 views · 9 months ago
How to Optimize Your Prompts Using Leonardo AI's AI-Powered Features
1K views · 9 months ago

COMMENTS

  • @bnermine9780
    @bnermine9780 1 day ago

    Thank you for the great video! Could the model then be used inside local Python code? I am writing a classification script using an LLM, but running it on my CPU takes ages. Can I edit my local Python code so that the classification is done with the model running on Google Colab, but the results are stored locally? This would also help me apply the same model to different use cases. Thank you!!

    • @TechXplainator
      @TechXplainator 22 hours ago

      Thank you so much for your kind words! And yes, you can definitely do that. Here is how that could work: 1. Keep the Colab notebook running with Ollama and Ngrok set up as shown in the tutorial. 2. In your local Python script, use the 'requests' library to send classification requests to the Ollama model via the Ngrok URL. 3. Process the responses and store the results locally. I hope that helps. Happy coding ☺️
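The three steps in the reply above can be sketched in local Python. This is a minimal, hypothetical example (the Ngrok URL, model name, prompt wording, and output file are placeholders, not something shown in the tutorial): it posts a classification prompt to the remote Ollama instance's `/api/generate` endpoint and stores the answer locally.

```python
import json
import urllib.request

# Hypothetical placeholder -- paste the Ngrok URL from your running Colab notebook.
OLLAMA_URL = "https://example.ngrok-free.app"

def build_payload(model: str, prompt: str) -> dict:
    # Ollama's /api/generate expects the model name, the prompt,
    # and stream=False to get a single JSON response back.
    return {"model": model, "prompt": prompt, "stream": False}

def classify(text: str, labels: list, model: str = "llama3",
             base_url: str = OLLAMA_URL) -> str:
    """Send one classification request to the remote Ollama and return its answer."""
    prompt = (f"Classify the following text as one of {labels}. "
              f"Answer with the label only.\n\n{text}")
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"].strip()

# Example usage (requires the Colab notebook and Ngrok tunnel to be running):
# label = classify("The battery died after two days.", ["positive", "negative"])
# with open("results.jsonl", "a") as f:   # keep results on the local machine
#     f.write(json.dumps({"label": label}) + "\n")
```

Because the script only talks to the Ngrok URL, the same local code can be pointed at any model the Colab notebook has pulled.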

  • @MeetTheForeigner
    @MeetTheForeigner 2 days ago

    Homebrew not working on my M3 Max :(

    • @TechXplainator
      @TechXplainator 1 day ago

      I'm sorry to hear that. Have you tried visiting the Homebrew homepage for troubleshooting solutions? You can check it out here: brew.sh/

  • @MarkSmith-ho5ij
    @MarkSmith-ho5ij 9 days ago

    Coders using apple lol. Please use Linux and stop this...

  • @pubg-hf2np
    @pubg-hf2np 13 days ago

    Hello Sir, I used Lightning AI to run docker compose on a prebuilt image, with volumes included in docker-compose.yml. It worked very well, I mean the server on port 80. However, when the studio went to sleep and I restarted it again, all Docker containers and volumes were gone, and when I docker compose them, everything is built from scratch. So what is the solution for that? Actually there is also a studio template for the application, but unfortunately it goes the same way. Btw, it is the Ragflow GitHub project.

    • @TechXplainator
      @TechXplainator 12 days ago

      I haven't tried that, so I don't know the answer. But there is a documentation page that might answer your question: lightning.ai/docs/overview/studios/custom-docker-images Hope it helps :-)

  • @stableArtAI
    @stableArtAI 13 days ago

    Sorry to hear about the troubles you have had. We did the install of 1.7 based on your guide and since then have been using 1.10, going from 8 GB VRAM to 16 GB VRAM initially, and of course 64 GB system memory. The nice thing about SD is being able to run different installations. In addition, you can run 2 or more SD sessions at the same time. CPU vs GPU is easy to choose, and letting SD pick for best performance is easy to auto-configure in the newest version.

  • @evelynstrip
    @evelynstrip 14 days ago

    How can I uninstall all of these things? I just now heard about Forge.

    • @TechXplainator
      @TechXplainator 13 days ago

      To uninstall the downloaded GitHub repository, you'll need to follow these steps in reverse order of the installation process: 1. Begin by navigating to Finder and deleting the folder named "stable-diffusion-webui." This action should also remove the models stored in the "model" folder. 2. Next, uninstall all programs installed via Homebrew by executing the command: brew uninstall cmake protobuf rust python@3.10 git wget If you wish to uninstall Homebrew entirely, please refer to the guide provided here: docs.brew.sh/FAQ#how-do-i-uninstall-homebrew

    • @evelynstrip
      @evelynstrip 12 days ago

      @@TechXplainator Thank you <3 Do you know which platform is faster for Mac (M3 18-GPU), Forge or A1111? I've been working on Windows before with only 8 GPU and it was faster. I don't understand why.

    • @TechXplainator
      @TechXplainator 12 days ago

      I haven't tried Forge myself, but ComfyUI is faster than A1111 on all platforms. In general, though, Stable Diffusion models (regardless of the UI) are optimized for NVIDIA. So I guess they all will work better on Windows with an NVIDIA GPU.

  • @CJR248
    @CJR248 14 days ago

    Add the two lines below at the beginning of the code and you will noticeably increase performance when using a GPU, especially the T4 and A100; Ollama will already detect the GPUs during installation: !sudo apt install pciutils and !lspci

  • @richardurwin
    @richardurwin 14 days ago

    Awesome!

  • @fabriciocincunegui5332
    @fabriciocincunegui5332 15 days ago

    How do I export Ollama in my cmd? I'm on Windows 11.

    • @TechXplainator
      @TechXplainator 14 days ago

      I can't verify this on a Windows PC since I don't have one, but based on my research, here's how to export the `OLLAMA_HOST` variable on Windows 11 using Command Prompt: 1. Open Command Prompt as Administrator. 2. Run the command below, replacing `<paste_url_here>` with your Ngrok URL: setx OLLAMA_HOST "<paste_url_here>" 3. Close and reopen Command Prompt to apply the changes.

  • @fabriciocincunegui5332
    @fabriciocincunegui5332 15 days ago

    Thanks for your patience!

  • @0likewater
    @0likewater 20 days ago

    ollama run llama3.1 just installs on the local machine, not on Google Colab. I'm using Windows, and in PowerShell I put set OLLAMA_HOST= but they don't sync. If I go to the static URL, the console just logs 7.526µs | | GET "/favicon.ico", but nothing like what you had when the model started downloading to Colab.

    • @0likewater
      @0likewater 20 days ago

      Resolved. On Windows: setx OLLAMA_HOST "url" /M, then ollama run llama3.1

  • @aloksandalokz
    @aloksandalokz 23 days ago

    Is there a way to batch upscale images in Leonardo AI?

  • @cekuhnen
    @cekuhnen 28 days ago

    Thank you for the vid - I wish you had covered the other modes more to make this video more inclusive.

  • @kjmncspkn
    @kjmncspkn 29 days ago

    And you can do it on Windows.

  • @kjmncspkn
    @kjmncspkn 29 days ago

    If you don't have Homebrew, or don't know what it is, you have to download it. HUGE help.

  • @stableArtAI
    @stableArtAI 1 month ago

    As always, great stuff. Currently we have been too busy with SD and its latest updates in Automatic1111: ua-cam.com/users/postUgkxaH9_32dEgob4gf6QoVec325hGvnKQWE3.

  • @chillscripter
    @chillscripter 1 month ago

    I did the exact thing that you said in the video, but I got this error from Ollama: "the parameter is incorrect". How can I solve that?

    • @TechXplainator
      @TechXplainator 1 month ago

      Hey there! To help me figure out what's going wrong, could you please tell me: 1. Are you using a fixed Ngrok link or letting Colab create a new one each time? 2. Did you open the link from the notebook in a browser? Does it say "Ollama is running"? 3. Have you correctly linked your local Ollama to Colab-Ollama by setting the OLLAMA_HOST environment variable to your Ngrok URL? (You can usually do this in your terminal with a command like export OLLAMA_HOST=<your_ngrok_url>) 4. When you run a model locally (like typing ollama run llama3.1), does the model download to your computer or to Colab? Can you see the download happening in your Colab notebook?
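Point 2 of the checklist above (does the Ngrok link say "Ollama is running"?) can also be checked from a script instead of a browser. A minimal sketch, assuming a placeholder Ngrok URL: fetch the tunnel's root URL and look for Ollama's banner text in the reply.

```python
import urllib.request

# Hypothetical placeholder -- paste the tunnel URL printed by your Colab notebook.
NGROK_URL = "https://example.ngrok-free.app"

def is_ollama_banner(body: str) -> bool:
    """True when the response text contains Ollama's root-endpoint banner."""
    return "Ollama is running" in body

# Example usage (requires the tunnel to be up):
# with urllib.request.urlopen(NGROK_URL, timeout=30) as resp:
#     print(is_ollama_banner(resp.read().decode()))  # True if the server is reachable
```

If the check fails, the tunnel is the problem; if it passes but models still download locally, the OLLAMA_HOST variable (point 3) is the likely culprit.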

  • @SophieMarnez
    @SophieMarnez 1 month ago

    Just perfect: no fluff, no rush, to the point... WOW! You're the perfect tutor for these sometimes overwhelming features. Thanks a lot!

    • @TechXplainator
      @TechXplainator 1 month ago

      Thanks so much! I'm really glad you enjoyed it ☺️

  • @victorpinasarnault9135
    @victorpinasarnault9135 1 month ago

    Can this be installed on Linux too?

    • @TechXplainator
      @TechXplainator 1 month ago

      I'm not a Linux user myself, so I have not tried it, but Ollama offers a Linux version: ollama.com/download/linux. And open WebUI uses Docker (OS independent) and mentions Linux, so I assume it should be possible to run it on Linux: github.com/open-webui/open-webui?tab=readme-ov-file#quick-start-with-docker-. Hope this helps ☺️

  • @stableArtAI
    @stableArtAI 1 month ago

    The advantage of local is the ability to run thousands of generators. And now the next-gen Automatic1111 has come of age in just a few months since we started with 1.7. The Dragon Princess: ua-cam.com/video/2Onz6zhOwAs/v-deo.html

    • @TechXplainator
      @TechXplainator 1 month ago

      True. The reason I like Leonardo is that I don't have a high-powered local machine. I could run SD on Google Colab, but that's also expensive. Leonardo is a comfortable solution that is relatively cheap. But I agree with you. If I had more GPU, I would definitely switch back to Stable Diffusion.

  • @stableArtAI
    @stableArtAI 1 month ago

    The good and bad of trying a new checkpoint. This surfaced some features that have been plaguing us with failures when hires.fix was enabled, in addition to some giving a black image at the end of generation. Needless to say, we got to the point where SD would not start up without errors. So we decided to go from 1.9.x to 1.10, and oh what a set of great improvements. It self-installed many requirements and was fast. It went to an older version of Python. But for now we have a very clean and stable 1.10, and so far the other issues seem to have been resolved... for now. Added a video that highlights just some image tests, and wow.

  • @rajarshisen5905
    @rajarshisen5905 1 month ago

    Please help. I can run Ollama in Colab, but when running it from Docker as Open WebUI, I get the following error while trying to chat with llama3 in the web browser: Ollama: 404, message='Not Found', url=URL('<static_url>/api/chat')

    • @TechXplainator
      @TechXplainator 1 month ago

      Does Ollama work from the terminal? I mean, when running export OLLAMA_HOST=<YOUR_URL_GENERATED_IN_COLAB> and ollama run llama3, do you get to interact with llama3 in your terminal? And do you see any action in your Colab (you should be seeing the notebook downloading a model and responding to chat)

    • @merocky5
      @merocky5 1 month ago

      Yes, I do. Ollama is executing on Colab when I call it from my local computer's terminal. Only while using Open WebUI, following the last command of your Python notebook, do I get the error described above. The front-end web app starts, but while trying to chat with the Ollama installed in Colab, I get the error mentioned in the above message. I did some internet searching, and it appears that the "api" path may or may not be included in the latest version of Ollama? Please help me resolve this. Thanks a lot.

    • @TechXplainator
      @TechXplainator 1 month ago

      To make sure we're on the same page, I just want to summarize your setup: 1. You're using a static Ngrok URL. 2. You've successfully connected your local Ollama instance with the one hosted on Colab by running an export command. 3. You've installed OpenWebUI using Docker and replaced the example Ngrok URL with your own static Ngrok URL, as indicated by this command: `docker run -d -p 4000:8080 -e OLLAMA_BASE_URL=example.com -v open-webui:/app/backend/data --name test --restart always ghcr.io/open-webui/open-webui:main` 4. The Docker container was created, but trying to access the Ollama WebUI at `localhost:4000/` results in an error. Please confirm that this summary is accurate so I can help you troubleshoot the issue ☺️

    • @rajarshisen5905
      @rajarshisen5905 1 month ago

      @@TechXplainator Yes, the summary is spot on. I have followed all of the above bullet points and got error on last bullet point while trying to post a chat to Ollama using web-UI.

    • @TechXplainator
      @TechXplainator 1 month ago

      I was not able to replicate the error, but based on my research, here are a few things you could try: 1. Verify OpenWebUI settings: Access the OpenWebUI settings page (click on your avatar on the bottom left) and verify that the Ollama Server URL is correctly set to your Ngrok URL: Go to “connections”. Under “Ollama Base URL” you should see your static Ngrok URL 2. Network Configuration Ensure that the Docker container can communicate with the Ollama server. Use the --network=host flag to allow the Docker container to use the host network: docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=<your_ngrok_url> --name open-webui --restart always ghcr.io/open-webui/open-webui:main I hope this helps. If not, please check out the troubleshooting page from Open WebUI: docs.openwebui.com/troubleshooting/

  • @abdulrahmanelawady4501
    @abdulrahmanelawady4501 1 month ago

    Maybe one day the Linux community will wake up and start using GUIs instead of CLIs. For some reason there is no package manager that actually has an effective GUI. It's always missing something or other, and you have to go back to the CLI 😅

  • @adammustapha3142
    @adammustapha3142 1 month ago

    thanks

  • @NoHack_Know_How
    @NoHack_Know_How 1 month ago

    How do you set it up for a local LAN connection ONLY? Can you explain please?

    • @TechXplainator
      @TechXplainator 1 month ago

      Of course. I made another video about setting up OpenWebUI locally: ua-cam.com/video/0DFJc-oIRQ8/v-deo.html. Hope this helps ☺️

  • @abdulrahmanelawady4501
    @abdulrahmanelawady4501 1 month ago

    You've done us a great favor putting everything together step-by-step. It's often very confusing to get things working.

    • @TechXplainator
      @TechXplainator 1 month ago

      Thank you sooo much! I'm really glad you enjoyed my video ☺️

  • @devinerapier3776
    @devinerapier3776 1 month ago

    So how powerful does my computer have to be?

    • @TechXplainator
      @TechXplainator 1 month ago

      I would recommend at least 16 GB RAM. However, in this demo, I'm running this on a MacBook Air M2 with 8 GB RAM, and if I don't use any other processes, it runs - although very slowly. So 16 GB RAM or higher should run more smoothly.

  • @Salionca
    @Salionca 1 month ago

    Great!

  • @user-ns7tf8zn8t
    @user-ns7tf8zn8t 1 month ago

    When I write the export OLLAMA_HOST, it says "export : The term 'export' is not recognized as the name of a cmdlet". Is it because I am using Docker?

    • @TechXplainator
      @TechXplainator 1 month ago

      The error message you're encountering is not related to Docker, but rather to the command shell you're using. The "export" command is specific to Unix-like systems (such as Linux and macOS) and is not recognized in Windows PowerShell or Command Prompt. To set an environment variable in Windows, you should use the "set" command instead of "export". Here's how you can set the OLLAMA_HOST variable in PowerShell: $env:OLLAMA_HOST = "your_value_here" Or in Command Prompt: set OLLAMA_HOST=your_value_here Hope this helps ☺️

  • @andreabaffascirocco2934
    @andreabaffascirocco2934 1 month ago

    I have tried, but it seems that the command ollama run llama3.1 downloads the model on my laptop instead of Colab.

    • @TechXplainator
      @TechXplainator 1 month ago

      Try running the command export OLLAMA_HOST=<YOUR NGROK URL> (check that the URL says "Ollama is running" first). Then, in the same terminal window, run "ollama run llama3" again. Hope this helps ☺️

    • @andreabaffascirocco2934
      @andreabaffascirocco2934 1 month ago

      @@TechXplainator Thanks. I'll try.

    • @andreabaffascirocco2934
      @andreabaffascirocco2934 1 month ago

      @@TechXplainator Now it all works fine. The problem was that I had installed Ollama on my Ubuntu using snap. With that installation, the PC tried to download Llama 3.1 on the PC and not on Colab.

    • @TechXplainator
      @TechXplainator 1 month ago

      I'm glad it works now ☺️

  • @stableArtAI
    @stableArtAI 1 month ago

    Unfortunately, what we started down the update path for did not seem to correct the issue: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown. Have to shut down a 2nd time before getting a valid prompt.

    • @TechXplainator
      @TechXplainator 1 month ago

      Too bad it didn't work. If you happen to find a fix, it would be great if you could share it here.

    • @stableArtAI
      @stableArtAI 1 month ago

      @@TechXplainator So we don't really need to shut down twice, as it is just a warning and basically skews the terminal prompt a bit. Still can't find any information about what would cause this all of a sudden. But so far things are pretty stable, and performance, I think, has jumped a little too. No current plans to move to 3.12 due to the deprecation of some commands and lack of backwards compatibility. They seem to still support 3.1x versions, so good for now. Would like to learn more about SD 2.x and SD3, but it seems a bit confusing what they actually do, and since we cannot get ComfyUI to work, we have not had much success getting those to run. However, like some old great apps, there is still such a big community creating ckpts/LoRAs/etc. for SD 1.5. Although after working with SD for 6 months now, I think we graduated from neophyte. We have found that in some cases a LoRA is not needed if the right prompts are used, and also when using ControlNet.

  • @stableArtAI
    @stableArtAI 1 month ago

    Tended to get some random errors when running some generations lately, so decided to make a bold move. Can't find much on new versions with SD running stable. But currently running: version: v1.9.4 • python: 3.11.9 • torch: 2.2.2

  • @jameschan6277
    @jameschan6277 1 month ago

    Please help: if I use a Windows PC desktop, how can I open terminals like on a Mac?

    • @TechXplainator
      @TechXplainator 1 month ago

      To open terminals on a Windows PC desktop similar to how you would on a Mac, you can use the following methods: Option 1: PowerShell: 1. Press `Windows+X` and select "Windows PowerShell" or "Windows PowerShell (Admin)" from the menu. 2. Alternatively, press `Windows+R`, type `powershell`, and press Enter to open a PowerShell window. Option 2: Command Prompt: 1. Press `Windows+R`, type `cmd`, and press Enter to open a Command Prompt window. 2. You can also search for "Command Prompt" in the Start menu, right-click the result, and select "Run as Administrator" if you need elevated privileges. Hope this helps ☺️

  • @startuplabs9539
    @startuplabs9539 1 month ago

    Getting this error...

    [2024-07-14 22:05:28][DEBUG]: == Working Agent: Evil Genius Mastermind
    ---------------------------------------------------------------------------
    OllamaEndpointNotFoundError               Traceback (most recent call last)
    <ipython-input-8-e77becf01c71> in <cell line: 2>()
          1 # Run the crew
    ----> 2 result = my_crew.kickoff()
          3 print(result)

    23 frames
    /usr/local/lib/python3.10/dist-packages/langchain_community/llms/ollama.py in _create_stream(self, api_url, payload, stop, **kwargs)
        243         if response.status_code != 200:
        244             if response.status_code == 404:
    --> 245                 raise OllamaEndpointNotFoundError(
        246                     "Ollama call failed with status code 404. "
        247                     "Maybe your model is not found "

    OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull llama3`.

    • @TechXplainator
      @TechXplainator 1 month ago

      It looks like it's not finding the llama3 model. Please make sure you download it via terminal (the command is "ollama pull llama3") on your local machine. Also, please test the Ngrok URL you are using in a browser. It should say "Ollama is running".

    • @startuplabs9539
      @startuplabs9539 1 month ago

      Thank you. I got it to work; I needed to run the other notebook along with this one. Great help and videos!

  • @Salionca
    @Salionca 1 month ago

    Great! Everything free! Thanks for the video.

  • @HunterJuniorX
    @HunterJuniorX 1 month ago

    Is there a way to use models from Hugging Face?

    • @TechXplainator
      @TechXplainator 1 month ago

      Yes there is - if they are available as quantized models (GGUF files). I made a video on how you can import GGUF files from huggingface and use them in Ollama - feel free to check it out: ua-cam.com/video/vs1u9z2U4ZA/v-deo.html

  • @Salionca
    @Salionca 2 months ago

    The video is great, but I'm not going to spend money on that. I prefer to wait and buy a new laptop.

    • @TechXplainator
      @TechXplainator 2 months ago

      Thanks! And that's completely understandable :-)

  • @Salionca
    @Salionca 2 months ago

    Jupyter Notebook links in the video description don't work.

    • @TechXplainator
      @TechXplainator 2 months ago

      Oooh you're right! I messed up some of my links there. Thank you so much for pointing that out! The links are fixed now 😊

  • @AndrewDavidBaron
    @AndrewDavidBaron 2 months ago

    I spent entire day trying this. Kept getting error messages. Tried to uninstall and reinstall every aspect of this tutorial. I even uninstalled and reinstalled Terminal itself. Nothing worked. I gave up. Literally wasted an entire day.

  • @Kartuun89
    @Kartuun89 2 months ago

    This is great. I am not seeing some of the options you have, though; for instance, the arrow to put the code over (the Insert Code button) into VS Code is missing.

    • @TechXplainator
      @TechXplainator 2 months ago

      Thank you :-) I'm glad you found the video helpful. I'm not sure what the issue could be, as I made sure to show all the settings in my video and didn't do anything beyond what was demonstrated. You might want to check out the troubleshooting page for CodeGPT for more help: docs.codegpt.co/docs/tutorial-basics/troubleshooting Hope this helps!

  • @Salionca
    @Salionca 2 months ago

    Good video. Thanks.

  • @mr.secretd1682
    @mr.secretd1682 2 months ago

    How could I remove this program from my Mac? It takes up too much space.

    • @TechXplainator
      @TechXplainator 2 місяці тому

      To uninstall the downloaded GitHub repository, follow these steps in reverse order of the installation process: 1. In Finder, delete the folder named "stable-diffusion-webui". This also removes the models stored in its "models" folder. 2. Uninstall the programs installed via Homebrew by running: brew uninstall cmake protobuf rust python@3.10 git wget. If you wish to uninstall Homebrew entirely, please refer to the guide provided here: docs.brew.sh/FAQ#how-do-i-uninstall-homebrew
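The uninstall steps in the reply above can be sketched as a small shell script. This is a minimal sketch assuming the repo was cloned into your home directory; the `SD_DIR` path, the `DRY_RUN` flag, and the `remove` helper are illustrative additions so you can preview what would be deleted before running it for real.

```shell
# Sketch of the uninstall steps, assuming the repo lives in $HOME.
# DRY_RUN=1 only prints the commands; set it to 0 to actually delete.
SD_DIR="$HOME/stable-diffusion-webui"
DRY_RUN=1

remove() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Delete the webui folder (this also removes the models inside it).
remove rm -rf "$SD_DIR"
# 2. Uninstall the Homebrew packages installed for the tutorial.
remove brew uninstall cmake protobuf rust python@3.10 git wget
```

With `DRY_RUN=1` the script only echoes each command, which is a cheap way to double-check the paths before anything is actually removed.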

  • @WillowsAIAcademy
    @WillowsAIAcademy 2 місяці тому

    Thank you, it takes a little getting used to but I think I prefer the new look.

  • @virasatsingh3014
    @virasatsingh3014 2 місяці тому

    Great video! Any clue why I might be getting the following error? 'NoneType' object has no attribute 'lowvram'

    • @TechXplainator
      @TechXplainator 2 місяці тому

      Thanks, I appreciate it! Regarding the error: I found a bug report on the AUTOMATIC1111 GitHub page from someone who had the same issue. There were 2 suggestions, but I don't know if they worked: 1. Update your Mac (github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15637#issuecomment-2094152779). 2. Update command line arguments (github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15637#issuecomment-2095943078) - also see this thread for more details: github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings. I hope this helps ☺️

  • @stableArtAI
    @stableArtAI 2 місяці тому

    Just started in the past couple of weeks to try ChatGPT and find out what all the fuss is about (check out our videos that used it). But we might also look into running a local version like the app you covered. Just have to find a little free time for that project. LOL

    • @TechXplainator
      @TechXplainator 2 місяці тому

      Thanks for watching and sharing your experience! Running an LLM locally can be a great project. Feel free to reach out if you have any questions. 😊

  • @stableArtAI
    @stableArtAI 2 місяці тому

    After a few months and many tweaks, running locally is better than online services like Leonardo. Some things might be a little, say, mechanical, but it's still very cool. Your guides have been so much help, thanks again. The web UI does have a few limits with respect to styles, but the style editor is a big plus for organizing them, so naming them carefully is important too. We don't really know the effects of running the latest torchvision with torch 2.1.2, since it lists 0.16.2 as the matching version, yet we are running 0.17. We also haven't installed a working version of torch 2.2.x yet, nor updated the Python version (we may have mentioned that the newer version isn't officially supported by AUTOMATIC1111 and SD 1.5 for LoRAs and checkpoints; that's for a later time, as we're still processing so many images to share). Our current setup, which seems pretty stable for us: version: v1.9.4  •  python: 3.10.14  •  torch: 2.1.2  •  xformers: N/A  •  gradio: 3.41.2

    • @TechXplainator
      @TechXplainator 2 місяці тому

      Thanks for the feedback! I'm glad my guides have been helpful. Running local has its advantages, especially with style organization. Your current setup looks solid! Keep up the great work and feel free to share more updates. Meanwhile, I've been focusing on local LLMs (as you may have noticed from my recent videos ☺️), but I want to dive back into SD, especially img2video, soon.

    • @stableArtAI
      @stableArtAI 2 місяці тому

      @@TechXplainator Image-to-video still seems kind of weak (at least the default extensions available, unless you know of a better one).

    • @stableArtAI
      @stableArtAI 2 місяці тому

      @@TechXplainator Solid, yes; we've seen a big difference from the original version after tweaking all the errors from Python and torch. Even dreamsamplerXL works better. Still confused about SD 2 and 3, though, which seem to be integrated into the updates, although ckpts and LoRAs would need to be compatible.

  • @nikolaysparkov
    @nikolaysparkov 2 місяці тому

    I got an error - cd: string not in pwd: run

    • @TechXplainator
      @TechXplainator 2 місяці тому

      At which step are you getting this error? cd stands for "change directory", and this error often occurs when there's a typo in the directory you're trying to open, or when an unquoted space splits the path into two arguments. Please check out this page for a good description: www.howtouselinux.com/post/fix-cd-string-not-in-pwd. I hope this helps!
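For context on the reply above: in zsh, `cd` given two arguments performs a string substitution on the current path, and "string not in pwd" means the first argument wasn't found in it. The most common trigger is a directory name with a space that wasn't quoted. A hypothetical sketch (the `/tmp/demo dir` path is made up for illustration):

```shell
# An unquoted path with a space splits into two arguments, so cd fails
# (zsh reports "string not in pwd"; bash reports "too many arguments").
mkdir -p "/tmp/demo dir/run"
cd /tmp/demo dir/run 2>/dev/null || echo "cd failed on the unquoted path"
# Quoting the path fixes it:
cd "/tmp/demo dir/run" && pwd
```

Tab completion quotes or escapes such paths automatically, which is an easy way to avoid this class of error.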

  • @akierskan7787
    @akierskan7787 3 місяці тому

    Hi! Could you please do a tutorial on this Stable Diffusion for image 2 video or text 2 video?

    • @TechXplainator
      @TechXplainator 3 місяці тому

      Sure, I’d be happy to! Stay tuned for a tutorial on Stable Diffusion for image-to-video and text-to-video. Thanks for the suggestion!

    • @akierskan7787
      @akierskan7787 3 місяці тому

      @@TechXplainator thank you so much! ❤️❤️

    • @akierskan7787
      @akierskan7787 3 місяці тому

      @@TechXplainator Hi, sorry, it's me again. :( I tried generating images but this issue seems to come up: AttributeError: 'NoneType' object has no attribute 'lowvram'? Would you have any advice to fix this? :( I really wanna start generating images. :(

    • @TechXplainator
      @TechXplainator 2 місяці тому

      I found a bug report on the AUTOMATIC1111 GitHub page from someone who had the same issue. There were 2 suggestions, but I don't know if they worked: 1. Update your Mac (github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15637#issuecomment-2094152779). 2. Update command line arguments (github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15637#issuecomment-2095943078) - also see this thread for more details: github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings. I hope this helps ☺️

    • @akierskan7787
      @akierskan7787 2 місяці тому

      @@TechXplainator oh my gosh! You’re amazing! Thank you so much! 😭😭😭 Appreciate you being at the forefront of Stable Diffusion for Macs!

  • @CaptainKokomoGaming
    @CaptainKokomoGaming 3 місяці тому

    Still won't use Leonardo. Money hungry: image generation costs keep going up, and every button you click costs more in-app currency. You don't realise how quickly you burn through it. Alchemy is just their way of making you use your currency quicker. Informative video, nothing against you or your content, but **** Leonardo. They won't get a cent out of me.

    • @TechXplainator
      @TechXplainator 2 місяці тому

      Thanks for sharing your thoughts! I understand your concerns about Leonardo. It's always good to explore different options and find what works best for you. Appreciate the feedback on the video and glad you found it informative!

  • @Sean43322
    @Sean43322 3 місяці тому

    It was very good, thank you. Please produce a video about the difference between the models and styles, with examples. You explain very well. Thanks!

    • @TechXplainator
      @TechXplainator 3 місяці тому

      Thank you, that's very kind! I already have a similar video, though it's a bit older. Perhaps it will still be helpful (ua-cam.com/video/PHRUAlypaLE/v-deo.html). I'll definitely consider updating it.