- 32
- 177 240
Miro Leon
Germany
Приєднався 23 гру 2016
New COMFYUI FRONT-END: Guide and Walk Through [ComfyUI | Stability Matrix]
Get to know the new ComfyUI frontend! We just got a major update (v0.3.0) with a new user interface and exciting new features! Watch this tutorial to make the most out of the new ComfyUI!
Watch all of my other ComfyUI tutorials - ua-cam.com/play/PLib6g3rkWM3k1lAEmRRjXkut6z6eh-0p-.html
🌟 A Special Celebration of Our Discord Community 🌟
Join the conversation and become a part of our growing artistic collective on Discord. Your insights make our community richer!
discord.gg/BPQRBVfkG7
🔗 Essential Resources
Stability Matrix - lykos.ai/downloads
👨🎨 Meet Your Host:
I teach AI in art and architecture at RWTH Aachen University. In my journey with the Heibara Ai collective, we’ve found that image-to-image generation in Stable Diffusion offers remarkable outcomes, even with unrelated input images. Today, I’ll show you how to set up ComfyUI, enhance your AI prompts, and dive into creative image-to-image generation.
✨ Follow my Work
Find my other artwork on Instagram - miroxleon
Follow me on Twitter - miroxleon
For all other links and contact, visit my website - miroleon.de/links
Get my wallpaper via miroleon.gumroad.com/l/wallpaper_inflate_land
🔍 Chapters
0:00 Welcome
0:20 Installing Stability Matrix
3:23 Generating Images / Queue Workflow
3:46 Docking Queue to Menu
4:15 Sidebar
4:21 Queue Menu
6:35 Node Library Menu
7:57 Model Library Menu
8:57 Workflow Menu
10:54 Finding Save Location of Workflows
11:37 Top Menu
11:57 Light and Dark Mode
12:05 ComfyUI Settings
12:18 Snap to Grid
12:28 Link Midpoint Marker (Arrow)
12:52 Link Type (Spline vs Straight)
13:05 Zoom Speed
13:19 Menu Position (Top vs Bottom)
13:41 Queue Button and Batch Count Limit (100)
14:37 Reroute Beta
14:51 Sidebar (Position Left vs Right and Size)
15:09 Workflow Position (Topbar/Tabs)
15:36 Show Missing Models
16:00 Keybindings/Shortcuts
16:40 Rerouting System in Practice
17:59 Multi Select Nodes (Ctrl)
18:08 Groups and Nested Groups (Ctrl+G)
18:53 Open Workflow Shortcut (Ctrl+O)
19:05 Export Workflow Shortcut (Ctrl+S)
19:09 Queue Workflow Shortcut (Ctrl+Enter)
19:17 Terminate Workflow Shortcut (Ctrl+Alt+Enter)
19:31 Zoom Shortcut (Alt+Plus/Minus)
19:36 Focus/Zoom to Workflow Shortcut (.)
20:06 Drag vs Select (Space)
20:26 Queue Workflow in Front (Ctrl+Shift+Enter)
20:47 Bypass Nodes (Ctrl+B)
21:05 Pin Nodes (P)
21:39 Wrap Up
Watch all of my other ComfyUI tutorials - ua-cam.com/play/PLib6g3rkWM3k1lAEmRRjXkut6z6eh-0p-.html
🌟 A Special Celebration of Our Discord Community 🌟
Join the conversation and become a part of our growing artistic collective on Discord. Your insights make our community richer!
discord.gg/BPQRBVfkG7
🔗 Essential Resources
Stability Matrix - lykos.ai/downloads
👨🎨 Meet Your Host:
I teach AI in art and architecture at RWTH Aachen University. In my journey with the Heibara Ai collective, we’ve found that image-to-image generation in Stable Diffusion offers remarkable outcomes, even with unrelated input images. Today, I’ll show you how to set up ComfyUI, enhance your AI prompts, and dive into creative image-to-image generation.
✨ Follow my Work
Find my other artwork on Instagram - miroxleon
Follow me on Twitter - miroxleon
For all other links and contact, visit my website - miroleon.de/links
Get my wallpaper via miroleon.gumroad.com/l/wallpaper_inflate_land
🔍 Chapters
0:00 Welcome
0:20 Installing Stability Matrix
3:23 Generating Images / Queue Workflow
3:46 Docking Queue to Menu
4:15 Sidebar
4:21 Queue Menu
6:35 Node Library Menu
7:57 Model Library Menu
8:57 Workflow Menu
10:54 Finding Save Location of Workflows
11:37 Top Menu
11:57 Light and Dark Mode
12:05 ComfyUI Settings
12:18 Snap to Grid
12:28 Link Midpoint Marker (Arrow)
12:52 Link Type (Spline vs Straight)
13:05 Zoom Speed
13:19 Menu Position (Top vs Bottom)
13:41 Queue Button and Batch Count Limit (100)
14:37 Reroute Beta
14:51 Sidebar (Position Left vs Right and Size)
15:09 Workflow Position (Topbar/Tabs)
15:36 Show Missing Models
16:00 Keybindings/Shortcuts
16:40 Rerouting System in Practice
17:59 Multi Select Nodes (Ctrl)
18:08 Groups and Nested Groups (Ctrl+G)
18:53 Open Workflow Shortcut (Ctrl+O)
19:05 Export Workflow Shortcut (Ctrl+S)
19:09 Queue Workflow Shortcut (Ctrl+Enter)
19:17 Terminate Workflow Shortcut (Ctrl+Alt+Enter)
19:31 Zoom Shortcut (Alt+Plus/Minus)
19:36 Focus/Zoom to Workflow Shortcut (.)
20:06 Drag vs Select (Space)
20:26 Queue Workflow in Front (Ctrl+Shift+Enter)
20:47 Bypass Nodes (Ctrl+B)
21:05 Pin Nodes (P)
21:39 Wrap Up
Переглядів: 3 491
Відео
Create Fractal GLASS DISTORTION in THREE.JS! [FREE CODE | CODEPEN]
Переглядів 1,9 тис.3 місяці тому
🔗 JOIN OUR DISCORD! discord.gg/9JtYAqdWzq 🎨 In this tutorial, I show you how to create a fractal glass distortion effect - or any other displacement effect - in Three.JS! Make sure to download my Displacement Texture Pack Freebie to follow along and please consider support my work by getting the full Displacement Texture Pack! 💾 Download the Displacement Texture Freebie via miroleon.gumroad.com...
Create a STREAM COUNTDOWN with THREE.JS! [FREE CODE | CODEPEN]
Переглядів 3 тис.7 місяців тому
JOIN OUR DISCORD! discord.gg/9JtYAqdWzq In this tutorial, we create a 3D stream countdown with Three.js! I learned a lot by going through the code and breaking it down, so I hope you find some value in it as well! Get my Gradient HDR Pack to support me! miroleon.gumroad.com/l/gradient_hdr_pack Previous Three.js Tutorial Gradient HDRs ua-cam.com/video/Muq-VpaPzoE/v-deo.html Previous Three.js Tut...
FAST, EASY, and FREE Generative AI Tool [Stable Diffusion | Fooocus]
Переглядів 5 тис.Рік тому
JOIN OUR DISCORD! discord.gg/9JtYAqdWzq In this tutorial, I give you an introduction to Stable Diffusion using the WebUI Fooocus - perhaps the fastest, easiest and free way to get started with generative AI to date. We will create a space helmet made from Chinese porcelain as an example. If you have any questions or want to see more tutorials for generative AI, let me know in the comments! Supp...
Create this 3D parallax-style SCROLL ANIMATION with THREE.JS! [FREE CODE | CODEPEN]
Переглядів 35 тис.Рік тому
Create this 3D parallax-style SCROLL ANIMATION with THREE.JS! [FREE CODE | CODEPEN]
Create this 3D PARALLAX-STYLE Landing Page with THREE.JS! [FREE CODE | CODEPEN]
Переглядів 47 тис.Рік тому
Create this 3D PARALLAX-STYLE Landing Page with THREE.JS! [FREE CODE | CODEPEN]
DREAMY RENDERINGS with Redshift in Cinema 4D [with FREE Project Files]
Переглядів 1,6 тис.Рік тому
DREAMY RENDERINGS with Redshift in Cinema 4D [with FREE Project Files]
Level Up Your Renderings with GRADIENT HDRs! [Three.js, Blender, and Co.] [CODEPEN | FREE CODE]
Переглядів 4,9 тис.Рік тому
Level Up Your Renderings with GRADIENT HDRs! [Three.js, Blender, and Co.] [CODEPEN | FREE CODE]
Improve Your THREE.JS MATERIALS and RENDERINGS With This Trick! [Three.js] [CODEPEN | FREE CODE]
Переглядів 3,6 тис.Рік тому
Improve Your THREE.JS MATERIALS and RENDERINGS With This Trick! [Three.js] [CODEPEN | FREE CODE]
Create this 3D WEB RENDERING with FREE TOOLS [Three.js & Mixamo] [CODEPEN | FREE CODE]
Переглядів 6 тис.Рік тому
Create this 3D WEB RENDERING with FREE TOOLS [Three.js & Mixamo] [CODEPEN | FREE CODE]
Miro Leon - Acratic [Official Video]
Переглядів 3,7 тис.5 років тому
Miro Leon - Acratic [Official Video]
Nested group are a cool addition but I didn't found out how to bypass them from the side panel. If anyone knows, would you please be kind enough to spread the knowledge ?
Thank you for your comment! I just tested it, and the group bypassing should also work with nested groups. You should be able to right-click inside of the group or right-click the group header and select "Bypass Group Nodes". This should respect the nested group logic, i.e. it should bypass all nodes inside of the group itself and any other group inside of the group but not of any higher-level group. Alternatively, you can also keep the "CTRL" key pressed, which will allow you to multi-select nodes by dragging the mouse over any area. Then, you should be able to bypass all the selected nodes with "CTRL+B". Note that, unfortunately, highlighting a group and then using the shortcut "CTRL+B" does not work (yet). You will have to right-click the group and select "Bypass Group Nodes". To un-bypass/reactivate the nodes in a group, you have to right-click the group again and select "Set Group Nodes to Always". The wording is a bit confusing, as it's not called "un-bypass" or anything like that, but the "Set Group Nodes to Always" will activate all the nodes again. I hope this helped. If you have any follow-up questions, feel free to ask here or join our Discord (link below the video)!
@@miroxleon Thanks but I already new the "bypass group nodes" feature and the "set group nodes to always" as well, which is exactly why I did precise "from the side pannel" in my previous message. Don't take this the wrong way : I'm glad you did your best to answer me, and I thank you for that, but I'm still looking for a solution to make it work from the side pannel. My everyday workflow is pretty big and has around 20 groups, so zooming everytime is a pain in the *ss when I can deactivate all my other groups from the little eyes on the side pannel. Releasing new features is always a good thing. Releasing new features that are complete and well integrated is better. As a reminder, the side pannel is natively integrated in Comfy's new UI so it's up to their team to fix it. It reminds me of those games devs that showcase their game as the brand new thing and then you have to wait and apply dozen of patches to make it work correctly. That is the world we are living in : advertising and attracting people with new features in an unfinished state is more important than developping them correctly and completely.
@@lockos I'm sorry for the confusion! I must've skipped that part, as I never considered bypassing nodes or node groups from the side panel. I tried to recreate it and can see that once a group is nested, the bypassing option disappears from within the "Nodes Map". This is really strange, and I assume that the dev team is aware of it. Perhaps it's on their to-do list for fixes in soon-to-come updates. I hope this will be resolved for your particular use case so that you can manage your workflows more efficiently soon!
@@miroxleon Thanks, you're a nice person. I guess I'll just have to wait on that one.
Yes please explain ipadapter and control net in confyui
Thank you for your comment! Yes, IPAdapter is on my list! I hope I find the time to make a stream about it soon! Stay tuned!
Great video. A couple of questions: 1) How do you import a previous install of Comfy into this. I have a lot of stuff already installed in Comfy and I would hate to lose anything in a migration. 2) is this anything like the new ComyUI app? I haven't had a chance to try it out yet.
Thank you for your comment! I assume you're referring to switching from a standalone ComfyUI installation to Stability Matrix? First and foremost, you don't have to switch to Stability Matrix. I find it more convenient in some aspects, but if you're comfortable with the standalone ComfyUI installation, there is no significant reason to switch over. For migrating stuff, I'm not fully sure how well that works, as I don't have any practical experience with this. Once you install ComfyUI via Stability Matrix, you will find all the regular ComfyUI directories and files in "\Data\Packages\ComfyUI\". You should be able to just copy and paste your custom nodes and outputs there, but since it's its own virtual environment, I believe all the packages will reinstall themselves. If you're just talking about updating your standalone ComfyUI installation to the latest update to get the new frontend, that should not cause any issues (besides that custom nodes may be outdated, etc., just the usual ComfyUI update hassle). In terms of the new standalone ComfyUI app, yes, I believe this will bring ComfyUI closer to Stability Matrix. I received access to the closed beta the other day but haven't gotten to test it yet. I will definitely make another video about that once I have a closer look at it. There, I would also discuss how the standalone ComfyUI app compares to Stability Matrix. I hope this helped a bit. If you have any follow-up questions, just let me know or feel free to join our Discord server (link in the video description)!
@@miroxleon It did, thanks for taking the time to respond. I was talking about moving from my current ComyUI portable install and transferring the required stuff into Stability Matrix. The companies always seem to forget tht which I think is crazy because a good percentage of people who would consider using new flavours of ComfyUI, will probably alrady have the orginal installed. I have been invited to the closed beta too. Just trying to find the time to try and see what the advantages are. Have a great day and thanks again. Really useful video.
literally just came out:)
Yeah, at least as a regular release and not as a preview! I hope the video helped! Talking about news, I just got beta access to the ComfyUI standalone app! So, stay tuned for a video on that as well!
Strapped in and absorbed the entire video, I now feel as though I've gained superpowers!
Thank you so much for your comment and for giving the stream a chance! I’m happy to hear that you learned something from it! Much more to come, stay tuned!
Like the style friend, subbed over here!! We'll be your students!
Thank you very much! I appreciate it a lot!
@@miroxleon You got it brother!
Comfyui needs more and better workflow presets in app.
Thank you for your comment! Yeah, I agree. I think there could be more organised and unified ways to show standard workflows for more diverse use cases. But considering it's an open-source project, I'm pretty impressed that a core developer team has formed and that there is meaningful and fast progress in terms of the ComfyUI backend and frontend. I'm very quite hopeful that ComfyUI will develop in the right kind of direction and will prove itself useful in the mid-and long-term! Let's see where things go!
Just what I was looking for. Thank you!
Thank you so much for your comment! I'm glad the video is helpful to you!
Extremely Helpful and useful Guide, Thanks alot!
Thank you for your kind comment! I’m glad that the video helped!
Thank you for the tour!
Thank you for watching and leaving a comment!
Do I need strong PC?
@@roberthakobyan1993 thank you for your comment and question! You will need at least a decent PC. The most important spec is your VRAM or you need an Apple Silicon chip (M1-M4) if you’re on a Mac. It’s mostly a speed question. The better your hardware, the faster things run (although there are other hardware bottlenecks on specific models and workflows). Since it’s totally free, I’d just suggest you give it a try and see how it goes (without having too high expectations)! You can follow my first livestream on the topic, which will get you started with every step of the setup!
@@miroxleon Thank U for fast response
@roberthakobyan1993 my pleasure, if you have any other questions, feel free to write a comment or join our discord community!
Brother Leon, Malik Yusef and I are expanding our creative networks by putting together the DEAD (Drop Everything And Design) Designer Society, working on inhouse and industry projects. Lmk if this sounds of interest, id love to connect 🙏
will you add a timecode list after showing at which time you started each topics?
Thank you for your comment! I’m starting to do the time codes now!
@@miroxleon suuuper , danke
@@Beauty.and.FashionPhotographer All time codes are added now! I hope this helps!
@@miroxleon so cool. i was able to get straight to the parts i wanted to see. Fantastic . ....Off Topic: Will You do tutorial for training Face Loras of Photographs (sdxl and Flux) ? And Loras for styles, like architecture design, or fashion designers , like Armani, versace or similar ? Or even Lora sliders for fixing wrong body proportions , like elongates stretched bodys , Headsizes that are too small etc etc, .... which happen often with SD. ?
Great, I'm glad to hear that! Generally, I plan to expand on these tutorials and live streams. I'm currently part of a research project about ethical interventions with LoRAs, which is supposed to lead to open-source documentation of easy and accessible LoRA training approaches. I'm afraid this might not be exactly what you're looking for. But maybe you will find some value in it, and perhaps it will become a more extensive video series on the channel. So, feel free to stay subscribed and follow along!
Would love to see your take on Eva Sánchez's Portfolio site. Particularly the effects she uses on text. Read an interview of hers on CoDrops where she stated the developer made use of WebGL + MSDF to make the liquid and fractal effects on the text. I'm in the process of learning the ropes of WebGL so this is something that has me scratching my head. It's clearly some combination of SDFs and a distortion effect on top but the specifics are completely lost on me. Anyway definitely worth checking out as its a gorgeous website, and a great source of inspo imo.
Downloaded but cmd showing press any key I don't know how to open after download
Hey, thank you for your comment! Did you open the run.bat and did it start the local host? Then, you should be able to open the WebUI in your browser! Just make sure not to close the command prompt (that also closes the program)! You can also watch my full video about how to run Fooocus on UA-cam! That may make it easier to follow! Otherwise, I’d also recommend using Stability Matrix to install and run Fooocus and to manage your models, images, etc!
Yes please, more on the orbit controls!! You are amazing, I need more in depth tutorials from you
Thank you so much for your kind words! Yeah, I also want to do something about the orbit controls. I was never really happy with orbit controls + animated camera in Three.js, but the customised version in this scene works pretty well for my taste…! I'll try to do something about this soon; stay tuned!
@@miroxleon Thanks so much for the reply and thanks for your effort really. You are my senpai on this
beautiful tutorials man , love it keep it up i have always wanted to make this !
Thank you so much for your kind comment! I'm happy to hear that you wanted to make something like this in the past! I hope the tutorial and texture pack help you realise it! If you make something with this, feel invites to join the discord and share your project with us in the #community-projects channel!
Love the Content Man, I only have Knowledge of HTML, CSS and JS Only Via Chatgpt, ( I cant Write Code ) But your Tutorial is Very Valuable for Improving my Capabilities. Can you Also Teach How to Replace the Model as Per my Requirement?
Thank you for your comment! I'm glad to hear that my videos are valuable to you! Are you asking how to change the 3D model or 3D model type? If you want to import a different FBX 3D model, you only have to change the file location in this line: fbxloader.load( "path/to/file.fbx" ...) If you want to import a different model type, like GLB/GLTF, OBJ, etc., you will need a different importer. In that case, you could cut the import statement: import { FBXLoader } from "cdn.skypack.dev/three@0.136.0/examples/jsm/loaders/FBXLoader.js"; Then, you can import a different loader. Unfortunately, the logic of each importer is a bit different, so you cannot just switch "fbx" for "glb" or "obj"; you'd have to rewrite the loader setup. Thus, you'd have to cut the following part and rewrite it according to your desired loader: const fbxloader = new FBXLoader(); fbxloader.load( "miroleon.github.io/daily-assets/two_hands_01.fbx", function (object) { object.traverse(function (child) { if (child.isMesh) { child.material = hands_mat; } }); } ); ChatGPT should be able to do that for you, for the most part. I'd just try to give ChatGPT the lines I showed you here (best you copy-paste them from the Codepen) and prompt something like: "I have this FBXLoader setup in Three.js but want to load a *file type* model with the same settings instead. Please read the reference carefully and make all the necessary changes to import a *file type* model instead". If you want to learn the different loaders, the Three.js documentation and examples are great to get started. Maybe a mix of ChatGPT and looking in the documentation will get you furthest. If you have issues in the process or want to share your own results, please feel free to join our Discord and share your progress with the community! I hope this helped a little bit!
Awesome work, man! I would like to suggest that you experiment with the Vanruec post-processing with vanilla three.js. That post-processing pipeline can create an Uber shader during render, meaning that all the passes are rendered as a single pass, drastically improving performance for more complex scenes. I hope it helps. All the best, and keep up the good work! 👏🏻
Thank you so much for your comment and feedback! You're absolutely right about the performance improvements! Since I'm using Three.js primarily for artistic purposes, I don't spend much time optimising the scenes. For production, I would probably also change the FBX model for a GLB/GLTF model, etc. I did try to find the Vanruec post-processing pipline, but couldn't find anything. Do you have another reference how I can find that? It sounds super promising and I'd love to look into it! Also, please feel free to play around with my code and optimise it or improve it further! You can also make a tutorial about it yourself - your videos are amazing! I'd be really curious to see what you make out of this! If you want to talk about it more, you are welcome to our Discord community as well! Thank you again for your comments and feedback!
So Value Content ! Insane
Thank you so much! It means a lot to me if people find value in these videos! I hope you enjoy my other work as well!
nice nice nice
Thank you so much! It's a great pleasure to welcome you to the Discord as well!
love that glow.
Thank you!
Please tell something for Android
Thank you for your comment! Unfortunately, I’m unaware of any generative AI tools running natively on Android. Perhaps that is due to the variety of Android devices and their range of hardware specs. The only real open-source offline generative AI tool I’m aware of for mobile devices is “Draw Things“, which can run on MacOS, iOS, and iPadOS. If there really isn’t any AI tool running locally on Android, you may consider using services such as MidJourney or Dall-E via ChatGPT, which both have decent mobile UIs. Another approach would be to run an AI tool such as Fooocus, A1111, or ComfyUI on a PC and host it on the private IP of the device in your network. You can check your IP in the command prompt with ‘ipconfig’ and then add it to the settings of the respective WebUI. For example, this is quite easy if you install your WebUI via Stability Matrix. You'd click on the settings icon of the respective UI and search for something like ‘-listen’, which is probably set to ‘localhost’ or ‘127.0.0…’ by default (although this setting has different names in different WebUIs). There, you paste your local IP address that you've found with ‘ipconfig’. Then, after starting the WebUI on the PC, you should be able to open the WebUI from any device in your local network by typing the following pattern into your browser ‘localIP:port”. You will also see the final “URL” that you can type in the browser on your phone in the command line after starting the WebUI. Then, you can generate images from your phone or any other device in the network. They will be rendered and saved on the PC, but you can queue prompts on your mobile device. Of course, there might be network restrictions depending on your network situation. If you need more help with this, just let me know. The best way to get help is by joining our discord, which you can find in the video descriptions of all my latest videos! 
I'm sorry that I don't know any native Android tools, but I hope this little overview helped a little bit…!
All the best
Is it better (performance wise) to set those texture and effects in blender or set it via the code like you did in this video
Thank you for your comment! This is extremely context dependent, I’d say. For basic stuff, like the materials I’m using here, I’d prefer to do it in code, because it’s easier to make changes on the fly. If you want to optimise performance by baking light into the textures in case of more complex scenes that makes a lot of sense to do in Blende on the other hand. The effects/post-processing I’m doing here are also relatively expensive in terms of computational resources on the client device. But I also couldn’t bake the glowing/bloom effect in the texture, so that’s a compromise I’m willing to take. In the end, as I’ve mentioned, this will mostly depend on your context. If you want to talk about this more (and with other very competent people), you can join our Discord! You’ll find the link under all my latest videos. Perhaps you can get some more context specific help over there! Thanks again for your comment, I hope this helped a little bit!
Man how are you so artistic? Inspired.
Thanks
Thank YOU for watching and leaving a comment!
This is just awesome. Keep giving us such great content.
Thank you so much for your kind comment! This made my day! It took a while to finish this video, but I’m back with hopefully much more to come soon!
how do i do on my mac?
Thank you for your comment! It's quite a bit more difficult to run Stable Diffusion/Fooocus on a Mac OS device and since I don't have an M1 or M2 device, I wasn't able to test it yet. You can refer to this unofficial guide on the official GitHub repository for this. I hope this helps. Al tools that run natively on Mac OS are a bit more rare, but I hope you find something that works for you...! github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac
@@miroxleon thanks man, i followed the steps and im ended up getting this error- fatal: destination path 'Fooocus' already exists and is not an empty directory. can you help out if you can?
Spend 6gb for every time open the ai model? Or just one time
Thank you for your comment! No worries, you should only have to wait for the download of the model once! Sometimes there are Fooocus updates that come with new models, so they will download together with the update. But you don’t have to re-download and wait for the model every time you open the app! I hope this helps!
@@miroxleon thankyou
Does the gradient hdrs work in react three fiber?
Thank you for your comment! I haven’t worked with React Three Fiber before, but I’m pretty sure that it also supports HDRIs…! You can web search for React Three Fiber HDRI or environment to get specific tutorials! You can still use my full tutorial as a reference and perhaps copy the code from Codepen and let ChatGPT translate it to React. Just in case it doesn’t work, you can first get my Gradient HDR Freebie and only buy the full pack after you test it to make sure that it works! If you need more help, feel free to join our Discord server! The link is under all the latest videos! I hope this helps, and good luck and success on your Three.js journey!
you can put whatever resolutions you want in that tab. its very easy.
Oh, thank you for your comment and the update! Much appreciated!
@@miroxleon i can fil you in if you need.
Hi, it work with Apple M1 or just in windows ? ?
Thank you for your comment! The short answer is 'yes', although I cannot verify, as I don't have an Apple Silicon device. However, there is an unofficial guide on how to use Fooocus on M1 and M2 devices. As they say: "You can install Fooocus on Apple Mac silicon (M1 or M2) with macOS 'Catalina' or a newer version. Fooocus runs on Apple silicon computers via PyTorch MPS device acceleration. Mac Silicon computers don't come with a dedicated graphics card, resulting in significantly longer image processing times compared to computers with dedicated graphics cards." Here is the link to the instructions: github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac I wish you the best of luck and success getting this to run on your M1 device! Feel free to let us know how this went!
@@miroxleon Thanks you for your return ! Your video is really interesting
Thank you very much!
Can i run it on a rtx 3050 laptop
Thanks for your comment! The answer is: it depends, but it should work. I did a quick web search which showed that there are different versions of the RTX 3050 Laptop GPU. There are versions from 4GB to 8GB VRAM. Principally, both should work, but it's difficult to estimate without knowing the entire specs. However, you can check the VRAM yourself easily, as described and shown at 2:21 of the video. I would just give it a try to run Fooocus if I were in your situation. My best guess is that it might be slow, but it should work anyway...! Feel free to let us know how it went if you try to run it. Then, if someone else is trying to run Fooocus on a 3050 in the future, they might find value in this info!
Awesome :)
Thank you for your kind comment!
hi, i'm having problem to run it on my vs code, on codepen it is running. please help!
Thank you for your comment! Do you have any further information? Any error logs? How did you download the code or copy-pasted it into VS Code? Are you using a server plugin for VS Code? I hope I can help you when I have more information! You can also join our Discord and we can help you there! Here's an invite link: discord.com/invite/WBnTvFtwNn
@@miroxleon I ran it on codepen then after successfully running the test code for the three js i.e. the green box rotation one, i pasted the code of html css and js but js isn't working. Please help if you don't mind 🙏
@@tarshswarnkar I think I got you...! So, let me try to set this up for you. Generally, if you copy and paste the code from Codepen into local index.html, script.js, and style.css files, they are not correctly formatted or linked with each other yet. The easiest way to do the setup in VS Code is by creating such an index.html file. Then type "html" in the code editor (don't click away or the suggestions will go away and you will have to retype "html"!). While you're typing there should be a list of suggestions that VS Code makes. One of those is "html:5". That's the general setup for a responsive HTML file. Just click on that entry in the suggestion list or navigate there with arrow key down and enter. Then you should get the general outline for an HTML5 document, starting with <!DOCTYPE html> ... and ending with ... </html>. That's the part that is missing in the Codepen, as Codepen does automatically fill that in itself. Now, you can copy and paste the code from the HTML section on Codepen between <body> and </body>. There are still some minor issues though. Even if you create a script.js file and a style.css file and paste the code from Codepen there, the files don't know that they each exist yet. Your index.html will be the main hub for them to know that they have to work together. Therefore you have to add the line <link rel="stylesheet" href="style.css"> before the </head> tag, so that the HTML knows that it needs the style from the CSS file. Lastly, you'll need to add the script.js file to the mix. For that, simply go down to the end of the document and past the line <script type="module" src="script.js"></script> before the </body> tag. It's important that you use the type="module" here, as we're working with modules in Three.js! Also, make sure that you have saved all files, the index.html with the HTML code, the script.js file with the JavaScript code, and the style.css file with the CSS code. 
This should be the entire setup, but in order to preview the code, you'll need a live server plugin for VS Code. You could simply open the index.html file in your browser, but if you check the console (usually with ctrl + shift + j) it will show you a CORS error, which is a security thing, and prevents to read from certain files or sources. Anyway, for that purpose we have live servers that run the file on a localhost. If you don't have one setup, click on the "Extensions" icon in the left side bar of VS Code (or press ctrl + shift + x) and search for "Live Server". Any of the top ones should be good. I personally started with "Live Server" by Ritwick Dey and later switched to "Live Server (Five Server)" by Yannick. The first one is super simple but works and the second one has a few more custom options. Simply select one and click on the "Install" button. After the installation, you might have to activate it. When it's fully installed, there should be some kind of "Live" button in the lower right corner of VS Code (the Five Server one is a play icon and "Go Live" next to it). This should start a new local server. Make sure you are in the index.html file when you go live or navigate there when you are live, as only that one will render your scene. If you open the script.js or style.css files with a live server, they will just show you the code. That should be it! Maybe you already know some of this, but the lengthy description is just to make sure, you can follow along if this is your first time going from Codepen to VS Code! To make it easier for you, I created a quick Github Gist, which turns the Codepen code into the separate and linked files that I just described. You can find it here: gist.github.com/miroleon/2280303bb40394f80a9229a2374281af There are two ways you can work with the Github Gist. For one, there is a "Dowload ZIP" button in the top right corner. 
You can simply download the ZIP, unpack it, open the folder in VS Code, and start your Live Server, and this should work (I tested it myself to be sure). You can also see the three different files on the preview page of the GitHub Gist, so you can alternatively recreate these files locally on your machine and then copy and paste the code there respectively. Your choice! There's another thing you can do right from Codepen. There is an "Export" button in the lower right corner. If you click on it, you will see the option "Export .zip". This also downloads a zip archive with the code, but it can be a bit confusing due to the folder structure. If you unzip it, you will get a folder with a LICENSE.txt, a README.md, a "dist" folder, and a "src" folder. The "src" folder contains the code in the same form as it's shown on Codepen, so that won't be correctly linked either. However, in the "dist" folder, you will see the correct files with the correct links in the index.html file. You can then open that folder in VS Code, run your Live Server, and be done! I hope this helped! If this is your first time doing the whole Codepen to local VS Code transition, don't worry, I had the exact same questions when I started! It's good to learn it. It helps you understand the document structure and everything better! Let me know if you have any further questions or run into any issues!
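For reference, here is a minimal sketch of the index.html structure described in the reply above. The title text and the placeholder comments are assumptions for illustration, not content from the original Codepen:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>My Three.js Scene</title>
  <!-- tells the HTML to load the styles from style.css -->
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <!-- paste the code from the Codepen's HTML panel here -->

  <!-- type="module" is required because the Three.js code uses ES modules -->
  <script type="module" src="script.js"></script>
</body>
</html>
```

With this file next to style.css and script.js in the same folder, the Live Server setup described above should render the scene.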
@@miroxleon thank you for your very generous reply, but is there a place where I can send you a video of the problem I'm facing? If yes, that would be the greatest help.
@@miroxleon I think 3js is creating a problem, even though I have installed everything
Amazing work!! I noticed you are using Three.js version 0.136. I tried to replicate it with the newest version (0.160), but it produces totally wrong colors. Do you know why it doesn't work with the new version?
Hey Mau, thank you for your comment! Oh yes... that's a thing... I'm not perfectly sure what the cause of the different colours is, but there have been some major changes across Three.js versions over the years. I must confess, at some point, I gave up on staying up to date with the latest releases and what changed in them. I had some parametric geometry manipulations I used to do, and they stopped working after r136 or somewhere around there, too. I'm sure there are workarounds, and it's not advisable to keep using outdated releases, but yeah, I must confess, I just stayed with older Three.js versions for my creative projects. For client work, I'd probably take the time to adapt a project to newer releases.

Anyway, if you are searching for fixes for the colour issue, I have two ideas that might help you. Assuming the differing look is caused by the HDR being tone mapped differently by default between releases, you may want to have a look at the renderer (threejs.org/docs/index.html?q=render#api/en/constants/Renderer) and its toneMapping property (threejs.org/docs/index.html#api/en/renderers/WebGLRenderer.toneMapping). Here, you can find a Three.js example demonstrating different tone mapping types (threejs.org/examples/#webgl_tonemapping). The Three.js documentation also has a more extensive explanation of colour management in general and in Three.js in particular, which might be helpful in this context (threejs.org/docs/#manual/en/introduction/Color-management).

I hope this helped at least a little bit. If you have any questions, feel free to raise them! If you want to have a more extensive conversation about the issue (or anything else Three.js related), you're very much invited to join our Discord server! It's usually easier to have a chat there compared to the UA-cam comments, and other web devs are around who might have some helpful insight as well! discord.gg/dWgeDM9QpB
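As a rough illustration of what a tone mapping operator does, and why a changed default can shift the overall colours, here is a plain-JavaScript sketch of the simple Reinhard operator. This is only a demonstration of the general idea, not Three.js' exact implementation:

```javascript
// Simple Reinhard tone mapping: compresses an HDR value in [0, infinity)
// into the displayable range [0, 1). Illustration only; Three.js' built-in
// operators (Linear, Reinhard, ACESFilmic, ...) differ in detail.
function reinhard(hdrValue, exposure = 1.0) {
  const v = hdrValue * exposure;
  return v / (1.0 + v);
}

// Bright values are compressed far more strongly than dark ones, which
// is why switching the default operator changes the look of a scene.
console.log(reinhard(0.5)); // dark values stay close to linear
console.log(reinhard(8.0)); // very bright values are squeezed below 1
```

In Three.js itself, the operator is selected via the renderer's toneMapping property (e.g. THREE.ReinhardToneMapping or THREE.ACESFilmicToneMapping), and its strength is tuned with toneMappingExposure, as covered in the documentation links above.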
@@miroxleon Thank you for your response!! Sure, it helps a lot, I didn't know where to start. I'll try to make it look similar to your original result with the latest version, and I'll let you know on the Discord if I find something!
@@MauRuizDev That'd be amazing! It's always great if we have a chance to learn from each other as a community! Good luck with your efforts with the latest Three.js version, and let us know if you need any help along the way! Looking forward to seeing your results down the line!
Thank you for joining the stream! The chapters haven't updated in the video yet. In the meantime, you can use the following timestamps:
2:37 Welcome
5:10 Lessons from Last Stream
7:26 Reference and Inspiration
8:54 Getting Started in Blender
12:43 Backface Culling
17:37 Two Types of Subdivision in Blender
23:48 Edge Crease/Edit Subdivision Surface Impact/Smoothness
31:08 Fixing Sizes from Transform to Metric in Dimensions
37:40 Adding Tables from the Room Geometry
42:31 Edge Crease for Straight Table Edge
55:10 Inset/Inner Extrude
1:01:53 Short Break
1:07:44 From Tube to Vase Shape
1:10:35 Dissolve Edges
1:12:20 Edit Crease After Applying
1:17:03 Adding Second Table Level
1:21:24 Make and Delete Annotation in Blender
1:34:40 Edit Undo Steps in Preferences
1:36:23 Wrap Up
bob
You can find the link to the cheat sheet below the linked livestream or via this link: miroleon.github.io/blender-cheat-sheet/
Totally blown away, such a fantastic detailed tutorial. Thank you so much, and I look forward to learning more from you. Liked and subscribed.
Thank you so much for your kind comment! This really means a lot to me! I'll try to keep creating more helpful and educational content in the future!
I installed Fooocus on my laptop, but it keeps throwing an error whenever I try to generate an image. What can I do to stop the error?
Thank you for your comment! Can you give any information about the error? I never had an issue like that before, but maybe I can figure something out knowing more about the error...!
Hi @miroleon, how are you! Thanks for your tutorial. I would like to know if this can be made easily with custom code? Have you coded an example with Webflow, please? Thank you!
Thank you for your comment! I must beg your pardon; I haven't used Webflow before and thus cannot give you any solid information on how to do this within the confines of that framework. I did some quick research now, and in principle, you should be able to embed custom code (which includes HTML, CSS, and JS, which I'm also using here). If you head over to the Codepen for this tutorial via the video description, you could copy the HTML and paste it into your custom code embed. The same goes for the CSS, which you would have to put into a <style></style> tag, and the <script></script> tag for the JS code. However, Webflow only seems to allow custom code with a maximum length of 10,000 characters, which might be too little for this particular project. You can minimise the length of the code by deleting all the comments ("//") from my original code and running it through a website that removes all the empty spaces to cut down the character count (you could use this website, for example: codebeautify.org/remove-extra-spaces). I'm not sure whether Webflow can handle the relative complexity of this project; the project isn't crazy complex itself, but it might just be a bit too much for a custom code embed. In the end, I think you have to get a bit creative trying to recreate something like this within Webflow. I hope this tutorial at least gives you an idea of the principle of how one can make such a landing page, so that you can search for similar ways to implement it in Webflow. Otherwise, I would encourage you to maybe use the opportunity to try building a site from the ground up, e.g., by forking the code on Codepen and building your website around this tutorial. I know it's a lot of work, but it's helpful to learn the foundations anyway! If you choose to stay with Webflow, I wish you the best of luck and success in making something like this work within that framework.
For all the info on the custom code embed, you can check Webflow's documentation, which I also referenced for my reply: university.webflow.com/lesson/custom-code-embed?topics=elements Lastly, if you want to get more feedback and learn more, you are also welcome to join our Discord! discord.gg/9JtYAqdWzq
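As a rough sketch, a single custom code embed bundling the three Codepen panels could look like this. The wrapper div and its id are placeholders I made up for illustration, not names from the original project:

```html
<!-- Sketch of one custom code embed; wrapper div and id are placeholders -->
<style>
  /* paste the CSS from the Codepen's CSS panel here */
</style>

<div id="scene-container">
  <!-- paste the markup from the Codepen's HTML panel here -->
</div>

<script type="module">
  // paste the JavaScript from the Codepen's JS panel here;
  // type="module" is needed because the Three.js code uses ES module imports
</script>
```

Keep in mind the 10,000-character limit mentioned above applies to the whole embed, so the minified, comment-free version of the code is the one to paste in.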
How many iterations/second are you getting on this 3080 mobile? I am thinking about buying a 3080 Ti laptop with 16 GB VRAM.
Thank you for your comment! I must confess that I just changed my setup, so I cannot easily check the old setup's performance and cannot recall the it/s (or s/it) off the top of my head. What I can say is that Stable Diffusion (whether through Fooocus or A1111) ran decently well on that Razer 14 with the 3080. Sure, the fans start spinning, and doing AI animation work isn't fun, but for single image generation, I'd assume a laptop with a 3080 Ti Laptop edition to be well enough equipped. I'll try to run a test soon and get back to you, though! You can also join our Discord and put your question in the AI channel. Then it'd be easier to get back to you once I've done a test (and maybe others can help as well)! discord.com/invite/3xxNV52MCZ
I managed to check the old setup with the Razer 14/3080 Laptop GPU now. I hope this isn't too late and is still helpful for you...! This isn't comprehensive by any means, but in Fooocus, the laptop peaks at around 1.55 it/s. When I do a quick test through A1111, I tend to be at around 1.15 s/it. I haven't spent time optimising or benchmarking this setup, so these are the out-of-the-box numbers. If you have any further questions, you can still join the conversation on Discord!
I'm sorry for the weird audio glitch at the beginning of the short. I couldn't find a way to fix it, so I hope it doesn't bother you too much. All the more, I recommend the full tutorial (without the audio glitch) to understand Fooocus and Stable Diffusion better! You can find it below the title of the short or via this link: ua-cam.com/video/RuAuIBCYleY/v-deo.htmlsi=NtlZfo2BozbOpU4m
How do you start it once it's already installed, after a fresh Windows boot? Do you need to remember the link, or do you have to reinstall and click the link again?
Thank you for your comment! I'm not perfectly sure if I understand the question correctly, but I'll try to answer as best I can: Since Fooocus comes as an embedded package, you don't have to "install" anything in the common sense of the term. What happens when you first run the .bat file is that the terminal opens, which then downloads the required models so that Stable Diffusion can turn your prompt into an actual image (as a reminder: Fooocus is the user interface; Stable Diffusion is the algorithm; Python is the language).

So, once you've downloaded all the necessary files, you can run Fooocus via the .bat file. When you close Fooocus by closing the terminal and reboot your PC, you can start it the same way you started it the first time: simply double-click the .bat file in the original folder. It doesn't have to download everything again; the models are stored automatically in the folder. However, Fooocus is regularly updated, so sometimes it will download updated models as well. You simply have to keep the terminal open for as long as you use Fooocus. If you close the terminal, Fooocus will also close. When you open Fooocus via the .bat file again, it should start the local server itself and automatically open the UI in the browser again. Otherwise, you can type the link or ctrl + click the link in the terminal to open the web UI again.

So, you just keep the one folder that you originally downloaded and always start Fooocus from there. If you want to "uninstall" Fooocus, simply delete that folder. There shouldn't be anything else you have to do to "uninstall" it. Since this is not the same intuitive way of opening programs such as Photoshop etc., you can make it a bit easier to start by right-clicking on the .bat file, selecting "Create Shortcut" (which might be hidden under "Show more options"), and then copy-pasting the shortcut to your desktop or pinning it to your start menu.
Then you can simply double-click that shortcut, and it will start the terminal that runs the Fooocus UI. Again, the terminal has to stay open for Fooocus to run. When you close the terminal, Fooocus closes as well, and the browser tab showing the Fooocus UI will stop working too. I hope my long-winded explanation helped clarify the usage of Fooocus a bit. If I didn't answer your question, please feel free to ask again. You're also welcome to join our Discord if you need any further help or want to exchange more ideas with the community! discord.gg/wxj6YTDuEW
Very useful video, but I'd like to understand models and LoRAs better, and how to use material from sites like CivitAI.
Thank you for your comment! I can surely do another tutorial on more advanced techniques: using models and LoRAs from CivitAI in A1111, and training your own models and LoRAs with Kohya. It's good to know that there is demand for this type of video! I'll work on it soon!
@@miroxleon that would be great! personally i'm starting with fooocus, which i find easy to start with but i don't always get the results i want! for example, i'd like to use the same model i created (a person for a sort of comic i'm creating) and i'd like to use poses of other models online, any suggestion?
@@polgia9314 I understand, Fooocus can be somewhat limited in that regard. To get started, have a look at how to install A1111. For models, I personally had the best quality/time performance with a model called "CyberRealistic", which you can find on CivitAI. However, I have only a little experience with "comic" style models. For controlling poses, you should have a look at ControlNet and OpenPose. With those, you can determine the pose of the character that the AI will render, which also helps with consistency. There is also the OpenPose Editor for A1111, with which you can easily manipulate a pose and adapt it to your needs (there are also people who post their poses for OpenPose online to download). I'm keen to make a more detailed tutorial about this in the near future, but in the meantime, you are also welcome to join our Discord. There, we have an AI channel where you can chat and ask your questions (although there are many other, more specialised AI Discords, too)! discord.gg/4GcfAxJDXc
@@miroxleon thanks a lot for your advice, I’m gonna look into it 😄
Can it work on a Mac?
Thank you for your comment! It's quite a bit more difficult on macOS, and since I don't have an M1 or M2 Apple device, I haven't been able to test it yet. You can refer to the unofficial guide on the official GitHub repository for this. I hope this helps. AI tools that run natively on macOS are a bit rarer these days, but I hope you find something that works for you...! github.com/lllyasviel/Fooocus#mac
See pictures like God in Cambodia