360° with AI Masterclass for EVERYONE: Stable Diffusion, ControlNet, Depth Map, LORA and VR!

  • Published 31 Jul 2024
  • [Update: now with Panorama 360 Viewer plugin ⬇️ ] After 100+ hours of hard work, I finally finished my new Masterclass that covers everything you need to know about creating stunning 360° photos using advanced AI techniques. This class takes you beyond simple text-prompt AI art generation like Blockade Labs! You will learn practical techniques to control your 360 image creation with depth maps, ControlNet, and artistic models that are available online or trained by you!
    We will begin by showing you how to capture 360 photos for AI. Then we will walk you through, step by step, how to install the Stable Diffusion Automatic1111 WebUI, LORA, and ControlNet in under 3 minutes! Then we will work through two examples of how to use AI to generate your next 360 masterpieces. We will even teach you how to load your AI-generated 360 photos onto your Meta Quest VR headset to create an immersive experience.
    Whether you're an amateur photographer or a professional looking to up your game, this masterclass is for everyone. Join us to learn how to create stunning 360° photos with AI in VR!
    To see the final AI 360 video in VR, use this link in your Meta Quest 2 browser: / 360creato
    👉[Update: 2023-3-18]: NEW Panorama 360 Photo Viewer is now available directly inside your Stable Diffusion Automatic1111 WebUI. You can use it to check stitch line issues and 360 photo results without using third-party 360 viewers like Facebook 360 or Affinity Photo.
    Link: github.com/GeorgLegato/sd-webui-panorama-viewer
    How to install the 360 Viewer: copy the above URL, go to the Extensions tab, choose Install from URL, paste in the URL, and hit Install. Don't forget to hit Apply and restart UI. (A manual git-clone alternative is sketched below.)
    How to use it: bit.ly/3Z3UC09
    Credit: GeorgLegato on Reddit
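    If the Install from URL route fails, the extension can usually be installed manually as well - a minimal sketch, assuming a default Automatic1111 folder layout (run from your stable-diffusion-webui folder, then restart the WebUI):
    ----------------
    rem Clone the panorama viewer straight into the extensions folder
    git clone https://github.com/GeorgLegato/sd-webui-panorama-viewer extensions\sd-webui-panorama-viewer
    ----------------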
    0:00 - Introduction: what you'll learn
    2:18 - Blockade Labs AI and its limitations
    3:23 - How to Capture 360 Photo for AI
    6:45 - Install Stable Diffusion Automatic1111
    9:54 - How to Install ControlNet
    10:45 - How to Install LORA
    11:04 - How to Install VAE
    11:42 - Civitai - what it is and how to use it
    11:32 - Indoor 360 AI restyle
    13:32 - My BETTER prompt tips
    15:21 - TWO important settings for seamless 360 AI photo
    17:19 - Outdoor 360 Adv. workflow
    21:58 - AI inpaint for 360 photo
    25:11 - FREE AI upscale with Stable Diffusion vs Paid Topaz Photo AI
    26:12 - Touch up AI 360 Photo with Affinity Photo 2
    28:08 - How to Inject 360 Metadata and Publish 360 Photo on Facebook
    29:39 - How to View 360 Photo inside a VR Headset like Meta Quest 2
    32:45 - immerGallery: create VR experience with all your AI 360 photos
    List of links you need for the Automatic1111 installation (a consolidated command sketch follows the list):
    ➡️ Python 3.10.6: www.python.org/downloads/rele...
    ➡️ Git: gitforwindows.org/
    ➡️ Automatic 1111: github.com/AUTOMATIC1111/stab...
    ➡️ Checkpoint 1.5 Download: huggingface.co/runwayml/stabl...
    ➡️ ControlNet for Automatic 1111: github.com/Mikubill/sd-webui-...
    ➡️ All ControlNet models: huggingface.co/lllyasviel/Con...
    ➡️ VAE Download & Installation: stable-diffusion-art.com/how-...
    ➡️ LatentLabs360 on CivitAI: civitai.com/models/10753/late...
    ➡️ Topaz Photo AI: bit.ly/3mzEyQQ
    ➡️ EXIF Fixer for Facebook 360 Photo metadata injection: exiffixer.com/
    ➡️ VR App to create VR experience from 360 photos: immervr.com/
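    For reference, here is a minimal command-line sketch of the install steps those links cover, assuming Python 3.10 and Git are already installed and your checkpoint download is a Stable Diffusion 1.5 .safetensors file (exact filenames may differ):
    ----------------
    rem 1. Get the Automatic1111 WebUI
    git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
    cd stable-diffusion-webui
    rem 2. Put your downloaded checkpoint file into models\Stable-diffusion\
    rem 3. Install the ControlNet extension (its .pth models go into extensions\sd-webui-controlnet\models\)
    git clone https://github.com/Mikubill/sd-webui-controlnet.git extensions\sd-webui-controlnet
    rem 4. Edit webui-user.bat as shown below, then run it
    webui-user.bat
    ----------------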
    Your "webui-user.bat" file should look like this (copy and paste):
    ----------------
    @echo off
    rem Point PYTHON at your own Python 3.10 install path
    set PYTHON="C:\Users\hugh\AppData\Local\Programs\Python\Python310\python.exe"
    set GIT=
    set VENV_DIR=
    rem --xformers enables faster attention on NVIDIA GPUs
    set COMMANDLINE_ARGS= --xformers
    rem Update the WebUI to the latest version on every launch
    git pull
    call webui.bat
    ---------------
    IMPORTANT: Code, prompts, and prompt tips you need for better AI:
    👉 Download here: bit.ly/42bjpSN
    👉 Download my FREE 360 photos in this video: bit.ly/42dPbyP
    Cameras and gear for 360 photography in general:
    ➜ Get Insta360 X3 with FREE Accessories now: bit.ly/insta360x3
    ➜ Insta360 ONE RS: bit.ly/insta360oners
    ➜ Ulanzi X3 Meta Cage: bit.ly/3JFyYeA
    ➜ Zhiyun M40: amzn.to/3lcJGQc
    ➜ Zhiyun F100: amzn.to/3yBqC0Z
    🎆 FOLLOW ME:
    ➜ Instagram: / hugh.hou
    ➜ Facebook: / 360creator
    ➜ Meta Quest TV: ocul.us/30uMZUj
    ➜ TikTok: / hughhou
    #AI #360Photo #stablediffusion

COMMENTS • 102

  • @hughhou
    @hughhou  1 year ago +9

    👉[Update: 2023-3-18]: NEW Panorama 360 Photo Viewer is now available directly inside your Stable Diffusion Automatic1111 WebUI. You can use it to check stitch line issues and 360 photo results without using third-party 360 viewers like Facebook 360 or Affinity Photo.
    Link: github.com/GeorgLegato/sd-webui-panorama-viewer
    How to install the 360 Viewer: copy the above URL, go to the Extensions tab, choose Install from URL, paste in the URL, and hit Install. Don't forget to hit Apply and restart UI.
    How to use it: bit.ly/3Z3UC09
    Credit: GeorgLegato on Reddit

  • @amirnajafi-pro
    @amirnajafi-pro 1 year ago +15

    I can only say that I am amazed at how much impact YouTube tutorials can have on a person's life. I live in a country where the simplest citizen rights have been taken away from people, and their way of thinking is dictated to them. In a country where there is no hope for the future, watching each of your educational videos gives me, and has given me, a window full of hope.
    Since last year, I started working on producing virtual reality content in a small city in Iran, and now, after a year, I am facing a huge amount of positive and extraordinary feedback in my business. To the extent that I can claim that I have become the first in my country in this field.
    I always wanted to thank you for each of the exceptional educational tutorials you provide. After watching this video, I couldn't resist telling you this because my mind has been filled with this question for a while, how can I use artificial intelligence to develop my business in this field?
    Thank you, Hugh ❤️
    (Btw this message has been translated with ChatGPT 😅)

    • @hughhou
      @hughhou  1 year ago +4

      Wow, congratulations - so proud of your achievements! I wouldn't be able to do what you do in your environment! Keep it coming. For AI - I am doing lots of research in the field to figure out how to use it for immersive media, so I will keep updating here :)

    • @amirnajafi-pro
      @amirnajafi-pro 1 year ago +1

      @@hughhou It's a pleasure to hear this from you, master 🙏🏻
      I had a request that I wanted to share with you. In the Twinmotion software, you can create the highest-quality 3D virtual tours. I really want to find a way to convert my own 360-degree photos into objects in this software. I would appreciate it if you could provide a tutorial video on this if possible.

    • @Blazerfan11
      @Blazerfan11 1 year ago

      There are many YouTube help videos on how to use ChatGPT to develop your business.

  • @guillermonippermedia5595
    @guillermonippermedia5595 1 year ago +2

    Master!!!🙏

  • @lixinzhao6987
    @lixinzhao6987 1 year ago +1

    The 360 shots look great! Impressive!

  • @EvileDik
    @EvileDik 1 year ago +4

    Every time I watch a Hugh video, I have to forcibly log myself out of Amazon to resist the temptation to splurge on 360 cameras, and then be disappointed by my inability to produce the exceptional results he achieves.

    • @hughhou
      @hughhou  1 year ago

      Lol! You don't need more cameras, but def more accessories 😂

  • @MrMarleyoneluv
    @MrMarleyoneluv 9 months ago +1

    Loved the video - lots of great info. It got me playing with a lot of cool tools and features. I own a GoPro Max / Insta360 1-inch edition. I would like to suggest adding a short segment to this video, right after you talk about lighting and the 360 image, explaining how to make a multi-exposure HDRI image to capture all the extra detail you get from the higher dynamic range. Thanks for the masterclass!

    • @MrMarleyoneluv
      @MrMarleyoneluv 9 months ago

      I also understand it's 36 mins long and you may already have another video up about HDRI lol :)

  • @pillantechvideo
    @pillantechvideo 1 year ago +3

    Very interesting new stuff and VR tools, thanks for sharing with us

    • @hughhou
      @hughhou  1 year ago +3

      You bet! This can change the virtual tour industry dramatically, just like NeRF :)

  • @leono83
    @leono83 1 year ago +1

    Amazing content as ever. Truly inspirational

  • @tryentist
    @tryentist 1 year ago +3

    Wow Hugh, this is fantastic!! It is nice to have you outpace me so I can tag along to catch up!

    • @hughhou
      @hughhou  1 year ago

      Not sure I'm outpacing anyone lol. Just after I finished editing, ChatGPT-4 came out. Now there is a whole new level of AI superpower for creators… more research! I need everyone's help lol.

  •  1 year ago

    Awesome video

  • @virtualityXR
    @virtualityXR 1 year ago +1

    Hugh is the Goku of YT creators! 😄 So glad you are pushing the community forward with AI tools...

    • @hughhou
      @hughhou  1 year ago

      Haha glad you think so :)

  • @WindFireAllThatKindOfThing
    @WindFireAllThatKindOfThing 1 year ago +2

    That's it. I'm turning my garage into a Cyberpunk dystopia.
    Oh wait, it's already a dystopia.

  • @JamesCorbett
    @JamesCorbett 1 year ago +1

    Superb tutorial 👍

    • @hughhou
      @hughhou  1 year ago +1

      Glad you liked it

  • @kleber1983
    @kleber1983 1 year ago +1

    this is GOLD! this is just too good to be true!! thanks bro!

  • @CarlosAndresCuervo
    @CarlosAndresCuervo 1 year ago +1

    This is insane! Awesome, but it looks like rocket science. So complex. Hopefully in the future there will be alternatives with a simple GUI and far fewer steps to generate AI art from 360 pictures.

    • @hughhou
      @hughhou  1 year ago

      Yes. This is still a very early stage of AI. It will get easier by the day. Just today I updated the description with a native 360 viewer inside the WebUI! I am sure by next month it will be 10x easier. I will keep updating here.

  • @chardellbrown1821
    @chardellbrown1821 1 year ago +5

    Hugh, thank you so very much! You have been a lifeline as I try to figure out ways to bring more people together in VR experiences. Please keep up the amazing work! Perhaps a video about multi-viewer 360 YouTube livestreams using Theta/Insta-type cams? I don't know if that's possible, as the YouTube VR app only seems to be single-user/single-player. Is there a workaround or alternative?

    • @hughhou
      @hughhou  1 year ago +2

      That will require Meta and YouTube to work together. There will probably be an app that allows you to do that soon.

  • @camilovallejo5024
    @camilovallejo5024 11 months ago +1

    You my friend are a pioneer

    • @hughhou
      @hughhou  11 months ago

      Awww thank you so much!

  • @Niberspace
    @Niberspace 3 months ago +1

    This video deserves more views, amazing.

    • @hughhou
      @hughhou  3 months ago

      Awww glad you like it!

  • @vezvisuals4194
    @vezvisuals4194 1 year ago +3

    AMAZING JOB AGAIN, Hugh!!! Can Stable Diffusion + plugins do 3D 360 or 3D 180 AI image generation?

    • @hughhou
      @hughhou  1 year ago +2

      Not yet. But active research is on the way, probably very very soon.

  • @Danipirru
    @Danipirru 1 year ago

    Hi, thank you for your tutorial, it is fantastic. I have a problem: when I finish the process, the created image disappears and is not saved in the folder. Why does this happen?

  • @VRVideographer
    @VRVideographer 1 year ago +1

    Thanks. Very useful tutorial. Installation worked great. Stable Diffusion worked well for a couple of days and then started producing an error on render (A tensor with all NaNs was produced in VAE...etc.). I reinstalled a few times, but now on subsequent installations 'depth' does not appear in the ControlNet pre-processor list. There are depth_leres, depth_midas, depth_zoe plus lots of others - canny, openpose, mlsd, etc. - though I haven't managed to get as good results with these pre-processors. You mentioned using 'none' as a pre-processor when using a stereoscopic image. How does this work? Thanks for all your work in making these concise tutorials

    • @glukvideo
      @glukvideo 1 year ago

      Same for me, I don't have the plain 'depth' option

    • @VRVideographer
      @VRVideographer 1 year ago +1

      @@glukvideo I think for the pre-processor, depth_midas is the same as depth, which is better for near objects. depth_leres is better for far objects, as far as I'm aware
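
      For the "A tensor with all NaNs was produced in VAE" error in this thread, a common fix is to run the VAE at full precision - a minimal webui-user.bat sketch using Automatic1111's documented flags:
      ----------------
      rem --no-half-vae keeps the VAE in full precision, which usually stops
      rem the "tensor with all NaNs was produced in VAE" render error
      set COMMANDLINE_ARGS= --xformers --no-half-vae
      ----------------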

  • @ezdeezytube
    @ezdeezytube 1 month ago

    Can we do this in ComfyUI with SDXL?

  • @ratside9485
    @ratside9485 1 year ago +3

    You don't need Kohya-ss Additional Networks; that is only for LORA models that have been made with Kohya-ss. Thank you for your work. Have you tried this with 180-degree images?

    • @hughhou
      @hughhou  1 year ago

      Thank you for the tips!

    • @hughhou
      @hughhou  1 year ago

      Not yet. 180 will need a 180-trained model - I plan to make one as well. I wonder if anyone is making this and can share models.

    • @ratside9485
      @ratside9485 1 year ago

      @@hughhou I don't know how they do it when the maximum size for training is 768x768. Maybe you can change the style with the new ControlNet models. Have a look at the ControlNet Style Adapter.

  • @drumtro11
    @drumtro11 5 months ago +1

    I went to the link on Hugging Face, but all the file names are different or updated. Any advice? Nvm, I did a search there. You are awesome, dude.

  • @pillantechvideo
    @pillantechvideo 1 year ago +2

    I wonder if you can share the link to see your results directly on a VR HMD - it looks like an amazing experience!

    • @hughhou
      @hughhou  1 year ago +1

      Yes! It's in the video! The link is on my Facebook page - Facebook.com/360creator - open the link in your Meta Quest browser or any XR browser to be fully immersed.

    • @pillantechvideo
      @pillantechvideo 1 year ago

      @@hughhou yes mate, it's amazing to see on Oculus Go if you activate the PC-version browser!

  • @miguelmira8615
    @miguelmira8615 1 month ago

    Can you show us how to do this with Deforum for 360 video?

  • @samirsvirtualworld
    @samirsvirtualworld 1 year ago +1

    So you prefer Facebook for 360 pictures and YouTube for 360 video?

    • @hughhou
      @hughhou  1 year ago +1

      As an artist in the space, yes, since distribution is everything and Facebook and YouTube have the widest reach. But for the best quality, no. I use Meta Quest TV for the best-quality 360 video and other virtual tour platforms for gigapixel 360 photos.

  • @zakadrom
    @zakadrom 1 year ago

    Hi Hugh! Please make a lesson on how to style a VR180 3D video in Stable Diffusion

  • @sabuj100
    @sabuj100 1 year ago +2

    Is there any way for Radeon GPUs?

    • @hughhou
      @hughhou  1 year ago

      Yes. There are instructions out there for installing on AMD Radeon.

  • @peterbelanger4094
    @peterbelanger4094 1 year ago +1

    Tiling did not work for me. But I wasn't using ControlNet; I just tried vanilla with the prompt "panoramic view of a room" (wanted to start simple).
    But it tiled all 4 edges and broke the panorama. (It would be great if A1111 split tiling into horizontal tiling and vertical tiling, but it doesn't.)
    I guess tiling only works if you are using ControlNet.
    Without tiling, I had to go into Photoshop, offset the image horizontally halfway, then go back into img2img and inpaint over the seam to make it fully 360

    • @hughhou
      @hughhou  1 year ago

      Make sure to install the Asymmetric Tiling plugin, as some checkpoints do not tile correctly. Enable it, and enable Tile X as well as tiling itself. You don't need ControlNet to enable tiling, but your model should support VAE tiling too. Try a more popular model trained on higher-res images to get a better result. I also just wrote an update in the description on enabling a 360 viewer directly inside A1111, so you can see immediately whether it tiles correctly. I hope all these extra efforts help you.
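
      Two hedged sketches for the tiling problems in this thread - assuming the tjm35 Asymmetric Tiling extension (which adds separate Tile X / Tile Y options) and, for the manual seam fix described above, an ImageMagick install:
      ----------------
      rem Install the Asymmetric Tiling extension, then enable only Tile X
      rem so the panorama wraps horizontally without tiling top-to-bottom
      git clone https://github.com/tjm35/asymmetric-tiling-sd-webui.git extensions\asymmetric-tiling-sd-webui

      rem Manual seam fix: roll the image half its width so the stitch line
      rem lands mid-frame, then inpaint over it (4096 = half of an 8192px pano)
      magick pano.jpg -roll +4096+0 pano_offset.jpg
      ----------------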

  • @fisheyemedia7196
    @fisheyemedia7196 1 year ago +1

    Please make the same video for Mac users.

    • @hughhou
      @hughhou  1 year ago +3

      Yes, a Mac workflow video is coming. You can use a Mac with Stable Diffusion - the installation steps are just different!!

  • @bplusf3195
    @bplusf3195 1 year ago +1

    Thx! Why not light it with a Bushman Halo?

    • @hughhou
      @hughhou  1 year ago +1

      That also works really well.

  • @glukvideo
    @glukvideo 1 year ago +2

    Why don't I have a "depth" option under "preprocessor"? I only have these: depth_leres, depth_midas and depth_zoe

    • @hughhou
      @hughhou  1 year ago

      Did you download the latest depth model?

  • @210larz
    @210larz 6 months ago

    Hi! Thanks for the excellent video tutorial. I'm not finding the tiling option anywhere. Could someone help me? :)

    • @mediaofline
      @mediaofline 4 months ago

      Same here, can't find tiling

  • @sihlemakalima
    @sihlemakalima 11 months ago +1

    Every time I generate, I keep getting different results. Also, I don't have the "depth" option in my preprocessors; I have depth_leres, depth_midas and depth_zoe

    • @hughhou
      @hughhou  11 months ago +1

      Depth Midas is what you need, and you can reduce the denoise strength to get a consistent result. Also turn on Canny to really lock down the detail if the details are not really about "depth".

  • @maheshreddy8435
    @maheshreddy8435 1 year ago

    I get the error "URLError: " while trying to load the "kohya-ss" extension - please help

  • @adcarteriv
    @adcarteriv 1 year ago +1

    This looks so cool, but I keep getting errors; my computer may be too slow for this...

    • @hughhou
      @hughhou  1 year ago

      What are the specs? It can also be a problem with not installing on the C drive.

  • @AbuBakkar-zr6ei
    @AbuBakkar-zr6ei 2 months ago

    I followed everything literally multiple times but I get a grey display after rendering

  • @ArtBQ
    @ArtBQ 1 year ago

    Hugh, how can someone take a 360 photo using only an iPhone Pro Max? I'm shocked there are little to no apps for this!!!

  • @urbanhymne
    @urbanhymne 1 year ago +2

    Can it work on a Mac???

    • @hughhou
      @hughhou  1 year ago +1

      Yes. Stable Diffusion has a Mac M1 installation or web app. You might need to search around on the Internet. I will consider making a Mac version of the tutorial soon as well.
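
      A minimal sketch of the macOS (Apple silicon) route, following the steps documented in the Automatic1111 wiki and assuming Homebrew is installed:
      ----------------
      # Run in Terminal: install dependencies, clone the WebUI, and launch
      brew install cmake protobuf rust python@3.10 git wget
      git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
      cd stable-diffusion-webui
      ./webui.sh
      ----------------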

  • @anikiovirtualtours4594
    @anikiovirtualtours4594 1 year ago +3

    Alright. So I have tried to follow this step by step but am not getting results anything like yours. A few things: at 8:24 you highlight the safetensor file, but the Save dialog box that opens shows you saving a .ckpt file instead. My understanding is that they are the same, but safetensors have extra measures to prevent malicious code, so I assume it shouldn't matter - still, it's one difference I noticed while studying your install instructions. You do go out of your way to select the pickletensor instead of the safetensor for Protogen (which I also didn't do but will try), so maybe it's related?
    Essentially, using your prompt, settings, and image, I get a flat image showing what looks like gold embroidery that has nothing to do with the room or original image as far as I can tell. I've tried with my own images and the results have looked like water droplets or random boxes. So, I'm stuck.
    Anyway, thanks for this tutorial and for your anticipated assistance!

    • @pandoramics
      @pandoramics 1 year ago +2

      SAME thing here! I did all the steps several times, restarted everything, and I'm getting the same results as you.

    • @pandoramics
      @pandoramics 1 year ago +1

      Oh, and I also noted the .ckpt, but after a search everybody is saying they are the same, so... why!?

    • @anikiovirtualtours4594
      @anikiovirtualtours4594 1 year ago +2

      @@pandoramics I even checked and it's not April 1 yet! :)

    • @alfredrade
      @alfredrade 1 year ago +1

      I'm starting to think this was a joke. It's too good to be true. I really want it to be true!

    • @hughhou
      @hughhou  1 year ago

      Mmmm... strange. Did you download the extra docs in the description and use the exact prompt, seed, and method I am using? The result won't be identical - it depends on your depth map / 360 photo - but it shouldn't be that far off. You will need to add a negative prompt if you see unwanted results.

  • @kleber1983
    @kleber1983 1 year ago +1

    I wish someone could explain to me why, when I generate an image, the result is completely unrelated to the ControlNet reference image. What am I doing wrong? Anyone? Thx.

    • @Thepurplepotatocat
      @Thepurplepotatocat 1 year ago +1

      Same. I'll let you know if I figure it out...

    • @kleber1983
      @kleber1983 1 year ago +1

      @@Thepurplepotatocat thank you, man!

    • @Thepurplepotatocat
      @Thepurplepotatocat 1 year ago +1

      @@kleber1983 hey, the most important thing is to make sure you are on the img2img tab and not the txt2img tab.

    • @hughhou
      @hughhou  1 year ago +1

      As Arch West mentioned, you want to be in the img2img tab - sorry, I went too fast there. Also, make sure the weight is at 1 and the denoise strength is not crazy high - try from 0.5 and then go up until just before it becomes unrelated. Sorry I just saw this thread, and thanks Arch for jumping in

  • @jhonrymat
    @jhonrymat 1 year ago +1

    Hello, congratulations on sharing your knowledge. I am following the tutorial step by step, but when I generate, it shows me the following error (currently I have an RTX 3050). Could you help me?
    OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 4.00 GiB total capacity; 3.40 GiB already allocated; 0 bytes free; 3.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
    Time taken: 5.65s Torch active/reserved: 3488/3534 MiB, Sys VRAM: 4096/4096 MiB (100.0%)

    • @hughhou
      @hughhou  1 year ago

      It's a GPU out-of-memory error. You will need command-line arguments to reduce GPU memory usage. How much VRAM do you have?

    • @jhonrymat
      @jhonrymat 1 year ago

      @@hughhou 4GB :( Is that enough? Or what other option can I use? Help..!
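
      For this CUDA out-of-memory error on a 4 GB card, a minimal webui-user.bat sketch using Automatic1111's documented low-VRAM flags:
      ----------------
      rem --medvram trades speed for memory; --lowvram is for 4 GB cards
      set COMMANDLINE_ARGS= --xformers --lowvram
      rem Optional: reduce fragmentation, as the error message itself suggests
      set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64
      ----------------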

  • @thebigfortuno3329
    @thebigfortuno3329 1 year ago +1

    Is it possible to make it stereo?

    • @hughhou
      @hughhou  1 year ago +1

      Not yet. I am still trying to figure it out.

    • @VRVideographer
      @VRVideographer 1 year ago

      Would love to get it working with stereo

  • @noplannomention
    @noplannomention 1 year ago +1

    Not working with an M1 Mac at all?

    • @hughhou
      @hughhou  1 year ago +5

      Yes - I did not mention the Mac workflow here, but it does work on a Mac as well; it just requires some workarounds. This Masterclass is already way too long, so I will cover the Mac workflow in my next tutorial :)

  • @krzysztofskraburski
    @krzysztofskraburski 1 year ago

    why you shouting ?? thanks for knowledge but it's hard to follow when YOU SHOUT!!!!!!!!!! then NEXT !!! copy the PATH!!!!!!!!!!! and you are done THEN !!!!!!! NEXT !!!!!! omg man !!!

    • @hughhou
      @hughhou  1 year ago

      Thanks for the feedback. That is just how I talk lol.

    • @krzysztofskraburski
      @krzysztofskraburski 1 year ago

      @@hughhou just saying :D no hate, but the content 🔥🔥 🔥🔥 🔥🔥 , thank you for sharing your knowledge :)

  • @VRMOTION
    @VRMOTION 1 year ago

    Hi Hugh! Please make a lesson on how to style a VR180 3D video in Stable Diffusion