Light control is a massive problem with projectors. The screen area can't receive any light besides the projection image. I don't have enough light separation control in my current space.
Definitely something I'd like to do! It's either that or doing live green screen compositing like LTT, but that requires a lot more setup and hardware.
...and why do you ignore the negative prompt? Besides, fixing images is easy: just reuse the same seed and adjust the prompt and you'll be good. Append whatever you don't want to see, e.g. deformed face, disfigured, etc.
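For anyone who wants to try this outside a GUI, the same seed-plus-negative-prompt trick looks roughly like this with the Hugging Face diffusers library (a minimal sketch; the model ID, prompt, and seed are placeholders, not what was used in the video):

    import torch
    from diffusers import StableDiffusionXLPipeline

    # Load a public SDXL checkpoint (a local model file works the same way)
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # A fixed seed keeps the composition stable while you iterate on the prompt
    generator = torch.Generator("cuda").manual_seed(1234)

    image = pipe(
        prompt="a knight on horseback in a misty forest, cinematic lighting",
        negative_prompt="deformed face, disfigured, extra limbs, blurry",
        generator=generator,
    ).images[0]
    image.save("knight.png")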
Lots of fear-mongering on here about AI. It's coming whether you like it or not. Photography did not replace artists, and AI won't replace artists or photographers. It's a powerful tool, but your fear is irrational.
This is a faulty argument. Photography did open a new field, but it is based on the outside world. AI art (right now) is almost exclusively generative AI trained, in almost all cases, on works that already exist.
I am not sure the problem is AI “replacing” photographers so much as plagiarizing (or outright stealing from) them. AI platforms are engaged in widespread IP theft, which lowers profit margins for original creators.
I recommend OVH over Vultr because OVH offers more flexibility. Additionally, it appears that most of Vultr's bare metal servers are frequently out of stock, and upgrading memory or hard drives is often not possible.
@@afaulconbridge Yes, in France. Your data is not "protected from loss" on a hosting provider, so any lost data is on the customers for not making backups. If they had backups they would just restore on another OVH French datacenter and keep going.
What would self-hosted AI be good for? It must be soooooo slow. What good is it if it takes 2000 years longer to learn or do what it's supposed to do, compared to ChatGPT, which resides on supercomputers in the cloud?
For fun after reading this comment, I loaded the Invoke Docker container on one of my office servers, then ran the "chairless command center" prompt with the same model and resolution. CPU-only on a Ryzen 5950X with ECC DDR4-2666, and far from "2000 years", it took 20 minutes 25 seconds from clicking "Invoke" to finished image, without any GPU. 7840U laptop CPU: ~24 minutes.
Respectfully, I don’t see how “all content you see on this channel comes straight from my brain” squares with using AI generated assets in videos. Surely your current, completely uncontroversial use of stock assets fills this need perfectly adequately without the potential risks of contested copyright in the future (you claim those generated images are “open source” but this is not a legally settled matter yet at all), pushback from viewers, etc.
Agreed, I groan every. single. time. I'm watching some YouTube video and then suddenly the screen transitions to some AI-generated image, and all I can think about is 1. that my mind was immediately able to identify it as AI, and 2. how bad it looks. Really sad to see the number of content creators willing to use something that looks objectively awful as long as it saves them a bit of time or cash. It really devalues their content.
That's not what I'm talking about here though. I'm talking about scene compositing and virtual production. I also hate random crappy AI images thrown into a video for 'visual appeal' that don't actually add anything of value. I also talked about using AI as a starting point for developing artwork, not just straight up using whatever is generated. Getting a direction, and then fleshing it out in Photoshop or Unreal, rather than just using whatever the AI spits out.
@@CraftComputing Thank you for replying. I've been subscribed a while but had missed that video. I'm afraid though, having now watched it, that your stance is not compatible with my own and I will be unsubscribing, especially if your use of the tech is going to increase going forward.
Meh - why just images? You barely use or need images - you need something more like AI for lawyers. You need to make an AI monster machine and escalate the YouTube AI violence once and for all: run a variety of open source LLMs and then post updates every quarter. Realistically it may be 12-20 quarters before things really start to pop - aided and abetted by CXL/UALink, faster GPUs, accelerators, USB5, PCIe v6 - but when the AI bubble for the SMB sector gets here it will be even bigger than the big-tech AI we see now. More people will buy in, it will be more believable as the hardware and software stacks will be much more mature, and you will see other trends that help spur adoption: more sharing of models, model training data, and weights, plus real-time data ingestion and aggregation. You will literally be scraping and crawling the web 24/7 to add to your data and be able to gauge sentiment and value much better. There should be a bump in innovation, and we will probably see even more startups centered around AI tech. The future is bright, so I implore you to be a thought leader and jump in with both feet - don't marginalize yourself by focusing just on images.
AI at the moment is nowhere near that; it can do pictures, fiction text, and songs. Anything that actually requires precise understanding of a subject is OUT.
Voice actors actually do not have any rights to their own voice; voices are not protected. They have rights to their likeness, so if you try to claim they said something, or use their name, that can get you into trouble, but if you don't use their name, it's completely free to use. Again, people's voices are not protected; the person is... if you made a game and used all AI voices of "famous people" but never cited the people, that would be effectively completely legal.
Voice actors do not have the rights to their created characters, just like actors don't have rights to their portrayed characters. But ethically, telling a voice actor 'thanks for creating this character... here's your pay for the first 5 episodes, but we'll let this AI synthesis take it from here' is a dirty move with worse results.
@@CraftComputing People do not have rights to their voice in any way. And most voice actors sign away their rights by giving the copyright to the company. But regardless, if you wanted to go make a video game and wanted Tom Cruise in it in voice only, without using his name, you can do that. Voice is not protected; you cannot copyright the sound of someone's voice, and people don't own it. It's weird, but it's how the law is set up. Laws don't care about ethics; I'm just saying what is currently possible.
By nanoseconds, the average time for "I'm Jeff" is steadily increasing. If Jeff were immortal and continues Craft Computing into the far future, the intro will eventually take longer than the lifespan of a star. The very last sound ever heard before the heat death of the universe could become the nearly-endless, final F of our reality. Only then will Jeff finally be able to rest.
Sponsor spots are normally what I skip over, but I did appreciate the Spaceballs reference "ludicrous intelligence". Well done!!
I personally feel the backdrops of the server in your garage or the wall of trinkets are what add charm/personality to the videos. The "oh, I remember that location from that one video" would be lost with AI-generated backdrops.
Otherwise, you do you
I thought this was a very interesting video. As someone who enjoys the physical hardware and the building of computers, I think the software side is a little out of my skill range and interest range, personally. That said, as far as you using whatever program you want to create the art and produce the videos you want to produce, I support it. Appreciate your content, and I enjoyed this video.
Thanks for telling us about InvokeAI! :D
People’s reactions to AI often depend on how it impacts their own situation. Musicians were happy to use AI for cover art, but when it came to their music being used to train AI, the tune changed. The same goes for writers who didn’t mind AI-generated art but balked at AI writing. Content creators on YouTube are now facing similar concerns.
But here’s the thing: the issue isn’t AI itself, it’s how we use it and what we value. If we only focus on money and ownership, we’ll always be fighting over who gets paid. Yet, sooner or later, money might not even be the main value. As technology advances, it’ll be possible for everyone to generate anything tangible, and the old rules won’t apply.
The fear of a dystopian AI future is real, but we need to rethink how we approach AI. Instead of seeing it as a threat, we could focus on creating systems that benefit everyone, not just the ones who own the technology. Change is inevitable, but resisting it out of fear won’t stop it from happening. We need to look at the bigger picture and find a way to coexist with these advancements rather than just protecting the status quo.
No one has a problem with AI until it starts to impact their own situation. When image generators appeared, a lot of musicians were really happy to use them for cover art for their music and had no problem with the fact that artists' work had been used without permission to train those generators. But later on, when music generators appeared, the musicians were really angry that their music had been used to train those music machines. The same is true of writers who were happy to use AI art for their book covers but not so happy to discover that their books have been used to train AIs to write books.
Content creators on YouTube are now starting to get angry that their content has been used to train video-creation AIs, and will probably get more angry when they find themselves competing with AI-generated YouTube channels where all of the content - including the people - is totally AI-generated.
AI could have been a positive force in our society by empowering the ordinary person - but you don't empower people by taking their work and using it without permission or payment in order to create machines that replace them, which is what is happening now. In the end, the only people who are going to be 'empowered' by AI will be those who own and control it - almost everyone else will be rendered powerless and poor by this technology.
Orwell's 1984 will look positively benign compared to the dystopian reality that an AI-powered surveillance society will represent - not only will AI likely take your job, it will watch you 24/7 to make sure you adapt nicely and quietly to your new status as an unemployed nonentity.
Drum machines have never put a single drummer out of work.
@@CraftComputing Putting people out of work is the business model of OpenAI, and to be fair, they are completely candid about this - their intent is to create AI that replicates the intelligence of the average person - that's most of us. They call this AGI, or Artificial General Intelligence.
Ask yourself a simple question: what possible commercial value could a non-specialised artificial intelligence actually have? Why not build specialised AIs that excel in narrow domains? That's what you would do if you were in the tool-making business.
OpenAI is not in the tool-making business - they don't want to equip the average man with tools - they want to build an AI that directly competes with that man by replicating his flexibility and capacity to adapt: Artificial General Intelligence.
They may or may not succeed in this endeavour - but let's not kid ourselves here - let's at least have the moral courage to face the reality that if the AI developers get their way, then a lot of people are going to be made destitute as their hard-won skills are replicated by machines - and to add insult to injury, those machines will likely have been trained on the skills and work of those they will replace.
So it's important to support open source AI initiatives, so that you can keep control.
Big corporations will still do their thing, though. A scorched-earth strategy may not be the best way to do it.
@@CraftComputing I have worked in companies that replace entire departments with AI. We used to have 24/7 chat support that was replaced by GPT. It also now automates responses for emails that go to our support line. The AI decides if an email actually needs to be escalated to a human for review.
This is a technology where we are actively seeing job displacement. It's just a question of how far it will go.
@@CraftComputing Let's not forget that major corporations see the possibility of cutting their entire workforce and are actively moving in that direction. I am sure Lyft and Uber would love to replace all their human drivers with AI-driven cars.
Having a light saber duel on your set with your fridge in the background is part of the charm!
I was messing around with something similar to this a couple of months back. I liked some of the results I got, but I still needed to touch them up in Affinity. One thing I really noticed is that it struggles when you add multiple descriptors, especially with colour. So, to keep things fantasy-themed, a prompt like "A very pale male Elf, with long silver curly hair, golden catlike eyes, and wearing a tattered blue robe, is standing in a vacant vandalised alley, under the midnight sky" just wouldn't work, but if I gave that to an artist, they'd be trying to picture what I'm trying to explain, focusing on those little details (and adding their own flair).
While you are on the subject of self-hosted AI: do you have any experience with AI image recognition for home security systems, such as the Frigate NVR software and a Coral TPU?
I haven't played with image detection for 8+ years (so you can imagine how great it worked back then).
Personally, I would recommend trying it out and learning from that experience. Worst case scenario you’re out a couple hundred dollars but you have knowledge of what doesn’t work and why not to do it that way.
LTT has a video about that. I'm not sure if it was Frigate, but they did use Coral TPUs.
I know you already got an LG TV, but one thing to consider would be getting a matte-screen TV for the background. This would eliminate the glare from the screen and reduce the need to adjust or purchase additional light sources.
One of the things I noticed with the AI images of the horse is that the reins always go behind the horse's neck, when there should be one on each side of the horse's neck. A minor nitpick, I know. Very good video though!
YES!!! I am currently setting up my own production system to generate all kinds of art and stuff.
Kudos to you Jeff. Also just moved to PDX. So much better than I thought it would be. Now our heat wave is over
I am assuming you performed the image creation on your monster of a machine. What would be really interesting is having a watt meter running and taking readings of how many watts a particular image creation and/or edit consumed. I've read you can create images on a small rig, but it may take hours. You have a fast rig, so watt consumption could be a way of assessing, and possibly estimating time, when comparing your fast rig to something less.
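A software-only way to approximate that for the GPU side (it won't capture whole-system draw the way a wall meter does, and it assumes an NVIDIA card with the pynvml bindings installed) is to poll NVML while a generation runs:

    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

    samples = []
    t_end = time.time() + 60  # poll for roughly the length of one generation
    while time.time() < t_end:
        samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)  # mW -> W
        time.sleep(0.5)

    avg_watts = sum(samples) / len(samples)
    # average watts * seconds / 3600 = watt-hours consumed during the run
    print(f"avg {avg_watts:.0f} W, ~{avg_watts * 60 / 3600:.2f} Wh over 60 s")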
I'm interested. I particularly liked how you didn't try to dress things up and showed the actual state of the tech for folks with real situations.
26:28 Jeff has the "Disney Volume at home" too (Disney has a large screen room thing they use as a "better greenscreen" in a similar way, they call it "Volume")
Do I look like I have LED video wall money :-D
@@CraftComputing Few people have the money that the almighty mouse has.
Finally a tutorial I can run on my hardware. Got it running in an Unraid Docker container. Only using a 1080 for the GPU, so it's a little slow, but it's usable. Not using it for any content creation but just messing around with it and having some laughs at the current state of AI.
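For anyone wanting to reproduce this outside Unraid, a minimal sketch of running the container with GPU access (the image name, tag, port, and volume path are assumptions; check the InvokeAI docs for the current ones):

    docker run -d --name invokeai \
      --gpus all \
      -p 9090:9090 \
      -v /opt/invokeai:/invokeai \
      ghcr.io/invoke-ai/invokeai:latest

The --gpus all flag requires the NVIDIA Container Toolkit on the host; the web UI should then be reachable on the mapped port.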
This is amazing! More content like this would be appreciated! Owning the means of production is a very exciting opportunity coming from AI. I've tried fooling around with home AI servers but never this image creation setup. My stable diffusion stuff was total garbage, this gives me hope.
So so cool... This was an awesome ep. Thanks heaps...
Any reason you didn't go with NVLink on the two cards?
This is basically the Mandalorian model: just put the people in the scene, make the scene digital, and avoid chroma key (CK) like the plague.
Having worked with CK/LK, I wouldn't wish it on you. It's a pain, it's limiting, and, philosophically, I really hate capturing something that is unusable without substantial processing.
Anyway, it's easy enough for a static camera; the only things you really have to worry about are matching the perspective of your lens to the perspective of the background, and the refresh rate of the TV.
You do also have to worry about screen glare a little bit. Simple to get around if your screen is big enough, because you can angle the screen, and then perspective warp the image your screen is displaying to push the glare out of frame.
Nice part is, extra bigass TVs are pretty cheap now if you don't actually care much about image quality, which in this case you don't. You may also be able to get away with a short-throw projector in many instances.
It would be interesting to see how all this AI works on a mid-range gaming machine - the type of machine most people may already have at home anyway as a new creator. Like a getting-started style video :)
This is great; it showcases a lot of how AI has progressed and can be used with creative solutions for your channel. Please keep going - I would love to see more.
I understand the appeal of generating backgrounds and music. I've tried these systems, but haven't used anything in my videos thus far. I'm tempted to actually use it as well.
Video creators are already not immune. Some of my transcripts from my larger YouTube channel are in EleutherAI's 825 GiB "The Pile" dataset, which I didn't agree to. It's being used by companies like Apple, Nvidia, and Salesforce, according to news articles I saw in July.
At this point I don't think there is any going back, so I'd rather see it advance quickly to reach real AGI. That way social and economic change must be made, but it is frustrating to see creative works targeted first because it's the easiest type of AI to create. In my opinion, the current systems are more like an advanced form of web search than an actual intelligence.
Try using AMD's Vega GPUs. HBCC allows you to use RAM as VRAM. Using Optane PMem, or just 4 Optane NVMe SSDs in a pool as swap space, can get you enough VRAM to run Llama 405B, the full 810 GB of it.
I'd recommend a Vega 2 GPU like the MI60 if you are just using PMem or RAM, because PCIe 4.0 x16 is the bottleneck. If you are using a Vega 64/56 or the Radeon VII, which is Vega 2 but for some reason doesn't run at PCIe 4.0, then Optane NVMe is just fine.
It's stupid, and totally insane, to run such a big model, but token quality matters more than tokens per second, so I made the jankiest setup ever just to say I could.
If I could run it on a raspberry pi, I would.
A VR set is easy: use two cameras on different layers, one for you and the other only on a green screen in the corner. When you move the second camera, the background moves with the camera (left, right, up, down) and you don't.
You will most likely never read this comment but I'll try anyways:
Can you please show how to install it on an ubuntu server?
Because I'm having a *REALLY* hard time following the very unclear instructions on their GitHub page.
They say to install Python 3.11, but... how exactly? Even after activating the deadsnakes repo I can install it, but pip is missing, and there's no mention anywhere of how to get the proper version. Also, python3 still returns 3.12.x.
Wish they had proper, clear, full instructions.
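In case it helps anyone stuck on the same step, a minimal sketch of the usual deadsnakes route (the package names are standard Ubuntu ones, but which Python version InvokeAI currently supports is an assumption, so check their docs first):

    # Install Python 3.11 alongside the distro default
    sudo add-apt-repository ppa:deadsnakes/ppa
    sudo apt update
    sudo apt install python3.11 python3.11-venv

    # python3 still points at the distro default (e.g. 3.12); call 3.11 explicitly
    python3.11 -m venv ~/invokeai-venv
    source ~/invokeai-venv/bin/activate

    # pip is created inside the venv, so no separate pip package is needed
    pip install invokeai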
I installed InvokeAI on my freshly set-up Windows 10 machine with a P4000 GPU just fine.
But I really would like to use Ubuntu Server, and like I said, the official instructions don't give all the details. It's a bit frustrating.
I'm curious. Have you been playing around with LoRAs to see if they can give you some cleaner results?
Adding something like this has been something I've wanted to do in my home lab. Two questions:
I have a Dell T440 running Proxmox with a lot of moderate-speed cores and more than enough RAM, but my NVIDIA P1000 is passed through to a Plex VM. Can that be shared easily enough between VMs, or is a GPU even essential to doing this?
What's the training process like? My sailing club has thousands of photos that I'd want to train it on to possibly use for marketing content. Can you train it to know the difference between two different but similar classes of boats? Formula 18 vs Hobie 16, if you feel like looking up what those are.
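For what it's worth, telling two similar boat classes apart is a classification job rather than a generation one; a minimal fine-tuning sketch with torchvision (the folder layout, class count, and hyperparameters are placeholder assumptions):

    import torch
    from torch import nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # Photos sorted as boats/formula_18/*.jpg and boats/hobie_16/*.jpg
    ds = datasets.ImageFolder("boats", transform=tfm)
    loader = torch.utils.data.DataLoader(ds, batch_size=16, shuffle=True)

    # Start from a pretrained backbone and replace the final layer
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two boat classes

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

A few hundred labelled photos per class is usually plenty for a two-class problem like this, and it will run (slowly) even without a GPU.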
Would be interested to see the budget version of this. How much capability would you lose if you were using P40s or P100s? Is it just a matter of extra time to generate the image, or are there features that are unavailable on the old hardware? Would love to play with this, but can't justify $3K just for the A5000s.
Yeah this is a subject that can spawn a long term series like his "we have VDI at home" series, going around and trying out various hardware configurations
To give context, a P4000 Quadro takes around 2-3 minutes to render a standard-resolution SDXL image with no LoRA, a bit longer with one, and you're talking about 10 minutes with ControlNets. Those cards are roughly the same era. VRAM counts for more, as swapping models can be crippling to performance; just like LLMs, AI image gen loves VRAM - the more the merrier - so it is sometimes better to get an older card with more VRAM than a modern card with less. The main problem is old NVIDIA cards are still damned expensive, and unless you get older supported AMD cards, you're stuck with the RX 7000 and W7000 series for the latest ROCm support. That said, a 7900 XTX is close to 4090 performance at a fraction of its cost.
You can get a lot done with a consumer RTX 3000/4000-series card (4000 much better). I've run heaps of stuff on a 3070, 3080, and 4090; it is much better price-to-perf unless you go for really big models that can't fit in VRAM (there are ways around that too, like quantization, but you definitely start losing quality).
Love the video concept and what you are doing. Thanks for venturing into new ways to use the technology. I have my first AI running locally for book writing, but your video shows there are so many types of AI and uses. Good content, and new sub!
Since I only listen to your vids, it doesn’t matter to me either way… although I would watch a de-aged Jeff do the beer reviews just to see how YouTube and viewers not in the know would react.
I have been using AI generation to complement my own photographic sets for over a year with no complaints from my clients. I create fantasy scenes for clients - think kids as fairies or knights, or adults who LARP. I have even done re-enactments of battle scenes from history, compositing people onto AI-generated battle scenes. Whilst I understand the concerns, using AI in this way is not dissimilar to kit-bashing, as artists have done for years, assembling fragments of other images to form a greater whole, which is legal. I also trained my own models on my own imagery, from models who have signed releases over the years.

As a photographer and artist, I probably understand the law better than most, and I expect to see the usual haters and trolls try to lambast you, but as you state, it is a tool, and a powerful brainstorming and creative one, to help with mental blocks or enable you to produce content. For example, for the kids' sets it would cost me thousands to build a set once or twice a year for them to come and do the shoot; now the parents can come to me at any time of the year for a specific look. If the kid wants to be a knight slaying a dragon, no problem; or the girl who wants to be a mermaid - done. In the old days I would have had to choose between spending hours in Photoshop or days in the workshop for what I can now do in the blink of an eye.

InvokeAI is probably the most polished of the UIs, even if it does not always integrate the latest and greatest; it makes up for it by being stable and reliable. It is arguably the most intuitive UI, especially when compared to A1111 or Comfy.
THANK YOU for mentioning kit bashing! It's where I feel the argument of 'stolen images' really falls apart. If a model has been trained on 100,000 images of a mountain, and you ask it to draw a mountain, it's not going to pick one of those images and copy-paste it in. It's going to draw its interpretation of a mountain from that training data... thus a unique and derivative work.
@@CraftComputing I see a lot of people complaining about AI art without actually understanding the workflow of digital art. It reminds me of the fears years ago over Photoshop stealing jobs (it created whole new industries for those who could adapt), or even earlier, the introduction of synthesisers and digital music, which led to MP3s. There is a great fear over anything AI, like it is some sentient creation that knows everything, which in reality has no bearing, because AI is dumb as a rock outwith its shackles.
I thought for sure this was going to be making your own Ollama server, as I've seen other YouTubers do. Interesting stuff. Do you have any thoughts on Pinokio? Keep up the good work!
I haven't played around with Pinokio, but it's on my list to check out :-)
I found getting Stable Diffusion to run on Windows a small amount harder than a normal install, but once I told off my computer's antivirus it worked like a charm.
JUST STAY YOU. I know all the BS is a fad now, but what doesn't change is: just be YOU.
What part of this video said I was changing anything about me?
Ubuntu 22.04 is actually still supported in the Ubuntu LTS lineup until 2026 under standard support, so any of my fellow linux bros would be wrong to come at you :)
That said, if it works with 3.11 it would probably work with 3.12, but they probably have not tested it; 3.12 is still, relatively speaking, new.
lol you finally got the J.K. Simmons persona into an ad 😅
I knew his voice sounded familiar!
What’s happening with the new studio project?
Financial and permitting hell.
@@CraftComputing Ah typical, permits really do suck the joy out of anything. I wish you the best of luck!
I need Jeff to ask for pictures of Spiderman with that AD voice
The use of generative AI images genuinely bothers me.
I'm ignoring the ethics in this case - it is the fact that I apparently have a super low threshold for the uncanny valley. I'm saying "apparently" as I have several friends that love messing around with generative AI, and anything beyond text just gives me the heebie-jeebies. They're obviously fine with it and the whole "just ignore the hands" bit.
My brain immediately goes with the "kill it with fire, right now!" approach to all of it. And yes, that includes the examples in this video. I actually needed to look away from the video at times.
So... perhaps it isn't as seamless as you might expect? I'm probably an extreme case of this, I'll admit.. but I can't be the only one.
Heh, the "dreamy" kinda abstract look. That's why I hate the old 70-80s scifi book covers, they were made by human artists but they look like AI art and it's weird.
Most people can't tell unless it's really bad, and as use increases and becomes the norm, it's gonna become the opposite. An example is autotune in songs. Pretty much any pop song in the last 20 years uses autotune to force the singer's voice to "follow the song" artificially. This is very obvious to any artist, but nowadays people will complain if a song does not have autotune, because it sounds different.
Hmmm... I wonder if it can be integrated into Nextcloud 🤔
My concern is what is used to train the models made available for InvokeAI. There was little discussion here, and a brief Google search failed to turn any up.
Invoke is a frontend to Stable Diffusion, so they use publicly viewable images for their training
That’s just a tool, the models are separate… but yeah, I don’t think there’s any model nowadays that is trained ethically :(
@@Flackon exactly my concern. Was consent sought before using those pictures? That's the big question here and I suspect I already know the answer
Can you please make a step-by-step guide on how to set it up? Thank you.
The InvokeAI instructions are fairly bulletproof for the vast majority of user installations; if you have AMD then it is a bit more convoluted, and you'd probably be best popping over to their Discord. The InvokeAI installer is probably the best and easiest I have seen out of any in the AI community, whether it's AI gen or LLMs, as most projects leave this as an afterthought.
Back in the days of the www, the way one got the creative content you couldn't/wouldn't create for yourself was through partnerships and friendships with other talented folks you found in real life or on the internet.
As cool and promising as Algorithmic Intelligence is for cottage-industry/independent creators etc., I can't help feeling a little bit like it's really just a mechanism to empower already lonely people to avoid making friends or building the types of communities which might actually change things for the better.
I've been using Stable Diffusion for months to create images on my own hardware.
Would be scary fun to mess with voice-to-text, then translation, then back to voice, so you could reach a non-English audience more easily :D
I've been using SD 1.5 with Automatic1111 for a little over a year now. I haven't moved to SDXL yet, due to lack of resources. Being retired kind of limits how much hardware I can afford, so I'm currently running a Ryzen 5 5600X with an Nvidia GTX 1060 with 6 GB of VRAM. It's barely enough to do what I want to do... An RTX 3060 would help immensely and allow me to upgrade to SDXL and better base models. I can totally relate on the approximately 80% success rate and the cursed images.🤣 Some are just funny, but there are others that are real nightmare fuel!🤯🤮 Like someone else mentioned, a lot of the quality depends on the prompts (and negative prompts) to get rid of artifacts and oddball anatomical glitches, like mutated fingers, toes, extra limbs, etc. Different LoRAs that steer the image in various directions are a big help as well, and you can use different 'weights' with those to choose how much the LoRA affects the overall image. You can find a lot of those on CivitAI for most any situation.😉
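For anyone scripting this instead of using A1111, the LoRA 'weight' mentioned above maps to a scale parameter in the diffusers library; a minimal sketch (the checkpoint ID is a commonly referenced SD 1.5 one and the LoRA path is a placeholder - swap in whatever you actually downloaded from CivitAI):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Load a LoRA from a local .safetensors file (placeholder path)
    pipe.load_lora_weights("./loras/fantasy_style.safetensors")

    image = pipe(
        prompt="a castle on a cliff at sunset, fantasy style",
        negative_prompt="mutated fingers, extra limbs, blurry",
        cross_attention_kwargs={"scale": 0.8},  # the LoRA 'weight'
    ).images[0]
    image.save("castle.png")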
AMD would be a better choice as you get more VRAM for your money and it is just as compatible with AI generators these days, at least with the 7000 series.
@@ThirdEnvoqation At the time I installed SD, it was still a trick to get any kind of AMD GPU to work right with SD, and Nvidia was fully supported, so I went with the easy choice!😄
@@danw1955 Aye, it is only in the last few months that they've supported Windows, through ROCm 6.1 via WSL; Linux support has always been solid and arguably better than NVIDIA's on Linux. NVIDIA is screwing the pooch with GPU prices and their artificial restrictions on VRAM.
15:43 As noticed by other YouTubers that have dabbled in AI images (Shadiversity among others, on his secondary channel SHAD AI), the AI training on swords and swordfighting is very bad, so to get anything resembling a decent sword and a correct swordfighting pose you have to feed the AI sketches to use as a template.
The same goes for guns or any other specialist tool.
Published this right as Flux is taking over the stable diffusion market. Great stuff still!
Now we have two choices instead of one!
Flux Dev comes with a non-commercial license, so I don't think he could use it for YouTube if it's monetized. But Flux Dev is REALLY good.
@@rodrimora He could get in touch and ask for a sponsorship or something; they probably wouldn't mind some advertising.
Build The Volume in your garage?
One day I'll have LED Wall Volume money :-D
So, horse tells: the saddle is not period-accurate and has weird extra stuff, including metal stirrups, and the left stirrup is too far forward. The horse's neck is too thick, the front right leg is too skinny, the mane has something wrong with it that I can't quite identify from your video, and the bridle and bit are modern as well.
You should see if Hailo will send you a Hailo-10H AI compute unit. It has 40 TOPS of compute. How many TOPS does your system have?
Honestly, I hate (with a strong passion) the current AI trend! Anytime AI is mentioned I will literally groan, roll my eyes, and move on to something else :D
However, this is really freaking awesome! I am an advocate for running things locally in a self-hosted environment, and I just installed this on my local PC. It works great, and it's local!
Thanks for sharing this! And I would not be opposed to seeing more AI content similar to this!
Thanks for actually listening. AI is an amazing tool that has both amazing and terrible uses... just like art.
nice... pretty nice!
Most GUIs can do inpainting. SD.Next and Automatic1111 can both do it, and both have versions that can run on AMD.
The difference between SD.Next and InvokeAI is night and day when it comes to UI. SD.Next is great for bringing in the latest cutting-edge models, but InvokeAI values stability over rushing out the latest fad. Also, InvokeAI's Unified Canvas is far more than just inpainting: it's part of their regional-controls system and how they integrate ControlNet into the core systems, and the InvokeAI team is actively looking to introduce layers soon. SD.Next is certainly borrowing concepts from InvokeAI, but the new interface is rough; it's laggy, lacks any finesse, and is a royal pain to use at times. I know the new UI for SD.Next is still in development, but it has a long way to go, and I won't even get into the A1111 interface.
InvokeAI also runs on AMD under Linux, and users have reported getting it to work on Windows through WSL, just like SD.Next, for the supported ROCm 6 cards.
Also, Deschutes gateway, ouch… right in the beer belly.
7:17 YouTube would sometimes disagree with that statement.
Wow 46 seconds ago I must be special! Hi Jeff!
Well, you should test out the new Invoke 5.x. It's a giant step ahead of Invoke 4.x.
Nice work
I wonder how it would work with an M40 or P40 GPU.
It would be nice if you could make a video about AI content generation with a list of GPUs.
Keep going!
I've tried to keep generative AI out of my house, but your explanation of your uses for it makes all the sense. A two-person operation cannot do the things of a 100-person operation. I'm good with your uses.
Using some negative embeddings can help with the hands. Inpainting the hands can help too; there are negative embeddings and detailers out there for hands and feet. Upscaling works wonders. Invoke is fun. I still use it, but I prefer ComfyUI. I'm having a lot of fun playing with it.
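For the inpainting part, here's a minimal diffusers sketch; the file names are assumptions, and Invoke's Unified Canvas and ComfyUI wrap the same operation in a UI:

```python
# Inpainting a masked region, e.g. mangled hands (diffusers sketch).
# Input and mask file names are placeholder assumptions; the mask is
# white where the image should be regenerated.
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init = load_image("portrait.png")
mask = load_image("hands_mask.png")
image = pipe("detailed hands, five fingers",
             image=init, mask_image=mask).images[0]
image.save("portrait_fixed.png")
```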
Use layered PNG files for your backgrounds.
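If you go the separate-layers route, compositing transparent PNGs back together is nearly a one-liner with Pillow. The file names here are hypothetical, and both layers must be the same size:

```python
# Compositing transparent background layers with Pillow. File names are
# hypothetical; alpha_composite requires both images to be RGBA and equal size.
from PIL import Image

background = Image.open("set_background.png").convert("RGBA")
foreground = Image.open("desk_props.png").convert("RGBA")
Image.alpha_composite(background, foreground).save("backdrop.png")
```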
Copying Linus might not be the best idea; they went from talking about interesting tech stuff to "watch us...".
This is great! Thanks for sharing! I can't wait to test this on my homelab.
1:45
I wake up.
AI, or technically "Algorithmic", set design... interesting. Most of my use of green screen is simply to place myself inside of a frame for streaming and for floating subtitles, but I can see how a film set might be useful. Also, I feel it's slightly more appropriate here since it seems to be limited use, more for out-of-focus set pieces and NOT for primary content OR merchandise. I feel like the possible use of AI to create art for sale OR for merchandise MIGHT get sketchy pretty quickly, because you get CLOSER and CLOSER to that "derivative work" issue. Set pieces seem slightly removed/less of a potential issue.
Still I always have the question in the back of my mind of specifically "Where did the image data come from?"
Oh my sweet summer child
@@jakehunter1831 what a helpful comment. Thank you for sharing your incredible knowledge on the topic and especially for your time.
How about a projector instead of a TV?
Light control is a massive problem with projectors. The screen area can't receive any light besides the projection image. I don't have enough light separation control in my current space.
I had the same thought
@@CraftComputing So, future plans or ideas for when you get the studio up?
Definitely something I'd like to do! It's either that or doing live green screen compositing like LTT, but that requires a lot more setup and hardware.
...and why do you ignore the negative prompt? Besides, fixing images is easy: just reuse the same seed and adjust the prompt, and you'll be good. Append what you don't want to see, e.g. deformed face, disfigured, etc.
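In diffusers terms, pinning the seed while iterating on the prompt and negative prompt looks like this. The model ID, seed value, and prompt text are illustrative assumptions:

```python
# Reusing a fixed seed while iterating on prompt and negative prompt
# (diffusers sketch). Model ID, seed, and prompts are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(1234)  # same seed -> same composition
image = pipe("portrait of a medieval knight",
             negative_prompt="deformed face, disfigured, extra fingers",
             generator=generator, num_inference_steps=30).images[0]
image.save("knight_portrait.png")
```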
This is really great.
I want to be able to have the original voice of the actor from a foreign film read the sub title track in my language. Any software to do this?
Check out github.com/coqui-ai/TTS
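That repo's XTTS v2 model can even clone the original speaker from a short reference clip, which is close to what you're describing. A minimal sketch, where the file names and language code are assumptions:

```python
# Voice-cloned TTS with Coqui's XTTS v2. The reference clip and output
# file names are placeholder assumptions.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Translated subtitle line goes here.",
    speaker_wav="original_actor_sample.wav",  # a few seconds of the actor's voice
    language="en",
    file_path="dubbed_line.wav",
)
```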
My AI image generation horror episodes were trying to create body pillows in various animal shapes.
Now this, this is exciting!
As an artist I protest the use of AI. It takes work away from artists and musicians.
Did an ai write this?
@@ikeshpack9644 your mom wrote it.
Gotta be thankful of self-hosted AI, we finally have an excu.. *AHEM* good reason to get all those old compute cards without display.
Yeah, go into Unreal Engine set setup.
Lots of fearmongering on here about AI. It's coming whether you like it or not. Photography did not replace artists, and AI won't replace artists or photographers. It's a powerful tool, but your fear is irrational.
The photographer I replaced in this video was.... me.
This is a faulty argument. Photography did open a new field, but it is based on the outside world. AI art (right now) is almost exclusively generative AI trained, in almost all cases, on works that already exist.
@@RawmanFilm Not sure what you mean; landscape paintings and landscape photography are both a thing.
I am not sure the problem is with AI “replacing” photographers, but plagiarizing (or outright stealing from) them. AI platforms are engaged in widespread IP theft, which lowers profit margins for original creators.
AI replaced the artist that Michaels used to create that god-awful snow witch with the three-legged wolf and watermark stamps 😂
I like that you do tutorials on these things and go into detail. It really helps me get started and learn on my own.
No
Maybe a green screen can help?
Perfect use case
please get a green screen
convincing? i think not
I recommend OVH over Vultr because OVH offers more flexibility. Additionally, it appears that most of Vultr's bare metal servers are frequently out of stock, and upgrading memory or hard drives is often not possible.
Didn't one of OVH's data centres burn down and lose a bunch of customer data? One-off events could happen to anyone, but it's not a great history...
@@afaulconbridge Yes, in France. Your data is not "protected from loss" on a hosting provider so any lost data is on the customers not making backups. If they had backups they would just restore on another OVH french datacenter and keep going.
@@afaulconbridge You're correct. I was just using OVH as another option. I personally have 3 different providers I use.
Don't trust the "Real Ale Wanker". He will be drinking AI Ale soon. Mark my words!
Michael Jackson on a horse?
I think you should shoot some scenes in Moe's Tavern... 3D realistic rendered, of course. "One Flaming Moe, please!"
Long-term, I'd love to start doing 3D rendered scenes.
lame content
you spelled 'comment' wrong
@@CraftComputing damn got me
What would self-hosted AI be good for? It must be soooooo slow. What good is it if it takes 2000 years longer to learn or do what it's supposed to do, compared to ChatGPT running on supercomputers in the cloud?
Each image takes around 10s on my RTX A5000. Similar results from my 4070 and 4080 Super as well.
The compute-intensive process is the training; you can often download packages of pretrained weights to use.
For fun, after reading this comment, I loaded the Invoke Docker container on one of my office servers, then ran the "chairless command center" prompt with the same model and resolution, CPU-only, on a Ryzen 5950X with ECC DDR4-2666. Far from "2000 years", it took 20 minutes 25 seconds from clicking "Invoke" to finished image, without any GPU.
7840U laptop CPU: ~24 minutes.
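If you want to reproduce that kind of CPU-only comparison outside the Invoke container, here's a rough diffusers timing sketch. The prompt and step count are assumptions, and absolute numbers will vary wildly by hardware:

```python
# Rough CPU-only generation timing with diffusers. Prompt and step count
# are illustrative; this is not the Invoke container itself.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.to("cpu")  # full fp32 on CPU

start = time.perf_counter()
image = pipe("chairless command center", num_inference_steps=30).images[0]
print(f"CPU render took {time.perf_counter() - start:.0f} seconds")
image.save("cpu_test.png")
```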
Respectfully, I don’t see how “all content you see on this channel comes straight from my brain” squares with using AI generated assets in videos. Surely your current, completely uncontroversial use of stock assets fills this need perfectly adequately without the potential risks of contested copyright in the future (you claim those generated images are “open source” but this is not a legally settled matter yet at all), pushback from viewers, etc.
Go watch my AI Ethics video.
Agreed. I groan every. single. time. I'm watching some YouTube video and then suddenly the screen transitions to some AI-generated image, and all I can think about is:
1. that my mind was immediately able to identify it as AI, and
2. how bad it looks.
Really sad to see the number of content creators willing to use something that looks objectively awful as long as it saves them a bit of time or cash. It really devalues their content.
That's not what I'm talking about here though. I'm talking about scene compositing and virtual production. I also hate random crappy AI images thrown into a video for 'visual appeal' that don't actually add anything of value.
I also talked about using AI as a starting point for developing artwork, not just straight up using whatever is generated. Getting a direction, and then fleshing it out in Photoshop or Unreal, rather than just using whatever the AI spits out.
@@CraftComputing Thank you for replying. I've been subscribed a while but had missed that video. I'm afraid, though, having now watched it, that your stance is not compatible with my own, and I will be unsubscribing, especially if your use of the tech is going to increase going forward.
@@tormaid42 "My youtube subscriptions list have to all agree with everything I believe now and forever or I'm unsubbing."
Meh - why just images? You barely use or need images - you need something more like AI for lawyers - you need to make an AI monster machine and escalate the YouTube AI violence once and for all. Run a variety of open-source LLMs and then post updates every quarter. Realistically it may be 12-20 quarters before things really start to pop - aided and abetted by CXL/UALink, faster GPUs, accelerators, USB5, PCIe v6 - but when the AI bubble for the SMB sector gets here, it will be even bigger than the big-tech AI we see now. More people will buy in, and it will be more believable, as the hardware and software stacks will be much more mature, and you will see other trends that help spur adoption: more sharing of models and model-training data, weights, and real-time data ingestion and aggregation. You will literally be scraping and crawling the web 24/7 to add to your data, and you'll be able to gauge sentiment and value much better. There should be a bump in innovation, and we will probably see even more startups centered around AI tech. The future is bright, so this is why I implore you to be a thought leader and jump in with both feet, and don't marginalize yourself by focusing just on images.
AI at the moment is nowhere near that. It can do pictures, fiction text, and songs. Anything that actually requires precise understanding of a subject is OUT.
I like how AI generated more respectable female fantasy armor than most chainmail bikini artists out there lol.
Voice actors actually do not have any rights to their own voice; voices are not protected. They have rights to their likeness, so if you try to claim they said something, or use their name, that can get you into trouble, but if you don't cite or use their name, it's completely free to use. Again, people's voices are not protected; the person is. If you made a game and used AI voices of "famous people" but never cited the people, that would effectively be completely legal.
Voice actors do not have the rights to their created characters, just like actors don't have rights to their portrayed characters. But ethically, telling a voice actor 'thanks for creating this character... here's your pay for the first 5 episodes, but we'll let this AI synthesis take it from here' is a dirty move with worse results.
@@CraftComputing People do not have rights to their voice in any way, and most voice actors sign away their rights by giving the copyright to the company. Regardless, if you wanted to make a video game and wanted Tom Cruise in it in voice only, without using his name, you could do that. Voice is not protected: you cannot copyright the sound of someone's voice, and people don't own it. It's weird, but it's how the law is set up. Laws don't care about ethics - I'm just saying what is currently possible.
AI is the future.