aw, on the thumbnail I thought you had found a way to make better hair, but that was an AI image...
I'm just waiting on more RAM to start experimenting; from my research it seems you can use a CPU if you solve the bandwidth problem, which is the main technical bottleneck.
My goal is to try and get a CPU with an integrated GPU, because it shares system RAM, which means effectively unbounded VRAM, and latency is the least of my concerns. There is also a way to turn an SSD into RAM over PCIe that is competitive with DDR4, which opens the floodgates to terabytes of RAM instead of mere gigabytes (capped at 128 GB on Windows Home, uncapped on Linux).
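For the curious, the SSD-as-RAM part on Linux is basically a big swap file on the NVMe drive. A minimal Python sketch of my plan (the /mnt/nvme path and 256 GB size are just my placeholders, you need root, and it only wraps the standard fallocate/mkswap/swapon commands). To be fair it's swap, not literal RAM, so random-access latency is much worse than DDR4 even when sequential bandwidth looks competitive:

# swap_on_nvme.py -- minimal sketch: extend RAM with an NVMe SSD via a swap file.
# Assumptions (mine): Linux, root privileges, a fast NVMe drive mounted at
# /mnt/nvme. Path and size are illustrative, not recommendations.
import subprocess

SWAPFILE = "/mnt/nvme/swapfile"  # hypothetical location on the NVMe drive
SIZE_GB = 256                    # how much "extra RAM" to provision

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["fallocate", "-l", f"{SIZE_GB}G", SWAPFILE])  # reserve the space up front
run(["chmod", "600", SWAPFILE])                    # swap files must be private
run(["mkswap", SWAPFILE])                          # format it as swap
run(["swapon", SWAPFILE])                          # enable it
run(["free", "-h"])                                # verify under the Swap row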
The payoff is a sub-1000€ home AI server that can run big models. It doesn't matter if image generation isn't lightning fast; it can be automated to be faster than a human at arbitrary rendering quality, so it's a productivity win any way you cut it. Especially now that I've found a way to have LLMs do all the work by simulating a whole team talking to each other for better output.
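If anyone wants the gist of the "team" trick, a minimal sketch: a few personas take turns replying to a shared transcript, so drafts get critiqued and merged over several rounds. ask_llm is a stub (wire it to whatever local model or API you run), and the persona names and prompts are just my placeholders:

# writers_room.py -- minimal sketch of the "simulated team" idea.

def ask_llm(system_prompt: str, transcript: str) -> str:
    # Placeholder: replace with a call to your local model or API.
    # This echo stub only exists so the loop runs end to end.
    return f"({system_prompt.split()[1]} reply, given {len(transcript)} chars of context)"

PERSONAS = {
    "Writer": "You draft the next version of the text.",
    "Critic": "You point out the weakest parts of the latest draft.",
    "Editor": "You merge draft and critique into a cleaner version.",
}

def writers_room(task: str, rounds: int = 3) -> str:
    transcript = f"TASK: {task}\n"
    for _ in range(rounds):
        for name, role in PERSONAS.items():
            reply = ask_llm(role, transcript)
            transcript += f"\n{name}: {reply}\n"
    # Whatever the Editor said last is the team's final output.
    return transcript.rsplit("Editor: ", 1)[-1].strip()

print(writers_room("Describe a sub-1000 euro home AI server"))

Writer -> Critic -> Editor once per round is the simplest schedule; more personas or more rounds just cost more tokens.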
Also, 72 is the number of everything; if you have the question and the answer, you have everything. It was funny to see you bad-mouth millennials like me and then explain why millennial jokes work: the computer WAS a millennial, and the joke is that boomer civilization is frustrated by its own expectations (wait till you see Gen Z Obama in a triangle). It's not new either; ask Generation Jones about Dadaism and surrealism. Non sequiturs happen in societies where the rules become so overbearing that a new generation feels like rejecting them wholesale (see also 90s Japan). And god forbid being "creative" means following the rules, as Sarah Palin said, "epic fail" (don't know if you will get that obscure reference) :P
I need AI hair in rl.... hahaha!
Very interested in your 'writer's room' technique... Stealing that one for later!
this girl is very attractive... where can I get this model... the narrator I mean...
She is an original sculpt, imported as a (head) morph.
Not for sale.
@CutsceneArtist oh... too bad... beautiful though... nice job!