Interesting. I wish I could get these cards for $100 USD. Wouldn't mind having 2x or more for LoRA training and such. But I have a feeling I'd run into a lot of compatibility issues, and I'm not an expert. Not to mention the difficulty of finding a motherboard that supports it.
Quick question: why even get a P40 if it has to run in FP32 and has performance equivalent to a 3060, which can run in FP16 (half the VRAM usage)?
@repixelatedmc Because it's cheaper. I bought these cards at 820 CNY (~115 USD) per card, though the price has since doubled. Also, 24GB of physical VRAM makes it more future-proof.
By the way, using FP16 on a GPU doesn't reduce VRAM usage, it only makes inference faster, because VRAM requirements depend on the model you're running, and the SD models you find now are usually "pruned FP16". If your GPU doesn't support FP16, the model can still be loaded and run as FP32 by zero-filling the missing precision bits. I tested this back when the community still provided both FP32 and FP16 models, and using the FP32 model was actually a little faster.
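To illustrate what that zero-filling upcast means, here's a small NumPy sketch (the weight array is just a stand-in for a layer from a pruned-FP16 checkpoint, not any real SD loading code). Upcasting FP16 to FP32 is exact, since every FP16 value is representable in FP32, but the memory footprint doubles:

```python
import numpy as np

# Illustrative stand-in for one layer of a pruned-FP16 checkpoint.
w16 = np.random.randn(1024, 1024).astype(np.float16)

# On a GPU without native FP16 compute, frameworks upcast weights to FP32.
# The extra mantissa/exponent bits are effectively zero-filled, so the
# conversion loses nothing.
w32 = w16.astype(np.float32)

assert np.array_equal(w32, w16.astype(np.float32))  # exact, no precision lost
assert w32.nbytes == 2 * w16.nbytes                 # but memory use doubles
```

This is why an FP16 checkpoint doesn't save VRAM on a card like the P40: the card stores and computes in FP32 either way, so only the file on disk is smaller.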