Thanks again to Manscaped for sponsoring this video. You can get 20% off + free shipping and two free gifts with code CRAFT at mnscpd.com/CraftComputing
Jeff, this is an absolutely awesome video. But why wouldn't you use Aster V7 and Spacedesk and let the windows scheduler allocate resources dynamically? You can have even more heads and won't need as many graphics cards
I am able to pass through Nvidia GPUs in ESXi by adding this line to the VM configuration: key = hypervisor.cpuid.v0, value = FALSE
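For reference, in the .vmx file itself that setting ends up as the single line below (a sketch; you can also add it via Edit Settings > VM Options > Advanced > Configuration Parameters):
hypervisor.cpuid.v0 = "FALSE"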
seriously how did that sponsorship happen and how many takes did you need?
Can I get the phone number that called you at the end of that?
Wait, so an ear hair trimmer is going to be useful to me someday soon?
Honestly dude, I have had a homelab for like 6 years and this channel has slowly become my favorite. Thank you so much for your contribution to this community
Agreed!
what is your home lab for?
I used to run ESXi and a bunch of Windows VMs just to learn about VMs; now I'm on Proxmox and looking to do a similar GPU pass-through setup as Jeff. I've done a bunch of stuff really. Minecraft servers, NAS, Plex, you name it. Lol
I've had a home lab since the early 1990s and I'm enjoying the videos. Always something new to learn.
@@GabrielFoote what made you switch from esxi to proxmox?
Finally NVIDIA is doing something good for us homelabbers :D Regarding the only 50% success rate of passthrough: at 7:59 you echoed the text line to the file vfio.conf. At 8:35 you echoed the other PCI IDs to the same file. AFAIK if you only use one > it will overwrite the previous content; >> would append, I think. Try merging the two lines into one, maybe it'll work then :-)
echo "options vfio-pci ids=10de:1f82,10de:10fa,10de:1b81,10de:10f0 disable_vga=1" > /etc/modprobe.d/vfio.conf
Lol, I was about to post a comment about just that before double checking if anyone else had noticed it.
Also, it helps if the device id is the correct one: 1b80 instead of 1b81
I hope he sees this comment. I'm still watching this video so I don't know yet if it worked. :)
Also, for the GTX 1080 the ID is 10de:1b80, not the 10de:1b81 used in the video.
No, you would have to use >> instead of >, or go and edit the file directly in nano or vim.
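For anyone following along, here's what the merged command would look like with the device ID fix from the replies folded in (a sketch based on the IDs shown in the video; run lspci -nn to confirm yours, and rebuild the initramfs so Proxmox picks the change up at boot):
echo "options vfio-pci ids=10de:1f82,10de:10fa,10de:1b80,10de:10f0 disable_vga=1" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all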
It's simple, nVidia enabled it just for you.
Or they stopped disabling it
@@pkt1213 This is true.
Their beta effort includes flipping a single bit. What an incredible effort from NVIDIA.
@@TheViettan28 This does not surprise me.
Better late than never.
Your 690s won't pass through because you overwrote their device IDs at 8:40 (instead of adding the 1080s' IDs to the 690s' IDs, you replaced the 690s' IDs with the 1080s').
Just saw that myself and was hoping there'd be a comment on it. He put in the wrong device IDs :(
I came here to say the same, probably why it didn't work.
Poke poke, hope he notice
I was going to mention he made a typo too. Glad to see someone already mentioned it.
came here to say this
For disabling the output on the GTX 1080 there's been a mistake: you've used 10de:1b81 instead of 10de:1b80 for modprobe.
Was going to say the same, but you got there before me :-)
He was also overwriting the entire file making his previous command useless too; I think he meant to use “>>” instead of “>”
We missed your cloud gaming server too. It's so much fun to watch the saga.
I legit was searching for a video regarding the new official support, and found this 10min after upload!
How did I not see this in my subscriptions first-
amazing news and a much wanted feature :D thank you
been lurking a long time, and this vid finally made me subscribe haha. keep up the great work dude.
It was your videos that started me down the path with proxmox. It was also your multiple attempts that gave me hope.. Beyond excited to try this out finally!
I have decided: I want to try to make this myself just for learning purposes. I have loads of hardware lying around to play with.
This video has literally made me want to get back into doing virtual machines again, it's so neat.
The command at 8:42 should use >>. With a single > you are overwriting the config file!
Exactly! Also used 81 instead of 80 for the device by mistake.
I came to the comments to mention these very things. Pedants unite! ;)
Came to make this same comment
Also spotted the 81 - 80 error, glad to see it wasn't my eyes, old age sucks!
@@sterlingsummers9697 I hear ya, squinting a lil doubting that you read it right. Totally was me.
I followed the guide you can find by searching for Heiko Passing Through a Nvidia RTX 2070 Super GPU. This worked for me.
I just borked my setup and then fixed it. One thing I must have done at some point when I initially got it working, which is CRITICAL, is that I set it to use a UEFI BIOS. Legacy BIOS or CSM screws up everything. This, combined with reinitializing the UEFI BIOS with an extracted and (slightly) modified boot ROM for the Nvidia card, is the trick. Hope it gets easier.
I'm so excited, at last!!!!
Deffo gonna have my virtualised server going now!!
I am so looking forward to you getting that thing going for real. Such a huge inspiration.
It has been a long time now, but I had exactly the same problem passing a non-UEFI 660 Ti under KVM. My solution was not to try flashing the card, but to pass a modified UEFI BIOS ROM image to the virtual machine using the romfile option.
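On Proxmox that's one option on the hostpci entry - roughly like this, assuming VM ID 101, the card at 01:00, a q35 machine type, and a patched ROM dump (hypothetical filename) already copied to /usr/share/kvm/:
qm set 101 --hostpci0 01:00,pcie=1,x-vga=1,romfile=660ti-patched.rom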
This is great news. Currently building my server; I was thinking of buying a Quadro, but this means I can use my old graphics card for this.
shhh don't tell them, they'll take it away.
Not sure if anyone else mentioned it, but you might not want to have those blower fans at the back of the GTX 690 cards. They are designed with a single fan intake in the center, and air exits out the front AND the back of the card. Using the blower fans at the back of the card like that is working against the center fan. Because of the placement of the dual GPUs, there is one on either side of the center fan, so it pushes air in both directions to go over each heatsink.
The blowers weren't connected. They were there for the passive S7150x2 cards. I know the airflow design of the 690s 🙂
Ty for the vid! I'm a little worried about trying it with ESXi, but man, I can't wait to have this running! LAN parties are coming back!
Works fine in ESXi, but you need to disable the GPU before you reboot the VM, else the card gets "stuck" and you need a host reboot.
Love how you're using a diskette as a cup holder, finally some use for those apart from museums xD
This is kind of surprising to me. I watched a lot of the videos in your cloud gaming series and not all of them so I don't know the whole story. I recently built a ProxMox server and put a GTX 980 in it. I've been using it in a Windows VM with my steam link and it's been working great. I did have to end up installing that nvenc driver mod though to get hardware transcoding to work in Plex, but everything works beautifully.
Same here, I was using two GPUs in my Proxmox node just fine, one for a Windows VM and another for a Debian VM.
KVM works fine as long as you hide the virtualization. But as soon as you tell the virtual machine to enable VM acceleration, code 43 insta-crash.
That's how it used to be*
Great video. Glad I found your channel.
As soon as a heard the news I was waiting for this video.
Every time I see that PC now I get PTSD from the series haha, can't imagine how you feel personally working on it!
Has a trackball in 2021.
Absolute legend.
I laughed out loud at the ending. It would be a pretty funny crossover if Mark Rober slid in at the end with a glitter bomb, hahaha! :)
Passing through graphics cards has worked fine with Unraid for a while. It works perfectly fine with my 2 VMs. My Nvidia card is passed to my Windows VM, and my AMD card is passed through to my Linux VM (and the AMD card is technically my primary GPU), which works fine.
Timestamps are slightly off. Beer starts at 16:36. I always missed actual useful info trying to close the tab before the booze part.
The NVIDIA announcement sounds like the block would only be removed from the windows driver. Does someone know if I can stop using the workaround for a Linux VM too?
Right now the latest 465.89 Nvidia driver with virtualization enabled is only available for Windows, guess we have to wait to find out :(
yep.
For me, windows was in fact never a problem. At least with my old GTX 650 Ti. VM is set to BIOS in proxmox. It works without issues even with older drivers.
Linux however only works with the crippled nouveau drivers, no matter if its BIOS or UEFI emulation.
Jeff, thanks for the great content! I have finally managed to apply this to my own setup. With a minor tweak you can get rid of the mini stutters by applying 'write back (unsafe)' on the HDDs attached to the VMs. It will still not be 100% but veeeery close. Again, thanks :)
Whole-home headless gaming, I've been dreaming of this for so long.
may have been noted previously... late to the party, sorry...but at 08:36 you have a typo:
s/1b81/1b80
They needed it for GeForce Now, and now you've got it.
No mention of Titans ?...
NV have been randomly enabling and disabling Quadro features like PCIe passthrough on Titan driver versions for many years now.
Fun fact: running the driver in VM mode, i.e. without the "old" Code 43 workaround, disables HDCP support. If you wanna watch Netflix and stuff using the Windows apps at max res, you might wanna continue using the workaround, even though the driver now works without it.
was not expecting the manscaping thing
I believe it didn't work because when adding the cards you typed 81 instead of 80 for the device IDs... check the 8:37 mark.
This has me on the edge of my seat
I think I'm missing something here. Aside from the misconfiguration that many users already reported, through VFIO I was always able to pass through my PCIe cards just fine? I mean, Proxmox uses KVM, and KVM supports the "kvm=off" parameter, and the NVidia Windows drivers always installed perfectly fine?
IF ANYTHING, I'm more mad at them that they disable MSI mode on Windows by default unless it's a Quadro card, and that I always have to re-enable it manually using RegEdit every single time I update the drivers. I keep telling myself I'll make a tool that does it automatically for me, yet I still didn't do it…
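If you ever want to script the MSI fix instead of clicking through RegEdit after every driver update, it's one reg add in an elevated Windows prompt - a sketch, with a made-up device instance path you'd replace with your own card's (Device Manager > Details > Device instance path):
reg add "HKLM\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_1B80&SUBSYS_00000000&REV_A1\4&12345678&0&0008\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f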
I'd like to see a comparison between proxmox, esxi and unraid for this use case
Cool video, I used some of your others to pass through an Nvidia card to Linux for Plex encoding. As others mentioned below, your issues with the 690 are most likely due to overwriting the config and a typo in there.
Funny to see that the subscribers of this channel are all spotting the mistake in the device IDs and the >>. Funny stuff. I could bet you are all tech nerds like me, always ready to help others.
Hello, nice video and highly informative, but @ 11:52 you said Nvidia 680 and not 1080.
When a video comes exactly when you need it 👍
The issue you had with the K2s and the M60s was due to the fact they aren't designed to just be used for PCIe passthrough. You have to use a supported hypervisor (I used ESXi 6.7 U1 and U3) and install the supported VIB for the card. Then you can install a passthrough profile on the VM, then install the proper GRID profile driver, and NVENC works. I've done this successfully and it works well with VMware Horizon as well.
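For anyone wanting to replicate that, installing the VIB on the ESXi host is roughly the command below (a sketch; the datastore path and filename are placeholders for whatever GRID package you download from NVIDIA):
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-GRID-vGPU.vib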
I've been trying to virtualise my rig for at least 2 years now.
I always hit some problem causing me to drop the project numerous times, mostly error 43...
FINALLY everything works just as I always wanted.
Linux VM as a workstation. Fire up my gaming VM when I need it. Fire up my work VM when I need it.
I love it.
PCI passthrough has been working just fine on my ESXi/Titan X homelab for a number of years already, so not sure what you mean by 'they just enabled it'?
The next question: how long until the one-VM session limit is hacked away for true vGPU, like the session limit was for encodes?
Already done.
Super keen to see the outcome of the GTX 690 and the GRID K2 as well!
Probably at this point using the GRID doesn't make a lot of sense.
Sol De Noche • Stout - Imperial / Double Milk • 8.5% ABV • from Dry City Brew Works
Your balls will thank you, aaaaand as always I am Jeff! We love you Jeff.
Great Information, thx 🙏
On KVM this worked since forever. You can make qemu-kvm not expose the details that the Nvidia driver looks for to establish whether it is running in a VM.
The biggest problem you are likely to face is that 6xx series like the 690 don't support YUV 4:4:4 for H.264 NVENC encoding in RDP. But something like Steam streaming should still work with full hardware encoding. But being able to do it over RDP is super-convenient as you can run the gaming VM completely headless. But for that you need a 10xx series card, IIRC GTX1070 or later.
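For the Proxmox folks, hiding the hypervisor from the driver the old way boiled down to something like this (a sketch; the VM ID 101 is made up, and cpu host,hidden=1 is Proxmox's shorthand for QEMU's kvm=off):
qm set 101 --cpu host,hidden=1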
Weird. I have been running 1080 in kvm with IOMMU on Intel about three years ago, for about a year.
And since December 2020, I am running 2080 Super in Proxmox with IOMMU on Ryzen 5xxx.
No problems whatsoever.
Oh, and btw, if there is a modprobe config that you enable vfio in, for specific PCI devices, there's no need to disable any drivers.
The system automatically switches those devices to use vfio.
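To add to that: one way to make it stick is a softdep, so vfio-pci claims the device before the regular driver can - something like this in /etc/modprobe.d/vfio.conf (a sketch; the IDs are examples, use whatever lspci -nn reports, then run update-initramfs -u):
options vfio-pci ids=10de:1b80,10de:10f0
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci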
7:59 wouldn't it be better to open the file in nano and add all the lines at once?
That is my thought too.
... I got this working years ago :I You need to disable the hypervisor flag, which is a one-liner in ESXi and not that hard on most other hypervisors on Linux.
Edit VM configuration under the VM Options tab and add
hypervisor.cpuid.v0 FALSE
in case you were curious
What's the software you are using @ 13:05 to stream the desktop?
Yes, I would like the same answer for this, because there are a bunch of free ones but they don't support a lot of input, especially controller support. I have Moonlight set up on my friend's cloud gaming VM and it requires a direct connection to one of my monitors as well. I would prefer to use something that works with Nvidia and AMD; Moonlight only supports Nvidia GTX cards.
Looking at the screen on the one you were having issues with, it looks like you typed the device ID to block in wrong.
I'm confused as to how I was able to enable passthrough with a 1060 on ESXi for a while now with no issues, if it was previously blocked?
What's your favorite prototyping kit?
Hi Jeff. Steven from Belgium here.
I tried to do this with 2 GTX 670 DCII TOP cards in 1 VM on unRAID and the problem was that SLI was not supported.
Maybe you will encounter the same problem since the 690 is two gpus on one board. Anyway good luck and thanks for this update.
love the lighting...with that green the case looks like it belongs on a Romulan ship
So the 465 drivers are the first which disable the VM check. Does anyone know when they first introduced the check? The cards usable before the check obviously would be useless for gaming but they would likely still be useful for light desktop work.
This is super interesting!
I like how you cut right before it pulled the hair out of your ear.
@asdrubale bisanzio yeah I think ur ear needs hair xD
Good to know I'm not the only one talking to my beer.
What is the name of the dashboard at 12:59?
Parsec
@@wiel-spin thx
Regarding the S7150x2 in Proxmox: instead of using the GIM driver, why not try passing through the whole device (no GIM) in order to get access to hardware encoding? That would mean 2 hardware encoders per card, so 3 cards would mean 6 hardware encoders, right, since each card is x2?
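If each GPU on the x2 card lands in its own IOMMU group, splitting them across two VMs would just be two hostpci entries - a rough sketch with made-up VM IDs and bus addresses (check lspci for the real ones):
qm set 201 --hostpci0 83:00.0,pcie=1
qm set 202 --hostpci0 84:00.0,pcie=1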
At 8:44 when you disabled the second card you used the wrong ID... it should have been 10de:1b80, not 10de:1b81. I didn't get to the end to see if it matters for now, but I guess it could matter at some point.
I always used the hacks available in KVM to work around the driver restriction, but only with a few GPUs at a time and not with a complex setup like that. Glad Nvidia got rid of this arbitrary restriction! About time.
Do you know if the Quadro P1000 can now be passed through to a VM as well as the GeForce cards? In the past only 2XXX and greater series cards could be passed through.
Nice! Exactly a topic I am on right now - virtualization of GPUs.
I guess for gaming and deep learning I need at least one GPU from Nvidia, and for virtually splitting up a physical GPU to be used by many VMs in parallel (think of a k8s cluster whose nodes are VMs, each needing to be able to perform GPU work) I need AMD to do the trick, as vGPU by Nvidia is not only for expensive cards, but also hits you in the face with even more expensive licensing.
Need to watch your vids on the AMD cards to learn more about that ;)
So will you try the Grid k-2 now?
So Jeff talks to his beers now. I'm not worried.
Did you test the trimmer on the dragon balls?
That's pretty good. I hope that the home server segment will start to get big again.
Now all we need is good GPU virtualization. At least Intel GPUs do already support it.
With modern CPUs, 24/7 is not a big problem anymore as even a very powerful consumer CPU (like the 12/16 core Ryzen chips) does not use that much power when just idling.
This could really be the future for so many things, but I doubt that anyone will make setting up a home server super easy (which would be needed for mass adoption).
Microsoft already tried 10 years ago with windows home server.
A few questions if you don't mind.
Can you hook a monitor up to the card and get the video output from the VM?
Will Proxmox let you bond the local keyboard and mouse to the VM?
Will Proxmox let you redefine the power switch to turn off and on the VM?
I know I can do those under KVM/QEMU running on libvirt. Just wondering if you can do that with proxmox.
I don't see why you couldn't, they're both just frontends for QEMU/KVM. I don't know about passing through an actual local keyboard but you can always pass through an entire USB controller, maybe even individual USB keyboards. Is that what you did with libvirt or does it also work with PS/2 keyboards? I'm asking because that would be useful for laptops.
@@eDoc2020 I passed the usb through. The use was to create a virtual desktop on the desktop system. Made it easy to move a person from desk to desk as you moved their image over.
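For reference, both approaches map to one-liners in Proxmox - a sketch with made-up IDs and addresses (lsusb gives the vendor:device pair, lspci the controller address):
qm set 101 --usb0 host=046d:c31c    # pass a single USB keyboard by vendor:device ID
qm set 101 --hostpci1 00:14.0       # or hand over the whole USB controller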
What remote desktop software did Jeff use?
I had a 690 too; the PLX chip on that card won't physically allow splitting the GPUs to 2 different VMs.
I had them working with both GPUs passed through to the same VM, but was always greeted with a device error when splitting the cores.
What was weird is that it worked on a Tesla card I had; I think they use a different chip or different firmware.
I noticed you didn't use the gameready driver, which I thought one has to use to enable game streaming. What are you using to remotely connect to a 3D accelerated session in the windows VMs instead?
Thank you
Thanks for this video. FYI, your second echo command into /etc/modprobe.d/vfio.conf used the wrong first PCIe identifier at 8:47.
was just about to post this :) (still, comment for the algorithm)
He also didn't use >> which essentially cleared the file before writing meaning only the last line was added.
This is still one of the most suspenseful episodicals on air right now. Hopefully the season finale will be epic (and come out before 2024). ;)
Damn it! I bought a Quadro to get over code 43 and 2 days later this happened. Goodbye resale value.
I was looking for a quadro and then this happened. I feel your pain....
Funny, you get those calls too! I've been thrown in jail numerous times according to the caller's recollection. I did mention things about his parents he probably didn't know. Fun times. Also, great news on the NVIDIA passthrough. But not for the 690 - bummer.
Wasn't this already possible for at least 4-5 years with regular KVM and qemu + looking glass if you don't want another display?
I was playing with the concept of making a cloud gaming server, and after seeing your old videos about it, I decided to get myself a pair of GRID K2 cards and a Dell R720. I got it to work, and it's running smooth with 4 cores on each machine in Proxmox. However, I think I found the problem with the GRID K2 card, and I have a theoretical solution. The GRID K520 is an exact copy of the GRID K2, and it is meant to be used for cloud gaming. I believe that if we manage to install the K520 drivers on the K2, it will run games smoothly.
So far the K2 (with the K2 drivers) has worked amazingly well with Parsec, only struggling with some games due to drivers. Would love to see you look into the K2 again, and maybe with the info I have provided so far, we could get the driver to work on the K2? Just a thought!
My balls thank you Jeff. Thanks!
The second vfio.conf cmd seems to overwrite the content of the first one.
If the VM Console and the phys. GPU have the same resolution, there is no mouse pointer offset.
Does the new driver also work within a Linux VM?
OCD kicking in here: got the first device ID wrong @ 8:30
Thank you!! Now hopeful for putting some games on my Win10 VM to play on the Ubuntu host server I built.
Is it telling that my first thought after reading that in the driver notes was "oh! Jeff's gonna love this!" :-D
My second thought pretty much followed your rant about them spinning it as "we've added this great new feature" when it's really "we've decided to stop blocking this."
Reading a little more into it, it seems like their intended use case here is for the GPU to be used as display output, rather than being used remotely -- though it didn't seem like you had any trouble doing exactly that...
I'm getting more and more tempted by the idea of putting my main system into the home lab and using the old system as a not-so-thin client (it's ITX so stuffing it inside a tiny enclosure wouldn't be difficult). Hmmm...
11:50 I loved a GTX 1080 named GTX 680, I freaking loved it!!!
You've always had the option of hiding kvm in proxmox from the NVIDIA driver, so why didn't you try that before this? It's great that they finally stopped blocking it though, as it was a contributing factor for me wanting an rdna 2 card instead. AMD has never blocked VM passthrough, but rx 5000 cards need a patched kernel for a separate issue. Had a similar setup for my kids to be able to remote play games.
Disabling those features slows down the VM unfortunately.
@@yfs9035 With Proxmox I only have a noticeable performance difference in one game, in VR. I had mostly attributed that to not using core pinning and/or hugepages. There was a more noticeable difference in behavior with PopOS or Ubuntu as a host, as my ultimate goal is to only run Windows just for games. The one game for me is DCS World, which keeps me most connected to Windows. For a while I just ran Proxmox headless, but couldn't get native-level performance in VR for it. Everything else, if there was a difference, it was negligible.
You made a mistake at 8:34: lspci said 10de:1b80 and you typed 10de:1b81. Great video btw.
I'm looking forward to this not being a beta feature. It will really give me the push to build an unraid machine to host a moonlight streaming VM
That beer looked goooooood
This is pretty cool. I have a machine running 3x GTX 780 SCs I'd love to try this on.
You can connect two monitors to the video outputs of the VMs, right? So it would work for 2 workstations?
What's the app Jeff is using to remote connect to the VMs?