"sicko mode or mo bamba ? why not both ?" ahh video
fr
Fr
FR
@@lecctron edging rn
crazy reference
Ryzen 4070 mentioned 🗣️🗣️🔥🔥
RAHHH🔥🗣️🗣️🔥🗣️🗣️🗣️🔥🦅🦅🦅🦅🦅🔥🔥🔥🔥🗣️🗣️🗣️🗣️🔥🔥🔥🔥🔥🗣️🔥🗣️🗣️🔥🗣️🔥🔥🔥🦅🦅🦅🦅🦅
ZTT FOR LIFE🗣🔥🦅
RYZEN 4070
the leaks said ryzen 4090 is releasing November 23rd 2025
@@latestcueleak said ryzen 5090 will release in February 30, 2026
Guess you're buying an $1800 MB because you can't leave us on a cliffhanger!
I'm gonna pull a spiderverse and drop the sequel years later 😂
@@lecctron we’ll be waiting
@@lecctron bro that last spiderverse movie was a whole cliffhanger
@@lecctron lol it would probably be cheaper to change your cpu + mb
waiting
long story short: adding a second GPU won't help gaming performance much. This is because graphics cards are already so fast that any extra distance between components can actually slow it down. That's why the memory modules on a graphics card always surround the main chip.
Also, you *can* use an AMD and Nvidia card at the same time in the same pc, for example running a game on AMD and using Nvidia for OBS; they can both do separate tasks at the same time, but not the same task at the same time. But because having two cards installed complicates things, it's honestly better to never worry about running two at once and instead either try overclocking or replacing your card entirely if it isn't performing to your needs
@@nepp- no. Combine them to make them run a single game together. Time to Fuzion.
@@arenzricodexd4409 an SLI/Crossfire compatible gpu paired with another gpu of the same model and vendor would work, but amd and nvidia cards were never designed to work together sadly
The closest you can get to it is a second GPU dedicated to frame generation (using Lossless Scaling) while the first one does the rendering. Depending on the model you can turn 60 fps into 120, 180 or even 240 fps. I love it, but it's not real "GPU performance combined", rather two GPUs doing different things that benefit the guy playing the game
How about for recording, as in using the NVENC encoder while I game on the radeon cards, does that work or am I missing something here?
@@hideakikovacs2859 yes and no.
Yes you can do it, but it can lower performance due to transferring the image between GPUs. Most GPUs have encoder chips baked into them, so it's faster to encode with the GPU you're using to game
sucks when someone with a good idea cant carry it out because of money problems. love you bro
happens all the time on this channel lol we got so close 💔
@@lecctron send me the banking details, ill sponsor a dollar.
Easy fix, have more money
Fun Fact - back in 2010 there was a short-lived solution to combine AMD and Nvidia - some motherboards like the ASUS Crosshair IV Extreme had a "Hydra" chip on them
i thought it was coming with either directX 11 or 12?
@@stuwilldoo7035 DX12 (and Vulkan) support a thing called explicit multi-adapter that kind of works as a hardware-agnostic SLI/Crossfire solution - including letting that intel iGPU in the processor get in on the action
the only issue is that only like 5 games supported it, because it is 100% on the game's developers to implement the feature, and it requires really low-level GPU programming to do
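For anyone curious what "explicit" means in practice, here's a minimal sketch (my own illustration, not anything from the video) of the app-side starting point, assuming Windows with the D3D12/DXGI headers: the game itself has to enumerate every adapter and create a separate device on each one before it can split any work between them.

```cpp
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    // Explicit multi-adapter: the application walks every adapter itself
    // (Radeon, GeForce, even the iGPU) and creates an independent D3D12
    // device on each one it wants to use.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            printf("Adapter %u: %ls (D3D12 device created)\n", i, desc.Description);
            // From here on, splitting the frame, copying resources between
            // adapters and synchronizing them is entirely the engine's job,
            // which is why so few titles ever shipped with it.
        }
    }
    return 0;
}
```

Nothing here magically combines the cards; it just shows that the API hands the whole problem to the game, unlike SLI/Crossfire where the driver tried to hide the second GPU.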
Fun fact: The current CEOs of AMD and NVIDIA are cousins once removed.
And AMD also provided NVIDIA with EPYC CPUs for the DGX A100 workstations.
@@PhonkMachine220YT I remember back in the day people pairing nvidia with radeon for physx
@@FuentesDj some games physx are not updated and the cpu physx is extremely unoptimized (Mafia 2 and Mirror's Edge), so having the dual setup still improves that experience. It's probably only a few games though. BL2 physx is about the same on modern cpu as it is on nvidia gpus, it's definitely the version of physx in the game. I know this, because iirc, you can swap Mirror's Edge physx files with updated ones and it fixes issues with the cpu physx, but this is not possible on Mafia 2.
Random Dude here. 2 things you're looking for.
First, a driver from before Nvidia decided to add driver blocks to hybrid gpu setups.
Second, a motherboard that supports bifurcation, allowing the 2 PCIe slots depicted in the mobo diagram in the manual to split the PCIe lanes x8/x8. Most GPUs do not saturate an x16 slot, so nearly nothing is lost performance-wise.
Good thing is your current motherboard may already support it. Bad thing is finding a driver without the locks in place. Do an old search for Hybrid GPU and you may find what you're looking for.
Yes, we really did used to do this without issue. GL
yea that's why some gpus have an nvme slot these days :D cus GPUs don't use the full x16
wow suddenly i'm excited to try this again, thanks so much !
2:46 THE NETTSPEND BACKGROUND IM DEAD (i need it pause)
4:32 It's completely fine to use a GPU in a slower slot.
I use dual gpus for gpu passthrough virtualization; both the GTX 1060 6GB and the RTX 3060 still perform at their full rated speeds despite the motherboard throttling the PCIe link to accommodate both cards.
The reason your second GPU is losing performance is not a chipset bottleneck on your motherboard; the chipset only manages the last PCIe and M.2 slot.
The real issue is that your GPU is plugged into a PCIe slot with only 1 lane, giving you just 1 GB/s of bandwidth.
Also, you don't need a $1,700 motherboard. First of all, SLI only works with two NVIDIA GPUs, and not in the way you're thinking. If you want to connect a second GPU to the PCIe lanes of the CPU, use your top M.2 slot with a riser. It has 4 PCIe lanes that are directly connected to the CPU.
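If anyone wants to sanity-check that 1 GB/s figure, here's a quick back-of-the-envelope sketch (my own numbers, assuming the x1 slot runs at PCIe 3.0 and ignoring protocol overhead beyond the line encoding):

```cpp
#include <cstdio>

int main() {
    // Per-lane PCIe bandwidth ~= transfer rate (GT/s) * encoding efficiency / 8 bits per byte.
    // Gen3 and Gen4 both use 128b/130b encoding.
    struct Gen { const char* name; double gigatransfers; };
    const Gen gens[] = { {"PCIe 3.0", 8.0}, {"PCIe 4.0", 16.0} };
    const double efficiency = 128.0 / 130.0;

    for (const Gen& g : gens) {
        double per_lane = g.gigatransfers * efficiency / 8.0;  // GB/s for a single lane
        std::printf("%s: x1 = %.2f GB/s, x4 = %.2f GB/s, x16 = %.2f GB/s\n",
                    g.name, per_lane, per_lane * 4.0, per_lane * 16.0);
    }
    return 0;
}
```

That prints roughly 0.98 / 3.94 / 15.75 GB/s for Gen3 (double that for Gen4), so a Gen3 x1 slot really is about 1 GB/s, while the x4 CPU lanes behind an M.2 slot give you roughly 4-8 GB/s depending on generation.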
SLI-capable means the board feeds both GPU slots with proper CPU lanes (at least x8/x8). So yes, it would help in this case even without actually using an SLI bridge.
that depends on the motherboard, there are cheaper motherboards in the $300 range that can do 8x/8x, but the guy who uploaded this video doesn't actually know what he is doing or what he is talking about.
xfx gpus are especially stable and reliable, they are probably the best vendor for AMD cards, and I'm happy you chose that particular AMD card, putting team red's best foot forward.
I like this guy I subscribed after watching like 4 vids
The vid we all been waiting for
yoo korvie
Yo its that stone guy
Hi
I literally can't believe LTT hasn't made a video like this before
The forbidden combo ahh video
@@6XVK ahhhhhhh
fun fact: AMD's and Nvidia's CEOs are cousins, Jensen and Lisa are related, not joking
@@RTWT990 oh that’s sick
🎉
😂
I have a dual GPU setup for exactly the reason you mentioned!
Radeon RX 6800 in the PCIe 4.0 x16 slot for gaming.
RTX 4060 ti 16gb in the PCIe 3.0 x4 slot for CUDA, tensor stuff, etc.
2 GPUs actually make sense for content creators and streamers.
1 card can be used for gaming, the other can be dedicated in OBS for recording/encoding. This way you can still stream/record without losing performance at all and no delay whatsoever.
The card dedicated to the recording doesn't need to be a powerhouse, a simple 1050Ti can record 1080p 60fps flawlessly if there's another card handling the game itself.
I searched for this because i got curious
MSI 870A and P55 Lucid Hydra motherboards supported multi-GPU with AMD and Nvidia 14 yrs ago. While it worked, it was hit and miss on being better than SLI and Crossfire; the only benefit was you could use any GPUs from the same brand (GTX xxx or HD xxxx), or just mix them up. It was killed off due to lack of support, just like SLI and Crossfire themselves.
7:25 triggered ptsd
@@Marius04020 literally
nooooo
I literally stopped the video to check if it was really there or it was in my head
@@Marius04020 Bruh 😭
@@Marius04020 ddlc reference
Awesome Vid man. Loving the Croft Manor background music from TB Legend btw.
Dammmm That AMD GPU is _gasping_ for air
Holy crap, the legendary ryzen 4070 🔥 I knew it existed!!
You can actually use an rtx 30/40 series gpu as the main rendering gpu for games/applications that support dlss, and an rx 6000/7000 series gpu for AFMF. That's what people actually did to get 200~300 fps on max settings in cyberpunk when it first released.
I've had my gtx 1080ti for a long time and it still performs well in even the most graphics-intensive games in 2024, i love it so much
Bro mentioned Lunchly which was crazy 😂
every gamer's dream is combining an AMD and Nvidia gpu together
Godzilla had a stroke trying to understand why you've done this and fragging died.
Bro made team yellow 😐
Bro been here since like 250- subs luv to see you’ve grown so much❤
@@Cxrved I’m here since almost 1k
Know it’s a good day when leccteon post
My dude said "2010s" like it was the 1980s.
Day two of asking for this video idea... "Using lossless scalings frame generation on the GT 710"
that will be part of the next video stay tuned bro
I use my 3060ti as the main render GPU and my 5600XT as the GPU for Lossless Scaling. I will not spoil much, but it varies. The main bottleneck is mostly my 5600XT and its 6GB VRAM, and partially the x4 slot it is in.
w vid
Actually you could combine AMD (ATI at the time) with an Nvidia GPU with technology called "Lucid Hydra" -- it was a feature on my Crosshair IV AMD board. I believe I had a 6870 (primary) paired with a GTX 275 (secondary). Funny enough, as far as I can remember, it didn't really cause any major glitches in games, but many games didn't see much of an improvement, while others saw massive gains. I remember Red Faction Guerrilla getting a major 20 - 30 fps boost. DX & OpenGL would potentially show different results. Great vid btw!
What's your editing software? You're too good ❤
This is why I bought an open form factor case. Convective cooling with no case fans needed
Best ending: amd and nvidia unite
I use a RTX 3080 for gaming and the APU of my 5600G to encode my Livestreams. It works flawlessly. No problems, no colored screens whatsoever.
This reminds me of when physx came out on nvidia,
I had a hd 5770 gpu and a gtx 460. In the same pc, i used the 460 (i think it was anyways) to compute the physx calculations and the amd card for raster.
Worked really well, but not in every game, because it requires you to manually edit config files to tell it which gpu is for what. Lots of hassle but felt worth it at the time.
There's so much distance between computer components, i wish they'd all come together to be nice and speedy.
blud completed mission impossible 😭🙏
Because of this video, I subscribed to your channel. Smart and interesting content about hardware, but in a funny way. Keep it going :)
I was not expecting the video to end after the price reveal lmao. SLI was dropped so fast that any board with it is an absurd price. I was interested in it for a time too.. sad to see it disappear
If your motherboard supports bifurcation, you can split the top PCIe 4.0x16 into two 4.0x8 lanes, and use an adapter to split it into two x16 slots. Each gpu will be limited to PCIe 4.0x8 speeds but it’s better than using the bottom slots which run at x1 speeds
Well, if you get an m.2 to pcie adapter, you can use the x4 lanes from the top m.2 slot for the gpu.
90% of new ATX motherboards only wire pcie x1 to the physical x16 connector for each of the extra slots.
mATX motherboards have x4 and x8 slots from the chipset because of the size limit.
Yes, you are correct. This dude put the 1080 ti into a Gen 3 PCIe x16 (x1) slot. The "(x1)" means it only has a single lane of bandwidth lmao. Using the m.2 to x16 adapter would have been better. Even so, he didn't play a single game that could utilize his setup. The only games that can utilize an AMD and Nvidia gpu together are the rare titles that have DirectX 12 explicit multi-gpu support.
Edit: I was incorrect. Rise of the Tomb Raider has explicit multi-gpu support, but I doubt he had anything configured correctly.
You could try to run the 1080 ti with a riser cable from an m.2 slot, as there are 4 lanes directly connected to the cpu in the uppermost slot. That way both gpus would be connected to the cpu at the same time
You _don't_ need a SLI motherboard per se, what you do need is a board supporting bifurcation. That essentially means you can route the PCIE lanes to two slots (so PCIEx8 + PCIEx8 instead of one PCIEx16).
Should've done the full trifecta and added the intel arc gpu as well
I was thinking to combine my old gpus with hopes and dreams and then this vid came into my search
physx be like "woah so much room for activities"
I did this back in around 2015 using ATI HD 7600 and a GTX 980. Playing Witcher 3 on a Radeon GPU with Nvidia Hairworks was fun.
So did you just plug both GPUs in and they worked automatically? Or did you have to install new drivers? Or do you plug your monitor into the GPU you want to 'host' everything? What's the process? You made it seem like you just added a second GPU into your motherboard and they both worked together flawlessly.
had to install drivers, other than that it worked pretty much flawlessly across the 2 monitors
@@lecctron I see. What drivers?
@@agoogleuser2507 Also, the app can select which GPU it will use, and if it's different from the one the monitor cable is plugged into, then that card works in pass-through mode whenever it renders something to be displayed on screen, aka games (which can SEVERELY limit the output if you use something like a CMP 90HX in pass-through mode through your iGPU).
@@alexturnbackthearmy1907 What app?
@@agoogleuser2507 Games, rendering software, pretty much anything. And if it doesn't support it natively (in the settings menu), you can just temporarily disable one gpu, launch the thing you need on the working one, and then re-enable it from device manager.
For the mobo, I think you can also use the asus proart b650-creator. It has two pcie slots connected to the cpu and costs around 400 dollars
If I could, I would buy that for you. Hope to see this work out in the future
Outside of software, the main cause of dual GPUs not being performant is the lack of communication between the two GPUs that is fast in any practical sense.
AMD and NVIDIA should collab now
look ya'll, it's the DSP of computer hardware.
There was one motherboard manufacturer (can't remember which one: gigabyte/asus/msi/asrock/abit/dfi/biostar/ecs) that produced a motherboard supporting both Nvidia and AMD graphics running simultaneously, and the chip they were using on the board at the time was called Lucid. This was way back when nvidia SLI and amd Crossfire were a thing.
great job next video should be intel+nvidia+amd gpus all in one pc
best unexpected ending of a video ever
the moment he mentioned 1080ti i write this and then close the video. lol
I used to run SLI/Crossfire cards. I kind of wish they would bring that back fully supported. It was sick to buy 2 cards at a $300 price point that had nearly exactly the same performance as the $1000 card
Croft's Mansion ahh soundtrack at the beggining of the video 🗣️🗣️🔥🔥
this guy: "i combined two rival gpus to create a hollow gpu. imaginary gpu: amdvidia
There is another way to make dual GPUs useful as far as I know: AMD's 6000 and 7000 series cards have Fluid Motion Frames, which works like dlss frame gen and is compatible with any card. So u can have one card dedicated to running frame generation, decreasing the load on the card doing the rendering.
The passthrough system of 2d GPUs and 3d accelerators in the 90s would be wonderful here. Imagine being able to plug one GPU into another, and that other one goes to the monitor. In fullscreen applications, whichever one is better does the rendering; in windowed mode, the UI is done by one and everything within the window's borders is done by the other.
I hoped something like this would come to systems featuring both an iGPU and a dGPU.
My favorite youtuber back with another awesome video
If it goes up in any game, it's probably tricking the drivers to enable some Nvidia tech. Not the gpu doing any work.
It’s equally good to each other
Hmm, I'm not sure you fully understand how games work with GPUs...
@@10Sambo01 you don't know
@Manemyonem I've been an IT professional for almost 30 years. I know.
@@10Sambo01 yup I agree
@ArnoId-Schwarzenegger yes, he does, I wish youtube would boot this misinformation off the platform. also 1500fps in fortnite isn't impossible, in fact I'd be more impressed with getting 900fps in cs2
Afaik, it's all still rendering off the 6900XT; the 1080Ti is doing no rendering besides the second display it's connected to. The 1080Ti can be used for certain tasks like secondary rendering (2 games at once, one off the 6900XT and one off the 1080Ti) or for its encoder, but you can't "SLI" them together, in a sense, to gain more performance.
No way you got almost 50k subs. I was there from the beginning🎉
I did this a few years back. I had an AMD Fury and added a 980 I came across for cheap and was using for mining purposes. Since I had them both I went ahead and tried them out. Lo and behold, they could work together in certain instances. I tried mGPU in a couple things but most of the time just left the 980 mining while I played on the Fury. The fun trick was after NVIDIA updated their drivers and stopped blocking access to PhysX when you had an AMD card installed. Had a few games that instantly started running better by enabling PhysX on the 980 instead of having to do it on the CPU, while still rendering on the Fury.
Somebody sponsor this man!
But I do, yes I do do do. Operating AMD and Nvidia in parallel eliminates the bottlenecks: when I do AI or video editing stuff on the nvidia card, I can still e.g. game along and do stuff on several monitors without any problems. Get a good power supply. Upgrading the power supply and swapping in a second-hand RTX was the best tuning hack in years.
I used to use a 3060 as main gpu and a 2060 super as secondary gpu, 3060 for gaming and 2060 super for streaming and recording
might actually be slower because then the data must be copied over to the 2nd card before it can encode it. This also needs system RAM again, which OBS can otherwise bypass with the current game capture hook and NVENC implementation they have.
@@De-M-oN you are wrong i gained almost 80-90 fps by doing so
Lets help him to get 1800$ for the Motherboard 🔥🔥
Now break the minecraft fps wr! 🤭
Bro, this is not how dual GPUs work. You cannot make an AMD card work with an Nvidia card. Look at your task manager while in game and you'll see only one card is doing anything at all. You don't just plug in dual GPUs and magically get a bigger resource pool for gaming 🤦
The host is also claiming in 2024 that an rx 6900xt is the best amd card, which is bs; a 7900xtx would smoke both cards, and on top of this the 4090 is actually the fastest consumer-grade gpu on the market. This video is full of so many holes I can't even use this cheese on a grilled cheese sandwich.
@@lockinhinddanger934 you foaming out the mouth I can’t even lie
@@lockinhinddanger934 You know this video was done for fun right? You really need to go outside and find something real to do.
Right, but there is one trick I think could work to make more FPS out of 2 GPUs. Use Lossless Scaling and make your main GPU render the game while the second GPU generates frames in the LS app. I tested it and it works, but unfortunately my second GPU was too weak and caused frame drops, because the whole system had to wait for it to generate the fake frames. Still, it is worth trying if someone has something better than a 1050 paired with an rtx 3070.
@LeitoAE if you're referring to GPU passthrough, I guarantee your 1050 is not doing anything to benefit your frames. I don't know what your exact setup is, but the fact that you're saying you're getting fewer frames from whatever you're doing means it is, in fact, not making more frames.
Maybe going with an X99 Xeon or an old Threadripper equivalent could also work properly. They usually have more PCIe lanes, and the motherboards from that era were designed with SLI and Crossfire in mind.
Hi,
I’ve been following your channel for quite some time and really appreciate the content!
I’m looking for advice on building a new PC for both gaming and AI development, specifically for working with LLM models. I also do a lot of development using ReactJS.
Currently, I have a decent setup with an Intel i9-9xxx series processor and an AMD Radeon IV GPU. However, the lack of CUDA support is limiting my AI-related work. Additionally, I frequently use Adobe Illustrator, and it tends to lag when working with larger files. My system has 32GB of RAM running at 3600MHz.
I’m planning to build a new PC that can handle these tasks better without going overboard on the budget, something that’s efficient, not overkill. I’m mainly confused about which motherboard to choose, and I’m considering upgrading my CPU, memory, and GPU to a near-top-tier setup.
Do you have any recommendations? Should I go with AMD or Intel, or would it be worth waiting for Nvidia's new CPUs? I’m not sure which direction to take and would really appreciate any guidance.
Thanks so much!
Test for you.
1- AMD GPU as primary.
2- Ngreedia GPU as secondary.
3- Install Arkham Knight.
4- test if PhysX works using the Ngreedia GPU just for that
PhysX crashed all the time with one GPU, I doubt it'd be any stabler with 2
@@pacomatic9833 If you remember, PhysX launched with the Ageia accelerator.
Ngreedia bought them and locked the tech behind their hardware.
Back then, it was possible to use an AMD GPU as primary and the Ngreedia GPU just for PhysX.
might actually work, pretty sure nvidia already added support for having a second gpu dedicated just to physx
@@PupperTiggle it always worked, but Ngreedia went out of their way to make sure that it didn't work.
They spent a decade on that anticonsumer crap.
We went through hell back in the day to be able to use our gpus.
The 1080-Ti has ALWAYS been a solid graphics card.
I actually have run something similar to this for streaming. AMD 6800XT for playing games and Nvidia 1050 Ti for streaming and video encoding. Only reason I don't do that anymore is because I threw the 1050 Ti in a Xeon workstation I use for a media server.
I think the first Tomb Raider benchmark went up with 2 gpus because the 1080ti took over the PhysX work, which freed some resources on the 6900xt
PCIe bifurcation my dude. Split the x16 into 2 x8 slots :D
About combining GPUs: My server (2U) has 2x Nvidia Tesla P40 24GB, 1x Nvidia A2000 12GB, 1x Matrox G200eR (onboard) and 1x AMD Radeon Pro WX 3100 and it never crashed.
Guys. Lets all donate money to see the finished computer. We can do it.
Guys support this man so we can get a part 2!!!!!!!
this is insane, cool vid G!
Tbh they would make so much more money if they both made their stuff work together.
You're doing what I dreamt
The more realistic pairing would be the 1080ti with a 5700xt. There may be some missing features with the 5 vs 6 series, but the 1080ti and 5700xt are almost identical in benchmarks: one is better in some games, the other is better in others, but when averaged they're within margin of error. Still though, this is a really neat test and I've wondered a bit about this since the feature was new; I just haven't heard anything about it since it first became possible a few years ago. Thanks for the video, it's very interesting!
Now that is an original idea
@@frenaedits Its been done before, and I thought about it years ago- just no money to execute the experiment.
Is it? There were official products that tried to do this about 15 years ago.
This unironically feels like a "I am gaming on an AMD gpu, but I just have to use Nvidium in Minecraft" kind of thing. There are some gains (when they don't bite each other and make it worse), but it feels so unneeded.
Wait how does that even work? You can use shaders with Nvidium anyways.
That dude at 0:40 installing the second card gave me a brain aneurysm...
In my experience, having GPUs from several different manufacturers mainly gives you the ability to connect several additional monitors; keep in mind that rendering for a monitor happens on the GPU it is connected to (thanks, captain). The more interesting part is the vendor-specific tech. I found a case on the web of someone using a gt1030 just for PhysX in Borderlands 2 or 3 (I can't remember which) while all the main rendering happened on an rx580. Another option: at the moment I use an eGPU with a laptop. The laptop has an amd 780m igpu and via the eGPU I connect an rtx3060. In Baldur's Gate 3 on a desktop with an rtx3050 I could only enable DLSS, not FSR, but in my eGPU build I have both FSR and DLSS. So it seems to me the gain from mixing GPU manufacturers only comes when the user parallelizes tasks themselves, for example the main monitor with the game on the high-performance gpu, and a funny video with cats on the second monitor rendered by the weaker gpu. SLI and Crossfire themselves were very lame solutions. For example, with nvidia both gpus run at the same frequencies, so the more powerful one adjusts down to the weaker one, and in most tasks one card checks the work of the other, which is why SLI configurations so often had problems. AMD Crossfire has no frequency limitation, but I very much doubt that legacy AMD video cards with old drivers in Crossfire can be stable in games or work tasks.
I have a spare 1080 laying around... and I have a 6950XT...
I also have a motherboard that supports bifurcation (Basically, SLI/Crossfire capable)
I have 2x PCIe X16 slots... (x16/x0, x8/x8).
As long as the motherboard supports PCIe Bifurcation, it can likely support it.
What you're saying is possible. It has been done. This is a topic as old as it gets... no offense.
The only caveat being, you can't run the same game off both gpus in almost every scenario... with a few exceptions where it actually works.
NOW... if your PRIMARY GPU is an NVIDIA... you can set the AMD Card in NVIDIA Control Panel to handle PhysX/Compute whilst the NVIDA GPU handles the graphics load.
This... usually works better if the secondary card is an NVIDIA card... even if not the same generation.
As long as the signal is x8 or better, the GPUs can focus mainly on what they need to do.
In many cases, people use this for virtualization... utilizing hypervisor OS to run virtual machines... tying those GPUs to their own virtual machines, and specifying the amount of compute per cpu, you can have 2 gaming pc's in one.
I have a Ryzen 5900X, 6950XT, and 1080, all on water. I could do this without issue... but you pointed out the most obvious plot-twist... power. Even though I'm capable of running it with a 1200W PSU, the power consumption just from running the second GPU will increase by at least 100W at idle, and if both gpus are utilized for gaming, well, you can imagine the power bill is akin to running a space heater.
Let's face it, Virtualization is the only reason to have more than one GPU anymore these days. Nobody is optimizing for multi-gpu gaming anymore.
you could just use software that tricks the pc into thinking your 1 monitor is actually 2 separate monitors, and then assign each gpu to render the left side and the right side separately, leading to 100% usage of both gpus (idk the software name, i just heard about it on the internet like a month ago, you'd have to do your own research) :)