A $200 12GB GPU would be nice.
I mean that would be a 1080p card and at that resolution VRAM won't really do much.
@@DelgaDude Monster Hunter Wilds uses 12+GB VRAM at 1080p with frame generation.
@@DelgaDude Literally Stalker 2, which just came out, is VRAM-limited with 8GB at 1080p. This is the case with other games too. It will matter how fast a $200 12GB GPU will be; maybe it will be slow enough that you'll never run above low/medium settings, so VRAM won't be an issue. Again, with Stalker 2, a 3060 12GB is a good bit faster than the 3070 8GB. If this $200 theoretical GPU is comparable to a 3060, 12GB would be nice like the OP said.
This will most likely be $250 or $270
@userblame632 Another extremely unoptimized game that runs badly even on high-end GPUs.
That white ASRock looks clean as hell.
B580 with 12GB as their minimum is very good; they're beating AMD on VRAM amount now too, as the B580 competes with the 8600/5060.
If they get me a 24GB one at a remotely reasonable price, I am all over that shit, no joke.
Intel please don't give up on dedicated GPUs. I know y'all are struggling, but damn, we need a 3rd choice. I hope these cards are awesome and priced well, maybe bring down prices from the red and green guys.
I have been supporting them with some of my builds. I have 2 Arcs now, might get a Battlemage too.
Intel/AMD/Nvidia have one thing in common: they are all working towards a general-purpose SoC with graphics cores, general-purpose cores and AI cores as close to each other as possible. This is an inevitability. "Discrete" died with the 1080. Every generation is merely working against reality as we all realise throwing a kilowatt at the job is asinine. 50 watts will do if we just have these processors nearby. Sorry, your rig won't look fancy soon.
The problem is people like me. I bought a 1080, 2080, 3080, and 4090 on day one. I plan on buying a 5090 day one. Intel can't compete with Nvidia's true high end.
Intel CEO here, yes because of this comment I will divert all resources to the graphics division and not cut the next project. Thank you for your support
Unfortunately the current CEO has openly stated he has no interest in pursuing this market. This B580 will likely be the last GPU launch from Intel for a long while. We must also remember the 2000+ employee layoffs that included many veteran engineers with more than a decade at the company.
Intel is going to focus on what Intel has done best since its inception: the server/datacenter market. If there is one captive market they can rely on to weather the storm caused by their 13th and 14th gen shenanigans, it's the enterprise market. There isn't nearly enough consumer interest in their current Arrow Lake lineup and this new Battlemage GPU offering to help them out of their financial hole. And no, they literally cannot afford to focus on all three, and they've already admitted to flaws in Arrow Lake needing a full investigation, so GPUs are going to have to go back on the shelf for the foreseeable future. The best we can hope for is that the drivers won't suck and will be supported for multiple years, or at least until they decide to continue GPU development.
Please Intel, let's go. We need competition. I'm hoping by next gen they've got it all figured out and can compete with the 90 class.
They are only launching these because they had prepurchased allocation from TSMC. I find it doubtful they will do this again. Laptops only is what I would bet.
Competing with the 90s definitely doesn't sound realistic, whatsoever. Lol
They don't need to compete at the high end. They just need to be competitive in the middle and low end.
@@aboveaveragebayleaf9216 Unfortunately, Intel is having serious financial troubles and has been winding down their discrete GPU development to cut costs. Likely they will be back to doing only integrated graphics again after this generation, and I don't think they're going to produce many discrete GPUs for this generation either. They're going to use the GPUs they have already committed to paying TSMC to make probably a couple of years ago, but I don't think they're going to make any more than that.
You better be first in line to buy one, cheering like that.
There's a storm outside, got the curtains closed, heater fan purring, fairy lights on, drinking a hot chocolate, wrapped up in bed, listening to my favourite tech youtuber go over the latest leaks and tech info. It doesn't get much better than this.
vibes are high, enjoy!
just don't do a bios update and you'll have a great night
Like seriously this is one of the very few tech tubers that is not baiting or repeating the same things others already said.
I'm coming over with my sleeping bag, Capri Suns, and a box of Gushers... if my mom gets approval from your mom, that is. Hahaha
This reminds me of a SONG!
Having an ASRock model is a positive surprise, I only expected no-name brands.
It was partially expected since they were (pretty much) the only major brand to support Alchemist, so I'm glad to see they stuck with it for this one.
I prefer reference or these random off-brands if the price is right
@@Eonymia I feel like both Nvidia and AMD are doing ASRock dirty (forcing them to have paper-thin margins), which is why they support Arc.
Question is, will ASRock be Intel's EVGA?
Intel is setting itself up for success by focusing on AI upscaling and ray tracing from the get-go, can't wait to see performance on the coming generations
I really wouldn't expect a lot from this
The RT on Alchemist isn't bad... It's weak, but it's better than AMD's by far @@nostrum6410
My guess is $250 to $280 for the MSRP. If it hits 6700 XT raster and usable ray tracing it would be a very interesting product.
Remember it will need to compete with next-gen cards from AMD and Nvidia, so 6700 XT perf at $250-280 probably won't cut it. RX 7700 XT performance at that price would be enticing.
@@eliadbu I agree. If it's $250-280 and the same as the 6700 XT, then you could've pretty much gotten that performance a year+ ago. Not so exciting.
No one cares about ray tracing performance at a sub-$300 price point as it's not usable anyway.
Upscaling is what matters though (and raw performance and power).
225W for RX 6600 performance would not be exactly great, since the RX 6600 is 175W and undervolted runs at about 100W, so hopefully it's way less than the 150+75 limit that the connectors can supply.
Only 8 PCIe lanes is bad news as well; many people on PCIe 3.0 and PCIe 4.0 are probably waiting for a decently priced upgrade option.
Pretty unfortunate news regarding the 9950X3D, but that was to be expected. I think some of us were holding out for the possibility of a surprise layout for that variant. I suppose we’ll find out come CES, but now I’m a little less excited.
I am still hopeful. If AMD gives us an easy way to disable the non-3D cores, then the 9950X3D could still come out on top with overclocking. A 9950X3D overclocked to 6.2GHz with the non-3D cores disabled is a possibility.
@@thornstrikesback Word. "All" they have to do is make the 8 cores with 3D cache handle only gaming computations and leave the rest. It would in theory be as good as a 9800X3D for gaming and stomp all over it for content creation.
Really good news for us, more competition. Boy, Intel really needs a good launch right now!
Imagine if Intel came out with a 4080 or a 7900 XTX competitor for 575-600.
If they go down in flames, it would be like the Terminator giving the thumbs up before they melted away.
It's not going to happen at that price point. Nvidia is giving up on the mid-low tier, AMD gave up on the high tier and is focusing on the mid tier. That leaves Intel market share in low-tier GPUs.
Disappointing to see that the upcoming X3D parts aren't going to have vcache on both CCDs.
Probably the extra cost doesn't justify the performance gain
If they sort out the scheduling nonsense, then it shouldn't be a big drawback. Provided they get games running on the correct CCD 100% of the time.
"probably going to be faster than A580 or else why does this exist"
Well... we don't know, it is Intel...
That's especially a bold statement after the infamous Core Ultra 200 launch...
Well, considering that 8 cores of Battlemage is already beating 8 cores of Alchemist at a lower power draw, it would take a lot for this to be worse than the A580.
Exactly. Anything goes at Intel. 🤣🤣🤣
I'm expecting the B series will be priced about $20 up from last gen. If they don't keep it within that, I don't see how they will be able to compete. They will need to destroy the competition in value since I doubt they can beat them at raw performance.
FP64 on arc is actually very good, but only crazy number-crunchers like me will buy it for that. Graphics performance is much poorer, so I'll rely on that to get a bargain with the gamers turning up their noses a little.
@@VicariousAdventurer Updates to the Linux driver code indicate Xe2 will also have int64. (I mostly use int32. Integer math is limited on all of the recent red and green cards. Only half of the CUDA cores do ints, e.g.)
I remember my FirePro W7000 (2011 and 4GB VRAM) had excellent hardware fp64. All of the new cards are decent with single-precision flops, but they do some janky software-emulated double precision at about 1/10th speed. Anyway, AMD never had reasonable support for GCN1, so in practice it was near useless for general computation.
Intel oneAPI (SYCL C++) seems like a much better toolchain than anything from AMD.
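Since the thread brings up oneAPI, here is a minimal SYCL 2020 sketch of what checking for and using native fp64 looks like (assuming Intel's DPC++ compiler and runtime; the buffer and kernel names are purely illustrative, not anything from the video):

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q;  // picks a default device, e.g. an Arc GPU under the oneAPI runtime

    // Bail out if the device has no native double-precision support.
    if (!q.get_device().has(sycl::aspect::fp64)) {
        std::cout << "No hardware fp64 on this device\n";
        return 0;
    }

    constexpr size_t n = 1 << 20;
    std::vector<double> a(n, 1.5), b(n, 2.0), c(n, 0.0);
    {
        sycl::buffer<double> A(a.data(), sycl::range<1>(n));
        sycl::buffer<double> B(b.data(), sycl::range<1>(n));
        sycl::buffer<double> C(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only, sycl::no_init);
            // One double-precision multiply-add per element.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                wc[i] = ra[i] * rb[i] + ra[i];
            });
        });
    }  // buffer destructors copy the results back to the host vectors

    std::cout << "c[0] = " << c[0] << "\n";  // expect 4.5
    return 0;
}
```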
The big question is: do you still need Resizable BAR?
If they can get them out before Christmas in reasonable quantities, that'll be nice.
The pricing would have been nice. Still hyped about them though.
I want one. A 16GB model. That steel legend would look really nice with the new Steel Legend mobo.
It'd be interesting to know what hardware Quick Sync transcoding improvements have been made and if they will be supported quickly in ffmpeg. No interest in games at all.
If Intel really wants to capitalise on something, it's making a low-profile B380. There would be literally no competition at that level and many people would buy Intel cards just for that purpose. One can only hope.
Truth! For all of us cheapo OptiPlex upgraders that's golden. If somebody made a decent low-profile single-slot card, they could pretty much print money.
That would be incredible
Battlemage looking good😊
It has already been reported that Intel laid off 50% of its GPU dept, and Pat Gelsinger said in an earnings call that "discrete GPUs are not the future, AI is", so I wouldn't buy one of those even if it cost half as much as an Nvidia or AMD card, as the support lifecycle with less than half the staff is going to be pretty meager, and without constantly updated drivers it is a pretty paperweight.
The AI bubble is starting to pop right now.
MLID viewer? Opinion rejected
So you only look skin deep? A leaker confirmed they're going through with Celestial too
@@Zeni-th. I watch MLID occasionally for actual leaks, but yeah, the guy is a whiny flake when it comes to analysis and opinion.
I bought the Founders Edition Alchemist A750 and A770 and one version of all the Intel Sparkle cards, combined with one PC with an i5-12400 and another PC with a Core Ultra 265K. Big fan of Intel 💯🙋🏾♂️
They need to compete with the coming 50-series. If price/performance isn't on a killer level, it's over.
I don't expect much Nvidia competition in the $200-300 range…
I really like the naming of the gpus
I'm going to buy it just for the 12GB of VRAM. I'm pleased with the driver progress that my A380 has seen in the year that I've had it. I trust that the B580 will continue that trend as long as Intel doesn't shutter the GPU team or get acquired by another corporation.
They just need to not implode.
The cooling system looks so much more modern and effective this time around. Looks like a modern cooler from an Nvidia card, so that's good!
Please Intel, don't snatch defeat from the jaws of victory...
Wouldn't it be hilarious if Intel shifted to being solely a gpu manufacturer😅
I doubt it, as they have long-term plans using their fabs and all. I mean, if they could produce better chips than their current reliance on TSMC allows, they aren't just gonna make chips solely for GPUs when they have the ability to make CPUs for AI too lol
Would having double the V-cache increase performance even more than the 9800X3D, or would it just behave like RAM? When you're only using 10GB of RAM, having 64 gigs has no benefit. Just curious!
If it performs around a 4060 with 12GB at around $200, it could sell like hotcakes.
Daniel, any guesses as to what specs we might see from the Arc A770 equivalent?
I like the design
I hope Intel is bringing out the big guns.
A 3060 12GB at this moment is around 350 euros.
B770 will be around a 4070 - 4070 Ti Super with current leaks anyway - so a pretty big improvement if Intel can keep prices similar and the drivers are good!
ASRock is my preferred manufacturer, so that's pretty exciting.
If I weren't gunning for a 4K system I would absolutely give Intel a chance with an ultra-budget system. We need competition, especially in the lower market.
If AMD doesn't release the 9900X3D and 9950X3D with dual 3D V-Cache, let's just hope they have at least figured out a way to avoid the core parking and scheduler issues, or make a mode that disables the non-3D cores in software with a single button click. If they at least do this, these CPUs could still potentially be more powerful gaming CPUs compared to the 9800X3D. If the 9800X3D is already being overclocked to a stable 5.8GHz, then the 9950X3D could have the potential to be overclocked to 6.2GHz or higher. In other words, the 9950X3D could be made to run exactly like a 9800X3D but with higher clock speeds. Guess we'll just wait and see.
I don't know a single person who plays with Ray Tracing on... That's just a gimmick feature IMHO... Raw FPS is the only thing that matters to mainstream gamers.
But does it sync with any technology to match a monitor's refresh rate? If not, well, that sucks even with XMX cores and chips. I hope you understand how important syncing with high-refresh 1080p monitors is for high refresh rates.
12GB for 1080p sounds good. Hopefully it has decent performance; somewhere between 4060 and 4060 Ti perf for $225-250 would be nice.
Let me just tell you guys, if you haven't tried it, Ratchet and Clank looks amaaazing with XeSS, even on Nvidia or AMD cards, but on the A750, TBH, I didn't notice an improvement over already amazing.
1:50 I was literally typing "it'll just be an A580 with stable drivers I'm sure" and then u slapped those 12 gigs down on me... whoa
Battlemage uses Xe2 cores; Alchemist is an Xe variant. (Variations are for HPC accelerators, mobile integrated, performance, etc.)
When doing pixel-value-guessing generator comparisons, do put up a native-resolution picture to go along with it so people can see what the picture was supposed to look like. The first generation of Arc GPUs seemed to me to be rather unbalanced; looking at the power consumption and the raw force it was supposed to have, it felt bottlenecked, and I hope they solve that. I have an Intel laptop with Iris graphics, and although that is not for modern-day AAA games, it does perform quite adequately and more for 99% of my games backlog.
I would love to be able to recommend them, won't know until launch though
My little brother is looking for an upgrade from a 1070, these new Intel GPUs look interesting. I hope they are worth buying.
The PCIe slot does NOT deliver 75W on 12V; it's 66W (5.5A at 12V), with the remaining ~9.9W of the slot budget on the 3.3V rail.
Once again here to request a look at the history of generational GPU price:performance improvement over the years 🙏
So what I think: AMD would've been testing and checking out all options and probably found some difficulty adding two 3D V-cache chiplets. Unluckily, on last gen's 79x0X3D the non-V-cache cores were faster, causing the confused usage of non-V-cache cores during gaming. It seems they have fixed that, comparing the 9700X with the 9800X3D (they are the same clock speed). So likely the 99x0X3Ds will be the same performance or faster, but there won't be any massive gains. You just get an added bonus on productivity... and maybe better Civ VII performance 😅. Why do I reckon same or better? Because AMD seems to release their best last.
3:34 I mean, by saying "it's hard to get drivers right at the beginning" what are we even saying? The only data point we have on this is Intel itself, so all we're actually saying is "Intel didn't get the drivers right, because Intel didn't get the drivers right". This doesn't really say anything.
If Intel delivers a decent performance jump and price, I'll probably buy one for my nephew as he is using my old 5700 XT. Hopefully they deliver something good 🤞
I run a benchmark channel, and what separates AMD GPUs and NVIDIA GPUs is really something simple: DLSS.
Because regular rendering on an AMD card is perfectly fine, and in fact it is great, but what AMD is lacking is the power of DLSS and the Low Latency modes available ONLY on Nvidia cards.
If Intel implements features equal to DLSS and Low Latency modes, as good as Nvidia's, then they've got a market on the budget side of PC components.
AMD cards are way better than what they used to be, but Low Latency mode is the difference between 8-10 ms frame-to-frame latency and 4-6 ms, and you can feel that when playing competitive shooters.
Honestly the only thing that was holding me back from getting an A770 16GB was the price; at launch, the $329 price point for worse performance than an RX 7600 8GB at $269 turned me away. If it was ~$280 at launch I would have jumped on it, and AMD would have freaked out since the RX 7600 would have been DOA unless they priced it below ~$230, and then it would have been another king of value like the RX 580.
I do hope my next GPU in some years will be Intel. I really do. We need competition.
B580 Limited Edition/Founders Edition is going to cost around $250-260.
I'd personally like to see Intel get this whole GFX Card thing together. I use AMD cards with AMD chips because I'm that type of guy. As an enthusiast, it would be a lot of fun to run an Intel CPU with an Intel GPU and fanboy team Blue for a generation or few.
Idk if anyone is with me on this .
I doubt that the PCIe x8 link in a PCIe 4.0 system would have any consequences. Not in this performance segment. I would honestly be happy to get something powerful from Intel.
If Intel really wants to gain market cap it should not be just for gaming; if they made a 48GB GPU for under $2000 with around the same amount of cores as a 4090, they would destroy Nvidia in the AI market.
Nice to see that the B580s are pretty much ready to go, but I'm doubting there will be a lot of supply to really win market share. If they are over $230, it will be a big fail...
12GB of VRAM on video cards is almost a must these days. Especially if you video edit.
I'm hoping AMD and Intel bring out some solid SFF single-slot cards next gen, for some killer ITX builds. Fingers 🤞
Oh well, this alone just made me skip all black friday deal hunting.
I honestly believe Intel will eventually be a good GPU manufacturer... May not be this gen, may not be the next gen, but I know they can, and will eventually be very worthwhile...
8 lanes on PCIe 3.0, which a lot of people buying this level of card will have, doesn't work well. I'll be getting a 5070 Ti or 8800 XT and I still run on PCIe 3.0; as I run at 4K, my 5700X3D is more than enough.
Intel has the ability to make dedicated GPUs in the $150-400 range. For a lot of these buyers this will be their first graphics card that will allow them to play the latest titles. I hope Intel sees the bigger picture here.
Intel is in an interesting position- latest CPUs are not good, last round of GPUs not good. We’ll see
The 2013 release with XeSS is War Thunder.
If this was in the price range of 300 dollars I would consider, for the first time in my life, buying an ATI ^^
I really do hope these GPUs don't suck and that Intel takes the right steps that AMD never did when fighting against NVIDIA.
Give us a GPU with the performance of a 4070 for 550€ like AMD did and we're back at square one. Give us the same GPU for 300 or 350€ and there's no way in hell people aren't buying it.
Please, Intel. Please.
Not an Intel fan but I would purchase one just to give another company my money like I do with AMD over Nvidia. I want to see more competition in the GPU space.
Intel Arc B380 - RX-6650XT/3060 performance - 8GB - $179.99
Intel Arc B580 - RX-6700XT/4060-Ti performance - 12GB- $249.99
Intel Arc B750 - RX-7900GRE/4070-Super Performance -16GB - $379.99
Intel Arc B770 - RX-7900XT/4070-Ti Super Performance -24GB - $499.99
I don't see them reaching 80-class/800-class card performance levels.
"it's probably going to be faster than an A580 or why does this exist?" looks at 14900K, looks at 285K
If the launch is good enough, the GPUs perform and the price is right they might be surprised by the adoption rate and even run into supply issues, and might end up reconsidering giving up
Alright, it sounds like it would be in the realm of 4070 performance. I would like it to be as good as possible, but the war is price at the moment; if they have this at 175W that's an efficient chip if it performs at the 4070 level, TBH, and whilst I'm certain many improvements will be made under the hood, we can't be expecting a 4080 for cheap.
I want to see this be a straightforwardly better card than the 4060 Ti, which is about £350. I want to see a big undercut; there needs to be no thought process about what is better, as that's how Intel will increase market share on the GPU front. If it is reaching 4070 numbers give or take, which is priced on average £500+ over here, then saving £150-200 really changes the playing field, and with 12GB VRAM it makes the 4060 kinda obsolete. Doing a drastic undercut like this is what made the PS1 dominate the N64 back in the day.
It's all about the money and what people can spend in a world where no one has money, because right now most people get by with a Steam Deck, but the growth needs to happen, so anyone on a system with less than a 3060 Ti would be the target to hit, like 20-series users I guess, and we want to see this batch of Intel cards in the Steam survey next year.
Yeah if they can manage $200 for this card and it meets or beats the 4060 this would be a game changer for Intel. Sure they might not make a ton of money off of it but at this point they should be focusing on building goodwill and market share. If they can kill it this generation with price/performance AND avoid significant driver issues... Intel might be in this thing.
Damn, for all the weak stuff they do, the naming of Intel GPUs is badass.
Alchemist, Battlemage, Celestial, Druid. So cool.
I wish video card makers would stop crippling the number of PCIe lanes. As mentioned in the video, the spillover to system RAM when gaming has a smaller impact when you have more lanes and throughput. There is another area with an even larger impact, such as AI workloads like image generation. For example, take the RTX 4060 8GB and RTX 3060 Ti 8GB. When doing an SD image generation at a resolution that requires 20GB of VRAM, the 3060 Ti becomes exponentially faster, whereas when within the dedicated memory pool size, the performance is similar. The memory bottleneck over PCIe will always lead to a large performance drop; the faster the PCIe throughput is, the shorter the render time becomes in those situations where the working dataset requires the use of shared memory.
It'll be fine in this case; after all, the B580 is running the PCIe 5.0 equivalent of an x16 link of PCIe 4.0.
@@PixelatedWolf2077 For me the main letdown is not going with 16 lanes of PCIe 5.0. If they could double the throughput from PCIe 4.0 x16 cards, then the impact from memory spillover would be significantly reduced; it would also give the card a significant advantage in AI workloads, mainly in being able to work with even larger datasets while keeping processing times reasonable.
When doing very large datasets, the GPU speed becomes less important, as even mid-range cards go from maxing out their power limit to using 1/3rd of the power limit by the time there is significant spillover to system memory, as the card is waiting on the system memory and the PCIe bottleneck.
By default Windows allows you to use up to half of your system RAM as shared memory for a video card; thus, if you have a system with a 12GB video card and 64GB of system memory, you can run an AI workload that is memory-hard and up to 44GB in size without the process failing. The only issue is that the more system memory you use for the GPU workload, the slower it will go and the more the power consumption of the card will drop as it waits on system memory. Since the PCIe bus is also needed for sending work to and receiving work-related data from the GPU, that baseline load on the bus ends up taking around 2/3rds of the throughput of a PCIe 4.0 x8 bus, thus cards like the 4060 end up chugging really badly when there is a lot of spillover. This is why there is such a vast improvement with PCIe 4.0 x16: all of that extra throughput can do memory-related work. If that can be doubled again, that will be a vast improvement for even more memory-related work.
Aside from that, when it comes to gaming in VRAM-limited situations, modern drivers are smart enough to prioritize different memory pools, but the more shared memory a game uses, the larger the performance hit becomes, until you see a PCIe bus usage of around 50% (if you fully saturate the bus in one direction, it will read 50% bus utilization). Once the bus is saturated in one direction, the game you are playing will begin to hitch significantly, and frame rates will often drop down to the single digits. A faster bus increases how much shared memory a game can use before you run into major hitching issues. For example, on a PCIe 4.0 x8 bus, you can often use around 500-600MB of shared memory before you start getting hitching or some bad frame-time issues. With PCIe 4.0 x16, you can often use around 1.5-1.7GB before the card starts hitching.
Though this also depends on the game, as there are exceptions; some games will load up a bunch of data into VRAM that is not actively used most of the time. For example, Ratchet & Clank: Rift Apart can easily handle 3+GB of spillover without hitching, and that is because of how they load the game world. The rifts are simply regions under the main map that get fully loaded into memory but are culled from the render pipeline when not in view. When you go through a rift, it simply teleports you under the map, then the AI and physics are enabled. Since the needed data is already loaded, it looks like an instant loading of a different world for a seamless transition. When using shared memory, there is just a few milliseconds of higher frame time that is barely noticeable as it swaps data.
Anyway, doubling the throughput compared to a PCIe 4.0 x16 slot will allow these cards to actively use probably 4GB of shared memory without the game completely hitching and becoming unplayable.
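As a rough back-of-envelope sketch of why link width matters for spillover (the bandwidth figures are approximate usable one-direction numbers, and the 60 fps frame budget is just an assumption, not anything from the video):

```cpp
#include <cstdio>

int main() {
    // Approximate usable one-direction bandwidth per link (GB/s).
    struct Link { const char* name; double gb_per_s; };
    const Link links[] = {
        {"PCIe 4.0 x8 ", 16.0},
        {"PCIe 4.0 x16", 32.0},
        {"PCIe 5.0 x16", 64.0},
    };

    const double frame_s = 1.0 / 60.0;  // assumed 60 fps frame budget
    for (const Link& l : links) {
        // Upper bound on data that could cross the bus in one frame
        // if nothing else (draw calls, readbacks) were using it.
        double mb_per_frame = l.gb_per_s * 1024.0 * frame_s;
        printf("%s ~%4.0f GB/s -> at most ~%4.0f MB over the bus per frame\n",
               l.name, l.gb_per_s, mb_per_frame);
    }
    return 0;
}
```

Only a fraction of that ceiling is actually free for spillover traffic, since draw calls and readbacks share the bus, but it lines up with the idea above that doubling the link roughly doubles how much shared memory a game can tolerate before hitching.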
For everyone clamoring that we need competition... you actually have to buy the GPUs for the competition thing to work, or
they just fail 😊
770 gigateras exploding on your screen D: ❤
Looks nice, that white ASRock card.
After how Intel treated customers with the CPU problems, I will not be buying an Intel GPU unless its super cheap and the reviews are glowing... as well as at least a few months passing..
I'll prove a point right now. The 9950X3D has been "confirmed" at 144 MB of cache. A 9800X3D has 104 MB of total cache. A 9700X has 40 MB of total cache.
Therefore, the 9950X3D's verified 144 MB of cache proves only one CCD has 3D cache.
The 9950X3D will essentially be a 9800X3D on one CCD and a 9700X on the other CCD. Such a disappointment.
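Spelling out the arithmetic behind that conclusion, using the publicly listed totals (96 MB L3 + 8 MB L2 on the V-cache CCD, 32 MB L3 + 8 MB L2 on a plain CCD):

```latex
\begin{aligned}
\text{V-cache CCD:} \quad & 96\ \text{MB L3} + 8\ \text{MB L2} = 104\ \text{MB} \\
\text{plain CCD:} \quad & 32\ \text{MB L3} + 8\ \text{MB L2} = 40\ \text{MB} \\
104 + 40 &= 144\ \text{MB} \quad \text{(matches the leaked figure, i.e. one V-cache CCD)} \\
104 + 104 &= 208\ \text{MB} \quad \text{(what dual V-cache CCDs would have shown)}
\end{aligned}
```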
I don't think these GPUs will be as cheap as people expect. They will most likely have a good price to performance but not killing price to performance. Intel has no money to give gifts.
Considering Nvidia is pushing more and more towards AI hardware dev, I don't see them making too many more GPU generations that can be used for gaming. This means Intel might be the alternative GPU to jump to when this happens. I think we'll see at least two more Nvidia GPU cycles that focus on gaming, the 5000 & 6000 series, but after that it's highly questionable.
A $250 B580 with the speed of a 4060 Ti 16GB would be nice.
Honestly my next system might be fully Intel. Depends on their pricing of course, but with Nvidia being an extreme danger to consumers' wallets and AMD running away in the CPU sector, there's a good incentive to keep Intel around.
AMD CPU PC with discrete Intel GPU sounds bizarre. :D
Are any of these gpus coming to laptops?
Yes, Xe2 cores will be used in integrated graphics of both mobile and desktop CPUs.
say the line Danieljak!
No timestamps?
The year Intel CPUs are a dumpster fire, they might actually have something in the GPU realm.
XeSS DP4a got a lot better with version 1.3. That DF video is outdated. It's now a little bit worse than DLSS but not far behind, even with DP4a.
Yep - I was afraid we'd only get one CCD with X3D cache... With Microsoft being more spy- and malware these days than a competent OS, I do wonder how much of an improvement those new CPUs are going to be without Process Lasso. At least Intel is not giving up on GPUs and maybe becomes a meaningful competitor on the market at some point - we surely need that by now!
So Intel's lowest-tier card gives us 12GB of VRAM, while Nvidia is still stuck at 8GB for their 5060 according to rumor, bruh.
Too bad about the bad Intel chips. Super interested in the GPU side though. I think all current GPUs should be 16GB, so this news about the VRAM size is not all that great.
Long term, yes, I'm hoping Intel comes into play, but we really need the high end still; I guess one step at a time. Not sure what the next generation will be like, just not imagining any good value for at least 2-3 years, at the high end anyway. The best we can hope for is that around the 70 class there's solid competition, since AMD is going to be absent from the 80 class and up until at least 2026, probably 2027, unless they fast-track RDNA5, and RDNA4 is more of a refresh of RDNA3 - an RDNA3 that's a bit more power efficient and has slightly better ray tracing. Probably $500 is the spot: if they release an 8800 XT which is on par with a 7900 XTX, more efficient and with better RT, for around $500, then Intel has something in the $400 range that makes sense. That's probably as good as we can hope for, then a 5070/Ti/Super for around that price with similar performance to a 4080S. I think that's our next generation, with an overpriced 5080 and 5090 due to zero competition. Bleak but tolerable for most - let's see.
I love my Arc A770, so let's see the performance now ;-)
lol