Fedora's CPU Proposal Is Way Better Than Ubuntu

  • Published 9 Nov 2024

COMMENTS • 521

  • @stepannovotny4291 10 months ago +85

    Ubuntu has definitely been going off half-cocked for some years now. It's great to see Debian and Fedora step up in various ways to fill that gap. I do love my Atom PCs, which draw less than 4 watts while running, and there is certainly a growing tsunami of older but capable systems accumulating out there due to the nutty 4-year refresh cycle that most corporate PCs are on.

    • @404hopenotfound 10 months ago +3

      it helps to make sure that parts are easily obtainable

    • @daveamies5031 10 months ago +7

      I remember when corporate PCs were refreshed every 2 years, and users complained they didn't get a new PC every year, but where I am, many organisations have been on a 5-year cycle since Windows XP, so there's nothing nutty about a 4-year refresh cycle. Amazing how things have changed.

  • @computerfan1079 10 months ago +135

    This is a really well thought out way of doing it: nearly no downsides, we keep compatibility, no weighing whether to make v2 or v3 the baseline, and everyone gets the fastest version. It also uses a transparent and generic way to do it. It should be a no-brainer for other distros to include it once this is in stable systemd

    • @terrydaktyllus1320 10 months ago +5

      "This is a really well thought out way of doing it: nearly no downsides, we keep compatibility, no weighing whether to make v2 or v3 the baseline, and everyone gets the fastest version. It also uses a transparent and generic way to do it. It should be a no-brainer for other distros to include it once this is in stable systemd"
      People like you really need to remove their blinkers and occasionally look at things "in the other man's shoes".
      I've no interest in Fedora, they can do what they like with their distro, the same with Ubuntu. I do use Gentoo, I have done for more than 20 years now, and one of the main reasons I use it is because I can build it exactly the way I want to on whatever platform I want. I have it running today on, for example, a Thinkpad T22 from 2002 with a Pentium III CPU and 512MB RAM that I keep updated regularly and use at least once a week to do shell scripting on my server, a nice little computer with a great keyboard and distraction-free computing, "just like the old days". I also have it running on a Raspberry Pi Zero, as another example.
      There is also a huge number of low-powered and older devices out there running Linux (that neither Fedora nor Ubuntu care about anyway), including industrial systems, car management systems, IoT embedded devices; even Raspberry Pi and other SBCs can be considered low powered compared to modern "gamer wanker" desktop PCs.
      So, sure, this may be "fine and dandy" for you with your very recent hardware, but "everyone gets the fastest version" is complete nonsense as a statement, because the way you get "the fastest version" is to optimise the compilation of code for the platform you're planning to run it on, whether that's your "gamer wanker" Ryzen PC with 60,000 CPU cores or my old Thinkpad T22 from 2002.
      For the record, I don't use systemd on my numerous personal Linux systems either. I have to use it at work because looking after Red Hat servers pays my mortgage, but it's completely unnecessary bloat. Again, you are just demonstrating that you can only see the world from your perspective.

    • @schemage2210 10 months ago

      @@terrydaktyllus1320 Ah yes, the other man's shoes, where said other man doesn't want no stinking performance increases even if his OS determines they can be applied automatically!!! Your two alternatives are either to raise the hardware minimum spec, thereby risking users not being able to use their computers, OR to do nothing and leave potential performance boosts on the table. Fedora and @computerfan1079 are pointing out the sensible middle ground that is well worth investigating.

    • @schemage2210 10 months ago +5

      True, at some point though the hardware spec will need to be upgraded, but in the meantime this is a good solution.

    • @ninetysixvoid 10 months ago +23

      @@terrydaktyllus1320 "the way you get the 'fastest version' is to optimize the compilation based on the platform you're running it on" - which is exactly what Fedora does: if your PC is v1 it will get v1, if v4 then v4, and that only for the packages that benefit from newer architectures

    • @wenyi7014 10 months ago +30

      @@terrydaktyllus1320 i have no idea what your point is. do you just want everyone to use gentoo, or fedora to become gentoo? i feel like you're just here to tell everyone that you've used gentoo for 20 years, use a thinkpad, and don't use systemd.

  • @spidalack 10 months ago +29

    As a gentoo user, where you compile and optimize everything, yea, most things see no gains. You get big gains in a small number of specific software, but for most people it is not worth the trouble.
    The only place I ever really saw an actual gain was, ironically enough, on an atom netbook that could not handle compiling code itself without turning into a rocket engine.

    • @jnharton 10 months ago +5

      The thing is, in that context, whether compiling it yourself from source makes any difference (from a standard distribution) depends on whether your machine is different from what the average user has.
      With the extreme degree of hardware uniformity these days it doesn't necessarily make much sense. But if you were building on a new/radically different hardware architecture it might help a lot.

    • @rallealyt 9 months ago +1

      I agree... people are obsessed with benchmarks that represent very little in the real world for 99% of users. It's like people who spend hours optimizing things so they can have 5 more fps in a game... Stability and compatibility are the way to go in mainstream distros.

  • @tomaszgasior772 10 months ago +128

    As is usually the case with Fedora, they will probably implement this in upstream software, and then Ubuntu will just enable their work in its own packages.

    • @Ghfvhvfg 10 months ago +6

      Those damn freeloaders, if I were in an IBM shareholder mindset

    • @olnnn 10 months ago +2

      As stated in the video, the feature is something that has already been implemented, at least for libraries, so it's more about wiring it up and using it.

    • @Tobiasliese 10 months ago +1

      @@olnnn As stated in the video, this change will require changes in systemd

    • @olnnn 10 months ago +2

      @@Tobiasliese Yeah for the application binaries specifically, for libraries it's already implemented.

    • @jonnyso1 10 months ago

      Well, a key difference between Fedora and Ubuntu is that Fedora kinda exists for this purpose: they try things out before sending them to RHEL

  • @MadMathMike 10 months ago +65

    I'm not sure how long it takes you to prepare to record these videos, but your presentation of this information is honestly incredible. Thank you for your efforts! 😊

    • @StarlordStavanger 10 months ago +9

      I help him a lot with the scripts

    • @MadMathMike 10 months ago +4

      @@StarlordStavanger Oh, that's awesome! Great job with the pacing and depth of each section. 👌☺️

  • @GASTBF 10 months ago +25

    It definitely makes way more sense than Ubuntu's proposal. All Canonical does is continuously shoot themselves in the foot nowadays. There is a reason why Linux Mint has grown so much in popularity, and why Fedora has grown so much.

    • @redseb99 10 months ago

      To be fair, it was just an initial test for Ubuntu of having an alternate version. If it only happens now, v1 will still be the standard for 24.04 and then by 26.04 they will probably have adopted the systemd standard way

  • @Beryesa. 10 months ago +481

    Fedora's -CPU Proposal Is- Way Better Than Ubuntu

    • @vilian9185 10 months ago +42

      which is ironic because fedora is basically the testing ground for rhel

    • @AURON2401 10 months ago +50

      Everything is way better than crapuntu.
      And I'm almost certain everyone already knows this.
      They're just a how-to-get-easy-views video topic now.

    • @Person-who-exists 10 months ago +13

      Fedora > baby’s first Linux distro

    • @danwellington3571 10 months ago +8

      Lol "Fedora's Is"

    • @habios 10 months ago

      @@vilian9185 Like it or not, a lot of mainstream distros adopt what works on Fedora

  • @thisday77 10 months ago +76

    My notebook runs an i5 Sandy Bridge, and it still feels absolutely fast and fluid for all notebook tasks like browsing, streaming videos or office work. So I would like to see older CPUs supported for a much longer period, and the Fedora proposal definitely sounds better than the Ubuntu approach.

    • @akeem2983 10 months ago +5

      My desktop is an i5 Sandy Bridge and it too works great for work, browsing and gaming. Though I'm thinking about upgrading it, mostly for VR gaming

    • @blueberry101q 10 months ago +5

      I also have an i7 Sandy Bridge and for me it works just fine. It'd be sad to see it no longer getting supported.

    • @xeridea 10 months ago +2

      My wife has a laptop from around then; even after putting in an SSD, it is slow. It is usable, but definitely far slower than even my $300 laptop from 4 years ago. I would say borderline adequate for basic tasks, but not worth keeping support for. They can just have an alternate ISO for archaeological specimens

    • @k.b.tidwell 10 months ago +4

      I built an i5-2500K desktop back in 2012 that STILL does a fine job with gaming, even though the graphics card does need a third upgrade. I bought it for future-proofing, and so far it's fulfilling that goal. Whoever does this restructuring right so that I can keep on using it gets my business lol.

    • @thisday77 10 months ago

      @@xeridea
      First, "from around then" doesn't say anything :) And even then, it could be that you got a CPU that was not OK, or some other components are slowing down your machine.
      My i5-2520M in a Thinkpad X220 with 16 GB RAM and an SSD runs Fedora 39 with KDE and does everything I do on a laptop just fine. And it boots in 12 seconds from Grub.
      You can search for YouTube videos that show you what it should look like to run a CPU "from around then". It even still runs fine with Windows 10.
      But I would be fine with "an alternate ISO for archaeological specimens", even if I can't see any archaeological specimen in a normally running system.
      But I see huge waste in not supporting them at all. So yes, I think an alternate ISO would be the minimum for now. And I'm excited to see how the different Linux distributors will decide in this regard.

  • @nrg753 10 months ago +11

    This proposal makes so much sense. There are still decent Sandy Bridge processors out there; my server is a 6c/12t on X79 and it's holding up just fine!

    • @terrydaktyllus1320 10 months ago +2

      There are still decent Pentium III processors out there - what you actually need the computing device to do determines whether or not the CPU is good enough.
      I thought that was obvious to anyone who truly understands computing and the wide range of environments that computers are used in.

    • @nrg753 10 months ago

      @@terrydaktyllus1320 yeah I own a couple of Pentium IIs, they have their purpose 🙂

  • @sewer56lol 10 months ago +14

    Another very reasonable approach is simply detecting what CPU the user has at runtime and downloading packages optimised for the user's microarchitecture level.
    Yes, it means distros have to compile multiple times, and storage costs for people hosting mirrors are increased.
    However, it's a very simple solution which in the long run provides a better user experience and helps protect the environment (millions of runs surely make up for the cost of more compilation).
    Also, unless you are someone like a CPU reviewer, you are unlikely to be constantly swapping out chips.
    Personally I run Arch, but I use the packages from CachyOS, as they provide v3+LTO builds for the standard set of Arch packages.
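
    For what it's worth, the detection half of this is already easy to do in userspace. A minimal C sketch, assuming GCC 12 or newer (which accepts the ISA-level names in __builtin_cpu_supports); a package tool could use the result to decide which variant to fetch:

        #include <stdio.h>

        int main(void) {
            __builtin_cpu_init();  /* populate the CPU-feature cache */

            int level = 1;         /* plain x86-64 is the v1 baseline */
            if (__builtin_cpu_supports("x86-64-v2")) level = 2;
            if (__builtin_cpu_supports("x86-64-v3")) level = 3;
            if (__builtin_cpu_supports("x86-64-v4")) level = 4;

            printf("highest supported level: x86-64-v%d\n", level);
            return 0;
        }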

    • @iliqiliev 10 months ago +4

      Yeah, that's what I thought too and I'm curious if there are any downsides other than the increased burden on the package providers

    • @snygg1993 10 months ago +3

      My first thought was "I want my package/software manager to install only the best matching version" too 🙂
      However, I think it might not be "additional" effort for repos compared to the Fedora proposal.
      The repo would have to host the optimized versions (if they exist) either way; it would only be the user not downloading all of them ... thus it might even save some traffic.
      There should be no reason not to provide an option to download all versions anyway (for the CPU testers that constantly switch chips), as long as the dynamic switching is still available.

    • @pacifico4999 10 months ago +2

      They'll have to compile multiple times anyway. Doing it at the package manager level will save bandwidth and space; it's the correct solution IMO.

    • @sewer56lol 10 months ago

      Oh hey it's Mr Telegram Scam Bot. Hello there 👋

  • @jefferyrlc 10 months ago +15

    I like this proposal. Shame I use Arch and not Fedora. My CPU is a Ryzen 7 5800X. I think the storage "bloat" will be negligible, and the rise in performance for those that can leverage it would be appreciated.

  • @JustinSmith-k1c 10 months ago +18

    Great video Brodie! This Fedora proposal seems really well thought out!
    This would be great to keep older systems like my uncle's old ThinkCentre M52 (single-channel DDR2 2GB at 533MHz, 2004 x86_64 Prescott Pentium 4 @ 3.20 GHz, antique 160GB HDD) in working order for years to come and improve performance on less ancient systems.
    For those interested, I recently installed Linux Mint Xfce 21.2 for him and it was running really smoothly. It takes about 2m30s to boot, but after that it's smooth sailing: browsing the web with Firefox, writing documents in LibreOffice, no slowdown when moving desktop windows, etc...
    It's a miracle the thing booted with one stick of RAM being completely corrupt (memtest showed thousands of errors in about 10 seconds!), but Linux is just that efficient: it only used a single gigabyte of RAM and was able to sort-of work in the live ISO (granted, the installer kept crashing and installed garbage to the HDD, but I'm still counting it).
    Anyways, I had a fun time diagnosing a computer almost as old as myself :)
    Take care!

    • @tostadorafuriosa69 10 months ago +3

      bro can't you do an SSD swap on that thing?

    • @JustinSmith-k1c 10 months ago +5

      @@tostadorafuriosa69 For sure! A dirt cheap, low-end SSD and a bit more RAM would be huge in terms of system responsiveness/general usefulness, but I didn't have any lying around at the time, so I couldn't upgrade.
      Unfortunately though, the system is already on its last legs as *many* capacitors on the motherboard have gone bad, but hey, it's still good to tinker with!

    • @tostadorafuriosa69 10 months ago +1

      @@JustinSmith-k1c It would be cool if you could change them and keep the thing alive for longer

  • @olnnn 10 months ago +52

    Worth noting that some very performance-critical things like glibc and, I think, some video decoders/encoders also do something like this manually, internally selecting between code paths depending on CPU feature support, so these CPU instructions already don't go entirely unused. That's more for hand-optimized low-level stuff though, so it's not a practical option for most packages like the ones discussed here.
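
    The hand-rolled pattern those libraries use looks roughly like this; a minimal C sketch, where scale_scalar/scale_avx2 are illustrative stand-ins rather than real glibc or codec symbols:

        #include <stddef.h>

        static void scale_scalar(float *x, size_t n, float k) {
            for (size_t i = 0; i < n; i++) x[i] *= k;
        }

        /* Same loop, but this one function is compiled with AVX2
           enabled, so the compiler is free to vectorize it. */
        __attribute__((target("avx2")))
        static void scale_avx2(float *x, size_t n, float k) {
            for (size_t i = 0; i < n; i++) x[i] *= k;
        }

        /* Chosen once at startup; every later call is one indirect jump. */
        static void (*scale)(float *, size_t, float) = scale_scalar;

        __attribute__((constructor))
        static void pick_scale(void) {
            __builtin_cpu_init();
            if (__builtin_cpu_supports("avx2")) scale = scale_avx2;
        }

        int main(void) {
            float v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
            scale(v, 8, 2.0f);  /* dispatches to the best variant */
            return 0;
        }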

    • @jfolz 10 months ago +15

      libjpeg-turbo is a good example of this. It has loads of different code paths for different ISA and hardware capabilities. It's a lot of work, but it's the only way to squeeze every little bit of performance out of the hardware.

    • @ReflexVE 10 months ago +1

      Honestly it's not clear to me why they aren't just using code paths for this stuff. It's what most other operating systems do.

    • @snowwsquire 10 months ago +6

      @@ReflexVE cause they don't write every program in the repositories

    • @TheClonerx 10 months ago

      @@jfolz another good example is OpenSSL

    • @owenhilyard3157 10 months ago +6

      @@ReflexVE Code paths have a runtime cost, and there are two common places to implement them. The most common is at the function level, where there is compiler support for doing it very easily. This means that every single function call has a switch (microarch_level) { ... } in it. This also messes with optimizers and static analysis. The other option, which has a much lower runtime cost is to essentially compile a copy of the program for each microarch level and then use either compiler flags or linker scripts to merge all of them behind a switch statement at the front. This is easier to automate, but also multiplies binary size since you can't easily share identical functions unless they don't make any function calls.
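
      For reference, the function-level compiler support mentioned above is GCC's function multi-versioning; with ELF IFUNCs the check typically runs once, when the symbol is first resolved, rather than on every call. A minimal sketch (the dot product is only an illustration):

          #include <stdio.h>

          /* GCC emits one clone per listed target plus an IFUNC
             resolver that picks among them at load time. */
          __attribute__((target_clones("default", "avx2", "avx512f")))
          double dot(const double *a, const double *b, int n) {
              double s = 0.0;
              for (int i = 0; i < n; i++) s += a[i] * b[i];
              return s;
          }

          int main(void) {
              double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
              printf("%f\n", dot(a, b, 4));
              return 0;
          }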

  • @MoraFermi 10 months ago +53

    This is a very good proposal!
    amd64 ISA is definitely a very well designed one and will likely continue largely unchanged for the foreseeable future -- with just extra bits here and there tacked on to it. As such, there is no real reason to "move off" the -v1 as the baseline, since it's good enough for 85%+ of all the code running on our systems.

    • @MrKata55 10 months ago +13

      "amd64 ISA is definitely a very well designed one" Man, have you ever looked at it low-level? x86_64 is a complex mess, and only the proprietary software of the 1970s and 1980s led to Intel winning the processor game, with all the hacks they used to turn x86 instructions into µ-ops (the microcode). Time has shown that hardware-wise RISC is the way to go, and recently Apple has proven that e.g. ARM is simply more efficient with their Apple Silicon M series. RISC-V is also getting more and more popular. Besides, all modern processors are RISC with just a fancy translation layer on top, which we could do without if not for the x86 legacy.

    • @pcallycat9043 10 months ago

      @@MrKata55 I'm sure the processor manufacturers would love to escape x86 as well, except that there are 35 years of Windows-only software that wouldn't run anymore lol. I know the day is coming when x86 will be relegated to the 'old' hardware stack, when I'll keep a couple of machines in working order just for playing the game library, much like older 6502-based machines like the Commodore 64 :)

    • @vccsya 10 months ago +7

      @@MrKata55 That is not true. While yes, amd64 is a huge mess, it is still competitive and beloved. It is one of the best ISAs we have. And while it doesn't have as easy a time as aarch64 in decoding (because x86 started as variable-size, and still is variable-size), the rest is a pure blur. Aarch64 (ARM64) and AMD64 nowadays are basically the same thing. If you think RISC is the way to go, you are right. All AMD64 and x86 CPUs have been RISC inside for a very, very long time and the rest is all uCode. Same for Aarch64, btw. AArch64 isn't really RISC anymore, it has tons of CISC-y additions, and at the moment both ISAs are closer together than ever before.

    • @cylemons8099 10 months ago +2

      @@vccsya I remember reading a Hacker News comment explaining that using x86 and ARM to compare CISC and RISC doesn't make sense, since neither is the best example from its side. x86 has simplistic addressing modes compared to the 68000 and VAX, and ARM is more complex than MIPS and Alpha.

    • @vccsya 10 months ago +1

      @@cylemons8099 aarch64 is pretty similar to amd64. Recent versions also got an extension for a memory model similar to x86's (look at the Apple M1). We cannot compare arm32 with aarch64; those are fundamentally different. armv7 (arm32) is garbage and pretty bad, but very low power. If we look purely at modern arm64 and amd64, we can say they are pretty much the same, with a slight benefit for ARM just in the decoding

  • @Fender178 10 months ago +11

    Yeah, what Fedora has proposed is indeed a great idea. Having optimizations based on which CPU level you fall under, such as v3 for Haswell, will allow users with older hardware to continue with their distro of choice without having to worry about switching, as in the case of Ubuntu users.

    • @terrydaktyllus1320 10 months ago +1

      ...and only at least 20 years behind LFS and Gentoo, which have been doing this stuff right from the start. What kept you, Fedora? And does that mean you're metamorphosing from a binary-based to a source-based distro?

  • @certs743 10 months ago +10

    My daily driver right now is still my Dell T3600 workstation with an E5-2665. It still works great and does everything I need it to on Linux. I definitely am not a fan of some arbitrary decision with marginal benefits turning it into e-waste.

  • @DelticEngine 10 months ago +7

    My main machine is currently a dual Opteron 6380 system which is over ten years old and still does the job, apart from the need to source non-UEFI GPUs that are compatible with it. I am looking to upgrade, but I'm not sure what to. I'm thinking 2nd/3rd generation Epyc, as I'm not convinced the 4th generation is worth it: it's a much larger outlay for seemingly little gain.
    As a second system, the machine I'm watching this video on is an AMD FX-8370, which is also very old but a little more recent than the Opteron system, which is actually around 13-14 years old now. I also have various laptops, one with an old dual-core 1.2-1.3GHz chip running Fedora Linux because Linux gets better performance than Windows.
    Well, you did ask, Brodie! :)

  • @No-mq5lw 10 months ago +13

    Kind of wish there was some perspective on how Clear Linux handles the different x86-64 levels, since that's a distro that already does what Fedora is proposing here.

  • @kensmith5694 10 months ago +8

    Fedora's answer seems a lot better than the other options but I should point out that there are lots and lots of computers to be had very cheap on the 2nd hand market that are several generations out of date. People on a low income can easily have a machine that will play cat videos[1] for nearly zero dollars if the threshold is not raised for the minimum system.
    [1] The main use of all computers.

  • @WilReid 10 months ago +3

    Since you asked... My current systems are an i7-920 w/ 12GB DDR3 and an i7-4770k w/ 32GB DDR3, and this won't affect either for multiple reasons. First, the 4770 is a windows machine and the 920 was running Gentoo. Second, both are getting replaced as soon as I burn-in/torture test the new hardware I picked up while traveling near a Microcenter this Christmas. A new i9-12900K w/ 64GB DDR5 and an i9-14900k w/ 96GB DDR5 are about to take their place. So if I were to run either Ubuntu or Fedora on either of the new systems, I would appreciate either approach. But I like Fedora's more. How much more space is having 4 copies of every library going to take up? 5GB? 15GB tops? Even if it were 50GB, I certainly wouldn't care a bit as the smallest SSD I have that isn't an Optane drive is 1TB. Even an extra 50GB would fit on the 118GB Optane sticks if the base install was 20GB.
    I've been running Gentoo on and off for 20 years. So I've been keeping 10s maybe even 100s of GB of source tarballs on drives somewhere for years. The biggest "cost" of larger libraries is the bandwidth for those with limited or metered internet. So I would actually be very curious how much extra space is needed for the compressed package. I'm wondering if these libraries will be similar enough that a good compression algorithm will mean the extra data needed to be transferred (or consequently stored on mirrors) is an insignificant 2-5% even if the unpacked files are 300% bigger. A compressed filesystem could make all the arguments against Fedora's approach nearly moot considering its failsafe benefits.
    Extra copies of libraries taking up valuable space is really only a thing on an embedded system running off an SD or eMMC. There it might make a ton of difference when you're trying to run the entire system off 16GB of flash or less. Even my only SBC (a VisionFive 2) has a 1TB NVMe drive mounted on it, even though I've been booting Arch on it from a 32GB MicroSD.
    If the Fedora team is willing to put in the work to do this, it's the better route IMHO. But at the least, while it should be the default going forward, they should offer a system wide opt-out during the install. Sometimes having software try to think for us when it cannot know what we have already planned to do at a future date is a bad thing. And developers cannot plan for every contingency and ask us a million questions of what we "might do" during a setup/configuration phase.
    In any case, YT suggested this video and it was very interesting to listen to while I was waiting in the Chick-fil-A drive thru. Thanks!

  • @yugen042 10 months ago +23

    I'd be hesitant to generally even raise to the v2 level on distributions that are positioned for general-purpose computing at this point. There are still plenty of fast-enough Core 2 Duos etc. that run fine on Lubuntu and the like, and they would all be obsolete or forced to switch distros. And with v3, many of the popular older librebootable Ivy Bridge ThinkPads like the X230 will no longer work, and these are plenty fast and not even all that old. What Ubuntu would be communicating is that ten years is the upper limit of what is still worth using, which is the wrong message.

  • @abit_gray 10 months ago +17

    I have an Ivy Bridge laptop and still don't need to upgrade for performance reasons (compared to Windows). I hope distros do not decide that it is too old.

    • @rightwingsafetysquad9872 10 months ago +4

      I used a Sandy Bridge-era Pentium with Atom cores for Windows 11. It wasn't good, but it worked well enough for reading Excel files. Even with Windows, old hardware is still good; actually, in many cases I think Windows is lighter than Linux now.

    • @theexile4694 10 months ago +4

      ​@@rightwingsafetysquad9872 Windows is not lighter, ever, than Linux. 😂

    • @pankoza2 10 months ago +3

      and I am using a f**king AMD FX as my desktop
      and temporarily using a Haswell Celeron (these have AVX cut off) as a laptop until my i5-9300H laptop gets fixed or replaced with something better

    • @abit_gray 10 months ago

      @@rightwingsafetysquad9872 With Linux it is about your choice. The laptop I have is an "office" one and had trouble even with Win10. I am also comparing the performance of programs running there, not just browsing websites.
      The most fun I had was with PayDay 2 and Steam under Wine, which had a 10% or more performance increase compared to Windows :)

    • @wisnoskij 10 months ago

      If you have an ancient computer, just run a distro aimed at legacy hardware. "Distros" won't ever drop support for legacy hardware, but Fedora/Ubuntu and many others never supported legacy hardware and will drop support for architectures as they become legacy.

  • @goaserer 10 months ago +5

    Sounds like a reasonable solution: don't break any existing systems, and at the same time keep up somewhat with RHEL by using newer instructions on newer hardware. It even allows integrating v4 functions no other distro touches yet

  • @lesh4357 10 months ago +4

    I'm watching this on a Sandy Bridge laptop (purchased 2011). It is fine for nearly everything I do.
    M$ seem to be deliberately breaking things in order to get it into landfill.
    Linux is the saviour of old equipment. It is important not to take the M$ approach and force obsolescence.
    So the Fedora (keep v1) approach seems better than the Ubuntu and others' way.
    If the dynamic way becomes too difficult then this could be an installation option. In most situations the CPU type doesn't change during the lifetime of the equipment's / distro's use.
    The Linux community should be commended on the waste it stops. Just think of the resources that go into making new stuff: energy, metals (some rare earth), plastics, CO2 produced, etc.
    So keep everything working as long as possible.

  • @DangerNoodle42 10 months ago +3

    I like the dynamic approach. It could be used to provide transparent transition periods before dropping old hardware support, while also allowing those transitions to take place over a couple of decades.

  • @therealvbw 10 months ago +1

    I dodged a bullet here. I was running Ivy Bridge but upgraded to AM5 for Christmas. Thank you for the videos!

  • @disnaut4935 10 months ago +3

    I appreciate watching these kinds of videos because it puts into perspective what needs to be thought about when it comes to retiring old hardware and trying to take performance or features further. I'm at the start of my software engineering career and I'm still learning about the things that the elders take into consideration, and it blows my mind.

  • @guildpilotone 10 months ago +5

    I like the dynamic implementation proposal. It may take some more work to implement, but its universality makes the effort worth it. Currently running 2 systems, one with Ryzen 5600X and the other with 5700X.

    • @RiantoFatma 10 months ago

      Is it true that the 5600 has (almost) comparable performance to the 5700, despite the generational gap?

  • @freetobe3 10 months ago +3

    As a Sandy Bridge user, Fedora is looking more and more like the Ultimate option. I'm a big Arch fan but my laptop won't be my daily driver for much longer and massive gaps in update cycles are a pain to handle.

  • @B33ENN 10 months ago +2

    I still use Westmere, Ivy-Bridge E and Haswell machines. One of the 'selling' points of Linux is how it runs so well on older hardware and keeps good machines in use.

  • @keit99 10 months ago +3

    That sounds really reasonable. Personally I run a Skylake mobile CPU without AVX-512 😢

  • @ultradude5410 10 months ago +3

    Hey, I'm still running a Sandy Bridge CPU, and it's in the hands-down coolest laptop I've owned
    The Thinkpad X220T is a super cool machine that I strongly encourage people to look up
    I use it because I was able to pick it up for cheap when I needed a laptop to take to and from university
    It's plenty powerful for all of the school-related things I do. The only pain point is the 720p-ish screen, but at least it's a touchscreen! Well, also the trackpad is a little janky, but the trackpoint is the way anyways!

  • @k.b.tidwell 10 months ago

    You point out something that "I could feel in my bones" when you talk about how some chips, like the Atoms, just defy easy categorization. I presume that all of this has roots in a simple labor shortage in maintaining all of the old chips, but with AI, why does this have to be?
    Ok...wow...I was commenting as I listened to the video, and I actually just had to delete a proposal because YOU COVERED IT lol with the bit about the code optimization routines to automagically modify things to adapt to the older CPUs in an efficient way. EXCELLENT, AND I AGREE WITH YOUR JUDGEMENT THAT IT'S THE BEST WAY!
    I mean, our coders are smart enough to get this done easily, aren't they? Aren't they? (Kind of makes you wonder why this hasn't been done long ago, huh?)
    Great video!

  • @Kris-od3sj 10 months ago +3

    I expect that this rollout in Fedora will incentivize developers to optimize their programs/libs for those higher feature levels, now that there's going to actually be a widely used distro which utilizes said feature levels (corporate distros like RHEL and SLE aside)

  • @billv4987 10 months ago +1

    I'm running an i7 10700 (main machine, running Kubuntu), an i7 3770 (work-related coding, running Mint Cinnamon), and a Xeon E3110 (media and file server running Debian). The Wolfdale Xeon is an old-timer but still doing very well in its role. The Ivy Bridge still does great for its purpose as well.

  • @olafschluter706 10 months ago +1

    I stopped the video at times to read the full proposal, and I think that it is reasonable. Appreciate your presentation of the topic.

    • @BrodieRobertson 10 months ago

      The links are almost always in the description as well

  • @Pentium100MHz 10 months ago +12

    I use Debian primarily and Debian probably is not going to drop the support this soon. However, I use old hardware as servers - Opteron 270 as a file server (it is perfectly adequate for that), Xeon X5687 as a VM host and Xeon E5-2660 as another VM host. Opteron 4256EE (this is a low-TDP CPU) as a router (it also runs a couple of VMs).
    Servers are expensive, that's why I use older ones - they were cheaper to buy and now that I have them, I might as well use them. While I do not chase the latest versions, I eventually upgrade the OS when the time is right. It would be really expensive to replace those servers with newer and more powerful ones just to see the load go from 2-4 to 0.5, it's not like I would utilize the extra performance of the newer CPUs.

    • @HagobSaldadianSmeik 10 months ago +9

      According to Passmark benchmarks the Opteron 270 with its 95W TDP is much slower than a Raspberry Pi 5. I don't pay your power bill but I think it might be time to let that poor thing retire.

    • @Pentium100MHz 10 months ago +5

      @@HagobSaldadianSmeik So, how do I attach 14 hard drives (12 for data and 2 for the OS) and 24GB of ECC RAM to a Raspberry Pi?
      The thing is, I have looked into it, especially last year when electricity was really expensive. The server currently uses about 300W, or about 216kWh/month or, with current energy prices, about 40EUR/month.
      The newer servers usually fall into these categories:
      1. Same or similar TDP - so, probably no energy savings, just the CPU will be 99.5% idle instead of 98% idle.
      2. Expensive - will take years to pay for itself.
      3. Barely doable.
      I had plans to replace the motherboard and CPU with ones that support the Opteron 2419EE - those have a 60W TDP with 40W "average CPU power", and the motherboard has PCI-X slots so I can reuse my SATA HBAs. This would probably save me ~100W of power or ~13EUR/month while costing something like 200EUR, so it would pay for itself in a bit over a year. Long story short, I tried to buy a server with such a motherboard, the seller wrapped it in a single layer of cardboard and called it "proper packaging", so the predictable happened. I have the CPUs, so maybe at some point I will buy the motherboard (from another seller).
      Maybe at some point some client will decide to get rid of their old server and I will be allowed to drag it home for "recycling".
      While the Opteron 270 CPUs are not the fastest, they are good enough for me and can saturate a 1G link reading/writing from a zfs pool (assuming the hard drives are not the bottleneck). A newer CPU would just be more idle.

    • @ReflexVE 10 months ago

      @@Pentium100MHz Gonna concur with @HagobSaldadianSmeik on this. I don't think they are saying you should replace it with a Pi; I think they are saying that even the lowest-end gear today clobbers what you have for a fraction of the cost. Personally I'd just get one of those cheap NAS cases for your drives and an Intel N100-type board for it. It'll clobber current perf and sip power by comparison, plus they are pretty damn cheap. Alternately, invest in something a bit more powerful, toss Proxmox on there and consolidate three servers into one, saving a ton of power and vastly simplifying maintenance.
      While it is true that a modern CPU is going to sit idle most of the time, 1) a huge part of power savings is how quickly a CPU can return to idle; the lower the performance, the more time it spends consuming max power, so idle is a good thing, and 2) power management, especially idle power, is orders of magnitude better than in the era of the Opteron 270; even at idle they are going to use a fraction of the power of your current setup.
      But hey, it's your server room. I finally gave up my rack-mount setup and am quite happy with a 12-bay QNAP + a Minisforum mini PC with a Ryzen 7940. They didn't cost much, vastly simplified my setup and use a fraction of the power of what I used to consume. Plus when I do need the performance, it's there.

    • @HagobSaldadianSmeik 10 months ago +2

      @@Pentium100MHz 24GB and 14 hard drives? How about 3 Raspberry Pis with USB hubs in a Ceph cluster? 🙃
      No, I get it, I am also running some old server hardware. Admittedly my Xeon E3-1230 v2 isn't quite as ancient, but I will keep running this thing until it breaks.

    • @BrainStormzFTC 10 months ago +1

      Since the Pi 5 has PCIe support, I see no reason you couldn't connect that many drives. Jeff Geerling has some videos on that type of thing.

  • @Kurvenjunkie1 10 months ago

    This is an amazing idea, I hope Fedora will go that way! ❤
    Coming from the s390x architecture, we had several discussions about dropping hardware support in favor of newer, faster instructions, leaving customers behind. And the other way around: cases with some SIGILLs (most likely because of incorrect implementations).
    However, on the s390x side, there is some code to verify the CPU level before executing some newer instructions. But this needs to be implemented on every occasion. The Fedora solution would sit at a higher level and may reduce complexity in specialized code.

  • @PhilfreezeCH 10 months ago +3

    This is also a change that should work well with future RISC-V CPUs, which are likely to have differing extension support with different performance benefits.

    • @terrydaktyllus1320 10 months ago

      You're making this sound like it's something new here but if you have access to the source code (which has been the philosophy since Linux began anyway) then you can always compile the code yourself using optimisations appropriate to the CPU platform you are working with - including RISC-V.
      I'm not saying that everyone should have to know how to compile code themselves but there's nothing here that hasn't already been here for decades.

  • @tech34756 10 months ago +3

    As someone who has Linux on some older/Atom hardware, I prefer any method which retains compatibility.
    I know it may not be the most optimised code, but I'd rather have something usable.

  • @xuerian 10 months ago +2

    I think this is a great solution. With that system in place, it seems relatively simple to curate the optimization packages that are present as well, limiting the storage duplication.

    • @Tobiasliese 10 months ago

      Also, if you use storage dedup, there shouldn't be a massive change in file size.

  • @Afsafs123 10 months ago +9

    I'm a Fedora user, and I have a love/hate relationship with the distro. The deprecation of X11 affects me personally and is too early (nvidia sucks). This change, I'd be fine with supporting. I'm still considering moving to Ubuntu simply due to needing this machine to "just work" with X11, but in other ways I just can't stand it (apt is a mess compared to dnf).
    The second I moved back to Cinnamon and X11, I remembered why I loved Fedora. The second I switch to Wayland, I'm reminded why I hate it.
    It sucks, but hey, I could just distro hop if I really get annoyed. We're in the weird position of being between technologies.

    • @tekno679 10 months ago +5

      Could you elaborate on why you need X11? Personally, I only encountered an issue with screen sharing after switching, everything else worked fine.

    • @Afsafs123 10 months ago

      @@tekno679 The amount of bugs on my 3060 is staggering. Random app crashes (truly random), system lockups, graphical glitches, even in web browsers (and I'm a web dev)... It was unusable. Truly, I mean it. I lost hours to bugs.
      I will admit, I went with the KDE spin. Maybe its Wayland implementation is terrible. GNOME isn't an option, though, I can't stand its interface (moved from Cinnamon).
      Wayland is _probably_ amazing. For me, it's just not ready.

    • @huntercz1226 10 months ago +4

      @@tekno679 Out of sync frames and tearing. Not a problem if your GPU can pull frames at your monitor's refresh rate, but it's a big problem and annoying when gaming. Fortunately, this will be solved when the explicit sync protocol gets merged to Wayland and compositors will implement it.
      This only happens on Nvidia.

    • @TVPInterpolation 10 months ago

      @@huntercz1226 You may laugh, but I can use Wayland on my hardware as long as my monitor is plugged into the iGPU. Games instantly pick the right GPU to run on. Maybe that's worth a shot?

    • @Distroreport 10 months ago +1

      Running Fedora 39 WS with Nvidia & Wayland, and for the most part it has been fine. But pretty much all Electron apps crap out. Repo apps or Flatpak, it doesn't matter.
      But Wayland on Gnome has been far more stable for me than on KDE. That was borderline usable for me.

  • @SeekingTheLoveThatGodMeans7648 10 months ago +3

    Could there be a way to initially deploy with baseline V1 binaries but to have the system stage an update from binaries that support more advanced CPUs if it sees a more advanced CPU? Going in the other direction, to a less advanced CPU, we would expect to be a very rare case but perhaps could be taken care of through a thumb drive or other removable media based bootable updater.
    And/or do the CPU feature based path thing for versions of the software and libraries that use the more advanced CPU features, but only as much as the actual CPU installed would require it. (Don't get V4 updates, for example, if you only have a V3.)
    Now some users might want to copy an entire Linux installation onto another machine to get an exact duplicate environment. In this the CPU feature based path thing would be what to do.

  • @henrik2117 10 months ago +31

    The Linux community was proud to say that when Microsoft made the change for Windows 11+, users of older hardware could go and install Linux instead. Making a Microsoft move for a couple of percentage points would do more damage than good.
    People interested in optimising can and will always do the compilation themselves anyway.

    • @lukas_ls 10 months ago +5

      You obviously didn't understand the proposal. Nothing would change for the users, and unlike on Windows (where this issue has actually existed for quite a while now and is causing problems) it can actually bring some benefits.

    • @donkey7921 10 months ago +1

      a couple percentage points adds up. not to mention it depends on how this is done and what the threshold for supported hardware is. PS: I like Win 11 a lot compared to Windows 10. it's been infinitely better for me in every way; if that kind of improvement is what we could get on Linux, I'm 1000% for it.

    • @FireStormOOO_ 10 months ago +2

      Did you watch the video? This is specifically in contrast to the Ubuntu/Canonical proposal.

    • @henrik2117 10 months ago +1

      @@lukas_ls yes, I did. I chose to make a statement to avoid things going further. The proposals are all fine if implemented as proposed, but as clearly stated in the video they are PROPOSALS, not final decisions, and as seen in many other cases things can go from fine to full Red Hat in a short time.

    • @henrik2117 10 months ago

      @@donkey7921 then use windows 11. The point is to leave the choice to the user. That is the philosophy of open source.

  • @dameanvil 10 months ago +1

    0:00 💡 A recent experiment in Ubuntu explored raising the CPU baseline from x86-64 v1 to potentially v3, aimed at understanding the hardware loss and benefits, but it's purely experimental.
    1:22 💡 Fedora proposes optimized binaries for the amd64 architecture, targeting different microarchitecture levels (v1, v2, v3, v4), but it's not guaranteed and requires community feedback before implementation.
    2:39 💡 The microarchitecture levels (v1, v2, v3, v4) correspond to specific CPU generations, such as Intel's Haswell and Skylake or AMD's Zen 4, with various CPUs fitting into each level.
    3:18 💡 Compiling code for a higher microarchitecture level than the CPU supports leads to crashes, and performance differences range from -5% to +10%, varying by application.
    6:02 💡 Fedora's plan involves dynamically loading CPU-optimized libraries based on the CPU's capabilities, enabling performance boosts without excluding older CPUs.
    9:25 💡 The dynamic linker will check CPU capabilities at boot, loading the appropriate optimizations, offering a more flexible solution compared to hard-optimizing for a specific microarchitecture level.
    11:38 💡 Fedora's proposal aims to benefit developers interested in optimization work within Fedora, providing performance gains to users with compatible hardware automatically and transparently.
    12:21 💡 Unlike Ubuntu's experiment that raised the baseline, Fedora plans to move from v2 to v3, potentially excluding older hardware but offering performance and energy efficiency improvements for supported devices.
    14:22 💡 Fedora's proposal is open for discussion, indicating a potential shift in CPU baselines across distributions, allowing a gradual adoption of newer baselines without abruptly cutting off older CPUs.

  • @--i-am-root 10 months ago +2

    13:53 lol. I run older hardware (2009-2017), and plan on getting a pre-IME laptop, so worst case scenario I do Arch with a custom kernel, or go full Gentoo... or even LFS.

  • @alenygam6048 10 months ago +6

    Well, my opinion is that my main laptop is a Thinkpad T430 with an i7 3720QM, which is still a perfectly capable machine for what I need it to do, and I do not want to fix what ain't broke.

  • @Verssales 10 months ago +2

    For me particularly this isn't an issue; like many tech people I like to update my hardware from time to time, but I understand that many people can't. I think the idea of raising the requirement is OK-ish; there will always be distros that support really old hardware, like we still have distros supporting 32-bit. So the idea of loading binaries dynamically is VERY cool; this could be used not just for x86, but for ARM and RISC-V too in the future, as they get more popular on Linux.

  • @niroc6018 10 months ago +3

    I've been compiling a custom kernel and CPU-heavy programs with the appropriate flags for my CPU architecture (Zen 2) for years. It would be nice if I didn't have to anymore... well, I'd still use a custom kernel for a better CPU scheduler suited to my needs.

  • @hygri 9 months ago

    Yeah, cool stuff Brodie, didn't know about this, great move from Fedora! Looks like a solid bit of kit they're building; likely it'll end up in all the other distros in time. Running a spread of stuff from v2 to the latest v4 here - it'd be seriously handy if this works its way into Gentoo and Nix.

  • @esra_erimez 10 months ago +1

    Never clicked on a video faster. This is so very relevant

  • @von_nobody 10 months ago +1

    One thing we need to consider: current code targeting v3 is a mixed bag because of compiler shortcomings; in theory, better compilers will gain more without drawbacks. Maybe if v3 is more widely used, there will be more pressure on compilers to fix all the inefficiencies.

  • @scheimong 10 months ago +2

    Sounds pretty good to me. I also wonder whether they're planning to do something similar for the kernel, or if this effort is currently limited to userland.

  • @Megabobster 10 months ago +3

    x86 code runs on x86-64 CPUs, and Linux supports both together seamlessly already. I could be misunderstanding how things work, but why not just define new architectures with the supported feature sets and do things the same way they've been working for ages?

  • @joeMW284 10 months ago +3

    I'm rocking a 3rd-gen i7. It works fine. I would be pissed if they pulled a Microsoft on me.

  • @FireStormOOO_ 10 months ago +1

    This seems way smarter. If it can be cheap and easy enough to support a wide range of CPU variants then we just get the best of both worlds.

  • @MarkParkTech 10 months ago +13

    It seems a bit over-complicated. I think most people aren't switching out newer CPUs for older ones, and as I don't have any better idea, I'd like to see the option at the package manager level to only install the optimized variants (if available) that your architecture supports. If you're someone who does a lot of tinkering, I suppose the ability to have both versions simultaneously can be useful, and using the systemd route makes sense. But if you're just installing a CPU that supports a more up-to-date standard, it should also be able to detect and update your packages to the optimized versions whenever you're ready - but even if it doesn't, everything should still work. Really, I don't see any good options for this issue that have been presented thus far.

    • @pacifico4999 10 months ago +6

      I agree with you, I think this should be handled by the package manager. Maybe even add configurations to opt out of optimizations if you're one of the few people in the world downgrading CPU architectures without reinstalling the OS.

    • @enemixius 10 months ago +3

      I think that would be a feature for more niche distros. For the large ones, like Fedora, it makes sense to keep it simple and have everything there.
      If you're the kind of person to go tinkering, you'll likely not be running vanilla Fedora anyway, and your enthusiast distro of choice will probably find a way to let you optimise even more.

    • @nio804 10 months ago

      I think this is actually quite a simple way to implement support for optimized binaries while retaining backwards compatibility.
      You don't need to have any additional logic in package management deciding what to install, and the system degrades gracefully to the baseline binaries, which means you pretty much can't make mistakes that would break systems (the worst mistake a distro could make). Since the glibc support is already there, all systemd has to do is slightly change the system $PATH to affect which binaries get used. It's a well-understood mechanism that has existed since forever.
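
      A toy C illustration of that $PATH trick; the glibc-hwcaps-style directory names under /usr/bin are assumptions for the sketch, not the final Fedora/systemd layout:

          #include <stdio.h>

          int main(void) {
              __builtin_cpu_init();  /* GCC 12+ understands the level names */
              const char *extra = "";
              if (__builtin_cpu_supports("x86-64-v4"))
                  extra = "/usr/bin/glibc-hwcaps/x86-64-v4:"
                          "/usr/bin/glibc-hwcaps/x86-64-v3:"
                          "/usr/bin/glibc-hwcaps/x86-64-v2:";
              else if (__builtin_cpu_supports("x86-64-v3"))
                  extra = "/usr/bin/glibc-hwcaps/x86-64-v3:"
                          "/usr/bin/glibc-hwcaps/x86-64-v2:";
              else if (__builtin_cpu_supports("x86-64-v2"))
                  extra = "/usr/bin/glibc-hwcaps/x86-64-v2:";

              /* Optimized dirs first; plain /usr/bin stays last, so a
                 missing optimized build degrades gracefully to baseline. */
              printf("PATH=%s/usr/bin\n", extra);
              return 0;
          }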

    • @pacifico4999 10 months ago +1

      @@nio804 why hold 4 binaries if people hardly ever downgrade CPUs, especially across generations?

    • @pacifico4999 10 months ago

      @@nio804 you need to make changes to the package manager anyway, since now it has to download 4 versions of the binaries and put them in their respective location

  • @necuz 10 months ago +2

    I recently distro-hopped from Nobara 39 to CachyOS (mainly for a working Nvidia 545 driver). Subjectively there is no speed difference at all from having the v3 binaries. However, if distros actually start shipping v3 packages, it would incentivize devs to do those optimizations.

    • @SianaGearz 10 months ago +1

      Nah. This really only affects instruction selection by the compiler, which is a benefit that is distributed very broadly but is very minor in magnitude. Libraries and applications, say in the multimedia space, which know they need to wring all the possible performance out of a CPU, include branching code paths (usually an indirect call to larger data-processing kernels) with fine-grained optimisations for numerous CPUs. So if you compile them for any x86-64, they will still include all those SSE4.2, AVX, etc. code paths; they do some compiler-flag twiddling to be allowed to generate that code, and they know how to not execute code that will not run on your CPU. Cryptography libraries and similar should IMO do the same. Everything else will never be hand-optimised.

  • @LloydLynx 10 months ago +1

    Maybe I'm just being an old man, but Haswell doesn't feel that old. It feels like the more wallet-friendly option for when you can't afford brand-new hardware. Now it's 10 years old. When did that happen? My Haswell workstation still feels like a beast: i7 4790, RX 580, 16GB RAM, and 3 SSDs for root, home and swap. Now it's becoming the bare minimum.

  • @az9az9az9 10 months ago +10

    The Geekbench ML benchmark scored 10.37 times higher with AVX2 than with baseline x86-64 on an AMD 7840HS. AVX-512 should then be 20.74 times faster. Strangely enough, the fans on my laptop were silent running the benchmark with AVX2, but with baseline x86-64 they were spinning like a turbine engine. This should also mean crazy good battery life.

    • @acerIOstream 10 months ago +2

      You bring a good point regarding battery life. These operations are likely more power efficient, and can result in Linux being certified for sale in some countries.

    • @Ozzymand 10 months ago +2

      @@acerIOstream Good thing Linux is free and doesn't need to be sold, ey?

  • @RobBCactive 10 months ago +4

    This sort of thing NEVER gains much applied in the general case; it's far better to have optimised shared libraries where CPU extended features make a large difference for specialised processing.
    Many years ago an interesting hybrid ABI was developed with AMD64 register counts and call conventions but using compact 32-bit pointers. The actual results were disappointing and inconsistent, despite compacting code, so the effort petered out as not being worth the trouble.

  • @itjustcrashed 10 months ago +1

    Apple Silicon M2 (10 Core GPU, 8 Core CPU)
    I use macOS but even if I used TempleOS I'd still watch Brodie.

  • @wagyourtai1 10 months ago +3

    I'm surprised they're not including ARM in the proposal... or is that a "separate" distro

    • @xuerian 10 months ago

      The scope of the solution seems likely to expand if it is well received.

  • @hopelessdecoy 10 months ago +1

    I say make it modular; the more modular and customizable a system is, the more Linux it is in my book. Let me install CPU plugins/libraries that programs can take advantage of if needed, or that I can uninstall if not. I'm not a systems programmer but I love making software that accepts expandable modules and plugins. Modding Skyrim got me into programming and shaped my design philosophy.

  • @dashcharger24
    @dashcharger24 10 months ago +3

    This is why I switched back to Fedora. It still sucks how they handle source control, but RH just makes better (future-looking) decisions.
    Canonical is very good at being Google 2.0, meaning making a mess and cancelling it later.

    • @MatthewMiller-mattdm
      @MatthewMiller-mattdm 10 months ago

      What problems are you experiencing with Fedora’s approach to source control?

    • @dashcharger24
      @dashcharger24 10 months ago

      @@MatthewMiller-mattdm The sources for RHEL have become closed off. It's not affecting me, because I don't work on Linux development, but it isn't good for the Linux ecosystem, as RH makes a lot of contributions.

    • @MatthewMiller-mattdm
      @MatthewMiller-mattdm 10 months ago

      @@dashcharger24 That doesn't have anything to do with Fedora.
      And for RHEL, this is overblown. RHEL is still open source, and crucially, all improvements and fixes _do_ get released publicly. In some cases, you'll see something like version 1.19+patch of some package go to customers initially (both binary and source) and the public fix might be 1.20 with _slightly_ different code - but that update _also_ goes to RHEL at the next minor release.

  • @maxmouse3
    @maxmouse3 10 months ago +1

    I like the Fedora proposal, I hope this works out :)

  • @speedytruck
    @speedytruck 10 months ago +2

    5:05 Because those are software changes. They don't require you to buy new hardware :D

  • @lubossoltes321
    @lubossoltes321 10 months ago +1

    You somehow dropped Zen 1/2/3 between the v3 and v4 baselines? Also, AFAIK Intel 12th gen and newer only support AVX-512 when the E-cores are disabled...
    So basically, if I symlink the same binary into all three hwcaps directories (v2/v3/v4), what will happen?

  • @s.m.4995
    @s.m.4995 10 months ago +2

    They're worried about making Linus Torvalds' laptop stop working.

  • @marsovac
    @marsovac 10 months ago +4

    I like the Fedora method, but there are two solutions to this, the other a bit more complicated: extend the package manager to have microarch tags. Recompile the kernel for the CPU at hand, or download the most capable precompiled kernel that the CPU supports. When downloading packages from the package manager, fetch the best build for that microarch; if one isn't available, compile it locally, like Gentoo does.

  • @genstian
    @genstian 10 months ago +1

    Of course, not every kind of load can be CPU-optimized, but some can, and some matter quite a lot - many of the things that don't speed up are things that don't matter, like your background I/O load. We've targeted Haswell+ almost since Haswell, because a lot of our high-end, CPU-bound loads benefit significantly (the exceptions are mostly I/O- and RAM-bandwidth-bound loads). Not every distro has to target ancient hardware, and for RHEL and Ubuntu, no one is spending money on ancient hardware.
    I admit it's a compromise, but they should probably define a hard cap on how old supported hardware can be.

  • @marcelomafra
    @marcelomafra 10 months ago +1

    I've been using CachyOS's x86-64-v3 kernel for a while (Core i7 10750H). I can't say I feel any performance difference, but I take advantage of all the other patches that are already integrated, including the Lenovo Legion patchset. That said, it helped me with NVIDIA drivers: for some reason, with Fedora's kernel I would always have weird problems installing or updating them from both RPM Fusion and Negativo17.

  • @Cody4k
    @Cody4k 10 months ago +1

    NixOS user with a Ryzen 7840U Laptop (v4), 5800x Desktop (v3), 5950x Server (v3), 3900x Server (v3), i5-1135G7 Server (v4), and Xeon E3-1270 Server (v2). I also have a few old v1 systems lingering (Socket AM3+).

  • @jonathanbuzzard1376
    @jonathanbuzzard1376 9 months ago

    Coming from an HPC background (I maintain a large HPC system), you need to benchmark every code for optimal performance. For example, just because your CPU supports AVX-512 does not mean that compiling with AVX-512 gives you the best performance; you might well be better off compiling for AVX2 instead. Depending on your code, targeting AVX2 can keep more compute units busy than targeting AVX-512, which may give better performance. When we install new software, we compile with a range of compilers and optimizations, work out which gives the best performance, and then release that. When you have code burning through millions of CPU hours a year, a 10% saving is a big deal. Imagine you have a $10 million HPC system: make your code 10% faster and it effectively becomes an $11 million HPC system, and that is serious money in anyone's books.
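
    In that spirit, here is a toy harness for checking the AVX2-vs-AVX-512 question on one machine, assuming GCC on Linux. The function names and array size are arbitrary, and a real benchmark would pin clocks, repeat runs, and use a compute-bound kernel rather than this mostly memory-bound one:

    ```c
    /* The same loop built twice, once allowed AVX2 and once AVX-512,
     * timed back to back. GCC's auto-vectorizer picks the widest
     * vectors each variant permits at -O3. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)

    __attribute__((target("avx2"), optimize("O3")))
    static int sum_avx2(const int *a) {
        int s = 0;
        for (long i = 0; i < N; i++) s += a[i];
        return s;
    }

    __attribute__((target("avx512f"), optimize("O3")))
    static int sum_avx512(const int *a) {
        int s = 0;
        for (long i = 0; i < N; i++) s += a[i];
        return s;
    }

    static double secs(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        int *a = malloc(N * sizeof *a);
        if (!a) return 1;
        for (long i = 0; i < N; i++) a[i] = 1;
        __builtin_cpu_init();
        double t0 = secs(); int s1 = sum_avx2(a); double t1 = secs();
        printf("avx2:   %.4fs (sum %d)\n", t1 - t0, s1);
        if (__builtin_cpu_supports("avx512f")) { /* avoid SIGILL */
            double t2 = secs(); int s2 = sum_avx512(a); double t3 = secs();
            printf("avx512: %.4fs (sum %d)\n", t3 - t2, s2);
        }
        free(a);
        return 0;
    }
    ```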

  • @angeldirk00
    @angeldirk00 10 months ago +1

    Is there a way/program to see which CPU level you're running under Linux? I have a laptop made in 2017 that I'm fairly certain is at least v3 (it has AVX, but not AVX-512), but I want to make sure.
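
    One way to check, assuming a reasonably new toolchain - GCC 12 and later understand the level names directly (glibc's dynamic loader also reports which glibc-hwcaps directories it would search if you run /lib64/ld-linux-x86-64.so.2 --help). Note that v4 requires AVX-512, so a chip with AVX/AVX2 but no AVX-512 tops out at v3:

    ```c
    /* Print which x86-64 psABI levels the running CPU satisfies. */
    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();
        printf("x86-64-v2: %s\n", __builtin_cpu_supports("x86-64-v2") ? "yes" : "no");
        printf("x86-64-v3: %s\n", __builtin_cpu_supports("x86-64-v3") ? "yes" : "no");
        printf("x86-64-v4: %s\n", __builtin_cpu_supports("x86-64-v4") ? "yes" : "no");
        return 0;
    }
    ```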

  • @zeburgerkang
    @zeburgerkang 10 months ago

    5800x... nearly finished the code camp video "Introduction to Linux - Full Course for Beginners"... 3 chapters left, I might go reinforce what I've taken note of with the crash course version of the video... any advice?

  • @alphaomega154
    @alphaomega154 10 months ago

    I'm really curious about Fedora; I think I want to give it a try. I want to know whether, like LMDE 6, it comes with the PipeWire/WirePlumber pairing out of the box, since I find that important - there is a performance benefit. Also, I just noticed that LVM encryption, especially when doubled up with other encryption like home-folder encryption or ZFS, taxes the CPU, which can hurt hard-drive performance. You probably won't notice it if you use a recent CPU with excellent multicore encryption/decryption performance, but it still takes up some CPU resources and slows hard-drive response time somewhat. I guess Windows doesn't have that kind of security by default, but security comes with a price, right? And most of the time, the price is performance.

  • @akosv96
    @akosv96 10 months ago +1

    Zen 3 Ryzen 5700G. The integrated graphics saved my life when my GPU crashed. Now I feel greedy and want a better CPU, since this one is PCIe Gen 3 only (in case I want to swap in a new GPU), but it's super useful for redundancy in case I virtualize Windows.

  • @Waitwhat469
    @Waitwhat469 10 months ago +4

    I wonder if other kinds of hardware detection and optimization like this will come up in the future.

  • @demanuDJ
    @demanuDJ 10 months ago +11

    I'm still using Sandy Bridge, Ivy Bridge, Haswell, and a few AMD equivalents... these machines still work in most cases and get the job done. I don't want to replace them because of some cryptographic optimisations I don't use every day. But I don't use Ubuntu; I've learned they make a lot of stupid decisions and they're not trustworthy to me. Sorry, Canonical...

    • @spicynoodle7419
      @spicynoodle7419 10 months ago

      What happens when browsers and other popular programs start releasing only amd64-v3 builds?

    • @dashcharger24
      @dashcharger24 10 months ago +1

      Sandy and Ivy Bridge... those are names I haven't heard in a long time.

  • @chrstphrr
    @chrstphrr 10 months ago +1

    Unlike some of the folks here, less than half my age, with pro-euthanize-old-CPU opinions, I've used pre- and post-386 hardware before.
    And back then, not when it was bleeding edge, but a few years after. The 386 was, in practical terms, out in the late 80s, and it was scrubbed from Linux kernel support almost three decades after Intel announced the introduction of that silicon.
    15-year-old hardware isn't the actual get-out-and-push kind of slow that 3-4 decade old hardware was.
    So, keeping the same timescale... no need to bring out the pitchforks and torches for the first parts of the AMD x64 architecture until AFTER the 32-bit time_t problem has everyone abuzz like Y2K in the late 90s.
    Not everyone on this planet is running Linux on old hardware for nostalgia. Not everyone has first-world funds to keep up with the bleeding edge. Slightly old computer hardware isn't the same as eating 8-year-expired canned food you found in a dumpster.
    15-year-old hardware (at least, desktops) has peripheral upgrades that keep it more current than 386 hardware managed even when it was relevant. In the few years before and after the 386, peripheral slots changed several times, each to incompatible physical/pinout form factors. My Core 2 motherboard can still run an SSD, or a more modern GPU - not to their full capacity, I'll concede, but far more usable than a 386 trying to keep working 1, 2, or 3 decades later.
    Let's keep the snobbery about hardware support limited to the far less free commercial OSes like Windows or MacOS, please and thank you.

  • @666lordofdestruction
    @666lordofdestruction 10 months ago +1

    Hey Brodie. In an earlier episode you mentioned you wouldn't use Linux if you weren't able to game on it. Do you game with multiple monitors on Linux? Are you able to get a game to span across three monitors (e.g. a 48:9 aspect ratio)? Maybe even make a quick video showing how it's done... ^_^

    • @BrodieRobertson
      @BrodieRobertson 10 months ago +1

      I don't do any multi-monitor gaming so I can't help with that, but I might look into it

  • @FaithyJo
    @FaithyJo 10 months ago +1

    I'm rocking a Sandy Bridge Xeon in my home server. SSE 4.1, 4.2, and AVX... I'm golden.
    I also run Debian, which I guarantee will be the last distro to raise its x86-64 version level.

  • @ibrahim-tech
    @ibrahim-tech 3 months ago

    I think all Linux distributions should adopt Funtoo's approach of providing packages optimized for various microarchitectures. This would ensure no one is left out and allow us to get the most out of our hardware. By extending the life of well-functioning computers, we can reduce e-waste. It's frustrating when people have to discard machines capable of running a browser or office suite just because the OS vendor stops supporting them, leaving them insecure.

  • @AndreiNeacsu
    @AndreiNeacsu 10 months ago +1

    I still have some older computers (FX-8350, i7-3770K, etc.), all with 32GB of DDR3 RAM and Vega 56 or RX 580 GPUs, that run just fine. It's almost comical that they run Win 11 just fine (after disabling the TPM 2.0 checks) but Ubuntu may stop working on them. This feels like a good time to quote Linus Torvalds, but replacing nVidia with Canonical.

  • @StupidusMaximusTheFirst
    @StupidusMaximusTheFirst 6 months ago

    My main desktop is an Ivy Bridge i7. It's super fast and plays all modern games. I will likely still be using this system 10 years from now, in a different role, like a headless home server. They can offer optimised binaries for the newest chips, that's fine and I support it, as long as they never drop support for the baseline older x86.

  • @RobertTreat9
    @RobertTreat9 10 months ago +2

    Have a laptop here that's running on Sandy Bridge. It runs quite well for what I have it doing, and it would be a shame if they turned it into a doorstop at this point. It's a bit long in the tooth, but far from needing a cyber nursing home.

  • @prozacgodgamedev
    @prozacgodgamedev 10 months ago

    I always debate which software packages are going to truly benefit from this. GIMP? Inkscape? LibreOffice? I mean... where do you draw the line between keeping, say, a v1 base and compiling apps for v2/v3/v4, etc.?
    I'm just not sure what issue this solves - marginal improvements for some apps on some platforms?

  • @SianaGearz
    @SianaGearz 10 months ago +1

    Phenom II is very usable for general-purpose tasks, and those are level-v1 machines. Even Core 2 isn't so bad. I think it's too early to jettison these machines off the desktop entirely.
    But also, should it even be necessary to handle this at the system level? It just makes sense that libraries which do cryptography, as well as specialised applications and libraries which do heavy video processing and the like, include cpuid-specific code paths to take advantage of new instructions while remaining compatible. They can also make finer-grained selections where those make sense: for example, the very useful popcnt instruction is available in AMD processors with SSE4a, which are not level v2, and Nehalem has it as well.
    Other applications benefit from being able to just assume popcnt at the compiler level, getting the optimisation even where branching isn't practical, but honestly the benefit is limited. You'll see that the benchmarks with a substantial benefit are all highly specialised code such as cryptography. A sketch of the compiler-assisted middle ground follows below.
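
    For that compiler-assisted middle ground, GCC and Clang offer target_clones: the compiler emits several variants of one function plus a resolver that picks among them at load time, so a single binary still runs on level-v1 machines. A minimal sketch, assuming GCC on Linux/glibc (the function and its workload are invented for illustration):

    ```c
    /* "default", POPCNT and AVX2 builds of this function are generated
     * automatically, along with an ifunc resolver that selects one
     * when the program is loaded - no hand-written dispatch needed. */
    #include <stdio.h>

    __attribute__((target_clones("default", "popcnt", "avx2")))
    unsigned bits_set(const unsigned long *v, int n)
    {
        unsigned total = 0;
        for (int i = 0; i < n; i++)
            total += __builtin_popcountl(v[i]); /* POPCNT insn where allowed */
        return total;
    }

    int main(void)
    {
        unsigned long data[] = { 0xFFUL, 0xF0F0UL, 1UL };
        printf("%u bits set\n", bits_set(data, 3)); /* prints "17 bits set" */
        return 0;
    }
    ```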

  • @siberx4
    @siberx4 10 months ago +1

    I can say that I currently still run a home server on a Westmere (Nehalem die shrink) processor, so this is relevant to my interests. Any baseline above v2 would screw me over.
    That being said, this whole concern seems kind of dumb. The performance improvements are marginal in almost all cases, and if any application actually meaningfully benefits from the newer optimizations, there's _already_ a mechanism to conditionally enable the newer instructions with just a bit of extra effort on the part of that application's developers.
    The only purpose of this feature is to slightly improve performance for lazy developers of non-performance-focused applications, but at least the Fedora implementation won't lock users on older processors out.

  • @MrKata55
    @MrKata55 10 months ago +1

    I probably shouldn't post this in YouTube comments and should instead post it on Fedora's discussion board, but I'm too lazy to make an account there (Arch user, btw), so feel free to repost these ideas in the right place. I think the most sensible option for the extra binaries is just to make them an extra package? Especially since, as shown, most packages don't actually benefit from the extra instruction-set support, and this gives a very dangerous incentive for proprietary software providers (e.g. Zoom and Teams) to compile their code this way, leading to technological exclusion of people still rocking Core 2 Quads and AM2 Phenoms - which are, I personally checked, still quite usable for e-learning and office stuff as of 2023 in terms of raw compute power, as long as you slap 6GB+ of RAM on their mobos.

  • @NonameEthereal
    @NonameEthereal 10 months ago +1

    You get SIGILL on CPUs that don't support an instruction AND on operating systems that don't support it, even on a CPU that does. Had some fun with that on OpenBSD with AVX-512. Node uses some libraries (simdutf) that build multiple code paths in and supposedly figure out at runtime whether the CPU supports the instructions. But some instruction sets, like AVX-512, ALSO need OS support. Whoopsiedoopsie... Especially since this is also an area where Intel chips have differing support on the same silly CPU because of the whole P- vs E-core split, and Intel later had to soft-disable support that did exist, because the OS can't be expected to look into the future and keep a thread on a P-core just because an AVX-512 instruction is coming up.
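
    The OS half of that check goes through XCR0 rather than plain CPUID, which is the part that bites here. A sketch of the test such libraries have to perform, assuming GCC or Clang on x86-64 (compile with -mxsave for the _xgetbv intrinsic):

    ```c
    /* CPUID alone is not enough: the OS must have enabled the wider
     * register state. XCR0 is only readable via XGETBV when CPUID
     * reports OSXSAVE (leaf 1, ECX bit 27); XCR0 bits 1-2 cover
     * SSE/AVX state, and bits 5-7 the AVX-512 opmask/ZMM state. */
    #include <stdio.h>
    #include <stdint.h>
    #include <cpuid.h>      /* __get_cpuid */
    #include <immintrin.h>  /* _xgetbv */

    int main(void)
    {
        unsigned a, b, c, d;
        if (!__get_cpuid(1, &a, &b, &c, &d) || !(c & (1u << 27))) {
            puts("no OSXSAVE: the OS is not managing extended state");
            return 0;
        }
        uint64_t xcr0 = _xgetbv(0);
        printf("OS-enabled AVX state:     %s\n",
               (xcr0 & 0x06) == 0x06 ? "yes" : "no");
        printf("OS-enabled AVX-512 state: %s\n",
               (xcr0 & 0xE6) == 0xE6 ? "yes" : "no");
        return 0;
    }
    ```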

  • @priit7777
    @priit7777 10 months ago

    I wonder whether having some new instruction gives the same benefit in benchmarks for all CPUs that support it. I mean, does having AVX on AMD vs Intel increase the benefit for some specific library or program by the same amount over "default", +5% for example? Or is it more like +1% on Intel and +7% on AMD, or vice versa? Do different CPU generations with the instruction even see the same benefit? Like Zen -1% (it actually has the instruction just for compatibility's sake, without any performance benefit?), Zen 2 +5%, and Zen 3 +6%, for example?
    Also, a rather drastic jump from v3 to v4...

  • @reaperinsaltbrine5211
    @reaperinsaltbrine5211 9 months ago

    This proposal very accurately shows who RH's owner is now: this is really something IBM would do. That Plan 9 guy was not as crazy as he sounded, even if the chance of Plan 9's mass adoption is gone. In Linux it would probably be best to leave this to the application to decide, because then it would work even on systems that don't use systemd as their init system (like Alpine).

  • @ScottHacunda
    @ScottHacunda 10 months ago

    I am still running an Intel(R) Core(TM) i7-4790K (8) @ 4.40 GHz; it is still fast enough for all the games I play, and it runs programs without any problems. I will upgrade one of these days, now that current processors have finally moved well beyond it - just a matter of getting the money together to build another long-lived computer.

  • @Spencer-wc6ew
    @Spencer-wc6ew 10 months ago +7

    For anyone wondering, here's when the first CPUs supporting each level shipped:
    V1 - 2003 (the first x86-64 chips, AMD Opteron/Athlon 64)
    V2 - 2008 (Intel Nehalem)
    V3 - 2013 (Intel Haswell)
    V4 - 2017 (Intel Skylake-SP/X, the first with the full AVX-512 subset v4 requires)

    • @wolf2965
      @wolf2965 10 months ago +1

      And yet, v3/v4 support was still missing from CPUs produced as late as Q1 2021 - Comet Lake Celerons and Pentiums lacked AVX/AVX2 until the very end.

  • @pcallycat9043
    @pcallycat9043 10 months ago +1

    And... the super cool thing is that, since nearly every distro consumes systemd, now every distro will have to do it the Fedora way or patch systemd not to mess with search paths... wonder how that will play out. I do like the uniformity that systemd brings, but as it takes over more and more of the base OS, distributors are left with less and less freedom to do things any way other than how Fedora/Red Hat decide to do them.

  • @rightwingsafetysquad9872
    @rightwingsafetysquad9872 10 months ago +19

    If I were Ubuntu, I'd do something like support CPUs for at least 7 years on regular releases and at least 12 on new LTS releases (which receive 5 years of support, for a total of 17 years of supporting a CPU). I'm not sure if Fedora has a similar mechanism in place.

    • @Afsafs123
      @Afsafs123 10 months ago +10

      12 years sounds like a long time, but the issue is that in CPU land it really isn't. I think if they went with the Pentium 3 as the minimum, they would be fine. Removing support for old x64 processors at all, for a distro like Ubuntu, is quite frankly insane.

    • @gRocketOne
      @gRocketOne 10 months ago +7

      The problem with that is that "newer" x86 features aren't universally supported by all newer x86 CPUs. There are chips being produced today (e.g. some Intel Atom CPUs) that only support x86-64-v2.

    • @Pasi123
      @Pasi123 10 months ago

      @@gRocketOne Even the regular non-Atom-based Pentium Golds didn't get AVX/AVX2 until 2022. Comet Lake (10th gen) Pentiums released in 2020-2021 didn't have AVX at all, so they were x86-64-v2. As far as I know, the 10th gen Pentiums are still being sold today.
      Same with the Atom-based Pentium Silvers.

    • @the-answer-is-42
      @the-answer-is-42 10 months ago

      I think we should be rather conservative about deprecating CPUs, since there are many fully functional older machines out there, and it's incredibly wasteful to stop supporting them unnecessarily. Microsoft is already doing this with the hardware requirements for Windows 11; I hope Linux distros don't follow their example.

    • @rightwingsafetysquad9872
      @rightwingsafetysquad9872 10 months ago

      @@the-answer-is-42 That's the great thing about Linux: you have options. If you want to run on hardware more than 17-20 years old, there's always Debian or Gentoo.

  • @DaraelDraconis
    @DaraelDraconis 10 months ago

    It's a neat idea. I'm personally not a huge fan of the current trend towards assuming data storage is of negligible concern, resulting in ever-ballooning package sizes (see also, to name a few: software distributed primarily as Docker recipes; flatpak/snappy/whatever the "actually we'll bundle all the dependencies for everything and lose all the benefits of dynamic libraries" packaging system du jour is; systemd itself, to an extent, with the way it integrates ever more functions...), but I can see this being useful.

  • @edelzocker8169
    @edelzocker8169 10 months ago +1

    Core 2 Duo CPUs are still used for home-office work or in specialised systems like AV scanners...