Yep, there is no replacement for this board if we're talking about support for a 10Gbps NIC, but to be honest the Asus Prime N100I-D D4 is also not that bad. At least it has a second M.2 E-key port with PCIe (the Asrock's M.2 E-key port has no PCIe), so it can be used to install an AI accelerator or a "slightly" slower 2.5-5Gbps NIC, and it's a bit cheaper.
Who needs 10Gbps on a home server anyway, that's kind of a meme tbh. Even the usefulness of 2.5Gbps could be questioned for a computer that can only transfer to and from SATA ports.
@@billmurray7676 SATA can handle up to 6Gbps, and a single HDD is capable of saturating 2.5Gbps... IMO a 1Gbps NIC is just slow; I can read data faster than that even from a regular SD card.
It doesn't matter what SATA can support, it matters what you put on it: you put HDDs on it, so that's about 200 MB/s, which is 1.6Gbps. So no, you won't saturate 2.5Gbps with your HDD. That means 10Gbps is clearly useless on a home server. And 2.5Gbps, well, like I said, it's a questionable investment or reason to buy hardware in practice.
@@billmurray7676, the fastest available hard drives are reaching the SATA limit, RAID in a NAS is a common thing, it's possible to reach that limit even with standard hard drives, and we haven't even started talking about modern SSDs used as a buffer or as main storage. It's 2024; a 1Gbps NIC should be considered obsolete for anything above a low-end NAS.
You said HDD, so obviously SSDs are out. Also, it would be pretty stupid to build a RAID for performance in a NAS since, in essence, you're supposed to build for data safety. You can't have both in a RAID. 1Gbps, although not ideal, is clearly not obsolete, 2.5/5 are options, but 10Gbps is definitely useless, which means that PCIe x1 is fine, and you shouldn't sacrifice other benefits for PCIe x2 or x4.
Maybe OK with the ITX form factor. But if you want/need something smaller and more energy efficient, go with the Odroid M1. It has 2 lanes of PCIe 3.0 and can handle the same JMB585-style M.2 adapters etc. without issues.
It's such a PITA to find suitable ITX boards (especially in my country) that I gave up and always chose Flex-ATX boards with an SFX PSU instead. Is it bigger? Yes. But we can always hide it in a piece of furniture...
Do you have / could you do a video about power monitoring in Home Assistant? I like the way you are able to see the power usage of your devices, but it's something I have no experience with, and it would be great to have a how-to video.
I don't know how Wolfgang does it exactly, but I do something very similar with the ESPHome integration in Home Assistant. I buy 'Shelly Plug S' power plugs. I don't like their native app, but the hardware is just a simple ESP8266 with some peripherals. So I just open up those plugs and flash ESPHome onto them with a USB dongle (look up guides for it, it's not too complex). Once I have ESPHome running on the plug, I can easily import it into Home Assistant via the usual ESPHome integration. I often also calibrate them with a known load for better accuracy, but this step is optional. You can then toggle the Shelly power plug via Home Assistant and read the voltage, power, daily power consumption and temperature. Plug it in between the device you want to monitor and the power socket, and you can see the power consumption of that device in HA.
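In case it helps to see what the ESPHome side roughly looks like: the Shelly Plug S measures power with an HLW8012/BL0937 metering chip, so the sensor block is only a few lines. This is just a sketch; the GPIO numbers are placeholders and should be copied from a proper Shelly Plug S ESPHome template (which also sets inversion and calibration options) before flashing:

    sensor:
      - platform: hlw8012
        model: BL0937          # metering chip variant used in the Shelly Plug S
        sel_pin: GPIO12        # placeholder, copy from a Shelly Plug S template
        cf_pin: GPIO05         # placeholder
        cf1_pin: GPIO14        # placeholder
        power:
          name: "Server Power"
        update_interval: 10s

Home Assistant then picks up the "Server Power" sensor automatically through the ESPHome integration, and the history graph comes for free.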
@@harmenkoster7451 Thanks so much for the advice. I've been looking at the LocalBytes smart plug, which comes flashed with Tasmota or ESPHome; I'm not really sure of the difference, but ESPHome looks a lot more straightforward to set up in Home Assistant. I simply want to monitor the power consumption of my server, ideally with a nice graph that I can view over a time period. Ambient temperature would also be nice to monitor; I'm not sure if that's something included in the plug or something I would need to buy in addition. Thanks again.
Looks awesome. Explaining Computers did a really nice mini PC build with an N100 board; think it was different to this one. Also, Supermicro has an awesome, relatively low-power (for the number of cores etc.) EPYC ITX board. That is great if you want more lanes. Also, it should last forever. Obviously, it is more expensive...
I built a desktop with LMDE running on an ASRock N100M with an NVMe SSD and 16GB of Kingston memory, almost like Explaining Computers did, except I use HDMI output to an LG TV. Everything would be fine, but to my huge disappointment the system tends to freeze or reboot spontaneously. I have tried Debian as well as almost all available Debian-based distros with different desktop environments, and Fedora desktop, with no effect. Finally I ended up with LMDE, which seems to be more stable on my hardware. As for working as a remotely controlled NAS or home server, the N100M board worked with no issues at all. It seems the problem is somewhere in the video output/drivers.
You'd be surprised how much power SSDs can use. I've been looking into it. Some SATA SSDs can draw 6W at full load, meaning that 2.5A on the 5V pin of the JST is only really good for two. I have a couple of U.2 SSDs in a rig here and those are rated for 20-25W under load.
What about the Asus Pro WS W680M-ACE SE? I know it's not cheap, but it's probably cheaper than trying to find a C246 board these days, and it's much more modern: it has ECC support and 10Gb networking, and you could pair it with a nice new low-power Intel chip with Quick Sync.
From a tinkerer's point of view, this might be a nice solution, but I would still consider this M.2-to-SATA card a workaround. However, the Intel N100 seems to be the perfect fit for an energy-efficient NAS. The first turnkey solutions are already around the corner; I will wait for one of those, since I prefer to tinker with software over hardware. 😁
Great stuff. I was the one asking in the comments about the Mellanox ConnectX-3, which is terrible. My server needed more horsepower, so I went with an i3-14100. Bought an Intel X710 (dual port) for 100EUR with 2 transceivers. It goes down to C8 and there is always 10G speed. The server is just one thing, though: I've got a rack with 2 switches (8x 10G SFP+ and 8x 1G (4x PoE+)), a shut-down 18-core server with IPMI, 1x router, 1x Wi-Fi 6 AP and 1x camera, on top of the i3 server and a UPS. Previously, with the server based on that 18-core Xeon, total idle consumption was 136W; it went down to 69W with the i3, which is still a lot. So there are also unoptimized things like the camera (15W!), the UPS (not measured) and IPMI (5W), and the networking could probably also do better.
Nice video! It would have been interesting for you to test real-world network file transfer speeds with TrueNAS etc., as that's what really matters to most of us.
This pulled me over the line. Been looking at this board for a while now, but was doubting the performance. Think I am going to get an N100M for a new NAS.
Hey, did you get the N100M, and which case did you use for it? I'm looking for a small case, ideally some kind of mini-ITX one, that has space for the N100M when used with a PicoPSU.
Thanks for the great video. The board is really sweet. The idle power consumption is amazing. It's a shame that there are not more PCIe lanes to go around; there are simply not enough lanes to get to SATA 6Gbps speeds while also having 10Gbit networking. That might be fine for spinning disks, but I am still looking for the 'perfect' 6-8 disk, all-flash NAS. It's a real shame that all those efficient Fujitsu boards are incredibly hard to come by.
I think if you're pursuing a cheap build, you can buy an Intel 82599EN-based network adapter, usually branded as the X520-DA1, and the x8 PCIe card should fit this motherboard's open slot without any problems.
@12:41 Wait a minute... So these power consumption numbers are from using the Bronze-rated SFX power supply, because you hooked up HDDs? Or did you use the Pico power supply? It's not very clear to me. And can someone tell me if I can truly pass through the 6 SATA ports on the NVMe board to a VM in Proxmox? In other words: does the NVMe slot share its IOMMU group with other devices? If so, which devices?
9:29 I copied the code, but running lspci still shows ASPM disabled and I still can't go below C3. Different ASRock mobo, but same issue; the Ethernet controller looks the same, and when I ls'd the driver's folder it also has l1_aspm. Is it supposed to show ASPM as disabled even after I run the sudo tee command? Appreciate the help!
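For anyone stuck on the same thing, this is roughly how the sysfs route works; a sketch, not the exact commands from the video, and the PCI address 01:00.0 is just a placeholder for whatever your NIC shows up as:

    lspci                                              # find the NIC's address, e.g. 01:00.0
    sudo lspci -vvv -s 01:00.0 | grep -i aspm          # check LnkCap/LnkCtl for the ASPM state
    echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/link/l1_aspm   # ask the kernel to enable L1

If lspci still reports ASPM as disabled afterwards, the kernel's ASPM policy or the BIOS may be blocking it; setting the policy to powersupersave (echo powersupersave | sudo tee /sys/module/pcie_aspm/parameters/policy) or enabling ASPM/L1 in the BIOS is usually the next thing to try.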
It doesn't support AV1 encoding on the integrated GPU, but you can still encode AV1 with SVT-AV1 on the CPU using ffmpeg. If I remember correctly, encoding on the CPU gives better quality, despite taking longer.
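For anyone who wants to try that CPU route, a minimal SVT-AV1 invocation looks something like this (filenames are placeholders; on an N100 expect it to be slow, so a high preset number keeps it bearable):

    ffmpeg -i input.mkv -c:v libsvtav1 -preset 8 -crf 35 -c:a copy output.mkv

Lower -preset and -crf values trade encoding time for quality and file size.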
For a NAS like this, AV1 decoding is probably more important than encoding. If you have AV1 files, you want to be able to transcode on the fly to e.g. H.264 or HEVC to stream to a TV or media player box.
Get started with Notion, sign up for free or unlock AI for $10 per month: ntn.so/WolfgangsChannel
*Corrections:*
- At 14:40, the N100's iGPU is not just 'playing back' the AV1 content, it's transcoding it to H.264
- Intel i3-6100 is a dual core processor, not quad core (15:25)
*Links:*
Asrock N100DC-ITX geni.us/ECsSoNr (Amazon)
Jonsbo N2 geni.us/8rpN (Amazon)
ASMedia ASM1166 M.2 SATA Controller geni.us/FL52EAe (Amazon)
Trendnet 10Gbit SFP+ Adapter geni.us/wASn1 (Amazon)
DC Jack to block terminals adapter geni.us/CFMe (Amazon)
4-pin ATX extension cable geni.us/Q26G (Amazon)
PicoPSU geni.us/nX6uO (Amazon)
Sharkoon SilentStorm SFX Bronze 450W geni.us/wKg3i3 (Amazon)
As an Amazon Affiliate, I earn from qualifying purchases.
I had a problem with my Asus N100 board and HEVC playback
ASUS H170i Pro mini !!!! Please! As a home server and as a router
No.
If I manage to get a 24-pin ATX to 4-pin ATX adapter, or 24-pin to 8-pin and then 8-pin to 4-pin, will that work? Would it be safer than just hacking the 24-pin to jump the power?
Why not a full SSD build? It seems way simpler to power, lower consumption, quiet, smaller & possibly faster. Is it just the HDD/SSD capacity vs. price consideration, or am I missing something?
I use the N100M as my main working PC. With 32 GB of RAM and Debian 12 it is more than enough for programming with Node.js. With the low power consumption I can power it entirely from solar (an island system). And best of all... silence... I use two case fans, but they only run at full CPU load. This is the first low-power mainboard where I can configure the case fans to 0% at low load. I love working on a completely silent PC :)
Those screw-terminal barrel jack connectors are usually only rated for around 600mA, so I'd recommend using a soldered connector instead.
Shouldn't that be okay since the board claims to only use something like 6W?
@@desklamp4792 If the 12VHPWR connector disaster taught us anything, it's that you shouldn't drive power connectors up to their specified limits in regular operation.
I've learned a great deal from your videos, and I've been a UNIX/Linux systems administrator for 25 years (back to the days when I had to download a new kernel image over a 14.4Kbps modem and compile it - hours! - just to get a 3Com Ethernet adapter working). Keep up the good work.
6:52 The electrically and mechanically safer option is to crimp the two pairs of cables together, with a two-wire crimp (twin ferrule) each, before screwing them into the barrel jack (or rather its terminal block). But that's just a nit; I think your block is rated for multi-strand wire, because there's a tiny metal plate between the screw and the cable, so it won't damage the strands. Still better crimped.
Major hiccup for me (network engineer and electrician) too haha, glad I'm not the only one
Not a problem while they're nicely aligned in parallel, so that the clamp inside presses both down equally.
@@MartinZeitler nah, it's fine wires, not one big one. You always have to put a crimp on them, otherwise it's a fire hazard, especially at higher currents!
How do I know what is positive and what is negative? Can I tell somehow from the plug? I'm afraid of breaking something :(
@@einfachmanu90 just image-search for something like "4-pin ATX cable pinout"; it will tell you which wire is connected to which pin on the connector. If the wires are colored, black is ground, yellow is 12V and red is 5V, usually. You will want to double-check and measure with a multimeter if you can. A standard barrel-jack connector is center-positive, but the brick should tell you that, too.
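For reference, and assuming the cable in question really is a standard ATX12V 4-pin CPU cable (worth verifying against your own cable and the board manual before plugging anything in):

    pins 1-2: ground (black)
    pins 3-4: +12V (yellow)

There is no 5V wire on that particular connector, which is why only two pairs of conductors end up at the barrel-jack adapter's terminals.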
I got the N100M exactly because I would've needed to hack a standard PSU to power the DC version, and it has significantly less room to add cards. I think it's a pretty nice platform to work with, really power efficient and flexible. About the SSDs: I would advise against splicing more connectors onto the onboard JST connector, because even if SSDs use less power, they pull from 5V instead of 12V, so you're using a rail with less current available than the 12V one. It's also weird seeing it not idling down to C10; my N100M with TrueNAS Scale goes to C10 and the NIC is not giving me that issue. Last but not least, if cooling is sufficient, you can raise the short and long TDP to 30W and get better transcoding performance, since the long TDP on the board is just 10W.
I have a few N100 boards and all show over 96% in C10 state - the worst being the Proxmox server.
Edit: I think I might know why his CPU didn't go to the C10 state. I just read the 12th Gen processor datasheet and it says that the dependency for that is "Display in PSR or powered off" - none of my servers have a display attached, and powertop even says GPU "Powered On 0.0%". It might just be that in his case he had the iGPU busy, which doesn't allow it to go from C8 to C10.
Well no consumer SSDs use the 12V rail on SATA molex connectors... So your reasoning there for not using the connector is not really valid.
He said "the only case in which I would recommend this solution is if you're building an SSD only NAS, since SSDs need less power than hard drives". Power is a product of voltage and current. If the connector can only output 2A per pin and there's only one 5V pin the connector supports a maximum of 10W. Which is not a lot of power if you wanna use more than two consumer SSDs. Usually SSDs are rated to use 1A, so even when using SSDs you're limited to two drives unless you want to risk it.
And, if you just didn't know, that connector carries 12 and 5V but only 5V is used for SSDs.
got myself the same one and I'm pleased with it, too
How low in power consumption can the N100M get in the C10 state?
They make a micro-ATX version of this board called the N100M, which has a full-size PCIe slot and uses a standard ATX power supply.
iperf3 before version 3.16 is single-threaded and can't saturate 10G by default. In order to saturate 10G you have to run multiple iperf3 server processes:
    iperf3 -s -p 5101 &
    iperf3 -s -p 5102 &
    iperf3 -s -p 5103 &
and run multiple clients:
    iperf3 -c hostname -T s1 -p 5101 &
    iperf3 -c hostname -T s2 -p 5102 &
    iperf3 -c hostname -T s3 -p 5103 &
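For what it's worth, if both ends have iperf3 3.16 or newer, the same thing should work in a single invocation, since parallel streams got their own worker threads in that release. A minimal sketch, with the hostname as a placeholder:

    iperf3 --version          # check for 3.16 or newer on both ends
    iperf3 -c nas.lan -P 4    # four parallel streams, each serviced by its own thread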
Yes, 100%. A jumbo MTU might help here too, but the issue is more likely that his PCIe 10G SFP+ card can only do PCIe 2.0, which limits it to roughly 2x 4 Gbit/s of usable bandwidth on two lanes.
@@erikmagkekse Video shows it hitting over 8Gb/s at 11:17. Also he claimed it has a new chipset.
@@5467nick But even the latest version (TEG-10GECSFP v3.0R) is only PCIe 2.0, which would be a bottleneck if you only have two lanes. However, the RJ45 version (TEG-10GECTX v3.0R) has a PCIe 3.0 link, so two lanes would be sufficient to saturate the 10GbE network. This info is directly from the Trendnet homepage.
You might also set the CPU affinity so that each of those processes is pinned to its own core. This way the work gets distributed across different cores (if there are as many cores as processes).
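One way to do that with the multi-process approach above; a sketch, assuming a 4-core CPU (the core numbers and ports are just examples):

    taskset -c 0 iperf3 -s -p 5101 &
    taskset -c 1 iperf3 -s -p 5102 &
    taskset -c 2 iperf3 -s -p 5103 &

iperf3 also has its own -A flag for the same purpose, e.g. iperf3 -c nas.lan -p 5101 -A 0 pins that client to core 0.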
Awesome video dude. Thanks to your tip my N100 NixOS home server is finally running at C8 and not C3 like before.
Did he have to disable the onboard LAN to achieve that?
re: low networking throughput from the NIC
the AQN-100 supports up to 8 parallel queues ( AQ_CFG_VECS_DEF ) to help balance interrupt handling / work over multiple CPU cores [even while the workload is only a single TCP connection]; it might be a little hobbled by only being on 4 kinda slow cores since it can't fully take advantage of the existing hardware parallelism
a similar TDP processor with 8 real cores to service hardware interrupts in parallel might achieve better throughput, even if the individual cores were somewhat slower
also double check what congestion control algo is being used, on a direct connection over 20gbit fiber, it can change my results with iperf3 from 3gbit to 19gbit, I mostly use bbr
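On Linux the congestion control algorithm can be checked and switched at runtime. A quick sketch (needs root, and tcp_bbr has to be available as a module or built into the kernel):

    sysctl net.ipv4.tcp_congestion_control              # show the current algorithm
    sysctl net.ipv4.tcp_available_congestion_control    # list what the kernel offers
    sudo modprobe tcp_bbr                               # load bbr if it isn't listed yet
    sudo sysctl -w net.ipv4.tcp_congestion_control=bbr  # switch to bbr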
Do you know how many parallel queues a X520-DA2 has?
Hey, the barrel jack adapter does not have spring-loaded contacts, so you want to crimp your wires before screwing them in. If you don't, they can come loose during operation, and that is a fire hazard.
How do I know what is positive and what is negative? Can I tell somehow from the plug? I'm afraid of breaking something :(
@@einfachmanu90 It should be documented in the manual. Usually the outside is negative (GND), but I have seen the other configuration as well...
Just read the 12th gen processor datasheet, and one of the requirements to go from C8 to C10 is: "Display in PSR or powered off". You could potentially get to C10 if you make sure the iGPU is not busy with work. All my N100 servers (which don't have displays attached, btw) are in the C10 state. Just a wild guess, but it might work.
With VMs, it's almost impossible to avoid using the iGPU, I guess? That's why I prefer using a headless OS with all containers in Docker.
What is your idle power draw in C10? Is it worth the effort to enable it?
@@Reza1984_ Idle (Proxmox, no USB or HDMI connected) = ~6.4W. Idle + USB (keyboard) + HDMI = 7-10W (mostly ~7.5W).
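If you want to check where your own box lands, powertop is the usual tool; a minimal sketch, assuming it's installed (the package C-state residency is on the "Idle stats" tab):

    sudo powertop               # Tab over to "Idle stats" and look at the Pkg C8/C10 percentages
    sudo powertop --auto-tune   # optionally apply its runtime power-saving tunables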
Very good work. Thank you for sharing. Loved it from start to end. Only potential omission (from my point of view) is a short mention of relevant BIOS settings (in addition to the Realtek NIC) to minimize power consumption.
I migrated my old NAS to an N2 with my old N6005 board. All I can say is that I love it (now). It's purely a NAS, running nothing other than TrueNAS Core. I've not even considered an upgrade to the N100/N300, as I do not believe it would offer any additional benefits. Great video, otherwise. Learn something new every day, and I have met that small challenge. Thanks for the effort you obviously put into this video.
Great channel, Wolfgang, really great content! I really do appreciate the drive to find power-efficient build parts!
A very interesting video for all those who want to buy efficient hardware.
Now, for those of us who already have hardware, and at the moment cannot change it, the software part would be very interesting.
Do you have a video where you explain that setup with Unraid and different containers?
The tricks and ways to configure the Dockers of delugevpn, prowlarr, booksonic, cloudflared, invoiceninja, nextcloud, paperless, photoprism, radarr, recyclarr, sonarr and vaultwarden would be very appreciated.
Thanks for everything Wolfgang.
If you must poke wires into a power connector, at least use a resistor to limit the current in case you make a mistake. Deburring and grommets are good practice if you route wires through sheet metal.
Have been tracking your channel for a while. Great stuff. Looking forward to your build video.
Immediately bought three of these for my new low-power, quiet Ceph cluster.
I've seen that situation with iperf3 before. I was just recently looking at some MoCA videos and decided to go with that to improve my mesh system, and in one video it was pointed out that iperf3 wasn't using the full gigabit connection, but with the parameter for parallel streams, it did reach gigabit speed. I had the same experience once I set up the MoCA boxes for myself. You're definitely not alone there.
This was an interesting build as I've been thinking about looking at what N100 boards would be like, especially for power consumption, for something like this.
I've been running my NAS on an ASRock Q1900DC-ITX since 2014. I started with FreeNAS, and it's now running TrueNAS Core. I've got 2x 2 TB Seagate NAS drives, and I added 2x 4 TB WD Red Plus drives. The PSU is a 60W ThinkPad brick from ~2010. I don't run any services besides the NAS on the system, because I don't want to add any complications to its management. I ran FreeNAS 9.2 from 2014 until TrueNAS Core was a year old.
Maybe I'll migrate to TrueNAS Scale if Core gets discontinued. But more likely I'd only do that by building a new NAS on something like the N100DC-ITX.
I love having a 10-year-old NAS with industry-standard parts that I can maintain without any particular vendor staying in business.
Thanks for your video and all of the comments. They will help with my current NAS build, and I learned that the wires need to be crimped.
6W quad core, a swappable single-channel DIMM mounted server-style for direct airflow, NVMe, dual SATA, an OPEN PCIe x2 slot... Yeah, this is pretty much what I need. My home server is a 2009-era eMachines with a single-core 1.6GHz 2650e, 2GB of RAM, dual SATA 3Gbps, PCIe Gen2 x1 (empty) and Gen2 x16 (10GbE SFP). This looks like a more than suitable replacement in speed alone.
1:15 Of course, there is the J5040-ITX: it can handle 4 drives and also has a PCIe slot and an M.2 Wi-Fi port. It supports two 4K streams, is passively cooled, and has dual-channel RAM with a max of 8GB per channel. Paired with a 12V adapter and a Pico PSU you can go as low as 5W at idle.
The PCIe slot on it is 2.0 x1 - which is the main reason why I prefer the N100DC-ITX board, as mentioned multiple times in the video. Besides, it costs almost the same as N100 boards new, and comes with worse performance and an older iGPU.
I think for me the most valuable data was right at the end. I never know where exactly to place CPUs like the N100 in relation to their bigger brothers in the Core i-something series, so the performance and performance-per-watt graphs really help.
Keep in mind that's a 6th gen i3. So we don't know how it compares to a current gen cpu
@@arouraios4942 Yes, but it still is a reference to something I'm more familiar with. Take Intel's Atom CPUs: they never came off that favorably compared to their bigger brothers, not even with a generation offset.
Tbf I wouldn't be too comfortable running a "hacked" PSU instead of the provided one, especially if you have to buy one just for that purpose; it kinda defeats the premise of the motherboard in the first place.
I think something like the N100I-D D4 would make more sense, even if you have to ditch the 10Gb card.
Small point, this board doesn't come with a power supply.
My DIY home NAS uses the distant ancestor of this board, the ASRock Q1900DC-ITX. It's been in service since 2014, running FreeNAS 9.2 until I migrated to TrueNAS Core about a year after it became stable. The Q1900 has 4 SATA ports, and I have a pair of 2 TB Seagates and a pair of 4 TB WDs, formatted with ZFS. It's in a Thermaltake Core V1 box and powered by a 75W laptop brick. If I need more storage, I can swap out the smaller pair of drives. My use case is backup storage, so I don't run any services besides Samba. I think I'll probably build a new NAS, as it's dicey to trust a 10-year-old motherboard, and keep the first NAS as a backup.
I wouldn't be surprised if the NIC just overheats, since this is the exact same issue I had until I strapped a little Noctua fan onto the heat sink of my 10G NIC. Might be worth a shot
I still have a passively cooled Asus 10G card in my PC which doesn't overheat at all... It's due to iperf not taking advantage of multiple cores, I think. Had the same issue on a low-power Intel PC, but between my Zen 3 and Zen 4 40W+ Ryzens it's fine.
There was misleading information in the N100 vs i3-6100 performance comparison, because you calculated with a TDP of 6W vs a TDP of 51W. I'm pretty sure the N100 goes above 20W when running Cinebench R23. I don't think it will be 8.5 times more efficient at the same performance. Can you please measure? Thanks. Love your videos!
OK, I was expecting to see more NAS-related stuff: ZFS, TrueNAS, Unraid? Configurations and performance on those SATA ports. Looking forward to that.
Open source version of Synology OS
I thought for a long time about going for an ASRock or Asus N100 board, but in the end I bought a tiny PC: an HP ProDesk Mini, i5-8500T, 8GB DDR4, 256GB NVMe SSD for 95€. I just added 2x 16GB DDR4 and a 1TB SATA HDD + a 1TB USB HDD. With all this added, idle power consumption is 5W on Debian 12, C10 pkg (no monitor, no mouse, no keyboard).
Of course I don't have parity, but that's not crucial in my case.
I just built this. The problem is not only that the DC pins can handle only two amps, but also that the on-board DC-DC converter is only 90W (basically an on-board picoPSU). I don't have that much data, so I am fine for now, but I was thinking down the road I could get one of those external HDD power supplies (they give you a Molex with 12 and 5 volts) OR, since I have some soldering skills, make a little DC-DC PCB from 19 to 12 and 5 volts and power everything I need from one beefy 19V laptop adapter.
But then again, by the time I fill up my 14 TB, some new board like an "N500" will come out with a more powerful DC-DC converter on board, more PCIe lanes and more SATA ports.
Sorry for spam, was trying to comment multiple times, but yt doesn't like me mentioning some chinese ali-shop
I recommend adding a fan onto the cooler. Under full load this CPU easily hits 100°C very quickly. Idle temps weren't great either. I think it was around 70-80°C.
I am using the mATX board variant for a server.
I built a system based on this video and it's awesome!
N100DC-ITX, 32GB RAM, be quiet! SFX Power 3 450W, 2x 18TB SATA drives (more on the way), one USB NVMe drive and a boot SSD. No PCIe card.
I had to manually set the DDR4-3200 CL16 stick to DDR4-3000 CL16, since I don't think the board has XMP and the DRAM voltage can only be set to 1.26V. My stick is running fine (no memtest yet).
With Proxmox installed and several containers (VPN, Jellyfin, ...) idling, powertop shows 73% at C10 and 7% at C8, which sounds great (I don't know much about Linux power management). A wall-plug power meter shows 28 watts at idle, up to 80 watts under full load. I think the be quiet! PSU was a bad choice; it alone draws 10 watts when the board is powered down, but I found out about Wolfgang's PSU chart too late. All in all I'm very happy though.
Great video on a great board, thanks for the content Wolfgang!
In some ways I kinda wish the manufacturers of the N-series motherboards were crazy enough to cram the basic IO (SATA/LAN/USB3) onto a single PCIe lane and expose the remaining 8 as PCIe x4 slots.
I would love to see you build the ultimate Unraid/Plex home server with 5-6 SATA HDDs using the new Minisforum MS-01: taking out the motherboard, putting it in a small case, and also adding a cheap graphics card that can easily handle at least 4x 4K streams with transcoding!
What you could do instead for the power, IMO, is put a female barrel jack in those two Wi-Fi antenna openings and use a small jumper cable to connect that to the board's power input. The whole "pull 4 cables from inside the case to plug into something outside" approach looks a bit botched IMO.
To be honest, the N100 can be configured with a higher TDP. A certain N100 box configured with a 25W TDP and 16GB of DDR5-4800 RAM can achieve a Geekbench 6 score of around 3300, which is quite close to a Skylake 4-core i5 :D
But it performed a lot worse when configured with a 6-watt TDP, though. :)
I was using it with Windows 10 back then with my N100 box, changing TDP between 6w and 25w in BIOS really made a huge difference :)
Thanks for your great videos and they help me save my power bills a lot :D
Thank you for your channel! It is a really nice resource when shopping for hardware.
I was contemplating an N100DC, but chose an N100M with a separate Pico PSU because of availability.
I would like more cheap alternatives with ECC, but that is just my preference.
9:12
May I know what 12V power supply you used for the PicoPSU?
Nice update to your last Serv-Nas-Homelab build !
~Evilcorp crippleware hack, brings tech to the masses !
CWWK now has an N100/N305 board with only two 2.5GbE Ethernet ports, but it has a PCIe slot with 4 lanes to take a higher-bandwidth network card. It also has 6 SATA ports. This meets the requirements in a more straightforward way than was possible before.
The issue is its 20W power draw at idle.
A lot of extra stuff to get just this working. All that adds up. What about a better less "rigged together" option?
I have this board! It sits under my TV.
Also it's nice for someone like yourself to review this board and show off the capabilities.
If I get some drives, I'll probably upgrade mine to a very similar setup to yours, though likely 2.5GbE.
Do you by any chance use the onboard audio with the 3.5mm ports? I am looking for a cheap board that I can plug my (cheap old) 5.1 system into directly. If you tried it, how was the quality? Not looking for high end, but it also shouldn't be terrible.
@@MrMoralHazard sorry! I'm using only the HDMI for audio. I would test, but I literally lack the "ear" to tell you if the 3.5mm is worth a damn or not.
Thanks for this guide! I am currently working on my own version of this. After building it with just the basics, without the HDDs yet (only a 500GB SSD with Proxmox), I measured 22 watts at idle, way off from your measurements... I watched your video again to find some hints and found the problem: I bought a Flex PSU on AliExpress, and it turns out that with only the power supply turned on, with nothing else connected, it already uses 10 watts!! Omg. (At 32 cents/kWh, running 24/7, that costs 28 euro/year.) It is going back, and I have ordered a 300W Pico PSU + a 12V 10A DC power adapter (so 240W effectively). Let's see how that goes.
Dude, having Notion as sponsor is quite cool.
Not the usual boring squarespace bs or whatever.
I really appreciate that you don't sell your soul 😄
Perfect timing. I am considering an N100 board for a NAS too.
Curious if you would recommend this board over an N100M with more expansion, or the CWWK / Topton N100 boards. My goal is low power, not necessarily high network speed.
I'm also interested in how the Asrock N100DC-ITX compares to the CWWK and Topton N100 boards.
I second this
The problem is that the CWWK / Topton N100 boards have the JMB585 chip, which prevents the system from reaching deep C-states, which results in higher consumption.
@@Eujanous That needs to be compared to see actual results. If the JMB585 is more efficient, then it may not make much difference.
Topton now has an 8505 board that works great as a NAS (and/or router) because it has more PCIe lanes, and in addition to Quick Sync it has QuickAssist. For around 200 USD it can handle an M.2 4.0 x4 slot, 6 SATA ports, 1x PCIe x4, 4x Intel i226-V 2.5Gb Ethernet and dual-channel DDR5... Oh yeah, it can also take a 4x 3.0 x1 NVMe adapter board.
Shout out to Wolfgang's parents who apparently have a 10 gig network at home.
But a German Internet connection 🤡🤡🤡
50M DSL max kekw
Would love to see Asrock come out with an N305 board as well.
Very nice NAS project! Regarding the bandwidth issue: I once had a switch (some TP-Link, if I remember correctly) that listed in the specs that it would auto-adjust the frame size, but this never actually worked. After updating the switch, I could set this manually and actually enable jumbo frames.
Managed switch from tp link? Nice 😊
@@mmuller2402 I think the model name was something like "JetStream". It actually was not mine; I just set it up for the customer...
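Both ends (and every switch in between) have to agree on jumbo frames for them to help. On a Linux NAS the check and the change look roughly like this; a sketch, with enp1s0 standing in for whatever your interface is actually called:

    ip link show enp1s0                      # current MTU, usually 1500
    sudo ip link set dev enp1s0 mtu 9000     # enable jumbo frames until the next reboot
    ping -M do -s 8972 othermachine          # verify a 9000-byte frame gets through unfragmented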
Another great video. Thanks mate. I saw a glimpse of Traefik in this video. Would love to see a setup of it like you did with Nginx Proxy Manager.
Great video 👍 9:26 -- ASPM (active state power management) setting, thank you. Kindest regards, friends and neighbours. P.S. Please do *_not_* do a UGreen NAS video.
As long as you were doing a bit of custom cabling anyway, you could also use your 19V power supply and branch it: a straight-through 19V to your motherboard, plus 2 cheap-ish buck regulators (12V at 5A and 5V at 6A, both available on Amazon or eBay for under 20 USD) with SATA power cables added. A little more involved, but maybe less intimidating (and perhaps cheaper) than working with a 120/240V power supply.
I don't think the buck converters + 19V power brick would do much for efficiency. Given that the bulk of the "active" or spun-up power draw is 12V & 5V for the HDDs, you'd be losing a lot via those buck converters.
@@evanvandenberg2260 it wasn't for efficiency, the idea was not to use a 240V supply.
Managed to get my HP ProDesk 400 to idle at 5-ish watts (jumps between 4 and occasional peaks of 6) with 2 HDDs spun down, whilst running Samba shares with mergerfs on the Proxmox host, a HAOS VM & a Jellyfin LXC; i3-8100, 16GB RAM. With 1 disk spinning it's 12W, with 2 it's about 18W, and under load it's about 60W.
Thank you for the video. Could you please talk more about remote control? Or if there was a video about it before, could you please share the link?
Soldering will help to improve the DC connector, as you solder directly into the board.
Thanks for the interesting video! I was wondering at the beginning why you didn't use the "ASUS Prime N100I-D D4", but if you need the higher PCIe speed for a 10Gb network card, it makes sense again. Presumably the external power supply of the Asrock board also needs less power than the ATX/SFX power supply of the Asus board, right? I have seen a very similar build proposal at "Elefacts" and am still thinking about it.
Can you please do a video on how to make existing hardware more efficient? Your videos are nice, but keeping your old stuff is cheaper, and some people cannot afford to buy a new motherboard every time you do a video. Very informative video.
To improve power efficiency you need to reduce the distance between transistors on a chip, so there is less electrical resistance between them. Typically this is done by shrinking the manufacturing node from, say, 14 nm to 7 nm. So you'd need to buy a new chip to take advantage of that.
But what big tech doesn't want you to know is that you can shrink the distance between transistors yourself by just pushing really hard on the sides of the chip. My trick is to take my CPU and put it in a vice and then squeeze as hard as I can. Using this method I've turned a 28nm Xeon v2 into a 7nm Epyc 7002 (although the pressure did make some transistors pop out of the side).
The Avoton/Denverton based boards are really, really good. I think they check all the boxes for what you're looking for in a small low power motherboard. Those chips were designed from the ground up by Intel to perform exactly the function you're trying to achieve. Plus they support ECC memory, which is awesome.
I used to have an AsRock C2750D4I. Mini ITX. 8 core Silvermont/Bay Trail generation low power Atom processor, with four DDR3 slots supporting ECC. 12 SATA ports. PCIe-8x slot. IPMI onboard with a dedicated ethernet port, plus two additional onboard gigabit ports. And they're old enough now that you can probably get them for cheap.
The newer Denverton based boards are probably significantly better, but I haven't had direct experience with those. The only reason I got rid of the C2750D4I was because I moved to 10 gigabit fibre and the poor little Atom cores couldn't push more than about 190 megabytes/second over rsync when I was doing backups between servers.
You're off the mark. Those Atoms can't transcode 4K/HDR in any serious way.
Can't wait for the build and setup video. The power consumption on this beast is brilliant! Please also mention the cost of the build.
I'm running one for my OPNsense firewall with a dual 2.5G NIC. For some reason it kept crashing. The culprit was the DRAM voltage being too low on a 3200 MHz 16 GB stick. The XMP support in the UEFI is kinda weird. Anyway, for 24/7 operation in a case I slapped on a little fan for airflow. Now it runs with no problems.
I went in a different direction: I bought the Aoostar R1 and added two 20 TB drives for storage. Bam, a low-power media server running at 10 watts from the wall with the hard drives spun down. Plus, you can set the BIOS to run the N100 at a 15 W TDP for more performance, at a max temp of 70 °C.
The power draw is amazing though. I tried a FriendlyElec NAS board with an RK3588 and it topped out at 13 W if I remember correctly. They say x86 consumes more, but it really comes down to the CPU manufacturing process (7 nm in this case), the devices hanging off the motherboard, etc... Can't wait to have the equivalent of the N100 on 1 nm in 2027!
N100m video?
5:23 Yes, 2 A when spinning up. But the rating of that JST plug is for continuous current, not peak, so it would probably be fine. Still a janky solution though.
Re: the network speed.
We ran into a similar issue at work with one of our Windows servers. The fix was to enable SMB multichannel.
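For reference, on a Linux/Samba NAS the equivalent knob is the "server multi channel support" option; a minimal sketch, assuming a reasonably recent Samba where the feature exists but may still be considered experimental:

# Add this line under [global] in /etc/samba/smb.conf:
#   server multi channel support = yes
# Then reload Samba and re-test the transfer:
sudo smbcontrol all reload-config

On newer Samba releases this is on by default on Linux, if I'm not mistaken, so it mostly matters for older installs.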
Yep, there's no replacement for this board if we're talking about support for a 10 Gbps NIC, but to be honest the Asus Prime N100I-D D4 is also not that bad. At least it has a second M.2 E-key port with PCIe (the Asrock's M.2 E-key port doesn't have PCIe), so it can be used for an AI accelerator or a "slightly" slower 2.5-5 Gbps NIC, and it's a bit cheaper.
Who needs 10Gbps on a home server anyway, that's kind of a meme tbh. Even the usefulness of 2.5Gbps could be questioned for a computer that can only transfer to and from SATA ports.
@@billmurray7676 SATA can handle up to 6 Gbps, and a single HDD is capable of saturating 2.5 Gbps... IMO a 1 Gbps NIC is just slow; I can read data faster than that even from a regular SD card.
It doesn't matter what SATA can support, it matters what you put on it: you put HDDs on it, so that's about 200 MB/s, which is 1.6 Gbps. So no, you won't saturate 2.5 Gbps with your HDD.
That means 10 Gbps is clearly useless on a home server. And 2.5 Gbps, well, like I said, it's a questionable investment or reason to buy hardware in practice.
@@billmurray7676 The fastest available hard drives are approaching the SATA limit, RAID in a NAS is a common thing, and it's possible to reach that limit even with standard hard drives; we haven't even started talking about modern SSDs used as a buffer or as main storage. It's 2024, a 1 Gbps NIC should be considered obsolete for anything above a low-end NAS.
You said HDD, so obviously SSDs are out.
Also, it would be pretty stupid to build a RAID for performance in a NAS since, in essence, you're supposed to build it for data safety. You can't have both in a RAID.
1Gbps, although not ideal, is clearly not obsolete, 2.5/5 are options, but 10Gbps is definitely useless, which means that PCIe x1 is fine, and you shouldn't sacrifice other benefits for PCIe x2 or x4.
Can you also take a look at the AMD 8000G series, if that's possible?
I know 300 €/$+ without a motherboard is a bit much to ask of a small channel...
Maybe OK with the ITX form factor. But if you want/need something smaller and more energy efficient, go with the Odroid M1. It has 2 lanes of PCIe 3.0 and can handle the same JMB585-style NVMe adapters without issues.
In the UK, the cheapest I could find one of those 10G NICs was for just over £80 from Misco.
Despite what the manual says, I have an N100 running with 32 GB and it runs like a charm.
Good video man !
Give the Topton NAS Motherboard N6005/N5105 Mini ITX a look. You will find it to be a super DIY home NAS MB!
It's such a pain to find suitable ITX boards (especially in my country) that I gave up and always go with FlexATX boards with an SFX PSU instead. Is it bigger? Yes. But you can always hide it in a piece of furniture...
Do you have, or could you do, a video about power monitoring in Home Assistant? I like the way you are able to see the power usage of your devices, but it's something I have no experience with, and it would be great to have a how-to video.
I don't know how Wolfgang does it exactly, but I do something very similar with the ESPHome integration in Home Assistant. I buy 'Shelly Plug S' power plugs. I don't like their native app, but the hardware is just a simple ESP8266 with some peripherals. So I open up those plugs and flash ESPHome onto them with a USB dongle (look up guides for it, it's not too complex). Once I have ESPHome running on the plug, I can easily import it into Home Assistant via the usual ESPHome integration. I often also calibrate them with a known load for better accuracy, but this step is optional. You can then toggle the plug via Home Assistant and read the voltage, power, daily power consumption and temperature.
Plug it in between the device you want to monitor and the power socket, and you can see the power consumption of that device in HA.
@@harmenkoster7451 Thanks so much for the advice. I've been looking at the LocalBytes smart plug, which comes flashed with Tasmota or ESPHome; I'm not really sure of the difference, but ESPHome looks a lot more straightforward to set up in Home Assistant. I simply want to monitor the power consumption of my server and ideally have a nice graph I can view over a period of time. Ambient temperature would also be nice to monitor; I'm not sure if that's something included in the plug or something I would need to buy in addition. Thanks again.
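To sketch what that ESPHome route looks like in practice (the filename here is hypothetical; the Shelly Plug S measures power through its HLW8012 chip, which ESPHome supports as a sensor platform):

# Install the ESPHome CLI, then create and flash a config for the plug
pip install esphome
esphome wizard shelly-plug-s.yaml   # interactive starting point: Wi-Fi, board, API
# add an hlw8012 power sensor (and optionally a total_daily_energy sensor) to the YAML
esphome run shelly-plug-s.yaml      # compile + flash: USB serial the first time, OTA afterwards
esphome logs shelly-plug-s.yaml     # watch live voltage/current/power readings

Once the device is picked up by the ESPHome integration in Home Assistant, the power sensor can be graphed with a history card or added to the Energy dashboard. The plug's built-in temperature sensor mostly tracks its own heat, so for ambient temperature a separate sensor is probably the better option.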
I'm convinced by this video, and I almost forgot that I don't want to build a NAS.
7:14 You don't need a thick wire; you're connecting the signal pin to ground, so a random paper clip will do the job just fine.
The thick wire is there to fit snugly into the female ATX pins
@@WolfgangsChannel ok, fair point.
Looks awesome. ExplainingComputers did a really nice mini PC build with an N100 board; I think it was different from this one. Also, Supermicro has an awesome, relatively low-power (for the number of cores etc.) Epyc ITX board. That one is great if you want more lanes, and it should last forever. Obviously, it is more expensive...
I didn't check, but bifurcation on the PCIe slot is a really good option for additional drives. Not all boards have this though.
I built a desktop with LMDE running on the ASRock N100M with an NVMe SSD and 16 GB of Kingston memory, almost like Explaining Computers did, except I use the HDMI output to an LG TV. Everything would be fine, but to my huge disappointment the system tends to freeze or reboot spontaneously. I have tried Debian as well as almost all available Debian-based distros with different desktop environments, plus Fedora desktop, with no effect. I finally ended up with LMDE, which seems to be more stable on my hardware. As for working as a remotely controlled NAS or home server, the N100M board worked with no issues at all. It seems the problem is somewhere in the video output/drivers.
You'd be surprised how much power SSDs can use; I've been looking into it. Some SATA SSDs can draw 6 W at full load, meaning that 2.5 A on the 5 V pin of the JST connector is only really good for two. I have a couple of U.2 SSDs in a rig here and those are rated for 20-25 W under load.
Hi Wolfgang Wolfgang! - lol, that made me chuckle….
What about the Asus Pro WS W680M-ACE SE???
I know it's not cheap, but it's probably cheaper than trying to find a C246 board these days, and it's much more modern: it has ECC support and 10 Gbit networking, and you could pair it with a nice new low-power Intel chip with QuickSync.
From a tinkerer's point of view this might be a nice solution, but I would still consider this M.2-to-SATA card a workaround. However, the Intel N100 seems to be the perfect fit for an energy-efficient NAS. The first turnkey solutions are already around the corner; I will wait for one of those, as I prefer tinkering with software over hardware. 😁
Here's the command (9:26) to test your PCIe devices for ASPM:
sudo lspci -vv | awk '/ASPM/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]+|ASPM )'
Great stuff. I asked in the comments about the Mellanox ConnectX-3, which is terrible. My server needed more horsepower, so I went with an i3-14100. Bought an Intel X710 (dual port) for 100 EUR with 2 transceivers. It goes down to C8 and there's always 10G speed. The server is just one thing, though: I've got a rack with 2 switches (8x 10G SFP+ and 8x 1G, 4 of those PoE+), a shut-down server with IPMI and 18 cores, 1 router, 1 Wi-Fi 6 AP, 1 camera, the i3 server mentioned above, and a UPS. Previously, with the server based on that 18-core Xeon, total idle consumption was 136 W; it went down to 69 W with the i3, which is still a lot. So there are also unoptimized things like the camera (15 W!), the UPS (not measured) and IPMI (5 W), and the networking gear could probably also do better.
Nice video! It would have been interesting for you to test real-world network file transfer speeds with TrueNAS etc., as that's what really matters to most of us.
The Asus Pro H610T DSM D4 is my choice, because I need more CPU power. It has 12 V for storage built in, but it's also limited to only 2 SATA ports.
This pulled me over the line. I've been looking at this board for a while now but had doubts about the performance. I think I'm going to get an N100M for a new NAS.
Hey, did you get the N100M, and which case did you use for it? I'm looking for a small case, ideally some kind of mini-ITX, that has space for the N100M when used with a PicoPSU.
Thanks for the great video. The board is really sweet and the idle power consumption is amazing. It's a shame that there are not more PCIe lanes to go around: there are simply not enough lanes to get SATA 6 Gbps speeds while also having 10 Gbit networking. That might be fine for spinning disks, but I am still looking for the 'perfect' 6-8 disk, all-flash NAS. It's a real shame that all those efficient Fujitsu boards are incredibly hard to come by.
I think if you're pursuing a cheap build, you can buy an Intel 82599EN-based network adapter, usually branded as the X520-DA1; that x8 card should fit this motherboard's PCIe slot without any problems.
There's another similar board with 6 SATA ports on Amazon. About to pick one up for an offsite backup build.
@12:41 Wait a minute... So are these power consumption numbers from using the Bronze-rated SFX power supply because you hooked up HDDs? Or did you use the Pico power supply? It's not very clear to me...
And can someone tell me if I can truly pass through the 6 SATA ports on the NVMe board to a VM in Proxmox?
In other words: does the NVMe slot share its IOMMU group with other devices? If so, which ones?
Watch the beginning of the “Power consumption” section
@@WolfgangsChannel Alright, so it's using the PicoPSU, with all HDDs getting power from the single (Pico) power line, I presume.
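Regarding the IOMMU question above: you can check the grouping on your own board before committing to passthrough. A minimal sketch for any Linux system with the IOMMU enabled in the firmware and on the kernel command line (e.g. intel_iommu=on):

# List every IOMMU group and the PCI devices it contains; the SATA controller
# on the M.2 card can only be passed through cleanly if its group contains
# nothing else the host still needs.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    printf 'IOMMU group %s: ' "$group"
    lspci -nns "$(basename "$dev")"
done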
0:24 If it's a bare-die CPU, maybe handle the heatsink more carefully than that. It instantly reminded me of chipped Athlon/Duron cores.
Than what? I didn’t do anything with the heatsink in the video
@@WolfgangsChannel it looked like you held down the board by pushing on the heatsink, to connect the power cable.
Just a general FYI: despite Intel's N100 spec sheet, I've been running a 32 GB 3200 MHz DIMM on this board without issue for 7+ months. (TrueNAS Scale)
Unfortunately, both Asrock N100 boards mentioned here are unavailable except from a couple of sketchy sellers on Amazon and one on AliExpress.
Why not just get the N100M instead of this one? Only the price?
9:29 I copied the command, but lspci still shows ASPM disabled and I still can't go below C3.
Different ASRock mobo, but same issue; the Ethernet controller looks the same. I ls'd the driver's folder and it also has l1_aspm.
Is it supposed to still show ASPM disabled even after I run the sudo tee command?
Appreciate the help!
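For what it's worth, that l1_aspm file can be written to directly. A hedged sketch, assuming a kernel new enough (roughly 5.5+) to expose the per-device ASPM attributes; the PCI address below is a placeholder, substitute your NIC's address from lspci:

# Find the NIC's PCI address first
lspci | grep -i ethernet
# Enable L1 ASPM on that device (placeholder address shown)
echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/link/l1_aspm
# Re-check what the kernel reports afterwards
sudo lspci -s 0000:01:00.0 -vv | grep -i aspm

If the write is refused or the attribute is missing entirely, the device or its root port may simply not advertise L1 support, in which case no amount of tee will enable it.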
It doesn't support AV1 encoding on the integrated GPU, but you can still encode AV1 with SVT-AV1 on the CPU using ffmpeg. If I remember correctly, encoding on the CPU gives better quality, despite the longer encoding time.
I don't think it's the smartest idea to do that on an N100, however.
Even with SVT-AV1 getting better and better, it would probably run at single-digit fps, if not less, on a decent preset (5/6).
For a NAS like this, AV1 decoding is probably more important than encoding. If you have AV1 files, you want to be able to transcode them on the fly to e.g. H.264 or HEVC to stream to a TV or media player box.
@@kepstin Yes, and I don't know if you can use the Intel iGPU or even the Intel Arc AV1 encoder in ffmpeg, for example.
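As a rough illustration of the two paths (a sketch, not a recommendation: it assumes an ffmpeg build with libsvtav1, and the QSV line only applies to GPUs that can actually encode AV1, such as Arc; the N100's iGPU only decodes AV1):

# CPU encode with SVT-AV1; higher presets are faster, lower presets slower but better
ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 32 -c:a copy av1_cpu.mkv
# Hardware encode via QuickSync, only on AV1-encode-capable GPUs (e.g. Intel Arc)
ffmpeg -i input.mkv -c:v av1_qsv -global_quality 30 -c:a copy av1_qsv.mkv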
I've been eyeing the N100 for a while now. I want to replace my Athlon 5350, which I can get down to around 15 W at idle with one drive.
I did just the same but with the Gigabyte N5105I H. It comes with a previous-generation CPU, but it easily handles TrueNAS and transfers of over a gigabit.
Love your videos! I bought a used HP SFF business PC for £50 to use as a Minecraft server; I feel like it's unbeatable for home servers.
Unfortunately, Talos Linux does not include the intel_idle module. The idle power consumption is something like 10 watts because of this.