A magic box of Proxmox goodies finally arrived!
- Published Aug 24, 2023
- 🔥These links support my madness🔥
💲 NordVPN: www.bmbsucks.com
💲 Members-Only Punishment: bit.ly/bmbmembers
💯These are my other Socials💯
🔹Twitter: bit.ly/BMBTw
🔹Instagram: bit.ly/bmbinstagram
🔹Tik-Tok: bit.ly/bmbtock
🔹Discord: bit.ly/BMBDISCORD
✉️P.O. Box for Mail✉️
Byte My Bits
P.O. Box 77
Haysville, KS 67060
You should create a ZVOL, then create an iSCSI share and mount it in the Windows 10 VM. This is block-level storage, so it will appear as a locally attached drive in Windows 10. Note that you will need to format it in Windows 10 once it's mounted.
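If you went that route on plain Linux/ZFS, a minimal sketch of the NAS-side steps might look like the following. The pool name, size, and IQN are placeholders, the targetcli syntax is from memory (verify against your distro's docs), and Unraid/TrueNAS have their own iSCSI plugins/GUI that do the same thing:

```python
# Hypothetical sketch: carve out a ZVOL and publish it as an iSCSI block device.
# Pool name, size, and IQN are placeholders; ACL and portal setup are omitted.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create a 500 GiB zvol for Blue Iris recordings.
run("zfs", "create", "-V", "500G", "tank/blueiris")

# 2. Expose the zvol through the Linux LIO target (syntax from memory -- verify).
run("targetcli", "/backstores/block", "create",
    "name=blueiris", "dev=/dev/zvol/tank/blueiris")
run("targetcli", "/iscsi", "create", "iqn.2023-08.local.nas:blueiris")
run("targetcli", "/iscsi/iqn.2023-08.local.nas:blueiris/tpg1/luns",
    "create", "/backstores/block/blueiris")

# 3. On the Windows 10 VM: connect with the built-in iSCSI Initiator,
#    then initialize and format the new disk in Disk Management.
```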
I was going to comment that. iSCSI should work.
You only need one Molex plugged in for each row. The dual Molex connectors are there so you can feed each row from two different power supplies, giving you redundancy if one of them dies.
A few thoughts:
A) Using the VirtIO network adapters, you should get 10 Gbps between the Unraid VM and the Windows VM. Mounted network drives act a bit funky with Windows services, though; typically they're tied to the current user, not the whole system. If your Blue Iris service is running as YOU (or another user you can log in as and mount the network drive), maybe it would work?
2) Windows mounts iSCSI drives nicely. It makes that data essentially invisible to Unraid and other accounts, but it still gets your data onto your NAS.
iii) I'm still jealous of your setup! Even if it's super kludgy.
I'm sure most if not all of us have had this sort of thing happen to us. "I have a great thing to try!" And then you find out you need to do 10 things to get to the great thing to try, but each of those 10 things has a laundry list of steps that could blow up at any moment, taking even more time, and before you even get close to your original goal (aka the great idea) you decide to just go to bed and try again the next day/night. I've had this happen so many times that I have completely forgotten about the "great idea" for weeks or even months, and then when I do remember (Oh yeah! I was trying to do x, y, z. What happened with that?) the hell starts all over again.
Fun times!
Unless I have missed something, you could give Craftcomputing a shout; I think he knows Proxmox quite well.
I would 100% use some PWM fans and a fan hub that takes the PWM signal from a fan header on the motherboard and power directly from the power supply. Then you can simply control the fan speed from the BIOS, or IPMI if supported. 😊
"I have an idea in my head." Buckle up folks we're in for a ride!
Set up an iSCSI target for Blue Iris to connect to. The issue is that you are using SMB for your network connection, which sucks.
I've been doing electrical work for over 16 years. Sometimes I kind of want to drive out there and give him a hand for a week. And I know that sounds weird, but Jason is easily 500% more weird than I am.
So, about the SAS card and expander. Each cable from the SAS HBA carries 4 lanes, each at 3/6/12 Gbps depending on generation. If you pass both cables from the HBA to the expander, the expander can multiplex all the drive traffic across all 8 lanes, maximizing your bandwidth.
This can matter when you get to 24 drives all running at once.
If you keep your current setup, however, you might consider using the lane that goes directly to the HBA for SSDs so that you are maximizing the bandwidth there.
Sadly, he only has an 8i, and the backplane needs 6 cables, hence the need for the 4 lanes from one port to be passed to the expander, giving him 5 outbound ports to the backplane, with the 6th coming from the card. If he plugs all 8 lanes into the expander, he will only have 4 ports going out... The solution would be to get a 16i instead (4 outbound connections from the card: 2 to the expander, which splits them into 4, and 2 from the HBA straight into the backplane).
Good thinking though. I like the cut of your jib
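Rough back-of-envelope on the lane math in this thread; the per-drive throughput figure is an assumption for large spinning disks, not a measurement:

```python
# Back-of-envelope SAS bandwidth check (illustrative numbers only).
LANE_GBPS = 12          # per-lane rate for SAS3; use 6 for SAS2
LANES_PER_CABLE = 4     # each SAS cable carries 4 lanes

def cable_bandwidth(cables):
    return cables * LANES_PER_CABLE * LANE_GBPS

one_cable_uplink = cable_bandwidth(1)   # one cable feeding the expander: 48 Gb/s
both_cables      = cable_bandwidth(2)   # both cables: 96 Gb/s

# 24 spinning drives at roughly 2 Gb/s (~250 MB/s) sequential each:
drives_gbps = 24 * 2
print(f"Expander uplink (1 cable):  {one_cable_uplink} Gb/s")
print(f"Expander uplink (2 cables): {both_cables} Gb/s")
print(f"24 HDDs flat out:          ~{drives_gbps} Gb/s")  # a single uplink gets close to the limit
```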
If you're setting up the 9 drives under ZFS, I'd say put 8 of them in one group with RAID-Z2; then you'd have 1 as a spare. 8-wide RAID-Z2 is also the best tradeoff between speed and resiliency. It makes it easier to add drives in the future too, since you'd need to add them in sets.
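Quick usable-capacity math for that layout; the drive size below is just an example:

```python
# Usable-capacity estimate for an 8-wide RAID-Z2 vdev plus one hot spare.
# Drive size is a placeholder; real usable space is a bit lower after
# ZFS metadata, padding, and the usual TB-vs-TiB difference.
DRIVE_TB = 12          # example drive size
WIDTH    = 8           # drives in the raidz2 vdev
PARITY   = 2           # raidz2 = two drives' worth of parity

usable = (WIDTH - PARITY) * DRIVE_TB
raw    = 9 * DRIVE_TB
print(f"raidz2 {WIDTH}-wide: ~{usable} TB usable of {raw} TB raw, 1 drive as hot spare")
```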
Yeah, I would either use Proxmox native storage and create virtual disks, or use Unraid and use iSCSI disks (basically just passing a virtual drive over the network). iSCSI is more useful in multi-node Proxmox clusters though; it is overkill for this setup IMHO. But if you want to learn, it is worth the effort.
7:33 "This is gonna be really really really sketch, but it's gonna work", should be the name of this channel
troof lol
Can you add an antenna to your sensor device to reach outside of the case?
My guess is that 2.xx volts was too little to start those fans. Try upping the minimum voltage?
Also some Installation Beer! Woo-Hoo! What a ride this was!
You can mount an NFS share through WSL, but reboots might be a bit flakey since Windows doesn't really do it right.
Big jelly of those sexy drives 🤤
I used a Noctua fan controller in my server to quiet down the fans and it's worked perfectly the last couple years. Always best to keep it simple. lol
The backplane plugs are for redundant PSUs, so you probably didn't need to plug them all in. And as others have said, iSCSI might solve your Blue Iris problems, but if it's going to be constantly writing to the pool, raidz might be a bad choice. I suppose if the point is to test, you have time to figure it out, but if performance happens to suck you might want to try mirrored vdevs.
Eh, I prefer shipping hard drives in a cardboard box with a single deflated bubble wrap and rocks.
+1 for an iSCSI share; the Windows VM should see it as a physically connected drive (not a networked one).
Three RAID-Z vdevs of three drives would be crazy fast, but you'd lose three drives of storage. Also, have you tried an iSCSI connection to the Blue Iris VM?
What about an iSCSI block drive in the TrueNAS VM, then access it in Win 10 as local storage?
You need to know the minimum voltage the fans will actually start at; I would bet real money it is not as low as 2.3 V. The question is whether that low reading is due to a too-low PWM input signal, which could be fixed with a fan curve, or a broken PWM-to-DC converter. iSCSI may be an option for storage, but if it's two machines on the same host I would be surprised if access times were the cause of the issues; I see 40 Gb/s between machines on the same host on a 10-year-old Haswell Xeon.
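For reference, a quick duty-cycle sanity check; the 12 V rail and the fan start-voltage figure below are assumptions, not measurements from the video:

```python
# If a PWM-to-DC converter outputs an average voltage proportional to duty cycle,
# then 2.3 V on a 12 V rail implies roughly a 19% duty command.
RAIL_V = 12.0
measured_v = 2.3

duty = measured_v / RAIL_V
print(f"Implied duty cycle: {duty:.0%}")    # about 19%

# Many 12 V fans won't spin up below roughly 4-7 V (varies by model),
# so a ~19% duty floor could easily sit under the start-up threshold.
assumed_start_v = 5.0                        # assumed typical start voltage
print(f"Duty needed to reach {assumed_start_v} V: {assumed_start_v / RAIL_V:.0%}")
```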
iSCSI is prob what you want for Blue Iris; it's a network drive that shows up like local storage.
Set up an iSCSI target, use the Windows iSCSI initiator, and mount it. It works on pretty much every NVR/DVR system out there.
Use iSCSI, where you are able to map it as a physical drive even though it is on the network.
Great video, love it
Like others have said, set up an iSCSI drive.
Thanks
PWM into a DC-to-DC converter?? I don't know if that is recommended... I would think that would fry it. A DC-to-DC converter is meant to be either on or off, and I think rapidly flipping it on and off 500 times a second would fry it, like using a relay for that job: it would switch so fast the internal coil would fuse up and weld itself.
I would use an analog-to-digital converter triac, with each fan fed its own 5 V reference voltage and the incoming PWM signal going through the gate... Or you could use an LM3914, which can regulate the signal going to the fans... You just hook up power and GND, then run the PWM through the LM3914, and that handles the on/off switching that makes the PWM signal.
ORRRR....
You can build a USB plug-in module and use an STM32 controller and get all fancy with it, writing a driver in Windows so you can open an app and control each fan independently. Or use an ESP32 and set up automation in ESPHome to control the fans with an onboard temp sensor... or multiple temp sensors, so you spin up the right fan as the right temp rises... and so on...
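If you did go the ESP32 route, a minimal MicroPython sketch of the idea might look like this; the pin numbers, thermistor conversion, and fan curve are all placeholder assumptions, not anything from the video:

```python
# MicroPython sketch for an ESP32 driving a 4-pin PWM fan from a temp reading.
# Pin choices and the temperature conversion below are illustrative assumptions.
import time
from machine import Pin, PWM, ADC

fan = PWM(Pin(25), freq=25000)       # 25 kHz is the usual 4-pin fan PWM frequency
sensor = ADC(Pin(34))                # e.g. a thermistor divider on GPIO34
sensor.atten(ADC.ATTN_11DB)          # full 0-3.3 V input range

def read_temp_c():
    # Placeholder conversion -- replace with your thermistor/DS18B20 math.
    return sensor.read_u16() / 65535 * 100

def temp_to_duty(temp_c):
    # Simple linear curve: 30% duty at 30 C, 100% at 60 C and above.
    frac = min(max((temp_c - 30) / 30, 0.0), 1.0)
    return int((0.3 + 0.7 * frac) * 65535)

while True:
    fan.duty_u16(temp_to_duty(read_temp_c()))
    time.sleep(2)
```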
Does that motherboard not have a fan controller? Can't you just terminate those fans with a standard fan plug and let the motherboard take care of speed regulation based on an ambient temp sensor (or even CPU temp)?
This whole mess can be explained in one word: Amperes.
Jason... try an iSCSI target.
can you do the Intel N100 stuff for Plex? Asus has a board, the Asus PRIME N100I-D D4 (that's an i after the 100)
Are you sure your "bad" Synology drive isn't just a result of the Western Digital 3 year warranty scam?
I wonder when you will decide to test out Frigate and see how awesome it is compared to Blue Iris. All these problems you keep mentioning do not exist with Frigate.
On the next episode of Jason needs an NPM….
Publish an iSCSI volume to Blue Iris. Done.
What else are you doing with Proxmox?
You didn’t really pass the drives to the VM…. You need to pass the controller
Nice
you tube privilege
I would trash Unraid and set up normal ZFS via Proxmox, then set up iSCSI for your Blue Iris. Unraid will create unnecessary overhead on the CPU since you already have ZFS running. Also, you'll get the benefit of the caching that ZFS provides. Windows always sucks at SMB shares anyway.
iSCSI
black magic design hmh
or just take a chill pill and burn a 100 GB Blu-ray disc, yep it's retro tech
well, 1000 discs is 100 TB
and RAID is not a backup; even a tape is a backup
1000 discs on spindles doesn't even take that much space
it's just a single small storage box
Why not just pass through the whole HBA to Unraid?
I’m pretty sure the normal way to do things is to pass through the card. I think that’s what Craft Computing does - PCIe pass-through from Proxmox to TrueNAS in his case.
Yeah, I was confused why he didn't do this!
First
put on a shirt
First?
You were indeed good sir
@Nunya58294 Nice, it's midnight here, sitting at the workbench slowly reformatting a boatload of used Dell EMC SAS drives and SAS SSDs that all use larger sector sizes and can't be used by my RAID controllers or OS. The alert for this vid pops up and I'm like, yeah, I'm in the mood for some Jason 🤣
@davidflorey Haha heck yeah man!
How's Loki (the server) doing good sir?
Ahh I see now.......