This guy deserves so many more views. Informational videos, to the point. You can tell it is a passion. Keep it up, Tom!
Yes, I am sure this is way above most people's heads though. I am loving this content. This is exactly what I want to do at home and possibly deploy for my boss at work.
Haha, yesterday I wanted to get it to work with FreeNAS. Failed. Then I went to bed. And this morning you uploaded this video.
Great work. Keep it up.
Really love that you started with an overview of what iSCSI is to contextualize what you were doing. Awesome. Perhaps it's only newer versions of FreeNAS, but the iSCSI service needed to be enabled too on my machine.
Tom, I swear... every time, whatever I need to know, I find it on your channel. Thanks! xD
The best FreeNAS demo on here. Like the performance demo as well. I found it really helpful to see and hear about the extra features like snapshot recovery and the 1Gb/10Gb comparison. Also liked the explanation of iSCSI being block storage and how it must be created as its own zvol to be identified as a "Device" for extent configuration. Thank you!
Thanks, glad it helped
A short and nice tutorial on iSCSI. The way you present it, even a novice can understand. Thumbs up!!! Most of the time, we use NFS (of course on FreeNAS) in conjunction with virtualization. We felt it was simple and easy. Being a file-level protocol, we can do all sorts of operations with the VM files. If you could show the real differences between NFS and iSCSI, it would be nice.
By 9:57 in the video, you finally helped me figure out why my iSCSI wasn't working, thank you!
I've been wanting to pull the 3x 2TB drives from my computer because they're getting in the way of my new cooling setup, but I needed some way to handle my large storage, and this is just what I needed. I have my server in a RAID 5 configuration running on a NIC-teamed set of two gigabit lines connected to the same switch my PC is on, so while I can't afford 10GbE yet, this should save me a lot of grief by moving from local to network storage.
Hi, it's 2022 and this is what I was looking for. Thank you for your videos, they saved me a ton of time :)
Woo, my first iSCSI setup. Always wanted to do that. The names used in the iSCSI setup are pretty counterintuitive at times: the "Target" setting is on what I'd think of as the server side, and there are multiple places where authentication is a setting.
what a great channel. I don't usually comment much, but every piece of open source enterprise software I get my hands on, you usually do a video on. Awesome, thank you and keep it up!
Thanks
Love your videos, man. I set this up for my Blue Iris system so it can move all videos over to a 20 drive when they reach 30 days on the main system, and then it keeps them there for as long as it can, until they are 6 months old. It's more of a test to see how long I can store data in it with 11 cameras running at 4MP; then I will either add or take away depending on how the test comes out. Well, only 3 months in and it has only used 2.7 TB, so looking good. Now my daughter is getting into video editing, and I have found it's faster with iSCSI than it is with just a mapped drive, and I can limit the space to whatever I want. Also, sometimes Windows likes to play the game of "I can't see the shares" and I have to play search and destroy to see what happened. So I like coming to your videos to learn stuff about FreeNAS and pfSense, which I need to get set up. But I've been waiting until I change my backbone to 10Gb, because I can put a 10Gb card in the server I will be using for pfSense. I know where to go to learn more: any time I look up anything about pfSense or even FreeNAS, I look to see if you created a how-to or review, because you do a great job of teaching.
I have two VMware servers running from an iSCSI volume over 10Gbit. I have the storage on its own 10Gbit switch, so I don't need CHAP. I was able to copy a 500GB VM to the storage in under 1 hour. Works great.
Well, honestly, under 1 hour for 500GB over a 10Gbit connection sounds slow to me; it should have taken about 20 minutes max.
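For anyone wondering where the "sounds slow" reaction comes from, here is the rough arithmetic (a quick Python sketch; the 50% efficiency figure is just an assumption for illustration):

```python
# Rough numbers behind the "sounds slow" reaction: 500 GB at the 10 Gbit/s
# line rate, versus what finishing in "just under an hour" actually implies.
size_bytes = 500e9                    # 500 GB
line_rate = 10e9 / 8                  # 10 Gbit/s = 1.25 GB/s

print(f"Line-rate minimum:  {size_bytes / line_rate / 60:.1f} min")          # ~6.7 min
print(f"At 50% efficiency:  {size_bytes / (line_rate * 0.5) / 60:.1f} min")  # ~13 min

implied = size_bytes / 3600           # throughput needed to finish in one hour
print(f"'Under 1 hour' implies only ~{implied / 1e6:.0f} MB/s "
      f"(~{implied * 8 / 1e9:.1f} Gbit/s), so the bottleneck is likely "
      "elsewhere: disks, sync writes, or a single-stream limit.")
```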
Is it just me, or at 7:22 do you go over targets twice? lol, I rewound this twice thinking I missed something.. xD
Wow, this was an excellent tutorial, it helped me set up this exact iSCSI share to my Windows server!! Thanks and keep it up, love all these videos!!
Thank you so so much. You have the best tutorials. Stay awesome
When I get to the step at 10:30, my Windows 10 machine goes BSOD, then reboots, then goes BSOD again. Eventually Windows noticed there's a problem and offered to attempt to repair its startup sequence, which worked. That was a scary 15 minutes. Has this ever happened to you? This happens with an Aquantia 10Gb NIC, however everything works fine with an Intel 1Gb NIC... maybe it's a driver issue?
Learned so much. I followed your steps and it worked nicely. Thank you.
How about a video on booting from iSCSI? I've seen some rising interest in diskless thin client offices.
I would need a workstation that supports it, but I don't think they're a very popular solution. It's just a BIOS setting that connects the machine to the iSCSI target, and from there it acts just like a hard drive.
@@LAWRENCESYSTEMS -- I agree they're not a great option for many reasons. I see the question pop up when people fall in love with ZFS and start to think about having ALL of their storage on ZFS, but still have the utility of Windows environments. Just a content idea. :) It might be fun to see a walk-through and a discussion of the (perceived) advantages and the disadvantages, which could lead you into another video(s) about methods to administer a deployment of n# workstations.
@@sethwilliamson for thin client based offices, you might consider just having the clients boot via PXE. The OS that is loaded would presumably be something tiny that just RDPs into a terminal services VM.
@@praecorloth yes, that is exactly how it is done, through PXE and RDS. iSCSI connections are a PAIN to manage; managing 50+ desktops scattered around a building and across several switches is more work than one might think.
Fun fact: you can install Steam on one iSCSI target and your entire games library on another, and it works like a charm (mine does, anyway).
Do you need to create a zvol? I did the process without creating a zvol and it worked perfectly.
I love it! Good job Tom, I am going to put stickers on all my servers at home, TLC by LTS..... x
FYI, there is the concept of authentication (CHAP) in two places. The first is the portal, which just allows you to search for and find the targets, with or without providing a password. If you had enabled CHAP on the portal setup you wouldn't have been able to just Quick Connect from Windows; you'd have to provide the password to get the list of targets. Then you can also set authentication for connecting to the target itself. If you had that set, when you clicked the target to connect it would have failed, and you would have had to click Connect instead of Done because you'd need to enter the CHAP password. Also note that you can limit the initiator(s) and networks allowed to connect to the target.
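To make the two CHAP levels concrete, here is a rough sketch from a Linux open-iscsi initiator's point of view (Windows has the same split between discovery and target login). The iscsiadm parameter names are written from memory, and the portal, IQN, and credentials are placeholders, so treat this as a sketch to verify against your own setup rather than copy-paste:

```python
# Rough sketch of the two CHAP levels from a Linux open-iscsi initiator.
# The iscsiadm flags and parameter names are from memory -- treat them as
# assumptions and verify against `man iscsiadm` on your system.
import subprocess

PORTAL = "192.168.10.50:3260"                 # hypothetical FreeNAS portal
TARGET = "iqn.2005-10.org.freenas.ctl:demo"   # hypothetical target IQN

def run(args):
    """Run a command, printing it first so each step is visible."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1) Discovery-level CHAP: only needed if CHAP was enabled on the *portal*.
#    Without it, plain sendtargets discovery (like Windows Quick Connect) works.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# 2) Target/session-level CHAP: set on the node record before logging in.
#    Skip these updates if the target itself has no CHAP configured.
for name, value in [
    ("node.session.auth.authmethod", "CHAP"),
    ("node.session.auth.username", "chapuser"),     # hypothetical credentials
    ("node.session.auth.password", "chapsecret12"),
]:
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL,
         "-o", "update", "-n", name, "-v", value])

# 3) Log in; this is the step that fails if target-level CHAP is wrong.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```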
So we have dedicated storage networks, and I want to use dedicated interfaces and networks for this. When I go to create an IP and VLAN on that interface, I'm struggling to add a subnet mask; I can add an IP but that's it, and the IP does not work, I can't ping the storage device (all layer 2, no firewall, so it's not a firewall issue). Are there any videos that would help with this?
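Not an answer to the FreeNAS UI question, but a quick way to rule out the subnet mask itself as the reason the ping fails (a minimal Python check; the addresses are made-up placeholders):

```python
# Minimal sanity check for the "can't ping the storage device" situation:
# given the interface IP/mask and the storage IP, confirm they actually land
# in the same subnet. The IPs below are hypothetical placeholders.
import ipaddress

initiator_iface = ipaddress.ip_interface("10.99.0.10/24")  # IP + mask on the storage VLAN
storage_ip = ipaddress.ip_address("10.99.0.50")            # FreeNAS portal IP

if storage_ip in initiator_iface.network:
    print(f"{storage_ip} is inside {initiator_iface.network}; ping should work")
else:
    print(f"{storage_ip} is NOT in {initiator_iface.network}; "
          "fix the mask/VLAN before blaming iSCSI")
```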
Can you clarify then how I am supposed to set up a Windows Failover Cluster with iSCSI, MPIO, and a quorum disk? I thought you were supposed to use a third server as the iSCSI target, in this case FreeNAS, and have both of the initiators access the target we created on FreeNAS?
And is there a cheap solution when it comes to maintenance (electricity, etc.) for long-term storage? Something based on a small node, e.g. an RPi, in a big network storage setup.
Hello everyone! Thanks for this amazing and very well-explained video about iSCSI functionality on FreeNAS. I have a question about that: is it recommended to use iSCSI with FreeNAS for MSSQL backup storage?
HOW did it exceed 125MB/s if it's using only 1GbE....??
Also, is there a way to "local copy" the iSCSI LUN on the FreeNAS box to the actual shared volume on the array (Tank I think you call it) ...?
In order to, say, share things without having to make anything but the command itself a network operation...?
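On the 125MB/s question, the quick math below shows why a sustained number above that on 1GbE has to be coming from caching or compression rather than the wire (a small Python sketch):

```python
# Back-of-the-envelope check on the 1GbE question: gigabit Ethernet caps
# sustained transfers at roughly 125 MB/s (less with protocol overhead), so
# any benchmark number above that is coming from caching (ZFS ARC, Windows
# write cache) or compression, not the network link itself.
line_rate_bits = 1_000_000_000            # 1 Gbit/s
line_rate_bytes = line_rate_bits / 8      # 125,000,000 B/s

print(f"Theoretical 1GbE ceiling: {line_rate_bytes / 1e6:.0f} MB/s "
      f"({line_rate_bytes / 2**20:.1f} MiB/s)")

# With ~5% Ethernet/IP/TCP/iSCSI overhead, realistic sustained throughput:
print(f"Realistic sustained:      ~{line_rate_bytes * 0.95 / 1e6:.0f} MB/s")
```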
OK, I just set up an iSCSI target for my daughter, and it also sees the one for my Blue Iris. I set it up so she couldn't access the Blue Iris drive, but how do you set it up so each system only sees the one target you want it to use?
When do I need to use a cache disk? I'll be using 4x 2TB NVMe M.2 drives in RAID 10 on a PCIe card. Do I need to use a cache disk? What would the benefit be?
I don't understand. iSCSI is just block storage. Wouldn't you expect it to use *less* CPU than Samba? It has no sharing, no permissions of its own, no translation to do; just take data and shove it into the zvol. Does it get more, smaller requests? I am confused @ Lawrence Systems
I _suspect_ what is happening here is that the volume is being presented as block storage, but FreeNAS still has to take that information and decide how to write the data to the ZFS volume. Assuming that's more or less right, I would expect an iSCSI share from a 'file' extent to have less CPU load, since FreeNAS would only have to track changes to something that already exists on disk. I found some vague mention of file extents having "better performance", and Tom made some reference to zvol extents having "more features", and both of those comments make me feel like I'm at least pointed in the right direction.
Thanks a lot, very helpful video!
I've followed the steps to a tee, yet when I connect to the target on Windows Server 2019, I can see "Disk 1" in Disk Management, but it's as if it has no space. It's just blank in the right column. No unallocated space, nothing! I'm racking my brain trying to figure this out!
You should be able to format the drive from there.
@@LAWRENCESYSTEMS I figured it out. For some reason, FreeNAS was making the zvol 16 KB. I apparently needed to let FreeNAS know what unit I wanted to use (gigabytes). The newer version doesn't seem to have the option on the right to select gigabyte, megabyte, etc. Not sure why they got rid of that.
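A small sketch of the unit pitfall described above: converting a human-readable size to an explicit byte count so nothing is left to an implied unit (this is just illustrative Python, not FreeNAS code):

```python
# A tiny 16 KB zvol is what you can end up with when the unit is implied
# rather than explicit. Converting a human-readable size to an exact byte
# count up front avoids relying on whatever unit the UI or API assumes.
UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def to_bytes(size: str) -> int:
    """Convert strings like '100G' or '500M' to an exact byte count."""
    size = size.strip().upper().rstrip("IB")   # accepts 100G, 100GiB, etc.
    if size[-1] in UNITS:
        return int(float(size[:-1]) * UNITS[size[-1]])
    return int(size)                           # bare number = bytes

print(to_bytes("100G"))   # 107374182400
print(to_bytes("16K"))    # 16384 -- the accidental tiny zvol
```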
Another wicked video !!
What advantages does this have over, say, just saving a VHD/VMDK file on the FreeNAS pool and mounting that in the Windows VM?
I have 4x 1Gb ports and one 10Gb port on my Dell R710, and one 10Gb on each of my workstations. I am happy to have 50GB of RAM on the server and 24GB on each of the workstations, upgradeable to 260GB of RAM on each of the systems, though I still need 4x 4TB drives for it, with the full version of ESXi 6.5.0 :)
If we're adding the iSCSI target to a pool, we can add it to the pool master and it will then copy down to the other hosts?
Yes, when you add shared storage to the pool all the hosts get the connection.
Hello, I installed FreeNAS 12.1 on ESXi 7. I just wanted to exercise the Hyper-V cluster with an iSCSI share, but it is so slow that I can't go ahead with building the failover cluster; it keeps getting disconnected, going online/offline in Disk Manager. Can you help me please?
What would be the recommended network layout for the iSCSI setup?
My speeds are not so good, around 50MB/s, while with NFS and SMB I get full speed.
I have 2x Proxmox nodes connected on 1 GBit to the switch and my FreeNAS machine with 2x 1GBit on LACP.
Currently everything is running in the same subnet.
Can you do a video on how you would set the VLAN's etc? :)
make sure your switch supports jumbo frames
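If you do go the jumbo frame route, it has to be enabled end to end, and a don't-fragment ping is a quick way to verify that from a Linux client (a small sketch; the flags are Linux iputils syntax and the IP is a placeholder):

```python
# If you bump MTU to 9000 for iSCSI, it has to be set end to end (client NIC,
# switch ports, FreeNAS interface). A don't-fragment ping with a near-9000-byte
# payload verifies the whole path; flags below are Linux iputils ping syntax,
# other platforms differ.
import subprocess

STORAGE_IP = "10.99.0.50"   # hypothetical FreeNAS storage-network IP

# 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header)
result = subprocess.run(
    ["ping", "-M", "do", "-s", "8972", "-c", "3", STORAGE_IP],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("Jumbo frames pass end to end.")
else:
    print("Large DF ping failed -- something in the path is still at MTU 1500.")
    print(result.stdout, result.stderr)
```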
I'm curious where you got FreeNAS 11.2 with this modern UI. I have installed 11.2 U3 on my server and the "modern UI" looks kinda weird, not as usable as yours :-O
Nice video. But I still don't get why all the "on top" management tools must add iSCSI in a destructive way. I'm currently in the situation where I want to migrate an already existing VM on my FreeNAS box to some other hypervisor. I thought: hey, let's leave the VM storage on the FreeNAS box and add the zvols via iSCSI to the VM box. Proxmox seems to work (the hypervisor sees all the partitions on it), yet a VM in Proxmox does not see the device "as is": Proxmox seems to want to add an LVM on top of it (= destroying the data on it). In XCP-ng it's even worse: adding the store via XOA completely destroys the volume directly. I will try ESXi now, but I fear it will just behave similarly :/
I'm testing iSCSI (FreeNAS) with VMware and I have some issues with ZFS snapshots. When I try to present a snapshot via iSCSI to VMware, I see the disk in devices but it's not shown in datastores. I have seen a workaround with a few CLI commands to fix it, but it takes a lot of time, so I think I'm going to use NFS.
Interesting, most of the VMware systems we have been working on are using local storage, so I don't have a ton of experience with it.
@@LAWRENCESYSTEMS Not sure if it's the exact same scenario (I didn't read the whole article, to be honest :-) ) but it looks like my problem: paul.dugas.cc/2017/03/09/freenas-vmware/ The solution is probably to use the vSphere client and "Assign a new signature", but I only have ESXi free, so I guess it's not possible for me.
Tom, I know this is an older post. I was curious: if you mount a volume in Windows via iSCSI, is there any way to fail over to a second iSCSI server? My main iSCSI server is FreeNAS, but when I have to shut down FreeNAS for maintenance I have to shut down all my VMs since the storage would be offline. Is there any way to make a second FreeNAS box for failover?
How would you keep the two iSCSI servers in sync?
Lawrence Systems / PC Pickup OK, so I'm not crazy then, I couldn't think of a way either. :-) I'm assuming this would be one of those scenarios where it would be best to keep the data on a virtual disk and not mount the volume via iSCSI.
So I just thought of this: what if you were to mount two disks, one from the FreeNAS and one from a Synology NAS (both via iSCSI), and then in the OS did a RAID-level mirror? Then if the FreeNAS box went down, the RAID would be degraded but would still work since it's a mirror. When the FreeNAS box came back online you would just rebuild it and put it back into a valid state. I think the only issue would be whether the server would continue to run with the failed disk. Just thinking out loud... sounds like a test bench scenario.
I have configured iSCSI on FreeNAS and it works fine, but files are not syncing in real time when clientA and clientB share the same iSCSI LUN. When clientA pastes a file, clientB is not able to see it. If clientB disconnects the iSCSI session and connects again, then clientB can see the file. Files do not sync while both clients are connected to the same iSCSI target. Please help me.
iSCSI is not designed to be shared that way.
@@LAWRENCESYSTEMS Is it not possible to sync files between multiple clients on the same iSCSI target?
@@biswasashim4473 iscsi IS NOT designed to be shared between separate clients
Why can't FreeNAS share LUNs? That is required to have an HA VM solution, for vMotion and such...
What makes you think it can't? Up until I switched to Proxmox, my home lab was running vSphere with FreeNAS iSCSI storage, and I had the LUNs for the VMFS datastores shared across my four ESXi hosts with no issues.
Awesome, this helped me a lot.
How do you get the System Overview interface?
I hope you've found the answer by now. In case you haven't, or for anyone curious about the system overview: go to Services and activate Netdata.
Is it possible to have one FreeNAS server hand out two different iSCSI HDDs to two different Windows PCs? I can't seem to get Windows to connect to just one of the iSCSI drives; it seems to just map them all. On the other hand, I have managed to map the same iSCSI drive on two different Windows PCs. Am I missing something?
Yes, one way would be to use two different iSCSI initiators, one for each drive so you can choose them one at a time.
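For the "each PC should only see its own drive" questions above, the usual approach is one target per client, each tied to an initiator group that only allows that client's IQN (the same initiator/network restriction mentioned in the CHAP comment). Below is a very rough sketch against what I believe is the FreeNAS/TrueNAS v2.0 REST API; the endpoint paths, field names, host, and credentials are all assumptions to verify against the API docs on your own system, and the same thing can of course be done in the web UI under Sharing > iSCSI:

```python
# Rough sketch: one target per client machine, each locked to that machine's
# IQN via an initiator group. Endpoint paths and field names below are
# assumptions about the FreeNAS/TrueNAS v2.0 REST API -- verify them against
# the API docs on your own system before relying on this.
import requests

BASE = "https://freenas.local/api/v2.0"   # hypothetical host
AUTH = ("root", "password")               # hypothetical credentials

def post(endpoint, payload):
    r = requests.post(f"{BASE}/{endpoint}", json=payload, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

# Initiator group that only allows one specific PC (IQN is a placeholder).
group = post("iscsi/initiator", {
    "initiators": ["iqn.1991-05.com.microsoft:editing-pc"],
    "comment": "video editing PC only",
})

# Target that is only reachable through that initiator group.
post("iscsi/target", {
    "name": "editing",
    "groups": [{"portal": 1, "initiator": group["id"]}],
})
# Repeat with a second initiator group + target for the other machine.
```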
You call it "compression" but I believe what it's doing is "deduplicating". Deduplication doesn't compress files, it creates pointers to data (usually 4K chunks) that, if needed again and again, it simply accesses that data through the pointer(s) instead of recreating the data. The difference is subtle but significant in that a compressed file is deduplicated within itself but multiple instances of that same data pattern will occur as many times as the user copies that data to the volume. A good analogy would be like copying the same .ZIP (or .RAR) file to a volume. That data is reproduced on the disk for as many times as it is copied. In a deduplicated environment, the chunks of 4K disk space that represents that data is copied once. Thereafter, pointers are created to that repeated 4K data pattern. Of course, a disk pointer takes up far less disk space than the data it's pointing to. Deduplication is awesome and available in NTFS but not Microsoft's new file system ReFS, which makes it much less useful in the iSCSI environment. ZFS is superior in every way except that the file system is not natively recognized by a Windows operating system. :(
I heard that iSCSI requires its own separate private network. Is this true?
Never mind, you answered it. :)
No. But it is a good idea.
People actually like Xen Orchestra? I still prefer XenCenter.. Or I guess XCP-NG Center now.
I use it almost exclusively for backups. A lot of options and they all work well.
You have done a lot of videos on FreeNAS. May I suggest you do an evaluation of a little-known NAS called Ovios (www.ovios.org)? I have used this NAS distribution in a test environment and I find the following advantages with it: 1. Linux based. 2. File system is ZFS. 3. Very efficient. 4. SMB, AD integration. 5. NFS. 6. iSCSI. 7. FTP. 8. Snapshots. 9. High availability. 10. Replication. 11. Storage cluster. 12. Very conservative on RAM usage. 13. Free. 14. Can be easily virtualized. All there. The only flip side is that there is no web-based management tool. Please do evaluate...
Not likely that I will review it.
@@LAWRENCESYSTEMS Thanks for your kind and straight reply. So nice of you. If you cannot review Ovios, that's fine; it's your prerogative. My wish now is that you could use it once. Please give it a try, there is no harm in trying. Whether you like it or not, you would be spending at most 1 hour (I am aware that you are quite busy). Whatever your experience, you don't even have to share it; no one can force you on this. I have always felt that you are an expert on this subject, hence my wish. For God's sake, please do not mind. Let me also be clear that I do not have any connection either with Ovios or the developers of Ovios.
To confess, it was rather a coincidence to find this ZFS-based NAS; Google helped me reach the Ovios home page. I was curious after reading the documentation, decided to give it a try, and was amazed. But there is no way I have the expertise to compare it with a very stable product like FreeNAS, so I decided to bring it to your notice for expert review and comments. Let me also confess that I have watched over 100 videos on FreeNAS, but the way you present, hats off to you!!!
Best of luck. May God bless you.
I am completely lost.
Wish you would have gone slower and better explained what things are.
I’m totally new to FreeNAS, so I have no clue what anything is or why you did it...