Testing Synology and TrueNAS NFS VS iSCSI

  • Published Aug 25, 2024

COMMENTS • 83

  • @ewenchan1239
    @ewenchan1239 3 years ago +24

    Thank you for this video.
    Yes, I would definitely love to learn more about the different use cases for iSCSI vs. NFS.
    I've never really dove into it much, so thank you for putting this video together and explaining this to us.
    I greatly appreciate it.

  • @joshsmith4998
    @joshsmith4998 3 years ago +8

    I think a deeper dive into the philosophy of storage design would be helpful! I've set up iSCSI, Fibre Channel, and SMB/NFS shares in the past and across various VMware topologies, but never really got into the nitty gritty of optimizing storage for your VMs for performance, security, and scalability :)

  • @LAWRENCESYSTEMS
    @LAWRENCESYSTEMS  3 years ago +1

    Benchmark Links used in the video
    openbenchmarking.org/result/2108267-IB-DEBIANXCP30
    openbenchmarking.org/result/2108249-IB-DEBIANXCP11
    Synology Tutorials
    lawrence.technology/synology/
    XCP-NG Tutorials
    lawrence.technology/xcp-ng-and-xen-orchestra-tutorials/
    Linux Benchmarking
    ua-cam.com/video/YjhEjWs8YzE/v-deo.html
    Getting Started With The Open Source & Free Diagram tool Diagrams.NET
    ua-cam.com/video/P3ieXjI7ZSk/v-deo.html
    ⏱ Timestamps ⏱
    00:00 NFS VS iSCSI
    01:42 Scope and Setup
    02:45 Difference Between iSCSI & NFS
    08:50 Test Results
    12:48 Storage Design Considerations

  • @engrpiman
    @engrpiman 3 years ago +5

    Side note: I have found that Synology always reattaches via iSCSI when the VM reboots. My QNAP NASs often had trouble and needed me to manually mount the drive.
    What you do is use block storage and then use Veeam to snapshot and back up the individual VMs. Works great.

  • @chrisipad4425
    @chrisipad4425 2 years ago +1

    Thanks for this easy-to-follow comparison between NFS and iSCSI!

  • @handlealreadytaken
    @handlealreadytaken 3 years ago +8

    Interesting content. This was always a hot topic when implementing either EMC or NetApp systems with VMware and/or Windows running on bare metal in a clustered environment. I'm sure a lot has changed since I touched those, but at the time tiered storage was handled differently at a block vs. file level.

    • @fastbimmerrob
      @fastbimmerrob 3 years ago +1

      You would be surprised... not much has changed 😬 It would be like riding a bicycle!

    • @BruceFerrell
      @BruceFerrell 3 years ago

      And it still is. Clustered access to iSCSI requires OS-level disk management to correctly allow it. Without that, file system corruption can and does occur.

  • @falazarte
    @falazarte 3 years ago +1

    Great video! Looking forward to the storage design video. Thank you!

  • @joelsmith2525
    @joelsmith2525 3 years ago +2

    The case you mention near the end of the video with a Graylog VM, and how to handle the storage differently would be super helpful to me! I'm planning on setting up Graylog (sort of half started already) and the storage aspect is one part I was very unsure about.

  • @GrishTech
    @GrishTech 3 years ago +2

    Ah yes. Thanks for the updated tests.

  • @phrag5944
    @phrag5944 3 years ago +7

    Super good explanation. I've used the "Ethernet to HDD" analogy before to describe iSCSI, or a "locally appearing, network-attached raw block of a drive that looks like a locally installed drive to the layman."
    Ethernet to HDD is better, I guess.

  • @84Actionjack
    @84Actionjack 3 years ago +2

    Would be very interested in different use cases of running a Windows VM with iSCSI and how the data should interface with the VM. Looking forward to that and more. Thanks

  • @RicoCantrell
    @RicoCantrell 3 years ago +3

    Awesome explanation!

  • @adam872
    @adam872 2 years ago +3

    In spite of some performance degradation, NFS all the way for me. I find the convenience and flexibility are worth a lot more than the performance gains (in some cases) of iSCSI. Thanks for the video.

  • @RyanOHaganWA
    @RyanOHaganWA 3 years ago +4

    Hey Tom, can we do a segment about CEPH?

  • @Mr_Sprint
    @Mr_Sprint 3 years ago +1

    As mentioned about TrueNAS and restoring snapshots, this is why I set up separate extents for each VM, so no two VMs live on the same LUN.

    • @Supermansdead81
      @Supermansdead81 3 years ago +1

      That’s exactly what I do. I do put test VMs that are considered important production in a larger random LUN, but all important production VMs have their own unique IntelliFlash LUN. I then relax knowing I’ve got SAN snapshots per LUN on schedules, as well as our Veeam backup jobs to one Veeam storage repository and backup copy jobs to a separate Veeam storage repository. I’ve also gone through setting up Veeam SureBackup jobs for automatic Veeam restore point verification in a Veeam Virtual Lab. It’s a great setup if your stuff is exclusively in vSphere. We mainly use ESXi hosts now, which makes the whole process pretty streamlined at this point. I do thin VMDKs exclusively, and IntelliFlash does that on the backend as well for iSCSI LUNs.

  • @chromerims
    @chromerims 1 year ago

    Hmm... I have thin-provisioned iSCSI before. Just this week, in fact.
    Excellent video, sir 👍

  • @michaelchatfield9700
    @michaelchatfield9700 9 months ago

    Very helpful.

  • @devoid42
    @devoid42 3 years ago +1

    Great video. I'm in the market to build a network storage solution, and this was very much of interest to me. I have a requirement for family storage, but I also host VMs that will be utilizing the storage as well.

  • @tedmiles2461
    @tedmiles2461 3 years ago +4

    24:20 If you think you'll need to use snapshots on TrueNAS/ZFS, why not make multiple zvols, one per VM, instead of one zvol for a pool of VMs?

  • @hescominsoon
    @hescominsoon 3 years ago +1

    I run a single extent per VM. This way a snapshot is available per VM, instead of putting all of the VMs inside one extent, which does limit your snapshotting options. :)
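
    A minimal sketch of the "one zvol per VM" layout the two comments above describe, assuming a TrueNAS/ZFS host with the standard zfs CLI available; the pool name "tank" and the VM names are hypothetical, not from the video:

      #!/usr/bin/env python3
      # Sketch: create one ZFS zvol per VM so each VM gets its own iSCSI extent
      # and can be snapshotted and rolled back independently of the others.
      import subprocess

      POOL = "tank"                              # assumed pool name
      VMS = {"vm-web": "50G", "vm-db": "200G"}   # hypothetical VM names and sizes

      for name, size in VMS.items():
          zvol = f"{POOL}/iscsi/{name}"
          # -p creates the parent dataset if needed; add -s for a sparse (thin) zvol
          subprocess.run(["zfs", "create", "-p", "-V", size, zvol], check=True)
          # a per-VM snapshot, e.g. taken right before an upgrade
          subprocess.run(["zfs", "snapshot", f"{zvol}@pre-upgrade"], check=True)

    Each zvol is then exported as its own iSCSI extent/LUN, so rolling back one VM's snapshot never touches the others.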

  • @berndeckenfels
    @berndeckenfels 3 years ago +3

    iSCSI could thin provision, but the ZFS TRIM implementation is not very mature. Compression, however, does help keep freed blocks out of the zvol usage.

    • @fastbimmerrob
      @fastbimmerrob 3 years ago

      I love the videos here, but this info is not correct. You can very much thin provision LUNs and present them over iSCSI or FC.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  3 years ago +1

      You cannot with XCP-NG

    • @fastbimmerrob
      @fastbimmerrob 3 years ago +1

      @@LAWRENCESYSTEMS Ah, I knew it had to have made sense if you're posting it! Thanks again Sir.

    • @saintbenedictscholacantorum
      @saintbenedictscholacantorum 3 years ago +1

      On TrueNAS I just check the "Sparse" option on the zvol for the iSCSI extent, and it thin provisions just fine. I can present the extent to Windows or to Proxmox or I imagine to anything. But I can believe that a trim problem might eventually negate the thin provisioning; I haven't used it long enough to see the impact.
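
    The CLI equivalent of that "Sparse" checkbox looks roughly like the sketch below, assuming a ZFS system with the zfs CLI; the pool and zvol names are made up. A sparse zvol shows refreservation=none, while a thick one reserves its full volsize up front:

      import subprocess

      ZVOL = "tank/iscsi/vm-web"   # hypothetical zvol backing an iSCSI extent

      # -s = sparse: no space is reserved, blocks are allocated only as written
      subprocess.run(["zfs", "create", "-p", "-s", "-V", "200G", ZVOL], check=True)

      # Verify: a sparse zvol reports refreservation=none; a thick one ~200G
      subprocess.run(["zfs", "get", "volsize,refreservation,used", ZVOL], check=True)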

  • @andibiront2316
    @andibiront2316 2 years ago

    Running TrueNAS Core and ESXi. Both the NAS and ESXi are using thin provisioning with iSCSI. ESXi even sends UNMAP commands to TrueNAS when a thin disk shrinks (files are deleted on the guest OS filesystem).

  • @curefanz
    @curefanz 10 months ago

    Great video

  • @hazaqames477
    @hazaqames477 3 years ago +3

    Did you use NFS 4.1 and multipathing? I may have missed your NFS setup.

  • @mckidney1
    @mckidney1 3 years ago +3

    This video is weird; iSCSI and NFS are not comparable like this - you are using a simulated block device (VMDK) over a simulated block device (iSCSI), compared to a simulated file system (NFS), and then you introduce simulated block devices (ZFS snapshots) and thick provisioning (which actually happens at 3 out of 4 steps already). I suspect this video targets people who care about the hypervisor and do not know which option to choose. From that perspective it makes sense. But from the perspective of designing a NAS/SAN for your hypervisor - thin vs. thick, snapshots - all those hurdles are created by the design being a jumbled mess.

  • @jeffreyplum5259
    @jeffreyplum5259 2 years ago

    I am a home user, just exploring servers. The only place many people see this thick versus thin provisioning is in VirtualBox. If one chooses fixed-size disks, the client is guaranteed access to that much disk space. That storage is carved out of the storage pool without question. This is great for performance, but expensive in storage space. iSCSI expands this high-performance model to a disk server, often over a very high speed connection. NFS thin provisioning is like the VirtualBox dynamic disk. One gives up absolute performance over a known disk size for a more economical storage system.
    In my case, my VM hosts are small, with limited internal storage. Dynamic disk provisioning allows me to squeeze the most out of my modest VM host SSD space. I plan on using NFS storage for user data and images. I can also offload static data and snapshots to a file server. Eventually even my VMs may use the NFS shares as well. I am more comfortable running VMs than containers at the moment. I can load more VMs onto my host with mostly static data offloaded to a file server. I may also add emulated systems to my home lab. I can use storage on my older systems to back up my VM hosts. Many thanks for your help.

  • @jeffm2787
    @jeffm2787 2 years ago

    I've had excellent luck with ESXi, TrueNAS (not Core) and iSCSI. Effectively thin provisioned and compressed with LZ4. Generally speaking, iSCSI will outperform NFS with ESXi. FC, of course, is an even better option.

  • @tedmiles2461
    @tedmiles2461 3 years ago

    BTW, you can also mount a snapshot on ZFS/TrueNAS and copy out just the one VM that you wanted to restore.
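
    A minimal sketch of that recovery path, assuming a ZFS host with the zfs CLI; the pool, snapshot, and file names are hypothetical. Cloning makes the snapshot browsable read-write without rolling anything back:

      import subprocess

      SNAP = "tank/vm-storage@auto-2021-08-25"   # assumed existing snapshot
      CLONE = "tank/vm-restore"                  # temporary clone dataset

      # Clone the snapshot, copy out just the one VM's disk, then clean up.
      subprocess.run(["zfs", "clone", SNAP, CLONE], check=True)
      subprocess.run(["cp", "-a", "/mnt/tank/vm-restore/graylog-vm.vhd",
                      "/mnt/tank/vm-storage/"], check=True)
      subprocess.run(["zfs", "destroy", CLONE], check=True)

    (For a read-only copy you can also browse the hidden .zfs/snapshot directory on the dataset instead of cloning.)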

  • @zenja42
    @zenja42 3 years ago +4

    What I've seen with iSCSI vs NFS...
    Storage system in DC A & servers in DC B - 40 km of fiber between them over a DWDM Layer 1 network.
    For some reason iSCSI fell back to 64-byte MTU packets... NFS just takes the 9000 MTU - we tested both directions end to end with a Viavi MTS-5800 -> no problem.
    The customer had HP storage / FreeNAS / QNAP for testing - the servers were ESXi. We were not able to find the error. (As DC customer equipment is customer owned - so not our problem.)
    But from what I've seen - NFS just handles (longer / changing) latency way better... also, routing NFS is not really a problem. I've seen corrupted iSCSI over a 180 km span - just because the systems of the NetApp metro cluster got out of sync -> latency (NetApp says "it must be the same cable distance within ~20-30 m...") - we had to install a compensation fiber of ~18 km to get the same latency on both paths...
    "Very enterprise stuff... works great... most of the time :P"
    Thanks for sharing your benchmarking :D

    • @rockenrooster
      @rockenrooster 3 years ago

      Do people really have SANs that far away from the compute host? Is this common? Seems like a TERRIBLE idea regardless.

    • @JeroenvandenBerg82
      @JeroenvandenBerg82 3 years ago

      Running iSCSI or NFS over these distances sounds like a bad idea? I have never seen a vendor that supports that?

    • @zenja42
      @zenja42 3 years ago

      @@rockenrooster Yes... fully synced clusters. RTT is 4.2 ms.

    • @zenja42
      @zenja42 3 years ago

      @@JeroenvandenBerg82 Customers with a full sync cluster just don't care - they build it. RTT is 4.2 ms.
      This specific customer just has his storage over there and does full caching on local SSDs.

    • @rockenrooster
      @rockenrooster 3 years ago

      Ahh, local SSD cache would make a huge difference....

  • @teaearlgrayh0t
    @teaearlgrayh0t 3 years ago

    The inability to provision thin volumes has nothing to do with the protocol, but with a limitation of the storage device. I would also use vVols with iSCSI.

  • @mscari
    @mscari 3 years ago +1

    How about using VMM Pro as a hypervisor? Would the performance be better compared to the setup you tested?

  • @NetBandit70
    @NetBandit70 3 years ago +2

    I triple dog dare you to make a video on AoE (ATA over Ethernet) and HyperSCSI

  • @jedring3756
    @jedring3756 3 years ago

    One thing to point out: Synology will thin provision iSCSI and it works correctly under VMware. Additionally, with iSCSI, at least as far as Synology goes, you can use the snapshot of an iSCSI LUN to make a new LUN, then add that to a target, attach it to your host, and pull the VHD you need from your recovered LUN to your live LUN.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  3 years ago

      Yes, it also gives you the ability to snapshot it, but it gives a performance warning when configured that way.

    • @Prime0pt
      @Prime0pt 3 years ago +1

      VMware VMFS uses thin provisioning, so it will work over iSCSI. XCP-NG uses LVM and thick provisioning, so Synology's thin provisioning will be useless.

  • @JeroenvandenBerg82
    @JeroenvandenBerg82 3 years ago

    iSCSI does not have to be thick provisioned; this sounds like a limitation of Xen. In my lab I have a thin-provisioned volume in VMware on a thin-provisioned LUN in FreeNAS connected over iSCSI. This does require you to monitor the 'real' free space, because it's easy to over-provision and run out of space.
    We run a Pure Storage SAN (all-flash) at my work environment, and that is the recommended configuration. According to my vCenter it's storing 12TB of the 19TB provisioned, using just 3.76TB on the SAN - that's with de-duplication, compression and thin provisioning, all over iSCSI.
    And with the storage integration in Veeam we can restore a single VM from a full volume snapshot within a few minutes.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  3 years ago

      Yes, this is a limitation of XenServer / XCP-NG.
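
    One way to do the "real free space" monitoring mentioned a couple of comments up, sketched with the ZFS CLI via Python; this assumes the LUNs are ZFS zvols (as on FreeNAS/TrueNAS), and the column choices are just an example:

      import subprocess

      # Provisioned size vs. what is actually consumed, for every zvol
      subprocess.run(["zfs", "list", "-t", "volume",
                      "-o", "name,volsize,used,refer,avail"], check=True)

      # Pool-level view, so over-provisioning against real capacity is visible
      subprocess.run(["zpool", "list", "-o", "name,size,allocated,free,capacity"],
                     check=True)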

  • @lordgarth1
    @lordgarth1 3 years ago +2

    So block storage vs. file storage?

  • @itgoatee
    @itgoatee 3 years ago

    What was your disk layout on the TrueNAS system? I am trying to run the same suite, and I am getting 300 seconds on the SQLite tests.

  • @apigoterry
    @apigoterry 3 years ago +3

    How about using multipath vs. LACP on iSCSI vs. NFS?

    • @BruceFerrell
      @BruceFerrell 3 years ago +1

      Multipath IS useful for iSCSI if there are multiple targets accessed as the same device. It has zero effect for NFS. LACP, depending on the configuration, has the potential to give better throughput or link redundancy.
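
    A minimal sketch of the iSCSI multipath side, assuming a Linux initiator with open-iscsi and multipath-tools installed; the portal addresses and IQN are hypothetical:

      import subprocess

      PORTALS = ["10.0.10.10", "10.0.20.10"]   # two storage NICs on two subnets
      IQN = "iqn.2021-08.example.truenas:vmstore"

      # Discover the target on each portal, then log in to every recorded path
      for portal in PORTALS:
          subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                          "-p", portal], check=True)
      subprocess.run(["iscsiadm", "-m", "node", "-T", IQN, "--login"], check=True)

      # multipathd should now show one mapped device with two active paths
      subprocess.run(["multipath", "-ll"], check=True)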

  • @juancarlospizarromendez3954
    @juancarlospizarromendez3954 3 years ago

    I suggest a comparison of the iSCSI vs. NFS vs. SMB vs. FTP vs. SFTP vs. HTTP vs. HTTPS vs. SCP protocols for different workloads.

  • @BruceFerrell
    @BruceFerrell 3 years ago

    The most direct difference between iSCSI and NFS... iSCSI presents a non-shareable block device (there are qualifications, but as a general rule...) to the client system. NFS presents a file system that can be accessed by multiple hosts simultaneously.
    The bottom line is they are NOT in the same class at all, and comparisons are apples and oranges.

  • @JoeTaber
    @JoeTaber 3 years ago

    Instead of using iSCSI, I wonder if it'd be better to run block-device-level workloads on the VM host in ZFS, then use ZFS send on a frequent schedule to transfer the data to the TrueNAS device, and ZFS receive from TrueNAS when migrating the VM to a new host.
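
    A minimal sketch of that send/receive schedule, assuming local ZFS on the VM host plus SSH access to the TrueNAS box; the dataset, host, and snapshot names are all hypothetical:

      import subprocess
      from datetime import datetime, timezone

      SRC = "local/vm-blockdev"            # zvol/dataset on the VM host
      HOST = "root@truenas"                # assumed SSH target
      DST = "backup/vm-blockdev"           # receiving dataset on TrueNAS
      PREV = "2021-08-24-0000"             # last snapshot already on the target
      now = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M")

      subprocess.run(["zfs", "snapshot", f"{SRC}@{now}"], check=True)

      # Incremental send, piped over SSH into zfs receive on the TrueNAS side
      send = subprocess.Popen(["zfs", "send", "-i", f"@{PREV}", f"{SRC}@{now}"],
                              stdout=subprocess.PIPE)
      subprocess.run(["ssh", HOST, "zfs", "receive", "-F", DST],
                     stdin=send.stdout, check=True)
      send.stdout.close()
      send.wait()

    Migrating the VM back would be the same pipeline run in the other direction.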

  • @Ajicles
    @Ajicles 3 years ago +2

    Wonder what results you would have with jumbo frames enabled.

    • @BruceFerrell
      @BruceFerrell 3 years ago

      Jumbo frames really need to be done on a separate storage LAN.
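
    A quick way to confirm a 9000-byte MTU actually survives end to end on that storage LAN is a don't-fragment ping with an 8972-byte payload (9000 minus 20 bytes of IP header and 8 bytes of ICMP header). A sketch assuming a Linux host; the NAS address is hypothetical:

      import subprocess

      NAS = "10.0.10.10"   # assumed storage-LAN address of the NAS

      # -M do sets the don't-fragment bit; if any hop has a smaller MTU, the ping fails
      subprocess.run(["ping", "-M", "do", "-s", "8972", "-c", "3", NAS], check=True)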

  • @hescominsoon
    @hescominsoon 2 years ago

    I create a different iSCSI LUN for each VM; it solves the iSCSI restore issue. :)

  • @monamoralisch264
    @monamoralisch264 3 years ago

    nize 1, thx4up

  • @NickF1227
    @NickF1227 3 years ago +1

    ...wait... but you can thin provision zvols...

  • @scoopzuk
    @scoopzuk 2 years ago

    Hi Tom - did you do any more videos on storage design considerations? I have watched lots of your videos but haven't seen one about storage design for VMs and data storage. I currently have all my data stored inside the Windows VM, which is making VM snapshots huge and slow. I'd like to learn more about the best way to set up a Windows file server/DC VM, but with storage for files to share to 30 workstations. I've been tinkering with TrueNAS SCALE recently and was thinking of setting up the main data store as SMB shares there, linked to AD to handle share permissions. Or is it better to share via iSCSI to the Windows VM and then use Windows to share the files and folders?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  2 years ago +1

      I have not done a video on that yet, but it's on my to-do list, because we often do consulting work to fix issues created by people creating huge VMs. Using iSCSI to connect TrueNAS to Windows is a great way to do it.

    • @scoopzuk
      @scoopzuk 2 years ago +1

      @@LAWRENCESYSTEMS Thanks. I'll keep an eye out for the video in the future; I'm sure it'll be very helpful to me and others with similar bad setups that have been inherited. It was all made worse when I took the VM offline to consolidate snapshots in ESXi, not realising there were snapshots from years ago, so the consolidation took 3 days; a failed P440ar battery meant write speeds were painfully slow.

    • @scoopzuk
      @scoopzuk 2 years ago

      @@LAWRENCESYSTEMS I've also been left undecided between iSCSI to a Windows file server and a direct SMB share from TrueNAS, because I love the TrueNAS snapshot options: every 10 mins for 8 hrs, every hour for 5 days, every month for a year, etc. I have a consulting engineering company, and my employees use "Previous Versions" a lot when they accidentally "save" instead of "save as", so they can self-fix it without bothering me. I have Windows snapshots/VSS twice a day, but the more granular schedules for snapshot taking and scrubbing that TrueNAS offers, and the fact that it integrates with "Previous Versions", are tempting. So I was all set to go that route... but my employees also use Windows file search a tonne; we have 40+ years of data, and the Windows file server index makes search results instantaneous for workstations. Sadly I think I'll never get that functionality from TrueNAS? This is the kind of stuff I'd love to hear you discuss on homelabs or this channel.

  • @sevilnatas
    @sevilnatas 1 year ago

    Wondering if ZFS deduplication buys you something? Seems like dedup could add the advantage of NFS to the speed of iSCSI. So, in other words, run a pre-provisioned iSCSI endpoint with ZFS dedup turned on. Best of both worlds?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      Deduplication works if you have data that can be deduplicated.

    • @sevilnatas
      @sevilnatas 1 year ago

      @@LAWRENCESYSTEMS Right, but my understanding is that it deduplicates at the block level. So, if I have that right, it seems that opens up a lot of opportunities for dedup where you don't usually think about it. For example, VM snapshots: it doesn't need to dedup the whole snapshot, just the many duplicative blocks the snapshot is made up of. Anyway, I may be off base here, but it seems doable.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  1 year ago

      @@sevilnatas It's block level per dataset, and snapshots are block-level differentials.

    • @sevilnatas
      @sevilnatas 1 year ago

      @@LAWRENCESYSTEMS So if I understand what you are saying: if I have several Win11 VMs that I am using for testing, set up close to identical, then deduplication should result in the actual space on disk being close to the size of a single one of those VMs, and the snapshots I run off of them will probably also greatly benefit from dedup between VMs - but the individual snapshots, per VM, will not, because a snapshot is essentially operating like a dedup itself. If I've got that right, it still sounds like a pretty good setup, at least for my situation, where many VMs of the same OS are being used.
      The other thing I'd like to experiment with is VMware's Horizon tech that allows for linked clones. Going back to my specific scenario, testing with multiple similar VMs, I think I would greatly benefit from linked clones achieving a similar result to the dedup setup. Might be more straightforward and easier to maintain. I just don't know if I can use the VMware Horizon software on the free license; it is probably a premium offering.
      The additional thing I like about linked clones is the ability to do updates and enhancements to the "golden" VM and then inherit those changes in the linked clones. Seems like a great maintenance benefit.
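
    For the experiment described in this thread, a minimal sketch of turning dedup on for one dataset and then watching the pool-wide ratio, assuming a ZFS host with the CLI; the pool and dataset names are hypothetical, and note that dedup has a significant RAM cost:

      import subprocess

      DATASET = "tank/vm-test"   # dataset holding the near-identical Win11 VMs

      # Dedup only applies to blocks written after it is enabled
      subprocess.run(["zfs", "set", "dedup=on", DATASET], check=True)

      # dedupratio reports the pool-wide savings achieved so far
      subprocess.run(["zpool", "list", "-o", "name,size,allocated,dedupratio",
                      "tank"], check=True)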

  • @TimothyHora
    @TimothyHora 3 years ago

    I use TrueNAS bare metal on an HPE DL380p Gen8 as the storage array in my VMware HA cluster (built with 3 HP Z420s) - I've implemented the RAIDZ2 storage over iSCSI (VASA support) with Multipath I/O and have no problems with snapshotting/rolling back a VM in a LUN.
    Did you also run your benchmarks with Multipath I/O? That would surely be interesting in the context of this vid :)
    I'm talking here about my home lab, not the enterprises I work for - just to be sure :)

    • @Prime0pt
      @Prime0pt 3 years ago +1

      This video is about XCP-NG. Its way of working with storage is different from VMware's.

    • @TimothyHora
      @TimothyHora 3 years ago

      @@Prime0pt I know. What I mean is: my storage is based on TrueNAS, which is where the VMs of the VMware cluster (different machines) have their home. The 3 ESXi hosts are only the "compute nodes," so to speak - they have no storage built in. All storage is centralized on the TrueNAS, which supports VASA. So I'm only talking about storage here, and my storage is built on a DL380. It would interest me if anybody did performance checks with TrueNAS in iSCSI Multipath I/O mode - ideally with the VASA protocol, but that's not mandatory for my question ;)

  • @johnholland2575
    @johnholland2575 3 years ago

    I was wondering how I can migrate XCP-NG zvols or datasets from one server to another. The reasoning being that I'd like to be able to migrate individual PostgreSQL DBs on zvols or datasets from one server to another.
    Thanks, Tom, for the informative content.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS  3 years ago

      Using ZFS replication ua-cam.com/video/XOm9aLqb0x4/v-deo.html

  • @scorpjitsu
    @scorpjitsu 3 years ago

    I recognize that Bigby cup! Are you in MI?

  • @JoeTaber
    @JoeTaber 3 years ago

    Apparently NVMe over TCP will be a thing and could supplant iSCSI.
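
    For anyone who wants to try it today, a minimal sketch of attaching an NVMe/TCP namespace from a Linux initiator using nvme-cli; the address and NQN are hypothetical, and the target must of course export NVMe-oF rather than iSCSI:

      import subprocess

      ADDR, NQN = "192.168.10.50", "nqn.2021-08.example.truenas:vmstore"

      subprocess.run(["modprobe", "nvme-tcp"], check=True)
      subprocess.run(["nvme", "discover", "-t", "tcp", "-a", ADDR, "-s", "4420"],
                     check=True)
      subprocess.run(["nvme", "connect", "-t", "tcp", "-a", ADDR, "-s", "4420",
                      "-n", NQN], check=True)
      # The namespace then shows up as a local /dev/nvmeXnY block device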

  • @Adrayven
    @Adrayven 3 years ago

    Synology does snapshots of iSCSI a lot better than TrueNAS, IMO.

  • @trumanhw
    @trumanhw 2 years ago

    I don't think coalesce means what you think it means... :)

  • @pepeshopping
    @pepeshopping 3 years ago +1

    When you do not understand the differences between file-based and block-based shares…
    It shows!!
    It all really comes down to protocol and ASYNC writes!!

  • @ryzenforce
    @ryzenforce 3 years ago

    For me, it is NFS all the way, because I prefer a dedicated system handling the reads and writes instead of multiple connected devices doing it directly themselves via iSCSI. Also, there is less corrupted data when using NFS.