Manage your Media Collection with Jellyfin! Install on Proxmox with Hardware Transcode

  • Published 18 Jun 2024
  • In the last video I introduced Linux Containers; today we're going to supercharge that by seeing if we can get some graphics hardware into our container and give our large Blu-ray collection a new home. We're going to cover a few more advanced Proxmox container features, such as privileged containers, hardware pass-through, and Jellyfin setup and transcoding for Intel and AMD GPUs.
    There are always hardware quirks with hardware transcoding, but I've worked through them with two examples - a modern Intel Jasper Lake Celeron (which requires the guc/huc firmware), and an AMD Radeon WX 3100.
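    For reference, the GPU passthrough boils down to a few lines in the container's config. A minimal sketch, assuming a privileged container as in the video (the device minor numbers vary by system; check ls -l /dev/dri on the host):
      # /etc/pve/lxc/<vmid>.conf
      lxc.cgroup2.devices.allow: c 226:0 rwm
      lxc.cgroup2.devices.allow: c 226:128 rwm
      lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
      lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file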
    Complete Ultimate Home Server Playlist:
    • Terramaster NAS as low...
    Jellyfin's Guides:
    Installation: jellyfin.org/docs/general/ins...
    Hardware Acceleration: jellyfin.org/docs/general/adm...
    Feel free to chat with me more on my Discord server:
    / discord
    If you'd like to support me, feel free to here: ko-fi.com/apalrd
    Products used in this video:
    Radeon Pro WX 3100: ebay.us/6oTEdl
    Terramaster F2-223: amzn.to/3xJ3yx3
    Timestamps:
    00:00 - Introduction and Transcode
    02:54 - Create Container
    05:01 - Jellyfin Install
    06:04 - Shared Media Mount
    11:29 - Transcode w/ Intel Quick Sync
    18:51 - Transcode w/ AMD Radeon
    21:16 - Conclusion
    Some links to products may be affiliate links, which may earn a commission for me.
    #jellyfin #homeserver #homelab
  • Science & Technology

COMMENTS • 173

  • @donsurtube
    @donsurtube 8 months ago +2

    New to Proxmox, and I have to express how much I appreciate your videos, which have helped me enormously. There are lots of videos on the subject but you beat them all. Thanks again and keep it up

  • @bokami3445
    @bokami3445 21 days ago

    Just wanted to say Thanks for this video. Using the information you present, I managed to get JellyFin HW decoding working on my Proxmox cluster.

  • @sanchOlabs
    @sanchOlabs 5 months ago

    Thanks for the tutorials, watched two so far, both super packed with information ;)

  • @chyldstudios
    @chyldstudios 7 months ago +1

    Dude, super cool video. I just started getting into Jellyfin and Proxmox and ran across your video.

  • @kyleolsen3305
    @kyleolsen3305 1 year ago +31

    Your videos have helped and inspired me so much. This time last year I barely knew how to use a terminal, and now I'm daily driving Linux and amassing a homelab Proxmox setup with multiple nodes. My home network and I thank you.

  • @chetramsteak
    @chetramsteak 1 year ago +3

    I can't thank you enough for this! I tried following a couple other guides and spent hours trying to get Jellyfin set up and working properly on my Proxmox server, but I was running into issue after issue. But this one worked perfectly, and was really thorough! I'm still trying to work out how to get QSV working with my Core i9-11900K, but in the meantime, I at least have Jellyfin up and running.
    Thanks again!

  • @MrRedWA
    @MrRedWA 5 months ago

    Awesome! Thanks for the instructions! Worked like a charm.

  • @TheUkeloser
    @TheUkeloser 3 months ago +1

    I know this is an older video, but I just set up my Jellyfin server and got QSV hardware transcoding working using this guide. Thanks!

  • @JimothyDandy
    @JimothyDandy 1 year ago

    You rock, my man. Love your adventurous ways.
    Skimming by my "normal" subs to see what my man, Apalrd, may have for us today.

  • @imzsoul
    @imzsoul 1 year ago +1

    Legend, I finally have transcoding working!

  • @Immortus27
    @Immortus27 4 months ago

    Thanks for the guide, it really helped a lot. Now I'm finally able to set up Proxmox on the hardware where Jellyfin was installed on Ubuntu Server, just because I didn't know how to pass the iGPU to a container and make it work properly. I've tried to solve it using other guides, but they always covered only part of the solution. Your guide is really an all-in-one solution, and I'm glad that I've finally found it

  • @ChromeBreakerD
    @ChromeBreakerD 1 year ago

    Thank you very much. I got it working thanks to you.
    The only thing I had to do that you did not mention is rebooting the Proxmox host once.

  • @mikerollin4073
    @mikerollin4073 5 months ago

    Great stuff, this is perfect for my setup

  • @MinishMan
    @MinishMan 11 months ago +1

    So so sooooo helpful! Doing it in July 2023 - just 4 months after you - and the website instructions are a bit different. This homelab thing is a minefield! But I guess that's the point.
    Jellyfin didn't ask me to add the /dev/dri/card0 mount.entry, so I didn't, and my group for the /dev/dri/renderD128 mount.entry was "sgx". No idea why this is different, but it all worked the first time on a 7th gen Intel CPU.
    Thank you again!

    • @apalrdsadventures
      @apalrdsadventures  11 months ago

      The card0 node makes sense, since that node can do rendering and also kernel modesetting (for video output), and the render node can just do rendering.

  • @Maisonier
    @Maisonier 3 months ago

    Amazing video! Thank you

  • @housy2
    @housy2 11 months ago

    Thanks a lot for this tutorial! Working well on an odroid H3

  • @ertibimbashi9135
    @ertibimbashi9135 8 months ago

    Awesome work - Wish I stumbled into your profile sooner.

  • @proteinman1981
    @proteinman1981 10 months ago

    Thanks a lot for this guide, you're a champion

  • @BeansEnjoyer911
    @BeansEnjoyer911 9 months ago +16

    Crazy how much can change in just 6 months.
    The Jellyfin docs are now different, and Proxmox 8 has some weird oddities.
    Either way, all transcoding was failing, but I was able to track it down in the Jellyfin Admin Dashboard logs.
    After checking the permissions of the GPU with "ls -l /dev/dri/", I noticed renderD128 was owned by sgx. Idk what sgx is, but switching it to the render group fixed it:
    > chgrp render /dev/dri/renderD128
    The new Jellyfin docs recommend adding the user to the render group. However, if you didn't do that, you can add the jellyfin user to it:
    > usermod -a -G render jellyfin
    Edit: for search-ability: mp4 mpeg4 transcode fail

    • @apalrdsadventures
      @apalrdsadventures  9 months ago +7

      I'll try to run through this and pin a comment with the updated instructions for Debian 12 / Proxmox 8.

    • @hieroclesthestoic
      @hieroclesthestoic 7 months ago +3

      Another change here in the Jellyfin docs is that they're excluding the line passing card0 to the container. Adding that in and then changing the group to render fixed transcoding for me.

    • @AdamBramley
      @AdamBramley 5 months ago

      I used this to get things working in Proxmox 7, then updated to 8 about 5 minutes after and immediately had to make further changes.
      The changes in these comments worked great for a 5th gen i3.
      One more thing worth noting is that you can get stats on the GPU by installing intel-gpu-tools in the LXC.
      Thanks for the writeup and the 7>8 additions!

    • @daviddunkelheit9952
      @daviddunkelheit9952 1 month ago

      I think sgx could be Software Guard Extensions…

    • @user-bt7vc7eh6f
      @user-bt7vc7eh6f 1 month ago

      I'm working this out right now: instead of chgrp render /dev/dri/renderD128, shouldn't we change the /etc/pve/lxc/xxx.conf file to map the users correctly? (Or am I totally off?)

  • @GutsyGibbon
    @GutsyGibbon 1 year ago +1

    Excellent, haven't tried it yet, but the detailed explanation is perfect. I have an Optiplex 3040 as a Proxmox server (at least in the testing phase) and it looks like it has an Intel Skylake GPU, so what you did here should also work for me. Thanks!

  • @diegofelipe2119
    @diegofelipe2119 7 months ago

    Awesome video, thanks!

  • @seimeianri
    @seimeianri 1 year ago

    Thanks for the video, I was about to start researching how to install Jellyfin on a 2011 Mac mini and the video went live

  • @DarrylGibbs
    @DarrylGibbs 9 months ago

    Dude, you are a legend!

  • @mindshelfpro
    @mindshelfpro 1 year ago +26

    I run Jellyfin in Docker on a Core 2 Duo laptop with 4GB DDR2 and a spinning 500GB SATA drive. Actually, there are about 10 Docker containers on this 2008 laptop, including Home Assistant, cloudflared, Sonarr, Prowlarr, Jellyseerr, and qBittorrent. In the future I will build a proper Jellyfin Docker stack, but in the meantime the containers are configured to work together manually (JF, PL, SN, and QB)... and it also runs X. I access Jellyfin from all over the country. It's amazing what little hardware is required to run some very useful software.

    • @mindshelfpro
      @mindshelfpro 1 year ago +1

      I'll remote into the laptop now and see if there's a long-shot chance that CPU can do any encoding with a new tool I learnt about in your video today... vainfo!

    • @eDoc2020
      @eDoc2020 1 year ago

      @@mindshelfpro I can tell you right now that the _CPU_ won't do anything like that. And I don't think Intel chipset graphics would. You'd need an external GPU for any chance of it.

    • @mindshelfpro
      @mindshelfpro 1 year ago

      @eDoc2020 I was hoping for too much. Reinstalled i915, i965, and Mesa graphics, and no luck with vainfo. The Core 2 Duo has been transcoding 4 streams at a time successfully though; it takes about 10 seconds to start a stream, but overall the laptop is doing good. Mind you, I only download 720p streams to keep overall storage down.

    • @RobinCernyMitSuffix
      @RobinCernyMitSuffix 1 year ago +3

      @@mindshelfpro "Mind you I only download 720p streams to keep overall storage down."
      That's probably the reason why it works ;)
      Newer Intel Quick Sync can do several 4K 10-bit HDR10 streams, or even one 8K stream, without hitting the CPU hard.
      It has come a loooong way ^^

    • @HydraInk
      @HydraInk 7 months ago

      Hey, I'm setting up VMs for the first time and I want to get Sonarr, Radarr, qBittorrent, and Jellyfin up and running on my homelab. Is it fine to install all of those on the same Ubuntu Server VM? Also, do I need to set up a storage solution like TrueNAS before making these?

  • @IamJeffrey
    @IamJeffrey 1 year ago

    Subscribed! Thank you for the video. I was planning to buy a Terramaster NAS too, but I was hesitant because of the transcoding performance until I watched this video.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      It's a really modern CPU, just a low end one for the 2-core unit

    • @IamJeffrey
      @IamJeffrey 1 year ago

      @@apalrdsadventures What made you decide to install Proxmox on that NAS instead of TrueNAS Scale?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      I'm focused more on VMs/applications instead of file sharing, and TrueNAS isn't as good at virtualization and containers as Proxmox is. Both are using ZFS under the hood anyway.
      TrueNAS is great at being a storage server, but that's not the primary use case for an all-in-one server.

  • @FTLN
    @FTLN 1 year ago

    Been waiting for this one :)

  • @ikorbln
    @ikorbln 1 year ago

    Thx for this video, it works easy as that.

  • @ndupontnet
    @ndupontnet 5 months ago

    Excellent, thank you very much for that!

  • @SandboChang
    @SandboChang 1 year ago +7

    You can also do transcoding with an unprivileged LXC for better security; the only additional step is to map the corresponding group IDs between the host and the LXC for the video and render groups.
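    For anyone going the unprivileged route, the group mapping lives in /etc/pve/lxc/<vmid>.conf. A minimal sketch, assuming host GIDs 44 (video) and 104 (render); check yours with getent group, and allow these GIDs in the host's /etc/subgid first:
      lxc.idmap: u 0 100000 65536
      lxc.idmap: g 0 100000 44
      lxc.idmap: g 44 44 1
      lxc.idmap: g 45 100045 59
      lxc.idmap: g 104 104 1
      lxc.idmap: g 105 100105 65431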

  • @KeithTingle
    @KeithTingle 1 year ago

    you have the most interesting topics on YT (in my geeky opinion)

  • @i7andy
    @i7andy 4 months ago

    thanks dude

  • @gravisan
    @gravisan 5 months ago +1

    In the deep Canadian cold, I'm choosing to stay with CPU decoding, as this provides the side benefit of heating my room :)

  • @fa.miebelzwett400
    @fa.miebelzwett400 1 year ago +2

    Hello apalrd, I appreciate your videos very much and like your content.
    When I followed along with your adventure, my LXC's boot disk overflowed, since transcoding uses disk space.* I just want to mention for others following along that there is an option to throttle transcoding, and that there is also a task within Jellyfin to delete old transcoding files. (You can use this as a blueprint for a cronjob.)
    *I enlarged my boot disk because I thought apt had filled it, but later I found the transcoding issue. Now I cannot revert to the small size, since recovering from backup with the --rootfs parameter doesn't work with ZFS subvolumes. 😅

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      With ZFS it's still thin provisioned, so as long as you don't use the space, it won't take more space if the disk is larger. Proxmox does not let you shrink volumes easily, since it supports a bunch of backends, many of which don't support shrinking a filesystem easily.
      Sorry about the transcode space issue; someone else in the comments suggested making a dataset just for the transcode cache.
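      A sketch of that approach, assuming a pool named dpool as in the video (the quota, container ID, mount index, and paths are all examples):
        # on the host: a capped dataset just for transcodes
        zfs create -o quota=20G dpool/transcodes
        # bind it into the Jellyfin container
        pct set 101 -mp1 /dpool/transcodes,mp=/transcodes
      Then point Jellyfin's transcode path (Dashboard > Playback) at /transcodes inside the container.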

  • @YannMetalhead
    @YannMetalhead 6 months ago

    Good video.

  • @sbsaylors8
    @sbsaylors8 1 year ago +2

    Thanks!

  • @W1ldTangent
    @W1ldTangent 1 year ago +6

    Still think we may be brothers separated at birth 😂 Been on the Jellyfin train for a while now after getting disillusioned with Emby. I also have Sonarr, Radarr, Lidarr and Bazarr (all with Jackett search) mixed into the stack. I've been running this setup for a few years now, it's absolutely been the best in a long line of "torrentbox" iterations I've created over the years for my... uhhh.. linux ISO downloads, ya...
    Basically zero-maintenance, I've even had watchtower auto-updating my containers and besides the occasional breakage requiring pinning a version tag for a while until things get sorted upstream, you just keep feeding it more storage and consume... Linux.

    • @protacticus630
      @protacticus630 4 months ago

      This is what I wish to set up too. Would you mind sharing your hardware and software approach? I have the same Terramaster device with 2x8TB and Proxmox on it; I would like to have Samba shares where all downloads from the *arr suite will be saved. Can you please advise?

    • @sebastianleangres1799
      @sebastianleangres1799 4 months ago

      @@protacticus630 I'm doing the same thing; I basically followed this channel's playlist for the samba/fileshare and jellyfin/hardware passthrough setup. Then all the *arr programs + Jellyfin go in one container, sharing a mount point with a dedicated torrent+VPN container. pct set <vmid> -mpX /yourpool,mp=/mnt/yourpool is the mount point command. Took some doing to get everything working right, but it wasn't terrible.

  • @chunkyfen
    @chunkyfen 8 months ago

    Hi! Thank you for this tutorial. Do you know how I could mount SMB shares from my main Windows PC to my Proxmox server's Jellyfin container? Thank you!

  • @MarkConstable
    @MarkConstable 1 year ago

    How close would the Jellyfin video setup on this F2-423 go towards supporting a desktop system with GPU pass-through, particularly for Manjaro/KDE?

  • @evanmarshall9498
    @evanmarshall9498 10 months ago

    I am having trouble following along with the hardware acceleration section of this video. Were you ever able to do one for Nvidia?

  • @InsaiyanTech
    @InsaiyanTech 4 months ago +1

    Should you do an *arr stack in LXC, or should I do a Docker LXC and put them in there, or separate everything?

  • @elcapitanomontoya
    @elcapitanomontoya 3 months ago

    Solid guide, but I have a question about LVM/ZFS for the mountpoint.
    I have a server on which I run Proxmox with a ServeRAID_M1215 RAID controller and two storage drives set up in a RAID-1 configuration. All of the documentation surrounding volume/filesystem setup on Proxmox says not to use ZFS on top of a hardware RAID controller. What are my options for creating a pool from which to share mountpoints? If the answer is lvm-thin, what would be the correct commands using lvm-thin for a shared media mount?

  • @liqm88
    @liqm88 8 months ago

    The way you explained this has helped me a lot. How can I know which formats my iGPU will support?

    • @apalrdsadventures
      @apalrdsadventures  8 months ago +1

      Running `vainfo | grep VAEntrypointEncSlice` will give you a list of encoders the HW supports with VA-API. But you can also comb through the specs for that generation of CPU+GPU.
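      The same idea works for decoders; VAEntrypointVLD entries are the decode paths (as another commenter notes further down):
        vainfo | grep VAEntrypointEncSlice   # encoders (EncSliceLP = low-power encode)
        vainfo | grep VAEntrypointVLD        # decoders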

  • @markkoops2611
    @markkoops2611 1 year ago +4

    You may want to change the transcode cache folder, by default it will use a folder on the OS mount point

    • @vidmonkey
      @vidmonkey 1 year ago

      Where do you recommend the cache folder be located? I suspect that if it's on an SSD, it may shorten its life. Would a standard mechanical HDD be better?

    • @NetBandit70
      @NetBandit70 1 year ago +1

      How much caching do you need if you can do it on the fly? I'd think 64MB RAM would handle a few minutes of cache, to keep your disk untouched.

  • @quintinignatiusfourie2308
    @quintinignatiusfourie2308 2 months ago

    How do I do what you did with the transcoding config, but with an Intel i5-6200U CPU?

  • @ear6
    @ear6 1 year ago

    Thank you for this great walkthrough. Would you please write the commands for an Nvidia card transcoding setup? Are the same Mesa drivers fine for an Nvidia card? Thank you again!

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      I don't have any Nvidia cards, and the proprietary drivers aren't installed by default like the Intel / AMD ones, so you'd have to follow the Jellyfin docs on that one

  • @NetBandit70
    @NetBandit70 1 year ago

    Do you have any feelings about doing bare-metal Linux with Docker, vs Proxmox and containers? It seems like a lot of stuff lately is being packaged for Docker: Jellyfin, Nextcloud, UniFi controller, etc.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +10

      I'm not a huge Docker fan for a variety of reasons:
      - It's designed to be opaque to the operator and immutable, which is good if you have a good build system to package your app for containers, but means there's no opportunity for customization of the underlying system by me. If I'm using a container built by someone else, I have to trust / like the way they've set up the system and assume they've made decent choices.
      - Networking is particularly complex and I don't like it, vs LXC having a full network namespace in the container with a normal IP address, not port-mapped to the host via an intermediate Docker network.
      If I were deploying apps as a software company, I'd want to package my own things in Docker/Kubernetes containers as part of the software build process and manage them that way. Since I am not trying to deploy apps at scale and package them myself, having more control of the configuration of the Linux system is my preference.

  • @kristianwind9216
    @kristianwind9216 5 months ago

    I am having an audio delay on Apple TV even with no transcoding needed. This is not the case from e.g. a browser or on an iPad. Have you experienced this?

  • @jaroslavchytil5732
    @jaroslavchytil5732 1 year ago +1

    Nice... one silly question: if I unmount the drive and mount it to a different Proxmox client, will the data be visible?

    • @MikeDeVincentis
      @MikeDeVincentis 1 year ago

      Should be. As long as you mount to the same share and that share is accessible to the other client.

  • @neail5466
    @neail5466 1 year ago

    Great information! What is the "pct" in the passthrough? I use 'qm' for the QEMU agent.

  • @Superman12321
    @Superman12321 8 months ago

    I was following your tutorial and setting up the container, but in the network section you lost me. You didn't explain where and how you got the static IPv4 and IPv6 addresses. I don't know where to get those from. I am new to this.

  • @sudokillme
    @sudokillme 3 months ago

    Is chmod -R 777 a good idea? Sounds like a security issue

  • @nights2walk
    @nights2walk 1 year ago

    Please make a video on how to mount a Google Drive folder as a media folder for Jellyfin in Proxmox

  • @NeverEnoughRally
    @NeverEnoughRally 11 months ago

    Can you maybe comment more about why you are using "options i915 enable_guc=3" vs the "options i915 enable_guc=2" like Jellyfin's website says to do? In all my searching I was only able to find references to the 2, but not the 3. Is that specific to Proxmox? I'm currently setting mine up on Unraid. Is there some benefit to raising the number in performance? Is there a limit to what you can put in there?

    • @apalrdsadventures
      @apalrdsadventures  11 months ago +2

      It's a bitmask, there are two features which can be enabled (bit 0 and bit 1), so setting 3 enables both features.
      wiki.archlinux.org/title/Intel_graphics#Enable_GuC_/_HuC_firmware_loading has information on what the individual bits do, if you need to enable them with your specific hardware, and if they are the default on your hardware.
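      For reference, that option goes in a modprobe config on the Proxmox host; a minimal sketch (the file name is an example, and the initramfs needs a rebuild before it takes effect):
        # /etc/modprobe.d/i915.conf
        options i915 enable_guc=3
      Then run update-initramfs -u and reboot the host.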

  • @HerrFreese
    @HerrFreese 4 months ago +1

    Could you please explain why you use mount points in Proxmox and then bind mounts to the LXCs? Is this because you are using ZFS? As far as I've tried, you can also (bind?)mount single LVM volumes and/or volumes from storage defined in Proxmox directly to different containers (via the .conf files). This should also be possible with other "devices"? Am I missing something?

    • @apalrdsadventures
      @apalrdsadventures  4 months ago +1

      In general I use bind mount points in lxc to share data between containers without going over the network. It does of course work for devices as well as directories.
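      A minimal sketch of that kind of shared bind mount, assuming a host path /dpool/media and container IDs 101 and 102 (the IDs, mount index, and paths are examples):
        # the same host directory, bound into two containers
        pct set 101 -mp0 /dpool/media,mp=/mnt/media
        pct set 102 -mp0 /dpool/media,mp=/mnt/media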

  • @loopback2
    @loopback2 1 year ago

    Hey, have you done any NAS benchmarks?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +1

      Not really, I've mostly focused on keeping things general enough to apply to more hardware

  • @ivelinbanchev4337
    @ivelinbanchev4337 7 months ago

    Great one. Thank you so much! Sadly, most of the links are not available at the moment, or they have completely changed.
    PS: Would love to see a way to point Jellyfin to a NAS such as Synology/Xpenology or TrueNAS. Right now I am struggling to connect it to my Xpenology VM.

    • @apalrdsadventures
      @apalrdsadventures  7 months ago

      Yeah, I've noticed that over the past year the Jellyfin docs have changed and they got rid of their Proxmox guide. I'll have to put up the commands on my website I guess.

    • @ivelinbanchev4337
      @ivelinbanchev4337 7 months ago

      @@apalrdsadventures Yep. TBH, I've spent like 40+ hours trying to set up my Jellyfin on LXC and nothing seems to be working for transcoding. I guess I can't pass through my video driver.

  • @NetScalerTrainer
    @NetScalerTrainer 1 year ago

    Any recommendations for a Proxmox system where the CPU supports PCI pass-through but the BIOS does not? Will this work?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +4

      In general you always need BIOS support to pass through PCI devices to VMs.
      However, for a container-based solution like this, there is no need for PCI pass-through, since the driver runs on the host.

    • @NetScalerTrainer
      @NetScalerTrainer 1 year ago

      @@apalrdsadventures that is good to know!! I will proceed!

  • @henriquehff
    @henriquehff 4 months ago

    Your videos are so inspiring. A while ago I didn't even think about creating my own server; nowadays I'm configuring new services on a daily basis. Right now I'm trying to configure Tdarr, and I think the problem is the Intel drivers. What's the output of the command "vainfo"? I was trying to transcode to HEVC and couldn't figure out why it wasn't working; then I saw that in the "vainfo" output, "VAEntrypointEncSliceLP" means I can encode and "VAEntrypointVLD" means I can decode. Here it appears I can only decode HEVC but not encode it; only h264 is available for decode/encode. I have a Gemini Lake CPU that was supposed to be able to encode HEVC, right?

    • @henriquehff
      @henriquehff 4 months ago

      Finally, after hours of trying to get this working, all I needed to do was add the Debian unstable source repository and then install "intel-media-va-driver-non-free". Now I'm able to decode/encode every format supported by the Intel GPU; even in Jellyfin, transcoding to HEVC is working.

  • @dj-aj6882
    @dj-aj6882 1 year ago

    Hey apalrd,
    I love your little project!
    Do you think it would be possible to cluster it up at different family homes for Nextcloud?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      Does Nextcloud natively support clustering? I haven't used Nextcloud much. Proxmox can't reliably cluster over WAN networks due to the latency involved, but you can replicate across them as a backup (i.e. you backup to other houses, they backup to you).

    • @dj-aj6882
      @dj-aj6882 1 year ago

      @@apalrdsadventures Yes it does, but it might surpass the fair use policy.
      My thought right now is:
      VPS running the following:
      Nextcloud node for public sharing
      Nginx or another proxy manager
      Headscale for the frontend LAN and backend LAN
      Home server running:
      Local Nextcloud node
      Pi-hole as local DNS to filter the network and point at the local node
      More apps, just like Jellyfin, would be cool as well.
      I just wonder if it is possible to integrate Jellyfin into NC so that family can use it in one interface. Otherwise SSL could be an option.
      The main purpose would be to enforce data safety with multiple locations and to exploit faster up- and download speeds than you can get in my town.
      The home IP would stay hidden as well, and only a datacenter is public.

  • @jonathand5762
    @jonathand5762 1 year ago

    If I'm only using the media files (movies/TV shows) for Jellyfin, do I still need to mount the media drive to the fileserver, or am I fine just mounting it to the Jellyfin LXC?
    Also, I thought it was recommended to first mount the media drive to the fileserver LXC and then have Jellyfin access the media from the fileserver using Samba. From what I gathered, this was to limit the chance of data corruption by two separate processes potentially trying to read/write at the same time, but I guess this wouldn't happen since the two LXC containers live on the same kernel? Any advice on the matter would be greatly appreciated!

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +1

      You'd only need to setup the fileserver mounts to copy media to Jellyfin, if you have another way to copy files you can use that instead.
      Since the two containers are running in the same kernel there's no danger of file contention.

    • @jonathand5762
      @jonathand5762 1 year ago

      @@apalrdsadventures Thank you for the quick reply! What would be the reason for copying data to Jellyfin? Doesn't Jellyfin/Plex simply need to know its location, by supplying Jellyfin the mount points for media (in my case, an external HDD connected to the host)?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +2

      Ah I see, you already have a full drive and just want to mount it. In that case, the same mount process works, you just need to mount the external drive on the host first (probably in /etc/fstab)
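      For example, a line in the host's /etc/fstab might look like this (the UUID, filesystem, and mount point are placeholders; find your drive's UUID with blkid):
        UUID=xxxx-xxxx  /mnt/external  ext4  defaults,nofail  0  2
      Once it's mounted on the host, the same bind mount process exposes /mnt/external to the container.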

  • @aslanbarsk
    @aslanbarsk 1 year ago +1

    Ok, these videos are great, but hard to understand, even with the nice work you have done.
    I'm trying to get my NUC 13 going with Home Assistant, Zigbee2MQTT separated, Jellyfin, Synology NFS storage, etc.
    But I cannot for the life of me do this. Please make such a video!!! :D

    • @raddude1743
      @raddude1743 5 months ago

      Are you still waiting for these answers?

  • @CaseyHancocki3luefire
    @CaseyHancocki3luefire 1 year ago +1

    What about Arc GPUs?

  • @PierricDescamps
    @PierricDescamps 1 year ago +1

    I'm struggling to get the same done on a VM rather than a container, using PCI passthrough for a thin client (Fujitsu S740) with an older Celeron. Passthrough is set up, but Proxmox keeps saying the resource is already in use and refuses to boot the VM. If anyone has pointers...

    • @GeoffSeeley
      @GeoffSeeley 1 year ago +1

      The host is binding a driver to the video card hence the reason it is in use. You can use the driverctl package to bind the card to vfio-pci driver early in the kernel boot and this allows the device to be used for pass through and stops the host from binding to the card.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +3

      If you use a container instead of a VM, you don't have to do any PCI passthrough.

  • @zachboatwright
    @zachboatwright 11 months ago

    I have a 16TB hard drive, but apparently it has hardware-level RAID, so Proxmox said I can't add NFS storage to it. Is there a way to have shared storage space between my fileserver and Jellyfin with LVM storage?

    • @apalrdsadventures
      @apalrdsadventures  11 months ago

      A single drive shouldn't have hardware RAID, are you sure it's not just a generic warning and not based on your actual hardware?

    • @zachboatwright
      @zachboatwright 11 months ago

      @@apalrdsadventures Yes, that was the issue. Just a generic warning.

  • @mjmeans7983
    @mjmeans7983 1 year ago

    Have you seen any USB 3 based dedicated transcoders?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      I haven't seen anything like that, but an iGPU in a somewhat modern system should work fine

  • @strandvaskeren
    @strandvaskeren 1 year ago

    What's the advantage of doing the ZFS-to-container stuff? Why not just add the media files to the fileserver VM and have the Jellyfin VM get its content from there? I personally leave all the handling of storage to Proxmox and just add virtual disks to the VMs; what benefits am I missing?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +2

      It's more tricky to mount shares in containers since mounting has to be done by the kernel. Not a lot more, but enough to make this approach easier (at least with ZFS).
      For VMs there's no benefit to going outside of Proxmox. For containers there isn't usually either, but here we can share a dataset between two containers without the overhead of going through the virtual network. It's also possible to do this by creating a mount point on one container in the GUI and mounting that as a mount point on another container, but then you will have issues if you delete the first container in the future.

    • @strandvaskeren
      @strandvaskeren 1 year ago

      @@apalrdsadventures Thank you for the reply. I tend to use VMs over containers and find the virtual network transfer speed between local VMs runs at PCI speed, way higher than the storage on the Proxmox host, so there's no real disadvantage to going through the virtual network.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +2

      In this case, using a container over a VM is important since we wouldn't otherwise be able to use the iGPU for transcoding.

    • @MrCriistiano
      @MrCriistiano 6 months ago

      @@apalrdsadventures Does ZFS handle the file locking in case 2 CTs try to write to the same file at the same time?

  • @KC-db6ti
    @KC-db6ti 1 year ago

    I'm using an AMD iGPU (5700G) and was not able to get it working; it would be great if you could do a video on that

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +2

      The GPUs in that generation of APU should be supported by the same driver as all the other GCN-based Radeon cards and there should be absolutely no difference in configuration. I'm guessing your issue is that the chip is newer than the kernel version. You can see if the amdgpu driver is loaded (lsmod | grep amdgpu), and lspci -v to see if the graphics card has the amdgpu kernel driver loaded (look for 'VGA Compatible Controller').
      If it's just not loading the right driver, update to the Proxmox experimental kernel 6.1 - apt update && apt install pve-kernel-6.1

  • @EnlightenedBitFox
    @EnlightenedBitFox 1 year ago

    Is it possible to configure VA-API and NVENC drivers/encoders at the same time and then choose between them in Jellyfin?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      Mixing Intel and AMD (with open source drivers) in general works fine, I’m not sure how much Nvidia’s proprietary drivers mess up the open source library binaries to know if they will mess up vaapi. Nvidia replaces things like libgl with their own and that can cause issues mixing Nvidia with other gpus in other contexts.

    • @EnlightenedBitFox
      @EnlightenedBitFox 1 year ago

      @@apalrdsadventures
      At this point the Nvidia encoder works but VA-API doesn't. I don't get why not. I did all of your steps, and before adding the Nvidia card it worked

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      It sounds like the Nvidia driver overwrote the libva binaries, but I don’t have an Nvidia gpu to test with. That’s how they handle other things like OpenGL (instead of using Mesa like everyone else on Linux).

    • @EnlightenedBitFox
      @EnlightenedBitFox 1 year ago

      @@apalrdsadventures And how can I change it? I really would like to use VA-API with my Intel 6500

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      I have no idea, the nvidia proprietary driver is to blame for this problem

  • @GodminerXz
    @GodminerXz 1 year ago

    Having trouble with hardware acceleration on an Intel i5 6400. The output of ls -l /dev/dri is: crw-rw---- 1 root ssl-cert 226, 128 Apr 27 15:55 renderD128. This is different from the video, where the device is owned by either input or render. Any idea on how I can fix this?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      In the container, find out what group is video (getent group video) and use that GID instead in the LXC config

    • @GodminerXz
      @GodminerXz 1 year ago

      @@apalrdsadventures I get video:x:44:jellyfin from getent, so I changed the 'c 226:0' to 'c 44:0' but no difference. Sorry if I'm doing something stupid, very new to the Linux command line

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +3

      Is the output on the container side or the host side? I might have sent you down the wrong path a bit.
      On the container side, leave everything as it was on the host (226:0 in the config), and chown it to video or render (chown :video /dev/dri/card0) (chown :render /dev/dri/renderD128) within the container.

    • @GodminerXz
      @GodminerXz 1 year ago

      @@apalrdsadventures That managed to fix it! Thanks a lot for your help and patience with a Linux noob. Any idea why this happened in the first place? I didn't see it mentioned in the Jellyfin documentation for LXC and Proxmox either...

  • @rapolo01
    @rapolo01 1 year ago

    Can this transcoding guide be done with an Nvidia graphics card?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +1

      It's a bigger pain since Nvidia's drivers aren't in the kernel and they don't use the standard Linux tools like vainfo for testing; you need to install the proprietary driver on both the host and in the container so the versions of the tools line up with the kernel module version.

    • @rapolo01
      @rapolo01 1 year ago

      @@apalrdsadventures Question: does the AMD part at 18:51 apply to a processor with an integrated GPU?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +1

      Yes, it will work with any GPU using the radeon or amdgpu drivers (which is everything from AMD in the last ~15 years)

    • @rapolo01
      @rapolo01 1 year ago

      @@apalrdsadventures I exchanged my CPU at Micro Center (luckily I had a warranty on it), got a Ryzen 5 5600G, and it works flawlessly. Thx dude.

  • @theshemullet
    @theshemullet 1 year ago

    It would be interesting to see you do the same type of things, but with Unraid. Proxmox is great, but I kind of think of it only when I want a few servers in a cluster. I use Unraid for single-host setups.

    • @NetBandit70
      @NetBandit70 1 year ago

      Why not just use bare metal linux then?

    • @theshemullet
      @theshemullet 1 year ago

      @@NetBandit70 Is bare metal Linux a distro?

    • @NetBandit70
      @NetBandit70 1 year ago

      @@theshemullet Bare metal means running Linux directly on physical hardware, not inside a VM or container

    • @theshemullet
      @theshemullet 1 year ago

      @@NetBandit70 I know that. I thought you were saying there was a distro called that. The reason I use Unraid is I like the interface. Unraid is running on bare metal, but I want to be able to run VMs and containers, as well as have file sharing abilities.

    • @NetBandit70
      @NetBandit70 1 year ago +1

      @@theshemullet Bare-metal Linux with Cockpit might be of interest

  • @Felix-ve9hs
    @Felix-ve9hs 1 year ago

    Man, I wish Jellyfin existed 5 years ago when I first set up my media. I really would like to switch from Plex to Jellyfin, but I've already put countless hours into my library...

    • @NetScalerTrainer
      @NetScalerTrainer 1 year ago

      It's so easy to mount a folder to your existing Plex file system and just use Jellyfin

  • @rico7772007
    @rico7772007 1 year ago

    Do you have to install some AMD drivers on Proxmox or the container? How do you check which graphics card is which when you have two graphics cards? Is there a command for it? I'm using an AMD FirePro card.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +1

      Intel and AMD should have the drivers in-kernel, so as long as they aren't too new they should just work. You can try 'lspci -v', find the card, and see if the kernel driver in use is 'amdgpu'. If it is, then the driver is happy. If it's 'radeon', then it's an older card (amdgpu is early GCN through modern RDNA)

    • @rico7772007
      @rico7772007 1 year ago

      @@apalrdsadventures To make the question more precise: you mentioned in your video that you may have to change from card0 to card1 when you have two cards, but I can only identify my card in Proxmox as 08:00.0 VGA compatible controller. When I type the command 'lspci -v', this is the outcome: Kernel driver in use: radeon; Kernel modules: radeon, amdgpu. So the kernel driver is radeon; any suggestion on what I could do now? I'm using a FirePro W4100. How do I change to the amdgpu kernel driver?

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      If you loaded the radeon module then it just means you have an older GPU. It should still show /dev/dri/cardX and /dev/dri/renderDX nodes.
      ls -l /dev/dri/by-path/ will indicate the path of the device, which points to the cardX and renderDX nodes. This should show the path on the PCI bus, and you should be able to find the 08:00.0 coded into that path.
      In my case, I had two GPUs; my radeon card was card1, but I passed it through as card0 for consistency (card0 and renderD128 are assumed if not given by the software). If you only have one GPU it will be card0 and renderD128. It's sequential, but not all cards have render nodes.

  • @iPhonesuechtler
    @iPhonesuechtler 10 months ago

    Please help: what do I use instead of "zfs list" to find the mount points if my storage isn't ZFS?
    Otherwise a very, very great video. Thank you so much!
    Edit: OK, I redid everything and have my storage as ZFS now ^^
    And also, did you create a ZFS inside a ZFS there? Is "dpool" already a ZFS storage, and you created "dpool/media" as a filesystem inside a filesystem?
    Am I in a f***in k-hole right now, what is going on, can somebody please help me out here? °__°

    • @apalrdsadventures
      @apalrdsadventures  10 months ago

      ZFS is a lot of things (it does RAID, volume management, and filesystem), so within the zfs pool 'dpool' there's a hierarchy of zfs datasets which can each be mounted in different places and have different options.

    • @iPhonesuechtler
      @iPhonesuechtler 10 months ago

      @@apalrdsadventures Thanks! Great stuff. Very cool I got a quick answer too.
      Do you think you can make a video where you explain the networking part in more detail? (Maybe on a broader scale than Proxmox/Jellyfin? But Proxmox makes a good example, I think.) Local/public IP, IPv4 and IPv6, VLANs, and WHYY is it 192.168... or 172... or 10.0... or 255.255...? How does it work in relation to VMs and containers? Safe practices? Firewall configuration? And what is CIDR? How do you use this right? It seems to be required for container setup.
      The local network defaults are something I would like to know more about. Or is there good info somewhere, and if so, could you point me in the right direction?
      That's a lot to ask, so I won't be expecting an answer here, but I still wanted to ask.
      Thanks again for everything so far, the videos are great value, keep it up :)

  • @giantxBash
    @giantxBash 9 months ago

    Jellyfin doesn't have all the commands to paste for the PVE configuration now

  • @davidariza2320
    @davidariza2320 4 months ago

    why does this look way easier than using TrueNAS?!!!

  • @neail5466
    @neail5466 1 year ago +1

    I don't understand why people are so obsessed with transcoding; all modern devices can play back almost any 10-bit 4K stream, even over a 2.4GHz network. The hassle of transcoding is not only unnecessary but also futile.
    If someone is concerned about formats, VLC plays most of them.
    The only use case is accessing the videos over the internet, where bandwidth is a limitation. I don't personally believe you should open your NAS to the internet even if you could. That is another added risk.

    • @apalrdsadventures
      @apalrdsadventures  1 year ago

      In general it's for smart TVs, since they can only play back what they can decode in hardware. Depending on the age of your collection, you might have a big collection of Xvid or other codecs that don't have hardware implementations, even if they aren't that hard to software decode.

  • @poppipo1222
    @poppipo1222 1 year ago

    Doesn't work on Coffee Lake

    • @apalrdsadventures
      @apalrdsadventures  1 year ago +1

      Did you try it without the low power encoding mode? It's fairly specific which chips need and which don't need that enabled.

    • @poppipo1222
      @poppipo1222 1 year ago

      @@apalrdsadventures Hello, turns out it was my fault! Somehow I had deleted or disallowed my Proxmox root user from accessing /dev/dri, idk how or if that's the case. I reinstalled Proxmox and followed your guide again (minus the LXC device mount; Jellyfin has changed some lines there recently) and it works perfectly!!! You're the best!!!

  • @scentilatingone2148
    @scentilatingone2148 1 month ago

    Where's everyone ripping from these days

  • @PoopaChallupa
    @PoopaChallupa 1 year ago

    Man needs better thumbnails if he's going to increase viewership. They all look like homework.

  • @GreySkullification
    @GreySkullification 10 months ago

    Not using dark mode? Unacceptable. 10/10 for content. 0/10 for style. It's so bright I could not follow along.

  • @Hypedgaming
    @Hypedgaming 1 year ago

    Bro, change the thumbnail. I know it's clean, but you don't get any clicks

  • @kidsythe
    @kidsythe 11 months ago

    half height Intel GPU 😁 transcode beast 🦾

  • @Simon-xi8tb
    @Simon-xi8tb 1 month ago

    I did everything in this video but I get no card0 in /dev/dri/, only renderD128... hmm

    • @apalrdsadventures
      @apalrdsadventures  1 month ago

      on the host or in the container? You actually only need renderD128 to render.