DIY NAS with Proxmox and Xpenology

Yes. The main thing to know is that you need to add the following two lines to the .conf file for the specific LXC (found under /etc/pve/lxc/<id>.conf on the Proxmox host):

lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

EDIT: Mounting network shares to an unprivileged LXC can be a tad tricky, but this post I did on the Proxmox forum details how to do it for Samba shares. I assume the process is similar for NFS:

EDIT2: Not that it makes a huge difference, but my LXCs are always Debian-based (Debian 11 for Proxmox 7, and Debian 12 for Proxmox 8). It keeps things nice and consistent when working from the command line: same tools and same syntax when moving between the Proxmox host and the Docker host.


Probably a bad idea to pirate Synology's operating system, DSM. If you want a nicer interface to manage the NAS, TrueNAS will work well.

I use TrueNAS Core as my hypervisor right now, and yes, it's bhyve under the hood. The idea of passing the boot drive through to Proxmox is interesting, as it saves a step if I move to Proxmox. I'm considering the move because I run a Windows VM for a few applications that require Windows, and so far TrueNAS doesn't pass a TPM through to the VM, so I can't upgrade it to Windows 11. I've heard that Proxmox upgrades are not simply click-and-go. Is this correct?

Will this handle making the shares writeable? I have my Channels Directory on an SMB share and I got it mostly working, but Channels would not start due to not having writable permissions.


You need to provide write privilege for the user that runs Channels DVR. This is true for any NAS or OS share.

Yes. That's exactly what I do with OliveTin to support access to Channels DVR files.

EDIT: Your fstab file on the Proxmox host makes the connection to your Samba shares, and the permissions are set by adding 100000 to your desired user and group IDs in the LXC. So /etc/fstab on your Proxmox host would look something like this:

//media-server/tv\040series /mnt/tv_series cifs x-systemd.automount,username=User\040Name,password=Password,uid=101000,gid=101000 0 0
//media-server/movies       /mnt/movies    cifs x-systemd.automount,username=User\040Name,password=Password,uid=101000,gid=101000 0 0
//media-server/backup       /mnt/backup    cifs x-systemd.automount,username=User\040Name,password=Password,uid=101000,gid=101000 0 0
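A note on the `\040` sequences above, in case they look like line noise: fstab fields are whitespace-delimited, so a literal space in a share name or username has to be written as its octal escape. You can see the substitution with printf, which interprets the same octal escapes:

```shell
# \040 is the octal escape for a space (ASCII 32); fstab requires it
# because fields are split on whitespace.
printf '//media-server/tv\040series\n'   # prints: //media-server/tv series
```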

Note the uid and gid of 101000: unprivileged LXCs map container IDs by adding 100000, so these appear inside the LXC as 1000. Then you need mount points in your <id>.conf file:

mp0: /mnt/tv_series/,mp=/mnt/tv_series
mp1: /mnt/movies/,mp=/mnt/movies
mp3: /mnt/backup/,mp=/mnt/backup
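The 100000 offset isn't arbitrary: it's the start of the subordinate ID range Proxmox assigns to unprivileged containers by default (see /etc/subuid and /etc/subgid on the host). A quick sketch of the arithmetic, assuming that default mapping:

```shell
# Default unprivileged ID mapping: host ID = container ID + 100000.
OFFSET=100000
CT_UID=1000                      # the user inside the LXC
HOST_UID=$((CT_UID + OFFSET))    # the uid/gid to put in fstab on the host
echo "$HOST_UID"                 # prints: 101000
```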

The Proxmox forum link above has more detail.


Which device is that on your host? It could be different for everybody.

Edit: For me that is /dev/net/tun

root@pve1:/dev/net# ls -la
total 0
drwxr-xr-x  2 root root      60 Mar  9 11:15 .
drwxr-xr-x 19 root root    4700 Mar  9 11:15 ..
crw-rw-rw-  1 root root 10, 200 Mar  9 11:15 tun
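If you'd rather derive those numbers than hard-code them, stat can report a device's major:minor pair (in hex, so it needs converting to decimal). A minimal sketch, using a hypothetical allow_line helper and /dev/null (major 1, minor 3) as a universally present example; point it at /dev/net/tun on your host to get the c 10:200 line from earlier:

```shell
# Emit the lxc.cgroup2.devices.allow line for a character device.
# stat's %t and %T print major/minor in hex, hence the 0x conversion.
allow_line() {
  local maj min
  maj=$(stat -c '%t' "$1")
  min=$(stat -c '%T' "$1")
  printf 'lxc.cgroup2.devices.allow: c %d:%d rwm\n' "0x$maj" "0x$min"
}

allow_line /dev/null   # prints: lxc.cgroup2.devices.allow: c 1:3 rwm
```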

I've done it multiple times now, once passing through the original boot NVMe as a PCIe device (more complicated), and once simply passing through the original drive as a SATA device.

Among the advantages of going this route is that you still have the option to boot from the original drive as a failsafe. I never needed to, but I was happy to have it as an option when I was virtualizing my Home Theater PC/Server.

Minor releases are no big deal, but major releases are another matter. I started on Proxmox 7.x and still have two systems running on that release. My mobile "super router", which runs OpenMPTCProuter, ROOter, and Docker, is on Proxmox 8.x on a 4-Ethernet-port "industrial" PC.

For me there's nothing particularly compelling about 8 vs 7, much like how I feel about Debian 12 vs 11. :slight_smile: I'm pretty sure a few things would break trying an in-place upgrade, just like with most OS major releases.

The short answer is, when I do decide there's a compelling reason to upgrade, I'll most likely do it with new hardware -- as moving VMs and LXC containers between systems is pretty easy.

That's an interesting question. The first LXC I ever set up with Docker uses the "file" approach you describe with /dev/net/tun, but since then I've been using the "dir" approach with /dev/net. Both work.

lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

Thank you

Just to add to this point a bit: Proxmox covers most of the functions one might associate with a NAS these days. The notable exception is a nice web UI (or any GUI-driven interface) for managing storage.

However, this is easily added, or may already be present in a VM you'd have running. For me, as an example, I've been using DrivePool under Windows for many years. It meets my needs exactly, as it allows me to create one or more large network shares using multiple drives. Folder duplication is easy, as is adding or subtracting disks.

DrivePool was something I was familiar with, and had a proven track record with -- not to mention that all of my data was already stored using this product. With Proxmox, I created a Windows VM, passed through my existing boot drive and all of my DrivePool drives. Nothing else was required.

So whether you're a fan of TrueNAS, Unraid, OpenMediaVault, Windows with DrivePool, or Windows with Storage Spaces -- any of these can provide an elegant way to manage your network storage. Use what you're already familiar with, use an open-source NAS OS, or use Windows -- any one will do the job. It's also possible to use the tools Proxmox provides, but that's mostly via the CLI, and unless your day job already involves ZFS and/or Ceph, I'd steer clear of that approach.

Do you notice a performance drop with your DrivePool under Proxmox? I'm a photographer and back up my photo library from my workstation to TrueNAS, with a full backup every month, over a 10GbE LAN. It takes many hours even over this optimized setup.

I haven't noticed any performance drop, but I use Veeam Backup & Replication (another legacy Windows tool for me) and everything (including full backups every 30 days) happens overnight. Their Community Edition supports up to 10 clients. Backups start at 12:30am, and when incremental, are done in a few minutes -- followed by shutting the system down. Full backups can take more than an hour, but I'd imagine I don't have nearly the data to transfer that you do.

My backups are all automated using Macrium Reflect, with Macrium Site Manager orchestrating all of it. I used to work in IT doing networking, servers, and security. It's a really nice setup. Yes, I'm moving a lot of data, and all I do is read the daily reports that flag any problems. The full backup of my photo and video library takes about 10 hours at 4Gb/s. As the library is always growing, I'm careful about affecting performance. At some point I'll have to move to synthetic full backups, though I'm not fond of them due to the risk of bit rot. I do defend against that now by running periodic scrubs, and of course multiple copies are kept.

Hm, so what does exposing /dev/net/tun into the LXC do with respect to Docker?

I also found this guide on exposing GPUs to LXC containers for transcoding: LXC GPU Access | swigg

That may work, but I went down many dead ends trying to pass Intel Quick Sync through to an unprivileged LXC. This guy's GitHub page proved the most useful to me:

And, these were the additions I made to the <id>.conf file for the Channels LXC I used this on, mostly based on his write-up:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file 0 0
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.hook.pre-start: sh -c "chown -R 100000:100000 /dev/dri"
lxc.hook.pre-start: sh -c "chown 100000:100044 /dev/dri/card0"
lxc.hook.pre-start: sh -c "chown 100000:100105 /dev/dri/renderD128"
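Those pre-start chown hooks follow the same +100000 ID-mapping rule as the Samba share uids earlier in the thread: host ID = container ID + 100000. The 44 and 105 here are the typical Debian gids for the video and render groups (verify yours with `getent group video render` inside the container). A sketch of the mapping:

```shell
# Unprivileged LXC mapping: host gid = container gid + 100000.
# gids 44 (video) and 105 (render) are typical Debian values; check
# yours before copying the chown hooks verbatim.
OFFSET=100000
for gid in 44 105; do
  echo "container gid $gid -> host gid $((gid + OFFSET))"
done
# prints:
# container gid 44 -> host gid 100044
# container gid 105 -> host gid 100105
```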

On the Proxmox host it looks like this:

root@pve0:~# ls -la /dev/dri
total 0
drwxr-xr-x  3 root root        100 Mar  4 12:05 .
drwxr-xr-x 19 root root       4740 Mar  4 12:05 ..
drwxr-xr-x  2 root root         80 Mar  4 12:05 by-path
crw-rw----  1 root video  226,   0 Mar  4 12:05 card0
crw-rw----  1 root render 226, 128 Mar  4 12:05 renderD128

And, on the Docker host it looks like this:

root@channels0:~# ls -la /dev/dri
total 0
drwxr-xr-x 3 root root        100 Mar  4 18:05 .
drwxr-xr-x 8 root root        520 Mar 20 23:05 ..
drwxr-xr-x 2 root root         80 Mar  4 18:05 by-path
crw-rw---- 1 root video  226,   0 Mar  4 18:05 card0
crw-rw---- 1 root render 226, 128 Mar  4 18:05 renderD128

I'm glad you asked that question -- as I had to think about it for a few minutes. :slight_smile: It's actually not for Docker, it's for Tailscale, which I use in pretty much every one of my LXCs and VMs.

I typically use the containerized version of Tailscale, though there are some limitations to that, as a container can't modify resolv.conf on the Docker host. For Docker hosts where I need the best possible Tailscale support, I install it on the Docker host itself.

Thanks! Tailscale docs here: Tailscale in LXC containers · Tailscale Docs

I've been on Proxmox for years and wouldn't have it any other way. Like @bnhf said, the only thing really missing is more storage/share management features.
I wish that would come to pass, but right now they're feverishly trying to absorb footprint from all those abandoning VMware.

Would love it if we started a Proxmox subthread with specific topic tags like docker-in-lxc, gpu-passthru, unprivileged tips, file shares, etc.

I have been on Proxmox for probably a year now. I have Channels on a Synology NAS and use a Minisforum UM790 Pro that also connects to an SMB mount on the NAS. That way I can run two Channels DVRs against the same storage: I can have my Channels DVR and the missus can have hers. I don't transcode anything, so AMD has worked fine for me. The Proxmox server has Debian in an LXC container, where I run Channels, Docker, Plex, and Portainer. On another virtual machine I run Home Assistant. Since the storage is all on the NAS, making backups of the LXC container and Home Assistant only takes 5 minutes, and the backups can be automated to the Synology NAS.

I just spun up a Synology DSM 7.2 VM in Proxmox -- and it is really easy. I'd say this is mainly for developer testing, or for people wanting to preview DSM before they buy a Synology. Not recommended for production.

Basic steps:

Download and unzip the latest Redpill Loader "Arc" from here:

Generally follow the instructions in this video, although it is not necessary to pass through disks -- you can add them in the standard Proxmox virtual fashion. I added two 32GB "sata" drives so that I could set up a RAID volume. You'll need to add at least one virtual drive:
