A Different Kind of NAS: Compact, Quiet, All-Flash Storage up to 64TB

For many (most?), running a Channels DVR server is all about choosing the minimum hardware that'll give acceptable results. This is not that :slight_smile:

For the last couple of days I've been testing the new TerraMaster F8 SSD Plus, and I really like it. It's only a tad bigger than an Intel NUC -- and is very, very quiet. It has a relatively recent 12th-generation i3, a 10GbE Ethernet port, and 8 (yes, 8) NVMe slots for flash storage, with up to 64TB of total capacity supported.

For me the key question was, can I run something other than TerraMaster's TOS on it -- and the answer is yes. I have Proxmox 8.2.7 installed, and it's running very well. No issues during the installation, and nothing in the logs to suggest any.

This NAS feels super responsive, no doubt due to the 10GbE Ethernet and the lack of any "spinning rust" for storage. CDVR in an LXC container has been working great, including Intel Quick Sync transcoding. I believe this platform will also make an outstanding host for Proxmox Backup Server, and so I've ordered the version with the N95 processor to use for that task.
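A side note on the Quick Sync piece: the usual way to get the iGPU into an LXC is to expose the host's /dev/dri devices to the container. Here's a minimal sketch of the raw LXC lines in the container's .conf -- not necessarily my exact config, and an unprivileged container may also need the render group's GID handled (newer Proxmox releases offer devN device passthrough entries as an alternative):

# allow the DRM character devices (major 226: card0 and renderD128) and bind-mount /dev/dri
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir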

For those whose budgets allow, this is an amazing bit of kit. It's small, light, fast and a killer Proxmox platform! There's currently a $100 off coupon on both models:

https://www.amazon.com/TERRAMASTER-F8-SSD-Plus-NAS/dp/B0D9HWLDX5


I received a couple of 4TB Samsung 990 Pro NVMe drives, along with some upgraded double-sided NVMe heatsinks for this unit, so I thought I'd show what they look like installed, along with the process of creating a ZFS RAID0 drive pool in Proxmox.

The TerraMaster F8 does come with some heatsinks, but the rubber bands used to hold them together are less than inspiring. Plus, the ones I bought come in multiple colors, so you can use color to identify drive manufacturers and capacities -- I'm using black for the Samsung 990 Pro drives.

Also, I spread the drives across the PCIe slots on both sides of the motherboard, in the interest of better heat dissipation.

I'll use ZFS RAID0 to pool the drives together, as I'm not too stressed about losing TV series or movie content, and I'd rather have the combined capacity on a single mount point. There are other ZFS RAID options for those who want better data protection, at the cost of total capacity.

Proxmox 8 discourages the use of RAID0 by leaving it out of the WebUI, but it's easy to do from the command line.

First, list your drives by their /dev/disk/by-id paths:

root@pve5:~# ls -la /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 820 Oct 22 10:52 .
drwxr-xr-x 9 root root 180 Oct 22 10:52 ..
lrwxrwxrwx 1 root root  10 Oct 22 09:07 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 root root  10 Oct 22 09:07 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 root root  10 Oct 22 09:07 dm-name-pve-vm--100--disk--0 -> ../../dm-6
lrwxrwxrwx 1 root root  10 Oct 22 09:07 dm-name-pve-vm--101--disk--0 -> ../../dm-7
lrwxrwxrwx 1 root root  10 Oct 22 09:07 dm-uuid-LVM-AwBeCOJ8ap5XGitziddolVsAof8qGbEbdy0mn1852UcK42rzzaxBjOxkfNEmeoNu -> ../../dm-7
lrwxrwxrwx 1 root root  10 Oct 22 09:07 dm-uuid-LVM-AwBeCOJ8ap5XGitziddolVsAof8qGbEbhgfOmomzqxyOt2jwvVHMrWZNqNwxocfF -> ../../dm-6
lrwxrwxrwx 1 root root  10 Oct 22 09:07 dm-uuid-LVM-AwBeCOJ8ap5XGitziddolVsAof8qGbEbok1hTSXzxYIeA2AO2iEQOKKtR1j9UI7p -> ../../dm-1
lrwxrwxrwx 1 root root  10 Oct 22 09:07 dm-uuid-LVM-AwBeCOJ8ap5XGitziddolVsAof8qGbEbzFD2TKPPy2meq40uoSuV3gYiNGBqr1OO -> ../../dm-0
lrwxrwxrwx 1 root root  15 Oct 22 09:07 lvm-pv-uuid-NDQxRt-zfxi-RyKh-2evE-eYfi-k1Yp-FnK1bs -> ../../nvme1n1p3
lrwxrwxrwx 1 root root  13 Oct 22 09:07 nvme-eui.0025384541a2c423 -> ../../nvme1n1
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-eui.0025384541a2c423-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-eui.0025384541a2c423-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-eui.0025384541a2c423-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root  13 Oct 22 10:52 nvme-eui.0025384541a2cac4 -> ../../nvme2n1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-eui.0025384541a2cac4-part1 -> ../../nvme2n1p1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-eui.0025384541a2cac4-part9 -> ../../nvme2n1p9
lrwxrwxrwx 1 root root  13 Oct 22 10:52 nvme-eui.0025384541a2cacc -> ../../nvme0n1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-eui.0025384541a2cacc-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-eui.0025384541a2cacc-part9 -> ../../nvme0n1p9
lrwxrwxrwx 1 root root  13 Oct 22 09:07 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X520013T -> ../../nvme1n1
lrwxrwxrwx 1 root root  13 Oct 22 09:07 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X520013T_1 -> ../../nvme1n1
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X520013T_1-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X520013T_1-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X520013T_1-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X520013T-part1 -> ../../nvme1n1p1
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X520013T-part2 -> ../../nvme1n1p2
lrwxrwxrwx 1 root root  15 Oct 22 09:07 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X520013T-part3 -> ../../nvme1n1p3
lrwxrwxrwx 1 root root  13 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521437F -> ../../nvme2n1
lrwxrwxrwx 1 root root  13 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521437F_1 -> ../../nvme2n1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521437F_1-part1 -> ../../nvme2n1p1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521437F_1-part9 -> ../../nvme2n1p9
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521437F-part1 -> ../../nvme2n1p1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521437F-part9 -> ../../nvme2n1p9
lrwxrwxrwx 1 root root  13 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521445R -> ../../nvme0n1
lrwxrwxrwx 1 root root  13 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521445R_1 -> ../../nvme0n1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521445R_1-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521445R_1-part9 -> ../../nvme0n1p9
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521445R-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root  15 Oct 22 10:52 nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521445R-part9 -> ../../nvme0n1p9

Pick the drives you want in your ZFS pool by serial number (the WebUI can help you identify them), then execute the following:

root@pve5:~# zpool create nvme-raid0 /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521445R /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521437F

nvme-raid0 is the name I chose for my pool, followed by the full by-id paths of the two drives. With no vdev type specified, zpool simply stripes the drives together, which is the RAID0 behavior we want.
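For reference only -- I didn't run this -- if you'd rather trade capacity for redundancy, the same two drives could be set up as a mirror instead by adding the mirror keyword (nvme-mirror is just an example pool name):

zpool create nvme-mirror mirror /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521445R /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_4TB_S7KGNU0X521437F

Whichever layout you choose, zpool status and zpool list will confirm the vdev layout and capacity.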

We'll add this to a Channels LXC container momentarily, but first we need to adjust the permissions so that when this ZFS pool is passed through to an LXC, we'll be able to read and write to it. In Proxmox, unprivileged containers have their UIDs and GIDs shifted up by 100000 on the host, so you add 100000 to the UID and GID you want to use inside the container. I'd like user 1000 to be able to read and write the new pool, so:

root@pve5:~# chown -R 101000:101000 /nvme-raid0
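The 100000 offset isn't magic -- it's the default ID mapping Proxmox sets up for unprivileged containers. You can see it on the host (your entries may differ if you've customized the mappings):

cat /etc/subuid /etc/subgid

On a stock install each file has a root:100000:65536 entry, meaning container IDs 0-65535 land at host IDs 100000-165535 -- which is why container UID/GID 1000 shows up as 101000 on the host.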

Next, we need to modify the .conf file for our Channels DVR LXC so that our new ZFS drive pool is available inside it. The container ID can be found in the WebUI; then go to the command line on the pve node and navigate to the directory containing the LXC .conf files:

root@pve5:~# cd /etc/pve/lxc
root@pve5:/etc/pve/lxc# ls -la
total 1
drwxr-xr-x 2 root www-data   0 Oct 16 08:04 .
drwxr-xr-x 2 root www-data   0 Oct 16 08:04 ..
-rw-r----- 1 root www-data 418 Oct 22 12:14 100.conf
-rw-r----- 1 root www-data 708 Oct 22 10:44 101.conf

Then edit the file with the matching container ID and add a line for a new mount point -- I'm using mp2, since mp0 and mp1 already exist in my config:

mp2: /nvme-raid0,mp=/mnt/nvme-raid0
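If you'd rather not hand-edit the .conf, pct can add the same mount point (substitute your own container ID for 101):

pct set 101 -mp2 /nvme-raid0,mp=/mnt/nvme-raid0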

Restart the LXC so the new mount point takes effect.
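From the Proxmox host, that's as simple as (again, substitute your container ID for 101):

pct reboot 101

Then, inside the container, confirm the new ZFS pool is mounted and available: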

root@channels5:/mnt/nvme-raid0# df -h | grep nvme 
nvme-raid0                        7.2T  128K  7.2T   1% /mnt/nvme-raid0

There we go -- two new NVMe data drives installed with upgraded double-sided heatsinks, a ZFS pool created across them, and that pool passed through to our Channels DVR LXC container!
