GPU selection....Linux server

I recently moved my DVR install from a NAS to a Linux machine with an i3-6100...much faster at commercial detection and processing. This helps with remote streaming, since the CPU is freed up more quickly while transcoding and streaming.

I have an NVIDIA GeForce GT 1030 I could put in this Ubuntu unit...would that perform any better, or free up the CPU, over continuing to use the Skylake HD 530 iGPU? I am assuming I will need to install the proprietary NVIDIA drivers. I can actually leave both GPUs active, I think.


We don't support NVIDIA encoding on Linux yet, so continuing to use Intel QuickSync is your best bet.


On the flip side, if you're really inclined to try this and are willing to test some beta builds, then we can try adding NVIDIA Linux support.

What Linux distro are you currently using?

I generally run the betas now, so I am game. Ubuntu 19.04. I assume these are currently only supported on Windows? Are they better there at transcoding compared with QuickSync?


I assume getting it working in a Linux build would also mean the Docker version would work? If so, getting it working would be awesome.
I was browsing the forum and realised that HW transcoding was possible; after further digging, I realised all the success stories were on Windows.

HW acceleration in containers on Linux is a hard topic to tackle. Mostly, that's because HW acceleration is done through kernel-level drivers, and while containers share the host kernel, they are isolated from its device interfaces by default. (In general, that's a good thing, because part of why containers are great is the separation and the security that brings.)

I know there has been some progress and success on passing the GPU through to the container in Docker. I'm not sure if the same level has been achieved with other container technologies, like systemd-nspawn, LXC, or snap.

I've successfully passed through an iGPU and an NVIDIA GPU to Plex with no issue, and have been able to since my first iGPU-capable server about a year ago.

I'm not going to pretend I'm completely clued up on how it works, but with an iGPU and a Plex container it was as simple as passing through /dev/dri. For an NVIDIA GPU, it required the Plex container to have the NVIDIA drivers installed, and then passing through the GPU.
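For reference, a passthrough along those lines might look like this with Docker; the image name below is just a placeholder, and the NVIDIA variant assumes the host has the NVIDIA Container Toolkit installed (Docker 19.03+):

```shell
# Intel iGPU (QuickSync/VAAPI): expose the DRI render devices to the container.
docker run -d --device /dev/dri:/dev/dri some/dvr-image

# NVIDIA GPU: assumes the NVIDIA Container Toolkit is installed on the host,
# which lets Docker hand the GPU to the container via --gpus.
docker run -d --gpus all some/dvr-image
```

These commands are environment-dependent sketches, not something specific to any one DVR image.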

It might be a huge pain to get working with Channels, as I'm sure Plex did more things in the background to get it working, but I'm going to assume that step 1 would involve Channels supporting it themselves within their (non-Docker) Linux version.

This already works with Channels as well. Only NVIDIA is unsupported at the moment.

I am mainly focused on getting the best remote-streaming Channels server I can. I'm not running in a Docker container yet; a basic install is fine for me. Is there good reason to pursue NVIDIA over my Skylake iGPU for this purpose? Do the GeForce gaming cards offer a better/faster solution than the Intel iGPU?

The single-stream transcode performance of my current system is "ok". It was running on a Synology NAS, which was also ok as a remote streamer as long as the box wasn't doing anything else, like commercial detection, which could stall the stream quite a bit and took nearly 20 minutes for each hour of programming. The Linux machine can now do that in less than three minutes.

My thought was that if I isolated transcoding to the GT 1030, I would entirely free the CPU. Not worth installing Windows to test that theory right now.

Yeah, sorry, I was just replying to racameron, who made it sound like HW acceleration within a container on Linux was extremely difficult.

@charlescihon There's no real reason to go with a discrete GPU over an iGPU unless you happen to have one spare, which you do. That assumes a fairly recent CPU/iGPU, as the older ones can have much lower-quality output.

@Migz: I understand that in most circumstances allowing the container access to /dev/dri can enable hardware acceleration for containers. The problem comes when you are using containers not to ease distribution, but to separate processes for security purposes. For the former, most users have no problem giving kernel-level access to whatever they install (the Windows paradigm); for the latter, strict privilege separation means exposing kernel-level devices to containers is more than a simple one-liner (the OpenBSD paradigm).

I imagine most users are using containers/Docker for the first reason, rather than the second.

Although this development may be of use to others, my original quest to use this card in Ubuntu (or for CDVR at all) has hit a snag: careful study of the documentation on NVIDIA's site tells me the GT 1030 is the only 10-series card that does not support NVENC at all. I would need to move up to at least a 1050 Ti.

The machine I am using is a small-form-factor unit that can only take single-slot, half-height cards, and also has a limited power supply, so this card is really the only one that might have made sense in the machine, if adding such a card would have improved its performance. As it stands, it is a very low-watt, effective machine using the on-board Skylake iGPU, and the GT 1030 is really a "gamer card" for very low-watt, small systems, lacking the encoding features of the other Pascal cards.
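As a quick sanity check for anyone in a similar spot (assuming an ffmpeg build with NVENC enabled is installed), you can ask ffmpeg which NVENC encoders it was compiled with. Note the caveat in the comment: a listed encoder only proves software support, so a GT 1030 will still fail at encode time.

```shell
# List the NVENC encoders compiled into this ffmpeg build.
# This only confirms software support; actual encoding still requires
# a GPU with the NVENC hardware block, which the GT 1030 lacks.
ffmpeg -hide_banner -encoders | grep nvenc
```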