AMD iGPU for HW transcoding in container

Hi -

After a few years of perfect Intel-based HW acceleration by simply passing /dev/dri into the container and it 'just working', I've upgraded my server and now it has an AMD iGPU - a 660M to be precise.
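
For context, the only GPU-related thing the container gets is the device mapping, roughly like this (the data volume path and host networking below are just examples from my setup, adjust for yours):

# --device /dev/dri exposes the iGPU's card/render nodes to the container
docker run -d --name channels-dvr \
  --device /dev/dri:/dev/dri \
  -v /opt/channels-dvr:/channels-dvr \
  --network host \
  docker.io/fancybits/channels-dvr:latest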

It appears it is no longer 'plug and play' out of the box with Debian 14 in a minimal install (no desktop environment), but there are conflicting threads about whether it should work, and if so, how to enable it.

I've poked around for a bit but ended up installing so many other things, both outside and inside the container, that it rather defeated the point of using a container in the first place!

So I've gone back to square one and have a minimal Debian install with firmware-amd-graphics installed from testing (the whole distro is on testing), so it should be up to date. I've also tried both the current and pre-release builds of Channels, but no difference.
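
For reference, this is roughly how I've been sanity-checking the host side (lspci needs the pciutils package on a minimal install; device node names can differ between machines):

# confirm the amdgpu kernel driver is bound to the iGPU
lspci -k | grep -A 3 -i vga
# confirm the DRM nodes exist (these are what get passed into the container)
ls -l /dev/dri
# check the firmware loaded without errors
dmesg | grep -i amdgpu | head -n 20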

With Intel that seemed to be enough, presumably because any other libraries needed are inside the container ... but I get this error:

h264_amf
fork/exec /channels-dvr/2025.08.01.1845/ffmpeg-dl: no such file or directory
h264_nvenc
fork/exec /channels-dvr/2025.08.01.1845/ffmpeg-dl: no such file or directory
h264_nvenc+deint
fork/exec /channels-dvr/2025.08.01.1845/ffmpeg-dl: no such file or directory
h264_nvenc+scaler
fork/exec /channels-dvr/2025.08.01.1845/ffmpeg-dl: no such file or directory
h264_nvenc+tonemap
fork/exec /channels-dvr/2025.08.01.1845/ffmpeg-dl: no such file or directory
h264_vaapi@/dev/dri/card0
[AVHWDeviceContext @ 0x30f39d40] libva: VA-API version 1.22.0
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [2]
param: 4, val: 0
i915 does not support EXECBUFER2
DRM_IOCTL_VERSION, unsupported drm device by media driver: amdg
DRM_IOCTL_VERSION, unsupported drm device by media driver: amdg
[AVHWDeviceContext @ 0x30f39d40] libva: driver init failed
[AVHWDeviceContext @ 0x30f39d40] libva: va_openDriver() returns 18
[AVHWDeviceContext @ 0x30f39d40] Failed to initialise VAAPI connection: 18 (invalid parameter).
signal: segmentation fault
h264_vaapi@/dev/dri/renderD128
[AVHWDeviceContext @ 0x2c134d40] libva: VA-API version 1.22.0
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [2]
param: 4, val: 0
i915 does not support EXECBUFER2
DRM_IOCTL_VERSION, unsupported drm device by media driver: amdg
DRM_IOCTL_VERSION, unsupported drm device by media driver: amdg
[AVHWDeviceContext @ 0x2c134d40] libva: driver init failed
[AVHWDeviceContext @ 0x2c134d40] libva: va_openDriver() returns 18
[AVHWDeviceContext @ 0x2c134d40] Failed to initialise VAAPI connection: 18 (invalid parameter).
signal: segmentation fault
h264_vaapi@/dev/dri/renderD129
no such file or directory
h264_vaapi@/dev/renderD128
no such file or directory

Firstly, I think I had this on Intel too, so it may be a red herring, but why can't it find ffmpeg-dl? I've checked in the container and it is present at the path mentioned (/channels-dvr/2025.08.01.1845/ffmpeg-dl).
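
For what it's worth, my understanding is that a Go-style 'fork/exec ... no such file or directory' can also mean the binary's interpreter or loader is missing rather than the file itself, though I haven't confirmed that's what's happening here. I checked simply with ls -l; something like head -n 1 would additionally reveal a #! line if it's a wrapper script:

ls -l /channels-dvr/2025.08.01.1845/ffmpeg-dl
head -n 1 /channels-dvr/2025.08.01.1845/ffmpeg-dl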

Secondly, is the fact it says amdg instead of amdgpu an issue?

Thirdly, and mainly, I'm looking for advice on what extra package I need to install on the host in order to have this work inside the container. I'm presuming a host package is missing, since there are reports of it working elsewhere and it seems unlikely the solution in those cases was to rebuild the image.

I've seen some threads say that I need to map another device, or maybe I need to move to the proprietary drivers ... ?

Once this is sorted I'd be happy to write it up for others. I suspect that many of those who report this working are running the server on a more fully featured machine, and maybe not in a container, so whatever I am missing is already there for them.

It's worth noting that while my GPU is recognised by Plex in its settings, it is not used for transcoding, suggesting a similar root cause - though I've not troubleshot that.

Thanks for your help!

OK I got a bit further - inside the container I installed

apk add mesa-va-gallium --no-cache --update-cache

This pulled in libva as a dependency - it didn't seem to be installed already, despite being referenced in the Channels View Debug Info. :thinking:

(24/25) Installing libva (2.22.0-r1)

Then I also installed libva-utils to get vainfo, which when run outputs:

Trying display: wayland
error: XDG_RUNTIME_DIR is invalid or not set in the environment.
Trying display: x11
error: can't connect to X server!
Trying display: drm
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/dri/radeonsi_drv_video.so
libva info: Found init function __vaDriverInit_1_22
amdgpu: os_same_file_description couldn't determine if two DRM fds reference the same file description.
If they do, bad things may happen!
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.22 (libva 2.22.0)
vainfo: Driver version: Mesa Gallium driver 24.2.8 for AMD Radeon 660M (radeonsi, rembrandt, LLVM 19.1.4, DRM 3.64, 6.16.12+deb14+1-amd64)
vainfo: Supported profile and entrypoints
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileHEVCMain10             : VAEntrypointEncSlice
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
      VAProfileAV1Profile0            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc

The errors at the start of this output seem to be because I don't have a desktop environment installed, but the rest suggests that VA-API is all installed and ready to go! It's also the same version of libva as in the debug info from Channels (1.22).

But - if libva wasn't already installed in the container, then how did the Channels debug reference it? Is it using its own instance? And if so, is that without the Mesa Gallium driver?
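
To try to answer that, my next step is to poke around for bundled copies inside the container, something like this (just guessing at where things might live):

# look for a libva or VA-API driver shipped alongside the Channels binaries
find /channels-dvr -maxdepth 3 \( -name 'libva*' -o -name '*_drv_video.so' \) 2>/dev/null
# compare with what the system Mesa package provides
ls /usr/lib/dri/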

The other curious thing is the 'Trying display: drm' in the vainfo log, versus the references to an 'unsupported drm device' in the Channels debug.

What is Channels actually running when it produces its debug? That might help me get closer to finding out what is missing.
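
One crude way to find out would be to poll the process list in another shell inside the container while clicking View Debug Info, assuming it spawns a visible child process:

# busybox-friendly poll; trigger View Debug Info from the web UI while this runs
while true; do ps | grep -i ffmpeg | grep -v grep; sleep 1; done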

OK, I think I've worked out the reason it is not working, and I'm not sure how this will ever work without Channels changing things - so any other reports of it working seem fishy to me!

I was able to confirm that VAAPI-based HW encoding works inside the container, with only /dev/dri passed in, by encoding a file with ffmpeg.

Having momentarily forgotten that Channels uses its own ffmpeg, I installed ffmpeg directly (apk add ffmpeg), then ran this command to generate a test video stream and encode it with h264:

ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_complex 'testsrc=size=1280x720:rate=30,format=nv12,hwupload' -t 5 -c:v h264_vaapi -y /tmp/test_vaapi.mp4

In the output I see:

...
[AVHWDeviceContext @ 0x7fd153e2de00] Initialised VAAPI connection: version 1.22
[AVHWDeviceContext @ 0x7fd153e2de00] VAAPI driver: Mesa Gallium driver 24.2.8 for AMD Radeon 660M (radeonsi, rembrandt, LLVM 19.1.4, DRM 3.64, 6.16.12+deb14+1-amd64).
...
[h264_vaapi @ 0x7fd146460300] Using VAAPI profile VAProfileH264High (7).
[h264_vaapi @ 0x7fd146460300] Using VAAPI entrypoint VAEntrypointEncSlice (6).
[h264_vaapi @ 0x7fd146460300] Using VAAPI render target format YUV420 (0x1).
...

And no errors, demonstrating the use of VAAPI. This shows that I don't need any additional packages installed on the host, contrary to what I originally thought.

Then I remembered that Channels bundles its own ffmpeg and, long story short, it seems that it is somehow configured to only use Intel VAAPI, so the errors are because it is looking for the i915 driver libraries instead of amdgpu.

I can demonstrate this by running the same ffmpeg encode as above with the Channels internal ffmpeg: it fails with the same errors as originally reported.
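
For completeness, this is roughly the equivalent run against the bundled binary (same arguments, just the bundled path):

/channels-dvr/latest/ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
  -filter_complex 'testsrc=size=1280x720:rate=30,format=nv12,hwupload' \
  -t 5 -c:v h264_vaapi -y /tmp/test_vaapi_bundled.mp4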

Then, as further proof, I symlinked the apk ffmpeg in place of the Channels bundled one and, lo and behold, it succeeds and Channels reports hardware-capable transcoding.
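
The swap itself is nothing more than something like this, inside the container:

mv /channels-dvr/latest/ffmpeg /channels-dvr/latest/ffmpeg.non-amd
ln -s /usr/bin/ffmpeg /channels-dvr/latest/ffmpeg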

So at this point I feel pretty stuck - the hardware is clearly capable, and getting it working is actually fairly trivial: as long as mesa-va-gallium is installed it should 'just work'. But because Channels is using its own ffmpeg, which seems to only support Intel ... failure.

I don't know if any Channels devs are looking at this, but if there is any way to override the ffmpeg in use (at my own risk, yada yada), or maybe some environment variable to 'switch' it to radeonsi (e.g. LIBVA_DRIVER_NAME=radeonsi, LIBVA_DRIVERS_PATH=/usr/lib/dri), that would be much appreciated!
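
For anyone experimenting, this is how such variables would be passed into the container (the driver path is where Mesa lives in my Alpine container; whether the bundled ffmpeg even honours them is exactly the open question):

docker run -d --name channels-dvr \
  --device /dev/dri:/dev/dri \
  -e LIBVA_DRIVER_NAME=radeonsi \
  -e LIBVA_DRIVERS_PATH=/usr/lib/dri \
  docker.io/fancybits/channels-dvr:latest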

Or, if there is a way to 'roll my own' container with my own ffmpeg and Channels that would be fine too?

I went down the path of rolling my own container and have got to the following 'solution'.

I can create a new image starting FROM the official one and simply install mesa-va-gallium and ffmpeg.

Then, I need to replace the bundled ffmpeg in /channels-dvr/latest with the system one in /usr/bin, which uses the Mesa VA libraries.

But due to the way Channels is launched, I've had to wrap run.sh with my own script and override the existing CMD.

This does seem to work - Channels is running and hardware transcoding is successful!

The only trouble is that when Channels does its own auto-update, it will overwrite the 'hack', go back to its own ffmpeg, and I won't notice. So I need to think of a way to resolve that if I continue down this path (one rough idea sketched at the end of this post).

@tmm1 and @maddox - apologies for tagging you directly, but I was hoping one of you might weigh in on this. Can you comment on whether AMD GPU support is something you could support directly, please? I presume the bundled ffmpeg is there so that you have more control over what you test?

Here is my revised Dockerfile:

FROM docker.io/fancybits/channels-dvr:latest

RUN apk add mesa-va-gallium ffmpeg --no-cache --update-cache

COPY run-and-update-ffmpeg.sh /

CMD ["/bin/sh", "/run-and-update-ffmpeg.sh"]

And here is the run-and-update-ffmpeg.sh:

#!/bin/sh
set -e

# Run the original run.sh in the background
/run.sh &

# Wait for Channels to be downloaded and the bundled ffmpeg to appear
echo "Waiting for /channels-dvr/latest/ffmpeg to appear ..."
while [ ! -e /channels-dvr/latest/ffmpeg ]; do
  sleep 1
done

# Once the bundled ffmpeg exists, swap in the system one
echo "Bundled ffmpeg detected, switching ffmpeg ..."
mv /channels-dvr/latest/ffmpeg /channels-dvr/latest/ffmpeg.non-amd
ln -sf /usr/bin/ffmpeg /channels-dvr/latest/ffmpeg

# Wait for run.sh process to keep container alive
wait
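
For the auto-update problem mentioned above, one untested idea is to replace the one-shot swap with a loop that re-applies it whenever an update restores the bundled binary - something like this, run in the background before the final wait:

# untested sketch: re-apply the symlink if a Channels auto-update puts the bundled ffmpeg back
(
  while true; do
    if [ -f /channels-dvr/latest/ffmpeg ] && [ ! -L /channels-dvr/latest/ffmpeg ]; then
      echo "Bundled ffmpeg detected, switching to system ffmpeg ..."
      mv /channels-dvr/latest/ffmpeg /channels-dvr/latest/ffmpeg.non-amd
      ln -sf /usr/bin/ffmpeg /channels-dvr/latest/ffmpeg
    fi
    sleep 60
  done
) &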