OliveTin EZ-Start: A New Way to Deploy OliveTin-for-Channels Using Just Two Environment Variables to Get Started!

Thanks, here is the Healthcheck. By the way, I've tried PORTAINER_ENV values 0 through 3; all return the same error.

Checking your OliveTin-for-Channels installation...
(extended_check=true)

OliveTin Container Version 2025.07.21
OliveTin Docker Compose Version 2025.03.26

----------------------------------------

Checking that your selected Channels DVR server (192.168.0.51:8089) is reachable by URL:
HTTP Status: 200 indicates success...

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1276  100  1276    0     0   138k      0 --:--:-- --:--:-- --:--:--  138k
HTTP Status: 200
Effective URL: http://192.168.0.51:8089/

----------------------------------------

Checking that your selected Channels DVR server's data files (/mnt/192.168.0.51-8089) are accessible:
Folders with the names Database, Images, Imports, Logs, Movies, Streaming and TV should be visible...

total 8
drwxr-xr-x 2 root root 4096 Jul 24 21:37 .
drwxr-xr-x 1 root root 4096 Jul 27 15:36 ..
drwxr-xr-x 2 root root    0 Jul 26 22:05 Database
drwxr-xr-x 2 root root    0 Jul 20 12:49 Images
drwxr-xr-x 2 root root    0 Jul 24 16:52 Imports
drwxr-xr-x 2 root root    0 Sep  5  2020 Live TV
drwxr-xr-x 2 root root    0 Jun 27  2022 Logs
drwxr-xr-x 2 root root    0 Oct  6  2023 Movies
drwxr-xr-x 2 root root    0 Jul  5 22:10 Streaming
drwxr-xr-x 2 root root    0 Jul  9 10:53 TV

Docker reports your current DVR_SHARE setting as...
/mnt/data/supervisor/media/DVR/Channels

If the listed folders are NOT visible, AND you have your Channels DVR and Docker on the same system:

Channels reports this path as...
Z:\Recorded TV\Channels

When using WSL with a Linux distro and Docker Desktop, it's recommended to use...
/mnt/z/Recorded TV/Channels

----------------------------------------

Checking that your selected Channels DVR server's log files (/mnt/192.168.0.51-8089_logs) are accessible:
Folders with the names data and latest should be visible...

total 4
drwxr-xr-x 2 root root    0 Jun 27  2022 .
drwxr-xr-x 1 root root 4096 Jul 27 15:36 ..
drwxr-xr-x 2 root root    0 May 12 04:24 comskip
drwxr-xr-x 2 root root    0 May 12 04:24 recording

Docker reports your current LOGS_SHARE setting as...
/mnt/data/supervisor/media/DVR/Channels/Logs

If the listed folders are NOT visible, AND you have your Channels DVR and Docker on the same system:

Channels reports this path as...
C:\ProgramData\ChannelsDVR

When using WSL with a Linux distro and Docker Desktop, it's recommended to use...
/mnt/c/ProgramData/ChannelsDVR

----------------------------------------

Checking if your Portainer token is working on ports 9000 and/or 9443:

Portainer http response on port 9000 reports version 
Portainer Environment ID for local is 
Portainer https response on port 9443 reports version 2.32.0
Portainer Environment ID for local is 

----------------------------------------

Here's a list of your current OliveTin-related settings:

HOSTNAME=olivetin
CHANNELS_DVR=192.168.0.51:8089
CHANNELS_DVR_ALTERNATES=another-server:8089 a-third-server:8089
CHANNELS_CLIENTS=appletv4k firestick-master amazon-aftkrt
ALERT_SMTP_SERVER=smtp.gmail.com:587
ALERT_EMAIL_FROM=[Redacted]@gmail.com
ALERT_EMAIL_PASS=[Redacted]
ALERT_EMAIL_TO=[Redacted]@gmail.com
UPDATE_YAMLS=true
UPDATE_SCRIPTS=true
PORTAINER_TOKEN=[Redacted]
PORTAINER_HOST=192.168.0.20
PORTAINER_PORT=9443
PORTAINER_ENV=1

----------------------------------------

Here's the contents of /etc/resolv.conf from inside the container:

# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 127.0.0.11
search lan
options ndots:0

# Based on host file: '/etc/resolv.conf' (internal resolver)
# ExtServers: [host(192.168.0.1) host(2603:7000:b500:15f8::1)]
# Overrides: []
# Option ndots from: internal

----------------------------------------

Here's the contents of /etc/hosts from inside the container:

127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::	ip6-localnet
ff00::	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.19.0.2	olivetin

----------------------------------------

Your WSL Docker-host is running:

 FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your WSL Docker-host's /etc/resolv.conf file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your WSL Docker-host's /etc/hosts file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your WSL Docker-host's /etc/wsl.conf file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your Windows PC's %USERPROFILE%\.wslconfig file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer


----------------------------------------

Your Windows PC's etc/hosts file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your Windows PC's DNS server resolution:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your Windows PC's network interfaces:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your Tailscale version is:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

I'm pretty sure I see the problem in my code. Give me a bit and I'll push a fix...

If you're comfortable editing a Bash script, could you edit lines 15 & 16 in portainerstack.sh that look like this:

portainerEnv=$(curl -s -k -H "X-API-Key: $portainerToken" "http://$portainerHost:9000/api/endpoints" | jq '.[] | select(.Name=="local") | .Id') \
  && [[ -z $portainerEnv ]] && portainerEnv=$(curl -s -k -H "X-API-Key: $portainerToken" "https://$portainerHost:$portainerPort/api/endpoints" | jq '.[] | select(.Name=="local") | .Id')

And make this small change (get rid of backslash at the end of the first line, and delete the && at the beginning of the second):

portainerEnv=$(curl -s -k -H "X-API-Key: $portainerToken" "http://$portainerHost:9000/api/endpoints" | jq '.[] | select(.Name=="local") | .Id')
  [[ -z $portainerEnv ]] && portainerEnv=$(curl -s -k -H "X-API-Key: $portainerToken" "https://$portainerHost:$portainerPort/api/endpoints" | jq '.[] | select(.Name=="local") | .Id')

This should correct a logic flaw (on my part) that affects people with only https enabled in Portainer. Let me know if that works.
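The flaw can be reproduced in a few lines of bash (a sketch; `false` stands in here for the http curl failing on an https-only Portainer):

```shell
# `var=$(cmd)` takes on cmd's exit status, so chaining the emptiness
# test with `&&` skips the fallback whenever the first curl fails
# outright (e.g. nothing is listening on port 9000).

# Broken form: the whole chain short-circuits on the failed command.
brokenEnv=$(false) \
  && [[ -z $brokenEnv ]] && brokenEnv="fallback"
echo "broken: '${brokenEnv}'"   # broken: ''

# Fixed form: the emptiness test runs regardless of the exit status.
fixedEnv=$(false)
[[ -z $fixedEnv ]] && fixedEnv="fallback"
echo "fixed: '${fixedEnv}'"     # fixed: 'fallback'
```

Dropping the backslash and the leading `&&` turns the fallback into an unconditional second command, which is what the fix above does.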

Killed the container, made the change, restarted container. Same error:

Checking your OliveTin-for-Channels installation...
(extended_check=true)

OliveTin Container Version 2025.07.21
OliveTin Docker Compose Version 2025.03.26

----------------------------------------

Checking that your selected Channels DVR server (192.168.0.51:8089) is reachable by URL:
HTTP Status: 200 indicates success...

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1276  100  1276    0     0   155k      0 --:--:-- --:--:-- --:--:--  155k
HTTP Status: 200
Effective URL: http://192.168.0.51:8089/

----------------------------------------

Checking that your selected Channels DVR server's data files (/mnt/192.168.0.51-8089) are accessible:
Folders with the names Database, Images, Imports, Logs, Movies, Streaming and TV should be visible...

total 8
drwxr-xr-x 2 root root 4096 Jul 24 21:37 .
drwxr-xr-x 1 root root 4096 Jul 27 16:16 ..
drwxr-xr-x 2 root root    0 Jul 26 22:05 Database
drwxr-xr-x 2 root root    0 Jul 20 12:49 Images
drwxr-xr-x 2 root root    0 Jul 24 16:52 Imports
drwxr-xr-x 2 root root    0 Sep  5  2020 Live TV
drwxr-xr-x 2 root root    0 Jun 27  2022 Logs
drwxr-xr-x 2 root root    0 Oct  6  2023 Movies
drwxr-xr-x 2 root root    0 Jul  5 22:10 Streaming
drwxr-xr-x 2 root root    0 Jul  9 10:53 TV

Docker reports your current DVR_SHARE setting as...
/mnt/data/supervisor/media/DVR/Channels

If the listed folders are NOT visible, AND you have your Channels DVR and Docker on the same system:

Channels reports this path as...
Z:\Recorded TV\Channels

When using WSL with a Linux distro and Docker Desktop, it's recommended to use...
/mnt/z/Recorded TV/Channels

----------------------------------------

Checking that your selected Channels DVR server's log files (/mnt/192.168.0.51-8089_logs) are accessible:
Folders with the names data and latest should be visible...

total 4
drwxr-xr-x 2 root root    0 Jun 27  2022 .
drwxr-xr-x 1 root root 4096 Jul 27 16:16 ..
drwxr-xr-x 2 root root    0 May 12 04:24 comskip
drwxr-xr-x 2 root root    0 May 12 04:24 recording

Docker reports your current LOGS_SHARE setting as...
/mnt/data/supervisor/media/DVR/Channels/Logs

If the listed folders are NOT visible, AND you have your Channels DVR and Docker on the same system:

Channels reports this path as...
C:\ProgramData\ChannelsDVR

When using WSL with a Linux distro and Docker Desktop, it's recommended to use...
/mnt/c/ProgramData/ChannelsDVR

----------------------------------------

Checking if your Portainer token is working on ports 9000 and/or 9443:

Portainer http response on port 9000 reports version 
Portainer Environment ID for local is 
Portainer https response on port 9443 reports version 2.32.0
Portainer Environment ID for local is 

----------------------------------------

Here's a list of your current OliveTin-related settings:

HOSTNAME=olivetin
CHANNELS_DVR=192.168.0.51:8089
CHANNELS_DVR_ALTERNATES=another-server:8089 a-third-server:8089
CHANNELS_CLIENTS=appletv4k firestick-master amazon-aftkrt
ALERT_SMTP_SERVER=smtp.gmail.com:587
ALERT_EMAIL_FROM=[Redacted]@gmail.com
ALERT_EMAIL_PASS=[Redacted]
ALERT_EMAIL_TO=[Redacted]@gmail.com
UPDATE_YAMLS=true
UPDATE_SCRIPTS=true
PORTAINER_TOKEN=[Redacted]
PORTAINER_HOST=192.168.0.20
PORTAINER_PORT=9443
PORTAINER_ENV=2

----------------------------------------

Here's the contents of /etc/resolv.conf from inside the container:

# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 127.0.0.11
search lan
options ndots:0

# Based on host file: '/etc/resolv.conf' (internal resolver)
# ExtServers: [host(192.168.0.1) host(2603:7000:b500:15f8::1)]
# Overrides: []
# Option ndots from: internal

----------------------------------------

Here's the contents of /etc/hosts from inside the container:

127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::	ip6-localnet
ff00::	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.19.0.2	olivetin

----------------------------------------

Your WSL Docker-host is running:

 FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your WSL Docker-host's /etc/resolv.conf file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your WSL Docker-host's /etc/hosts file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your WSL Docker-host's /etc/wsl.conf file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your Windows PC's %USERPROFILE%\.wslconfig file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer


----------------------------------------

Your Windows PC's etc/hosts file contains:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your Windows PC's DNS server resolution:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your Windows PC's network interfaces:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Your Tailscale version is:

FIFO pipe not found. Is the host helper script running?
Run sudo -E ./fifopipe_hostside.sh "$PATH" from the directory you have bound to /config on your host computer

----------------------------------------

Killing and restarting would restore the original script. Keep it running, make the change and try it out. If it works, I'll push an update. My testing says it'll work, but nothing like a real world test. :slight_smile:

@Lunatixz There might be something more going on here, as the Post-Install Healthcheck isn't returning a Portainer Environment ID, and it looks to me like it should.

Could you PM me the Healthcheck debug log using the new Action for that purpose? (Contains some sensitive info, that's why I'm requesting by PM)

(Screenshot: htpc6, 2025-07-27 15:08:53)

Sent. The change didn't work... it's hanging at "waiting for results"...

Thanks for the help, if you need further testing pls let me know.

Could you exec into the OliveTin container and run:

curl -k -X GET --max-time 3 -H "X-API-Key: $PORTAINER_TOKEN" https://$PORTAINER_HOST:$PORTAINER_PORT/api/endpoints

You should either get a whole bunch of output, or just an error in JSON form. If it's a bunch of output, I'm only curious about "Id": which is right at the beginning. If it's an error, post it.

DM Sent

For the moment let's not worry about the healthcheck not showing the env id.

Instead, let's see if we can get Project One-Click spin-ups working. First, stop the OliveTin stack and change your PORTAINER_ENV value to 1, as we now know that's the correct env id. Start the stack again, and then modify these lines in portainerstack.sh:

Lines 15 & 16 should currently look like this:

portainerEnv=$(curl -s -k -H "X-API-Key: $portainerToken" "http://$portainerHost:9000/api/endpoints" | jq '.[] | select(.Name=="local") | .Id') \
  && [[ -z $portainerEnv ]] && portainerEnv=$(curl -s -k -H "X-API-Key: $portainerToken" "https://$portainerHost:$portainerPort/api/endpoints" | jq '.[] | select(.Name=="local") | .Id')

I'd like you to add a space and a backslash to the end of line 16, and then add the third line below between the current line 16 & line 17:

portainerEnv=$(curl -s -k -H "X-API-Key: $portainerToken" "http://$portainerHost:9000/api/endpoints" | jq '.[] | select(.Name=="local") | .Id') \
  && [[ -z $portainerEnv ]] && portainerEnv=$(curl -s -k -H "X-API-Key: $portainerToken" "https://$portainerHost:$portainerPort/api/endpoints" | jq '.[] | select(.Name=="local") | .Id') \
  && [[ -z $portainerEnv ]] && portainerEnv="$PORTAINER_ENV"

This should allow us a fallback of using the value in PORTAINER_ENV if the curl commands are unsuccessful in getting the env id.

After that, try to add any project via Project One-Click...

EDIT: The above shouldn't be necessary. I'm fairly certain the non-standard name you're using for the local Portainer environment is at the root of this. Check my response to your last PM, but if you rename primary back to local everything should start working.
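For anyone else who hits this: the scripts' jq filter matches on the environment's name, so a renamed local environment returns nothing. A quick sketch against made-up /api/endpoints output (the names and Ids here are examples, not real output):

```shell
# Trimmed sample of what /api/endpoints might return; real output has
# many more fields. "primary" stands in for a renamed local environment.
endpoints='[{"Id":1,"Name":"primary"},{"Id":2,"Name":"remote-agent"}]'

# The filter only matches an environment literally named "local", so a
# renamed environment yields an empty result (and an empty env id):
echo "$endpoints" | jq '.[] | select(.Name=="local") | .Id'     # (nothing)

# Matching the actual name returns the Id the scripts need:
echo "$endpoints" | jq '.[] | select(.Name=="primary") | .Id'   # 1
```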

I have Channels DVR and Portainer running on my Synology NAS. I am following the process to get OliveTin set up. I am able to do the first part in Portainer and access the OliveTin web interface on port 1337. I then am able to run the Environment Variables Generator/Tester successfully. I then stopped the olivetin stack, removed the two initial variables, and added in the text from the generator. When I then update the stack (I didn't select the re-pull and redeploy option), I get the following error message:

Failed to deploy a stack: compose up operation failed: Error response from daemon: Bind mount failed: '/data/olivetin' does not exist

Not sure how to proceed from that error and any help is appreciated.

I believe you need to create that folder on your Synology.

As @jagrim said, you need to create the folders in advance on Synology, as it doesn't support Docker creating them.

I believe most Synology users find that adding two folders in advance, /volume1/docker/olivetin and /volume1/docker/olivetin/data, works well. Then use a value of HOST_DIR=/volume1/docker in the stack.

EDIT: Note the OliveTin Environment Variables Generator/Tester Action has some specifics on this requirement as well, when describing likely values for HOST_DIR:
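A minimal sketch of that pre-creation step over SSH, assuming the common /volume1/docker layout (adjust the volume name to match your NAS):

```shell
# Create the bind-mount targets ahead of time; Synology's Docker won't
# create missing host paths for you. HOST_DIR defaults to the common
# Synology location, but override it if your layout differs.
HOST_DIR="${HOST_DIR:-/volume1/docker}"
mkdir -p "$HOST_DIR/olivetin/data"

# Then set HOST_DIR=/volume1/docker in the stack's environment variables.
```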

Thank you both. That indeed was my issue. Once I created the two folders and updated the stack, the container launched as expected. I appreciate the help and the great tool! I look forward to learning more about it and using it.

I'm unable to launch the stack in Portainer. I copied and pasted exactly as posted at the beginning of this thread, only changing it by adding the two environmental variables you gave (-ezstart and host IP). I get the following error upon deployment:

Invalid interpolation format for services.olivetin.volumes.[]: "${HOST_DIR:-/unused}${HOST_DIR:+/olivetin:/config}"; you may need to escape any $ with another $

I'm a bit baffled because if that were an actual problem I imagine everyone else would be running into it, too, and I can't find any mention of it. And also, why only on that line and not the others that are similarly formatted?

For reference, I'm running Docker Desktop on Windows 11 Pro and using Portainer in the GUI. Docker Desktop version 4.45.0 (updated this morning, got the same error on v4.44.3) and Portainer version 2.33.1. Docker version 28.3.3

I had to update the OliveTin Docker Compose recently, due to a change in acceptable syntax allowed by Portainer. I just updated post #1 in this thread to reflect that, so update your compose accordingly.

The error you're seeing is new to me though. The compose should not be edited; the two env vars should be added in the Environment variables section of Portainer (Advanced mode is easiest). Like this:

Ah, thanks for the clarification! I copied the updated compose, put the two environment variables in the right section this time, and I still get that same error. So then I replaced the variable string with the actual volume mount path (/local/path/to/olivetin/config:/config).

Now I'm getting this error instead:

Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: duplicate mount point: +

Correct me if I'm wrong, but I believe the problem is that it's not interpreting the variables correctly -- it seems to think that the "+" is the path?

Looks like EZ-start is not in the cards for me! Lol!
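For reference, the interpolation in that volume line follows shell-style parameter expansion, so bash can show what it should produce (the paths are just examples):

```shell
# ${VAR:-word} expands to word when VAR is unset or empty; ${VAR:+word}
# expands to word only when VAR is set and non-empty. Together they make
# the volume line a harmless dummy mount unless HOST_DIR is provided.
unset HOST_DIR
echo "${HOST_DIR:-/unused}${HOST_DIR:+/olivetin:/config}"
# /unused

HOST_DIR=/volume1/docker
echo "${HOST_DIR:-/unused}${HOST_DIR:+/olivetin:/config}"
# /volume1/docker/olivetin:/config
```

An interpolation error on that line therefore suggests the Compose parser in use doesn't support the `:+` form, rather than a problem with the values themselves.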

You're using the full version of Portainer, correct (i.e., not the extension version installed from within Docker Desktop)?

After your first message, I confirmed that nothing has changed when installing Portainer via a WSL2 distro (I used Debian), and that OliveTin installs fine from there. Tested on Windows 11 Pro 24H2, Docker Desktop 28.3.3, Portainer 2.33.1 LTS (full version), Debian 11.11 (for WSL2) and WSL2 2.5.9.0.

Ahhhhha! I'm using the extension in the GUI for Docker Desktop. Let me try the full version!

Haha, just kidding. I uninstalled the extension and installed Portainer Community Edition 2.33.1 LTS. Added the stack, clicked deploy, and got the same errors. Swapped in the actual path for the config volume and cleared that error, but I'm still getting the "duplicate mount point: +" error. I am on WSL2 version 2.5.10.0, but there is a chance I updated it during troubleshooting all of this.

It appears that whatever the cause, my system isn't parsing the variables correctly, so I'll just write up my docker-compose file without them!