TubeArchivist Processing Script

Enhance Your Channels DVR Workflow with a YouTube Processing Script!

Hi Channels DVR Community,

Here is a tool I’ve developed to complement TubeArchivist and improve workflows for Channels DVR users. This Python script processes YouTube videos to prepare them for seamless integration into your Channels DVR library.


Key Features:

  • YouTube Metadata Retrieval: Automatically fetches video metadata (title, description, uploader, upload date) using the YouTube Data API.
  • NFO File Generation: Creates .nfo files compatible with Channels DVR (when it starts recognizing them) for better video organization and metadata recognition.
  • File Renaming and Organization:
    • Renames video files based on their titles for consistency.
    • Organizes videos into directories named after their uploaders for easier browsing.
  • Notifications via Apprise:
    • Receive processing summaries through your favorite notification platforms (e.g., Pushover, Discord, Slack, Email).
  • Channels DVR Integration:
    • Optionally triggers a Channels DVR metadata refresh after processing, ensuring your library stays up-to-date.
  • Remove Old Files:
    • Optionally removes files older than X days.
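To illustrate the NFO piece, here's a minimal sketch of what generating a Kodi-style `.nfo` could look like — the `write_nfo` name and the exact tag set are illustrative assumptions, not the script's actual API:

```python
import xml.etree.ElementTree as ET

def write_nfo(path, title, plot, uploader, aired):
    """Write a Kodi-style <episodedetails> NFO file next to the video."""
    root = ET.Element("episodedetails")
    ET.SubElement(root, "title").text = title
    ET.SubElement(root, "plot").text = plot
    ET.SubElement(root, "studio").text = uploader
    ET.SubElement(root, "aired").text = aired  # e.g. "2021-10-26"
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
```

Media managers that read NFO sidecars expect the file to share the video's base name (e.g. `Boost Camp.mp4` + `Boost Camp.nfo`), which is why the script renames both together.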

Use Cases

  • YouTube Archiving:
    If you use TubeArchivist to archive YouTube videos, this script takes your workflow a step further by generating .nfo files and organizing videos into a structure that should be easier to work with in Channels DVR (Videos).

  • Enhanced Library Management:
    Keep your YouTube videos organized and properly labeled.


Installation and Usage

  1. Clone this repository:

    git clone https://github.com/peppy6582/TubeArchivist-Accessory-Script.git
    cd TubeArchivist-Accessory-Script
    
  2. Make the script executable:

    chmod +x youtube-process.py
    
  3. Install Python dependencies:

    pip install -r requirements.txt
    
  4. Configure the script by creating a config.txt file:

    VIDEO_DIRECTORY=/path/to/TubeArchivist/YouTube/
    CHANNELS_DIRECTORY=/path/to/TubeArchivist/YouTube Channels/
    PROCESSED_FILES_TRACKER=processed_files.txt
    YOUTUBE_API_KEY=your-youtube-api-key
    APPRISE_URL=pover://your_user_key@your_api_token
    CHANNELS_DVR_API_REFRESH_URL=http://YOUR_IP_ADDRESS:8089/dvr/scanner/scan
    DELETE_AFTER=30
    
    • Required:
      • VIDEO_DIRECTORY: Directory where TubeArchivist stores downloaded videos.
      • CHANNELS_DIRECTORY: Directory to organize videos by uploader.
      • YOUTUBE_API_KEY: YouTube Data API key.
    • Optional:
      • APPRISE_URL: URL for sending notifications via Apprise-supported services.
      • CHANNELS_DVR_API_REFRESH_URL: URL for Channels DVR metadata refresh.
      • DELETE_AFTER: Removes files older than X days.
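As a sketch, parsing this KEY=VALUE file and firing the optional DVR refresh could look like the following — the function names are illustrative, and the HTTP verb used against the scan endpoint is an assumption (check your Channels DVR API):

```python
def load_config(path="config.txt"):
    """Parse simple KEY=VALUE lines; blanks and #-comments are skipped."""
    cfg = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            # partition() splits at the FIRST '=', so values may contain '='
            key, _, value = line.partition("=")
            cfg[key.strip()] = value.strip()
    return cfg

def trigger_dvr_refresh(cfg):
    """Ask Channels DVR to rescan, if a refresh URL is configured."""
    url = cfg.get("CHANNELS_DVR_API_REFRESH_URL")
    if url:
        import requests  # third-party: pip install requests
        requests.put(url, timeout=10)  # verb assumed; verify for your DVR
```

Keeping the refresh optional means the script still works for users who only want the renaming and NFO generation.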

Why the Channels DVR Community Might Love This

If you enjoy building custom media libraries and maximizing the potential of Channels DVR, this script helps you integrate YouTube content into your library effortlessly. By automatically generating metadata and keeping files organized, you’ll spend less time managing files and more time enjoying your library!


Get Started!

The script is available on GitHub:
👉 TubeArchivist Accessory Script

I’d love to hear your feedback or suggestions for improving the script. Let me know if you have any questions or need help setting it up.


5 Likes

Added the ability to automatically remove files older than X days with the config variable:
DELETE_AFTER

Set it to how many days you want to retain the videos.
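A minimal sketch of how such age-based cleanup can work (illustrative, not the script's exact implementation) — compare each file's modification time against a cutoff:

```python
import os
import time

def prune_old_files(directory, max_age_days):
    """Delete files whose modification time is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400  # 86400 seconds per day
    removed = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```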

2 Likes

Awesome. I was just comparing Pinchflat with TubeArchivist to choose one, and this add-on made the decision easier. I look forward to checking it out. Thanks for sharing!

Well... I guess I'm gonna have to mull over migrating again, this time to TubeArchivist. It's very easy to just put my URLs in and off I go, TBF, even if it is a memory hog at times.

Any plans on adding a docker container for this?

I have never built my own container before, but I will look into it.

1 Like

I have been trying to get it set up as a Docker container all day, but I am not having any luck. If anyone out there wants to fork this and try themselves, feel free.

I feel this would be a very helpful tool for OliveTin!

and @bnhf has lots of experience with this, perhaps he can help...?

1 Like

@Fofer

I've reviewed the script and its requirements, and I agree, it looks like something that would make a nice addition to OliveTin.

@Phillip_Berryman

Your script looks like something I could fairly easily add to OliveTin-for-Channels, which already includes a number of Python scripts. The OliveTin Action would include setting a couple of variables specific to your script, along with the desired interval for running the script. Most of the values your script needs are already part of the standard set of OliveTin env vars.

If you'd like to go in this direction, I'll probably need some assistance getting sample data in place. I don't have much in the way of YouTube videos in my personal library, and those I have were sourced using other methods.

Let me know what you think...

3 Likes

I played around a bit this morning with TubeArchivist itself, since I haven't used it before, and got it up-and-running using this Docker Compose in Portainer-Stacks:

version: '3.9'
services:
  tubearchivist:
    # 2025.01.11
    # Docker Hub home for this image: https://hub.docker.com/r/bbilly1/tubearchivist.
    # GitHub home for this project, with setup instructions: https://github.com/tubearchivist/tubearchivist.
    container_name: tubearchivist
    image: bbilly1/tubearchivist:${TAG}
    ports:
      - ${HOST_PORT}:8000
    environment:
      - ES_URL=http://archivist-es:${ES_PORT} # needs protocol e.g. http and port
      - REDIS_HOST=archivist-redis            # don't add protocol
      - HOST_UID=1000
      - HOST_GID=1000
      - TA_HOST=${TA_HOST}                    # set your host name
      - TA_USERNAME=${TA_USERNAME}            # your initial TA credentials
      - TA_PASSWORD=${TA_PASSWORD}            # your initial TA credentials
      - ELASTIC_PASSWORD=${TA_PASSWORD}       # set password for Elasticsearch
      - TZ=${TZ}                              # set your time zone
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 2m
      timeout: 10s
      retries: 3
      start_period: 30s
    volumes:
      - ${HOST_DIR}/tubearchivist/media:/youtube
      - cache:/cache
    depends_on:
      - archivist-es
      - archivist-redis
    restart: unless-stopped

  archivist-redis:
    image: redis/redis-stack-server:${TAG}
    container_name: archivist-redis
    expose:
      - ${REDIS_PORT}
    volumes:
      - redis:/data
    depends_on:
      - archivist-es
    restart: unless-stopped
    
  archivist-es:
    image: bbilly1/tubearchivist-es:${TAG}    # only for amd64, or use official es 8.16.0
    container_name: archivist-es
    expose:
      - ${ES_PORT}
    environment:
      - ELASTIC_PASSWORD=${TA_PASSWORD}       # matching Elasticsearch password
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
      - xpack.security.enabled=true
      - discovery.type=single-node
      - path.repo=/usr/share/elasticsearch/data/snapshot
    #ulimits:
      #memlock:
        #soft: -1
        #hard: -1
    volumes:
      - es:/usr/share/elasticsearch/data      # check for permission error when using bind mount, see readme
    restart: unless-stopped

volumes:
  #media:
  cache:
  redis:
  es:

And here are my sample environment variables to go with the above:

TAG=latest
HOST_PORT=8010
ES_PORT=9200
REDIS_PORT=6379
TA_HOST=htpc6
TA_USERNAME=tubearchivist
TA_PASSWORD=whatever
TZ=America/Denver
HOST_DIR=/data

Note that only HOST_PORT needs to be set to something that doesn't conflict on your Portainer host machine. The other two ports are only used internally within the stack, so aren't published. TA_HOST is the resolvable hostname or IP address of your Docker/Portainer host system.

Using Docker volumes for everything except the YouTube videos themselves seems to work well. You can try to use the ulimits: section if you want, but it won't be compatible with many systems due to Docker restrictions and could create memory allocation issues.

Looks like adding a couple of env vars to OliveTin itself will cover the main data needs for the script, so the only remaining inputs will be the CDVR server you want to target and the interval at which to run the script (every 12 hours, for example).

Let me know how I can be of assistance in a PM if necessary.

Any updates on this front? I'm already migrating everything from ytdl-sub and Pinchflat to this. Being able to mass-add YouTube URLs is nice, and then using this script to make it Channels-compatible would be cool.

I spent some time on it earlier in the week, and hope to finish it up today. I'll keep you posted!

1 Like

@Phillip_Berryman

I got reasonably close today on creating an OliveTin Action based on your script. However, there's a showstopper:

It looks like you're doing a file move rather than a copy when relocating videos. This can create issues in the Docker world, and will fail entirely when the Docker host system and Channels DVR system are not one and the same. Moves are faster, so I get the appeal, but you'll need to fall back to copying where moves are not supported.

Here's the error I got:

Generated NFO: /mnt/media-server8-8089_ta/media/UCFlaaxfm8EK9WVL-HFCkDfA/jUKOp_Qy8jU.nfo
Renamed file: /mnt/media-server8-8089_ta/media/UCFlaaxfm8EK9WVL-HFCkDfA/jUKOp_Qy8jU.mp4 -> /mnt/media-server8-8089_ta/media/UCFlaaxfm8EK9WVL-HFCkDfA/Boost Camp   October 26, 2021.mp4
Renamed NFO file: /mnt/media-server8-8089_ta/media/UCFlaaxfm8EK9WVL-HFCkDfA/jUKOp_Qy8jU.nfo -> /mnt/media-server8-8089_ta/media/UCFlaaxfm8EK9WVL-HFCkDfA/Boost Camp   October 26, 2021.nfo
Traceback (most recent call last):
  File "/config/youtube-process.py", line 204, in <module>
    process_videos()
    ~~~~~~~~~~~~~~^^
  File "/config/youtube-process.py", line 185, in process_videos
    move_file(renamed_path, uploader)
    ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/youtube-process.py", line 102, in move_file
    os.rename(file_path, new_path)
    ~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 18] Invalid cross-device link: '/mnt/media-server8-8089_ta/media/UCFlaaxfm8EK9WVL-HFCkDfA/Boost Camp   October 26, 2021.mp4' -> '/mnt/media-server8-8089/Kingfield Pilates and Movement/Boost Camp   October 26, 2021.mp4' 
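For reference, a sketch of the fix, under the assumption that the script's `move_file()` currently wraps `os.rename()`: `shutil.move()` transparently falls back to copy-and-delete when the rename crosses a filesystem boundary:

```python
import os
import shutil

def move_file_safe(src, dst_dir):
    """Move src into dst_dir, surviving filesystem boundaries.

    os.rename() raises OSError errno 18 (EXDEV) when source and
    destination live on different filesystems -- common with Docker
    bind mounts. shutil.move() handles that case by copying the file
    and then removing the original.
    """
    os.makedirs(dst_dir, exist_ok=True)
    dst = os.path.join(dst_dir, os.path.basename(src))
    shutil.move(src, dst)
    return dst
```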

Though my test failed for the reason outlined, I believe everything is ready to go otherwise:

screenshot-htpc6-2025.01.19-12_46_56

EDIT: I think you can hold off on making any changes, as I've made a number of adjustments that I believe users will find acceptable. I've run into an issue on my end however, so I'm working on that atm.

1 Like

The OliveTin-for-Channels Action front-ending this script is complete, along with a Project One-Click Action to create a TubeArchivist stack. More details here:

1 Like

Is there any way to add the air date and the YouTube ID to the title? I still want to use this script to get the thumbnails and the date metadata into Channels, as I tried embedding them into the video and that doesn't work (@bnhf, if you know how to do that, pls help :D)

Would running the YT thumbnail script first work? If so, I should be able to make that happen.

I think the thumbnail script should run AFTER the YouTube video file gets moved to a different folder, as that makes a bit more sense. But as this script works ATM, it only gives you the title, without the air date or YouTube ID in brackets.

So TA Script -> Thumbnail script