• 4 Posts
  • 111 Comments
Joined 2 months ago
Cake day: February 11th, 2026

  • Ah-ha, thanks for the update on Docker! Saves me going down that rabbit hole 😅

    On the files on the NAS: yep, that’s by design. My files are across the WAN, not LAN, so I built it to stage remote files locally before transcoding. It currently pulls a file, transcodes it, and moves it wherever you chose for output. This does mean that going over a network is slow, because you have to wait for the staging and cleanup before doing another file. That’s deliberately conservative though; I wanted to avoid saturating networks in case the network operator takes exception to that sort of thing. A secondary benefit is that the disk space required for operations is just twice the size of the source file - very low chance of having to pause a job because the disk monitoring detected there’s no room.
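    A rough sketch of that stage → transcode → move loop, including the 2x disk check (the function names here are hypothetical illustrations, not HISTV's actual internals):

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def has_room(src_size: int, workdir: Path) -> bool:
    # Worst-case footprint: the staged copy plus the transcoded
    # output, i.e. roughly twice the size of the source file.
    return shutil.disk_usage(workdir).free >= 2 * src_size

def transcode_remote(src: Path, dest: Path, workdir: Path) -> None:
    if not has_room(src.stat().st_size, workdir):
        raise RuntimeError("not enough scratch space; pausing job")
    staged = Path(tempfile.mkdtemp(dir=workdir)) / src.name
    shutil.copy2(src, staged)                        # 1. stage across the WAN
    out = staged.parent / ("transcoded-" + src.name)
    subprocess.run(["ffmpeg", "-i", str(staged), str(out)], check=True)  # 2. transcode locally
    shutil.move(str(out), str(dest))                 # 3. move to the chosen output
    staged.unlink()                                  # 4. clean up before the next file
```

    Working on one file at a time is what keeps the scratch-space requirement bounded at 2x the source.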

    I’ll look at putting in an override that disregards the network and treats remote files as local for you!


  • Haha thanks! You mean, support them for output, as well as being able to convert from? Last night I outlined adding an “Auto” option for container, which would keep the source container if possible, but the controls I’ve exposed vs the ones I haven’t are a conscious choice, to maximise player compatibility for the outputs without the user having to know anything about codecs, containers, encoders, their hardware, or quality settings. I’m deliberately keeping the options to a minimum because I didn’t want to make Handbrake 😅

    As to why I chose these codecs: H.264 works on devices from 15+ years ago, and HEVC is compatible back to 2015-16. AV1 is 2020 onwards and in practice needs hardware decoding for smooth playback; that's too new and resource-intensive for my goals with HISTV.

    I’ll think about how I could pull this off though. Perhaps a “lite” mode that keeps the original codec and container, or an “auto” mode for codec dropdown too. I think I like the second one better: lets you mix and match keeping container or codec or both, without adding any real complexity to the options.



  • Hahaha baby steps! But I’ll look at it; if nothing else I think it would be very funny to have the dev equivalent of Jar Jar Binks end the format war by accident (which I say as just a joke; I of course have no idea how complex the issue actually is).

    Edit: Damn dude, you weren’t kidding about the challenge. You’re right that HISTV won’t work for DoVi 5 in its current state - “Fundamental to FFMPEG” does kinda mean “fundamental to HISTV”, being that it’s basically just a clever wrapper around FFMPEG. That said, if your tooling is ready, I’ve got a plan to integrate it. The philosophy with HISTV is to preserve whatever we can and fall back gracefully as far as we have to if we can’t preserve the current profile, which your tools slot into like they were made for it (Rust Gang represent (👉゚ヮ゚)👉).


  • Ah, then the real slowness is going to come from having them on a spinning-disk HDD. For friends, a 3Mbps or 4Mbps target bitrate should be plenty, and the 2x multiplier should be enough to preserve detail. No need to touch anything else; you don't need precision mode for it. Maybe start with 3Mbps on just one episode and see how you go - if you find yourself noticing it looks blocky, bump up to 4Mbps and you'll be golden.



  • Fun fact - HISTV actually has two-pass encoding! Though with enough system RAM you can look ahead far enough to get the benefits of two-pass in just a single pass. I have a bit about this in the README.md:

    Precision mode

    One checkbox for the best quality the app can produce. It picks the smartest encoding strategy based on how much RAM your system has:

    | Your RAM | What it does |
    | --- | --- |
    | 16GB or more | Looks 250 frames ahead to plan bitrate (single pass) |
    | 8-16GB | Looks 120 frames ahead to plan bitrate (single pass) |
    | Under 8GB | Scans the whole file first, then encodes (two passes) |

    Two-pass only happens when precision mode is on AND the system has less than 8GB RAM AND the file would be CRF-encoded. The reason is those lookaheads above: lookahead buffers live in memory. On low-RAM systems that buffer would be too large, so the app falls back to two-pass instead and stores the analysis run in a tempfile on disk. To break down each pass:

    • Pass 1: Runs ffmpeg with -pass 1 writing to a null output. ffmpeg analyses the entire file and writes a statistics log (the passlog file) describing the complexity of every scene. No actual video is produced - this pass is pure analysis.
    • Pass 2: Runs ffmpeg with -pass 2 using the statistics from pass 1. The encoder now knows what’s coming and can distribute bits intelligently across the whole file - spending more on complex scenes, less on simple ones - without needing a large lookahead buffer in RAM. After both passes complete, the passlog temp files are cleaned up.

    The biggest problem with two-pass encoding is speed: it has to make two full passes over the file, one for analysis and one for the actual encode. With a 250-frame lookahead, you're basically writing your passlog into RAM - and reading from it - as you go. With a 120-frame lookahead your CPU will likely catch up to the passlog at times, but you can still write to and read from it as you go, so you get similar speed - and quality close enough that it doesn't really make a difference - in a single pass.
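    Putting the table and the fallback rule together, the strategy selection could be sketched like this (hypothetical Python, not HISTV's actual code; it assumes an x264-style `rc-lookahead` option carries the frame lookahead):

```python
def precision_args(ram_gb: float, crf_encoded: bool) -> list[list[str]]:
    """Return the ffmpeg invocation(s) precision mode would pick."""
    base = ["ffmpeg", "-i", "in.mkv", "-c:v", "libx264"]
    if ram_gb >= 16:
        # Plenty of RAM: plan bitrate with a 250-frame lookahead, single pass.
        return [base + ["-x264-params", "rc-lookahead=250", "out.mkv"]]
    if ram_gb >= 8:
        # Mid-range RAM: smaller 120-frame lookahead, still single pass.
        return [base + ["-x264-params", "rc-lookahead=120", "out.mkv"]]
    if crf_encoded:
        # Low RAM + CRF: classic two-pass, passlog lives on disk instead.
        return [
            base + ["-pass", "1", "-an", "-f", "null", "/dev/null"],  # pass 1: analysis only
            base + ["-pass", "2", "out.mkv"],                          # pass 2: the real encode
        ]
    return [base + ["out.mkv"]]
```

    The two-element return in the low-RAM branch mirrors the pass 1 / pass 2 breakdown above: a null-output analysis run, then the real encode using its statistics.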


  • You know what? I think I can figure out a way to estimate final file size and display it to the user. It’ll only work if “Precision Mode” is off though - that uses “CRF” or “Constant Rate Factor” which basically tells the encoder “be efficient, but make the file as big as it needs to be to look good”. As a result there’s no way to tell how big the file will end up being - the encoder makes the decision on the fly.

    With “Precision Mode” off, HISTV has two gears:

    1. If your file is already small enough (at or below the target bitrate), it uses “CQP” or “Constant Quantisation Parameter” (the QP I/P numbers), which tells the encoder “Use this quality level for every frame, I don’t care how big the file ends up”. It’s fast and consistent - every frame gets the same treatment.
    2. If your file is too big, it switches to VBR (Variable Bit Rate), which tells the encoder “Stay around this target bitrate, spike up to the peak ceiling on complex scenes, but don’t go over”. It’s how the app actually shrinks files. You can estimate the output size with target mbps * seconds / 8 - so a 60-second clip at 4Mbps lands around 30MB. <- This is the maths I’m thinking about doing to display the estimate to the user.
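    Both gears and the size maths are easy to sketch (hypothetical helpers; the estimate covers the video stream only, so audio and container overhead would add a little on top):

```python
def pick_mode(source_kbps: float, target_kbps: float) -> str:
    # Gear 1: already at/below the target -> constant-quality CQP.
    # Gear 2: too big -> capped VBR around the target bitrate.
    return "CQP" if source_kbps <= target_kbps else "VBR"

def estimate_size_mb(target_mbps: float, duration_s: float) -> float:
    # megabits/sec * seconds = megabits; divide by 8 -> megabytes
    return target_mbps * duration_s / 8

print(pick_mode(6000, 4000))    # VBR
print(estimate_size_mb(4, 60))  # 30.0 - the 60s-at-4Mbps example above
```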