MP4 files are embarrassingly easy to steal. Right-click, save as, done. For a blog that occasionally embeds short AI-generated video clips, this wasn't a theoretical concern — it was a guarantee. Anyone with a browser's developer tools could grab the file URL and download it in seconds. So I decided to replace the direct MP4 links with HLS adaptive bitrate streaming, complete with AES-128 encryption and burned-in watermarks. The kind of setup you'd expect from a proper video platform. On a static site hosted on S3.
That last sentence should have been the warning.
HLS — HTTP Live Streaming — works by chopping video into small
transport stream segments, each a few seconds long, and serving them
via playlists that tell the player what to fetch and in what order.
Apple invented it for
iOS back in 2009. The protocol is elegant: just files on a web
server, no special streaming infrastructure required. A master
playlist points to variant playlists at different quality levels,
and the client picks the appropriate one based on available
bandwidth. For a fifteen-second clip on a blog, adaptive bitrate is
arguably overkill. I built it anyway, because the encryption layer
depends on the HLS segment structure, and because I wanted four
quality tiers from 480p to source resolution. The transcoding
pipeline uses FFmpeg to produce each tier with its own playlist and
.ts segments, then wraps them in a master .m3u8 that lists all
four variants with their bandwidth and resolution metadata.
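For a concrete picture, a master playlist for this layout looks roughly like the following; the bandwidth figures are illustrative, not my actual numbers:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=854x480
480p/stream.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/stream.m3u8
# ...two more EXT-X-STREAM-INF entries up to source resolution
```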
FFmpeg's HLS muxer is powerful and poorly documented in roughly equal measure. The flags for segment naming, playlist type, and encryption keyinfo files all interact in ways that the man page describes with the enthusiasm of someone filling out tax forms. Getting the basic transcoding working — four tiers, VOD playlist type, sensible segment durations — took maybe an afternoon. Getting the encryption right took three days.
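A sketch of what one tier's invocation looks like with the flags discussed here (bitrates, segment length, and filenames are placeholders):

```sh
ffmpeg -i input.mp4 \
  -vf "scale=-2:480" \
  -c:v libx264 -b:v 1400k \
  -c:a aac -b:a 128k \
  -f hls \
  -hls_time 4 \
  -hls_playlist_type vod \
  -hls_key_info_file enc.keyinfo \
  -hls_segment_filename "480p/seg_%03d.ts" \
  480p/stream.m3u8
```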
The AES-128 encryption in HLS works like this: you generate a
sixteen-byte random key, write it to a file, and tell FFmpeg where
to find it via a keyinfo file. The keyinfo file has three lines —
the URI where the player will fetch the key at runtime, the local
path FFmpeg should read during encoding, and an initialisation
vector. The player downloads the key, decrypts each segment on the
fly, and plays the video. Simple in theory. The problem is that
the key URI in the keyinfo file is relative to the playlist that
references it, not relative to the keyinfo file itself, and not
relative to the master playlist. Each variant playlist lives in its
own subdirectory — 480p/stream.m3u8, 720p/stream.m3u8, and so
on — while the encryption key sits one level up. So the URI needs
to be ../enc.key. Get this wrong and the player fetches a 404
instead of a decryption key, and the error message from hls.js is
spectacularly unhelpful. "FragParsingError" tells you nothing about
why the fragment couldn't be parsed. I spent a full evening
staring at network waterfall charts in Chrome DevTools before
realising the key path was resolving to the wrong directory.
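For reference, the keyinfo file that matches this layout has exactly three lines; the local path and the IV below are made-up examples:

```
../enc.key
/path/to/build/enc.key
0123456789abcdef0123456789abcdef
```

The first line is what FFmpeg writes into the playlist's EXT-X-KEY tag, which is why it has to resolve correctly relative to 480p/stream.m3u8 rather than to anything FFmpeg sees at encode time.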
The watermark was its own category of frustration. I wanted the
site domain burned into every frame — subtle, low opacity, bottom
right corner. FFmpeg's drawtext filter handles this, and it's
flexible enough to scale the text relative to the video height so
it stays proportional across all four quality tiers. The filter
string looks like someone encrypted it themselves:
drawtext=text='plutonicrainbows.com':fontsize=h*0.025:fontcolor=white@0.30:shadowcolor=black@0.15:shadowx=1:shadowy=1:x=(w-text_w-20):y=(h-text_h-20).
It works, but when you're chaining it with the scale filter for
resolution targeting — scale=-2:720,drawtext=... — the order
matters and the comma-separated syntax doesn't forgive stray
whitespace. I had a version that worked perfectly at 1080p and
produced garbled output at 480p because the scale filter was
receiving the wrong input dimensions. The fix was reordering the
filter chain. The debugging was two hours of staring at pixel soup.
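The working order, as a single -vf argument for the 480p tier (scale first, so drawtext sizes and positions the text against the output frame):

```sh
-vf "scale=-2:480,drawtext=text='plutonicrainbows.com':fontsize=h*0.025:fontcolor=white@0.30:shadowcolor=black@0.15:shadowx=1:shadowy=1:x=(w-text_w-20):y=(h-text_h-20)"
```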
Then came the client-side player. Safari supports HLS natively
through the video element — you just point the src at the
.m3u8 file and it plays. Every other browser needs
hls.js, a JavaScript
library that implements HLS via Media Source Extensions. The
dual-path architecture isn't complicated in principle. If
hls.js is available and MSE is supported, use it. Otherwise, check
if the browser can play application/vnd.apple.mpegurl natively,
and use that. The complication is that these two paths behave
differently in ways that matter. With hls.js, you get fine-grained
control — you can lock the quality tier, set bandwidth estimation
defaults, handle specific error events. The native Safari path
gives you a video element and a prayer. You can't force max quality
on native HLS. You can't get meaningful error information. And
iOS Safari doesn't support MSE at all,
which means hls.js won't load, which means you're stuck with
whatever quality Safari decides is appropriate based on its own
internal bandwidth estimation.
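The branch itself is short. A sketch, with the element and URL as placeholders:

```js
const video = document.querySelector('#lightbox-video'); // placeholder id
const src = 'videos/.../master.m3u8';                    // placeholder URL

if (window.Hls && Hls.isSupported()) {
  // MSE path: hls.js, with control over levels, errors, and ABR.
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Native path (Safari): a video element and a prayer.
  video.src = src;
}
```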
For fifteen-second clips, this mismatch was particularly annoying.
The whole point of locking to the highest quality tier is that short
videos don't benefit from ABR ramp-up — by the time the adaptive
algorithm has measured bandwidth and stepped up to a higher tier,
the clip is nearly finished. I set abrEwmaDefaultEstimate to
50 Mbps in hls.js to force it straight to the top tier on page
load. Safari users get whatever Safari gives them.
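The nudge is a single config field; the estimate is expressed in bits per second:

```js
const hls = new Hls({
  // Seed the bandwidth estimator high so playback starts
  // at the top tier instead of ramping up through it.
  abrEwmaDefaultEstimate: 50000000, // 50 Mbps
});
```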
The lightbox player itself needed to handle a surprising number of
edge cases. Autoplay policies mean the video has to start muted.
The overlay should fade in immediately but the video element should
stay hidden until the first frame is actually decoded — otherwise
you get a flash of black rectangle before content appears. I used
the playing event to reveal the video, with a four-second
fallback timeout in case the event never fires. The progress bar
is manually updated via setInterval because the native timeupdate
events fire too infrequently for a smooth visual. Right-click is
disabled on the video element. The controlsList attribute strips
the download button from native controls. None of this is real DRM
— anyone sufficiently determined can still capture the stream. But
it raises the effort from "right-click, save" to "actually write
code," which is enough for a personal blog.
Deployment surfaced the final batch of surprises. The .m3u8
playlist files need to be gzipped and served with the right content
type. The .ts segments need appropriate cache headers. And the
encryption key files — those sixteen-byte .key files — need
Cache-Control: no-store so that if I ever re-transcode a video,
browsers don't serve a stale key that can't decrypt the new
segments. I'd already been through the
CloudFront HTTP/2 configuration
saga, so I knew the CDN layer could hold surprises. The .key file
caching caught me out anyway. Stale encryption keys produce the
same unhelpful "FragParsingError" as a missing key, which meant
another round of DevTools archaeology.
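As a sketch of the upload rules (bucket and paths are placeholders; the playlist and segment cache lifetimes are my choices, not gospel):

```sh
# Playlists: pre-compress, then upload with matching headers
gzip -9 480p/stream.m3u8   # produces 480p/stream.m3u8.gz
aws s3 cp 480p/stream.m3u8.gz s3://bucket/videos/clip/480p/stream.m3u8 \
  --content-type application/vnd.apple.mpegurl \
  --content-encoding gzip \
  --cache-control "max-age=300"

# Segments: written once, cached hard
aws s3 cp 480p/ s3://bucket/videos/clip/480p/ --recursive \
  --exclude "*" --include "*.ts" \
  --content-type video/mp2t \
  --cache-control "max-age=31536000, immutable"

# Keys: never cached, so a re-transcode can't strand a stale key
aws s3 cp enc.key s3://bucket/videos/clip/enc.key \
  --content-type application/octet-stream \
  --cache-control "no-store"
```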
The whole system works through graceful degradation. No FFmpeg on
the build machine? Video processing is skipped entirely and the
links fall back to pointing at the source MP4 files. No
video_processor.py module? Caught by an ImportError, build
continues. No videos directory? No-op. I learned from the
forty-five bugs audit
that a static site generator needs to handle missing dependencies
without falling over, and the video pipeline follows that pattern.
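The guard clauses follow the same shape; a sketch, with process_all standing in for whatever the real entry point is called:

```python
import shutil

try:
    import video_processor
except ImportError:
    video_processor = None  # module missing: build continues without video

def build_videos(videos_dir, output_dir):
    if video_processor is None:
        return  # links fall back to the source MP4 files
    if shutil.which("ffmpeg") is None:
        return  # no FFmpeg on the build machine: skip processing
    if not videos_dir.exists():
        return  # no videos directory: no-op
    video_processor.process_all(videos_dir, output_dir)  # hypothetical name
```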
The opaque URL scheme was a late addition that I'm glad I thought
of. Instead of exposing file paths in the HTML — which would let
someone construct the master playlist URL and bypass the lightbox
entirely — the build script generates a six-character content hash
for each video and rewrites the anchor tags to use
#video-{hash} with a data-video-id attribute. The JavaScript
player reads the data attribute and constructs the HLS URL
internally. The actual file structure is never visible in the page
source. Again, not real security. But another layer of friction.
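The hash itself is a one-liner. A sketch, assuming SHA-256 (the digest choice and the example hash are illustrative):

```python
import hashlib
from pathlib import Path

def video_id(path: Path) -> str:
    """Six-character content hash for the #video-{hash} anchor scheme."""
    return hashlib.sha256(path.read_bytes()).hexdigest()[:6]

# The anchor then becomes:
#   <a href="#video-3fa9c1" data-video-id="3fa9c1">watch</a>
```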
Was it worth it? For a personal blog with maybe a few hundred
readers, building a four-tier HLS pipeline with per-video AES-128
encryption is — and I'm being generous to myself here — completely
disproportionate. A <video> tag pointing at an MP4 would have
been fine. But the MP4 approach bothered me, and sometimes that's
reason enough. The fifteen-second clips play smoothly across every
browser I've tested, the watermark is visible without being
obnoxious, and the encryption keys rotate per video. The whole
thing adds about forty seconds to the build for each new video,
which is nothing given that the
image pipeline already takes
longer than that.
The drawtext filter string still looks like someone sat on a keyboard. Some things can't be made elegant. They can only be made to work.
Sources:
- Apple HTTP Live Streaming - Apple Developer
- hls.js — JavaScript HLS Client - GitHub
- Native HLS Playback: The Complete 2025 Developer Guide - VideoSDK
- HLS Encryption: How to Encrypt Video Streams with AES-128 - Dacast
- FFmpeg drawtext Filter for Dynamic Overlays - OTTVerse