Seekable page-to-video capture engine using Chrome’s BeginFrame API.
The engine package provides the low-level video capture pipeline: it loads an HTML page in headless Chrome, seeks to each frame independently, and captures pixel buffers using Chrome’s HeadlessExperimental.beginFrame API. This is the layer that makes Hyperframes rendering deterministic.
Most users should NOT use the engine directly. Use the CLI (npx hyperframes render) or the producer package instead — they handle runtime injection, audio mixing, and encoding for you.
Use @hyperframes/engine when you need to:
- Build a custom rendering pipeline with full control over frame capture
- Integrate Hyperframes capture into an existing video processing system
- Capture individual frames (e.g., for thumbnails or sprite sheets) without encoding to video
- Implement a custom encoding backend (not FFmpeg)
Use a different package if you want to:
- Render an HTML composition to a finished MP4 or WebM — use the producer or CLI
- Preview compositions in the browser — use the CLI or studio
The engine implements a seek-and-capture loop that is fundamentally different from screen recording:
1. **Launch headless Chrome.** The engine starts chrome-headless-shell, a minimal headless Chrome binary optimized for programmatic control via the Chrome DevTools Protocol (CDP).
2. **Load the composition.** Your HTML composition is loaded into a browser page. The Hyperframes runtime is injected to manage timeline seeking.
3. **Seek to each frame.** For every frame in the video (e.g., 900 frames for a 30-second video at 30fps), the engine calls renderSeek(time) to advance the composition to the exact timestamp. No wall clock is involved; each frame is independently positioned.
4. **Capture via BeginFrame.** Chrome's HeadlessExperimental.beginFrame API captures the compositor output as a pixel buffer. This produces pixel-perfect frames without any screen-recording artifacts.
5. **Hand off frames.** Captured frame buffers are passed to a consumer — typically FFmpeg (via the producer) for encoding into MP4, but you can provide your own consumer.
This approach guarantees deterministic rendering: the same HTML always produces an identical video, regardless of system load or timing.
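The seek loop above reduces to simple arithmetic over frame indices. A minimal sketch of the timeline math (the function name is illustrative, not part of the engine API):

```typescript
// Compute the timestamp for each frame of a fixed-fps timeline.
// Frame i is positioned at exactly i / fps seconds — no wall clock involved.
function frameTimestamps(durationSeconds: number, fps: number): number[] {
  const frameCount = Math.round(durationSeconds * fps);
  return Array.from({ length: frameCount }, (_, i) => i / fps);
}

// A 30-second video at 30fps yields 900 independently seekable frames
const times = frameTimestamps(30, 30);
```

Because every timestamp is derived from the frame index rather than elapsed time, re-rendering the same composition always visits exactly the same set of timestamps.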
```typescript
import { resolveConfig, DEFAULT_CONFIG } from '@hyperframes/engine';
import type { EngineConfig } from '@hyperframes/engine';

// Use defaults
const config = DEFAULT_CONFIG;

// Or resolve with overrides
const resolved = resolveConfig({
  // ... custom options
});
```
```typescript
import {
  acquireBrowser,
  releaseBrowser,
  resolveHeadlessShellPath,
  buildChromeArgs,
} from '@hyperframes/engine';

// Acquire a browser instance (creates or reuses from pool)
const browser = await acquireBrowser();

// Get the Chrome binary path
const chromePath = await resolveHeadlessShellPath();

// Release when done
await releaseBrowser(browser);
```
Extract frames from source video files for injection into the browser:
```typescript
import {
  parseVideoElements,
  extractAllVideoFrames,
  getFrameAtTime,
  createFrameLookupTable,
  FrameLookupTable,
} from '@hyperframes/engine';

// Parse video elements from HTML
const videos = parseVideoElements(html);

// Extract all frames from a video
const frames = await extractAllVideoFrames(videoPath, { fps: 30 });

// Create a lookup table for fast frame access
const lookup = createFrameLookupTable(frames);
const frame = lookup.getFrameAtTime(5.0);
```
```typescript
import { parseAudioElements, processCompositionAudio } from '@hyperframes/engine';

// Parse audio elements from HTML
const audioElements = parseAudioElements(html);

// Process and mix all audio tracks
const mixResult = await processCompositionAudio({ audioElements, duration, fps });
```
Serve composition files over HTTP for the browser to load:
```typescript
import { createFileServer } from '@hyperframes/engine';

const server = await createFileServer({ root: './my-video', port: 0 });
// server.url, server.port

// ... use server.url as the composition URL

await server.close();
```
The engine exports two layers of HDR support: color-space utilities that classify sources and configure the FFmpeg encoder, and a WebGPU readback runtime for capturing CSS-animated DOM directly into HDR.

For end-to-end HDR rendering (HDR video and image sources composited into an HDR10 MP4), use the producer or the CLI's --hdr flag — see HDR Rendering. The APIs below are for custom integrations.
```typescript
import {
  isHdrColorSpace,
  detectTransfer,
  analyzeCompositionHdr,
  getHdrEncoderColorParams,
  DEFAULT_HDR10_MASTERING,
} from '@hyperframes/engine';
import type { HdrTransfer, HdrEncoderColorParams, HdrMasteringMetadata } from '@hyperframes/engine';

// Classify a single source from its ffprobe color space
isHdrColorSpace(colorSpace); // boolean — true for BT.2020 / PQ / HLG
detectTransfer(colorSpace);  // 'pq' | 'hlg' (gate on isHdrColorSpace first)

// Pick the dominant transfer across many sources
analyzeCompositionHdr([cs1, cs2]); // { hasHdr, dominantTransfer: 'pq' | 'hlg' | null }

// Build the FFmpeg color params + HDR10 static metadata for x265
const params = getHdrEncoderColorParams('pq');
// {
//   colorPrimaries: 'bt2020',
//   colorTrc: 'smpte2084',
//   colorspace: 'bt2020nc',
//   pixelFormat: 'yuv420p10le',
//   x265ColorParams: 'colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=...:max-cll=1000,400',
//   mastering: { masterDisplay: '...', maxCll: '1000,400' },
// }
```
getHdrEncoderColorParams always includes both color tagging and the HDR10 static metadata (mastering display + content light level). Without that metadata, downstream players treat the file as SDR BT.2020 and tone-map incorrectly. Pass a custom HdrMasteringMetadata if you have measured per-content values; otherwise the conservative DEFAULT_HDR10_MASTERING defaults match how most HDR10 grading suites tag content.
For capturing CSS-animated DOM directly into HDR (no FFmpeg source involved), the engine exposes a separate WebGPU pipeline:
```typescript
import {
  launchHdrBrowser,
  buildHdrChromeArgs,
  initHdrReadback,
  uploadAndReadbackHdrFrame,
  float16ToPqRgb,
} from '@hyperframes/engine';

// Launch headed Chrome with WebGPU enabled
const { browser, page } = await launchHdrBrowser({ width: 1920, height: 1080 });

// Inject the WebGPU readback runtime
const ok = await initHdrReadback(page, 1920, 1080);

// For each frame: upload float16 pixels, read back float16 RGBA
const { rgba16, bytesPerRow } = await uploadAndReadbackHdrFrame(page, float16Base64);

// Convert linear float16 → PQ-encoded 16-bit RGB suitable for piping into ffmpeg/x265
const pqRgb = float16ToPqRgb(rgba16, width, height, bytesPerRow);
```
This path requires headed Chrome with --enable-unsafe-webgpu — WebGPU is unavailable in chrome-headless-shell. It is not used by the default --hdr render pipeline (which extracts HDR pixels from sources via FFmpeg and composites in Node). Use it only for advanced custom pipelines that need CSS animations driving HDR pixel output.
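The PQ conversion that float16ToPqRgb performs corresponds to the SMPTE ST 2084 inverse EOTF. A standalone sketch of that transfer function, with constants taken from the spec (this shows the standard formula, not necessarily the engine's exact implementation):

```typescript
// SMPTE ST 2084 (PQ) inverse EOTF: map linear light (1.0 = 10,000 nits)
// to a nonlinear [0, 1] signal suitable for 10/16-bit quantization.
const m1 = 2610 / 16384;        // 0.1593017578125
const m2 = (2523 / 4096) * 128; // 78.84375
const c1 = 3424 / 4096;         // 0.8359375
const c2 = (2413 / 4096) * 32;  // 18.8515625
const c3 = (2392 / 4096) * 32;  // 18.6875

function linearToPq(linear: number): number {
  const y = Math.min(Math.max(linear, 0), 1); // clamp to [0, 1]
  const yM1 = Math.pow(y, m1);
  return Math.pow((c1 + c2 * yM1) / (1 + c3 * yM1), m2);
}

// Quantize a PQ-encoded value to a 16-bit sample for x265
const toU16 = (v: number) => Math.round(linearToPq(v) * 65535);
```

Note that peak linear light (1.0, i.e., 10,000 nits) maps exactly to 1.0 in PQ, which is why the encoder's pixel format and mastering metadata together determine how players interpret the signal.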
The engine communicates with the browser page via the window.__hf protocol. Any page that implements this protocol can be captured by the engine — you are not limited to Hyperframes compositions.
```typescript
// The page must expose this on window.__hf
interface HfProtocol {
  duration: number;          // Total duration in seconds
  seek(time: number): void;  // Seek to a specific time
  media?: HfMediaElement[];  // Optional media element declarations
}

interface HfMediaElement {
  elementId: string;     // DOM element ID
  src: string;           // Media source URL
  startTime: number;     // Start time on timeline
  endTime: number;       // End time on timeline
  mediaOffset?: number;  // Playback offset in source
  volume?: number;       // Volume (0-1)
  hasAudio?: boolean;    // Whether element has audio
}
```
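The timeline fields combine in one natural way during capture: an element is active when startTime ≤ t < endTime, and its source is positioned at t - startTime + mediaOffset. A sketch of that mapping (the helper is illustrative, not an engine export):

```typescript
interface HfMediaElement {
  elementId: string;
  src: string;
  startTime: number;
  endTime: number;
  mediaOffset?: number;
  volume?: number;
  hasAudio?: boolean;
}

// Map a timeline timestamp to the position inside a media element's
// source file, or null when the element is not active at that time.
function sourceTimeAt(el: HfMediaElement, time: number): number | null {
  if (time < el.startTime || time >= el.endTime) return null;
  return time - el.startTime + (el.mediaOffset ?? 0);
}

// A clip shown on the timeline from 2s to 6s, starting 1s into its source
const clip: HfMediaElement = {
  elementId: 'intro-clip',
  src: 'intro.mp4',
  startTime: 2,
  endTime: 6,
  mediaOffset: 1,
};
```

At timeline time 3s this clip is active and positioned 2s into intro.mp4; outside the 2s–6s window it contributes nothing.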
Traditional screen capture records at wall-clock speed — if your system is under load, frames get dropped. The engine uses Chrome’s HeadlessExperimental.beginFrame to explicitly advance the compositor, producing each frame on demand. This means:
- No dropped frames — every frame is captured
- No timing dependency — a 60-second video does not take 60 seconds to capture
- Pixel-perfect output — the compositor produces the exact pixels it would display
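In raw CDP terms, driving the compositor means sending one HeadlessExperimental.beginFrame command per frame, advancing the virtual frame clock by exactly one interval each time. A sketch of how those parameters could be built (the helper is illustrative; the parameter names follow the CDP HeadlessExperimental domain):

```typescript
interface BeginFrameParams {
  frameTimeTicks: number;                 // virtual clock for this frame, in ms
  interval: number;                       // frame interval, in ms
  screenshot: { format: 'png' | 'jpeg' }; // ask the compositor to return pixels
}

// Build beginFrame parameters for frame i of a fixed-fps capture.
// The virtual clock advances by exactly 1000/fps ms per frame, so the
// compositor's notion of time is fully decoupled from the wall clock.
function beginFrameParams(frameIndex: number, fps: number, startTicks = 0): BeginFrameParams {
  const interval = 1000 / fps;
  return {
    frameTimeTicks: startTicks + frameIndex * interval,
    interval,
    screenshot: { format: 'png' },
  };
}

// e.g. await cdpSession.send('HeadlessExperimental.beginFrame', beginFrameParams(i, 30));
```

Because the clock only moves when a beginFrame command is sent, a heavily loaded machine renders the same frames more slowly but never differently.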
The engine requires chrome-headless-shell, which is included when you install the package. It uses a pinned Chrome version to ensure consistent rendering across environments. For fully deterministic output (including fonts), use Docker mode via the producer.