Every video goes through a multi-stage lifecycle from detection to completion. BitBonsai manages this automatically with zero user intervention required.
What is a job? A job is a single video file being encoded. Each video in your library becomes one job. See glossary for more details.
Most jobs go: DETECTED → HEALTH_CHECK → NEEDS_DECISION → QUEUED → ENCODING → VERIFYING → COMPLETED. The whole process is automatic after you click “Queue Selected.”
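The lifecycle is easiest to picture as a small state machine. The sketch below is illustrative only (Python is assumed; the `JobStatus` enum and `next_status` helper are not part of BitBonsai's documented API):

```python
from enum import Enum

class JobStatus(Enum):
    DETECTED = "DETECTED"
    HEALTH_CHECK = "HEALTH_CHECK"
    NEEDS_DECISION = "NEEDS_DECISION"
    QUEUED = "QUEUED"
    ENCODING = "ENCODING"
    VERIFYING = "VERIFYING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"
    CORRUPTED = "CORRUPTED"

# Happy-path order; FAILED and CORRUPTED branch off HEALTH_CHECK, ENCODING,
# and VERIFYING as described in the recovery sections below.
HAPPY_PATH = [
    JobStatus.DETECTED,
    JobStatus.HEALTH_CHECK,
    JobStatus.NEEDS_DECISION,
    JobStatus.QUEUED,
    JobStatus.ENCODING,
    JobStatus.VERIFYING,
    JobStatus.COMPLETED,
]

def next_status(current: JobStatus) -> JobStatus:
    """Return the next happy-path status; terminal or error states stay put."""
    if current not in HAPPY_PATH:
        return current
    i = HAPPY_PATH.index(current)
    return HAPPY_PATH[min(i + 1, len(HAPPY_PATH) - 1)]
```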
Health checks prevent encoding failures by detecting corrupt files before wasting CPU time. Failed health checks are logged with detailed error messages.
Error Category: SOURCE_CORRUPTED
Message: Invalid data found when processing input
Suggestion: Source file may be corrupted. Try re-downloading or skip this file.
FFmpeg stderr: [mov,mp4,m4a,3gp,3g2,mj2 @ 0x...] moov atom not found
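A decode-only FFmpeg pass is one common way to implement a check like this. The sketch below is a rough approximation, not BitBonsai's actual implementation; it assumes `ffmpeg` is on the PATH and treats any decode error on stderr (such as the missing moov atom above) as a failed check:

```python
import subprocess

def health_check(path: str) -> tuple[bool, str]:
    """Decode the file without writing output; error lines on stderr usually
    indicate a corrupt source (e.g. a missing moov atom)."""
    result = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path, "-f", "null", "-"],
        capture_output=True,
        text=True,
    )
    stderr = result.stderr.strip()
    healthy = result.returncode == 0 and not stderr
    return healthy, stderr

# Example:
# ok, detail = health_check("/library/movies/example.mkv")
# if not ok:
#     print(f"SOURCE_CORRUPTED: {detail}")
```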
Click any job in the Encoding tab to view detailed information:
| Field | Description |
| --- | --- |
| Source File | Full file path in library |
| Target Codec | HEVC (H.265), AV1, or VP9 |
| Progress | Percentage complete (0-100%) |
| Speed | Encoding speed in FPS |
| Time Remaining | ETA based on current speed |
| Original Size | File size before encoding |
| Output Size | Estimated file size after encoding |
| Original Codec | Source codec (usually H.264) |
| Resolution | Video resolution (1080p, 4K, etc.) |
| Original Bitrate | Source video bitrate (Mbps) |
| Target Bitrate | Output video bitrate (Mbps) |
| Worker Node | Which node is processing this job |
| Started At | When encoding began |
| Error Message | (FAILED jobs only) FFmpeg error details |
Pro tip: Sort jobs by “Time Remaining” to see which jobs finish soonest. Large 4K movies may take hours, while 1080p TV episodes finish in 5-10 minutes.
Problem: Container restarted mid-encoding → jobs stuck in ENCODING status
Solution: On backend startup, BitBonsai finds all jobs with status ENCODING and resets them to QUEUED
When it runs: Every backend container restart
User action: None (automatic)
Logs:
🔄 Orphaned job recovery: Reset 3 stuck ENCODING jobs to QUEUED
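A minimal sketch of that startup pass, assuming a SQL-backed job store (the `jobs` table and column names here are hypothetical, not BitBonsai's documented schema):

```python
import sqlite3

def recover_orphaned_jobs(conn: sqlite3.Connection) -> int:
    """Reset jobs left in ENCODING by a previous container run back to QUEUED."""
    cur = conn.execute(
        "UPDATE jobs SET status = 'QUEUED', progress = 0 WHERE status = 'ENCODING'"
    )
    conn.commit()
    if cur.rowcount:
        print(f"🔄 Orphaned job recovery: Reset {cur.rowcount} stuck ENCODING jobs to QUEUED")
    return cur.rowcount

# Called once when the backend starts, before any new encodes are scheduled.
```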
Problem: NFS mount not ready → job marks file as “not found” → FAILED
Solution: Before marking FAILED, retry 10 times with 2-second delays (20 seconds total)
When it runs: During encoding temp file checks
User action: None (automatic)
Logs:
🔄 Temp file not found, retrying (attempt 3/10)...
✓ Temp file detected after 6 seconds (NFS mount recovery)
This prevents false FAILED status during NFS mount hiccups or slow network storage.
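A bounded polling loop is enough to implement this. The helper below is a hypothetical sketch (the `wait_for_temp_file` name and print-based logging are illustrative):

```python
import os
import time

def wait_for_temp_file(path: str, attempts: int = 10, delay_s: float = 2.0) -> bool:
    """Poll for a file that may appear late because an NFS mount is still coming up.

    Returns True as soon as the file is visible, False after all attempts
    (10 attempts x 2 s delays ≈ 20 s before the job would be marked FAILED).
    """
    for attempt in range(1, attempts + 1):
        if os.path.exists(path):
            return True
        print(f"🔄 Temp file not found, retrying (attempt {attempt}/{attempts})...")
        time.sleep(delay_s)
    return False
```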
Problem: Network hiccup during health check → false CORRUPTED status
Solution: Retry health check 5 times with 2-second delays (10 seconds total)
When it runs: During HEALTH_CHECK and VERIFYING stages
User action: None (automatic)
Why this matters: Prevents healthy files from being falsely marked CORRUPTED and needlessly re-checked later
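The same bounded-retry pattern covers this case, just with different parameters. A minimal, generic sketch (`check_with_retries` is hypothetical; the example reuses the illustrative `health_check` from the earlier sketch):

```python
import time
from typing import Callable

def check_with_retries(check: Callable[[], bool], attempts: int = 5, delay_s: float = 2.0) -> bool:
    """Run a health/verify check up to `attempts` times, pausing between failures;
    only a failure on every attempt is treated as real (5 x 2 s = 10 s)."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay_s)
    return False

# Example, reusing the illustrative health_check() defined above:
# is_healthy = check_with_retries(lambda: health_check("/library/show/ep01.mkv")[0])
```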
Problem: Files marked CORRUPTED during NFS hiccups are often actually healthy
Solution: Every hour, BitBonsai finds all CORRUPTED jobs and resets them to QUEUED for re-validation
When it runs: Hourly (cron job in backend)
User action: None (automatic)
Logs:
🔄 Auto-requeue: Found 12 CORRUPTED job(s) - resetting for re-validation
✓ Re-validated 12 jobs: 8 HEALTHY, 4 still CORRUPTED
Why hourly? NFS mounts often fail temporarily during network issues. Hourly re-checks catch files that become accessible again.
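A sketch of what the hourly pass might look like, again assuming a hypothetical SQL-backed `jobs` table; the plain sleep loop stands in for whatever scheduler the backend actually uses:

```python
import sqlite3
import time

def requeue_corrupted_jobs(conn: sqlite3.Connection) -> int:
    """Send every CORRUPTED job back through health checking by resetting it to QUEUED."""
    cur = conn.execute("UPDATE jobs SET status = 'QUEUED' WHERE status = 'CORRUPTED'")
    conn.commit()
    if cur.rowcount:
        print(f"🔄 Auto-requeue: Found {cur.rowcount} CORRUPTED job(s) - resetting for re-validation")
    return cur.rowcount

def run_hourly(conn: sqlite3.Connection) -> None:
    """Stand-in for the backend's hourly cron trigger."""
    while True:
        requeue_corrupted_jobs(conn)
        time.sleep(3600)
```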
Problem: FFmpeg crashes mid-encode but process doesn’t exit → job stuck at same progress for hours
Solution: If progress hasn’t changed in 15 minutes, job is marked FAILED and auto-retried
When it runs: Background watchdog every 5 minutes
User action: None (automatic)
Logs:
⚠️ Stuck job detected: Job #123 at 45% for 20 minutes → FAILED (auto-retry)
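A sketch of the watchdog logic, assuming hypothetical in-memory job records that track when progress last changed:

```python
import time
from dataclasses import dataclass

STALL_THRESHOLD_S = 15 * 60   # progress unchanged for 15 minutes → considered stuck
CHECK_INTERVAL_S = 5 * 60     # watchdog wakes up every 5 minutes

@dataclass
class EncodingJob:
    job_id: int
    progress: float               # 0-100
    last_progress_change: float   # time.time() of the last progress update

def find_stuck_jobs(jobs: list[EncodingJob], now: float) -> list[EncodingJob]:
    """Return ENCODING jobs whose progress has not moved within the threshold."""
    return [j for j in jobs if now - j.last_progress_change >= STALL_THRESHOLD_S]

def watchdog_pass(jobs: list[EncodingJob]) -> None:
    now = time.time()
    for job in find_stuck_jobs(jobs, now):
        minutes = int((now - job.last_progress_change) // 60)
        print(f"⚠️ Stuck job detected: Job #{job.job_id} at {int(job.progress)}% "
              f"for {minutes} minutes → FAILED (auto-retry)")
        # In the real backend, the job would be marked FAILED here and re-queued.
```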