We don't believe in mystery scoring. Here's exactly what happens between upload and insight.
Pick any short-form video from your camera roll — up to 3 minutes, 50MB. The app streams it to our servers in chunks so the upload is fast even on LTE.
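Here's a rough sketch of that chunked upload in Python; the endpoint, upload_id parameter, and chunk size are illustrative placeholders, not our production API:

```python
import requests

CHUNK_SIZE = 4 * 1024 * 1024  # small chunks keep retries cheap on flaky LTE

def upload_video(path: str, upload_id: str) -> None:
    with open(path, "rb") as f:
        part = 0
        while chunk := f.read(CHUNK_SIZE):
            resp = requests.post(
                "https://api.example.com/uploads",          # placeholder URL
                params={"upload_id": upload_id, "part": part},
                data=chunk,
                timeout=30,
            )
            resp.raise_for_status()  # fail fast so the client can retry just this part
            part += 1
```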
ffmpeg extracts 10 keyframes at even intervals across the video. OpenAI Whisper transcribes the full audio track with a confidence filter that rejects hallucinations (music-only clips produce zero transcript, not fake dialogue).
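A rough sketch of that extraction step, assuming the ffmpeg CLI and the open-source openai-whisper package; the frame count and confidence thresholds mirror the description above, but the exact production values may differ:

```python
import subprocess
import whisper  # open-source openai-whisper package

def extract_keyframes(video_path: str, duration_s: float, n: int = 10) -> None:
    # One frame every duration/n seconds -> n evenly spaced keyframes.
    interval = duration_s / n
    subprocess.run(
        ["ffmpeg", "-i", video_path,
         "-vf", f"fps=1/{interval:.3f}", "-frames:v", str(n),
         "frame_%02d.jpg"],
        check=True,
    )

def transcribe(video_path: str) -> str:
    model = whisper.load_model("base")
    result = model.transcribe(video_path)
    # Confidence filter: drop segments Whisper itself flags as likely
    # non-speech or low-probability, so music-only clips come back empty.
    kept = [
        seg["text"]
        for seg in result["segments"]
        if seg["no_speech_prob"] < 0.5 and seg["avg_logprob"] > -1.0
    ]
    return " ".join(kept).strip()
```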
Anthropic's Claude Sonnet 4 receives the keyframes and transcript. It scores 7 categories with defined rubrics, identifies 8-12 specific passes/fails, writes 3 savage roast lines referencing exact moments, and — if you provided performance data — reconciles the score against reality.
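A simplified sketch of that call using Anthropic's Python SDK; the model alias, prompt, and response handling are stand-ins for illustration, and the real rubric prompt is far longer:

```python
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def score_video(keyframe_paths: list[str], transcript: str) -> str:
    image_blocks = [
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/jpeg",
                "data": base64.b64encode(open(p, "rb").read()).decode(),
            },
        }
        for p in keyframe_paths
    ]
    message = client.messages.create(
        model="claude-sonnet-4-0",   # alias; pin a dated snapshot in production
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": image_blocks + [{
                "type": "text",
                "text": "Score this video against the rubric: 7 categories, "
                        "8-12 specific passes/fails, 3 roast lines tied to "
                        "exact moments.\n\nTranscript:\n" + transcript,
            }],
        }],
    )
    return message.content[0].text  # the model's scored report
```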
An A-to-F letter grade, an overall score out of 100, and a MID / UNMID verdict. Tabbed breakdown: the roast itself, category scores, detailed findings, your ideals comparison (if seeded), and the proof (the transcript, keyframes, and line scores that back every claim).
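For the curious, the report roughly takes this shape; the field names below are illustrative, not a published API contract:

```python
from dataclasses import dataclass

@dataclass
class CategoryScore:
    name: str             # e.g. "Hook" or "Pacing" (illustrative names)
    score: int            # 0-100 within that category
    findings: list[str]   # the specific passes/fails behind the number

@dataclass
class Report:
    grade: str                       # "A" through "F"
    overall: int                     # 0-100
    verdict: str                     # "MID" or "UNMID"
    roast: list[str]                 # 3 lines, each referencing an exact moment
    categories: list[CategoryScore]  # 7 entries
    transcript: str                  # proof tab: what Whisper heard
    keyframes: list[str]             # proof tab: the extracted frames
```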
Apply the notes, shoot a v2, upload again. Your score climbs (or doesn't). Over time, you build the habit of unmidifying every video before you post it. That's the real product.
Radical transparency
We publish our scoring rubric openly. Competitors can't do this without killing their product. We can because the value isn't the formula — it's the execution, the fix loop, and the library.
Read the rubric