How we test, score, and hold ourselves to account.
A review is only as trustworthy as the process behind it. Everything on this page — the weights, the brief, the panel, the update cadence — is public so you can judge whether our scores deserve your attention.
Five dimensions, with published weights.
Every tool scored on this site is graded across the same five dimensions. Weights have been stable since February 2026 — we do not re-weight the rubric to change a verdict.
Output quality
Coherence, motion fidelity, prompt adherence, and how often the first render is usable.
Ease of use
Onboarding friction, prompt UX, editor ergonomics, and time-to-first-render for a new user.
Speed
Render time, queue wait times under load, and export throughput.
Value for money
Features per dollar, benchmarked against the closest comparable competitor tier.
Customer support
Response time, resolution quality, and the availability of self-serve documentation.
Math: each dimension is scored out of 10 and multiplied by its weight; the weights sum to 100, so dividing the weighted total by 100 yields a final score on a 0-10 scale. Deevid AI's current score of 9.2 reflects ease of use 9.5 · output quality 9.4 · speed 8.8 · value 9.1 · support 8.9, under these exact weights.
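To make the arithmetic concrete, here is a minimal sketch of the formula. The weights below are illustrative placeholders that happen to sum to 100; they are not our published weights. The per-dimension scores are Deevid AI's current marks from above.

```python
# Sketch of the rubric math: a weighted sum of 0-10 dimension scores,
# divided by 100 (the weights sum to 100), yields a 0-10 final score.
# NOTE: these weights are placeholders for illustration only.
WEIGHTS = {"output": 30, "ease": 25, "speed": 15, "value": 15, "support": 15}
SCORES = {"output": 9.4, "ease": 9.5, "speed": 8.8, "value": 9.1, "support": 8.9}

assert sum(WEIGHTS.values()) == 100
final = sum(SCORES[d] * WEIGHTS[d] for d in WEIGHTS) / 100
print(f"{final:.1f}")  # -> 9.2 under these example weights
```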
The same 12 prompts, every time.
Every tool we compare runs against the identical 12-prompt brief, covering five categories. We publish the category mix; the specific prompts rotate quarterly to prevent gaming.
Each prompt gets three generations per tool, so a full run produces 36 clips per tool. Outputs are scored blind by our panel (see below) against a shared rubric: the panel does not know which tool produced which clip.
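As a sketch of what that blinding step could look like: clips are shuffled and relabelled with random IDs before the panel sees them, and the ID-to-tool key stays sealed until all scores are in. The tool names and file paths here are assumptions for illustration, not our actual pipeline.

```python
import random
import uuid

# Hypothetical blinding step: strip tool identity from rendered clips
# before panel review. Tool names and file paths are illustrative.
clips = [
    ("tool_a", "prompt03_run2.mp4"),
    ("tool_b", "prompt03_run2.mp4"),
    ("tool_a", "prompt07_run1.mp4"),
]

random.shuffle(clips)  # remove any ordering signal

key = {}  # blind_id -> (tool, source file); sealed until scoring closes
for tool, path in clips:
    blind_id = uuid.uuid4().hex[:8]
    key[blind_id] = (tool, path)
    # The panel receives only "<blind_id>.mp4" plus the shared rubric.
    print(f"{blind_id}.mp4")
```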
Real professionals, on a panel small enough to stay consistent.
Four panellists score every review cycle:
- Ad agency background. Grades on client deliverability more than aesthetic novelty.
- Narrative-coherence specialist. Ruthlessly grades character consistency and temporal logic.
- Ecommerce focus. Judges whether generated product shots meet commercial-ready standards.
- Represents the "will this ship on TikTok tomorrow" standard. The fastest reviewer on the panel.
Panellists are paid a flat fee per review cycle, independent of which tool is scored, and have no knowledge of our affiliate relationships while scoring.
We re-test every 90 days.
AI tools change fast. A review frozen in time is a review you can't trust six months later. Every published review here is re-run against the current version of the tool at least quarterly, and scores move up or down accordingly.
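A minimal sketch of that cadence check, assuming a simple record of last-tested dates (the review slugs, dates, and field layout are hypothetical):

```python
from datetime import date, timedelta

RETEST_INTERVAL = timedelta(days=90)

# Hypothetical review records; only the last-tested date matters here.
reviews = {
    "deevid-ai": date(2026, 1, 15),
    "tool-b": date(2025, 9, 2),
}

today = date.today()
for slug, last_tested in reviews.items():
    if today - last_tested > RETEST_INTERVAL:
        print(f"{slug}: stale, re-test due")
```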
Found an issue with a score?
We respond to every correction request. If you have evidence that a score is outdated or inaccurate, email hello@deevidreview.com with specifics; we'll re-test and update if warranted. Read more on our about page or in our full affiliate disclosure.