TAMIL-FIRST CINEMA INTELLIGENCE

Tamil-first cinema intelligence.

Calibration-honest predictions. 40 KPIs for released films. 200 parameters for screenplays. Confidence signals on every score.

Launch Platform →
How it works ↓
SCREENPLAY · SG-PRED-2026-0142
LIVE
Veyilaan / வெயிலான்
Action-drama · Q2 2026 window
COMMERCIAL OUTCOME (theatrical) · 62% · band ±14%
LEAD-PAIR CHEMISTRY · 81 ±8
DIRECTOR × GENRE FORM · 58 ±12
COMPOSER × MOOD FIT · 74 ±6
COMPETITION LOAD · 34 ±18
calibrated against 600+ released films · bracket-hit rate 0.92 · last refresh 14m ago
[ WHY SIGNALGRID IS DIFFERENT ]

Most cinema-prediction tools headline a single number. We headline the bracket and the band.

Calibration over accuracy

We don't headline a single accuracy number — those are marketing artefacts. Instead every score ships inside a bracket: 0-20%, 20-40%, 40-60%, 60-80%, 80-100%. The hit-rate in each bucket has to match the bucket label. That's the product.

Honest confidence intervals

Sparse signal? You get a wide band and we say so. Strong evidence stack? The band tightens. No prediction is rendered without its uncertainty companion. Decision-makers see exactly how much the model is guessing.
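As an illustrative sketch only (not SignalGrid's actual model), the widen-with-sparse-evidence behaviour can be expressed as a monotone rule, assuming a 1/√n decay clamped between a floor and a ceiling:

```typescript
// Illustrative only: band width as a monotone function of evidence.
// The 1/sqrt(n) decay and the 4..30 point clamp are assumptions for
// this sketch, not SignalGrid's actual model.
function bandWidth(signalsCaptured: number): number {
  const widest = 30; // near-zero evidence: widest band reported
  const floor = 4;   // the band never collapses to zero
  if (signalsCaptured <= 0) return widest;
  return Math.min(widest, Math.max(floor, widest / Math.sqrt(signalsCaptured)));
}
```

The point of the clamp is honesty in both directions: no prediction pretends to be exact, and no band grows wider than "we are guessing".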

Tamil cinema entity graph

Built ground-up for the Tamil industry — directors, composers, DOPs, writers, editors, support cast, lyricists, choreographers, distributors. Each film auto-discovers 15-30 new persons and self-expands the corpus.

[ TWO PARALLEL SCORING ENGINES ]

One for released films. One for unproduced screenplays.

ENGINE A · 40-KPI catalogue scoring

Released Films Engine

INPUT
Released film + public signals (box office, reviews, social, streaming windows, festival circuit, lifetime tail)
OUTPUT
40 calibrated KPIs across commercial, critical, longevity, person-level and franchise dimensions
Opening-weekend index vs predicted
Critic-vs-audience delta
Repeat-viewing coefficient
Lifetime tail multiplier
Person-level contribution split
Genre-cohort percentile
TYPICAL CONFIDENCE BAND · tight band (rich signal)
ENGINE B · 200-parameter predictive engine

Predictive Screenplay Engine

INPUT
Unproduced screenplay + proposed cast/crew package + release window context
OUTPUT
Forward outcome distribution across 200 parameters — calibrated, with explicit confidence band per signal
Lead-pair chemistry score (historical)
Director × genre form curve
Composer × mood fit
Festival-calendar competition load
Star-power decay model
Screenplay-arc structural fit
TYPICAL CONFIDENCE BAND · wider band (pre-production)
[ TAMIL CINEMA ENTITY CORPUS ]

A self-expanding graph of every person who touches a Tamil film.

49 · CREW SEED · directors, composers, DOPs
69 · CREW EXTENDED · writers, editors, VFX, choreographers
127 · SUPPORT CAST · supporting actors corpus
107 · FILMOGRAPHY PULL · actors via Wikipedia
AUTO-DISCOVERY
Each film discovers ~15-30 new persons — lead, female lead, antagonist, comedy, support, director, writer, composer, DOP, editor, action/dance choreographers, VFX, art director, costume, lyricists, singers, producers, distributor.
→ all discovered persons auto-added to keyword_tracking for the news pipeline
ENTITY GRAPH · FILM ↔ PERSONS ↔ ROLES · nodes: Director, Composer, Lead, Antagonist, DOP, Editor, Producer, Distributor
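The auto-discovery step can be sketched as a dedup-and-queue loop. The `Role` union, function name, and person names are illustrative assumptions, and the `keyword_tracking` table is modelled here as an in-memory set:

```typescript
// Illustrative sketch of auto-discovery: each ingested film yields
// persons in fixed roles; names not yet tracked are queued for the
// news pipeline. Role union and function name are assumptions;
// keyword_tracking is modelled as an in-memory set.
type Role =
  | "director" | "writer" | "composer" | "dop" | "editor"
  | "lead" | "antagonist" | "support" | "choreographer"
  | "lyricist" | "producer" | "distributor";

interface DiscoveredPerson { name: string; role: Role; }

const keywordTracking = new Set<string>();

function autoDiscover(persons: DiscoveredPerson[]): string[] {
  const added: string[] = [];
  for (const p of persons) {
    if (!keywordTracking.has(p.name)) {
      keywordTracking.add(p.name); // idempotent: re-ingesting a film adds nothing
      added.push(p.name);
    }
  }
  return added;
}
```

Because the set membership check runs before the insert, re-crawling a film is a no-op: this is what lets the corpus self-expand without double counting.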
[ WIKIPEDIA INGEST PIPELINE ]

Idempotent, resumable, checkpoint-aware.

The bulk crawler walks every Film row, extracts cast / crew / infobox data, and checkpoints to disk. If a run stops at film 1,247, the next run picks up from that checkpoint: no re-fetches, no double counting, no surprises.

ingest-results / run-2026-05-16T09:22Z
$ npm run ingest:batch
→ resuming from checkpoint film 1,247
→ entities queued: 8,412
→ coverage target: 92 signals
[1,248] Aaranya Kaandam   ✓ 27 persons discovered
[1,249] Visaranai        ✓ 19 persons discovered
[1,250] Pariyerum Perumal  ✓ 31 persons discovered
[1,251] Asuran           ⟳ writing progress.jsonl …
→ keyword_tracking +77 persons queued for news pipeline
→ progress.jsonl · latest.json · complete.json
coverage:check → 87/92 signals captured · 95%
COMMAND CATALOGUE
npm run crawl:cast-crew · walk every Film row
npm run crawl:cast-crew:tamil · Tamil films 2015+
npm run crawl:cast-crew:expand · self-expand newly discovered persons
npm run ingest:batch · full keyword ingest with checkpoint/resume
npm run ingest:batch -- --max-run-minutes=30 · time-boxed run
npm run ingest:agent · Ollama-first local run (25 entities)
npm run coverage:check · % of 92 signals captured for 99.5% prediction
output → ingest-results/<run-id>/
  ├ progress.jsonl
  ├ latest.json
  └ complete.json
[ CALIBRATION BRACKETS ]

When we say "60-80%" it had better hit 60-80% of the time.

Each prediction is placed into one of five brackets. We continuously back-test against new ground-truth data — the actual hit rate inside each bracket must match the bracket label. Drift is publicly reported.

BRACKET · PREDICTED ↔ ACTUAL
0-20% · pred 10% · actual 11%
20-40% · pred 30% · actual 28%
40-60% · pred 50% · actual 52%
60-80% · pred 70% · actual 69%
80-100% · pred 90% · actual 88%
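The back-test reduces to bucketing each prediction by its predicted probability and comparing the observed hit rate per bucket against the bucket label. A minimal sketch, with bracket edges as in the table and function names that are illustrative assumptions:

```typescript
// Minimal bracket back-test: bucket each prediction, then compare
// observed hit rate per bucket against the bucket label. Edges mirror
// the five brackets above; function names are illustrative.
const BRACKETS = ["0-20%", "20-40%", "40-60%", "60-80%", "80-100%"] as const;

function bracketOf(predictedPct: number): string {
  return BRACKETS[Math.min(4, Math.floor(predictedPct / 20))];
}

function hitRates(preds: { p: number; hit: boolean }[]): Record<string, number> {
  const tally: Record<string, { hits: number; n: number }> = {};
  for (const { p, hit } of preds) {
    const b = bracketOf(p);
    tally[b] ??= { hits: 0, n: 0 };
    tally[b].n += 1;
    if (hit) tally[b].hits += 1;
  }
  const rates: Record<string, number> = {};
  for (const [b, { hits, n }] of Object.entries(tally)) rates[b] = hits / n;
  return rates; // calibrated when rates[b] falls inside bracket b
}
```

"Drift" in this framing is simply `rates[b]` leaving the interval named by bracket `b`.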
CALIBRATION PLOT · predicted probability (x) vs actual hit rate (y) · SignalGrid curve against perfect calibration
[ 92 CRITICAL SIGNALS ]

Capture all 92. Unlock 99.5% prediction confidence.

npm run coverage:check reports the share of the 92 signals captured for the current film or screenplay. The fewer signals captured, the wider the confidence band; at full coverage, the band is at its tightest.
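The confidence figure in the coverage map appears to track the raw coverage percentage (87 of 92 ≈ 94.6%); a one-liner under that assumption, with the function name as an illustrative stand-in:

```typescript
// Sketch assuming the reported confidence tracks raw signal coverage
// (87 of 92 captured ≈ 94.6%). Function name is illustrative.
function coveragePercent(captured: number, total = 92): number {
  return +((100 * captured) / total).toFixed(1);
}
```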

COVERAGE MAP · 92 SIGNALS
captured 87 / 92 · confidence 94.6%
Cast 14 · Crew 12 · Genre 6 · Budget 5 · Star Power 9 · Director Form 8 · Composer Form 7 · Festival Calendar 6 · Competition 8 · Audience 10 · Streaming Window 4 · Marketing Cadence 3
[ USE CASES ]

Five buyers. One calibrated truth.

🎬

Producers

Validate script viability before greenlight — quantify lead-pair chemistry, director form, composer fit, and audience overlap before a single rupee is spent.

→ CALIBRATED DECISION SURFACE
💸

Investors

Risk-scored bets on upcoming films with calibrated confidence bands. Know whether you're funding a tight-band 80% bracket or a wide-band 40% guess.

→ CALIBRATED DECISION SURFACE
📦

Distributors

Buy-or-skip decisions on screenplay submissions. Territory-level forecasts grounded in person-level form curves and competition calendars.

→ CALIBRATED DECISION SURFACE
🎯

Studios

Cast and crew selection based on calibrated person-level scores. Swap a composer, watch the predicted band shift in real time.

→ CALIBRATED DECISION SURFACE
📡

OTT Platforms

Acquisition shortlisting from screenplay submissions. Filter the inbox by predicted-tail multiplier and audience-overlap with your existing catalogue.

→ CALIBRATED DECISION SURFACE
[ TECH STACK ]
Next.js 15 · App Router, RSC
Supabase Postgres · entity + signal corpus
Prisma · schema sync
Wikipedia Ingest · idempotent, resumable crawler
Ollama · local model for quick ingest
Vercel · production deploy
[ GO LIVE ]

Calibrate before you greenlight.

Run a screenplay through SignalGrid. Get 200 parameters back, each with an honest confidence band. No headline accuracy theatre.