Methodology · v1.0 · April 2026

How JobForesight Calculates AI Exposure

This page explains how every score, time window, and peer percentile on JobForesight is produced. It is written by Robiul Islam, the founder, and is intentionally honest about uncertainty: the numbers below combine published research with editorial judgement, and we would rather you understand the limits of our model than over-trust a single figure.

01 · Data sources
What we build on

Our model is grounded in public research and government datasets. Each source below contributes a different signal — task structure, automation probabilities, real-world AI usage, or workforce baselines.

02 · Scoring
How exposure scores are computed

Per-task automatability. For every occupation we start from its O*NET task list. Each task is rated on two horizons: today (can a current frontier LLM, plus standard tooling, perform this end-to-end with acceptable quality?) and five years out (does the trajectory of model capability and tool integration plausibly close the gap?). Ratings are anchored to observed AI use from the Anthropic Economic Index and Microsoft Copilot data wherever a matching task exists, and to expert judgement otherwise.

Weighting by frequency. Tasks are not equal. A task that fills 40% of a paralegal's week matters more than one performed quarterly. We weight each task's automatability rating by its share of the occupation's time-on-task profile (drawn from O*NET work activity importance and frequency ratings), so the resulting score reflects how much of the actual job is exposed — not how many discrete tasks happen to be exposed.

Aggregation to 0–100. Weighted task scores are summed and normalised to a 0–100 scale, where higher means more exposed. The cut points (Low / Moderate / High / Critical) are calibrated against the distribution across the 333 occupations we currently track so the bands carry comparative meaning.
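The weighting-and-aggregation steps above can be sketched in a few lines. This is an illustrative sketch only: the task names, ratings, and band cut points below are invented for the example, and the real bands are calibrated against the distribution across the 333 tracked occupations.

```python
def exposure_score(tasks):
    """Weight each task's automatability rating (0-1) by its share of
    time-on-task, then scale to 0-100 (higher = more exposed)."""
    total_share = sum(t["time_share"] for t in tasks)
    weighted = sum(t["automatability"] * t["time_share"] for t in tasks)
    return round(100 * weighted / total_share, 1)

def band(score, cut_points=(25, 50, 75)):
    """Map a 0-100 score to a band. These cut points are placeholders,
    not the calibrated ones used on the site."""
    low, moderate, high = cut_points
    if score < low:
        return "Low"
    if score < moderate:
        return "Moderate"
    if score < high:
        return "High"
    return "Critical"

# Hypothetical task profile for a paralegal.
paralegal_tasks = [
    {"task": "draft routine filings", "time_share": 0.40, "automatability": 0.8},
    {"task": "client interviews",     "time_share": 0.35, "automatability": 0.2},
    {"task": "court appearances",     "time_share": 0.25, "automatability": 0.1},
]
score = exposure_score(paralegal_tasks)  # -> 41.5
```

Note how the 40%-of-the-week drafting task dominates the result even though it is only one of three tasks: that is the point of weighting by time share rather than counting exposed tasks.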

Honest caveat: we use the Anthropic Economic Index for grounding, but final scores reflect editorial judgement informed by multiple sources. JobForesight is not endorsed by Anthropic, O*NET, or any other source listed above.

03 · Time windows
How time windows are derived

On every occupation page (for example, investment analysts) you will see a Window to Act figure with a junior / mid / senior split. The window is expressed in months, not years — this is deliberate. Year-level resolution is precise enough to feel authoritative but loose enough to dodge accountability; months force us to be specific and force you to plan.

The window represents the period in which we expect significant exposure to the current task set of that role at that seniority. It is not a countdown to the job disappearing. Junior windows are typically shorter because junior task mixes are more routine and more easily delegated to AI; senior windows extend further because senior work is more weighted toward judgement, accountability, and relationship-bound tasks that current models do not handle well.

Windows are derived from (a) the occupation's exposure score, (b) the share of tasks already automatable today vs. on the five-year horizon, and (c) sector-level adoption pace from the McKinsey and Goldman Sachs data. They are intervals, not point estimates, and you should read them as such.
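To make the three inputs concrete, here is a toy sketch of how they could combine into an interval. Every number in it (the blending weights, the pace multiplier, the ±25% interval width) is invented for illustration; the actual windows also incorporate editorial judgement and are not produced by a single formula like this.

```python
def act_window_months(exposure, share_automatable_today, sector_pace):
    """Return a (low, high) interval in months. Higher exposure, a larger
    share of already-automatable tasks, and a faster-adopting sector all
    shorten the window. All coefficients are hypothetical."""
    base = 60  # five-year modelling horizon, in months
    pressure = 0.5 * (exposure / 100) + 0.5 * share_automatable_today
    midpoint = base * (1 - pressure) / sector_pace
    return (round(midpoint * 0.75), round(midpoint * 1.25))

# Junior analysts: routine task mix, fast-adopting sector.
junior = act_window_months(exposure=78, share_automatable_today=0.6, sector_pace=1.2)
# Senior analysts: same occupation, but a judgement-heavy mix
# lowers the share of tasks automatable today.
senior = act_window_months(exposure=78, share_automatable_today=0.25, sector_pace=1.2)
```

The seniority split falls out naturally: holding the occupation's exposure fixed, a smaller already-automatable share pushes the whole interval later, which matches the junior/mid/senior pattern described above.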

04 · Peer percentile
How the peer comparison is calculated

The "more exposed than X% of workers we track" line on every report is a rank-based percentile across the 333 occupations currently in our tracked set. Higher percentile means more exposed: an occupation at the 81st percentile has a higher exposure score than 81% of the occupations we score.

It is a relative ranking, not a probability. Two occupations one percentile apart may have nearly identical absolute scores; two that are ten percentiles apart will usually differ visibly. We use rank rather than raw score so the comparison stays stable as we add new occupations.
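The rank-based percentile is simple to state in code. A minimal sketch, with made-up occupation names and scores standing in for the 333 tracked occupations:

```python
def exposure_percentile(scores, occupation):
    """Percentage of tracked occupations with a strictly lower
    exposure score than the given occupation."""
    target = scores[occupation]
    below = sum(1 for s in scores.values() if s < target)
    return round(100 * below / len(scores))

# Hypothetical tracked set: 4 occupations instead of 333.
tracked = {"occ_a": 72.0, "occ_b": 41.5, "occ_c": 55.0, "occ_d": 30.2}
pct = exposure_percentile(tracked, "occ_a")  # 3 of 4 score lower -> 75
```

Because the percentile depends only on rank order, adding a new occupation can shift everyone's percentile slightly but never reorders two occupations relative to each other, which is the stability property mentioned above.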

05 · Limits
What we don't claim

  • We do not predict individual job loss. We predict which tasks within an occupation are exposed to automation. Whether that translates into headcount reduction depends on management decisions, regulation, demand growth, and labour market frictions that no model can forecast role-by-role.
  • We use US labour market data primarily. O*NET and BLS are US-anchored. UK-specific tuning is in progress as of April 2026; UK occupation pages currently inherit US task structures with editorial adjustment for local context, not a separate ONS-sourced model.
  • We update exposure scores quarterly — or sooner when significant new research drops (a new Anthropic Economic Index release, a major capability jump, a new ILO/OECD brief). Each update is logged in the change log below.
  • We are calibrated against the Anthropic Economic Index but not Anthropic-endorsed. Treat us as a third-party interpretation, not an official source.
  • We assume a roughly five-year forward horizon. Predictions beyond that get less reliable quickly — task structures themselves change as AI reshapes work, so longer-horizon scoring becomes a guess about a moving target.

06 · Change log
Methodology changes

Date | Version | Change
April 2026 | 1.0 | Initial public methodology.

See your own exposure score

Two minutes, free, no email required to see your headline score.

Take the free 2-min assessment →
More about the founder on the about page.