Methodology · v1.0 · April 2026
This page explains how every score, time window, and peer percentile on JobForesight is produced. It is written by Robiul Islam, the founder, and is intentionally honest about uncertainty: the numbers below combine published research with editorial judgement, and we would rather you understand the limits of our model than over-trust a single figure.
Our model is grounded in public research and government datasets. Each source below contributes a different signal — task structure, automation probabilities, real-world AI usage, or workforce baselines.
Per-task automatability. For every occupation we start from its O*NET task list. Each task is rated on two horizons: today (can a current frontier LLM, plus standard tooling, perform this end-to-end with acceptable quality?) and five years out (does the trajectory of model capability and tool integration plausibly close the gap?). Ratings are anchored to observed AI use from the Anthropic Economic Index and Microsoft Copilot data wherever a matching task exists, and to expert judgement otherwise.
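To make the rating step concrete, here is a minimal sketch of what a single rated task could look like. The schema and field names are hypothetical illustrations, not our production data model.

```python
from dataclasses import dataclass

@dataclass
class TaskRating:
    """One O*NET task rated on two horizons (hypothetical schema)."""
    task: str         # O*NET task statement
    today: float      # 0.0-1.0: doable end-to-end by a current frontier LLM + tooling
    five_year: float  # 0.0-1.0: plausibly closed on the five-year horizon
    anchor: str       # "anthropic_index", "copilot_usage", or "expert_judgement"

example = TaskRating(
    task="Draft routine correspondence summarising case status",
    today=0.7,
    five_year=0.9,
    anchor="anthropic_index",
)
```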
Weighting by frequency. Tasks are not equal. A task that fills 40% of a paralegal's week matters more than one performed quarterly. We weight each task's automatability rating by its share of the occupation's time-on-task profile (drawn from O*NET work activity importance and frequency ratings), so the resulting score reflects how much of the actual job is exposed — not how many discrete tasks happen to be exposed.
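In code terms, the weighting step reduces to a time-share-weighted average. The sketch below assumes task ratings on a 0-1 scale and time shares that may not cover the full week; both assumptions are ours, for illustration only.

```python
def weighted_exposure(ratings: list[float], time_shares: list[float]) -> float:
    """Time-share-weighted automatability: tasks that fill more of the
    week contribute proportionally more to the occupation's score."""
    total = sum(time_shares)
    # Normalise shares to sum to 1 in case the O*NET-derived profile
    # does not account for 100% of working time.
    return sum(r * (s / total) for r, s in zip(ratings, time_shares))

# A 40%-of-week task rated 0.7 outweighs a 5%-of-week task rated 0.9:
score = weighted_exposure([0.7, 0.9], [0.40, 0.05])  # ≈ 0.722
```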
Aggregation to 0–100. Weighted task scores are summed and normalised to a 0–100 scale, where higher means more exposed. The cut points (Low / Moderate / High / Critical) are calibrated against the distribution across the 333 occupations we currently track so the bands carry comparative meaning.
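The banding itself is a simple threshold lookup. In the sketch below the cut points are placeholder values; the real ones are recalibrated against the live score distribution of the 333 tracked occupations.

```python
def to_band(score: float, cuts: tuple[float, float, float]) -> str:
    """Map a 0-100 exposure score to a band. `cuts` are the upper bounds
    of Low, Moderate and High; the values passed below are placeholders,
    not our calibrated cut points."""
    low_hi, moderate_hi, high_hi = cuts
    if score < low_hi:
        return "Low"
    if score < moderate_hi:
        return "Moderate"
    if score < high_hi:
        return "High"
    return "Critical"

band = to_band(72.2, cuts=(35.0, 55.0, 75.0))  # "High" under these placeholders
```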
On every occupation page (for example, investment analysts) you will see a Window to Act figure with a junior / mid / senior split. The window is expressed in months, not years — this is deliberate. Year-level resolution is precise enough to feel authoritative but loose enough to dodge accountability; months force us to be specific and force you to plan.
The window represents the period within which we expect significant AI exposure to reach the current task set of that role at that seniority. It is not a countdown to the job disappearing. Junior windows are typically shorter because junior task mixes are more routine and more easily delegated to AI; senior windows extend further because senior work is more weighted toward judgement, accountability, and relationship-bound tasks that current models do not handle well.
Windows are derived from (a) the occupation's exposure score, (b) the share of tasks already automatable today vs. on the five-year horizon, and (c) sector-level adoption pace from the McKinsey and Goldman Sachs data. They are intervals, not point estimates, and you should read them as such.
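For readers who want the shape of the calculation, here is an illustrative sketch of how inputs (a)-(c) could combine into a month interval. The functional form, the constants, and the ±25% interval width are invented for this example; they are not our production model.

```python
def window_months(exposure: float, today_share: float,
                  adoption_pace: float, base: float = 60.0) -> tuple[int, int]:
    """Illustrative only: compress a base horizon (in months) by exposure,
    the share of tasks already automatable today, and sector adoption
    pace (all on 0-1 scales). Returns an interval, not a point estimate."""
    pressure = exposure * (0.5 + 0.5 * today_share) * adoption_pace
    midpoint = base * (1.0 - pressure)
    # Widen to an interval to reflect model uncertainty (+/- 25%).
    return round(midpoint * 0.75), round(midpoint * 1.25)

lo, hi = window_months(exposure=0.72, today_share=0.6, adoption_pace=0.8)
# -> roughly (24, 40) months under these invented constants
```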
The "more exposed than X% of workers we track" line on every report is a rank-based percentile across the 333 occupations currently in our tracked set. Higher percentile means more exposed: an occupation at the 81st percentile has a higher exposure score than 81% of the occupations we score.
It is a relative ranking, not a probability. Two occupations one percentile apart may have nearly identical absolute scores; two that are ten percentiles apart will differ visibly. We use rank rather than raw score so the comparison stays stable as we add new occupations.
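Concretely, the percentile is a plain rank transform over the tracked set. In the sketch below, `tracked_scores` stands in for the exposure scores of the 333 occupations.

```python
def exposure_percentile(score: float, tracked_scores: list[float]) -> int:
    """Share of tracked occupations with a strictly lower exposure score,
    expressed as a whole-number percentile."""
    below = sum(1 for s in tracked_scores if s < score)
    return round(100 * below / len(tracked_scores))

# An occupation scoring above 270 of 333 tracked occupations:
# round(100 * 270 / 333) = 81 -> "more exposed than 81% of workers we track"
```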
Changelog

| Date | Version | Change |
|---|---|---|
| April 2026 | 1.0 | Initial public methodology. |
Two minutes, free, no email required to see your headline score.
Take the free 2-min assessment →

More about the founder on the about page.