Methodology & Transparency

How Human+ Works

Every score in our directory is derived from publicly available data evaluated against a consistent rubric. No advertising. No company partnerships. No conflicts of interest. This page explains exactly how we do it.

01
The Problem We're Solving

The AI adoption narrative in 2026 is badly broken. Companies announce AI investments to impress investors while simultaneously reducing headcount. PR teams frame layoffs as "restructuring toward AI." Executives celebrate "efficiency gains" that translate directly to job losses.

Job seekers, employees, and investors have no reliable way to distinguish between companies genuinely using AI to expand human capability — and companies using AI as cover to reduce it.

Human+ exists to fill that gap using only publicly observable signals: job postings, earnings call language, SEC filings, headcount data, product announcements, and company communications.

The core question we ask:

When a company deploys AI, does the number of humans doing meaningful work go up, stay flat, or go down?

Everything in our methodology flows from that question.

What we are not:

We do not accept payments from companies. We do not offer "verified" badges for sale. We do not run advertising. A company cannot buy its way into — or out of — our directory.

02
Our Data Sources

We only use information that is independently verifiable by anyone. We do not rely on anonymous tips, internal documents, or unverifiable claims. Every data point in a company's score can be traced to a public source.

Source | Type | What We Extract | Refresh
Layoffs.fyi | Database | Layoff events, scale, AI-cited reason flags, date of event | Weekly
LinkedIn | Public Profile | Open job count, role type changes, headcount trend (employee count display) | Monthly
SEC EDGAR | Regulatory Filing | Annual/quarterly employee count, AI risk factor language, exec compensation vs. headcount correlation | Quarterly
Annual Reports | Corporate Filing | Headcount YoY, reskilling investment, AI strategy framing, HR metrics | Annually
Earnings Call Transcripts | Public Record | Executive language analysis — "efficiency" vs. "augmentation" framing, workforce plans | Quarterly
Company Newsroom / Blog | Primary Source | AI product launches, reskilling announcements, workforce commitments | As published
Press Reports | Journalism | Third-party verification of workforce changes, investigative findings | As published
PwC / Deloitte / McKinsey / Gartner | Research | Sector-wide benchmarks for augmentation vs. replacement rates | Annually
TrueUp / Crunchbase | Database | Headcount trend data, funding vs. hiring correlation | Monthly

03
How Scores Are Calculated

Each company is assessed against 13 signal criteria — 7 positive (augmentation) and 6 negative (replacement). Points are awarded or deducted based on what the public data shows. Scores update when new data warrants a change, and every change is tracked with a delta indicator.

Augmentation Signal | Pts | Why It Matters
Hiring while deploying AI | +3 | The strongest single signal. AI is expanding capacity, not contracting it. Verified against open role count and headcount trend.
Revenue growth + stable headcount | +3 | Productivity gains from AI are being captured as growth, not passed to shareholders via layoffs. Verified via earnings + headcount data.
Responsible AI governance | +2 | Formal AI ethics policies, bias audits, or "human in the loop" commitments indicate structural intent to augment rather than automate away.
New AI collaboration roles created | +2 | Job titles that didn't exist before AI deployment (AI trainer, prompt engineer, AI QA, human-AI workflow designer) signal genuine work transformation.
Work redesign initiatives | +2 | Documented programs that change how humans work alongside AI — not just tool adoption, but restructuring workflows to keep humans central.
New products/services enabled by AI | +2 | AI generating new business lines requires new human talent to manage, sell, and maintain them. Expansion signal.
AI upskilling / reskilling programs | +1 | Investment in training existing employees to work with AI. Verified against announced programs and budget disclosures.

Replacement Signal | Pts | Why It Matters
Any layoffs in 2026 (AI-cited) | -5 | The heaviest penalty. When companies publicly cite AI as a reason for workforce reduction, the intent is unambiguous. Verified via Layoffs.fyi and press reports.
AI rollout with zero new hiring | -3 | If AI adoption is not accompanied by any hiring, the efficiency gains are likely being extracted via headcount reduction rather than growth.
Significant headcount decline | -3 | Net headcount reduction while AI investment is announced. Verified against LinkedIn trend data and annual report employee counts.
Surface-level AI adoption | -2 | AI adoption that is purely marketing-driven (adding "AI" to product names without substantive use) combined with workforce stagnation.
Eliminating mid-level specialist roles | -2 | Mid-level specialists (analysts, coordinators, writers, support staff) being cut while AI handles their prior functions. Verified via job category analysis.
Executive focus on "efficiency/automation" | -2 | When public executive communications frame AI exclusively around cost reduction and efficiency — rather than capability growth — intent is clear. Verified via earnings call language analysis.

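The rubric arithmetic above can be sketched as a simple lookup-and-sum. This is an illustrative simplification: the signal names are shorthand we've invented for this example, and in practice each signal is established by human review of public evidence, not a boolean flag.

```python
# Illustrative sketch of the 13-signal rubric. Signal keys are
# hypothetical identifiers, not official names from the directory.

AUGMENTATION_SIGNALS = {
    "hiring_while_deploying_ai": +3,
    "revenue_growth_stable_headcount": +3,
    "responsible_ai_governance": +2,
    "new_ai_collaboration_roles": +2,
    "work_redesign_initiatives": +2,
    "new_ai_enabled_products": +2,
    "upskilling_programs": +1,
}

REPLACEMENT_SIGNALS = {
    "ai_cited_layoffs_2026": -5,
    "ai_rollout_zero_hiring": -3,
    "significant_headcount_decline": -3,
    "surface_level_ai_adoption": -2,
    "mid_level_roles_eliminated": -2,
    "efficiency_only_exec_framing": -2,
}

def score(present_signals: set) -> int:
    """Sum points for signals clearly present in the public data.
    Ambiguous signals are simply absent: they earn no points either way."""
    rubric = {**AUGMENTATION_SIGNALS, **REPLACEMENT_SIGNALS}
    return sum(pts for name, pts in rubric.items() if name in present_signals)
```

For example, a company hiring while deploying AI and running reskilling programs, but whose executives frame AI purely around efficiency, would net +3 +1 −2 = +2.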
04
The Six Tiers

Scores map to one of six tiers. The tiers are not labels — they are action recommendations for job seekers evaluating employers, workers monitoring their current employer, and investors considering ESG criteria.

Tier 1 · +6 or higher · Strong Augmentation

Clear, consistent, verifiable evidence that AI is expanding human roles. Hiring while deploying. Revenue growth preserving headcount. A safe choice for employees and investors alike.

Tier 2 · +3 to +5 · Likely Augmentation

Positive signals outweigh negative ones. Some mixed data may exist (e.g., recovery from prior cuts), but the trajectory is augmentation. Probably safe.

Tier 3 · 0 to +2 · Possible Augmentation

Weak positive signals. The company may be early in AI adoption, or the evidence is ambiguous. Monitor quarterly — the score could move in either direction.

Tier 4 · -1 to -3 · Unclear / Mixed

Evidence is contradictory. The company may have reskilling programs alongside quiet headcount cuts. Requires close monitoring. Not safe to assume positive intent.

Tier 5 · -4 to -6 · Likely Replacement

Negative signals dominate. AI is likely being used to reduce rather than augment the workforce. Avoid if you value employment stability.

Tier 6 · -7 or lower · Strong Replacement

An unambiguous replacement pattern confirmed by multiple public data sources. CEO communications confirm intent. Carries an active flag in our directory. Avoid.
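Because the six ranges partition the integer score line completely, the tier mapping can be written as a single cascade of threshold checks — a minimal sketch of the published boundaries:

```python
# Score-to-tier mapping, assuming integer scores (as the ranges imply).

def tier(score: int) -> str:
    if score >= 6:
        return "Tier 1 · Strong Augmentation"
    if score >= 3:
        return "Tier 2 · Likely Augmentation"
    if score >= 0:
        return "Tier 3 · Possible Augmentation"
    if score >= -3:
        return "Tier 4 · Unclear / Mixed"
    if score >= -6:
        return "Tier 5 · Likely Replacement"
    return "Tier 6 · Strong Replacement"
```

Note that every integer score lands in exactly one tier; there are no gaps or overlaps between the published ranges.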
05
Our Review Process
Step 01 — Identification

Company Selected

A company is selected for review via community submission, media coverage of an AI workforce event, or proactive monitoring of companies above a size threshold. Priority is given to companies with recent AI announcements or layoff events.

Step 02 — Data Collection

Public Data Gathered

We collect all relevant public data points: LinkedIn headcount trend, open job count, Layoffs.fyi records, earnings call transcripts, annual report HR sections, and press coverage from the prior 12 months.

Step 03 — Signal Scoring

Rubric Applied

Each signal criterion is evaluated against the evidence. Points are awarded only when the signal is clearly present. Ambiguous data does not earn points in either direction — it results in lower confidence on the entry.

Step 04 — Peer Review

Independent Check

A second reviewer validates the score independently using the same source data. Disagreements of more than ±2 points trigger a third review. Final score requires consensus.
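The escalation rule in Step 04 reduces to one comparison — a sketch, with the function name ours, not the directory's:

```python
def needs_third_review(score_a: int, score_b: int) -> bool:
    """Two independent reviewer scores that differ by more than
    2 points trigger a third review before a final consensus score."""
    return abs(score_a - score_b) > 2
```

A gap of exactly 2 points stays within tolerance; only a wider disagreement escalates.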

Step 05 — Publication

Entry Published

Entry goes live with score, tier, evidence sources, last-verified date, and key notes. All cited sources are publicly accessible. Companies are not notified prior to publication.

Step 06 — Ongoing Monitoring

Score Updated

Each entry is re-evaluated quarterly, or immediately when a triggering event occurs (layoff announcement, major hiring surge, earnings call with notable language). Delta indicators show quarterly score changes.

06
Update Cadence

The directory is a living document. Scores change when data changes. The delta indicator on each card shows the net change from the prior quarter. A flat delta does not necessarily mean a company is safe — the absence of new data is noted as such.

Cadence | What We Check | Status
Weekly | Layoffs.fyi new entries, major press coverage, triggering events | Live
Monthly | LinkedIn open job count, headcount trend display, job category analysis | Live
Quarterly | Full score re-evaluation, earnings call transcript analysis, delta calculation, new entries | Live
Annually | Annual report deep review, methodology review, sector benchmarking | Manual
As triggered | Any public layoff event, major AI product launch, workforce policy announcement | Live

07
Who We Cover
Countries Currently Tracked

United States · United Kingdom · Germany · Sweden · Netherlands · France · Spain · Australia · Canada · India · Brazil · Switzerland · Ireland

Expanding to: Japan · South Korea · Singapore · South Africa · Mexico
Sectors Currently Tracked

Technology · Finance · Healthcare · Manufacturing · Retail · Professional Services · Media & Creative · Telecommunications · Education

Expanding to: Energy & Utilities · Logistics · Agriculture Tech

We prioritize companies where the AI/workforce decision is consequential — large enough that their choices affect many workers, and in sectors where AI displacement risk is high enough to matter.

We do not currently track companies with fewer than 50 employees. At that scale, individual hiring decisions are too noisy to evaluate against our rubric reliably.

We evaluate companies on their global workforce, not their headquarters location. A US-headquartered company with primarily Indian or European employees is assessed on its global workforce data, not its HQ.

If a company you care about is not in our directory, submit it for review →

08
Frequently Asked Questions
Can a company ask to be removed or have its score changed?
No. Companies do not have editorial input over their entries. If a company believes a specific data point is factually incorrect and can provide public evidence to the contrary, we will review and update. We will not change scores based on corporate communications alone.
What if a company was bad in 2024 but is now augmenting?
Scores reflect the current rolling 12-month picture, weighted toward the most recent quarter. A company that laid off workers in 2024 but has genuinely been hiring and reskilling in 2025–26 will see its score improve. The delta indicator will show the positive trend. Recovery is real and we reflect it.
How do you handle companies in countries where employment data is less transparent?
We note lower data confidence on entries where public sources are limited. A score based on fewer verified signals carries less weight. We do not publish entries where we cannot verify at least 3 signal criteria from independent public sources.
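The minimum-evidence rule can be sketched as a gate. The "at least 3 verified signals" threshold comes from the text above; the two-level confidence labels and the cutoff between them are assumptions for illustration only.

```python
def entry_confidence(verified_signals: int):
    """Gate publication on independently verified signal criteria.
    Returns None (not publishable) below the 3-signal minimum;
    the label thresholds beyond that are hypothetical."""
    if verified_signals < 3:
        return None  # entry is not published at all
    if verified_signals < 6:
        return "low confidence"   # assumed label, not official
    return "standard confidence"  # assumed label, not official
```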
Is "strong augmentation" the same as "safe employer"?
No. A Tier 1 score means current observable evidence favors augmentation over replacement. It is not an employment guarantee. Business conditions change. We strongly recommend treating our scores as one input among several when evaluating an employer.
What's the difference between a "flagged" company and a Tier 5 or 6?
Tier 5 and 6 reflect the score range. A "flag" is an editorial marker we apply when a specific, documented replacement event has occurred — a named team was replaced by AI, a public executive statement explicitly cited AI as the reason for cuts. All flagged companies are Tier 5 or 6, but not all Tier 5/6 companies are flagged (some may have lower scores from accumulated signals rather than a single documented event).
How do I submit evidence for a company in the directory?
Use the Submit form on the main directory page. Include a link to the public source. If your evidence is strong enough to move a score, we'll update within 5–7 business days and note the change in the entry's delta.