ERP Benchmark Methodology

Last reviewed: April 24, 2026 · ERP Research

How the ERP Research benchmark is built — source breakdown, deduplication approach, verification steps, refresh cadence, and known limitations of our 1,400+ case study dataset.

What the benchmark measures

The ERP Research benchmark tracks real-world ERP implementations — which companies have gone live on which ERP system, when, for which modules, and under what deployment model. It answers the question buyers ask most often: "Who is actually using this software, in my industry, at my size?"

The public benchmark page at erpresearch.com/benchmark exposes aggregates across five primary dimensions:

  1. Vendor — the ERP system the company chose (24 vendors tracked)
  2. Industry — the customer's primary industry (20 top-level, 60+ sub-industries)
  3. Company size — headcount band at go-live
  4. Deployment type — cloud, on-premise, hybrid, or private cloud
  5. Country / region — headquarters and, where different, deployment geography

Secondary dimensions include migration path (e.g. ECC → S/4HANA, legacy → NetSuite), founded-decade cohort, and company type (public, private, subsidiary, nonprofit).
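
For concreteness, a single benchmark record covering these dimensions might be shaped roughly like the TypeScript interface below. The field names are illustrative assumptions, not the actual warehouse schema.

```typescript
// Hypothetical shape of one benchmark record; field names are
// illustrative, not the actual warehouse schema.
interface BenchmarkRecord {
  customerEntityId: string;      // matched legal entity
  vendorId: string;              // canonical vendor slug, e.g. "oracle-netsuite"
  industry: string;              // one of 20 top-level industries
  subIndustry?: string;          // one of 60+ sub-industries
  companySizeBand: string;       // headcount band at go-live, e.g. "101-1,000"
  deploymentType: "cloud" | "on-premise" | "hybrid" | "private-cloud";
  hqCountry: string;             // headquarters country
  deploymentCountry?: string;    // set only where it differs from HQ
  goLiveYear: number;
  migrationPath?: string;        // e.g. "ecc-to-s4hana"
  companyType?: "public" | "private" | "subsidiary" | "nonprofit";
  sourceUrls: string[];          // originating case study / announcement URLs
}
```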

Sources

Implementations are included in the benchmark only if they come from a verifiable public source. The dataset blends three source types:

Vendor-published case studies (≈72%)
Structured customer stories published on vendor websites — SAP customer references, Oracle success stories, Microsoft Dynamics customer showcases, NetSuite customer stories, Acumatica cases, Sage customers, Epicor successes, Infor case studies, IFS customer stories, and 14 others. This is the highest-signal source type because the vendor itself confirms the customer is live.

Industry press and awards (≈18%)
Announcements in trade press (ComputerWeekly, CIO, Diginomica, Food Processing, Manufacturing Today, etc.), user-group award recipients (ASUG, Oracle OpenWorld, Community Summit North America), and Gartner / IDC / Forrester reference-customer mentions.

Earnings calls and regulatory filings (≈10%)
Public company disclosures (10-K "Technology" sections, investor-day slides, earnings-call transcripts) where the customer names their ERP system. Used mainly for enterprise-size customers and where vendor case studies are unavailable.

User-submitted implementations are not included in the public benchmark.

Verification

Before a record enters the benchmark, it passes three checks:

  1. Source link captured — every record stores the URL of the originating case study or announcement.
  2. Customer entity matched — the customer's legal entity is matched against a reference list of company records (company name, HQ country, approximate size, industry). Subsidiaries and parent-company implementations are tracked separately.
  3. Vendor + module normalised — the ERP product is mapped to our canonical vendor list (e.g. "Microsoft Dynamics GP" → dynamics-365 with a legacy-gp migration flag; "NetSuite OneWorld" → oracle-netsuite with a multi-subsidiary flag).

Records that fail any of the three checks are held in an unverified queue and excluded from the public aggregates until reviewed.
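
As a rough sketch of check 3, normalisation can be thought of as a lookup from raw product names to canonical vendor slugs plus flags. The table below contains only the two examples given above, and the function name is a hypothetical illustration rather than the production code.

```typescript
// Hypothetical normalisation lookup: raw product name -> canonical
// vendor slug plus flags. Only the two examples from the text are shown.
const VENDOR_MAP: Record<string, { vendorId: string; flags: string[] }> = {
  "Microsoft Dynamics GP": { vendorId: "dynamics-365", flags: ["legacy-gp"] },
  "NetSuite OneWorld": { vendorId: "oracle-netsuite", flags: ["multi-subsidiary"] },
};

// Returns the canonical mapping, or null to route the record to the
// unverified queue when the product name is unrecognised.
function normaliseVendor(rawProductName: string) {
  return VENDOR_MAP[rawProductName.trim()] ?? null;
}
```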


Deduplication

The same customer can appear in multiple sources — an SAP case study, an industry-press announcement, and an earnings-call mention might all reference the same go-live. The dedup key is (customer_entity_id, vendor_id, deployment_go_live_year). When duplicates are detected, the richest record (the one with the most structured fields populated) is kept, and alternate source URLs are merged into the kept record.

The same customer on the same ERP at different points in time (e.g. ECC in 2015 then S/4HANA in 2023) is kept as two records because it represents two distinct implementations.
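
A minimal sketch of this rule follows, with two assumptions flagged up front: the record type is pared down to the fields the rule needs, and "richest" is approximated by counting populated fields rather than by whatever scoring is used in production.

```typescript
// Pared-down record shape for this sketch; mirrors the interface above.
type Rec = {
  customerEntityId: string;
  vendorId: string;
  goLiveYear: number;
  sourceUrls: string[];
  [field: string]: unknown; // remaining structured fields
};

// Dedup key as described: same entity + vendor + go-live year.
// Different go-live years (e.g. ECC 2015, S/4HANA 2023) yield
// different keys, so both implementations are kept.
function dedupKey(r: Rec): string {
  return `${r.customerEntityId}|${r.vendorId}|${r.goLiveYear}`;
}

// Rough proxy for "richest record": count of populated fields.
function richness(r: Rec): number {
  return Object.values(r).filter((v) => v !== undefined && v !== null).length;
}

// Keep the richest record per key; merge alternate source URLs into it.
function deduplicate(records: Rec[]): Rec[] {
  const kept = new Map<string, Rec>();
  for (const r of records) {
    const key = dedupKey(r);
    const existing = kept.get(key);
    if (!existing) {
      kept.set(key, { ...r });
      continue;
    }
    const winner = richness(r) > richness(existing) ? { ...r } : existing;
    winner.sourceUrls = [...new Set([...existing.sourceUrls, ...r.sourceUrls])];
    kept.set(key, winner);
  }
  return [...kept.values()];
}
```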

Refresh cadence

The benchmark aggregates table (benchmark_aggregates in our warehouse) rebuilds every 24 hours from the raw case study table. The public benchmark page uses Next.js Incremental Static Regeneration with a 5-minute revalidation window, so changes are visible within minutes of a rebuild.
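
In the Next.js App Router, a 5-minute revalidation window is a one-line route segment config. The sketch below assumes an App Router page at a hypothetical path; `export const revalidate = 300` is the standard ISR mechanism, though the page body here is a placeholder.

```typescript
// app/benchmark/page.tsx (hypothetical route path)
// ISR segment config: regenerate this static page at most once
// every 300 seconds, i.e. the 5-minute revalidation window.
export const revalidate = 300;

export default function BenchmarkPage() {
  // Page body omitted; in practice this renders the latest
  // benchmark_aggregates snapshot.
  return null;
}
```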

New case studies are ingested continuously — we run scrapers against the 13 largest vendor websites on a weekly cadence and against the 9 largest industry-press sources on a daily cadence.

Known limitations

  1. Mid-market bias in vendor case studies. Vendors publish case studies most often for lighthouse customers, which skews toward mid-market (101–1,000 employees) and upper-mid-market. Very small (<50 employees) and very large (>10,000 employees) implementations are under-represented in the raw case study corpus. We partially correct for this using earnings-call sources, but mid-market remains over-represented relative to the true install base.

  2. English-language source bias. Our scrapers cover English-language vendor sites globally but cover Japanese, Korean, Chinese, Spanish, German, and French sources less thoroughly. Asia-Pacific and Latin America implementation counts are under-reported relative to North America and EMEA.

  3. Time-to-observation lag. Customers typically appear in vendor case studies 6–18 months after go-live. So the most recent two years of go-lives are incomplete, and aggregates for the current year should be treated as provisional.

  4. No indication of project success or satisfaction. The benchmark tracks that a customer implemented an ERP, not how well the implementation went. A company can appear in the benchmark having had a troubled project — we do not filter for outcome.

  5. Self-reported headcount. Company-size brackets use the customer's headcount at go-live as reported in the source or as inferred from LinkedIn / Crunchbase / filings. Headcount changes over time are not retrofitted.

How to cite the benchmark

When referencing benchmark figures in research, RFPs, or press, please link to the specific benchmark view (vendor, industry, country, or cross-tab) rather than quoting numbers in isolation. Numbers change as the dataset updates daily.

Preferred citation:

ERP Research Benchmark, 2026. erpresearch.com/benchmark. Accessed {date}.

Corrections and contributions

If you spot an incorrect record — wrong vendor attribution, wrong industry, duplicate implementation — please contact us. Corrections are reviewed within five business days and, where valid, reflected in the next daily rebuild.

Implementation partners and vendors are welcome to submit additional verified case studies by contacting the editorial team. Submissions must include a source URL; we do not accept private case studies for the public benchmark.
