The Headline Number and the One That Didn't Move
AI adoption in A/E firms jumped from 38% to 53% in a single year, according to a 2026 industry study of roughly 700 North American firms. Over the same period, industry hit rates — the share of pursuits converted to wins — held near the long-running ~40% benchmark reported in ENR Marketropolis data. The gap isn't explained by how heavily proposal teams are using AI. It's explained by where in the proposal workflow they're using it.
This post is for principals and proposal managers at engineering and construction firms who have already rolled out AI tools, seen the speed gains, and noticed that none of it is showing up in the win column. The short version: most of the 53% are using AI for drafting. Drafting isn't what determines shortlists.
What the 2026 Adoption Numbers Actually Say
The same 2026 industry study of ~700 A/E firms reported three additional numbers worth reading together with the 53% adoption figure:
| Metric | Value | Signal |
|---|---|---|
| A/E firms using AI tools | 53% (up from 38% last year) | Adoption rose 15 points in 12 months |
| Operating profit on net revenue | 21.4% — 10-year high | Firms are more profitable per dollar of work |
| Net revenue per employee | +11% YoY | Output per head is up |
| Industry hit rate (ENR) | ~40% — essentially flat | Firms aren't winning a higher share of their pursuits |
Operating profit and revenue-per-employee both climbed alongside AI adoption. That's the "AI is working" story, and it's real — for efficiency. What the same period did not produce is a measurable shift in the industry win rate. Firms are more profitable on the work they win. They are not winning a higher share of the work they chase.
That distinction is the whole article.
Why Adoption Jumped But Win Rates Didn't Move
Two things are true at once:
- AI makes drafting, reformatting, and summarization faster.
- The factors that decide shortlists — relevance of past projects, specificity of key-personnel match, clarity of technical approach, credibility of Section H narrative — do not get better just because the draft was faster.
A proposal that came together in four days instead of seven, using the same project sheets and the same Section H language, is not a more competitive proposal. It is a faster one. Speed is an operational win. It is not a win-rate lever.
This is the efficiency-vs-competitiveness confusion at the firm level. Time-to-submit drops. Margin goes up because the same number of pursuits costs less staff time. Leadership reasonably treats the margin line as evidence that AI is working. It is. It is just not working on the variable that determines whether the next pursuit makes the shortlist. That's the same gap the AI proposal capacity myth post covered from the capacity angle — the speed gains are real but mislabeled.
Task-Level AI vs. Coordination-Level AI
The distinction most A/E firms haven't made explicit yet:
| | Task-level AI | Coordination-level AI |
|---|---|---|
| What it does | Drafts copy, summarizes documents, reformats resumes, generates first-pass Section H boilerplate | Retrieves the right project sheets for a pursuit, matches key personnel to scope, flags qualification gaps, pulls reusable content consistently |
| Where it saves time | Inside a single task (drafting, summarizing, reformatting) | Across the whole proposal workflow (retrieval, matching, consistency) |
| Effect on win rate | Minimal. Output quality caps at input quality. | Higher. Better match between pursuit requirements and the team's actual qualifications. |
| What it requires | A prompt and a draft | Structured, current data about staff, projects, past submittals, and boilerplate |
| Current adoption | High — this is what most of the 53% are doing | Low — requires content-operations work most firms haven't done yet |
Task-level AI is where the 53% adoption number comes from. Most of the adoption is drafting, summarization, and reformatting — tasks where the bottleneck was writing speed and the fix was a better writing assistant. That is useful. It is not strategic.
Coordination-level AI is where win rates actually move. It moves them because most A/E proposal bottlenecks are not writing bottlenecks. They are content-retrieval and content-currency bottlenecks.
The SME Coordination Bottleneck
A 2026 benchmark of roughly 300 AEC proposal professionals put concrete numbers on the coordination problem:
- 64% cite SME delays as the top bottleneck in assembling proposals
- 74% of proposals involve 11 or more contributors
- 67% of firms report less than 50% of their proposal process is AI-powered
- 49% of proposal content requests take 6 to 10 days to fulfill
Read those four numbers together: the bottleneck is getting the right current content from the right person; most proposals cross at least 11 people; for nearly half of requests, pulling the content takes six to ten days; and AI hasn't crossed the halfway mark in the workflow.
Speeding up drafting does not move the 6-to-10-day SME retrieval window. The draft is downstream. The real time cost is upstream — the chase for the current resume, the project sheet with the relevant scope, the past performance narrative with the right client reference. That upstream cost is also where win-rate variance lives, because the quality of what gets retrieved determines the quality of what ends up on the evaluator's desk.
What Coordination-Level AI Looks Like in Practice
For a 50-person civil engineering firm, the practical version of coordination-level AI is a few specific capabilities:
- Structured staff profiles — not a shared drive full of resumes, but a single source for each person's certifications, project history, specializations, and recent training. AI can only pull the right resume for a pursuit if the data it pulls from is structured and current. See how to manage multiple resume versions for the source-of-truth problem in detail.
- Project sheet library organized by scope tags — not a folder of Word files named by project name. AI can only surface the two or three most relevant projects for a data-center substation pursuit if projects carry scope tags (transmission, substation civil, utility coordination, stormwater, environmental review) that a retrieval system can read. See project experience sheets for the sheet structure.
- Pursuit-to-content matching — when a new RFP lands, pulling the three most-relevant project sheets and the five best-fit key personnel in minutes, not days. This is the step that removes the SME as the bottleneck for routine content pulls (a minimal sketch of this matching step follows this list).
- Version control for boilerplate — one copy of the current Section H language, the current safety-record narrative, the current QA/QC approach paragraph. AI reuse of boilerplate breaks when there are seven versions of each scattered across old submittal folders. The federal proposal library realign post covers the audit pass for existing libraries.
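To make the retrieval-and-matching step concrete, here is a minimal sketch in Python. It illustrates the technique, not any vendor's implementation: project sheets and staff profiles carry scope tags from a shared vocabulary, and a pursuit's requirements are ranked against them by simple tag overlap. Every class name, person, and project below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectSheet:
    name: str
    scope_tags: set[str]        # shared tag vocabulary, e.g. {"substation", "stormwater"}

@dataclass
class StaffProfile:
    name: str
    specializations: set[str]   # same vocabulary as project sheets
    certifications: list[str] = field(default_factory=list)

def rank_by_tag_overlap(pursuit_tags, items, tags_of):
    """Rank content by how many pursuit requirements its tags cover."""
    scored = [(len(pursuit_tags & tags_of(item)), item) for item in items]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for score, item in scored if score > 0]

# A data-center substation pursuit, expressed as scope tags.
pursuit = {"substation", "transmission", "utility-coordination", "stormwater"}

sheets = [
    ProjectSheet("Riverside Substation Civil", {"substation", "stormwater", "municipal"}),
    ProjectSheet("Main Street Widening", {"roadway", "intersection", "municipal"}),
]
staff = [
    StaffProfile("A. Rivera, PE", {"substation", "transmission"}, ["PE"]),
    StaffProfile("B. Chen", {"roadway", "traffic-control"}),
]

print([s.name for s in rank_by_tag_overlap(pursuit, sheets, lambda s: s.scope_tags)])
# ['Riverside Substation Civil']
print([p.name for p in rank_by_tag_overlap(pursuit, staff, lambda p: p.specializations)])
# ['A. Rivera, PE']
```

The ranking logic is trivial. The part that takes months is populating the tags and profiles and keeping them current. That is the content-operations work.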
None of this is a single-prompt solution. It's a content-operations project that AI makes valuable but doesn't replace. This is the space RFPM.ai occupies — structured staff profiles, tagged project sheets, and versioned boilerplate in a single workspace — but the mechanics are the same whether a firm builds this internally, buys it, or cobbles it together.
The firms that report AI is helping them win more work are, in almost every case, firms that already did the content-operations work. AI did not generate the win-rate gain. AI amplified the coordination they'd already built.
The Three-Year Window
The same 2026 industry study reported two more numbers worth pairing:
- 38% of A/E firms self-rate as digitally mature or advanced
- 74% expect to be within three years
The 36-point gap between those numbers is the near-term window. Most firms know they are not there yet. Most firms intend to get there. The firms that win in that window are the ones that spend the next 12 to 18 months organizing content so that AI has something good to work with.
That means:
- Resume data structured and queryable, not locked in PDFs (see the sketch after this list)
- Project experience tagged by scope and client type, not filed by project name
- Boilerplate in a single current location, not duplicated across submittal folders
- Past-performance narratives tied to specific scope elements, not buried inside old proposals
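What "structured and queryable" means at the data level, as a hedged sketch: each person becomes a record a system can filter, rather than a PDF someone has to reread. Field names here are illustrative; any schema that captures certifications, project history, and scope tags does the job.

```python
# One record per person, maintained once, reused across every pursuit.
staff_records = [
    {"name": "A. Rivera",
     "certifications": ["PE", "PMP"],
     "project_history": [
         {"project": "Riverside Substation Civil", "year": 2024,
          "scope_tags": ["substation", "stormwater"]}]},
    {"name": "B. Chen",
     "certifications": ["EIT"],
     "project_history": [
         {"project": "Main Street Widening", "year": 2023,
          "scope_tags": ["roadway", "traffic-control"]}]},
]

def qualifies(record, cert, scope, since):
    """Holds the certification and has recent experience in the scope."""
    return cert in record["certifications"] and any(
        scope in p["scope_tags"] and p["year"] >= since
        for p in record["project_history"])

# "Which PEs have substation experience in the last five years?"
print([r["name"] for r in staff_records
       if qualifies(r, cert="PE", scope="substation", since=2021)])
# ['A. Rivera']
```

A PDF resume can answer that question only after a person reads it. A record like this answers it in a query, which is what makes the pursuit-to-content matching described above possible.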
The firms that skip the content-operations work and expect AI to compensate will run AI on messy inputs and get cleaner-sounding versions of the same pursuits they were losing before. That is the story the 2026 numbers are already telling — AI adoption up, margin up, win rates flat. The firms that treat the next three years as a content window instead of an AI window are the ones that will move the win-rate line.
What to Do This Quarter
If the firm is already at the 53% adoption level and wants the next move to matter:
- Audit the last five losses. For each, ask: was the losing factor a drafting problem (slow, late, disorganized), a content problem (wrong project examples, weak key-personnel match, stale past performance), or a strategy problem (wrong pursuit, bad positioning)? If more than half of losses are content problems, coordination-level AI has more leverage than more task-level AI.
- Inventory where resumes live. If the answer involves more than one shared drive folder, more than three file-name conventions, or "whoever touched it last," that is the first coordination problem to solve.
- Tag project sheets by scope, not by project name. A project sheet named `Main-Street-Widening-2023.docx` is invisible to retrieval. The same sheet tagged `roadway | intersection | traffic-control | municipal | state-DOT` is findable by pursuit requirement (see the sketch after this list).
- Consolidate boilerplate. One current Section H paragraph, one QA/QC narrative, one safety record, one small-business participation statement. Everything else gets retired.
- Re-measure in 90 days. Not AI usage. Time from RFP release to first complete draft assembly, and the share of submittals where content retrieval (not drafting) was the critical-path task.
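The filename-versus-tags point from the list above, in retrieval terms. A toy sketch: the first filename and its tags come from the bullet; the second filename is made up for contrast.

```python
# Filename-only library: a search can match only the name string.
files = ["Main-Street-Widening-2023.docx", "Oak-Ave-Resurfacing-2022.docx"]

# Tagged library: the same sheets, carrying scope tags a system can query.
tagged = {
    "Main-Street-Widening-2023.docx":
        {"roadway", "intersection", "traffic-control", "municipal", "state-DOT"},
    "Oak-Ave-Resurfacing-2022.docx":
        {"roadway", "municipal"},
}

requirement = "intersection"   # a pursuit asks for intersection experience

# Filename search misses: nothing in the name says "intersection".
print([f for f in files if requirement in f.lower()])            # []

# Tag lookup finds it: the sheet is findable by pursuit requirement.
print([f for f, tags in tagged.items() if requirement in tags])
# ['Main-Street-Widening-2023.docx']
```

Retrieval quality is set by the tags, not by the search tool sitting on top of them.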
Frequently Asked Questions
Does the 53% AI adoption figure include general-purpose AI or only AEC-specific proposal tools?
The 2026 industry study tracks AI tool use broadly across A/E firms, including both general-purpose AI (chat assistants, writing assistants) and industry-specific proposal and project tools. The 53% figure is not limited to any single product or category. The separate 2026 proposal-team benchmark that reports the 64% SME-delay statistic is specific to proposal teams and covers firms using any combination of general AI, dedicated proposal software, and internal tools.
Has any firm-size segment moved its win rate with AI adoption?
The publicly available data through 2026 does not cleanly segment win-rate change by firm size. Operational improvements — speed, capacity, margin — are more visible in the benchmarks than strategic improvements. If a firm-size segment has moved its win rate with AI specifically, that signal has not yet surfaced as a distinct trend in the industry data.
What does "coordination-level AI" actually require from a smaller firm?
For a firm under 50 people, the practical first step is structured staff profiles — a single source of truth for resumes that updates once and feeds multiple proposal versions. Step two is a project-sheet library with scope tags. These two steps unlock most of the retrieval value. A firm does not need to solve the full content-operations problem before AI starts helping with pursuits.
If task-level AI isn't moving win rates, should firms stop using it?
No. Task-level AI delivers real operational wins — faster drafts, faster summaries, faster reformatting — and those wins show up in margin. The mistake is treating task-level gains as evidence that the AI investment is complete. Task-level is the floor, not the ceiling.
How long does the content-operations work usually take?
For a firm that starts with scattered resumes and project files, expect 6 to 12 months to reach a state where AI retrieval produces reliably usable outputs. Most of the time goes into the first pass through existing content — structuring what's already there. After the first pass, ongoing maintenance is measured in hours per month, not weeks.
AI adoption in A/E firms jumped 15 points, margin rose, and win rates held flat. That gap is the story the 2026 data is telling. The firms that close it in the next three years are the ones building the coordination layer, so that AI actually has something to pull from.