
ENR FutureTech 2026: What It Means for Proposal Teams

ENR FutureTech 2026's most-repeated takeaway: AI as a multiplier, not a replacement. What that diagnosis means for AEC proposal teams.

Oswald B., Founder, RFPM.ai. Updated May 8, 2026

What ENR FutureTech 2026 Said About AI

ENR FutureTech 2026 closed Wednesday with the AEC industry's clearest signal yet that the AI conversation has shifted from "should we adopt" to "what makes adoption work." Per ENR's own wrap-up, the most-repeated takeaway across three days: AI works as a multiplier of team strengths, not a replacement, and that's where AEC firms are pulling ahead. For proposal teams, the diagnosis is the same — but the proposal room is already two adoption cycles past the conversation on stage.

The Conference's Headline Message

Now in its 16th year, the conference (May 4–6, San Francisco) drew record attendance — registrations up 17% year over year, sponsorships sold out — and featured 35+ speakers from Trimble, Hensel Phelps, Suffolk Construction, Zachry, McCarthy, Mortenson, Kiewit, and PCL. The agenda spanned data-driven operations, AI agents, robotics, jobsite wearables, and reality capture.

The recurring data point on stage was the one that has reset the AI conversation across industries this year: 95% of generative AI pilots fail to deliver measurable business return. That number comes from MIT's NANDA initiative, The GenAI Divide: State of AI in Business 2025 — built on 150 leader interviews, a survey of 350 employees, and analysis of 300 public AI deployments. The report's central finding is that the failure mode is rarely the model. It is the absence of structured data and integration with the systems people actually use.

Steve Jones, Senior Director of Industry Insights at Dodge Construction Network, delivered "The Dodge Perspective" on the macro AEC tech landscape. He framed construction as approaching a tipping point for AI adoption — high awareness, strong interest, validation from early adopters. Dodge's own research is consistent with the MIT framing: the path that works is AI embedded in the digital tools and data already in use, not generic AI bolted on top.

The conference message holds together. AI as multiplier, not replacement. Diagnose the data and the workflow first. Layer the AI on something that can support it.

What Proposal Teams Heard Differently

For BD and proposal leaders, the same message cuts harder than it does on the construction side. AI in proposals is not a future adoption decision: per a 2026 industry benchmark of roughly 700 firms, 53% of A/E firms already use AI somewhere in their proposal workflow, yet industry win rates haven't moved. A separate 2026 benchmark of about 300 proposal professionals reports that 67% of firms operate at less than 50% AI maturity in their proposal function — meaning AI is in the workflow, but it is not connected to the data that makes the workflow work.

This is the broken-workflow-with-AI-on-top pattern the MIT report describes. The proposal-side equivalent is straightforward: project sheets, resumes, and qualifications scattered across SharePoint folders and individual hard drives, indexed by filename and what one coordinator happens to remember. The same source data that should drive every Section E and Section F instead has to be reconstructed by hand for each pursuit.

For "AI as multiplier" to work, the team strengths have to exist in a form the AI can multiply. A capable AI assistant pasted on top of a disorganized proposal library doesn't make the proposal team better at qualifying for pursuits. It produces confident-sounding misstatements faster, which moves the bottleneck from drafting to verification.

Two Adoption Cycles Already In

The firms succeeding with AI in proposals are not the ones who adopted AI first. They are the ones who fixed the data layer first.

That work happened in two waves before ChatGPT made AI in proposals a mainstream conversation. The first wave was content centralization — moving project sheets, resumes, and reusable narrative blocks out of individual SharePoint folders and shared drives into a single repository the proposal team could actually search. The second wave was structured tagging — making each project sheet and each resume queryable by attributes (project type, agency, role, year, contract value, certification) instead of relying on filenames.
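The structured-tagging wave can be sketched in a few lines. This is a minimal illustration, not any specific proposal platform's data model: the `ProjectSheet` fields and the sample records below are hypothetical, chosen to mirror the attributes named above (project type, agency, role, year, contract value, certification).

```python
from dataclasses import dataclass

# Hypothetical record shape for one tagged project sheet.
# Field names are illustrative, not from a real product schema.
@dataclass
class ProjectSheet:
    name: str
    project_type: str        # e.g. "water/wastewater"
    agency: str              # owner / contracting agency
    role: str                # the firm's role: "prime", "sub", ...
    year: int
    contract_value: float
    certifications: list[str]

# A tagged library: records, not filenames on a shared drive.
library = [
    ProjectSheet("Riverside WTP Upgrade", "water/wastewater",
                 "City of Riverside", "prime", 2022, 14_500_000, ["LEED Gold"]),
    ProjectSheet("Harbor Bridge Rehab", "bridges",
                 "State DOT", "sub", 2021, 8_200_000, []),
]

# What tagging buys you: "all prime water/wastewater work since 2020"
# becomes a one-line query instead of a coordinator's memory.
matches = [p for p in library
           if p.project_type == "water/wastewater"
           and p.role == "prime"
           and p.year >= 2020]
```

The point is not the Python; it is that once every sheet carries these attributes, any downstream tool (AI or otherwise) can retrieve by what the firm actually did rather than by what a file happened to be named.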

Firms that did both are now generating tailored Section F drafts and tailored resume entries from one source. Firms that did neither are still copy-pasting from Word documents, and the AI on top is producing the same output 10 times faster — which means 10 times more verification work. The AI is not the differentiator. The data layer underneath is.

This is the conversation that wasn't on the FutureTech main stage. The conference is built around the design and construction side of the firm — robotics deployment, autonomous equipment, reality capture, jobsite safety. The proposal team is in a different adoption cycle. By the time the multiplier message reaches the proposal coordinator's desk, AI has already been generating Section E and Section F drafts for two years — and the dysfunction the MIT data describes is already the steady state.

What This Means for Proposal Teams Specifically

If the post-FutureTech narrative pushes your firm to add another AI tool to the proposal workflow, the question to ask first is the conference's question, applied to your room: what is the workflow, and is it organized enough that AI can act as a multiplier rather than an amplifier of the dysfunction? AI added on top of disorganized content produces fast nonsense. AI added on top of structured content produces fast first drafts that hold up to a five-minute verification pass.

The work that compounds:

  1. Inventory before adding. A current and tagged record of every project, every resume, and every certification is the precondition for any AI tool to be useful. Skip this step and the AI output will be wrong half the time.
  2. Treat the proposal library as the source of truth, not the deliverable. When the same staff record drives the SF330 resume, the SOQ project sheet, and the agency-specific qualifications package, AI is doing translation, not invention. That is the safe place for AI in proposals.
  3. Pick the AI tool last. The tool matters less than the data it sees. A capable AI assistant with no view of your firm's actual content will produce confident misstatements. A modest tool with full access to a structured library will produce defensible drafts.
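Point 2 — the library as source of truth, with AI doing translation rather than invention — can be sketched as one record feeding multiple renderings. The record fields and the two output formats below are hypothetical stand-ins, not the actual SF330 or SOQ layouts:

```python
# Hypothetical single source of truth for one staff member.
staff = {
    "name": "J. Rivera",
    "title": "Senior Structural Engineer",
    "years_experience": 14,
    "registrations": ["PE (CA)"],
    "projects": ["Riverside WTP Upgrade", "Harbor Bridge Rehab"],
}

def sf330_resume(rec):
    # Terse, form-style line for a resume entry.
    regs = ", ".join(rec["registrations"])
    return f"{rec['name']}, {rec['title']} ({rec['years_experience']} yrs, {regs})"

def soq_bio(rec):
    # Narrative SOQ sentence drawn from the same fields, rephrased.
    return (f"{rec['name']} brings {rec['years_experience']} years of "
            f"experience on projects including {rec['projects'][0]}.")
```

Both renderings read from the same record, so updating the record updates every deliverable. That is the "translation, not invention" boundary: the AI (or a plain function) reformats verified facts; it never supplies them.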

The firms that own the AI conversation in proposals two years from now are not the firms who showed up at FutureTech 2026 ready to buy. They are the firms who showed up at the data conversation in 2023, fixed the underlying workflow, and are now using AI as the multiplier the conference said it should be.

FAQ

What was the main message at ENR FutureTech 2026?

ENR FutureTech 2026 ran May 4–6 in San Francisco, drew record attendance in its 16th year, and featured 35+ speakers across data-driven operations, AI agents, robotics, reality capture, and jobsite safety. ENR's wrap-up framed the most-repeated takeaway across three days as: technology adopted as a multiplier of team strengths, rather than a replacement, is where AEC firms are pulling ahead. The recurring AI data point cited on stage was MIT NANDA's finding that 95% of generative AI pilots fail to deliver measurable business return.

How does AI in proposals differ from AI in design and construction?

AI in design and construction is being introduced to teams that have been slower to adopt new tools. AI in proposals has been in use for two years already — 53% of A/E firms have AI somewhere in the proposal workflow, with industry win rates flat. The constraint on the proposal side is not adoption. It is whether the firm's project history, resumes, and qualifications are structured well enough for AI to act as a multiplier rather than amplify the existing dysfunction.

What is "structured proposal data" and why does it matter for AI?

Structured proposal data means project sheets, resumes, and qualifications stored as queryable records — searchable by project type, agency, role, year, certifications — rather than as Word documents scattered across SharePoint. AI generating Section E and Section F drafts from structured data produces output bounded by what the firm actually delivered. AI generating drafts from unstructured PDFs produces plausible content that often doesn't match the record.

Should our firm add more AI tools after FutureTech?

Adding AI tools without first organizing the content the tools will use produces output that needs more verification, not less. The work that compounds is structuring the proposal library — current project records, tagged resumes, queryable certifications — before introducing additional AI on top. Firms that already did that work are two cycles ahead. Firms that haven't will not catch up by buying tools.

What's the takeaway for BD principals from FutureTech 2026?

The post-conference narrative will push toward AI tooling decisions. The right question to ask the proposal team is not "what AI should we buy?" but "what does the AI need to read — and is it organized?" Firms with structured proposal content win the AI productivity story because the AI is acting as a multiplier on real strengths. Firms without lose the productivity story even after they buy the tool.

RFPM.ai automates proposal resumes and project sheets for engineering and construction firms. See how it works →