
When AI Writes Your SF330, Whose Standard of Care Is It?

Under FAR 52.222-90, an unverified AI draft in your SF330 is a certification problem. The kind of AI you use and how you verify it are the answer.

Oswald B., Founder, RFPM.ai · Updated April 29, 2026

The Question ENR FutureTech Is Asking This Week

When AI drafts a project description for SF330 Section E or Section F, who is responsible for what it says? The licensed professional whose name is on Section G remains accountable for accuracy. After April 24, so does the firm that certifies compliance under FAR 52.222-90. Until this month, an unverified AI draft in a federal submittal was a credibility problem. It is now a certification problem too.

The answer is not to stop using AI. It is to be deliberate about which AI you use and how you verify what it produces.

ENR FutureTech opens May 4 in San Francisco with a headline panel called "Evolving Standard of Care: How AI Adoption Has Become a Risk Management Imperative for AEC." The conversation is framed for design firms (engineers and architects whose stamp is on a drawing), but the question doesn't stop at the design office. It walks straight into the proposal room, where it becomes: who answers for an AI-drafted claim — the principal whose registration number is on Section G, the coordinator who pasted the AI output into the template, or the firm that signed the certification on Section J?

What "Standard of Care" Means When AI Is in the Loop

In AEC professional practice, the standard of care is the duty of a licensed professional to perform services with the skill, care, and diligence ordinarily exercised by competent professionals in the same field, in the same locality, under similar conditions. It is the legal touchstone for malpractice claims and the operative concept behind every E&O policy.

AI shows up in proposal work in two very different forms.

One form is a general-purpose chat assistant: a free ChatGPT account, a generic copilot, anything that has no view of your firm's actual projects and staff. It produces content that reads as plausible regardless of whether it is true. It has no record of what your firm has done, so it fills gaps with confident invention. The standard-of-care risk is real because the draft and the firm's actual record are two different things, and unless someone on the proposal team catches the gap, it gets submitted as fact.

The other form is AI built for proposal work, drawing on the firm's own staff records and project history. It can only describe what is actually in those records. The output maps to qualifications and projects the firm can substantiate. The standard-of-care question shifts from "did the AI invent something?" to "did the proposal team verify what came back?"

Either way, one operative idea applies: delegating drafting to AI does not delegate professional responsibility for what the AI drafts. The licensed professional whose name appears on the document remains responsible for its accuracy. The kind of AI you use changes how hard the verification job is. It does not remove it.

For proposal teams, that responsibility lands in three predictable places.

Three Places This Hits SF330 Submittals

Section E — Resumes of Key Personnel

Section E is the most common AI-drafting target on a federal A/E proposal. The temptation is real: a proposal coordinator with 14 resumes to tailor for a Tuesday submittal can save hours by feeding raw bullets to an AI and asking for a Section E-formatted version.

The risk shows up when the AI doing the drafting has no view of staff records. Project roles get inflated from "construction observation" to "design lead." Years of experience get rounded up. Certifications get invented. A specific role on a specific project becomes "served as senior design engineer" when the actual role was "junior staff engineer." Each individual change is small. Cumulatively, they describe a different person than the one who will actually do the work.

AI that draws on the firm's own staff records does not have that latitude. It cannot promote a junior engineer to design lead because the underlying record says otherwise. The output it produces is bounded by what the firm actually has on file.

Section F — Project Experience

Section F project descriptions are the second target. A general-purpose AI is good at generating project-experience prose because it has read a lot of it. It is bad at knowing which projects your firm did and which it did not. A sufficiently detailed prompt will produce a confident, plausible Section F entry for a project your firm never touched, or with a scope and value that does not match what your firm actually delivered.

Once an SF330 with a fabricated Section F is submitted, the firm has signed a representation about its experience. The contracting officer will weigh that experience in the source-selection decision. The path from "the AI made it up" to "the firm misrepresented qualifications" is short.

AI working from the firm's own project history produces a different output. It can only describe projects the firm actually delivered, in the scope and role recorded. The verification question on Section F becomes "is this project record accurate?" instead of "did this project exist at all?"

Section H — Approach Narratives

Section H is where AI-drafted boilerplate is most defensible (generic methodology language, design philosophy, quality assurance approach) and also where the standard-of-care risk is highest. Section H statements about how the team will perform the work are commitments. An AI-generated commitment to "deploy advanced BIM coordination workflows across all disciplines" binds the firm whether or not it can actually deliver, and becomes a misrepresentation if it cannot.

Past-performance claims in Section H ("our firm has successfully delivered 47 similar projects on or ahead of schedule") need verification before they are submitted, regardless of how the draft was produced. Section H is where the kind of AI matters least, because the content is forward-looking.

The New Layer: FAR 52.222-90 and Certification

Until this month, the consequences of AI-introduced errors in an SF330 were mostly competitive: losing the shortlist, losing credibility with the contracting officer, getting rejected in technical evaluation. As of April 24, FAR 52.222-90 attaches a certification to every new federal contract over $15,000 and to every existing federal contract by July 24. The certification itself is about discriminatory DEI activities, but the underlying mechanic, that compliance is "material to the Government's payment decisions for purposes of section 3729(b)(4)" of the False Claims Act, applies more broadly to anything the firm represents to the government on the path to award.

Combined with the flow-down through revised FAR 52.244-6 and the new debarment authority under FAR 9.406-2(b)(viii), the implication is straightforward. Any material misrepresentation in a federal submittal, including an AI-introduced one, is now a present-and-responsible determination input, not just a competitive setback.

The standard of care for AI in proposal content is not a future question. It is the question on the next federal submittal that goes out the door.

A Five-Minute Verification Pass Before You Submit

Most AI-introduced errors in an SF330 are catchable in a short, structured pass. Five questions, in order:

  1. For every Section E entry, does the role described match the role the person actually played? If the AI says "design lead" and the person was a junior staff engineer, fix it before the resume goes in.
  2. For every Section F project, is this a project the firm actually performed, in the role and at the scope described? If a project description references work outside your firm's actual contract, remove it.
  3. For every Section F project, does the firm have records to substantiate every fact in the description? Project value, dates, role, key personnel involvement. If you would not be comfortable producing the documentation in a CPARS dispute, do not submit the description as drafted.
  4. For every Section H commitment, is the commitment one the firm can actually meet? Forward-looking promises about methodology, technology, or staffing that exceed what the firm has delivered before need to be either substantiated or removed.
  5. For every numerical claim, does the number match a source the firm controls? Win rates, project counts, award values. The source check is the only defense.

This pass does not require legal review. It requires the proposal coordinator to compare AI output against the firm's actual records before the submittal goes out.
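For firms whose records live in structured data rather than scattered documents, the first three questions of the pass can even be mechanized. The sketch below is illustrative only: the record structures, field names, people, and projects are all hypothetical stand-ins for wherever your firm's staff and project data actually lives.

```python
# Hypothetical firm-of-record data. In practice this would come from the
# firm's staff and project database, not hard-coded dictionaries.
FIRM_STAFF = {
    "J. Rivera": {"role_history": {"Riverfront Bridge": "junior staff engineer"}},
}
FIRM_PROJECTS = {
    "Riverfront Bridge": {"value": 4_200_000, "scope": "construction observation"},
}

def verify_section_e(resume_claims):
    """Q1: does each claimed role match the role actually on file?"""
    issues = []
    for person, claims in resume_claims.items():
        on_file = FIRM_STAFF.get(person, {}).get("role_history", {})
        for project, claimed_role in claims.items():
            actual = on_file.get(project)
            if actual is None:
                issues.append(f"{person}: no record of work on '{project}'")
            elif actual != claimed_role:
                issues.append(f"{person} on '{project}': draft says "
                              f"'{claimed_role}', record says '{actual}'")
    return issues

def verify_section_f(project_claims):
    """Q2-Q3: is each project on file, with every stated fact substantiated?"""
    issues = []
    for name, claim in project_claims.items():
        record = FIRM_PROJECTS.get(name)
        if record is None:
            issues.append(f"'{name}': not in the firm's project history")
            continue
        for field, value in claim.items():
            if record.get(field) != value:
                issues.append(f"'{name}' {field}: draft says {value!r}, "
                              f"record says {record.get(field)!r}")
    return issues

# An AI-drafted claim set with the exact failure modes described above:
# an inflated role, an inflated project value, and an invented project.
draft_e = {"J. Rivera": {"Riverfront Bridge": "design lead"}}
draft_f = {
    "Riverfront Bridge": {"value": 6_000_000, "scope": "construction observation"},
    "Harbor Tunnel": {"value": 12_000_000},
}

for issue in verify_section_e(draft_e) + verify_section_f(draft_f):
    print("FLAG:", issue)
```

Questions 4 and 5 (Section H commitments and numerical claims) resist this kind of automation and stay with the human reviewer, which is the point of the pass.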

What This Means for Proposal Operations

The takeaway from the FutureTech standard-of-care conversation is not that AEC firms should stop using AI in proposals. The 53% of A/E firms already using AI for proposals are not going back. The takeaway is that two things matter more than they used to: what kind of AI drafts the content, and the verification step that runs after.

The kind of AI matters because a generic chat assistant with no view of your firm's records cannot do the verification job for you. It can only produce confident-sounding prose. AI built around the firm's actual proposal content (staff qualifications, project records, past-performance metrics) produces output that maps to records the firm can substantiate.

Verification matters because Section H commitments and forward-looking statements need a human in the loop regardless of how the draft was produced.

Firms that keep their staff and project records in one place where AI can draw on them make the verification pass faster, because the source of truth sits next to the draft. Firms whose source of truth is scattered across Word documents on a shared drive will repeat that five-minute pass for every resume on every pursuit, and spend more of it untangling output that was never anchored to anything in the first place.

FAQ

Is using AI to draft SF330 content a violation of professional standard of care?

Using AI to draft SF330 content is not itself a violation. The violation occurs when AI-drafted content is submitted without verification against the firm's actual records. Standard of care attaches to the document the firm signs, not to how the draft was produced. Firms that pair AI built around their own records with a short verification pass meet the standard. Firms that paste output from a generic chat model into a template without checking do not.

Who is responsible if AI introduces errors into an SF330 submittal?

The licensed professional whose name appears on Section G remains responsible for the accuracy of the document, including AI-drafted content. Delegating drafting to AI does not delegate professional responsibility. The firm that signs the certifications on Section J is also accountable to the federal government for what was represented.

Does FAR 52.222-90 apply to AI-introduced misstatements in proposals?

The certification in FAR 52.222-90 itself concerns discriminatory DEI activities. Its enforcement mechanic, that compliance is material to government payment decisions under the False Claims Act, applies broadly. After April 24, 2026, material misstatements in federal submittals, including AI-introduced ones, can factor into present-and-responsible determinations and FAR 9.406-2(b)(viii) debarment authority.

How do I verify AI-drafted SF330 content before submitting?

Run a five-question pass before submittal. Confirm Section E roles match actual project roles. Confirm Section F projects were actually performed by the firm in the scope described. Verify the firm has records to substantiate every Section F fact. Confirm Section H commitments are achievable. Source-check every numerical claim against the firm's records.

Should our firm stop using AI for proposal drafting?

No. The 53% of A/E firms already using AI for proposals are not reverting. The shift is toward the right kind of AI plus the verification pass: AI connected to your firm's actual project and staff records, plus a five-question check before submittal. Firms with structured proposal data verify in minutes. Firms running drafts through a generic chat assistant against Word documents on a shared drive take longer and accept more risk.

RFPM.ai automates proposal resumes and project sheets for engineering and construction firms. See how it works →