Most health plans treat audit findings as coding errors. The exposure was built months earlier, in workflows coding QA never touches.
We reviewed more than ten OIG Medicare Advantage audit reports and worked through over 800 pages of findings, methodology documentation, and payment error analyses to understand what actually drives RADV audit failures at scale. The answer is not what most risk adjustment teams are focused on.
The unsupported diagnoses that surface in OIG audits are not randomly distributed. They cluster around a small, identifiable set of patterns that appear in plan after plan, service year after service year, regardless of the organization or the coding vendor involved. Across every report reviewed, the same failure modes repeated. Approximately 70% of flagged diagnosis codes were unsupported by medical records. In some HCC categories, that rate exceeded 90%.
Most risk adjustment teams respond by tightening QA sampling rates, retraining on ICD-10-CM guidelines, and increasing second-pass review volume. That response treats audit failures as a coding accuracy problem.
Audit exposure builds in the workflow. It surfaces in the audit.
The patterns that create audit liability are not primarily QA failures. They are workflow design problems, clinical documentation gaps, and coding logic errors that accumulate silently during everyday operations and only become visible when CMS opens the chart.
The February 2026 MA ICPG, OIG's first Medicare Advantage compliance guidance since 1999, explicitly flagged chart reviews, health risk assessments, and AI-generated coding prompts as risk adjustment compliance concerns. The enforcement infrastructure is no longer episodic. It is continuous, and it is designed to find exactly the patterns described below.
How OIG Decides Which Members to Pull
Before a single chart is reviewed, OIG’s screening analytics have already decided which members are worth auditing. The filtering logic is pattern-based and increasingly efficient. Four signals appear repeatedly in how audit samples are constructed: chronic conditions reported only once in a service year, acute diagnoses without matching inpatient utilization, diagnoses inconsistent with pharmacy claims data, and members whose HCC combinations trigger disease interaction factors that compound RAF scores.
That last signal carries disproportionate financial weight. When two diagnoses interact to produce an elevated RAF score and one of those diagnoses is found unsupported, the payment delta is larger than either diagnosis would generate alone. Interaction-heavy profiles attract audit attention even when each individual condition looks ordinary on its own.
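OIG does not publish its screening code, but the four signals reduce to checks a plan can run against its own data first. A minimal sketch in Python, with hypothetical field names and data shapes standing in for a real claims feed:

```python
# Hypothetical pre-submission screen mirroring the four sampling signals
# described above. Field names and structures are illustrative; OIG does
# not publish its exact screening logic.

def screening_signals(member):
    """Return the sampling signals a member profile would trip."""
    signals = []

    # 1. Chronic condition reported only once in the service year
    for dx, count in member["dx_counts"].items():
        if dx in member["chronic_dx"] and count == 1:
            signals.append(f"single-occurrence chronic: {dx}")

    # 2. Acute diagnosis with no matching inpatient utilization
    if member["acute_dx"] and not member["inpatient_claims"]:
        signals.append("acute dx without inpatient claim")

    # 3. Diagnosis whose expected drug class never appears in pharmacy claims
    for dx, drug_class in member["expected_rx"].items():
        if drug_class not in member["rx_classes"]:
            signals.append(f"{dx} without {drug_class}")

    # 4. HCC combination that triggers a disease interaction factor
    if member["interaction_pairs"]:
        signals.append(f"interaction-heavy profile: {member['interaction_pairs']}")

    return signals
```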
The seven patterns below are not just documentation failures. They are the specific conditions OIG’s own screening logic is designed to find. Understanding what the filter targets changes how a program should prioritize its pre-submission review.
The 7 Patterns That Build RADV Audit Exposure
Pattern 1: Historical Conditions Coded as Current Without MEAT Support
This is the single largest driver of RADV audit disallowances. OIG has stated it directly: unsupported diagnoses most frequently result from conditions that were historically present being reported as active in the current service year. A diagnosis sitting in a problem list or referenced in a specialist note from a prior year does not meet CMS’s documentation standard for the current period. The condition must be monitored, evaluated, assessed, or treated during a face-to-face encounter in the service year being submitted.
The workflow failure is not that coders misunderstand MEAT criteria. The failure is that chart retrieval and ingestion processes deliver records where historical documentation and current documentation are not clearly separated. When a coder reviews a 200-page chart with problem lists carried forward across multiple years, the distinction between active and historical conditions requires clinical interpretation, not code selection. AI systems that flag diagnoses based on keyword presence in the record compound this problem by surfacing historical references without temporal context.
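The operational version of that distinction is a date gate, not a keyword match. A simplified sketch of the idea, with an invented evidence structure; real MEAT evaluation still requires clinical review, and this models only the temporal check that keyword matching skips:

```python
from datetime import date

# Illustrative temporal gate: flags a diagnosis with no MEAT evidence dated
# inside the service year. Evidence structure is hypothetical.

SERVICE_YEAR = 2024
MEAT = {"monitored", "evaluated", "assessed", "treated"}

def lacks_current_meat(dx_evidence):
    """dx_evidence: list of (evidence_date, evidence_type) tuples for one diagnosis."""
    return not any(
        ev_date.year == SERVICE_YEAR and ev_type in MEAT
        for ev_date, ev_type in dx_evidence
    )

# A problem-list entry carried forward from 2022 with no current-year workup:
evidence = [(date(2022, 3, 14), "assessed"), (date(2024, 6, 2), "problem_list_mention")]
print(lacks_current_meat(evidence))  # True -> flag before the chart reaches a coder
```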
Pattern 2: Acute Conditions Reported From Outpatient Settings Without Corroborating Claims
OIG’s toolkit specifically identifies acute stroke reported only on a physician claim without a corresponding hospital claim, and acute heart attack reported only on an outpatient claim without inpatient admission within 60 days. These are clinically implausible encounter patterns that pass through coding workflows because the coder reviews only the chart in front of them, not the claims context surrounding it.
The fix is not coder education. The fix is a pre-coding validation layer that cross-references encounter setting and diagnosis acuity before the chart reaches a reviewer.
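A sketch of what that layer could look like, using the 60-day window from the OIG toolkit example above; the data model and the diagnosis code list are illustrative:

```python
from datetime import date, timedelta

# Illustrative pre-coding check: an acute-MI code on an outpatient claim
# should have an inpatient admission within 60 days (the window cited in
# the OIG toolkit example above). The data model is hypothetical.

ACUTE_DX_REQUIRING_INPATIENT = {"I21.9": timedelta(days=60)}  # acute MI, unspecified

def implausible_acute_claims(claims):
    """claims: list of dicts with 'dx', 'setting', and 'date' keys."""
    inpatient_dates = [c["date"] for c in claims if c["setting"] == "inpatient"]
    flags = []
    for c in claims:
        window = ACUTE_DX_REQUIRING_INPATIENT.get(c["dx"])
        if window and c["setting"] == "outpatient":
            if not any(abs(d - c["date"]) <= window for d in inpatient_dates):
                flags.append(c)
    return flags

# An acute MI coded from an outpatient visit, with no inpatient stay on file:
print(implausible_acute_claims(
    [{"dx": "I21.9", "setting": "outpatient", "date": date(2024, 5, 10)}]
))  # -> flags the claim for review before it reaches a coder
```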
Pattern 3: Cancer Diagnoses Without Evidence of Active Treatment
OIG audits found error rates between 88% and 96% for lung, breast, and colon cancer diagnoses that lacked evidence of surgery, radiation, or chemotherapy within six months. A cancer diagnosis documented in a primary care visit note without corresponding oncology treatment records will not survive audit. When the chart shows surveillance visits or monitoring after completed treatment, auditors conclude the documentation supports history of cancer, not active malignancy.
The chart retrieval strategy determines whether treatment evidence is available. If the chase logic does not pull oncology records for members with cancer HCCs, the coding decision is made against incomplete documentation.
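One way to operationalize that chase logic, sketched with hypothetical record structures and a 183-day stand-in for the six-month treatment window:

```python
from datetime import date, timedelta

# Illustrative chase-logic check for members with active-cancer HCCs:
# does the retrieved record set contain treatment evidence within roughly
# six months of the diagnosis date? Structures are hypothetical.

TREATMENT_TYPES = {"surgery", "radiation", "chemotherapy"}

def needs_oncology_chase(dx_date, retrieved_records):
    """retrieved_records: list of (record_date, record_type) tuples."""
    window = timedelta(days=183)  # stand-in for six months
    return not any(
        rtype in TREATMENT_TYPES and abs(rdate - dx_date) <= window
        for rdate, rtype in retrieved_records
    )

# Surveillance visits only -> pull oncology records before coding active malignancy
records = [(date(2024, 2, 1), "pcp_visit"), (date(2024, 8, 9), "surveillance_imaging")]
print(needs_oncology_chase(date(2024, 1, 15), records))  # True
```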
Pattern 4: Diagnoses Inconsistent With Pharmacy Data
OIG frequently compares diagnoses against the medications typically associated with them. Embolism diagnoses without anticoagulant therapy. Major depressive disorder without antidepressant medication. When the expected pharmacy data is missing, the diagnosis becomes analytically suspicious regardless of what the chart says. Internal teams often stop at “the code was in the Assessment.” OIG’s filtering logic does not stop there.
This pattern is a claims data validation problem, not a clinical documentation problem. Programs that cross-reference diagnosis submissions against pharmacy claims before final submission can identify and flag these mismatches for additional documentation review.
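A minimal version of that cross-reference, with a two-entry map standing in for what would be a clinically maintained diagnosis-to-drug-class table:

```python
# Illustrative pharmacy cross-match run before final submission.

EXPECTED_RX = {
    "I26.99": "anticoagulant",   # pulmonary embolism (example code)
    "F33.1": "antidepressant",   # major depressive disorder, recurrent, moderate
}

def rx_mismatches(submitted_dx, pharmacy_classes):
    """Return diagnoses whose expected drug class is absent from pharmacy claims."""
    return [
        (dx, EXPECTED_RX[dx])
        for dx in submitted_dx
        if dx in EXPECTED_RX and EXPECTED_RX[dx] not in pharmacy_classes
    ]

# Member submitted with MDD but no antidepressant fills on record:
print(rx_mismatches(["F33.1", "E11.9"], {"statin", "biguanide"}))
# -> [('F33.1', 'antidepressant')]: route for additional documentation review
```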
Pattern 5: Problem List Diagnoses Without Clinical Evaluation
Electronic health records carry forward conditions from earlier visits. Those entries appear in the note even when the provider did not evaluate the condition during the encounter. Auditors reviewing these charts find diagnoses listed in the problem list but no discussion of the condition’s status, treatment, or monitoring in the active assessment. When that happens, the record does not demonstrate that the condition represented active disease burden during the service year.
This pattern is particularly prevalent in programs that rely on AI-driven chart review without analyst governance. Pattern-matching algorithms identify the diagnosis. They do not evaluate whether the documentation source meets CMS’s encounter-level requirements.
Pattern 6: Provider Type and Credential Gaps
CMS accepts risk adjustment diagnoses only from specific provider types in specific encounter settings. OIG has rejected codes where records came from radiology reports or electrocardiograms not interpreted and signed by an acceptable provider type. Diagnostic test results without physician documentation and notes from non-qualified provider types produce the same result: disallowed HCCs.
This is an administrative validation failure, not a clinical coding error. When ingestion workflows do not verify provider credentials and encounter type before routing charts to coders, the resulting codes carry audit risk the coder cannot see.
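The check itself is mechanical once it sits at ingestion rather than at coding. A sketch with placeholder acceptable-type sets; CMS's actual provider-type and setting rules are considerably longer:

```python
# Illustrative ingestion gate: hold charts whose provider type, setting, or
# signature cannot support risk adjustment, with documented reasons.

ACCEPTABLE_PROVIDER_TYPES = {"MD", "DO", "NP", "PA"}
ACCEPTABLE_SETTINGS = {"inpatient", "outpatient", "professional"}

def ingestion_hold_reasons(chart):
    reasons = []
    if chart["provider_type"] not in ACCEPTABLE_PROVIDER_TYPES:
        reasons.append(f"provider type not acceptable: {chart['provider_type']}")
    if chart["setting"] not in ACCEPTABLE_SETTINGS:
        reasons.append(f"encounter setting not acceptable: {chart['setting']}")
    if not chart["signed"]:
        reasons.append("record not signed by the rendering provider")
    return reasons  # empty list -> chart proceeds to coding

# A radiology report with no acceptable interpreting-provider signature:
print(ingestion_hold_reasons(
    {"provider_type": "RT", "setting": "diagnostic", "signed": False}
))
```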
Pattern 7: Single-Source HCC Documentation in Interaction-Heavy Profiles
A single claim supporting an HCC, with no corroborating documentation elsewhere in the member’s record, is a high-risk audit target. CMS now limits plans to two medical records per audited HCC. When the only documentation for a condition is a single outpatient visit note, the plan’s entire RAF revenue for that member’s condition depends on one record surviving scrutiny.
The financial exposure multiplies when that single-source diagnosis is part of a disease interaction pair. Heart failure and diabetes. Heart failure and COPD. When one unsupported diagnosis in an interacting pair collapses the disease interaction factor, the RAF delta is disproportionately large. Programs that identify and preserve the strongest encounter for each HCC before submission have a structural advantage, especially for members whose HCC combinations trigger interaction factors.
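The arithmetic behind that disproportionate delta is worth making explicit. With made-up coefficients, since actual CMS-HCC model weights differ:

```python
# Hypothetical weights showing why losing one diagnosis in an interacting
# pair costs more than that diagnosis's own coefficient.

HF, DIABETES, HF_DM_INTERACTION = 0.331, 0.105, 0.121  # illustrative values

supported = HF + DIABETES + HF_DM_INTERACTION  # both diagnoses survive audit
hf_disallowed = DIABETES                       # HF unsupported: interaction collapses too

print(round(supported - hf_disallowed, 3))  # 0.452 RAF lost, vs 0.331 for HF alone
```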
Why Conventional Responses Miss the Root Cause
The standard response to RADV findings is reactive: review the charts that failed, retrain the coders who touched them, adjust QA sampling. That treats each finding as an isolated event.
The root causes sit upstream. Problem list carry-forward is a function of EHR design, not coder negligence. Acute-versus-history confusion is a function of how clinical language maps to ICD-10-CM specificity requirements. Single-occurrence chronic conditions reflect gaps in concurrent and prospective capture strategy, not retrospective coding quality.
The patterns that fail audits are not coding quality problems. They are program design problems that surface in coding.
Annova Built Its Entire Workflow Around These Failure Points
Most risk adjustment programs find these patterns when CMS does. Annova’s workflows are designed to find them first.
Temporal reasoning, not keyword detection (Patterns 1 and 5)
Annova’s nHance engine parses every chart into 14 clinical domains and applies temporal reasoning to distinguish active conditions from historical ones. A diagnosis carried forward on a problem list without current MEAT support gets flagged before an analyst opens the chart — not after it fails audit.
Encounter validation before coding begins (Patterns 2 and 6)
Every chart is validated for provider credentials, NPI match, encounter type, and face-to-face requirements at ingestion. Charts that fail go on hold with documented reasons. They do not proceed to coding with unresolvable audit risk already baked in.
Best encounter preservation (Pattern 7)
For each HCC, analysts identify and store the encounter with the strongest MEAT-compliant documentation and cleanest provider support. That record becomes the plan’s preferred audit source — not whatever CMS happens to pull first.
Pharmacy cross-match before submission (Pattern 4)
Diagnoses with known pharmacy inconsistencies — embolism without anticoagulant, major depression without antidepressant — are flagged for additional review before final submission. OIG runs this check externally. Annova runs it first.
Accuracy calibrated to CMS, not internal QA (all patterns)
A program that achieves 96% accuracy against its own benchmark can still produce codes that fail audit. Annova’s QA framework is continuously updated against CMS RADV findings, OIG audit reports, and AHA Coding Clinic guidance. The standard is how CMS measures it.
What Changes When Programs Address These Patterns at the Source
Programs that manage RADV exposure more effectively share a common trait: they monitor diagnosis patterns across their populations rather than evaluating documentation encounter by encounter. Each individual chart may appear defensible in isolation. When the same pattern repeats across hundreds of members, it becomes analytically easy for OIG to isolate for review and financially devastating under extrapolation.
CMS is auditing every eligible contract annually. OIG’s February 2026 ICPG made risk adjustment one of its most detailed compliance risk sections. DOJ’s False Claims Act working group named MA fraud a priority enforcement area, with record recoveries of $6.8 billion in fiscal year 2025.
Audit defensibility is not an activity that begins when CMS sends a notice. It is a design requirement embedded in every step between chart retrieval and submission.
The seven patterns above are the specific places where program design either holds or fails under that scrutiny. Plans that know where their exposure builds can fix it before CMS finds it.