Domain I: Support Responsible and Trustworthy AI Efforts — Comprehensive Study Guide
Exam weight: 15% of PMI-CPMAI exam (~18 scored questions). Score-report framing: ✅ Target — but the heaviest cross-domain pull. Maps to CPMAI methodology: runs throughout all phases (not a single phase). Number of ECO tasks: 5 (I.1 through I.5). Estimated study time: 7 hours.
Domain I gets a full study guide, not a refresh, even though it scored Target on the first attempt. Domain I is the most cross-pulled content on the exam: wrong-answer traps in Domains III, IV, and V often hinge on Domain I principles. Mastery here protects weak-domain scores.
Overview
Domain I is unique. It doesn't map to a single CPMAI phase — it runs throughout the project lifecycle, from Phase I (Business Understanding) all the way through Phase VI (Operationalization). Every other domain pulls Domain I principles into its decisions. When a Domain III question mentions PII, when a Domain IV question mentions bias, when a Domain V question mentions audit trails — Domain I is being tested.
The unifying pattern: Domain I asks whether the project respects human values, regulatory boundaries, and accountability structures throughout its existence. Not at a single checkpoint — continuously. Every task begins with an oversight verb: oversee (I.1), manage (I.2), conduct (I.3), monitor (I.4), manage (I.5). The PM doesn't write privacy plans, run bias tests, or maintain audit trails — the PM ensures these activities are happening, are documented, and are integrated into decisions across all other domains.
This domain scored Target on the first attempt. The risk for the retake isn't Domain I questions in isolation — it's missing Domain I cues inside Domain III/IV/V questions. Stems mentioning PII, GDPR, HIPAA, bias, audit, transparency, explainability, accountability, or trustworthy-AI principles are testing whether you respect Domain I when answering questions in other domains.
Table of Contents
- Module 1: The Trustworthy AI Framework Foundation (Lessons 1-4)
- Module 2: Privacy and Security (Lessons 5-8)
- Module 3: Transparency (Lessons 9-13)
- Module 4: Bias (Lessons 14-18)
- Module 5: Regulatory Compliance (Lessons 19-23)
- Module 6: Accountability and Audit (Lessons 24-28)
- Quick Reference: The 5 Pillars (SRTGI) Cross-Reference
- Cross-Domain Links
- Knowledge Check
- Memory Aids & Mnemonics Summary
Module 1: The Trustworthy AI Framework Foundation
Lessons 1-4 | The framework, why it exists, and how it runs through every project phase.
Lesson 1: What Is a Trustworthy AI Framework?
A Trustworthy AI Framework is a documented approach guiding two communities: those building AI systems (need guardrails, boundaries, processes) and those using AI systems (need reliable, transparent, contestable interactions). PMI's Trustworthy AI Framework consolidates principles across 60+ existing frameworks into a comprehensive structure aligned with PMI-CPMAI.
The framework isn't optional for the exam — it's the spine of Domain I and the cross-cutting reference for every other domain.
KEY TAKEAWAYS
- Trustworthy AI Framework guides builders (guardrails) and users (reliable interactions).
- PMI's framework consolidates 60+ frameworks into one structure.
Lesson 2: The 5 Pillars of Trustworthy AI (SRTGI)
The 5 pillars memorized in Phase I — same framework, deeper here:
| Pillar | Focus | ECO Task Connection |
|---|---|---|
| Societal & Ethical | Human values, fairness, bias | I.3 (bias checks) |
| Responsible | Accountability, privacy, safety, security | I.1 (privacy/security), I.5 (accountability) |
| Transparent | Visibility into AI design, data, decisions | I.2 (transparency) |
| Governed | Oversight, compliance, audit, regulation | I.4 (compliance), I.5 (audit) |
| Interpretable / Explainable | Why the AI made this specific decision | I.2 (transparency) |
💡 Memory Aid — SRTGI
Societal, Responsible, Transparent, Governed, Interpretable. "Some Really Trustworthy Governance Inspires." (Covered in Phase I.)
KEY TAKEAWAYS
- 5 pillars — SRTGI — map to ECO Domain I tasks.
- Pillars + ECO tasks together = the cross-cutting reference for the exam.
Lesson 3: Why Domain I Runs Throughout the Project Life Cycle
Domain I is not a phase. It runs continuously, from Phase I through Phase VI:
- Phase I (Business Understanding): Risk assessment includes ethical risks. Project scope considers regulatory constraints.
- Phase II (Data Understanding): Privacy and compliance check (III.6). Bias measurement (informational bias).
- Phase III (Data Preparation): Bias mitigation in data transformation. Documented data lineage.
- Phase IV (Model Development): Algorithm transparency (I.2). Bias measurement during training (I.3).
- Phase V (Model Evaluation): Trustworthy AI gate criteria (IV.6). Auditability verified.
- Phase VI (Operationalization): Continuous monitoring of bias, drift, transparency. Audit trails maintained. Compliance enforced.
A common exam pattern: a question in another domain (e.g., Domain V) tests whether you recognize the Domain I component. Wrong answers ignore Domain I; right answers integrate it.
KEY TAKEAWAYS
- Domain I runs continuously — not a phase, a thread.
- Every other domain pulls Domain I principles into its decisions.
Lesson 4: How Domain I Touches Every Other Domain
Specific cross-domain pulls:
- Domain II (Identify Business Needs) ↔ I.3, I.4: Risk assessment includes ethical and regulatory risk.
- Domain III (Identify Data Needs) ↔ I.1, I.3, I.5: Privacy/compliance/access (III.6) is Domain I work executed in Phase II.
- Domain IV (Manage Model Dev/Eval) ↔ I.2, I.3, I.5: QA/QC (IV.2) overlaps transparency, bias checks, accountability documentation.
- Domain V (Operationalize) ↔ All Domain I tasks: Production is where Domain I is enforced. Governance (V.3), metrics (V.4), contingency (V.7), final report (V.5) all pull Domain I.
When a stem mentions trustworthy-AI vocabulary in any domain, the answer involves Domain I.
KEY TAKEAWAYS
- Domain I pulls into II (risk), III (privacy/compliance), IV (transparency/bias/audit), V (everything).
- Trustworthy-AI vocabulary in any domain = Domain I integration.
Module 2: Privacy and Security
Lessons 5-8 | The privacy and security backbone of trustworthy AI.
Lesson 5: ECO Task I.1 — Oversee Privacy and Security Plan
The PM oversees creation and ongoing maintenance of the project's privacy and security plan. Privacy = protecting personal/sensitive data from unauthorized exposure. Security = protecting AI systems (and the data they consume) from compromise.
The PM doesn't write the plan alone. The PM coordinates contributions from data privacy officers, information security teams, legal, compliance, data engineering, and ML engineering — and ensures the plan is documented, signed off, and operationalized through V.3 (governance).
KEY TAKEAWAYS
- Privacy = protect data subjects. Security = protect the AI system + its data.
- Plan is PM-coordinated, multi-stakeholder, signed off, operationalized.
💡 Memory Aid — PRIDE Privacy Plan
Personal data identified, Regulatory frameworks documented, Incident response defined, Data access controls in place, Encryption / anonymization planned. Five components of a privacy plan.
PM Oversight Angle
- PM owns: Coordinating creation of the privacy and security plan; ensuring sign-off from privacy/security/legal stakeholders; integrating plan into project decisions throughout phases.
- Deliverable: Privacy and Security Plan — typically a workbook section in CPMAI. Includes PII identification, regulatory framework mapping, access controls, encryption/anonymization, incident response, audit cadence.
- Iteration trigger: Privacy/security plan reveals incompatibility with required data → loop back to III.1 (re-define) or Phase I (re-scope).
- Escalation trigger: Privacy or security gap that requires legal counsel, regulatory disclosure, or budget beyond project authority.
- Wrong-answer trap: "Have the security team handle privacy/security separately." Privacy and security are PM-coordinated, project-integrated — not delegated and forgotten.
- Question pattern signal: Stems mentioning PII, sensitive data, GDPR, HIPAA, encryption, security incident, breach.
- ECO task tag: Domain I, Task 1 — Oversee privacy and security plan
Lesson 6: AI Privacy Considerations
Privacy concerns specific to AI:
- PII in training data — model may memorize and leak personal information.
- Re-identification risk — anonymized data can be combined with other sources to re-identify individuals.
- Inference disclosure — model predictions about individuals reveal information they didn't share.
- Prompt logging — GenAI systems may log user prompts containing sensitive information.
- Cross-border transfer — data crossing jurisdictions triggers regulations (GDPR especially).
The PM ensures all five are addressed in the privacy plan.
KEY TAKEAWAYS
- AI privacy ≠ traditional data privacy. Adds memorization, re-identification, inference disclosure, prompt logging, cross-border concerns.
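To make the re-identification concern concrete, here is a minimal Python sketch of keyed-hash pseudonymization. The salt handling and field names are illustrative assumptions, not a prescribed approach; note that masking direct identifiers alone does not remove re-identification risk.

```python
import hashlib
import hmac

# Hypothetical secret salt; in practice this lives in a key vault, not source code.
SECRET_SALT = b"replace-with-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym)."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "zip": "94105", "age": 42}
record["email"] = pseudonymize(record["email"])  # direct identifier masked

# The record still carries quasi-identifiers (zip, age): combined with
# outside data, these can re-identify the person, which is why the
# privacy plan assesses re-identification risk separately.
print(record)
```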
Lesson 7: Securing AI Systems
Security factors for AI:
- Model security — protect model weights/architecture from theft.
- Training data security — sensitive data shouldn't leak via training pipeline.
- Inference data security — runtime inputs may be sensitive.
- Adversarial input protection — detect and reject manipulated inputs.
- Access control — who can use the model, with what data.
- Audit logging — security events recorded for forensics.
These overlap V.3 (governance) and V.4 (monitoring) — Domain I principles, Domain V execution.
KEY TAKEAWAYS
- AI security = model + training data + inference data + adversarial protection + access + audit.
- Principles in Domain I; execution in Domain V.
Lesson 8: PII and Regulatory Privacy
Privacy regulations relevant to AI:
| Region/Sector | Regulation | Key Requirements |
|---|---|---|
| EU + many global | GDPR | Consent, right to erasure, data minimization, cross-border transfer, automated decision-making rights |
| US healthcare | HIPAA | PHI protection, BAAs (Business Associate Agreements), audit trails |
| US California | CCPA/CPRA | Consumer privacy rights, opt-out |
| US finance | GLBA | Customer financial info protection |
| Industry-specific | Many others | Sector-specific privacy and security |
The PM ensures the privacy plan maps required data to applicable regulations. Discovering a regulatory gap mid-project is expensive rework.
KEY TAKEAWAYS
- Major regulations: GDPR, HIPAA, CCPA/CPRA, GLBA, sector-specific.
- Map required data to applicable regulations early (Phase I/II), not at deployment.
Module 3: Transparency
Lessons 9-13 | Visibility into how AI works, why it decides, and who knows.
Lesson 9: ECO Task I.2 — Manage AI/ML Transparency
The PM manages the transparency program — ensuring data selection, algorithm selection, and decision-making processes are documented, accessible, and contestable as appropriate.
Two transparency dimensions:
- Systemic transparency — visibility into the model itself (data sources, preprocessing, architecture, training).
- Algorithmic transparency — visibility into specific decisions (why was this user denied?).
KEY TAKEAWAYS
- Transparency has two dimensions: systemic (how built) + algorithmic (why this decision).
- PM manages the program; data scientists + ML engineers execute.
PM Oversight Angle
- PM owns: Managing the transparency program — ensuring data and algorithm selection is documented, decisions are traceable, disclosure happens where required, contestability is supported.
- Deliverable: Transparency documentation — includes data sources, preprocessing, model architecture, training parameters, decision logging requirements.
- Iteration trigger: Transparency requirements not met by chosen technique (e.g., black-box deep learning for high-stakes decision) → loop back to IV.1 (technique) for re-selection or to Phase I to reconsider scope.
- Escalation trigger: Transparency requirement that conflicts with technical or business constraints (e.g., regulator demands explainability for a black-box model in production).
- Wrong-answer trap: "Add post-hoc explanations after deployment." Transparency is planned upstream, not retrofitted.
- Question pattern signal: Stems mentioning "explainable," "transparent," "traceable," "regulator inquired," "user wants to understand."
- ECO task tag: Domain I, Task 2 — Manage AI/ML transparency
Lesson 10: Systemic Transparency
Systemic transparency = visibility into all components and ingredients of an ML model:
- Data sources — what data was used.
- Preprocessing steps — how was data prepared.
- Model architecture — what kind of model.
- Training parameters — how was it trained.
- Test results — how was it evaluated.
This is achievable for almost any model. The PM ensures it's documented.
KEY TAKEAWAYS
- Systemic transparency = visibility into model construction.
- Achievable for most models with proper documentation.
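A lightweight way to capture systemic transparency is a model-card-style artifact versioned alongside the model. The sketch below is illustrative only; the field names and values are assumptions, not a PMI- or CPMAI-mandated schema.

```python
import json

# Illustrative field names only; adapt to the project's documentation standard.
model_card = {
    "data_sources": ["loan_applications_2023 (internal warehouse)"],
    "preprocessing": ["dropped rows with missing income", "one-hot encoded region"],
    "model_architecture": "gradient-boosted trees, 200 estimators",
    "training_parameters": {"learning_rate": 0.05, "max_depth": 4},
    "test_results": {"auc": 0.87, "eval_set": "holdout_2024Q1"},
}

# Stored with the model artifacts so auditors can trace how it was built.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```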
Lesson 11: Decision Transparency / Algorithmic Explainability
Decision transparency = visibility into specific predictions. Why did the model deny THIS loan? Flag THIS user as fraud?
For some algorithms (linear models, decision trees), this is inherent. For deep learning, it's often impossible without specialized Explainable AI (XAI) techniques.
XAI methods:
- Feature importance — which inputs most influenced the prediction.
- LIME / SHAP — approximate the model locally to explain individual predictions.
- Counterfactual explanations — what input would have changed the outcome.
KEY TAKEAWAYS
- Decision transparency is harder than systemic transparency.
- XAI methods (feature importance, LIME, SHAP, counterfactuals) bridge the gap for black-box models.
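As a concrete taste of the simplest method above, here is a feature-importance sketch using scikit-learn's permutation importance, a model-agnostic technique (LIME and SHAP are separate libraries with their own APIs). The toy data and model are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data standing in for a real training set.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's score degrades; a model-agnostic importance signal.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```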
Lesson 12: Disclosure and Consent
Two PMI Trustworthy AI Framework concepts:
- Disclosure — users should be told when they're interacting with AI, not a human. Bot-to-human handoff should be clearly communicated.
- Consent — users should be able to opt out of AI interactions. Feasible alternatives must exist for those who opt out.
These are project-design considerations, not afterthoughts. Build into Phase I scope and verify in Phase VI deployment.
KEY TAKEAWAYS
- Disclosure = users know they're interacting with AI.
- Consent = users can opt out, with feasible alternatives.
- Designed in Phase I; verified in Phase VI.
Lesson 13: Open Systems
Open systems = AI components (models, training data, evaluation methods) that are openly available for inspection, modification, or audit. Open-source models, public training datasets, published evaluation benchmarks.
Trade-off: open systems support transparency and auditability but may conflict with competitive advantage or security (open systems are easier to attack adversarially).
The PM frames the trade-off for stakeholder decision; doesn't choose unilaterally.
KEY TAKEAWAYS
- Open systems = inspectable, modifiable, auditable AI components.
- Trade-off: transparency vs competitive/security risk. Stakeholder-decided.
Module 4: Bias
Lessons 14-18 | Bias in AI — where it comes from, how to measure it, how to mitigate it.
Lesson 14: ECO Task I.3 — Conduct Bias Checks
The PM oversees bias checks on model, data, and algorithm. Three categories of bias check:
- Model bias — does the trained model produce systematically different outcomes for different groups?
- Data bias — does the training data over- or under-represent certain groups?
- Algorithmic bias — does the algorithm itself encode assumptions that produce biased outputs?
The PM doesn't run the bias tests — the data scientist + ethics team do. The PM ensures tests happen, results are documented, mitigation plans exist where bias is found, and the program continues into production (V.4 monitoring).
KEY TAKEAWAYS
- Three bias check categories: model, data, algorithm.
- PM oversees the regime; team executes.
PM Oversight Angle
- PM owns: Overseeing the bias check program — ensuring tests happen, results are documented, mitigation plans exist, monitoring continues post-deployment.
- Deliverable: Bias Check Plan + ongoing test results + mitigation actions.
- Iteration trigger: Bias detected exceeds tolerance → loop back to address (more data, different technique, fairness post-processing). May trigger Phase I rescope.
- Escalation trigger: Bias incident requiring legal counsel, regulatory disclosure, or public response.
- Wrong-answer trap: "Have the data scientist add a fairness post-processing layer and proceed." Post-processing is one mitigation option, not a substitute for documented stakeholder decision and ongoing monitoring.
- Question pattern signal: Stems mentioning "bias," "fairness," "demographic," "underrepresented," "discrimination," "different outcomes for different groups."
- ECO task tag: Domain I, Task 3 — Conduct bias checks
Lesson 15: Three Types of Bias in AI (NVI)
(Same NVI mnemonic from Domain III — repeated here because Domain I tests it directly.)
- Neural-network bias — adjustment factor in neural networks. Has nothing to do with fairness.
- Variance bias — bias-vs-variance trade-off in model fitting (under/overfit).
- Informational bias — overrepresentation or underrepresentation of categories in data, with fairness implications. The exam-relevant one.
Confusing the three is a top exam trap.
💡 Memory Aid — NVI
Neural-net bias (math), Variance bias (fitting), Informational bias (fairness — the exam one).
KEY TAKEAWAYS
- Three "biases" — N, V, I.
- I = informational = fairness = the one the exam tests.
Lesson 16: Bias Measurement and Mitigation
Measurement — quantify bias across user segments. Common metrics: demographic parity, equal opportunity, predictive parity. Each captures a different aspect of fairness (a demographic-parity sketch follows this lesson's takeaways).
Mitigation strategies:
- Pre-processing — re-balance training data, remove protected attributes, augment underrepresented classes.
- In-processing — modify the training procedure (fairness constraints in the loss function).
- Post-processing — adjust model outputs for fairness (calibrate per-group, use different thresholds).
The PM coordinates which strategy fits the project's risk tolerance, technical constraints, and regulatory requirements.
KEY TAKEAWAYS
- Measurement → quantify across segments. Multiple fairness metrics.
- Mitigation → pre-processing, in-processing, or post-processing. Choice is stakeholder-decided.
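A minimal sketch of one measurement, demographic parity, assuming hypothetical prediction and group arrays. A real project would compute this per protected attribute and compare the gap against a documented tolerance.

```python
import numpy as np

# Hypothetical arrays: model predictions (1 = positive outcome, e.g. approved)
# and a protected-attribute group label for the same individuals.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

# Demographic parity compares positive-outcome rates across groups.
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
disparity = max(rates.values()) - min(rates.values())

print(rates)      # e.g. {'A': 0.6, 'B': 0.4}
print(disparity)  # the gap to compare against the project's tolerance
```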
Lesson 17: Diversity and Inclusion in AI Teams
Bias in AI often traces back to bias in the team building it. Homogeneous teams miss bias in their own outputs because they don't have diverse perspectives to spot it.
PMI's Trustworthy AI Framework includes Diversity & Inclusion as a societal pillar — diverse teams produce more trustworthy AI. The PM advocates for diverse representation in AI project teams as a project-level risk-reduction measure.
KEY TAKEAWAYS
- Diverse teams = more trustworthy AI (catch more bias).
- Team diversity is a project-level risk-reduction measure, not just an HR concern.
Lesson 18: Fairness, Discrimination, and Real-World Impact
Bias in AI has real consequences:
- Healthcare — biased models miss diagnoses for underrepresented groups.
- Finance — biased models deny credit unfairly.
- Hiring — biased models filter out qualified candidates.
- Criminal justice — biased models recommend disparate sentences.
- Insurance — biased models inflate premiums for some groups.
Wrong-answer trap on the exam: treating bias as theoretical or low-priority. Right-answer pattern: treating bias as concrete project risk with documented mitigation.
KEY TAKEAWAYS
- Bias has concrete real-world impact in healthcare, finance, hiring, criminal justice, insurance.
- Treat as concrete project risk, not theoretical concern.
Module 5: Regulatory Compliance
Lessons 19-23 | The regulatory landscape and the PM's role in compliance oversight.
Lesson 19: ECO Task I.4 — Monitor Regulatory and Policy Compliance
The PM monitors regulatory and policy compliance throughout the project — from Phase I scope (does the use case comply?) through Phase VI operations (is production maintaining compliance?). Regulatory requirements come from:
- Government regulations — GDPR, HIPAA, CCPA, EU AI Act, etc.
- Industry standards — ISO/IEC 42001, NIST AI RMF, sector-specific.
- Organizational policy — internal AI governance, risk management.
- Contractual obligations — customer commitments, vendor requirements.
KEY TAKEAWAYS
- Compliance sources: government, industry standards, organizational policy, contracts.
- PM monitors continuously, not just at gates.
PM Oversight Angle
- PM owns: Monitoring regulatory and policy compliance throughout project; coordinating with legal, compliance, and regulatory affairs as needed; ensuring compliance status is documented and reviewed.
- Deliverable: Compliance Map + ongoing compliance status; documented reviews per regulatory framework.
- Iteration trigger: Compliance gap detected → loop back to address; may require Phase I scope change.
- Escalation trigger: Regulatory gap requiring legal counsel, executive decision, or regulatory disclosure.
- Wrong-answer trap: "Have legal review the project at deployment." Compliance is continuous, not deployment-stage. Late review = expensive rework.
- Question pattern signal: Stems mentioning "regulator," "compliance," "GDPR/HIPAA/etc.," "policy," "law," "regulation."
- ECO task tag: Domain I, Task 4 — Monitor regulatory and policy compliance
Lesson 20: AI Regulatory Landscape
The AI regulatory landscape is rapidly evolving. Key frameworks:
- EU AI Act (effective 2024+, phased) — risk-based classification (unacceptable/high/limited/minimal risk), transparency, conformity assessment.
- NIST AI Risk Management Framework (AI RMF) — voluntary in US, increasingly referenced in contracts.
- ISO/IEC 42001 — international AI management system standard.
- Sector-specific — financial services, healthcare, employment, education.
For the exam: don't memorize details, but recognize when a stem mentions any of these and that they're Domain I (compliance) territory.
KEY TAKEAWAYS
- Major frameworks: EU AI Act, NIST AI RMF, ISO/IEC 42001, sector-specific.
- Recognition matters more than memorization for the exam.
Lesson 21: Compliance Across Jurisdictions
AI projects often span jurisdictions — data crosses borders, models serve global users, vendors operate internationally. Each jurisdiction has its own compliance regime.
The PM coordinates a compliance map showing required data flows, model serving locations, user populations, and applicable regulations. Where regulations conflict (rare but real), legal counsel is engaged.
KEY TAKEAWAYS
- Multi-jurisdiction projects = multiple compliance regimes.
- PM-coordinated compliance map; legal handles conflicts.
Lesson 22: AI Governance Principles
PMI's Trustworthy AI Framework lists AI Governance Principles as a category:
- Risk Assessment & Mitigation — formal risk processes.
- System Auditability — ability to audit AI systems.
- Contestability — users can contest AI decisions.
- System Controls — organizational controls over AI.
- System Monitoring & Quality Verification — continuous monitoring.
- Involvement of Regulatory Bodies & Third-Party Certifications — engaging regulators and external auditors.
- Educated Workforce — workforce trained in AI governance.
These principles inform Domain V's V.3 (model governance) execution.
KEY TAKEAWAYS
- 7 governance principles: risk, auditability, contestability, controls, monitoring, regulators/certifications, educated workforce.
- Inform V.3 governance execution.
Lesson 23: Educated Workforce as Compliance Foundation
A workforce that doesn't understand AI governance can't comply with it — regardless of policies on paper. The PM advocates for and tracks workforce education as part of the compliance program.
This includes:
- AI ethics training for builders.
- Compliance training per applicable regulations.
- Domain-specific training (e.g., HIPAA for healthcare AI teams).
- Ongoing education as the regulatory landscape evolves.
KEY TAKEAWAYS
- Educated workforce = compliance foundation, not extra.
- PM advocates and tracks training as compliance work.
Module 6: Accountability and Audit
Lessons 24-28 | Who's accountable for AI decisions, and how it's documented.
Lesson 24: ECO Task I.5 — Manage Accountability Documentation and Audit Trail
The PM manages accountability documentation and audit trails — ensuring someone is accountable for every AI decision and the trail to that accountability is documented and traceable.
Accountability includes:
- Human accountability — for any consequential AI decision, a human is named responsible.
- Organizational accountability — the organization owns the AI system's outcomes.
- Documentation — decisions, model versions, data lineage, override events all documented.
- Audit trail — chronological record of who did what, when, with what input.
KEY TAKEAWAYS
- Accountability = named human + organizational ownership + documentation + audit trail.
- PM manages the program; teams contribute documentation.
PM Oversight Angle
- PM owns: Managing the accountability program — ensuring named human accountability for AI decisions, documentation discipline, audit trail completeness; integrating accountability into governance (V.3) and metrics (V.4).
- Deliverable: Accountability Documentation + audit trail infrastructure + reviewed cadence.
- Iteration trigger: Accountability gap detected (no named owner, missing audit trail, undocumented decisions) → halt downstream work, address.
- Escalation trigger: Accountability incident requiring legal, regulatory, or executive engagement.
- Wrong-answer trap: "Have the data scientist log model outputs and call it an audit trail." Audit trails are comprehensive — input, model version, prediction, timestamp, rationale, overrides — not just outputs.
- Question pattern signal: Stems mentioning "regulator inquired about a decision," "user is contesting an AI outcome," "audit," "accountability," "who is responsible."
- ECO task tag: Domain I, Task 5 — Manage accountability documentation and audit trail
Lesson 25: Human Accountability for AI Decisions
PMI's Trustworthy AI Framework: a human must be accountable for AI decisions. This doesn't mean a human reviews every prediction — it means that for any consequential decision, a human is named as accountable for the outcome.
Practical implementation:
- High-stakes decisions (loan approval, medical diagnosis, hiring): human-in-the-loop, named approver.
- Medium-stakes decisions (product recommendations): human-on-the-loop, sample-based review.
- Low-stakes decisions (routine classifications): automated, but an accountability framework still sits above the system (a routing sketch follows this lesson's takeaways).
KEY TAKEAWAYS
- Human accountability = named responsibility, not human-reviews-everything.
- Tier the implementation by decision stakes.
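One way to encode the tiering above in a serving pipeline is a routing rule keyed on decision stakes. This is a hypothetical sketch; the queue names and function are illustrative, not a CPMAI artifact.

```python
from enum import Enum

class Stakes(Enum):
    HIGH = "high"      # loan approval, medical diagnosis, hiring
    MEDIUM = "medium"  # product recommendations
    LOW = "low"        # routine classifications

def route_decision(stakes: Stakes) -> str:
    """Route a prediction per the accountability tier (hypothetical names)."""
    if stakes is Stakes.HIGH:
        return "queue_for_named_approver"       # human-in-the-loop
    if stakes is Stakes.MEDIUM:
        return "auto_apply_with_sample_review"  # human-on-the-loop
    return "auto_apply_under_named_owner"       # automated, still owned

print(route_decision(Stakes.HIGH))  # -> queue_for_named_approver
```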
Lesson 26: System Auditability
System auditability = the AI system can be audited. Requires:
- Comprehensive logging — inputs, outputs, model version, timestamps.
- Reproducibility — the prediction can be re-derived from inputs and model artifacts.
- Access controls — only authorized auditors can review.
- Retention — logs retained per regulatory requirements (often years).
Auditability is a Domain I principle, executed in Domain V (V.3 governance, V.4 metrics).
KEY TAKEAWAYS
- Auditability = logging + reproducibility + access controls + retention.
- Domain I principle, Domain V execution.
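A minimal sketch of an audit-trail record carrying the components named above (input, model version, prediction, timestamp, rationale, overrides), written to an append-only JSON-lines log. Field names and the hashing choice are assumptions for illustration.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditRecord:
    input_hash: str          # hash of the input payload (keeps raw PII out of logs)
    model_version: str
    prediction: str
    timestamp: str
    rationale: str
    override: Optional[str]  # human override decision, if any

def log_prediction(payload: dict, model_version: str, prediction: str,
                   rationale: str, override: Optional[str] = None) -> None:
    record = AuditRecord(
        input_hash=hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest(),
        model_version=model_version,
        prediction=prediction,
        timestamp=datetime.now(timezone.utc).isoformat(),
        rationale=rationale,
        override=override,
    )
    with open("audit_trail.jsonl", "a") as f:  # append-only, retained per policy
        f.write(json.dumps(asdict(record)) + "\n")

log_prediction({"applicant_id": "a-123", "income": 50000},
               model_version="loan-model-2.3.1",
               prediction="deny",
               rationale="debt-to-income above threshold")
```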
Lesson 27: Contestability — User Right to Challenge AI Decisions
Contestability = users affected by AI decisions can challenge them. PMI's Trustworthy AI Framework lists this as a Governance Principle.
Practical implementation:
- Disclosure — users know AI made the decision (Lesson 12).
- Explanation — users get an understandable explanation (XAI from Lesson 11).
- Mechanism to challenge — formal process to dispute the outcome.
- Human review path — escalation to a human decision-maker.
- Outcome remediation — if the AI was wrong, corrective action.
EU AI Act and GDPR Article 22 (automated decision-making) make contestability a regulatory requirement, not just a best practice.
KEY TAKEAWAYS
- Contestability = users can challenge AI decisions.
- Regulatory requirement under EU AI Act and GDPR Article 22.
Lesson 28: Documentation and Trace Requirements
What documentation is required across an AI project's life cycle:
| Phase | Required Documentation |
|---|---|
| I (Business Understanding) | Risk assessment, scope, success criteria |
| II (Data Understanding) | Data sources, requirements spec, privacy/compliance mapping, gate decision |
| III (Data Preparation) | Pipelines, transformations, gate decision (IV.5) |
| IV (Model Development) | Technique justification, training records, evaluation results, gate decision (IV.6) |
| V (Model Evaluation) | Performance metrics, bias measurements, robustness, comparison vs baseline |
| VI (Operationalization) | Deployment plan, governance plan, monitoring plan, contingency plan, audit logs, lessons learned |
Comprehensive documentation = trustworthy AI. Gaps in documentation = governance and accountability gaps.
KEY TAKEAWAYS
- Documentation spans all 6 phases, with specific artifacts per phase.
- Gaps in documentation = governance/accountability gaps = exam wrong-answer territory.
Quick Reference: The 5 Pillars (SRTGI) Cross-Reference
| Pillar | Domain I Task | Cross-Domain Pulls |
|---|---|---|
| Societal/Ethical | I.3 (bias checks) | III.6 (privacy, but ethical sourcing too), IV.2 (QA/QC bias), V.4 (production monitoring) |
| Responsible | I.1 (privacy/security), I.5 (accountability) | III.6 (privacy/compliance/access), V.3 (governance), V.7 (contingency) |
| Transparent | I.2 (transparency) | IV.1 (technique selection — explainability), IV.6 (gate — transparency), V.3 (governance) |
| Governed | I.4 (compliance), I.5 (audit) | IV.6 (gate — compliance), V.3 (governance), V.4 (metrics — audit) |
| Interpretable / Explainable | I.2 (transparency) | IV.1 (technique selection), IV.6 (gate criteria) |
Cross-Domain Links
- I.1 (Privacy/Security) ↔ III.6: Domain III's privacy/compliance/access check IS the Phase II execution of I.1. Same principles, executed in Phase II context.
- I.2 (Transparency) ↔ IV.1, IV.6: Algorithm transparency requirements constrain technique selection (IV.1) and are evaluated at the operationalization gate (IV.6).
- I.3 (Bias Checks) ↔ III.7, IV.2, V.4: Bias is checked in data evaluation (III.7), model QA/QC (IV.2), and production monitoring (V.4).
- I.4 (Regulatory Compliance) ↔ Phase I + II + V: Compliance constrains Phase I scope, Phase II data sourcing, Phase V production.
- I.5 (Accountability/Audit) ↔ V.3, V.5: Audit trails enforce in V.3 (governance) and contribute to V.5 (lessons learned). Domain II's ROI/business case (II.5/II.9) also pulls accountability framing.
Knowledge Check
Question 1
A regulator inquires about how an AI-driven decision was made for a specific customer. The team is asked to provide documentation. What's the BEST response?
A. Have the data scientist explain the model architecture
B. Provide the audit trail from accountability documentation: input, model version, prediction, timestamp, decision rationale, and any human-in-the-loop overrides — sourced from the documented program (ECO Task I.5)
C. Decline since model decisions are confidential
D. Re-run the prediction and provide the new result
Click for answer and rationale
Correct: B
ECO Task I.5 (accountability documentation and audit trail). Domain I principle, Domain V execution. The audit trail is the prepared answer.
- A wrong: Architecture isn't decision-specific accountability.
- C wrong: Regulators have inquiry rights; declining is rarely correct.
- D wrong: Re-running doesn't reproduce the original decision context.
Question 2
A bias measurement during model QA/QC reveals demographic disparity in predictions. What's the PM's BEST response?
A. Have the data scientist add a fairness post-processing layer and proceed
B. Treat as ECO I.3 (bias checks) + IV.2 (QA/QC) issue: document the finding, evaluate mitigation options, engage stakeholders for remediation decision, ensure mitigation continues into production monitoring (V.4)
C. Deploy the model and monitor bias in production
D. Restart from scratch with new data
Click for answer and rationale
Correct: B
I.3 + IV.2 + V.4 cross-domain pull. Bias requires documented stakeholder remediation, not a unilateral technical fix.
- A wrong: Wrong-answer trap — post-processing without governance.
- C wrong: Production isn't the bias-resolution venue.
- D wrong: Without root-cause, restart may repeat the issue.
Question 3
True or False: Domain I work happens primarily during Phase I (Business Understanding) and Phase VI (Operationalization).
Click for answer and rationale
Correct: FALSE
Domain I runs continuously throughout all six phases. It's not a phase — it's a thread. The privacy plan exists from Phase I and lives through Phase VI. Bias checks happen in Phases II, III, IV, and V. Compliance monitoring is continuous.
Question 4
The team is using a deep learning model for a high-stakes regulated decision (loan approval). The regulator requires explainability. What's the PM's BEST response?
A. Approve the deep learning model since it's more accurate
B. Document the technique selection and ensure trade-off between performance and explainability is presented to stakeholders; consider interpretable-by-design alternatives; engage compliance/legal early
C. Add post-hoc XAI explanations after deployment
D. Reject deep learning and require an interpretable model
Click for answer and rationale
Correct: B
I.2 (transparency) + IV.1 (technique). High-stakes regulated decisions = stakeholder + compliance/legal coordination. The PM facilitates, doesn't unilaterally approve or reject.
- A wrong: Skips Domain I transparency consideration.
- C wrong: Wrong-answer trap — post-hoc XAI may not satisfy regulator's "explainability" definition.
- D wrong: Unilateral PM rejection isn't stakeholder-engaged either.
Question 5
A team discovers that the chosen training data set contains PII that wasn't anonymized. The data scientist proposes anonymizing it during preprocessing. What's the PM's BEST response?
A. Approve the anonymization plan
B. Treat as ECO I.1 (privacy plan) + III.6 (data privacy check): coordinate with privacy/legal, validate that anonymization meets regulatory bar (re-identification risk assessed), document the decision, and update the privacy plan accordingly before processing continues
C. Have the data scientist proceed with anonymization in parallel with documentation
D. Switch to a different data set
Click for answer and rationale
Correct: B
I.1 + III.6 cross-domain. Privacy decisions require legal coordination, regulatory verification, documentation. Anonymization is a valid technique, but the plan needs governance.
- A wrong: Approves without governance check (re-identification risk, regulatory adequacy).
- C wrong: Wrong-answer trap — parallel work bypasses governance check.
- D wrong: Replacement may be needed but only after governance review.
Question 6
The deployment plan for an AI loan-approval system is being finalized. A stakeholder asks what happens if a customer disputes an AI decision. What's the PM's BEST response?
A. The dispute goes to customer service
B. Reference the contestability mechanism in the Trustworthy AI plan: disclosure, XAI explanation, formal challenge process, human review path, outcome remediation — all documented as part of I.2 (transparency) + I.5 (accountability) + V.3 (governance) plans
C. The data scientist will retrain the model based on disputes
D. There's no dispute mechanism since the model is well-tested
Click for answer and rationale
Correct: B
Contestability is a Trustworthy AI Framework principle. Domain I (I.2 + I.5) defines; Domain V (V.3) executes.
- A wrong: Customer service alone isn't a contestability mechanism.
- C wrong: Wrong-answer trap — retraining isn't dispute resolution.
- D wrong: Contestability is required regardless of model quality (regulatory).
Question 7
True or False: The PM's responsibility for Domain I tasks ends at deployment.
Click for answer and rationale
Correct: FALSE
Domain I runs continuously. Privacy plans evolve. Bias monitoring continues in production (V.4). Compliance is ongoing (the regulatory landscape changes). Audit trails accumulate. Accountability documentation must stay current.
Question 8
A team is evaluating a third-party foundation model for fine-tuning. What Domain I considerations does the PM raise?
A. Whether the model performs well on benchmarks
B. Provenance (where was it trained, on what data, with what license), bias measurement (does the vendor publish bias evaluations?), audit support (can decisions be traced?), regulatory fit (does using this model satisfy the project's regulatory framework?)
C. Cost of using the model
D. Vendor reputation
Click for answer and rationale
Correct: B
Domain I principles apply to third-party models too. Provenance, bias, audit, and regulatory fit are all PM concerns when adopting external AI components.
- A wrong: Performance is one concern, not the Domain I concern.
- C wrong: Cost is a project concern but not Domain I.
- D wrong: Reputation is a sourcing factor, not the structured Domain I concern.
Question 9
The marketing team wants to deploy a GenAI customer service chatbot that doesn't disclose to users that they're talking to an AI. What's the PM's BEST response?
A. Approve since users may prefer the human-feeling experience
B. Block deployment: disclosure is a Trustworthy AI Framework requirement (I.2 transparency); users should be told they're interacting with AI; surface to stakeholders and compliance for resolution
C. Deploy and add disclosure later
D. Have the chatbot occasionally identify as AI
Click for answer and rationale
Correct: B
Disclosure is a non-negotiable Trustworthy AI principle (Lesson 12). Bot-to-human handoff should be clearly communicated.
- A wrong: Wrong-answer trap — user "preference" doesn't override disclosure principle.
- C wrong: Disclosure is upfront, not retrofitted.
- D wrong: Occasional disclosure is partial; principle requires consistent disclosure.
Question 10
A stem mentions "the AI system is processing customer health information." What Domain I cues does the PM recognize?
A. Performance optimization is critical
B. Privacy (I.1 — HIPAA implications), bias checks (I.3 — health data has known bias risks), regulatory compliance (I.4 — HIPAA, possibly state laws), accountability (I.5 — health decisions need named human accountability)
C. The model needs to be deep learning
D. The cloud provider should be HIPAA-certified
Click for answer and rationale
Correct: B
Health data (PHI) triggers four Domain I tasks at once. PMI tests recognition of cross-domain Domain I cues in stems.
- A wrong: Doesn't address Domain I.
- C wrong: Architecture isn't Domain I.
- D wrong: Single point that's part of B's broader picture.
Memory Aids & Mnemonics Summary
| Mnemonic | What to Remember |
|---|---|
| SRTGI (5 Pillars) | Societal/Ethical, Responsible, Transparent, Governed, Interpretable. "Some Really Trustworthy Governance Inspires." |
| PRIDE (Privacy Plan) | Personal data identified, Regulatory frameworks, Incident response, Data access controls, Encryption/anonymization |
| NVI (3 Biases) | Neural-net (math), Variance (fitting), Informational (fairness — exam one) |
| Disclosure & Consent | Users know they're interacting with AI + can opt out with feasible alternatives |
| Systemic vs Decision Transparency | Systemic = how built. Decision = why this prediction. |
| 3 Mitigation Strategies | Pre-processing (data), in-processing (training), post-processing (outputs) |
| Audit Trail Components | Input + model version + prediction + timestamp + rationale + overrides |
| 5 Domain I Tasks | I.1 Privacy/security, I.2 Transparency, I.3 Bias, I.4 Compliance, I.5 Accountability |
| Contestability Components | Disclosure + explanation + challenge mechanism + human review path + outcome remediation |
| Major Regulations | GDPR, HIPAA, CCPA, EU AI Act, NIST AI RMF, ISO/IEC 42001 |
Closing reminders for Domain I
- Domain I runs continuously — not a phase, a thread. Every other domain pulls Domain I in.
- Watch for Domain I cues in non-Domain-I stems. PII, GDPR, HIPAA, bias, audit, transparency, explainability, accountability, regulatory inquiry — all signal Domain I integration is being tested even when the question is about Domain III/IV/V.
- The PM doesn't run privacy tests, bias tests, or audit tools. The PM ensures these activities happen, are documented, and integrate with decisions across all domains.
- Disclosure and consent are non-negotiable. Wrong-answer traps include "users prefer not knowing" or "disclosure can be added later." Right answer: disclosure upfront, consent with feasible alternatives.
- Bias mitigation is stakeholder-decided, not data-scientist-decided. Three mitigation strategies (pre/in/post-processing) — choice is PM-coordinated.
- Contestability is a regulatory requirement under EU AI Act and GDPR Article 22, not a best practice.
- Five pillars (SRTGI) cross-reference to all five Domain I tasks. Memorize the mapping.