Domain I: Support Responsible and Trustworthy AI Efforts — Comprehensive Study Guide

Exam weight: 15% of PMI-CPMAI exam (~18 scored questions)
Score-report framing: ✅ Target, but the heaviest cross-domain pull
Maps to CPMAI methodology: runs throughout all phases (not a single phase)
Number of ECO tasks: 5 (I.1 through I.5)
Estimated study time: 7 hours
Per the locked decision in CLAUDE.md: Domain I gets a full study guide, not a refresh, even though it scored Target on the first attempt. Domain I is the most cross-pulled content on the exam: wrong-answer traps in Domains III, IV, and V often hinge on Domain I principles. Mastery here protects weak-domain scores.

Overview

Domain I is unique. It doesn't map to a single CPMAI phase — it runs throughout the project lifecycle, from Phase I (Business Understanding) all the way through Phase VI (Operationalization). Every other domain pulls Domain I principles into its decisions. When a Domain III question mentions PII, when a Domain IV question mentions bias, when a Domain V question mentions audit trails — Domain I is being tested.

The unifying pattern: Domain I asks whether the project respects human values, regulatory boundaries, and accountability structures throughout its existence, not at a single checkpoint but continuously. Every task begins with an oversight verb: oversee (I.1), manage (I.2), conduct (I.3), monitor (I.4), manage (I.5). The PM doesn't write privacy plans, run bias tests, or maintain audit trails; the PM ensures these activities are happening, are documented, and are integrated into decisions across all other domains.

Domain I scored Target here on the first attempt. The risk for the retake isn't Domain I questions in isolation; it's missing Domain I cues inside Domain III/IV/V questions. Stems mentioning PII, GDPR, HIPAA, bias, audit, transparency, explainability, accountability, or trustworthy-AI principles are testing whether you respect Domain I when answering questions in other domains.

Table of Contents


Module 1: The Trustworthy AI Framework Foundation

Lessons 1-4 | The framework, why it exists, and how it runs through every project phase.

Lesson 1: What Is a Trustworthy AI Framework?

A Trustworthy AI Framework is a documented approach guiding two communities: those building AI systems (need guardrails, boundaries, processes) and those using AI systems (need reliable, transparent, contestable interactions). PMI's Trustworthy AI Framework consolidates principles across 60+ existing frameworks into a comprehensive structure aligned with PMI-CPMAI.

The framework isn't optional for the exam — it's the spine of Domain I and the cross-cutting reference for every other domain.

KEY TAKEAWAYS


Lesson 2: The 5 Pillars of Trustworthy AI (SRTGI)

The 5 pillars memorized in Phase I — same framework, deeper here:

| Pillar | Focus | ECO Task Connection |
| --- | --- | --- |
| Societal & Ethical | Human values, fairness, bias | I.3 (bias checks) |
| Responsible | Accountability, privacy, safety, security | I.1 (privacy/security), I.5 (accountability) |
| Transparent | Visibility into AI design, data, decisions | I.2 (transparency) |
| Governed | Oversight, compliance, audit, regulation | I.4 (compliance), I.5 (audit) |
| Interpretable / Explainable | Why the AI made this specific decision | I.2 (transparency) |

💡 Memory Aid — SRTGI

Societal, Responsible, Transparent, Governed, Interpretable. "Some Really Trustworthy Governance Inspires." (Covered in Phase I.)

KEY TAKEAWAYS


Lesson 3: Why Domain I Runs Throughout the Project Life Cycle

Domain I is not a phase. It runs continuously, from Phase I through Phase VI:

A common exam pattern: a question in another domain (e.g., Domain V) tests whether you recognize the Domain I component. Wrong answers ignore Domain I; right answers integrate it.

KEY TAKEAWAYS


Lesson 4: How Domain I Touches Every Other Domain

Specific cross-domain pulls:

When a stem mentions trustworthy-AI vocabulary in any domain, the answer involves Domain I.

KEY TAKEAWAYS


Module 2: Privacy and Security

Lessons 5-8 | The privacy and security backbone of trustworthy AI.

Lesson 5: ECO Task I.1 — Oversee Privacy and Security Plan

The PM oversees creation and ongoing maintenance of the project's privacy and security plan. Privacy = protecting personal/sensitive data from unauthorized exposure. Security = protecting AI systems (and the data they consume) from compromise.

The PM doesn't write the plan alone. The PM coordinates contributions from data privacy officers, information security teams, legal, compliance, data engineering, and ML engineering — and ensures the plan is documented, signed off, and operationalized through V.3 (governance).

KEY TAKEAWAYS

💡 Memory Aid — PRIDE Privacy Plan

Personal data identified, Regulatory frameworks documented, Incident response defined, Data access controls in place, Encryption / anonymization planned. Five components of a privacy plan.
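
To make the five components concrete, here is a hypothetical privacy-plan skeleton keyed to PRIDE. The structure, field names, and values are illustrative, not a PMI-defined schema:

```python
# Hypothetical privacy-plan skeleton following the PRIDE mnemonic.
# Keys and values are illustrative, not a PMI-defined schema.
privacy_plan = {
    "personal_data_identified": ["name", "email", "ssn_last4"],        # P: PII inventory
    "regulatory_frameworks": ["GDPR", "CCPA"],                         # R: what applies and why
    "incident_response": {"owner": "security team", "sla_hours": 24},  # I: who responds, how fast
    "data_access_controls": {"role_based": True, "least_privilege": True},  # D: who may touch data
    "encryption_anonymization": {"at_rest": "AES-256", "pii_strategy": "pseudonymization"},  # E
}
```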

PM Oversight Angle


Lesson 6: AI Privacy Considerations

Privacy concerns specific to AI:

The PM ensures all five are addressed in the privacy plan.

KEY TAKEAWAYS


Lesson 7: Securing AI Systems

Security factors for AI:

These overlap V.3 (governance) and V.4 (monitoring) — Domain I principles, Domain V execution.

KEY TAKEAWAYS


Lesson 8: PII and Regulatory Privacy

Privacy regulations relevant to AI:

| Region/Sector | Regulation | Key Requirements |
| --- | --- | --- |
| EU + many global | GDPR | Consent, right to erasure, data minimization, cross-border transfer, automated decision-making rights |
| US healthcare | HIPAA | PHI protection, BAAs (Business Associate Agreements), audit trails |
| US (California) | CCPA/CPRA | Consumer privacy rights, opt-out |
| US finance | GLBA | Customer financial info protection |
| Industry-specific | Many others | Sector-specific privacy and security |

The PM ensures the privacy plan maps required data to applicable regulations. Discovering a regulatory gap mid-project is expensive rework.

KEY TAKEAWAYS


Module 3: Transparency

Lessons 9-13 | Visibility into how AI works, why it decides, and who knows.

Lesson 9: ECO Task I.2 — Manage AI/ML Transparency

The PM manages the transparency program — ensuring data selection, algorithm selection, and decision-making processes are documented, accessible, and contestable as appropriate.

Two transparency dimensions:

KEY TAKEAWAYS

PM Oversight Angle


Lesson 10: Systemic Transparency

Systemic transparency = visibility into all components and ingredients of an ML model:

This is achievable for almost any model. The PM ensures it's documented.
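
One widely used industry format for exactly this record is the model card (an industry convention, not something the PMI framework names). A minimal sketch with hypothetical values:

```python
# Minimal model-card-style record for systemic transparency.
# All values are hypothetical; the point is that every "ingredient"
# of the model is written down and inspectable.
model_card = {
    "model": "loan-approval-classifier",
    "version": "2.3.1",
    "algorithm": "gradient-boosted trees",
    "training_data": {"source": "internal loan history 2019-2023", "rows": 1_200_000},
    "features": ["income", "debt_to_income", "credit_history_months"],
    "evaluation": {"auc": 0.87, "test_set": "2024 holdout"},
    "known_limitations": ["sparse data for applicants under 21"],
    "owner": "credit-risk ML team",
}
```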

KEY TAKEAWAYS


Lesson 11: Decision Transparency / Algorithmic Explainability

Decision transparency = visibility into specific predictions. Why did the model deny THIS loan? Why did it flag THIS user as fraud?

For some algorithms (linear models, decision trees), this is inherent. For deep learning, it's often impossible without specialized Explainable AI (XAI) techniques.
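
To see what "inherent" looks like, here is a minimal scikit-learn sketch (the toy data and feature names are invented): in a linear model, each feature's contribution to one specific prediction is just its coefficient times the feature's value, so decision transparency falls out of the model itself.

```python
# Inherent decision transparency in a linear model: each feature's
# contribution to a specific prediction is coefficient * value.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: income (in $k) and debt-to-income ratio; label 1 = approve.
X = np.array([[40, 0.45], [90, 0.20], [55, 0.60], [120, 0.15]])
y = np.array([0, 1, 0, 1])
model = LogisticRegression().fit(X, y)

applicant = np.array([60, 0.50])
contributions = model.coef_[0] * applicant  # per-feature pull toward approve/deny
for name, c in zip(["income_k", "debt_to_income"], contributions):
    print(f"{name}: {c:+.3f}")
```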

XAI methods:

KEY TAKEAWAYS


Lesson 12: Disclosure and Consent

Two PMI Trustworthy AI Framework concepts:

These are project-design considerations, not afterthoughts. Build them into the Phase I scope and verify them at Phase VI deployment.

KEY TAKEAWAYS


Lesson 13: Open Systems

Open systems = AI components (models, training data, evaluation methods) that are openly available for inspection, modification, or audit. Open-source models, public training datasets, published evaluation benchmarks.

Trade-off: open systems support transparency and auditability but may conflict with competitive advantage or security (open systems are easier to attack adversarially).

The PM frames the trade-off for a stakeholder decision rather than choosing unilaterally.

KEY TAKEAWAYS


Module 4: Bias

Lessons 14-18 | Bias in AI — where it comes from, how to measure it, how to mitigate it.

Lesson 14: ECO Task I.3 — Conduct Bias Checks

The PM oversees bias checks on model, data, and algorithm. Three categories of bias check:

The PM doesn't run the bias tests — the data scientist + ethics team do. The PM ensures tests happen, results are documented, mitigation plans exist where bias is found, and the program continues into production (V.4 monitoring).

KEY TAKEAWAYS

PM Oversight Angle


Lesson 15: Three Types of Bias in AI (NVI)

(Same NVI mnemonic from Domain III — repeated here because Domain I tests it directly.)

  1. Neural-network bias — the bias term in a neural network node: a learned offset added to the weighted inputs. Has nothing to do with fairness.
  2. Variance bias — bias-vs-variance trade-off in model fitting (under/overfit).
  3. Informational bias — overrepresentation or underrepresentation of categories in data, with fairness implications. The exam-relevant one.

Confusing the three is a top exam trap.
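
For context on the "variance" sense, the standard decomposition of expected squared error (a textbook ML result, not PMI material) separates statistical bias from variance and noise:

$$
\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2} + \underbrace{\operatorname{Var}\big[\hat{f}(x)\big]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{irreducible noise}}
$$

Neither term here has anything to do with fairness; only informational bias does.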

💡 Memory Aid — NVI

Neural-net bias (math), Variance bias (fitting), Informational bias (fairness — the exam one).

KEY TAKEAWAYS


Lesson 16: Bias Measurement and Mitigation

Measurement — quantify bias across user segments. Common metrics: demographic parity, equal opportunity, predictive parity. Each captures a different aspect of fairness.

Mitigation strategies:
  1. Pre-processing — re-balance training data, remove protected attributes, augment underrepresented classes.
  2. In-processing — modify the training procedure (fairness constraints in the loss function).
  3. Post-processing — adjust model outputs for fairness (calibrate per-group, use different thresholds).

The PM coordinates which strategy fits the project's risk tolerance, technical constraints, and regulatory requirements.
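
Below is a minimal sketch of one measurement (demographic parity difference) and one post-processing mitigation (per-group thresholds), using numpy. The scores, groups, and threshold values are invented for illustration; real projects choose metrics per regulatory and risk context:

```python
# Demographic parity difference: gap in positive-prediction rates
# between groups. Data and thresholds are illustrative only.
import numpy as np

scores = np.array([0.62, 0.80, 0.35, 0.55, 0.48, 0.40])  # model scores
group = np.array(["a", "a", "a", "b", "b", "b"])          # protected attribute

def positive_rate(preds, grp, g):
    return preds[grp == g].mean()

preds = (scores >= 0.5).astype(int)  # one global threshold
dp_diff = positive_rate(preds, group, "a") - positive_rate(preds, group, "b")
print(f"demographic parity difference: {dp_diff:+.2f}")  # +0.33 with this toy data

# Post-processing mitigation: per-group thresholds chosen to close the gap.
thresholds = {"a": 0.50, "b": 0.45}
adjusted = np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])
```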

KEY TAKEAWAYS


Lesson 17: Diversity and Inclusion in AI Teams

Bias in AI often traces back to bias in the team building it. Homogeneous teams miss bias in their own outputs because they don't have diverse perspectives to spot it.

PMI's Trustworthy AI Framework includes Diversity & Inclusion under the Societal/Ethical pillar — diverse teams produce more trustworthy AI. The PM advocates for diverse representation in AI project teams as a project-level risk-reduction measure.

KEY TAKEAWAYS


Lesson 18: Fairness, Discrimination, and Real-World Impact

Bias in AI has real consequences:

Wrong-answer trap on the exam: treating bias as theoretical or low-priority. Right-answer pattern: treating bias as concrete project risk with documented mitigation.

KEY TAKEAWAYS


Module 5: Regulatory Compliance

Lessons 19-23 | The regulatory landscape and the PM's role in compliance oversight.

Lesson 19: ECO Task I.4 — Monitor Regulatory and Policy Compliance

The PM monitors regulatory and policy compliance throughout the project — from Phase I scope (does the use case comply?) through Phase VI operations (is production maintaining compliance?). Regulatory requirements come from:

KEY TAKEAWAYS

PM Oversight Angle


Lesson 20: AI Regulatory Landscape

The AI regulatory landscape is rapidly evolving. Key frameworks:

For the exam: don't memorize details, but recognize when a stem mentions any of these, and know that they're Domain I (compliance) territory.

KEY TAKEAWAYS


Lesson 21: Compliance Across Jurisdictions

AI projects often span jurisdictions — data crosses borders, models serve global users, vendors operate internationally. Each jurisdiction has its own compliance regime.

The PM coordinates a compliance map showing required data flows, model serving locations, user populations, and applicable regulations. Where regulations conflict (rare but real), legal counsel is engaged.
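
One lightweight way a team might represent such a map (a hypothetical structure; the entries, regulations, and controls are illustrative):

```python
# Hypothetical compliance map: each data flow mapped to jurisdictions,
# applicable regulations, and the controls that satisfy them.
compliance_map = [
    {"data_flow": "EU customer PII -> us-east-1 training pipeline",
     "jurisdictions": ["EU", "US"],
     "regulations": ["GDPR (cross-border transfer)"],
     "controls": ["standard contractual clauses", "data minimization"]},
    {"data_flow": "US health records -> inference API",
     "jurisdictions": ["US"],
     "regulations": ["HIPAA"],
     "controls": ["BAA signed", "audit trail enabled"]},
]
# Multi-jurisdiction flows get flagged for legal review.
needs_legal_review = [e for e in compliance_map if len(e["jurisdictions"]) > 1]
```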

KEY TAKEAWAYS


Lesson 22: AI Governance Principles

PMI's Trustworthy AI Framework lists AI Governance Principles as a category:

These principles inform Domain V's V.3 (model governance) execution.

KEY TAKEAWAYS


Lesson 23: Educated Workforce as Compliance Foundation

A workforce that doesn't understand AI governance can't comply with it — regardless of policies on paper. The PM advocates for and tracks workforce education as part of the compliance program.

This includes:

KEY TAKEAWAYS


Module 6: Accountability and Audit

Lessons 24-28 | Who's accountable for AI decisions, and how it's documented.

Lesson 24: ECO Task I.5 — Manage Accountability Documentation and Audit Trail

The PM manages accountability documentation and audit trails — ensuring someone is accountable for every AI decision and the trail to that accountability is documented and traceable.
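
A minimal sketch of what one audit-trail record could carry, matching the components this task calls for; the dataclass and field names are hypothetical, not a PMI schema:

```python
# One audit-trail record per consequential AI decision: input, model
# version, prediction, timestamp, rationale, and any human override.
# Field names are hypothetical, not a PMI-defined schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    input_ref: str             # hash/pointer to the exact input, not raw PII
    model_version: str
    prediction: str
    rationale: str             # e.g., top contributing features
    accountable_owner: str     # the named human accountable for the outcome
    human_override: str | None = None
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AuditRecord(
    input_ref="sha256:<input-hash>",  # placeholder, illustrative
    model_version="loan-approval-2.3.1",
    prediction="deny",
    rationale="debt_to_income above threshold; short credit history",
    accountable_owner="credit-risk operations lead",
)
```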

Accountability includes:

KEY TAKEAWAYS

PM Oversight Angle


Lesson 25: Human Accountability for AI Decisions

PMI's Trustworthy AI Framework: a human must be accountable for AI decisions. This doesn't mean a human reviews every prediction — it means that for any consequential decision, a human is named as accountable for the outcome.

Practical implementation:

KEY TAKEAWAYS


Lesson 26: System Auditability

System auditability = the AI system can be audited. Requires:

Auditability is a Domain I principle, executed in Domain V (V.3 governance, V.4 metrics).

KEY TAKEAWAYS


Lesson 27: Contestability — User Right to Challenge AI Decisions

Contestability = users affected by AI decisions can challenge them. PMI's Trustworthy AI Framework lists this as a Governance Principle.

Practical implementation:

EU AI Act and GDPR Article 22 (automated decision-making) make contestability a regulatory requirement, not just a best practice.
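
A hypothetical sketch of how one dispute might be tracked against the contestability components (all field names and values invented):

```python
# Hypothetical dispute record covering the contestability components:
# disclosure, explanation, challenge mechanism, human review, remediation.
dispute = {
    "decision_id": "loan-2024-001842",         # links back to the audit-trail record
    "disclosed_ai_use": True,                   # the user knew an AI made the decision
    "explanation_provided": "top contributing features shared with applicant",
    "challenge_filed": "2024-06-02",            # formal challenge mechanism used
    "human_reviewer": "senior credit officer",  # named human review path
    "outcome": "decision reversed",             # remediation result
}
```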

KEY TAKEAWAYS


Lesson 28: Documentation and Trace Requirements

What documentation is required across an AI project's life cycle:

| Phase | Required Documentation |
| --- | --- |
| I (Business Understanding) | Risk assessment, scope, success criteria |
| II (Data Understanding) | Data sources, requirements spec, privacy/compliance mapping, gate decision |
| III (Data Preparation) | Pipelines, transformations, gate decision (IV.5) |
| IV (Model Development) | Technique justification, training records, evaluation results, gate decision (IV.6) |
| V (Model Evaluation) | Performance metrics, bias measurements, robustness, comparison vs baseline |
| VI (Operationalization) | Deployment plan, governance plan, monitoring plan, contingency plan, audit logs, lessons learned |

Comprehensive documentation = trustworthy AI. Gaps in documentation = governance and accountability gaps.

KEY TAKEAWAYS


Quick Reference: The 5 Pillars (SRTGI) Cross-Reference

| Pillar | Domain I Task | Cross-Domain Pulls |
| --- | --- | --- |
| Societal/Ethical | I.3 (bias checks) | III.6 (privacy, but ethical sourcing too), IV.2 (QA/QC bias), V.4 (production monitoring) |
| Responsible | I.1 (privacy/security), I.5 (accountability) | III.6 (privacy/compliance/access), V.3 (governance), V.7 (contingency) |
| Transparent | I.2 (transparency) | IV.1 (technique selection — explainability), IV.6 (gate — transparency), V.3 (governance) |
| Governed | I.4 (compliance), I.5 (audit) | IV.6 (gate — compliance), V.3 (governance), V.4 (metrics — audit) |
| Interpretable / Explainable | I.2 (transparency) | IV.1 (technique selection), IV.6 (gate criteria) |

Cross-Domain Links


Knowledge Check

Question 1

A regulator inquires about how an AI-driven decision was made for a specific customer. The team is asked to provide documentation. What's the BEST response?

A. Have the data scientist explain the model architecture

B. Provide the audit trail from accountability documentation: input, model version, prediction, timestamp, decision rationale, and any human-in-the-loop overrides — sourced from the documented program (ECO Task I.5)

C. Decline since model decisions are confidential

D. Re-run the prediction and provide the new result

Correct: B

ECO Task I.5 (accountability documentation and audit trail). Domain I principle, Domain V execution. The audit trail is the prepared answer.

  • A wrong: Architecture isn't decision-specific accountability.
  • C wrong: Regulators have inquiry rights; declining is rarely correct.
  • D wrong: Re-running doesn't reproduce the original decision context.

Question 2

A bias measurement during model QA/QC reveals demographic disparity in predictions. What's the PM's BEST response?

A. Have the data scientist add a fairness post-processing layer and proceed

B. Treat as ECO I.3 (bias checks) + IV.2 (QA/QC) issue: document the finding, evaluate mitigation options, engage stakeholders for remediation decision, ensure mitigation continues into production monitoring (V.4)

C. Deploy the model and monitor bias in production

D. Restart from scratch with new data

Correct: B

I.3 + IV.2 + V.4 cross-domain pull. Bias requires documented stakeholder remediation, not unilateral technical fix.

  • A wrong: Wrong-answer trap — post-processing without governance.
  • C wrong: Production isn't the bias-resolution venue.
  • D wrong: Without root-cause, restart may repeat the issue.

Question 3

True or False: Domain I work happens primarily during Phase I (Business Understanding) and Phase VI (Operationalization).

Correct: FALSE

Domain I runs continuously throughout all six phases. It's not a phase — it's a thread. The privacy plan exists from Phase I and lives through Phase VI. Bias checks happen in Phases II, III, IV, and V. Compliance monitoring is continuous.

Question 4

The team is using a deep learning model for a high-stakes regulated decision (loan approval). The regulator requires explainability. What's the PM's BEST response?

A. Approve the deep learning model since it's more accurate

B. Document the technique selection and ensure trade-off between performance and explainability is presented to stakeholders; consider interpretable-by-design alternatives; engage compliance/legal early

C. Add post-hoc XAI explanations after deployment

D. Reject deep learning and require an interpretable model

Correct: B

I.2 (transparency) + IV.1 (technique). High-stakes regulated decisions = stakeholder + compliance/legal coordination. PM facilitates, doesn't unilaterally approve or reject.

  • A wrong: Skips Domain I transparency consideration.
  • C wrong: Wrong-answer trap — post-hoc XAI may not satisfy regulator's "explainability" definition.
  • D wrong: Unilateral PM rejection isn't stakeholder-engaged either.

Question 5

A team discovers that the chosen training data set contains PII that wasn't anonymized. The data scientist proposes anonymizing it during preprocessing. What's the PM's BEST response?

A. Approve the anonymization plan

B. Treat as ECO I.1 (privacy plan) + III.6 (data privacy check): coordinate with privacy/legal, validate that anonymization meets regulatory bar (re-identification risk assessed), document the decision, and update the privacy plan accordingly before processing continues

C. Have the data scientist proceed with anonymization in parallel with documentation

D. Switch to a different data set

Correct: B

I.1 + III.6 cross-domain. Privacy decisions require legal coordination, regulatory verification, documentation. Anonymization is a valid technique but the plan needs governance.

  • A wrong: Approves without governance check (re-identification risk, regulatory adequacy).
  • C wrong: Wrong-answer trap — parallel work bypasses governance check.
  • D wrong: Replacement may be needed but only after governance review.

Question 6

The deployment plan for an AI loan-approval system is being finalized. A stakeholder asks what happens if a customer disputes an AI decision. What's the PM's BEST response?

A. The dispute goes to customer service

B. Reference the contestability mechanism in the Trustworthy AI plan: disclosure, XAI explanation, formal challenge process, human review path, outcome remediation — all documented as part of I.2 (transparency) + I.5 (accountability) + V.3 (governance) plans

C. The data scientist will retrain the model based on disputes

D. There's no dispute mechanism since the model is well-tested

Correct: B

Contestability is a Trustworthy AI Framework principle. Domain I (I.2 + I.5) defines; Domain V (V.3) executes.

  • A wrong: Customer service alone isn't a contestability mechanism.
  • C wrong: Wrong-answer trap — retraining isn't dispute resolution.
  • D wrong: Contestability is required regardless of model quality (regulatory).

Question 7

True or False: The PM's responsibility for Domain I tasks ends at deployment.

Correct: FALSE

Domain I runs continuously. Privacy plans evolve. Bias monitoring continues in production (V.4). Compliance is ongoing (regulatory landscape changes). Audit trails accumulate. Accountability documentation must be current.

Question 8

A team is evaluating a third-party foundation model for fine-tuning. What Domain I considerations does the PM raise?

A. Whether the model performs well on benchmarks

B. Provenance (where was it trained, on what data, with what license), bias measurement (does the vendor publish bias evaluations?), audit support (can decisions be traced?), regulatory fit (does using this model satisfy the project's regulatory framework?)

C. Cost of using the model

D. Vendor reputation

Correct: B

Domain I principles apply to third-party models too. Provenance, bias, audit, regulatory fit are all PM concerns when adopting external AI components.

  • A wrong: Performance is one concern, not the Domain I concern.
  • C wrong: Cost is a project concern but not Domain I.
  • D wrong: Reputation is a sourcing factor, not the structured Domain I concern.

Question 9

The marketing team wants to deploy a GenAI customer service chatbot that doesn't disclose to users that they're talking to an AI. What's the PM's BEST response?

A. Approve since users may prefer the human-feeling experience

B. Block deployment: disclosure is a Trustworthy AI Framework requirement (I.2 transparency); users should be told they're interacting with AI; surface to stakeholders and compliance for resolution

C. Deploy and add disclosure later

D. Have the chatbot occasionally identify as AI

Correct: B

Disclosure is a non-negotiable Trustworthy AI principle (Lesson 12). Bot-to-human handoff should be clearly communicated.

  • A wrong: Wrong-answer trap — user "preference" doesn't override disclosure principle.
  • C wrong: Disclosure is upfront, not retrofitted.
  • D wrong: Occasional disclosure is partial; the principle requires consistent disclosure.

Question 10

A stem mentions "the AI system is processing customer health information." What Domain I cues does the PM recognize?

A. Performance optimization is critical

B. Privacy (I.1 — HIPAA implications), bias checks (I.3 — health data has known bias risks), regulatory compliance (I.4 — HIPAA, possibly state laws), accountability (I.5 — health decisions need named human accountability)

C. The model needs to be deep learning

D. The cloud provider should be HIPAA-certified

Correct: B

Health data (PHI) triggers all four of these Domain I tasks. PMI tests recognition of cross-domain Domain I cues in stems.

  • A wrong: Doesn't address Domain I.
  • C wrong: Architecture isn't Domain I.
  • D wrong: Single point that's part of B's broader picture.


Memory Aids & Mnemonics Summary

| Mnemonic | What to Remember |
| --- | --- |
| SRTGI (5 Pillars) | Societal/Ethical, Responsible, Transparent, Governed, Interpretable. "Some Really Trustworthy Governance Inspires." |
| PRIDE (Privacy Plan) | Personal data identified, Regulatory frameworks, Incident response, Data access controls, Encryption/anonymization |
| NVI (3 Biases) | Neural-net (math), Variance (fitting), Informational (fairness — exam one) |
| Disclosure & Consent | Users know they're interacting with AI + can opt out with feasible alternatives |
| Systemic vs Decision Transparency | Systemic = how built. Decision = why this prediction. |
| 3 Mitigation Strategies | Pre-processing (data), in-processing (training), post-processing (outputs) |
| Audit Trail Components | Input + model version + prediction + timestamp + rationale + overrides |
| 5 Domain I Tasks | I.1 Privacy/security, I.2 Transparency, I.3 Bias, I.4 Compliance, I.5 Accountability |
| Contestability Components | Disclosure + explanation + challenge mechanism + human review path + outcome remediation |
| Major Regulations | GDPR, HIPAA, CCPA, EU AI Act, NIST AI RMF, ISO/IEC 42001 |

Closing reminders for Domain I


Next: Cross-domain reference docs (output/references/) — phase-domain crosswalk, gate cheat sheet, foundational concepts. Then game files. Then practice questions. Then the site.