Domain I: Support Responsible and Trustworthy AI Efforts — Study Game
How to Play
Pick a game mode and test yourself. Cover the answers and try to recall them before peeking. Domain I is the cross-cutting domain, so most of these items also reinforce cross-pulls into Domains III/IV/V.
GAME MODE 1: Rapid Fire Flashcards
The Framework
Card 1 — Front: SRTGI — five pillars of Trustworthy AI?
Answer: Societal/Ethical, Responsible, Transparent, Governed, Interpretable.
Card 2 — Front: Does Domain I map to a single CPMAI phase?
Answer: No — runs continuously across all six phases.
Privacy and Security (I.1)
Card 3 — Front: PRIDE privacy plan elements?
Answer: Personal data identified, Regulatory frameworks, Incident response, Data access controls, Encryption/anonymization.
Card 4 — Front: Anonymization vs pseudonymization?
Answer: Anonymization is irreversible (no longer PII). Pseudonymization is reversible (still PII under GDPR). (Sketch below.)
Card 5 — Front: Major regulations affecting AI?
Answer: GDPR (EU), HIPAA (US healthcare), CCPA/CPRA (California), GLBA (US financial), EU AI Act, NIST AI RMF, ISO/IEC 42001.
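Bonus drill for Card 4: a toy Python sketch of the distinction. The field names and hashing scheme are illustrative assumptions only, not a compliance recipe.

```python
import hashlib
import secrets

record = {"name": "Jane Doe", "zip": "02139", "diagnosis": "J45"}

# Pseudonymization: replace the direct identifier with a keyed token.
# The salt/key still exists somewhere, so re-identification is possible
# and the record remains PII under GDPR.
salt = secrets.token_hex(16)  # would live in access-controlled storage
pseudonymized = dict(record)
pseudonymized["name"] = hashlib.sha256((salt + record["name"]).encode()).hexdigest()

# Anonymization: drop the identifier and coarsen quasi-identifiers so the
# person can no longer be singled out; there is no key to reverse it.
anonymized = {"zip": record["zip"][:3] + "xx", "diagnosis": record["diagnosis"]}

print(pseudonymized)
print(anonymized)
```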
Transparency (I.2)
Card 6 — Front: Systemic vs Decision transparency?
Answer: Systemic = visibility into model construction. Decision = visibility into specific predictions.
Card 7 — Front: XAI vs Interpretability?
Answer: XAI = post-hoc explanations applied to any model. Interpretability = inherently understandable models. (Sketch below.)
Card 8 — Front: What's contestability?
Answer: Users affected by AI decisions can challenge them. Required under EU AI Act + GDPR Art 22.
Card 9 — Front: Disclosure principle?
Answer: Users should know they're interacting with AI. Bot-to-human handoff clearly communicated.
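Bonus drill for Card 7: a minimal scikit-learn sketch of the contrast. The models and synthetic data are arbitrary placeholders, not a recommendation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Interpretable by design: a shallow tree whose decision rules can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))

# Post-hoc XAI: the ensemble is opaque, so it is explained after the fact,
# here with permutation importance (one of many post-hoc techniques).
blackbox = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(blackbox, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```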
Bias (I.3)
Card 10 — Front: NVI — three bias types?
Answer: Neural-net (math), Variance (fitting), Informational (fairness — the one the exam focuses on).
Card 11 — Front: Three informational bias types?
Answer: Reporting, Recall, Classification.
Card 12 — Front: Three bias mitigation strategies?
Answer: Pre-processing (rebalance data), In-processing (training-time fairness constraints), Post-processing (output calibration).
Card 13 — Front: Bias must be ____, ____, and ____.
Answer: measurable, monitored, managed. (Sketch below.)
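Bonus drill for Card 13: bias is only "measurable" if a metric is actually computed. A toy sketch of one common check, the selection-rate (disparate impact) ratio; the 0.8 cutoff below is the informal four-fifths rule, not a universal legal standard, and the data is made up.

```python
# Selection rate = fraction of a group receiving the favorable outcome (1).
def selection_rate(predictions, group, value):
    selected = [p for p, g in zip(predictions, group) if g == value]
    return sum(selected) / len(selected)

preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # toy model outputs (1 = approved)
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Disparate impact ratio: disadvantaged group's rate over advantaged group's rate.
ratio = selection_rate(preds, groups, "B") / selection_rate(preds, groups, "A")
print(f"disparate impact ratio: {ratio:.2f}")   # flag for review if < 0.8
```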
Compliance (I.4)
Card 14 — Front: When does I.4 compliance monitoring happen?
Answer: Continuously, throughout all phases — not just at deployment.
Card 15 — Front: Seven AI Governance Principles?
Answer: Risk Assessment, System Auditability, Contestability, System Controls, System Monitoring, Regulatory/Third-party Certifications, Educated Workforce.
Accountability (I.5)
Card 16 — Front: What's in an audit trail?
Answer: Input + model version + prediction + timestamp + decision rationale + human-in-the-loop overrides. (Sketch below.)
Card 17 — Front: Human accountability principle?
Answer: A named human is accountable for consequential AI decisions. Tier by stakes (HITL/HOTL/automated).
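Bonus drill for Card 16: the audit trail as a data structure. A minimal sketch; the field names, values, and storage details are hypothetical, not prescribed by the ECO.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditRecord:
    input_features: dict        # the input the model saw
    model_version: str          # exact model/artifact version used
    prediction: str             # what the model output
    timestamp: str              # when the decision was made (UTC)
    rationale: str              # decision rationale / pointer to an explanation
    human_override: Optional[str] = None   # human-in-the-loop override, if any

record = AuditRecord(
    input_features={"credit_score": 612, "income": 48000},
    model_version="risk-model-2.3.1",
    prediction="decline",
    timestamp=datetime.now(timezone.utc).isoformat(),
    rationale="score below policy threshold; explanation ref #a1b2",
    human_override="approved by credit officer after appeal",
)
print(record)  # in practice: write to append-only, access-controlled storage
```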
GAME MODE 2: Scenario Showdown — What Should the PM Do?
Scenario 1: The Regulatory Inquiry
- Regulator inquires about a specific AI decision affecting a customer
- Documentation request
Reveal
Provide the audit trail from the accountability program: input, model version, prediction, timestamp, decision rationale, human-in-the-loop overrides (I.5).
Scenario 2: The Bias Detection
- Bias measurement reveals demographic disparity
- Data scientist proposes fairness post-processing layer
Reveal
Document the disparity, evaluate mitigation options, engage stakeholders for the decision, and ensure mitigation continues into V.4 production monitoring (I.3).
Scenario 3: The High-Stakes Decision
- High-stakes regulated decision needs explainability
- Chosen technique is black-box deep learning
Reveal
Document the trade-off; consider interpretable-by-design alternatives; engage compliance/legal early. I.2 + IV.1 cross-pull.
Scenario 4: The Health Data Trigger
- Required dataset contains patient health information
Reveal
Recognize that PHI triggers four Domain I tasks at once: I.1 (HIPAA privacy), I.3 (health data carries known bias risks), I.4 (HIPAA + state laws), I.5 (named human accountability).
Scenario 5: The Disclosure Request
- Marketing wants AI chatbot that doesn't disclose to users it's AI
- "Users prefer the human-feeling experience"
Reveal
Block deployment. Disclosure is a non-negotiable Trustworthy AI principle. Engage stakeholders and compliance.
Scenario 6: The Real-World Disparity Defense
- Bias check reveals disparity
- Data scientist says it reflects real-world differences
Reveal
"Reflecting real-world differences" isn't a defense if AI amplifies/perpetuates disparity. Engage ethics/legal/stakeholders. Possibly loop II.8 to clarify success criteria.GAME MODE 3: Pattern Match Challenge
| # | Concept | ECO Task |
|---|---|---|
| 1 | Privacy and security plan | I.1 |
| 2 | Transparency / explainability | I.2 |
| 3 | Bias checks | I.3 |
| 4 | Regulatory and policy compliance monitoring | I.4 |
| 5 | Accountability documentation and audit trail | I.5 |
| 6 | SRTGI 5 pillars | I (overall framework) |
| 7 | NVI three biases | I.3 |
| 8 | PRIDE privacy plan | I.1 |
| 9 | Contestability mechanism | I.2 + I.4 |
| 10 | Educated workforce | I.4 |
GAME MODE 4: Fill-in-the-Blank Speed Round
- SRTGI = Societal/Ethical, Responsible, ________, Governed, Interpretable.
- NVI biases: Neural-net, Variance, ________ (the exam one).
- Anonymization is irreversible; ________ is reversible.
- The five GenAI risks: HIIPP — Hallucination, IP misappropriation, ________, Prompt injection, Private data sharing.
- PRIDE privacy plan: Personal data, Regulatory frameworks, ________, Data access controls, Encryption.
- Three bias mitigation strategies: ________-processing, In-processing, Post-processing.
- Disclosure = users know they're interacting with AI. ________ = users can opt out with feasible alternatives.
- Audit trail captures: input + ________ + prediction + timestamp + rationale + overrides.
- Domain I runs ________ — not a phase, a thread.
- Contestability is required under EU AI Act + GDPR Article ________.
Reveal answers
- Transparent
- Informational
- pseudonymization
- Inappropriate responses
- Incident response
- Pre
- Consent
- model version
- continuously
- 22
GAME MODE 5: True or False Lightning Round
| # | Statement | Correct |
|---|---|---|
| 1 | Domain I happens primarily in Phase I and Phase VI | FALSE — continuous across all 6 phases |
| 2 | Pseudonymization satisfies GDPR cross-border transfer | FALSE — pseudonymized data is still PII |
| 3 | XAI post-hoc explanations always satisfy a regulator's "explainability" bar | FALSE — depends on the regulation |
| 4 | Bias must be measurable, monitored, and managed | TRUE |
| 5 | Audit trails are stored alongside model files for easy access | FALSE — secure controlled access |
| 6 | Human accountability means a human reviews every prediction | FALSE — named accountability tiered by stakes |
| 7 | Disclosure can be added after deployment if users complain | FALSE — upfront, not retrofit |
| 8 | The 5 pillars (SRTGI) map directly to the 5 ECO Domain I tasks | TRUE |
| 9 | "Reflecting real-world differences" justifies AI bias | FALSE — amplification/perpetuation matter |
| 10 | Contestability is just a best practice | FALSE — required under EU AI Act + GDPR Art 22 |
GAME MODE 6: Mnemonic Speed Recall
| Mnemonic | Expand it |
|---|---|
| SRTGI | Societal, Responsible, Transparent, Governed, Interpretable (5 Pillars) |
| PRIDE | Personal data, Regulatory frameworks, Incident response, Data access, Encryption |
| NVI | Neural-net, Variance, Informational (3 Bias Types) |
| HIIPP | Hallucination, IP misappropriation, Inappropriate responses, Prompt injection, Private data |
| 5 Domain I Tasks | Privacy/security · Transparency · Bias · Compliance · Accountability |
| 3 Mitigation Strategies | Pre-processing · In-processing · Post-processing |
| Audit Trail | Input + version + prediction + timestamp + rationale + overrides |
| Major Regulations | GDPR, HIPAA, CCPA, EU AI Act, NIST AI RMF, ISO/IEC 42001 |
Scoring Summary
| Mode | Score | Max |
|---|---|---|
| Flashcards | ___/17 | 17 |
| Scenarios | ___/6 | 6 |
| Pattern Match | ___/10 | 10 |
| Fill-in | ___/10 | 10 |
| True/False | ___/10 | 10 |
| Mnemonic | ___/8 | 8 |
| TOTAL | ___/61 | 61 |