Domain I: Support Responsible and Trustworthy AI Efforts — Study Game

How to Play

Pick a game mode and test yourself. Cover answers and try to recall before peeking. Domain I is the cross-cutting domain — most of these reinforce cross-pulls into Domains III/IV/V.


GAME MODE 1: Rapid Fire Flashcards

The Framework

Card 1 — Front: SRTGI — five pillars of Trustworthy AI?
Answer: Societal/Ethical, Responsible, Transparent, Governed, Interpretable.
Card 2 — Front: Does Domain I map to a single CPMAI phase?
Answer: No — runs continuously across all six phases.

Privacy and Security (I.1)

Card 3 — Front: PRIDE privacy plan elements?
Answer: Personal data identified, Regulatory frameworks, Incident response, Data access controls, Encryption/anonymization.
Card 4 — Front: Anonymization vs pseudonymization?
Answer: Anonymization is irreversible (no longer PII). Pseudonymization is reversible (still PII under GDPR).
Card 5 — Front: Major regulations affecting AI?
Answer: GDPR (EU), HIPAA (US healthcare), CCPA/CPRA (California), GLBA (US financial), EU AI Act, NIST AI RMF, ISO/IEC 42001.

Transparency (I.2)

Card 6 — Front: Systemic vs Decision transparency?
Answer: Systemic = visibility into model construction. Decision = visibility into specific predictions.
Card 7 — Front: XAI vs Interpretability?
Answer: XAI = post-hoc explain any model. Interpretability = inherently understandable models.
Card 8 — Front: What's contestability?
Answer: Users affected by AI decisions can challenge them. Required under EU AI Act + GDPR Art 22.
Card 9 — Front: Disclosure principle?
Answer: Users should know they're interacting with AI. Bot-to-human handoff clearly communicated.

Bias (I.3)

Card 10 — Front: NVI — three bias types?
Answer: Neural-net (math), Variance (fitting), Informational (fairness — exam one).
Card 11 — Front: Three informational bias types?
Answer: Reporting, Recall, Classification.
Card 12 — Front: Three bias mitigation strategies?
Answer: Pre-processing (rebalance data), In-processing (training-time fairness constraints), Post-processing (output calibration).
Card 13 — Front: Bias must be ____, ____, and ____.
Answer: measurable, monitored, managed.

Compliance (I.4)

Card 14 — Front: When does I.4 compliance monitoring happen?
Answer: Continuously, throughout all phases — not just at deployment.
Card 15 — Front: Seven AI Governance Principles?
Answer: Risk Assessment, System Auditability, Contestability, System Controls, System Monitoring, Regulatory/Third-party Certifications, Educated Workforce.

Accountability (I.5)

Card 16 — Front: What's in an audit trail?
Answer: Input + model version + prediction + timestamp + decision rationale + human-in-the-loop overrides.
Card 17 — Front: Human accountability principle?
Answer: A named human is accountable for consequential AI decisions. Tier by stakes (HITL/HOTL/automated).

GAME MODE 2: Scenario Showdown — What Should the PM Do?

Scenario 1: The Regulatory Inquiry

Reveal Provide audit trail from accountability program: input, model version, prediction, timestamp, decision rationale, human-in-the-loop overrides (I.5).

Scenario 2: The Bias Detection

Reveal Document, evaluate mitigation options, engage stakeholders for the decision, ensure mitigation continues into V.4 production monitoring (I.3).

Scenario 3: The High-Stakes Decision

Reveal Document the trade-off; consider interpretable-by-design alternatives; engage compliance/legal early. I.2 + IV.1 cross-pull.

Scenario 4: The Health Data Trigger

Reveal Recognize all four PHI Domain I tasks fire: I.1 (HIPAA privacy), I.3 (health data has known bias risks), I.4 (HIPAA + state laws), I.5 (named human accountability).

Scenario 5: The Disclosure Request

Reveal Block deployment. Disclosure is non-negotiable Trustworthy AI principle. Engage stakeholders and compliance.

Scenario 6: The Real-World Disparity Defense

Reveal "Reflecting real-world differences" isn't a defense if AI amplifies/perpetuates disparity. Engage ethics/legal/stakeholders. Possibly loop II.8 to clarify success criteria.


GAME MODE 3: Pattern Match Challenge

#ConceptECO Task
1Privacy and security planI.1
2Transparency / explainabilityI.2
3Bias checksI.3
4Regulatory and policy compliance monitoringI.4
5Accountability documentation and audit trailI.5
6SRTGI 5 pillarsI (overall framework)
7NVI three biasesI.3
8PRIDE privacy planI.1
9Contestability mechanismI.2 + I.4
10Educated workforceI.4
Scoring: 9-10 = Expert | 7-8 = Solid | <7 = Review

GAME MODE 4: Fill-in-the-Blank Speed Round

  1. SRTGI = Societal/Ethical, Responsible, ________, Governed, Interpretable.
  2. NVI biases: Neural-net, Variance, ________ (the exam one).
  3. Anonymization is irreversible; ________ is reversible.
  4. The five GenAI risks: HIIPP — Hallucination, IP misappropriation, ________, Prompt injection, Private data sharing.
  5. PRIDE privacy plan: Personal data, Regulatory frameworks, ________, Data access controls, Encryption.
  6. Three bias mitigation strategies: ________-processing, In-processing, Post-processing.
  7. Disclosure = users know they're interacting with AI. ________ = users can opt out with feasible alternatives.
  8. Audit trail captures: input + ________ + prediction + timestamp + rationale + overrides.
  9. Domain I runs ________ — not a phase, a thread.
  10. Contestability is required under EU AI Act + GDPR Article ________.

Reveal answers
  1. Transparent
  2. Informational
  3. pseudonymization
  4. Inappropriate responses
  5. Incident response
  6. Pre
  7. Consent
  8. model version
  9. continuously
  10. 22


GAME MODE 5: True or False Lightning Round

#StatementCorrect
1Domain I happens primarily in Phase I and Phase VIFALSE — continuous across all 6 phases
2Pseudonymization satisfies GDPR cross-border transferFALSE — pseudonymized data is still PII
3XAI post-hoc explanations always satisfy regulator's "explainability" barFALSE — depends on regulation
4Bias must be measurable, monitored, and managedTRUE
5Audit trails are stored alongside model files for easy accessFALSE — secure controlled access
6Human accountability means a human reviews every predictionFALSE — named accountability tiered by stakes
7Disclosure can be added after deployment if users complainFALSE — upfront, not retrofit
8The 5 pillars (SRTGI) map directly to the 5 ECO Domain I tasksTRUE
9"Reflecting real-world differences" justifies AI biasFALSE — amplification/perpetuation matter
10Contestability is just a best practiceFALSE — required under EU AI Act + GDPR Art 22
Scoring: 9-10 = Exam ready | 7-8 = Almost | <7 = Review

GAME MODE 6: Mnemonic Speed Recall

MnemonicExpand it
SRTGISocietal, Responsible, Transparent, Governed, Interpretable (5 Pillars)
PRIDEPersonal data, Regulatory frameworks, Incident response, Data access, Encryption
NVINeural-net, Variance, Informational (3 Bias Types)
HIIPPHallucination, IP misappropriation, Inappropriate, Prompt injection, Private data
5 Domain I TasksPrivacy/security · Transparency · Bias · Compliance · Accountability
3 Mitigation StrategiesPre-processing · In-processing · Post-processing
Audit TrailInput + version + prediction + timestamp + rationale + overrides
Major RegulationsGDPR, HIPAA, CCPA, EU AI Act, NIST AI RMF, ISO/IEC 42001

Scoring Summary

ModeScoreMax
Flashcards___/1717
Scenarios___/66
Pattern Match___/1010
Fill-in___/1010
True/False___/1010
Mnemonic___/88
TOTAL___/6161
Rating: 53+ = mastered · 40-52 = strong · 27-39 = review · <27 = re-study Trustworthy AI Framework.