Domain V: Operationalize AI Solution — Study Game

How to Play

Pick a game mode and test yourself. Cover the answers and try to recall before peeking.


GAME MODE 1: Rapid Fire Flashcards

Deployment Plan (V.1)

Card 1 — Front: What does the HOPE-MS mnemonic stand for?
Answer: How served, Operation location, Performance, Escalation, Monitoring, Stakeholder sign-off (deployment plan elements).
Card 2 — Front: When in the project should V.1 deployment plan first be drafted?
Answer: Iteratively from Phase I onward; finalized at IV.6 GO.
Card 3 — Front: Four serving methods PMI lists for AI deployment?
Answer: Batch, Microservices, Real-time, Stream learning.
Card 4 — Front: Four AI environments PMI defines?
Answer: Development (build/train), Big Data/Engineering (pipelines), Scaffolding (integrate), Operationalization (production). Mnemonic: DBSO.
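The serving methods in Card 3 differ mainly in *when* predictions are computed. A minimal sketch contrasting batch and real-time serving (all function and variable names are hypothetical, not PMI terminology):

```python
# Sketch: batch vs real-time serving for a toy model (hypothetical names).
def model_predict(x: float) -> str:
    """Stand-in for a trained model: classify a score."""
    return "high" if x >= 0.5 else "low"

def serve_batch(inputs: list[float]) -> list[str]:
    """Batch: score a whole dataset on a schedule, store the results."""
    return [model_predict(x) for x in inputs]

def serve_realtime(x: float) -> str:
    """Real-time: score one request synchronously, low latency."""
    return model_predict(x)

print(serve_batch([0.2, 0.9]))   # ['low', 'high']
print(serve_realtime(0.7))       # high
```

Microservices wrap `serve_realtime` behind a network endpoint; stream learning additionally updates the model from the incoming data it scores.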

Deployment Locations

Card 5 — Front: When does on-premise deployment make sense?
Answer: Data residency / regulatory requirements, sensitive data, latency-critical, existing on-prem investment.
Card 6 — Front: When does edge deployment make sense?
Answer: Latency-critical, bandwidth-limited, privacy-on-device, offline operation.
Card 7 — Front: When does cloud deployment make sense?
Answer: Scale on demand, managed infrastructure, integrated tooling, pay-as-you-go cost.
Card 8 — Front: Self-hosted vs API-hosted GenAI trade-off?
Answer: Self-hosted = control + privacy + capital cost. API-hosted = speed + scale + vendor terms.

Continuous Operations (V.4 + V.6 + V.7)

Card 9 — Front: Three categories of metrics in V.4?
Answer: Business KPIs, Model performance, Operational health. Mnemonic: BMO.
Card 10 — Front: Difference between model drift and data drift?
Answer: Model drift = predictions degrade. Data drift = inputs shift. Both inevitable.
Card 11 — Front: Are model drift and data drift exceptional events?
Answer: No — INEVITABLE. Monitoring exists because drift happens.
Card 12 — Front: What's MLOps?
Answer: DevOps adapted for ML — adds model versioning, data lineage, retraining automation, drift monitoring.
Card 13 — Front: What does V.6 (transition plan) cover?
Answer: Handoff documentation + knowledge transfer + ownership change (team change, vendor change, successor project).
Card 14 — Front: What does V.7 cover?
Answer: Contingency plan for AI-system incidents (model failure, data feed failure, performance breach, trustworthy-AI incidents).
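Because drift is inevitable (Card 11), V.4 monitoring boils down to comparing the production input distribution against a training baseline. A minimal sketch using the Population Stability Index, one common drift statistic; the 0.2 alert threshold is a common rule of thumb, not a PMI requirement:

```python
import math

def psi(baseline: list[float], production: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = max(min(int((x - lo) / width), bins - 1), 0)
            counts[i] += 1
        # Smooth zero bins so the log below is always defined.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]
    p, q = frac(baseline), frac(production)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

train = [i / 100 for i in range(100)]
print(psi(train, train))                                        # 0.0 (no drift)
print(psi(train, [0.9 + i / 1000 for i in range(100)]) > 0.2)   # True (drift alert)
```

A common interpretation: PSI below 0.1 means stable inputs, 0.1-0.2 means investigate, above 0.2 means significant data drift worth escalating per the V.7 contingency plan.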

Governance (V.3)

Card 15 — Front: What does APAVBE stand for in model governance?
Answer: Access control, Provenance/auditing, Audit logs, Versioning, Bias monitoring, Extension controls.
Card 16 — Front: What feeds V.3 governance from Domain I?
Answer: All 5 Domain I tasks — privacy, transparency, bias, regulatory compliance, accountability.
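The audit-trail fields that governance requires (input, model version, prediction, timestamp, rationale, human override) map naturally onto a structured, append-only log record. A minimal sketch; the field names are illustrative, not a PMI schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(features: dict, model_version: str, prediction: str,
                 rationale: str, human_override=None) -> str:
    """Serialize one prediction as a JSON audit-log line."""
    record = {
        # Hash the input so the log proves what was scored without
        # duplicating potentially sensitive raw data.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "model_version": model_version,
        "prediction": prediction,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "rationale": rationale,
        "human_override": human_override,
    }
    return json.dumps(record, sort_keys=True)

line = audit_record({"age": 41}, "credit-risk-2.3.1", "approve",
                    "score 0.91 above 0.8 threshold")
print(json.loads(line)["model_version"])   # credit-risk-2.3.1
```

Note the True/False round below: such records belong in secure, access-controlled storage, not alongside the model files.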

Trustworthy AI in Production

Card 17 — Front: Six trustworthy-AI properties production AI must have?
Answer: Compliant, Safe, Reliable, Secure, Ethical, Privacy-respecting.
Card 18 — Front: What's malicious AI?
Answer: Intentional use of AI for criminal/unethical/dangerous purposes (cyberthreats + physical threats).
Card 19 — Front: What's an adversarial attack?
Answer: Manipulating input data to deceive ML models (e.g., turtle classified as rifle).
Card 20 — Front: Six steps when ethical AI issue arises in production?
Answer: Detect, Contain, Audit, Notify, Remediate, Document.
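The six incident steps are a fixed sequence, which makes them easy to encode as a checklist that refuses to skip ahead. A toy sketch (the runner and step names as identifiers are illustrative; only the ordering comes from the card):

```python
# Enforce the Detect -> Contain -> Audit -> Notify -> Remediate -> Document
# order for an ethical-AI incident.
STEPS = ["detect", "contain", "audit", "notify", "remediate", "document"]

def run_step(completed: list[str], next_step: str) -> list[str]:
    """Allow next_step only if it is the next item in the fixed order."""
    expected = STEPS[len(completed)]
    if next_step != expected:
        raise ValueError(f"expected '{expected}', got '{next_step}'")
    return completed + [next_step]

log: list[str] = []
for step in STEPS:
    log = run_step(log, step)
print(log[-1])   # document
```

Trying to jump straight to remediation without containment raises an error, mirroring the exam point that the steps are ordered, not optional.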

Final Report (V.5)

Card 21 — Front: What goes in V.5 final report?
Answer: Project summary, performance vs II.8 success criteria, lessons learned, recommendations, outstanding risks.
Card 22 — Front: Where do V.5 lessons learned go?
Answer: Input to next iteration's Phase I (Business Understanding). CPMAI iterative.

Limits of AI

Card 23 — Front: Six hard limits of AI technology?
Answer: Doesn't understand, doesn't reason causally, fails OOD, needs data, has no values, can't fully self-explain.
Card 24 — Front: What does Phase VI close-out produce?
Answer: Recommendation for next iteration (not a hard gate).

GAME MODE 2: Scenario Showdown — What Should the PM Do?

Scenario 1: The Premature Deployment

Answer: Pause and coordinate creation of the deployment plan before deployment begins, including stakeholder sign-off (V.1). Many production incidents trace back to "we didn't plan for this."

Scenario 2: The Drift Discovery

Answer: Investigate root cause (data drift, model decay, scope shift), engage stakeholders, decide between retrain/rollback/rescope per the contingency plan. V.4 + V.7 intersect.

Scenario 3: The Change Control Bypass

Answer: Treat as a governance and accountability incident. Validate the deployed version against requirements, document the deviation, escalate per accountability procedures, reinforce change control. V.3 + I.5 cross-pull.

Scenario 4: The Untested Contingency

Answer: Halt deployment until rollback procedures are fixed and re-tested. A failed contingency test is a critical preparation gap. V.7 requires tested plans.

Scenario 5: The Vendor Transition

Answer: Pause the transition; coordinate skill-gap remediation (training, knowledge transfer, hiring) before handing off. V.6 — receiving-team capability is a gating factor.

Scenario 6: The Regulatory Inquiry

Answer: Provide the audit trail from model governance: input, model version, prediction, timestamp, decision rationale, human-in-the-loop overrides. V.3 + I.5 cross-pull. The audit trail is the prepared answer.

Scenario 7: The Discriminatory Output

Answer: Execute the trustworthy-AI-incident contingency: Detect → Contain → Audit → Notify (regulators per requirements) → Remediate → Document. Cross-pull I.3 + I.4 + I.5.

Scenario 8: The Closeout Question

Answer: Convene Phase VI close-out: final report and lessons learned, transition-to-operations decision, capture outstanding risks, formal stakeholder sign-off (V.5).


GAME MODE 3: Pattern Match Challenge

| # | Scenario | ECO Task |
|---|----------|----------|
| 1 | Coordinating creation of deployment plan | V.1 |
| 2 | Managing the actual deployment execution | V.2 |
| 3 | Overseeing model governance in production | V.3 |
| 4 | Tracking production metrics | V.4 |
| 5 | Producing final report and lessons learned | V.5 |
| 6 | Coordinating handoff to new ops team | V.6 |
| 7 | Planning response to model failure | V.7 |
| 8 | Audit trail content (input + version + prediction + ...) | V.3 (+ I.5) |
| 9 | A/B testing two model versions | V.3 |
| 10 | Detecting bias drift in production | V.4 |
Scoring: 9-10 = Expert | 7-8 = Solid | Below 7 = Review
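Pattern 9 (A/B testing two model versions) is usually implemented with deterministic traffic splitting, so the same user always sees the same variant. A minimal hash-based sketch; the 10% challenger share and the model names are illustrative choices:

```python
import hashlib

def assign_variant(user_id: str, challenger_share: float = 0.10) -> str:
    """Deterministically route a user to champion (A) or challenger (B)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # stable value in [0, 1]
    return "model-B" if bucket < challenger_share else "model-A"

# Same user, same variant, every time.
print(assign_variant("user-42") == assign_variant("user-42"))   # True
```

Hashing rather than random assignment keeps the split reproducible for the audit trail (pattern 8), since a logged `user_id` plus the share fully determines which model version served the request.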

GAME MODE 4: Fill-in-the-Blank Speed Round

  1. The 4 AI environments are Development, Big Data/Engineering, ________, Operationalization.
  2. V.4 metrics span ________ KPIs, Model performance, Operational health.
  3. Model drift = predictions degrade. Data drift = ________ shift.
  4. APAVBE governance covers Access, ________, Audit logs, Versioning, Bias monitoring, Extension controls.
  5. Six steps for ethical AI incident: Detect, Contain, Audit, ________, Remediate, Document.
  6. V.5 lessons learned feed next iteration's ________.
  7. Stream learning serves predictions AND ________ from incoming data.
  8. Hot path = ________ latency. Cold path = high latency.
  9. The 5 GenAI risks are Hallucination, IP misappropriation, ________, Prompt injection, Private data sharing.
  10. V.6 transition plan handoff is documented, ________, and confirmed ready by the receiving team.

Answers
  1. Scaffolding
  2. Business
  3. inputs
  4. Provenance
  5. Notify
  6. Phase I
  7. learns
  8. low
  9. Inappropriate responses
  10. signed-off


GAME MODE 5: True or False Lightning Round

| # | Statement | Correct |
|---|-----------|---------|
| 1 | Model drift and data drift are exceptional events | FALSE — inevitable |
| 2 | V.1 deployment plan is finalized at IV.6 GO | TRUE |
| 3 | Operationalization equals deployment | FALSE — operationalization is broader |
| 4 | A model can be deployed anywhere (mobile, server, cloud, edge, browser) | TRUE |
| 5 | Routine retraining triggers V.6 transition plan | FALSE — that's V.4/V.3 |
| 6 | Production validation can substitute for IV.6 evaluation | FALSE — gate is pre-deployment |
| 7 | The PM declares deployment complete after verifying all success criteria | TRUE |
| 8 | Audit trails are stored alongside model files for easy access | FALSE — secure controlled access |
| 9 | V.5 lessons learned focus only on what went wrong | FALSE — successes too |
| 10 | Contingency plans must be tested before production | TRUE |
| 11 | Auto-detection tools eliminate the need for V.7 contingency plan | FALSE — detection ≠ response |
| 12 | The PM declares deployment success based solely on runtime | FALSE — runtime + monitoring + performance + governance |
Scoring: 11-12 = Exam ready | 9-10 = Almost there | Below 9 = Review

GAME MODE 6: Mnemonic Speed Recall

| Mnemonic | Expand it |
|----------|-----------|
| HOPE-MS | How served, Operation location, Performance, Escalation, Monitoring, Stakeholder sign-off |
| DBSO | Development, Big data/engineering, Scaffolding, Operationalization (4 AI environments) |
| BMO | Business KPIs, Model performance, Operational health (V.4 metrics) |
| APAVBE | Access control, Provenance/auditing, Audit logs, Versioning, Bias monitoring, Extension controls |
| HIIPP | Hallucination, IP misappropriation, Inappropriate responses, Prompt injection, Private data sharing (GenAI risks) |
| Hot vs Cold | Hot = Hurry (ms, real-time). Cold = Consider (hours, aggregation). |
| Detect-Contain-Audit-Notify-Remediate-Document | 6 steps for an ethical AI incident |
| MLOps vs DevOps | DevOps = code CI/CD. MLOps = code + model + data CI/CD + drift monitoring. |

Scoring Summary

| Game Mode | Score | Max |
|-----------|-------|-----|
| Flashcards | ___/24 | 24 |
| Scenario Showdown | ___/8 | 8 |
| Pattern Match | ___/10 | 10 |
| Fill-in-the-Blank | ___/10 | 10 |
| True/False | ___/12 | 12 |
| Mnemonic Recall | ___/8 | 8 |
| TOTAL | ___/72 | 72 |
Rating: 65+ = mastered · 50-64 = strong · 35-49 = review · <35 = re-study Module 06.