Domain V: Operationalize AI Solution — Study Game
How to Play
Pick a game mode and test yourself. Cover the answers and try to recall before peeking.
GAME MODE 1: Rapid Fire Flashcards
Deployment Plan (V.1)
Card 1 — Front: What does the HOPE-MS mnemonic stand for?
Answer: How served, Operation location, Performance, Escalation, Monitoring, Stakeholder sign-off (deployment plan elements).

Card 2 — Front: When in the project should the V.1 deployment plan first be drafted?
Answer: Iteratively from Phase I onward; finalized at IV.6 GO.

Card 3 — Front: Four serving methods PMI lists for AI deployment?
Answer: Batch, Microservices, Real-time, Stream learning.

Card 4 — Front: Four AI environments PMI defines?
Answer: Development (build/train), Big Data/Engineering (pipelines), Scaffolding (integrate), Operationalization (production). Mnemonic: DBSO.
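The stream-learning serving method from Card 3 (serve a prediction, then learn from the same incoming record) can be sketched as a toy online perceptron. This is a minimal pure-Python illustration; the class and all names are invented for this sketch, not from PMI material.

```python
# Toy stream learner: each incoming record is first scored (serving),
# then used to update the weights once its true label is observed (learning).

class OnlinePerceptron:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # Serve a prediction for the incoming record.
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score >= 0 else 0

    def learn(self, x, y):
        # Learn from the same record once its label arrives.
        error = y - self.predict(x)
        if error != 0:
            self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * error


model = OnlinePerceptron(n_features=2)
# Simulated stream: label is 1 whenever the first feature is 1.
stream = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
for x, y in stream:
    pred = model.predict(x)  # serve
    model.learn(x, y)        # learn
```

The point for the exam is the contrast with batch serving: the model both answers and adapts inside the same data flow.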
Deployment Locations
Card 5 — Front: When does on-premise deployment make sense?
Answer: Data residency / regulatory requirements, sensitive data, latency-critical, existing on-prem investment.

Card 6 — Front: When does edge deployment make sense?
Answer: Latency-critical, bandwidth-limited, privacy-on-device, offline operation.

Card 7 — Front: When does cloud deployment make sense?
Answer: Scale on demand, managed infrastructure, integrated tooling, pay-as-you-go cost.

Card 8 — Front: Self-hosted vs API-hosted GenAI trade-off?
Answer: Self-hosted = control + privacy + capital cost. API-hosted = speed + scale + vendor terms.
Continuous Operations (V.4 + V.6 + V.7)
Card 9 — Front: Three categories of metrics in V.4?
Answer: Business KPIs, Model performance, Operational health. Mnemonic: BMO.

Card 10 — Front: Difference between model drift and data drift?
Answer: Model drift = predictions degrade. Data drift = inputs shift. Both inevitable.

Card 11 — Front: Are model drift and data drift exceptional events?
Answer: No — INEVITABLE. Monitoring exists because drift happens.

Card 12 — Front: What's MLOps?
Answer: DevOps adapted for ML — adds model versioning, data lineage, retraining automation, drift monitoring.

Card 13 — Front: What does V.6 (transition plan) cover?
Answer: Handoff documentation + knowledge transfer + ownership change (team change, vendor change, successor project).

Card 14 — Front: What does V.7 cover?
Answer: Contingency plan for AI-system incidents (model failure, data feed failure, performance breach, trustworthy-AI incidents).
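The data drift from Cards 10–11 is typically caught by comparing recent production inputs against the training baseline. Below is a minimal sketch using the Population Stability Index (PSI); the bin count and the 0.25 alert threshold are a common rule of thumb, not a PMI-mandated metric, and all names are illustrative.

```python
import math

def psi(baseline, recent, n_bins=10):
    """Population Stability Index between two samples of one feature."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / n_bins or 1.0

    def fractions(values):
        counts = [0] * n_bins
        for v in values:
            i = min(max(int((v - lo) / width), 0), n_bins - 1)
            counts[i] += 1
        # Smooth empty bins so the log term is always defined.
        return [(c + 0.5) / (len(values) + 0.5 * n_bins) for c in counts]

    b, r = fractions(baseline), fractions(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))


baseline = [i / 100 for i in range(100)]        # training inputs, uniform on [0, 1)
shifted  = [0.5 + i / 200 for i in range(100)]  # production inputs drifted upward
drift_score = psi(baseline, shifted)
# Rule of thumb: PSI > 0.25 signals major drift worth investigating (V.4 -> V.7).
```

Identical distributions score near zero; the shifted sample above scores well past the alert threshold, which is what would trigger the V.7 contingency discussion.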
Governance (V.3)
Card 15 — Front: What does APAVBE stand for in model governance?
Answer: Access control, Provenance/auditing, Audit logs, Versioning, Bias monitoring, Extension controls.

Card 16 — Front: What feeds V.3 governance from Domain I?
Answer: All 5 Domain I tasks — privacy, transparency, bias, regulatory compliance, accountability.
Trustworthy AI in Production
Card 17 — Front: Six trustworthy-AI properties production AI must have?
Answer: Compliant, Safe, Reliable, Secure, Ethical, Privacy-respecting.

Card 18 — Front: What's malicious AI?
Answer: Intentional use of AI for criminal/unethical/dangerous purposes (cyberthreats + physical threats).

Card 19 — Front: What's an adversarial attack?
Answer: Manipulating input data to deceive ML models (e.g., turtle classified as rifle).

Card 20 — Front: Six steps when an ethical AI issue arises in production?
Answer: Detect, Contain, Audit, Notify, Remediate, Document.
Final Report (V.5)
Card 21 — Front: What goes in the V.5 final report?
Answer: Project summary, performance vs II.8 success criteria, lessons learned, recommendations, outstanding risks.

Card 22 — Front: Where do V.5 lessons learned go?
Answer: Input to the next iteration's Phase I (Business Understanding). CPMAI is iterative.
Limits of AI
Card 23 — Front: Six hard limits of AI technology?
Answer: Doesn't understand, doesn't reason causally, fails out-of-distribution (OOD), needs data, has no values, can't fully self-explain.

Card 24 — Front: What does Phase VI close-out produce?
Answer: Recommendation for the next iteration (not a hard gate).
GAME MODE 2: Scenario Showdown — What Should the PM Do?
Scenario 1: The Premature Deployment
- Data scientist says model is ready to deploy
- Deployment plan has not been created
- Team wants to push to production today
Reveal
Pause and coordinate creation of the deployment plan before deployment begins, including stakeholder sign-off (V.1). Production incidents trace back to "we didn't plan for this."

Scenario 2: The Drift Discovery
- Model has been in production 3 months
- Accuracy degraded by 8 percentage points
- Data scientist wants to retrain immediately
Reveal
Investigate root cause (data drift, model decay, scope shift), engage stakeholders, decide between retrain/rollback/rescope per the contingency plan. V.4 + V.7 intersect.Scenario 3: The Change Control Bypass
- Routine governance review reveals an updated model version was deployed last month without going through change control
- Model is performing well
Reveal
Treat as governance and accountability incident. Validate deployed version against requirements, document deviation, escalate per accountability procedures, reinforce change control. V.3 + I.5 cross-pull.Scenario 4: The Untested Contingency
- Contingency plan test reveals rollback procedure fails in 3 of 5 scenarios
- Deployment is scheduled for next week
Reveal
Halt deployment until rollback procedures are fixed and re-tested. Failed contingency test = critical preparation gap. V.7 requires tested plans.Scenario 5: The Vendor Transition
- A new vendor will take over operations from the current team
- Receiving team has skill gaps
Reveal
Pause transition; coordinate skill-gap remediation (training, knowledge transfer, hiring) before transition. V.6 — receiving team capability is a gating factor.Scenario 6: The Regulatory Inquiry
- Regulator inquires about how an AI decision was made for a specific customer
- Team has documented audit trail
Reveal
Provide the audit trail from model governance: input, model version, prediction, timestamp, decision rationale, human-in-the-loop overrides. V.3 + I.5 cross-pull. Audit trail is the prepared answer.Scenario 7: The Discriminatory Output
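The audit-trail contents listed in this reveal can be sketched as a simple record type. Field names and the JSON serialization are illustrative assumptions for this sketch, not a PMI-defined schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass(frozen=True)
class AuditRecord:
    """One replayable decision record: the regulator's prepared answer."""
    model_version: str          # exact version that produced the decision
    input_features: dict        # inputs as the model saw them
    prediction: str             # the decision served
    timestamp: str              # when the decision was made (UTC)
    decision_rationale: str     # why the model decided as it did
    human_override: Optional[str] = None  # set if a human-in-the-loop changed the outcome

record = AuditRecord(
    model_version="credit-risk-2.3.1",           # illustrative name
    input_features={"income": 52000, "tenure_months": 18},
    prediction="approve",
    timestamp=datetime.now(timezone.utc).isoformat(),
    decision_rationale="score 0.81 above approval threshold 0.70",
)

# Serialize for an append-only, access-controlled audit store
# (per V.3: secure controlled access, not "alongside model files").
audit_line = json.dumps(asdict(record), sort_keys=True)
```

Every field maps to an item in the reveal above; the point is that the record exists before the inquiry, not after.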
- Production AI system has produced a discriminatory output affecting a real customer in a regulated industry
Reveal
Execute the trustworthy-AI-incident contingency: Detect → Contain → Audit → Notify (regulators per requirements) → Remediate → Document. Cross-pull I.3 + I.4 + I.5.Scenario 8: The Closeout Question
- Model has been deployed and serving for 3 months, meeting performance targets
- PM asked whether to declare project complete
Reveal
Convene Phase VI close-out: final report and lessons learned, transition-to-operations decision, capture outstanding risks, formal stakeholder sign-off (V.5).

GAME MODE 3: Pattern Match Challenge
| # | Scenario | ECO Task |
|---|---|---|
| 1 | Coordinating creation of deployment plan | V.1 |
| 2 | Managing the actual deployment execution | V.2 |
| 3 | Overseeing model governance in production | V.3 |
| 4 | Tracking production metrics | V.4 |
| 5 | Producing final report and lessons learned | V.5 |
| 6 | Coordinating handoff to new ops team | V.6 |
| 7 | Planning response to model failure | V.7 |
| 8 | Audit trail content (input + version + prediction + ...) | V.3 (+ I.5) |
| 9 | A/B testing two model versions | V.3 |
| 10 | Detecting bias drift in production | V.4 |
GAME MODE 4: Fill-in-the-Blank Speed Round
- The 4 AI environments are Development, Big Data/Engineering, ________, Operationalization.
- V.4 metrics span ________ KPIs, Model performance, Operational health.
- Model drift = predictions degrade. Data drift = ________ shift.
- APAVBE governance covers Access, ________, Audit logs, Versioning, Bias monitoring, Extension controls.
- Six steps for ethical AI incident: Detect, Contain, Audit, ________, Remediate, Document.
- V.5 lessons learned feed next iteration's ________.
- Stream learning serves predictions AND ________ from incoming data.
- Hot path = ________ latency. Cold path = high latency.
- The 5 GenAI risks are Hallucination, IP misappropriation, ________, Prompt injection, Private data sharing.
- V.6 transition plan handoff is documented, ________, and confirmed ready by the receiving team.
Reveal answers
- Scaffolding
- Business
- inputs
- Provenance
- Notify
- Phase I
- learns
- low
- Inappropriate responses
- signed-off
GAME MODE 5: True or False Lightning Round
| # | Statement | Correct |
|---|---|---|
| 1 | Model drift and data drift are exceptional events | FALSE — inevitable |
| 2 | V.1 deployment plan is finalized at IV.6 GO | TRUE |
| 3 | Operationalization equals deployment | FALSE — operationalization is broader |
| 4 | A model can be deployed anywhere (mobile, server, cloud, edge, browser) | TRUE |
| 5 | Routine retraining triggers V.6 transition plan | FALSE — that's V.4/V.3 |
| 6 | Production validation can substitute for IV.6 evaluation | FALSE — gate is pre-deployment |
| 7 | The PM declares deployment complete after verifying all success criteria | TRUE |
| 8 | Audit trails are stored alongside model files for easy access | FALSE — secure controlled access |
| 9 | V.5 lessons learned focus only on what went wrong | FALSE — successes too |
| 10 | Contingency plans must be tested before production | TRUE |
| 11 | Auto-detection tools eliminate the need for V.7 contingency plan | FALSE — detection ≠ response |
| 12 | The PM declares deployment success based solely on runtime | FALSE — runtime + monitoring + performance + governance |
GAME MODE 6: Mnemonic Speed Recall
| Mnemonic | Expand it |
|---|---|
| HOPE-MS | How served, Operation location, Performance, Escalation, Monitoring, Stakeholder sign-off |
| DBSO | Development, Big data/engineering, Scaffolding, Operationalization (4 AI environments) |
| BMO | Business KPIs, Model performance, Operational health (V.4 metrics) |
| APAVBE | Access control, Provenance/auditing, Audit logs, Versioning, Bias monitoring, Extension controls |
| HIIPP | Hallucination, IP misappropriation, Inappropriate, Prompt injection, Private data (GenAI risks) |
| Hot vs Cold | Hot = Hurry (ms, real-time). Cold = Consider (hours, aggregation). |
| Detect-Contain-Audit-Notify-Remediate-Document | 6 steps for ethical AI incident |
| MLOps vs DevOps | DevOps = code CI/CD. MLOps = code + model + data CI/CD + drift monitoring. |
Scoring Summary
| Game Mode | Score | Max |
|---|---|---|
| Flashcards | ___/24 | 24 |
| Scenario Showdown | ___/8 | 8 |
| Pattern Match | ___/10 | 10 |
| Fill-in-the-Blank | ___/10 | 10 |
| True/False | ___/12 | 12 |
| Mnemonic Recall | ___/8 | 8 |
| TOTAL | ___/72 | 72 |