Domain V: Operationalize AI Solution — Comprehensive Study Guide

Exam weight: 17% of PMI-CPMAI exam (~20 scored questions)
Score-report framing: ❌ Below Target — PRIORITY 2 for rebuild
Maps to CPMAI methodology phase: Phase VI — Operationalization
Number of ECO tasks: 7 (V.1 through V.7)
Estimated study time: 11 hours
Note from docs/ECO_TASK_REFERENCE.md: the score report flagged Task V.6 (Manage AI solution transition plan) as having no questions on the first-attempt form. Cover it anyway — the retake form is randomized.

Overview

Domain V is the smallest of the three weak domains by weight (17%), but it covers the most distinct territory: what happens after the model passes the operationalization gate (IV.6) and goes into production. Every task in this domain begins with an oversight verb: manage, oversee, prepare. The PM is responsible for ensuring the deployment plan is built, the deployment is managed, governance is overseen, metrics are monitored, contingencies are planned, transitions are managed, and lessons are captured.

The unifying pattern: Domain V tests whether the project manager understands that an AI project doesn't end at deployment. Models drift. Data shifts. Real-world conditions diverge from training. A project that "deployed and is done" is exactly the failure mode PMI is testing for. The right answers always involve continuous monitoring, governance, contingency, and iteration.

Domain V has no formal go/no-go gate (the model→ops gate is IV.6, in Domain IV). But Domain V questions often test the readiness checklist that comes before deployment can be authorized — and the wrong-answer traps are typically "deploy and assume the team will handle it" or "the data scientist owns post-deployment performance."

Table of Contents


Module 1: Operationalization Foundations

Lessons 1-4 | What operationalization means and why it's different from app deployment.

Lesson 1: ECO Task V.1 — Manage Creation of AI Solution Deployment Plan

The PM's first job in Domain V: ensure a deployment plan exists before any model is pushed into production. Deployment planning isn't a check-the-box step — it's a documented artifact answering: how will the model serve predictions, where will it run, who will operate it, what performance is expected, what the failover and escalation path is, and what monitoring is in place from day one.

The PM does not write the deployment plan alone. The data scientist, ML engineer, platform team, and operations team all contribute. The PM coordinates the contributions, ensures the plan is documented, and ensures stakeholders sign off before deployment begins.

KEY TAKEAWAYS

💡 Memory Aid — HOPE-MS Plan

A deployment plan answers: How served (batch / real-time / microservice / stream), Operation location (on-prem / edge / cloud / hybrid), Performance expected, Escalation path on failure, Monitoring from day one, Stakeholder sign-off.
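
The sketch below shows one way a V.1 deployment-plan record could capture these answers, organized around the HOPE-MS fields; the field names, values, and thresholds are illustrative placeholders rather than a CPMAI-mandated schema.

```python
# Illustrative deployment-plan record keyed to the HOPE-MS fields.
# Field names, values, and thresholds are placeholders, not a mandated schema.
deployment_plan = {
    "how_served": "real-time microservice",                # batch / real-time / microservice / stream
    "operation_location": "cloud (managed ML platform)",   # on-prem / edge / cloud / hybrid
    "performance_expected": {"p95_latency_ms": 200, "min_accuracy": 0.90},
    "escalation_path": ["on-call ML engineer", "platform team", "project manager"],
    "monitoring_from_day_one": ["latency", "error rate", "accuracy vs. baseline", "data drift"],
    "stakeholder_signoff": {"business_owner": None, "operations_lead": None},  # completed at approval
}
```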

PM Oversight Angle


Lesson 2: The "Inference" Phase of an AI Project

ML projects have a different cycle from traditional software:

|  | Traditional App | ML Model |
| --- | --- | --- |
| Phases | Design → Build → Test → Deploy → Manage | Training → Inference |
| Output | Code that runs deterministic logic | Model that returns probabilistic predictions |
| Production lifecycle | Stable until updated | Continuously evolves with data drift |

Inference is the phase where a trained model is used in production to generate predictions on real-world inputs. Most of Domain V deals with inference — the operational lifetime of the model.

KEY TAKEAWAYS


Lesson 3: What Is Operationalization?

Operationalization is "the term AI practitioners use to describe putting machine learning models into real-world environments." Note PMI's distinction:

A model can be placed anywhere it needs to provide predictions: in a mobile app, on a server, in the cloud, on a desktop, in a web browser (via JavaScript), on an edge device, or even in a medical imaging system. The location is a deployment-plan decision, not a default.

KEY TAKEAWAYS


Lesson 4: Model Operationalization and Life Cycle Questions

PMI lists the questions a project must answer before it's "successfully operationalized." Two categories:

Operationalization Questions:
  1. How will you put the model into operation? (Batch / real-time / microservices / hybrid)
  2. Where will you operationalize? (On-prem / cloud / edge / hybrid)
  3. How will you manage versioning? (Multiple model versions in production simultaneously)
  4. What guidance for developers on model scaffolding?
  5. How will you implement GenAI in production?

Model Life Cycle Questions:
  1. Approach for life cycle management?
  2. Integration with DevOps/DevSecOps?
  3. Tools for MLOps?
  4. Methods for monitoring (usage, abuse, performance, drift)?
  5. How to manage the data life cycle in production?
  6. Approaches for governance and security?

The PM doesn't answer these technically — the PM ensures they're answered, documented, and signed off as part of V.1.

KEY TAKEAWAYS


Module 2: The Four AI Technology Environments

Lessons 5-9 | The four distinct technology environments AI requires, and why one platform can't do it all.

Lesson 5: The Four AI Environments — Overview

There is no "one platform to rule them all" for AI. Different roles and stages have different requirements. PMI defines four core environments:

  1. Model Development and Training Environment — where data scientists build and train models
  2. Big Data / Data Engineering Environment — where data is processed and pipelines run
  3. Model Scaffolding Environment — where models are integrated into apps
  4. Model Operationalization Environment — where models run in production

The PM coordinates whichever combination the project needs.

KEY TAKEAWAYS

💡 Memory Aid — DBSO (Four Environments)

Development (build the model), Big data/engineering (move and prep data), Scaffolding (integrate the model into apps), Operationalization (run the model in production). "Don't Build Stuff Outside" — build it across all four.

Lesson 6: Model Development and Training Environment

Where data scientists build, experiment, and train models. Typically: Jupyter notebooks, ML frameworks (TensorFlow, PyTorch, scikit-learn), GPU compute, experiment tracking tools (MLflow, Weights & Biases), and access to staged training data.

PM concern: ensuring the development environment has the compute, data access, and tooling the data science team needs — coordinated as part of V.1 / aligns with III.4 (workspace).

KEY TAKEAWAYS


Lesson 7: Big Data / Data Engineering Environment

Where data is gathered, cleaned, transformed, and pipelined. Typically: data warehouses, data lakes, ETL/ELT tools (Spark, Airflow, dbt), streaming platforms (Kafka, Kinesis), and data catalogs.

PM concern: ensuring data pipelines are built and operate reliably. Pipeline ownership is a deployment-plan question (V.1) and an ongoing operational concern (V.4).

KEY TAKEAWAYS


Lesson 8: Model Scaffolding Environment

The "scaffolding" environment is where the model gets integrated into the application or system that consumes it. This is the layer where developers (often without ML backgrounds) interact with the model — APIs, SDKs, client libraries, integration patterns.

PM concern: providing developer guidance on how to consume the model, version it, handle errors, and fall back gracefully. Often overlooked until production incidents reveal that consumers don't know what to do when the model returns unexpected results.

KEY TAKEAWAYS


Lesson 9: Model Operationalization Environment

The production environment where the model runs and serves predictions. Typically: model-serving infrastructure (TensorFlow Serving, TorchServe, Seldon, SageMaker, Vertex AI, custom REST APIs), monitoring stack, logging, and version routing.

PM concern: ensuring the operationalization environment matches the deployment plan's choices for serving method (batch/real-time/microservice/stream) and location (on-prem/cloud/edge/hybrid).

KEY TAKEAWAYS


Module 3: Deployment — How and Where

Lessons 10-20 | The how (serving methods) and where (locations) of model deployment.

Lesson 10: ECO Task V.2 — Manage AI Solution Deployment

Deployment is the act of moving the model from staging into production according to the V.1 plan. The PM doesn't push the model — the ML engineer or DevOps team does. The PM manages the deployment — coordinates timing, ensures the plan is followed, surfaces blockers, and confirms success criteria are met before declaring deployment complete.

A subtle but exam-critical point: deployment success is not "the model is running." Deployment success is "the model is running, monitored, performant against the success criteria, and in compliance with governance." The PM declares completion against the full criteria, not the runtime check.

KEY TAKEAWAYS

PM Oversight Angle


Lesson 11: Batch Prediction

The model runs on a schedule (e.g., nightly) against a batch of inputs and produces a batch of predictions. Used when:

Examples: monthly customer churn predictions, weekly fraud risk scoring, daily demand forecasts.
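
A minimal sketch of what a scheduled batch-scoring job can look like, assuming a scikit-learn model saved with joblib and a staged CSV extract whose columns match the model's input features; the churn use case and file names are illustrative.

```python
# Illustrative nightly batch-scoring job. Assumes a scikit-learn model saved with
# joblib and a staged CSV extract whose columns match the model's input features.
import joblib
import pandas as pd

def run_batch_scoring(model_path: str, input_path: str, output_path: str) -> None:
    model = joblib.load(model_path)                     # trained model artifact
    features = pd.read_csv(input_path)                  # batch of inputs staged by the data pipeline
    scores = model.predict_proba(features)[:, 1]        # score every row in the batch
    features.assign(churn_probability=scores).to_csv(output_path, index=False)  # publish predictions

if __name__ == "__main__":
    # A scheduler (cron, Airflow, etc.) invokes this on the cadence set in the deployment plan.
    run_batch_scoring("churn_model.joblib", "customers_latest.csv", "churn_scores.csv")
```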

KEY TAKEAWAYS


Lesson 12: Microservices for AI

The model is exposed as an API that other services call on demand. Each call returns a prediction synchronously. Used when:

Examples: a recommendation API called by the product page, a sentiment-analysis service called by support tooling.
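
A minimal sketch of a prediction microservice, assuming FastAPI and a joblib-saved scikit-learn model; the endpoint path, request fields, and version string are illustrative.

```python
# Illustrative model-serving microservice using FastAPI (an assumed framework choice).
# Endpoint path, request fields, and the version string are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("sentiment_model.joblib")        # model is loaded once at service startup

class PredictionRequest(BaseModel):
    features: list[float]                            # a single feature vector from the caller

@app.post("/predict")
def predict(request: PredictionRequest):
    score = model.predict([request.features])[0]     # one synchronous prediction per call
    return {"prediction": float(score), "model_version": "1.3.0"}  # version returned for governance/audit
```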

KEY TAKEAWAYS


Lesson 13: Real-Time Prediction

The model produces predictions in real time on continuously incoming data, typically with strict latency requirements. Used when:

Examples: fraud detection at point-of-sale, autonomous vehicle perception, real-time bidding.

KEY TAKEAWAYS


Lesson 14: Stream Learning

Stream learning continuously updates the model as new data arrives — the model both serves predictions and learns from incoming data simultaneously. Distinct from real-time prediction, which serves predictions but doesn't update the model.
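
A minimal sketch of the serve-then-learn loop, using scikit-learn's partial_fit as an assumed incremental-learning mechanism; the event handler and seed-batch helper are illustrative.

```python
# Illustrative stream-learning loop using scikit-learn's partial_fit: the model
# serves a prediction on each event and is then updated with the event's label.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])                               # all classes must be declared up front

def warm_start(seed_X: np.ndarray, seed_y: np.ndarray) -> None:
    model.partial_fit(seed_X, seed_y, classes=classes)   # initial fit on a small seed batch

def on_new_event(features: np.ndarray, label: int) -> int:
    x = features.reshape(1, -1)
    prediction = int(model.predict(x)[0])                # serve the prediction first
    model.partial_fit(x, [label])                        # then learn from the newly labeled event
    return prediction
```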

Used when:

KEY TAKEAWAYS


Lesson 15: Cold Path vs Hot Path Analytics

Not unique to AI but shows up in deployment-plan questions:

| Path | Latency | Use Case |
| --- | --- | --- |
| Hot path | Low — milliseconds to seconds | Real-time alerts, fraud detection, dashboards |
| Cold path | High — minutes to hours | Long-term aggregation, reporting, historical analysis |

A typical AI architecture uses both: hot path for immediate response, cold path for retraining and longer-horizon analytics.

💡 Memory Aid — Hot vs Cold

Hot = Hurry (milliseconds, alerts, real-time). Cold = Consider (hours, aggregation, retraining).

KEY TAKEAWAYS


Lesson 16: On-Premises Deployment

The model runs on infrastructure the organization owns and operates. Reasons:

Trade-offs: higher operational burden, harder to scale, more capital expense.

KEY TAKEAWAYS


Lesson 17: Edge Device Deployment

The model runs on a device close to the data source — a phone, an IoT sensor, a self-driving vehicle, a medical device. Reasons:

Trade-offs: limited compute and memory, harder to update, model must be smaller.

KEY TAKEAWAYS


Lesson 18: Cloud ML Deployment

The model runs in a cloud provider's managed environment — AWS SageMaker, Google Vertex AI, Azure ML, or similar. Reasons:

Trade-offs: vendor lock-in, data egress costs, regulatory considerations.

KEY TAKEAWAYS


Lesson 19: Self-Hosted vs API-Hosted GenAI Models

Specific to GenAI, two production patterns:

|  | Self-Hosted | API-Hosted |
| --- | --- | --- |
| Where | Your infrastructure | Vendor's infrastructure (OpenAI, Anthropic, etc.) |
| Control | Full — model, data, latency, privacy | Limited — vendor's terms apply |
| Cost | High up-front (compute, storage, ops) | Pay-per-token, no infra cost |
| Privacy | Data never leaves your boundary | Data goes through vendor (review terms) |
| Best for | Regulated data, custom fine-tuning, high-volume | Prototyping, low-volume, varied use cases |

The PM coordinates this decision against trustworthy-AI constraints (Domain I) and the deployment plan (V.1).

KEY TAKEAWAYS


Lesson 20: Risks of GenAI in Production

PMI lists specific GenAI production risks (Phase I covers these, but Domain V tests deployment-time mitigations):

Mitigations belong in the deployment plan (V.1) and ongoing monitoring (V.4): content filters, prompt validation, output review, escalation paths, audit logging.

💡 Memory Aid — HIIPP (5 GenAI Risks)

Hallucination, IP misappropriation, Inappropriate responses, Prompt injection, Private data sharing. (Same mnemonic as Phase I — same risks, deployment-stage tests.)

KEY TAKEAWAYS


Module 4: Continuous Operations and Life Cycle

Lessons 21-29 | Why deployment isn't done — managing ongoing operation, drift, and contingency.

Lesson 21: Failure Reason — AI Life Cycles Are Continuous

A common AI project failure: organizations treat the model as a "one-and-done" deliverable. They deploy, declare success, and move on. Six months later, the model is producing degraded predictions — and nobody is monitoring.

PMI's framing: AI project life cycles are continuous. Models drift. Data shifts. The world changes. The deployment plan must include continuous management, monitoring, and iteration — or the project is failing on a delayed timer.

KEY TAKEAWAYS


Lesson 22: AI Life Cycle Challenges — COVID-19 E-Commerce Example

PMI's classic real-life example: e-commerce demand forecasting models trained pre-2020 dramatically failed during COVID-19. Consumer behavior shifted overnight; training data no longer reflected reality.

The lesson: external shocks invalidate model assumptions. The deployment plan must include monitoring for distribution shift and a contingency plan for retraining or rollback when shift is detected.

KEY TAKEAWAYS


Lesson 23: ECO Task V.4 — Oversee AI Solution Metrics

The PM oversees metrics that track AI solution health in production. Metrics include:

Metrics must tie back to the success criteria from Domain II (Task II.8). Without that linkage, you're measuring without a baseline.

KEY TAKEAWAYS

💡 Memory Aid — BMO Metrics

Business KPIs, Model performance, Operational health. Three tiers, all monitored, all linked to Phase I/II success criteria.

PM Oversight Angle


Lesson 24: Model Life Cycle Management

Model life cycle management covers the model's entire production lifetime: deployment, monitoring, versioning, retraining, retirement. The PM coordinates the cadence, owners, and triggers for each:

KEY TAKEAWAYS


Lesson 25: Managing the Data Life Cycle (in Production)

Data drift — the production data slowly diverging from training data — is a primary cause of model decay. The PM coordinates:

This is the production extension of the data life cycle you learned in Domain III (Lesson 29).
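
A minimal sketch of a data-drift check using the Population Stability Index (PSI), comparing a production feature's distribution against its training baseline; the 0.2 alert threshold is a common rule of thumb, not a CPMAI requirement.

```python
# Illustrative data-drift check using the Population Stability Index (PSI),
# comparing a production feature's distribution against its training baseline.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)        # guard against log(0) in empty bins
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example usage (placeholder arrays): a PSI above ~0.2 would trigger the
# retraining review documented in the contingency plan (V.7).
# drift_score = psi(training_feature_values, last_week_production_values)
```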

KEY TAKEAWAYS


Lesson 26: ECO Task V.6 — Manage AI Solution Transition Plan

Transition plans cover handoffs: model handed from project team to operations team, model handed from one operations team to another (reorg, vendor change), or AI solution transitioned to a successor project. The PM owns the transition plan — what's documented, what's transferred, what knowledge needs to move with the artifact.

Asterisked task: the first-attempt form had no V.6 questions. The retake form may or may not include them. Cover it.

KEY TAKEAWAYS

PM Oversight Angle


Lesson 27: DevOps for AI

DevOps integrates development with operations — continuous integration, continuous deployment, monitoring, automation. For AI projects, DevOps practices apply but with added complexity from model artifacts and data dependencies.

DevSecOps adds security as a first-class concern. AI projects often need DevSecOps because of regulated data and trustworthy-AI requirements.

KEY TAKEAWAYS


Lesson 28: MLOps — Machine Learning Operations

MLOps is DevOps adapted for ML. It addresses:

The PM doesn't build the MLOps stack but ensures the project plan accounts for it as a capability the operations team must have.

💡 Memory Aid — MLOps vs DevOps

DevOps = code-driven CI/CD. MLOps = code + model + data CI/CD. MLOps adds model versioning, data lineage, retraining automation.
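
A minimal sketch of what recording a model version looks like, assuming MLflow as the tracking and registry tool; any MLOps stack that captures the artifact, parameters, metrics, and data lineage serves the same purpose. The experiment name, snapshot label, and toy training data are illustrative.

```python
# Illustrative MLOps-style versioning with MLflow (an assumed tool choice).
# Requires a tracking server with a model registry; names and data are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)   # toy stand-in for real training data
model = LogisticRegression().fit(X, y)

mlflow.set_experiment("churn-model")
with mlflow.start_run():
    mlflow.log_param("training_data_snapshot", "2024-06-01 extract")   # data lineage
    mlflow.log_metric("validation_accuracy", model.score(X, y))        # evaluation result for this version
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-model")  # versioned in the registry
```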

KEY TAKEAWAYS


Lesson 29: ECO Task V.7 — Oversee AI Solution Contingency Plan

A contingency plan documents what happens when things go wrong: the model breaks, predictions are unreliable, an incident occurs, a data feed fails. The PM oversees creation of the plan and ensures it's tested and ready before production goes live.

Common contingency scenarios:

KEY TAKEAWAYS

PM Oversight Angle


Module 5: Governance, Monitoring, and Trustworthy AI in Production

Lessons 30-37 | The governance, monitoring, and trustworthy-AI controls that run continuously in production.

Lesson 30: ECO Task V.3 — Oversee Model Governance

Model governance "provides controls, processes, procedures, and organizational guidance on how models are built, iterated, used, and shared." The PM oversees governance throughout production — ensuring it's defined, applied, and audited.

Model governance covers:

KEY TAKEAWAYS

💡 Memory Aid — APAVBE Governance

Access control, Provenance/auditing, Audit logs, Version control, Bias monitoring, Extension controls.

PM Oversight Angle


Lesson 31: Model Deployment with Governance Framework

Why governance is needed during deployment and beyond:

The governance framework includes: model versioning, continuous evaluation, deployment management, iteration controls, A/B testing, access control, security, extension controls.

KEY TAKEAWAYS


Lesson 32: Model Monitoring

Model monitoring verifies that the production model performs as expected. Key aspects:

Model drift and data drift are inevitable, not exceptional. Monitoring quantifies how much.

💡 Memory Aid — Model Drift vs Data Drift

Model drift = the model's predictions degrade over time. Data drift = the inputs to the model shift over time. Drift is inevitable; monitoring is the response.
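
A minimal sketch of a model-drift check against the performance baseline from the deployment plan (V.1); the baseline, margin, and alert mechanism are illustrative.

```python
# Illustrative model-drift check: recent accuracy on labeled production outcomes
# is compared against the baseline agreed in the deployment plan (V.1).
import numpy as np

BASELINE_ACCURACY = 0.90          # success criterion carried over from Phase I/II
ALERT_MARGIN = 0.05               # allowed slack before escalation

def check_model_drift(predictions: np.ndarray, actuals: np.ndarray) -> bool:
    recent_accuracy = float(np.mean(predictions == actuals))   # accuracy over the recent window
    drifted = recent_accuracy < BASELINE_ACCURACY - ALERT_MARGIN
    if drifted:
        print(f"Model drift alert: accuracy {recent_accuracy:.2f} vs baseline {BASELINE_ACCURACY:.2f}")
    return drifted
```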

KEY TAKEAWAYS


Lesson 33: Trustworthy AI Considerations in Production

Production AI must be:

This pulls heavily on Domain I (Trustworthy AI). Production is where Domain I requirements are actually enforced.

KEY TAKEAWAYS


Lesson 34: AI System Safety and Reliability

Two distinct properties:

Both require: defined acceptable boundaries, fallback plans for when boundaries are crossed, resilience to operational failures.

PMI's example issue table (autonomous vehicles, facial recognition, content moderation) shows specific failure modes and mitigations — the PM's takeaway is that these are project-level risks, not just technical concerns.

KEY TAKEAWAYS


Lesson 35: Malicious AI

Malicious AI is the intentional use of AI for criminal, unethical, dangerous, or harmful purposes. Two main categories:

Specific patterns:

The PM coordinates defense via the contingency plan (V.7), monitoring (V.4), and trustworthy-AI framework (Domain I).

KEY TAKEAWAYS


Lesson 36: Securing Machine Learning Models

Security factors for production ML models:

Security is part of governance (V.3) and trustworthy-AI (Domain I), enforced through monitoring (V.4) and contingency (V.7).

KEY TAKEAWAYS


Lesson 37: Resolving Issues of Ethical and Trustworthy AI

When ethical or trustworthy-AI issues arise in production, the PM coordinates resolution:

  1. Detect — monitoring surfaces the issue (bias, privacy breach, malicious use).
  2. Contain — pause or roll back the model if needed.
  3. Audit — document what happened, who was affected, what the cause was.
  4. Notify — stakeholders, regulators, affected users (per legal requirements).
  5. Remediate — fix the underlying issue (data, model, deployment).
  6. Document — feed lessons learned (V.5) for future projects.

The Trustworthy AI Framework (Domain I) provides the structural reference; Domain V is where execution happens.

KEY TAKEAWAYS


Module 6: Closeout — Reporting, Limits, and Next Iteration

Lessons 38-40 | Closing the project loop with lessons learned, recognizing AI limits, and preparing for the next iteration.

Lesson 38: ECO Task V.5 — Prepare Final Report / Lessons Learned

The final Domain V deliverable: a formal report capturing what was built, what was learned, what worked, what didn't, and what's recommended for future iterations.

Required content per CPMAI methodology:

Lessons learned are not "the end" — they're input to the next iteration's Phase I. CPMAI is iterative; the final report is the bridge.

KEY TAKEAWAYS

PM Oversight Angle


Lesson 39: The Limits of AI Technology

Even with successful operationalization, AI has hard limits:

Recognizing limits is a Phase VI competency: it informs lessons learned, recommendations for the next iteration, and decisions about expanding or contracting the AI program.

KEY TAKEAWAYS


Lesson 40: Phase VI Go/No-Go — Ready for Next Iteration

CPMAI is iterative. Phase VI ends not with "the project is done" but with "what's the next iteration?" The Phase VI go/no-go is a softer decision than Phase II/IV gates — it asks:

The output is a recommendation, not a hard gate. The recommendation feeds back into Phase I (Business Understanding) for the next iteration.

KEY TAKEAWAYS


Quick Reference: Model Monitoring Checklist

| Category | What to Monitor | Why |
| --- | --- | --- |
| Performance | Latency, throughput, error rate, accuracy, F1, recall | SLA + project success criteria |
| Drift — Model | Predictions degrading over time | Retraining trigger |
| Drift — Data | Input distribution shifting | Data pipeline + retraining trigger |
| Usage | Who is using the model, how, how often | Future training + governance |
| Bias | Fairness across user segments | Trustworthy AI compliance |
| Security | Adversarial inputs, unauthorized access, audit logs | Governance + safety |

Quick Reference: Trustworthy AI in Production

| Pillar | What to Verify |
| --- | --- |
| Compliant | Regulatory + legal in all operating jurisdictions |
| Safe | Doesn't endanger humans through neglect or carelessness |
| Reliable | Operates as intended throughout life cycle |
| Secure | Protected from malicious AI and adversarial attacks |
| Ethical | Addresses ethical and fairness concerns |
| Privacy-Respecting | Data privacy and security maintained |

Cross-Domain Links


Knowledge Check

Question 1

The data scientist informs the PM that the model is ready to deploy and asks when production rollout can begin. The deployment plan has not yet been created. What should the PM do?

A. Approve deployment and have the team document the plan in parallel

B. Pause and coordinate creation of the deployment plan before deployment begins, including stakeholder sign-off

C. Have the ML engineer deploy to a staging environment while the plan is written

D. Schedule the deployment and have the data scientist write the plan that day

Click for answer and rationale Correct: B

ECO Task V.1 requires a documented deployment plan with stakeholder sign-off before deployment. Production incidents trace back to "we didn't plan for this."

  • A wrong: Concurrent deploy/plan = no plan when needed.
  • C wrong: Wrong-answer trap — partial deployment without a plan still violates V.1.
  • D wrong: Rushed plan without stakeholder sign-off doesn't meet the bar.

Question 2

A model has been running in production for 3 months. The data scientist notices that prediction accuracy has degraded by 8 percentage points. What should the PM do?

A. Have the data scientist retrain the model immediately

B. Investigate root cause (data drift, model decay, scope shift), engage stakeholders, and decide between retraining, rolling back, or rescoping per the contingency plan

C. Continue monitoring for another month to see if it's a temporary issue

D. Roll back to the previous version

Click for answer and rationale Correct: B

ECO Tasks V.4 (metrics oversight) + V.7 (contingency) intersect. Performance breach triggers the contingency plan, not unilateral retraining.

  • A wrong: Retraining without root-cause analysis is a technical workaround.
  • C wrong: Wrong-answer trap — passive monitoring while production degrades.
  • D wrong: Rollback may be right but should follow contingency-plan triage, not be the first move.

Question 3

A PM is told that the operationalization environment is ready and the model can be pushed at any time. Which is the BEST sequence?

A. Deploy → monitor → declare success

B. Deploy → declare success

C. Verify deployment plan executed → confirm monitoring is live → confirm performance baseline met → confirm governance in place → declare deployment complete

D. Deploy → wait 30 days → declare success

Click for answer and rationale Correct: C

ECO Task V.2 requires deployment success against the full criteria — runtime + monitoring + performance + governance.

  • A wrong: Misses governance and performance baseline confirmation.
  • B wrong: Wrong-answer trap — runtime alone doesn't equal success.
  • D wrong: Time isn't the criterion — verified criteria are.

Question 4

True or False: Model drift and data drift are exceptional events that indicate the deployment was flawed.

Click for answer and rationale Correct: FALSE

PMI's framing: model drift and data drift are inevitable, not exceptional. Monitoring exists because drift happens. A deployment that assumes no drift is the flawed one.

Question 5

During a routine governance review, the PM discovers that an updated model version was deployed last month without going through change control. What is the BEST response?

A. Document the deployment retroactively to close the gap

B. Treat as a governance and accountability incident: validate the deployed version against requirements, document the deviation, escalate per accountability procedures, and reinforce change control

C. Roll back to the previous version immediately

D. Note it for next quarter's governance review

Click for answer and rationale Correct: B

ECO Task V.3 (governance) crosses to Domain I Task 5 (accountability/audit trail). A bypass of change control is a governance incident requiring containment + audit + escalation, not retroactive documentation.

  • A wrong: Retroactive documentation papers over the bypass.
  • C wrong: Rollback may not be needed — validate first.
  • D wrong: Wrong-answer trap — passive deferral allows the bypass pattern to repeat.

Question 6

The PM is finalizing the deployment plan and is asked whether to pre-define a contingency plan for model failure. What's the BEST PM response?

A. Defer contingency planning until after deployment and observe production for actual failure modes

B. Create the contingency plan now; test it pre-production; ensure response procedures, owners, and escalation paths are documented before going live

C. Have the operations team handle contingencies as they arise

D. Skip contingency planning since modern monitoring tools auto-detect failures

Click for answer and rationale Correct: B

ECO Task V.7 requires contingency plans to be created, tested, and ready before production.

  • A wrong: Wrong-answer trap — observing production for failures is not "planning."
  • C wrong: Operations doesn't own AI-specific contingencies (model failure, data drift, trustworthy-AI incidents).
  • D wrong: Auto-detection ≠ contingency response.

Question 7

A model has been running successfully for 6 months. A new vendor will take over operations of the model from the current team. What should the PM do?

A. Email the operations team and let them figure out the handoff

B. Coordinate a transition plan including documented artifacts, knowledge transfer, training, escalation paths, and sign-off from the receiving team

C. Have the data scientist train the new vendor's team on the model

D. Have the new vendor inherit the model as-is and start fresh on operations

Click for answer and rationale Correct: B

ECO Task V.6 — transition plans are documented, signed off, and confirmed ready by the receiving team.

  • A wrong: Wrong-answer trap — undocumented handoffs lose institutional knowledge.
  • C wrong: Knowledge transfer is part of the plan, not a substitute for it.
  • D wrong: "Start fresh" abandons documented governance, monitoring, and history.

Question 8

True or False: Lessons learned from a completed AI project are filed away for organizational record but don't influence ongoing or future projects.

Click for answer and rationale Correct: FALSE

CPMAI is iterative. ECO Task V.5 — lessons learned are input to the next iteration's Phase I. Filing them away without integration breaks the methodology's iterative loop.

Question 9

A regulator inquires about how an AI-driven decision was made for a specific customer. The PM is asked to provide documentation. What's the BEST response?

A. Have the data scientist explain the model architecture

B. Provide the audit trail from model governance: input, model version, prediction, timestamp, decision rationale, and any human-in-the-loop overrides — sourced from the documented governance program

C. Decline since model decisions are confidential

D. Re-run the prediction and provide the new result

Click for answer and rationale Correct: B

ECO Task V.3 (governance) crosses Domain I Task 5 (accountability). The audit trail is the prepared answer; this is exactly what governance documentation exists for.

  • A wrong: Architecture explanation isn't decision-specific accountability.
  • C wrong: Regulators have inquiry rights; declining is rarely correct.
  • D wrong: Wrong-answer trap — re-running doesn't reproduce the original decision context.

Question 10

The team has deployed an AI model and is monitoring its performance. After 3 months, the model is still meeting performance targets. The PM is asked whether to declare the project complete. What's the BEST response?

A. Declare complete since performance targets are being met

B. Continue monitoring indefinitely — projects don't complete

C. Convene Phase VI close-out: document final report and lessons learned, decide on transition to operations or next iteration, capture outstanding risks and dependencies, formal stakeholder sign-off

D. Have the operations team take over and let them decide when to declare complete

Click for answer and rationale Correct: C

ECO Task V.5 + Phase VI go/no-go. Project closeout has a structure: final report, lessons learned, transition plan, sign-off. Performance targets met = ready for closeout, not "automatically done."

  • A wrong: Skips the closeout deliverables.
  • B wrong: Projects do complete — operations continues, but the project closes formally.
  • D wrong: Wrong-answer trap — closeout decision is PM-coordinated with stakeholders, not delegated to operations.


Memory Aids & Mnemonics Summary

| Mnemonic | What to Remember |
| --- | --- |
| HOPE-MS (Deployment Plan) | How served, Operation location, Performance, Escalation, Monitoring, Stakeholder sign-off |
| DBSO (4 AI Environments) | Development, Big data/engineering, Scaffolding, Operationalization |
| HIIPP (5 GenAI Risks) | Hallucination, IP misappropriation, Inappropriate, Prompt injection, Private data |
| BMO Metrics | Business KPIs, Model performance, Operational health |
| APAVBE Governance | Access, Provenance, Audit logs, Versioning, Bias, Extension controls |
| Hot vs Cold Path | Hot = Hurry (ms, alerts, real-time). Cold = Consider (hours, aggregation, retraining) |
| Model Drift vs Data Drift | Model drift = predictions degrade. Data drift = inputs shift. Both inevitable, monitor both. |
| MLOps vs DevOps | DevOps = code CI/CD. MLOps = code + model + data CI/CD + drift monitoring |
| Detect-Contain-Audit-Notify-Remediate-Document | 6 steps when an ethical AI issue arises in production |
| Limits of AI | No understanding, no causal reasoning, OOD failure, data-dependent, no values, no full self-explanation |

Closing reminders for Domain V


Next: domain-IV-model-dev-eval.md (PRIORITY 3 — Domain IV has the two highest-density gates: IV.5 and IV.6)