1. GETTING ACCESS — DO THIS FIRST
- Find your unit data steward (data stewards may be embedded in your unit, assigned at battalion, brigade, or division level, or positioned within a directorate at the ASCC — ask your chain of command if you’re unsure who to contact).
- Ask them to submit an MSS account request with your name, unit, MOS, and required access level.
- MSS admin team provisions your account and assigns markings (markings = the data categories and classification levels you’re authorized to see).
- You receive notification when your account is active — typically 3–5 business days.
- Receive the MSS portal URL from your unit data steward. This is your login link.
2. WHAT IS MSS?
MSS is the mission command information system (MCIS) program of record, directed by the USAREUR-AF CG to enable rapid and accurate decision-making. It is a secure, web-based platform where your unit’s data lives and can be analyzed and acted upon. Think of it as a shared operations center for data: information from logistics, personnel, readiness, and other systems is collected, organized, and made accessible through applications your unit uses every day.
MSS is built on the Palantir Foundry platform, authorized for Army use under the Maven Smart System program.
What MSS does:
- Stores data from Army systems in a single, organized location
- Makes data visible through applications and dashboards
- Enables units to update records, report status, and track readiness
- Provides analysis tools for authorized personnel
- Supports AI-assisted analysis through AIP tools embedded in applications
- Includes Gaia — a map-based geospatial application for situational awareness and operational overlays
- Provides command-level applications (e.g., CUB/CUA in USAREUR-AF) for operational briefing and C2
What MSS is not:
- Not a replacement for official systems of record (DCPDS, GCSS-A, MEDPROS, etc.)
- Not classified by default — classification depends on data markings
- Not a public system — access is controlled and audited
- Personnel Readiness — Soldier readiness status
- Logistics — equipment availability & maintenance
- Operational Reporting — SITREPs and updates
- Planning — orders, unit positions, task org
- C2 — unit status across the AOR
3. SECURITY RESPONSIBILITIES
- Use only your own credentials. Do not share your CAC, PIN, or access tokens.
- Access only data you are authorized to view.
- Report misrouted data immediately. If you see data at a higher classification than your clearance — STOP and report it.
- Do not export data without authorization. Exports are logged.
- Log out when done. Do not leave an MSS session unattended on an unlocked workstation.
- Report security incidents immediately to your supervisor and unit security officer.
4. TASKS
- Insert your CAC into the CAC reader.
- Open an approved web browser (Chrome or Firefox recommended).
- Navigate to the MSS portal URL provided by your unit data steward.
- When prompted, select your authentication certificate (not email certificate).
- Enter your CAC PIN when prompted.
- MSS home screen loads — you are now logged in.
| Element | Location | Purpose |
|---|---|---|
| Search bar | Top center | Find datasets, applications, and projects |
| Notification bell | Top right | System alerts, workflow updates |
| User profile icon | Top right | Account settings, markings (your authorized data categories), logout |
| Compass (file explorer) | Left sidebar | Browse all MSS resources; pin resources to top of Files page for quick access |
| Home button (logo) | Top left | Return to home screen from anywhere |
| Pinned items | Home main area | Shortcuts to frequently used resources |
| Recent activity | Home main area | Recently visited datasets and apps |
| Notepad | Left sidebar / search | Draft documents with AIP-assisted editing — use custom prompts and recently-used functions to accelerate writing |
For Q1 2026 platform updates affecting SL 1 operators, see Platform Changes →
5. REPORTING PROBLEMS
| Problem Type | Who to Contact |
|---|---|
| Cannot log in | MSS Help Desk |
| Cannot access a project | Unit data steward |
| Data appears incorrect | Unit data steward (do not correct it yourself) |
| System error or crash | MSS Help Desk (provide error code and screenshot) |
| Security incident | Supervisor and unit security officer — IMMEDIATELY |
| Application not working | MSS Help Desk |
UPCOMING TRAINING — SL 1
| Dates | Location | Format | POC | Seats (Avail/Total) | Status |
|---|---|---|---|---|---|
| 14 APR 2026 | Wiesbaden, Clay Kaserne, Bldg 3312, Rm 104 | In-Person | SSG Johnson | 20 / 20 | OPEN |
| 05 MAY 2026 | Grafenwöhr, Bldg 244, Conf Rm B | In-Person | SFC Davis | 8 / 20 | 8 SEATS REMAINING |
| 18 JUN 2026 | Stuttgart, Kelley Bks, Bldg 3357 | In-Person | SSG Martinez | 20 / 20 | OPEN |
| 09 JUL 2026 | Virtual (MS Teams) | Virtual | SSG Johnson | 30 / 30 | OPEN |
Duration: 1 day (8 hours). Course runs 0800–1700. All dates subject to change — confirm with POC 5 days prior.
COMPETENCIES UPON COMPLETION
- Use Solution Designer to visually map data flows and application architecture before building
- Build project roadmaps using forward and backward planning from the commander’s requirement
- Create and organize Foundry projects via the UI
- Set up folder structure: raw / staging / curated layers
- Manage project access and permissions via UI
- Follow USAREUR-AF naming conventions and builder standards
- Ingest data using Pipeline Builder — visual, no code
- Configure connectors and file sources via UI
- Schedule pipeline runs via UI
- Understand raw / staging / curated dataset layers
- Create Object Types and set primary keys via Ontology Manager UI
- Define Interfaces and apply them to Object Types for consistent property contracts
- Configure Object Views to control how properties display to operators
- Create Link Types between objects via UI
- Understand Action types (write-back, form, webhook, conditional) and configure basic Actions
- Validate Object data in Object Explorer before building apps
- Build and publish Workshop applications with dashboards, forms, and filters
- Select and configure appropriate widgets for each use case
- Apply access controls and publish to users
- Use AIP Analyst to ask natural-language questions against Object Types and datasets
- Interpret AIP Analyst outputs — charts, tables, and narrative summaries
- Validate AIP Analyst results against source data before briefing or publishing
- Understand when AIP Analyst is sufficient vs when a Workshop app or Contour analysis is needed
- Use Global Branching to build and promote via UI
- Distinguish development from production environments
- Apply USAREUR-AF builder standards
Branching = making a test copy of your work before going live. You build in the dev branch (your sandbox), test it, then publish to production (what users see).
THE FOUNDRY DATA STACK
Data flows through layers. As an SL 2 builder, you work in the middle layers using visual tools. Never modify raw data — report data errors to your data steward instead.
WORKSHOP WIDGET SELECTION
| You Need To… | Use This Widget |
|---|---|
| Display many objects or records | Object Table |
| Show details for one selected object | Object Detail |
| Let users filter the data they see | Filter Panel / Dropdown |
| Show a chart (bar, line, pie) | Chart Widget |
| Show geographic data on a map | Map Widget |
| Let users write or update data | Button + Action or Action Form |
| Show a single key metric prominently | Metric Tile |
| Navigate between app sections | Navigation / Tab Widget |
ONTOLOGY SETUP ORDER (UI STEPS)
- Confirm curated dataset exists and is populated (Pipeline Builder pipeline passing)
- Open Ontology Manager in the left sidebar
- Create Object Type → set primary key property → map properties from curated dataset
- Create Link Types between related Object Types (if needed)
- Publish ontology branch and test in Object Explorer
- Build Workshop app only after Object Explorer confirms objects are visible
NAMING CONVENTIONS
| Object Type | Convention | Example |
|---|---|---|
| Datasets (path) | /Project/AOR/source/{raw, staging, curated} | /USAREUR/EUR/personnel/curated/soldier_status |
| Object Types | PascalCase | UnitStatus |
| Properties (API name) | camelCase | unitName |
| Properties (display name) | Title Case | Unit Name |
| Link Types | PascalCase verb form | HasEquipment |
| Workshop app names | Unit + function + version | EUR-Personnel-Readiness-v2 |
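The conventions above are regular enough to check mechanically. A minimal Python sketch, where the regexes and the `check` helper are illustrative assumptions rather than C2DAO-issued tooling:

```python
import re

# Illustrative patterns for the naming conventions table (assumed, not official).
CONVENTIONS = {
    "object_type": re.compile(r"(?:[A-Z][a-z0-9]+)+"),                 # PascalCase, e.g. UnitStatus
    "property_api": re.compile(r"[a-z][a-z0-9]*(?:[A-Z][a-z0-9]*)*"),  # camelCase, e.g. unitName
    "dataset_path": re.compile(r"/\w+/\w+/\w+/(?:raw|staging|curated)/\w+"),
    "workshop_app": re.compile(r"[A-Z]+(?:-[A-Za-z]+)+-v\d+"),         # Unit-Function-vN
}

def check(kind: str, name: str) -> bool:
    """Return True if `name` satisfies the convention for `kind`."""
    return CONVENTIONS[kind].fullmatch(name) is not None

print(check("object_type", "UnitStatus"))    # True
print(check("property_api", "unitName"))     # True
print(check("dataset_path", "/USAREUR/EUR/personnel/curated/soldier_status"))  # True
print(check("workshop_app", "EUR-Personnel-Readiness-v2"))  # True
print(check("object_type", "unit_status"))   # False (snake_case, not PascalCase)
```

A check like this could run as a review step before publishing; the table remains the authority.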
For Q1 2026 platform updates affecting SL 2 builders, see Platform Changes →
UPCOMING TRAINING — SL 2
| Dates | Location | Format | POC | Seats (Avail/Total) | Status |
|---|---|---|---|---|---|
| 21–25 APR 2026 | Wiesbaden, Clay Kaserne, Bldg 3312, Rm 104 | In-Person | SFC Chen | 7 / 15 | 7 SEATS REMAINING |
| 11–15 MAY 2026 | Grafenwöhr, Bldg 244, Conf Rm B | In-Person | SSG Williams | 3 / 15 | 3 SEATS REMAINING |
| 22–26 JUN 2026 | Stuttgart, Kelley Bks, Bldg 3357 | In-Person | SFC Chen | 15 / 15 | OPEN |
| 13–17 JUL 2026 | Virtual (MS Teams) | Virtual | SSG Williams | 20 / 20 | OPEN |
Duration: 5 days (40 hours). Course runs 0800–1700 each day. Prerequisite: SL 1 complete. All dates subject to change — confirm with POC 5 days prior.
WHO ATTENDS
- SL 2 “Go” on file — hard requirement, no exceptions
- Command-approved Project Brief — submitted to C2DAO ≥14 days before Day 1
- Supervisor signature on enrollment request
- Specific output: named dashboard, pipeline, Ontology type, or Quiver/Contour product
- Named consumer — a real person or role who will use the product
- All data sources accessible before Day 1
- No code required — Python / TypeScript / OSDK = SL 4 track, not FBC
- 5-day feasibility: functional prototype reachable within sprint
SPRINT WEEK STRUCTURE
| Day | Activity |
|---|---|
| Day 1 | In-brief: scope review, environment check, kickoff (0800–0900). Build (0900–1700). |
| Days 2–4 | Daily standup (0800, 15 min). Build (0815–1700). SME available throughout. |
| Day 5 | Product demo / peer review (0800–1000). Go/No-Go determination (1000–1200). Out-brief and handoff (1300–1500). |
GO STANDARD
| Standard | Criterion |
|---|---|
| Functional product | The product does what your Project Brief says it will do — your named consumer can use it |
| Documentation | Naming conventions followed; product description explains purpose and data sources |
| Handoff package | Complete by end of Day 5 — product description, data sources, limitations, maintenance guidance, promotion status, POC |
| Governance | Product in a branch; promotion plan documented or production promotion initiated |
ENROLLMENT
- T-21 days: Enrollment request submitted
- T-14 days: Project Brief approved by C2DAO
- T-10 days: Sprint workspace provisioned
- T-5 days: Candidate confirms access
- Day 1: Sprint begins
- 4 sprint events per fiscal year (quarterly)
- 4–16 participants per sprint
- 1 SME per ≤8 participants
- Annual schedule published each October
WHAT THIS COURSE COVERS
- What MSS does for your formation and why it matters
- How to evaluate data products — operationally, not technically
- How to guide your formation’s data posture through resourcing, prioritization, and governance
- What questions to ask about data freshness, source integrity, and product quality
- The training pipeline that qualifies your data workforce (SL 1 through SL 5)
What this course is not:
- Not a platform navigation course — you will see MSS, not operate it
- Not a data literacy primer — you already understand why data matters
- Not a substitute for SL 1 in the standard pipeline
- Not a qualification to build, modify, or administer anything on the platform
DAILY SCHEDULE
| Time | Block | Content |
|---|---|---|
| 0800–0830 | 1 | Course introduction; senior leader role in the data environment |
| 0830–0930 | 2 | Why MSS exists — strategic context, CG guidance (Ch 1) |
| 0930–1030 | 3 | The platform and what it produces — five-layer architecture, data product types, live walkthrough (Ch 2) |
| 1030–1045 | — | Break |
| 1045–1200 | 4 | How data products impact your formation — data as command function, failure patterns, Commander’s Data PIRs (Ch 3) |
| 1200–1300 | — | Lunch |
| 1300–1345 | 5 | The training pipeline — SL 1 through SL 5, FBC, resourcing decisions (Ch 4) |
| 1345–1430 | 6 | Governance — the governance chain, VAULTIS framework, red flags (Ch 5) |
| 1430–1445 | — | Break |
| 1445–1530 | 7 | How data projects work — agile overview, roadmap vs POAM (Ch 6) |
| 1530–1615 | 8 | Working with data professionals — engagement practices, terminology (Ch 7) |
| 1615–1700 | 9 | Asking the right questions — diagnostic questions for products, workforce, and AI (Ch 8–9) |
DOCUMENTS
COMPETENCIES UPON COMPLETION
- Design complex Workshop applications with conditional logic and variable passing
- Build dynamic layouts: show/hide panels based on user selections
- Design navigation flows and inter-page parameter handoff
- Publish and manage application versions
- Build multi-source join pipelines with complex aggregations (visual)
- Design scheduled and triggered pipeline runs
- Review and interpret data lineage graphs
- Escalate to SL 4 when code transforms are required
- Design Object Type and Link Type models via Ontology Manager UI
- Architecture thinking: model for downstream app requirements, not just source data
- Design Action workflows with validation and approval logic via UI
- Coordinate ontology changes with all downstream application owners
- Conduct advanced Contour analysis: complex aggregations, pivots, calculated columns, saved views
- Build advanced Quiver dashboards with multi-object analysis and linked views
- Create reusable analysis templates for unit use
- Configure existing AIP Logic workflows (triggers, inputs, outputs)
- Set up natural language query on Object Types via AIP Logic UI
- Orient to Agent Studio — awareness of the full AIP toolset beyond AIP Logic and AI FDE
- Agent building, custom tools, action logic, and production deployment are SL 4H scope
- Use AI FDE to build and configure AI-assisted development workflows within Foundry
- Design prompts, manage context windows, and tune model parameters for operational use cases
- Integrate AI FDE outputs into existing Workshop applications and Pipeline Builder workflows
- Evaluate AI-generated outputs for accuracy and operational suitability before publishing
- Production-scale agent development and custom tooling remain SL 4H scope
- Configure Kairos timeline widgets — Ontology-driven, real-time planning visualization (distinct from static Gantt charts)
- Orient to Target Workbench — targeting workflow integration; operational use is SL 4A/SL 4B scope
- Ensure Object Types are structured to support both tools
- Orient to Automations on Object Types — schedule or condition-triggered property updates, pipeline runs, and notifications
- Orient to Machinery business process modeling — multi-step workflows with roles, transitions, and object linkage
- Recognize when Automations or Machinery apply; configuration and management are SL 4 scope
- Manage branching and production promotion via UI
- Apply USAREUR-AF C2DAO governance standards and naming conventions
- Manage governance workflows with data stewards
- Ensure coalition-facing products have C2DAO coordination and NAFv4 compliance review
SL 3 vs SL 4 — WHAT YOU OWN VS WHAT YOU ESCALATE
| You Own At SL 3 (UI) | Escalate to SL 4 When… |
|---|---|
| Application design and UX | Custom Python/PySpark transforms needed |
| Ontology model design (via UI) | Functions on Objects (TypeScript) required |
| Advanced Pipeline Builder (visual) | Incremental watermark or code logic needed |
| AIP Logic configuration (existing workflows) | Agent building, custom tools, action logic, or production deployment needed |
| AI FDE prompt design and workflow integration | Custom model fine-tuning, agent orchestration, or production-scale deployment needed |
| Governance coordination | External application (OSDK) needed |
| Production promotion via UI | CI/CD pipeline automation needed |
C2DAO GOVERNANCE GATES — HARD STOPS
| Requirement | SL 3 Action | Hard Gate? |
|---|---|---|
| New shared Object Type or dataset | Coordinate with C2DAO before publishing to production | Yes |
| Coalition / MPE-facing data product | C2DAO coordination + NAFv4 compliance review | Yes — do not skip |
| Schema change to existing shared resource | Notify all downstream owners; coordinate with steward | Yes |
| New AIP Logic workflow on operational data | Authorization review before deployment | Yes |
| Access permission changes | Submit through formal request to unit data steward | Yes |
UPCOMING TRAINING — SL 3
| Dates | Location | Format | POC | Seats (Avail/Total) | Status |
|---|---|---|---|---|---|
| 28 APR – 02 MAY 2026 | Wiesbaden, Clay Kaserne, Bldg 3312, Rm 104 | In-Person | CW3 Thompson | 4 / 10 | 4 SEATS REMAINING |
| 15–19 JUN 2026 | Virtual (MS Teams) | Virtual | CW2 Rodriguez | 15 / 15 | OPEN |
| 17–21 AUG 2026 | Wiesbaden, Clay Kaserne, Bldg 3312, Rm 104 | In-Person | CW3 Thompson | 10 / 10 | OPEN |
Duration: 5 days (40 hours). Course runs 0800–1700 each day. Prerequisites: SL 1 and SL 2 complete. All dates subject to change — confirm with POC 5 days prior.
For Q1 2026 platform updates affecting SL 3 advanced builders, see Platform Changes →
TRACK SELECTION BY MOS / ROLE
| Role / MOS | Recommended Track | Advanced |
|---|---|---|
| Warfighting Function Tracks (SL 4A–F) | | |
| G2/S2 — MI units, ISR analysts | SL 4A (Intelligence) | — |
| FA officers/NCOs — Fire support | SL 4B (Fires) | — |
| Maneuver units — G3/S3 data roles | SL 4C (Movement & Maneuver) | — |
| G4/S4 — Logistics, GCSS-A | SL 4D (Sustainment) | — |
| Air defense, CBRN, force protection | SL 4E (Protection) | — |
| G6/S6 — C2 systems, networks | SL 4F (Mission Command) | — |
| Technical Specialist Tracks (SL 4G–O) | | |
| FA49 — Operations Research Analyst | SL 4G (ORSA) | SL 5G |
| G2/S2 quantitative analyst | SL 4G (ORSA) or SL 4K (KM) | SL 5G / SL 5K |
| 17A/17C — Cyber officer/NCO | SL 4L (SWE) or SL 4H (AI Eng) | SL 5L / SL 5H |
| 25D — IT specialist | SL 4L (SWE) | SL 5L |
| AI/ML engineer (GS/contractor) | SL 4H (AI Eng) or SL 4M (MLE) | SL 5H / SL 5M |
| Data scientist (GS/contractor) | SL 4G (ORSA) or SL 4M (MLE) | SL 5G / SL 5M |
| G8/S8 — Resource manager | SL 4J (PM) | SL 5J |
| Product Manager (PM / GS) | SL 4J (PM) | SL 5J |
| KMO / Knowledge Officer / 37F | SL 4K (KM) | SL 5K |
| Civil Affairs | SL 4J (PM) or SL 4K (KM) | SL 5J / SL 5K |
| UI/UX designer (GS/contractor) | SL 4N (UX Designer) | SL 5N |
| Platform engineer / DevOps / SysAdmin | SL 4O (Platform Eng) | SL 5O |
- Configure Code Workspaces (Python/R) within Foundry
- Statistical modeling: regression, classification, validation for readiness/logistics
- Time series forecasting with ARIMA/SARIMA patterns
- Monte Carlo simulation for COA comparison and risk quantification
- Linear programming for resource allocation and scheduling optimization
- Wargame/exercise data collection architecture and aggregation pipelines
- Analytical decision support products (Quiver/Contour) to commander standard
- Communicate uncertainty: confidence intervals, sensitivity analysis, briefing standards
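The Monte Carlo bullet above can be illustrated with the standard library alone. The task durations, the exponential timing model, and the 30-hour threshold below are invented placeholders, not a doctrinal COA model:

```python
import random
import statistics

def simulate_coa(task_time_means, threshold_hours, runs=10_000, seed=1):
    """Estimate P(completion within threshold) for one COA.

    task_time_means: mean duration (hours) of each sequential task; each
    draw uses an exponential distribution (a placeholder timing model).
    """
    rng = random.Random(seed)
    totals = [sum(rng.expovariate(1 / m) for m in task_time_means)
              for _ in range(runs)]
    p_success = sum(t <= threshold_hours for t in totals) / runs
    return p_success, statistics.mean(totals), statistics.stdev(totals)

# Two hypothetical COAs: fewer long tasks versus more short tasks.
for name, tasks in {"COA 1": [6, 8, 10], "COA 2": [4, 4, 5, 5, 5]}.items():
    p, mean_t, sd_t = simulate_coa(tasks, threshold_hours=30)
    print(f"{name}: P(<=30h) = {p:.2f}, mean {mean_t:.1f}h, sd {sd_t:.1f}h")
```

Reporting the spread alongside the point estimate is what the "communicate uncertainty" bullet is about: two COAs with the same mean can carry very different risk.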
- Author AIP Logic workflows: prompt engineering, chain design, output handling
- Build and configure AIP Agent Studio agents with tools, memory, and orchestration
- Implement LLM integration patterns: ontology data grounding, RAG, context construction
- Apply AI safety requirements: human-in-the-loop gates, output validation, OPSEC
- Write Python transforms that prepare data for AI consumption
- Connect AIP Logic workflows to Object Types and Actions
- Test and red-team AI outputs; evaluate quality against defined standards
- Deploy and monitor AIP Logic workflows in production
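"Context construction" from the list above reduces to ranking grounding snippets and packing them under a token budget. The sketch below is a generic pattern with a naive keyword ranker and a 4-characters-per-token heuristic, not the AIP Logic API; the snippets are invented:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (assumption, not a real tokenizer).
    return max(1, len(text) // 4)

def build_context(question: str, snippets: list[str], budget_tokens: int) -> str:
    """Rank snippets by naive keyword overlap with the question, then pack
    as many as fit under the token budget."""
    q_words = set(question.lower().split())
    ranked = sorted(snippets, key=lambda s: -len(q_words & set(s.lower().split())))
    kept, used = [], 0
    for s in ranked:
        cost = estimate_tokens(s)
        if used + cost <= budget_tokens:
            kept.append(s)
            used += cost
    return "\n".join(kept)

snippets = [
    "Unit A reports 78% equipment availability as of 01 MAY.",
    "Weather forecast: clear, winds light.",
    "Unit A maintenance backlog: 12 work orders open.",
]
print(build_context("What is Unit A equipment availability?", snippets, budget_tokens=30))
```

Real grounding in Foundry runs through the Ontology and AIP; the point here is the rank-and-budget shape that any RAG implementation shares.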
- Configure Code Workspaces for model development (GPU, packages, environment management)
- Build and evaluate ML models within the Foundry environment
- Manage model versioning, experiment tracking, and reproducibility
- Deploy models to production and integrate with Ontology Objects and Actions
- Implement MLOps patterns: monitoring, drift detection, retraining triggers
- Apply responsible AI practices and model documentation standards for operational use
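The drift-detection bullet can be made concrete with a simple mean-shift signal. Real MLOps drift checks (PSI, KS tests, per-feature monitors) are richer; the feature values here are synthetic and the z threshold is a rule of thumb:

```python
import math
import random
import statistics

def mean_shift_z(train_values, live_values):
    """Z-score of the live mean against the training distribution.
    A crude drift signal: |z| well above ~3 suggests the feature has shifted."""
    mu = statistics.mean(train_values)
    sd = statistics.stdev(train_values)
    se = sd / math.sqrt(len(live_values))   # standard error of the live mean
    return (statistics.mean(live_values) - mu) / se

rng = random.Random(42)
train = [rng.gauss(100, 10) for _ in range(5000)]    # feature at training time
stable = [rng.gauss(100, 10) for _ in range(500)]    # live data, no drift
drifted = [rng.gauss(115, 10) for _ in range(500)]   # live data, mean shifted

print(f"stable:  z = {mean_shift_z(train, stable):+.1f}")   # small magnitude
print(f"drifted: z = {mean_shift_z(train, drifted):+.1f}")  # large magnitude
```

A monitor like this, scheduled against a curated dataset, is the kind of retraining trigger the MLOps bullet refers to.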
- Stand up Agile project structures (backlog, sprint cadence, ceremonies) for data and AI builds
- Write user stories and acceptance criteria that SL 4G–O developers can execute without ambiguity
- Manage the ML/AI project lifecycle: six phases from Problem Definition through Sustainment
- Translate commander requirements into prioritized, sprint-ready backlogs
- Specify project tracking systems (sprint boards, status dashboards) for SL 4L implementation
- Build and maintain risk registers; manage dependency blockers across specialist tracks
- Conduct production readiness reviews against the Definition of Done before release
- Execute change management plans for new MSS capability rollout to operational units
- Design knowledge architecture for AAR, lessons learned, doctrine, and SOP repositories
- Build AAR capture systems using Workshop forms and Object Type pipelines
- Design and operate lessons-learned ingestion and tagging pipelines
- Use AIP Logic for knowledge summarization, search augmentation, and theme extraction
- Build full-text and semantic search systems over knowledge repositories
- Manage doctrine and SOP version control within Foundry
- Build personnel expertise mapping (skills/experience registries)
- Design knowledge transfer and unit continuity processes using MSS
- NEW: Leverage Document Intelligence (GA) for automated document parsing and extraction
- NEW: Use Object Views (GA) for curated knowledge browsing interfaces
- Authenticate and query the Foundry Ontology via OSDK (TypeScript/Python)
- Execute Actions, subscribe to Object changes, handle pagination and filtering via OSDK
- Use Foundry Platform SDK for dataset operations, file management, and branch management
- Build TypeScript Functions on Objects (computed properties, bulk query patterns)
- Write and test complex Action validators with TypeScript
- Build Slate applications integrated with the Foundry API
- Apply USAREUR-AF code review and deployment standards for MSS applications
- NEW: Use Pilot for AI-assisted code generation within Code Repositories
- NEW: Monitor OSDK client health via the Health Dialog dashboard
- NEW: Integrate external tools via Model Context Protocol (MCP) connectors
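The pagination bullet refers to the loop shape below. The client is a local stub, deliberately not the real OSDK (whose generated client and method names depend on your ontology); only the page-token pattern carries over:

```python
class StubObjectClient:
    """In-memory stand-in for a paged Object Type query endpoint (hypothetical)."""
    def __init__(self, objects, page_size=2):
        self._objects = objects
        self._page_size = page_size

    def list_page(self, page_token=None):
        start = int(page_token or 0)
        end = start + self._page_size
        return {
            "data": self._objects[start:end],
            "nextPageToken": str(end) if end < len(self._objects) else None,
        }

def iterate_all(client):
    """Exhaust a paged endpoint: request pages until no next-page token remains."""
    token = None
    while True:
        page = client.list_page(page_token=token)
        yield from page["data"]
        token = page["nextPageToken"]
        if token is None:
            break

units = [{"unitName": f"Unit-{i}"} for i in range(5)]
print([u["unitName"] for u in iterate_all(StubObjectClient(units))])
# → ['Unit-0', 'Unit-1', 'Unit-2', 'Unit-3', 'Unit-4']
```

Reading only the first page and forgetting the token loop is a classic paged-API bug, which is why the self-check below asks about REST APIs.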
- Can you write a basic TypeScript or Python function from scratch?
- Do you understand REST APIs (GET, POST, status codes, JSON payloads)?
- Can you read and write async/await patterns?
- Have you used a package manager (npm or pip)?
- Can you navigate a terminal and run CLI commands?
If you answered No to 2+ items, complete the Self-Study Addendum (included with TM-40L) before Day 1. Primary feeders: 17A/17C, FA26, civilian SWEs.
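The async/await item in the self-check refers to patterns like the minimal asyncio sketch below; the "request" is a stand-in sleep, not a real HTTP call:

```python
import asyncio
import json

async def fetch_status(unit: str) -> dict:
    """Stand-in for an async HTTP GET (sleeps instead of calling a real API)."""
    await asyncio.sleep(0.01)
    return {"unit": unit, "status": "ready"}

async def main() -> str:
    # Fan out three concurrent "requests" and gather the results in order.
    results = await asyncio.gather(*(fetch_status(u) for u in ("A", "B", "C")))
    return json.dumps(results)

print(asyncio.run(main()))
```

If reading this takes effort, work through the Self-Study Addendum before Day 1.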
- Conduct user research in operational and classified environments (interview, contextual inquiry, usability testing)
- Design information architectures for data-dense operational displays
- Build interactive prototypes from low-fidelity sketches through high-fidelity mockups
- Design Workshop layouts: widget selection, dashboard hierarchy, responsive patterns
- Apply visual design standards for tactical displays: classification marking, contrast, field conditions
- Ensure Section 508 / WCAG 2.1 AA accessibility compliance
- Architect and operate Kubernetes clusters for MSS workloads
- Implement Infrastructure as Code with GitOps workflows and continuous reconciliation
- Design CI/CD pipelines: automated build, test, scan, and deploy for MSS applications
- Harden containers using DoD Iron Bank images, vulnerability scanning, and SHA256 digest pinning
- Deploy across classification boundaries and DDIL environments (air-gapped, edge clusters)
- Manage RMF/ATO lifecycle from the infrastructure perspective, STIG compliance
- NEW: Configure and optimize Compute Modules (GA) for scalable pipeline execution
- NEW: Manage Data Connection source types and third-party integration patterns
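Digest pinning from the hardening bullet reduces to comparing content hashes: an artifact is accepted only if its SHA256 matches the digest recorded at build time. A minimal illustration, where the "image" is just bytes rather than a real container pull:

```python
import hashlib

def verify_digest(content: bytes, pinned: str) -> bool:
    """Accept an artifact only if its SHA256 hex digest matches the pinned value,
    mirroring image references of the form registry/app@sha256:<digest>."""
    return hashlib.sha256(content).hexdigest() == pinned

artifact = b"example image layer bytes"
pinned = hashlib.sha256(artifact).hexdigest()   # digest recorded at build time

print(verify_digest(artifact, pinned))           # True
print(verify_digest(b"tampered bytes", pinned))  # False
```

Pinning by digest rather than by mutable tag is what makes the supply chain reproducible: the same reference can never silently resolve to different content.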
TM designations (TM-10 through TM-50O) are internal USAREUR-AF C2DAO course identifiers, not DA-published technical manuals.
▾ Warfighting Function Tracks (SL 4A–F)
| Designation | Track | Publication |
|---|---|---|
| SL 4A | Intelligence | TM_40A_INTELLIGENCE.pdf |
| SL 4B | Fires | TM_40B_FIRES.pdf |
| SL 4C | Movement & Maneuver | TM_40C_MOVEMENT_MANEUVER.pdf |
| SL 4D | Sustainment | TM_40D_SUSTAINMENT.pdf |
| SL 4E | Protection | TM_40E_PROTECTION.pdf |
| SL 4F | Mission Command | TM_40F_MISSION_COMMAND.pdf |
SL 5 SERIES — PUBLICATIONS
SL 5G — ORSA
- Nonlinear programming, stochastic models
- Agent-based modeling (ABMS)
- Campaign wargame data architecture
SL 5H — AI Engineering
- Multi-agent orchestration & shared state
- Advanced RAG, domain-adapted LLMs
- AI red-team assessment & observability
SL 5M — ML Engineering
- Automated retraining pipelines
- Transformer fine-tuning, GNNs
- Federated retraining, adversarial robustness
SL 5J — Product Management
- PI planning, cross-team governance
- GO/SES briefing, Palantir partnership
- Technical debt at program scale
SL 5K — Knowledge Management
- Federated KM architecture, NATO integration
- STANAG 4778 conformance
- Knowledge graphs at scale
SL 5L — Software Engineering
- Scale, multi-tenancy, event streaming
- OWASP, SAST, authorized pen testing
- Architecture review, platform governance
SL 5N — UX Design
- Design systems at scale, component libraries
- DDIL-aware and cross-domain UI design
- DesignOps, ResearchOps, accessibility at enterprise scale
SL 5O — Platform Engineering
- Multi-cluster fleet management, SRE practices
- RMF/ATO automation, continuous compliance
- Cross-domain infrastructure, developer experience engineering
COMMAND STRATEGY
Signed by GEN Donahue. Establishes the command vision for data-driven operations over the next 3–5 years. Defines four strategic outcomes: Decision Advantage, Data Interoperability, Modernize Theater Data Infrastructure, and Data-Ready Workforce.
Key frameworks: VAULTIS data attributes • Cognitive Hierarchy (Data → Information → Knowledge → Shared Understanding → Decision Advantage) • Decision Dominance
Vision: Leverage data at speed and scale for decision dominance and optimized operations.
Quarterly product cycle for identifying, developing, and deploying data capabilities. Two phases: Discovery & Framing (Problem ID → Bootcamp → CADs) and Iteration & Implementation (PoC → Exercise validation).
Key events: Foundry Bootcamp • Capability Awareness Days (CADs) • CG-chaired Priority Steering Board (PSB) • Forcing Function exercises
Decision gate: Persevere, pivot, or divest at exercise validation.
End-to-end lifecycle for how capabilities move from intake through fielding, sustainment, and retirement. Five phases: Intake & Scoping → Concept → Development (SAFe/ART) → Execution & Sustainment → Evolution / EOL.
Key gates: Approval Gate (PSB with CG) between Concept and Development • Transition Gate between Execution and Evolution/EOL
Principle: New capabilities built inside SAFe; operationalized outside SAFe.
DRAFT DATA LITERACY PUBLICATIONS
Written for commanders, senior NCOs, and senior Civilians. Covers command responsibilities, evaluating data products, directing a data-capable formation, and decision frameworks.
Format: Short (~20–30pp). Principles, not procedures. Chapter/paragraph numbered.
Key topics: Commander’s data responsibilities • Evaluating analytical products • Standing up MSS capability • Data governance and stewardship
Comprehensive platform-agnostic data literacy reference. Recommended prior reading before SL 1.
Format: Long (~50–100pp). Examples, vignettes, detailed explanations, annexes.
Key topics: Data types and structures • Pipeline concepts • Data quality • Analysis fundamentals • Security and classification • Operational data integration • Governance
FULL PUBLICATIONS INDEX
| Publication | Audience | Purpose | When to Read |
|---|---|---|---|
| Command Strategy | | | |
| Data & Analytics Strategy | All personnel | CG-signed command vision; 4 strategic outcomes; VAULTIS; decision dominance | Strategic context for all training |
| Unified Data Transition Strategy | SL 3+, ODT, CTO | Quarterly product cycle; PSB; capability development process (CUI) | Before product submissions |
| Capability Lifecycle | ODT, PMs, ART | End-to-end capability lifecycle: intake → concept → dev (SAFe) → execution & sustainment → evolution/EOL | Before PSB submissions; process orientation |
| Foundation — All Personnel | | | |
| Data Literacy (SL) | O-5+ / SGM+, Sr Civilians | Principles, command responsibilities | Before directing MSS use |
| Data Literacy | All personnel | Comprehensive data literacy reference | Before SL 1 (recommended) |
| SL 1 | All personnel | Operate MSS as end user | Before first MSS access |
| SL 2 | All staff | Build pipelines, Ontology, Workshop via UI — no code | After SL 1 |
| SL 3 | Data-adjacent specialists | Design complex apps; governance; C2DAO standards | After SL 1 + SL 2 |
| SL 4 — Warfighting Function Tracks (by WFF assignment) | | | |
| SL 4A | G2/S2, MI, ISR | Intelligence WFF MSS integration | After SL 3 |
| SL 4B | FA, fire support | Fires WFF MSS integration | After SL 3 |
| SL 4C | Maneuver, G3/S3 | Movement & Maneuver WFF MSS integration | After SL 3 |
| SL 4D | G4/S4, logistics | Sustainment WFF MSS integration | After SL 3 |
| SL 4E | Air defense, CBRN, force protection | Protection WFF MSS integration | After SL 3 |
| SL 4F | G6/S6, C2, networks | Mission Command WFF MSS integration | After SL 3 |
| SL 4 — Technical Specialist Tracks (by role/MOS) | | | |
| SL 4G | ORSA / FA49 | Statistical modeling, simulation, wargame analytics | After SL 3 |
| SL 4H | AI Engineers | AIP Logic authoring, Agent Studio, LLM integration | After SL 3 |
| SL 4M | ML Engineers | Code Workspaces, model training, MLOps | After SL 3 |
| SL 4J | PMs / G8 | PM dashboards, milestone tracking, portfolio analysis | After SL 3 |
| SL 4K | KMs / KMOs | Knowledge repositories, AIP summarization, lessons learned | After SL 3 |
| SL 4L | SWEs | OSDK, full-stack Foundry apps, TypeScript Functions | After SL 3 |
| SL 4N | UI/UX Designers | Soldier Centered Design, Workshop & Slate UI, accessibility | After SL 3 |
| SL 4O | Platform Engineers | Kubernetes, CI/CD, DevSecOps, Infrastructure as Code | After SL 3 |
| SL 5 — Advanced Technical Tracks (by role/MOS) | | | |
| SL 5G–O | Senior developers (all tracks) | Advanced versions of each SL 4 specialist track | After SL 4 (by track) |
| CDA Reference — SL 3 and Specialist Tracks (SL 4G–O) | | | |
| EA Series (00–05) | SL 3+, SL 4K, SL 4L | Enterprise Architecture — foundation, schools of thought, artifacts, governance, military application | With or after SL 3 |
| CDA Doctrine Overview | SL 4G–O | Doctrine-driven development; JRTC lessons; Foundry Ontology blueprint | At start of SL 4G–O |
| Identity vs. Classification | SL 3, SL 4K, SL 4L | Identity resolution and classification governance at scale | With SL 3 |
| Enterprise Data Compass | SL 4J, SL 4K | Authoritative data architecture, ontology, and semantic governance standard | With SL 4J/K |
| CDA Slide Library | All tracks (prereq reading) | 29 decks — Intro To Data (SL 1 prereq), Data 101 (SL 2 prereq), Data 201 (SL 3 / SL 4G–O prereq) | Before each TM level |
CDA REFERENCE MATERIAL
Six-module reference series covering EA foundations, schools of thought, artifacts and views, governance, and military application. Supports SL 3, SL 4K, and SL 4L.
Key topics: EA vs DA • TOGAF, Zachman, DODAF frameworks • Capability mapping • NAF/ArchiMate • Army EA governance
Doctrine-aligned data product design using JRTC lessons learned. Covers doctrine-first Ontology design, the Three-Generation Dilemma, and AVT25 assessment case study.
Key topics: MDMP data support • COA analysis modeling • Doctrinal object types • AVT25 tools case study
XVIII ABC’s corps-level ODT pilot. Their organizational journey, manning structure (PM + UX + SWE + DE + DS), problem-solution development process, Program Increment cycles, and BDA visualization case study (prototype → MVP → POR in 9 months). Military Review, Feb 2026.
Key topics: ODT organization • TIO governance • ASWF methodology • Exercise integration • Echeloned ODT employment • Decision optimization
Required reading: SL 4J, SL 4F • Recommended: All SL 4 tracks
How five tools in the same enclave multiplied work exponentially by failing to share doctrinal primitives. The case for doctrine-first shared data architecture.
Key topics: Shared primitives • DBO architecture • Exponential work multiplication • Time cost analysis
One officer’s thought piece proposing terminology for decision optimization at echelon. Introduces useful shorthand: operationalized data, Automated Fighting Products (AFP), and Decision Optimization Teams. Names the Maven Smart System as an ASCC-level COP platform. Military Review, Jan–Feb 2025.
Key topics: Operationalized data • AFP evolution • DOT at echelon • FA 26B/49/57 workforce • Training pipeline
Supplementary reading: SL 4F, SL 4G, SL 4J • Recommended: All SL 4 tracks, Senior Leaders
EXTERNAL DOCTRINE & INSTITUTIONAL SOURCES
MCCoE’s conceptual framework for integrating data-centric capabilities into mission command. Defines decision optimization at echelon and the role of Operational Data Teams in the command post.
Relevance: SL 4F (Mission Command), SL 4J (Product Manager), EXEC (Senior Leader)
CTC observer trends on data-centric operations, common gaps in unit-level data readiness, and recommendations for training programs. Published Feb 2025.
Relevance: SL 1 (all personnel), SL 4A–F (WFF tracks), instructor development
TRADOC’s institutional approach to data literacy across the force. Defines competency levels, assessment criteria, and integration with PME. Informs the MSS SL 1/SL 2 foundation sequence.
Relevance: Data Literacy publications, SL 1, SL 2, instructor certification (T3-I)
Public announcement of Maven C2 integration into training and education at the Combined Arms Center. Establishes institutional backing for MSS-based training programs. Published Feb 2026.
Relevance: All tracks — institutional context for the MSS training program
CORE DATA LITERACY CONCEPTS
TRAINING PATH
CONTACT ROUTING
| Issue | Route To | Priority |
|---|---|---|
| Cannot log in / CAC issues | MSS Help Desk | Normal |
| No access to a project or dataset | Unit data steward | Normal |
| Data appears incorrect | Unit data steward (do not self-correct) | Normal |
| System error, crash, or outage | MSS Help Desk + screenshot + error code | Normal |
| Application broken or not loading | MSS Help Desk | Normal |
| Building question / how-to | Unit data lead or USAREUR-AF data team | Normal |
| Governance / C2DAO question | USAREUR-AF C2DAO | Normal |
| Security incident | Supervisor + unit security officer | IMMEDIATE |
COMMON DAILY TASKS
| Task | How |
|---|---|
| Find a record | Search bar or Filter Panel |
| Filter the view | Select values in the Filter Panel |
| Export data | Export / Download button → CSV or Excel |
| Submit or update a record | Click record → Action button → fill in → Submit |
WHEN IT BREAKS
SECURITY — DO NOT
- Do not export data to a personal device or unapproved storage
- Do not share your MSS credentials with anyone. The portal URL itself is not sensitive; screenshots must follow the classification rules in this list
- Do not enter classified information into MSS unless your instance is approved for that classification level
- Do not screenshot or share MSS screens containing data above your network’s approved classification
- Do not use MSS on public or unsecured Wi-Fi
- If you see data you should not have access to — stop and report to your data steward immediately
BEFORE CALLING FOR HELP — COLLECT THIS INFORMATION
- Your username and unit
- Name of the application, dataset, or pipeline you were using
- Exact error message (screenshot preferred)
- Time the error occurred (local or Zulu — state which)
- Steps that led to the error in order
- Browser and workstation you are using
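When reporting the error time, Zulu (UTC) removes time-zone ambiguity between you and the help desk. A minimal sketch of producing an unambiguous, explicitly labeled Zulu date-time group (Python here purely for illustration; the format follows the standard DDHHMM"Z" MON YY convention):

```python
from datetime import datetime, timezone

def error_timestamp() -> str:
    """Return the current time as a Zulu (UTC) date-time group,
    e.g. 141305Z MAR 26, explicitly labeled with the Z suffix."""
    now = datetime.now(timezone.utc)
    return now.strftime("%d%H%MZ %b %y").upper()

# A fixed example time renders deterministically:
fixed = datetime(2026, 3, 14, 13, 5, tzinfo=timezone.utc)
print(fixed.strftime("%d%H%MZ %b %y").upper())  # 141305Z MAR 26
```

Stating "Z" (or "local") in the report is the point; the exact format matters less than the label.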
PREREQUISITES BEFORE FIRST LOGIN
- Annual Cyber Awareness Training — required for all DoD personnel; must be current
- MSS User Onboarding Brief — provided by unit data steward
- Account request approved — submit at mss.data.mil or through your unit data steward; provisioning typically takes 3–5 business days. If access is not active after 5 business days, contact your data steward directly.
IMPORTANT: If you work on multiple enclaves (NIPR, SIPR, MPE, etc.), you must complete account setup and first login on each enclave separately. Your account on one enclave does not carry over to another.
USAREUR-AF DATA TEAM
USAREUR-AF Operational Data Team
Army AI/Data Accelerator (C2DAO)
Reference Documents
Technical Manuals — Foundation (All Staff)
TM-40 Warfighting Function Tracks (6 — click to expand)
TM-40 Technical Specialist Tracks (8 — click to expand)
- Regression, classification, forecasting
- Monte Carlo COA analysis
- Optimization and sensitivity analysis
- AIP Logic workflow design
- Agent configuration
- LLM integration patterns
- NEW: AIP Document Intelligence (GA) — chunking & embedding
- NEW: AI FDE (GA Mar 2026)
- Feature engineering, experiment tracking
- Batch inference, model versioning
- Drift detection, retraining pipelines
- NEW: Model Studio (GA Feb 2026) — no-code model training
- Scrum / Kanban for data projects
- ML/AI project lifecycle
- Risk register, release planning
- Knowledge ontology design
- Lessons-learned intake pipeline
- SOP review workflows
- NEW: Document Intelligence, Object Views
- OSDK & Platform SDK
- Functions on Objects, Actions
- CI/CD, security, Slate
- NEW: Pilot, OSDK Health, MCP
- Workshop UI patterns & design systems
- User research in military contexts
- Accessibility & DDIL-aware design
- Foundry infrastructure management
- SRE practices & observability
- RMF/ATO compliance automation
- NEW: Compute Modules, Data Connection
TM-50 Advanced Technical Tracks (8 — click to expand)
Train the Trainer — T3 (2 courses + SOPs — click to expand)
Concepts Guides (23 — click to expand)
Practical Exercises (13 — click to expand)
Pre-Assessment Tests (27 — click to expand)
Post-Assessment Tests (26 — click to expand)
Course Syllabi (26 — click to expand)
Self-Study Guides (17 — click to expand)
Lesson Plans (5 — click to expand)
Administrative & Institutional (12 — click to expand)
Enterprise Architecture Series (6 — click to expand)
Architecture & Design References (28 — click to expand)
CDA — Common Data Architecture (15)
GDAP — Governance Data Access Platform (5)
MIM — NATO MIP Information Model (7)
Ontology Design (1)
External Doctrinal & Strategic References
ARMY DOCTRINE & REGULATION
| Publication | Title | Type | Tracks |
|---|---|---|---|
| ADP 3-0 | Operations | Doctrine | WFF (A–F) |
| ADP 3-19 | Fires | Doctrine | SL 4B |
| ADP 3-37 | Protection (Jul 2019) | Doctrine | SL 4E |
| ADP 3-90 | Offense and Defense | Doctrine | SL 4C |
| ADP 5-0 | The Operations Process | Doctrine | SL 4F, SL 4G |
| ADP 6-0 | Mission Command (Jul 2019) | Doctrine | SL 4F |
| ADP 7-0 | Training | Doctrine | Training Mgmt |
| AR 25-1 | Army Information Technology (Jul 2019) | Regulation | All |
| AR 25-2 | Army Cybersecurity (Apr 2019) | Regulation | SL 4H, SL 4M, SL 4L |
| AR 25-30 | Army Publishing Program | Regulation | SL 5H |
| AR 25-400-2 | Army Records Management | Regulation | SL 4K |
| AR 5-11 | Management of Army Models and Simulations | Regulation | SL 4G |
| AR 71-9 | Warfighting Capabilities Determination | Regulation | SL 4G |
| AR 350-1 | Army Training and Leader Development | Regulation | Training Mgmt |
| AR 525-2 | Force Protection | Regulation | SL 4E |
| AR 530-1 | Operations Security | Regulation | SL 4E |
| FM 2-0 | Intelligence (Oct 2023) | Doctrine | SL 4A |
| FM 3-0 | Operations (Mar 2025) | Doctrine | WFF (A–F) |
| FM 3-01 | U.S. Army Air and Missile Defense Operations | Doctrine | SL 4B |
| FM 3-09 | Fire Support and Field Artillery Operations (Aug 2024) | Doctrine | SL 4B, SL 4C |
| FM 3-12 | Cyberspace and EW Operations | Doctrine | SL 4E, SL 4H, SL 4M, SL 4L |
| FM 3-27 | Army Global Ballistic Missile Defense | Doctrine | SL 4B |
| FM 3-55 | Information Collection | Doctrine | SL 4A |
| FM 3-60 | Targeting (Aug 2023) | Doctrine | SL 4B |
| FM 3-81 | Maneuver Enhancement Brigade | Doctrine | SL 4C |
| FM 3-90 | Offense and Defense (May 2023) | Doctrine | SL 4C |
| FM 4-0 | Sustainment (Aug 2024) | Doctrine | SL 4D |
| FM 1-0 | Human Resources Support | Doctrine | SL 4D |
| FM 5-0 | Planning and Orders Production (Nov 2024) | Doctrine | SL 4F, SL 4C |
| FM 6-0 | Commander and Staff Organization and Operations (May 2022) | Doctrine | SL 4F, SL 4C |
| FM 7-0 | Training (Jun 2021) | Doctrine | Training Mgmt |
| ATP 2-01 | Collection Management (May 2023) | Doctrine | SL 4A |
| ATP 2-33.4 | Intelligence Analysis | Doctrine | SL 4A |
| ATP 2-22.9-1 | PAI/OSINT (Oct 2023) | Doctrine | SL 4A |
| ATP 3-01.81 | Counter-UAS | Doctrine | SL 4B |
| ATP 3-09.42 | Fire Support for M&M | Doctrine | SL 4B |
| ATP 3-13.3 | Army Operations Security | Doctrine | SL 4E |
| ATP 3-90.4 | Combined Arms Mobility | Doctrine | SL 4C |
| ATP 5-0.1 | Army Design Methodology | Doctrine | SL 4F |
| ATP 5-0.3 | Multi-Service Tactics for Ops Assessment | Doctrine | SL 4G |
| ATP 6-01.1 | Techniques for Effective Knowledge Management | Doctrine | SL 4K, SL 5K |
| TC 6-0.2 | Battle Staff Operations | Training Circular | SL 4F |
| DA PAM 5-11 | Verification, Validation & Accreditation | Pamphlet | SL 4G |
| DA PAM 25-1-1 | IT Implementation Instructions | Pamphlet | SL 4K, SL 4L |
| DA PAM 25-2-5 | Cybersecurity Technical Reference | Pamphlet | SL 4H, SL 4M, SL 4L |
| DA PAM 25-40 | Army Publishing Program Procedures | Pamphlet | Standards |
| DA PAM 25-403 | Army Records Information Management | Pamphlet | SL 4K |
| DA PAM 600-3 | Officer Professional Development | Pamphlet | SL 4G |
DoD DIRECTIVES & INSTRUCTIONS
| Publication | Title | Type | Tracks |
|---|---|---|---|
| DoDD 3000.09 | Autonomy in Weapon Systems (Jan 2023) | Directive | WFF (A–F) |
| DoDI 5000.87 | Software Acquisition Pathway (Oct 2020) | Instruction | SL 4L, SL 5L, SL 4J, SL 5J |
| Army Directive 2024-02 | Agile Software Dev & Acquisition (Dec 2024) | Directive | SL 4L, SL 5L, SL 4J, SL 5J |
| Army Directive 2024-03 | Army Digital Engineering | Directive | SL 4H, SL 4M, SL 4L |
TRADOC PUBLICATIONS
Published at adminpubs.tradoc.army.mil, not armypubs.army.mil
| Publication | Title | Type | Tracks |
|---|---|---|---|
| TR 350-70 | Army Learning Policy and Systems | Regulation | Training Mgmt |
| TP 350-70-3 | Faculty and Staff Development Program | Pamphlet | Training Mgmt |
| TP 350-70-7 | Army Educational Processes | Pamphlet | Training Mgmt |
| TP 350-70-14 | Training Development in Institutional Domain | Pamphlet | Training Mgmt |
NATO STANDARDS & AGREEMENTS
| Publication | Title | Type | Tracks |
|---|---|---|---|
| ADatP-34 / NISP | C3 Interoperability Standards and Profiles | Standard | SL 4K, SL 4L |
| STANAG 5636 / NCMS | Core Metadata Specification | STANAG | SL 4K, SL 5K |
| STANAG 5643 (proposed) | MIM Governance Standard | STANAG | SL 4K, SL 4L, SL 5K, SL 5L |
| ADatP-5644 | Web Service Messaging Profile (WSMP) | Standard | SL 4L, SL 5L |
| ADatP-36 | Friendly Force Information (FFI) | Standard | SL 4A, SL 4C |
| STANAG 5527 | FFT Systems Interoperability | STANAG | SL 4A |
DoD & ARMY STRATEGIC GUIDANCE (not doctrine)
| Document | Authority | Date | Tracks |
|---|---|---|---|
| DoD Data Strategy | OSD | Oct 2020 | All |
| DoD Data, Analytics & AI Adoption Strategy | CDAO | Nov 2023 | All |
| DoD Responsible AI Strategy | CDAO | Jun 2024 | SL 4H/M, SL 5H/M |
| DoD Zero Trust Reference Architecture v2.0 | DISA/NSA | Jul 2022 | SL 3 |
| DoD AI Cybersecurity Risk Mgmt Guide | DoD CIO | 2024 | SL 4H/M, SL 5H/M |
| DoD Software Modernization Strategy | OSD CIO | Feb 2022 | SL 4L, SL 5L |
| JADC2 Strategy Summary | Joint Staff | Mar 2022 | WFF (A–F), SL 4G |
| Joint Concept for Operating in the Information Environment (JCOIE) | Joint Staff J-7 | Current | SL 4F |
| Army Data Plan | Army CIO | Oct 2022 | All |
| Army Cloud Plan | Army CIO | Oct 2022 | SL 1, SL 2, SL 3 |
| UDRA v1.1 | DASA(DES) | Feb 2025 | SL 3, Specialist (G–O) |
| Army CIO Data Stewardship Memo | Army CIO | Apr 2024 | SL 1, SL 2, SL 3, SL 4K |
NATO STRATEGIC GUIDANCE (not doctrine)
| Document | Date | Tracks |
|---|---|---|
| NATO Data Strategy for the Alliance | Feb 2025 | SL 3, SL 4K, SL 5K |
| NATO Data Centric Reference Architecture v2 | 2025 | SL 3 |
| NATO Data Quality Framework for the Alliance | Aug 2025 | SL 3 |
| NATO Digital Transformation Implementation Strategy | Oct 2024 | WFF (A–F) |
| NATO Warfighting Capstone Concept | 2021 | SL 4F |
Professional Reading & Lessons Learned (65+ articles)
MILITARY REVIEW — Army University Press (14)
| Title | Date | Tracks |
|---|---|---|
| Data-Centric at the Division: 3ID’s One-Year Journey to Transform and Modernize | Jan 2025 | All |
| Modernizing Military Decision-Making: Integrating AI into Army Planning | Aug 2025 | SL 4F, SL 4H, SL 4G |
| The Military Needs Frontier Models | Aug 2025 | SL 4H, SL 4M, SL 4L |
| Exploring AI-Enhanced Cyber and Information Operations Integration | Mar-Apr 2025 | SL 4E, SL 4A, SL 4H |
| Authorities and the Multidomain Task Force | Mar-Apr 2025 | SL 4A, SL 4B, SL 4F |
| Taking a Data-Centric Approach to Unit Readiness | 2024 | All, esp. SL 4G |
| Attaining Readiness by Developing a Data-Centric Culture | 2024 | All, esp. SL 4J |
| Sustaining Our People Advantage in Data-Centric Warfare | 2024 | All |
| AI as a Combat Multiplier: Using AI to Unburden Army Staffs | Sep 2024 | SL 4H, SL 4F, SL 4G |
| Transforming the Multidomain Battlefield with AI | 2024 | SL 4H, SL 4M, SL 4A |
| The Coming Military AI Revolution | May-Jun 2024 | SL 4H, SL 4M |
| AI in Modern Warfare: Strategic Innovation and Emerging Risks | Sep-Oct 2024 | All |
| Advancing Counter-UAS Mission Command Systems | May-Jun 2024 | SL 4E, SL 4F |
| The True Test of Mission Command | Sep-Oct 2024 | SL 4F |
PARAMETERS — Army War College Quarterly (3)
| Title | Date | Tracks |
|---|---|---|
| Responsibly Pursuing Generative AI for the War Fighter | Winter 2025-26 | SL 4H, SL 4M, All |
| Integrating AI and ML into COP and COA Development | 2024-25 | SL 4G, SL 4H, SL 4F |
| Trusting AI: Integrating AI into the Army’s Professional Ethic | 2024 | All |
MIPB — Military Intelligence Professional Bulletin (6)
| Title | Date | Tracks |
|---|---|---|
| FRIDAY: Unlocking OSINT for a Data-Driven Army | 2025 | SL 4A, SL 4H, SL 4L |
| Intelligence Support to Information Advantage | Jan-Jun 2026 | SL 4A, SL 4K |
| Using AI to Create Digital Enemy Commanders | Jul-Dec 2025 | SL 4H, SL 4M, SL 4A |
| The Market Knows Best: Prediction Markets for National Security | Jul-Dec 2025 | SL 4A, SL 4G |
| Army Transitioning to Support Deep Sensing in MDO | Jul-Dec 2025 | SL 4A, SL 4B, SL 4C |
| Open-Source Intelligence Support to Targeting | 2024 | SL 4A, SL 4B |
FIELD ARTILLERY BULLETIN — Line of Departure (6)
| Title | Date | Tracks |
|---|---|---|
| The New Digital Kill Chain | 2025 | SL 4B, SL 4L |
| AI’s New Frontier in War Planning | 2025 | SL 4B, SL 4H |
| Project Convergence: Revolutionizing Targeting in LSCO | 2025 | SL 4B, SL 4A, SL 4G |
| Enhancing Tactical Level Targeting With AI | 2024 | SL 4B, SL 4H, SL 4M |
| The Future of Strategic Fires Target Acquisition | 2024 | SL 4B, SL 4A |
| The Combat Aviation Brigade and Digital Call for Fire | 2024 | SL 4B, SL 4C |
NCO JOURNAL — Army University Press (3)
| Title | Date | Tracks |
|---|---|---|
| Knowledge Management and The Old Guard | Aug 2025 | SL 4K, SL 4F |
| From Data to Wisdom | Feb 2025 | All |
| Artificial Intelligence and Future Warfare | Sep 2025 | SL 4H, SL 4M, All |
ARMY SUSTAINMENT — Army Logistics University (4)
| Title | Date | Tracks |
|---|---|---|
| Army Sustainment Enterprise’s Delayed Approach to Data Modernization | Winter 2025 | SL 4D, SL 4K |
| Predictive Logistics: Reimagining Sustainment on the 2040 Battlefield | Winter 2025 | SL 4D, SL 4H, SL 4M, SL 4G |
| Enabling Logistics in Contested Environments | Spring 2025 | SL 4D, SL 4G |
| Advancing to Data-Driven Logistics Operations | 2024 | SL 4D, SL 4K |
ARMY AL&T MAGAZINE (7)
| Title | Date | Tracks |
|---|---|---|
| Accelerating the Army’s AI Strategy | 2024-25 | SL 4H, SL 4J, All |
| Commoditizing AI/ML Models | 2024-25 | SL 4H, SL 4M, SL 4L |
| The Army’s Data (Ad)Vantage | 2024 | All |
| The Software Advantage | 2024-25 | SL 4L, SL 4J |
| Army Intelligence | 2025 | SL 4A, SL 4H |
| Emerging Technology and Modernizing the Army | 2024-25 | All |
| Reality Check (AI/ML implementation) | 2024-25 | SL 4H, SL 4M, SL 4J |
ARMY COMMUNICATOR — Cyber CoE (3)
| Title | Date | Tracks |
|---|---|---|
| Leading in Data Centricity, C2 Fix Best Practices | Spring 2025 | SL 4E, SL 4F, SL 4L |
| Army Communicator Spring 2024 | Spring 2024 | SL 4E, SL 4L |
| Army Communicator January 2024 — ITN Suite | Jan 2024 | SL 4E, SL 4C |
FROM THE GREEN NOTEBOOK (3)
| Title | Date | Tracks |
|---|---|---|
| How To Be a Data Literate Leader — And Why It Matters | Mar 2024 | All, SL 4K |
| Harnessing the Power of Knowledge Management | Apr 2024 | SL 4K, SL 4F |
| Understanding Weapons of Math Destruction | Jul 2024 | SL 4G, SL 4H, SL 4M |
INFANTRY MAGAZINE — Maneuver CoE (1)
| Title | Date | Tracks |
|---|---|---|
| Moneyball for Gunnery — 1/4 ID BCT data analytics | 2024 | SL 4C, SL 4G |
SMALL WARS JOURNAL (4)
| Title | Date | Tracks |
|---|---|---|
| Data as Firepower: Data Superiority as a Warfighting Concept | Aug 2025 | All |
| Elevating Information: Why the Army Should Establish Information as a Core WfF | Apr 2025 | SL 4A, SL 4F, SL 4K |
| Accelerating Decision-Making: Integrating AI into the Modern Wargame | Feb 2026 | SL 4G, SL 4H, SL 4F |
| AI-Enabled Wargaming at CGSC | Jan 2026 | SL 4G, SL 4H, SL 4F |
WAR ON THE ROCKS (1)
| Title | Date | Tracks |
|---|---|---|
| The U.S. Army, AI, and Mission Command | Mar 2025 | SL 4F, SL 4H |
MODERN WAR INSTITUTE — West Point (1)
| Title | Date | Tracks |
|---|---|---|
| Leadership, Lethality, and Data Literacy | 2024 | All |
CALL — Center for Army Lessons Learned (1)
| Title | Date | Tracks |
|---|---|---|
| FY24 MCTP Key Observations | Feb 2025 | All |
CDA Slide Decks — Conceptual Prereqs
SL 1 — Orientation (2 decks — click to expand)
SL 2 — Intro To Data (10 decks — click to expand)
SL 2 / SL 3 — Data 101 (4 decks — click to expand)
SL 3 — Advanced (8 decks — click to expand)
SL 3 / SL 4 — Data 201 (6 decks — click to expand)
SL 4 — Specialist Track Decks (4 decks — click to expand)
SL 5 — Advanced Track Decks (6 decks — click to expand)
Program & Briefing Decks (3 decks — click to expand)
http://<host>:<port>. Contact your ODT representative for the current host address.
Training & Evaluation Outline (T&EO)
Evaluated tasks by skill level per AR 350-1 and TR 350-70. Any step marked CRITICAL = automatic NO-GO if failed.
SL 1 — Maven User (10 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| SL1-01 | Log In and Navigate | Authenticate via CAC/PIV and navigate to designated app — within 5 min | 6 | Do not access production |
| SL1-02 | Filter Table / Identify Missing Submissions | Apply date filter and identify all units with missing submissions — within 5 min | 4 | |
| SL1-03 | Execute an Authorized Action | Locate record, execute Action, verify update — within 3 min | 6 | Do not execute on wrong record |
| SL1-04 | Export Filtered Table to CSV | Export filtered table; confirm row count; label with classification | 5 | File must have classification label |
| SL1-05 | Build a Basic Contour Chart | Build bar chart with correct axes and filter — within 10 min | 5 | |
| SL1-06 | Identify Classification / Export Procedure | Locate marking in Properties; state authorized distribution and export | 5 | Correct distribution + destination |
| SL1-07 | Explore an Object Type in Quiver | Navigate to Object Type, filter, export — within 5 min | 5 | |
| SL1-08 | Submit a Query to an AIP Interface | Submit query; assess output; state verification requirement | 5 | AIP outputs require human verification |
| SL1-09 | Troubleshoot Common Access Issues | Diagnose and resolve 2 of 2 pre-staged failures — within 5 min | 4 | |
| SL1-10 | Request Access to a Missing Resource | Identify access gap; submit formatted request to unit MSS admin | 3 |
▸ View full GO/NO-GO performance measures — SL 1
SL1-01: Log In and Navigate
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Navigates to Training Environment URL (not production) | Correct URL | Opens production |
| 2 | Selects correct certificate (PIV Authentication) | Correct cert | Wrong cert |
| 3 | Enters PIV PIN | Correct | Fails |
| 4 | Confirms Training Environment displayed | Confirmed | Production |
| 5 | Navigates to designated Workshop application | Open within 5 min | Exceeds 5 min |
| 6 | [CRITICAL] Does not access production | Training only | Navigates to production |
SL1-02: Filter Table / Identify Missing Submissions
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Locates filter control | Found | Cannot locate |
| 2 | Applies “last 7 days” filter | Applied; rows reduce | Incorrect |
| 3 | Identifies submission count | Correct | Incorrect |
| 4 | Identifies missing unit(s) | All named | Misses a unit |
SL1-03: Execute an Authorized Action
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Locates target record | Found | Cannot locate |
| 2 | Activates Action button | Activated | Grayed; no diagnosis |
| 3 | Completes parameter form | Correct | Incorrect |
| 4 | Confirms execution | Executes | Dismisses |
| 5 | Verifies status updated | Visible | Not verified |
| 6 | [CRITICAL] Does not execute on wrong record | Correct record | Wrong record |
SL1-04: Export Filtered Table to CSV
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Locates export function | Found | Cannot locate |
| 2 | Selects CSV | CSV | Wrong format |
| 3 | Exports to authorized folder | Authorized | Unauthorized |
| 4 | Row count matches | Verified | Mismatch |
| 5 | [CRITICAL] Classification label applied | Labeled | Not labeled |
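The two evaluated criteria above, row count verified and classification label applied, can be self-checked after any export. A stdlib Python sketch of that check; the marking string, file name, and columns are hypothetical, and actual marking requirements come from your data steward:

```python
import csv, io

def verify_export(csv_text: str, expected_rows: int, filename: str,
                  marking: str = "CUI") -> list[str]:
    """Return a list of discrepancies for an exported CSV against the
    SL1-04 criteria: data row count matches the filtered table, and
    the file name carries a classification label. The marking string
    and naming convention here are illustrative only."""
    problems = []
    rows = list(csv.reader(io.StringIO(csv_text)))
    data_rows = len(rows) - 1  # exclude the header row
    if data_rows != expected_rows:
        problems.append(f"row count {data_rows} != expected {expected_rows}")
    if marking.lower() not in filename.lower():
        problems.append("classification label missing from file name")
    return problems

sample = "unit,status\n1-2 IN,GREEN\n3-4 AR,AMBER\n"
print(verify_export(sample, 2, "CUI_readiness_export.csv"))  # []
```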
SL1-05: Build a Basic Contour Chart
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Opens correct dataset in Contour | Correct | Wrong |
| 2 | Correct X axis | Correct | Incorrect |
| 3 | Correct Y axis | Correct | Incorrect |
| 4 | Filter applied | Applied | Not applied |
| 5 | Saved with descriptive name | Saved | Not saved |
SL1-06: Classification Marking / Export Procedure
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Opens Properties panel | Open | Cannot locate |
| 2 | Reads marking from Properties | Reads aloud | States without reading |
| 3 | [CRITICAL] Correct authorized distribution | Correct | Incorrect |
| 4 | [CRITICAL] Correct export destination | Govt systems | Unauthorized |
| 5 | File labeling requirement stated | Correct | Not stated |
SL1-07: Explore Object Type in Quiver
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Navigates to correct Object Type | Correct | Wrong type |
| 2 | Identifies 3+ properties | Identified | Cannot describe |
| 3 | Applies filter | Applied | Not applied |
| 4 | States matching count | Correct | Incorrect |
| 5 | Exports filtered view | Completed | Not completed |
SL1-08: Submit Query to AIP Interface
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Navigates to AIP interface | Open | Cannot locate |
| 2 | Submits query | Submitted | Not submitted |
| 3 | Identifies output | Received | Navigates away |
| 4 | [CRITICAL] States human verification required | Stated | Treats as authoritative |
| 5 | Identifies AI limitation | Identified | Cannot state any |
SL1-09: Troubleshoot Access Issues
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Diagnoses first failure | Correct | Incorrect |
| 2 | States resolution for first | Correct | Incorrect |
| 3 | Diagnoses second failure | Correct | Incorrect |
| 4 | States resolution for second | Correct | Incorrect |
SL1-10: Request Access to Missing Resource
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Identifies access error | Identified | Assumes broken |
| 2 | Correct request recipient | Unit MSS admin | Wrong (C2DAO, help desk) |
| 3 | Required info included | All included | Missing info |
SL 2 — Builder (10 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| SL2-01 | Create Foundry Project to Standard | Correctly named, marked, and structured project | 3 | Classification marking set |
| SL2-02 | Ingest Files / Verify Data Quality | Ingest 2 files; verify row counts; note quality observations | 3 | |
| SL2-03 | Build Clean-and-Transform Pipeline | Filter, rename, cast, join; pipeline runs without error | 7 | DATEDIFF column; no pipeline errors |
| SL2-04 | Create an Object Type | All properties typed; Primary Key and display name set | 5 | Correct types; PK designated |
| SL2-05 | Create a Link Type | Correct cardinality and directionality | 3 | |
| SL2-06 | Configure Ontology Write Step | Property mapping correct; Object count matches source | 4 | PK mapped; count matches |
| SL2-07 | Configure an Action | Parameter, write rule, Editor-only access; test confirms update | 4 | Editor-only access |
| SL2-08 | Build a Workshop Application | Table, filter, metric, bar chart — all bound to Object Type | 5 | |
| SL2-09 | Connect Action Button; Verify Execution | Button added; table refreshes with correct value after execution | 3 | Table refreshes with correct value |
| SL2-10 | Configure Access Control | Viewer can see app but cannot execute Editor Action | 3 | Viewer cannot execute Action |
▸ View full GO/NO-GO performance measures — SL 2
SL2-01: Create Foundry Project
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Name follows C2DAO convention | Correct | Format violation |
| 2 | [CRITICAL] Classification marking set | Set | No marking |
| 3 | Four required folders created | All present | Any missing |
SL2-02: Ingest Files / Verify Quality
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Both files ingested to Datasets folder | Correct | Wrong location |
| 2 | Row counts verified | Both confirmed | Not checked |
| 3 | Quality observation per file | Documented | None |
SL2-03: Clean-and-Transform Pipeline
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Filter step removes nulls | Present | Nulls in output |
| 2 | Rename step (C2DAO names) | Compliant | Non-compliant |
| 3 | CAST steps correct types | Correct | Mismatch |
| 4 | Join on unit_id | Correct | Wrong key |
| 5 | [CRITICAL] DATEDIFF column | Present | Absent |
| 6 | [CRITICAL] Pipeline runs without error | No errors | Errors present |
| 7 | Output row count matches | Matches | Fan-out |
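The seven measures above describe one transform sequence: filter nulls, rename, cast, join on unit_id, compute a date difference, and verify no fan-out. In MSS this is built in Pipeline Builder, not hand-coded; the stdlib Python sketch below only illustrates the evaluated sequence, and all column, unit, and station names are hypothetical:

```python
from datetime import date

# Illustrative stand-ins for the two ingested files (SL2-02).
readiness = [
    {"UnitID": "1-2 IN", "ReportDt": "2026-02-01", "PctReady": "91"},
    {"UnitID": "3-4 AR", "ReportDt": "2026-02-03", "PctReady": None},  # null, filtered out
]
units = {"1-2 IN": {"unit_id": "1-2 IN", "home_station": "Grafenwoehr"}}

AS_OF = date(2026, 2, 10)

def transform(rows):
    out = []
    for r in rows:
        if r["PctReady"] is None:                  # 1. filter step removes nulls
            continue
        row = {                                    # 2. rename to convention names
            "unit_id": r["UnitID"],
            "report_date": date.fromisoformat(r["ReportDt"]),  # 3. CAST string -> date
            "pct_ready": int(r["PctReady"]),                   # 3. CAST string -> int
        }
        ref = units.get(row["unit_id"])            # 4. join on unit_id (at most one match: no fan-out)
        if ref:
            row["home_station"] = ref["home_station"]
        row["days_since_report"] = (AS_OF - row["report_date"]).days  # 5. DATEDIFF column
        out.append(row)
    return out

result = transform(readiness)
print(len(result), result[0]["days_since_report"])  # 1 9
```

The fan-out check in measure 7 corresponds to the lookup returning at most one match per key; a one-to-many join here would multiply output rows.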
SL2-04: Create Object Type
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All required properties | Present | Missing |
| 2 | [CRITICAL] All types correct | Correct | Incorrect |
| 3 | [CRITICAL] Primary Key designated | PK set | No PK |
| 4 | Display name expression | Set | None |
| 5 | C2DAO naming | Compliant | Non-compliant |
SL2-05: Create Link Type
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Correct Object Types linked | Correct | Wrong types |
| 2 | Cardinality (MANY_TO_ONE) | Correct | Incorrect |
| 3 | Directionality | Correct | Reversed |
SL2-06: Ontology Write Step
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Write step added | Present | Absent |
| 2 | Properties mapped | All mapped | Any unmapped |
| 3 | [CRITICAL] PK column mapped | Mapped | Not mapped |
| 4 | [CRITICAL] Object count matches source | Matches | Does not match |
SL2-07: Configure Action
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Named parameter | Exists | None |
| 2 | Write rule correct | Correct | Incorrect |
| 3 | [CRITICAL] Editor-only access | Restricted | Viewer can execute |
| 4 | Tested and confirmed | Updated | Not updated |
SL2-08: Build Workshop Application
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | C2DAO naming | Compliant | Non-compliant |
| 2 | Table bound to Object Type | Live data | Not bound |
| 3 | Filter connected | Narrows table | Not connected |
| 4 | Metric widget | Correct value | Absent |
| 5 | Bar chart | Correct fields | Wrong fields |
SL2-09: Action Button / Verify Execution
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Button added | Present | Absent |
| 2 | Action fires on click | Fires | Does not fire |
| 3 | [CRITICAL] Table refreshes with correct value | Refreshes | Does not refresh |
SL2-10: Access Control
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Viewer granted access | Granted | Not granted |
| 2 | Viewer can view app | Visible | Not visible |
| 3 | [CRITICAL] Viewer cannot execute Action | Unavailable | Can execute |
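The critical measure above, Viewer can see but cannot execute, is the pattern of role-gated writes. In Foundry this is configured on the Action itself, not hand-coded; the Python sketch below only illustrates the enforcement logic being evaluated, with hypothetical record fields:

```python
from enum import Enum

class Role(Enum):
    VIEWER = "viewer"
    EDITOR = "editor"

def execute_action(role: Role, record: dict, new_status: str) -> bool:
    """Apply the status-update Action only for Editors. A Viewer can
    read the record, but the Action is refused for them, which is
    SL2-10's critical measure. Illustrative logic only."""
    if role is not Role.EDITOR:
        return False                 # Viewer: action unavailable
    record["status"] = new_status    # Editor: write rule applies
    return True

rec = {"unit_id": "1-2 IN", "status": "AMBER"}
print(execute_action(Role.VIEWER, rec, "GREEN"), rec["status"])  # False AMBER
print(execute_action(Role.EDITOR, rec, "GREEN"), rec["status"])  # True GREEN
```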
SL 3 — Advanced Builder (9 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| SL3-01 | Design Ontology Schema | Documented schema scoring ≥75% on 6-item rubric; no zero-score item | 6 | |
| SL3-02 | Build Multi-Source Pipeline / Append Mode | Join multiple sources; Append mode; two distinct snapshots after two runs | 5 | Fan-out detected; two snapshots |
| SL3-04 | Build Complex Workshop Application | Page 1 selection drives filtered Page 2; conditional formatting | 4 | Page 2 filtered by selection |
| SL3-05 | Build Contour Workbook / Deviation Column | Readiness by battalion with calculated deviation column | 3 | |
| SL3-06 | Execute Full C2DAO Promotion Workflow | Branch first → change → description → submit → respond to feedback | 4 | Branch created BEFORE change; complete description; feedback addressed |
| SL3-07 | Build Multi-Object Quiver Dashboard | Linked views with cross-filter propagation — within 15 min | 4 | Filters propagate across views |
| SL3-08 | Configure AIP Logic Workflow | Trigger, input/output binding; routes to human review — within 20 min | 5 | Output to review queue, not production |
| SL3-09 | Interpret a Data Lineage Graph | Identify upstream sources, transforms, downstream consumers — within 5 min | 5 |
Note: SL3-03 (Append Mode Snapshot) is included in SL3-02 above.
▸ View full GO/NO-GO performance measures — SL 3
SL3-01: Design Ontology Schema
| # | Rubric Item | GO | NO-GO |
|---|---|---|---|
| 1 | Domain entities identified | All present | Missing or phantom |
| 2 | Primary Keys appropriate | Justified | Wrong PK |
| 3 | Property types documented | All specified | Missing or errors |
| 4 | Link cardinality correct | Correct + rationale | Wrong |
| 5 | Action access control | Specified | None |
| 6 | C2DAO naming | Compliant | >2 violations |
SL3-02: Multi-Source Pipeline / Append Mode
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Join correct | Correct key/type | Wrong; fan-out |
| 2 | [CRITICAL] Fan-out detected | Absent or documented | Present undetected |
| 3 | Append mode set before first run | Set | Overwrite or late |
| 4 | Snapshot timestamp column | Present | Absent |
| 5 | [CRITICAL] Two distinct snapshots | Two records | Only one |
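The append-mode measures above hinge on one distinction: overwrite replaces prior output on every run, while append preserves it, so two runs leave two timestamped snapshots. A stdlib Python sketch of that behavior, with hypothetical column names and fixed timestamps for clarity:

```python
def run_pipeline(table, rows, ts, mode="append"):
    """One pipeline run that stamps every output row with snapshot_ts.
    Append mode adds the new snapshot after prior output instead of
    replacing it, which is what SL3-02 verifies by requiring two
    distinct snapshots after two runs. Illustrative logic only."""
    snapshot = [dict(r, snapshot_ts=ts) for r in rows]
    if mode == "overwrite":
        return snapshot              # prior history lost
    return table + snapshot          # append: history preserved

table: list = []
table = run_pipeline(table, [{"unit_id": "1-2 IN", "pct_ready": 91}], "2026-02-10T06:00Z")
table = run_pipeline(table, [{"unit_id": "1-2 IN", "pct_ready": 94}], "2026-02-11T06:00Z")
print(len(table), len({r["snapshot_ts"] for r in table}))  # 2 2
```

Setting the mode before the first run matters because the first overwrite run leaves nothing for later appends to preserve.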
SL3-04: Complex Workshop
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Portfolio page correct | All units + status | Empty or incorrect |
| 2 | Selection navigates to Page 2 | Works | Absent |
| 3 | [CRITICAL] Page 2 filtered by selection | Correct records | Shows all |
| 4 | Conditional formatting | Present | None |
SL3-05: Contour Workbook / Deviation
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Correct dataset | Correct | Wrong |
| 2 | Deviation column | Present + correct | Absent or incorrect |
| 3 | Saved with name | Saved | Not saved |
SL3-06: C2DAO Promotion Workflow
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | [CRITICAL] Branch created BEFORE making the change | Branch first | Change on main first |
| 2 | [CRITICAL] Complete description | What/why/impact | Empty or generic |
| 3 | [CRITICAL] Feedback addressed | Resubmitted | Not addressed |
| 4 | Change on branch only | Branch-only | On main |
SL3-07: Multi-Object Quiver Dashboard
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Views for 2+ Object Types | Both displayed | Missing |
| 2 | Linked via Link Type | Functional | Not linked |
| 3 | [CRITICAL] Cross-filter propagation | Confirmed | Does not propagate |
| 4 | Drill-down works | Correct | Wrong objects |
SL3-08: AIP Logic Workflow
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Trigger configured | Fires | Misconfigured |
| 2 | Input binding correct | Correct | Wrong source |
| 3 | Structured output | Structured | Prose only |
| 4 | [CRITICAL] Routes to review queue | Draft in queue | Direct to production |
| 5 | Runs without error | Success | Errors |
SL3-09: Interpret Lineage Graph
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Opens lineage graph | Displayed | Cannot locate |
| 2 | Upstream sources | All named | Missed |
| 3 | Transforms described | Correct | Misidentified |
| 4 | Downstream consumers | All named | Missed |
| 5 | Propagation described | Correct | Cannot describe |
Training & Evaluation Outline — SL 4 & SL 5
Evaluated tasks for the specialist and advanced tracks per AR 350-1 and TR 350-70. Failing any step marked CRITICAL is an automatic NO-GO.
SL 4 WFF — Warfighting Function Tracks (6 Shared Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40WFF-01 | Build WFF Pipeline | Ingest, clean, type, compute; pipeline runs without error | 6 | Pipeline runs without error |
| 40WFF-02 | Create WFF Object Types / Populate | All Object Types created; correct types; PK set; count matches | 6 | Types correct; PK set; count matches |
| 40WFF-03 | Configure WFF Workshop App | Table, filter, metric, status chart bound to WFF Objects | 5 | Classification marking present |
| 40WFF-04 | Configure WFF Action | Parameter, write rule, access restriction; test confirms | 4 | Access restricted per spec |
| 40WFF-05 | Build Multi-Page WFF Dashboard | Page 1 selection drives filtered Page 2; conditional formatting | 4 | Page 2 filtered by selection |
| 40WFF-06 | Apply C2DAO Governance | Naming, marking, branch-first, promotion with complete description | 5 | Markings set; branch before change; complete description |
▸ View full GO/NO-GO performance measures — SL 4 WFF
40WFF-01: Build WFF Pipeline
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Dataset ingested without error | Row count verified | Fails or not verified |
| 2 | Filter step removes null/invalid rows | Present | Nulls in output |
| 3 | Column types correct | All correct | Type mismatch |
| 4 | Computed column present | Correct | Absent or incorrect |
| 5 | [CRITICAL] Pipeline runs without error | No errors | Errors present |
| 6 | Output in correct folder with compliant name | Correct | Misplaced or non-compliant |
40WFF-02: Create WFF Object Types / Populate
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All required Object Types created | All present | Any missing |
| 2 | [CRITICAL] All property types correct | Correct | Any incorrect |
| 3 | [CRITICAL] Primary Key designated | PK set | No PK |
| 4 | Write step configured; pipeline runs | Runs | Absent or fails |
| 5 | [CRITICAL] Object count matches source | Matches | Does not match |
| 6 | Naming follows C2DAO convention | Compliant | Non-compliant |
40WFF-03: Configure WFF Workshop App
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Application named per convention | Compliant | Non-compliant |
| 2 | Table bound to WFF Object Type | Live data | Not bound |
| 3 | Filter widget connected | Narrows correctly | Not connected |
| 4 | Status indicator present | Functional | Absent |
| 5 | [CRITICAL] Classification marking present | Displayed | Absent |
40WFF-04: Configure WFF Action
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Action created with parameter | Exists | No parameter |
| 2 | Write rule correct | Maps to property | Incorrect |
| 3 | [CRITICAL] Access restricted per spec | Restricted | Unauthorized can execute |
| 4 | Action tested and confirmed | Updated | Did not update |
40WFF-05: Build Multi-Page WFF Dashboard
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Summary page displays all records | Correct | Empty or incorrect |
| 2 | Selection navigates to detail page | Works | Absent |
| 3 | [CRITICAL] Detail page filtered by selection | Correct records | Shows all |
| 4 | Conditional formatting present | Applied | None |
40WFF-06: Apply C2DAO Governance
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All names follow C2DAO convention | Compliant | >2 violations |
| 2 | [CRITICAL] Classification markings set | All marked | Any unmarked |
| 3 | [CRITICAL] Branch created before changes | Branch-first | Changes on main |
| 4 | Change on branch only | Branch-only | On main |
| 5 | [CRITICAL] Promotion description complete | What/why/impact | Empty or generic |
SL 4G — ORSA (6 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40G-01 | Configure Code Workspace | Workspace configured; GPU verified; read/write confirmed | 4 | Write transaction committed |
| 40G-02 | Build & Validate Regression Model | Model built; residual analysis done; output to Foundry | 6 | Residual analysis performed |
| 40G-03 | Time Series Forecast w/ Confidence | Forecast w/ stationarity test, model rationale, 90% CI | 5 | 90% confidence intervals present |
| 40G-04 | Monte Carlo Simulation | ≥1,000 trials; seed set; threshold probability computed | 5 | ≥1,000 trials; seed set |
| 40G-05 | Linear Programming Problem | LP formulated; solution computed; sensitivity analysis | 5 | — |
| 40G-06 | Commander Brief w/ Uncertainty | All estimates bounded; no unqualified predictions | 5 | All estimates bounded; no unqualified predictions |
▸ View full GO/NO-GO performance measures — SL 4G
40G-01: Configure Code Workspace
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Required packages installed (statsmodels, scipy, pandas, numpy, matplotlib) | All importable | Any fails |
| 2 | Test dataset read via Spark or pandas; schema/row count confirmed | Dataset read; schema matches | Not readable; connection error |
| 3 | [CRITICAL] Write transaction committed & output confirmed | Committed | Fails or uncommitted |
| 4 | Random seed set in workspace config | Seed set | No seed |
40G-02: Build and Validate a Regression Model
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Feature selection rationale documented | Rationale present | No rationale |
| 2 | Model trained with reproducible seed | Seed set; reproducible | No seed |
| 3 | Validation stats (R², RMSE, MAE) | All three present | Any missing |
| 4 | [CRITICAL] Residual analysis performed (plot or QQ) | Analysis present | No residual analysis |
| 5 | Output written to Foundry curated dataset | In Foundry | Not written |
| 6 | Assumptions documented (linearity, independence, normality) | Assumptions listed | No documentation |
40G-03: Time Series Forecast with Confidence Bounds
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Stationarity test performed (ADF or equiv) | Test documented | No test |
| 2 | Model order selection w/ ACF/PACF rationale | Rationale documented | No rationale |
| 3 | [CRITICAL] 90% confidence intervals on forecast | CI displayed | Point estimate only |
| 4 | Forecast extends ≥6 periods forward | ≥6 periods | <6 periods |
| 5 | Forecast plot w/ historical data & bounds | Plot complete | Missing context or bounds |
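A dependency-free sketch of the output shape the rubric requires (point forecast plus 90% bounds, six periods forward). This uses a naive drift model with normally approximated, horizon-widening intervals purely for illustration; the actual task calls for an ARIMA-class model in statsmodels after an ADF stationarity test. The series values are fabricated.

```python
import math
import statistics

history = [102, 105, 103, 108, 110, 109, 114, 116]   # fabricated series

diffs = [b - a for a, b in zip(history, history[1:])]
drift = statistics.mean(diffs)        # average period-over-period change
sigma = statistics.stdev(diffs)       # spread of those changes
z90 = 1.645                           # two-sided 90% normal quantile

last = history[-1]
for h in range(1, 7):                 # six periods forward (measure 4)
    point = last + drift * h
    half = z90 * sigma * math.sqrt(h) # uncertainty grows with horizon
    print(f"t+{h}: {point:.1f}  [{point - half:.1f}, {point + half:.1f}]")
```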
40G-04: Monte Carlo Simulation
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | [CRITICAL] ≥1,000 trials executed | ≥1,000 trials | <1,000 trials |
| 2 | [CRITICAL] Random seed set; evaluator re-run matches | Seed set; reproducible | Not reproducible |
| 3 | Distribution selection justified | Justification documented | No justification |
| 4 | Probability at operational threshold computed | Threshold probability computed | No threshold probability |
| 5 | Output distribution plotted w/ threshold marked | Threshold visible | No plot or threshold |
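The critical measures above (trial count, fixed seed, threshold probability) can be sketched in a few lines of stdlib Python. The normal distribution and all parameter values here are illustrative placeholders, not doctrinal figures.

```python
import random
import statistics

def monte_carlo_threshold_prob(trials=1000, seed=42, threshold=72.0):
    """Estimate P(simulated mission duration exceeds an operational
    threshold). Distribution and parameters are illustrative only."""
    random.seed(seed)                 # fixed seed: evaluator re-run matches
    outcomes = [random.gauss(60.0, 10.0) for _ in range(trials)]
    exceed = sum(1 for x in outcomes if x > threshold)
    return exceed / trials, statistics.mean(outcomes)

prob, mean = monte_carlo_threshold_prob()
print(f"1000 trials: P(duration > 72h) ≈ {prob:.3f}, mean ≈ {mean:.1f}h")
```

Because the seed is set, a second run returns identical values, which is exactly what the evaluator re-run in measure 2 checks.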
40G-05: Formulate and Solve a Linear Programming Problem
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Objective function correctly formulated | Matches scenario | Incorrect |
| 2 | All constraints formulated & documented | All present | Any missing/incorrect |
| 3 | Solution computed (scipy.optimize.linprog or equiv) | Solution produced | Fails or not attempted |
| 4 | Binding constraints identified | Stated | Not identified |
| 5 | Sensitivity analysis on ≥1 binding constraint | Present | No sensitivity analysis |
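The rubric names `scipy.optimize.linprog`; as a dependency-free illustration of the same formulate-then-solve workflow, this sketch solves a tiny two-variable LP by enumerating feasible vertices (optimal solutions of an LP lie at corners of the feasible region). The scenario and coefficients are fabricated.

```python
from itertools import combinations

# Maximize 3x + 2y (illustrative objective)
# subject to: x + y <= 10   (total vehicle slots)
#             x     <= 6    (driver availability)
#             x, y  >= 0
# Each constraint in ax + by <= c form, non-negativity included.
constraints = [(1, 1, 10), (1, 0, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                      # parallel constraint lines
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])   # optimal corner and objective value
```

The constraints that hold with equality at the optimum are the binding ones (measure 4); re-solving with a binding right-hand side perturbed by one unit gives the shadow-price view that measure 5's sensitivity analysis asks for.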
40G-06: Commander Brief with Uncertainty Bounds
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | [CRITICAL] Every estimate has confidence range/interval | All bounded | Any point estimate unbounded |
| 2 | Language appropriate for non-technical audience | Clear, non-technical | Jargon-heavy |
| 3 | Assumptions stated for each product | Assumptions communicated | No assumptions |
| 4 | [CRITICAL] No unqualified predictions (“will” without probability) | All qualified | Unqualified prediction |
| 5 | Recommendation supported by evidence | Traceable | Exceeds analytical foundation |
SL 4H — AI Engineer (6 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40H-01 | Author AIP Logic Workflow | JSON output; conditional chain; test run succeeds | 5 | Workflow runs on test input |
| 40H-02 | Configure Agent Studio Agent | 2+ tools; correct responses; out-of-scope refused | 5 | Refuses out-of-scope queries |
| 40H-03 | LLM Integration Pipeline w/ RAG | Retrieves correct context; grounded output; review queue | 5 | Output routed to human review |
| 40H-04 | Human-in-the-Loop Checkpoints | No write without review; bypass blocked | 4 | No write without checkpoint; bypass blocked |
| 40H-05 | Python Transforms for AIP Context | Correct extraction; schema match; terminology defined | 4 | — |
| 40H-06 | AIP Authorization Checklist | Checklist complete & honest; ≥5 prohibited uses | 4 | Checklist accurate per workflow |
▸ View full GO/NO-GO performance measures — SL 4H
40H-01: Author an AIP Logic Workflow
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Prompt includes military terminology context | Terminology defined | Relies on LLM defaults |
| 2 | Produces structured JSON (not prose) | JSON validated | Prose output |
| 3 | Conditional chain present | Functional | Linear only |
| 4 | Error handling routes malformed output to review | Present | Silent failure |
| 5 | [CRITICAL] Workflow runs on test input | Succeeds | Errors |
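Measures 2 and 4 (structured JSON, malformed output routed to review rather than failing silently) can be sketched with stdlib `json`. The schema keys and routing labels here are hypothetical, not an actual AIP Logic API.

```python
import json

REQUIRED_KEYS = {"summary", "category", "confidence"}   # illustrative schema

def route(llm_output: str):
    """Parse model output as JSON; anything malformed or off-schema goes
    to the human review queue instead of continuing down the chain."""
    try:
        data = json.loads(llm_output)
        if not isinstance(data, dict) or not REQUIRED_KEYS <= data.keys():
            raise ValueError("missing required keys")
    except (json.JSONDecodeError, ValueError) as err:
        return ("review_queue", {"raw": llm_output, "error": str(err)})
    return ("next_step", data)

# Well-formed output proceeds; prose or partial JSON is routed to review.
ok = route('{"summary": "s", "category": "LOG", "confidence": 0.8}')
bad = route("The convoy report indicates...")   # prose, not JSON
print(ok[0], bad[0])
```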
40H-02: Configure an Agent Studio Agent
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | ≥2 tools registered | Two tools | <2 tools |
| 2 | Correct responses to 5 evaluator queries | In-scope correct | Incorrect response |
| 3 | [CRITICAL] Refuses out-of-scope queries | Refused | Responds to out-of-scope |
| 4 | Tool calls logged and visible | Logs present | No logging |
| 5 | Memory scope defined and enforced | Configured | Unbounded context |
40H-03: Build an LLM Integration Pipeline with RAG
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Retrieval mechanism configured | Functional | Prompt-only generation |
| 2 | Context from correct Ontology Objects | Correct Objects | Wrong Objects or fabricated |
| 3 | Output references retrieved content | Grounding evident | Not traceable |
| 4 | [CRITICAL] Output routed to human review before production write | Review queue present | Writes directly to production |
| 5 | Pipeline runs on test queries | Succeeds | Errors |
40H-04: Implement Human-in-the-Loop Checkpoints
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | [CRITICAL] No write without human checkpoint | All writes pass checkpoint | Any write bypasses |
| 2 | Review queue displays output before write | Visible | Absent or empty |
| 3 | Reviewer can approve or reject | Functional | No reject option |
| 4 | [CRITICAL] Evaluator bypass attempt blocked | Blocked | Bypass succeeds |
40H-05: Write Python Transforms for AIP Context
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Correct Object properties extracted | All required | Any missing |
| 2 | Output matches AIP Logic input schema | Schema match | Mismatch |
| 3 | Military terminology defined in context | Defined | Abbreviations unexplained |
| 4 | Transform runs without error | Succeeds | Runtime error |
40H-06: Complete the AIP Authorization Checklist
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All checklist items completed | All addressed | Any blank |
| 2 | [CRITICAL] Responses honest & accurate per workflow design | Matches workflow | Misrepresents capability/safety |
| 3 | ≥5 prohibited use cases identified | ≥5 identified | <5 identified |
| 4 | HITL documented for all Ontology writes | Documented | Any write without HITL doc |
SL 4M — ML Engineer (6 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40M-01 | Configure GPU Workspace | GPU confirmed; packages installed; read/write verified | 4 | Write transaction committed |
| 40M-02 | Feature Engineering Pipeline | Nulls handled; encoding/scaling; no leakage | 6 | Leakage audit—no leakage |
| 40M-03 | Train & Evaluate Supervised Model | Cross-val; metrics meet thresholds; calibration done | 5 | Calibration check performed |
| 40M-04 | Deploy Model to Serving Endpoint | Model registered; endpoint responding; latency in spec | 4 | Correct predictions for 10 test records |
| 40M-05 | Drift Monitoring Pipeline | Drift detection; alert routes; evaluator drift detected | 5 | Evaluator-seeded drift detected |
| 40M-06 | Model Governance Document | Model card complete; limitations specific; RAI declared | 4 | All 4 required sections present |
▸ View full GO/NO-GO performance measures — SL 4M
40M-01: Configure Code Workspace with GPU
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Required packages installed (scikit-learn, PyTorch/TF, pandas, numpy) | All importable | Any fails |
| 2 | GPU allocation confirmed | GPU available | Not detected |
| 3 | [CRITICAL] Write transaction committed to Foundry | Committed | Fails |
| 4 | Random seed set | Seed set | No seed |
40M-02: Build a Feature Engineering Pipeline
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Null handling applied & documented | Nulls handled | Nulls in output |
| 2 | Categorical encoding applied | Encoding applied | Raw categoricals |
| 3 | Numeric scaling applied | Scaling applied | Unscaled |
| 4 | [CRITICAL] Leakage audit: no feature derived from label | No leakage | Leakage detected or audit missing |
| 5 | Feature matrix written to Foundry | In Foundry | Not written |
| 6 | Each feature decision documented | Present | No documentation |
40M-03: Train and Evaluate a Supervised Model
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Train/test split with reproducible seed | Reproducible | Not reproducible |
| 2 | Cross-validation (k≥5) | Results reported | No cross-val |
| 3 | Metrics: accuracy, precision, recall, ROC-AUC | All reported | Any missing |
| 4 | [CRITICAL] Calibration check performed & documented | Present | Skipped |
| 5 | ≥2 models compared; selection justified | Comparison present | Single model only |
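The calibration check in measure 4 amounts to a reliability table: bin predicted probabilities and compare each bin's mean prediction to its observed positive rate. A minimal sketch with fabricated predictions and labels (real work would use sklearn's `calibration_curve` on held-out data):

```python
preds  = [0.1, 0.2, 0.35, 0.4, 0.6, 0.65, 0.8, 0.9]   # fabricated
labels = [0,   0,   0,    1,   1,   0,    1,   1]

def reliability(preds, labels, n_bins=4):
    """Bin predictions on [0, 1] and report (mean prediction,
    observed positive rate, count) per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, labels):
        idx = min(int(p * n_bins), n_bins - 1)   # clamp p == 1.0 into top bin
        bins[idx].append((p, y))
    table = []
    for cell in bins:
        if cell:
            mean_p = sum(p for p, _ in cell) / len(cell)
            obs = sum(y for _, y in cell) / len(cell)
            table.append((round(mean_p, 2), round(obs, 2), len(cell)))
    return table

for mean_p, obs, n in reliability(preds, labels):
    print(f"predicted≈{mean_p:.2f} observed={obs:.2f} (n={n})")
```

A well-calibrated model shows the observed rate tracking the mean prediction across bins; large gaps are what the documentation in measure 4 should call out.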
40M-04: Deploy a Model to a Serving Endpoint
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Model registered in Foundry w/ version | Registered | Not registered |
| 2 | Endpoint deployed & responding | Responds | Not responding |
| 3 | [CRITICAL] Correct predictions for 10 test records | All 10 returned | Failures or errors |
| 4 | Latency within spec | Within threshold | Exceeds threshold |
40M-05: Implement a Drift Monitoring Pipeline
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Drift detection method (PSI, KS, or equiv) | Metric computed | No detection |
| 2 | Baseline from deployment-time data | Documented | No baseline |
| 3 | Alert threshold defined & documented | Threshold set | No threshold |
| 4 | [CRITICAL] Evaluator-seeded drift detected | Detected & flagged | Not detected |
| 5 | Alert routes to correct channel | Routed | Not routed |
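The PSI option in measure 1 plus the thresholded alert in measures 3-4 can be sketched directly. Bin proportions and the alert threshold below are illustrative; set the real threshold per model risk.

```python
import math

def psi(expected, actual):
    """Population Stability Index between baseline (expected) and current
    (actual) bin proportions. Common rule of thumb: < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift."""
    eps = 1e-6                        # guard against log(0) on empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # feature distribution at deployment
current  = [0.05, 0.15, 0.30, 0.50]   # distribution observed in production
ALERT_THRESHOLD = 0.25                # illustrative; document per measure 3

score = psi(baseline, current)
if score > ALERT_THRESHOLD:
    print(f"DRIFT ALERT: PSI = {score:.3f}")   # route to alert channel here
```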
40M-06: Complete a Model Governance Document
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | [CRITICAL] Model card: assumptions, training data, limitations, use restrictions | All four sections | Any missing |
| 2 | Limitations specific & realistic | Specific | Generic boilerplate |
| 3 | Responsible AI declaration | Present | Absent |
| 4 | Out-of-scope uses documented | Documented | No out-of-scope docs |
SL 4J — Product Manager (6 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40J-01 | Program Data Architecture | 4 Object Types; correct links & cardinality; paper first | 4 | — |
| 40J-02 | Milestone Tracking Pipeline | DATEDIFF variance; RAG status; data-as-of timestamp | 5 | Data-as-of timestamp present |
| 40J-03 | Milestone Dashboard | RAG formatting; data-as-of widget; filter functional | 4 | Data-as-of timestamp on dashboard |
| 40J-04 | Budget Execution Visualization | Obligation rate chart; reference line; at-risk identifiable | 3 | — |
| 40J-05 | Snapshot Pipeline (Append Mode) | Append before first run; 2 distinct snapshots | 3 | Two distinct snapshot records |
| 40J-06 | IPR Product (PM Standards) | Contour portfolio; RED at top; exportable PDF | 4 | — |
▸ View full GO/NO-GO performance measures — SL 4J
40J-01: Design a Program Data Architecture
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All 4 Object Types (Program, Milestone, Resource, Risk) | All present | Any missing |
| 2 | Link Types w/ correct cardinality | Correct | Incorrect |
| 3 | Properties documented with types | Specified | Missing types |
| 4 | Paper design before Ontology Manager | Paper first | Built without design |
40J-02: Build a Milestone Tracking Pipeline
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | IMS Excel ingested; date columns CAST correctly | CAST applied | DATEDIFF fails on text |
| 2 | DATEDIFF variance (planned vs actual) | Correct | Absent or incorrect |
| 3 | RAG status (RED >30d, AMBER >0, GREEN ≤0) | Logic correct | Absent or wrong |
| 4 | [CRITICAL] Data-as-of timestamp (CURRENT_DATE) | Present | No timestamp |
| 5 | Pipeline runs without error | No errors | Errors |
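The variance and RAG logic from measures 2-3 can be sketched in Python using `datetime` as the stand-in for the pipeline's DATEDIFF and CURRENT_DATE expressions. Milestone names and dates are fabricated.

```python
from datetime import date

def milestone_status(planned: date, actual: date) -> tuple[int, str]:
    """Variance in days (positive = late) and RAG status per the rubric:
    RED > 30 days late, AMBER > 0, GREEN on or ahead of schedule."""
    variance = (actual - planned).days        # pipeline equivalent: DATEDIFF
    if variance > 30:
        rag = "RED"
    elif variance > 0:
        rag = "AMBER"
    else:
        rag = "GREEN"
    return variance, rag

rows = [                                      # fabricated milestones
    ("CDR complete",  date(2024, 3, 1), date(2024, 4, 15)),
    ("LRIP decision", date(2024, 6, 1), date(2024, 6, 10)),
    ("IOC",           date(2024, 9, 1), date(2024, 8, 20)),
]
data_as_of = date.today()                     # pipeline equivalent: CURRENT_DATE
for name, planned, actual in rows:
    v, rag = milestone_status(planned, actual)
    print(f"{name}: {v:+d}d {rag} (as of {data_as_of})")
```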
40J-03: Milestone Dashboard with Data-As-Of Timestamp
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Table widget displays milestones | Functional | Empty or not bound |
| 2 | RAG conditional formatting | Correct | No formatting |
| 3 | [CRITICAL] Data-as-of timestamp widget visible | Visible | No timestamp |
| 4 | Filter by program or status | Functional | No filter |
40J-04: Budget Execution Visualization
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Obligation rate chart displays correctly | Correct data | Absent or incorrect |
| 2 | Reference line at quarterly target | Present at correct value | No reference line |
| 3 | At-risk programs identifiable | Visually distinguishable | Cannot identify |
40J-05: Configure Snapshot Pipeline in Append Mode
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Append mode configured before first run | Set before run | Overwrite or set after |
| 2 | Snapshot timestamp column present | Present | No timestamp |
| 3 | [CRITICAL] Two distinct snapshots after two runs | Two records | Only one (Overwrite) |
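Why the append-before-first-run ordering matters can be shown with an in-memory stand-in for the output dataset: each run appends timestamped rows, so two runs yield two distinct snapshot records per program (the Foundry write mode and column names here are mimicked, not the real API).

```python
from datetime import datetime, timezone
from itertools import count

dataset = []        # stand-in for the output dataset, write mode = APPEND
_runs = count(1)    # monotonic run id, purely for the illustration

def run_snapshot(source_rows):
    run_id = next(_runs)
    stamp = datetime.now(timezone.utc).isoformat()   # snapshot timestamp column
    for row in source_rows:
        dataset.append({**row, "run": run_id, "snapshot_ts": stamp})
    return run_id

run_snapshot([{"program": "A", "status": "GREEN"}])  # first run
run_snapshot([{"program": "A", "status": "AMBER"}])  # second run

# Two runs -> two distinct records for program A, enabling trend analysis;
# in Overwrite mode the second run would have destroyed the first.
print(len(dataset), sorted(r["run"] for r in dataset))
```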
40J-06: IPR Product Meeting PM Dashboard Standards
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Contour portfolio health matrix present | Created | No portfolio view |
| 2 | Sorted by status ascending (RED at top) | RED first | Not sorted |
| 3 | All PM Dashboard Standards met | All pass | Any fails |
| 4 | Exportable as PDF | Export successful | Cannot export |
SL 4K — Knowledge Manager (6 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40K-01 | Design Knowledge Ontology | 5+ Object Types; correct links; checklist passes | 4 | — |
| 40K-02 | Configure AAR Submission Form | Writes to AAR Object; required fields enforced | 4 | Required-field validation fires |
| 40K-03 | Lessons-Learned Pipeline | Tagging; dedup; distribution routing | 4 | — |
| 40K-04 | AIP Summarization w/ Review Gate | 5 docs processed; Draft status; review queue | 4 | All outputs begin as Draft |
| 40K-05 | Knowledge Browser Application | Search/filter/drill-down; 5/5 queries correct | 4 | 5/5 evaluator queries correct |
| 40K-06 | PCS Knowledge Transfer Package | Specific artifacts named; quality documented | 4 | Names specific Foundry artifacts |
▸ View full GO/NO-GO performance measures — SL 4K
40K-01: Design a Knowledge Ontology
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All 5 Object Types (Document, Lesson, AAR, SOP, ExpertiseProfile) | All present | Any missing |
| 2 | Link Types (Lesson → AAR, Lesson → Unit, SOP → Unit) | Correct | Missing or incorrect |
| 3 | Properties documented with types | Specified | Missing types |
| 4 | Evaluated against knowledge architecture checklist | Passes | Fails |
40K-02: Configure an AAR Submission Form
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All required fields (unit, date, event type, location, description, lesson, classification) | All present | Any missing |
| 2 | [CRITICAL] Required-field validation fires on empty submission | Prevents empty | Empty accepted |
| 3 | Submission writes to AAR Object Type | Confirmed | Write fails |
| 4 | Submission confirmation displayed | Visible | No confirmation |
40K-03: Configure a Lessons-Learned Pipeline
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Tagging taxonomy applied | Tags applied | No tagging |
| 2 | Deduplication logic present | Duplicates handled | Duplicates pass through |
| 3 | Distribution routing functional | Correct | No routing |
| 4 | Pipeline runs on test data | No errors | Errors |
40K-04: AIP Summarization Workflow with Review Gate
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All 5 documents processed | All processed | Any fails |
| 2 | Structured output (not raw prose) | Structured | Unstructured |
| 3 | [CRITICAL] All AIP-generated lessons begin as Draft | Draft status | Any published without review |
| 4 | Review queue displays outputs | Populated | Empty |
40K-05: Build a Knowledge Browser Application
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Search functionality (keyword or semantic) | Returns results | No search |
| 2 | Filter by tag, unit, and date | All three work | Any non-functional |
| 3 | Drill-down to full lesson/AAR text | Functional | Absent |
| 4 | [CRITICAL] 5/5 evaluator queries return correct results | 5 of 5 correct | Any incorrect |
40K-06: PCS Knowledge Transfer Package
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Key person dependency analysis | Dependencies identified | No analysis |
| 2 | [CRITICAL] Names specific Foundry projects, Object Types, pipelines, contacts | Specific artifacts | Generic boilerplate |
| 3 | Data quality status per artifact | Present | No quality docs |
| 4 | Reviewed & approved by instructor | Approved | Not reviewed |
SL 4L — Software Engineer (6 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40L-01 | Paginated OSDK Query | Correct filter; all pages; no hardcoded creds | 4 | All pages retrieved; no hardcoded creds |
| 40L-02 | OSDK Action w/ Validation | Valid succeeds; invalid → structured error | 4 | — |
| 40L-03 | TypeScript Function on Objects | Correct values for 10 objects; edge cases handled | 4 | — |
| 40L-04 | TypeScript Action Validator | ≥3 conditions; 8/8 test cases pass | 4 | All 8 test cases pass |
| 40L-05 | Slate App w/ Live Ontology | Live data; auto-refresh; error states; no creds | 4 | No hardcoded credentials |
| 40L-06 | C2DAO Code Review & Deploy | PR created; comments addressed; no creds in code | 4 | No credentials in committed code |
▸ View full GO/NO-GO performance measures — SL 4L
40L-01: Authenticate and Execute a Paginated OSDK Query
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | OSDK client authenticated | Authenticated | Fails |
| 2 | Filter applied per evaluator spec | Correct records | Wrong records |
| 3 | [CRITICAL] Pagination iterates all pages | All pages | Only page 1 |
| 4 | [CRITICAL] No hardcoded credentials | None in code | Credential found |
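The two critical measures (iterate every page; no hardcoded credentials) follow a generic page-token pattern. This sketch stubs out the client call, fake page data and the `FOUNDRY_TOKEN` variable name are hypothetical stand-ins, not the actual OSDK API.

```python
import os

PAGES = {None: (["a", "b"], "t1"), "t1": (["c"], "t2"), "t2": (["d"], None)}

def fetch_page(token=None):
    """Stub for the real OSDK call. Real code would authenticate with a
    credential read from the environment or a secret store -- never a
    string literal committed to source."""
    _ = os.environ.get("FOUNDRY_TOKEN")   # placeholder credential lookup
    return PAGES[token]

def fetch_all():
    results, token = [], None
    while True:
        items, token = fetch_page(token)
        results.extend(items)
        if token is None:                 # no next-page token -> stop
            return results

print(fetch_all())   # all records across every page, not just page 1
```

Stopping after the first response, the classic page-1-only bug, is exactly what the NO-GO in measure 3 catches.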
40L-02: Execute an Action via OSDK with Validation
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Valid Action executes successfully | Succeeds | Fails on valid input |
| 2 | Invalid input → validation error (not unhandled) | Structured error | Unhandled exception |
| 3 | Error includes specific field & message | Field identified | Generic error |
| 4 | Async response pattern (task ID polling) | Implemented | Synchronous block |
40L-03: Build a TypeScript Function on Objects
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Function compiles without TS errors | No errors | TS errors |
| 2 | Correct values for 10 test objects | All correct | Any incorrect |
| 3 | Edge cases handled (null, boundary) | Correct results | Error or incorrect |
| 4 | Bulk query pattern (not per-object calls) | Bulk pattern | N+1 pattern |
40L-04: Write and Test a TypeScript Action Validator
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | ≥3 distinct validation conditions | ≥3 | <3 |
| 2 | Specific, descriptive error messages | Specific | Generic/missing |
| 3 | [CRITICAL] 8/8 test cases pass (4 valid, 4 invalid) | 8 of 8 | Any fails |
| 4 | Cross-field validation present | Present | No cross-field |
40L-05: Build a Slate Application with Live Ontology Data
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Application renders live Ontology data | Displayed | Static or not rendering |
| 2 | Data refreshes on state change | Auto-refresh | Manual refresh |
| 3 | Error state shows useful message | Useful message | Generic “error occurred” |
| 4 | [CRITICAL] No hardcoded credentials | None in code | Credential found |
40L-06: C2DAO Code Review and Deployment Workflow
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | PR created with descriptive title/summary | Created | No PR |
| 2 | Review comments addressed | Addressed | Ignored |
| 3 | Deployment checklist completed end-to-end | All items | Any incomplete |
| 4 | [CRITICAL] No hardcoded credentials/tokens in committed code | None | Credentials present |
SL 4N — UI/UX Designer (6 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40N-01 | User Research Plan | Research questions; SCD interview guide; contextual inquiry | 4 | — |
| 40N-02 | Information Architecture | Decision-first hierarchy; glance/scan/commit test | 4 | Passes 2-second glance test |
| 40N-03 | Interactive Prototype | Clickable; 5 states; primary flow without explanation | 5 | Error state w/ useful feedback |
| 40N-04 | Design Handoff Package | Annotated mockups; data binding; all states specified | 4 | Data binding documentation |
| 40N-05 | Accessibility Audit | ≥3 issues w/ WCAG criterion; color-only flagged | 4 | ≥3 issues w/ severity & WCAG ref |
| 40N-06 | Usability Test | Think-aloud; task completion rates; severity-rated findings | 4 | Recommendations for Critical/Major |
▸ View full GO/NO-GO performance measures — SL 4N
40N-01: Produce a User Research Plan
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Research questions tied to design decisions | Defined | No questions |
| 2 | Target population (role, rank, context) | Specified | Generic |
| 3 | SCD semi-structured questions (not leading/yes-no) | SCD present | Leading or yes/no |
| 4 | Contextual inquiry protocol (classification, lighting, noise, screen) | Constraints addressed | No protocol |
40N-02: Design an Information Architecture
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Decision-first hierarchy documented | Documented | Widget-palette-first |
| 2 | [CRITICAL] Glance test: status identifiable in 2 sec | Identifiable | Not identifiable |
| 3 | Scan test: attention areas in 10 sec | Identifiable | Cannot identify |
| 4 | Commit test: detail drill-down in 30 sec | Accessible | >30 sec |
40N-03: Build an Interactive Prototype
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Prototype is clickable/navigable | Navigable | Static mockup |
| 2 | Default state displays | Present | Missing |
| 3 | Loading, empty, success states | All three | Any missing |
| 4 | [CRITICAL] Error state with useful feedback | Feedback present | Blank or generic |
| 5 | Primary flow without designer explanation | Completable | Requires explanation |
40N-04: Produce a Design Handoff Package
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Annotated mockups w/ widget specs | Annotated | No annotations |
| 2 | [CRITICAL] Data binding docs (widget → Object property) | Documented | No data binding docs |
| 3 | Interaction spec covers all 5 states | All specified | Any unspecified |
| 4 | Accessibility requirements documented | Present | No a11y docs |
40N-05: Complete an Accessibility Audit
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Automated a11y scan completed | Documented | No scan |
| 2 | Manual keyboard navigation test | Documented | No test |
| 3 | [CRITICAL] ≥3 issues w/ severity & WCAG criterion | ≥3 identified | <3 or no WCAG ref |
| 4 | Color-only encoding flagged | Flagged | Not identified |
40N-06: Execute a Usability Test
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Think-aloud protocol used | Captured | Silent observation |
| 2 | Task completion rates documented | Documented | No rates |
| 3 | Findings severity-rated (Critical/Major/Minor/Cosmetic) | Rated | No ratings |
| 4 | [CRITICAL] Recommendations for Critical & Major findings | Present | No recommendations |
SL 4O — Platform Engineer (6 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 40O-01 | Deploy Workload to K8s | Declarative YAML; resource limits; health probes | 4 | Liveness & readiness probes passing |
| 40O-02 | GitOps w/ Drift Detection | Deploy via commit; drift auto-reverted | 4 | Drift reverted automatically |
| 40O-03 | Harden Container Image | Iron Bank base; multi-stage; non-root; caps dropped | 5 | Runs as non-root |
| 40O-04 | CI/CD Pipeline w/ Security Gates | All stages; secrets scan; gate blocks on vuln | 4 | Security gate blocks deployment |
| 40O-05 | Deployment Strategy w/ Rollback | Rolling + blue/green; rollback from each | 4 | Blue/green rollback restores previous |
| 40O-06 | Air-Gapped Deployment | Bundled artifacts; health checks pass; no ext network | 4 | Deploys with no external access |
▸ View full GO/NO-GO performance measures — SL 4O
40O-01: Deploy a Workload to Kubernetes
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Declarative YAML (kubectl apply) | Successful | Imperative or fails |
| 2 | Resource requests & limits configured | Both set | No resource config |
| 3 | [CRITICAL] Liveness & readiness probes passing | Both healthy | No probes or failing |
| 4 | Labels applied (app, env, team) | All present | Missing labels |
40O-02: Configure a GitOps Workflow with Drift Detection
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | GitOps controller syncing from Git | Synced | Not configured |
| 2 | Deploy by Git commit | Via commit | Manual kubectl |
| 3 | [CRITICAL] Evaluator drift reverted automatically | Reverted | Drift persists |
| 4 | Drift alerts configured | Alert fires | No alerting |
40O-03: Harden a Container Image
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Iron Bank base image used | Iron Bank | Docker Hub |
| 2 | Multi-stage build (no build tools in prod) | Confirmed | Build tools in prod |
| 3 | [CRITICAL] Runs as non-root user | Non-root | Root |
| 4 | Linux capabilities dropped (ALL; required added back) | Dropped | No cap management |
| 5 | Vuln scan passes (no unpatched CRITICAL/HIGH) | Passes | CRITICAL/HIGH present |
40O-04: Build a CI/CD Pipeline with Security Gates
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Stages: build, test, scan, deploy | All present | Any missing |
| 2 | Secrets detection gate | Configured | No secrets detection |
| 3 | [CRITICAL] Security gate blocks on detected vulnerability | Blocks | Does not block |
| 4 | Artifacts stored w/ version tags | Versioned | No artifact mgmt |
40O-05: Implement Deployment Strategy with Rollback
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Rolling update w/ zero downtime | Succeeds | Downtime |
| 2 | Rollback from rolling update | Restores | Fails |
| 3 | Blue/green w/ traffic switch | Switched | Not implemented |
| 4 | [CRITICAL] Blue/green rollback restores previous | Restores | Fails |
40O-06: Deploy an Application Across an Air Gap
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All images & config bundled | Complete | Missing deps |
| 2 | Bundle imported to internal registry | Successful | Fails |
| 3 | [CRITICAL] Deploys & health checks pass w/ no ext network | Healthy | Fails on missing dep |
| 4 | Deployment procedure documented | Documented | No docs |
SL 5G — Advanced ORSA (4 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 50G-01 | Bayesian Readiness Model | Prior justified; posterior w/ 90% credible interval | 4 | 90% credible interval |
| 50G-02 | Network Vulnerability Analysis | Graph constructed; centrality computed; top 3 nodes | 4 | Top 3 critical nodes w/ risk rating |
| 50G-03 | Pareto Frontier for COA | Frontier computed; 3 COA points named | 4 | ≥3 named COA points |
| 50G-04 | GO/SES Analytical Product | BLUF; uncertainty; assumption register; peer review | 6 | All estimates bounded; assumption register; peer review block |
▸ View full GO/NO-GO performance measures — SL 5G
50G-01: Implement a Bayesian Readiness Model
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Prior selection justified | Justified | No justification |
| 2 | [CRITICAL] Posterior w/ 90% credible interval | CI present | Point estimate only |
| 3 | Assumption register entry for prior | Documented | No entry |
| 4 | Hierarchical model if multi-echelon data | Hierarchical | Single-level |
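The posterior-with-credible-interval requirement in 50G-01 can be sketched with a conjugate Beta-Binomial model. The readiness figures (42 of 60 vehicles fully mission capable), the uniform Beta(1, 1) prior, and the grid approximation below are all illustrative assumptions, not part of the task standard; a justified prior would come from the assumption register.

```python
import math

def beta_binomial_credible_interval(successes, trials, a=1.0, b=1.0,
                                    level=0.90, grid=10_000):
    """Posterior is Beta(a + successes, b + trials - successes).
    Quantiles are approximated on a grid so no external libraries are needed."""
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    a_post = a + successes
    b_post = b + trials - successes
    # Work with the unnormalized log-density to avoid overflow.
    xs = [(i + 0.5) / grid for i in range(grid)]
    logs = [(a_post - 1) * math.log(x) + (b_post - 1) * math.log(1 - x)
            for x in xs]
    m = max(logs)
    ws = [math.exp(v - m) for v in logs]
    total = sum(ws)
    cum, lo, hi = 0.0, None, None
    for x, w in zip(xs, ws):
        cum += w / total
        if lo is None and cum >= lo_q:
            lo = x
        if hi is None and cum >= hi_q:
            hi = x
            break
    return a_post / (a_post + b_post), (lo, hi)

# Hypothetical data: 42 of 60 vehicles fully mission capable.
mean, (lo, hi) = beta_binomial_credible_interval(42, 60)
print(f"posterior mean {mean:.3f}, 90% credible interval ({lo:.3f}, {hi:.3f})")
```

Reporting the interval rather than the point estimate is what separates a GO from a NO-GO on measure 2.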
50G-02: Conduct Network Vulnerability Analysis
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Network graph w/ correct nodes/arcs | Matches data | Incorrect |
| 2 | Betweenness centrality computed | Computed | No centrality |
| 3 | [CRITICAL] Top 3 critical nodes w/ operational risk | Identified w/ risk | No risk translation |
| 4 | Node removal impact analysis | Present | No impact analysis |
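Betweenness centrality (measure 2 of 50G-02) can be computed with Brandes' algorithm in pure Python. The five-node supply network below is a hypothetical example, not evaluator-provided data.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: unweighted, undirected betweenness centrality."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                      # BFS: shortest-path counts and predecessors
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                  # accumulate pair dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Undirected graph: each pair is counted from both endpoints.
    return {v: b / 2 for v, b in bc.items()}

# Hypothetical supply network: a straight line A-B-C-D-E.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
bc = betweenness(adj)
top3 = sorted(bc, key=bc.get, reverse=True)[:3]
```

The critical measure still requires translating the top nodes into operational risk; the centrality score alone is a NO-GO.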
50G-03: Compute Pareto Frontier for COA Comparison
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Both objectives quantified from data | Quantified | Vague |
| 2 | Pareto frontier computed & plotted | Plotted | No frontier |
| 3 | [CRITICAL] ≥3 COA points named w/ operational descriptions | 3 named | <3 or no naming |
| 4 | Recommendation with assumption caveat | Caveat present | No caveat |
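The Pareto-frontier computation in 50G-03 reduces to a dominance check. The COA names and scores below are invented for illustration; the first objective is maximized, the second minimized.

```python
# Hypothetical COA scores: (combat power delivered, days to execute).
coas = {
    "COA-A Rapid Air":  (55, 3),
    "COA-B Sea Lift":   (90, 14),
    "COA-C Mixed Mode": (75, 7),
    "COA-D Rail Heavy": (70, 10),   # dominated by COA-C
}

def pareto_frontier(points):
    """Keep a point unless some other point is at least as good on both
    objectives and strictly better on one."""
    front = {}
    for name, (power, days) in points.items():
        dominated = any(
            p2 >= power and d2 <= days and (p2 > power or d2 < days)
            for n2, (p2, d2) in points.items() if n2 != name
        )
        if not dominated:
            front[name] = (power, days)
    return front

front = pareto_frontier(coas)
```

Here three named COAs survive on the frontier, which is exactly the critical threshold (≥3 named points with operational descriptions).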
50G-04: GO/SES-Ready Analytical Product
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | BLUF w/ result, confidence, key assumption | Complete | Missing or incomplete |
| 2 | [CRITICAL] All estimates have uncertainty bounds | All bounded | Any unbounded |
| 3 | [CRITICAL] Assumption register present & complete | Present | No register |
| 4 | Limitations w/ specific invalidation conditions | Present | No limitations |
| 5 | [CRITICAL] Peer review signature block | Present | No block |
| 6 | All models reproducible (seeds set) | Reproducible | Not reproducible |
SL 5H — Advanced AI Engineer (3 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 50H-01 | Enterprise RAG Architecture | Chunking justified; metadata schema; eval harness w/ MRR | 4 | Retrieval eval harness producing MRR |
| 50H-02 | Multi-Agent System | Orchestrator routes; failure recovery; schema validation | 4 | Failure recovery path functional |
| 50H-03 | AI Governance Framework | Review gates on all outputs; audit log; rollback; OPSEC | 4 | All outputs gated; OPSEC addressed |
▸ View full GO/NO-GO performance measures — SL 5H
50H-01: Design an Enterprise RAG Pipeline Architecture
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Chunking strategy w/ tradeoff rationale | Justified | No rationale |
| 2 | Metadata schema (source, date, section, classification) | Present | No schema |
| 3 | [CRITICAL] Retrieval eval harness w/ ground truth → MRR | Produces MRR | No harness |
| 4 | OPSEC implications of embedding model addressed | Addressed | Not considered |
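The critical measure in 50H-01 requires an eval harness that produces MRR from ground truth. The core metric is simple; the query and document IDs below are placeholders.

```python
def mean_reciprocal_rank(results, ground_truth):
    """results: query -> ranked list of retrieved doc ids.
    ground_truth: query -> the single relevant doc id.
    Reciprocal rank is 0 when the relevant doc is not retrieved at all."""
    total = 0.0
    for query, ranked in results.items():
        relevant = ground_truth[query]
        rr = 0.0
        for rank, doc_id in enumerate(ranked, start=1):
            if doc_id == relevant:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(results)

# Relevant doc at rank 1, rank 2, and not retrieved: MRR = (1 + 0.5 + 0) / 3.
results = {"q1": ["d3", "d1"], "q2": ["d5", "d2"], "q3": ["d9"]}
truth   = {"q1": "d3", "q2": "d2", "q3": "d7"}
mrr = mean_reciprocal_rank(results, truth)
```

A harness wraps this in a loop over the retrieval pipeline; what matters for the GO is that it runs against a ground-truth set and emits the number.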
50H-02: Design a Multi-Agent System
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Orchestrator routes to correct workers | Correct | Misrouted |
| 2 | ≥2 specialized workers w/ capabilities | Two present | <2 |
| 3 | [CRITICAL] Failure recovery (timeout, fallback, dead-letter) | Functional | No recovery |
| 4 | Tool output schemas validated before hand-off | Validated | No validation |
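The failure-recovery path in 50H-02 (timeout, fallback, dead-letter) can be sketched as a routing table with a backup map. Worker names and the simulated fault are hypothetical; a real system would enforce actual timeouts and validate output schemas before hand-off.

```python
dead_letter = []  # tasks no worker could handle

def summarize(task):        return f"summary of {task['text']}"
def translate(task):        raise TimeoutError("worker timed out")  # simulated fault
def translate_backup(task): return f"translation of {task['text']}"

ROUTES    = {"summarize": summarize, "translate": translate}
FALLBACKS = {"translate": translate_backup}

def orchestrate(task):
    """Route by task type; fall back on failure; dead-letter the rest."""
    worker = ROUTES.get(task["type"])
    if worker is None:
        dead_letter.append(task)          # no route: dead-letter immediately
        return None
    try:
        return worker(task)
    except Exception:
        backup = FALLBACKS.get(task["type"])
        if backup is not None:
            return backup(task)           # recovery path
        dead_letter.append(task)
        return None
```

The evaluator is checking that a worker failure produces a result via the fallback or lands in the dead-letter queue, never a silent loss.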
50H-03: Design an AI Governance Framework
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | [CRITICAL] Human review gates on all consequential outputs | All gated | Any ungated |
| 2 | Audit log schema (query, output, reviewer, decision, timestamp) | Present | No audit logging |
| 3 | Rollback procedure (≤15 min recovery) | Documented | No rollback |
| 4 | [CRITICAL] OPSEC classification handling addressed | Addressed | Not addressed |
SL 5M — Advanced ML Engineer (3 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 50M-01 | Drift Monitoring Pipeline | PSI computed; evaluator drift detected; alert routes | 4 | Evaluator-seeded drift detected |
| 50M-02 | Automated Retraining w/ Shadow | Trigger linked to drift; shadow mode comparison; human gate | 4 | Shadow mode comparison present |
| 50M-03 | Fairness Eval & Governance | ≥2 subgroups; model card complete; deprecation criteria | 5 | Model card complete; deprecation criteria defined |
▸ View full GO/NO-GO performance measures — SL 5M
50M-01: Build a Drift Monitoring Pipeline
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | PSI per feature w/ thresholds | Present | No PSI |
| 2 | Baseline from deployment-time data | Documented | No baseline |
| 3 | [CRITICAL] Evaluator-seeded drift detected | Detected | Not detected |
| 4 | Alert routes correctly | Routed | Not routed |
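PSI per feature (measure 1 of 50M-01) compares binned baseline proportions against current proportions. The bin values and the rule-of-thumb thresholds in the comment are conventional, not mandated by the task.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index over pre-binned proportions.
    Rule of thumb: <0.1 stable, 0.1-0.25 moderate drift, >0.25 major drift."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # deployment-time distribution
shifted  = [0.10, 0.20, 0.30, 0.40]   # hypothetical drifted distribution
score = psi(baseline, shifted)
```

Seeded drift like the shifted distribution above should push PSI past the alert threshold; identical distributions score zero.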
50M-02: Automated Retraining with Shadow Mode
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Retraining trigger linked to drift alert | Configured | No trigger |
| 2 | Candidate model registered w/ CANDIDATE status | Registered | No registration |
| 3 | [CRITICAL] Shadow mode comparison (candidate vs production) | Present | No shadow mode |
| 4 | Human approval gate before promotion | Present | Auto-promotion |
50M-03: Fairness Evaluation and Governance Package
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Fairness eval across ≥2 subgroups | ≥2 evaluated | <2 |
| 2 | Performance disparities documented | Documented | No analysis |
| 3 | [CRITICAL] Model card: assumptions, data, limitations, use, RAI | All sections | Any missing |
| 4 | [CRITICAL] Deprecation criteria defined | Present | No criteria |
| 5 | Human review gate on consequential outputs | Present | No gate |
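The subgroup evaluation in 50M-03 amounts to per-group metrics plus a disparity figure. The records below are fabricated for illustration; accuracy stands in for whatever metric the model card specifies.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: (subgroup, prediction, label) triples. Returns per-group
    accuracy and the max disparity between any two groups."""
    hits, counts = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        counts[group] += 1
        hits[group] += int(pred == label)
    acc = {g: hits[g] / counts[g] for g in counts}
    vals = list(acc.values())
    return acc, max(vals) - min(vals)

# Hypothetical eval set: group A at 9/10 correct, group B at 7/10.
records = ([("A", 1, 1)] * 9 + [("A", 1, 0)]
           + [("B", 1, 1)] * 7 + [("B", 1, 0)] * 3)
acc, disparity = subgroup_accuracy(records)
```

The disparity number feeds the "performance disparities documented" measure; the model card then explains whether the gap is acceptable and why.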
SL 5J — Advanced Product Manager (3 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 50J-01 | Portfolio Health Dashboard | 5 dimensions; RAG; readable in 60 sec | 4 | — |
| 50J-02 | Technical Investment Brief | BLUF; tradeoff table; adjusts to injected constraint | 4 | BLUF present; adjusts to constraint |
| 50J-03 | Respond to Injected Risk | Risk documented; escalation decision; response briefed | 4 | Escalation decision with rationale |
▸ View full GO/NO-GO performance measures — SL 5J
50J-01: Build a Portfolio Health Dashboard
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All 5 dimensions (milestones, deps, risk, velocity, budget) | All present | Any missing |
| 2 | RAG w/ clear definitions | Applied | No RAG |
| 3 | Readable by GO/SES in 60 sec | Readable | Requires explanation |
| 4 | Dependency health indicators | Visible | No dep view |
50J-02: Present a Technical Investment Brief
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | [CRITICAL] BLUF present at start | Present | No BLUF |
| 2 | Tradeoff table (cost, schedule, perf, risk) | Present | No tradeoff |
| 3 | Challenging question handled without defensiveness | Substantive | Defensive |
| 4 | [CRITICAL] Recommendation adjusted to injected constraint | Adjusted | No adjustment |
50J-03: Respond to an Injected Portfolio Risk
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Risk register updated | Documented | Not documented |
| 2 | [CRITICAL] Escalation decision w/ rationale | Decision made | No decision |
| 3 | Response briefed to evaluator | Briefed | Not briefed |
| 4 | Cross-program dependency impact assessed | Stated | No assessment |
SL 5K — Advanced Knowledge Manager (4 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 50K-01 | Multi-Domain Taxonomy | 3 domains; cross-domain linkages; governance | 3 | Cross-domain linkages defined |
| 50K-02 | AI-Augmented Tagging Pipeline | Confidence threshold; low-conf → review queue | 4 | Low-confidence tags route to review |
| 50K-03 | Knowledge System Health Eval | Zero-recall rate; age analysis; top 3 gaps; remediation | 4 | Zero-recall rate computed |
| 50K-04 | Unit Continuity Protocol | Handoff protocol; decay monitoring; reactivation | 4 | — |
▸ View full GO/NO-GO performance measures — SL 5K
50K-01: Design a Multi-Domain Taxonomy
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Taxonomy covers 3 domains | All present | Any missing |
| 2 | [CRITICAL] Cross-domain linkages defined | Present | No linkage |
| 3 | Vocabulary governance process documented | Documented | No governance |
50K-02: AI-Augmented Tagging Pipeline with Review Gate
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Pipeline processes documents | Runs | Errors |
| 2 | Confidence threshold w/ basis | Documented | No basis |
| 3 | [CRITICAL] Low-confidence tags → human review (not auto-applied) | Review queue | Auto-applied |
| 4 | High-confidence verified against gold standard | Verified | No verification |
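The critical routing behavior in 50K-02 is a threshold split: high-confidence tags auto-apply, everything else goes to a human review queue. The 0.85 threshold is an assumption; the task requires you to document the basis for whatever value you choose.

```python
REVIEW_THRESHOLD = 0.85  # assumed; derive the real value from gold-standard eval

def route_tags(suggestions):
    """suggestions: (doc_id, tag, confidence) triples from the AI tagger.
    Low-confidence tags are queued for review, never auto-applied."""
    applied, review_queue = [], []
    for doc_id, tag, conf in suggestions:
        target = applied if conf >= REVIEW_THRESHOLD else review_queue
        target.append((doc_id, tag, conf))
    return applied, review_queue

applied, queue = route_tags([("d1", "logistics", 0.95),
                             ("d2", "personnel", 0.60)])
```

Auto-applying the low-confidence tag is the NO-GO condition on measure 3.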
50K-03: Evaluate Knowledge System Health
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | [CRITICAL] Zero-recall rate computed w/ calculation | Computed | No analysis |
| 2 | Content age distribution analyzed | Present | No age analysis |
| 3 | Top 3 coverage gaps identified | Identified | <3 gaps |
| 4 | Prioritized remediation plan | Present | No plan |
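The zero-recall rate (the critical measure in 50K-03) is the share of searches that returned nothing. Given a query log of (query, result count) pairs, the calculation is one line; the log below is invented.

```python
def zero_recall_rate(query_log):
    """query_log: (query, result_count) pairs. The zero-recall rate is the
    fraction of searches returning no results -- a direct coverage-gap signal."""
    zero = sum(1 for _, n in query_log if n == 0)
    return zero / len(query_log)

log = [("convoy sop", 3), ("ddil checklist", 0),
       ("range card", 1), ("handover template", 0)]
rate = zero_recall_rate(log)
```

Showing the calculation, not just the number, is what the GO requires.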
50K-04: Design a Unit Continuity Protocol
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Handoff protocol for departing personnel | Present | No protocol |
| 2 | Knowledge decay monitoring (flag after 6 mo) | Present | No monitoring |
| 3 | Reactivation procedure for dormant systems | Present | No procedure |
| 4 | Protocol applied to scenario case study | Applied | Generic |
SL 5L — Advanced Software Engineer (4 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 50L-01 | OSDK-First Object Type | Query-optimized; interface contract complete | 4 | Interface contract complete |
| 50L-02 | Type-Safe TS Function w/ Tests | No type errors; discriminated unions; all tests pass | 4 | All unit tests pass |
| 50L-03 | CI/CD w/ Contract Testing | All stages; contract test catches break; human gate | 4 | Contract test catches breaking change |
| 50L-04 | Security Review & Fix | 5 categories; CRITICAL fixed; no client-side creds | 4 | CRITICAL fixed; no client-side creds |
▸ View full GO/NO-GO performance measures — SL 5L
50L-01: OSDK-First Object Type with Interface Contract
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Object Type designed for OSDK consumption | Query patterns considered | Data-centric only |
| 2 | Stable, unique PK (not mutable business key) | Stable | Mutable PK |
| 3 | [CRITICAL] Interface contract: queries, Actions, errors, versioning | Complete | Any section missing |
| 4 | Top 5 OSDK queries documented before build | Documented | No pre-build docs |
50L-02: Type-Safe TypeScript Function with Tests
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Compiles with no type errors | No errors | Type errors |
| 2 | Discriminated union error types | Present | Generic errors |
| 3 | [CRITICAL] Unit tests cover validation & error paths; all pass | All pass | Any fails |
| 4 | Input validation at Action boundary | Present | No validation |
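Task 50L-02 targets TypeScript, but the discriminated-union error pattern translates directly: every result carries a literal tag and callers branch on it, so no error case can be silently ignored. The type names here are hypothetical, sketched in Python with `typing.Literal` for consistency with the other examples.

```python
from dataclasses import dataclass
from typing import Literal, Union

@dataclass
class Ok:
    kind: Literal["ok"]
    value: int

@dataclass
class ValidationError:
    kind: Literal["validation_error"]
    field: str

@dataclass
class NotFound:
    kind: Literal["not_found"]
    object_id: str

Result = Union[Ok, ValidationError, NotFound]

def describe(result: Result) -> str:
    """Branch on the discriminant; each arm sees a narrowed type."""
    if result.kind == "ok":
        return f"value={result.value}"
    if result.kind == "validation_error":
        return f"bad field: {result.field}"
    return f"missing: {result.object_id}"
```

In TypeScript the same shape (`{ kind: "ok"; value: number } | …`) gives compile-time exhaustiveness checking, which is what separates it from generic thrown errors.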
50L-03: CI/CD Pipeline with Contract Testing
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Stages: unit, integration, contract, security, promotion | All present | Any missing |
| 2 | Branch protection: no direct push to main | Configured | Direct push allowed |
| 3 | [CRITICAL] Contract test catches a breaking change | Blocked | Not detected |
| 4 | Human approval before production | Present | Auto-promotion |
50L-04: Security Review and Fix Critical Findings
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Covers 5 categories (input val, creds, OSDK, output, access) | All covered | Any missed |
| 2 | Findings prioritized by severity | Rated | No ratings |
| 3 | [CRITICAL] CRITICAL findings fixed | Fixed | Not fixed |
| 4 | [CRITICAL] No OSDK creds in client-side code | None | Creds present |
SL 5N — Advanced UI/UX Designer (3 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 50N-01 | Design System Component | Variants; a11y notes; do/don’t examples; data binding | 4 | Accessibility documented |
| 50N-02 | DDIL-Aware App Pattern | All 4 tiers; freshness indicators; no blank screen | 4 | No blank screen at any tier |
| 50N-03 | Design Governance Proposal | Review gates; deviation mgmt; quality metrics | 3 | Deviation management process |
▸ View full GO/NO-GO performance measures — SL 5N
50N-01: Design a Design System Component
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Variants documented w/ visuals | Present | No variants |
| 2 | [CRITICAL] Accessibility notes (contrast, keyboard, screen reader) | Documented | No a11y docs |
| 3 | Do/don’t usage examples | Present | No examples |
| 4 | Data binding patterns documented | Documented | No binding docs |
50N-02: Design a DDIL-Aware Application Pattern
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | All 4 DDIL tiers (Connected, Degraded, Intermittent, Disconnected) | All present | Any missing |
| 2 | Data freshness indicators (age-based visual) | Present | No indicators |
| 3 | [CRITICAL] No blank screen at any DDIL tier | Content at all tiers | Blank screen |
| 4 | Offline-first: writes queued for sync | Queue designed | No offline handling |
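The age-based freshness indicator in 50N-02 is a tiering function from data age to a visual state. The tier boundaries below are assumptions; the design deliverable defines the real thresholds and how each state renders at every DDIL tier.

```python
# (max age in minutes, label) -- thresholds are illustrative assumptions.
FRESHNESS_TIERS = [(5, "FRESH"), (60, "STALE"), (float("inf"), "EXPIRED")]

def freshness(age_minutes):
    """Map data age to a freshness label for the UI indicator."""
    for limit, label in FRESHNESS_TIERS:
        if age_minutes <= limit:
            return label

states = [freshness(m) for m in (2, 30, 500)]
```

Even "EXPIRED" data still renders with its indicator; showing stale content with an honest age label is how the design avoids the blank-screen NO-GO.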
50N-03: Produce a Design Governance Proposal
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Design review gates defined (when, who, criteria) | Defined | No gates |
| 2 | [CRITICAL] Deviation management process | Present | No deviation mgmt |
| 3 | Quality metrics (consistency, coverage, deviation rate) | Defined | No metrics |
SL 5O — Advanced Platform Engineer (4 Tasks)
| Task | Title | Standard | Steps | Critical Items |
|---|---|---|---|---|
| 50O-01 | Fleet Topology & Upgrade | Hub/edge; parameterized templates; wave strategy; rollback | 4 | Rollback procedure documented |
| 50O-02 | SLOs with Error Budgets | SLIs defined; SLOs set; error budgets; budget policy | 4 | Error budgets computed |
| 50O-03 | Automated Compliance Pipeline | Evidence automated; dashboard pass/fail/exception | 4 | Compliance dashboard functional |
| 50O-04 | Federated Observability w/ SLO Alerts | Cross-cluster federation; SLO alert fires on breach | 4 | SLO alert fires on breach |
▸ View full GO/NO-GO performance measures — SL 5O
50O-01: Fleet Topology and Upgrade Strategy
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Fleet topology w/ hub & edge clusters | Designed | No topology |
| 2 | Cluster templates parameterized (region, classification, workload) | Parameterized | Separate templates per cluster |
| 3 | Wave-based upgrade (canary → production) | Documented | No strategy |
| 4 | [CRITICAL] Rollback procedure for failed upgrades | Present | No rollback |
50O-02: Define SLOs with Error Budgets
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | SLIs defined (availability, latency, success rate) | Defined | No SLIs |
| 2 | SLOs w/ specific targets & windows | Targets set | Vague SLOs |
| 3 | [CRITICAL] Error budgets computed from SLO targets | Computed | No budgets |
| 4 | Budget-based decision policy (stop shipping when exhausted) | Documented | No policy |
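The error-budget computation (the critical measure in 50O-02) follows directly from the SLO target and window. The 99.9% target and 30-day window below are common example values, not prescribed by the task.

```python
def error_budget(slo_target, window_days=30):
    """Error budget = 1 - SLO, expressed both as a fraction and as
    allowable downtime minutes over the window."""
    budget_fraction = 1.0 - slo_target
    minutes = budget_fraction * window_days * 24 * 60
    return budget_fraction, minutes

# A 99.9% availability SLO over 30 days leaves 43.2 minutes of downtime.
fraction, minutes = error_budget(0.999)
```

The budget-based decision policy (measure 4) then keys off this number, e.g. freeze feature deployments once the remaining budget hits zero.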
50O-03: Build an Automated Compliance Pipeline
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Vuln scan results collected as evidence | Present | Manual scan |
| 2 | Config baseline comparisons automated | Automated | Manual |
| 3 | [CRITICAL] Compliance dashboard: pass/fail/exception | Functional | No dashboard |
| 4 | Exception tracking w/ expiration dates | Tracked | No exception mgmt |
50O-04: Federated Observability with SLO-Based Alerting
| # | Performance Measure | GO | NO-GO |
|---|---|---|---|
| 1 | Cross-cluster metric federation configured | Visible across clusters | Not configured |
| 2 | Fleet-wide dashboard (resource util, pod health) | Present | No cross-cluster dashboard |
| 3 | [CRITICAL] SLO alert fires on fleet-wide SLI breach | Fires | Does not fire |
| 4 | Cross-cluster correlation demonstrated | Demonstrated | No correlation |
Platform Changes
Foundry platform updates that affect training content. Instructors: review before each course iteration and integrate into labs as appropriate.
Q1 2026 PLATFORM UPDATES
Affects SL 1 (Operator)
- AIP Analyst sessions persist when you switch tabs or change sections
- Custom background colors on Workshop sections and pages
- Markdown editing in text input widgets
- Microsoft Word export (.docx in addition to CSV)
- Usage metrics for Workshop applications
- Custom widgets on mobile
- AIP-assisted editing with custom prompts
- Recently-used functions in AIP Assist menu
- AI FDE integration
- Configurable default map styles in Ontology Manager
- Gaia → Workflow Lineage shortcut
- Compass pinning for frequently-used items
- Project selection during installation configuration
Affects SL 2 (Builder)
- Core Object Views generally available (Feb 2026)
- Global Branching support for Object Views
- Updated Object Explorer with redesigned landing page
- Incremental execution enforcement
- Preview behavior controls
- LLM data generation for test data
- Approximate nearest neighbor join
- File/file set output
- Role-based branch security
- Upgraded branch security enabled by default
- Ontology SQL in Quiver
- Quiver time series workspace
Affects SL 3 (Advanced Builder)
- AIP Logic branch selection for evals
- Autopilot (new, Mar 2026)
- Interface parameters in Automate actions
- Function effects in Automate
- Streaming time series conditions
- Presentation mode
- Multi-ontology support
- Log search from Lineage nodes
- Monitoring status colors
- Expanded access (Gaia, Quiver, Notepad, Automate)
- Role-based branch security
- Upgraded branch security for all users
- Object Views + Branching
- Materializations with row-level policies
APRIL 2026 PLATFORM UPDATES
Affects All Levels
- No-code model inference — ML models from Model Studio can be added as visual nodes for batch predictions (Spark only; one tabular input/output)
- Regex search on object type string properties and struct fields
- Link type marking inheritance — classification markings now auto-inherit on creation (fix)
- CBAC picker UI for AIP create_object_type tool
- Ontology design best practices documentation added
- Marking scopes for all Developer Console apps (not just CBAC enrollments)
- Workflow Lineage limits auto-expansion to ~800 nodes for performance
- External systems tab relocated to Settings > External systems in Code Workspaces
- Debug source tool in AI FDE for Data Connection troubleshooting