Headquarters • United States Army Europe and Africa

Maven Smart System

Training & Information Hub  —  USAREUR-AF Operational Data Team
MSS TRAINING HUB

VERSION 3.2 — APRIL 2026
DIST: UNLIMITED
Skill Level 1 Maven User Manual — All Personnel • No technical background required • Prerequisite: None • Open PDF →
BLUF
Skill Level 1 (SL 1) teaches you how to log in, navigate MSS, read dashboards, use Gaia and command-level applications, submit data through forms, interact with AI tools embedded in Workshop, and stay within authorized boundaries. Required for all USAREUR-AF personnel before operating MSS. Request your account before your first class — submit at mss.data.mil or through your data steward. Provisioning generally completes within 24 hours.
PUBLICATION
TM-10 (SL 1) — Maven User Manual — Full technical manual.  Open PDF →
CAUTION — REQUEST YOUR ACCOUNT FIRST
You cannot log in without a provisioned account. Do not wait until the first day of class. Request your account at mss.data.mil or through your unit data steward — provisioning generally completes within 24 hours. If access is not active after 24 hours, contact your data steward. Steps are in Section 1 below.

1. GETTING ACCESS — DO THIS FIRST

  1. Find your unit data steward — if you’re unsure who that is, ask your chain of command (see the DATA STEWARD note below).
  2. Ask them to submit an MSS account request with your name, unit, MOS, and required access level.
  3. MSS admin team provisions your account and assigns markings (markings = the data categories and classification levels you’re authorized to see).
  4. You receive notification when your account is active — typically within 24 hours.
  5. Receive the MSS portal URL from your unit data steward. This is your login link.
NOTE — DATA STEWARD
Your data steward manages MSS access and data quality for your organization. They may be embedded in your unit, assigned at a higher echelon, or positioned within a directorate at the ASCC. They are your first point of contact for account requests, access problems, and data errors. If you don’t know who they are, ask your chain of command.

2. WHAT IS MSS?

MSS is the mission command information system (MCIS) program of record, directed by the USAREUR-AF CG to enable rapid and accurate decision-making. It is a secure, web-based platform where your unit’s data lives and can be analyzed and acted upon. Think of it as a shared operations center for data: information from logistics, personnel, readiness, and other systems is collected, organized, and made accessible through applications your unit uses every day.

MSS is built on the Palantir Foundry platform, authorized for Army use under the Maven Smart System program.

NOTE — WHAT IS FOUNDRY?
Palantir Foundry is a commercial data platform that Army headquarters selected to run MSS. You do not need to know how it works — just how to use it. The word “Foundry” may appear in help documentation and system menus; it refers to the same platform as MSS.
MSS DOES

  • Stores data from Army systems in a single, organized location
  • Makes data visible through applications and dashboards
  • Enables units to update records, report status, and track readiness
  • Provides analysis tools for authorized personnel
  • Supports AI-assisted analysis through AIP tools embedded in applications
  • Includes Gaia — a map-based geospatial application for situational awareness and operational overlays
  • Provides command-level applications (e.g., CUB/CUA in USAREUR-AF) for operational briefing and C2

MSS IS NOT

  • Not a replacement for official systems of record (DCPDS, GCSS-A, MEDPROS, etc.)
  • Not classified by default — classification depends on data markings
  • Not a public system — access is controlled and audited

USAREUR-AF MISSION AREAS
  • Personnel Readiness — Soldier readiness status
  • Logistics — equipment availability & maintenance
  • Operational Reporting — SITREPs and updates
  • Planning — orders, unit positions, task org
  • C2 — unit status across the AOR

3. SECURITY RESPONSIBILITIES

WARNING
Unauthorized access to, disclosure of, or modification of data in MSS may constitute a violation of 18 U.S.C. § 1030 (Computer Fraud and Abuse Act) and applicable Army regulations. Violations may result in disciplinary action, loss of access, and criminal prosecution.
  1. Use only your own credentials. Do not share your CAC, PIN, or access tokens.
  2. Access only data you are authorized to view.
  3. Report misrouted data immediately. If you see data at a higher classification than your clearance — STOP and report it.
  4. Do not export data without authorization. Exports are logged.
  5. Log out when done. Do not leave an MSS session unattended on an unlocked workstation.
  6. Report security incidents immediately to your supervisor and unit security officer.

4. TASKS

TASK: ACCESS THE MAVEN SMART SYSTEM
Conditions
Provisioned MSS account, CAC reader, approved workstation and browser
Standards
Successfully authenticate with CAC and reach the MSS home screen
Equipment
CAC, CAC reader, workstation, MSS portal URL (from unit data steward)
  1. Insert your CAC into the CAC reader.
  2. Open an approved web browser (Chrome or Firefox recommended).
  3. Navigate to the MSS portal URL provided by your unit data steward.
  4. When prompted, select your authentication certificate (not email certificate).
  5. Enter your CAC PIN when prompted.
  6. MSS home screen loads — you are now logged in.
CAUTION
Do not save your PIN in the browser. Do not allow the browser to remember your login. MSS sessions may contain sensitive information.
TASK: NAVIGATE THE MSS HOME SCREEN
Conditions
Logged into MSS
Standards
Identify all major navigation elements; locate search, notifications, and profile
Element | Location | Purpose
Search bar | Top center | Find datasets, applications, and projects
Notification bell | Top right | System alerts, workflow updates
User profile icon | Top right | Account settings, markings (your authorized data categories), logout
Compass (file explorer) | Left sidebar | Browse all MSS resources; pin resources to top of Files page for quick access
Home button (logo) | Top left | Return to home screen from anywhere
Pinned items | Home main area | Shortcuts to frequently used resources
Recent activity | Home main area | Recently visited datasets and apps
Notepad | Left sidebar / search | Draft documents with AIP-assisted editing — use custom prompts and recently-used functions to accelerate writing

For Q1 2026 platform updates affecting SL 1 operators, see Platform Changes →

5. REPORTING PROBLEMS

Problem Type | Who to Contact
Cannot log in | MSS Help Desk
Cannot access a project | Unit data steward
Data appears incorrect | Unit data steward (do not correct it yourself)
System error or crash | MSS Help Desk (provide error code and screenshot)
Security incident | Supervisor and unit security officer — IMMEDIATELY
Application not working | MSS Help Desk

UPCOMING TRAINING — SL 1

ENROLLMENT
Contact the listed POC to reserve a seat. Bring your CAC and ensure your MSS account request is submitted at least 5 business days before the course start date. Virtual sessions require MS Teams access and a headset.
Dates | Location | Format | POC | Seats | Status
14 APR 2026 | Wiesbaden, Clay Kaserne, Bldg 3312, Rm 104 | In-Person | SSG Johnson | 20 / 20 | OPEN
05 MAY 2026 | Grafenwöhr, Bldg 244, Conf Rm B | In-Person | SFC Davis | 8 / 20 | 8 SEATS REMAINING
18 JUN 2026 | Stuttgart, Kelley Bks, Bldg 3357 | In-Person | SSG Martinez | 20 / 20 | OPEN
09 JUL 2026 | Virtual (MS Teams) | Virtual | SSG Johnson | 30 / 30 | OPEN

Duration: 1 day (8 hours). Course runs 0800–1700. All dates subject to change — confirm with POC 5 days prior.

Next Level — After SL 1
TM-20 (SL 2) — No-Code Builder Manual
Learn to ingest data, build Workshop applications, create Object Types, and manage projects — all without coding. Required prereq for all builder roles.
Continue to SL 2
SL 2 No-Code Builder Manual — All Staff • Prerequisite: SL 1 • No coding required • Open PDF →
BLUF
SL 2 teaches you how to ingest data, build Workshop applications, create Object Types and basic Actions, use AIP Analyst for natural-language data queries, and manage projects — all using the graphical user interface. No coding required. Prerequisite: SL 1 complete.
PUBLICATION
TM-20 (SL 2) — No-Code Builder Manual — Full technical manual.  Open PDF →

COMPETENCIES UPON COMPLETION

PROJECT MANAGEMENT (UI)
  • Use Solution Designer to visually map data flows and application architecture before building
  • Build project roadmaps using forward and backward planning from the commander’s requirement
  • Create and organize Foundry projects via the UI
  • Set up folder structure: raw / staging / curated layers
  • Manage project access and permissions via UI
  • Follow USAREUR-AF naming conventions and builder standards
DATA INGEST (NO CODE)
  • Ingest data using Pipeline Builder — visual, no code
  • Configure connectors and file sources via UI
  • Schedule pipeline runs via UI
  • Understand raw / staging / curated dataset layers
ONTOLOGY (UI)
  • Create Object Types and set primary keys via Ontology Manager UI
  • Define Interfaces and apply them to Object Types for consistent property contracts
  • Configure Object Views to control how properties display to operators
  • Create Link Types between objects via UI
  • Understand Action types (write-back, form, webhook, conditional) and configure basic Actions
  • Validate Object data in Object Explorer before building apps
WORKSHOP APPLICATIONS
  • Build and publish Workshop applications with dashboards, forms, and filters
  • Select and configure appropriate widgets for each use case
  • Apply access controls and publish to users
AIP ANALYST
  • Use AIP Analyst to ask natural-language questions against Object Types and datasets
  • Interpret AIP Analyst outputs — charts, tables, and narrative summaries
  • Validate AIP Analyst results against source data before briefing or publishing
  • Understand when AIP Analyst is sufficient vs when a Workshop app or Contour analysis is needed
BRANCHING & GOVERNANCE
  • Use Global Branching to build and promote via UI
  • Distinguish development from production environments
  • Apply USAREUR-AF builder standards

Branching = making a test copy of your work before going live. You build in the dev branch (your sandbox), test it, then publish to production (what users see).
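That dev-then-promote workflow can be sketched in plain Python (illustrative only: MSS handles branching in the UI, and the function and config names here are hypothetical):

```python
import copy

def promote(dev_branch, production, checks):
    """Copy a dev branch into production only if every check passes.

    dev_branch and production are plain dicts standing in for app configs;
    checks is a list of functions that each return True when satisfied.
    """
    if all(check(dev_branch) for check in checks):
        production.clear()
        production.update(copy.deepcopy(dev_branch))
        return True
    return False  # a check failed: production is left untouched

# Hypothetical example: a dashboard must name a data source before going live.
prod = {"title": "Readiness Dashboard v1", "source": "curated/soldier_status"}
dev = {"title": "Readiness Dashboard v2", "source": ""}

promoted = promote(dev, prod, [lambda b: bool(b["source"])])
# promoted stays False, and users keep seeing v1, until the dev branch is fixed
```

The point of the pattern is the one-way gate: nothing reaches production until every test on the dev copy passes.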

THE FOUNDRY DATA STACK

Data flows through layers. As an SL 2 builder, you work in the middle layers using visual tools. Never modify raw data — report data errors to your data steward instead.

Workshop App / AIP Analyst / AIP Agent (consume)
Ontology (Objects, Links, Actions)
Curated Dataset (Pipeline Builder output)
Staging Dataset (Pipeline Builder transforms)
Raw Dataset — READ ONLY, never modify
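In MSS you build these layers visually in Pipeline Builder, but the layering idea can be illustrated in a few lines of plain Python (record fields and values are invented for illustration):

```python
# Raw layer: ingested records, never modified (read-only by convention).
RAW = [
    {"unit": " 1-2 IN ", "status": "FMC", "date": "2026-04-01"},
    {"unit": "1-2 IN", "status": "fmc", "date": "2026-04-01"},   # duplicate
    {"unit": "3-4 AR", "status": "NMC", "date": None},           # missing date
]

def staging(raw):
    """Staging layer: normalize fields without dropping any rows."""
    return [
        {**r, "unit": r["unit"].strip(), "status": r["status"].upper()}
        for r in raw
    ]

def curated(staged):
    """Curated layer: deduplicate and keep only complete records."""
    seen, out = set(), []
    for r in staged:
        key = (r["unit"], r["status"], r["date"])
        if r["date"] is not None and key not in seen:
            seen.add(key)
            out.append(r)
    return out

clean = curated(staging(RAW))  # one clean, deduplicated row survives
```

Each layer only reads from the one below it, which is why raw data stays untouched and every fix happens downstream.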

WORKSHOP WIDGET SELECTION

You Need To… | Use This Widget
Display many objects or records | Object Table
Show details for one selected object | Object Detail
Let users filter the data they see | Filter Panel / Dropdown
Show a chart (bar, line, pie) | Chart Widget
Show geographic data on a map | Map Widget
Let users write or update data | Button + Action or Action Form
Show a single key metric prominently | Metric Tile
Navigate between app sections | Navigation / Tab Widget

ONTOLOGY SETUP ORDER (UI STEPS)

  1. Confirm curated dataset exists and is populated (Pipeline Builder pipeline passing)
  2. Open Ontology Manager in the left sidebar
  3. Create Object Type → set primary key property → map properties from curated dataset
  4. Create Link Types between related Object Types (if needed)
  5. Publish ontology branch and test in Object Explorer
  6. Build Workshop app only after Object Explorer confirms objects are visible
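Conceptually, step 3 maps curated rows to objects keyed on a primary key. A plain-Python sketch of that mapping (class and field names are hypothetical; in MSS this is configured in the Ontology Manager UI, not in code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UnitStatus:          # Object Type: PascalCase per the naming conventions
    unitId: str            # primary key property (camelCase API name)
    unitName: str
    readiness: str

def build_objects(curated_rows, primary_key="unitId"):
    """Map curated dataset rows to objects; a duplicate primary key is an error."""
    objects = {}
    for row in curated_rows:
        key = row[primary_key]
        if key in objects:
            raise ValueError(f"duplicate primary key: {key}")
        objects[key] = UnitStatus(**row)
    return objects

rows = [
    {"unitId": "WAABAA", "unitName": "1-2 IN", "readiness": "FMC"},
    {"unitId": "WAABBB", "unitName": "3-4 AR", "readiness": "NMC"},
]
objs = build_objects(rows)
```

This is why step 1 matters: if the curated dataset has duplicate or missing keys, Object Type creation fails or produces misleading objects.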

NAMING CONVENTIONS

Resource | Convention | Example
Datasets (path) | /Project/AOR/source/raw|staging|curated | /USAREUR/EUR/personnel/curated/soldier_status
Object Types | PascalCase | UnitStatus
Properties (API name) | camelCase | unitName
Properties (display name) | Title Case | Unit Name
Link Types | PascalCase verb form | HasEquipment
Workshop app names | Unit + function + version | EUR-Personnel-Readiness-v2
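The case conventions above are easy to check mechanically. A minimal sketch (these regex patterns are illustrative, not an official USAREUR-AF linter):

```python
import re

# Illustrative validators for the naming conventions above.
PATTERNS = {
    "object_type":  r"[A-Z][A-Za-z0-9]*",   # PascalCase, e.g. UnitStatus
    "property_api": r"[a-z][A-Za-z0-9]*",   # camelCase, e.g. unitName
    "link_type":    r"[A-Z][A-Za-z0-9]*",   # PascalCase verb form, e.g. HasEquipment
}

def valid(kind, name):
    """True when `name` matches the convention registered for `kind`."""
    return re.fullmatch(PATTERNS[kind], name) is not None

assert valid("object_type", "UnitStatus")
assert valid("property_api", "unitName")
assert not valid("property_api", "UnitName")  # PascalCase where camelCase is required
```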
CAUTION — SHARED RESOURCES
Changes to shared datasets and Object Types affect all downstream applications and users. Before modifying a shared resource, coordinate with your data steward.

For Q1 2026 platform updates affecting SL 2 builders, see Platform Changes →

UPCOMING TRAINING — SL 2

ENROLLMENT
SL 1 must be complete before attending SL 2. Contact the listed POC to reserve a seat. Bring completion certificate from SL 1 on day one.
Dates | Location | Format | POC | Seats | Status
21–25 APR 2026 | Wiesbaden, Clay Kaserne, Bldg 3312, Rm 104 | In-Person | SFC Chen | 7 / 15 | 7 SEATS REMAINING
11–15 MAY 2026 | Grafenwöhr, Bldg 244, Conf Rm B | In-Person | SSG Williams | 3 / 15 | 3 SEATS REMAINING
22–26 JUN 2026 | Stuttgart, Kelley Bks, Bldg 3357 | In-Person | SFC Chen | 15 / 15 | OPEN
13–17 JUL 2026 | Virtual (MS Teams) | Virtual | SSG Williams | 20 / 20 | OPEN

Duration: 5 days (40 hours). Course runs 0800–1700 each day. Prerequisite: SL 1 complete. All dates subject to change — confirm with POC 5 days prior.

Next Level — After SL 2
TM-30 (SL 3) — Advanced Builder Manual
For data-adjacent specialists: complex application design, advanced pipelines, Ontology architecture, AIP Logic, and C2DAO governance. Required prereq for all SL 4 tracks.
Continue to SL 3
FBC Foundry Bootcamp — Applied Build Event • Prerequisite: SL 2 + command-validated project • Quarterly • Outside TM chain • No TM credit granted
BLUF
The Foundry Bootcamp is a quarterly 5-day supervised build event. You bring a validated operational problem; you build a solution; SMEs are available for consultation. Minimal instruction — this is not a course. You leave with a functional product and a handoff package. FBC does not replace SL 3 and does not grant credit toward SL 4 enrollment.
PUBLICATION
FBC — Foundry Bootcamp Participant Guide — Full reference document.  Open PDF →
NOTE
FBC is outside the SL 1 through SL 5 training chain. Completion does not count as SL 3 or any other TM credit. If you need structured platform instruction, enroll in SL 3. FBC is for builders with SL 2 skills and a real problem to solve.

WHO ATTENDS

REQUIRED
  • SL 2 GO on file — hard requirement, no exceptions
  • Command-approved Project Brief — submitted to C2DAO ≥14 days before Day 1
  • Supervisor signature on enrollment request
PROJECT REQUIREMENTS
  • Specific output: named dashboard, pipeline, Ontology type, or Quiver/Contour product
  • Named consumer — a real person or role who will use the product
  • All data sources accessible before Day 1
  • No-code only — projects requiring Python / TypeScript / OSDK belong in the SL 4 tracks, not FBC
  • 5-day feasibility: functional prototype reachable within sprint

SPRINT WEEK STRUCTURE

Day | Activity
Day 1 | In-brief: scope review, environment check, kickoff (0800–0900). Build (0900–1700).
Days 2–4 | Daily standup (0800, 15 min). Build (0815–1700). SME available throughout.
Day 5 | Product demo / peer review (0800–1000). Go/No-Go determination (1000–1200). Out-brief and handoff (1300–1500).

GO STANDARD

Standard | Criterion
Functional product | The product does what your Project Brief says it will do — your named consumer can use it
Documentation | Naming conventions followed; product description explains purpose and data sources
Handoff package | Complete by end of Day 5 — product description, data sources, limitations, maintenance guidance, promotion status, POC
Governance | Product in a branch; promotion plan documented or production promotion initiated

ENROLLMENT

TIMELINE
  • T-21 days: Enrollment request submitted
  • T-14 days: Project Brief approved by C2DAO
  • T-10 days: Sprint workspace provisioned
  • T-5 days: Candidate confirms access
  • Day 1: Sprint begins
CADENCE
  • 4 sprint events per fiscal year (quarterly)
  • 4–16 participants per sprint
  • 1 SME per ≤8 participants
  • Annual schedule published each October
DOCUMENTS
Participant Guide: FBC_GUIDE.pdf  •  Coordinator Package: FBC_SPRINT_PACKAGE.pdf  •  SOP: FOUNDRY_BOOTCAMP_SOP.pdf  •  Environment Setup: FBC_ENVIRONMENT_SETUP.pdf  •  Project Brief form: CAD Appendix D
EXEC Senior Leader Executive Course — Audience: O-5+ / E-9+ • 1 day • No prerequisites • Terminal — outside the TM pipeline
BLUF
EXEC gives battalion commanders, command sergeants major, and equivalent senior leaders the operational understanding of MSS required to lead formations that depend on data-driven decision-making. You will not build anything. You will learn what the platform produces, how data products affect your formation, and how to direct your staff’s use of data as a command function. For O-5 / E-9+ personnel, EXEC fulfills the senior leader orientation requirement in place of SL 1 attendance, but it grants no SL 1 credit. It is terminal — no progression to SL 2 or beyond.
PUBLICATION
TM-EXEC — Senior Leader Executive Course — Full reference document.  Open PDF →
NOTE
EXEC is outside the SL 1 through SL 5 training chain. It does NOT grant SL 1 credit. EXEC is orientation only — there is no evaluated practical exercise or GO/NO-GO assessment. If a senior leader wants hands-on platform qualification, they should enroll in SL 1 and proceed through the standard pipeline.

WHAT THIS COURSE COVERS

YOU WILL LEARN

  • What MSS does for your formation and why it matters
  • How to evaluate data products — operationally, not technically
  • How to guide your formation’s data posture through resourcing, prioritization, and governance
  • What questions to ask about data freshness, source integrity, and product quality
  • The training pipeline that qualifies your data workforce (SL 1 through SL 5)

THIS COURSE IS NOT

  • Not a platform navigation course — you will see MSS, not operate it
  • Not a data literacy primer — you already understand why data matters
  • Not a substitute for SL 1 in the standard pipeline
  • Not a qualification to build, modify, or administer anything on the platform

DAILY SCHEDULE

Time | Block | Content
0800–0830 | 1 | Course introduction; senior leader role in the data environment
0830–0930 | 2 | Why MSS exists — strategic context, CG guidance (Ch 1)
0930–1030 | 3 | The platform and what it produces — five-layer architecture, data product types, live walkthrough (Ch 2)
1030–1045 | — | Break
1045–1200 | 4 | How data products impact your formation — data as command function, failure patterns, Commander’s Data PIRs (Ch 3)
1200–1300 | — | Lunch
1300–1345 | 5 | The training pipeline — SL 1 through SL 5, FBC, resourcing decisions (Ch 4)
1345–1430 | 6 | Governance — the governance chain, VAUTI framework, red flags (Ch 5)
1430–1445 | — | Break
1445–1530 | 7 | How data projects work — agile overview, roadmap vs POAM (Ch 6)
1530–1615 | 8 | Working with data professionals — engagement practices, terminology (Ch 7)
1615–1700 | 9 | Asking the right questions — diagnostic questions for products, workforce, and AI (Ch 8–9)

DOCUMENTS

TM-EXEC PUBLICATIONS
Course Manual: TM_EXEC_SENIOR_LEADER.pdf  •  Concepts Guide: CONCEPTS_GUIDE_TM_EXEC.pdf  •  Syllabus: SYLLABUS_TM_EXEC.pdf
NOT FINDING WHAT YOU NEED?
Contact your unit data steward for additional publications, source files, or access to restricted materials. For technical support, contact your unit data steward or the Operational Data Team.
SL 3 Advanced Builder Manual — Data-Adjacent Specialists • Prerequisite: SL 1 + SL 2 • 17/25-series • G2 • Data analysts • Open PDF →
BLUF
SL 3 is for personnel who design and own MSS solutions. This level covers complex application design (including Kairos timelines and Target Workbench), advanced Pipeline Builder, Ontology architecture, configuration of existing AIP Logic workflows, AI FDE for AI-assisted development workflows, Automations and Machinery (orientation), data governance, and C2DAO standards. All work is done via the UI; advanced coding and agent development are escalated to SL 4 developers.
PUBLICATION
TM-30 (SL 3) — Advanced Builder Manual — Full technical manual.  Open PDF →
CAUTION
Modifications to shared datasets or Object Types at SL 3 level affect all downstream applications and users across the formation, including coalition partners. Coordinate with your unit data steward and the USAREUR-AF C2DAO before publishing any changes to production resources.
NOTE — IS THIS YOUR LEVEL?
SL 3 covers advanced no-code building — application design, pipeline architecture, governance. If your role requires coding, ML, or ORSA, SL 3 is a prerequisite to a specialist track. View Specialist Tracks (SL 4/5) →

COMPETENCIES UPON COMPLETION

ADVANCED WORKSHOP DESIGN
  • Design complex Workshop applications with conditional logic and variable passing
  • Build dynamic layouts: show/hide panels based on user selections
  • Design navigation flows and inter-page parameter handoff
  • Publish and manage application versions
ADVANCED PIPELINE BUILDER
  • Build multi-source join pipelines with complex aggregations (visual)
  • Design scheduled and triggered pipeline runs
  • Review and interpret data lineage graphs
  • Escalate to SL 4 when code transforms are required
ONTOLOGY ARCHITECTURE
  • Design Object Type and Link Type models via Ontology Manager UI
  • Architecture thinking: model for downstream app requirements, not just source data
  • Design Action workflows with validation and approval logic via UI
  • Coordinate ontology changes with all downstream application owners
ADVANCED ANALYSIS
  • Conduct advanced Contour analysis: complex aggregations, pivots, calculated columns, saved views
  • Build advanced Quiver dashboards with multi-object analysis and linked views
  • Create reusable analysis templates for unit use
AIP LOGIC
  • Configure existing AIP Logic workflows (triggers, inputs, outputs)
  • Set up natural language query on Object Types via AIP Logic UI
  • Orient to Agent Studio — awareness of the full AIP toolset beyond AIP Logic and AI FDE
  • Agent building, custom tools, action logic, and production deployment are SL 4H scope
AI FDE (Foundry Development Environment)
  • Use AI FDE to build and configure AI-assisted development workflows within Foundry
  • Design prompts, manage context windows, and tune model parameters for operational use cases
  • Integrate AI FDE outputs into existing Workshop applications and Pipeline Builder workflows
  • Evaluate AI-generated outputs for accuracy and operational suitability before publishing
  • Production-scale agent development and custom tooling remain SL 4H scope
KAIROS & TARGET WORKBENCH (orientation only)
  • Configure Kairos timeline widgets — Ontology-driven, real-time planning visualization (distinct from static Gantt charts)
  • Orient to Target Workbench — targeting workflow integration; operational use is SL 4A/SL 4B scope
  • Ensure Object Types are structured to support both tools
AUTOMATIONS & MACHINERY (awareness — not evaluated)
  • Orient to Automations on Object Types — schedule or condition-triggered property updates, pipeline runs, and notifications
  • Orient to Machinery business process modeling — multi-step workflows with roles, transitions, and object linkage
  • Recognize when Automations or Machinery apply; configuration and management are SL 4 scope
GOVERNANCE & PRODUCTION
  • Manage branching and production promotion via UI
  • Apply USAREUR-AF C2DAO governance standards and naming conventions
  • Manage governance workflows with data stewards
  • Ensure coalition-facing products have C2DAO coordination and NAFv4 compliance review

SL 3 vs SL 4 — WHAT YOU OWN VS WHAT YOU ESCALATE

You Own at SL 3 (UI) | Escalate to SL 4 When…
Application design and UX | Custom Python/PySpark transforms needed
Ontology model design (via UI) | Functions on Objects (TypeScript) required
Advanced Pipeline Builder (visual) | Incremental watermark or code logic needed
AIP Logic configuration (existing workflows) | Agent building, custom tools, action logic, or production deployment needed
AI FDE prompt design and workflow integration | Custom model fine-tuning, agent orchestration, or production-scale deployment needed
Governance coordination | External application (OSDK) needed
Production promotion via UI | CI/CD pipeline automation needed

C2DAO GOVERNANCE GATES — HARD STOPS

Requirement | SL 3 Action | Hard Gate?
New shared Object Type or dataset | Coordinate with C2DAO before publishing to production | Yes
Coalition / MPE-facing data product | C2DAO coordination + NAFv4 compliance review | Yes — do not skip
Schema change to existing shared resource | Notify all downstream owners; coordinate with steward | Yes
New AIP Logic workflow on operational data | Authorization review before deployment | Yes
Access permission changes | Submit through formal request to unit data steward | Yes

UPCOMING TRAINING — SL 3

ENROLLMENT
SL 1 and SL 2 must be complete before attending SL 3. Class size is limited. Contact POC early — seats fill quickly. Bring SL 1 and SL 2 completion certificates on day one.
Dates | Location | Format | POC | Seats | Status
28 APR – 02 MAY 2026 | Wiesbaden, Clay Kaserne, Bldg 3312, Rm 104 | In-Person | CW3 Thompson | 4 / 10 | 4 SEATS REMAINING
15–19 JUN 2026 | Virtual (MS Teams) | Virtual | CW2 Rodriguez | 15 / 15 | OPEN
17–21 AUG 2026 | Wiesbaden, Clay Kaserne, Bldg 3312, Rm 104 | In-Person | CW3 Thompson | 10 / 10 | OPEN

Duration: 5 days (40 hours). Course runs 0800–1700 each day. Prerequisites: SL 1 and SL 2 complete. All dates subject to change — confirm with POC 5 days prior.

For Q1 2026 platform updates affecting SL 3 advanced builders, see Platform Changes →

Next Level — After SL 3
Specialist Tracks — SL 4 & SL 5 Series
Fourteen tracks: six Warfighting Function (WFF) tracks covering Intelligence, Fires, Movement & Maneuver, Sustainment, Protection, and Mission Command — plus eight specialist tracks for ORSA, AI Engineer, ML Engineer, Product Manager, Knowledge Manager, Software Engineer, UI/UX Designer, and Platform Engineer. Select your track based on MOS/role.
Access Specialist Tracks
SPECIALIST TRACKS — SL 4 & SL 5 Series — WFF & Technical Tracks • WFF tracks (A–F): SL 3 req. • Technical tracks (G–O): SL 3 req.
BLUF
SL 4 has two track types: Warfighting Function tracks (SL 4A–F) for WFF-assigned roles (Intelligence, Fires, M&M, Sustainment, Protection, Mission Command) — prerequisite SL 3, no coding required — and Technical Specialist tracks (SL 4G–O) for personnel who build and engineer MSS solutions (ORSA, AI Eng, MLE, PM, KM, SWE, UX Designer, Platform Eng) — prerequisite SL 3. Advanced versions (SL 5G–O) are available after completing the corresponding SL 4 technical track.

TRACK SELECTION BY MOS / ROLE

Role / MOS | Recommended Track | Advanced
Warfighting Function Tracks (SL 4A–F)
G2/S2 — MI units, ISR analysts | SL 4A (Intelligence) | —
FA officers/NCOs — Fire support | SL 4B (Fires) | —
Maneuver units — G3/S3 data roles | SL 4C (Movement & Maneuver) | —
G4/S4 — Logistics, GCSS-A | SL 4D (Sustainment) | —
Air defense, CBRN, force protection | SL 4E (Protection) | —
G6/S6 — C2 systems, networks | SL 4F (Mission Command) | —
Technical Specialist Tracks (SL 4G–O)
FA49 — Operations Research Analyst | SL 4G (ORSA) | SL 5G
G2/S2 quantitative analyst | SL 4G (ORSA) or SL 4K (KM) | SL 5G / SL 5K
17A/17C — Cyber officer/NCO | SL 4L (SWE) or SL 4H (AI Eng) | SL 5L / SL 5H
25D — IT specialist | SL 4L (SWE) | SL 5L
AI/ML engineer (GS/contractor) | SL 4H (AI Eng) or SL 4M (MLE) | SL 5H / SL 5M
Data scientist (GS/contractor) | SL 4G (ORSA) or SL 4M (MLE) | SL 5G / SL 5M
G8/S8 — Resource manager | SL 4J (PM) | SL 5J
Product Manager (PM / GS) | SL 4J (PM) | SL 5J
KMO / Knowledge Officer / 37F | SL 4K (KM) | SL 5K
Civil Affairs | SL 4J (PM) or SL 4K (KM) | SL 5J / SL 5K
UI/UX designer (GS/contractor) | SL 4N (UX Designer) | SL 5N
Platform engineer / DevOps / SysAdmin | SL 4O (Platform Eng) | SL 5O
SL 4 SERIES — WARFIGHTING FUNCTION TRACKS
TM-40A (SL 4A) — Intelligence — SL 3 Req.
Intelligence Warfighting Function
G2/S2 • MI units • ISR analysts
Prereq: SL 3 • Open PDF →
TM-40B (SL 4B) — Fires — SL 3 Req.
Fires Warfighting Function
FA officers/NCOs • Fire support coordinators
Prereq: SL 3 • Open PDF →
TM-40C (SL 4C) — Movement & Maneuver — SL 3 Req.
Movement & Maneuver WFF
Maneuver units • G3/S3 data roles
Prereq: SL 3 • Open PDF →
TM-40D (SL 4D) — Sustainment — SL 3 Req.
Sustainment Warfighting Function
Logistics • G4/S4 • GCSS-A users
Prereq: SL 3 • Open PDF →
TM-40E (SL 4E) — Protection — SL 3 Req.
Protection Warfighting Function
Air defense • CBRN • Engineer • Force protection
Prereq: SL 3 • Open PDF →
TM-40F (SL 4F) — Mission Command — SL 3 Req.
Mission Command Warfighting Function
G6/S6 • C2 systems • Network managers
Prereq: SL 3 • Open PDF →
SL 4 SERIES — TECHNICAL SPECIALIST TRACKS (Level 1)
TM-40G (SL 4G) — ORSA — SL 3 Req.
Operations Research & Systems Analysis
FA49 • G2/S2 quant analysts • Wargame analysts
Prereq: SL 3 • Advanced: SL 5G • Open PDF →
TM-40H (SL 4H) — AI Engineer — SL 3 Req.
AIP Logic, Agent Studio & LLM Integration
AI/ML specialists • 17A/17C
Prereq: SL 3 • Advanced: SL 5H • Open PDF →
TM-40M (SL 4M) — ML Engineer — SL 3 Req.
Model Development, Validation & Deployment
ML engineers • Data scientists (GS/contractor)
Prereq: SL 3 • Advanced: SL 5M • Open PDF →
TM-40J (SL 4J) — Product Manager — SL 3 Req.
Pipeline Mgmt, Milestones & Portfolio Health
G8/S8 • PMs • Civil Affairs • 4 days
Prereq: SL 3 • Advanced: SL 5J • Open PDF →
TM-40K (SL 4K) — Knowledge Manager — SL 3 Req.
Forms, Lessons Learned & Institutional Memory
KMOs • 37F • Civil Affairs • 4 days
Prereq: SL 3 • Advanced: SL 5K • Open PDF →
TM-40L (SL 4L) — Software Engineer — SL 3 Req.
Python/TypeScript, OSDK & Code Transforms
17A/17C • 25D • GS/contractor SWEs
Prereq: SL 3 • Advanced: SL 5L • Open PDF →
TM-40N (SL 4N) — UI/UX Designer — SL 3 Req.
Soldier-Centered Design, Workshop & Slate UI
UI/UX designers • Human factors • GS/contractor designers
Prereq: SL 3 • Advanced: SL 5N • Open PDF →
TM-40O (SL 4O) — Platform Engineer — SL 3 Req.
Kubernetes, CI/CD, DevSecOps & Infrastructure as Code
Platform engineers • DevOps • SysAdmins • 25D
Prereq: SL 3 • Advanced: SL 5O • Open PDF →
SL 5 SERIES — ADVANCED TRACKS (Level 2)
NOTE
SL 5 tracks are advanced-level continuations requiring the corresponding SL 4 track as a prerequisite. They cover expert-level techniques, production architecture, and operational integration at scale.
NOT FINDING WHAT YOU NEED?
Contact your unit data steward for additional publications, source files, or access to restricted materials. For technical support, contact your unit data steward or the Operational Data Team. For task-level procedures, use the Task Index →
SL 4 Technical Specialist Tracks — Developer Manuals • Prerequisite: SL 3 • Eight tracks by role • Advanced versions at SL 5
BLUF
The SL 4 Technical Specialist tracks (SL 4G–O) cover developer-level capabilities requiring coding, advanced tooling, or specialized technical expertise — prerequisite SL 3. Warfighting Function tracks (SL 4A–F) are no-code, operationally focused training covering Intelligence, Fires, M&M, Sustainment, Protection, and Mission Command MSS integration — prerequisite SL 3. Select your track using the MOS/Role table on the HOME tab.
TM-40G (SL 4G) — ORSA Track SL 3 Required
Operations Research & Systems Analysis
FA49 • G2/S2 quantitative analysts • Wargame analysts
  • Configure Code Workspaces (Python/R) within Foundry
  • Statistical modeling: regression, classification, validation for readiness/logistics
  • Time series forecasting with ARIMA/SARIMA patterns
  • Monte Carlo simulation for COA comparison and risk quantification
  • Linear programming for resource allocation and scheduling optimization
  • Wargame/exercise data collection architecture and aggregation pipelines
  • Analytical decision support products (Quiver/Contour) to commander standard
  • Communicate uncertainty: confidence intervals, sensitivity analysis, briefing standards
Prereq: SL 3 • Advanced: SL 5G • Open PDF →
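As a flavor of the Monte Carlo competency in this track, here is a minimal sketch comparing two hypothetical COAs on the probability of completing within a deadline (the distributions, means, and deadline are invented for illustration):

```python
import random

def p_complete_by(deadline_days, mean_days, sd_days, n=10_000, seed=1):
    """Monte Carlo estimate of P(completion time <= deadline).

    Completion time is modeled as Gaussian purely for illustration; a real
    ORSA analysis would fit distributions to wargame or historical data.
    """
    rng = random.Random(seed)
    hits = sum(rng.gauss(mean_days, sd_days) <= deadline_days for _ in range(n))
    return hits / n

# Hypothetical COAs: A is faster on average but more variable than B.
p_a = p_complete_by(30, mean_days=26, sd_days=4)
p_b = p_complete_by(30, mean_days=28, sd_days=1)
# With these inputs, B is the safer bet despite its slower mean.
```

The same structure scales to risk quantification: replace the Gaussian draw with whatever uncertainty model the data supports and count the outcomes of interest.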
TM-40H (SL 4H) — AI Engineer Track SL 3 Required
AIP Logic, Agent Studio & LLM Integration
AI/ML specialists • 17A/17C
  • Author AIP Logic workflows: prompt engineering, chain design, output handling
  • Build and configure AIP Agent Studio agents with tools, memory, and orchestration
  • Implement LLM integration patterns: ontology data grounding, RAG, context construction
  • Apply AI safety requirements: human-in-the-loop gates, output validation, OPSEC
  • Write Python transforms that prepare data for AI consumption
  • Connect AIP Logic workflows to Object Types and Actions
  • Test and red-team AI outputs; evaluate quality against defined standards
  • Deploy and monitor AIP Logic workflows in production
Prereq: SL 3 • Advanced: SL 5H • Open PDF →
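The grounding and context-construction pattern in the RAG bullet can be sketched generically. The snippet below is a hedged illustration only: it uses keyword overlap in place of a real embedding retriever, and `build_grounded_prompt`, the record IDs, and the record text are all invented for this example, not part of the AIP Logic API.

```python
def build_grounded_prompt(question: str, records: list, k: int = 3) -> str:
    """Toy retrieval-augmented prompt: rank records by keyword overlap with
    the question, then build a prompt grounded in the top-k hits."""
    q_terms = set(question.lower().split())

    def overlap(rec: dict) -> int:
        # Crude relevance score: shared lowercase tokens.
        return len(q_terms & set(rec["text"].lower().split()))

    top = sorted(records, key=overlap, reverse=True)[:k]
    context = "\n".join(f"- [{r['id']}] {r['text']}" for r in top)
    return (
        "Answer using ONLY the context below. Cite record IDs.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# Hypothetical lessons-learned records.
records = [
    {"id": "LL-01", "text": "convoy fuel consumption rose in cold weather"},
    {"id": "LL-02", "text": "bridge classification limits heavy equipment routes"},
    {"id": "LL-03", "text": "fuel resupply intervals shortened during winter ops"},
]
prompt = build_grounded_prompt("How does cold weather affect fuel planning?", records, k=2)
```

The "Answer using ONLY the context" instruction is the grounding step; it is also where human-in-the-loop review and output validation (the AI safety bullet) attach.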
TM-40M (SL 4M) — ML Engineer Track SL 3 Required
Code Workspaces, Model Training & Deployment
ML engineers • Data scientists building/deploying models on MSS
  • Configure Code Workspaces for model development (GPU, packages, environment management)
  • Build and evaluate ML models within the Foundry environment
  • Manage model versioning, experiment tracking, and reproducibility
  • Deploy models to production and integrate with Ontology Objects and Actions
  • Implement MLOps patterns: monitoring, drift detection, retraining triggers
  • Apply responsible AI practices and model documentation standards for operational use
Prereq: SL 3 • Advanced: SL 5M • Open PDF →
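The drift-detection bullet can be illustrated with the simplest possible check: flag when a live feature's mean moves too far from its baseline. Production systems would use stronger tests (PSI, Kolmogorov-Smirnov); the data below is made up for illustration.

```python
import statistics

def mean_shift_drift(baseline: list, live: list, threshold: float = 2.0) -> bool:
    """Crude drift check: True when the live mean sits more than `threshold`
    baseline standard deviations away from the baseline mean."""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    z = abs(statistics.mean(live) - mu) / sd
    return z > threshold

# Notional feature values: a training-time baseline and two live windows.
baseline = [10.0, 11.0, 9.5, 10.5, 10.0, 9.8, 10.2, 10.1]
stable   = [10.1, 9.9, 10.3, 10.0]   # looks like the baseline
drifted  = [14.0, 13.5, 14.2, 13.8]  # clearly shifted
```

A check like this would sit behind the "retraining triggers" in the MLOps bullet: when it fires, the pipeline alerts the ML engineer or queues a retrain.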
TM-40J (SL 4J) — Product Manager Track SL 3 Required
Agile Project Management for Data & AI Capabilities
PMs • Product owners • G8/S8 • Technical team leads
  • Stand up Agile project structures (backlog, sprint cadence, ceremonies) for data and AI builds
  • Write user stories and acceptance criteria that SL 4G–O developers can execute without ambiguity
  • Manage the ML/AI project lifecycle: six phases from Problem Definition through Sustainment
  • Translate commander requirements into prioritized, sprint-ready backlogs
  • Specify project tracking systems (sprint boards, status dashboards) for SL 4L implementation
  • Build and maintain risk registers; manage dependency blockers across specialist tracks
  • Conduct production readiness reviews against the Definition of Done before release
  • Execute change management plans for new MSS capability rollout to operational units
Prereq: SL 3 • Advanced: SL 5J • Open PDF →
TM-40K (SL 4K) — Knowledge Manager Track SL 3 Required
Knowledge Repositories, AIP Summarization & Lessons Learned
KMOs • 37F • S2/S3 KM roles • AAR facilitators
  • Design knowledge architecture for AAR, lessons learned, doctrine, and SOP repositories
  • Build AAR capture systems using Workshop forms and Object Type pipelines
  • Design and operate lessons-learned ingestion and tagging pipelines
  • Use AIP Logic for knowledge summarization, search augmentation, and theme extraction
  • Build full-text and semantic search systems over knowledge repositories
  • Manage doctrine and SOP version control within Foundry
  • Build personnel expertise mapping (skills/experience registries)
  • Design knowledge transfer and unit continuity processes using MSS
  • NEW: Leverage Document Intelligence (GA) for automated document parsing and extraction
  • NEW: Use Object Views (GA) for curated knowledge browsing interfaces
Prereq: SL 3 • Advanced: SL 5K • Open PDF →
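The full-text search bullet reduces, at its core, to an index plus a ranking function. A toy sketch, assuming nothing about Foundry's actual search services; the AAR IDs and text are hypothetical:

```python
from collections import Counter

def index_documents(docs: dict) -> dict:
    """Build a tiny term-frequency index over lessons-learned entries."""
    return {doc_id: Counter(text.lower().split()) for doc_id, text in docs.items()}

def search(index: dict, query: str, top_n: int = 2) -> list:
    """Rank documents by summed term frequency for the query terms,
    dropping documents that match nothing."""
    terms = query.lower().split()
    scored = {doc_id: sum(tf[t] for t in terms) for doc_id, tf in index.items()}
    ranked = sorted(scored, key=scored.get, reverse=True)
    return [d for d in ranked if scored[d] > 0][:top_n]

# Hypothetical AAR snippets.
aar_entries = {
    "AAR-7":  "river crossing rehearsal reduced crossing time",
    "AAR-12": "night resupply convoy timing improved with rehearsal",
    "AAR-19": "radio retrans placement failed on reverse slope",
}
hits = search(index_documents(aar_entries), "rehearsal crossing")
```

Semantic search (also in the bullet) replaces the term-frequency score with embedding similarity, but the index-then-rank shape stays the same.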
TM-40L (SL 4L) — Software Engineer Track SL 3 Required
OSDK, Full-Stack Foundry Applications & Platform SDK
SWEs • 17A/17C • 25D
  • Authenticate and query the Foundry Ontology via OSDK (TypeScript/Python)
  • Execute Actions, subscribe to Object changes, handle pagination and filtering via OSDK
  • Use Foundry Platform SDK for dataset operations, file management, and branch management
  • Build TypeScript Functions on Objects (computed properties, bulk query patterns)
  • Write and test complex Action validators with TypeScript
  • Build Slate applications integrated with the Foundry API
  • Apply USAREUR-AF code review and deployment standards for MSS applications
  • NEW: Use Pilot for AI-assisted code generation within Code Repositories
  • NEW: Monitor OSDK client health via the Health Dialog dashboard
  • NEW: Integrate external tools via Model Context Protocol (MCP) connectors
Prereq: SL 3 • Advanced: SL 5L • Open PDF →
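The pagination-handling bullet follows a pattern worth knowing before Day 1: keep requesting pages until the server stops returning a next-page token. The sketch below is generic Python against a faked endpoint; it is not the OSDK call signature, and `fetch_page`, `nextPageToken`, and the fake data are all assumptions for illustration.

```python
def fetch_all(fetch_page, page_size: int = 100) -> list:
    """Consume a cursor-paginated API: loop until no next-page token."""
    items, token = [], None
    while True:
        page = fetch_page(page_size=page_size, page_token=token)
        items.extend(page["data"])
        token = page.get("nextPageToken")
        if not token:
            return items

# Fake paged endpoint for demonstration: 5 items served 2 per page.
DATA = list(range(5))

def fake_page(page_size, page_token):
    start = int(page_token or 0)
    chunk = DATA[start:start + page_size]
    nxt = start + page_size
    # Only include a token when more data remains.
    extra = {"nextPageToken": str(nxt)} if nxt < len(DATA) else {}
    return {"data": chunk, **extra}

all_items = fetch_all(fake_page, page_size=2)
```

Whatever the real client looks like, the invariant is the same: the loop terminates on the absence of a token, never on an empty page.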
Am I Ready for SL 4L? — Self-assess before enrolling:
  • Can you write a basic TypeScript or Python function from scratch?
  • Do you understand REST APIs (GET, POST, status codes, JSON payloads)?
  • Can you read and write async/await patterns?
  • Have you used a package manager (npm or pip)?
  • Can you navigate a terminal and run CLI commands?

If you answered No to 2+ items, complete the Self-Study Addendum (included with TM-40L) before Day 1. Primary feeders: 17A/17C, FA26, civilian SWEs.
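If the async/await item gives you pause, this is the shape of the pattern in plain Python, with the network call simulated; `fetch_status` and the payloads are made-up stand-ins, not an MSS API.

```python
import asyncio
import json

async def fetch_status(payload: str) -> dict:
    """Stand-in for an async API call: parse a JSON payload after
    yielding control, as a real network await would."""
    await asyncio.sleep(0)
    return json.loads(payload)

async def main() -> list:
    payloads = ['{"status": 200}', '{"status": 404}']
    # Concurrent awaits: the pattern the checklist asks about.
    results = await asyncio.gather(*(fetch_status(p) for p in payloads))
    return [r["status"] for r in results]

codes = asyncio.run(main())
```

If you can read this snippet and predict `codes`, you can answer Yes to the async/await and JSON items; if not, the Self-Study Addendum covers both.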

TM-40N (SL 4N) — UI/UX Designer Track SL 3 Required
Soldier Centered Design, Workshop & Slate UI
UI/UX designers • Human factors • GS/contractor designers
  • Conduct user research in operational and classified environments (interview, contextual inquiry, usability testing)
  • Design information architectures for data-dense operational displays
  • Build interactive prototypes from low-fidelity sketches through high-fidelity mockups
  • Design Workshop layouts: widget selection, dashboard hierarchy, responsive patterns
  • Apply visual design standards for tactical displays: classification marking, contrast, field conditions
  • Ensure Section 508 / WCAG 2.1 AA accessibility compliance
Prereq: SL 3 • Advanced: SL 5N • Open PDF →
TM-40O (SL 4O) — Platform Engineer Track SL 3 Required
Kubernetes, CI/CD, DevSecOps & Infrastructure as Code
Platform engineers • DevOps • SysAdmins • 25D
  • Architect and operate Kubernetes clusters for MSS workloads
  • Implement Infrastructure as Code with GitOps workflows and continuous reconciliation
  • Design CI/CD pipelines: automated build, test, scan, and deploy for MSS applications
  • Harden containers using DoD Iron Bank images, vulnerability scanning, and SHA256 digest pinning
  • Deploy across classification boundaries and DDIL environments (air-gapped, edge clusters)
  • Manage RMF/ATO lifecycle from the infrastructure perspective, STIG compliance
  • NEW: Configure and optimize Compute Modules (GA) for scalable pipeline execution
  • NEW: Manage Data Connection source types and third-party integration patterns
Prereq: SL 3 • Advanced: SL 5O • Open PDF →
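The digest-pinning bullet can be made concrete with a small validation helper. A mutable tag like `:latest` can change underneath a deployment; a SHA256 digest identifies exact image content. The registry name below is a placeholder, and the regex is a simplified sketch, not the full OCI reference grammar.

```python
import re

# Accept only references pinned by digest: name@sha256:<64 hex chars>.
PINNED = re.compile(r"^[\w./-]+@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image_ref: str) -> bool:
    """True only when the container image reference is pinned by digest."""
    return bool(PINNED.match(image_ref))

good = "registry.example.mil/app@sha256:" + "a" * 64  # placeholder digest
bad  = "registry.example.mil/app:latest"              # mutable tag
```

A check like this typically runs as a CI/CD pipeline gate, failing the build before an unpinned image can reach a cluster.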

TM designations (TM-10 through TM-50O) are internal USAREUR-AF C2DAO course identifiers, not DA-published technical manuals.

Warfighting Function Tracks (SL 4A–F)
Designation | Track | Publication
SL 4A | Intelligence | TM_40A_INTELLIGENCE.pdf
SL 4B | Fires | TM_40B_FIRES.pdf
SL 4C | Movement & Maneuver | TM_40C_MOVEMENT_MANEUVER.pdf
SL 4D | Sustainment | TM_40D_SUSTAINMENT.pdf
SL 4E | Protection | TM_40E_PROTECTION.pdf
SL 4F | Mission Command | TM_40F_MISSION_COMMAND.pdf
Next Level — After SL 4
SL 5 — Advanced Developer Tracks
Expert-level continuation of each SL 4 specialist track. For senior technical leads, platform architects, and developers building enterprise-scale MSS capabilities.
Continue to SL 5
SL 5 Advanced Developer Tracks Prerequisite: SL 4 (by track) • Tracks SL 5G–O • All 8 tracks available
BLUF — ADVANCED TRACKS
The SL 5 series provides advanced-level instruction for each developer track, building directly on the corresponding SL 4 specialist manual (SL 4G–O). Intended for senior technical leads, platform architects, and senior developers designing enterprise-scale MSS capabilities. All 8 tracks are complete and available.

SL 5 SERIES — PUBLICATIONS

TM-50G (SL 5G) — ORSA Advanced
Advanced
Advanced ORSA
Prerequisite: SL 4G
  • Nonlinear programming, stochastic models
  • Agent-based modeling (ABMS)
  • Campaign wargame data architecture
TM-50H (SL 5H) — AI Engineer Advanced
Advanced
Advanced AI Engineering
Prerequisite: SL 4H
  • Multi-agent orchestration & shared state
  • Advanced RAG, domain-adapted LLMs
  • AI red-team assessment & observability
TM-50M (SL 5M) — ML Engineer Advanced
Advanced
Advanced ML Engineering
Prerequisite: SL 4M
  • Automated retraining pipelines
  • Transformer fine-tuning, GNNs
  • Federated retraining, adversarial robustness
TM-50J (SL 5J) — PM Advanced
Advanced
Advanced Product Management
Prerequisite: SL 4J
  • PI planning, cross-team governance
  • GO/SES briefing, Palantir partnership
  • Technical debt at program scale
TM-50K (SL 5K) — KM Advanced
Advanced
Advanced Knowledge Management
Prerequisite: SL 4K
  • Federated KM architecture, NATO integration
  • STANAG 4778 conformance
  • Knowledge graphs at scale
TM-50L (SL 5L) — SWE Advanced
Advanced
Advanced Software Engineering
Prerequisite: SL 4L
  • Scale, multi-tenancy, event streaming
  • OWASP, SAST, authorized pen testing
  • Architecture review, platform governance
TM-50N (SL 5N) — UI/UX Advanced
Advanced
Advanced UI/UX Design
Prerequisite: SL 4N
  • Design systems at scale, component libraries
  • DDIL-aware and cross-domain UI design
  • DesignOps, ResearchOps, accessibility at enterprise scale
TM-50O (SL 5O) — Platform Eng Advanced
Advanced
Advanced Platform Engineering
Prerequisite: SL 4O
  • Multi-cluster fleet management, SRE practices
  • RMF/ATO automation, continuous compliance
  • Cross-domain infrastructure, developer experience engineering
NOTE — PREREQUISITES
Each SL 5 track requires completion of the corresponding SL 4 track. Personnel should confirm SL 4 proficiency with their data steward before beginning SL 5 content. Contact the USAREUR-AF Operational Data Team for access questions.
DRAFT PUBS Strategy, Doctrine & Data Literacy Publications Command-level strategy • VAULTIS framework • Foundational data literacy content • Platform-agnostic
CAUTION — DRAFT PUBLICATIONS
These publications are in draft status and have not been approved for official distribution. Do not distribute outside the USAREUR-AF training program without authorization from the USAREUR-AF Operational Data Team.
BLUF
Before touching MSS, understand why data matters. Data Literacy for Senior Leaders is written for commanders and senior leaders. Data Literacy Technical Reference is the comprehensive reference for all personnel. Neither publication is platform-specific.

COMMAND STRATEGY

STRATEGIC CONTEXT
The documents below establish the command-level vision and process for data-driven operations at USAREUR-AF. They provide the why behind the training program and define how capability development flows from problem identification through fielding.
CG-SIGNED • MAY 2025
USAREUR-AF Data and Analytics Strategy

Signed by GEN Donahue. Establishes the command vision for data-driven operations over the next 3–5 years. Defines four strategic outcomes: Decision Advantage, Data Interoperability, Modernize Theater Data Infrastructure, and Data-Ready Workforce.

Key frameworks: VAULTIS data attributes • Cognitive Hierarchy (Data → Information → Knowledge → Shared Understanding → Decision Advantage) • Decision Dominance

Vision: Leverage data at speed and scale for decision dominance and optimized operations.

CUI • ODT / CTO
Unified Data Transition Strategy

Quarterly product cycle for identifying, developing, and deploying data capabilities. Two phases: Discovery & Framing (Problem ID → Bootcamp → CADs) and Iteration & Implementation (PoC → Exercise validation).

Key events: Foundry Bootcamp • Capability Awareness Days (CADs) • CG-chaired Priority Steering Board (PSB) • Forcing Function exercises

Decision gate: Persevere, pivot, or divest at exercise validation.

ODT PROCESS
Capability Lifecycle Process

End-to-end lifecycle for how capabilities move from intake through fielding, sustainment, and retirement. Five phases: Intake & Scoping → Concept → Development (SAFe/ART) → Execution & Sustainment → Evolution / EOL.

Key gates: Approval Gate (PSB with CG) between Concept and Development • Transition Gate between Execution and Evolution/EOL

Principle: New capabilities built inside SAFe; operationalized outside SAFe.

Open Capability Lifecycle Graphic →  •  Full Process Detail →

DRAFT DATA LITERACY PUBLICATIONS

SENIOR LEADERS (O-5+ / SGM+)
Data Literacy for Senior Leaders

Written for commanders, senior NCOs, and senior Civilians. Covers command responsibilities, evaluating data products, directing a data-capable formation, and decision frameworks.

Format: Short (~20–30pp). Principles, not procedures. Chapter/paragraph numbered.

Key topics: Commander’s data responsibilities • Evaluating analytical products • Standing up MSS capability • Data governance and stewardship

ALL PERSONNEL
Data Literacy Reference

Comprehensive platform-agnostic data literacy reference. Recommended prior reading before SL 1.

Format: Long (~50–100pp). Examples, vignettes, detailed explanations, annexes.

Key topics: Data types and structures • Pipeline concepts • Data quality • Analysis fundamentals • Security and classification • Operational data integration • Governance

FULL PUBLICATIONS INDEX

Publication | Audience | Purpose | When to Read
Command Strategy
Data & Analytics Strategy | All personnel | CG-signed command vision; 4 strategic outcomes; VAULTIS; decision dominance | Strategic context for all training
Unified Data Transition Strategy | SL 3+, ODT, CTO | Quarterly product cycle; PSB; capability development process (CUI) | Before product submissions
Capability Lifecycle | ODT, PMs, ART | End-to-end capability lifecycle: intake → concept → dev (SAFe) → execution & sustainment → evolution/EOL | Before PSB submissions; process orientation
Foundation — All Personnel
Data Literacy (SL) | O-5+ / SGM+, Sr Civilians | Principles, command responsibilities | Before directing MSS use
Data Literacy | All personnel | Comprehensive data literacy reference | Before SL 1 (recommended)
SL 1 | All personnel | Operate MSS as end user | Before first MSS access
SL 2 | All staff | Build pipelines, Ontology, Workshop via UI — no code | After SL 1
SL 3 | Data-adjacent specialists | Design complex apps; governance; C2DAO standards | After SL 1 + SL 2
SL 4 — Warfighting Function Tracks (by WFF assignment)
SL 4A | G2/S2, MI, ISR | Intelligence WFF MSS integration | After SL 3
SL 4B | FA, fire support | Fires WFF MSS integration | After SL 3
SL 4C | Maneuver, G3/S3 | Movement & Maneuver WFF MSS integration | After SL 3
SL 4D | G4/S4, logistics | Sustainment WFF MSS integration | After SL 3
SL 4E | Air defense, CBRN, force protection | Protection WFF MSS integration | After SL 3
SL 4F | G6/S6, C2, networks | Mission Command WFF MSS integration | After SL 3
SL 4 — Technical Specialist Tracks (by role/MOS)
SL 4G | ORSA / FA49 | Statistical modeling, simulation, wargame analytics | After SL 3
SL 4H | AI Engineers | AIP Logic authoring, Agent Studio, LLM integration | After SL 3
SL 4M | ML Engineers | Code Workspaces, model training, MLOps | After SL 3
SL 4J | PMs / G8 | PM dashboards, milestone tracking, portfolio analysis | After SL 3
SL 4K | KMs / KMOs | Knowledge repositories, AIP summarization, lessons learned | After SL 3
SL 4L | SWEs | OSDK, full-stack Foundry apps, TypeScript Functions | After SL 3
SL 4N | UI/UX Designers | Soldier Centered Design, Workshop & Slate UI, accessibility | After SL 3
SL 4O | Platform Engineers | Kubernetes, CI/CD, DevSecOps, Infrastructure as Code | After SL 3
SL 5 — Advanced Technical Tracks (by role/MOS)
SL 5G–O | Senior developers (all tracks) | Advanced versions of each SL 4 specialist track | After SL 4 (by track)
CDA Reference — SL 3 and Specialist Tracks (SL 4G–O)
EA Series (00–05) | SL 3+, SL 4K, SL 4L | Enterprise Architecture — foundation, schools of thought, artifacts, governance, military application | With or after SL 3
CDA Doctrine Overview | SL 4G–O | Doctrine-driven development; JRTC lessons; Foundry Ontology blueprint | At start of SL 4G–O
Identity vs. Classification | SL 3, SL 4K, SL 4L | Identity resolution and classification governance at scale | With SL 3
Enterprise Data Compass | SL 4J, SL 4K | Authoritative data architecture, ontology, and semantic governance standard | With SL 4J/K
CDA Slide Library | All tracks (prereq reading) | 29 decks — Intro To Data (SL 1 prereq), Data 101 (SL 2 prereq), Data 201 (SL 3/SL 4G–O prereq) | Before each TM level

CDA REFERENCE MATERIAL

SPECIALIST TRACK PREREQ READING
The CDA Reference material below is required reading for SL 3 and all SL 4G–O specialist tracks. It is not required for SL 1/SL 2 or WFF tracks (SL 4A–F). Consult the DEPENDENCY_MAP for per-track required reading.
ENTERPRISE ARCHITECTURE SERIES
EA-00 through EA-05

Six-module reference series covering EA foundations, schools of thought, artifacts and views, governance, and military application. Supports SL 3, SL 4K, and SL 4L.

Key topics: EA vs DA • TOGAF, Zachman, DODAF frameworks • Capability mapping • NAF/ArchiMate • Army EA governance

CDA DOCTRINE SERIES
Doctrine-Driven Development

Doctrine-aligned data product design using JRTC lessons learned. Covers doctrine-first Ontology design, the Three-Generation Dilemma, and AVT25 assessment case study.

Key topics: MDMP data support • COA analysis modeling • Doctrinal object types • AVT25 tools case study

CASE STUDY
XVIII Airborne Corps — Fighting with Live Data

XVIII ABC’s corps-level ODT pilot. Their organizational journey, manning structure (PM + UX + SWE + DE + DS), problem-solution development process, Program Increment cycles, and BDA visualization case study (prototype → MVP → POR in 9 months). Military Review, Feb 2026.

Key topics: ODT organization • TIO governance • ASWF methodology • Exercise integration • Echeloned ODT employment • Decision optimization

Required reading: SL 4J, SL 4F • Recommended: All SL 4 tracks

CASE STUDY
Lessons Learned — AVT25 Assessment Tools

How five tools in the same enclave multiplied work exponentially by failing to share doctrinal primitives. The case for doctrine-first shared data architecture.

Key topics: Shared primitives • DBO architecture • Exponential work multiplication • Time cost analysis

REFERENCE
Achieving Decision Dominance (Adkins)

One officer’s thought piece proposing terminology for decision optimization at echelon. Introduces useful shorthand: operationalized data, Automated Fighting Products (AFP), and Decision Optimization Teams. Names the Maven Smart System as an ASCC-level COP platform. Military Review, Jan–Feb 2025.

Key topics: Operationalized data • AFP evolution • DOT at echelon • FA 26B/49/57 workforce • Training pipeline

Supplementary reading: SL 4F, SL 4G, SL 4J • Recommended: All SL 4 tracks, Senior Leaders

EXTERNAL DOCTRINE & INSTITUTIONAL SOURCES

INSTITUTIONAL REFERENCES
The following sources from MCCoE, CALL, and TRADOC inform MSS curriculum design and are recommended supplementary reading for instructors and course developers.
MCCoE
Mission Command Center of Excellence — Decision Optimization CONOPS

MCCoE’s conceptual framework for integrating data-centric capabilities into mission command. Defines decision optimization at echelon and the role of Operational Data Teams in the command post.

Relevance: SL 4F (Mission Command), SL 4J (Product Manager), EXEC (Senior Leader)

CALL
Center for Army Lessons Learned — FY24 MCTP Key Observations

CTC observer trends on data-centric operations, common gaps in unit-level data readiness, and recommendations for training programs. Published Feb 2025.

Relevance: SL 1 (all personnel), SL 4A–F (WFF tracks), instructor development

TRADOC
TRADOC Data Literacy Framework

TRADOC’s institutional approach to data literacy across the force. Defines competency levels, assessment criteria, and integration with PME. Informs the MSS SL 1/SL 2 foundation sequence.

Relevance: Data Literacy publications, SL 1, SL 2, instructor certification (T3-I)

MCCoE / ARMY.MIL
Army’s Combined Arms Command to Integrate Maven C2 Smart System

Public announcement of Maven C2 integration into training and education at the Combined Arms Center. Establishes institutional backing for MSS-based training programs. Published Feb 2026.

Relevance: All tracks — institutional context for the MSS training program

CORE DATA LITERACY CONCEPTS

DATA TYPES
Structured (tables, rows, columns) • Semi-structured (JSON, XML) • Unstructured (documents, images). MSS ingests all three types from Army source systems.
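The three data types can be shown side by side in a few lines. The units and figures are invented for illustration; only the shape of each type matters here.

```python
import json, csv, io

# Structured: fixed rows and columns (CSV/table).
csv_text = "unit,qty\n1-5 IN,12\n2-7 CAV,8\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Semi-structured: nested fields, schema carried inside the data (JSON).
report = json.loads('{"unit": "1-5 IN", "equipment": {"trucks": 4}}')

# Unstructured: free text; any structure must be extracted downstream.
aar_note = "Convoy departed 0630, arrived 0915 after refuel halt."
```

Note that the CSV reader returns every value as a string; deciding that `qty` is a number is exactly the kind of structure the pipeline layers below add.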
DATA PIPELINE
Raw → Staging → Curated → Ontology → Application. Each layer adds quality, structure, and meaning. Never modify raw data — it is the source of truth.
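The layer progression above can be sketched in a few lines of Python; field names and records are invented for illustration. The key property is that each layer produces a new output and the raw input is never modified.

```python
def stage(raw_rows: list) -> list:
    """Staging: standardize field names, trim whitespace, fix types.
    Returns new records; the raw input is left untouched."""
    return [{"unit": r["UNIT"].strip().upper(), "qty": int(r["QTY"])}
            for r in raw_rows]

def curate(staged: list) -> list:
    """Curated: drop records that fail basic quality rules."""
    return [r for r in staged if r["qty"] >= 0]

# Raw layer: messy source-system extract (illustrative).
raw = [{"UNIT": " 1-5 in ", "QTY": "12"}, {"UNIT": "2-7 CAV", "QTY": "-3"}]
curated = curate(stage(raw))  # raw itself is never modified
```

Because raw is preserved as the source of truth, a bad staging or curation rule can always be fixed and rerun without data loss.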
DATA QUALITY
Accuracy, completeness, consistency, timeliness, uniqueness. Bad decisions follow bad data. Report quality issues to your data steward rather than working around them.
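Two of the five dimensions, completeness and uniqueness, can be checked mechanically. A minimal sketch with made-up readiness records:

```python
def quality_report(rows: list, key: str, required: list) -> dict:
    """Score a dataset on completeness (required fields populated)
    and uniqueness (no duplicate key values)."""
    total = len(rows)
    complete = sum(
        all(r.get(f) not in (None, "") for f in required) for r in rows
    )
    keys = [r.get(key) for r in rows]
    return {
        "completeness": complete / total,
        "unique": len(keys) == len(set(keys)),
    }

# Illustrative records: one blank status, one duplicated ID.
records = [
    {"id": "A1", "unit": "1-5 IN", "status": "FMC"},
    {"id": "A2", "unit": "2-7 CAV", "status": ""},
    {"id": "A2", "unit": "3-8 AR", "status": "NMC"},
]
report = quality_report(records, key="id", required=["unit", "status"])
```

A report like this gives your data steward something concrete to act on: which dimension failed, and by how much, rather than a vague "the data looks wrong."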
DATA GOVERNANCE
Ownership, stewardship, access control, classification markings. Every dataset has an owner. Every user has a role. Access is audited. Misuse is tracked and prosecuted.
MSS TRAINING HUB MAVEN SMART SYSTEM — USAREUR-AF Version 3.2 • April 2026
NEW TO MSS?
Start with the Quick Start guide before reading Skill Level (SL) 1. Operational in 30 minutes: log in via CAC, navigate to your unit’s app, filter data, export a view. → QUICK_START.pdf  •  No account yet? Contact your unit data steward.
BLUF
The MSS curriculum is organized by Skill Level (SL) per DA PAM 611-21. All personnel start at SL 1. Builders add SL 2; data-adjacent specialists continue to SL 3; technical roles select their SL 4 track. Use “Find My Track” below or browse the reference table.

TRAINING PATH

Core Progression (Skill Levels)
FOUNDATION
📖
SL 1
Maven User
8 hrs • Unit-level
🔧
SL 2
Builder
40 hrs • 5 days
SL 3
Advanced Builder
40 hrs • 5 days
🎯
SL 4/5
Specialist (WFF/Tech)
14 tracks • 40 hrs ea.
Parallel Tracks
🏗
Bootcamp
Foundry Bootcamp
5 days • 2x Quarterly
Prereq: SL 2
EXEC
Senior Leader
1 day • O-5 / E-9+
No prereq
🎓
T3
Instructor Pipeline
T3-F (UDT) • T3-I (Cert)
Prereq: SL 2 / SL 3
Start Your Training
Foundation — SL 1 → SL 2 → SL 3
All personnel begin here. SL 1 gets you operational; SL 2 builds no-code skills; SL 3 unlocks specialist tracks.
Start with SL 1
MSS ACCOUNT ACCESS
MSS access requires a provisioned account. Submit your request through your unit data steward or at mss.data.mil. Provisioning generally completes within 24 hours; if access is not active after 24 hours, contact your data steward directly.
SUPPORT Getting Help — USAREUR-AF
BLUF
Know who to call before you have a problem. Route issues correctly from the start. Collect information before calling for help — it speeds resolution significantly.
WARNING — SECURITY INCIDENTS
If you suspect a security violation, report immediately to your supervisor and unit security officer. Do not investigate or resolve it yourself. Preserve the screen state; do not close the window or clear the browser.

CONTACT ROUTING

Issue | Route To | Priority
Cannot log in / CAC issues | MSS Help Desk | Normal
No access to a project or dataset | Unit data steward | Normal
Data appears incorrect | Unit data steward (do not self-correct) | Normal
System error, crash, or outage | MSS Help Desk + screenshot + error code | Normal
Application broken or not loading | MSS Help Desk | Normal
Building question / how-to | Unit data lead or USAREUR-AF data team | Normal
Governance / C2DAO question | USAREUR-AF C2DAO | Normal
Security incident | Supervisor + unit security officer | IMMEDIATE

COMMON DAILY TASKS

Task | How
Find a record | Search bar or Filter Panel
Filter the view | Select values in the Filter Panel
Export data | Export / Download button → CSV or Excel
Submit or update a record | Click record → Action button → fill in → Submit

WHEN IT BREAKS

CAN’T LOG IN
Check CAC is fully inserted; try a different port. No account? Request at mss.data.mil or through your data steward. Provisioning generally within 24 hrs; if not active after 24 hrs, contact your data steward.
APP WON’T LOAD
Hard-refresh (Ctrl+Shift+R). Clear cache. Try a different browser.
BUTTON GREYED OUT / NO ACCESS
You’re missing a role or write permission. Contact your data steward to request the correct access level.

SECURITY — DO NOT

  • Do not export data to a personal device or unapproved storage
  • Do not share your MSS credentials with anyone; app URLs and screenshots may be shared only when the data they expose is not sensitive
  • Do not enter classified information into MSS unless your instance is approved for that classification level
  • Do not screenshot or share MSS screens containing data above your network’s approved classification
  • Do not use MSS on public or unsecured Wi-Fi
  • If you see data you should not have access to — stop and report to your data steward immediately

BEFORE CALLING FOR HELP — COLLECT THIS INFORMATION

  • Your username and unit
  • Name of the application, dataset, or pipeline you were using
  • Exact error message (screenshot preferred)
  • Time the error occurred (local or Zulu — state which)
  • Steps that led to the error in order
  • Browser and workstation you are using

PREREQUISITES BEFORE FIRST LOGIN

  1. Annual Cyber Awareness Training — required for all DoD personnel; must be current
  2. MSS User Onboarding Brief — provided by unit data steward
  3. Account request approved — submit at mss.data.mil or through your unit data steward; provisioning generally completes within 24 hours. If access is not active after 24 hours, contact your data steward directly.

IMPORTANT: If you work on multiple enclaves (NIPR, SIPR, MPE, etc.), you must complete account setup and first login on each enclave separately. Your account on one enclave does not carry over to another.

USAREUR-AF DATA TEAM

LOCATION & HIGHER HQ
EUCOM Theater — Europe & Africa
Headquarters, United States Army Europe and Africa
USAREUR-AF Operational Data Team
Army AI/Data Accelerator (C2DAO)
PUBLICATIONS
All training publications are maintained by the USAREUR-AF Operational Data Team. Contact your unit data steward for access to source files and this application. Version 3.2 — April 2026.
FEEDBACK & CORRECTIONS
Route corrections through your unit data steward to the USAREUR-AF Operational Data Team. Include: publication name, section, and description of the issue.
NOTE — DISTRIBUTION
These are DRAFT documents — not yet approved for distribution. Do not distribute outside your organization without consulting your data steward.
DOCUMENTS
All Training Publications
Click any publication to open the PDF

Reference Documents

Technical Manuals — Foundation (All Staff)

TM-40 Warfighting Function Tracks (6)
TM-40 Technical Specialist Tracks (8)
TM-40G (SL 4G) — ORSA
Specialist
Operations Research & Systems Analysis
ORSA specialists — quantitative methods, commander products
  • Regression, classification, forecasting
  • Monte Carlo COA analysis
  • Optimization and sensitivity analysis
TM-40H (SL 4H) — AI Engineer
Specialist
AI Engineering
AI engineers — AIP Logic, Agents, LLM integration
  • AIP Logic workflow design
  • Agent configuration
  • LLM integration patterns
  • NEW: AIP Document Intelligence (GA) — chunking & embedding
  • NEW: AI FDE (GA Mar 2026)
TM-40M (SL 4M) — ML Engineer
Specialist
Machine Learning Engineering
ML engineers — model training, validation, deployment
  • Feature engineering, experiment tracking
  • Batch inference, model versioning
  • Drift detection, retraining pipelines
  • NEW: Model Studio (GA Feb 2026) — no-code model training
TM-40J (SL 4J) — Product Manager
Specialist
Data Product Management
PMs — pipelines, milestones, portfolio health
  • Scrum / Kanban for data projects
  • ML/AI project lifecycle
  • Risk register, release planning
TM-40K (SL 4K) — Knowledge Manager
Specialist
Knowledge Management
KMs — forms, lessons learned, institutional memory
  • Knowledge ontology design
  • Lessons-learned intake pipeline
  • SOP review workflows
  • NEW: Document Intelligence, Object Views
TM-40L (SL 4L) — Software Engineer
Specialist
Software Engineering
SWEs — Python/TypeScript, OSDK, code transforms
  • OSDK & Platform SDK
  • Functions on Objects, Actions
  • CI/CD, security, Slate
  • NEW: Pilot, OSDK Health, MCP
TM-40N (SL 4N) — UX Designer
Specialist
UI/UX Design
Designers — user research, prototyping, accessibility
  • Workshop UI patterns & design systems
  • User research in military contexts
  • Accessibility & DDIL-aware design
TM-40O (SL 4O) — Platform Engineer
Specialist
Platform Engineering
Platform engineers — infrastructure, SRE, RMF/ATO
  • Foundry infrastructure management
  • SRE practices & observability
  • RMF/ATO compliance automation
  • NEW: Compute Modules, Data Connection
TM-50 Advanced Technical Tracks (8)
Train the Trainer — T3 (2 courses + SOPs)
NOTE — T3 PROGRAM
T3-I (Instructor Certification) and T3-F (MSC Force Multiplier) sit outside the SL 1 to SL 5 numbering chain. T3-I requires SL 3 + C2DAO selection. T3-F requires SL 2 + CDR nomination.
Concepts Guides (23)
SL 4A Concepts Guide
Intelligence
Key concepts and terminology for the Intelligence WFF track.
SL 4B Concepts Guide
Fires
Key concepts and terminology for the Fires WFF track.
SL 4C Concepts Guide
Movement & Maneuver
Key concepts and terminology for the Movement & Maneuver WFF track.
SL 4D Concepts Guide
Sustainment
Key concepts and terminology for the Sustainment WFF track.
SL 4E Concepts Guide
Protection
Key concepts and terminology for the Protection WFF track.
SL 4F Concepts Guide
Mission Command
Key concepts and terminology for the Mission Command WFF track.
SL 4G Concepts Guide
ORSA
Key concepts and terminology for the ORSA specialist track.
SL 4H Concepts Guide
AI Engineer
Key concepts and terminology for the AI Engineer track.
SL 4M Concepts Guide
ML Engineer
Key concepts and terminology for the ML Engineer track.
SL 4J Concepts Guide
Program Mgr
Key concepts and terminology for the Product Manager track.
SL 4K Concepts Guide
Knowledge Mgr
Key concepts and terminology for the Knowledge Manager track.
SL 4L Concepts Guide
Software Eng
Key concepts and terminology for the Software Engineer track.
SL 4N Concepts Guide
UX Designer
Key concepts and terminology for the UX Designer track.
SL 4O Concepts Guide
Platform Eng
Key concepts and terminology for the Platform Engineer track.
SL 5G Concepts Guide
ORSA Adv
Advanced concepts and terminology for SL 5G.
SL 5H Concepts Guide
AI Eng Adv
Advanced concepts and terminology for SL 5H.
SL 5M Concepts Guide
MLE Adv
Advanced concepts and terminology for SL 5M.
SL 5J Concepts Guide
PM Adv
Advanced concepts and terminology for SL 5J.
SL 5K Concepts Guide
KM Adv
Advanced concepts and terminology for SL 5K.
SL 5L Concepts Guide
SWE Adv
Advanced concepts and terminology for SL 5L.
SL 5N Concepts Guide
UX Adv
Advanced concepts and terminology for SL 5N.
SL 5O Concepts Guide
Plat Eng Adv
Advanced concepts and terminology for SL 5O.
TM-EXEC Concepts Guide
Senior Leader
Key concepts for the Senior Leader Executive Course.
Practical Exercises (13)
Pre-Assessment Tests (27)
NOTE — PRE-TESTS
Pre-assessment tests are administered before the start of each course. They establish a knowledge baseline and help instructors identify gaps.
PRE-TEST
SL 1 Pre-Assessment
Maven User — foundation pre-test.
PRE-TEST
SL 2 Pre-Assessment
No-Code Builder — pre-test.
PRE-TEST
SL 3 Pre-Assessment
Advanced Builder — pre-test.
PRE-TEST
SL 4A Pre-Assessment
Intelligence WFF track — pre-test.
PRE-TEST
SL 4B Pre-Assessment
Fires WFF track — pre-test.
PRE-TEST
SL 4C Pre-Assessment
Movement & Maneuver WFF track — pre-test.
PRE-TEST
SL 4D Pre-Assessment
Sustainment WFF track — pre-test.
PRE-TEST
SL 4E Pre-Assessment
Protection WFF track — pre-test.
PRE-TEST
SL 4F Pre-Assessment
Mission Command WFF track — pre-test.
PRE-TEST
SL 4G Pre-Assessment
ORSA specialist — pre-test.
PRE-TEST
SL 4H Pre-Assessment
AI Engineer — pre-test.
PRE-TEST
SL 4M Pre-Assessment
ML Engineer — pre-test.
PRE-TEST
SL 4J Pre-Assessment
Product Manager — pre-test.
PRE-TEST
SL 4K Pre-Assessment
Knowledge Manager — pre-test.
PRE-TEST
SL 4L Pre-Assessment
Software Engineer — pre-test.
PRE-TEST
SL 4N Pre-Assessment
UX Designer — pre-test.
PRE-TEST
SL 4O Pre-Assessment
Platform Engineer — pre-test.
PRE-TEST
SL 5G Pre-Assessment
Advanced ORSA — pre-test.
PRE-TEST
SL 5H Pre-Assessment
Advanced AI Engineer — pre-test.
PRE-TEST
SL 5M Pre-Assessment
Advanced ML Engineer — pre-test.
PRE-TEST
SL 5J Pre-Assessment
Advanced Product Manager — pre-test.
PRE-TEST
SL 5K Pre-Assessment
Advanced Knowledge Manager — pre-test.
PRE-TEST
SL 5L Pre-Assessment
Advanced Software Engineer — pre-test.
PRE-TEST
SL 5N Pre-Assessment
Advanced UX Designer — pre-test.
PRE-TEST
SL 5O Pre-Assessment
Advanced Platform Engineer — pre-test.
PRE-TEST
T3-F Pre-Assessment
MSC Force Multiplier — pre-test.
PRE-TEST
T3-I Pre-Assessment
Instructor Certification — pre-test.
Post-Assessment Tests (26)
NOTE — POST-TESTS
Post-assessment tests are administered at the end of each course. They measure learning gains against the pre-assessment baseline.
POST-TEST
SL 2 Post-Assessment
No-Code Builder — post-test.
POST-TEST
SL 3 Post-Assessment
Advanced Builder — post-test.
POST-TEST
SL 4A Post-Assessment
Intelligence WFF track — post-test.
POST-TEST
SL 4B Post-Assessment
Fires WFF track — post-test.
POST-TEST
SL 4C Post-Assessment
Movement & Maneuver WFF track — post-test.
POST-TEST
SL 4D Post-Assessment
Sustainment WFF track — post-test.
POST-TEST
SL 4E Post-Assessment
Protection WFF track — post-test.
POST-TEST
SL 4F Post-Assessment
Mission Command WFF track — post-test.
POST-TEST
SL 4G Post-Assessment
ORSA specialist — post-test.
POST-TEST
SL 4H Post-Assessment
AI Engineer — post-test.
POST-TEST
SL 4M Post-Assessment
ML Engineer — post-test.
POST-TEST
SL 4J Post-Assessment
Product Manager — post-test.
POST-TEST
SL 4K Post-Assessment
Knowledge Manager — post-test.
POST-TEST
SL 4L Post-Assessment
Software Engineer — post-test.
POST-TEST
SL 4N Post-Assessment
UX Designer — post-test.
POST-TEST
SL 4O Post-Assessment
Platform Engineer — post-test.
POST-TEST
SL 5G Post-Assessment
Advanced ORSA — post-test.
POST-TEST
SL 5H Post-Assessment
Advanced AI Engineer — post-test.
POST-TEST
SL 5M Post-Assessment
Advanced ML Engineer — post-test.
POST-TEST
SL 5J Post-Assessment
Advanced Product Manager — post-test.
POST-TEST
SL 5K Post-Assessment
Advanced Knowledge Manager — post-test.
POST-TEST
SL 5L Post-Assessment
Advanced Software Engineer — post-test.
POST-TEST
SL 5N Post-Assessment
Advanced UX Designer — post-test.
POST-TEST
SL 5O Post-Assessment
Advanced Platform Engineer — post-test.
POST-TEST
T3-F Post-Assessment
MSC Force Multiplier — post-test.
POST-TEST
T3-I Post-Assessment
Instructor Certification — post-test.
Course Syllabi (26)
Self-Study Guides (17)
NOTE — SELF-STUDY
Self-study guides provide pre-course reading, prerequisite knowledge checks, and recommended preparation activities. Complete the guide for your track before attending the resident course.
Lesson Plans (5)
Administrative & Institutional (12)
Enterprise Architecture Series (6)
Architecture & Design References (28)

CDA — Common Data Architecture (15)

CDA OVERVIEW
Cross-Domain Architecture Overview
Architecture workspace for data strategy and training. Defines activities, data, and systems across the enterprise.
CDA CONSTRAINTS
Constraints & Directives
12 non-negotiable architectural constraints derived from theater-level operational experience.
CDA DOCTRINE
Doctrine-Driven Development Guide
Turns JRTC lessons into ontology and pipeline patterns that close the three-generation planning gap.
CDA AGENT
Military Ontology Architect Agent
Expert ontology architect role definition for military operations, PPBE, and Foundry data models.
CDA IDENTITY
Identity vs Classification
Fundamental separation between what something is (identity) and what bucket it belongs to (classification).
CDA AVT25
AVT25 Assessment — Exponential Work Multiplication
Analysis of duplicated assessment tooling in the same enclave and the exponential cost of fragmentation.
CDA AGENTS
Agent Doctrine Overview
Doctrinal reference layer for architecture, ontology, pipeline, and identity agent work.
CDA CORE
Core Principles
Non-negotiable bedrock principles for all platform agents — the stability stack, governance, and data doctrine.
CDA ONTOLOGY
Ontology Engineer Doctrine
Ontology engineering reference — semantic stability layer design, validation, and governance in Foundry.
CDA ENTITY RES
Entity Resolution Doctrine
Governed, versioned, reversible entity resolution pipelines with full provenance and auditability.
CDA INGESTION
Ingestion & Integration Doctrine
ETL/ELT pipeline doctrine — four-layer architecture from source systems to applications.
CANON ADP
ADP Crosswalk
Crosswalk mapping Army Doctrine Publications to CDA ontology and data model elements.
CANON CONDITIONS
Conditions, Indicators & Thresholds
Doctrine canon for operational conditions, indicators, and threshold modeling.
CANON ENGAGE
Engagement Operations
Doctrine canon for engagement operations data modeling and ontology patterns.
CANON INFO
Information Activities
Doctrine canon for information activities data modeling and ontology patterns.

GDAP — Governance Data Access Platform (5)

MIM — NATO MIP Information Model (7)

Ontology Design (1)

External Doctrinal & Strategic References
REFERENCE ONLY
Items below are not clickable links. They are external publications listed here for reference and citation. Obtain them through official Army, DoD, or NATO distribution channels.
CLASSIFICATION
References are categorized as Doctrine (regulatory authority — ARs, FMs, DoD Directives, NATO STANAGs) or Strategic Guidance (authoritative but not regulatory — strategies, plans, frameworks). This distinction matters for compliance and citation.

ARMY DOCTRINE & REGULATION

Publication | Title | Type | Tracks
ADP 3-0 | Operations | Doctrine | WFF (A–F)
ADP 3-19 | Fires | Doctrine | SL 4B
ADP 3-37 | Protection of the Force (Jul 2019) | Doctrine | SL 4E
ADP 3-90 | Offense and Defense | Doctrine | SL 4C
ADP 5-0 | The Operations Process | Doctrine | SL 4F, SL 4G
ADP 6-0 | Mission Command (Jul 2019) | Doctrine | SL 4F
ADP 7-0 | Training | Doctrine | Training Mgmt
AR 25-1 | Army Information Technology (Jul 2019) | Regulation | All
AR 25-2 | Army Cybersecurity (Apr 2019) | Regulation | SL 4H, SL 4M, SL 4L
AR 25-30 | Army Publishing Program | Regulation | SL 5H
AR 25-400-2 | Army Records Management | Regulation | SL 4K
AR 5-11 | Management of Army Models and Simulations | Regulation | SL 4G
AR 71-9 | Warfighting Analysis | Regulation | SL 4G
AR 350-1 | Army Training and Leader Development | Regulation | Training Mgmt
AR 525-2 | Force Protection | Regulation | SL 4E
AR 530-1 | Operations Security | Regulation | SL 4E
FM 2-0 | Intelligence (Oct 2023) | Doctrine | SL 4A
FM 3-0 | Operations (Mar 2025) | Doctrine | WFF (A–F)
FM 3-01 | U.S. Army Air and Missile Defense | Doctrine | SL 4B
FM 3-09 | Fire Support and Field Artillery Operations (Aug 2024) | Doctrine | SL 4B, SL 4C
FM 3-12 | Cyberspace and EW Operations | Doctrine | SL 4E, SL 4H, SL 4M, SL 4L
FM 3-27 | Army Global Ballistic Missile Defense | Doctrine | SL 4B
FM 3-55 | Information Collection | Doctrine | SL 4A
FM 3-60 | Targeting (Aug 2023) | Doctrine | SL 4B
FM 3-81 | Maneuver Enhancement Brigade | Doctrine | SL 4C
FM 3-90 | Offense and Defense (May 2023) | Doctrine | SL 4C
FM 4-0 | Sustainment (Aug 2024) | Doctrine | SL 4D
FM 1-0 | Human Resources Support | Doctrine | SL 4D
FM 5-0 | Planning and Orders Production (Nov 2024) | Doctrine | SL 4F, SL 4C
FM 6-0 | Commander’s Activities (May 2022) | Doctrine | SL 4F, SL 4C
FM 7-0 | Training (Jun 2021) | Doctrine | Training Mgmt
ATP 2-01 | Collection Management (May 2023) | Doctrine | SL 4A
ATP 2-33.4 | Intelligence Analysis | Doctrine | SL 4A
ATP 2-22.9-1 | PAI/OSINT (Oct 2023) | Doctrine | SL 4A
ATP 3-01.81 | Counter-UAS | Doctrine | SL 4B
ATP 3-09.42 | Fire Support for M&M | Doctrine | SL 4B
ATP 3-13.3 | Army Operations Security | Doctrine | SL 4E
ATP 3-90.4 | Combined Arms Mobility | Doctrine | SL 4C
ATP 5-0.1 | Army Design Methodology | Doctrine | SL 4F
ATP 5-0.3 | Multi-Service Tactics for Ops Assessment | Doctrine | SL 4G
ATP 6-01.1 | Techniques for Effective Knowledge Management | Doctrine | SL 4K, SL 5K
TC 6-0.2 | Battle Staff Operations | Doctrine | SL 4F
DA PAM 5-11 | Verification, Validation & Accreditation | Doctrine | SL 4G
DA PAM 25-1-1 | IT Implementation Instructions | Doctrine | SL 4K, SL 4L
DA PAM 25-2-5 | Cybersecurity Technical Reference | Doctrine | SL 4H, SL 4M, SL 4L
DA PAM 25-40 | Army Publishing Program Procedures | Doctrine | Standards
DA PAM 25-403 | Army Records Information Management | Doctrine | SL 4K
DA PAM 600-3 | Officer Professional Development | Doctrine | SL 4G

DoD DIRECTIVES & INSTRUCTIONS

Publication | Title | Type | Tracks
DoDD 3000.09 | Autonomy in Weapon Systems (Jan 2023) | Directive | WFF (A–F)
DoDI 5000.87 | Software Acquisition Pathway (Oct 2020) | Instruction | SL 4L, SL 5L, SL 4J, SL 5J
Army Directive 2024-02 | Agile Software Dev & Acquisition (Dec 2024) | Directive | SL 4L, SL 5L, SL 4J, SL 5J
Army Directive 2024-03 | Army Digital Engineering | Directive | SL 4H, SL 4M, SL 4L

TRADOC PUBLICATIONS

Published at adminpubs.tradoc.army.mil, not armypubs.army.mil

Publication | Title | Type | Tracks
TR 350-70 | Army Learning Policy and Systems | Regulation | Training Mgmt
TP 350-70-3 | Faculty and Staff Development Program | Pamphlet | Training Mgmt
TP 350-70-7 | Army Educational Processes | Pamphlet | Training Mgmt
TP 350-70-14 | Training Development in Institutional Domain | Pamphlet | Training Mgmt

NATO STANDARDS & AGREEMENTS

Publication | Title | Type | Tracks
ADatP-34 / NISP | C3 Interoperability Standards and Profiles | Standard | SL 4K, SL 4L
STANAG 5636 / NCMS | Core Metadata Specification | STANAG | SL 4K, SL 5K
STANAG 5643 (proposed) | MIM Governance Standard | STANAG | SL 4K, SL 4L, SL 5K, SL 5L
ADatP-5644 | Web Service Messaging Profile (WSMP) | Standard | SL 4L, SL 5L
ADatP-36 | Friendly Force Information (FFI) | Standard | SL 4A, SL 4C
STANAG 5527 | FFT Systems Interoperability | STANAG | SL 4A

DoD & ARMY STRATEGIC GUIDANCE (not doctrine)

Document | Authority | Date | Tracks
DoD Data Strategy | OSD | Oct 2020 | All
DoD Data, Analytics & AI Adoption Strategy | CDAO | Nov 2023 | All
DoD Responsible AI Strategy | CDAO | Jun 2024 | SL 4H/M, SL 5H/M
DoD Zero Trust Reference Architecture v2.0 | DISA/NSA | Jul 2022 | SL 3
DoD AI Cybersecurity Risk Mgmt Guide | DoD CIO | 2024 | SL 4H/M, SL 5H/M
DoD Software Modernization Strategy | OSD CIO | Feb 2022 | SL 4L, SL 5L
JADC2 Strategy Summary | Joint Staff | Mar 2022 | WFF (A–F), SL 4G
JCOIE | Joint Staff J-7 | Current | SL 4F
Army Data Plan | Army CIO | Oct 2022 | All
Army Cloud Plan | Army CIO | Oct 2022 | SL 1, SL 2, SL 3
UDRA v1.1 | DASA(DES) | Feb 2025 | SL 3, Specialist (G–O)
Army CIO Data Stewardship Memo | Army CIO | Apr 2024 | SL 1, SL 2, SL 3, SL 4K

NATO STRATEGIC GUIDANCE (not doctrine)

Document | Date | Tracks
NATO Data Strategy for the Alliance | Feb 2025 | SL 3, SL 4K, SL 5K
NATO Data Centric Reference Architecture v2 | 2025 | SL 3
NATO Data Quality Framework for the Alliance | Aug 2025 | SL 3
NATO Digital Transformation Implementation Strategy | Oct 2024 | WFF (A–F)
NATO Warfighting Capstone Concept | 2021 | SL 4F
Professional Reading & Lessons Learned (65+ articles)
REFERENCE ONLY
Items below are not clickable links. They are curated articles from Army professional journals, military publications, and think tanks. Obtain them through the publishing organization or your unit library. Full reading lists are included as appendices in each TM publication.

MILITARY REVIEW — Army University Press (14)

Title | Date | Tracks
Data-Centric at the Division: 3ID’s One-Year Journey to Transform and Modernize | Jan 2025 | All
Modernizing Military Decision-Making: Integrating AI into Army Planning | Aug 2025 | SL 4F, SL 4H, SL 4G
The Military Needs Frontier Models | Aug 2025 | SL 4H, SL 4M, SL 4L
Exploring AI-Enhanced Cyber and Information Operations Integration | Mar–Apr 2025 | SL 4E, SL 4A, SL 4H
Authorities and the Multidomain Task Force | Mar–Apr 2025 | SL 4A, SL 4B, SL 4F
Taking a Data-Centric Approach to Unit Readiness | 2024 | All, esp. SL 4G
Attaining Readiness by Developing a Data-Centric Culture | 2024 | All, esp. SL 4J
Sustaining Our People Advantage in Data-Centric Warfare | 2024 | All
AI as a Combat Multiplier: Using AI to Unburden Army Staffs | Sep 2024 | SL 4H, SL 4F, SL 4G
Transforming the Multidomain Battlefield with AI | 2024 | SL 4H, SL 4M, SL 4A
The Coming Military AI Revolution | May–Jun 2024 | SL 4H, SL 4M
AI in Modern Warfare: Strategic Innovation and Emerging Risks | Sep–Oct 2024 | All
Advancing Counter-UAS Mission Command Systems | May–Jun 2024 | SL 4E, SL 4F
The True Test of Mission Command | Sep–Oct 2024 | SL 4F

PARAMETERS — Army War College Quarterly (3)

Title | Date | Tracks
Responsibly Pursuing Generative AI for the War Fighter | Winter 2025–26 | SL 4H, SL 4M, All
Integrating AI and ML into COP and COA Development | 2024–25 | SL 4G, SL 4H, SL 4F
Trusting AI: Integrating AI into the Army’s Professional Ethic | 2024 | All

MIPB — Military Intelligence Professional Bulletin (6)

Title | Date | Tracks
FRIDAY: Unlocking OSINT for a Data-Driven Army | 2025 | SL 4A, SL 4H, SL 4L
Intelligence Support to Information Advantage | Jan–Jun 2026 | SL 4A, SL 4K
Using AI to Create Digital Enemy Commanders | Jul–Dec 2025 | SL 4H, SL 4M, SL 4A
The Market Knows Best: Prediction Markets for National Security | Jul–Dec 2025 | SL 4A, SL 4G
Army Transitioning to Support Deep Sensing in MDO | Jul–Dec 2025 | SL 4A, SL 4B, SL 4C
Open-Source Intelligence Support to Targeting | 2024 | SL 4A, SL 4B

FIELD ARTILLERY BULLETIN — Line of Departure (6)

Title | Date | Tracks
The New Digital Kill Chain | 2025 | SL 4B, SL 4L
AI’s New Frontier in War Planning | 2025 | SL 4B, SL 4H
Project Convergence: Revolutionizing Targeting in LSCO | 2025 | SL 4B, SL 4A, SL 4G
Enhancing Tactical Level Targeting With AI | 2024 | SL 4B, SL 4H, SL 4M
The Future of Strategic Fires Target Acquisition | 2024 | SL 4B, SL 4A
The Combat Aviation Brigade and Digital Call for Fire | 2024 | SL 4B, SL 4C

NCO JOURNAL — Army University Press (3)

Title | Date | Tracks
Knowledge Management and The Old Guard | Aug 2025 | SL 4K, SL 4F
From Data to Wisdom | Feb 2025 | All
Artificial Intelligence and Future Warfare | Sep 2025 | SL 4H, SL 4M, All

ARMY SUSTAINMENT — Army Logistics University (4)

Title | Date | Tracks
Army Sustainment Enterprise’s Delayed Approach to Data Modernization | Winter 2025 | SL 4D, SL 4K
Predictive Logistics: Reimagining Sustainment on the 2040 Battlefield | Winter 2025 | SL 4D, SL 4H, SL 4M, SL 4G
Enabling Logistics in Contested Environments | Spring 2025 | SL 4D, SL 4G
Advancing to Data-Driven Logistics Operations | 2024 | SL 4D, SL 4K

ARMY AL&T MAGAZINE (7)

Title | Date | Tracks
Accelerating the Army’s AI Strategy | 2024–25 | SL 4H, SL 4J, All
Commoditizing AI/ML Models | 2024–25 | SL 4H, SL 4M, SL 4L
The Army’s Data (Ad)Vantage | 2024 | All
The Software Advantage | 2024–25 | SL 4L, SL 4J
Army Intelligence | 2025 | SL 4A, SL 4H
Emerging Technology and Modernizing the Army | 2024–25 | All
Reality Check (AI/ML implementation) | 2024–25 | SL 4H, SL 4M, SL 4J

ARMY COMMUNICATOR — Cyber CoE (3)

Title | Date | Tracks
Leading in Data Centricity, C2 Fix Best Practices | Spring 2025 | SL 4E, SL 4F, SL 4L
Army Communicator Spring 2024 | Spring 2024 | SL 4E, SL 4L
Army Communicator January 2024 — ITN Suite | Jan 2024 | SL 4E, SL 4C

FROM THE GREEN NOTEBOOK (3)

Title | Date | Tracks
How To Be a Data Literate Leader — And Why It Matters | Mar 2024 | All, SL 4K
Harnessing the Power of Knowledge Management | Apr 2024 | SL 4K, SL 4F
Understanding Weapons of Math Destruction | Jul 2024 | SL 4G, SL 4H, SL 4M

INFANTRY MAGAZINE — Maneuver CoE (1)

Title | Date | Tracks
Moneyball for Gunnery — 1/4 ID BCT data analytics | 2024 | SL 4C, SL 4G

SMALL WARS JOURNAL (4)

Title | Date | Tracks
Data as Firepower: Data Superiority as a Warfighting Concept | Aug 2025 | All
Elevating Information: Why the Army Should Establish Information as a Core WfF | Apr 2025 | SL 4A, SL 4F, SL 4K
Accelerating Decision-Making: Integrating AI into the Modern Wargame | Feb 2026 | SL 4G, SL 4H, SL 4F
AI-Enabled Wargaming at CGSC | Jan 2026 | SL 4G, SL 4H, SL 4F

WAR ON THE ROCKS (1)

Title | Date | Tracks
The U.S. Army, AI, and Mission Command | Mar 2025 | SL 4F, SL 4H

MODERN WAR INSTITUTE — West Point (1)

Title | Date | Tracks
Leadership, Lethality, and Data Literacy | 2024 | All

CALL — Center for Army Lessons Learned (1)

Title | Date | Tracks
FY24 MCTP Key Observations | Feb 2025 | All

CDA Slide Decks — Conceptual Prereqs

RECOMMENDED READING
The CDA slide decks provide conceptual grounding that the TM series assumes but does not teach. They are recommended reading, not required prerequisites. Organized below by the TM level they support.
SL 1 — Orientation (2 decks)
SL 2 — Intro To Data (10 decks)
SL 2 / SL 3 — Data 101 (4 decks)
SL 3 — Advanced (8 decks)
SL 3 / SL 4 — Data 201 (6 decks)
SL 4 — Specialist Track Decks (4 decks)
SL 5 — Advanced Track Decks (6 decks)
Program & Briefing Decks (3 decks)
NOT FINDING WHAT YOU NEED?
Contact your unit data steward for additional publications, source files, or access to restricted materials. For technical support, contact your unit data steward or the Operational Data Team. For task-level procedures, use the Task Index →
TASK INDEX
What Do You Want To Do?
Search • Filter by category • Click badge to open PDF
DASHBOARDS
Analytics & Operations Suite
Streamlit applications • Click any card to open
BLUF
These dashboards provide real-time analytics, operational management, and content-quality tools for the MSS training program. Click any card below to load the application. On the ODT local network, apps load directly; when browsing this hub remotely (through Cloudflare), connect to the ODT VPN first.
Remote access: Dashboard apps run on the ODT local network. Connect to VPN before opening a dashboard.
Training Analytics
📋 Readiness Tracker — port 8501
Soldier training timelines, overdue flagging, printable records.
📊 Exam Analytics — port 8502
Score distributions, cohort comparison, item discrimination, question improvement.
📝 AAR Aggregator — port 8503
AAR trend analysis, priority matrix, keyword extraction, GO/NO-GO tracking.
📈 Training Metrics — port 8511
Executive dashboard — aggregates all training data for CG/DCG briefings.
Training Operations
🎯 Progress Tracker — port 8504
Individual Soldier progress, goal tracking, stalled-Soldier alerts.
📅 MTT Scheduler — port 8505
Mobile Training Team scheduling, resource allocation, calendar view.
👤 Enrollment Manager — port 8508
Class enrollment, waitlists, rosters, and seat management.
🧑 Instructor Manager — port 8512
Instructor assignments, certifications, and availability tracking.
Content & Quality
Data Quality — port 8510
Pipeline health monitoring, metric trending, active alerts.
🔗 XRef Validator — port 8506
Cross-reference validation — find broken links and stale references.
📚 Curriculum Tracker — port 8513
Document version control, review cycles, freshness tracking.
🔍 Glossary Search — port 8507
Full-text search across the MSS data foundry glossary.
💡 Lessons Learned — port 8514
Lessons learned database — search, tag, and trend analysis.
Distribution & Sync
📦 Offline Packager — port 8509
Build offline content packages for disconnected environments.
SharePoint Sync — port 8515
Sync training content to SharePoint for enterprise distribution.
ACCESS
Dashboards run on the ODT local network. To access remotely, connect via VPN first. Each application is available at http://<host>:<port>. Contact your ODT representative for the current host address.
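The ACCESS note can be sanity-checked before class. The sketch below probes a few of the dashboard ports listed above over plain TCP; it is an illustration only — the host value is a placeholder (get the real address from your ODT representative), the three ports are a sample from the card list, and nothing will answer unless you are on the ODT network or connected to the VPN.

```python
# Probe ODT dashboard ports over TCP. Illustrative only: the host is a
# placeholder, the port list is a sample of the dashboards above, and
# the ODT network or VPN is required for any port to answer.
import socket

DASHBOARD_PORTS = {
    8501: "Readiness Tracker",
    8502: "Exam Analytics",
    8503: "AAR Aggregator",
}

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_dashboards(host: str, timeout: float = 3.0) -> dict[int, bool]:
    """Map each sampled dashboard port to its reachability from here."""
    return {port: reachable(host, port, timeout) for port in DASHBOARD_PORTS}
```

Run `check_dashboards(host)` after connecting to the VPN; a `False` entry means that app is unreachable from your machine and worth raising with your ODT representative.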
SCHEDULE
FY27 Annual Training Calendar — SL 2 / SL 3 / FBC
Dual-team • 4-week cycle • 60 classes • 5 locations • ~450 personnel/year
SAMPLE SCHEDULE
This is a notional planning schedule for demonstration purposes. Dates, instructors, locations, and seat availability are not real. Actual class schedules will be published via official training channels.

Training & Evaluation Outline (T&EO)

Evaluated tasks by skill level per AR 350-1 and TR 350-70. Any step marked CRITICAL = automatic NO-GO if failed.

SL 1 — Maven User (10 Tasks)

Task | Title | Standard | Steps | Critical Items
SL1-01 | Log In and Navigate | Authenticate via CAC/PIV and navigate to designated app — within 5 min | 6 | Do not access production
SL1-02 | Filter Table / Identify Missing Submissions | Apply date filter and identify all units with missing submissions — within 5 min | 4 | none
SL1-03 | Execute an Authorized Action | Locate record, execute Action, verify update — within 3 min | 6 | Do not execute on wrong record
SL1-04 | Export Filtered Table to CSV | Export filtered table; confirm row count; label with classification | 5 | File must have classification label
SL1-05 | Build a Basic Contour Chart | Build bar chart with correct axes and filter — within 10 min | 5 | none
SL1-06 | Identify Classification / Export Procedure | Locate marking in Properties; state authorized distribution and export | 5 | Correct distribution + destination
SL1-07 | Explore an Object Type in Quiver | Navigate to Object Type, filter, export — within 5 min | 5 | none
SL1-08 | Submit a Query to an AIP Interface | Submit query; assess output; state verification requirement | 5 | AIP outputs require human verification
SL1-09 | Troubleshoot Common Access Issues | Diagnose and resolve 2 of 2 pre-staged failures — within 5 min | 4 | none
SL1-10 | Request Access to a Missing Resource | Identify access gap; submit formatted request to unit MSS admin | 3 | none
▸ View full GO/NO-GO performance measures — SL 1

SL1-01: Log In and Navigate

# | Performance Measure | GO | NO-GO
1 | Navigates to Training Environment URL (not production) | Correct URL | Opens production
2 | Selects correct certificate (PIV Authentication) | Correct cert | Wrong cert
3 | Enters PIV PIN | Correct | Fails
4 | Confirms Training Environment displayed | Confirmed | Production
5 | Navigates to designated Workshop application | Open within 5 min | Exceeds 5 min
6 | [CRITICAL] Does not access production | Training only | Navigates to production

SL1-02: Filter Table / Identify Missing Submissions

# | Performance Measure | GO | NO-GO
1 | Locates filter control | Found | Cannot locate
2 | Applies “last 7 days” filter | Applied; rows reduce | Incorrect
3 | Identifies submission count | Correct | Incorrect
4 | Identifies missing unit(s) | All named | Misses a unit

SL1-03: Execute an Authorized Action

# | Performance Measure | GO | NO-GO
1 | Locates target record | Found | Cannot locate
2 | Activates Action button | Activated | Grayed; no diagnosis
3 | Completes parameter form | Correct | Incorrect
4 | Confirms execution | Executes | Dismisses
5 | Verifies status updated | Visible | Not verified
6 | [CRITICAL] Does not execute on wrong record | Correct record | Wrong record

SL1-04: Export Filtered Table to CSV

# | Performance Measure | GO | NO-GO
1 | Locates export function | Found | Cannot locate
2 | Selects CSV | CSV | Wrong format
3 | Exports to authorized folder | Authorized | Unauthorized
4 | Row count matches | Verified | Mismatch
5 | [CRITICAL] Classification label applied | Labeled | Not labeled
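The two export checks above (row count matches; classification label applied) are mechanical enough to script. A minimal sketch, assuming an illustrative convention in which the label appears in the file name — that convention is an assumption for the example, not the MSS standard:

```python
# Verify an exported CSV the way SL1-04 is graded: data row count matches
# the filtered table, and a classification label is present. The
# label-in-filename convention here is an illustrative assumption.
import csv
from pathlib import Path

def verify_export(path: Path, expected_rows: int,
                  label: str = "UNCLASSIFIED") -> bool:
    """GO only if the row count matches and the file name carries the label."""
    with path.open(newline="") as f:
        rows = list(csv.reader(f))
    data_rows = len(rows) - 1            # exclude the header row
    labeled = label in path.stem.upper()
    return data_rows == expected_rows and labeled
```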

SL1-05: Build a Basic Contour Chart

# | Performance Measure | GO | NO-GO
1 | Opens correct dataset in Contour | Correct | Wrong
2 | Correct X axis | Correct | Incorrect
3 | Correct Y axis | Correct | Incorrect
4 | Filter applied | Applied | Not applied
5 | Saved with descriptive name | Saved | Not saved

SL1-06: Classification Marking / Export Procedure

# | Performance Measure | GO | NO-GO
1 | Opens Properties panel | Open | Cannot locate
2 | Reads marking from Properties | Reads aloud | States without reading
3 | [CRITICAL] Correct authorized distribution | Correct | Incorrect
4 | [CRITICAL] Correct export destination | Govt systems | Unauthorized
5 | File labeling requirement stated | Correct | Not stated

SL1-07: Explore Object Type in Quiver

# | Performance Measure | GO | NO-GO
1 | Navigates to correct Object Type | Correct | Wrong type
2 | Identifies 3+ properties | Identified | Cannot describe
3 | Applies filter | Applied | Not applied
4 | States matching count | Correct | Incorrect
5 | Exports filtered view | Completed | Not completed

SL1-08: Submit Query to AIP Interface

# | Performance Measure | GO | NO-GO
1 | Navigates to AIP interface | Open | Cannot locate
2 | Submits query | Submitted | Not submitted
3 | Identifies output | Received | Navigates away
4 | [CRITICAL] States human verification required | Stated | Treats as authoritative
5 | Identifies AI limitation | Identified | Cannot state any

SL1-09: Troubleshoot Access Issues

# | Performance Measure | GO | NO-GO
1 | Diagnoses first failure | Correct | Incorrect
2 | States resolution for first | Correct | Incorrect
3 | Diagnoses second failure | Correct | Incorrect
4 | States resolution for second | Correct | Incorrect

SL1-10: Request Access to Missing Resource

# | Performance Measure | GO | NO-GO
1 | Identifies access error | Identified | Assumes broken
2 | Correct request recipient | Unit MSS admin | Wrong (C2DAO, help desk)
3 | Required info included | All included | Missing info

SL 2 — Builder (10 Tasks)

Task | Title | Standard | Steps | Critical Items
SL2-01 | Create Foundry Project to Standard | Correctly named, marked, and structured project | 3 | Classification marking set
SL2-02 | Ingest Files / Verify Data Quality | Ingest 2 files; verify row counts; note quality observations | 3 | none
SL2-03 | Build Clean-and-Transform Pipeline | Filter, rename, cast, join; pipeline runs without error | 7 | DATEDIFF column; no pipeline errors
SL2-04 | Create an Object Type | All properties typed; Primary Key and display name set | 5 | Correct types; PK designated
SL2-05 | Create a Link Type | Correct cardinality and directionality | 3 | none
SL2-06 | Configure Ontology Write Step | Property mapping correct; Object count matches source | 4 | PK mapped; count matches
SL2-07 | Configure an Action | Parameter, write rule, Editor-only access; test confirms update | 4 | Editor-only access
SL2-08 | Build a Workshop Application | Table, filter, metric, bar chart — all bound to Object Type | 5 | none
SL2-09 | Connect Action Button; Verify Execution | Button added; table refreshes with correct value after execution | 3 | Table refreshes with correct value
SL2-10 | Configure Access Control | Viewer can see app but cannot execute Editor Action | 3 | Viewer cannot execute Action
▸ View full GO/NO-GO performance measures — SL 2

SL2-01: Create Foundry Project

# | Performance Measure | GO | NO-GO
1 | Name follows C2DAO convention | Correct | Format violation
2 | [CRITICAL] Classification marking set | Set | No marking
3 | Four required folders created | All present | Any missing

SL2-02: Ingest Files / Verify Quality

# | Performance Measure | GO | NO-GO
1 | Both files ingested to Datasets folder | Correct | Wrong location
2 | Row counts verified | Both confirmed | Not checked
3 | Quality observation per file | Documented | None

SL2-03: Clean-and-Transform Pipeline

# | Performance Measure | GO | NO-GO
1 | Filter step removes nulls | Present | Nulls in output
2 | Rename step (C2DAO names) | Compliant | Non-compliant
3 | CAST steps correct types | Correct | Mismatch
4 | Join on unit_id | Correct | Wrong key
5 | [CRITICAL] DATEDIFF column | Present | Absent
6 | [CRITICAL] Pipeline runs without error | No errors | Errors present
7 | Output row count matches | Matches | Fan-out
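For orientation, the graded sequence above can be sketched outside Foundry. The pandas version below mirrors the steps (drop nulls, rename, cast, join on unit_id, fail on fan-out, add a date-difference column); MSS builds this in Foundry's pipeline tools, and the column names rpt_dt, due_date, and days_late are assumptions for the illustration.

```python
# Illustrative pandas mirror of the SL2-03 steps. Column names are
# assumptions; in MSS this pipeline is built in Foundry, not pandas.
import pandas as pd

def clean_and_transform(reports: pd.DataFrame, units: pd.DataFrame) -> pd.DataFrame:
    df = reports.dropna(subset=["unit_id"])                # step 1: filter nulls
    df = df.rename(columns={"rpt_dt": "report_date"})      # step 2: rename
    df["report_date"] = pd.to_datetime(df["report_date"])  # step 3: cast
    df["due_date"] = pd.to_datetime(df["due_date"])
    before = len(df)
    df = df.merge(units, on="unit_id", how="left")         # step 4: join on unit_id
    if len(df) != before:                                  # step 7: fan-out check
        raise ValueError("join fan-out: output row count changed")
    # step 5 (critical): DATEDIFF-style column, days between due and report dates
    df["days_late"] = (df["report_date"] - df["due_date"]).dt.days
    return df
```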

SL2-04: Create Object Type

# | Performance Measure | GO | NO-GO
1 | All required properties | Present | Missing
2 | [CRITICAL] All types correct | Correct | Incorrect
3 | [CRITICAL] Primary Key designated | PK set | No PK
4 | Display name expression | Set | None
5 | C2DAO naming | Compliant | Non-compliant

SL2-05: Create Link Type

# | Performance Measure | GO | NO-GO
1 | Correct Object Types linked | Correct | Wrong types
2 | Cardinality (MANY_TO_ONE) | Correct | Incorrect
3 | Directionality | Correct | Reversed

SL2-06: Ontology Write Step

# | Performance Measure | GO | NO-GO
1 | Write step added | Present | Absent
2 | Properties mapped | All mapped | Any unmapped
3 | [CRITICAL] PK column mapped | Mapped | Not mapped
4 | [CRITICAL] Object count matches source | Matches | Does not match

SL2-07: Configure Action

# | Performance Measure | GO | NO-GO
1 | Named parameter | Exists | None
2 | Write rule correct | Correct | Incorrect
3 | [CRITICAL] Editor-only access | Restricted | Viewer can execute
4 | Tested and confirmed | Updated | Not updated

SL2-08: Build Workshop Application

# | Performance Measure | GO | NO-GO
1 | C2DAO naming | Compliant | Non-compliant
2 | Table bound to Object Type | Live data | Not bound
3 | Filter connected | Narrows table | Not connected
4 | Metric widget | Correct value | Absent
5 | Bar chart | Correct fields | Wrong fields

SL2-09: Action Button / Verify Execution

# | Performance Measure | GO | NO-GO
1 | Button added | Present | Absent
2 | Action fires on click | Fires | Does not fire
3 | [CRITICAL] Table refreshes with correct value | Refreshes | Does not refresh

SL2-10: Access Control

# | Performance Measure | GO | NO-GO
1 | Viewer granted access | Granted | Not granted
2 | Viewer can view app | Visible | Not visible
3 | [CRITICAL] Viewer cannot execute Action | Unavailable | Can execute

SL 3 — Advanced Builder (9 Tasks)

Task | Title | Standard | Steps | Critical Items
SL3-01 | Design Ontology Schema | Documented schema scoring ≥75% on 6-item rubric; no zero-score item | 6 | none
SL3-02 | Build Multi-Source Pipeline / Append Mode | Join multiple sources; Append mode; two distinct snapshots after two runs | 5 | Fan-out detected; two snapshots
SL3-04 | Build Complex Workshop Application | Page 1 selection drives filtered Page 2; conditional formatting | 4 | Page 2 filtered by selection
SL3-05 | Build Contour Workbook / Deviation Column | Readiness by battalion with calculated deviation column | 3 | none
SL3-06 | Execute Full C2DAO Promotion Workflow | Branch first → change → description → submit → respond to feedback | 4 | Branch created BEFORE change; complete description; feedback addressed
SL3-07 | Build Multi-Object Quiver Dashboard | Linked views with cross-filter propagation — within 15 min | 4 | Filters propagate across views
SL3-08 | Configure AIP Logic Workflow | Trigger, input/output binding; routes to human review — within 20 min | 5 | Output to review queue, not production
SL3-09 | Interpret a Data Lineage Graph | Identify upstream sources, transforms, downstream consumers — within 5 min | 5 | none

Note: SL3-03 (Append Mode Snapshot) is included in SL3-02 above.

▸ View full GO/NO-GO performance measures — SL 3

SL3-01: Design Ontology Schema

# | Rubric Item | GO | NO-GO
1 | Domain entities identified | All present | Missing or phantom
2 | Primary Keys appropriate | Justified | Wrong PK
3 | Property types documented | All specified | Missing or errors
4 | Link cardinality correct | Correct + rationale | Wrong
5 | Action access control | Specified | None
6 | C2DAO naming | Compliant | >2 violations

SL3-02: Multi-Source Pipeline / Append Mode

# | Performance Measure | GO | NO-GO
1 | Join correct | Correct key/type | Wrong; fan-out
2 | [CRITICAL] Fan-out detected | Absent or documented | Present undetected
3 | Append mode set before first run | Set | Overwrite or late
4 | Snapshot timestamp column | Present | Absent
5 | [CRITICAL] Two distinct snapshots | Two records | Only one
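Steps 3–5 hinge on append semantics: each run stamps its rows and adds them to the history instead of overwriting it, so two runs leave two distinct snapshots. Foundry controls this through the dataset write mode; the pandas sketch below models only the semantics, and the column names are illustrative.

```python
# Model of append-mode snapshots: each run is stamped and appended, never
# overwritten. Illustrative only; Foundry's dataset write mode does this.
import pandas as pd

def append_snapshot(history: pd.DataFrame, run_output: pd.DataFrame,
                    run_ts: pd.Timestamp) -> pd.DataFrame:
    """Stamp run_output with snapshot_ts and append it to history."""
    stamped = run_output.copy()
    stamped["snapshot_ts"] = run_ts   # snapshot timestamp column (step 4)
    return pd.concat([history, stamped], ignore_index=True)
```

After two runs, `history["snapshot_ts"].nunique()` should be 2 — the critical measure in step 5.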

SL3-04: Complex Workshop

# | Performance Measure | GO | NO-GO
1 | Portfolio page correct | All units + status | Empty or incorrect
2 | Selection navigates to Page 2 | Works | Absent
3 | [CRITICAL] Page 2 filtered by selection | Correct records | Shows all
4 | Conditional formatting | Present | None

SL3-05: Contour Workbook / Deviation

# | Performance Measure | GO | NO-GO
1 | Correct dataset | Correct | Wrong
2 | Deviation column | Present + correct | Absent or incorrect
3 | Saved with name | Saved | Not saved
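The deviation column in step 2 is simple arithmetic: each battalion's readiness minus the overall mean. Contour expresses it as a calculated column in the workbook; the pandas sketch below shows the same computation with illustrative column names.

```python
# Deviation column as graded in SL3-05 step 2: value minus the overall
# mean. Column names are illustrative; Contour does this as a calculated
# column in the workbook.
import pandas as pd

def add_deviation(df: pd.DataFrame, value_col: str = "readiness") -> pd.DataFrame:
    out = df.copy()
    out["deviation"] = out[value_col] - out[value_col].mean()  # signed deviation
    return out
```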

SL3-06: C2DAO Promotion Workflow

# | Performance Measure | GO | NO-GO
1 | [CRITICAL] Branch created BEFORE making the change | Branch first | Change on main first
2 | [CRITICAL] Complete description | What/why/impact | Empty or generic
3 | [CRITICAL] Feedback addressed | Resubmitted | Not addressed
4 | Change on branch only | Branch-only | On main

SL3-07: Multi-Object Quiver Dashboard

# | Performance Measure | GO | NO-GO
1 | Views for 2+ Object Types | Both displayed | Missing
2 | Linked via Link Type | Functional | Not linked
3 | [CRITICAL] Cross-filter propagation | Confirmed | Does not propagate
4 | Drill-down works | Correct | Wrong objects

SL3-08: AIP Logic Workflow

# | Performance Measure | GO | NO-GO
1 | Trigger configured | Fires | Misconfigured
2 | Input binding correct | Correct | Wrong source
3 | Structured output | Structured | Prose only
4 | [CRITICAL] Routes to review queue | Draft in queue | Direct to production
5 | Runs without error | Success | Errors
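The critical measure in step 4 is a routing rule: AIP output lands in a human review queue as a draft and never goes straight to production. A minimal sketch of that guard; the queue and draft structures here are illustrative, not the AIP API.

```python
# Route AI-generated output to a human review queue (SL3-08 step 4).
# Structures are illustrative; the point is that nothing reaches
# production without human verification.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    drafts: list = field(default_factory=list)

    def submit(self, output: dict) -> dict:
        """Hold the AI output as a draft awaiting human review."""
        draft = {"status": "PENDING_REVIEW", "payload": output}
        self.drafts.append(draft)
        return draft

def route_aip_output(output: dict, queue: ReviewQueue) -> dict:
    # GO: draft lands in the review queue. A direct production write
    # here would be the automatic NO-GO.
    return queue.submit(output)
```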

SL3-09: Interpret Lineage Graph

# | Performance Measure | GO | NO-GO
1 | Opens lineage graph | Displayed | Cannot locate
2 | Upstream sources | All named | Missed
3 | Transforms described | Correct | Misidentified
4 | Downstream consumers | All named | Missed
5 | Propagation described | Correct | Cannot describe
REFERENCES
AR 350-1 (Army Training and Leader Development) • TR 350-70 (Army Learning Policy and Systems) • ADP 7-0 (Training) • FM 7-0 (Training)

Training & Evaluation Outline — SL 4 & SL 5

Evaluated tasks for specialist and advanced tracks per AR 350-1 and TR 350-70. Any step marked CRITICAL = automatic NO-GO if failed.

SL 4 WFF — Warfighting Function Tracks (6 Shared Tasks)

NOTE
WFF tracks (SL 4A–F) share a common T&EO task structure. Scenario content is adapted per WFF. Prereq: SL 1 + SL 2 + SL 3. Evaluation: 6 tasks, all must pass; 3-hour window.
Task | Title | Standard | Steps | Critical Items
40WFF-01 | Build WFF Pipeline | Ingest, clean, type, compute; pipeline runs without error | 6 | Pipeline runs without error
40WFF-02 | Create WFF Object Types / Populate | All Object Types created; correct types; PK set; count matches | 6 | Types correct; PK set; count matches
40WFF-03 | Configure WFF Workshop App | Table, filter, metric, status chart bound to WFF Objects | 5 | Classification marking present
40WFF-04 | Configure WFF Action | Parameter, write rule, access restriction; test confirms | 4 | Access restricted per spec
40WFF-05 | Build Multi-Page WFF Dashboard | Page 1 selection drives filtered Page 2; conditional formatting | 4 | Page 2 filtered by selection
40WFF-06 | Apply C2DAO Governance | Naming, marking, branch-first, promotion with complete description | 5 | Markings set; branch before change; complete description
▸ View full GO/NO-GO performance measures — SL 4 WFF

40WFF-01: Build WFF Pipeline

# | Performance Measure | GO | NO-GO
1 | Dataset ingested without error | Row count verified | Fails or not verified
2 | Filter step removes null/invalid rows | Present | Nulls in output
3 | Column types correct | All correct | Type mismatch
4 | Computed column present | Correct | Absent or incorrect
5 | [CRITICAL] Pipeline runs without error | No errors | Errors present
6 | Output in correct folder with compliant name | Correct | Misplaced or non-compliant

40WFF-02: Create WFF Object Types / Populate

# | Performance Measure | GO | NO-GO
1 | All required Object Types created | All present | Any missing
2 | [CRITICAL] All property types correct | Correct | Any incorrect
3 | [CRITICAL] Primary Key designated | PK set | No PK
4 | Write step configured; pipeline runs | Runs | Absent or fails
5 | [CRITICAL] Object count matches source | Matches | Does not match
6 | Naming follows C2DAO convention | Compliant | Non-compliant

40WFF-03: Configure WFF Workshop App

# | Performance Measure | GO | NO-GO
1 | Application named per convention | Compliant | Non-compliant
2 | Table bound to WFF Object Type | Live data | Not bound
3 | Filter widget connected | Narrows correctly | Not connected
4 | Status indicator present | Functional | Absent
5 | [CRITICAL] Classification marking present | Displayed | Absent

40WFF-04: Configure WFF Action

# | Performance Measure | GO | NO-GO
1 | Action created with parameter | Exists | No parameter
2 | Write rule correct | Maps to property | Incorrect
3 | [CRITICAL] Access restricted per spec | Restricted | Unauthorized can execute
4 | Action tested and confirmed | Updated | Did not update

40WFF-05: Build Multi-Page WFF Dashboard

# | Performance Measure | GO | NO-GO
1 | Summary page displays all records | Correct | Empty or incorrect
2 | Selection navigates to detail page | Works | Absent
3 | [CRITICAL] Detail page filtered by selection | Correct records | Shows all
4 | Conditional formatting present | Applied | None

40WFF-06: Apply C2DAO Governance

# | Performance Measure | GO | NO-GO
1 | All names follow C2DAO convention | Compliant | >2 violations
2 | [CRITICAL] Classification markings set | All marked | Any unmarked
3 | [CRITICAL] Branch created before changes | Branch-first | Changes on main
4 | Change on branch only | Branch-only | On main
5 | [CRITICAL] Promotion description complete | What/why/impact | Empty or generic

SL 4G — ORSA (6 Tasks)

Task | Title | Standard | Steps | Critical Items
40G-01 | Configure Code Workspace | Workspace configured; GPU verified; read/write confirmed | 4 | Write transaction committed
40G-02 | Build & Validate Regression Model | Model built; residual analysis done; output to Foundry | 6 | Residual analysis performed
40G-03 | Time Series Forecast w/ Confidence | Forecast w/ stationarity test, model rationale, 90% CI | 5 | 90% confidence intervals present
40G-04 | Monte Carlo Simulation | ≥1,000 trials; seed set; threshold probability computed | 5 | ≥1,000 trials; seed set
40G-05 | Linear Programming Problem | LP formulated; solution computed; sensitivity analysis | 5 | (none)
40G-06 | Commander Brief w/ Uncertainty | All estimates bounded; no unqualified predictions | 5 | All estimates bounded; no unqualified predictions
▸ View full GO/NO-GO performance measures — SL 4G

40G-01: Configure Code Workspace

# | Performance Measure | GO | NO-GO
1 | Required packages installed (statsmodels, scipy, pandas, numpy, matplotlib) | All importable | Any fails
2 | Test dataset read via Spark or pandas; schema/row count confirmed | Dataset read; schema matches | Not readable; connection error
3 | [CRITICAL] Write transaction committed & output confirmed | Committed | Fails or uncommitted
4 | Random seed set in workspace config | Seed set | No seed

40G-02: Build and Validate a Regression Model

# | Performance Measure | GO | NO-GO
1 | Feature selection rationale documented | Rationale present | No rationale
2 | Model trained with reproducible seed | Seed set; reproducible | No seed
3 | Validation stats (R², RMSE, MAE) | All three present | Any missing
4 | [CRITICAL] Residual analysis performed (plot or QQ) | Analysis present | No residual analysis
5 | Output written to Foundry curated dataset | In Foundry | Not written
6 | Assumptions documented (linearity, independence, normality) | Assumptions listed | No documentation
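
The three validation statistics in step 3 can be computed directly from predictions and ground truth. A minimal pure-Python sketch (the sample values are illustrative, not from any course dataset):

```python
import math

def validation_stats(y_true, y_pred):
    """Compute the three validation statistics named in 40G-02 step 3."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return r2, rmse, mae

r2, rmse, mae = validation_stats([3.0, 5.0, 7.0, 9.0], [2.8, 5.1, 7.2, 8.9])
```

In practice these come from scikit-learn or statsmodels output; the point of step 3 is that all three are reported, not which library produced them.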

40G-03: Time Series Forecast with Confidence Bounds

# | Performance Measure | GO | NO-GO
1 | Stationarity test performed (ADF or equiv) | Test documented | No test
2 | Model order selection w/ ACF/PACF rationale | Rationale documented | No rationale
3 | [CRITICAL] 90% confidence intervals on forecast | CI displayed | Point estimate only
4 | Forecast extends ≥6 periods forward | ≥6 periods | <6 periods
5 | Forecast plot w/ historical data & bounds | Plot complete | Missing context or bounds
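
The critical step is that every forecast carries bounds, not a bare point estimate. A deliberately simplified sketch of how 90% intervals behave, assuming a random-walk model rather than the fitted ARIMA the task requires (the sample series is invented):

```python
import math
import statistics

def naive_forecast_with_ci(history, periods=6, z90=1.645):
    """Last-value forecast with widening 90% bounds: under a
    random-walk assumption, the interval half-width grows with
    sqrt(horizon). A stand-in for a fitted model's conf_int output."""
    diffs = [b - a for a, b in zip(history, history[1:])]
    sigma = statistics.stdev(diffs)          # one-step residual spread
    last = history[-1]
    out = []
    for h in range(1, periods + 1):
        half = z90 * sigma * math.sqrt(h)
        out.append((last, last - half, last + half))  # (point, lower, upper)
    return out

fc = naive_forecast_with_ci([100, 102, 101, 105, 107, 106, 110], periods=6)
```

Note the bounds widen at longer horizons; a forecast whose interval stays constant out to period 6 is usually a sign the uncertainty was not propagated.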

40G-04: Monte Carlo Simulation

# | Performance Measure | GO | NO-GO
1 | [CRITICAL] ≥1,000 trials executed | ≥1,000 trials | <1,000 trials
2 | [CRITICAL] Random seed set; evaluator re-run matches | Seed set; reproducible | Not reproducible
3 | Distribution selection justified | Justification documented | No justification
4 | Probability at operational threshold computed | Threshold probability computed | No threshold probability
5 | Output distribution plotted w/ threshold marked | Threshold visible | No plot or threshold
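
Both critical items (trial count and seeded reproducibility) plus the threshold probability of step 4 fit in a few lines. A minimal sketch; the distribution and its parameters are illustrative, not operational data:

```python
import random

def mc_threshold_probability(n_trials=10_000, seed=42, threshold=150.0):
    """Seeded Monte Carlo: probability that a simulated quantity
    exceeds an operational threshold. The normal distribution and
    its parameters here are invented for illustration."""
    rng = random.Random(seed)        # fixed seed: evaluator re-run matches
    exceed = 0
    for _ in range(n_trials):
        draw = rng.gauss(mu=120.0, sigma=25.0)
        if draw > threshold:
            exceed += 1
    return exceed / n_trials

p = mc_threshold_probability()
rerun = mc_threshold_probability()   # identical seed -> identical result
```

Using a dedicated `random.Random(seed)` instance (rather than the module-level functions) keeps the run reproducible even when other code touches the global generator.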

40G-05: Formulate and Solve a Linear Programming Problem

# | Performance Measure | GO | NO-GO
1 | Objective function correctly formulated | Matches scenario | Incorrect
2 | All constraints formulated & documented | All present | Any missing/incorrect
3 | Solution computed (scipy.optimize.linprog or equiv) | Solution produced | Fails or not attempted
4 | Binding constraints identified | Stated | Not identified
5 | Sensitivity analysis on ≥1 binding constraint | Present | No sensitivity analysis
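
Step 3 names `scipy.optimize.linprog`; a minimal sketch of formulating and solving an LP with it. The allocation scenario and all numbers are invented for illustration:

```python
from scipy.optimize import linprog

# Illustrative problem: maximize 3x + 2y
# subject to x + y <= 10, 2x + y <= 15, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(
    c=[-3, -2],                      # negated objective coefficients
    A_ub=[[1, 1], [2, 1]],           # constraint matrix
    b_ub=[10, 15],                   # constraint right-hand sides
    bounds=[(0, None), (0, None)],
    method="highs",
)
x, y = res.x
best = -res.fun                      # undo the negation
# res.slack near zero identifies binding constraints (step 4)
binding = [i for i, s in enumerate(res.slack) if abs(s) < 1e-9]
```

For the sensitivity step, re-solving with a slightly perturbed `b_ub` entry for a binding constraint and reporting the change in the objective is a simple, defensible approach.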

40G-06: Commander Brief with Uncertainty Bounds

# | Performance Measure | GO | NO-GO
1 | [CRITICAL] Every estimate has confidence range/interval | All bounded | Any point estimate unbounded
2 | Language appropriate for non-technical audience | Clear, non-technical | Jargon-heavy
3 | Assumptions stated for each product | Assumptions communicated | No assumptions
4 | [CRITICAL] No unqualified predictions ("will" without probability) | All qualified | Unqualified prediction
5 | Recommendation supported by evidence | Traceable | Exceeds analytical foundation

SL 4H — AI Engineer (6 Tasks)

Task | Title | Standard | Steps | Critical Items
40H-01 | Author AIP Logic Workflow | JSON output; conditional chain; test run succeeds | 5 | Workflow runs on test input
40H-02 | Configure Agent Studio Agent | 2+ tools; correct responses; out-of-scope refused | 5 | Refuses out-of-scope queries
40H-03 | LLM Integration Pipeline w/ RAG | Retrieves correct context; grounded output; review queue | 5 | Output routed to human review
40H-04 | Human-in-the-Loop Checkpoints | No write without review; bypass blocked | 4 | No write without checkpoint; bypass blocked
40H-05 | Python Transforms for AIP Context | Correct extraction; schema match; terminology defined | 4 | (none)
40H-06 | AIP Authorization Checklist | Checklist complete & honest; ≥5 prohibited uses | 4 | Checklist accurate per workflow
▸ View full GO/NO-GO performance measures — SL 4H

40H-01: Author an AIP Logic Workflow

# | Performance Measure | GO | NO-GO
1 | Prompt includes military terminology context | Terminology defined | Relies on LLM defaults
2 | Produces structured JSON (not prose) | JSON validated | Prose output
3 | Conditional chain present | Functional | Linear only
4 | Error handling routes malformed output to review | Present | Silent failure
5 | [CRITICAL] Workflow runs on test input | Succeeds | Errors

40H-02: Configure an Agent Studio Agent

# | Performance Measure | GO | NO-GO
1 | ≥2 tools registered | Two tools | <2 tools
2 | Correct responses to 5 evaluator queries | In-scope correct | Incorrect response
3 | [CRITICAL] Refuses out-of-scope queries | Refused | Responds to out-of-scope
4 | Tool calls logged and visible | Logs present | No logging
5 | Memory scope defined and enforced | Configured | Unbounded context

40H-03: Build an LLM Integration Pipeline with RAG

# | Performance Measure | GO | NO-GO
1 | Retrieval mechanism configured | Functional | Prompt-only generation
2 | Context from correct Ontology Objects | Correct Objects | Wrong Objects or fabricated
3 | Output references retrieved content | Grounding evident | Not traceable
4 | [CRITICAL] Output routed to human review before production write | Review queue present | Writes directly to production
5 | Pipeline runs on test queries | Succeeds | Errors

40H-04: Implement Human-in-the-Loop Checkpoints

# | Performance Measure | GO | NO-GO
1 | [CRITICAL] No write without human checkpoint | All writes pass checkpoint | Any write bypasses
2 | Review queue displays output before write | Visible | Absent or empty
3 | Reviewer can approve or reject | Functional | No reject option
4 | [CRITICAL] Evaluator bypass attempt blocked | Blocked | Bypass succeeds
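
The checkpoint pattern behind 40H-04 can be sketched in a few lines: AI output is staged, a reviewer explicitly approves or rejects, and any direct-write path is blocked. Class and method names here are illustrative, not the platform's API:

```python
class ReviewQueue:
    """Minimal human-in-the-loop checkpoint: nothing reaches the
    production store without an explicit approve decision."""
    def __init__(self):
        self.pending = []      # visible to the reviewer before any write
        self.production = []

    def stage(self, record):
        self.pending.append(record)

    def review(self, index, approve):
        record = self.pending.pop(index)
        if approve:
            self.production.append(record)   # write happens only here
        return approve

    def write_direct(self, record):
        # bypass attempt: always blocked
        raise PermissionError("all writes must pass the review checkpoint")

q = ReviewQueue()
q.stage({"summary": "draft AI output"})
q.review(0, approve=False)                   # reviewer can reject
```

The evaluator's bypass attempt in step 4 corresponds to calling the direct-write path and confirming it fails rather than silently succeeding.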

40H-05: Write Python Transforms for AIP Context

# | Performance Measure | GO | NO-GO
1 | Correct Object properties extracted | All required | Any missing
2 | Output matches AIP Logic input schema | Schema match | Mismatch
3 | Military terminology defined in context | Defined | Abbreviations unexplained
4 | Transform runs without error | Succeeds | Runtime error

40H-06: Complete the AIP Authorization Checklist

# | Performance Measure | GO | NO-GO
1 | All checklist items completed | All addressed | Any blank
2 | [CRITICAL] Responses honest & accurate per workflow design | Matches workflow | Misrepresents capability/safety
3 | ≥5 prohibited use cases identified | ≥5 identified | <5 identified
4 | HITL documented for all Ontology writes | Documented | Any write without HITL doc

SL 4M — ML Engineer (6 Tasks)

Task | Title | Standard | Steps | Critical Items
40M-01 | Configure GPU Workspace | GPU confirmed; packages installed; read/write verified | 4 | Write transaction committed
40M-02 | Feature Engineering Pipeline | Nulls handled; encoding/scaling; no leakage | 6 | Leakage audit: no leakage
40M-03 | Train & Evaluate Supervised Model | Cross-val; metrics meet thresholds; calibration done | 5 | Calibration check performed
40M-04 | Deploy Model to Serving Endpoint | Model registered; endpoint responding; latency in spec | 4 | Correct predictions for 10 test records
40M-05 | Drift Monitoring Pipeline | Drift detection; alert routes; evaluator drift detected | 5 | Evaluator-seeded drift detected
40M-06 | Model Governance Document | Model card complete; limitations specific; RAI declared | 4 | All 4 required sections present
▸ View full GO/NO-GO performance measures — SL 4M

40M-01: Configure Code Workspace with GPU

# | Performance Measure | GO | NO-GO
1 | Required packages installed (scikit-learn, PyTorch/TF, pandas, numpy) | All importable | Any fails
2 | GPU allocation confirmed | GPU available | Not detected
3 | [CRITICAL] Write transaction committed to Foundry | Committed | Fails
4 | Random seed set | Seed set | No seed

40M-02: Build a Feature Engineering Pipeline

# | Performance Measure | GO | NO-GO
1 | Null handling applied & documented | Nulls handled | Nulls in output
2 | Categorical encoding applied | Encoding applied | Raw categoricals
3 | Numeric scaling applied | Scaling applied | Unscaled
4 | [CRITICAL] Leakage audit: no feature derived from label | No leakage | Leakage detected or audit missing
5 | Feature matrix written to Foundry | In Foundry | Not written
6 | Each feature decision documented | Present | No documentation

40M-03: Train and Evaluate a Supervised Model

# | Performance Measure | GO | NO-GO
1 | Train/test split with reproducible seed | Reproducible | Not reproducible
2 | Cross-validation (k≥5) | Results reported | No cross-val
3 | Metrics: accuracy, precision, recall, ROC-AUC | All reported | Any missing
4 | [CRITICAL] Calibration check performed & documented | Present | Skipped
5 | ≥2 models compared; selection justified | Comparison present | Single model only
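
The critical calibration check in step 4 is usually a reliability table or curve: bin the predicted probabilities and compare each bin's mean prediction with its observed positive rate. A minimal sketch with invented sample values (bin count is arbitrary):

```python
def calibration_table(probs, labels, n_bins=5):
    """Reliability table: per probability bin, mean predicted
    probability vs. observed positive rate. Large gaps between the
    two columns indicate a miscalibrated model."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)   # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    rows = []
    for members in bins:
        if not members:
            continue
        mean_p = sum(p for p, _ in members) / len(members)
        obs = sum(y for _, y in members) / len(members)
        rows.append((round(mean_p, 3), round(obs, 3), len(members)))
    return rows

table = calibration_table(
    probs=[0.1, 0.15, 0.4, 0.45, 0.8, 0.85, 0.9],
    labels=[0, 0, 0, 1, 1, 1, 1],
)
```

Documenting this table (or the equivalent plot) satisfies the "performed & documented" wording of the measure.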

40M-04: Deploy a Model to a Serving Endpoint

# | Performance Measure | GO | NO-GO
1 | Model registered in Foundry w/ version | Registered | Not registered
2 | Endpoint deployed & responding | Responds | Not responding
3 | [CRITICAL] Correct predictions for 10 test records | All 10 returned | Failures or errors
4 | Latency within spec | Within threshold | Exceeds threshold

40M-05: Implement a Drift Monitoring Pipeline

# | Performance Measure | GO | NO-GO
1 | Drift detection method (PSI, KS, or equiv) | Metric computed | No detection
2 | Baseline from deployment-time data | Documented | No baseline
3 | Alert threshold defined & documented | Threshold set | No threshold
4 | [CRITICAL] Evaluator-seeded drift detected | Detected & flagged | Not detected
5 | Alert routes to correct channel | Routed | Not routed
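
PSI, one of the detection methods named in step 1, compares binned population shares between a baseline and a current sample. A minimal sketch; the cutpoints, sample values, and the 0.2 alert threshold are common conventions, not course-mandated settings:

```python
import math

def psi(baseline, current, cutpoints):
    """Population Stability Index over fixed cutpoints. A common
    rule of thumb (tune per model) treats PSI > 0.2 as meaningful
    drift worth alerting on."""
    def shares(values):
        counts = [0] * (len(cutpoints) + 1)
        for v in values:
            idx = sum(v > c for c in cutpoints)   # which bin v falls in
            counts[idx] += 1
        total = len(values)
        # small floor avoids log(0) on empty bins
        return [max(c / total, 1e-6) for c in counts]

    b, c = shares(baseline), shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

stable = psi([1, 2, 3, 4, 5] * 20, [1, 2, 3, 4, 5] * 20, cutpoints=[2, 4])
shifted = psi([1, 2, 3, 4, 5] * 20, [4, 5, 5, 5, 4] * 20, cutpoints=[2, 4])
```

An evaluator-seeded drift (step 4) is exactly the `shifted` case: the same pipeline, fed a deliberately shifted sample, must cross the documented threshold and fire the alert.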

40M-06: Complete a Model Governance Document

# | Performance Measure | GO | NO-GO
1 | [CRITICAL] Model card: assumptions, training data, limitations, use restrictions | All four sections | Any missing
2 | Limitations specific & realistic | Specific | Generic boilerplate
3 | Responsible AI declaration | Present | Absent
4 | Out-of-scope uses documented | Documented | No out-of-scope docs

SL 4J — Product Manager (6 Tasks)

Task | Title | Standard | Steps | Critical Items
40J-01 | Program Data Architecture | 4 Object Types; correct links & cardinality; paper first | 4 | (none)
40J-02 | Milestone Tracking Pipeline | DATEDIFF variance; RAG status; data-as-of timestamp | 5 | Data-as-of timestamp present
40J-03 | Milestone Dashboard | RAG formatting; data-as-of widget; filter functional | 4 | Data-as-of timestamp on dashboard
40J-04 | Budget Execution Visualization | Obligation rate chart; reference line; at-risk identifiable | 3 | (none)
40J-05 | Snapshot Pipeline (Append Mode) | Append before first run; 2 distinct snapshots | 3 | Two distinct snapshot records
40J-06 | IPR Product (PM Standards) | Contour portfolio; RED at top; exportable PDF | 4 | (none)
▸ View full GO/NO-GO performance measures — SL 4J

40J-01: Design a Program Data Architecture

# | Performance Measure | GO | NO-GO
1 | All 4 Object Types (Program, Milestone, Resource, Risk) | All present | Any missing
2 | Link Types w/ correct cardinality | Correct | Incorrect
3 | Properties documented with types | Specified | Missing types
4 | Paper design before Ontology Manager | Paper first | Built without design

40J-02: Build a Milestone Tracking Pipeline

# | Performance Measure | GO | NO-GO
1 | IMS Excel ingested; date columns CAST correctly | CAST applied | DATEDIFF fails on text
2 | DATEDIFF variance (planned vs actual) | Correct | Absent or incorrect
3 | RAG status (RED >30d, AMBER >0, GREEN ≤0) | Logic correct | Absent or wrong
4 | [CRITICAL] Data-as-of timestamp (CURRENT_DATE) | Present | No timestamp
5 | Pipeline runs without error | No errors | Errors
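
Steps 2 and 3 combine a date difference with the RAG thresholds. A Python stand-in for the pipeline's DATEDIFF expression and status logic (the sample dates are illustrative):

```python
from datetime import date

def milestone_status(planned, actual):
    """Schedule variance in days (positive = late) plus the RAG rule
    from 40J-02 step 3: RED more than 30 days late, AMBER any days
    late, GREEN on or ahead of plan."""
    variance = (actual - planned).days
    if variance > 30:
        rag = "RED"
    elif variance > 0:
        rag = "AMBER"
    else:
        rag = "GREEN"
    return variance, rag

print(milestone_status(date(2026, 3, 1), date(2026, 4, 15)))  # 45 days late -> RED
```

The same thresholds, expressed in the pipeline's own expression language over the CAST date columns, satisfy the measure; the point is that the boundary cases (exactly 0, exactly 30) land on the correct side.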

40J-03: Milestone Dashboard with Data-As-Of Timestamp

# | Performance Measure | GO | NO-GO
1 | Table widget displays milestones | Functional | Empty or not bound
2 | RAG conditional formatting | Correct | No formatting
3 | [CRITICAL] Data-as-of timestamp widget visible | Visible | No timestamp
4 | Filter by program or status | Functional | No filter

40J-04: Budget Execution Visualization

# | Performance Measure | GO | NO-GO
1 | Obligation rate chart displays correctly | Correct data | Absent or incorrect
2 | Reference line at quarterly target | Present at correct value | No reference line
3 | At-risk programs identifiable | Visually distinguishable | Cannot identify

40J-05: Configure Snapshot Pipeline in Append Mode

# | Performance Measure | GO | NO-GO
1 | Append mode configured before first run | Set before run | Overwrite or set after
2 | Snapshot timestamp column present | Present | No timestamp
3 | [CRITICAL] Two distinct snapshots after two runs | Two records | Only one (Overwrite)

40J-06: IPR Product Meeting PM Dashboard Standards

# | Performance Measure | GO | NO-GO
1 | Contour portfolio health matrix present | Created | No portfolio view
2 | Sorted by status ascending (RED at top) | RED first | Not sorted
3 | All PM Dashboard Standards met | All pass | Any fails
4 | Exportable as PDF | Export successful | Cannot export

SL 4K — Knowledge Manager (6 Tasks)

Task | Title | Standard | Steps | Critical Items
40K-01 | Design Knowledge Ontology | 5+ Object Types; correct links; checklist passes | 4 | (none)
40K-02 | Configure AAR Submission Form | Writes to AAR Object; required fields enforced | 4 | Required-field validation fires
40K-03 | Lessons-Learned Pipeline | Tagging; dedup; distribution routing | 4 | (none)
40K-04 | AIP Summarization w/ Review Gate | 5 docs processed; Draft status; review queue | 4 | All outputs begin as Draft
40K-05 | Knowledge Browser Application | Search/filter/drill-down; 5/5 queries correct | 4 | 5/5 evaluator queries correct
40K-06 | PCS Knowledge Transfer Package | Specific artifacts named; quality documented | 4 | Names specific Foundry artifacts
▸ View full GO/NO-GO performance measures — SL 4K

40K-01: Design a Knowledge Ontology

# | Performance Measure | GO | NO-GO
1 | All 5 Object Types (Document, Lesson, AAR, SOP, ExpertiseProfile) | All present | Any missing
2 | Link Types (Lesson → AAR, Lesson → Unit, SOP → Unit) | Correct | Missing or incorrect
3 | Properties documented with types | Specified | Missing types
4 | Evaluated against knowledge architecture checklist | Passes | Fails

40K-02: Configure an AAR Submission Form

# | Performance Measure | GO | NO-GO
1 | All required fields (unit, date, event type, location, description, lesson, classification) | All present | Any missing
2 | [CRITICAL] Required-field validation fires on empty submission | Prevents empty | Empty accepted
3 | Submission writes to AAR Object Type | Confirmed | Write fails
4 | Submission confirmation displayed | Visible | No confirmation

40K-03: Configure a Lessons-Learned Pipeline

# | Performance Measure | GO | NO-GO
1 | Tagging taxonomy applied | Tags applied | No tagging
2 | Deduplication logic present | Duplicates handled | Duplicates pass through
3 | Distribution routing functional | Correct | No routing
4 | Pipeline runs on test data | No errors | Errors

40K-04: AIP Summarization Workflow with Review Gate

# | Performance Measure | GO | NO-GO
1 | All 5 documents processed | All processed | Any fails
2 | Structured output (not raw prose) | Structured | Unstructured
3 | [CRITICAL] All AIP-generated lessons begin as Draft | Draft status | Any published without review
4 | Review queue displays outputs | Populated | Empty

40K-05: Build a Knowledge Browser Application

# | Performance Measure | GO | NO-GO
1 | Search functionality (keyword or semantic) | Returns results | No search
2 | Filter by tag, unit, and date | All three work | Any non-functional
3 | Drill-down to full lesson/AAR text | Functional | Absent
4 | [CRITICAL] 5/5 evaluator queries return correct results | 5 of 5 correct | Any incorrect

40K-06: PCS Knowledge Transfer Package

# | Performance Measure | GO | NO-GO
1 | Key person dependency analysis | Dependencies identified | No analysis
2 | [CRITICAL] Names specific Foundry projects, Object Types, pipelines, contacts | Specific artifacts | Generic boilerplate
3 | Data quality status per artifact | Present | No quality docs
4 | Reviewed & approved by instructor | Approved | Not reviewed

SL 4L — Software Engineer (6 Tasks)

Task | Title | Standard | Steps | Critical Items
40L-01 | Paginated OSDK Query | Correct filter; all pages; no hardcoded creds | 4 | All pages retrieved; no hardcoded creds
40L-02 | OSDK Action w/ Validation | Valid succeeds; invalid → structured error | 4 | (none)
40L-03 | TypeScript Function on Objects | Correct values for 10 objects; edge cases handled | 4 | (none)
40L-04 | TypeScript Action Validator | ≥3 conditions; 8/8 test cases pass | 4 | All 8 test cases pass
40L-05 | Slate App w/ Live Ontology | Live data; auto-refresh; error states; no creds | 4 | No hardcoded credentials
40L-06 | C2DAO Code Review & Deploy | PR created; comments addressed; no creds in code | 4 | No credentials in committed code
▸ View full GO/NO-GO performance measures — SL 4L

40L-01: Authenticate and Execute a Paginated OSDK Query

# | Performance Measure | GO | NO-GO
1 | OSDK client authenticated | Authenticated | Fails
2 | Filter applied per evaluator spec | Correct records | Wrong records
3 | [CRITICAL] Pagination iterates all pages | All pages | Only page 1
4 | [CRITICAL] No hardcoded credentials | None in code | Credential found

40L-02: Execute an Action via OSDK with Validation

# | Performance Measure | GO | NO-GO
1 | Valid Action executes successfully | Succeeds | Fails on valid input
2 | Invalid input → validation error (not unhandled) | Structured error | Unhandled exception
3 | Error includes specific field & message | Field identified | Generic error
4 | Async response pattern (task ID polling) | Implemented | Synchronous block

40L-03: Build a TypeScript Function on Objects

# | Performance Measure | GO | NO-GO
1 | Function compiles without TS errors | No errors | TS errors
2 | Correct values for 10 test objects | All correct | Any incorrect
3 | Edge cases handled (null, boundary) | Correct results | Error or incorrect
4 | Bulk query pattern (not per-object calls) | Bulk pattern | N+1 pattern

40L-04: Write and Test a TypeScript Action Validator

# | Performance Measure | GO | NO-GO
1 | ≥3 distinct validation conditions | ≥3 | <3
2 | Specific, descriptive error messages | Specific | Generic/missing
3 | [CRITICAL] 8/8 test cases pass (4 valid, 4 invalid) | 8 of 8 | Any fails
4 | Cross-field validation present | Present | No cross-field

40L-05: Build a Slate Application with Live Ontology Data

# | Performance Measure | GO | NO-GO
1 | Application renders live Ontology data | Displayed | Static or not rendering
2 | Data refreshes on state change | Auto-refresh | Manual refresh
3 | Error state shows useful message | Useful message | Generic "error occurred"
4 | [CRITICAL] No hardcoded credentials | None in code | Credential found

40L-06: C2DAO Code Review and Deployment Workflow

# | Performance Measure | GO | NO-GO
1 | PR created with descriptive title/summary | Created | No PR
2 | Review comments addressed | Addressed | Ignored
3 | Deployment checklist completed end-to-end | All items | Any incomplete
4 | [CRITICAL] No hardcoded credentials/tokens in committed code | None | Credentials present

SL 4N — UI/UX Designer (6 Tasks)

Task | Title | Standard | Steps | Critical Items
40N-01 | User Research Plan | Research questions; SCD interview guide; contextual inquiry | 4 | (none)
40N-02 | Information Architecture | Decision-first hierarchy; glance/scan/commit test | 4 | Passes 2-second glance test
40N-03 | Interactive Prototype | Clickable; 5 states; primary flow without explanation | 5 | Error state w/ useful feedback
40N-04 | Design Handoff Package | Annotated mockups; data binding; all states specified | 4 | Data binding documentation
40N-05 | Accessibility Audit | ≥3 issues w/ WCAG criterion; color-only flagged | 4 | ≥3 issues w/ severity & WCAG ref
40N-06 | Usability Test | Think-aloud; task completion rates; severity-rated findings | 4 | Recommendations for Critical/Major
▸ View full GO/NO-GO performance measures — SL 4N

40N-01: Produce a User Research Plan

# | Performance Measure | GO | NO-GO
1 | Research questions tied to design decisions | Defined | No questions
2 | Target population (role, rank, context) | Specified | Generic
3 | SCD semi-structured questions (not leading/yes-no) | SCD present | Leading or yes/no
4 | Contextual inquiry protocol (classification, lighting, noise, screen) | Constraints addressed | No protocol

40N-02: Design an Information Architecture

# | Performance Measure | GO | NO-GO
1 | Decision-first hierarchy documented | Documented | Widget-palette-first
2 | [CRITICAL] Glance test: status identifiable in 2 sec | Identifiable | Not identifiable
3 | Scan test: attention areas in 10 sec | Identifiable | Cannot identify
4 | Commit test: detail drill-down in 30 sec | Accessible | >30 sec

40N-03: Build an Interactive Prototype

# | Performance Measure | GO | NO-GO
1 | Prototype is clickable/navigable | Navigable | Static mockup
2 | Default state displays | Present | Missing
3 | Loading, empty, success states | All three | Any missing
4 | [CRITICAL] Error state with useful feedback | Feedback present | Blank or generic
5 | Primary flow without designer explanation | Completable | Requires explanation

40N-04: Produce a Design Handoff Package

# | Performance Measure | GO | NO-GO
1 | Annotated mockups w/ widget specs | Annotated | No annotations
2 | [CRITICAL] Data binding docs (widget → Object property) | Documented | No data binding docs
3 | Interaction spec covers all 5 states | All specified | Any unspecified
4 | Accessibility requirements documented | Present | No a11y docs

40N-05: Complete an Accessibility Audit

# | Performance Measure | GO | NO-GO
1 | Automated a11y scan completed | Documented | No scan
2 | Manual keyboard navigation test | Documented | No test
3 | [CRITICAL] ≥3 issues w/ severity & WCAG criterion | ≥3 identified | <3 or no WCAG ref
4 | Color-only encoding flagged | Flagged | Not identified

40N-06: Execute a Usability Test

# | Performance Measure | GO | NO-GO
1 | Think-aloud protocol used | Captured | Silent observation
2 | Task completion rates documented | Documented | No rates
3 | Findings severity-rated (Critical/Major/Minor/Cosmetic) | Rated | No ratings
4 | [CRITICAL] Recommendations for Critical & Major findings | Present | No recommendations

SL 4O — Platform Engineer (6 Tasks)

Task | Title | Standard | Steps | Critical Items
40O-01 | Deploy Workload to K8s | Declarative YAML; resource limits; health probes | 4 | Liveness & readiness probes passing
40O-02 | GitOps w/ Drift Detection | Deploy via commit; drift auto-reverted | 4 | Drift reverted automatically
40O-03 | Harden Container Image | Iron Bank base; multi-stage; non-root; caps dropped | 5 | Runs as non-root
40O-04 | CI/CD Pipeline w/ Security Gates | All stages; secrets scan; gate blocks on vuln | 4 | Security gate blocks deployment
40O-05 | Deployment Strategy w/ Rollback | Rolling + blue/green; rollback from each | 4 | Blue/green rollback restores previous
40O-06 | Air-Gapped Deployment | Bundled artifacts; health checks pass; no ext network | 4 | Deploys with no external access
▸ View full GO/NO-GO performance measures — SL 4O

40O-01: Deploy a Workload to Kubernetes

# | Performance Measure | GO | NO-GO
1 | Declarative YAML (kubectl apply) | Successful | Imperative or fails
2 | Resource requests & limits configured | Both set | No resource config
3 | [CRITICAL] Liveness & readiness probes passing | Both healthy | No probes or failing
4 | Labels applied (app, env, team) | All present | Missing labels

40O-02: Configure a GitOps Workflow with Drift Detection

# | Performance Measure | GO | NO-GO
1 | GitOps controller syncing from Git | Synced | Not configured
2 | Deploy by Git commit | Via commit | Manual kubectl
3 | [CRITICAL] Evaluator drift reverted automatically | Reverted | Drift persists
4 | Drift alerts configured | Alert fires | No alerting

40O-03: Harden a Container Image

# | Performance Measure | GO | NO-GO
1 | Iron Bank base image used | Iron Bank | Docker Hub
2 | Multi-stage build (no build tools in prod) | Confirmed | Build tools in prod
3 | [CRITICAL] Runs as non-root user | Non-root | Root
4 | Linux capabilities dropped (ALL; required added back) | Dropped | No cap management
5 | Vuln scan passes (no unpatched CRITICAL/HIGH) | Passes | CRITICAL/HIGH present

40O-04: Build a CI/CD Pipeline with Security Gates

# | Performance Measure | GO | NO-GO
1 | Stages: build, test, scan, deploy | All present | Any missing
2 | Secrets detection gate | Configured | No secrets detection
3 | [CRITICAL] Security gate blocks on detected vulnerability | Blocks | Does not block
4 | Artifacts stored w/ version tags | Versioned | No artifact mgmt

40O-05: Implement Deployment Strategy with Rollback

# | Performance Measure | GO | NO-GO
1 | Rolling update w/ zero downtime | Succeeds | Downtime
2 | Rollback from rolling update | Restores | Fails
3 | Blue/green w/ traffic switch | Switched | Not implemented
4 | [CRITICAL] Blue/green rollback restores previous | Restores | Fails

40O-06: Deploy an Application Across an Air Gap

# | Performance Measure | GO | NO-GO
1 | All images & config bundled | Complete | Missing deps
2 | Bundle imported to internal registry | Successful | Fails
3 | [CRITICAL] Deploys & health checks pass w/ no ext network | Healthy | Fails on missing dep
4 | Deployment procedure documented | Documented | No docs

SL 5G — Advanced ORSA (4 Tasks)

Task | Title | Standard | Steps | Critical Items
50G-01 | Bayesian Readiness Model | Prior justified; posterior w/ 90% credible interval | 4 | 90% credible interval
50G-02 | Network Vulnerability Analysis | Graph constructed; centrality computed; top 3 nodes | 4 | Top 3 critical nodes w/ risk rating
50G-03 | Pareto Frontier for COA | Frontier computed; 3 COA points named | 4 | ≥3 named COA points
50G-04 | GO/SES Analytical Product | BLUF; uncertainty; assumption register; peer review | 6 | All estimates bounded; assumption register; peer review block
▸ View full GO/NO-GO performance measures — SL 5G

50G-01: Implement a Bayesian Readiness Model

# | Performance Measure | GO | NO-GO
1 | Prior selection justified | Justified | No justification
2 | [CRITICAL] Posterior w/ 90% credible interval | CI present | Point estimate only
3 | Assumption register entry for prior | Documented | No entry
4 | Hierarchical model if multi-echelon data | Hierarchical | Single-level
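
For a pass/fail readiness rate, the Beta-Binomial conjugate model gives the posterior in closed form, and a seeded sample of posterior draws yields the 90% credible interval the critical step requires. A minimal sketch; the flat Beta(1, 1) prior and the sample counts are illustrative assumptions, and a real submission would justify its prior (step 1) in the assumption register:

```python
import random

def beta_binomial_credible_interval(successes, trials,
                                    prior_a=1.0, prior_b=1.0,
                                    level=0.90, n_draws=20_000, seed=7):
    """With a Beta(a, b) prior on a pass rate and k successes in n
    trials, the posterior is Beta(a + k, b + n - k). The credible
    interval is estimated from seeded posterior draws, keeping the
    result reproducible."""
    rng = random.Random(seed)
    a = prior_a + successes
    b = prior_b + trials - successes
    draws = sorted(rng.betavariate(a, b) for _ in range(n_draws))
    lo_idx = int((1 - level) / 2 * n_draws)
    hi_idx = int((1 + level) / 2 * n_draws) - 1
    return draws[lo_idx], draws[hi_idx]

lo, hi = beta_binomial_credible_interval(successes=42, trials=50)
```

Reporting "pass rate 0.83, 90% credible interval (lo, hi)" rather than the point estimate alone is what separates GO from NO-GO here.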

50G-02: Conduct Network Vulnerability Analysis

# | Performance Measure | GO | NO-GO
1 | Network graph w/ correct nodes/arcs | Matches data | Incorrect
2 | Betweenness centrality computed | Computed | No centrality
3 | [CRITICAL] Top 3 critical nodes w/ operational risk | Identified w/ risk | No risk translation
4 | Node removal impact analysis | Present | No impact analysis

50G-03: Compute Pareto Frontier for COA Comparison

# | Performance Measure | GO | NO-GO
1 | Both objectives quantified from data | Quantified | Vague
2 | Pareto frontier computed & plotted | Plotted | No frontier
3 | [CRITICAL] ≥3 COA points named w/ operational descriptions | 3 named | <3 or no naming
4 | Recommendation with assumption caveat | Caveat present | No caveat
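
The frontier itself is a non-domination filter. A minimal sketch for two objectives, assuming effectiveness is maximized and cost minimized (the COA names and values are invented for illustration):

```python
def pareto_frontier(coas):
    """Non-dominated COAs for two objectives: maximize effectiveness,
    minimize cost. A COA is dominated when another is at least as
    effective for no more cost, and strictly better on one axis."""
    frontier = []
    for name, eff, cost in coas:
        dominated = any(
            e2 >= eff and c2 <= cost and (e2 > eff or c2 < cost)
            for _, e2, c2 in coas
        )
        if not dominated:
            frontier.append((name, eff, cost))
    return frontier

front = pareto_frontier([
    ("COA A", 0.90, 140),   # high effect, high cost
    ("COA B", 0.75, 90),    # balanced
    ("COA C", 0.60, 60),    # economy of force
    ("COA D", 0.55, 95),    # dominated by COA B
])
```

The critical step is then naming at least three of the surviving points with operational descriptions, as in the comments above, rather than presenting an unlabeled curve.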

50G-04: GO/SES-Ready Analytical Product

# | Performance Measure | GO | NO-GO
1 | BLUF w/ result, confidence, key assumption | Complete | Missing or incomplete
2 | [CRITICAL] All estimates have uncertainty bounds | All bounded | Any unbounded
3 | [CRITICAL] Assumption register present & complete | Present | No register
4 | Limitations w/ specific invalidation conditions | Present | No limitations
5 | [CRITICAL] Peer review signature block | Present | No block
6 | All models reproducible (seeds set) | Reproducible | Not reproducible

SL 5H — Advanced AI Engineer (3 Tasks)

Task | Title | Standard | Steps | Critical Items
50H-01 | Enterprise RAG Architecture | Chunking justified; metadata schema; eval harness w/ MRR | 4 | Retrieval eval harness producing MRR
50H-02 | Multi-Agent System | Orchestrator routes; failure recovery; schema validation | 4 | Failure recovery path functional
50H-03 | AI Governance Framework | Review gates on all outputs; audit log; rollback; OPSEC | 4 | All outputs gated; OPSEC addressed
▸ View full GO/NO-GO performance measures — SL 5H

50H-01: Design an Enterprise RAG Pipeline Architecture

# | Performance Measure | GO | NO-GO
1 | Chunking strategy w/ tradeoff rationale | Justified | No rationale
2 | Metadata schema (source, date, section, classification) | Present | No schema
3 | [CRITICAL] Retrieval eval harness w/ ground truth → MRR | Produces MRR | No harness
4 | OPSEC implications of embedding model addressed | Addressed | Not considered
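
The metric named in the critical step, Mean Reciprocal Rank, averages the reciprocal rank of the first relevant document per query. A minimal harness sketch (query and document IDs are invented):

```python
def mean_reciprocal_rank(results, ground_truth):
    """MRR over an eval set: for each query, 1/rank of the first
    relevant document in the ranked retrieval results, or 0 if no
    relevant document appears."""
    total = 0.0
    for query, ranked in results.items():
        relevant = ground_truth[query]
        rr = 0.0
        for rank, doc_id in enumerate(ranked, start=1):
            if doc_id in relevant:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(results)

mrr = mean_reciprocal_rank(
    results={"q1": ["d3", "d1"], "q2": ["d9", "d2", "d4"]},
    ground_truth={"q1": {"d1"}, "q2": {"d4"}},
)
```

Running this over a held-out ground-truth set after any chunking or embedding change is what makes the chunking tradeoff of step 1 measurable rather than anecdotal.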

50H-02: Design a Multi-Agent System

# | Performance Measure | GO | NO-GO
1 | Orchestrator routes to correct workers | Correct | Misrouted
2 | ≥2 specialized workers w/ capabilities | Two present | <2
3 | [CRITICAL] Failure recovery (timeout, fallback, dead-letter) | Functional | No recovery
4 | Tool output schemas validated before hand-off | Validated | No validation

50H-03: Design an AI Governance Framework

# | Performance Measure | GO | NO-GO
1 | [CRITICAL] Human review gates on all consequential outputs | All gated | Any ungated
2 | Audit log schema (query, output, reviewer, decision, timestamp) | Present | No audit logging
3 | Rollback procedure (≤15 min recovery) | Documented | No rollback
4 | [CRITICAL] OPSEC classification handling addressed | Addressed | Not addressed

SL 5M — Advanced ML Engineer (3 Tasks)

Task | Title | Standard | Steps | Critical Items
50M-01 | Drift Monitoring Pipeline | PSI computed; evaluator drift detected; alert routes | 4 | Evaluator-seeded drift detected
50M-02 | Automated Retraining w/ Shadow | Trigger linked to drift; shadow mode comparison; human gate | 4 | Shadow mode comparison present
50M-03 | Fairness Eval & Governance | ≥2 subgroups; model card complete; deprecation criteria | 5 | Model card complete; deprecation criteria defined
▸ View full GO/NO-GO performance measures — SL 5M

50M-01: Build a Drift Monitoring Pipeline

# | Performance Measure | GO | NO-GO
1 | PSI per feature w/ thresholds | Present | No PSI
2 | Baseline from deployment-time data | Documented | No baseline
3 | [CRITICAL] Evaluator-seeded drift detected | Detected | Not detected
4 | Alert routes correctly | Routed | Not routed

50M-02: Automated Retraining with Shadow Mode

# | Performance Measure | GO | NO-GO
1 | Retraining trigger linked to drift alert | Configured | No trigger
2 | Candidate model registered w/ CANDIDATE status | Registered | No registration
3 | [CRITICAL] Shadow mode comparison (candidate vs production) | Present | No shadow mode
4 | Human approval gate before promotion | Present | Auto-promotion

50M-03: Fairness Evaluation and Governance Package

# | Performance Measure | GO | NO-GO
1 | Fairness eval across ≥2 subgroups | ≥2 evaluated | <2
2 | Performance disparities documented | Documented | No analysis
3 | [CRITICAL] Model card: assumptions, data, limitations, use, RAI | All sections | Any missing
4 | [CRITICAL] Deprecation criteria defined | Present | No criteria
5 | Human review gate on consequential outputs | Present | No gate

SL 5J — Advanced Product Manager (3 Tasks)

Task | Title | Standard | Steps | Critical Items
50J-01 | Portfolio Health Dashboard | 5 dimensions; RAG; readable in 60 sec | 4 | (none)
50J-02 | Technical Investment Brief | BLUF; tradeoff table; adjusts to injected constraint | 4 | BLUF present; adjusts to constraint
50J-03 | Respond to Injected Risk | Risk documented; escalation decision; response briefed | 4 | Escalation decision with rationale
▸ View full GO/NO-GO performance measures — SL 5J

50J-01: Build a Portfolio Health Dashboard

# | Performance Measure | GO | NO-GO
1 | All 5 dimensions (milestones, deps, risk, velocity, budget) | All present | Any missing
2 | RAG w/ clear definitions | Applied | No RAG
3 | Readable by GO/SES in 60 sec | Readable | Requires explanation
4 | Dependency health indicators | Visible | No dep view

50J-02: Present a Technical Investment Brief

# | Performance Measure | GO | NO-GO
1 | [CRITICAL] BLUF present at start | Present | No BLUF
2 | Tradeoff table (cost, schedule, perf, risk) | Present | No tradeoff
3 | Challenging question handled without defensiveness | Substantive | Defensive
4 | [CRITICAL] Recommendation adjusted to injected constraint | Adjusted | No adjustment

50J-03: Respond to an Injected Portfolio Risk

# | Performance Measure | GO | NO-GO
1 | Risk register updated | Documented | Not documented
2 | [CRITICAL] Escalation decision w/ rationale | Decision made | No decision
3 | Response briefed to evaluator | Briefed | Not briefed
4 | Cross-program dependency impact assessed | Stated | No assessment

SL 5K — Advanced Knowledge Manager (4 Tasks)

Task | Title | Standard | Steps | Critical Items
50K-01 | Multi-Domain Taxonomy | 3 domains; cross-domain linkages; governance | 3 | Cross-domain linkages defined
50K-02 | AI-Augmented Tagging Pipeline | Confidence threshold; low-conf → review queue | 4 | Low-confidence tags route to review
50K-03 | Knowledge System Health Eval | Zero-recall rate; age analysis; top 3 gaps; remediation | 4 | Zero-recall rate computed
50K-04 | Unit Continuity Protocol | Handoff protocol; decay monitoring; reactivation | 4 |

50K-01: Design a Multi-Domain Taxonomy

# | Performance Measure | GO | NO-GO
1 | Taxonomy covers 3 domains | All present | Any missing
2 | [CRITICAL] Cross-domain linkages defined | Present | No linkage
3 | Vocabulary governance process documented | Documented | No governance

50K-02: AI-Augmented Tagging Pipeline with Review Gate

# | Performance Measure | GO | NO-GO
1 | Pipeline processes documents | Runs | Errors
2 | Confidence threshold w/ basis | Documented | No basis
3 | [CRITICAL] Low-confidence tags → human review (not auto-applied) | Review queue | Auto-applied
4 | High-confidence verified against gold standard | Verified | No verification
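The review-gate routing in measures 2 and 3 reduces to a threshold split. A sketch, assuming a 0.85 threshold; the actual threshold and its basis are the candidate's to document, and the document IDs and tags here are invented.

```python
def route_tags(tag_candidates, threshold=0.85):
    """Split AI-suggested tags by confidence.

    High-confidence tags may be applied (and are still verified against a
    gold standard per measure 4); low-confidence tags go to a human review
    queue and are never auto-applied (the CRITICAL measure 3).
    """
    auto, review = [], []
    for doc_id, tag, confidence in tag_candidates:
        (auto if confidence >= threshold else review).append((doc_id, tag, confidence))
    return auto, review

candidates = [
    ("doc-1", "logistics", 0.97),
    ("doc-2", "medical", 0.62),
    ("doc-3", "signal", 0.91),
]
auto, review = route_tags(candidates)
print(len(auto), "auto-applied;", len(review), "queued for human review")
```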

50K-03: Evaluate Knowledge System Health

# | Performance Measure | GO | NO-GO
1 | [CRITICAL] Zero-recall rate computed w/ calculation | Computed | No analysis
2 | Content age distribution analyzed | Present | No age analysis
3 | Top 3 coverage gaps identified | Identified | <3 gaps
4 | Prioritized remediation plan | Present | No plan
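The zero-recall rate in measure 1 is a simple ratio over the search log: the share of searches that returned nothing. A sketch with invented queries; a high rate signals coverage gaps, feeding measures 3 and 4.

```python
def zero_recall_rate(search_log):
    """Share of searches returning zero results, from (query, hits) pairs."""
    if not search_log:
        return 0.0
    zero_hits = sum(1 for _query, hits in search_log if hits == 0)
    return zero_hits / len(search_log)

# Illustrative log: 2 of 4 searches found nothing -> 50% zero-recall rate
log = [("convoy sop", 7), ("ddil checklist", 0), ("fuel report", 3), ("psi formula", 0)]
print(f"zero-recall rate: {zero_recall_rate(log):.0%}")
```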

50K-04: Design a Unit Continuity Protocol

# | Performance Measure | GO | NO-GO
1 | Handoff protocol for departing personnel | Present | No protocol
2 | Knowledge decay monitoring (flag after 6 mo) | Present | No monitoring
3 | Reactivation procedure for dormant systems | Present | No procedure
4 | Protocol applied to scenario case study | Applied | Generic
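The 6-month decay flag in measure 2 can be sketched as a date comparison; the article names and the 182-day cutoff (roughly 6 months) are illustrative.

```python
from datetime import date, timedelta

def flag_stale(articles, today, max_age=timedelta(days=182)):
    """Flag knowledge articles untouched for ~6 months as decay candidates.

    Articles are (name, last_updated) pairs; flagged items feed the
    reactivation procedure from measure 3.
    """
    return [name for name, last_updated in articles if today - last_updated > max_age]

today = date(2026, 4, 1)
articles = [
    ("border-crossing-sop", date(2025, 8, 1)),   # ~8 months old -> flagged
    ("mss-login-guide", date(2026, 2, 15)),      # fresh -> not flagged
]
print(flag_stale(articles, today))
```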

SL 5L — Advanced Software Engineer (4 Tasks)

Task | Title | Standard | Steps | Critical Items
50L-01 | OSDK-First Object Type | Query-optimized; interface contract complete | 4 | Interface contract complete
50L-02 | Type-Safe TS Function w/ Tests | No type errors; discriminated unions; all tests pass | 4 | All unit tests pass
50L-03 | CI/CD w/ Contract Testing | All stages; contract test catches break; human gate | 4 | Contract test catches breaking change
50L-04 | Security Review & Fix | 5 categories; CRITICAL fixed; no client-side creds | 4 | CRITICAL fixed; no client-side creds

50L-01: OSDK-First Object Type with Interface Contract

# | Performance Measure | GO | NO-GO
1 | Object Type designed for OSDK consumption | Query patterns considered | Data-centric only
2 | Stable, unique PK (not mutable business key) | Stable | Mutable PK
3 | [CRITICAL] Interface contract: queries, Actions, errors, versioning | Complete | Any section missing
4 | Top 5 OSDK queries documented before build | Documented | No pre-build docs

50L-02: Type-Safe TypeScript Function with Tests

# | Performance Measure | GO | NO-GO
1 | Compiles with no type errors | No errors | Type errors
2 | Discriminated union error types | Present | Generic errors
3 | [CRITICAL] Unit tests cover validation & error paths; all pass | All pass | Any fails
4 | Input validation at Action boundary | Present | No validation
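The discriminated-union error types in measure 2 are a TypeScript idiom; the sketch below shows the analogous tagged-union pattern in Python, with invented error variants, to illustrate the idea: callers branch on a literal kind tag instead of catching a generic error.

```python
from dataclasses import dataclass
from typing import Literal, Union

@dataclass
class ValidationError:
    kind: Literal["validation"]
    field: str
    message: str

@dataclass
class NotFoundError:
    kind: Literal["not_found"]
    object_id: str

# The union of tagged variants is the "discriminated union"
ActionError = Union[ValidationError, NotFoundError]

def describe(err: ActionError) -> str:
    """Branch exhaustively on the discriminant tag."""
    if err.kind == "validation":
        return f"invalid field {err.field}: {err.message}"
    if err.kind == "not_found":
        return f"object {err.object_id} not found"
    raise AssertionError("unhandled error variant")  # exhaustiveness guard

print(describe(ValidationError("validation", "unit", "must be non-empty")))
```

In TypeScript the same shape is written with literal string types on a shared property, and the compiler enforces exhaustive handling.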

50L-03: CI/CD Pipeline with Contract Testing

# | Performance Measure | GO | NO-GO
1 | Stages: unit, integration, contract, security, promotion | All present | Any missing
2 | Branch protection: no direct push to main | Configured | Direct push allowed
3 | [CRITICAL] Contract test catches a breaking change | Blocked | Not detected
4 | Human approval before production | Present | Auto-promotion
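The critical check in measure 3, a contract test blocking a breaking change, reduces to asserting the response shape a consumer depends on. A sketch with a hypothetical consumer contract; real pipelines would typically use a contract-testing framework rather than this hand-rolled check.

```python
def satisfies_contract(response, contract):
    """Check that a response carries every field (with the type) the consumer pins.

    A provider change that drops or retypes a pinned field fails this check,
    which is exactly the breaking change the CI contract stage must block.
    """
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

contract = {"id": str, "status": str, "updated_at": str}

v1_response = {"id": "42", "status": "ACTIVE", "updated_at": "2026-04-01"}
v2_response = {"id": "42", "state": "ACTIVE", "updated_at": "2026-04-01"}  # field renamed

print(satisfies_contract(v1_response, contract))   # True  -> pipeline proceeds
print(satisfies_contract(v2_response, contract))   # False -> promotion blocked
```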

50L-04: Security Review and Fix Critical Findings

# | Performance Measure | GO | NO-GO
1 | Covers 5 categories (input val, creds, OSDK, output, access) | All covered | Any missed
2 | Findings prioritized by severity | Rated | No ratings
3 | [CRITICAL] CRITICAL findings fixed | Fixed | Not fixed
4 | [CRITICAL] No OSDK creds in client-side code | None | Creds present

SL 5N — Advanced UI/UX Designer (3 Tasks)

Task | Title | Standard | Steps | Critical Items
50N-01 | Design System Component | Variants; a11y notes; do/don’t examples; data binding | 4 | Accessibility documented
50N-02 | DDIL-Aware App Pattern | All 4 tiers; freshness indicators; no blank screen | 4 | No blank screen at any tier
50N-03 | Design Governance Proposal | Review gates; deviation mgmt; quality metrics | 3 | Deviation management process

50N-01: Design a Design System Component

# | Performance Measure | GO | NO-GO
1 | Variants documented w/ visuals | Present | No variants
2 | [CRITICAL] Accessibility notes (contrast, keyboard, screen reader) | Documented | No a11y docs
3 | Do/don’t usage examples | Present | No examples
4 | Data binding patterns documented | Documented | No binding docs

50N-02: Design a DDIL-Aware Application Pattern

# | Performance Measure | GO | NO-GO
1 | All 4 DDIL tiers (Connected, Degraded, Intermittent, Disconnected) | All present | Any missing
2 | Data freshness indicators (age-based visual) | Present | No indicators
3 | [CRITICAL] No blank screen at any DDIL tier | Content at all tiers | Blank screen
4 | Offline-first: writes queued for sync | Queue designed | No offline handling
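The age-based freshness indicator in measure 2 maps data age to a visible badge, so a user at a degraded tier sees staleness instead of a blank screen. The tier names and cutoffs below are illustrative, not doctrinal.

```python
def freshness_badge(age_seconds):
    """Map data age to an indicator label (illustrative cutoffs)."""
    if age_seconds < 60:
        return "LIVE"
    if age_seconds < 15 * 60:
        return "RECENT"
    if age_seconds < 6 * 3600:
        return "STALE"
    return "OFFLINE CACHE"

# Cached content plus a badge is shown at every tier; never a blank screen
for age in (10, 300, 7200, 86400):
    print(age, "->", freshness_badge(age))
```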

50N-03: Produce a Design Governance Proposal

# | Performance Measure | GO | NO-GO
1 | Design review gates defined (when, who, criteria) | Defined | No gates
2 | [CRITICAL] Deviation management process | Present | No deviation mgmt
3 | Quality metrics (consistency, coverage, deviation rate) | Defined | No metrics

SL 5O — Advanced Platform Engineer (4 Tasks)

Task | Title | Standard | Steps | Critical Items
50O-01 | Fleet Topology & Upgrade | Hub/edge; parameterized templates; wave strategy; rollback | 4 | Rollback procedure documented
50O-02 | SLOs with Error Budgets | SLIs defined; SLOs set; error budgets; budget policy | 4 | Error budgets computed
50O-03 | Automated Compliance Pipeline | Evidence automated; dashboard pass/fail/exception | 4 | Compliance dashboard functional
50O-04 | Federated Observability w/ SLO Alerts | Cross-cluster federation; SLO alert fires on breach | 4 | SLO alert fires on breach

50O-01: Fleet Topology and Upgrade Strategy

# | Performance Measure | GO | NO-GO
1 | Fleet topology w/ hub & edge clusters | Designed | No topology
2 | Cluster templates parameterized (region, classification, workload) | Parameterized | Separate templates per cluster
3 | Wave-based upgrade (canary → production) | Documented | No strategy
4 | [CRITICAL] Rollback procedure for failed upgrades | Present | No rollback

50O-02: Define SLOs with Error Budgets

# | Performance Measure | GO | NO-GO
1 | SLIs defined (availability, latency, success rate) | Defined | No SLIs
2 | SLOs w/ specific targets & windows | Targets set | Vague SLOs
3 | [CRITICAL] Error budgets computed from SLO targets | Computed | No budgets
4 | Budget-based decision policy (stop shipping when exhausted) | Documented | No policy
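The arithmetic behind measure 3 is worth making explicit: an availability SLO implies a concrete downtime allowance per window. For example, 99.9% over 30 days leaves 0.1% of 43,200 minutes, i.e. 43.2 minutes of error budget.

```python
def error_budget_minutes(slo_target, window_days=30):
    """Allowed downtime per window implied by an availability SLO target."""
    window_minutes = window_days * 24 * 60
    return (1 - slo_target) * window_minutes

print(error_budget_minutes(0.999))    # 99.9%  over 30 days -> 43.2 minutes
print(error_budget_minutes(0.9999))   # 99.99% over 30 days -> 4.32 minutes
```

The budget-based policy in measure 4 then reads directly off this number: when consumed downtime exceeds it, feature shipping stops until reliability work restores the budget.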

50O-03: Build an Automated Compliance Pipeline

# | Performance Measure | GO | NO-GO
1 | Vuln scan results collected as evidence | Present | Manual scan
2 | Config baseline comparisons automated | Automated | Manual
3 | [CRITICAL] Compliance dashboard: pass/fail/exception | Functional | No dashboard
4 | Exception tracking w/ expiration dates | Tracked | No exception mgmt

50O-04: Federated Observability with SLO-Based Alerting

# | Performance Measure | GO | NO-GO
1 | Cross-cluster metric federation configured | Visible across clusters | Not configured
2 | Fleet-wide dashboard (resource util, pod health) | Present | No cross-cluster dashboard
3 | [CRITICAL] SLO alert fires on fleet-wide SLI breach | Fires | Does not fire
4 | Cross-cluster correlation demonstrated | Demonstrated | No correlation
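The fleet-wide breach alert in measure 3 reduces to aggregating good/total event counts across clusters before taking the ratio. A sketch with invented counts; a real setup would federate these metrics into the hub's monitoring stack.

```python
def slo_breach_alert(good_events, total_events, slo_target):
    """Fire when the fleet-wide SLI (good/total) drops below the SLO target.

    Counts are summed across clusters before the ratio is taken, so one
    unhealthy edge cluster can breach the fleet-wide SLO.
    """
    sli = good_events / total_events
    return {"sli": sli, "fires": sli < slo_target}

# Per-cluster (good, total) counts aggregated hub-side; cluster B is unhealthy
clusters = [(99_800, 100_000), (48_000, 50_000)]
good = sum(g for g, _ in clusters)
total = sum(t for _, t in clusters)
print(slo_breach_alert(good, total, slo_target=0.999))
```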
NOTE
T3 course T&EOs (T3-F and T3-I) are available in the Documents panel under Train the Trainer.
EXEC
TM-EXEC (Senior Leader Executive Course) is orientation only — not evaluated. No T&EO applies.
REFERENCES
AR 350-1 (Army Training and Leader Development) • TR 350-70 (Army Learning Policy and Systems) • ADP 7-0 (Training) • FM 7-0 (Training)

Platform Changes

Foundry platform updates that affect training content. Instructors: review before each course iteration and integrate into labs as appropriate.

CY2026 YTD Executive Summary • 290 updates • Jan–Apr 2026

Q1 2026 PLATFORM UPDATES

Affects SL 1 (Operator)

WORKSHOP
  • AIP Analyst sessions persist when you switch tabs or change sections
  • Custom background colors on Workshop sections and pages
  • Markdown editing in text input widgets
  • Microsoft Word export (.docx in addition to CSV)
  • Usage metrics for Workshop applications
  • Custom widgets on mobile
NOTEPAD
  • AIP-assisted editing with custom prompts
  • Recently-used functions in AIP Assist menu
  • AI FDE integration
GAIA & COMPASS
  • Configurable default map styles in Ontology Manager
  • Gaia → Workflow Lineage shortcut
  • Compass pinning for frequently-used items
  • Project selection during installation configuration

Affects SL 2 (Builder)

OBJECT VIEWS — NOW GA
  • Core Object Views generally available (Feb 2026)
  • Global Branching support for Object Views
  • Updated Object Explorer with redesigned landing page
PIPELINE BUILDER
  • Incremental execution enforcement
  • Preview behavior controls
  • LLM data generation for test data
  • Approximate nearest neighbor join
  • File/file set output
BRANCHING & WORKFLOW
  • Role-based branch security
  • Upgraded branch security enabled by default
  • Ontology SQL in Quiver
  • Quiver time series workspace

Affects SL 3 (Advanced Builder)

AIP LOGIC & AUTOMATIONS
  • AIP Logic branch selection for evals
  • Autopilot (new, Mar 2026)
  • Interface parameters in Automate actions
  • Function effects in Automate
  • Streaming time series conditions
WORKFLOW LINEAGE
  • Presentation mode
  • Multi-ontology support
  • Log search from Lineage nodes
  • Monitoring status colors
  • Expanded access (Gaia, Quiver, Notepad, Automate)
BRANCHING & GOVERNANCE
  • Role-based branch security
  • Upgraded branch security for all users
  • Object Views + Branching
  • Materializations with row-level policies

APRIL 2026 PLATFORM UPDATES

Affects All Levels

PIPELINE BUILDER
  • No-code model inference — ML models from Model Studio can be added as visual nodes for batch predictions (Spark only; one tabular input/output)
ONTOLOGY MANAGER
  • Regex search on object type string properties and struct fields
  • Link type marking inheritance — classification markings now auto-inherit on creation (fix)
  • CBAC picker UI for AIP create_object_type tool
  • Ontology design best practices documentation added
GOVERNANCE & SECURITY
  • Marking scopes for all Developer Console apps (not just CBAC enrollments)
  • Workflow Lineage limits auto-expansion to ~800 nodes for performance
DEVELOPER TOOLCHAIN
  • External systems tab relocated to Settings > External systems in Code Workspaces
  • Debug source tool in AI FDE for Data Connection troubleshooting