Decision Console instruction guide (PSP-focused)
This guide explains how to use the Decision Console and how to interpret each metric with a PSP systems lens.
It is written for readers who want a comprehensive understanding of PSP operations (hub workflows, prior auth, benefits investigation, access friction, capacity strain), while staying honest about what this repo is: synthetic inputs + explicit assumptions + deterministic heuristics. It is not calibrated to real hub timestamps, claims, or revenue.
What the Decision Console is
The Decision Console is a commercial access and PSP strategy simulator: a thin UI over the API that helps users rehearse how access posture can flow into patient support operations and executive tradeoffs.
- Access: evaluates a single synthetic formulary line (baseline vs scenario) and produces structured access semantics and burden indices.
- PSP / Journey: consumes only the access outputs (as “upstream signals”) and computes PSP operational proxies (delay, dropout index, burden).
- Executive summary: assembles a curated metric slice, a rule-based narrative, and surfaced assumptions/warnings.
- Forecast sensitivity (optional): builds a 2×2 ordinal index matrix for scenario sensitivity (not dollars).
How different readers should use it
- Commercial strategy readers should start with the demo stories, the commercial readout, access tradeoffs, and forecast sensitivity. The goal is not to claim a forecasted revenue number; it is to identify where access friction or PSP strain could change a launch or pull-through discussion.
- Project and program managers should focus on the workflow sequence, PSP burden, the most stressed journey stage, assumptions, and warnings. Treat the output as a planning prompt for owners, dependencies, capacity, escalation paths, and evidence needed before action.
- Analytics readers should focus on metric definitions, proxy units, assumption IDs, confidence tiers, and validation evidence. The system is useful when it makes uncertainty explicit, not when it pretends synthetic inputs are observed outcomes.
- PSP / patient services readers should focus on hub workflow friction, benefits investigation / prior authorization pressure, stage attribution, and intervention ranking.
Decision-quality caveats
Use these phrases when presenting the demo:
- Modeled proxy, not observed claims.
- Directional index, not forecasted revenue.
- Synthetic formulary scenario, not payer-specific evidence.
- Recommended actions are hypotheses for validation, not automated decisions.
If you need to validate any statement in this guide, the “source of truth” is the metric registry and calculators:
- Metric registry: `analytics/metric_definitions.py`
- Calculators / derivations: `analytics/metric_calculators.py`
- PSP proxy metrics: `services/psp_service.py`
- Forecast sensitivity: `services/forecast_service.py`
How to run the Decision Console
From the repo root:
- Run the app: `npm install`, then `npm run dev`
- Open the UI: http://127.0.0.1:3040/
UI modes
- Demo stories: the primary “happy path.” Select a preset, click Run selected story.
- Manual console: step-by-step control (Access → PSP → Executive → Forecast) and scenario arm selection.
- Advanced mode (`?mode=advanced`): shows raw JSON payloads for inspection (developer transparency only).
Decision Console workflow (what the buttons actually do)
A) Demo stories (recommended)
Each story runs a fixed sequence:
- Access analysis (`POST /analyze/access`)
- PSP analysis (`POST /analyze/psp`)
- PSP scenario ranking (`POST /analyze/psp/rank`) (best-effort; the UI continues if ranking fails)
- Executive summary (`POST /analyze/summary`)
This path is optimized for explaining a PSP system quickly:
- What the payer posture implies (eligibility + burden)
- How that posture translates into PSP operational friction
- Which PSP interventions the engine suggests first (scenario ranking)
B) Manual console (fine control)
Use this when you want to isolate a specific question:
- “What happens if access posture changes but PSP intervention stays fixed?”
- “What if PSP takes baseline access signals vs scenario access signals?”
- “Are the executive headline metrics stable across manual vs story runs?”
Manual console lets you:
- Choose the access intervention scenario (baseline vs a scenario arm)
- Choose which access arm feeds PSP (`baseline` vs `scenario`)
- Run each step independently
Metric reference (what each metric measures, how it’s calculated, and why it matters)
This section is organized by the Decision Console panels:
- Access (baseline vs access scenario)
- PSP operations
- Executive summary (headline metrics)
- Forecast / commercial sensitivity
- Assumptions and confidence
When you see “index” below, treat it as an ordinal proxy unless stated otherwise.
1) Access panel metrics
Where they appear
- UI: Access (baseline vs access scenario) table
- API: `AccessAnalysisResult.scenario_comparison.{baseline,scenario}`
- Registry entries: `analytics/metric_definitions.py`
1.1 Eligibility (boolean)
- What it measures: Whether the product is considered eligible on the selected formulary line (under each access arm).
- How it’s calculated:
- Eligibility is computed from list status + coverage gap logic.
- In code, the access service explicitly states that scoring never flips eligibility.
- See: `services/access_service.py` docstring and `_eligibility_for_status(...)`.
- Useful for:
- PSP intake triage: “Do we proceed down PA / BI workflows, or are we in a gap/non-listed world requiring different actions?”
- Separating access feasibility from operational friction (burden).
- Interpretation guidance:
- Eligible = the modeled line posture supports access; it is not a guarantee of coverage in real adjudication.
1.2 Coverage outcome (typed enum)
- What it measures: A structured “shape” of list status (e.g., listed, limited, excluded, not listed).
- How it’s calculated:
- Derived from canonical list status into `CoverageOutcomeKind`.
- See: `services/access_service.py` `_map_coverage_outcome(...)` and the `analytics/metric_definitions.py` entry `coverage_outcome_kind`.
- Useful for:
- Translating access into PSP posture without relying on free-text notes.
- Informing forecast sensitivity initiation proxies (listed vs limited has different multipliers).
1.3 Line resolution (typed enum)
- What it measures: How the engine matched (or failed to match) the formulary line.
- How it’s calculated:
- When no line matches, the engine classifies why (no product, wrong jurisdiction, wrong payer/coverage context, outside effective window).
- See: `services/access_service.py` `_classify_unmatched_line(...)`.
- Useful for:
- PSP systems: diagnosing whether friction is “true access restriction” vs “data/key mismatch” vs “as-of window mismatch.”
- Data operations: explaining gaps and the need for feed corrections.
1.4 Rule application state (typed enum)
- What it measures: Whether rules exist on the line, and whether any are effective for the chosen `as_of` date.
- How it’s calculated:
- Uses the rule `active` flag plus effective date window logic (`effective_from`, `effective_to`).
- See: `services/access_service.py` `_rule_in_effect(...)`.
- Useful for:
- PSP systems: distinguishing “no PA rules present” from “PA rules exist but are off-window as-of.”
- Compliance / policy ops: demonstrating policy effective dating.
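The effective-dating check described above can be sketched in a few lines. The dataclass and signature here are illustrative stand-ins for the real model in `services/access_service.py`:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Rule:
    """Illustrative stand-in for a formulary rule with an effective window."""
    active: bool
    effective_from: Optional[date] = None
    effective_to: Optional[date] = None

def rule_in_effect(rule: Rule, as_of: date) -> bool:
    # Inactive rules never apply, regardless of dates.
    if not rule.active:
        return False
    # A missing bound leaves that side of the window open-ended.
    if rule.effective_from is not None and as_of < rule.effective_from:
        return False
    if rule.effective_to is not None and as_of > rule.effective_to:
        return False
    return True
```

This is how “PA rules exist but are off-window as-of” arises: the rule object is present and `active`, but the chosen `as_of` falls outside its window.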
1.5 Interpretation confidence (typed enum: high | partial | low)
- What it measures: Confidence in the engine’s ability to read line + list status + time-applicable rules into a coherent outcome.
- How it’s calculated:
- Low if no line matches.
- Partial if conflicting list status exists, or rules exist but none are time-applicable on `as_of`.
- High otherwise.
- See: `services/access_service.py` `_interpretation_confidence(...)`.
- Useful for:
- PSP leaders: “Is this access read stable enough to operationalize, or is it a weak signal requiring deeper verification?”
- Downstream discounting (forecast initiation proxy applies confidence discounts).
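The three-tier rule reads naturally as a small decision function. The boolean signature is illustrative; the real `_interpretation_confidence(...)` works off the access decision object:

```python
def interpretation_confidence(line_matched: bool,
                              conflicting_list_status: bool,
                              rules_exist: bool,
                              any_rule_in_effect: bool) -> str:
    """Map access-read conditions to a confidence tier, per the rules above."""
    if not line_matched:
        return "low"        # no line matched at all
    if conflicting_list_status or (rules_exist and not any_rule_in_effect):
        return "partial"    # conflicting status, or rules present but off-window
    return "high"
```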
1.6 Administrative burden (index; post-scenario scale)
- What it measures: An ordinal index representing administrative burden under access, after applying scenario scaling.
- How it’s calculated:
- Baseline burden comes from the access decision’s restriction weighting.
- Scenario overrides can rescale burden (e.g., restriction severity scale, ignore quantity limits for scoring).
- See: `analytics/metric_definitions.py` `administrative_burden_score` and access scenario parameters in `services/access_service.py`.
- Useful for:
- PSP staffing: “How heavy is the modeled access workload likely to be?”
- Comparing scenario interventions (e.g., “quantity-limit relief”) to estimate administrative relief.
- Caveat:
- Not calibrated to hours; use it as a relative triage index.
1.7 Δ Approval complexity (index points; scenario − baseline)
- What it measures: Change in the pre-scale approval complexity index between access scenario and baseline.
- How it’s calculated:
- From `AccessAnalysisResult.scenario_comparison.delta_approval_complexity`.
- See: `analytics/metric_definitions.py` `access_delta_approval_complexity`.
- Useful for:
- Policy ops: identifying whether scenario changes shift the raw restriction load.
- Explaining why burden moved (complexity often feeds burden downstream).
1.8 Access barrier profile (business triage)
- What it measures: A simplified barrier summary (PA / ST / QL / SP / Not Listed) + a dominant barrier label.
- How it’s calculated:
- Derived from the same access decision outcome (not a second engine).
- Exposes operationally meaningful flags for PSP discussions.
- Useful for:
- PSP workflows: “What kind of paperwork and coordination do we anticipate?”
- Segmenting interventions: prior-auth workflows vs step-therapy education vs specialty pharmacy coordination.
2) PSP operations panel metrics
Where they appear
- UI: PSP operations panel (baseline vs intervention table + stage attribution + scenario ranking)
- API: `JourneyAnalysisResult` (baseline PSPDecision, intervention PSPDecision, scenario comparison)
Important PSP modeling constraint
PSP metrics do not recompute access. PSP consumes a structured snapshot:
`AccessUpstreamSignals.from_access_analysis(...)` in `domain/access_upstream.py`
This is good practice: it prevents accidental “double counting” or hidden dependence on access re-evaluation.
2.1 Upstream access arm & lineage (provenance)
- What it measures:
- Whether PSP was fed from access baseline or access scenario outcomes.
- A provenance string like `access_analysis:baseline`.
- How it’s calculated:
- The signals object sets `source = f"access_analysis:{arm}"`.
- See: `domain/access_upstream.py`.
- Useful for:
- PSP auditability: “Are we comparing PSP deltas under the same upstream access snapshot?”
2.2 PSP access input kind (enum)
- What it measures: Whether PSP inputs came from a real access run vs a synthetic placeholder.
- How it’s calculated:
- When built from access analysis: `FROM_ACCESS_OUTCOME`.
- Placeholder mode: `SYNTHETIC_MINIMAL`.
- Useful for:
- Preventing overinterpretation: if the missing-access placeholder is used, PSP metrics are intentionally conservative and should not be read as implying determinate ineligibility.
2.3 PSP precondition (enum)
- What it measures: Whether access is determinate eligible, determinate limited, determinate ineligible/gap, or missing.
- How it’s calculated:
- Mapped from access outcome envelope: eligibility + coverage gap + limited outcome kind.
- See: `_psp_precondition_from_outcome(...)` in `domain/access_upstream.py`.
- Useful for:
- PSP strategy: determining if workflow should focus on PA/BI execution vs access resolution first.
2.4 Expected approval delay (days proxy)
- What it measures: A heuristic calendar-day delay proxy for reimbursement/PA-style cycles.
- How it’s calculated:
- In PSP service, delay starts from a base plus complexity term, then is scaled by:
- PSP scenario delay scale
- Capacity backlog delay multiplier (computed from FTE/load strain)
- See `services/psp_service.py` `_capacity_strain_and_delay_mult(...)`; the core computation is `raw_delay = BASE_APPROVAL_DELAY_DAYS + DELAY_DAYS_PER_COMPLEXITY_POINT * approval_complexity_score`, then `delay = raw_delay * delay_scale * delay_cap_mult`.
- Useful for:
- Bottleneck diagnosis: “Is delay being driven by access complexity or by capacity strain?”
- Intervention design: scenario knobs that reduce delay vs those that reduce dropout.
- Limitations:
- Not statistically forecasted cycle time; a deterministic proxy for comparisons.
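Putting the delay derivation together as a sketch. The formula shape follows the description above; the two constants are illustrative placeholders, not the repo's configured values:

```python
# Illustrative constants; the repo's configured values may differ.
BASE_APPROVAL_DELAY_DAYS = 5.0
DELAY_DAYS_PER_COMPLEXITY_POINT = 2.0

def expected_approval_delay(approval_complexity_score: float,
                            delay_scale: float = 1.0,
                            delay_cap_mult: float = 1.0) -> float:
    """Base plus complexity term, then scenario and capacity scaling."""
    raw_delay = (BASE_APPROVAL_DELAY_DAYS
                 + DELAY_DAYS_PER_COMPLEXITY_POINT * approval_complexity_score)
    return raw_delay * delay_scale * delay_cap_mult
```

Because the scale factors are multiplicative, the derivation makes the bottleneck question answerable: a high `delay_cap_mult` (capacity strain) inflates delay even when complexity is modest, and vice versa.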
2.5 Dropout risk (0–100 index; not a probability)
- What it measures: An ordinal administrative dropout index (0–100), not a dropout probability.
- How it’s calculated:
- Base dropout index depends on the inferred stage (configured).
- Adds complexity contributions and bumps for conflicting rules or coverage gap.
- Multiplies scenario dropout scale.
- Applies follow-up intensity relief.
- Clamped to 0..100.
- See: `services/psp_service.py` `compute_operational_metrics(...)`.
- Useful for:
- Operational risk framing: “How much administrative friction might cause abandonment or delay-induced drop?”
- Comparing PSP interventions: those that reduce dropout index vs those that simply shift delay.
- Limitations:
- It is not a cohort dropout rate and does not include clinical discontinuation.
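The composition above can be sketched directly. Term names and magnitudes are illustrative; only the add-scale-relieve-clamp shape is taken from the description:

```python
def dropout_risk_index(stage_base: float,
                       complexity_term: float,
                       conflict_bump: float = 0.0,
                       gap_bump: float = 0.0,
                       dropout_scale: float = 1.0,
                       followup_relief: float = 0.0) -> float:
    """Stage base + contributions, scenario-scaled, relieved, clamped to 0..100."""
    raw = (stage_base + complexity_term + conflict_bump + gap_bump) * dropout_scale
    raw -= followup_relief
    return max(0.0, min(100.0, raw))
```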
2.6 Operational burden (index)
- What it measures: A composite ordinal “hub load” index.
- How it’s calculated:
- Driven by:
- number of restriction types
- complexity score
- capacity strain term
- Additional stress when effective FTE is zero.
- Capped to 100 in the UI representation.
- See: `services/psp_service.py` `burden = ...`.
- Useful for:
- Staffing and workflow planning: “If we push follow-up, are we raising or lowering modeled hub load?”
- Tradeoff analysis: low delay vs high burden can indicate a capacity bottleneck.
2.7 Capacity strain index (0–100 derived index)
- What it measures: An executive-facing “strain highlight” index derived from operational burden and capacity tags.
- How it’s calculated:
- The PSP engine records tags like `high_capacity_strain`.
- Analytics lifts the index by +20 (capped at 100) when that tag is present; otherwise it tracks burden.
- See: `analytics/metric_calculators.py` `derive_capacity_strain_index(...)`.
- Useful for:
- Communicating capacity constraints without forcing reviewers to scan tag lists.
- Preventing a false sense of improvement when other metrics improve but capacity strain remains high.
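The derivation is small enough to show directly. The +20 lift and 100 cap come from the description above; the tag-set signature is an assumption:

```python
def derive_capacity_strain_index(operational_burden: float,
                                 tags: set[str]) -> float:
    """Track burden, lifted by +20 (capped at 100) when strain is flagged."""
    index = operational_burden
    if "high_capacity_strain" in tags:
        index += 20.0  # the engine flagged capacity strain
    return min(100.0, index)
```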
2.8 Retained patient proxy (0–100 complement)
- What it measures: `100 − dropout_risk_proxy`, labeled explicitly as a proxy.
- How it’s calculated:
- See: `analytics/metric_calculators.py` `derive_retained_patient_proxy(...)`.
- Useful for:
- Executive readability: a “higher is better” complement to dropout index.
- Limitations:
- Not a persistence or adherence rate; purely a transformation of the dropout index.
2.9 Inferred stage (journey stage)
- What it measures: The most likely operational stage implied by access signals.
- How it’s calculated:
- Priority rules:
- Ineligible/gap → referred
- conflicting rules → benefits investigation
- prior auth restriction → prior authorization
- step therapy or high burden → benefits investigation
- any complexity/burden → enrolled
- else initiated
- See: `services/psp_service.py` `infer_journey_stage(...)`.
- Useful for:
- Determining “where” the PSP system is likely stuck (BI vs PA vs enrolled).
- Aligning interventions to the dominant modeled pressure stage.
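The priority cascade above can be sketched as a single function. The boolean signature and the high-burden threshold are illustrative; the real `infer_journey_stage(...)` reads the upstream signals object:

```python
def infer_journey_stage(eligible: bool,
                        coverage_gap: bool,
                        conflicting_rules: bool,
                        has_prior_auth: bool,
                        has_step_therapy: bool,
                        burden: float,
                        complexity: float,
                        high_burden_threshold: float = 60.0) -> str:
    """Apply the priority rules top-down; first match wins."""
    if not eligible or coverage_gap:
        return "referred"
    if conflicting_rules:
        return "benefits_investigation"
    if has_prior_auth:
        return "prior_authorization"
    if has_step_therapy or burden >= high_burden_threshold:
        return "benefits_investigation"
    if complexity > 0 or burden > 0:
        return "enrolled"
    return "initiated"
```

Note the ordering matters: conflicting rules route to benefits investigation even when a prior-auth restriction is also present, matching the priority list above.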
2.10 Model-implied journey pressure (stage attribution)
- What it measures: A decomposition of baseline stage “pressure” along the minimal path to the inferred stage.
- How it’s calculated:
- Uses configured baseline per-stage dropout indices (`BASE_DROPOUT_RISK_BY_STAGE`) and normalizes them along a static path.
- This is explicitly not observed funnel conversion.
- See: `domain/journey_attribution.py`.
- Useful for:
- Explainability: “Which stage contributes most to baseline administrative dropout pressure in this model?”
- Intervention alignment discussions.
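A minimal sketch of the normalization, assuming a hypothetical stage path and illustrative per-stage baseline indices (the real values live in `BASE_DROPOUT_RISK_BY_STAGE` in the repo's config):

```python
# Illustrative values; the repo's configured indices may differ.
BASE_DROPOUT_RISK_BY_STAGE = {
    "initiated": 5.0,
    "enrolled": 10.0,
    "benefits_investigation": 25.0,
    "prior_authorization": 30.0,
}

STAGE_PATH = ["initiated", "enrolled", "benefits_investigation", "prior_authorization"]

def stage_pressure_shares(inferred_stage: str) -> dict[str, float]:
    """Normalize baseline per-stage indices along the minimal path to the stage."""
    path = STAGE_PATH[: STAGE_PATH.index(inferred_stage) + 1]
    total = sum(BASE_DROPOUT_RISK_BY_STAGE[s] for s in path)
    return {s: BASE_DROPOUT_RISK_BY_STAGE[s] / total for s in path}
```

With the illustrative values above, an inferred stage of `benefits_investigation` attributes 62.5% of the modeled baseline pressure to that stage (25 out of 5 + 10 + 25).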
2.11 Ranked intervention candidates (scenario sweep)
- What it measures: A deterministic ranking of predefined PSP interventions by modeled benefit.
- How it’s calculated (high level):
- Candidate PSP scenarios are evaluated, compared vs baseline, and ranked by a benefit scoring rule.
- Ranking includes alignment to dominant pressure stage and limiting-factor explanation.
- See walkthrough notes in `docs/demo_walkthrough.md`, section “Ranked intervention candidates”.
- Useful for:
- Portfolio demo: showing that the engine can propose prioritized operational levers with rationale.
- Limitations:
- Not clinical advice, not financial advice, not guaranteed operational outcomes.
3) Executive summary headline metrics (curated slice)
Where they appear
- UI: Executive summary → Headline metrics
- API: `ExecutiveSummary.headline_metrics`
- Built by: `analytics/summary_builder.py` `build_headline_collection(...)`
This is the “executive-facing” view: it intentionally picks a small set of metrics that connect access posture to PSP operational friction to composite benefit.
3.1 Access Eligibility (boolean)
- Same metric definition as the Access panel (`access_eligibility_flag`).
- As-of scenario: computed from the access baseline arm.
3.2 Coverage Outcome (typed enum string)
- Same definition as the Access panel (`coverage_outcome_kind`), sourced from access baseline.
3.3 Administrative Burden (index)
- Same definition as the Access panel (`administrative_burden_score`), sourced from access baseline.
3.4 Expected Approval Delay (days proxy)
- From the PSP intervention arm (`expected_approval_delay_proxy`), sourced from `jr.intervention.metrics.expected_approval_delay_proxy_days`.
3.5 Dropout Risk (0–100 index)
- From the PSP intervention arm (`dropout_risk_proxy`).
3.6 Capacity Strain (0–100 derived index)
- Derived from PSP intervention operational burden and capacity tags.
3.7 Retained Patient Proxy (0–100 index)
- Computed as `100 − dropout_risk_proxy` for the PSP intervention arm.
3.8 Scenario Benefit Score (0–100 composite; neutral 50)
- What it measures: A single scalar summarizing the net direction of improvement under the PSP intervention vs PSP baseline, holding access signals fixed.
- How it’s calculated (exact formula):
- Let:
- d_delay = `delta_expected_approval_delay_proxy_days` (intervention − baseline)
- d_drop = `delta_dropout_risk_proxy`
- d_op = `delta_operational_burden_index`
- Then: `score = clamp(0, 100, 50 - 0.4*d_delay - 0.25*d_drop - 0.2*d_op)`
- (Negative deltas are favorable for delay/dropout/burden, so they increase the score.)
- Source: `analytics/metric_calculators.py` `compute_scenario_benefit_score(...)`.
- Useful for:
- Executive snapshot: “Is the intervention directionally better on the three core operational proxies?”
- Comparing scenario runs with the same access lineage.
- Limitations:
- Not ROI/NPV.
- Linear weights are explicit and heuristic, not econometrically estimated.
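The stated formula translates directly to code; the weights (0.4 / 0.25 / 0.2) and the neutral point of 50 are taken from the formula above, not invented:

```python
def compute_scenario_benefit_score(delta_delay_days: float,
                                   delta_dropout_index: float,
                                   delta_burden_index: float) -> float:
    """score = clamp(0, 100, 50 - 0.4*d_delay - 0.25*d_drop - 0.2*d_op)."""
    raw = (50.0
           - 0.4 * delta_delay_days
           - 0.25 * delta_dropout_index
           - 0.2 * delta_burden_index)
    return max(0.0, min(100.0, raw))
```

All-zero deltas land on the neutral 50; negative deltas (improvements) push the score above 50.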
3.9 Confidence Summary (string label)
- What it measures: A compact label combining access confidence with a fixed caveat that PSP is proxy-based.
- How it’s calculated:
- Format: `access=<interpretation_confidence>; psp=proxy_ops_metrics`
- Source: `analytics/metric_calculators.py` `build_confidence_summary(...)`.
- Useful for:
- Preventing overclaiming: “high confidence” refers to codebase evidence strength and typed semantics, not statistical certainty.
4) Forecast / commercial sensitivity (ordinal indices)
Where it appears
- UI: Forecast / commercial sensitivity panel
- API: `POST /analyze/forecast`
- Built by: `services/forecast_service.py`
This module does not create a validated forecast. It is a structured way to explore how access posture and PSP operational posture interact in a toy sensitivity model.
4.1 Initiation proxy (0–100 index)
- What it measures: An ordinal proxy for tendency toward initiation, based only on access eligibility/gap + typed coverage outcome + interpretation confidence discount.
- How it’s calculated:
- Ineligible or gap → 0
- Otherwise: base * multiplier(coverage outcome kind) * discount(confidence)
- Source: `services/forecast_service.py` `_initiation_proxy(...)`.
- Useful for:
- Comparing “listed vs limited vs excluded” access postures and confidence impacts.
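A hedged sketch of the gate-then-discount shape described above; the multiplier and discount tables here are invented placeholders, not the repo's configured values:

```python
# Placeholder tables; real values sit in the forecast config.
COVERAGE_MULTIPLIER = {"listed": 1.0, "limited": 0.7, "excluded": 0.0, "not_listed": 0.0}
CONFIDENCE_DISCOUNT = {"high": 1.0, "partial": 0.8, "low": 0.5}

def initiation_proxy(eligible: bool,
                     coverage_gap: bool,
                     coverage_kind: str,
                     confidence: str,
                     base: float = 100.0) -> float:
    """Ineligible/gap gates to 0; otherwise base * coverage mult * confidence discount."""
    if not eligible or coverage_gap:
        return 0.0
    return base * COVERAGE_MULTIPLIER[coverage_kind] * CONFIDENCE_DISCOUNT[confidence]
```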
4.2 Retained treatment proxy (0–100 index)
- What it measures: `100 − dropout_risk_proxy`, using PSP operational metrics.
- How it’s calculated:
- Source: `services/forecast_service.py` `_retained_treatment_proxy(...)`.
4.3 Utilization index (0–100 composite index)
- What it measures: A composite that rewards initiation and retention and penalizes delay and friction.
- How it’s calculated:
- Weighted linear combination of normalized components and scaled to 0–100.
- Source: `services/forecast_service.py` `_utilization_index(...)`, with weights in `config/forecast_heuristics`.
- Useful for:
- Directional sensitivity: “Which corner of Access×PSP looks more favorable in this index space?”
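A sketch of the “reward initiation and retention, penalize delay and friction” shape. The weights and the delay normalization here are hypothetical, since the real values sit in `config/forecast_heuristics`:

```python
# Hypothetical weights; the repo's configured values may differ.
W_INIT, W_RETAIN, W_DELAY, W_BURDEN = 0.35, 0.35, 0.15, 0.15

def utilization_index(initiation: float,
                      retained: float,
                      delay_days: float,
                      burden: float,
                      max_delay_days: float = 60.0) -> float:
    """Weighted linear combination of normalized components, clamped to 0..100."""
    # Normalize delay to the 0..100 scale of the other components.
    delay_penalty = min(delay_days / max_delay_days, 1.0) * 100.0
    raw = (W_INIT * initiation + W_RETAIN * retained
           - W_DELAY * delay_penalty - W_BURDEN * burden)
    return max(0.0, min(100.0, raw))
```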
4.4 Commercial impact index (0–100 relative index)
- What it measures: Utilization index × unit value index (default 1.0), capped.
- How it’s calculated:
- Source: `services/forecast_service.py` `evaluate_forecast_cell(...)`.
- Useful for:
- A single number for “relative commercial sensitivity” under constraints.
- Limitations:
- Not revenue; unit value is fixed; no market sizing.
5) Assumptions, warnings, and explainability
5.1 Assumptions catalog and run-specific assumptions
- What it measures: A curated set of assumptions (name, description, confidence tier, “drives” links) and which were applied in the current run.
- How it’s calculated:
- The executive summary merges access + PSP assumption ids and selects “key” assumptions deterministically.
- Source: `analytics/summary_builder.py` `_assumption_id_summary(...)`, `_select_key_assumption_ids(...)`.
- Useful for:
- Governance: makes explicit what would be replaced by real-world data in a production system.
- Interview readiness: shows you understand what’s modeled vs what’s observed.
5.2 Warnings
- What it measures: Structured warnings with codes, severity, and provenance.
- How it’s calculated:
- Each module can emit warnings; executive summary merges and sorts.
- Source: `analytics/summary_builder.py` `_sort_warnings_merged(...)`.
- Useful for:
- Guardrails: prevents “nice looking” outputs from being taken as authoritative when inputs are weak.
Bigger-picture: how the metrics work together (PSP systems view)
At a systems level, the engine is structured to reflect a real PSP stack:
```mermaid
flowchart TD
  AccessInput[FormularyLine+Scenario] --> AccessOutcome[AccessOutcome+TypedSemantics]
  AccessOutcome --> UpstreamSignals[AccessUpstreamSignals]
  UpstreamSignals --> PSPBaseline[PSPDecision_Baseline]
  UpstreamSignals --> PSPIntervention[PSPDecision_Intervention]
  PSPBaseline --> PSPDeltas[JourneyScenarioComparison]
  PSPIntervention --> PSPDeltas
  AccessOutcome --> Executive[ExecutiveSummary]
  PSPDeltas --> Executive
  AccessOutcome --> Forecast[Forecast2x2Sensitivity]
  PSPIntervention --> Forecast
```
How to interpret the combined story
- Access posture gates the PSP story
- If access is ineligible/gap, the PSP model intentionally infers early stages and high friction tags.
- Eligibility is kept logically separate from burden scoring to prevent “burden improvements” from masquerading as eligibility flips.
- Access friction becomes PSP operational friction
- The PSP model uses access burden/complexity and restriction types to generate delay and dropout indices.
- This mirrors real hub dynamics where PA/BI paperwork, rule conflicts, and gaps drive both delays and abandonment risk.
- Capacity strain is treated as a first-class limiter
- A PSP intervention can reduce delay but still trigger `high_capacity_strain`; the derived capacity strain index surfaces that.
- This mirrors reality: “better process” without capacity can still backlog.
- Composite benefit is intentionally narrow
- `scenario_benefit_score` only blends three proxies (delay, dropout index, burden).
- That is deliberate: it prevents collapsing the entire system into a black-box “market success” score.
Policies and operational realities the metrics are meant to reflect
This engine is synthetic, but the metrics map to common PSP/access concepts:
- Prior authorization: adds restriction types, increases complexity score, drives inferred stage toward prior authorization.
- Step therapy: drives benefits investigation / additional friction and can be treated strictly under scenarios.
- Quantity limits: can contribute to burden, and scenarios can ignore QL for scoring to demonstrate administrative relief.
- Effective-dated rules: `as_of` dates change which policies apply (rule application state changes).
- Conflicting coverage lines: a “policy feed conflict” increases the dropout risk bump and reduces interpretation confidence.
- Specialty pharmacy requirement: appears in the barrier profile, useful for coordination workflows.
Limitations (what not to claim)
These limitations are not footnotes; they are core to correct use.
- No real hub telemetry: no timestamps, queue times, or measured stage transitions.
- No claims / utilization validation: forecast and commercial indices are ordinal, not revenue.
- Indices are not rates:
- dropout “risk” is an index
- initiation “probability proxy” is an index
- retained proxy is an index complement
- Heuristic weights: benefit score weights are explicit and deterministic, not empirically fit.
- Synthetic inputs: demo formulary rows are synthetic unless you provide governed data.
- Not GxP / not regulated: portfolio-grade architecture demonstration, not a validated decision system.
Practical ways to use this console in PSP discussions
- Operational bottleneck diagnosis: use inferred stage + stage attribution + friction points to frame “where we’re stuck.”
- Intervention design: compare baseline vs intervention PSP metrics to show tradeoffs (delay vs burden vs dropout index).
- Capacity planning: treat capacity strain as a “stop sign” and discuss what real capacity signals would replace it.
- Policy-driven scenario exploration: use access scenarios to demonstrate how administrative burden can change without changing eligibility.
- Governance readiness: use assumptions + warnings to show you understand what evidence would be needed to productionize.