Journey pipeline (from latest run)

No recent PSP run found. Run a story in the Decision Console, then return here.

Decision Console instruction guide (PSP-focused)

This guide explains how to use the Decision Console and how to interpret each metric with a PSP systems lens.

It is written for readers who want a comprehensive understanding of PSP operations (hub workflows, prior auth, benefits investigation, access friction, capacity strain), while staying honest about what this repo is: synthetic inputs + explicit assumptions + deterministic heuristics. It is not calibrated to real hub timestamps, claims, or revenue.

What the Decision Console is

The Decision Console is a commercial access and PSP strategy simulator: a thin UI over the API that helps users rehearse how access posture can flow into patient support operations and executive tradeoffs.

How different readers should use it

Decision-quality caveats

Use these phrases when presenting the demo:

If you need to validate any statement in this guide, the “source of truth” is the metric registry and calculators:

How to run the Decision Console

From the repo root:

UI modes

Decision Console workflow (what the buttons actually do)

A) Demo stories (recommended)

Each story runs a fixed sequence:

  1. Access analysis (POST /analyze/access)
  2. PSP analysis (POST /analyze/psp)
  3. PSP scenario ranking (POST /analyze/psp/rank) (best-effort; UI continues if ranking fails)
  4. Executive summary (POST /analyze/summary)
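The fixed sequence above can be sketched as a small client loop. This is illustrative only: the base URL, payload shape, and function names are hypothetical; the endpoint order and the "ranking is best-effort" behavior come from this guide.

```python
# Sketch of the demo-story sequence. Hypothetical host/payloads; only the
# endpoint order and best-effort ranking behavior come from the guide.
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # hypothetical host/port

# The fixed sequence each demo story runs, in order.
# (endpoint, required): ranking is best-effort, so the story continues if it fails.
STORY_STEPS = [
    ("/analyze/access", True),
    ("/analyze/psp", True),
    ("/analyze/psp/rank", False),
    ("/analyze/summary", True),
]

def run_story(payload, post=None):
    """Run the four POST calls in order, collecting results per endpoint.

    `post` is injectable for testing; by default it performs a real HTTP POST.
    """
    if post is None:
        def post(path, body):
            req = urllib.request.Request(
                BASE_URL + path,
                data=json.dumps(body).encode(),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)

    results = {}
    for path, required in STORY_STEPS:
        try:
            results[path] = post(path, payload)
        except Exception:
            if required:
                raise
            results[path] = None  # mirror the UI: continue if ranking fails
    return results
```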

This path is optimized for explaining a PSP system quickly:

B) Manual console (fine control)

Use this when you want to isolate a specific question:

Manual console lets you:

Metric reference (what each metric measures, how it’s calculated, and why it matters)

This section is organized by the Decision Console panels:

  1. Access (baseline vs access scenario)
  2. PSP operations
  3. Executive summary (headline metrics)
  4. Forecast / commercial sensitivity
  5. Assumptions and confidence

When you see “index” below, treat it as an ordinal proxy unless stated otherwise.


1) Access panel metrics

Where they appear

1.1 Eligibility (boolean)

1.2 Coverage outcome (typed enum)

1.3 Line resolution (typed enum)

1.4 Rule application state (typed enum)

1.5 Interpretation confidence (typed enum: high | partial | low)

1.6 Administrative burden (index; post-scenario scale)

1.7 Δ Approval complexity (index points; scenario − baseline)

1.8 Access barrier profile (business triage)


2) PSP operations panel metrics

Where they appear

Important PSP modeling constraint

PSP metrics do not recompute access. PSP consumes a structured snapshot:

This is good practice: it prevents accidental “double counting” or hidden dependence on access re-evaluation.
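One way to picture this contract is a frozen snapshot type: PSP heuristics read it but can never mutate it or re-run access. The field names and the toy delay heuristic below are hypothetical; the guide specifies only the contract (PSP consumes a structured snapshot and does not recompute access).

```python
# Illustrative only: field names and the delay heuristic are invented.
# The real contract from the guide: PSP reads a snapshot, never re-runs access.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: PSP cannot mutate or recompute access facts
class AccessUpstreamSignals:
    eligible: bool
    coverage_outcome: str           # a typed enum in the real engine
    administrative_burden: int      # post-scenario index
    approval_complexity_delta: int  # scenario minus baseline, in index points

def psp_decision(snapshot: AccessUpstreamSignals) -> dict:
    """PSP heuristics read the snapshot as-is; no access re-evaluation."""
    # Toy heuristic: delay grows with burden and any added complexity.
    delay = 2 + snapshot.administrative_burden + max(snapshot.approval_complexity_delta, 0)
    return {"expected_approval_delay_days": delay, "eligible": snapshot.eligible}
```

Freezing the snapshot is what rules out "double counting": the same access facts feed both the baseline and the intervention arm unchanged.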

2.1 Upstream access arm & lineage (provenance)

2.2 PSP access input kind (enum)

2.3 PSP precondition (enum)

2.4 Expected approval delay (days proxy)

2.5 Dropout risk (0–100 index; not a probability)

2.6 Operational burden (index)

2.7 Capacity strain index (0–100 derived index)

2.8 Retained patient proxy (0–100 complement)

2.9 Inferred stage (journey stage)

2.10 Model-implied journey pressure (stage attribution)

2.11 Ranked intervention candidates (scenario sweep)


3) Executive summary headline metrics (curated slice)

Where they appear

This is the “executive-facing” view: it intentionally picks a small set of metrics that connect access posture to PSP operational friction to composite benefit.

3.1 Access Eligibility (boolean)

3.2 Coverage Outcome (typed enum string)

3.3 Administrative Burden (index)

3.4 Expected Approval Delay (days proxy)

3.5 Dropout Risk (0–100 index)

3.6 Capacity Strain (0–100 derived index)

3.7 Retained Patient Proxy (0–100 index)

3.8 Scenario Benefit Score (0–100 composite; neutral 50)

3.9 Confidence Summary (string label)


4) Forecast / commercial sensitivity (ordinal indices)

Where it appears

This module does not create a validated forecast. It is a structured way to explore how access posture and PSP operational posture interact in a toy sensitivity model.
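The shape of that toy model can be sketched as a 2x2 sweep: access posture crossed with PSP posture, each cell yielding 0-100 ordinal indices. The cell values, labels, and averaging rule below are invented for illustration; only the structure comes from this guide.

```python
# Hedged sketch of a 2x2 sensitivity sweep. Cell proxies and the composite
# rule are invented; only the 2x2 structure and 0-100 index convention
# come from the guide.

def utilization_index(initiation: int, retention: int) -> int:
    """Toy composite: average of the two 0-100 proxies, clamped to 0-100."""
    return max(0, min(100, (initiation + retention) // 2))

# Hypothetical per-cell proxies: (initiation, retained-treatment) per posture.
CELLS = {
    ("low_friction", "intervention"): (80, 75),
    ("low_friction", "baseline"): (70, 60),
    ("high_friction", "intervention"): (55, 50),
    ("high_friction", "baseline"): (40, 30),
}

def sweep():
    """Return a utilization index per (access posture, PSP posture) cell."""
    return {cell: utilization_index(i, r) for cell, (i, r) in CELLS.items()}
```

Read the output ordinally (which cell ranks above which), not as a calibrated forecast.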

4.1 Initiation proxy (0–100 index)

4.2 Retained treatment proxy (0–100 index)

4.3 Utilization index (0–100 composite index)

4.4 Commercial impact index (0–100 relative index)


5) Assumptions, warnings, and explainability

5.1 Assumptions catalog and run-specific assumptions

5.2 Warnings

Bigger-picture: how the metrics work together (PSP systems view)

At a systems level, the engine is structured to reflect a real PSP stack:

```mermaid
flowchart TD
  AccessInput[FormularyLine+Scenario] --> AccessOutcome[AccessOutcome+TypedSemantics]
  AccessOutcome --> UpstreamSignals[AccessUpstreamSignals]
  UpstreamSignals --> PSPBaseline[PSPDecision_Baseline]
  UpstreamSignals --> PSPIntervention[PSPDecision_Intervention]
  PSPBaseline --> PSPDeltas[JourneyScenarioComparison]
  PSPIntervention --> PSPDeltas
  AccessOutcome --> Executive[ExecutiveSummary]
  PSPDeltas --> Executive
  AccessOutcome --> Forecast[Forecast2x2Sensitivity]
  PSPIntervention --> Forecast
```

How to interpret the combined story

  1. Access posture gates the PSP story

    • If access is ineligible/gap, the PSP model intentionally infers early stages and high friction tags.
    • Eligibility is kept logically separate from burden scoring to prevent “burden improvements” from masquerading as eligibility flips.
  2. Access friction becomes PSP operational friction

    • The PSP model uses access burden/complexity and restriction types to generate delay and dropout indices.
    • This mirrors real hub dynamics where PA/BI paperwork, rule conflicts, and gaps drive both delays and abandonment risk.
  3. Capacity strain is treated as a first-class limiter

    • A PSP intervention can reduce delay but still trigger high_capacity_strain; the derived capacity strain index surfaces that.
    • This mirrors reality: “better process” without capacity can still backlog.
  4. Composite benefit is intentionally narrow

    • scenario_benefit_score only blends three proxies (delay, dropout index, burden).
    • That is deliberate: it prevents collapsing the entire system into a black-box “market success” score.
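The narrowness of the composite can be sketched as a toy formula. The weights, scaling, and function name below are invented; only the three inputs (delay, dropout index, burden), the 0-100 range, and the neutral-50 convention come from this guide.

```python
# Hedged sketch of a three-proxy composite with a neutral point of 50.
# Weights and scaling are invented for illustration.

def scenario_benefit_score(delta_delay, delta_dropout, delta_burden,
                           weights=(0.4, 0.4, 0.2)):
    """Toy composite: 50 is neutral; improvements (negative deltas) push above 50.

    Each delta is intervention minus baseline on that proxy, in index points.
    """
    w_delay, w_drop, w_burden = weights
    raw = 50 - (w_delay * delta_delay + w_drop * delta_dropout + w_burden * delta_burden)
    return max(0, min(100, round(raw)))
```

Because only three proxies enter the blend, a scenario cannot inflate its score through factors the model does not track, which is exactly the "no black-box market success score" property the guide describes.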

Policies and operational realities the metrics are meant to reflect

This engine is synthetic, but the metrics map to common PSP/access concepts:

Limitations (what not to claim)

These limitations are not footnotes; they are core to correct use.

Practical ways to use this console in PSP discussions