
The following framework, developed by GPT, analyzes best practices in health care using AI systems.

Best practices need to surface hidden assumptions, prevent meaning loss, and ensure that cost savings never come at the expense of dignity, trust, or appropriate care. What can AI accomplish for better health care, and what can it not?

Evaluating HC-AI Systems: A PSA Framework for Care, Cost, and Meaning

As artificial intelligence and digital care tools move rapidly from pilots into real-world healthcare settings, public-service organizations face a growing challenge: how to tell the difference between systems that genuinely improve care and those that quietly introduce new risks under the banner of efficiency. The Public Services Alliance (PSA) developed this Human-Centered Care + AI (HC-AI) evaluation framework to help practitioners, policymakers, and community partners assess not only whether a system works, but how it works, who it empowers, and what it feels like to live with. This framework is designed to surface hidden assumptions, prevent meaning loss, and ensure that cost savings never come at the expense of dignity, trust, or appropriate care.

A PSA Lens on Human-Centered Care and AI

HC-AI systems combine human labor, digital workflows, and AI-enabled tools to support care coordination, follow-through, prevention, and monitoring. These systems can reduce strain on clinicians and improve continuity for patients—but only when they are designed with clear limits, strong supervision, and explicit ethical guardrails. PSA’s approach treats AI as a workflow support and accountability tool, not as a clinical authority or a replacement for human judgment.

The PSA HC-AI Evaluation Rubric

This rubric is vendor-agnostic and applies equally to human-led, AI-assisted, and hybrid systems. Evaluation should focus on enforceable guardrails, real-world failure modes, and governance structures—not just stated intentions or marketing claims.

Authority & Scope Control

Core question: Who is allowed to decide what—and how does the system prevent authority drift over time?

  • Pass standard: Explicit, enforceable limits (for humans and AI) and monitored scope adherence (see the sketch after this list).
  • Failure signal: “Support only” language without refusal enforcement; scope creep under pressure.
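
To make this concrete, here is a minimal sketch of what enforceable scope limits and monitored adherence could look like in code. It is illustrative only: the intent allowlist, the classify_intent placeholder, and the refusal wording are assumptions, not features of any real product.

```python
# Illustrative sketch only; all names and rules here are assumptions.
from datetime import datetime, timezone

ALLOWED_INTENTS = {"appointment_reminder", "medication_checklist", "transport_help"}
scope_log = []  # reviewed by supervisors to monitor scope adherence over time

def classify_intent(message: str) -> str:
    # Placeholder classifier; a real system would use vetted rules or a model.
    if "should i stop taking" in message.lower():
        return "clinical_advice"
    return "medication_checklist"

def handle(message: str) -> str:
    intent = classify_intent(message)
    in_scope = intent in ALLOWED_INTENTS
    scope_log.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "intent": intent, "in_scope": in_scope})
    if not in_scope:
        # The refusal is enforced in code, not merely stated in policy.
        return ("I can't advise on that. I've flagged your question "
                "for your care team to follow up.")
    return "Here is your medication checklist for today."

print(handle("Should I stop taking my blood thinner?"))
```

The point of the sketch is that the refusal lives in code and every out-of-scope request is logged, so supervisors can watch for authority drift instead of trusting "support only" language.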

Supervision & Escalation Integrity

Core question: What happens when something goes wrong, at scale, under pressure, or outside normal hours?

  • Pass standard: Defined supervision ratios, measured escalation SLAs (sketched below), and realistic backstops.
  • Failure signal: Vague “clinical oversight” claims; escalation described but not tracked; AI quietly absorbs overflow.
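
As an illustration of "measured escalation SLAs," the sketch below computes a compliance rate from acknowledgment timestamps. The 15-minute target and the sample events are assumptions for the example.

```python
# Illustrative sketch only; the SLA target and sample data are assumptions.
from datetime import datetime, timedelta

SLA = timedelta(minutes=15)

# (raised_at, acknowledged_at) pairs, including an after-hours case
escalations = [
    (datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 6, 9, 10)),
    (datetime(2025, 1, 6, 23, 30), datetime(2025, 1, 7, 0, 20)),
]

met = sum(1 for raised, acked in escalations if acked - raised <= SLA)
print(f"Escalation SLA met: {met}/{len(escalations)} ({met / len(escalations):.0%})")
```

A system that cannot produce numbers like these is describing escalation, not tracking it.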

AI Role Discipline

Core question: Is AI clearly constrained to workflow support, or does it become an authority by default?

  • Pass standard: Checklist/workflow orientation, audit logs, explanation of prompts/flags, and easy override (see the sketch after this list).
  • Failure signal: Open-ended “advice” behaviors; summaries treated as answers; automation bias becomes routine.
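
One way to picture this discipline is an audit record in which every AI flag carries a plain-language rationale and can be overridden in one call, with the override itself logged. The sketch below is hypothetical; the Flag class and its fields are assumptions.

```python
# Illustrative sketch only; the Flag structure and its fields are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Flag:
    patient_id: str
    summary: str       # phrased as a prompt for a human, never as an order
    rationale: str     # why the flag was raised, shown alongside it
    overridden_by: str | None = None
    audit_log: list = field(default_factory=list)

    def override(self, clinician_id: str, reason: str) -> None:
        # Overriding is easy, recorded, and carries no penalty.
        self.overridden_by = clinician_id
        self.audit_log.append((datetime.now(timezone.utc).isoformat(),
                               f"override by {clinician_id}: {reason}"))

flag = Flag("p-001", "Possible missed refill",
            "No pharmacy pickup recorded for 10 days after renewal.")
flag.override("dr-smith", "Patient switched pharmacies; refill confirmed.")
print(flag.overridden_by, flag.audit_log)
```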

Privacy & Household Reality

Core question: Does the system respect how people actually live, not just how privacy policies imagine use?

  • Pass standard: Plain-language data disclosure, household access rules (sketched below), and controls against informal leakage.
  • Failure signal: Legal compliance language only; unclear caregiver portal rules; “training will handle it.”
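
The sketch below shows one hypothetical way to encode household access rules explicitly, with a default-deny table and patient-controlled opt-ins. The roles and data categories are assumptions.

```python
# Illustrative sketch only; roles, categories, and rules are assumptions.
VISIBILITY = {
    # (viewer_role, data_category) -> allowed by default?
    ("patient", "appointments"): True,
    ("patient", "mental_health_notes"): True,
    ("caregiver", "appointments"): True,
    ("caregiver", "mental_health_notes"): False,  # patient must opt in
}

def can_view(role: str, category: str, patient_opt_ins: set[str]) -> bool:
    if role == "caregiver" and category in patient_opt_ins:
        return True  # the patient explicitly shared this category
    return VISIBILITY.get((role, category), False)  # unknown pairs: deny

print(can_view("caregiver", "mental_health_notes", set()))                    # False
print(can_view("caregiver", "mental_health_notes", {"mental_health_notes"}))  # True
```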

Cost, Incentives & Care-Seeking Protection

Core question: Are cost savings aligned with appropriate care, not subtle discouragement of care-seeking?

  • Pass standard: Explicit anti-denial guardrails (sketched below) and tracking of under-use harms (delayed care, missed diagnoses).
  • Failure signal: Success defined mainly as reduced utilization; no “never discourage care” commitment.
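
An anti-denial guardrail can be made enforceable rather than aspirational. The sketch below is a deliberately crude example, assuming a hypothetical phrase list that a clinical team would have to curate and review.

```python
# Illustrative sketch only; the phrase list is an assumption, and a real
# guardrail would need clinical review, not simple substring matching.
DISCOURAGING = (
    "you probably don't need to be seen",
    "wait and see before contacting your doctor",
    "an appointment may not be necessary",
)

def check_outbound(message: str) -> str:
    if any(p in message.lower() for p in DISCOURAGING):
        # Fail closed: substitute a care-positive default and log for review.
        return ("If you are concerned, please contact your care team. "
                "Seeking care is always appropriate.")
    return message

print(check_outbound("You probably don't need to be seen for this."))
```

Pairing a check like this with counts of delayed-care events turns "never discourage care" into something auditable.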

Equity, Trust & Legitimacy

Core question: Who benefits from the system, and who bears hidden or uneven risks?

  • Pass standard: Outcomes stratified by language/rurality/disability (see the sketch after this list); visible local governance and accountability.
  • Failure signal: Averages only; “community-based” as a label; one-size tone/interface for diverse populations.
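
Stratified reporting is largely arithmetic, as the sketch below shows: the same records that produce a reassuring average can reveal uneven outcomes once grouped. The records and field names are hypothetical.

```python
# Illustrative sketch only; the records and groupings are hypothetical.
from collections import defaultdict

records = [  # (primary_language, follow_up_completed)
    ("english", True), ("english", True), ("spanish", False),
    ("spanish", True), ("somali", False),
]

by_group: dict[str, list[bool]] = defaultdict(list)
for language, completed in records:
    by_group[language].append(completed)

overall = sum(completed for _, completed in records) / len(records)
print(f"overall: {overall:.0%}")  # the average alone can hide disparities
for language, outcomes in sorted(by_group.items()):
    print(f"{language}: {sum(outcomes) / len(outcomes):.0%} (n={len(outcomes)})")
```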

Meaning & Dignity

Core question: How does it feel to receive care or support from this system over time?

  • Pass standard: Ongoing, revocable consent (sketched below); relationship honesty (tool vs human); dignified exit without penalty.
  • Failure signal: One-time consent; emotional ambiguity; opt-out friction or guilt cues.
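
Consent can be modeled as an ongoing, revocable state rather than a one-time checkbox. The sketch below is an assumption-laden illustration: the Consent class, its events log, and the revocation behavior are hypothetical.

```python
# Illustrative sketch only; the class and its behavior are assumptions.
from datetime import datetime, timezone

class Consent:
    def __init__(self) -> None:
        self.active = False
        self.events = []  # auditable history of grants and revocations

    def _log(self, event: str) -> None:
        self.events.append((datetime.now(timezone.utc).isoformat(), event))

    def grant(self) -> None:
        self.active = True
        self._log("granted")

    def revoke(self) -> None:
        # Takes effect immediately, with no retention screens or guilt cues.
        self.active = False
        self._log("revoked; care continues through human channels")

c = Consent()
c.grant()
c.revoke()
print(c.active, c.events)
```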

Meaning Risk: The Hidden Variable in HC-AI Systems

Meaning risk refers to the ways a system can erode dignity, trust, or autonomy even while improving efficiency or reducing costs. These risks often surface gradually, especially as systems scale and human supervision becomes stretched.

In HC-AI contexts, meaning risk frequently emerges through perceived authority, automation bias, companionship framing, and subtle pressure to reduce care-seeking. Users may comply with system guidance not because it is correct, but because it feels authoritative or caring.

Companion-style virtual caregivers and AI-assisted navigation tools amplify these risks by speaking confidently, operating continuously, and occupying intimate household spaces. Without explicit limits, refusal scripts, and escalation guarantees, these systems can drift from support into quiet control.

PSA emphasizes that ethical HC-AI systems must encode humility, consent, and transparency as design requirements. Meaning should be treated as an early-warning signal, not as a soft or secondary concern.

Systems that succeed technically but fail experientially may reduce utilization while increasing fear, confusion, or disengagement—outcomes that undermine long-term public trust.

Why This Framework Matters Now

As public agencies, nonprofits, and healthcare organizations consider AI-enabled care models, the question is no longer whether these systems can be built, but whether they can be governed responsibly. This PSA framework provides a practical tool for comparing models, identifying hidden risks early, and centering human experience alongside cost and performance metrics. It is intended to support thoughtful adoption, not to block innovation, and to ensure that care systems remain worthy of the trust placed in them.