Definition
The "assistance, not diagnosis" disclaimer is the formal declaration through which a provider of clinical assistance technology establishes that their system is not authorized or designed to make diagnoses, assess psychological competencies, make clinical decisions, or substitute the judgment of the qualified human professional. The system is a tool — clinical responsibility remains entirely with the professional who uses it. This disclaimer is not merely a legal formality: it is an ethical stance on the appropriate role of AI in high human-impact contexts.
How it's used
The disclaimer appears in three places: in the platform's terms of service (in legal language), in the professional's onboarding (in clear, operational language), and in automatically generated reports (as a reminder in each document).
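A minimal sketch of that three-surface placement, assuming a single registry as the source of truth so that no surface can ship without its disclaimer. The names (`DISCLAIMERS`, `render_disclaimer`) and the abbreviated texts are hypothetical illustrations, not a real CauceOS API:

```python
# Hypothetical registry: one entry per surface where the disclaimer appears.
DISCLAIMERS = {
    # Legal register, quoted in the terms of service.
    "terms_of_service": "The system is an assistance tool and is not "
                        "authorized or designed to issue diagnoses.",
    # Clear, operational register, shown during professional onboarding.
    "onboarding": "This tool assists you; it never diagnoses. Your clinical "
                  "judgment is required for every output.",
    # Short reminder embedded in each generated report.
    "report": "AI-generated draft. Does not replace the clinical judgment "
              "of the professional.",
}

def render_disclaimer(surface: str) -> str:
    """Look up the disclaimer for a surface; a missing entry fails loudly."""
    return DISCLAIMERS[surface]  # deliberate KeyError: no surface ships without one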
In practice, this means that when CauceOS detects a linguistic signal associated with crisis risk, it does not say "this client has suicidal ideation"; it says "risk-associated language was detected at minute 23:14", and it is up to the professional to evaluate whether that signal has clinical meaning in the specific context. The system signals; the professional evaluates.
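One way to picture the "signal, don't diagnose" rule is as a data structure: the alert carries only observable facts (category, position in the session, the flagged excerpt) and explicitly hands evaluation to the professional. A minimal sketch; the class and field names are assumptions for illustration, not the actual CauceOS schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskSignal:
    session_id: str
    timestamp: str   # position within the session, e.g. "23:14"
    category: str    # e.g. "risk-associated language"
    excerpt: str     # the flagged passage, verbatim, for the professional to review

    def render(self) -> str:
        # States an observation and a hand-off; never a clinical conclusion.
        return (
            f"{self.category} was detected at minute {self.timestamp}. "
            "Review the excerpt and evaluate its clinical meaning in context."
        )

alert = RiskSignal("s-001", "23:14", "Risk-associated language", "[verbatim excerpt]")
print(alert.render())
# -> Risk-associated language was detected at minute 23:14. Review the excerpt ...
```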
Similarly, post-session reports are structured drafts for the professional to review, correct, and sign, never auto-generated clinical documents that the professional simply countersigns without review.
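That guarantee can be made structural rather than procedural by modeling the report so that it simply cannot be signed while still a draft. A minimal sketch under assumed names (`SessionReport`, `mark_reviewed`, `sign`); nothing here is taken from the real CauceOS codebase:

```python
from datetime import datetime, timezone

class SessionReport:
    """A post-session report is always born as a draft."""

    def __init__(self, draft_text: str) -> None:
        self.text = draft_text
        self.status = "draft"                    # draft -> reviewed -> signed
        self.signed_by: str | None = None
        self.signed_at: datetime | None = None

    def mark_reviewed(self, corrected_text: str | None = None) -> None:
        """The professional reviews the draft, optionally correcting it."""
        if corrected_text is not None:
            self.text = corrected_text
        self.status = "reviewed"

    def sign(self, professional_id: str) -> None:
        """Only an explicit sign-off after review finalizes the report."""
        if self.status != "reviewed":
            raise PermissionError("A draft cannot be signed without review.")
        self.status = "signed"
        self.signed_by = professional_id
        self.signed_at = datetime.now(timezone.utc)
```

Putting the check inside sign() turns "countersigning without review" into a code path that does not exist, rather than a policy someone must remember.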
When to apply
This framework applies in any context where AI processes information of a clinical, psychological, or wellbeing nature: telehealth platforms, digital coaching tools, session text analysis systems. The regulatory boundary between "assistance device" and "medical device" varies by jurisdiction and is actively evolving in many countries.
Historical origin
The debate about the appropriate role of AI in medicine and mental health intensified with the emergence of AI-assisted diagnostic systems in the 2010s. The conceptual framework of "professional assistance" as an alternative to "professional replacement" emerged as the dominant stance in AI health-ethics forums. The FDA in the US and the EU's CE-marking regime under the Medical Device Regulation have established regulatory frameworks for "Software as a Medical Device" (SaMD) that determine when a system crosses from assistance tool to regulated medical device.
How CauceOS supports it
CauceOS incorporates the disclaimer in all automatically generated reports with the text: "This report is an AI-generated draft as a support tool. It does not replace the clinical judgment of the professional. All notes must be reviewed, modified if necessary, and signed by the professional responsible for the session." The system does not suggest diagnoses or evaluate the professional's clinical competence.
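As a rough sketch of how such a disclaimer can be enforced rather than remembered, every draft could pass through a single choke point that appends the text verbatim. `finalize_draft` is a hypothetical function name, not a CauceOS internal; the disclaimer string is the exact text quoted above:

```python
DISCLAIMER = (
    "This report is an AI-generated draft as a support tool. It does not "
    "replace the clinical judgment of the professional. All notes must be "
    "reviewed, modified if necessary, and signed by the professional "
    "responsible for the session."
)

def finalize_draft(body: str) -> str:
    """Attach the disclaimer to every draft; no report leaves without it."""
    return f"{body}\n\n---\n{DISCLAIMER}"
```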
Related terms
- Crisis detection — the disclaimer is especially critical in the presentation of risk alerts
- Informed consent — the client must also know the system's limitations
- Professional confidentiality — clinical responsibility is inseparable from professional confidentiality
References
- Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56.
- American Psychological Association. (2013). Guidelines for the practice of telepsychology. American Psychologist, 68(9), 791–800.
- European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). COM(2021) 206 final.