Privacy Architecture
We monitor the machine.
Not the person.
EmpathyC evaluates AI behaviour, detects risks, and alerts you. Nobody at EmpathyC, including the founder, can read your users' messages.
Zero plaintext user data
Encrypted on arrival, never stored readable
GDPR processor, not controller
You retain full data sovereignty
Structurally unable to comply
Can't produce what doesn't exist as a single key
How data flows
Every conversation takes two separate paths from the moment it arrives.
User messages
Your app sends conversation
User messages + AI messages + conversation ID
Encrypted on arrival
4-part key architecture: no single master key exists
Used in volatile memory only
Per-conversation keys derived at runtime for safety scoring, then discarded
Stored as encrypted blob
Unreadable by anyone at EmpathyC, ever
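The per-conversation derivation described above can be sketched in a few lines. This is an illustrative assumption, not EmpathyC's actual implementation: the key-part values, the HKDF construction, and the info string are all stand-ins for whatever the real system uses.

```python
import hashlib
import hmac

def hkdf_sha256(key_material: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-256: extract, then expand."""
    prk = hmac.new(b"\x00" * 32, key_material, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def derive_conversation_key(key_parts: list[bytes], conversation_id: str) -> bytes:
    """Combine all four key parts into one per-conversation key.
    No single part (or subset) yields the key; the result lives
    only in volatile memory for the duration of safety scoring."""
    material = b"".join(key_parts)
    return hkdf_sha256(material, b"conv:" + conversation_id.encode())

# Hypothetical usage: derive, score, discard.
parts = [bytes([i]) * 32 for i in range(4)]   # stand-ins for the 4 key parts
key = derive_conversation_key(parts, "chat_abc123")
# ... decrypt into volatile memory, run safety scoring ...
del key                                        # discarded after use
```

The point of the sketch: the derived key is deterministic per conversation, but it never exists at rest, only the encrypted blob does.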
AI messages
Your app sends conversation
Same API call; AI messages separated at ingestion
Evaluated against clinical rubrics
Empathy · Reliability · Consistency · Crisis · Boundary · Harmful Advice
Stored in plaintext
Your AI's outputs: content you already own and control
Safety flag triggered
Incident created → immediate alert (email + Slack) → report generated
Your admin reviews incident
Copies conversation ID → looks up user in your system → takes action
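The two-path split at ingestion can be sketched as a single routing step. The `role` field names and message shape below are assumptions for illustration, not EmpathyC's wire format.

```python
from typing import TypedDict

class Message(TypedDict):
    role: str      # "user" or "assistant" (illustrative field names)
    content: str

def split_at_ingestion(conversation: list[Message]) -> tuple[list[Message], list[Message]]:
    """One API call in, two paths out: user messages go to the
    encrypted path, AI messages to plaintext clinical evaluation."""
    user_path = [m for m in conversation if m["role"] == "user"]
    ai_path = [m for m in conversation if m["role"] == "assistant"]
    return user_path, ai_path
```

Everything downstream of this split differs by path: the user path is encrypted before it touches storage, while the AI path is scored against the clinical rubrics in the clear.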
What you see in incident reports
Layered disclosure: enough context to act, never enough to violate user privacy.
Incident Report · chat_abc123 · Crisis detected
- Incident summary
AI-generated and PII-stripped: no user quotes, no sentiment labels on user content
- AI responses
Full text: your AI's output, evaluated against clinical rubrics
- User messages
[User – ***** 08.03.26 03:44]
Timestamp only. Zero content. Always masked.
- Conversation ID
chat_abc123
The bridge: use this to look up the full conversation in your own system
- Resolution timeline
Detected → Alert sent → Acknowledged → Actioned
- PDF export
Immutable audit trail with cryptographic integrity hash
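The report structure above, including the timestamp-only user placeholders and the export's integrity hash, can be sketched as follows. The field names, masking format, and use of SHA-256 over a canonical JSON serialisation are illustrative assumptions.

```python
import hashlib
import json

def mask_user_entry(timestamp: str) -> str:
    """User messages appear as timestamp-only placeholders; content is never included."""
    return f"[User – ***** {timestamp}]"

def build_report(conversation_id: str, ai_messages: list[str],
                 user_timestamps: list[str], summary: str) -> dict:
    report = {
        "conversation_id": conversation_id,   # the bridge to your own system
        "summary": summary,                   # AI-generated, PII-stripped
        "ai_responses": ai_messages,          # full text: your AI's output
        "user_messages": [mask_user_entry(t) for t in user_timestamps],
    }
    # Hashing a canonical serialisation makes the export tamper-evident:
    # any change to the report contents changes the hash.
    canonical = json.dumps(report, sort_keys=True).encode()
    report["integrity_hash"] = hashlib.sha256(canonical).hexdigest()
    return report
```

A verifier recomputes the hash over the same canonical form and compares; a mismatch means the exported report was altered after generation.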
What we don't have
These are structural constraints, not policy choices. The data doesn't exist in readable form, so it can't be leaked, subpoenaed, or misused.
Readable user messages
Encrypted blobs only. Not readable by anyone at EmpathyC, not even with database access.
PII of any kind
No names, emails, phone numbers, addresses, or demographics. Structurally impossible to collect.
Sentiment labels on user messages
Intentional: labelling user emotional states creates a risk of biased misclassification and is unnecessary for safety evaluation.
A "view full transcript" button
By design. We show AI messages only. User content is never surfaced: not in the UI, not in the API.
A master decryption key
4-part key architecture means no single key exists. Even a court order cannot compel us to produce something that doesn't exist.
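One standard way to get the "no single master key" property is n-of-n secret splitting, where all shares are required and any subset is indistinguishable from random noise. The XOR scheme below is a sketch of that idea, not necessarily EmpathyC's actual scheme.

```python
import secrets

def split_key(key: bytes, n: int = 4) -> list[bytes]:
    """n-of-n XOR split: n-1 random shares plus one share that
    XORs back to the key. No share alone reveals anything."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list[bytes]) -> bytes:
    """Only the XOR of ALL shares reconstructs the key."""
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

With shares held in separate locations, there is structurally no single artefact a subpoena could target: each share on its own is uniformly random bytes.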
Why this architecture matters
Privacy by architecture, not privacy by policy. The difference is enforceable.
GDPR-compliant by design
EmpathyC operates as a data processor, not a controller. You retain full data sovereignty and consent obligations. We process on your behalf.
Liability shrinks structurally
You can't be compelled to produce what you don't have. A full database breach yields only encrypted blobs; no user data is exposed.
Minimal attack surface
Even a complete compromise of our infrastructure yields nothing useful. Encrypted blobs without keys are inert.
Court-ready conversation ID bridge
If a court needs the full transcript, they go to your system โ where you hold consent. The conversation_id is the link. EmpathyC is not in the chain of custody for user content.
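On the client side, the bridge is a plain lookup. The store and field names below are hypothetical; the point is that the transcript and the user's identity only ever resolve inside your system, under your consent basis.

```python
# Hypothetical client-side store mapping conversation IDs to your own records.
transcripts = {
    "chat_abc123": {"user_id": "u_42", "messages": ["...full transcript..."]},
}

def resolve_incident(conversation_id: str) -> tuple[str, list[str]]:
    """Resolve the conversation_id from an EmpathyC incident report.
    EmpathyC never holds this mapping; it exists only in your system."""
    record = transcripts.get(conversation_id)
    if record is None:
        raise KeyError(f"No conversation {conversation_id} in your system")
    return record["user_id"], record["messages"]
```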
Client retains duty of care
You know your users, your context, and your legal obligations. We give you the signal. You make the decision. Deliberate by design.
“We're the smoke alarm, not the fire department.”
We evaluate AI behaviour and alert you to risk. We don't hold user PII, we don't intervene, and we don't make decisions on your behalf. That's not a limitation; it's the right design.
For full legal details see the Privacy Policy and Subprocessor List.