Privacy & Trust
Human-led AI governance. Plain-language boundaries.
Executive Summary
This page is the plain-language trust layer: a clear view of how CareHub approaches privacy, AI, safety, and operational accountability, written for readers who do not want to start with a full technical brief.
Important
CareHub is not a medical provider and does not present AI output as diagnosis, treatment, or a substitute for qualified clinical judgment.
Trust Baseline
- Human-led AI governance: language models can accelerate drafting and synthesis, but strategic direction and sensitive review stay with people.
- Minimum necessary data: the platform is designed to avoid broad data capture and keep exposure narrow.
- Continuity matters: backup, logging, and recovery posture are treated as part of privacy protection, not separate from it.
How To Read This Page
Use this as the plain-language overview. Use the linked security and policy pages when you need formal policy wording or deeper operational detail.
Core Trust Position
- CareHub does not frame AI as autonomous authority.
- CareHub does not treat user trust as implied consent.
- CareHub does not build privacy around optimistic assumptions.
- CareHub documents limits, controls, and escalation paths in plain language.
That matters because trust-sensitive features fail when the public language promises more than the implementation delivers. The platform position is to keep the two aligned.
How CareHub Uses AI
Large language models are used to accelerate drafting, synthesis, translation support, pattern review, and internal build velocity. They are not given open-ended access to user data, and they do not determine company strategy, clinical meaning, or trust posture on their own.
Operational Principle
AI is used to reduce time-to-build and time-to-review. Human accountability is used to determine whether anything should ship.
Faster output is not the same thing as delegated judgment. That distinction is central to the CareHub trust model.
Privacy Boundaries
- Data collection is narrowed to what is operationally required.
- Privacy claims are written to be understandable, not performative.
- User dignity is treated as a design requirement, not a legal afterthought.
- Trust-sensitive features are reviewed repeatedly when scope changes.
Boundary Statement
CareHub's public position is that personal health data should not be sold and identifiable records should not be exposed through silent advertising-style sharing models.
Security Operations
- Encryption in transit and at rest where applicable.
- Access boundaries and least-privilege operational control.
- Audit-oriented logging and monitoring for critical workflows.
- Backup and restore posture designed to support resilience, not just storage.
Public-facing pages stay concise by design. Additional detail can be provided under appropriate review workflows and through the linked security documentation.
Compliance Trajectory
CareHub's public trust posture is designed to be compatible with phased compliance maturity rather than empty "enterprise-ready" language. Where regulated or provider-facing workflows are introduced, privacy and security requirements are expected to tighten before launch rather than after an incident.
Scope Reminder
This page is a trust summary, not a substitute for jurisdiction-specific legal or regulatory advice.
Ask Rupert Disclosures
I'm Rupert, Daddy's goldendoodle, and I answer with help from Google Gemini. I do my best, but I can still get things wrong, so please double-check anything important.
How I answer
- Interface: Ask Rupert is delivered from the Next.js application layer at /ask-rupert.
- Model: the current API route is configured to call gemini-2.5-flash.
- Processing: CareHub brokers the request through its API layer before any model response is returned.
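The brokered flow above can be sketched in TypeScript. This is an illustrative sketch only: the `/ask-rupert` path and `gemini-2.5-flash` model name come from this page, while the helper name, validation, and payload shape are assumptions based on Google's public Generative Language REST API, not CareHub's actual implementation.

```typescript
// Build the JSON body for a Gemini generateContent call. Keeping this as a
// pure function lets the broker layer validate and trim input before
// anything leaves CareHub's servers.
export function buildGeminiRequest(userMessage: string) {
  return {
    contents: [{ role: "user", parts: [{ text: userMessage.trim() }] }],
  };
}

// Hypothetical Next.js App Router handler: the browser talks only to
// CareHub; CareHub holds the API key and forwards the request to
// gemini-2.5-flash, so the key is never exposed client-side.
export async function POST(req: Request): Promise<Response> {
  const { message } = await req.json();
  if (typeof message !== "string" || message.trim().length === 0) {
    return new Response("A non-empty message is required", { status: 400 });
  }
  const upstream = await fetch(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-goog-api-key": process.env.GEMINI_API_KEY ?? "",
      },
      body: JSON.stringify(buildGeminiRequest(message)),
    },
  );
  // Relay the model response (or its error status) back to the client.
  return new Response(upstream.body, { status: upstream.status });
}
```

The design point is the middle hop: because requests are brokered server-side, CareHub can log, rate-limit, or refuse a request before any model sees it.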
When to get human help
You can ask me about anything at all, not just medical stuff. But if something is urgent, dangerous, or needs clinical judgement, please speak to a qualified professional or emergency services right away.
When I can save what you tell me
Until you sign in, your messages are not recorded in CareHub's database; once you are signed in, chats can be saved for auditing, follow-up, and shared understanding. Chats are processed by Google Gemini and are subject to Google's Privacy Policy. CareHub's own policy layer is the Privacy Policy.
Cookies
You'll Love These Cookies!
And We Want You to Know Why...
Unlike the 30-page legal mazes you'll find elsewhere, we're upfront: we use cookies to enhance your experience and keep things secure.
Essential Cookies
- Required for security
- Always On
Analytics Cookies
Help us improve the app.
Consent Actions
- Accept All: enables both essential and analytics cookies.
- Decline Non-Essential
- Customize
- Save Preferences
- Learn More in Our Privacy Policy
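The consent actions above can be sketched as a small preference model. This is a hypothetical illustration; the type and function names are not CareHub's actual schema, only the behavior stated on this page: essential cookies are always on, analytics cookies are opt-in.

```typescript
// Hypothetical shape of a stored cookie-consent preference.
type CookiePreferences = {
  essential: true;     // always on: required for security
  analytics: boolean;  // opt-in: "help us improve the app"
};

// Map a consent action to the resulting preference object. Only
// "Accept All" turns analytics on; essential cookies stay on either way.
export function applyConsent(
  action: "accept-all" | "decline-non-essential",
): CookiePreferences {
  return { essential: true, analytics: action === "accept-all" };
}
```

Modeling `essential` as the literal type `true` makes "Always On" a compile-time guarantee rather than a runtime convention.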
Related Documents
For deeper review, use the pages below as the more specific references on policy, security posture, and the AI experience itself.