# How we protect your product data
AES-256 encryption, zero-retention AI APIs, and Row Level Security keep your workspace isolated. You control what gets processed—and whether to help improve 1Brain.
## Data flow
Your sources and conversations stay in your workspace. Only selected context is sent to AI providers when you send a message.
## Data processing
- Sources are indexed within your workspace and used for RAG (retrieval-augmented generation).
- Before sending content to AI models, we apply sanitization and PII redaction where configured.
- When local context is insufficient, web search may be used as a fallback (with user consent).
- Documents and artifacts are stored in your workspace; only workspace members can access them.
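To make the sanitization step above concrete, here is a minimal, illustrative sketch of regex-based PII redaction run on context before it leaves the workspace. The function name, patterns, and placeholders are assumptions for illustration, not 1Brain's actual pipeline, which may use more robust detection.

```typescript
// Illustrative PII redaction sketch. Patterns, names, and placeholders
// are hypothetical; a production pipeline would use stronger detection.
type Redactor = { name: string; pattern: RegExp; placeholder: string };

const redactors: Redactor[] = [
  { name: "email", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g, placeholder: "[EMAIL]" },
  // SSN runs before the broader phone pattern so it is not mislabeled.
  { name: "ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/g, placeholder: "[SSN]" },
  { name: "phone", pattern: /\+?\d[\d\s().-]{7,}\d/g, placeholder: "[PHONE]" },
];

// Replace every configured PII pattern with its placeholder, in order.
export function redactPII(text: string): string {
  return redactors.reduce(
    (acc, r) => acc.replace(r.pattern, r.placeholder),
    text,
  );
}
```

Ordering matters: narrower patterns (SSN) run before broader ones (phone) so a value is labeled by the most specific matching rule.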
## Security controls
- Encrypted vault: Integration tokens (Jira, Linear, Google, etc.) are stored in an AES-256-GCM vault.
- Row Level Security (RLS): Database access is restricted so that only members of a workspace can read or write that workspace's data.
- Audit logging: Critical actions (e.g. source usage, search, export) are logged for accountability.
- PII handling: We support PII detection and redaction in pipelines; sensitive content can be stripped before sending to providers.
- Provider policies: We follow provider-specific data and compliance policies when calling external AI APIs.
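The encrypted-vault control above can be sketched with Node's built-in `crypto` module. This is a minimal illustration of AES-256-GCM token sealing, assuming a 32-byte workspace key held elsewhere (e.g. a KMS); it is not 1Brain's actual vault implementation.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Illustrative AES-256-GCM sealing of an integration token.
// Assumes `key` is a 32-byte secret managed outside this sketch.
export function seal(key: Buffer, plaintext: string): Buffer {
  const iv = randomBytes(12); // 96-bit nonce, unique per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Stored layout: IV || 16-byte auth tag || ciphertext
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

export function unseal(key: Buffer, sealed: Buffer): string {
  const iv = sealed.subarray(0, 12);
  const tag = sealed.subarray(12, 28);
  const ct = sealed.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // tampering makes decipher.final() throw
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM's auth tag means a tampered ciphertext fails to decrypt rather than silently yielding garbage, which is why it is preferred over unauthenticated modes for secret storage.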
## AI provider transparency
We use the following AI providers. Each offers a zero-retention API, so your content is not stored on their servers.
| Provider | Zero retention | GDPR compliant | Data regions |
|---|---|---|---|
| Anthropic (Claude) | Yes | Yes | US |
| OpenAI (GPT-4o) | Yes | Yes | US |
| Google (Gemini) | Yes | Yes | US, EU |
## Subprocessors
Third parties that process personal data on our behalf:
| Subprocessor | Purpose | Location | DPA / compliance |
|---|---|---|---|
| Supabase | Database, auth, storage | US (AWS) | GDPR DPA available |
| Anthropic | AI generation (Claude) | US | Zero-retention API ToS |
| OpenAI | AI generation, embeddings, Whisper | US | Zero-retention API ToS |
| Google AI | AI generation (Gemini), embeddings | US/EU | Zero-retention API ToS |
| Tavily / Brave | Web search | US | API ToS |
| Stripe | Payment processing | US/EU | PCI DSS, GDPR DPA |
See our Privacy Policy for full details.
## Training consent
We may use anonymized usage patterns, never raw documents or conversations, to improve the product.
- Default: ON. You can turn it off anytime in Settings → Privacy & Trust.
- Your choice: you control whether anonymized usage helps improve 1Brain.
- Revocable: when consent is off, we do not use your data for product improvement.
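As an illustration of what "anonymized usage patterns, not raw content" could mean in practice, here is a hypothetical sketch: content fields are dropped entirely and the user ID is replaced with a salted one-way hash. Field names and the salting scheme are assumptions, not 1Brain's actual telemetry code.

```typescript
import { createHash } from "node:crypto";

// Hypothetical usage-event shapes; illustrative field names only.
interface RawEvent { userId: string; action: string; documentText?: string }
interface AnonEvent { userHash: string; action: string }

// Keep only the action type; replace the user ID with a salted SHA-256
// hash and deliberately drop any document content.
export function anonymize(e: RawEvent, salt: string): AnonEvent {
  const userHash = createHash("sha256").update(salt + e.userId).digest("hex");
  // Note: e.documentText is intentionally not carried over.
  return { userHash, action: e.action };
}
```

The salted hash is stable for one user (so aggregate patterns are preserved) but cannot be reversed to recover the original ID without the salt.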
## Your controls
You can control what happens with your data:
- AI processing consent and web search fallback toggles
- Contribute to model improvement (Help improve 1Brain) — on by default, turn off anytime
- Bring your own API keys (BYOK) so data flows under your agreement with the provider
- Export and delete your user data from Settings
## What we do NOT do
- We do NOT use raw documents or conversations for training; you can turn off anonymized usage anytime.
- We do NOT share your data between workspaces.
- We do NOT sell your data to third parties.
- We do NOT store your content on AI provider servers (they use zero-retention APIs).
- We do NOT access your workspace without your permission.
- We do NOT lock in your contribution choice; you can turn it off at any time.
## Privacy rights
You can export and delete your user data from Settings. We do not sell your data to third parties.
Privacy Policy · Terms of Service