## Data flow overview
- Local context collection: Oppla constructs a context window from the active file, open buffers, and an optional set of relevant repository files.
- Local vs. remote decision: Based on your AI settings and AI Rules, Oppla chooses between local runtimes or cloud providers.
- Outbound requests: When using cloud providers, Oppla sends a minimized context payload unless a broader context is explicitly enabled.
- Tool invocations: Agents or tools (linters, test runners) run inside a sandboxed environment; results are kept local unless explicitly uploaded.
Oppla's privacy design follows four principles:

- Least privilege: send the minimum required context to providers.
- Auditability: log model requests and agent actions for traceability.
- Configurability: allow per-project and per-user privacy settings.
- Local-first support: prefer local models when policy requires it.
## What Oppla may send to AI providers
By default, Oppla aims to minimize exposure. Typical payloads include:

- Short snippets from the active buffer (line range configurable)
- Filenames and contextual metadata (not full repository by default)
- Explicitly included files or folders when agents are asked to run project-wide tasks (only after user confirmation or via approved AI Rules)
Oppla should never send:

- Secrets (API keys, private keys, tokens, passwords)
- Environment and credentials files (e.g., `.env`)
- Personal data or PII, unless explicitly authorized and logged
## Local-only & local-first modes
- Local-first: prefer a configured local runtime; fall back to cloud providers only when the local runtime is unavailable.
- Local-only: disallow all outbound requests. Use this mode for air-gapped or highly regulated environments.

Enabling local-only via settings:

- `local_only` prevents outbound network calls to cloud providers and disables automatic fallback behavior.
- In `local_only` mode, you must configure and run a local runtime (Ollama, a llama.cpp server, etc.) and point Oppla at its endpoint.
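As a sketch, a settings entry enabling local-only mode might look like the following. The key names (`ai.mode`, `local_endpoint`, `fallback_to_cloud`) are illustrative assumptions, not a confirmed schema — check your Oppla version's settings reference. The endpoint shown is Ollama's default port:

```json
{
  "ai": {
    "mode": "local_only",
    "local_endpoint": "http://localhost:11434",
    "fallback_to_cloud": false
  }
}
```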
## Secret management best practices
- Never check API keys or secrets into source code repositories.
- Use OS-native secret storage:
  - macOS: Keychain
  - Linux: Secret Service / GNOME Keyring / `pass`
  - Windows: Credential Manager (when supported)
- For CI or other automated systems, use short-lived tokens and environment variables (e.g., `OPPLA_AI_API_KEY`).
- When configuring custom endpoints, prefer token-based auth with limited scope.
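To illustrate the first point above, a minimal pre-commit-style check might scan staged text for secret-looking patterns before anything is committed. This is a sketch; real scanners such as gitleaks or detect-secrets ship far more comprehensive rule sets:

```python
import re

# Illustrative patterns for common secret shapes — not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*\S+"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def find_secrets(text: str) -> list[str]:
    """Return every line of `text` that looks like it contains a secret."""
    return [
        line
        for line in text.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]
```

A pre-commit hook would run this over each staged file and abort the commit when the returned list is non-empty.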
## Redaction and AI Rules
Use AI Rules to prevent sensitive files or patterns from being sent to providers. Typical patterns:

- Block files by path globs (e.g., `**/*.env`, `secret/**`)
- Block by regex on content (`api_key|secret|password|token`)
- Redact matched content before any outbound request
Rules live in `.oppla/ai-rules.json`.
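A sketch of what rules in `.oppla/ai-rules.json` might look like; the field names here are illustrative assumptions rather than a confirmed schema:

```json
{
  "rules": [
    { "action": "block",  "paths": ["**/*.env", "secret/**"] },
    { "action": "redact", "content_regex": "api_key|secret|password|token" }
  ],
  "on_violation": "log_and_require_approval"
}
```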
- Matches trigger redaction before any outbound request.
- Violations are logged; admins can configure whether to block or require approval.
## Audit logging & retention
Oppla supports configurable audit logs for AI requests and agent actions. Logs typically include:

- Who triggered the action (user identity)
- When it occurred (timestamp)
- Which model/provider and model name were used
- What files or ranges were included (redacted as necessary)
- Diffs proposed and applied (if any)
- Approval decisions and approver identity
Retention and access:

- Configure retention policy per organization (e.g., 90 days, 365 days)
- Support for export to SIEM or centralized logging (S3, Elasticsearch, or similar)
- Secure access control to logs (RBAC)
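For illustration, a single audit log entry covering the fields listed above might look like this; the shape is an assumption, not a documented schema:

```json
{
  "timestamp": "2024-05-01T12:34:56Z",
  "user": "dev@example.com",
  "provider": "cloud-provider-x",
  "model": "model-name",
  "included_ranges": [
    { "file": "src/main.rs", "lines": "40-80", "redactions": 1 }
  ],
  "diff_applied": true,
  "approval": { "required": true, "approved_by": "lead@example.com" }
}
```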
## Enterprise controls (RBAC, approvals, quotas)
- Role-based access control: define roles (owner, maintainer, developer, auditor) and map actions to roles (run agents, approve high-risk changes).
- Approval gates: require human approval for high-risk changes (paths matching infra, auth, deploy).
- Quotas: per-user and per-project quotas for model usage and agent runs to control cost and risk.
- Organization policies: enforce project-wide rules (e.g., local_only for certain repositories).
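An organization policy enforcing `local_only` for a sensitive repository while capping usage elsewhere could be sketched as follows; all key names are illustrative assumptions:

```json
{
  "org_policies": {
    "repos": {
      "payments-service": { "ai": { "mode": "local_only" } },
      "*": { "ai": { "quota_requests_per_day": 500 } }
    }
  }
}
```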
## Compliance & regulatory considerations
- GDPR / Data residency: For EU data residency concerns, prefer local-only or region-scoped cloud endpoints. Document what data is transmitted and retention windows.
- SOC2 / ISO: Provide audit trails and access controls to help meet compliance needs.
- Export controls: If operating in jurisdictions with export restrictions, use local-only deployments or approved cloud regions.
## Tooling & Model Context Protocol (MCP) security
- Tools invoked by agents run in sandboxed environments with limited filesystem access.
- MCP should validate all tool inputs to prevent command injection.
- Tools should run with least privilege and provide dry-run modes.
- Restrict network access for tool containers unless explicitly required and authorized.
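A minimal sketch of the input-validation point: never hand tool arguments to a shell; allowlist the tool binaries and pass arguments as a vector. The tool names and paths here are hypothetical:

```python
import subprocess

# Only tools on this allowlist may be invoked, each with a fixed binary path.
ALLOWED_TOOLS = {
    "pytest": ["/usr/bin/pytest"],
    "eslint": ["/usr/bin/npx", "eslint"],
}

def run_tool(name: str, args: list[str]) -> subprocess.CompletedProcess:
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    # Reject anything that looks like an option-injection attempt.
    if any(a.startswith("-") for a in args):
        raise ValueError("option arguments are not accepted from tool input")
    # shell=False (the default) passes args verbatim to the binary,
    # so `; rm -rf /`-style shell injection is impossible.
    return subprocess.run(ALLOWED_TOOLS[name] + args,
                          capture_output=True, text=True)
```

Running the tool inside a container with no network access, as the last bullet suggests, composes with this check rather than replacing it.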
## Incident response & breach handling
- If accidental exfiltration is detected:
  - Rotate compromised keys immediately.
  - Revoke tokens and update AI Rules to block the offending patterns.
  - Review audit logs to identify the scope of exposure.
  - Notify affected stakeholders per your incident response policy.
- Maintain runbooks for handling model-provider incidents (provider key compromise, unexpected data retention behavior).
## Developer guidance (for extension & tool authors)
- Return structured, non-sensitive outputs where possible.
- Avoid tooling that requires sending full repository contents to external services.
- Respect AI Rules and follow secure defaults (dry-run, audit-only modes).
- Document what data your extension/tool needs and why.
## Troubleshooting & FAQs
Q: How do I ensure nothing leaves my network?
A: Enable `local_only` mode and configure a local runtime or internal model server. Disable cloud providers and verify no fallback is configured.
Q: My agent needs to run tests — will it send test output to the cloud?
A: Only if the agent’s workflow or AI Rules allow that. Use project-level rules to disallow outbound sends for test artifacts.
Q: How are audit logs protected?
A: Audit logs are secured by access control and encryption at rest. Configure S3/remote stores behind VPCs or private network access where required.
Q: Can I redact files automatically?
A: Yes — define redaction rules in AI Rules. Oppla will redact before sending context to providers and log the redaction.
## Links & next steps

- [AI Configuration](./configuration.mdx)
- [AI Rules](./rules.mdx)
- [Agent Panel](./agent-panel.mdx)
- [Edit Prediction](./edit-prediction.mdx)
- [Available Models](./models.mdx)
- [Text Threads](./text-threads.mdx)