What is Edit Prediction?
Edit Prediction predicts your next changes in-place. Rather than completing a single token, it suggests whole edits, ranges, and intent-aware transformations such as:
- Completing multi-line constructs (loops, function bodies)
- Proposing import additions and automatic reordering
- Fixing off-by-one errors and other common logic mistakes
- Suggesting refactor steps (rename, extract function) as a preview
How it works (high-level)
- Context collection: Oppla constructs a lightweight context window from the active file, open buffers, and important project files (imports, configs).
- Local heuristics: Fast heuristics run locally to produce immediate suggestions for trivial edits.
- Model scoring: If deeper reasoning is required, the configured model scores candidate edits and ranks them by confidence.
- Presentation: The top predictions are shown inline or in a suggestion strip; accept, cycle, or dismiss them.
- Privacy: By default, context sent to cloud models is minimized; local models avoid outbound traffic.
- Safety: Predictions are suggestions — they require explicit acceptance to modify files unless configured otherwise.
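To make the flow above concrete, here is a minimal sketch of a hybrid local-plus-model ranking loop. It is illustrative only, not Oppla's implementation: `EditCandidate`, `localHeuristics`, and `modelScore` are hypothetical names, and the stubs stand in for real context analysis and provider calls.

```typescript
// Illustrative sketch of the hybrid prediction flow described above.
// All names are hypothetical; this is not Oppla's internal API.

interface EditCandidate {
  range: { startLine: number; endLine: number }; // lines the edit would replace
  newText: string;                               // proposed replacement text
  confidence: number;                            // 0–1, higher is better
}

// Stage 1: fast local heuristics yield immediate candidates for trivial edits.
// (Stub: a real implementation would pattern-match the surrounding code.)
function localHeuristics(context: string): EditCandidate[] {
  return context.trimEnd().endsWith("(")
    ? [{ range: { startLine: 0, endLine: 0 }, newText: ")", confidence: 0.9 }]
    : [];
}

// Stage 2: the configured model proposes and re-scores candidates when deeper
// reasoning is needed. (Stub: would call the provider set up in AI Settings.)
async function modelScore(context: string, seed: EditCandidate[]): Promise<EditCandidate[]> {
  return seed;
}

async function predictEdits(context: string, confidenceThreshold: number): Promise<EditCandidate[]> {
  // Immediate, local suggestions first.
  let candidates = localHeuristics(context);

  // If nothing clears the threshold, fall back to model scoring.
  if (candidates.every((c) => c.confidence < confidenceThreshold)) {
    candidates = await modelScore(context, candidates);
  }

  // Rank by confidence and keep only suggestions worth presenting.
  return candidates
    .filter((c) => c.confidence >= confidenceThreshold)
    .sort((a, b) => b.confidence - a.confidence);
}
```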
Quick start
- Open AI Settings: Command Palette → `oppla: Open AI Settings` (or Preferences → AI)
- Ensure an AI provider or local model is configured (see AI Configuration).
- Enable Edit Prediction: Preferences → AI → enable “Edit Prediction”
- Try it: Open a source file and start typing a function or a common pattern. Watch for inline predictions.
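If you prefer editing settings directly, a minimal sketch is below. It assumes the `ai.edit_prediction.enabled` key mentioned under Troubleshooting; the exact nesting may differ in your build, so treat it as illustrative.

```jsonc
// ~/.config/oppla/settings.json — enable Edit Prediction (nesting assumed)
{
  "ai": {
    "edit_prediction": {
      "enabled": true
    }
  }
}
```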
Keyboard & UX
- Accept inline prediction: `Tab` (configurable)
- Accept in suggestion strip: `Cmd-Enter` / `Ctrl-Enter`
- Next suggestion: `Cmd-Right` / `Ctrl-Right`
- Previous suggestion: `Cmd-Left` / `Ctrl-Left`
- Dismiss: `Esc`
- Toggle Edit Prediction on/off: Command Palette → `AI: Toggle Edit Prediction`
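The accept key can be rebound. The snippet below is a hypothetical example: the keymap file path, its structure, and the `editor::AcceptEditPrediction` action name are assumptions; check Key Bindings for the real action names.

```jsonc
// Hypothetical keymap entry; verify the file location and action name
// against the Key Bindings page before using.
[
  {
    "context": "Editor",
    "bindings": {
      "ctrl-;": "editor::AcceptEditPrediction"
    }
  }
]
```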
Configuration (example)
Add or edit your user settings (`~/.config/oppla/settings.json`) to tweak how Edit Prediction behaves:
- `local_first`: Prefer the local runtime when available (reduces latency and improves privacy).
- `confidence_threshold`: Minimum confidence required to show a suggestion (0–1).
- `auto_apply_low_risk`: When true, low-risk single-line fixes (e.g., a missing semicolon) can be applied automatically; use with caution.
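For example, the sketch below combines the keys referenced on this page (including the tuning keys mentioned under Troubleshooting). The nesting under `ai.edit_prediction` and the specific values are assumptions; adjust them to your build and workflow.

```jsonc
// ~/.config/oppla/settings.json — Edit Prediction tuning (nesting and values assumed)
{
  "ai": {
    "edit_prediction": {
      "enabled": true,
      // Prefer a local runtime when one is available.
      "local_first": true,
      // Hide suggestions scored below this confidence (0–1).
      "confidence_threshold": 0.6,
      // Keep false unless you want low-risk single-line fixes applied automatically.
      "auto_apply_low_risk": false,
      // Larger windows give the model more context but increase latency.
      "context_window_lines": 120,
      // Prefetching on file open trades bandwidth for responsiveness.
      "prefetch_on_file_open": true
    }
  }
}
```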
Privacy & security
- Default behavior minimizes remote context: only line ranges and a short surrounding window are sent to cloud providers unless you opt in to broader context.
- For sensitive projects, enable `local_first` and run models locally (Ollama, llama.cpp, etc.).
- Audit logs: Enterprise builds can record prediction requests for compliance; see the Privacy & Security docs for configuration details.
- Always review edits before accepting. Predictions are heuristics and may be incorrect.
Developer & integration notes
- Edit Prediction will expose an internal API for extensions to register custom predictors (planned). Extensions will be able to provide domain-specific suggestions (e.g., SQL snippets, test generation).
- Prediction scoring uses a hybrid model: a lightweight local scorer for immediate suggestions plus server/model scoring for higher-quality options.
- If you’re building tooling that interacts with predictions, prefer non-blocking callbacks and provide opt-out settings to users.
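Because the predictor API is still planned, the sketch below only illustrates the shape such an integration might take. `PredictionContext`, `Prediction`, and `registerPredictor` are hypothetical names, and the registry is a local stand-in so the example is self-contained.

```typescript
// Hypothetical extension-side predictor. None of these names are a published
// Oppla API; they illustrate the planned, non-blocking integration style.

interface PredictionContext {
  filePath: string;
  languageId: string;
  text: string;          // surrounding context window
  cursorOffset: number;  // cursor position within `text`
}

interface Prediction {
  newText: string;
  confidence: number;    // 0–1
}

type Predictor = (ctx: PredictionContext) => Promise<Prediction[]>;

// A domain-specific predictor (SQL snippets). It stays non-blocking by
// returning a promise and doing no heavy synchronous work.
const sqlPredictor: Predictor = async (ctx) => {
  if (ctx.languageId !== "sql") return [];
  const line = ctx.text.slice(0, ctx.cursorOffset).split("\n").pop() ?? "";
  // Toy heuristic: after "select", offer a clause skeleton.
  return /^select$/i.test(line.trim())
    ? [{ newText: "SELECT * FROM table_name WHERE ", confidence: 0.4 }]
    : [];
};

// Local stand-in for the planned registration hook, including a user opt-out flag.
const predictors = new Map<string, { predictor: Predictor; userCanDisable: boolean }>();
function registerPredictor(id: string, predictor: Predictor, opts: { userCanDisable: boolean }): void {
  predictors.set(id, { predictor, userCanDisable: opts.userCanDisable });
}

registerPredictor("sql-snippets", sqlPredictor, { userCanDisable: true });
```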
Troubleshooting
- No suggestions appearing:
  - Confirm `ai.edit_prediction.enabled` is true.
  - Check that your provider is configured and reachable (see AI Configuration).
  - If using local models, ensure the local runtime is running and reachable.
- Suggestions are low quality:
  - Increase context by opening relevant files (Edit Prediction prefers open buffers).
  - Try a different model or adjust `confidence_threshold`.
  - Disable any experimental rules that modify code structure.
- High latency:
  - Enable `local_first`.
  - Reduce `context_window_lines` or set `prefetch_on_file_open` to false.
- Unexpected auto-applies:
  - Ensure `auto_apply_low_risk` is false; if it is true, set it to false while debugging.
Best practices
- Let Edit Prediction learn: use defaults for a week before heavy customization.
- Use `local_first` on private codebases.
- Combine with AI Rules to constrain behaviors (e.g., disallow certain automated edits).
- Keep open only files relevant to the task — smaller context often leads to better suggestions.
Related pages
- AI Overview
- AI Configuration
- Agent Panel
- Key Bindings
- Themes
- Privacy & Security (stub)
Feedback
This is a living feature. If Edit Prediction behaves unexpectedly or you have ideas for improvements (acceptance UX, confidence tuning, domain-specific predictors), please open a docs issue or a feature request.

Note: This page is a functional stub. Full UX examples, GIFs, and benchmarked latency/accuracy numbers will be added after engineering provides measurement artifacts.