- Avoid running unverified `curl | sh` commands in production. Prefer downloading signed release artifacts, verifying checksums/signatures, or using distro packages.
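The checksum step can be sketched as follows; the artifact and `.sha256` filenames are hypothetical stand-ins for whatever the release page actually publishes:

```shell
# Checksum-verification sketch. The artifact and .sha256 names are
# placeholders; in practice both come from the official release page.
cd "$(mktemp -d)"
echo "example artifact" > oppla-1.0.0.deb            # stand-in for the real download
sha256sum oppla-1.0.0.deb > oppla-1.0.0.deb.sha256   # normally published by the vendor
sha256sum -c oppla-1.0.0.deb.sha256                  # prints "oppla-1.0.0.deb: OK" on success
```

If the vendor publishes GPG signatures as well, verify those with `gpg --verify` against the project's published signing key.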
- Officially tested on:
- Ubuntu 20.04 LTS and newer
- Fedora 38 and newer
- Debian 11 and newer
- Arch Linux (rolling)
- openSUSE Tumbleweed
- Architecture: x86_64 (Intel/AMD) or aarch64 (ARM)
- RAM: 8GB minimum (16GB+ recommended for local models)
- Disk: 2GB for core install, +2–10GB for local models
- GPU: Optional but recommended for local AI inference (NVIDIA with CUDA, AMD with ROCm). Vulkan 1.3 drivers recommended for accelerated workloads.
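A few non-destructive checks can confirm which acceleration stack is present; `nvidia-smi` and `vulkaninfo` only exist once the matching drivers/tools are installed, so the sketch below degrades to a hint when they are absent:

```shell
# Probe for GPU tooling without failing when a stack is absent.
if command -v nvidia-smi >/dev/null; then
    gpu_status="$(nvidia-smi --query-gpu=name --format=csv,noheader 2>/dev/null || echo unknown)"
else
    gpu_status="no NVIDIA driver tools found"
fi
if command -v vulkaninfo >/dev/null; then
    vulkan_status="Vulkan tools present"
else
    vulkan_status="no Vulkan tools (e.g. install vulkan-tools)"
fi
echo "GPU: $gpu_status"
echo "Vulkan: $vulkan_status"
```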
- Recommended: Download from Official Page
- Download the appropriate package for your distribution from: https://app.oppla.ai/home?tab=download
- Available formats:
  - `.deb` packages for Ubuntu, Debian, and derivatives
  - `.rpm` packages for Fedora, Red Hat, CentOS, and derivatives
  - `.tar.gz` archives for manual installation
  - AppImage for portable usage
- Example installation (deb):
- Download the .deb file from https://app.oppla.ai/home?tab=download
- Install: sudo apt install ./oppla-*.deb
- Example installation (rpm):
- Download the .rpm file from https://app.oppla.ai/home?tab=download
- Install: sudo dnf install ./oppla-*.rpm
- Manual Installation (tar.gz)
- Download the tar.gz archive from https://app.oppla.ai/home?tab=download
- Extract: tar -xzf oppla-*.tar.gz
- Move to installation directory: sudo mv oppla /opt/
- Create symlink: sudo ln -s /opt/oppla/bin/oppla /usr/local/bin/oppla
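Put together, the manual steps look like this. The sketch below uses a throwaway prefix and a stub binary so it can be tried without root; on a real system the destinations are `/opt` and `/usr/local/bin` and the `mv`/`ln` steps need `sudo`:

```shell
# Manual-install sketch. PREFIX and the stub binary are stand-ins;
# with a real archive you would run `tar -xzf oppla-*.tar.gz` first.
cd "$(mktemp -d)"
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/opt" "$PREFIX/usr/local/bin"
mkdir -p oppla/bin
printf '#!/bin/sh\necho oppla\n' > oppla/bin/oppla    # stub in place of the real binary
chmod +x oppla/bin/oppla
mv oppla "$PREFIX/opt/"                               # real system: sudo mv oppla /opt/
ln -s "$PREFIX/opt/oppla/bin/oppla" "$PREFIX/usr/local/bin/oppla"
"$PREFIX/usr/local/bin/oppla"                         # prints "oppla" via the symlink
```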
- AppImage (Portable)
- Download the AppImage from https://app.oppla.ai/home?tab=download
- Make executable: chmod +x Oppla-*.AppImage
- Run directly: ./Oppla-*.AppImage
- If installed via package manager:
- Debian/Ubuntu: sudo apt remove oppla
- Fedora/RPM: sudo dnf remove oppla
- If installed manually (tar.gz):
- Remove installation directory: sudo rm -rf /opt/oppla
- Remove symlink: sudo rm /usr/local/bin/oppla
- Optional: Remove config: rm -rf ~/.config/oppla
- The install process attempts to add a symlink at /usr/local/bin/oppla for CLI usage.
- If the CLI is missing, run: sudo ln -s /opt/oppla/bin/oppla /usr/local/bin/oppla (adjust paths to your install).
- Desktop entries: the installer registers a .desktop file for GNOME/KDE to show Oppla in application menus.
- Oppla relies on standard desktop portals for full functionality:
- org.freedesktop.portal.FileChooser (file dialogs)
- org.freedesktop.portal.OpenURI (links)
- org.freedesktop.portal.Secret or org.freedesktop.secrets (secure storage)
- Install xdg-desktop-portal and the appropriate backend for your environment:
- Ubuntu/Debian: sudo apt install xdg-desktop-portal xdg-desktop-portal-gtk
- Fedora: sudo dnf install xdg-desktop-portal xdg-desktop-portal-gtk
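To confirm the portal service is actually reachable, you can look for its well-known name on the session bus; the check below degrades to a hint when run outside a desktop session:

```shell
# Look for the portal's well-known name on the session bus.
if busctl --user list 2>/dev/null | grep -q org.freedesktop.portal.Desktop; then
    portal_state="portal service running"
else
    portal_state="portal not detected - is xdg-desktop-portal installed and a session active?"
fi
echo "$portal_state"
```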
- NVIDIA: install CUDA toolkit and drivers; enable CUDA-enabled runtimes for local models.
- AMD: enable ROCm where supported; verify compatibility with your distribution.
- Apple Silicon: not applicable on Linux; use aarch64 builds on compatible ARM machines.
- If you plan to run large local models, ensure swap or disk-based cache is configured and monitor memory usage.
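Setting up a swap file follows the usual Linux pattern; sizes below are illustrative, and only the signature-writing step runs unprivileged, so the root-only steps are shown commented:

```shell
# Swap-file sketch; pick a realistic size (e.g. 8G+) on a real system.
cd "$(mktemp -d)"
dd if=/dev/zero of=swapfile bs=1M count=16 2>/dev/null   # demo size; real: 8G or more
chmod 600 swapfile                                       # swap must not be world-readable
mkswap swapfile                                          # write the swap signature (no root needed)
# sudo swapon "$PWD/swapfile"                            # root-only: enable it now
# echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # root-only: persist across reboots
```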
- Local runtimes Oppla supports (examples):
- Ollama (local server)
- llama.cpp / GGML frontends
- Custom HTTP endpoints (self-hosted inference)
- To use a local runtime:
- Install and run your chosen runtime (see that runtime’s docs)
- Point Oppla at the local endpoint in AI settings (Command Palette → `oppla: Open AI Settings`)
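For example, with Ollama (whose API listens on port 11434 by default) you can confirm the endpoint is alive before pointing Oppla at it; `/api/tags` is Ollama's model-listing route, so substitute your runtime's own health route otherwise:

```shell
# Probe the local inference endpoint; prints a hint if it's down.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
    runtime_state="runtime reachable"
else
    runtime_state="runtime not reachable - is 'ollama serve' running?"
fi
echo "$runtime_state"
```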
- Verify release signatures and checksums for any downloaded artifacts.
- For enterprise deployment, use private, signed package repositories.
- Avoid storing API keys in project files. Use OS secret stores or environment variables:
- Export: export OPENAI_API_KEY="sk_xxx"
- Or store via Secret Service / GNOME Keyring integration.
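A sketch of both approaches; `sk_xxx` is the placeholder from above, and the `secret-tool` lines (part of libsecret) are commented because they require a running keyring daemon:

```shell
# Keep the key out of project files: export for the current session only.
export OPENAI_API_KEY="sk_xxx"   # placeholder, not a real key
# Or hand it to the Secret Service (attribute names are arbitrary labels):
# printf '%s' "$OPENAI_API_KEY" | secret-tool store --label="OpenAI API key" service openai account default
# export OPENAI_API_KEY="$(secret-tool lookup service openai account default)"
```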
- For cloud AI providers and extension updates, ensure outbound HTTPS (ports 443) to provider endpoints.
- For air-gapped environments, enable local-only mode in AI settings to prevent outbound requests.
- Install fails with missing libraries:
- Check distro-specific dependency packages (glibc, libgtk, libvulkan)
- Oppla won’t start:
- Run the binary from a terminal to capture logs: /opt/oppla/bin/oppla --verbose
- Check ~/.config/oppla/logs/ for runtime errors
- AI features are slow:
- Use `local_first` in AI settings, or switch to a lower-latency local model
- Reduce context window or close unrelated buffers
- Local model not reachable:
- Verify the runtime is running and reachable (curl http://localhost:PORT/health)
- Check firewall / localhost bindings
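To see whether anything is bound on the runtime's port at all, `ss` (or `netstat` on older systems) lists local listeners; 11434 below is just Ollama's default, so use your runtime's port:

```shell
# List local TCP listeners and look for the runtime's port.
listeners="$(ss -tln 2>/dev/null || netstat -tln 2>/dev/null || echo '')"
if echo "$listeners" | grep -q ':11434'; then
    echo "something is listening on 11434"
else
    echo "nothing listening on 11434 - check the runtime and its bind address"
fi
```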
- Running Oppla in development or build-from-source mode is documented in the main development guide (see docs root).
- To build from source on Linux, ensure build dependencies are installed and consult docs/development.mdx.
- Configuring Oppla: ../configuring-oppla.mdx
- Visual customization (create stub at ./visual-customization.mdx) — covers workspace layout, theming UX, and adaptive theme tips.
- Advanced keybindings (create stub at ../advanced/keybindings.mdx) — deep-dive into keymap syntax, debugging, and migration guides.
- AI Configuration & Privacy: ../ai/configuration.mdx and ../ai/privacy-and-security.mdx