- Goal: Full native Windows 11 support with GPU acceleration and CLI integration.
- Current: Preview and build-from-source guidance available. Native packaged installers will follow in future releases.
- Workarounds: WSL2 and containerized builds are recommended until native installers ship.
- Easiest (recommended today for many users): Run Oppla inside WSL2 (Ubuntu or Debian) with GUI support (WSLg or an X server). This leverages Linux packaging and is simpler for local-model workloads.
- Native build: Build from source on Windows — supported but requires Visual Studio toolchain, correct native dependencies, and GPU drivers. Best for contributors and QA.
- Container: Use a Linux container (Docker Desktop) with device / GPU passthrough (NVIDIA) for testing local models.
- OS: Windows 10 (1909+) / Windows 11 recommended for best WSL2 and GPU support.
- Windows Subsystem for Linux (WSL2) recommended:
- Install WSL2 and a Linux distro (Ubuntu LTS recommended).
- Ensure WSLg (graphics support) or an X server is available for GUI forwarding.
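As a sketch, enabling WSL2 with an Ubuntu distro from an elevated PowerShell prompt looks like this (the distro name is one current option; run `wsl --list --online` to see what is available):

```shell
# Run from an elevated PowerShell prompt on Windows 10 (1909+) / Windows 11.
wsl --install -d Ubuntu-22.04   # enables WSL2 and installs Ubuntu 22.04 LTS
wsl --update                    # pull the latest WSL kernel (includes WSLg)
wsl --version                   # confirm WSL and WSLg versions after reboot
```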
- Native toolchain (for building from source on Windows):
- Visual Studio 2022 or newer with “Desktop development with C++” workload.
- CMake (recent version)
- Git (for source checkout)
- Python 3.x (if the build uses Python tools)
- Node.js / npm or Rust toolchain depending on native components (check the repository README)
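If you prefer a scripted setup, most of the native toolchain can be installed with winget (the package IDs below are the current upstream ones; verify with `winget search` before relying on them):

```shell
# From PowerShell: install the native build toolchain via winget.
winget install --id Git.Git
winget install --id Kitware.CMake
winget install --id Python.Python.3.12
# Visual Studio 2022 Community with the C++ workload:
winget install --id Microsoft.VisualStudio.2022.Community --override "--add Microsoft.VisualStudio.Workload.NativeDesktop"
```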
- GPU & AI runtimes:
- NVIDIA: CUDA toolkit + drivers (for CUDA-enabled local inference). Install the latest drivers compatible with your GPU and CUDA version.
- DirectML or Windows ML: For DirectML-backed model runtimes, ensure DirectX/DirectML support — verify via Microsoft’s guidance.
- Vulkan: If Oppla uses Vulkan acceleration on Windows, install the latest Vulkan-capable GPU drivers from your vendor (NVIDIA/AMD/Intel).
- Signing & verification tools (for packagers / release authors):
- signtool.exe (Windows SDK) for code signing
- GPG / SHA256 utilities for release verification
- Why WSL2:
- You can use the Linux packaging, dependencies, and runtimes that the project primarily targets.
- Easier to run local model runtimes that are Linux-first (llama.cpp, Ollama, etc.)
- GUI support via WSLg enables a native-like graphical experience.
- Setup notes:
- Enable WSL and install a distro (e.g., Ubuntu 22.04 LTS).
- Install required Linux dependencies inside WSL (build-essential, cmake, libvulkan*, etc.)
- Install and configure local model runtime (e.g., ollama) in WSL.
- Launch Oppla from the WSL environment and use WSLg for GUI — or run headless server and connect from Windows client.
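Inside the WSL distro, the dependency setup sketched above might look like the following (the package list is illustrative; the repository README is authoritative, and the Ollama install script should be inspected before running, per the security notes below):

```shell
# Inside Ubuntu on WSL2: build tools and Vulkan userspace libraries.
sudo apt update
sudo apt install -y build-essential cmake git libvulkan1 vulkan-tools

# Optional: install Ollama as a local model runtime. Download first and
# inspect rather than piping straight to a shell.
curl -fsSL https://ollama.com/install.sh -o ollama-install.sh
less ollama-install.sh   # review before executing
sh ollama-install.sh
```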
- GPU passthrough:
- NVIDIA supports CUDA in WSL2 via the CUDA on WSL driver stack; follow NVIDIA docs to enable GPU acceleration inside WSL.
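Once the CUDA-on-WSL driver stack is installed on the Windows side, passthrough can be verified from inside the distro (no Linux-side NVIDIA driver install is needed; the Windows driver exposes the GPU to WSL):

```shell
# Inside WSL: nvidia-smi should list the GPU, driver, and CUDA versions.
nvidia-smi
# Vulkan visibility check (requires the vulkan-tools package):
vulkaninfo --summary
```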
- General flow (high-level):
- Clone the repository: `git clone <repo-url>`
- Install required SDKs/toolchains (Visual Studio with C++ workload, CMake, Python).
- Follow repository README build steps (project-specific flags, dependencies).
- Build native binaries and package them (MSIX/NSIS/Wix/Cab as appropriate).
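A sketch of that flow from a Visual Studio Developer x64 command prompt (the repository URL, directory name, generator, and flags are placeholders; the repository README is authoritative):

```shell
# Fetch sources. <repo-url> is a placeholder for the actual repository URL.
git clone <repo-url> oppla
cd oppla

# Configure and build with the MSVC toolchain via CMake.
cmake -S . -B build -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release

# Package the resulting binaries with your chosen tool (MSIX/NSIS/WiX).
```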
- Common tips:
- Use the Visual Studio Developer x64 Command Prompt when running build scripts that expect MSVC toolchain.
- Ensure environment variables point to correct SDK locations (e.g., VCPKG_ROOT if using vcpkg).
- For Electron/Node frontends, ensure Node version matches the repo requirements and run npm/yarn install from a POSIX-compatible shell if necessary (Git Bash or WSL may simplify).
- Be prepared to install or build native dependencies (libvulkan, OpenSSL, etc.) for Windows.
- Many inference runtimes are Linux-first. Check whether the runtime you want (Ollama, llama.cpp wrappers, etc.) has a Windows build or run it inside WSL.
- If using NVIDIA for local inference, install CUDA and the cuDNN versions required by your model runtime.
- For AMD GPUs, Windows ROCm support is limited — prefer Linux for ROCm-based acceleration.
- DirectML can be used as an alternative GPU backend on Windows for some runtimes; check compatibility.
- When creating Windows installers or packages:
- Sign installers and binaries (signtool) to reduce antivirus/SmartScreen friction.
- Provide SHA256 checksums and GPG signatures for release artifacts.
- Offer both native installers and a portable ZIP distribution when possible.
- Consider publishing a Microsoft Store or winget package once stable.
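A runnable sketch of the checksum side of this (the artifact name is a stand-in; signing with `signtool` additionally requires a real code-signing certificate, so it is shown commented out):

```shell
# Generate and verify SHA256 checksums for release artifacts.
mkdir -p dist
echo "demo payload" > dist/oppla-setup.exe   # stand-in artifact for the demo
sha256sum dist/oppla-setup.exe > SHA256SUMS
sha256sum -c SHA256SUMS                      # prints "dist/oppla-setup.exe: OK"

# Code signing with the Windows SDK (needs a certificate installed):
#   signtool sign /fd SHA256 /a /tr http://timestamp.digicert.com /td SHA256 dist\oppla-setup.exe
```

Publish `SHA256SUMS` (and a detached GPG signature over it) alongside the artifacts so users can verify downloads.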
- Avoid running untrusted install scripts (e.g., copy-pasting `curl | sh`) without validating signatures.
- For cloud AI providers, follow the same privacy model as other platforms: use environment variables / OS credential stores for keys and prefer local-only mode for sensitive projects.
- Audit logging and enterprise RBAC may require extra configuration in Windows deployments (file storage locations, secure transports).
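For example, provider keys can live in environment variables rather than in config files (the variable name `OPPLA_OPENAI_API_KEY` is an assumption here; check Oppla's AI configuration docs for the real names):

```shell
# WSL / Git Bash: session-scoped key (placeholder value, never commit real keys).
export OPPLA_OPENAI_API_KEY="sk-example"
echo "${OPPLA_OPENAI_API_KEY:+key is set}"

# PowerShell equivalents:
#   $env:OPPLA_OPENAI_API_KEY = "sk-..."   # current session only
#   setx OPPLA_OPENAI_API_KEY "sk-..."     # persists for the current user
```

For stronger protection, prefer the OS credential store (Windows Credential Manager) over plain environment variables where the application supports it.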
- Oppla won’t start / crashes on launch:
- Run the binary from a terminal to capture stderr/stdout.
- Check `%LOCALAPPDATA%\Oppla\logs` (native) or `~/.config/oppla/logs` (WSL) for logs.
- Verify GPU drivers and runtime libraries (Vulkan, CUDA) are installed.
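A sketch of capturing startup output and recent logs (the binary name, install path, and log filenames are assumptions; adjust them to your actual install):

```shell
# WSL: launch from a shell so stderr/stdout are visible, and keep a copy.
oppla 2>&1 | tee oppla-start.log
tail -n 50 ~/.config/oppla/logs/*.log

# Native (PowerShell), assuming a per-user install location:
#   & "$env:LOCALAPPDATA\Oppla\oppla.exe" 2>&1 | Tee-Object oppla-start.log
```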
- GUI rendering issues:
- If running natively, check GPU driver and Vulkan / DirectX versions.
- If using WSLg, ensure WSL and your distro are up to date; try toggling WSLg vs. an external X server.
- Local models not reachable:
- Confirm runtime is running and listening on the expected endpoint.
- In WSL, check localhost/port mapping; if networking misbehaves, run `wsl --shutdown` from PowerShell and restart the distro.
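To check whether the runtime endpoint is actually listening inside WSL (port 11434 is Ollama's default and is an assumption here; substitute your runtime's port):

```shell
# Inside WSL: is anything listening on the runtime's port?
ss -ltn | grep -w 11434 || echo "nothing listening on 11434"

# WSL2 normally forwards this to http://localhost:11434 on the Windows side.
# If that forwarding breaks, reset WSL networking from PowerShell:
#   wsl --shutdown     # then relaunch the distro
```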
- High latency for cloud models:
- Verify network connectivity and low-latency routing to provider endpoints; consider regional endpoints.
- Build failures:
- Ensure Visual Studio workloads and CMake are installed.
- Inspect build logs for missing libraries; install required SDKs and ensure PATH includes required tools.
- Provide a CONTRIBUTING.md at repo root describing Windows build steps and recommended tool versions.
- Add CI job for Windows build & smoke tests to catch regressions.
- Include a small “hello world” native example that verifies the runtime and GPU acceleration on Windows.
- Maintain a signed installer process and publish checksums/signatures for releases.
- Linux-specific guidance: docs/ide/general/linux.mdx
- System Requirements: docs/ide/general/system-requirements.mdx
- AI Configuration & Privacy: docs/ide/ai/configuration.mdx and docs/ide/ai/privacy-and-security.mdx
- If you need FreeBSD notes, see docs/development/freebsd.mdx (stub to be created)
- Add a step-by-step native Windows build recipe tailored to this repo (with exact CMake flags, dependencies, and dev env commands)?
- Create a signed-installer checklist and CI jobs for Windows builds?
- Create the FreeBSD stub now as well?