# NixOS Module Options
The Nurvus NixOS module configures everything Demi needs: Ollama, Podman, sudo rules, and environment variables.
## Options
| Option | Default | Description |
|---|---|---|
| `services.nurvus.enable` | `false` | Enable nurvus and its dependencies |
| `services.nurvus.model` | auto-detect | Ollama model name (auto-selects by RAM if empty) |
| `services.nurvus.llmUrl` | `"http://localhost:11434"` | LLM server URL (Ollama, LM Studio, etc.) |
| `services.nurvus.contextSize` | `8192` | LLM context window size in tokens |
| `services.nurvus.openFirewall` | `false` | Open the Ollama port for remote access |
| `services.nurvus.maxCpuCores` | `null` (no limit) | Cap Ollama to N CPU cores |
| `services.nurvus.maxMemoryGB` | `null` (no limit) | Cap Ollama memory usage in GB |
| `services.nurvus.acceleration` | `null` (CPU only) | GPU acceleration: `"rocm"` for AMD, `"cuda"` for NVIDIA |
| `services.nurvus.daemon.enable` | `true` | Enable daemon mode with web UI |
| `services.nurvus.daemon.host` | `"0.0.0.0"` | Address to bind the web server to |
| `services.nurvus.daemon.port` | `3364` | Port for the web UI (DEMI on a phone dial pad) |
| `services.nurvus.daemon.openFirewall` | `true` | Open the daemon port in the firewall |
See Daemon Mode for detailed daemon configuration and setup.
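As a starting point, a minimal configuration might look like the sketch below. The specific values (GPU type, core and memory caps) are illustrative; every option used here is described in the table above.

```nix
services.nurvus = {
  enable = true;
  acceleration = "cuda"; # NVIDIA GPU; use "rocm" for AMD, or omit for CPU only
  maxCpuCores = 8;       # cap Ollama to 8 CPU cores
  maxMemoryGB = 16;      # cap Ollama memory usage to 16 GB
};
```

Leaving `model` unset lets the module auto-select a model based on available RAM.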
## Remote LLM
When `llmUrl` points to a non-localhost address, the module automatically skips installing and running the local Ollama service. This is useful when you want to run the LLM on a more powerful machine on your network, or use a different server such as LM Studio.
```nix
# Remote Ollama
services.nurvus = {
  enable = true;
  model = "qwen3:30b-a3b";
  llmUrl = "http://my-server:11434";
  contextSize = 32768;
};
```

```nix
# LM Studio on a Mac (OpenAI-compatible API, auto-detected from port)
services.nurvus = {
  enable = true;
  model = "qwen/qwen3-30b-a3b";
  llmUrl = "http://192.168.64.1:1234";
  contextSize = 32768;
};
```
## User Access
The module creates a `nurvus` system group. Add users to this group to let them run `demi` without root:
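For example, using the standard NixOS `users.users.<name>.extraGroups` option (the username `alice` is a placeholder):

```nix
# Grant a non-root user access to demi by adding them to the nurvus group
users.users.alice.extraGroups = [ "nurvus" ];
```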
Root users can always run `demi` without extra setup.