Amni-Ai is a local-first language model engine. Install and run it to power in-depth theological discussion.
IN-BROWSER ENGINE
Run a small model directly in this tab. No server, no network once loaded. Great for mobile.
Having trouble loading?
VK_ERROR / CreateComputePipelines → update GPU drivers, or enable chrome://flags/#enable-unsafe-webgpu.
WebGPU not available → use Chrome 113+ / Edge / Safari 18+; Firefox support is still rolling out.
Out of memory → pick Qwen2.5 0.5B (smallest) or close other tabs.
Mixed content → page served over HTTPS but the Hugging Face CDN request was blocked? Hard-refresh and retry.
Built-in responses remain active either way.
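The fallback logic above can be sketched as a simple capability check. This is an illustrative sketch, not Amni-Ai's actual code: it assumes the standard `navigator.gpu` API, and the `detectWebGPU` function name and status strings are placeholders.

```typescript
// Hypothetical WebGPU detection sketch; assumes the standard navigator.gpu API.
async function detectWebGPU(): Promise<string> {
  // navigator.gpu is only defined in WebGPU-capable browsers.
  const gpu = (globalThis as { navigator?: { gpu?: { requestAdapter(): Promise<unknown> } } })
    .navigator?.gpu;
  if (!gpu) {
    // Matches the behavior described above: built-in responses stay active.
    return "WebGPU not available: falling back to built-in responses";
  }
  try {
    const adapter = await gpu.requestAdapter();
    return adapter ? "WebGPU ready" : "No suitable GPU adapter found";
  } catch {
    return "WebGPU request failed: check GPU drivers / browser flags";
  }
}
```

In a non-WebGPU environment the check resolves to the fallback message rather than throwing, which is what lets built-in responses take over silently.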
QUICK START
Install Amni-Ai (or any OpenAI-compatible server).
Launch it on port 7700 (or set a custom URL above).
Hit TEST, then SAVE. The Adam badge turns green when connected.
Everything runs on your machine; nothing leaves your device. On HTTPS deployments the local AI server is unreachable, and the built-in theological guide takes over automatically.
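The TEST step above amounts to sending one request to the local server. A minimal sketch, assuming the conventional OpenAI-compatible `/v1/chat/completions` route on port 7700; the `buildChatRequest` helper, base URL, and `"local-model"` name are illustrative placeholders, not Amni-Ai's fixed values.

```typescript
// Hypothetical helper that builds an OpenAI-compatible chat request.
const BASE_URL = "http://localhost:7700"; // default port; override with your custom URL

function buildChatRequest(userMessage: string): {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
} {
  return {
    url: `${BASE_URL}/v1/chat/completions`, // standard OpenAI-compatible route
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "local-model", // placeholder; many local servers ignore or remap this
        messages: [{ role: "user", content: userMessage }],
        stream: false,
      }),
    },
  };
}

// Usage (requires the local server to be running):
// const { url, init } = buildChatRequest("ping");
// const res = await fetch(url, init);
// const data = await res.json();
```

If the request succeeds, the badge goes green; if it fails, the built-in guide keeps answering.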