Local Setup Guide
Local Foundry Setup
Unlock true data sovereignty by running high-performance models on your own silicon.
01
Install Ollama
Ollama is the engine that powers local AI on your machine. It's lightweight, open-source, and highly optimized.
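Installation is a single command on most platforms. The commands below follow Ollama's published install paths (install script and Homebrew formula); verify them against ollama.com before piping anything into your shell:

```shell
# macOS: install via Homebrew, or download the app from ollama.com
brew install ollama

# Linux: Ollama's official one-line install script
curl -fsSL https://ollama.com/install.sh | sh

# Windows: download the installer from ollama.com/download

# Confirm the binary is on your PATH
ollama --version
```

Once installed, the Ollama server listens on http://localhost:11434 by default.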
02
Download Models
Once Ollama is running, pull the models you want to use. We recommend state-of-the-art coding models.
Open your terminal and run the following commands to get the best models for Codiner:
- General Purpose (Llama 3.1): ollama run llama3.1
- Coding Specialist (DeepSeek Coder V2): ollama run deepseek-coder-v2
- Ultra Fast (Qwen 2.5 Coder): ollama run qwen2.5-coder:7b
03
Connect to Codiner
The final step is to pair Codiner with your local engine. This happens automatically but can be customized in settings.
Launch the Codiner Desktop app or CLI, then follow these instructions:
- Open Settings > AI Engine
- Select "Ollama" as the Provider
- Codiner will detect models automatically
- Choose your default model (e.g., Llama 3)
- Click "Test Connection"
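Before clicking "Test Connection", you can confirm from the terminal that Ollama itself is reachable. Codiner's auto-detection presumably talks to Ollama's local HTTP API, which listens on port 11434 by default:

```shell
# List the models you've pulled; anything shown here should
# appear in Codiner's model picker
ollama list

# Ollama's local REST API returns a JSON list of installed models
# (this is the endpoint a client like Codiner can query)
curl -s http://localhost:11434/api/tags
```

If both commands respond, any connection problem is on the Codiner side rather than the Ollama side.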
Everything working?
Run the verification command in your terminal to ensure Codiner can access your local neural foundry.
$ codiner status --neural