Local Setup Guide

Local Foundry Setup

Unlock true data sovereignty by running high-performance models on your own silicon.

01

Install Ollama

Ollama is the engine that powers local AI on your machine. It's lightweight, open-source, and highly optimized.

Download the appropriate version for your OS from the official site (ollama.com).

Pro Tip: Ensure you have at least 8 GB of RAM for the best experience with 7B-parameter models.
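To sanity-check the RAM tip above, a short Python snippet can report total physical memory on Linux or macOS. This is a sketch of our own (the 8 GB threshold and the helper name are ours, not part of Ollama):

```python
import os

def total_ram_gb() -> float:
    """Return total physical RAM in GiB (POSIX systems only)."""
    page_size = os.sysconf("SC_PAGE_SIZE")    # bytes per memory page
    page_count = os.sysconf("SC_PHYS_PAGES")  # number of physical pages
    return page_size * page_count / (1024 ** 3)

if __name__ == "__main__":
    ram = total_ram_gb()
    print(f"Total RAM: {ram:.1f} GiB")
    if ram < 8:
        print("Heads up: 7B-parameter models may run slowly on this machine.")
```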

02

Download Models

Once Ollama is running, pull the models you want to use. We recommend state-of-the-art coding models.

Open your terminal and run the following commands to get the best models for Codiner:

General Purpose: Llama 3.1
$ ollama run llama3.1

Coding Specialist: DeepSeek Coder V2
$ ollama run deepseek-coder-v2

Ultra Fast: Qwen 2.5 Coder
$ ollama run qwen2.5-coder:7b

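Once pulled, models can also be listed programmatically: Ollama serves a local REST API on port 11434, and its /api/tags endpoint returns the installed models. A minimal sketch of parsing that response (the sample payload below is illustrative, not live output):

```python
import json
from urllib.request import urlopen

def model_names(payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response."""
    return [m["name"] for m in payload.get("models", [])]

# Illustrative payload shaped like an /api/tags response.
sample = {"models": [{"name": "llama3.1:latest"}, {"name": "qwen2.5-coder:7b"}]}
print(model_names(sample))  # ['llama3.1:latest', 'qwen2.5-coder:7b']

# Against a running Ollama instance (uncomment to try):
# with urlopen("http://localhost:11434/api/tags") as resp:
#     print(model_names(json.load(resp)))
```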
03

Connect to Codiner

The final step is to pair Codiner with your local engine. This happens automatically but can be customized in settings.

Launch the Codiner Desktop app or CLI, then follow these instructions:

  1. Open Settings > AI Engine
  2. Select "Ollama" as the Provider
  3. Codiner will detect models automatically
  4. Choose your default model (e.g., Llama 3.1)
  5. Click "Test Connection"
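If automatic detection fails, you can check whether the Ollama server is reachable before digging into Codiner's settings. A minimal sketch, assuming Ollama's default host and port (the helper name is ours):

```python
from urllib.request import urlopen
from urllib.error import URLError

def ollama_reachable(base_url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers on base_url."""
    try:
        with urlopen(base_url, timeout=timeout) as resp:
            # Ollama's root endpoint replies 200 with "Ollama is running".
            return resp.status == 200
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_reachable())
```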

Everything working?

Run the verification command in your terminal to ensure Codiner can access your local neural foundry.

$ codiner status --neural