Setting Up the Local Model
Module 4 · Section 7 of 9
You’ll need Ollama running locally with a suitable model installed:
# Install Ollama
brew install ollama
# Pull a coding model
ollama pull qwen2.5-coder:7b
# Start the Ollama server (runs in the foreground; keep this terminal open)
ollama serve
Next, configure the local-llm MCP server in your Claude Code settings so it can connect to the running Ollama instance. Once that's in place, the mcp__local-llm__* tools become available in your sessions.
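As a rough sketch, a project-scoped MCP configuration for such a server could live in a `.mcp.json` file at the project root. The `local-llm` server name comes from this section; the command and package name below are illustrative assumptions, not the actual distribution, and the `OLLAMA_HOST` value shown is Ollama's default listen address:

```json
{
  "mcpServers": {
    "local-llm": {
      "command": "npx",
      "args": ["-y", "local-llm-mcp"],
      "env": {
        "OLLAMA_HOST": "http://localhost:11434"
      }
    }
  }
}
```

After adding the entry, restart Claude Code (or start a new session) and check that the server appears before expecting the mcp__local-llm__* tools to show up.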