# Aider

AI pair programming in your terminal, with native local model support.
## Overview

Aider provides:

- Git-aware - understands your repository structure
- Auto-commits - commits changes with meaningful messages
- Local models - native Ollama and OpenAI-compatible support
- Multi-file - edits multiple files in one session
- Conversational - iterative development with persistent context
## Installation
### pip
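Install with pip:

```bash
pip install aider-chat
```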
### pipx (Recommended)
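pipx installs Aider into an isolated environment, avoiding dependency conflicts with your projects:

```bash
pipx install aider-chat
```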
### Verify
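Confirm the install succeeded:

```bash
aider --version
```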
## Basic Usage

### Start Session

```bash
# In a git repository
cd /path/to/project
aider

# With specific files
aider src/main.py tests/test_main.py

# With all Python files
aider *.py
```
### Chat Commands

| Command | Action |
|---|---|
| `/add <file>` | Add file to context |
| `/drop <file>` | Remove file from context |
| `/ls` | List files in context |
| `/diff` | Show uncommitted changes |
| `/undo` | Undo the last Aider commit |
| `/commit` | Commit changes made outside the chat |
| `/run <cmd>` | Run a shell command |
| `/help` | Show all commands |
## Local Model Configuration

### Using Ollama

```bash
# Start Ollama
ollama serve

# Pull a coding model
ollama pull deepseek-coder-v2:16b

# Run Aider with Ollama
aider --model ollama/deepseek-coder-v2:16b
```
### With Model Alias
Create ~/.aider.conf.yml:
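A minimal config that makes the Ollama model the default (an assumption about what this setup contained; the model tag matches the examples above):

```yaml
# ~/.aider.conf.yml
model: ollama/deepseek-coder-v2:16b
```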
Then just run:
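With a default model set in `~/.aider.conf.yml`, no flags are needed:

```bash
aider
```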
### OpenAI-Compatible API

```bash
# Point Aider at any OpenAI-compatible endpoint
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=not-needed

# Run with the served model name
aider --model openai/llama3.3
```
### Multiple Models

Use different models for different purposes:

```bash
# Main model for editing, weak model for commit messages and summaries
aider --model ollama/deepseek-coder-v2:16b \
  --weak-model ollama/llama3.2:3b
```
### Recommended Models

| Use Case | Model | Command |
|---|---|---|
| General coding | DeepSeek Coder V2 16B | `--model ollama/deepseek-coder-v2:16b` |
| Complex refactoring | Llama 3.3 70B | `--model ollama/llama3.3:70b` |
| Quick fixes | Mistral 7B | `--model ollama/mistral:7b` |
| Large context | Qwen 2.5 32B | `--model ollama/qwen2.5:32b` |
## Configuration

### Configuration File

Create `~/.aider.conf.yml`:

```yaml
# Model settings
model: ollama/deepseek-coder-v2:16b
weak-model: ollama/mistral:7b

# Git settings
auto-commits: true
dirty-commits: false

# Display settings
dark-mode: true
stream: true

# Context settings
map-tokens: 1024
map-refresh: auto
```
### Per-Project Config

Create `.aider.conf.yml` in the project root:

```yaml
model: ollama/qwen2.5-coder:32b
auto-commits: false

# Path to the project ignore file (patterns use .gitignore syntax)
aiderignore: .aiderignore
```

Put project-specific ignore patterns such as `*.log`, `node_modules`, and `.env` in `.aiderignore` itself.
### Environment Variables

```bash
# Model configuration
export AIDER_MODEL=ollama/deepseek-coder-v2:16b
export AIDER_WEAK_MODEL=ollama/mistral:7b

# API configuration for Ollama
export OLLAMA_HOST=http://localhost:11434

# Or for an OpenAI-compatible server
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=not-needed
```
## Git Integration

### Auto-Commits

Aider commits changes automatically:

```text
# Example commit message generated by Aider
feat: Add input validation to user registration

- Added email format validation
- Added password strength requirements
- Updated error messages
```
### Disable Auto-Commits
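Pass `--no-auto-commits` (or set `auto-commits: false` in the config) to review and commit changes yourself:

```bash
aider --no-auto-commits
```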
### Undo Changes
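The last Aider commit can be reverted from inside the chat; since Aider's commits are ordinary git commits, git tools work too:

```text
# In the chat: revert the last Aider commit
/undo

# Or, outside the session:
git revert HEAD
```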
## Advanced Usage

### Add Files Dynamically

```text
# During a session
/add src/utils.py
/add tests/test_utils.py

# Read-only (for context)
/read-only docs/API.md
```
### Run Tests
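`/run` executes a shell command and offers to add its output to the chat; `/test` does the same but treats a non-zero exit as a failure to fix. The pytest commands here are examples; substitute your project's test runner:

```text
/run pytest
/test pytest -x
```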
### Repository Map

Aider builds a map of your codebase (files, classes, functions) and includes the most relevant parts as context in each request.
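To inspect what the map contains, you can print it (flag available in recent Aider versions):

```bash
aider --show-repo-map
```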
## Working with Large Codebases

### Context Management

```bash
# Limit the repo-map token budget
aider --map-tokens 2048

# Exclude files via an ignore file
aider --aiderignore ".aiderignore"
```
### .aiderignore

Create `.aiderignore` in the project root:
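Patterns use `.gitignore` syntax; a typical starting point (adjust to your project):

```text
*.log
*.min.js
node_modules/
.env
```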
## Multi-Model Setup

### Architect Mode

Use a stronger model for planning while a faster model applies the edits:
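A sketch of one possible pairing; `--architect` and `--editor-model` are current Aider flags, and the model tags come from the tables above:

```bash
aider --architect \
  --model ollama/llama3.3:70b \
  --editor-model ollama/deepseek-coder-v2:16b
```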
### Weak Model

Use a faster model for simple operations like commit messages and chat summaries:
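For example, keeping a large editing model while delegating lightweight work:

```bash
aider --model ollama/deepseek-coder-v2:16b \
  --weak-model ollama/mistral:7b
```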
## Docker Usage

### Run in Container

```bash
docker run -it --rm \
  -v $(pwd):/app \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  paulgauthier/aider \
  --model ollama/deepseek-coder-v2:16b
```
### With docker-compose

```yaml
services:
  aider:
    image: paulgauthier/aider
    volumes:
      - .:/app
    environment:
      - OLLAMA_HOST=http://ollama:11434
      - AIDER_MODEL=ollama/deepseek-coder-v2:16b
    depends_on:
      - ollama
    tty: true
    stdin_open: true
```
## Troubleshooting

### Model Not Found

```bash
# Verify Ollama has the model
ollama list

# Pull if missing
ollama pull deepseek-coder-v2:16b

# Check that the model name matches exactly
aider --model ollama/deepseek-coder-v2:16b
```
### Slow Responses

- Use a smaller model for quick tasks
- Reduce `--map-tokens`
- Use `--no-stream` for faster display
### Git Issues

```bash
# Ensure you are in a git repo
git status

# Initialize if needed
git init
git add .
git commit -m "Initial commit"

# Then run Aider
aider
```
### Context Too Large

```bash
# Add fewer files
aider src/main.py  # instead of aider *.py

# Use a larger-context model (if your hardware supports it)
aider --model ollama/qwen2.5:32b  # 128K context
```
## Comparison with Alternatives

| Feature | Aider | Claude Code | Cline |
|---|---|---|---|
| Interface | CLI | CLI | VS Code |
| Local models | Native | Via proxy | Native |
| Git integration | Excellent | Good | Limited |
| Auto-commits | Yes | Optional | No |
| Multi-file | Yes | Yes | Yes |
## Local Model Recommendations

### Best Models for Aider

| Model | VRAM | Context | Strengths | Use Case |
|---|---|---|---|---|
| DeepSeek Coder V2 16B | 12GB | 128K | Excellent coding | General development |
| Qwen 2.5 Coder 32B | 24GB | 128K | Best local coding | Complex refactoring |
| Llama 3.3 70B | 48GB | 128K | Strong reasoning | Architecture, planning |
| Codestral 22B | 16GB | 32K | Fast, accurate | Quick edits |
| DeepSeek R1 Distill 32B | 24GB | 64K | Reasoning + coding | Debug, analysis |
### Model Configuration by Task

```yaml
# ~/.aider.conf.yml - task-optimized setup

# For general coding (balanced)
model: ollama/deepseek-coder-v2:16b

# For complex reasoning tasks
# model: ollama/qwen2.5-coder:32b

# For quick fixes (faster)
weak-model: ollama/codestral:22b
```
### Performance Tuning

```yaml
# ~/.aider.conf.yml

# Reduce repo-map context for faster responses
map-tokens: 1024

# Disable streaming for batch mode
stream: false

# Diff edit format produces smaller responses than whole-file edits
edit-format: diff

# Limit chat history carried between requests
max-chat-history-tokens: 4096
```
### Memory-Constrained Setup

For systems with limited VRAM (8-16GB):

```bash
# Use a smaller, quantized model
aider --model ollama/deepseek-coder-v2:16b-q4_K_M

# Or an efficient mid-size model
aider --model ollama/codestral:22b-q4_K_M
```
## Git Workflow Integration

### Branch-Based Development

```bash
# Create a feature branch
git checkout -b feature/auth

# Start Aider in the branch
aider

# Aider's commits stay on this branch
```
### Commit Message Customization

```yaml
# ~/.aider.conf.yml

# Custom instructions for generating commit messages
commit-prompt: |
  Generate a commit message following conventional commits format.
  Use these prefixes: feat, fix, docs, style, refactor, test, chore.
  Keep the subject line under 50 characters.
  Include a body explaining WHY the change was made.
```
### Staging Workflow

```bash
# Review changes before letting Aider commit
aider --no-auto-commits

# Then manually:
git diff
git add -p
git commit
```
### Working with PRs

```bash
# Start a session on the PR branch
git checkout pr-branch
aider

# When done, push and create the PR
git push origin pr-branch
gh pr create
```
## Multi-File Editing Strategies

### Explicit File Addition

```text
# Start with core files (shell)
aider src/models/user.py src/api/users.py

# Add related files as needed (in the chat)
/add src/services/user_service.py
/add tests/test_users.py
```
### Pattern-Based Addition

```bash
# Add all Python files in a directory
aider src/api/*.py

# Or use shell expansion
aider $(find src -name "*.py" -type f)
```
### Read-Only Context

```text
# Add documentation for context only
/read-only docs/API.md
/read-only README.md

# These files inform responses but aren't edited
```
### Large Codebase Strategy

```text
# 1. Start with interface/API files (shell)
aider src/api/endpoints.py

# 2. Add implementation as needed (in the chat)
/add src/services/implementation.py

# 3. Add tests last
/add tests/test_endpoints.py
```
### Refactoring Across Files

```text
User: Rename UserService to AccountService across all files

Aider: I'll need to modify these files:
- src/services/user_service.py (rename file and class)
- src/api/users.py (update imports)
- src/main.py (update dependency injection)
- tests/test_user_service.py (rename and update)

Shall I proceed?
```
## Advanced Configuration

### Project-Specific Config

Create `.aider.conf.yml` in the project root:

```yaml
# Project uses a specific model
model: ollama/qwen2.5-coder:32b

# Project conventions
auto-commits: true
attribute-author: false
dirty-commits: false

# Language-specific edit format
edit-format: udiff

# Path to the ignore file (patterns use .gitignore syntax)
aiderignore: .aiderignore
```

List the files to always ignore (for example `*.min.js`, `vendor/`, `node_modules/`, `.env*`) in `.aiderignore`.
### Environment-Based Config

```bash
# ~/.bashrc or ~/.zshrc

# Work projects
alias aider-work='aider --config ~/.aider-work.yml'

# Personal projects
alias aider-personal='aider --config ~/.aider-personal.yml'

# Quick mode for small fixes
alias aider-quick='aider --model ollama/codestral:22b --no-auto-commits'
```
### Voice Mode

```bash
# Install voice dependencies
pip install aider-chat[voice]
```

Then use `/voice` inside a chat session to dictate a request; Aider uses OpenAI Whisper for transcription.
## Integration with Other Tools
### With Git Hooks
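Because Aider's auto-commits go through normal git, commit hooks apply to them. A sketch of a pre-commit hook that blocks any commit (including Aider's) when linting fails; `ruff` is a stand-in for your project's linter:

```bash
#!/bin/sh
# Save as .git/hooks/pre-commit and make executable (chmod +x)
ruff check . || exit 1
```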
### With CI/CD

```yaml
# .github/workflows/aider-review.yml
name: Aider Review
on: pull_request
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Aider review
        run: |
          pip install aider-chat
          aider --model openai/gpt-4 --message "Review this PR" --yes
```
### With tmux

```bash
# Start Aider in a named tmux session
tmux new-session -d -s aider "cd ~/project && aider"

# Attach later
tmux attach -t aider
```
## Troubleshooting Local Models

### Model Response Issues

```bash
# Test the model directly
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder-v2:16b",
  "prompt": "Write a hello world in Python"
}'

# Check that Aider sees the model
aider --list-models ollama/
```
### Memory Issues

```bash
# Monitor VRAM usage
watch -n 1 nvidia-smi

# Use quantized models
ollama pull deepseek-coder-v2:16b-q4_K_M

# Reduce context
aider --map-tokens 512
```
### Connection Issues

```bash
# Verify Ollama is listening
curl http://localhost:11434/

# Set an explicit host
export OLLAMA_HOST=http://localhost:11434
aider --model ollama/deepseek-coder-v2:16b
```
## See Also

- AI Coding Tools Index - Tool comparison
- Ollama - Local model serving
- Choosing Models - Model selection
- Claude Code - Alternative CLI tool