Ollama configuration
Ollama lets you generate commit messages with a Large Language Model (LLM) running locally on your machine. This keeps your codebase private, since your code changes are never sent to an external third-party service.
To use Ollama, you need it installed and running on your machine. For installation instructions, see the Ollama website or the Ollama documentation.
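If you want to confirm that the Ollama server is reachable before going further, a minimal sketch like the one below can help. It assumes Ollama's default local address (`http://localhost:11434`) and uses the `/api/tags` endpoint, which lists the models installed locally; only the Python standard library is used.

```python
import json
import urllib.request

# Assumption: Ollama is running on its default host and port.
OLLAMA_BASE = "http://localhost:11434"

def list_installed_models(base_url: str = OLLAMA_BASE) -> list[str]:
    """Return the names of models installed in the local Ollama instance."""
    # GET /api/tags responds with {"models": [{"name": ...}, ...]}
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        payload = json.load(resp)
    return [model["name"] for model in payload.get("models", [])]

if __name__ == "__main__":
    try:
        print("Installed models:", list_installed_models())
    except OSError as exc:
        print(f"Could not reach Ollama at {OLLAMA_BASE}: {exc}")
```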
Available models
The following models have been tested and are supported. For the full list of models Ollama can run, see the Ollama documentation.
Custom models
We're working on a compact, fine-tuned model that will bring fast, accurate commit messages to everyone.
Ollama models

| Name | Description |
| --- | --- |
| llama3.2:3b | Meta's Llama 3.2, 3B parameters |
| mistral:7b | Mistral AI's Mistral, 7B parameters |
| gemma:7b | Google DeepMind's Gemma, 7B parameters |
| qwen2.5:7b | Alibaba's Qwen 2.5, 7B parameters |
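Once one of these models has been pulled locally (for example with `ollama pull llama3.2:3b`), you can exercise it through Ollama's `/api/generate` endpoint. The sketch below assumes the default local endpoint and the llama3.2:3b model; the prompt wording is purely illustrative and is not the prompt this tool uses internally.

```python
import json
import urllib.request

# Assumptions: Ollama is running on the default port and the
# llama3.2:3b model has already been pulled locally.
ENDPOINT = "http://localhost:11434/api/generate"

def generate_commit_message(diff: str, model: str = "llama3.2:3b") -> str:
    """Ask a local Ollama model for a commit message (illustrative prompt)."""
    body = json.dumps({
        "model": model,
        "prompt": f"Write a concise git commit message for this diff:\n{diff}",
        "stream": False,  # return a single JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(generate_commit_message("- colour = 'red'\n+ colour = 'blue'"))
```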
Compatible providers
You can use other providers that expose an Ollama-compatible API by changing the endpoint in the config file.
```yaml
providers:
  ollama:
    endpoint: http://localhost:11434/api/generate
```
Troubleshooting
If you run into problems, see the Ollama troubleshooting guide.