# Ollama
A tool for managing and interacting with LLMs locally.
## Installation
On macOS, you can install it through Homebrew. There's both a formula and a cask; I prefer installing via the formula, to avoid having another icon in my menu bar:
```sh
brew install ollama
brew services start ollama
```
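A couple of other useful subcommands: `ollama pull` downloads a model ahead of time (otherwise `ollama run` fetches it on first use), and `ollama list` shows what you have locally:

```sh
# Download a model without starting a session
ollama pull llama3.1

# List the models installed locally
ollama list
```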
## Usage
You can start an interactive chat with a model:
```sh
ollama run llama3.1
```
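Inside the interactive session, `/?` lists the available slash commands and `/bye` exits.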
If you don’t want interactivity, you can provide the prompt as an argument:
```sh
ollama run codellama:7b-instruct "write a fizzbuzz implementation in JavaScript"
```
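Since `ollama run` also reads standard input, you can pipe content in alongside the prompt; `fizzbuzz.js` here is just a hypothetical file:

```sh
# Feed a (hypothetical) file to the model as context for the prompt
cat fizzbuzz.js | ollama run codellama:7b-instruct "explain what this code does"
```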
It also exposes a REST API:
```sh
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "write a haiku about llms"
}'
```
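By default, `/api/generate` streams the response as a series of JSON objects; pass `"stream": false` to get a single reply instead. There's also a `/api/chat` endpoint that takes a message history, roughly like this:

```sh
# Ask the chat endpoint for a single, non-streaming response
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "stream": false,
  "messages": [
    { "role": "user", "content": "write a haiku about llms" }
  ]
}'
```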