GitHub - simonw/llm: Access large language models from the command-line
LLM
A CLI tool and Python library for interacting with OpenAI, Anthropic’s Claude, Google’s Gemini, Meta’s Llama and dozens of other Large Language Models, both via remote APIs and with models that can be installed and run on your own machine.
Watch Language models on the command-line on YouTube for a demo or read the accompanying detailed notes.
With LLM you can:
- Run prompts from the command-line
- Store prompts and responses in SQLite
- Generate and store embeddings
- Extract structured content from text and images
- Grant models the ability to execute tools
- … and much, much more
Quick start
First, install LLM using pip, Homebrew, pipx or uv:

```
pip install llm
```

Or with Homebrew (see warning note):

```
brew install llm
```

Or with pipx:

```
pipx install llm
```

Or with uv:

```
uv tool install llm
```
If you have an OpenAI API key you can run this:

```
# Paste your OpenAI API key into this
llm keys set openai
```
```
# Run a prompt (with the default gpt-4o-mini model)
llm "Ten fun names for a pet pelican"

# Extract text from an image
llm "extract text" -a scanned-document.jpg

# Use a system prompt against a file
cat myfile.py | llm -s "Explain this code"
```
Run prompts against Gemini or Anthropic with their respective plugins:

```
llm install llm-gemini
llm keys set gemini
# Paste Gemini API key here
llm -m gemini-2.0-flash 'Tell me fun facts about Mountain View'
```

```
llm install llm-anthropic
llm keys set anthropic
# Paste Anthropic API key here
llm -m claude-4-opus 'Impress me with wild facts about turnips'
```
You can also install a plugin to access models that can run on your local device. If you use Ollama:

```
# Install the plugin
llm install llm-ollama

# Download and run a prompt against the Llama 3.2 model
ollama pull llama3.2:latest
llm -m llama3.2:latest 'What is the capital of France?'
```
To start an interactive chat with a model, use llm chat:

```
llm chat -m gpt-4.1
Chatting with gpt-4.1
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
Type '!edit' to open your default editor and modify the prompt
Type '!fragment <my_fragment> [<another_fragment> ...]' to insert one or more fragments
> Tell me a joke about a pelican
Why don't pelicans like to tip waiters?

Because they always have a big bill!
```
More background on this project:
- llm, ttok and strip-tags—CLI tools for working with ChatGPT and other LLMs
- The LLM CLI tool now supports self-hosted language models via plugins
- LLM now provides tools for working with embeddings
- Build an image search engine with llm-clip, chat with models with llm chat
- You can now run prompts against images, audio and video in your terminal using LLM
- Structured data extraction from unstructured content using LLM schemas
- Long context support in LLM 0.24 using fragments and template plugins
See also the llm tag on my blog.
Contents
- Setup
- Installation
- Upgrading to the latest version
- Using uvx
- A note about Homebrew and PyTorch
- Installing plugins
- API key management
* Saving and using stored keys
* Passing keys using the --key option
* Keys in environment variables
- Configuration
* Setting a custom default model
* Setting a custom directory location
* Turning SQLite logging on and off
- Usage
- OpenAI models
- Other models
- Tools
- Schemas
- Templates
- Getting started with --save
- Using a template
- Listing available templates
- Templates as YAML files
* System prompts
* Fragments
* Options
* Tools
* Schemas
* Additional template variables
* Specifying default parameters
* Configuring code extraction
* Setting a default model for a template
- Template loaders from plugins
- Fragments
- Model aliases
- Embeddings
- Embedding with the CLI
* llm embed
* llm embed-multi
* llm similar
* llm embed-models
* llm collections list
* llm collections delete
- Using embeddings from Python
* Working with collections
* Retrieving similar items
* SQL schema
- Writing plugins to add new embedding models
* Embedding binary content
- Embedding storage format
- Plugins
- Installing plugins
* Listing installed plugins
* Running with a subset of plugins
- Plugin directory
* Local models
* Remote APIs
* Tools
* Fragments and template loaders
* Embedding models
* Extra commands
* Just for fun
- Plugin hooks
* register_commands(cli)
* register_models(register)
* register_embedding_models(register)
* register_tools(register)
* register_template_loaders(register)
* register_fragment_loaders(register)
- Developing a model plugin
* The initial structure of the plugin
* Installing your plugin to try it out
* Building the Markov chain
* Executing the Markov chain
* Adding that to the plugin
* Understanding execute()
* Prompts and responses are logged to the database
* Adding options
* Distributing your plugin
* GitHub repositories
* Publishing plugins to PyPI
* Adding metadata
* What to do if it breaks
- Advanced model plugins
* Tip: lazily load expensive dependencies
* Models that accept API keys
* Async models
* Supporting schemas
* Supporting tools
* Attachments for multi-modal models
* Tracking token usage
* Tracking resolved model names
* LLM_RAISE_ERRORS
- Utility functions for plugins
* llm.get_key()
* llm.user_dir()
* llm.ModelError
* Response.fake()
- Python API
- Basic prompt execution
* System prompts
* Attachments
* Tools
* Schemas
* Fragments
* Model options
* Passing an API key
* Models from plugins
* Accessing the underlying JSON
* Token usage
* Streaming responses
- Async models
* Tool functions can be sync or async
* Tool use for async models
- Conversations
* Conversations using tools
- Listing models
- Running code when a response has completed
- Other functions
* set_alias(alias, model_id)
* remove_alias(alias)
* set_default_model(alias)
* get_default_model()
* set_default_embedding_model(alias) and get_default_embedding_model()
- Logging to SQLite
- Viewing the logs
* -s/--short mode
* Logs for a conversation
* Searching the logs
* Filtering past a specific ID
* Filtering by model
* Filtering by prompts that used specific fragments
* Filtering by prompts that used specific tools
* Browsing data collected using schemas
- Browsing logs using Datasette
- Backing up your database
- SQL schema
- Related tools
- CLI reference
- llm --help
* llm prompt --help
* llm chat --help
* llm keys --help
* llm logs --help
* llm models --help
* llm templates --help
* llm schemas --help
* llm tools --help
* llm aliases --help
* llm fragments --help
* llm plugins --help
* llm install --help
* llm uninstall --help
* llm embed --help
* llm embed-multi --help
* llm similar --help
* llm embed-models --help
* llm collections --help
* llm openai --help
- Contributing
- Changelog