Published 2026-04-02.
Last modified 2026-04-04.
Time to read: 9 minutes.
llm collection.
Aider is an F/OSS CLI tool for agentic coding. The GitHub project has 42,700 stars, 4,100 forks, and 238 watchers, so it is clearly popular. The release notes show an active project.
Installation
$ python -m pip install aider-install
$ aider-install
Help
This is the help message:
usage: aider [-h] [--model MODEL] [--openai-api-key OPENAI_API_KEY]
[--anthropic-api-key ANTHROPIC_API_KEY]
[--openai-api-base OPENAI_API_BASE]
[--openai-api-type OPENAI_API_TYPE]
[--openai-api-version OPENAI_API_VERSION]
[--openai-api-deployment-id OPENAI_API_DEPLOYMENT_ID]
[--openai-organization-id OPENAI_ORGANIZATION_ID]
[--set-env ENV_VAR_NAME=value] [--api-key PROVIDER=KEY]
[--list-models MODEL] [--model-settings-file MODEL_SETTINGS_FILE]
[--model-metadata-file MODEL_METADATA_FILE] [--alias ALIAS:MODEL]
[--reasoning-effort REASONING_EFFORT]
[--thinking-tokens THINKING_TOKENS]
[--verify-ssl | --no-verify-ssl] [--timeout TIMEOUT]
[--edit-format EDIT_FORMAT] [--architect]
[--auto-accept-architect | --no-auto-accept-architect]
[--weak-model WEAK_MODEL] [--editor-model EDITOR_MODEL]
[--editor-edit-format EDITOR_EDIT_FORMAT]
[--show-model-warnings | --no-show-model-warnings]
[--check-model-accepts-settings | --no-check-model-accepts-settings]
[--max-chat-history-tokens MAX_CHAT_HISTORY_TOKENS]
[--cache-prompts | --no-cache-prompts]
[--cache-keepalive-pings CACHE_KEEPALIVE_PINGS]
[--map-tokens MAP_TOKENS]
[--map-refresh {auto,always,files,manual}]
[--map-multiplier-no-files MAP_MULTIPLIER_NO_FILES]
[--input-history-file INPUT_HISTORY_FILE]
[--chat-history-file CHAT_HISTORY_FILE]
[--restore-chat-history | --no-restore-chat-history]
[--llm-history-file LLM_HISTORY_FILE] [--dark-mode]
[--light-mode] [--pretty | --no-pretty] [--stream | --no-stream]
[--user-input-color USER_INPUT_COLOR]
[--tool-output-color TOOL_OUTPUT_COLOR]
[--tool-error-color TOOL_ERROR_COLOR]
[--tool-warning-color TOOL_WARNING_COLOR]
[--assistant-output-color ASSISTANT_OUTPUT_COLOR]
[--completion-menu-color COLOR]
[--completion-menu-bg-color COLOR]
[--completion-menu-current-color COLOR]
[--completion-menu-current-bg-color COLOR]
[--code-theme CODE_THEME] [--show-diffs] [--git | --no-git]
[--gitignore | --no-gitignore]
[--add-gitignore-files | --no-add-gitignore-files]
[--aiderignore AIDERIGNORE] [--subtree-only]
[--auto-commits | --no-auto-commits]
[--dirty-commits | --no-dirty-commits]
[--attribute-author | --no-attribute-author]
[--attribute-committer | --no-attribute-committer]
[--attribute-commit-message-author | --no-attribute-commit-message-author]
[--attribute-commit-message-committer | --no-attribute-commit-message-committer]
[--attribute-co-authored-by | --no-attribute-co-authored-by]
[--git-commit-verify | --no-git-commit-verify] [--commit]
[--commit-prompt PROMPT] [--dry-run | --no-dry-run]
[--skip-sanity-check-repo] [--watch-files | --no-watch-files]
[--lint] [--lint-cmd LINT_CMD] [--auto-lint | --no-auto-lint]
[--test-cmd TEST_CMD] [--auto-test | --no-auto-test] [--test]
[--analytics | --no-analytics]
[--analytics-log ANALYTICS_LOG_FILE] [--analytics-disable]
[--analytics-posthog-host ANALYTICS_POSTHOG_HOST]
[--analytics-posthog-project-api-key ANALYTICS_POSTHOG_PROJECT_API_KEY]
[--just-check-update] [--check-update | --no-check-update]
[--show-release-notes | --no-show-release-notes]
[--install-main-branch] [--upgrade] [--version]
[--message COMMAND] [--message-file MESSAGE_FILE]
[--gui | --no-gui | --browser | --no-browser]
[--copy-paste | --no-copy-paste] [--apply FILE]
[--apply-clipboard-edits] [--exit] [--show-repo-map]
[--show-prompts] [--voice-format VOICE_FORMAT]
[--voice-language VOICE_LANGUAGE]
[--voice-input-device VOICE_INPUT_DEVICE] [--disable-playwright]
[--file FILE] [--read FILE] [--vim]
[--chat-language CHAT_LANGUAGE]
[--commit-language COMMIT_LANGUAGE] [--yes-always] [-v]
[--load LOAD_FILE] [--encoding ENCODING]
[--line-endings {platform,lf,crlf}] [-c CONFIG_FILE]
[--env-file ENV_FILE]
[--suggest-shell-commands | --no-suggest-shell-commands]
[--fancy-input | --no-fancy-input] [--multiline | --no-multiline]
[--notifications | --no-notifications]
[--notifications-command COMMAND]
[--detect-urls | --no-detect-urls] [--editor EDITOR]
[--shell-completions SHELL] [--opus] [--sonnet] [--haiku] [--4]
[--4o] [--mini] [--4-turbo] [--35turbo] [--deepseek] [--o1-mini]
[--o1-preview]
[FILE ...]
aider is AI pair programming in your terminal
options:
-h, --help show this help message and exit
Main model:
FILE files to edit with an LLM (optional)
--model MODEL Specify the model to use for the main chat [env var:
AIDER_MODEL]
API Keys and settings:
--openai-api-key OPENAI_API_KEY
Specify the OpenAI API key [env var:
AIDER_OPENAI_API_KEY]
--anthropic-api-key ANTHROPIC_API_KEY
Specify the Anthropic API key [env var:
AIDER_ANTHROPIC_API_KEY]
--openai-api-base OPENAI_API_BASE
Specify the api base url [env var:
AIDER_OPENAI_API_BASE]
--openai-api-type OPENAI_API_TYPE
(deprecated, use --set-env OPENAI_API_TYPE=)
[env var: AIDER_OPENAI_API_TYPE]
--openai-api-version OPENAI_API_VERSION
(deprecated, use --set-env OPENAI_API_VERSION=)
[env var: AIDER_OPENAI_API_VERSION]
--openai-api-deployment-id OPENAI_API_DEPLOYMENT_ID
(deprecated, use --set-env
OPENAI_API_DEPLOYMENT_ID=) [env var:
AIDER_OPENAI_API_DEPLOYMENT_ID]
--openai-organization-id OPENAI_ORGANIZATION_ID
(deprecated, use --set-env
OPENAI_ORGANIZATION=) [env var:
AIDER_OPENAI_ORGANIZATION_ID]
--set-env ENV_VAR_NAME=value
Set an environment variable (to control API settings,
can be used multiple times) [env var: AIDER_SET_ENV]
--api-key PROVIDER=KEY
Set an API key for a provider (eg: --api-key
provider= sets PROVIDER_API_KEY=) [env var:
AIDER_API_KEY]
Model settings:
--list-models MODEL, --models MODEL
List known models which match the (partial) MODEL name
[env var: AIDER_LIST_MODELS]
--model-settings-file MODEL_SETTINGS_FILE
Specify a file with aider model settings for unknown
models [env var: AIDER_MODEL_SETTINGS_FILE]
--model-metadata-file MODEL_METADATA_FILE
Specify a file with context window and costs for
unknown models [env var: AIDER_MODEL_METADATA_FILE]
--alias ALIAS:MODEL Add a model alias (can be used multiple times) [env
var: AIDER_ALIAS]
--reasoning-effort REASONING_EFFORT
Set the reasoning_effort API parameter (default: not
set) [env var: AIDER_REASONING_EFFORT]
--thinking-tokens THINKING_TOKENS
Set the thinking token budget for models that support
it. Use 0 to disable. (default: not set) [env var:
AIDER_THINKING_TOKENS]
--verify-ssl, --no-verify-ssl
Verify the SSL cert when connecting to models
(default: True) [env var: AIDER_VERIFY_SSL]
--timeout TIMEOUT Timeout in seconds for API calls (default: None) [env
var: AIDER_TIMEOUT]
--edit-format EDIT_FORMAT, --chat-mode EDIT_FORMAT
Specify what edit format the LLM should use (default
depends on model) [env var: AIDER_EDIT_FORMAT]
--architect Use architect edit format for the main chat [env var:
AIDER_ARCHITECT]
--auto-accept-architect, --no-auto-accept-architect
Enable/disable automatic acceptance of architect
changes (default: True) [env var:
AIDER_AUTO_ACCEPT_ARCHITECT]
--weak-model WEAK_MODEL
Specify the model to use for commit messages and chat
history summarization (default depends on --model)
[env var: AIDER_WEAK_MODEL]
--editor-model EDITOR_MODEL
Specify the model to use for editor tasks (default
depends on --model) [env var: AIDER_EDITOR_MODEL]
--editor-edit-format EDITOR_EDIT_FORMAT
Specify the edit format for the editor model (default:
depends on editor model) [env var:
AIDER_EDITOR_EDIT_FORMAT]
--show-model-warnings, --no-show-model-warnings
Only work with models that have meta-data available
(default: True) [env var: AIDER_SHOW_MODEL_WARNINGS]
--check-model-accepts-settings, --no-check-model-accepts-settings
Check if model accepts settings like
reasoning_effort/thinking_tokens (default: True) [env
var: AIDER_CHECK_MODEL_ACCEPTS_SETTINGS]
--max-chat-history-tokens MAX_CHAT_HISTORY_TOKENS
Soft limit on tokens for chat history, after which
summarization begins. If unspecified, defaults to the
model's max_chat_history_tokens. [env var:
AIDER_MAX_CHAT_HISTORY_TOKENS]
Cache settings:
--cache-prompts, --no-cache-prompts
Enable caching of prompts (default: False) [env var:
AIDER_CACHE_PROMPTS]
--cache-keepalive-pings CACHE_KEEPALIVE_PINGS
Number of times to ping at 5min intervals to keep
prompt cache warm (default: 0) [env var:
AIDER_CACHE_KEEPALIVE_PINGS]
Repomap settings:
--map-tokens MAP_TOKENS
Suggested number of tokens to use for repo map, use 0
to disable [env var: AIDER_MAP_TOKENS]
--map-refresh {auto,always,files,manual}
Control how often the repo map is refreshed. Options:
auto, always, files, manual (default: auto) [env var:
AIDER_MAP_REFRESH]
--map-multiplier-no-files MAP_MULTIPLIER_NO_FILES
Multiplier for map tokens when no files are specified
(default: 2) [env var: AIDER_MAP_MULTIPLIER_NO_FILES]
History Files:
--input-history-file INPUT_HISTORY_FILE
Specify the chat input history file (default:
/var/sitesUbuntu/www.mslinn.com/.aider.input.history)
[env var: AIDER_INPUT_HISTORY_FILE]
--chat-history-file CHAT_HISTORY_FILE
Specify the chat history file (default: /var/sitesUbun
tu/www.mslinn.com/.aider.chat.history.md) [env var:
AIDER_CHAT_HISTORY_FILE]
--restore-chat-history, --no-restore-chat-history
Restore the previous chat history messages (default:
False) [env var: AIDER_RESTORE_CHAT_HISTORY]
--llm-history-file LLM_HISTORY_FILE
Log the conversation with the LLM to this file (for
example, .aider.llm.history) [env var:
AIDER_LLM_HISTORY_FILE]
Output settings:
--dark-mode Use colors suitable for a dark terminal background
(default: False) [env var: AIDER_DARK_MODE]
--light-mode Use colors suitable for a light terminal background
(default: False) [env var: AIDER_LIGHT_MODE]
--pretty, --no-pretty
Enable/disable pretty, colorized output (default:
True) [env var: AIDER_PRETTY]
--stream, --no-stream
Enable/disable streaming responses (default: True)
[env var: AIDER_STREAM]
--user-input-color USER_INPUT_COLOR
Set the color for user input (default: #00cc00) [env
var: AIDER_USER_INPUT_COLOR]
--tool-output-color TOOL_OUTPUT_COLOR
Set the color for tool output (default: None) [env
var: AIDER_TOOL_OUTPUT_COLOR]
--tool-error-color TOOL_ERROR_COLOR
Set the color for tool error messages (default:
#FF2222) [env var: AIDER_TOOL_ERROR_COLOR]
--tool-warning-color TOOL_WARNING_COLOR
Set the color for tool warning messages (default:
#FFA500) [env var: AIDER_TOOL_WARNING_COLOR]
--assistant-output-color ASSISTANT_OUTPUT_COLOR
Set the color for assistant output (default: #0088ff)
[env var: AIDER_ASSISTANT_OUTPUT_COLOR]
--completion-menu-color COLOR
Set the color for the completion menu (default:
terminal's default text color) [env var:
AIDER_COMPLETION_MENU_COLOR]
--completion-menu-bg-color COLOR
Set the background color for the completion menu
(default: terminal's default background color) [env
var: AIDER_COMPLETION_MENU_BG_COLOR]
--completion-menu-current-color COLOR
Set the color for the current item in the completion
menu (default: terminal's default background color)
[env var: AIDER_COMPLETION_MENU_CURRENT_COLOR]
--completion-menu-current-bg-color COLOR
Set the background color for the current item in the
completion menu (default: terminal's default text
color) [env var:
AIDER_COMPLETION_MENU_CURRENT_BG_COLOR]
--code-theme CODE_THEME
Set the markdown code theme (default: default, other
options include monokai, solarized-dark, solarized-
light, or a Pygments builtin style, see
https://pygments.org/styles for available themes) [env
var: AIDER_CODE_THEME]
--show-diffs Show diffs when committing changes (default: False)
[env var: AIDER_SHOW_DIFFS]
Git settings:
--git, --no-git Enable/disable looking for a git repo (default: True)
[env var: AIDER_GIT]
--gitignore, --no-gitignore
Enable/disable adding .aider* to .gitignore (default:
True) [env var: AIDER_GITIGNORE]
--add-gitignore-files, --no-add-gitignore-files
Enable/disable the addition of files listed in
.gitignore to Aider's editing scope. [env var:
AIDER_ADD_GITIGNORE_FILES]
--aiderignore AIDERIGNORE
Specify the aider ignore file (default: .aiderignore
in git root) [env var: AIDER_AIDERIGNORE]
--subtree-only Only consider files in the current subtree of the git
repository [env var: AIDER_SUBTREE_ONLY]
--auto-commits, --no-auto-commits
Enable/disable auto commit of LLM changes (default:
True) [env var: AIDER_AUTO_COMMITS]
--dirty-commits, --no-dirty-commits
Enable/disable commits when repo is found dirty
(default: True) [env var: AIDER_DIRTY_COMMITS]
--attribute-author, --no-attribute-author
Attribute aider code changes in the git author name
(default: True). If explicitly set to True, overrides
--attribute-co-authored-by precedence. [env var:
AIDER_ATTRIBUTE_AUTHOR]
--attribute-committer, --no-attribute-committer
Attribute aider commits in the git committer name
(default: True). If explicitly set to True, overrides
--attribute-co-authored-by precedence for aider edits.
[env var: AIDER_ATTRIBUTE_COMMITTER]
--attribute-commit-message-author, --no-attribute-commit-message-author
Prefix commit messages with 'aider: ' if aider
authored the changes (default: False) [env var:
AIDER_ATTRIBUTE_COMMIT_MESSAGE_AUTHOR]
--attribute-commit-message-committer, --no-attribute-commit-message-committer
Prefix all commit messages with 'aider: ' (default:
False) [env var:
AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER]
--attribute-co-authored-by, --no-attribute-co-authored-by
Attribute aider edits using the Co-authored-by trailer
in the commit message (default: True). If True, this
takes precedence over default --attribute-author and
--attribute-committer behavior unless they are
explicitly set to True. [env var:
AIDER_ATTRIBUTE_CO_AUTHORED_BY]
--git-commit-verify, --no-git-commit-verify
Enable/disable git pre-commit hooks with --no-verify
(default: False) [env var: AIDER_GIT_COMMIT_VERIFY]
--commit Commit all pending changes with a suitable commit
message, then exit [env var: AIDER_COMMIT]
--commit-prompt PROMPT
Specify a custom prompt for generating commit messages
[env var: AIDER_COMMIT_PROMPT]
--dry-run, --no-dry-run
Perform a dry run without modifying files (default:
False) [env var: AIDER_DRY_RUN]
--skip-sanity-check-repo
Skip the sanity check for the git repository (default:
False) [env var: AIDER_SKIP_SANITY_CHECK_REPO]
--watch-files, --no-watch-files
Enable/disable watching files for ai coding comments
(default: False) [env var: AIDER_WATCH_FILES]
Fixing and committing:
--lint Lint and fix provided files, or dirty files if none
provided [env var: AIDER_LINT]
--lint-cmd LINT_CMD Specify lint commands to run for different languages,
eg: "python: flake8 --select=..." (can be used
multiple times) [env var: AIDER_LINT_CMD]
--auto-lint, --no-auto-lint
Enable/disable automatic linting after changes
(default: True) [env var: AIDER_AUTO_LINT]
--test-cmd TEST_CMD Specify command to run tests [env var: AIDER_TEST_CMD]
--auto-test, --no-auto-test
Enable/disable automatic testing after changes
(default: False) [env var: AIDER_AUTO_TEST]
--test Run tests, fix problems found and then exit [env var:
AIDER_TEST]
Analytics:
--analytics, --no-analytics
Enable/disable analytics for current session (default:
random) [env var: AIDER_ANALYTICS]
--analytics-log ANALYTICS_LOG_FILE
Specify a file to log analytics events [env var:
AIDER_ANALYTICS_LOG]
--analytics-disable Permanently disable analytics [env var:
AIDER_ANALYTICS_DISABLE]
--analytics-posthog-host ANALYTICS_POSTHOG_HOST
Send analytics to custom PostHog instance [env var:
AIDER_ANALYTICS_POSTHOG_HOST]
--analytics-posthog-project-api-key ANALYTICS_POSTHOG_PROJECT_API_KEY
Send analytics to custom PostHog project [env var:
AIDER_ANALYTICS_POSTHOG_PROJECT_API_KEY]
Upgrading:
--just-check-update Check for updates and return status in the exit code
[env var: AIDER_JUST_CHECK_UPDATE]
--check-update, --no-check-update
Check for new aider versions on launch [env var:
AIDER_CHECK_UPDATE]
--show-release-notes, --no-show-release-notes
Show release notes on first run of new version
(default: None, ask user) [env var:
AIDER_SHOW_RELEASE_NOTES]
--install-main-branch
Install the latest version from the main branch [env
var: AIDER_INSTALL_MAIN_BRANCH]
--upgrade, --update Upgrade aider to the latest version from PyPI [env
var: AIDER_UPGRADE]
--version Show the version number and exit
Modes:
--message COMMAND, --msg COMMAND, -m COMMAND
Specify a single message to send the LLM, process
reply then exit (disables chat mode) [env var:
AIDER_MESSAGE]
--message-file MESSAGE_FILE, -f MESSAGE_FILE
Specify a file containing the message to send the LLM,
process reply, then exit (disables chat mode) [env
var: AIDER_MESSAGE_FILE]
--gui, --no-gui, --browser, --no-browser
Run aider in your browser (default: False) [env var:
AIDER_GUI]
--copy-paste, --no-copy-paste
Enable automatic copy/paste of chat between aider and
web UI (default: False) [env var: AIDER_COPY_PASTE]
--apply FILE Apply the changes from the given file instead of
running the chat (debug) [env var: AIDER_APPLY]
--apply-clipboard-edits
Apply clipboard contents as edits using the main
model's editor format [env var:
AIDER_APPLY_CLIPBOARD_EDITS]
--exit Do all startup activities then exit before accepting
user input (debug) [env var: AIDER_EXIT]
--show-repo-map Print the repo map and exit (debug) [env var:
AIDER_SHOW_REPO_MAP]
--show-prompts Print the system prompts and exit (debug) [env var:
AIDER_SHOW_PROMPTS]
Voice settings:
--voice-format VOICE_FORMAT
Audio format for voice recording (default: wav). webm
and mp3 require ffmpeg [env var: AIDER_VOICE_FORMAT]
--voice-language VOICE_LANGUAGE
Specify the language for voice using ISO 639-1 code
(default: auto) [env var: AIDER_VOICE_LANGUAGE]
--voice-input-device VOICE_INPUT_DEVICE
Specify the input device name for voice recording [env
var: AIDER_VOICE_INPUT_DEVICE]
Other settings:
--disable-playwright Never prompt for or attempt to install Playwright for
web scraping (default: False). [env var:
AIDER_DISABLE_PLAYWRIGHT]
--file FILE specify a file to edit (can be used multiple times)
[env var: AIDER_FILE]
--read FILE specify a read-only file (can be used multiple times)
[env var: AIDER_READ]
--vim Use VI editing mode in the terminal (default: False)
[env var: AIDER_VIM]
--chat-language CHAT_LANGUAGE
Specify the language to use in the chat (default:
None, uses system settings) [env var:
AIDER_CHAT_LANGUAGE]
--commit-language COMMIT_LANGUAGE
Specify the language to use in the commit message
(default: None, user language) [env var:
AIDER_COMMIT_LANGUAGE]
--yes-always Always say yes to every confirmation [env var:
AIDER_YES_ALWAYS]
-v, --verbose Enable verbose output [env var: AIDER_VERBOSE]
--load LOAD_FILE Load and execute /commands from a file on launch [env
var: AIDER_LOAD]
--encoding ENCODING Specify the encoding for input and output (default:
utf-8) [env var: AIDER_ENCODING]
--line-endings {platform,lf,crlf}
Line endings to use when writing files (default:
platform) [env var: AIDER_LINE_ENDINGS]
-c CONFIG_FILE, --config CONFIG_FILE
Specify the config file (default: search for
.aider.conf.yml in git root, cwd or home directory)
--env-file ENV_FILE Specify the .env file to load (default: .env in git
root) [env var: AIDER_ENV_FILE]
--suggest-shell-commands, --no-suggest-shell-commands
Enable/disable suggesting shell commands (default:
True) [env var: AIDER_SUGGEST_SHELL_COMMANDS]
--fancy-input, --no-fancy-input
Enable/disable fancy input with history and completion
(default: True) [env var: AIDER_FANCY_INPUT]
--multiline, --no-multiline
Enable/disable multi-line input mode with Meta-Enter
to submit (default: False) [env var: AIDER_MULTILINE]
--notifications, --no-notifications
Enable/disable terminal bell notifications when LLM
responses are ready (default: False) [env var:
AIDER_NOTIFICATIONS]
--notifications-command COMMAND
Specify a command to run for notifications instead of
the terminal bell. If not specified, a default command
for your OS may be used. [env var:
AIDER_NOTIFICATIONS_COMMAND]
--detect-urls, --no-detect-urls
Enable/disable detection and offering to add URLs to
chat (default: True) [env var: AIDER_DETECT_URLS]
--editor EDITOR Specify which editor to use for the /editor command
[env var: AIDER_EDITOR]
--shell-completions SHELL
Print shell completion script for the specified SHELL
and exit. Supported shells: bash, tcsh, zsh. Example:
aider --shell-completions bash [env var:
AIDER_SHELL_COMPLETIONS]
Deprecated model settings:
--opus Use claude-3-opus-20240229 model for the main chat
(deprecated, use --model) [env var: AIDER_OPUS]
--sonnet Use anthropic/claude-3-7-sonnet-20250219 model for the
main chat (deprecated, use --model) [env var:
AIDER_SONNET]
--haiku Use claude-3-5-haiku-20241022 model for the main chat
(deprecated, use --model) [env var: AIDER_HAIKU]
--4, -4 Use gpt-4-0613 model for the main chat (deprecated,
use --model) [env var: AIDER_4]
--4o Use gpt-4o model for the main chat (deprecated, use
--model) [env var: AIDER_4O]
--mini Use gpt-4o-mini model for the main chat (deprecated,
use --model) [env var: AIDER_MINI]
--4-turbo Use gpt-4-1106-preview model for the main chat
(deprecated, use --model) [env var: AIDER_4_TURBO]
--35turbo, --35-turbo, --3, -3
Use gpt-3.5-turbo model for the main chat (deprecated,
use --model) [env var: AIDER_35TURBO]
--deepseek Use deepseek/deepseek-chat model for the main chat
(deprecated, use --model) [env var: AIDER_DEEPSEEK]
--o1-mini Use o1-mini model for the main chat (deprecated, use
--model) [env var: AIDER_O1_MINI]
--o1-preview Use o1-preview model for the main chat (deprecated,
use --model) [env var: AIDER_O1_PREVIEW]
Args that start with '--' can also be set in a config file
(./.aider.conf.yml or ~/.aider.conf.yml or specified via -c). The config file
uses YAML syntax and must represent a YAML 'mapping' (for details, see
learn.getgrav.org/advanced/yaml). In general, command-line values
override environment variables which override config file values which
override defaults.
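As a sketch of that precedence order, consider a config file that sets a default model (the model names here are only illustrative):

```yaml
# .aider.conf.yml sets a default model:
model: gemini/gemini-2.5-flash

# An environment variable overrides the config file:
#   export AIDER_MODEL=gemini/gemini-2.5-pro
# and a command-line flag overrides both:
#   aider --model anthropic/claude-sonnet-4-6
```

In this scenario, aider would launch with the Claude model from the command line, falling back to the environment variable, then the config file, if the more specific settings were absent.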
These are the slash commands for use in sessions:
> /help
/add Add files to the chat so aider can edit them or review them in detail
/architect Enter architect/editor mode using 2 different models. If no prompt provided, switches to architect/editor mode.
/ask Ask questions about the code base without editing any files. If no prompt provided, switches to ask mode.
/chat-mode Switch to a new chat mode
/clear Clear the chat history
/code Ask for changes to your code. If no prompt provided, switches to code mode.
/commit Commit edits to the repo made outside the chat (commit message optional)
/context Enter context mode to see surrounding code context. If no prompt provided, switches to context mode.
/copy Copy the last assistant message to the clipboard
/copy-context Copy the current chat context as markdown, suitable to paste into a web UI
/diff Display the diff of changes since the last message
/drop Remove files from the chat session to free up context space
/edit Alias for /editor: Open an editor to write a prompt
/editor Open an editor to write a prompt
/editor-model Switch the Editor Model to a new LLM
/exit Exit the application
/git Run a git command (output excluded from chat)
/help Ask questions about aider
/lint Lint and fix in-chat files or all dirty files if none in chat
/load Load and execute commands from a file
/ls List all known files and indicate which are included in the chat session
/map Print out the current repository map
/map-refresh Force a refresh of the repository map
/model Switch the Main Model to a new LLM
/models Search the list of available models
/multiline-mode Toggle multiline mode (swaps behavior of Enter and Meta+Enter)
/paste Paste image/text from the clipboard into the chat. Optionally provide a name for the image.
/quit Exit the application
/read-only Add files to the chat that are for reference only, or turn added files to read-only
/reasoning-effort Set the reasoning effort level (values: number or low/medium/high depending on model)
/report Report a problem by opening a GitHub Issue
/reset Drop all files and clear the chat history
/run Run a shell command and optionally add the output to the chat (alias: !)
/save Save commands to a file that can reconstruct the current chat session's files
/settings Print out the current settings
/test Run a shell command and add the output to the chat on non-zero exit code
/think-tokens Set the thinking token budget, eg: 8096, 8k, 10.5k, 0.5M, or 0 to disable.
/tokens Report on the number of tokens used by the current chat context
/undo Undo the last git commit if it was done by aider
/voice Record and transcribe voice input
/weak-model Switch the Weak Model to a new LLM
/web Scrape a webpage, convert to markdown and send in a message
Use '/help <question>' to ask questions about how to use aider.
Configuration
The first time you run aider it collects configuration information.
$ aider
───────────────────────────────────────────────────────────────────
You can skip this check with --no-gitignore
Add .aider* to .gitignore (recommended)? (Y)es/(N)o [Yes]:
Added .aider* to .gitignore
No LLM model was specified and no API keys were provided.
OpenRouter provides free and paid access to many LLMs.
Login to OpenRouter or create a free account? (Y)es/(N)o [Yes]:
Please open this URL in your browser to connect Aider with OpenRouter:
https://openrouter.ai/auth?callback_url=http://localhost:8484/callback/aider&code_challenge=fu4AONeC5o0&code_challenge_method=S256
Waiting up to 5 minutes for you to finish in the browser...
Use Control-C to interrupt.
^C
OAuth flow interrupted.
OpenRouter authentication did not complete successfully.
https://aider.chat/docs/troubleshooting/models-and-keys.html
Open documentation URL for more info? (Y)es/(N)o/(D)on't ask again [Yes]:
The OpenRouter authentication message never arrived by email, so I created ~/.aider.conf.yml and provided keys for Gemini and Anthropic.
Aider's format for specifying model provider keys is inconsistent and poorly documented; I had to experiment to find the correct format for .aider.conf.yml.
# Keys
anthropic-api-key: sk-ant-api03-...neQAA
openai-api-key: sk-proj...eqMgA
api-key:
  - gemini=AIz...uE

# Settings
dark-mode: true
editor-model: anthropic/claude-sonnet-4-6
model: anthropic/claude-sonnet-4-6
# model: ollama_chat/gemma4:26b

# Local Ollama models need OLLAMA_API_BASE to be set
# export OLLAMA_API_BASE=http://localhost:11434
# setx OLLAMA_API_BASE http://localhost:11434
#
# export OLLAMA_API_BASE=http://gojira:11434
# setx OLLAMA_API_BASE http://gojira:11434

timeout: 60  # (seconds) 300 seconds might be required for local inference models
show-model-warnings: false  # Globally silences "Unknown context window" prompts
weak-model: claude-3-5-haiku-latest
Information Sources
PS F:\work\> aider --watch-files --architect
───────────────────────────────────────────────────────────────────
Aider v0.86.2
Main model: anthropic/claude-sonnet-4-6 with architect edit format, infinite output
Editor model: anthropic/claude-sonnet-4-6 with editor-whole edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 4096 tokens, auto refresh
───────────────────────────────────────────────────────────────────
architect> /ask what information do you have access to
Based on the file summaries you've shared, I have access to:

Structural/Architectural information:
• Class names, method signatures, and member variables
• File organization and module boundaries
• Inheritance relationships (e.g., ChannelStrip extends VerticalControlStrip, PitchEngine implements PitchEngineInterface)

What I can infer:
• The app is a JUCE-based instrument recorder
• It has an audio engine, session management, UI components, and Lua scripting
• It uses a processing pipeline: audio capture → processing thread → WAV export
• It has CPU/memory monitoring, pitch detection, waveform display, and channel configuration

What I do NOT have:
• Full method implementations
• Lua scripts
• Build files / CMakeLists
• Configuration files
• Any file not listed in the summaries
• The actual logic inside any function body

Modules I can see summaries for:
• src/app, src/engine, src/session, src/ui, src/common
• tests/unit and tests/integration

If you want me to answer detailed questions about specific logic or implementation, you'd need to add those files to the chat.

Tokens: 8.7k sent, 255 received. Cost: $0.03 message, $0.03 session.
───────────────────────────────────────────────────────────────────
Free Models
Many free online LLMs can be connected, including the new online Ollama models. I show many examples of pairing Aider with free LLMs below.
The ollama pull command for cloud models only downloads about 350 bytes... hardly a snack! Because inference runs on Ollama's servers, only a small manifest needs to be fetched; the model weights never land on your machine.
Gemini
Free Gemini usage is available, within limits. There are several models to choose from, all with free usage.
API Key
- Go to Google AI Studio.
- Create a free API key.
- Set the API key in your terminal environment:
Windows CMD or PowerShell
C:\> setx GEMINI_API_KEY your_key_here
Then restart the shell.
Windows users: setx has a 1024-character limit; it cannot create a PATH longer than that. To overcome this problem, read Add To Windows User Path.
Mac/Linux
$ export GEMINI_API_KEY=your_key_here
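If you prefer not to use an environment variable, the same key can live in your aider config file instead. This is a minimal sketch using the api-key list form; the key value is a placeholder:

```yaml
# ~/.aider.conf.yml — provider keys via the api-key list form
api-key:
  - gemini=your_key_here
```

Either mechanism works; the environment variable takes precedence over the config file if both are set.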
Gemini 2.5 Flash
Gemini 2.5 Flash is the fastest model, with the highest free limits.
This model defaults to the diff-fenced edit format, as you can see.
$ aider --model gemini/gemini-2.5-flash
Aider v0.86.2
Main model: gemini/gemini-2.5-flash with diff-fenced edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 4096 tokens, auto refresh
>
Gemini 2.5 Pro
Gemini 2.5 Pro is best for coding, but has lower free limits. It defaults to the diff-fenced edit format, as you can see.
$ aider --model gemini/gemini-2.5-pro
Aider v0.86.2
Main model: gemini/gemini-2.5-pro with diff-fenced edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 4096 tokens, auto refresh
>
Gemini 3.1 Flash Lite Preview
Gemini 3.1 defaults to the whole edit format. Gemini 3.1 Flash Lite Preview is best for high-volume, simple tasks.
$ aider --model gemini/gemini-3.1-flash-lite-preview
Aider v0.86.2
Main model: gemini/gemini-3.1-flash-lite-preview with whole edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 4096 tokens, auto refresh
>
This model would run faster if the diff-fenced edit format were specified.
Gemini Experimental
Experimental is free, but availability is unstable:
$ aider --model gemini/gemini-exp-1206
Aider v0.86.2
Main model: gemini/gemini-exp-1206 with diff edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 4096 tokens, auto refresh
>
gpt-oss Cloud
Free gpt-oss:120b cloud usage is available, with usage limits.
$ ollama pull gpt-oss:120b-cloud
pulling manifest
pulling 923426e76b18: 100% ▕████████████████████████████████▏ 384 B
verifying sha256 digest
writing manifest
success
$ aider --model gpt-oss:120b-cloud
Aider v0.86.2
Main model: gpt-oss:120b-cloud with whole edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 3,735 files
Repo-map: using 1024 tokens, auto refresh
>
This model would run faster if --edit-format=diff-fenced was specified.
MiniMax-M2.7 Cloud
Free MiniMax-M2.7 cloud usage is available, with usage limits.
$ ollama pull minimax-m2.7:cloud
pulling manifest
pulling 476b4620b85b: 100% ▕████████████████████████████████████▏ 382 B
verifying sha256 digest
writing manifest
success
$ aider --model minimax-m2.7:cloud
Aider v0.86.2
Main model: minimax-m2.7:cloud with whole edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 3,735 files
Repo-map: using 1024 tokens, auto refresh
>
This model would run faster if --edit-format=diff-fenced was specified.
Nemotron-3-super Cloud
Free nemotron-3-super cloud usage is available, with usage limits.
$ ollama pull nemotron-3-super:cloud
pulling manifest
pulling eddb1e2169b8: 100% ▕████████████████████████████████▏ 345 B
verifying sha256 digest
writing manifest
success
$ aider --model nemotron-3-super:cloud
Aider v0.86.2
Main model: nemotron-3-super:cloud with whole edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 3,735 files
Repo-map: using 1024 tokens, auto refresh
>
This model would run faster if --edit-format=diff-fenced was specified.
QWEN 3.5 Coder Cloud
Free QWEN 3.5 Coder cloud usage is available, with usage limits. Qwen is best known for its downloadable open-source models; however, a model this large must run on datacenter-class GPUs.
QWEN 3.6 was announced 2026-04-01, but as of 2026-04-04 it was not yet available in the cloud.
$ ollama pull qwen3.5:cloud
pulling manifest
pulling 476b4620b85b: 100% ▕████████████████████████████████████▏  382 B
verifying sha256 digest
writing manifest
success
$ aider --model ollama_chat/qwen3.5:cloud
Main model: ollama_chat/qwen3.5:cloud with whole edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 2,515 files
Repo-map: using 4096 tokens, auto refresh
>
This model would run faster if --edit-format=diff-fenced was specified.
Paid Models
Only Ollama models need to be pulled before they can be used.
All models run faster with the diff-fenced edit format.
Claude Haiku 4.6
$ aider --model claude-haiku-4-6
Aider v0.86.2
Main model: claude-haiku-4-6 with whole edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 2,515 files
Repo-map: using 1024 tokens, auto refresh
>
Claude Sonnet 4.6
$ aider --model claude-sonnet-4-6
Aider v0.86.2
Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 4096 tokens, auto refresh
>
Claude Opus 4.6
I am unsure why diff edit format is shown here instead of whole edit format with infinite output.
$ aider --model claude-opus-4-6
Aider v0.86.2
Main model: claude-opus-4-6 with diff edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 2,515 files
Repo-map: using 4096 tokens, auto refresh
>
Gemini 3.1 Pro
Gemini 3.1 Pro is best for complex reasoning and research:
$ aider --model gemini/gemini-3.1-pro
Main model: gemini/gemini-3.1-pro with whole edit format
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 1024 tokens, auto refresh
>
Optional Model Settings
Custom model settings (which override a model’s default values) can be
specified in a file called .aider.model.settings.yml. That file could
exist in the user home directory or in the current project.
- name: anthropic.claude-opus-4-20250514-v1:0
  edit_format: diff
  weak_model_name: anthropic.claude-3-5-haiku-20241022-v1:0
  use_repo_map: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
    max_tokens: 32000
  cache_control: true
  editor_model_name: anthropic.claude-sonnet-4-20250514-v1:0
  editor_edit_format: editor-diff
  accepts_settings:
    - thinking_tokens
- name: anthropic.claude-sonnet-4-20250514-v1:0
  edit_format: diff
  weak_model_name: anthropic.claude-3-5-haiku-20241022-v1:0
  use_repo_map: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
    max_tokens: 64000
  cache_control: true
  editor_model_name: anthropic.claude-sonnet-4-20250514-v1:0
  editor_edit_format: editor-diff
  accepts_settings:
    - thinking_tokens
- name: anthropic/claude-3-5-haiku-20241022
  edit_format: diff
  weak_model_name: anthropic/claude-3-5-haiku-20241022
  use_repo_map: true
  extra_params:
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
  cache_control: true
- name: gemini/gemini-3-flash-preview
  edit_format: diff-fenced
  use_repo_map: true
  overeager: true
  use_temperature: false
  accepts_settings:
    - thinking_tokens
- name: gemini/gemini-3-pro-preview
  edit_format: diff-fenced
  weak_model_name: gemini/gemini-2.5-flash
  use_repo_map: true
  overeager: true
  use_temperature: false
  accepts_settings:
    - thinking_tokens
- name: ollama_chat/gemma4:26b
  # 'diff' is generally best for coding-capable local models
  edit_format: diff
  # Use the repo map to help the model understand project structure
  use_repo_map: true
  # Gemma 4 works best with examples provided as system messages
  examples_as_sys_msg: true
  extra_params:
    # Gemma 4 26B (MoE) supports up to 128k, but 32k-64k is a stable local baseline
    num_ctx: 32768
    # Temperature 0 is recommended for consistent coding results
    temperature: 0
- name: ollama_chat/fluffy/l3-8b-stheno-v3.2
  # 'whole' is safest for 8B models to ensure code integrity
  edit_format: whole
  # Important for Llama 3.1 models
  examples_as_sys_msg: true
  use_repo_map: true
  extra_params:
    # Llama 3.1 natively supports 128k, but 16k-32k is the sweet spot for local 8B performance
    num_ctx: 16384
    temperature: 0
    # Helps prevent the model from repeating its own thought process
    repeat_penalty: 1.1
I copied the YAML configuration files from the WSL home directory to the native Windows home directory:
$ cp ~/.aider*.yml '/mnt/c/Users/Mike Slinn/'
Edit Formats
The --edit-format command-line option determines how the LLM
sends code changes back to your local files. Values are: whole,
diff, and diff-fenced. The default is
diff for most models, but some models (like Gemini 2.5 Flash)
default to diff-fenced and some (like Gemini Flash 3.1) default
to whole.
diff and diff-fenced run much faster than
whole. I did not see an appreciable speed difference between diff and
diff-fenced.
The AIDER_EDIT_FORMAT environment variable and edit_format field in
.aider.model.settings.yml can also be used to set the default
format.
The --edit-format command-line option overrides all other settings.
If the following is typed at a command line, or is present in .bashrc,
then all models will use the diff-fenced edit format instead of any default
whole or diff formats:
export AIDER_EDIT_FORMAT=diff-fenced
.env files are also supported; add the setting to
.env in the project root or the user home directory.
AIDER_EDIT_FORMAT=diff-fenced
Alternatively, you can specify the edit-format key in .aider.conf.yml.
edit-format: diff-fenced
diff
The LLM returns only the specific blocks of changes using a search/replace format.
- Pros: Fast and token-efficient because it only transmits the edits.
- Cons: Higher failure rate if the LLM provides an original block that doesn't perfectly match your local file's text.
- Best for: Large files and complex refactors where you want to save time and tokens.
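Conceptually, applying a diff edit is an exact-match substitution of the SEARCH text with the REPLACE text. Here is a minimal sketch of that rule; it is not Aider's actual implementation, which is more tolerant of whitespace drift and reflects failed edits back to the LLM:

```python
def apply_edit(file_text: str, search: str, replace: str) -> str:
    """Apply one search/replace edit; fail loudly on an inexact match."""
    if search not in file_text:
        # This is the failure mode described above: the LLM's SEARCH block
        # does not perfectly match the local file, so the edit is rejected.
        raise ValueError("SEARCH block does not match the file")
    # Replace only the first occurrence, as a diff edit targets one location.
    return file_text.replace(search, replace, 1)
```

When a match fails, Aider reports the failed edit rather than guessing; this sketch just raises.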
diff-fenced
The diff-fenced format is a specialized version of the diff format.
The filename and search/replace blocks are within standard Markdown code fences
(triple backticks). Some models, such as Google's Gemini, are trained to produce
and process code within these fenced blocks for better accuracy. While a standard
diff might use its own internal markers, diff-fenced
uses the Markdown structure that LLMs are already comfortable with.
whole
The LLM returns the entire updated content of every file that needs a change.
- Pros: Extremely simple and robust; less likely to hallucinate where a change belongs because it rewrites everything.
- Cons: Slow and expensive for large files since it uses more tokens to resend unchanged code.
- Best for: Small files, new file creation, or weaker models that struggle with complex diff syntax.
Prompt
The aider prompt displays the current edit format if it has been overridden:
diff>
diff-fenced>
whole>
If the model does not support the specified edit format, the Aider prompt will not show the requested edit format.
Format Comparison
| Format | Output Type | Token Usage | Recommended Model |
|---|---|---|---|
| whole | Full file replacement | High | Weak/local models (e.g., Llama-3-8B) |
| diff | Search/Replace blocks | Low | GPT-4o, Claude 3.5 Sonnet |
| diff-fenced | Fenced Search/Replace | Low | Gemini 2.5 Pro/Flash |
Watch files
Aider includes a feature called Watch Files that allows it to monitor project files for trigger comments. When you add a comment containing AI! to a file, Aider detects it, treats the comment as a prompt, and automatically applies the requested changes.
For AI comments in source files, use #, //, or /* */:
// AI! put the date last modified here
# AI! put the date last modified here
/* or we could be much more verbose.
   - with bullet points
   - and stuff AI! */
For AI comments in Markdown files, neither #, //, nor the standard HTML comment tags work: the Aider console output shows the file being processed, but the file is not changed.
// AI! put the date last modified here (fails)
# AI! put the date last modified here (fails)
<!-- AI! put the date last modified here (fails) -->
Enable the feature like this:
PS F:\work> aider --watch-files
Aider v0.86.2
Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 4096 tokens, auto refresh
>
You will probably find this works faster with `--edit-format=diff`:
$ aider --watch-files --edit-format=diff
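The trigger mechanism can be approximated in a few lines. This is only a toy scan for illustration; Aider's real watcher uses file-system events and understands many comment styles:

```python
import re

# Match a trailing comment (#, //, or /* */) that contains the AI! trigger.
TRIGGER = re.compile(r"(?:#|//|/\*)\s*(.*AI!.*?)(?:\*/)?\s*$")

def find_ai_comments(text: str) -> list[tuple[int, str]]:
    """Return (line number, prompt text) for each AI! trigger comment."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        m = TRIGGER.search(line)
        if m:
            hits.append((lineno, m.group(1).strip()))
    return hits
```

Each hit becomes a prompt for the LLM; Aider then edits the file and removes the trigger comment.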
Running Tests
The /test slash command is used like this:
/test make test
The syntax makes sense once you understand that the /test slash command runs the remainder of the line as a shell command and treats its output as compilation and test output. The output is only shared with the AI if the command fails (returns a non-zero exit code); Aider then automatically pulls the error into the chat so the AI can attempt a fix. Let’s see it in action:
$ aider --watch-files
Aider v0.86.2
Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output
Weak model: claude-3-5-haiku-latest
Git repo: .git with 159 files
Repo-map: using 4096 tokens, auto refresh
> /test make test
Checking system dependencies...
All system dependencies found.
Build Env: RAM=24030MB, Cores=24, JOBS=11, Disk=11950MB (tmpfs)
-- The CXX compiler identification is GNU 15.2.0
-- The C compiler identification is GNU 15.2.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Found PkgConfig: /usr/bin/pkg-config (found version "1.8.1")
Cloning into 'juce-src'...
Running Shell Commands
The /run command (aliased as !) executes shell
commands directly from the Aider chat. You can decide whether to share the
output with the AI after the command finishes.
> !make help
Available targets:
all              Default target. Configure and build a Release binary.
build            Configure (if needed) and compile a Release binary.
build-windows    Build native Windows binary from WSL using Visual Studio + vcpkg.
check-cmake      Verify CMake version 4.3.0 or higher.
check-deps       Verify required system libraries are installed (headers + pkg-config).
check-ninja      Warn with OS-specific install instructions if Ninja is not on PATH.
check-vcpkg      Verify vcpkg is installed (or install it automatically).
clean            Remove all build artifacts (both Release and Debug trees).
clean-debug      Remove only the Debug build tree.
clean-release    Remove only the Release build tree.
clean-windows    Remove Windows build artifacts.
configure        Run CMake configuration for Release build tree.
configure-debug  Run CMake configuration for Debug build tree.
debug            Configure (if needed) and compile a Debug binary (symbols included).
help             List all available targets.
install          Install the Release binary to the system prefix (default /usr/local).
rebuild          Clean the Release tree and build from scratch.
rebuild-debug    Clean the Debug tree and build from scratch.
release          Alias for build. Explicit Release build.
run              Build Release (if needed) and launch the application.
setup-vcpkg      Install vcpkg on Windows (called automatically by build-windows if needed).
test             Build Release (only if sources changed) and run unit tests.
test-all         Build Release (only if sources changed) and run all tests (unit + smoke + live).
test-debug       Build Debug unit tests (only if sources changed) and run them. (Useful for JUCE assertions).
test-live        Build Release (only if sources changed) and run integration tests with live app.
test-smoke       Build Release (only if sources changed) and run the full pipeline smoke test.
Add 0.4k tokens of command output to the chat? (Y)es/(N)o [Yes]: CTRL+C
^C again to exit
Large Projects
If you have a project with more than a thousand files in it, you might want to set the working directory to a subdirectory. Aider’s watch mechanism is not as robust as it might be.
$ cd $msp/collections/_llm
$ AIDER_SUBTREE_ONLY=true aider --watch-files
Aider v0.86.2
Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output
Weak model: claude-3-5-haiku-latest
Git repo: ../../.git with 247 files
Repo-map: using 4096 tokens, auto refresh
Note: in-chat filenames are always relative to the git working dir, not the current working dir.
Cur working dir: /var/sitesUbuntu/www.mslinn.com/collections/_llm
Git working dir: /var/sitesUbuntu/www.mslinn.com
Debugging
--verbose is your friend.
$ AIDER_SUBTREE_ONLY=true aider --watch-files --verbose Too soon to check version: 22.9 hours Command Line Args: --watch-files --verbose Environment Variables: AIDER_SUBTREE_ONLY:true Config File (/home/mslinn/.aider.conf.yml): anthropic-api-key: ...eQAA api-key: ['gemini=AIzaSyCijW8690wnOCe5-nNSviAy3KzXezWf5uE'] dark-mode: True editor-model: anthropic/claude-sonnet-4-6 model: anthropic/claude-sonnet-4-6 timeout: 60 show-model-warnings:False weak-model: claude-3-5-haiku-latest Defaults: --set-env: [] --model-settings-file:.aider.model.settings.yml --model-metadata-file:.aider.model.metadata.json --cache-keepalive-pings:0 --map-refresh: auto --map-multiplier-no-files:2 --input-history-file:/var/sitesUbuntu/www.mslinn.com/.aider.input.history --chat-history-file:/var/sitesUbuntu/www.mslinn.com/.aider.chat.history.md --user-input-color:#00cc00 --tool-error-color:#FF2222 --tool-warning-color:#FFA500 --assistant-output-color:#0088ff --code-theme: default --aiderignore: /var/sitesUbuntu/www.mslinn.com/.aiderignore --lint-cmd: [] --test-cmd: [] --voice-format: wav --voice-language: en --encoding: utf-8 --line-endings: platform --env-file: /var/sitesUbuntu/www.mslinn.com/.env Option settings: - 35turbo: False - 4: False - 4_turbo: False - 4o: False - add_gitignore_files: False - aiderignore: /var/sitesUbuntu/www.mslinn.com/.aiderignore - alias: None - analytics: None - analytics_disable: False - analytics_log: None - analytics_posthog_host: None - analytics_posthog_project_api_key: None - anthropic_api_key: ...eQAA - api_key: ['gemini=AIzaSyCijW8690wnOCe5-nNSviAy3KzXezWf5uE'] - apply: None - apply_clipboard_edits: False - assistant_output_color: #00FFFF - attribute_author: None - attribute_co_authored_by: True - attribute_commit_message_author: False - attribute_commit_message_committer: False - attribute_committer: None - auto_accept_architect: True - auto_commits: True - auto_lint: True - auto_test: False - cache_keepalive_pings: 0 - cache_prompts: False - 
chat_history_file: /var/sitesUbuntu/www.mslinn.com/.aider.chat.history.md - chat_language: None - check_model_accepts_settings: True - check_update: True - code_theme: monokai - commit: False - commit_language: None - commit_prompt: None - completion_menu_bg_color: None - completion_menu_color: None - completion_menu_current_bg_color: None - completion_menu_current_color: None - config: None - copy_paste: False - dark_mode: True - deepseek: False - detect_urls: True - dirty_commits: True - disable_playwright: False - dry_run: False - edit_format: None - editor: None - editor_edit_format: None - editor_model: anthropic/claude-sonnet-4-6 - encoding: utf-8 - env_file: /var/sitesUbuntu/www.mslinn.com/.env - exit: False - fancy_input: True - file: None - files: [] - git: True - git_commit_verify: False - gitignore: True - gui: False - haiku: False - input_history_file: /var/sitesUbuntu/www.mslinn.com/.aider.input.history - install_main_branch: False - just_check_update: False - light_mode: False - line_endings: platform - lint: False - lint_cmd: [] - list_models: None - llm_history_file: None - load: None - map_multiplier_no_files: 2 - map_refresh: auto - map_tokens: None - max_chat_history_tokens: None - message: None - message_file: None - mini: False - model: anthropic/claude-sonnet-4-6 - model_metadata_file: .aider.model.metadata.json - model_settings_file: .aider.model.settings.yml - multiline: False - notifications: False - notifications_command: None - o1_mini: False - o1_preview: False - openai_api_base: None - openai_api_deployment_id: None - openai_api_key: None - openai_api_type: None - openai_api_version: None - openai_organization_id: None - opus: False - pretty: True - read: None - reasoning_effort: None - restore_chat_history: False - set_env: [] - shell_completions: None - show_diffs: False - show_model_warnings: False - show_prompts: False - show_release_notes: None - show_repo_map: False - skip_sanity_check_repo: False - sonnet: False - stream: True - 
subtree_only: True - suggest_shell_commands: True - test: False - test_cmd: [] - thinking_tokens: None - timeout: 60.0 - tool_error_color: #FF3333 - tool_output_color: None - tool_warning_color: #FFFF00 - upgrade: False - user_input_color: #32FF32 - verbose: True - verify_ssl: True - vim: False - voice_format: wav - voice_input_device: None - voice_language: en - watch_files: True - weak_model: claude-3-5-haiku-latest - yes_always: None Checking imports for version 0.86.2 and executable /home/mslinn/.local/share/uv/tools/aider-chat/bin/python Installs file: /home/mslinn/.aider/installs.json Installs file exists and loaded Not first run, loading imports in background thread Loaded model settings from: - /home/mslinn/.aider.model.settings.yml Searched for model settings files: - /home/mslinn/.aider.model.settings.yml - /var/sitesUbuntu/www.mslinn.com/.aider.model.settings.yml - /var/sitesUbuntu/www.mslinn.com/collections/_llm/.aider.model.settings.yml Loaded model metadata from: - /home/mslinn/.local/share/uv/tools/aider-chat/lib/python3.12/site-packages/aider/resources/model-metadata.json Model metadata: { "cache_creation_input_token_cost": 3.75e-06, "cache_read_input_token_cost": 3e-07, "input_cost_per_token": 3e-06, "litellm_provider": "anthropic", "max_input_tokens": 1000000, "max_output_tokens": 64000, "max_tokens": 64000, "mode": "chat", "output_cost_per_token": 1.5e-05, "search_context_cost_per_query": { "search_context_size_high": 0.01, "search_context_size_low": 0.01, "search_context_size_medium": 0.01 }, "supports_assistant_prefill": true, "supports_computer_use": true, "supports_function_calling": true, "supports_pdf_input": true, "supports_prompt_caching": true, "supports_reasoning": true, "supports_response_schema": true, "supports_tool_choice": true, "supports_vision": true, "tool_use_system_prompt_tokens": 346 } Model settings: accepts_settings: [] cache_control: false caches_by_default: false edit_format: "whole" editor_edit_format: "editor-whole" 
editor_model_name: "anthropic/claude-sonnet-4-6" examples_as_sys_msg: false extra_params: null lazy: false name: "anthropic/claude-sonnet-4-6" overeager: false reasoning_tag: null reminder: "user" remove_reasoning: null send_undo_reply: false streaming: true system_prompt_prefix: null use_repo_map: false use_system_prompt: true use_temperature: true weak_model_name: "claude-3-5-haiku-latest" RepoMap initialized with map_mul_no_files: 2 Aider v0.86.2 Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output Weak model: claude-3-5-haiku-latest Git repo: ../../.git with 247 files Repo-map: using 4096 tokens, auto refresh Note: in-chat filenames are always relative to the git working dir, not the current working dir. Cur working dir: /var/sitesUbuntu/www.mslinn.com/collections/_llm Git working dir: /var/sitesUbuntu/www.mslinn.com
────────────────────────────────────────────────────────── >
Voice Input
Windows
You have several options for using your voice with Aider. If your computer runs Windows 10 or 11, see Microsoft Voice Control.
Mac
I do not own an up-to-date Apple Mac, so I am unable to explore the current voice capabilities of Macs.
Via ChatGPT
If your computer is really weak and you have an OpenAI subscription, Aider can coordinate remote voice processing performed by ChatGPT.
Faster Whisper
You can perform speech-to-text on NVIDIA GPUs and CPUs using Faster Whisper. Locally processed speech-to-text is more secure and private, and it works without an internet connection, so you might want to take a few minutes to set it up. For optimal performance, particularly for long audio, GPUs are faster than CPUs.
Since Anthropic does not provide a native speech-to-text API for Aider to use, you can run a local OpenAI-compatible speech-to-text server process like Whisper or Faster-Whisper.
$ pip install faster-whisper faster-whisper-server
Now launch Faster-Whisper and point Aider at it by setting openai-api-base, so Aider believes OpenAI runs on your local machine:
$ aider \
--openai-api-base http://localhost:8000/v1 \
--model anthropic/claude-sonnet-4-6
To run Faster-Whisper as a native Windows application, launch it from a PowerShell or CMD console.
I had problems running Aider as a native Windows app from some WSL directories (via the \\wsl.localhost\ UNC path to WSL’s ext4 file system). Faster-Whisper worked from the Linux user home directory (\\wsl.localhost\), but not from a project directory (\\wsl.localhost\).
IDE Usage
There are community-developed extensions that provide a graphical interface for Aider:
- Aider Composer: Adds a GUI similar to Cursor’s composer. It supports chat modes like ask, code, and architect, and provides inline diff previews to review changes before applying them.
- VSCode Aider: Adds commands to the Command Palette (CTRL+SHIFT+P) to start Aider, select models, and manage files directly from the explorer. When I tried it, this message appeared: "Aider is not yet fully optimized for Windows. Some features may behave unexpectedly. Do you want to continue?"
- Aider: An older extension that automates adding/dropping files from the chat based on which files you have open in your editor.
Comparison
compare Aider v2026 with other popular clis for agentic programming tasks. pay particular attention to maintaining a context through a long conversation
Aider v2026 remains the gold standard for developers who want control and efficiency in a terminal environment, whereas its main competitors—particularly Claude Code—have moved toward autonomous, massive-context operation. [1]
For maintaining context through long conversations, the philosophical divide is stark: Aider relies on a curated, structural understanding of your code (Repository Map), while competitors increasingly rely on brute-forcing massive context windows (1M+ tokens). [2, 3]
Quick Verdict: Aider vs. The Field
| Feature | Aider v2026 | Claude Code (Anthropic) | OpenCode / OpenDevin |
|---|---|---|---|
| Context Strategy | Repository Map (AST-based summary) + manual file selection | Massive context (1M token window) + “read everything” approach | Iterative planning (agentic loops that refresh context per step) |
| Long Conversation Handling | Degrades gracefully. You must occasionally reset/summarize. It prioritizes relevance to save tokens. | Excellent but expensive. It keeps the entire history in context longer, but costs spike significantly. | Variable. Depends on the “plan” state; can get stuck in loops if the plan drifts. |
| Cost Efficiency | High. Uses 4-5x fewer tokens for similar tasks. | Low. “Brute force” context management burns tokens rapidly. | Medium. Tailored by the specific agentic runtime. |
| Best For | Surgical, accurate edits; paired collaboration; keeping costs low. | Large-scale refactors; exploring unfamiliar codebases; “fix this whole repo.” | Autonomous tasks where you want to “set and forget.” |
Deep Dive: Context Management in Long Conversations
1. Aider: The “Surgical” Approach (Relevance > Volume)
Aider’s approach to long conversations is defined by its Repository Map. Instead of stuffing your entire project into the LLM’s context window, Aider builds a compressed map of your codebase based on the Abstract Syntax Tree (AST). [2, 9]
- How it handles drift: As a conversation gets long, Aider does not infinitely expand its memory. It relies on you to /add or /drop files. This keeps the context focused on the immediate task.
- The “Long Conversation” Bottleneck: In very long sessions (20+ turns), Aider can lose track of earlier instructions if they fall out of the sliding window.
- The Fix: Aider encourages a workflow where you “commit frequently.” Because Aider is git-native, the “context” is effectively saved to your git history. You are expected to clear the chat (/clear) after a major success and start fresh, relying on the git log and the Repo Map to maintain continuity. [2, 10, 11]
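The idea of a compressed structural map is easy to demonstrate. The toy below summarizes a Python module by its top-level definitions using the stdlib ast module; Aider's real repository map is far richer (it ranks symbols across the whole repo), but the principle is the same: send signatures, not bodies.

```python
import ast

def toy_repo_map(source: str) -> list[str]:
    """Summarize a module by its top-level classes and function signatures."""
    summary = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Keep only the signature; the body never reaches the LLM.
            args = ", ".join(a.arg for a in node.args.args)
            summary.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            summary.append(f"class {node.name}")
    return summary
```

A 500-line file collapses to a handful of signature lines, which is why this approach uses so many fewer tokens than sending whole files.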
2. Claude Code: The “Brute Force” Approach (Volume > Relevance)
Claude Code (often cited as Aider’s primary CLI rival in 2026) utilizes the massive context window of models like Claude 3.5/3.7 Opus to “remember everything.” [12, 13]
- How it handles drift: It simply keeps reading. With a ~1M token window, it can theoretically hold the entire conversation history and file contents for hours.
- The Trade-off: This is incredibly effective for “recall” (“Wait, why did we decide to change that variable 10 turns ago?”), but it is prohibitively expensive for daily driving compared to Aider. It is also slower, as the “time-to-first-token” increases as the context fills up. [7]
3. OpenCode / OpenDevin: The “Agentic Loop” Approach [14]
These tools operate more like autonomous agents than pair programmers. Their context strategy is based on plans. [15]
- How it handles drift: They maintain a “state” or “plan” separate from the chat history. The agent looks at the plan, executes a step, and then updates the plan.
- The Risk: If the plan becomes inaccurate during a long conversation, the agent can hallucinate or get stuck in a loop, repeatedly trying to fix a bug based on outdated context. Aider avoids this by keeping the human in the loop to “prune” the context manually. [16, 17, 18, 19, 20]
Summary Recommendation
- Stick with Aider if: You know your codebase well. You want a tool that feels like a sharp, fast scalpel. You prefer to manage the “attention” of the AI yourself (by adding/dropping files) to ensure high-quality code generation and low costs.
- Switch to Claude Code if: You are entering a brand new, large repo and need an agent to “read everything” to explain it to you, or if you are doing a massive refactor where “remembering” a change from 30 minutes ago is critical, regardless of the cost. [21]
- [1] https://www.linkedin.com
- [2] https://levelup.gitconnected.com
- [3] https://www.linkedin.com
- [4] https://news.ycombinator.com
- [5] https://skywork.ai
- [6] https://morphllm.com
- [7] https://www.reddit.com
- [8] https://blog.stackademic.com
- [9] https://skywork.ai
- [10] https://arxiv.org
- [11] https://www.nxcode.io
- [12] https://www.nxcode.io
- [13] https://www.mightybot.ai
- [14] https://news.ycombinator.com
- [15] https://medium.com
- [16] https://aimultiple.com
- [17] https://link.springer.com
- [18] https://aws.plainenglish.io
- [19] https://www.stockholm.ai
- [20] https://aicompetence.org
- [21] https://saigontechnology.com