Mike Slinn

Options for Controlling Ableton Live with MCP

Published 2025-09-08. Last modified 2025-10-25.
Time to read: 3 minutes.

This page is part of the av_studio collection.

Large Language Models (LLMs) can be integrated with Ableton Live for tasks like controlling the DAW with natural language, generating MIDI patterns, or automating production workflows, often using technologies like Model Context Protocol (MCP), AbletonOSC, and Max for Live.

Let the LLM do the dirty work

Creative people need to find effective ways to collaborate with LLMs. LLMs support natural-language interaction with Ableton users as those users generate and edit creative works in Ableton Live.

Overview

  1. An MCP server is set up that acts as a translator and intermediary. The MCP Server exposes a Tool (or a set of Tools) defined in the protocol's schema. This Tool is specifically designed to interface with Ableton Live.
  2. The Tool’s implementation uses AbletonOSC (an open protocol based on OSC, not a virtual MIDI controller) to communicate directly with Ableton Live. AbletonOSC receives and sends OSC messages to control Live's functionality.
  3. An LLM like Claude or ChatGPT is used as the MCP client.
  4. The user provides natural language commands to the LLM (e.g., "create a new MIDI track" or "add more energy to that bassline").
  5. The LLM interprets the user's intent and decides to call the appropriate Tool exposed by the MCP Server.
  6. The LLM constructs a structured Tool Call request according to the MCP specification, filling in parameters that match the user's command (e.g., a function name like create_track(type='midi')). The LLM sends this Tool Call to the MCP Server.
  7. The MCP Server receives the Tool Call, looks at the requested function, and translates it into the corresponding AbletonOSC command (OSC message).
  8. Ableton Live receives the OSC command and performs the requested action, such as creating a track, generating MIDI, or adjusting parameters.

In other words, the LLM doesn't send OSC commands directly to Ableton Live; instead, it sends an MCP Tool Call to the MCP Server, which is responsible for the final translation into an AbletonOSC/OSC message.
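
Below is a minimal sketch of that translation layer (steps 1, 2, and 7). It assumes the FastMCP class from the official MCP Python SDK, the python-osc package, and AbletonOSC listening on its default UDP port, 11000; the create_track Tool and its type parameter are illustrative, not any particular server's API.

  # Minimal sketch: an MCP Tool that translates a Tool Call into an
  # AbletonOSC message. Requires the "mcp" and "python-osc" packages.
  from mcp.server.fastmcp import FastMCP
  from pythonosc.udp_client import SimpleUDPClient

  mcp = FastMCP("ableton-live")
  osc = SimpleUDPClient("127.0.0.1", 11000)  # AbletonOSC's default listen port

  @mcp.tool()
  def create_track(type: str = "midi") -> str:
      """Create a new MIDI or audio track in the current Live set."""
      # AbletonOSC exposes /live/song/create_midi_track and
      # /live/song/create_audio_track; index -1 appends to the track list.
      osc.send_message(f"/live/song/create_{type}_track", [-1])
      return f"Requested a new {type} track"

  if __name__ == "__main__":
      mcp.run()  # serve the Tool over stdio to an MCP host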

Examples and Applications

  • Generative MIDI: LLMs can generate melodies, chords, and rhythms based on custom constraints provided in text prompts (see the sketch after this list).
  • Workflow Automation: Offload repetitive tasks, like changing instrument routings or adjusting BPMs, using simple natural language commands.
  • Creative Collaboration: Use the LLM as a virtual collaborator, giving general creative directions to generate variations or ideas.
  • Building Sets: Ask the LLM to construct a basic setup for a chamber piece, including tracks and scenes.
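
As a sketch of the generative-MIDI idea, the snippet below writes a short motif into a fresh clip, assuming the same AbletonOSC setup as above; the hard-coded notes stand in for LLM output, and the messages use AbletonOSC's /live/clip_slot/create_clip and /live/clip/add/notes addresses.

  # Sketch: write a generated melody into clip slot 0 of track 0.
  from pythonosc.udp_client import SimpleUDPClient

  osc = SimpleUDPClient("127.0.0.1", 11000)

  # Create a 4-beat MIDI clip in track 0, clip slot 0.
  osc.send_message("/live/clip_slot/create_clip", [0, 0, 4.0])

  # Pretend the LLM produced this motif:
  # (pitch, start_beat, duration_beats, velocity, mute)
  notes = [(60, 0.0, 1.0, 100, 0), (63, 1.0, 1.0, 100, 0),
           (67, 2.0, 1.0, 100, 0), (63, 3.0, 1.0, 100, 0)]
  for pitch, start, duration, velocity, mute in notes:
      osc.send_message("/live/clip/add/notes",
                       [0, 0, pitch, start, duration, velocity, mute])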

Ancient History

This technology is evolving rapidly, changing month to month. In 2024, we were amazed by what AI assistants could do. In 2025, we moved on to MCP applications.

This is an excellent overview video of the previous level of integration (pre-MCP) between Ableton Live and an LLM:

Assistants

These ‘old’ assistants do not use MCP, so they run from a command line and do not interact directly with Ableton Live.

ableton-live-assistant

ableton-live-assistant controls Ableton Live 11+ with GPT-4 using Node.js.

Ableton Live Ultimate Assistant

This assistant bills itself as the most powerful and best-trained Ableton Live assistant, designed for all software versions, with a model fine-tuned for guidance and troubleshooting, an interactive and user-centric experience, and updates and tool recommendations.

MCP Servers for Ableton Live

The Model Context Protocol (MCP) is an open standard, open-source framework introduced by Anthropic in November 2024 to standardize the way artificial intelligence (AI) systems like large language models (LLMs) integrate and share data with external tools, systems, and data sources. Within six months, MCP had become a commonly used technology.

MCP servers improve the integration between Live and LLMs, so running command lines in a terminal next to Live is no longer required. Instead, users work in a chat dialog next to Live, or issue voice commands.

Obsolete

Older servers that have not been updated to work with mcp v1.3+ are not discussed, for example:

  • itsuzef/ableton-mcp
  • Simon-Kansara/ableton-live-mcp-server

Current

In my article discussing MCP history, I describe breaking changes that left older MCP clients and servers incompatible, caused by significant updates to the protocol and its dependencies. This section discusses currently viable MCP servers for interacting with Ableton Live.

Install these MCP servers in any modern MCP Host.
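
For example, Claude Desktop registers servers in its claude_desktop_config.json file. The entry below follows the pattern shown in the ahujasid/ableton-mcp README; the server name and command vary per project, so consult each README for the exact invocation.

  {
    "mcpServers": {
      "AbletonMCP": {
        "command": "uvx",
        "args": ["ableton-mcp"]
      }
    }
  }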

Below are all known MCP (Model Context Protocol) servers for Live that work with current implementations of MCP, listed in order of popularity. These servers facilitate communication between large language models (LLMs) or other AI assistants and Live, typically using OSC (Open Sound Control) or socket-based systems for music production automation and control.

ahujasid/ableton-mcp

I use this name to avoid ambiguity with other, similarly named servers.

Description: Enables AI assistants to control Ableton Live through an Ableton Remote Script that provides two-way communication, supporting music production tasks like MIDI manipulation and session control. It controls Ableton Live tracks, clips, effects, and playback, focusing on simple, prompt-based music production; complex tasks need step-by-step prompts. Documented to work with Claude Desktop and Cursor, although the documentation promotes Docker for no reason. Only default devices are supported, and manual script setup is required. The GitHub project has 1900 stars, 218 forks, and 3 primary contributors, but no releases or packages. Community engagement is high: an active Discord community and recent Reddit buzz (a July 2025 post with 258 votes and 98 comments praising the easy Claude setup). I wrote an article about using this MCP server with Ableton Live 12.

Key features:
  • Track/clip creation
  • MIDI editing
  • Playback control
  • Instrument loading
  • Library browsing

Requires: Live 10+

Ableton MCP Extended

Description: A robust MCP server for natural-language control of Ableton Live, emphasizing low-latency commands. Compatible with Claude Desktop, Gemini CLI, Cursor, and ElevenLabs MCP; it adds voice/audio generation via ElevenLabs, which requires an API key. Support seems better than for all the other servers mentioned in this table. It generates music, but automation points are currently unstable, VST support is limited (in progress), and the Live arrangement view is not supported. The GitHub project has 53 stars and 6 forks, but it is only 6 months old; community engagement is medium. This is a promising candidate, but more time is required before it might be ready for most users.

Key features:
  • ElevenLabs TTS/SFX generation
  • Session/transport control with quantization
  • Track management
  • MIDI clip/note manipulation
  • Device/parameter control
  • Browser navigation
  • Audio import
  • UDP low-latency protocol
  • Extensible tools (e.g., XY controller)

Requires: Python 3.10+, Live 11+

Ableton Vibe

Description: Not well documented and very basic; it seems like an amateur hack that will be quickly forgotten. Requires Node.js and Claude Desktop. Community engagement is low: 6 stars and 2 forks. macOS only; Ableton Live v11 is tested, but v12 is unsupported.

Key features:
  • MIDI track creation (e.g., at specific indices)
  • Programmatic device addition
  • Debugging via MCP Inspector

Requires: Python 3.8+
Most servers require Python 3.8 or higher, the uv package manager, and dependencies such as python-osc, AbletonOSC, and fastmcp.
