Show HN: System to have Claude compose and perform a techno track end-to-end

https://github.com/hughes7370/AbletonComposer
I've been fascinated by a fundamental gap in AI music: Current models (Suno, Udio) generate audio via sequence prediction—they pattern-match existing waveforms but don't "know" music theory. Consequently, you can't get stems, adjust the mix, or modify the arrangement logic.

I wanted to see if an LLM could compose music from first principles—understanding scales, chord progressions, and arrangement theory—and control a DAW to generate the audio.

Loom Demo: https://www.loom.com/share/8f55136085a24ed1bc79acb5cdda194c

The Stack

Ableton Live 12: The DAW engine.

Ableton MCP (Model Context Protocol): Forked and extended to allow Claude to manipulate MIDI, clips, and devices.

Claude 3.5 Sonnet: The "Composer," equipped with ~12 custom skill files covering arrangement, EQ, and sound design.

Gemini: The feedback loop. Used to analyze rendered audio (via stem separation) and provide critique for iteration (the shape of this loop is sketched after the stack list).

Python: 1,700+ lines of performance scripts.
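To show how those pieces fit together, here is a minimal sketch of the render / critique / revise loop, assuming a simple pass-until-acceptable control flow. Every function below is a stubbed placeholder for the real MCP calls, stem separator, and Gemini/Claude prompts, not the repository's actual API:

Python

  def render_in_ableton(arrangement: dict) -> str:
      # Placeholder: the real system exports audio through the Ableton MCP bridge.
      return "render.wav"

  def separate_stems(audio_path: str) -> dict:
      # Placeholder: stem separation so the critic can hear parts in isolation.
      return {"drums": audio_path, "bass": audio_path, "synths": audio_path}

  def ask_gemini_for_critique(stems: dict) -> dict:
      # Placeholder: a multimodal critique of mix balance and arrangement.
      return {"acceptable": False, "notes": "kick and bass are masking each other"}

  def ask_claude_to_revise(arrangement: dict, critique: dict) -> dict:
      # Placeholder: the composer incorporates the critique into the next pass.
      arrangement.setdefault("revisions", []).append(critique["notes"])
      return arrangement

  def iterate_on_track(arrangement: dict, max_rounds: int = 3) -> dict:
      for _ in range(max_rounds):
          stems = separate_stems(render_in_ableton(arrangement))
          critique = ask_gemini_for_critique(stems)
          if critique["acceptable"]:
              break
          arrangement = ask_claude_to_revise(arrangement, critique)
      return arrangement

  print(iterate_on_track({"style": "peak-time techno"}))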

The Engineering Challenges

1. The Sample Library Problem

Techno relies on curated samples, not just synthesis. But LLMs can't "hear" a sample library to pick the right kick or hat.

I built a sample analysis system that pre-processes the library and generates JSON profiles. This allows Claude to query samples by spectral characteristics rather than just filenames.

JSON

  {
    "file_name": "001_Stab_Low.wav",
    "bpm": 126.0,
    "key": "N/A (atonal)",
    "spectral_centroid_mean": 297.2,
    "brightness": 0.04,
    "warmth": 1.0,
    "texture_tags": ["dark", "warm", "soft-attack", "distorted"],
    "category": "bass"
  }
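A query layer over these profiles can be as simple as filtering the JSON by category, brightness, and tags. The sketch below assumes the profiles live as individual JSON files in a sample_profiles/ directory; the function names and path are illustrative, not the repository's actual interface:

Python

  import json
  from pathlib import Path

  def load_profiles(profile_dir: str) -> list[dict]:
      # Read every pre-computed *.json sample profile in a directory.
      return [json.loads(p.read_text()) for p in Path(profile_dir).glob("*.json")]

  def find_samples(profiles, category=None, max_brightness=None, required_tags=()):
      # Filter by category, a brightness ceiling, and required texture tags.
      hits = []
      for prof in profiles:
          if category and prof.get("category") != category:
              continue
          if max_brightness is not None and prof.get("brightness", 1.0) > max_brightness:
              continue
          if not set(required_tags).issubset(prof.get("texture_tags", [])):
              continue
          hits.append(prof["file_name"])
      return hits

  # Example query: dark, warm bass hits for the low end.
  profiles = load_profiles("sample_profiles/")
  print(find_samples(profiles, category="bass", max_brightness=0.1,
                     required_tags=("dark", "warm")))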

2. The Performance Layer (Polymetrics)

Ableton's Session View handles loops, but a track needs transitions. I didn't want static blocks; I wanted a live performance.

I wrote a Python performance engine that creates a real-time automation script. It handles volume fading, spectral carving (ducking frequencies when elements collide), and—most importantly—polymetric cycling to create hypnotic phasing:

Python

  # Polymetric cycle lengths in beats
  POLY = {
      "STAB": 7,        # Cycles every 7 beats
      "RIDE": 5,        # Cycles every 5 beats
      "DING": 11,       # Cycles every 11 beats
      "ARPEGGIO": 13    # Cycles every 13 beats
  }
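To make the phasing concrete, here is a small self-contained illustration (my own reading of the mechanism, not code from the repo): each element keeps its own position modulo its cycle length, so its accents drift against the 4/4 grid, and the combined pattern only realigns after the least common multiple of all the cycle lengths.

Python

  from math import lcm

  POLY = {"STAB": 7, "RIDE": 5, "DING": 11, "ARPEGGIO": 13}

  def cycle_positions(global_beat: int) -> dict[str, int]:
      # Where each element sits inside its own cycle at a given beat.
      return {name: global_beat % length for name, length in POLY.items()}

  for beat in range(0, 16, 4):                  # the downbeat of each 4/4 bar
      print(beat, cycle_positions(beat))

  print("beats until full realignment:", lcm(*POLY.values()))  # 5 * 7 * 11 * 13 = 5005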

The Pipeline

Planning: Claude analyzes target styles (e.g., Ben Klock, Surgeon) and generates an arrangement map (Intro -> Peak -> Outro).

Setup: Spawns 19+ tracks with specific instrument racks.

Generation: Python scripts generate MIDI patterns (e.g., 256 events following G minor with velocity curves); a sketch of this step follows the list.

Performance: The system "plays" the track, automating parameters in real time based on the energy-curve logic.
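Here is a minimal sketch of what the Generation step could look like, assuming a plain note-event format and a sine-shaped velocity "energy curve"; the constants and event schema are illustrative, and the real scripts write these events into Ableton through MCP rather than printing them:

Python

  import math
  import random

  G_MINOR = [55, 57, 58, 60, 62, 63, 65]    # G3 A3 Bb3 C4 D4 Eb4 F4 as MIDI notes

  def generate_pattern(num_events: int = 256, beats_per_cycle: int = 64,
                       seed: int = 7) -> list[dict]:
      # Emit note events (beat, pitch, velocity, duration) constrained to G minor,
      # with velocities following a slow sine "energy curve" plus a little jitter.
      rng = random.Random(seed)
      events = []
      for i in range(num_events):
          beat = i * beats_per_cycle / num_events
          velocity = int(70 + 40 * math.sin(2 * math.pi * beat / beats_per_cycle)
                         + rng.randint(-5, 5))
          events.append({
              "beat": round(beat, 3),
              "pitch": rng.choice(G_MINOR),
              "velocity": max(1, min(127, velocity)),
              "duration": 0.25,
          })
      return events

  print(generate_pattern()[:4])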

Results & Learnings

The output is recognizably techno. The mix is balanced, and the structure is logical. However, while the system creates music that is theoretically correct, it currently lacks the intuition to break rules in interesting ways—the "happy accidents" of human production are missing.

I suspect the next step for symbolic music generation is modeling "taste" as a constraint function rather than just adhering to theory.
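One way to read that, purely as a sketch of my own and not anything from the repo: score candidate patterns on theory-correctness and on deviation from the most predictable choice, so that mild rule-breaking can win when it is interesting enough.

Python

  def taste_score(theory_correctness: float, surprise: float,
                  novelty_weight: float = 0.3) -> float:
      # Both inputs in [0, 1]; a higher score means a more "tasteful" candidate.
      return (1 - novelty_weight) * theory_correctness + novelty_weight * surprise

  print(taste_score(theory_correctness=0.95, surprise=0.1))   # correct but predictable
  print(taste_score(theory_correctness=0.80, surprise=0.7))   # slightly "wrong" but more interesting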
