
Alabo MacPepple-Jaja

Creative Technologist. AI, Audio & Generative Systems

I trained a neural audio model on 60 hours of archived vocals to build a real-time AI collaborator. That's the kind of thing I make: tools where machine learning meets sound meets something worth preserving.

Open to roles in AI audio tooling, creative technology, and generative systems. London / Berlin / Remote.

About

Goldsmiths Music Computing graduate. I build AI-powered audio tools: neural synthesis models trained on real-world corpora, full-stack generation studios wired to OpenAI and Stability AI, and generative systems for electronic and experimental music. Started producing at 12, had work on ITV and SBTV (3M+ viewers) by 16. Now focused on AI-native music creation and using generative models to preserve vocal traditions that don't exist in any dataset.

Read artist statement →

Projects

PERI

Neural audio synthesis as an act of cultural reclamation

Max/MSP · RAVE · Python · Neural Audio Synthesis · Post-Colonial Musicology
  • Trained a RAVE model on a curated 60-hour vocal archive from the African diaspora in Britain. These voices have no existing ML representation
  • Grounded in post-colonial musicology research. The question isn't just "can we synthesise this voice" but "who gets to preserve it and on whose terms"
  • Consent-based methodology designed to be reproducible across any audio archive: corpus curation, ethical review, model training, real-time inference
  • Output carries the tonal and timbral identity of the source. A generative collaborator, not a replica
Repo
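The consent-based curation step could be sketched as a manifest filter that gates every recording on documented consent before it reaches training. This is a minimal illustration, not PERI's actual code; the `Recording` record and its fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Recording:
    path: str
    contributor: str
    consent_given: bool   # explicit, documented consent for model training
    duration_s: float

def curate_corpus(manifest: list[Recording], min_hours: float = 0.0) -> list[Recording]:
    """Keep only recordings with explicit consent; enforce a minimum corpus size."""
    consented = [r for r in manifest if r.consent_given]
    total_hours = sum(r.duration_s for r in consented) / 3600
    if total_hours < min_hours:
        raise ValueError(f"only {total_hours:.1f}h of consented audio; need {min_hours}h")
    return consented

# Illustrative manifest: one consented recording, one withheld
manifest = [
    Recording("vocals/a.wav", "Contributor A", True, 5400.0),
    Recording("vocals/b.wav", "Contributor B", False, 3600.0),
]
corpus = curate_corpus(manifest)
print(len(corpus))  # 1 — only the consented recording passes the gate
```

The point of gating at the manifest level is that consent travels with each recording rather than being assumed for the archive as a whole, which keeps the pipeline reproducible across other archives.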

Audiogen

Full-stack AI sound design studio. Text to sound in seconds

Next.js · React · OpenAI API · Stability AI Audio API · Node.js
  • Routes generation requests across OpenAI audio and Stability AI's Stable Audio endpoints with automatic provider fallback
  • Custom synthesis parameter layer on top of API responses. Users control texture, brightness, intensity, and noisiness post-generation
  • Server-side architecture: Node.js proxy handles auth, rate limiting, and provider switching; React frontend renders waveform preview and parameter controls
Repo
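The automatic provider fallback described above can be sketched as an ordered try-each router. This is a hedged illustration, not Audiogen's server code (which is Node.js); the stub provider functions are hypothetical stand-ins for the real OpenAI audio and Stable Audio HTTP calls:

```python
from typing import Callable

# A provider takes a text prompt and returns audio bytes
Provider = Callable[[str], bytes]

def generate_with_fallback(prompt: str, providers: list[tuple[str, Provider]]) -> tuple[str, bytes]:
    """Try each provider in order; return the first successful result."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Stub providers for illustration only
def openai_audio(prompt: str) -> bytes:
    raise TimeoutError("rate limited")

def stable_audio(prompt: str) -> bytes:
    return b"RIFF...fake-wav-bytes"

name, audio = generate_with_fallback("rainy street ambience", [
    ("openai", openai_audio),
    ("stability", stable_audio),
])
print(name)  # prints "stability" after the OpenAI stub fails
```

Keeping the routing on the server side, as the architecture bullet describes, also means API keys and rate limits stay behind the proxy rather than in the browser.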

Spectral Transform Plugin

JUCE audio plugin. Describe the sound you want in plain language

C++ · JUCE · VST3 / AU · DSP · Natural Language
  • Real-time audio plugin where users type what they want their sound to become: "wider, brighter, less muddy"
  • Custom DSP chain: stereo width processing via M/S encoding, binaural spatialisation, spectral filtering, saturation. All driven by prompt interpretation
  • Ships as VST3, AU, and standalone. Drag and drop any audio file, type a transformation, hear the result immediately
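The prompt-interpretation idea, plain-language descriptors driving DSP parameters, could be sketched as a keyword-to-delta mapping. The real plugin is C++/JUCE; this Python sketch is illustrative only, and the parameter names and values are assumptions:

```python
# Hypothetical mapping from descriptors to parameter changes.
# "muddy" maps to a low-mid cut, so "less muddy" reduces mud.
KEYWORD_DELTAS = {
    "wider":    {"stereo_width": +0.3},
    "brighter": {"high_shelf_db": +3.0},
    "muddy":    {"low_mid_cut_db": -4.0},
}

def interpret_prompt(prompt: str) -> dict[str, float]:
    """Turn a plain-language prompt into a DSP parameter set."""
    params = {"stereo_width": 1.0, "high_shelf_db": 0.0, "low_mid_cut_db": 0.0}
    for word, deltas in KEYWORD_DELTAS.items():
        if word in prompt.lower():
            for key, delta in deltas.items():
                params[key] += delta
    return params

print(interpret_prompt("wider, brighter, less muddy"))
```

In the plugin itself the output of this interpretation step would feed the M/S width, spectral filtering, and saturation stages listed above, rather than printing a dictionary.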

Skills

AI & Machine Learning

RAVE · Neural Audio Synthesis · LoRA Fine-tuning · Voice Synthesis / RVC · LLM Integration · Prompt Engineering · OpenAI API · Stability AI Audio API · ElevenLabs API

Audio & Sound Design

Max/MSP · Max for Live · Ableton Live · TouchDesigner · Spectral Processing · Generative Audio · Sound Design

Software Development

Python · JavaScript · TypeScript · Next.js · React · Node.js · Tailwind CSS · REST APIs · SQL · Git / GitHub

Infrastructure & APIs

Vercel · Supabase · Firebase · MongoDB · Stripe API

Design & Creative Tools

Figma · Blender · Midjourney · Stable Diffusion · Creative Coding

Background

BMus Music Computing, Goldsmiths, University of London

Producing creative technology work since age 12. ITV and SBTV (3M+ viewers) at 16.

Currently building software at Elayr, a property technology company.