
How Human?
Confessions of a Songwriting Team with an AI Sidekick
We’ve been writing songs on and off for… well, let’s just say long enough that our “works in progress” list now has about 50 tracks on it. Plus a bunch more ideas scribbled on scraps of paper, voice notes, and probably the back of a shopping receipt somewhere.
Our songs come from life, things we’ve been through or felt deeply. The recurring themes? Loss, loneliness, empathy, resilience, feeling like you don’t quite fit in, or just not understanding this weird planet. Oh, and breakups. Breakups light up our songwriter brains like fireworks at midnight.
The quality? Well, we like most of what we write… eventually. But in all the time we’ve been doing this, we’ve almost finished two songs. Yep. Two. That’s not just low on the “song completion” scale, it’s practically subterranean. Why? Because it’s hard! We’ll sometimes make 100 acoustic sketches before we find one we like, then we’ll decide it needs changing.
And here’s the kicker: we’re enthusiastic but objectively average guitarists, keyboardists, bassists, drummers, and vocalists. And when we say average… we mean “crap.” We also have no musician network, no money for studio time, and no spare hours to make things sound exactly the way we want.
Enter AI.
We know, it sounds a bit shady: "AI made our music", but for us it's been an invaluable collaborator. It helps us turn bedroom recordings and rough sketches into something that actually sounds like a song. It helps us decide if a track is worth polishing or tossing in the "maybe later" pile.
People use AI in music in wildly different ways, from tiny tweaks to entire AI-generated songs. We’re not here to judge, but we do think it’s important to be transparent about how it is used.
Long term, the dream is to finish these songs, pick the best ones, and record them in a real studio. But until then, AI is our session band, vocalist, and producer all rolled into one. Just like a guitar or a microphone, it’s a tool we use to get where we want to go. Whether that makes us the devil… we’ll let you decide. We still believe the heart of what we make is human.
How We Use AI (From Idea to Demo)
1. The Idea – 100% Human
Acoustic guitar, a spark of inspiration, and maybe a rerun of Columbo playing in the background. We noodle around until something clicks, mumble some half-formed lyrics, and record the rough idea.
2. The Lyrics – 100% Human
Once we think the idea has legs, we start the slow, messy process of writing. We try to make every line mean something, to convey a feeling or story. Sometimes we succeed, sometimes we make them worse with endless rewrites.
3. The Concept – 100% Human
This is where we shape chords, pacing, and tone. It’s time-consuming, some songs get 100+ versions before we’re happy.
4. The Demo – Human + AI Jam Session
Sometimes we record with real instruments, but here’s where AI is a lifesaver.
- Track properties – 90% Human:
We set BPM, tone, style, production feel. AI doesn't always listen, but persistence pays off.
- Progressions – 0–100% Human:
We usually hand AI a sketchy concept track to follow. This has been the most successful way for us to get what we want. If that doesn't work, we tell it the chords and key. Sometimes it nails it. Sometimes it… doesn't. Every so often, we just let AI pick the progressions. We have two or three tracks where we let AI choose progressions based on the lyrics.
- Instruments – 50–100% Human:
AI is pretty good at using what we ask for, though style choices can override things. We've also started incorporating real instruments (and vocals) into the mix.
- Vocalist – 80% Human:
We pick the voice type, tone, style, harmonies, and whether it's a duet. If you've made a guide track, it will typically try to match the voice. Once you have a voice you like, you can create a persona and reuse it. We don't usually do that; it's more fun having a guest vocalist on every track.
- Vocals – 80–100% AI:
Once we hand it over, AI decides how to sing it. Sometimes it's great, sometimes it takes 30 tries. We're looking to move to real vocals, but right now the vocals are almost 100% AI. If you heard the demos, you'd understand why.
- Production – 80–100% AI:
We influence the production through earlier choices, but in the end, we trust the machine. We usually spend a bit of time tweaking it, and sometimes have to use an external editor to get it the way we want.
5. Final Mix – 100% Human
Once we've got the stems, we take over. Adjusting levels, adding effects, replacing AI parts with real instruments or vocals, that's all us. Right now, we're in the process of converting a backlog of bedroom recordings into packaged tunes. We think they're about 80% of the way to being complete. While we're in this phase, we use many of the tracks "as-is". Over time, we'll spend much more time in the final mix stage. It's good enough right now. 🙂
AI is no walk in the park. It can be really hard to get what you want. Sometimes you get so close and the last 20 seconds bite the dust. Sometimes you have to really fight to reach your vision, and sometimes you have to settle for "almost". You can spend an entire day revising, editing, and getting frustrated. But AI has helped us finally release ideas we've sat on for years. Tracks on the site are around 80% complete, good enough to share, but still works in progress. We make the music fit the lyrics, not the other way around, so some tracks run a little long. Hopefully some of the words hit home for you.
That’s how human we are.