
· By Will Harken

The Ultimate Guide to Making Electronic Music with AI


Artificial intelligence can now write a very good electronic song almost entirely by itself. This raises the question: How much should you let an AI write for you?

This guide covers all the different ways you can use artificial intelligence to create electronic music, whether you're a beginner with no music experience or a pro looking to introduce AI elements into your fully produced tracks.

Note: If you are looking for info about AI vocals, I recommend my article The AI Vocal Mixing Technique No One's Talking About.


The Importance of Listening

The most important thing in creating music in any genre is listening to lots of music within that genre. For example, if you're interested in creating cyberpunk and dark synth music, make sure to listen to plenty of tracks in those styles before getting started.

Learning to Describe Music

The golden skill of the future is learning how to describe what you want! In the past, you had to describe what you wanted to your producer. Now, you're going to have to learn how to describe what you want to a computer. 🎹

It will be helpful to learn things like beats per minute (BPM) and instrument types (percussion, bass, keyboard, synthesizer). This will come with time as you experiment and listen to music.
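To make this concrete, here is a minimal sketch of how those descriptors (genre, BPM, instruments, mood) can be assembled into a single text-to-music prompt. The function name, fields, and wording are illustrative assumptions, not tied to Suno, Udio, or any specific tool.

```python
# Combine musical descriptors into one prompt string you could paste
# into a text-to-music tool. All names here are illustrative.

def build_prompt(genre: str, bpm: int, instruments: list[str], mood: str = "") -> str:
    """Join genre, tempo, instrument types, and optional mood into a prompt."""
    parts = [genre, f"{bpm} BPM"]
    parts.extend(instruments)
    if mood:
        parts.append(f"{mood} mood")
    return ", ".join(parts)

prompt = build_prompt(
    genre="dark synthwave",
    bpm=110,
    instruments=["analog bass", "arpeggiated synthesizer", "punchy percussion"],
    mood="cyberpunk",
)
print(prompt)
# dark synthwave, 110 BPM, analog bass, arpeggiated synthesizer, punchy percussion, cyberpunk mood
```

The point isn't the code itself: it's that a good prompt is just a stack of precise descriptors, and you build that vocabulary by listening.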

So that you have a frame of reference for the rest of this article, here is "Basilisk," a song I made without any AI:


Reverse Prompting

Reverse prompting is a game-changer. A helpful tool posted by a Reddit user lets you provide a song you want to mimic and returns prompt ideas for recreating its style.

Check out this Reddit post for more information on reverse prompting for music.

Here is the custom GPT tool they mention in the post.

I don't think it will be long before Suno and Udio have reverse prompting built in, though...

Creating Electronic Music with AI Only

For beginners, it's possible to create a full song with nothing but prompts and a little experimentation. I recommend pasting your prompts into both Udio and Suno at the same time, doing a few generations on each platform, and then picking whichever one you like most as a starting point.

If you're going for speed and aren't as particular about quality control, Udio's premium tier offers a longer, two-minute generation option.

Suno automatically comes with a longer generation time of two minutes, getting you pretty close to a full song out of the gate.

An example of a song I created using only AI tools is "Fight For Your Life," which features a track generated by Suno. I layered a couple of extra percussion elements and a screaming man sound with Stable Audio on top of it, but I absolutely could have gotten away with just using the output straight from Suno.

"Nuclear Blood" is an example of a song that is almost entirely created with Udio's AI, with some extra stem layers from Stable Audio.

Want help making personalized music and audio? Visit this page to create yours now!

Extending Human-Produced Ideas

For professional producers, AI can be a great tool for getting ideas for songs you don't know how to finish. You could upload a few bars of finished material to either Udio or Suno and have them generate continuation ideas for you.

Then, you can take those ideas and produce them yourself. Or potentially just use the AI outputs as your finished track if they sound good enough. 🎧

My song "Derma Drive 9k" is an example where I gave Udio a starting point and generated the rest. The only human-produced part is the first drop, from 0:22 to 0:40.

Remember, AI can say a lot of unnecessary things. Your goal is to trim the fat and keep the listener engaged. Use tools like Ultimate Vocal Remover to split out stems and gain more control over the layers.

Generating Stems with AI

To layer stems, I used Stable Audio. With this tool, you can provide the specific BPM and information about the stem you want to generate. I often use this for percussion layers or making weird sound effects.

The bridge of my song "Kill the Strogg" (around 2:12) is generated with Stable Audio as the base layer, and then I added a few other things on top of it to tie it into the rest of the song. Everything else is human produced.

You could potentially use Udio or Suno to generate stems as well, but I found they are more suited for generating full tracks. If you go that route, you might end up having to use a stem separation tool like Ultimate Vocal Remover.
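Ultimate Vocal Remover is a GUI app; if you prefer scripting this step, the open-source Demucs separator does a similar job from the command line. Below is a minimal sketch, assuming Demucs is installed (`pip install demucs`); the track filename is made up, and the helper function is my own illustration, not part of Demucs.

```python
# Sketch: stem separation via the Demucs CLI, an open-source
# alternative to Ultimate Vocal Remover. Assumes `pip install demucs`.
import subprocess

def build_demucs_cmd(track: str, two_stems: str = "") -> list[str]:
    """Build a Demucs command: a full drums/bass/other/vocals split by
    default, or a two-stem split (e.g. vocals vs. everything else)."""
    if two_stems:
        # e.g. "vocals" -> outputs vocals.wav and no_vocals.wav
        return ["demucs", f"--two-stems={two_stems}", track]
    return ["demucs", track]

if __name__ == "__main__":
    # Split a full AI-generated track into four stems for re-layering.
    subprocess.run(build_demucs_cmd("my_udio_track.mp3"), check=True)
```

Once the stems are on disk, you can drop them into your DAW and mute, replace, or re-layer individual parts instead of fighting the full mix.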

Using Templates as a Starting Point

Another unsung hero for helping people create electronic songs is templates. You can get them from sites like Abletunes, which makes song templates for Ableton Live. If a template sounds close to the vibe you're going for, you can use it as a starting point.

However, this route requires significantly more skill because you have to know how to use the DAW and be able to customize the track to your liking. That may sound easy, but customizing a project in a DAW can be deceptively difficult.

Mastering Your AI-Generated Music

You can use tools like Ozone or automatic mastering like Landr to try to improve the final result of your song. Once again, these will generally underperform compared to a human mastering engineer, but if the quality of the song is already questionable because you used AI, it probably won't matter much anyway.

If you need help taking your AI-generated music to the next level, get in touch.

Embracing the AI Aesthetic

An important disclaimer with any AI-generated audio right now: the quality still isn't fully comparable to what you would hear from a professional production. AI-generated tracks generally lack the same punch and sometimes carry a noticeable fuzziness. 🎼

However, you could lean into the lower-quality aesthetic of a lot of current AI output. It doesn't necessarily sound bad per se. An example might be making music that falls squarely in the lo-fi genre.

A lot of what makes music "good" is how well it appeals to the listeners of the genre it's going for. So, just because the song doesn't sound 100% like a radio-crisp, crystal-clear Ariana Grande song doesn't mean that your final result is bad. But that is definitely a fine line to tread.


The good news is, even if you make a perfectly flawless song, there's a good chance that it will get lost in the MILLIONS and MILLIONS of songs that are being created every week anyway. So, don't stress about it too much. We're here to make music, not get famous. 😎

For more insights on using AI in music production, check out these helpful resources: