At the heart of the software synthesizer was an AI system called NSynth — the N stands for neural, as in neural network — which was trained on hundreds of thousands of short recordings of single notes played on different instruments. That data makes it possible for the software to come up with the sounds of other notes, longer notes, and even notes that blend the sounds of multiple instruments.

NSynth came out last spring. It’s one of the foundational technologies from Magenta, an effort from the Google Brain AI research group to push the envelope in creating art and music with AI. Other Magenta projects include AI Duet, which lets you play piano alongside a computer that riffs on what you play, and sketch-rnn, an AI model for drawing pictures based on human drawings.

In developing NSynth, Google Brain worked together with DeepMind, an AI research firm under Google’s parent company, Alphabet. The researchers even released a data set of musical notes and an open-source version of the NSynth software for others to work with.

With the virtual synth, you could choose a pair of instruments and then move a slider toward one or the other to blend between the two. Then, with the keys on your computer keyboard, you could play piano notes with that unusual combination acting as a filter.
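Conceptually, that slider is doing something closer to interpolation between two instruments' learned representations than to simply crossfading their recordings. Here is a minimal sketch of the idea, with invented toy vectors and a made-up function name for illustration — NSynth's real embeddings come from its trained neural encoder, not hand-written numbers:

```python
import numpy as np

def blend_embeddings(emb_a, emb_b, slider):
    """Linearly interpolate between two instrument embeddings.

    slider: 0.0 -> all of instrument A, 1.0 -> all of instrument B.
    (Illustrative only; the real system decodes the blended
    embedding back into audio with a neural network.)
    """
    return (1.0 - slider) * emb_a + slider * emb_b

# Toy 4-dimensional "embeddings" standing in for real NSynth codes.
flute = np.array([1.0, 0.0, 0.5, 0.2])
sitar = np.array([0.0, 1.0, 0.1, 0.8])

# Slider at the midpoint: an even mix of the two timbre vectors.
halfway = blend_embeddings(flute, sitar, 0.5)
```

The point of blending in embedding space is that the decoded result can sound like a genuinely new instrument, rather than two recordings playing over each other.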

It’s interesting, but its powers are limited.

The hardware synth prototype, which goes by the name NSynth Super, provides several physical knobs to turn and a slick display to drag your finger across, making it more accommodating for live performers who are used to tweaking hardware boxes on the fly. There are controls for adjusting how much of a note you want to play, along with qualities known as attack, decay, sustain and release. And it lets you play sounds that combine four instruments at once, rather than two. It pushes the limits of what’s possible with NSynth.
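Those controls map onto standard synthesis ideas. Attack, decay, sustain and release describe an envelope that shapes a note's loudness over time, and a four-instrument blend on a touch surface can be thought of as bilinear interpolation between four corner sounds. The sketch below illustrates both concepts; the function names and parameters are invented for illustration and are not taken from the NSynth Super code:

```python
import numpy as np

def adsr_envelope(n, sr, attack, decay, sustain, release):
    """Amplitude envelope over n samples at sample rate sr:
    rise to full volume (attack), fall to the sustain level
    (decay), hold, then fade to silence (release).
    Times are in seconds; sustain is a level between 0 and 1."""
    a = int(attack * sr)
    d = int(decay * sr)
    r = int(release * sr)
    s = max(n - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack
        np.linspace(1.0, sustain, d, endpoint=False),  # decay
        np.full(s, sustain),                           # sustain
        np.linspace(sustain, 0.0, r),                  # release
    ])
    return env[:n]

def blend_four(tl, tr, bl, br, x, y):
    """Bilinear blend of four corner timbre vectors, with (x, y)
    in [0, 1] x [0, 1] standing in for the finger position on
    the touch surface."""
    top = (1 - x) * tl + x * tr
    bottom = (1 - x) * bl + x * br
    return (1 - y) * top + y * bottom
```

Applying the envelope to a synthesized tone is then just element-wise multiplication of the tone's samples by `adsr_envelope(...)`.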