Yesterday Google shared a video of a new hardware controller for its neural synthesizer, NSynth. The hardware is called the NSynth Super. The neural synthesizer comes out of Magenta, a Google research project that explores how machine learning can assist artists in creating in new ways. So how does NSynth work? It uses a deep neural network to learn about sounds, then creates brand-new sounds based on the characteristics it has learned.
According to Synthtopia:
[In the demo video] 16 original source sounds, across a range of 15 pitches, were recorded in a studio and then input into the NSynth algorithm, to precompute the new sounds.
The outputs, over 100,000 new sounds, were then loaded into the experience prototype. Each dial was assigned 4 source sounds. The dials can be used to select the source sounds to explore between.
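The dial-based blending described above can be sketched in code. This is a hypothetical illustration, not the actual NSynth Super firmware: it assumes each of the four source sounds assigned to a position has already been encoded into a latent embedding (here just a list of numbers), and shows how a touch position could bilinearly blend between the four corners before the result is decoded back into audio.

```python
def blend_embeddings(corners, x, y):
    """Bilinearly blend four corner sound embeddings.

    corners: four latent vectors (hypothetical stand-ins for
    NSynth encoder outputs), ordered [top-left, top-right,
    bottom-left, bottom-right].
    x, y: position in [0, 1] selected by the player.
    """
    tl, tr, bl, br = corners

    def lerp(a, b, t):
        # Linear interpolation between two vectors.
        return [(1 - t) * ai + t * bi for ai, bi in zip(a, b)]

    top = lerp(tl, tr, x)       # blend along the top edge
    bottom = lerp(bl, br, x)    # blend along the bottom edge
    return lerp(top, bottom, y) # blend between the two edges
```

At (0, 0) this returns the top-left embedding unchanged; at the center it returns an equal mix of all four, which is the intuition behind morphing smoothly between source sounds rather than crossfading their audio.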
The NSynth Super can be played from any MIDI source, such as a DAW or keyboard. It’s even available as a DIY project on GitHub, though there are no pre-built units or kits currently available.
Check out the videos below to learn more: