Tutorial: Build a multi-polyphonic synthesiser

Learn the basics of the MPE standard and how to implement a synthesiser that supports MPE. Hook your application up to a ROLI Seaboard Rise!

Level: Intermediate

Platforms: Windows, macOS, Linux

Classes: MPESynthesiser, MPEInstrument, MPENote, MPEValue, SmoothedValue

Getting started

Download the demo project for this tutorial here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.

If you need help with this step, see Tutorial: Projucer Part 1: Getting started with the Projucer.

Note
It would be helpful to read Tutorial: Build a MIDI synthesiser first, as this is used as a reference point in a number of places.

The demo project

The demo project is a simplified version of the MPEDemo project in the JUCE/examples directory. In order to get the most out of this tutorial you will need an MPE-compatible controller. MPE stands for MIDI Polyphonic Expression, a specification that allows multidimensional control data to be communicated between audio products.

Some examples of such MPE-compatible devices are ROLI's own Seaboard range (such as the Seaboard RISE).

Warning
The synthesiser may appear very quiet unless your controller transmits MIDI channel pressure and continuous controller 74 (timbre) in the way that the Seaboard RISE does.

With a Seaboard RISE connected to your computer the window of the demo application should look something like the following screenshot:

The demo application

You will need to enable one of the MIDI inputs (here you can see a Seaboard RISE is shown as an option).

The visualiser

Any notes played on your MPE-compatible device will be visualised in the lower portion of the window. This is shown in the following screenshot:

The visualiser

One key feature of MPE is that each new MIDI note event is assigned its own MIDI channel, rather than all notes from a particular controller keyboard being assigned to the same MIDI channel. This allows each individual note to be controlled independently by control change messages, pitch-bend messages, and so on. In the JUCE implementation of MPE, a playing note is represented by an MPENote object. An MPENote object encapsulates the following data (a small sketch of reading these fields follows the list):

  • The MIDI channel of the note.
  • The initial MIDI note value of the note.
  • The note-on velocity (or strike).
  • The pitch-bend value for the note: derived from any MIDI pitch-bend messages received on this note's MIDI channel.
  • The pressure for the note: derived from any MIDI channel pressure messages received on this note's MIDI channel.
  • The timbre for the note: typically derived from any controller messages on this note's MIDI channel for controller 74.
  • The note-off velocity (or lift): this is only valid after the note-off event has been received and until the playing sound has stopped.
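
To make this concrete, here is a minimal sketch (not part of the demo project; printNote() is a hypothetical helper) showing how these fields can be read from an MPENote:

static void printNote (const juce::MPENote& note)
{
    // hypothetical helper, for illustration only: logs the MPENote fields listed above
    DBG ("channel:   " << (int) note.midiChannel);
    DBG ("note:      " << (int) note.initialNote);
    DBG ("strike:    " << note.noteOnVelocity.asUnsignedFloat());
    DBG ("pressure:  " << note.pressure.asUnsignedFloat());
    DBG ("timbre:    " << note.timbre.asUnsignedFloat());
    DBG ("pitchbend: " << note.pitchbend.asSignedFloat());
    DBG ("frequency: " << note.getFrequencyInHertz() << " Hz");
}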

With no notes playing you can see that the visualiser represents a conventional MIDI keyboard layout. Each note is represented in the visualiser in the demo application as follows:

  • A grey filled circle represents the note-on velocity (a larger circle for higher velocity).
  • The MIDI channel for the note is displayed above the "+" symbol within this circle.
  • The initial MIDI note name is displayed below the "+" symbol.
  • An overlaid white circle represents the current pressure for this note (again, a larger circle for higher pressure).
  • The horizontal position of the note is derived from the original note and any pitch bend that has been applied to this note.
  • The vertical position of the note is derived from the timbre parameter for the note (from MIDI controller 74 on this note's MIDI channel).

Other setup

Before delving further into the aspects of the MPE specification demonstrated by this application, let's look at some of the other classes our application uses.

First of all, our MainComponent class inherits from the AudioIODeviceCallback [1] and MidiInputCallback [2] classes:

class MainComponent : public juce::Component,
                      private juce::AudioIODeviceCallback,  // [1]
                      private juce::MidiInputCallback       // [2]
{
public:

We also have some important class members in our MainComponent class:

    juce::AudioDeviceManager audioDeviceManager;       // [3]
    juce::AudioDeviceSelectorComponent audioSetupComp; // [4]
    Visualiser visualiserComp;
    juce::Viewport visualiserViewport;
    juce::MPEInstrument visualiserInstrument;
    juce::MPESynthesiser synth;
    juce::MidiMessageCollector midiCollector;          // [5]
};

The AudioDeviceManager [3] class handles the audio and MIDI configuration on our computer, while the AudioDeviceSelectorComponent [4] class gives us a means of configuring this from the graphical user interface (see Tutorial: The AudioDeviceManager class). The MidiMessageCollector [5] class allows us to easily collect messages into blocks of timestamped MIDI messages in our audio callback (see Tutorial: Build a MIDI synthesiser).

It is important that the AudioDeviceManager object is declared first, since we pass it to the constructor of the AudioDeviceSelectorComponent object (C++ initialises members in declaration order):

MainComponent()
    : audioSetupComp (audioDeviceManager, 0, 0, 0, 256,
                      true, // showMidiInputOptions must be true
                      true, true, false)

Notice another important argument passed to the AudioDeviceSelectorComponent constructor: the showMidiInputOptions argument must be true to show our available MIDI inputs.

We set up our AudioDeviceManager object in a similar way to Tutorial: The AudioDeviceManager class, but we also need to add a MIDI input callback [6]:

audioDeviceManager.initialise (0, 2, nullptr, true, {}, nullptr); // no input channels, two output channels
audioDeviceManager.addMidiInputDeviceCallback ({}, this);         // [6]
audioDeviceManager.addAudioCallback (this);

The MIDI input callback

The handleIncomingMidiMessage() function is called for each MIDI message received from any of the MIDI inputs enabled in the user interface:

void handleIncomingMidiMessage (juce::MidiInput* /*source*/,
                                const juce::MidiMessage& message) override
{
    visualiserInstrument.processNextMidiEvent (message);
    midiCollector.addMessageToQueue (message);
}

Here we pass each MIDI message to both:

  • our visualiserInstrument member — which is used to drive the visualiser display; and
  • the midiCollector member — which in turn passes the messages to the synthesiser in the audio callback.

The audio callback

Before any audio callbacks are made, we need to inform the synth and midiCollector members of the device sample rate, in the audioDeviceAboutToStart() function:

void audioDeviceAboutToStart (juce::AudioIODevice* device) override
{
    auto sampleRate = device->getCurrentSampleRate();
    midiCollector.reset (sampleRate);
    synth.setCurrentPlaybackSampleRate (sampleRate);
}

The audioDeviceIOCallbackWithContext() function itself contains nothing MPE-specific:

void audioDeviceIOCallbackWithContext (const float* const* /*inputChannelData*/,
                                       int /*numInputChannels*/,
                                       float* const* outputChannelData,
                                       int numOutputChannels,
                                       int numSamples,
                                       const juce::AudioIODeviceCallbackContext& /*context*/) override
{
    // make a buffer that refers to the device's output channels
    juce::AudioBuffer<float> buffer (outputChannelData, numOutputChannels, numSamples);

    // clear it to silence
    buffer.clear();

    juce::MidiBuffer incomingMidi;

    // get the MIDI messages for this audio block
    midiCollector.removeNextBlockOfMessages (incomingMidi, numSamples);

    // synthesise the block
    synth.renderNextBlock (buffer, incomingMidi, 0, numSamples);
}
Note
In fact, this is rather similar to the SynthAudioSource::getNextAudioBlock() function in Tutorial: Build a MIDI synthesiser.

Core MPE classes

All of the MPE-specific processing is handled by the MPE classes: MPEInstrument, MPESynthesiser, MPESynthesiserVoice, MPEValue, and MPENote (which we mentioned earlier).

The MPEInstrument class

The MPEInstrument class maintains the state of the currently playing notes according to the MPE specification. An MPEInstrument object can have one or more listeners attached and it can broadcast changes to notes as they occur. All you need to do is feed the MPEInstrument object the MIDI data and it handles the rest.
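
For example, a listener could be attached along these lines (a minimal sketch with assumed names, not code from the demo project):

struct NoteLogger : public juce::MPEInstrument::Listener
{
    // called by the instrument whenever a new note starts...
    void noteAdded (juce::MPENote note) override
    {
        DBG ("note started on channel " << (int) note.midiChannel);
    }

    // ...and whenever a note is released
    void noteReleased (juce::MPENote note) override
    {
        DBG ("note ended on channel " << (int) note.midiChannel);
    }
};

// usage: NoteLogger logger;
//        instrument.addListener (&logger);
//        instrument.processNextMidiEvent (message);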

In the MainComponent constructor we configure the MPEInstrument in legacy mode and set the default pitch bend range to 24 semitones:
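// assumed call (the snippet is missing from this page), mirroring the synth configuration shown below
visualiserInstrument.enableLegacyMode (24);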

This special mode is for backwards compatibility with non-MPE MIDI devices; in legacy mode the instrument ignores the current MPE zone layout.

Note
See Tutorial: Understanding MPE zones for an introduction to more flexible approaches using zones and zone layouts.

In the MainComponent::handleIncomingMidiMessage() function, shown earlier, we pass the MIDI messages on to our visualiserInstrument object.

In this example we are using an MPEInstrument object directly as we need it to update our visualiser display. For the purposes of audio synthesis we don't need to create a separate MPEInstrument object. The MPESynthesiser object contains an MPEInstrument object that it uses to drive the synthesiser.

The MPESynthesiser class

We set our MPESynthesiser with the same configuration as our visualiserInstrument object (in legacy mode with a pitch bend range of 24 semitones):

synth.enableLegacyMode (24);
synth.setVoiceStealingEnabled (false);

The MPESynthesiser class can also handle voice stealing for us, but as you can see here, we turn this off. When voice stealing is enabled, the synth will try to take over an existing voice if it runs out of voices and needs to play another note.
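
For the synthesiser to produce any sound it also needs voices. The demo project adds a number of MPEDemoSynthVoice objects (the voice class introduced below) in the MainComponent constructor, along these lines (a sketch; the exact voice count here is an assumption):

for (auto i = 0; i < 15; ++i)                // 15 voices: one per possible MPE member channel
    synth.addVoice (new MPEDemoSynthVoice());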

As we have already seen in the MainComponent::audioDeviceAboutToStart() function, we need to set the MPESynthesiser object's sample rate for it to work correctly.

And as we have also seen in the MainComponent::audioDeviceIOCallbackWithContext() function, we simply pass it a MidiBuffer object containing the messages that we want it to use in its synthesis operation.

The MPESynthesiserVoice class

You can generally use the MPESynthesiser and MPEInstrument classes as they are (although both classes can be used as base classes if you need to override some behaviours). The most important class you must override in order to use the MPESynthesiser class is the MPESynthesiserVoice class. This actually generates the audio signals from your synthesiser's voices.

Note
This is similar to the SynthesiserVoice class that is used with the Synthesiser class, but it is customised to implement the MPE specification. See Tutorial: Build a MIDI synthesiser.

The code for our voice class is in the MPEDemoSynthVoice class of the demo project. Here we implement the MPEDemoSynthVoice class to inherit from the MPESynthesiserVoice class:

class MPEDemoSynthVoice : public juce::MPESynthesiserVoice
{

We have some member variables to keep track of the values that control the level, timbre, and frequency of the tone that we generate. In particular, we use the SmoothedValue class, which is really useful for smoothing out discontinuities in the signal that would otherwise be caused by value changes (see Tutorial: Build a sine wave synthesiser).

    juce::SmoothedValue<double> level, timbre, frequency;

    double phase      = 0.0;
    double phaseDelta = 0.0;
    double tailOff    = 0.0;

    // some useful constants
    static constexpr auto maxLevel = 0.05;
    static constexpr auto maxLevelDb = 31.0;
    static constexpr auto smoothingLengthInSeconds = 0.01;
};
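
As a reminder of the SmoothedValue pattern (a generic sketch, not code from the demo project; sampleRate is assumed to hold the current device sample rate):

juce::SmoothedValue<double> value;
value.reset (sampleRate, smoothingLengthInSeconds); // set the ramp length in seconds
value.setTargetValue (0.5);                         // set a new target to ramp towards
auto smoothed = value.getNextValue();               // each call advances one step along the ramp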

Starting and stopping voices

The key to using the MPESynthesiserVoice class is its protected MPESynthesiserVoice::currentlyPlayingNote MPENote member, which provides access to the control information about the note during the various callbacks. For example, we override the MPESynthesiserVoice::noteStarted() function like this:

void noteStarted() override
{
    jassert (currentlyPlayingNote.isValid());
    jassert (currentlyPlayingNote.keyState == juce::MPENote::keyDown
             || currentlyPlayingNote.keyState == juce::MPENote::keyDownAndSustained);

    // get data from the current MPENote
    level    .setTargetValue (currentlyPlayingNote.pressure.asUnsignedFloat());
    frequency.setTargetValue (currentlyPlayingNote.getFrequencyInHertz());
    timbre   .setTargetValue (currentlyPlayingNote.timbre.asUnsignedFloat());

    phase = 0.0;
    auto cyclesPerSample = frequency.getNextValue() / currentSampleRate;
    phaseDelta = 2.0 * juce::MathConstants<double>::pi * cyclesPerSample;

    tailOff = 0.0;
}

The following "five dimensions" are stored in the MPENote object as MPEValue objects:

MPEValue objects make it easy to create values from 7-bit or 14-bit MIDI value sources, and to obtain these values as floating-point values in the range 0..1 or -1..+1.
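
For example (a generic sketch, not code from the demo project):

auto value = juce::MPEValue::from7BitInt (64);  // e.g. from a 7-bit controller message
auto unsignedValue = value.asUnsignedFloat();   // approximately 0.5, in the range 0..1
auto signedValue   = value.asSignedFloat();     // approximately 0.0, in the range -1..+1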

Note
The MPEValue class stores the value internally using the 14-bit range.

The MPEDemoSynthVoice::noteStopped() function triggers the "release" of the note envelope (or stops the note immediately, if requested):

void noteStopped (bool allowTailOff) override
{
    jassert (currentlyPlayingNote.keyState == juce::MPENote::off);

    if (allowTailOff)
    {
        // start a tail-off by setting this flag. The render callback will pick up on
        // this and do a fade out, calling clearCurrentNote() when it's finished.
        if (tailOff == 0.0) // we only need to begin a tail-off if it's not already doing so -
                            // the stopNote method could be called more than once.
            tailOff = 1.0;
    }
    else
    {
        // we're being told to stop playing immediately, so reset everything..
        clearCurrentNote();
        phaseDelta = 0.0;
    }
}
Note
This is very similar to SineWaveVoice::stopNote() function in Tutorial: Build a MIDI synthesiser. There isn't anything MPE-specific here.
Exercise
Modify the MPEDemoSynthVoice::noteStopped() function to allow the note-off velocity (lift) to modify the rate of release of the note. Faster lifts should result in a shorter release time.

Parameter changes

There are callbacks that tell us when either the pressure, pitch bend, or timbre have changed for this note:

void notePressureChanged() override
{
    level.setTargetValue (currentlyPlayingNote.pressure.asUnsignedFloat());
}

void notePitchbendChanged() override
{
    frequency.setTargetValue (currentlyPlayingNote.getFrequencyInHertz());
}

void noteTimbreChanged() override
{
    timbre.setTargetValue (currentlyPlayingNote.timbre.asUnsignedFloat());
}

Again, we access the MPESynthesiserVoice::currentlyPlayingNote member to obtain the current value for each of these parameters.

Generating the audio

The MPEDemoSynthVoice::renderNextBlock() function actually generates the audio signal, mixing this voice's signal into the buffer that is passed in:

void renderNextBlock (juce::AudioBuffer<float>& outputBuffer,
                      int startSample,
                      int numSamples) override
{
    if (phaseDelta != 0.0)
    {
        if (tailOff > 0.0)
        {
            while (--numSamples >= 0)
            {
                auto currentSample = getNextSample() * (float) tailOff;

                for (auto i = outputBuffer.getNumChannels(); --i >= 0;)
                    outputBuffer.addSample (i, startSample, currentSample);

                ++startSample;

                tailOff *= 0.99;

                if (tailOff <= 0.005)
                {
                    clearCurrentNote();
                    phaseDelta = 0.0;
                    break;
                }
            }
        }
        else
        {
            while (--numSamples >= 0)
            {
                auto currentSample = getNextSample();

                for (auto i = outputBuffer.getNumChannels(); --i >= 0;)
                    outputBuffer.addSample (i, startSample, currentSample);

                ++startSample;
            }
        }
    }
}

It calls the MPEDemoSynthVoice::getNextSample() function to generate the waveform:

float getNextSample() noexcept
{
    auto levelDb = (level.getNextValue() - 1.0) * maxLevelDb;
    auto amplitude = std::pow (10.0f, 0.05f * levelDb) * maxLevel;

    // timbre is used to blend between a sine and a square.
    auto f1 = std::sin (phase);
    auto f2 = std::copysign (1.0, f1);
    auto a2 = timbre.getNextValue();
    auto a1 = 1.0 - a2;

    auto nextSample = float (amplitude * ((a1 * f1) + (a2 * f2)));

    auto cyclesPerSample = frequency.getNextValue() / currentSampleRate;
    phaseDelta = 2.0 * juce::MathConstants<double>::pi * cyclesPerSample;
    phase = std::fmod (phase + phaseDelta, 2.0 * juce::MathConstants<double>::pi);

    return nextSample;
}


This simply crossfades between a sine wave and a (non-bandlimited) square wave, based on the value of the timbre parameter.
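
In other words, writing a for the timbre value in the range 0..1, each output sample is amplitude * ((1 - a) * sin(phase) + a * sign(sin(phase))).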

Exercise
Modify the MPEDemoSynthVoice class to crossfade between two sine waves, one octave apart, in response to the timbre parameter.

Summary

In this tutorial we have introduced some of the MPE-based classes in JUCE. You should now know:

  • What MPE is.
  • That MPE-compatible devices allocate each note its own MIDI channel.
  • How the MPENote class stores information about a note including its MIDI channel, the original note number, velocity, pitch bend, and so on.
  • That the MPEInstrument class maintains the state of the currently playing notes.
  • That the MPESynthesiser class contains an MPEInstrument object that it uses to drive the synthesiser.
  • That you must implement a class that inherits from the MPESynthesiserVoice class to implement your synthesiser's audio code.
