Tutorial: Plugin examples

This tutorial explains several audio/midi plug-in examples in detail and explores the open possibilities of plug-in development.

Level: Intermediate

Platforms: Windows, macOS, Linux, iOS

Plugin Format: VST, VST3, AU, AAX, Standalone

Classes: MidiBuffer, SortedSet, AudioParameterFloat, Synthesiser, MidiMessage, AudioProcessorValueTreeState, GenericAudioProcessorEditor

Getting started

There are several demo projects to accompany this tutorial. Download links to these projects are provided in the relevant sections of the tutorial.

If you need help with this step in any of these sections, see Tutorial: Projucer Part 1: Getting started with the Projucer.

The demo projects

The demo projects provided with this tutorial illustrate several different examples of audio/midi plugins. In summary, these plugins are:

  • An arpeggiator MIDI effect.
  • A noise gate with a sidechain input.
  • A multi-out sampler synthesiser.
  • A surround channel utility.
  • An inter-app audio effect for iOS.

All of these projects use the GenericAudioProcessorEditor class to lay out their GUI components.

Note
The code presented here is broadly similar to the PlugInSamples from the JUCE Examples.

The Arpeggiator Plugin

Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.

Note
Make sure to enable the "MIDI Effect Plugin" option in the "Plugin Characteristics" field of the project settings in the Projucer.

The Arpeggiator is a MIDI plugin without any audio processing that can be inserted on a software instrument or MIDI track in a DAW to modify the incoming MIDI signals.

tutorial_plugin_examples_arpeggiator_screenshot1.png
Arpeggiator plugin window

Arpeggiator Implementation

In the Arpeggiator class, we have defined several private member variables to implement our arpeggiator behaviour as shown below:

private:
//==============================================================================
AudioParameterFloat* speed;
int currentNote, lastNoteValue;
int time;
float rate;
SortedSet<int> notes;

Among these we have a SortedSet object that holds unique int values in sorted order. This will allow us to reorder the MIDI notes efficiently to produce the desired musical patterns.
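To illustrate the guarantees SortedSet gives us, here is a small standard-library stand-in: std::set, like JUCE's SortedSet, keeps its elements unique and in ascending order. The noteOn/noteOff helper names below are purely illustrative, not part of the JUCE API:

```cpp
#include <set>

// JUCE's SortedSet<int> keeps its elements unique and in ascending order;
// std::set<int> offers the same guarantees and models how the arpeggiator
// stores the currently held-down MIDI note numbers.
std::set<int> heldNotes;

void noteOn  (int noteNumber) { heldNotes.insert (noteNumber); }  // like notes.add()
void noteOff (int noteNumber) { heldNotes.erase  (noteNumber); }  // like notes.removeValue()
```

Adding the same note twice has no effect, and iteration always visits the notes from lowest to highest, which is what produces the arpeggiator's ascending pattern.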

In the class constructor, we initialise the plugin without any audio bus as we are creating a MIDI plugin. We also add a single parameter for the speed of the arpeggiator as shown here:

public:
//==============================================================================
Arpeggiator()
: AudioProcessor (BusesProperties()) // add no audio buses at all
{
addParameter (speed = new AudioParameterFloat ("speed", "Arpeggiator Speed", 0.0f, 1.0f, 0.5f));
}

In the prepareToPlay() function, we initialise some variables to prepare for subsequent processing as follows:

void prepareToPlay (double sampleRate, int samplesPerBlock) override
{
ignoreUnused (samplesPerBlock);
notes.clear(); // [1]
currentNote = 0; // [2]
lastNoteValue = -1; // [3]
time = 0; // [4]
rate = static_cast<float> (sampleRate); // [5]
}
  • [1]: First, we empty the SortedSet of MIDI note numbers.
  • [2]: The currentNote variable temporarily holds the current index for the SortedSet of notes.
  • [3]: The lastNoteValue variable holds the MIDI note number of the last note played so that we can stop it later.
  • [4]: The time variable keeps track of the note duration with respect to the buffer size and sample rate.
  • [5]: The rate stores the current sample rate in a float variable.

Next, we perform the actual processing in the processBlock() function as follows:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer& midi) override
{
// the audio buffer in a midi effect will have zero channels!
jassert (buffer.getNumChannels() == 0); // [6]
// however we use the buffer to get timing information
auto numSamples = buffer.getNumSamples(); // [7]
// get note duration
auto noteDuration = static_cast<int> (std::ceil (rate * 0.25f * (0.1f + (1.0f - (*speed))))); // [8]
MidiMessage msg;
int ignore;
for (MidiBuffer::Iterator it (midi); it.getNextEvent (msg, ignore);) // [9]
{
if (msg.isNoteOn()) notes.add (msg.getNoteNumber());
else if (msg.isNoteOff()) notes.removeValue (msg.getNoteNumber());
}
midi.clear(); // [10]
//...
  • [6]: To ensure that we deal with a MIDI plugin, assert that there are no audio channels in the audio buffer.
  • [7]: We still retrieve the number of samples in the block from the audio buffer.
  • [8]: According to the speed parameter of our user interface and the sample rate, we calculate the note duration in number of samples.
  • [9]: For every event in the MidiBuffer, we add the note to the SortedSet if the event is a "Note On" and remove the note if the event is a "Note Off".
  • [10]: We then empty the MidiBuffer to add the single notes back in the buffer one by one in the next step.
//...
if ((time + numSamples) >= noteDuration) // [11]
{
auto offset = jmax (0, jmin ((int) (noteDuration - time), numSamples - 1)); // [12]
if (lastNoteValue > 0) // [13]
{
midi.addEvent (MidiMessage::noteOff (1, lastNoteValue), offset);
lastNoteValue = -1;
}
if (notes.size() > 0) // [14]
{
currentNote = (currentNote + 1) % notes.size();
lastNoteValue = notes[currentNote];
midi.addEvent (MidiMessage::noteOn (1, lastNoteValue, (uint8) 127), offset);
}
}
time = (time + numSamples) % noteDuration; // [15]
}
  • [11]: We check whether the current time plus the number of samples in the current block reaches the note duration. If so, a note transition falls within this block and we modify the MidiBuffer accordingly. Otherwise we keep the MIDI state as is.
  • [12]: Calculate the sample offset at which the note transition occurs within the current audio block.
  • [13]: If the previous note is still playing, the lastNoteValue variable is greater than 0 and therefore we need to send a "Note Off" event to stop the note from playing with the correct sample offset. We then reset the lastNoteValue variable.
  • [14]: If there are notes held in the SortedSet, we advance to the next note index, store its note number in lastNoteValue and send a "Note On" event to play it at the correct sample offset.
  • [15]: Finally we keep track of our current time relative to the note duration whether we reach a note transition or not.
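The timing arithmetic above can be sketched in isolation. This is a plain C++ illustration with hypothetical helper names (noteDurationInSamples, transitionInBlock), not the actual plugin code:

```cpp
#include <cmath>

// Note duration in samples, as in step [8]: a speed of 0 gives the longest
// note (1.1 * 0.25 s worth of samples), a speed of 1 the shortest (0.1 * 0.25 s).
int noteDurationInSamples (float sampleRate, float speed)
{
    return static_cast<int> (std::ceil (sampleRate * 0.25f * (0.1f + (1.0f - speed))));
}

// A note transition falls inside the current block when the accumulated time
// plus the block size reaches the note duration (step [11]).
bool transitionInBlock (int time, int numSamples, int noteDuration)
{
    return (time + numSamples) >= noteDuration;
}
```

At 44.1 kHz with speed at 0.5, each note lasts roughly 6615 samples (0.15 s), so with 512-sample blocks a transition is triggered about every 13 blocks.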

The Noise Gate Plugin

Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.

The noise gate is an audio plugin that, when placed as an insert in a DAW track, mutes the input signal whenever the sidechain level stays below a certain threshold.

tutorial_plugin_examples_noise_gate_screenshot1.png
Noise gate plugin window

Noise Gate Implementation

In the NoiseGate class, we have defined several private member variables to implement our noise gate behaviour as shown below:

private:
//==============================================================================
AudioParameterFloat* threshold;
AudioParameterFloat* alpha;
int sampleCountDown;
float lowPassCoeff;

In the class constructor, we initialise the plugin with three stereo buses for the input, output and sidechain respectively [1]. We also add two parameters namely threshold and alpha [2] as shown here:

public:
//==============================================================================
NoiseGate()
: AudioProcessor (BusesProperties().withInput ("Input", AudioChannelSet::stereo()) // [1]
.withOutput ("Output", AudioChannelSet::stereo())
.withInput ("Sidechain", AudioChannelSet::stereo()))
{
addParameter (threshold = new AudioParameterFloat ("threshold", "Threshold", 0.0f, 1.0f, 0.5f)); // [2]
addParameter (alpha = new AudioParameterFloat ("alpha", "Alpha", 0.0f, 1.0f, 0.8f));
}

The threshold parameter determines the power level at which the noise gate should act upon the input signal. The alpha parameter controls the filtering of the sidechain signal.

In the isBusesLayoutSupported() function, we ensure that the number of input channels is identical to the number of output channels and that the input buses are enabled:

bool isBusesLayoutSupported (const BusesLayout& layouts) const override
{
// the sidechain can take any layout, the main bus needs to be the same on the input and output
return layouts.getMainInputChannelSet() == layouts.getMainOutputChannelSet()
&& ! layouts.getMainInputChannelSet().isDisabled();
}

In the prepareToPlay() function, we initialise some variables to prepare for subsequent processing as follows:

void prepareToPlay (double /*sampleRate*/, int /*maxBlockSize*/) override
{
lowPassCoeff = 0.0f; // [3]
sampleCountDown = 0; // [4]
}
  • [3]: The low-pass coefficient will be calculated from the sidechain signal and the alpha parameter to determine the gating behaviour.
  • [4]: The sample countdown keeps track of the number of samples left, based on the sample rate, before the gate closes again.

Next, we perform the actual processing in the processBlock() function as follows:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
auto mainInputOutput = getBusBuffer (buffer, true, 0); // [5]
auto sideChainInput = getBusBuffer (buffer, true, 1);
auto alphaCopy = alpha->get(); // [6]
auto thresholdCopy = threshold->get();
for (auto j = 0; j < buffer.getNumSamples(); ++j) // [7]
{
auto mixedSamples = 0.0f;
for (auto i = 0; i < sideChainInput.getNumChannels(); ++i) // [8]
mixedSamples += sideChainInput.getReadPointer (i) [j];
mixedSamples /= static_cast<float> (sideChainInput.getNumChannels());
lowPassCoeff = (alphaCopy * lowPassCoeff) + ((1.0f - alphaCopy) * mixedSamples); // [9]
if (lowPassCoeff >= thresholdCopy) // [10]
sampleCountDown = (int) getSampleRate();
// very inefficient way of doing this
for (auto i = 0; i < mainInputOutput.getNumChannels(); ++i) // [11]
*mainInputOutput.getWritePointer (i, j) = sampleCountDown > 0 ? *mainInputOutput.getReadPointer (i, j)
: 0.0f;
if (sampleCountDown > 0) // [12]
--sampleCountDown;
}
}
  • [5]: First, we separate the sidechain buffer from the main IO buffer for separate processing in subsequent steps.
  • [6]: Then we retrieve copies of the threshold and alpha parameters.
  • [7]: The outer loop iterates over the individual samples in the audio buffer block while the inner loops iterate over the channels in an interleaved manner. This allows us to keep the same gate state across all channels while processing a single sample.
  • [8]: For each channel in the sidechain, we add the signals together and divide by the number of sidechain channels in order to sum the signal to mono.
  • [9]: Next we calculate the low-pass coefficient from the alpha parameter and the sidechain mono signal using the formula y[i] = ((1 - alpha) * sidechain) + (alpha * y[i - 1]).
  • [10]: If this coefficient is greater than or equal to the threshold, we reset the sample countdown to the sample rate.
  • [11]: For every input channel, we copy the input buffer sample to the output buffer if the countdown is non-zero. Otherwise, we mute the output signal by writing zero samples.
  • [12]: We make sure to decrement the sample countdown value for every sample processed.
Note
The implementation shown here is not how you would typically program a noise gate. There are much more efficient and better algorithms out there.
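The formula in step [9] is a standard one-pole low-pass filter acting as an envelope follower on the sidechain. A minimal stand-alone sketch of the same recurrence (the EnvelopeFollower name is illustrative, not from the tutorial code):

```cpp
// One-pole low-pass used as the gate's sidechain envelope (step [9]):
// y[n] = alpha * y[n-1] + (1 - alpha) * x[n].
// A higher alpha means heavier smoothing and a slower response.
struct EnvelopeFollower
{
    float alpha = 0.8f;   // smoothing coefficient, the plugin's "alpha" parameter
    float y     = 0.0f;   // filter state, the plugin's lowPassCoeff

    float process (float x)
    {
        y = (alpha * y) + ((1.0f - alpha) * x);
        return y;
    }
};
```

Feeding a constant signal makes the output converge towards that value, which is why the gate only opens once the sidechain has been loud for a short while rather than on a single stray sample.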

The Multi-Out Synth Plugin

Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.

Warning
If using the PIP version of this project, please make sure to copy the Resources folder into the generated Projucer project.
Note
Make sure to enable the "Plugin MIDI Input" option in the "Plugin Characteristics" field of the project settings in the Projucer.

The multi-out synth is a software instrument plugin that produces up to five synthesiser voices per channel based on an audio file sample and routes the signal to up to 16 separate stereo outputs.

tutorial_plugin_examples_multi_out_synth_screenshot1.png
Multi-out synth plugin window

Multi-Out Synth Implementation

In the MultiOutSynth class, we have defined several private member variables to implement our multi-out synth behaviour as shown below:

private:
//...
//==============================================================================
AudioFormatManager formatManager;
OwnedArray<Synthesiser> synth;
SynthesiserSound::Ptr sound;

Among these we have an AudioFormatManager in order to register audio file formats to read our sample sound. We also have an array of Synthesiser objects that holds one synth per channel and a smart pointer to the sample sound we use in the tutorial.

We also declare some useful constants as an enum for the maximum number of midi channels and the maximum number of synth voices:

public:
enum
{
maxMidiChannel = 16,
maxNumberOfVoices = 5
};

In the class constructor, we initialise the plugin with 16 stereo output buses but no input bus [1] as we are creating a software instrument plugin. We also register basic audio file formats on the AudioFormatManager object in order to read the ".ogg" sample file [2] as shown here:

MultiOutSynth()
: AudioProcessor (BusesProperties()
.withOutput ("Output #1", AudioChannelSet::stereo(), true)
.withOutput ("Output #2", AudioChannelSet::stereo(), false)
.withOutput ("Output #3", AudioChannelSet::stereo(), false)
.withOutput ("Output #4", AudioChannelSet::stereo(), false)
.withOutput ("Output #5", AudioChannelSet::stereo(), false)
.withOutput ("Output #6", AudioChannelSet::stereo(), false)
.withOutput ("Output #7", AudioChannelSet::stereo(), false)
.withOutput ("Output #8", AudioChannelSet::stereo(), false)
.withOutput ("Output #9", AudioChannelSet::stereo(), false)
.withOutput ("Output #10", AudioChannelSet::stereo(), false)
.withOutput ("Output #11", AudioChannelSet::stereo(), false)
.withOutput ("Output #12", AudioChannelSet::stereo(), false)
.withOutput ("Output #13", AudioChannelSet::stereo(), false)
.withOutput ("Output #14", AudioChannelSet::stereo(), false)
.withOutput ("Output #15", AudioChannelSet::stereo(), false)
.withOutput ("Output #16", AudioChannelSet::stereo(), false)) // [1]
{
// initialize other stuff (not related to buses)
formatManager.registerBasicFormats(); // [2]
for (auto midiChannel = 0; midiChannel < maxMidiChannel; ++midiChannel) // [3]
{
synth.add (new Synthesiser());
for (auto i = 0; i < maxNumberOfVoices; ++i) // [4]
synth[midiChannel]->addVoice (new SamplerVoice());
}
loadNewSample (BinaryData::singing_ogg, BinaryData::singing_oggSize); // [5]
}

For each midi/output channel, we instantiate a new Synthesiser object, add it to the array [3] and create 5 SamplerVoice objects per synth [4]. We also load the sample file as binary data [5] using the loadNewSample() private function defined hereafter:

private:
//...
void loadNewSample (const void* data, int dataSize)
{
auto* soundBuffer = new MemoryInputStream (data, static_cast<std::size_t> (dataSize), false); // [6]
std::unique_ptr<AudioFormatReader> formatReader (formatManager.findFormatForFileExtension ("ogg")->createReaderFor (soundBuffer, true));
BigInteger midiNotes;
midiNotes.setRange (0, 126, true);
SynthesiserSound::Ptr newSound = new SamplerSound ("Voice", *formatReader, midiNotes, 0x40, 0.0, 0.0, 10.0); // [7]
for (auto channel = 0; channel < maxMidiChannel; ++channel) // [8]
synth[channel]->removeSound (0);
sound = newSound; // [9]
for (auto channel = 0; channel < maxMidiChannel; ++channel) // [10]
synth[channel]->addSound (sound);
}
  • [6]: First we create a MemoryInputStream from the sample binary data and create an AudioFormatReader from it for the "ogg" format.
  • [7]: Declare a SamplerSound object with the previously created stream reader and constrain the range of midi notes using a BigInteger.
  • [8]: For every Synthesiser object in the synth array, we make sure to clear the currently loaded SynthesiserSound before loading a new one.
  • [9]: Then assign the newly created SamplerSound to the smart pointer to keep a reference to the sound.
  • [10]: Finally, for every Synthesiser object we add the new SamplerSound.

To make sure that no buses are added or removed beyond our requirements, we override two functions from the AudioProcessor class as follows:

bool canAddBus (bool isInput) const override
{
return (! isInput && getBusCount (false) < maxMidiChannel);
}
bool canRemoveBus (bool isInput) const override
{
return (! isInput && getBusCount (false) > 1);
}

This prevents input buses from being added or removed and output buses from being added beyond 16 channels or removed completely.

In the prepareToPlay() function, we prepare for subsequent processing by setting the sample rate for every Synthesiser object in the synth array by calling the setCurrentPlaybackSampleRate() function:

void prepareToPlay (double newSampleRate, int samplesPerBlock) override
{
ignoreUnused (samplesPerBlock);
for (auto midiChannel = 0; midiChannel < maxMidiChannel; ++midiChannel)
synth[midiChannel]->setCurrentPlaybackSampleRate (newSampleRate);
}

Next, we perform the actual processing in the processBlock() function as follows:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiBuffer) override
{
auto busCount = getBusCount (false); // [11]
for (auto busNr = 0; busNr < busCount; ++busNr) // [12]
{
auto midiChannelBuffer = filterMidiMessagesForChannel (midiBuffer, busNr + 1);
auto audioBusBuffer = getBusBuffer (buffer, false, busNr);
synth [busNr]->renderNextBlock (audioBusBuffer, midiChannelBuffer, 0, audioBusBuffer.getNumSamples()); // [13]
}
}
  • [11]: First, we retrieve the number of output buses.
  • [12]: For every output bus (and therefore for every Synthesiser instance), we retrieve the corresponding audio bus buffer and filter the MIDI buffer so that it only contains messages for that synthesiser's MIDI channel, using a private helper function defined hereafter.
  • [13]: We can then call the renderNextBlock() function directly on the corresponding Synthesiser object to generate the sound by supplying the correct audio bus buffer and midi channel buffer.

The helper function to filter out midi channels is implemented as described below:

private:
//==============================================================================
static MidiBuffer filterMidiMessagesForChannel (const MidiBuffer& input, int channel)
{
MidiMessage msg;
int samplePosition;
MidiBuffer output;
for (MidiBuffer::Iterator it (input); it.getNextEvent (msg, samplePosition);) // [14]
if (msg.getChannel() == channel) output.addEvent (msg, samplePosition);
return output; // [15]
}
  • [14]: For every MIDI message in the input buffer, we check whether its channel matches the MIDI channel of the output bus we are looking for and, if so, add the MidiMessage to a newly-created MidiBuffer at the same sample position.
  • [15]: Once all MIDI messages have been iterated over, we return the buffer containing only the messages of the selected channel.
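The filtering logic can be sketched without JUCE using a simplified event type. MidiEvent and filterForChannel below are hypothetical names for illustration only:

```cpp
#include <vector>

// A simplified stand-in for filterMidiMessagesForChannel(): keep only the
// events whose channel matches, preserving their sample positions.
struct MidiEvent
{
    int channel;          // MIDI channel, 1-16
    int samplePosition;   // offset within the block
};

std::vector<MidiEvent> filterForChannel (const std::vector<MidiEvent>& input, int channel)
{
    std::vector<MidiEvent> output;
    for (const auto& e : input)
        if (e.channel == channel)
            output.push_back (e);   // like output.addEvent (msg, samplePosition)
    return output;
}
```

Calling this once per output bus, as processBlock() does, effectively splits one multi-channel MIDI stream into 16 single-channel streams, one per synthesiser.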

The Surround Plugin

Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.

The surround utility is a plugin that monitors incoming signal on individual channels including surround configurations and allows you to send ping sine waves to the channel of your choice.

tutorial_plugin_examples_surround_screenshot1.png
Surround plugin window

Surround Implementation

In the SurroundProcessor class, we have defined several private member variables to implement our surround behaviour as shown below:

private:
Array<int> channelActive;
Array<float> alphaCoeffs;
int channelClicked;
int sampleOffset;

Among these we have an array holding a per-channel countdown of samples during which each channel is considered active, and an array holding the alpha coefficients for each channel.

In the class constructor, we initialise the plugin with a stereo input bus and a stereo output bus by default, but the configuration will change according to the bus layout currently used by the host:

public:
SurroundProcessor()
: AudioProcessor(BusesProperties().withInput ("Input", AudioChannelSet::stereo())
.withOutput ("Output", AudioChannelSet::stereo()))
{}

In the isBusesLayoutSupported() function, we ensure that the input/output channel sets are not discrete layouts [1], that the main input channel set matches the main output channel set [2] and that the input bus is enabled [3] as shown below:

bool isBusesLayoutSupported (const BusesLayout& layouts) const override
{
return ((! layouts.getMainInputChannelSet() .isDiscreteLayout()) // [1]
&& (! layouts.getMainOutputChannelSet().isDiscreteLayout())
&& (layouts.getMainInputChannelSet() == layouts.getMainOutputChannelSet()) // [2]
&& (! layouts.getMainInputChannelSet().isDisabled())); // [3]
}

In the prepareToPlay() function, we initialise some variables to prepare for subsequent processing as follows:

void prepareToPlay (double sampleRate, int samplesPerBlock) override
{
channelClicked = 0; // [4]
sampleOffset = static_cast<int> (std::ceil (sampleRate)); // [5]
auto numChannels = getChannelCountOfBus (true, 0); // [6]
channelActive.resize (numChannels);
alphaCoeffs.resize (numChannels);
reset(); // [7]
triggerAsyncUpdate(); // [8]
ignoreUnused (samplesPerBlock);
}
  • [4]: First, we reset the temporary variable that designates the channel index that is clicked by the user.
  • [5]: Then we initialise the sample offset to one second's worth of samples so that no ping sounds at startup; this variable is later reset and incremented to keep track of the ping's phase.
  • [6]: We need to resize the active channels and coefficients arrays to the currently active number of channels for the block.
  • [7]: The reset() function is called at several places to clear the active channels array as defined later.
  • [8]: Finally, we trigger an asynchronous update to the GUI thread and handle the callback later on.

The reset() function is also called in the releaseResources() function after the block processing finishes:

void releaseResources() override
{
reset();
}

The reset() function is implemented by setting every channel countdown to 0 as follows:

void reset() override
{
for (auto& channel : channelActive)
channel = 0;
}

As for the asynchronous update of the GUI, we handle the callback by calling the updateGUI() function on the AudioProcessorEditor:

void handleAsyncUpdate() override
{
if (auto* editor = getActiveEditor())
if (auto* surroundEditor = dynamic_cast<SurroundEditor*> (editor))
surroundEditor->updateGUI();
}

Since the AudioProcessor inherits from the ChannelClickListener class defined in the SurroundEditor class, we have to override its virtual functions. The channelButtonClicked() callback function is triggered when the user clicks on a channel button. It provides the channel index that was pressed and resets the sample offset variable like so:

void channelButtonClicked (int channelIndex) override
{
channelClicked = channelIndex;
sampleOffset = 0;
}

The isChannelActive() helper function returns whether the specified channel is active by checking whether the active channel array still has samples to process:

bool isChannelActive (int channelIndex) override
{
return channelActive [channelIndex] > 0;
}

Next, we perform the actual processing in the processBlock() function as follows:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
for (auto ch = 0; ch < buffer.getNumChannels(); ++ch) // [9]
{
auto& channelTime = channelActive.getReference (ch);
auto& alpha = alphaCoeffs.getReference (ch);
for (auto j = 0; j < buffer.getNumSamples(); ++j) // [10]
{
auto sample = buffer.getReadPointer (ch)[j];
alpha = (0.8f * alpha) + (0.2f * sample);
if (std::abs (alpha) >= 0.1f) // [11]
channelTime = static_cast<int> (getSampleRate() / 2.0);
}
channelTime = jmax (0, channelTime - buffer.getNumSamples()); // [12]
}
//...
  • [9]: For each channel in the audio buffer, we get a reference to the active channel countdown samples and the alpha coefficient values.
  • [10]: Then for every sample in the buffer block, we get the input sample value of the channel and calculate the alpha coefficient using the formula alpha[i] = ((1 - x) * sample) + (x * alpha[i - 1]) where x = 0.8 in this case.
  • [11]: If the absolute value of the alpha coefficient is greater than or equal to a certain threshold (here 0.1), we set the countdown samples for that specific channel to half of the sample rate.
  • [12]: We also make sure to subtract the number of samples in the current block from the number of samples in the countdown.
//...
auto fillSamples = jmin (static_cast<int> (std::ceil (getSampleRate())) - sampleOffset,
buffer.getNumSamples()); // [13]
if (isPositiveAndBelow (channelClicked, buffer.getNumChannels())) // [14]
{
auto* channelBuffer = buffer.getWritePointer (channelClicked);
auto freq = (float) (440.0 / getSampleRate());
for (auto i = 0; i < fillSamples; ++i) // [15]
channelBuffer[i] += std::sin (2.0f * MathConstants<float>::pi * freq * static_cast<float> (sampleOffset++));
}
}
  • [13]: Next we calculate the number of output samples to fill by taking the smaller of the remaining ping samples (one second's worth minus the current sample offset) and the number of samples in the block.
  • [14]: Then we can check whether the channel index clicked is valid and get the write pointer for the correct channel buffer.
  • [15]: Finally we calculate the frequency of the sine wave by dividing the A4 frequency by the sample rate. Then for every sample to fill, we produce a sine wave with appropriate frequency and phase offset using the sample offset variable that we increment after the assignment for the next sample.
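The sine generation in [15] can be sketched in isolation. pingSample below is a hypothetical helper whose argument stands in for the running sampleOffset variable; expressing the frequency in cycles per sample is what keeps the phase continuous across blocks:

```cpp
#include <cmath>

// Sketch of the ping in step [15]: a 440 Hz sine whose phase is driven by a
// running sample offset, so it stays continuous from one block to the next.
float pingSample (int sampleOffset, double sampleRate)
{
    auto freq = static_cast<float> (440.0 / sampleRate);   // cycles per sample
    return std::sin (2.0f * 3.14159265f * freq * static_cast<float> (sampleOffset));
}
```

Because each call advances the phase by one sample's worth of the 440 Hz cycle, summing these values into the clicked channel for fillSamples samples produces a clean one-second ping.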

The Inter-App Audio Plugin

Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.

The inter-app audio example is a standalone iOS application that can be loaded into IAA compatible hosts such as GarageBand for iOS and displays a simple meter with a gain slider that responds to audio in the host application.

tutorial_plugin_examples_inter_app_audio_screenshot1.png
Inter-app audio plugin window

Inter-App Audio Implementation

In the IAAEffectProcessor class, we have defined several private member variables to implement our IAA behaviour as shown below:

private:
//==============================================================================
AudioProcessorValueTreeState parameters;
float previousGain = 0.0f;
std::array<float, 2> meterValues = { { 0, 0 } };
// This keeps a copy of the last set of timing info that was acquired during an
// audio callback - the UI component will display this.
AudioPlayHead::CurrentPositionInfo lastPosInfo;
ListenerList<MeterListener> meterListeners;

Among these we have an array of two float values to represent the left and right meter values, a temporary variable to store the current position of the host's playhead and a list of MeterListener objects in order to broadcast meter changes to its listeners.

In the class constructor of the IAAEffectProcessor class, we initialise the plugin with a stereo input bus and a stereo output bus, and configure the AudioProcessorValueTreeState as follows:

IAAEffectProcessor()
: AudioProcessor (BusesProperties()
.withInput ("Input", AudioChannelSet::stereo(), true)
.withOutput ("Output", AudioChannelSet::stereo(), true)),
parameters (*this, nullptr, "InterAppAudioEffect", // [2]
{ std::make_unique<AudioParameterFloat> ("gain", "Gain", // [1]
NormalisableRange<float> (0.0f, 1.0f), 1.0f / 3.14f) })
{
}

The gain parameter is created to control the level of the incoming signal [1] and the state of the AudioProcessorValueTreeState is saved with the appropriate Identifier that will be used to save and restore the XML file [2].

In the isBusesLayoutSupported() function, we ensure that the input channel is a stereo pair [3] and that the number of input channels is identical to the number of output channels [4].

bool isBusesLayoutSupported (const BusesLayout& layouts) const override
{
if (layouts.getMainInputChannelSet() != AudioChannelSet::stereo()) // [3]
return false;
if (layouts.getMainOutputChannelSet() != layouts.getMainInputChannelSet()) // [4]
return false;
return true;
}

In the prepareToPlay() function, we store the gain value in a temporary variable to prepare for subsequent processing as follows:

void prepareToPlay (double, int) override
{
previousGain = *parameters.getRawParameterValue ("gain");
}

Next, we perform the actual processing in the processBlock() function as follows:

void processBlock (AudioBuffer<float>& buffer, MidiBuffer&) override
{
auto gain = *parameters.getRawParameterValue ("gain"); // [5]
auto totalNumInputChannels = getTotalNumInputChannels();
auto totalNumOutputChannels = getTotalNumOutputChannels();
auto numSamples = buffer.getNumSamples();
for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i) // [6]
buffer.clear (i, 0, buffer.getNumSamples());
// Apply the gain to the samples using a ramp to avoid discontinuities in
// the audio between processed buffers.
for (auto channel = 0; channel < totalNumInputChannels; ++channel) // [7]
{
buffer.applyGainRamp (channel, 0, numSamples, previousGain, gain);
meterListeners.call (&IAAEffectProcessor::MeterListener::handleNewMeterValue,
channel,
buffer.getMagnitude (channel, 0, numSamples));
}
previousGain = gain; // [8]
// Now ask the host for the current time so we can store it to be displayed later.
updateCurrentTimeInfoFromHost (lastPosInfo); // [9]
}
  • [5]: First we retrieve the current gain value, the number of input and output channels and the number of samples in the buffer.
  • [6]: We clear any output channels that have no corresponding input channel so that they do not output garbage.
  • [7]: Then for every channel, we use a gain ramp to smoothly apply gain changes without any discontinuities and broadcast the new meter value to the AudioProcessorEditor.
  • [8]: Before we move on to the next block, we store the current gain value in previousGain so that the next block's ramp starts from it.
  • [9]: Lastly, we call the updateCurrentTimeInfoFromHost() helper function to retrieve the current time from the host's playhead as defined hereafter.

bool updateCurrentTimeInfoFromHost (AudioPlayHead::CurrentPositionInfo& posInfo)
{
if (auto* ph = getPlayHead()) // [10]
{
AudioPlayHead::CurrentPositionInfo newTime;
if (ph->getCurrentPosition (newTime)) // [11]
{
{
posInfo = newTime; // Successfully got the current time from the host.
return true;
}
}
// If the host fails to provide the current time, we'll just reset our copy to a default.
lastPosInfo.resetToDefault(); // [12]
return false;
}
  • [10]: First we retrieve the current playhead information from the host and make sure that the pointer is not null.
  • [11]: Then we ask the playhead to fill a local variable of type AudioPlayHead::CurrentPositionInfo and, if this succeeds, we copy the information into the posInfo reference and return true.
  • [12]: Otherwise, we reset the member variable to the default playhead information.

Summary

In this tutorial, we have examined some audio/midi plugin examples. In particular, we have:

  • Built a simple arpeggiator to create interesting musical patterns.
  • Built a noise gate to filter out unwanted noise.
  • Built a synthesiser with multiple outputs.
  • Built a surround compatible plugin to expand the channel count.
  • Learnt how an inter-app audio plugin sends audio between apps on iOS.

See also