Home


Simple JUCE sampler sequencer synced using Ableton Link.

How to use


How to build your own Juce project using Ableton Link

The following instructions have already been carried out for this project; they are provided as guidance for building new Juce projects that need Ableton Link.

1. Add Ableton Link dependency (via git command line)
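For example, assuming your project keeps third-party code in a Dependencies folder (as the header search paths in step 2 suggest), a recursive clone pulls in Link together with its asio-standalone submodule:

    cd Dependencies
    git clone --recursive https://github.com/Ableton/link.git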

2. Include Header Search Paths

Two directories from the Link repository are needed:

  1. link/include
  2. link/modules/asio-standalone/asio/include

In this project, where the Link checkout lives under Dependencies, these are added to the Projucer's "Header Search Paths" as:

../../Dependencies/link/include
../../Dependencies/link/modules/asio-standalone/asio/include

3. Configure Projucer

Link requires the platform to be specified; otherwise, compiler/build errors result.

MacOSX & iOS

In the Projucer, go to the MacOSX and/or iOS IDE exporters (e.g. Xcode) and include the following under "Extra Preprocessor Definitions":
LINK_PLATFORM_MACOSX=1

Windows

In the Projucer, go to the Visual Studio exporter and include the following under "Extra Preprocessor Definitions":
LINK_PLATFORM_WINDOWS=1

4. Include Header Files
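With the search paths from step 2 in place, the headers you will need are Link's main header and, for guideline 1 below, the HostTimeFilter:

    #include <ableton/Link.hpp>                // core Link session API
    #include <ableton/link/HostTimeFilter.hpp> // sample-time to host-time filter (see guideline 1)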

Guidelines on implementing synchronization in your project

These guidelines are intended to be specific to Juce and an extension of the advice provided by Ableton in their GitHub README. Go there first for the basics, then come back here for some time savers.

1. Use the HostTimeFilter for SessionState synchronization.

Ableton recommends several approaches to synchronization, chosen mainly by platform. Juce, however, is cross-platform and provides its own audio device layer, engine, buffers, etc. In some of Ableton's examples, the microseconds time used for synchronization in each audio buffer callback comes from calling Link::clock().micros(), which pulls the time from whatever system clock Link selects for the OS. Although this is possible to do in Juce, in my experience the timing varied unpredictably across machines and I could not find the root cause (Ableton does not provide a Juce-specific example).
In other examples, you'll find the use of the HostTimeFilter. In short, the microseconds time this object gives you is based not on a system clock, but on a sample time that you count manually in each audio buffer callback. The HostTimeFilter then runs a linear regression to estimate the microseconds time to synchronize with, so it needs some "warming up".
For this algorithm to work best, start counting your sample time as soon as Link is instantiated, i.e. do not wait until the user presses play. For example, with a buffer size of 512, you might increment a time_in_samples variable by 512 in every audio buffer callback. My advice is not to reset this sample time to 0 when the user stops or starts playback; otherwise there will be a short period after playback starts where it sounds out of sync until Link can DJ its way back in.
In the project, search for "sample_time" and "host_time_filter" to see how this is done.
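As a minimal sketch of the pattern (illustrative names, not this project's exact code):

    #include <ableton/Link.hpp>
    #include <ableton/link/HostTimeFilter.hpp>
    #include <juce_audio_basics/juce_audio_basics.h>

    class LinkedSource : public juce::AudioSource
    {
    public:
        void prepareToPlay (int, double) override {}
        void releaseResources() override {}

        void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
        {
            // Ask the filter for the host time (microseconds) corresponding
            // to the current running sample count.
            const auto host_time = host_time_filter.sampleTimeToHostTime (sample_time);

            // Keep counting even while the transport is stopped, so the
            // filter's regression stays warmed up.
            sample_time += bufferToFill.numSamples;

            auto session = link.captureAudioSessionState();
            const auto beat = session.beatAtTime (host_time, quantum); // add the latency offset from guideline 2 here
            juce::ignoreUnused (beat);            // ...render the buffer from `beat`...
            bufferToFill.clearActiveBufferRegion();
        }

    private:
        ableton::Link link { 120.0 };  // enable() it elsewhere, e.g. from the UI
        ableton::link::HostTimeFilter<ableton::Link::Clock> host_time_filter;
        double sample_time = 0;        // never reset on play/stop
        double quantum = 4.0;          // e.g. one 4-beat bar
    };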

2. Use numSamples provided by the Juce audio buffers for Link's latency correction

Link requires the output microseconds time to be offset by a latency correction that again depends on the audio platform. In Ableton's examples, they sometimes rely on the audio platform to calculate and provide this latency (as with ASIO); other times, they calculate it explicitly from the buffer size and sample rate. In my experience, the approach in the Jack example, which derives the latency from the device's buffer size, worked best.
At first, I used AudioDeviceManager::getCurrentAudioDevice()->getOutputLatencyInSamples() for the calculation, and although this was mostly ok, I later found that the warm-up time to synchronization was slightly longer (and noticeably so) than when just using the buffer size.
Instead, either use AudioDeviceManager::getCurrentAudioDevice()->getCurrentBufferSizeSamples(), or just use the size of the buffer handed to you in Juce's audio callbacks (such as AudioBuffer::getNumSamples() or AudioSourceChannelInfo::numSamples).
See this project's calculate_output_time() method.
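A sketch of that calculation (names are illustrative; see the real calculate_output_time() in the project):

    #include <chrono>
    #include <cmath>

    // Host time at which this buffer will actually reach the speakers:
    // the buffer's begin time plus the duration of one device buffer.
    std::chrono::microseconds calculate_output_time (std::chrono::microseconds buffer_begin_time,
                                                     int numSamples, double sample_rate)
    {
        const auto buffer_duration = std::chrono::microseconds (
            std::llround (numSamples * 1.0e6 / sample_rate));
        return buffer_begin_time + buffer_duration;
    }

The returned time is what you then pass to SessionState::beatAtTime() and friends.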

3. Prepare for time jumps and buffer overlaps.

Coming from Juce VST development, where samples and ppq beats are fed into your plugin at exactly the moment you expect them, you might be surprised that your app glitches out when you add Link without preparing for these jumps and overlaps. From one buffer cycle to the next, the beat or phase values provided by Link (via beatAtTime() and phaseAtTime()) may jump irregularly (e.g. buffer 1 might span beat values 0.11 to 0.13 for samples 0 and 511, then buffer 2 has already jumped to 0.21 to 0.23), or they may overlap (e.g. buffer 1 spans 0.11 to 0.13 and buffer 2 spans 0.12 to 0.14).
In my experience, these cases must be handled to prevent unwanted artifacts. There are many approaches you could take, and any suggestion of mine may be too convoluted or not accurate enough for your situation, so I leave the approach to you. In my app, I dedicate an entire class to mapping the results of Link's SessionState::beatAtTime() method onto each sample position in every buffer cycle.
Since Link's algorithms operate on real-time values, these issues can be hard to debug without setting up a test harness or mocking framework. Just don't mistake time jumps or buffer overlaps for bugs in themselves; they are occasional, but normal.
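For illustration only (a toy detector, not the mapping class described above), one way to observe these jumps and overlaps is to compare each buffer's start beat against the beat the previous buffer predicted:

    #include <juce_core/juce_core.h> // for DBG

    struct BeatContinuityChecker
    {
        double expected_next_beat = -1.0; // beat expected at the next buffer's first sample

        void check (double buffer_start_beat, double buffer_end_beat)
        {
            constexpr double epsilon = 1.0e-6;
            if (expected_next_beat >= 0.0) // skip the very first buffer
            {
                if (buffer_start_beat > expected_next_beat + epsilon)
                    DBG ("jump: skipped " << (buffer_start_beat - expected_next_beat) << " beats");
                else if (buffer_start_beat < expected_next_beat - epsilon)
                    DBG ("overlap: repeated " << (expected_next_beat - buffer_start_beat) << " beats");
            }
            expected_next_beat = buffer_end_beat; // beatAtTime() evaluated at the buffer's end
        }
    };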

4. The std::chrono library lets you create your own time types

Howard Hinnant (the almighty creator of time) explains this best in his talk at CppCon 2016. In this project, samples are converted to microseconds explicitly using the factor micros_to_samples = 1.0e6 / sample_rate (i.e. microseconds per sample). However, the chrono library provides conveniences to create types that do these conversions automatically.
https://www.youtube.com/watch?v=P32hvk8b13M
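For instance, a sample count at a fixed rate can itself be a chrono duration, and duration_cast then does the arithmetic (a sketch; std::chrono ratios must be compile-time constants, which is the main caveat for variable sample rates):

    #include <chrono>
    #include <cstdint>
    #include <ratio>

    // One tick = one sample at 44100 Hz.
    using samples_44k = std::chrono::duration<std::int64_t, std::ratio<1, 44100>>;

    const samples_44k one_buffer { 512 };
    const auto micros = std::chrono::duration_cast<std::chrono::microseconds> (one_buffer);
    // micros.count() == 11609, i.e. 512 * 1.0e6 / 44100, truncated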

5. Use CachedValues for time-sensitive state data.

In my other project, I found better performance when using CachedValues for variables that the user needs for low-latency interaction and/or that are also used in the audio thread. Bpm or tempo is the best example, since the user may demand this parameter in real-time performance. Hopefully, you already appreciate the greatness of ValueTrees and are already using them to manage your app's state. What you might still be doing, however, is something like ValueTree::setProperty (bpm, ...) in your audio thread. This is probably fine for the most part, but I found that too many of these calls can slow things down or use too much CPU, especially when Link is involved, where not only your user controls the bpm, but any number of external peers do as well.
The best results came from wrapping this bpm/tempo value in a CachedValue. To initialize it, simply call referTo() and feed it the corresponding tree and property. Then, instead of calling setProperty in either the audio thread or in Link's setTempoCallback(), you can simply assign to the value as if it were a primitive, as in the sketch below.
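A sketch of the pattern (tree and property names are illustrative):

    #include <juce_data_structures/juce_data_structures.h>

    juce::ValueTree state { "STATE" };
    juce::CachedValue<double> bpm;

    void init()
    {
        // Bind the cached value to the tree property, with a default of 120.
        bpm.referTo (state, juce::Identifier ("bpm"), nullptr, 120.0);
    }

    void tempo_changed (double newTempo) // e.g. called from Link's setTempoCallback()
    {
        bpm = newTempo; // plain assignment instead of ValueTree::setProperty()
    }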
There is a catch, though: David Rowland warns that because these values live on the message thread, benign data races may occur. He suggests this can mostly be accounted for with atomic wrappers, although I personally have not found the need (yet) to go that far. You can find out more here (skip to around 32:00):
https://www.youtube.com/watch?v=3IaMjH5lBEY

Happy syncing!

Feel free to contact me with any questions or for help. As of this writing, I am in the process of implementing midi clock sync in my app and having quite a jolly good old (very hard and painful) time! Hopefully I'll get through it (maybe with your help?) and I plan to post a similar tutorial.