Time is Absolute 

In 1967, the 13th General Conference on Weights and Measures first defined the International System (SI) unit of time, the second, in terms of atomic time rather than the motion of the Earth. Specifically, a second was defined as the duration of 9,192,631,770 cycles of the microwave radiation absorbed or emitted in the hyperfine transition of cesium-133 atoms in their ground state, undisturbed by external fields.

 
 
 
 

(c) Garfield Electronics - Doctor Click - 1983

A Tale of Smoke and Mirrors

“You shouldn’t be surprised really, computers are sequential machines – it’s all a plate spinning act. With analogue circuitry all the hardware is there all the time.”

 G.H. 17th May 2001

 
 

Din-Sync was developed by the Roland Corporation in Japan in the early 1980s as a method of simplifying the synchronisation of sequencers and drum machines. Prior to its introduction the two main components for sync - start/stop and clock - were usually carried on two separate sockets. Companies decided on different numbers of clock pulses that equated to a 'step' or rhythmic interval in their products - Roland 24, Linn 48, Oberheim 96 and so on. The Din-Sync concept put both the sync clock stream and start/stop on separate pins within a common connector and set the number of clocks per quarter note at 24.
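For a feel of the numbers involved, here is a minimal sketch (the function name is ours, not part of any standard) of how far apart those clock pulses sit at a given tempo for the pulses-per-quarter-note counts mentioned above:

# Rough sketch: spacing of sync clock pulses at a given tempo.
# One quarter note lasts 60/BPM seconds, so at N pulses per quarter
# note each pulse arrives 60/(BPM * N) seconds after the last one.

def pulse_interval_ms(bpm: float, ppqn: int = 24) -> float:
    """Time between successive clock pulses, in milliseconds."""
    return 60_000.0 / (bpm * ppqn)

if __name__ == "__main__":
    for ppqn in (24, 48, 96):   # Roland, Linn, Oberheim respectively
        print(f"120 BPM @ {ppqn:2d} PPQN -> {pulse_interval_ms(120, ppqn):6.3f} ms per pulse")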

Out of the ashes of Din-Sync in the mid 1980s, MIDI Clock was born - and in all the years since, not much has changed in the way electronic musicians and producers tempo-synchronise hardware and software.

Same basic principle - only MIDI serial data replaced analogue voltage pulses.

24 pulses per quarter note. Start, Stop and Continue.
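For reference, these are the standard MIDI real-time status bytes involved; the 24 PPQN pulse spacing sketched above applies to the Timing Clock byte unchanged:

# Standard MIDI real-time status bytes used for tempo sync.
MIDI_TIMING_CLOCK = 0xF8  # sent 24 times per quarter note
MIDI_START        = 0xFA  # play from the top
MIDI_CONTINUE     = 0xFB  # resume from the current position
MIDI_STOP         = 0xFC  # halt playback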

 
 
 
 

Voltage pulses have a lot going for them. They travel close to light speed and you can do cool things to them with simple hardware that has been around for a long time. Tight tempo-sync is easily achieved using voltage pulses: each connected device advances one step or clock interval at the rising edge of every pulse. Think of it as well-meshed sprockets in a gearbox.

MIDI messages, being serial data, are slow in comparison and require a lot more processing to do anything meaningful with them. That processing means messages have to wait their turn, and if that same processing is shared across an IC that must also scan a keyboard, check for knob value changes and deal with program and control change data in real time, it is easy to see why there are limitations.
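A rough back-of-envelope sketch of the wire timing makes the point. It assumes classic 5-pin MIDI at 31,250 baud with 10 bits per byte on the wire and no running status; the function name is ours:

# Why clock bytes get delayed behind other traffic on the same MIDI cable.
# Classic 5-pin MIDI runs at 31,250 baud with 8-N-1 framing: 10 bits per byte.

MIDI_BAUD = 31_250
BITS_PER_BYTE = 10  # 1 start bit + 8 data bits + 1 stop bit

def wire_time_ms(n_bytes: int) -> float:
    """Transmission time for n MIDI bytes, in milliseconds."""
    return n_bytes * BITS_PER_BYTE / MIDI_BAUD * 1000

if __name__ == "__main__":
    print(f"One byte:                {wire_time_ms(1):.3f} ms")   # ~0.32 ms
    print(f"One 3-byte Note On:      {wire_time_ms(3):.3f} ms")   # ~0.96 ms
    # An 8-note chord (24 bytes without running status) queued ahead of a
    # single clock byte pushes that clock back by roughly 7.7 ms - more than
    # a third of the 20.8 ms between clocks at 120 BPM.
    print(f"8-note chord ahead of a clock byte: {wire_time_ms(24):.1f} ms late")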

The advantages that MIDI brought with it are many and varied.

Reliable synchronisation is not one of them.

On a single MIDI cable with 16 potential channels of note, velocity and control events alone, it is very easy to leave no room for an uninterrupted MIDI Clock stream.

Even if a MIDI Clock hardware device has a dedicated IC for processing sync I/O (and this is very rare these days), most provide only a single MIDI In and Out port that must simultaneously synchronise and transmit/receive other performance data.

The application of MIDI Clock in a modern software environment takes things into even murkier territory.

Now we find no dedicated hardware taking care of synchronisation at all. Creation and processing of MIDI Clock by software that has to share resources with an OS that may at any time be busy looking after a million other tasks is never going to deliver accurate synchronisation. Early sequencing computers like the Atari 1040ST, with built-in MIDI ports and well-written software running under very lean operating systems, came close to matching good hardware sync. The current mainstream computer OS platforms are so overloaded that keeping MIDI Clock accurate is a very tall order indeed.

Legacy MIDI has now morphed into USB and Wi-Fi, making matters potentially even worse.

Our simple meshed gear analogy for voltage pulse synchronisation has now become a virtual gearbox with a worn and highly unpredictable clutch.

 
 
 
 

There are some good MIDI Clock devices out there, and many people use older equipment and dedicated MIDI sync ports. Even the best of these respond differently when synchronised. Each device or software application will start late against the master to some degree. Many devices that do manage to generate a stable outgoing MIDI Clock still struggle to align the outgoing clock pulses with the internal sequencer grid that should be driving them.

Sometimes it is a little. More often it's a lot.

It makes composing electronic music hard work.

You lose the snap in your sequencing.

 
 

How can a CV / Gate Sequencer be Sloppy?

The tempo-stability of any externally synchronised sequencer (CV / Gate / MIDI / Din-Sync) is only as good as the clock driving it and how the design handles processor / CPU interrupts where tempo and clocking are concerned. Some are better than others. In the case of any sequencer that can run under its own internal tempo clock, do the test and see how tight the individual steps are relative to each other. Now clock it from a stable external sync master and do the tests again - it might be better, it might be worse, depending on the design philosophy.
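One simple way to put a number on "how tight" is sketched below: record the sequencer's trigger or gate output, note the onset time of each step, then look at how much the step-to-step interval wanders. The onset values here are made-up illustrative figures, not measurements of any particular unit:

# Minimal sketch: quantify step-to-step timing wander from a list of
# step onset times (in seconds).  The sample values are hypothetical.

from statistics import mean, pstdev

def step_jitter_ms(onsets):
    """Return (mean interval, worst deviation from mean, std dev), all in ms."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    avg = mean(intervals)
    worst = max(abs(i - avg) for i in intervals)
    return avg * 1000, worst * 1000, pstdev(intervals) * 1000

if __name__ == "__main__":
    # Nominal 16th notes at 120 BPM would be exactly 125 ms apart.
    onsets = [0.0000, 0.1251, 0.2498, 0.3764, 0.4995, 0.6247]
    avg, worst, sd = step_jitter_ms(onsets)
    print(f"mean step {avg:.2f} ms, worst deviation {worst:.2f} ms, std dev {sd:.2f} ms")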

 
 
 
 

The above CPU instruction flow chart is from a well-known and very popular vintage (1982) CV / Gate step sequencer. A quick look at the microprocessor task routines and the time intervals between them shows very clearly why this model will always be a loose rubber band in the timing stakes - which it most certainly is.

There are hundreds of different step sequencers in existence, but just having CV / Gate outputs - or even external analogue clock sync - does not guarantee any of them can keep good time.

Very early discrete / CMOS step CV / Gate sequencers just followed incoming square-wave voltage clock pulses - as long as that clock was rock solid, so was the step sequencer.

Over the last 25 years, as discrete / CMOS voltage-based sequencer design (clocks / timers / latches / gate arrays) gave way to monolithic CPUs / ICs with shared resources for both tempo generation and step / event / serial processing, our simple, stable Pulse Train Express gradually became all stations to Sloppy Town.

The sequencer input and output method - CV / Gate / Trigger / MIDI / Din-Sync - makes no difference at all. A solid external tempo-clock source and an internal design that prioritises timing are the key.

 
 

Why use a mixing desk?

These days most audio interfaces have sophisticated internal processing that allows you to use them as a signal input mixer - the supplied control panel software lets you configure the input and output routing in many different ways to suit your particular application.

While this looks neat and simple in theory, it is important to understand the limitations of using your audio interface as a pseudo-mixer for your external audio.

The main thing to consider is that your DAW audio interface is a digital audio device whose processing time (latency) is variable depending on your DAW configuration and your method of working.

All audio interfaces and their associated software drivers are always under the control of the host operating system and DAW application software, and this means your input/output latency can vary depending on a number of factors - buffer size, delay compensation, plug-in load, and whether you are tracking (recording) and live/software monitoring at the same time or simply playing live and jamming along.
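As a very rough illustration of just one of those factors, the interface buffer alone adds a delay that moves every time you change the DAW buffer size. This sketch deliberately ignores converter and driver overhead, which add further fixed delay on top:

# Rough sketch: delay contributed by one audio buffer at a given sample rate.
# Real round-trip latency adds converter and driver overhead on top of this.

def buffer_latency_ms(buffer_samples: int, sample_rate: int = 48_000) -> float:
    """One buffer's worth of delay, in milliseconds."""
    return buffer_samples / sample_rate * 1000

if __name__ == "__main__":
    for size in (64, 256, 1024):
        print(f"{size:>4} samples @ 48 kHz -> {buffer_latency_ms(size):5.2f} ms per buffer")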

You can always get around these issues if you understand the hardware and software really well but it can be hard work if you don't.

Take a look at the second studio diagram below. At first glance it may seem more complicated - a large mixing desk means more connections and more studio space, of course - but in reality it has some serious advantages when making music in a DAW environment.

The only real difference between the two setups is that we are using the mixing desk to combine ALL your studio audio signals - external hardware and DAW outputs. Rather than connecting your hardware directly to the audio interface, you instead use simple signal routing on the mixing desk itself to 'send' the audio you want to record to your audio interface.

Likewise your studio monitors are connected to the L-R Outputs of the mixing desk rather than directly off your audio interface.

The critical advantage is that your external hardware audio inputs have no 'floating' latency - their relative grid/sync position does not change when you make changes to the DAW project in any way.

It's also really nice to mix with real faders if you have the space.