Towards a More Effective OSC Time Tag Scheme

Publication Type  Conference Paper
Year of Publication  2004
Authors  Freed, Adrian
Conference Name  OSC Conference 2004
Conference Start Date  30/07/2004
Abstract  Time tags are not widely used in OSC. This paper will introduce the original intention of time tags, discuss the challenges of implementing and using them, and introduce some new ideas about how they might be better employed and implemented.

Time tags were introduced to address a number of problems and concerns we encountered at CNMAT using a predecessor protocol to OSC:

1) Synchronization of large parameter updates: What if the network transport imposes a limit on message size? How can you specify that a large number of OSC-addressable parameters are to be updated concurrently? The idea is that they can be broken up into a series of smaller OSC packets with the same time tag.
2) Synchronization of updates when parameters are handled by different nodes on the network.
3) Jitter Attenuation: by setting parameter updates to occur at a fixed interval in the future, jitter induced by networking and operating system delays can be eliminated.
4) Simplified implementation of sequencer applications: programs simply buffer a few hundred milliseconds of OSC packets and hand them off to the OSC scheduler, which handles the fine-grained timing.
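The first idea above can be sketched in Python. The bundle framing ("#bundle" header, 64-bit NTP-format time tag, size-prefixed elements) follows the OSC 1.0 encoding, but the splitting function and the 512-byte payload limit are hypothetical illustrations of how a large update could be fragmented while sharing one time tag:

```python
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP (1900) and Unix (1970) epochs

def unix_to_timetag(t):
    """Encode a Unix time as an OSC/NTP 64-bit time tag (32.32 fixed point)."""
    seconds = int(t) + NTP_EPOCH_OFFSET
    fraction = int((t % 1.0) * (1 << 32))
    return struct.pack(">II", seconds, fraction)

def encode_bundle(elements, timetag):
    """Frame pre-encoded OSC messages as one bundle: header, time tag,
    then each element preceded by its big-endian 32-bit size."""
    out = b"#bundle\0" + timetag
    for e in elements:
        out += struct.pack(">I", len(e)) + e
    return out

def split_into_bundles(messages, timetag, max_payload=512):
    """Group pre-encoded OSC messages into bundles that all share the same
    time tag, keeping each bundle under a (hypothetical) transport limit."""
    bundles, current, size = [], [], 16  # 16 = '#bundle\0' + 8-byte time tag
    for msg in messages:
        elem_size = 4 + len(msg)  # 4-byte size prefix plus the element itself
        if current and size + elem_size > max_payload:
            bundles.append(encode_bundle(current, timetag))
            current, size = [], 16
        current.append(msg)
        size += elem_size
    if current:
        bundles.append(encode_bundle(current, timetag))
    return bundles
```

Because every fragment carries the same time tag, a receiver that honors time tags schedules all of them for the same instant, restoring the atomicity the transport's size limit broke.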

Several important practical issues have prevented widespread adoption of time tags:
1) When OSC was first proposed, commercial operating systems did not ship with compatible network time clients, so there was no easy way to establish and interpret time tags against a central, reliable timing reference.
2) Many performances with OSC are in venues where access to the Internet is impossible and there are still no affordable, readily available NTP servers to provide a local master clock source.
3) The above history discouraged use of time tags and many implementations still ignore them, further discouraging their use.
4) With OSC it is the sender's responsibility to decide when a message will be processed by the receiver. Unfortunately, it is the receiver that is best positioned to measure the communication latencies, and no mechanism was included in the standard for the receiver to communicate this information back to the sender. As a result, the sender does not know how far in the future to schedule message processing in order to attenuate the jitter.

OSC's requirement that clients and servers use time tags that can all be interpreted as references to a single master clock is actually stronger than necessary for many applications. Jitter attenuation can be achieved with an adaptive, stateless scheme that requires no back-channel communication. The idea is for senders to construct time tags from their own 64-bit clocks. All messages are sent in a special bundle whose time tag records the moment the sender transmitted it. Receiving nodes maintain a histogram for each sender that tracks the relative variation in arrival times. After receiving enough packets, the receiver can estimate the jitter statistics and derive a reasonable, slowly varying delay value, which it uses to rewrite the time tags in the body of received packets, conforming them to the receiver's clock.
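A minimal Python sketch of such receiver-side rewriting follows. The class name, window size, and headroom parameter are hypothetical, and a simple min/max spread over recent samples stands in for the full per-sender histogram described above:

```python
import collections

class JitterAttenuator:
    """Per-sender adaptive rewriting of sender-clock time tags onto the
    receiver's clock. A sketch, not a definitive implementation."""

    def __init__(self, window=200, headroom=0.005):
        # Recent (receive_time - sender_timetag) samples, one deque per sender.
        self.offsets = collections.defaultdict(
            lambda: collections.deque(maxlen=window))
        self.headroom = headroom  # extra scheduling margin, in seconds

    def rewrite(self, sender, sender_tag, recv_time):
        """Return the local (receiver-clock) time at which to schedule a
        packet stamped sender_tag by this sender and received at recv_time."""
        samples = self.offsets[sender]
        samples.append(recv_time - sender_tag)
        # The minimum observed offset approximates (clock offset + minimum
        # latency); the spread above it is the jitter to be absorbed.
        base = min(samples)
        jitter = max(samples) - base
        # Schedule far enough ahead that late packets still arrive in time.
        return sender_tag + base + jitter + self.headroom
```

Because the estimate uses only one-way arrival statistics, it needs no reply channel and no shared master clock; the rewritten tags are consistent on the receiver's clock even though the absolute latency remains unknown.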

In OSC configurations that support bidirectional communication, a more elaborate scheme can establish the actual value of the communication latency. The idea is that receivers return small packets containing their estimate of the current time back to senders. Using the same techniques as NTP, senders can then adjust their clocks to closely match the receivers' clocks. This scheme is more robust than reliance on a central NTP server because it supports senders and receivers joining and leaving the network dynamically, a common requirement in OSC-based collaborative performances, during debugging, and on wireless networks.
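The round-trip arithmetic NTP uses for this exchange is standard; a small Python sketch, assuming roughly symmetric network paths:

```python
def ntp_offset_and_delay(t1, t2, t3, t4):
    """Classic NTP on-wire calculation.
    t1 = sender transmit time (sender's clock)
    t2 = receiver receive time (receiver's clock)
    t3 = receiver reply-transmit time (receiver's clock)
    t4 = sender receive time of the reply (sender's clock)
    Returns (offset of receiver's clock relative to sender, round-trip delay).
    The offset estimate is exact only when the path is symmetric."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

A sender applying the returned offset to its own clock can then stamp time tags directly in the receiver's time base, without either node contacting a central NTP server.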