
Introduction to OSC

Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology. Bringing the benefits of modern networking technology to the world of electronic musical instruments, OSC's advantages include interoperability, accuracy, flexibility, and enhanced organization and documentation.

This simple yet powerful protocol provides everything needed for real-time control of sound and other media processing while remaining flexible and easy to implement.


There are dozens of implementations of OSC, including real-time sound and media processing environments, web interactivity tools, software synthesizers, a large variety of programming languages, and hardware devices for sensor measurement. OSC has achieved wide use in fields including computer-based new interfaces for musical expression, wide-area and local-area networked distributed music systems, inter-process communication, and even communication within a single application.


OSC Research at CNMAT

OSC was originally developed at the UC Berkeley Center for New Music and Audio Technologies (CNMAT), where it continues to be a subject of ongoing research. CNMAT is an interdisciplinary research center within the UC Berkeley Department of Music, known for its dynamic educational, performance, and research programs focusing on the creative interaction between music and technology. The Center's goal is to provide a common ground where music, cognitive science, computer science, and other disciplines meet to investigate, invent, and implement creative tools for composers and performers. For more information, please visit our web site: http://cnmat.berkeley.edu.

The Open Sound Control 1.0 Specification

Version 1.0, March 26, 2002, Matt Wright


Open Sound Control (OSC) is an open, transport-independent, message-based protocol developed for communication among computers, sound synthesizers, and other multimedia devices.

OSC Syntax

This section defines the syntax of OSC data.

Atomic Data Types

All OSC data is composed of the following fundamental data types:

int32: 32-bit big-endian two's complement integer
OSC-timetag: 64-bit big-endian fixed-point time tag, semantics defined below
float32: 32-bit big-endian IEEE 754 floating point number
OSC-string: a sequence of non-null ASCII characters followed by a null, followed by 0-3 additional null characters to make the total number of bits a multiple of 32. In this document, example OSC-strings will be written without the null characters, surrounded by double quotes.
OSC-blob: an int32 size count, followed by that many 8-bit bytes of arbitrary binary data, followed by 0-3 additional zero bytes to make the total number of bits a multiple of 32.

The size of every atomic data type in OSC is a multiple of 32 bits. This guarantees that if the beginning of a block of OSC data is 32-bit aligned, every number in the OSC data will be 32-bit aligned.
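
The alignment rule above amounts to rounding a byte count up to the next multiple of 4. A minimal C sketch (the helper names `osc_align4` and `osc_string_size` are ours, not part of the specification):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Round a byte count up to the next multiple of 4 (OSC alignment). */
size_t osc_align4(size_t n) {
    return (n + 3) & ~(size_t)3;
}

/* Total encoded size of an OSC-string: the characters, the mandatory
 * terminating null, and 0-3 extra null bytes so the whole field is a
 * multiple of 4 bytes. */
size_t osc_string_size(const char *s) {
    return osc_align4(strlen(s) + 1);  /* +1 for the terminating null */
}
```

For example, the 3-character string "osc" occupies exactly 4 bytes (3 characters plus one null), while the 7-character string "#bundle" occupies 8.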

OSC Packets

The unit of transmission of OSC is an OSC Packet. Any application that sends OSC Packets is an OSC Client; any application that receives OSC Packets is an OSC Server.

An OSC packet consists of its contents, a contiguous block of binary data, and its size, the number of 8-bit bytes that comprise the contents. The size of an OSC packet is always a multiple of 4.

The underlying network that delivers an OSC packet is responsible for delivering both the contents and the size to the OSC application. An OSC packet can be naturally represented by a datagram by a network protocol such as UDP. In a stream-based protocol such as TCP, the stream should begin with an int32 giving the size of the first packet, followed by the contents of the first packet, followed by the size of the second packet, etc.
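
The size-prefix framing for stream transports can be sketched as follows; `osc_frame_packet` is our illustrative helper, not a specified API, and assumes the packet contents have already been encoded:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Write one size-prefixed OSC packet into a stream buffer, returning
 * the number of bytes written, or 0 if it does not fit.  The size is
 * encoded as a 32-bit big-endian integer, matching the int32 framing
 * the specification requires for stream-based protocols such as TCP. */
size_t osc_frame_packet(uint8_t *out, size_t cap,
                        const uint8_t *contents, uint32_t size) {
    if (cap < 4 + (size_t)size) return 0;
    out[0] = (uint8_t)(size >> 24);
    out[1] = (uint8_t)(size >> 16);
    out[2] = (uint8_t)(size >> 8);
    out[3] = (uint8_t)(size);
    memcpy(out + 4, contents, size);
    return 4 + (size_t)size;
}
```

On a datagram transport such as UDP no such framing is needed, since the datagram itself carries the packet's size.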

The contents of an OSC packet must be either an OSC Message or an OSC Bundle. The first byte of the packet's contents unambiguously distinguishes between these two alternatives: an OSC Message begins with the character '/', while an OSC Bundle begins with the character '#'.

OSC Messages

An OSC message consists of an OSC Address Pattern followed by an OSC Type Tag String followed by zero or more OSC Arguments.

Note: some older implementations of OSC may omit the OSC Type Tag string. Until all such implementations are updated, OSC implementations should be robust in the case of a missing OSC Type Tag String.

OSC Address Patterns

An OSC Address Pattern is an OSC-string beginning with the character '/' (forward slash).

OSC Type Tag String

An OSC Type Tag String is an OSC-string beginning with the character ',' (comma) followed by a sequence of characters corresponding exactly to the sequence of OSC Arguments in the given message. Each character after the comma is called an OSC Type Tag and represents the type of the corresponding OSC Argument. (The requirement for OSC Type Tag Strings to start with a comma makes it easier for the recipient of an OSC Message to determine whether that OSC Message is lacking an OSC Type Tag String.)

This table lists the correspondence between each OSC Type Tag and the type of its corresponding OSC Argument:

The meaning of each OSC Type Tag
OSC Type Tag    Type of corresponding argument
i               int32
f               float32
s               OSC-string
b               OSC-blob

Some OSC applications communicate among instances of themselves with additional, nonstandard argument types beyond those specified above. OSC applications are not required to recognize these types; an OSC application should discard any message whose OSC Type Tag String contains any unrecognized OSC Type Tags. An application that does use any additional argument types must encode them with the OSC Type Tags in this table:

OSC Type Tags that must be used for certain nonstandard argument types
OSC Type Tag    Type of corresponding argument
h               64 bit big-endian two's complement integer
t               OSC-timetag
d               64 bit ("double") IEEE 754 floating point number
S               Alternate type represented as an OSC-string (for example, for systems that differentiate "symbols" from "strings")
c               An ASCII character, sent as 32 bits
r               32 bit RGBA color
m               4 byte MIDI message. Bytes from MSB to LSB are: port id, status byte, data1, data2
T               True. No bytes are allocated in the argument data.
F               False. No bytes are allocated in the argument data.
N               Nil. No bytes are allocated in the argument data.
I               Infinitum. No bytes are allocated in the argument data.
[               Indicates the beginning of an array. The tags following are for data in the array until a close bracket tag (']') is reached.
]               Indicates the end of an array.


OSC Arguments

A sequence of OSC Arguments is represented by a contiguous sequence of the binary representations of each argument.
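
Putting the pieces together, a minimal encoder for a one-float OSC Message might look like the following C sketch (helper names are ours; a production encoder would also need bounds checking and support for the other argument types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Append an OSC-string: the characters, a null, and 0-3 padding nulls. */
size_t osc_put_string(uint8_t *out, const char *s) {
    size_t n = strlen(s) + 1;          /* include the terminating null */
    size_t padded = (n + 3) & ~(size_t)3;
    memcpy(out, s, n);
    memset(out + n, 0, padded - n);    /* 0-3 extra null bytes */
    return padded;
}

/* Append a big-endian float32. */
size_t osc_put_float32(uint8_t *out, float f) {
    uint32_t bits;
    memcpy(&bits, &f, 4);              /* assumes 32-bit IEEE 754 float */
    out[0] = (uint8_t)(bits >> 24);
    out[1] = (uint8_t)(bits >> 16);
    out[2] = (uint8_t)(bits >> 8);
    out[3] = (uint8_t)(bits);
    return 4;
}

/* Encode a one-float OSC Message: the OSC Address Pattern, then the
 * OSC Type Tag String ",f", then the argument.  Returns the size. */
size_t osc_message_1f(uint8_t *out, const char *address, float arg) {
    size_t n = 0;
    n += osc_put_string(out + n, address);
    n += osc_put_string(out + n, ",f");
    n += osc_put_float32(out + n, arg);
    return n;
}
```

The resulting packet is always a multiple of 4 bytes and, on a datagram transport, could be sent as-is as a complete OSC Packet.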

OSC Bundles

An OSC Bundle consists of the OSC-string "#bundle" followed by an OSC Time Tag, followed by zero or more OSC Bundle Elements. The OSC-timetag is a 64-bit fixed point time tag whose semantics are described below.

An OSC Bundle Element consists of its size and its contents. The size is an int32 representing the number of 8-bit bytes in the contents, and will always be a multiple of 4. The contents are either an OSC Message or an OSC Bundle.

Note this recursive definition: a bundle may contain bundles.

This table shows the parts of a two-or-more-element OSC Bundle and the size (in 8-bit bytes) of each part.

Parts of an OSC Bundle
Data                              Size              Purpose
OSC-string "#bundle"              8 bytes           How to know that this data is a bundle
OSC-timetag                       8 bytes           Time tag that applies to the entire bundle
Size of first bundle element      int32 = 4 bytes   First bundle element
First bundle element's contents   As many bytes as given by "size of first bundle element"
Size of second bundle element     int32 = 4 bytes   Second bundle element
Second bundle element's contents  As many bytes as given by "size of second bundle element"
etc.                                                Additional bundle elements
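
The bundle layout in the table above can be sketched in C as follows (illustrative helpers, not a specified API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Append a big-endian 32-bit integer. */
size_t osc_put_int32(uint8_t *out, uint32_t v) {
    out[0] = (uint8_t)(v >> 24); out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);  out[3] = (uint8_t)(v);
    return 4;
}

/* Begin an OSC Bundle: the OSC-string "#bundle" (8 bytes including its
 * null padding) followed by the 64-bit time tag.  Returns 16. */
size_t osc_bundle_header(uint8_t *out, uint64_t timetag) {
    memcpy(out, "#bundle\0", 8);       /* "#bundle" plus its null pad */
    size_t n = 8;
    n += osc_put_int32(out + n, (uint32_t)(timetag >> 32));
    n += osc_put_int32(out + n, (uint32_t)timetag);
    return n;
}

/* Append one bundle element: int32 size count, then the contents
 * (an already-encoded OSC Message or OSC Bundle). */
size_t osc_bundle_element(uint8_t *out, const uint8_t *elem, uint32_t size) {
    size_t n = osc_put_int32(out, size);
    memcpy(out + n, elem, size);
    return n + size;
}
```

A receiver can walk the elements by reading each size count and skipping that many bytes, recursing whenever an element's contents begin with '#'.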

OSC Semantics

This section defines the semantics of OSC data.

OSC Address Spaces and OSC Addresses

Every OSC server has a set of OSC Methods. OSC Methods are the potential destinations of OSC messages received by the OSC server and correspond to each of the points of control that the application makes available. "Invoking" an OSC method is analogous to a procedure call; it means supplying the method with arguments and causing the method's effect to take place.

An OSC Server's OSC Methods are arranged in a tree structure called an OSC Address Space. The leaves of this tree are the OSC Methods and the branch nodes are called OSC Containers. An OSC Server's OSC Address Space can be dynamic; that is, its contents and shape can change over time.

Each OSC Method and each OSC Container other than the root of the tree has a symbolic name, an ASCII string consisting of printable characters other than the following:

Printable ASCII characters not allowed in names of OSC Methods or OSC Containers
character    name                ASCII code (decimal)
' '          space               32
#            number sign         35
*            asterisk            42
,            comma               44
/            forward slash       47
?            question mark       63
[            open bracket        91
]            close bracket       93
{            open curly brace    123
}            close curly brace   125

The OSC Address of an OSC Method is a symbolic name giving the full path to the OSC Method in the OSC Address Space, starting from the root of the tree. An OSC Method's OSC Address begins with the character '/' (forward slash), followed by the names of all the containers, in order, along the path from the root of the tree to the OSC Method, separated by forward slash characters, followed by the name of the OSC Method. The syntax of OSC Addresses was chosen to match the syntax of URLs.

OSC Message Dispatching and Pattern Matching

When an OSC server receives an OSC Message, it must invoke the appropriate OSC Methods in its OSC Address Space based on the OSC Message's OSC Address Pattern. This process is called dispatching the OSC Message to the OSC Methods that match its OSC Address Pattern. All the matching OSC Methods are invoked with the same argument data, namely, the OSC Arguments in the OSC Message.

The parts of an OSC Address or an OSC Address Pattern are the substrings between adjacent pairs of forward slash characters and the substring after the last forward slash character.

A received OSC Message must be dispatched to every OSC method in the current OSC Address Space whose OSC Address matches the OSC Message's OSC Address Pattern. An OSC Address Pattern matches an OSC Address if

  1. The OSC Address and the OSC Address Pattern contain the same number of parts; and
  2. Each part of the OSC Address Pattern matches the corresponding part of the OSC Address.

A part of an OSC Address Pattern matches a part of an OSC Address if every consecutive character in the OSC Address Pattern matches the next consecutive substring of the OSC Address and every character in the OSC Address is matched by something in the OSC Address Pattern. These are the matching rules for characters in the OSC Address Pattern:

  1. '?' in the OSC Address Pattern matches any single character
  2. '*' in the OSC Address Pattern matches any sequence of zero or more characters
  3. A string of characters in square brackets (e.g., "[string]") in the OSC Address Pattern matches any character in the string. Inside square brackets, the minus sign (-) and exclamation point (!) have special meanings:
    • two characters separated by a minus sign indicate the range of characters between the given two in ASCII collating sequence. (A minus sign at the end of the string has no special meaning.)
    • An exclamation point at the beginning of a bracketed string negates the sense of the list, meaning that the list matches any character not in the list. (An exclamation point anywhere besides the first character after the open bracket has no special meaning.)
  4. A comma-separated list of strings enclosed in curly braces (e.g., "{foo,bar}") in the OSC Address Pattern matches any of the strings in the list.
  5. Any other character in an OSC Address Pattern can match only the same character.
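
The matching rules above can be sketched as a recursive C function operating on a single part (i.e., a substring containing no '/'). This is our illustrative implementation of the rules, not a hardened parser:

```c
#include <assert.h>
#include <string.h>

/* Match one part of an OSC Address Pattern (p) against one part of an
 * OSC Address (s).  Handles '?', '*', "[...]" classes (with '-' ranges
 * and a leading '!'), "{a,b}" alternative lists, and literals. */
int osc_part_match(const char *p, const char *s) {
    if (*p == '\0') return *s == '\0';
    switch (*p) {
    case '?':                          /* any single character */
        return *s != '\0' && osc_part_match(p + 1, s + 1);
    case '*':                          /* zero or more characters */
        for (;; s++) {
            if (osc_part_match(p + 1, s)) return 1;
            if (*s == '\0') return 0;
        }
    case '[': {                        /* character class */
        const char *q = p + 1;
        int negate = (*q == '!');      /* '!' only special when first */
        if (negate) q++;
        int hit = 0;
        for (; *q != '\0' && *q != ']'; q++) {
            if (q[1] == '-' && q[2] != ']' && q[2] != '\0') {
                if (*s >= q[0] && *s <= q[2]) hit = 1;  /* ASCII range */
                q += 2;
            } else if (*q == *s) {     /* '-' at the end is literal */
                hit = 1;
            }
        }
        if (*q != ']' || *s == '\0') return 0;  /* unterminated class */
        if (hit == negate) return 0;
        return osc_part_match(q + 1, s + 1);
    }
    case '{': {                        /* comma-separated alternatives */
        const char *end = strchr(p, '}');
        if (end == NULL) return 0;     /* unterminated list */
        const char *alt = p + 1;
        while (alt <= end) {
            const char *comma = alt;
            while (*comma != ',' && *comma != '}') comma++;
            size_t len = (size_t)(comma - alt);
            if (strncmp(alt, s, len) == 0 &&
                osc_part_match(end + 1, s + len))
                return 1;
            alt = comma + 1;
        }
        return 0;
    }
    default:                           /* literal character */
        return *p == *s && osc_part_match(p + 1, s + 1);
    }
}
```

Dispatching a full OSC Address Pattern then amounts to splitting both strings on '/' and requiring `osc_part_match` to succeed on each corresponding pair of parts.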

Temporal Semantics and OSC Time Tags

An OSC server must have access to a representation of the correct current absolute time. OSC does not provide any mechanism for clock synchronization.

When a received OSC Packet contains only a single OSC Message, the OSC Server should invoke the corresponding OSC Methods immediately, i.e., as soon as possible after receipt of the packet. Otherwise, the received OSC Packet contains an OSC Bundle, in which case the OSC Bundle's OSC Time Tag determines when the OSC Bundle's OSC Messages' corresponding OSC Methods should be invoked. If the time represented by the OSC Time Tag is before or equal to the current time, the OSC Server should invoke the methods immediately (unless the user has configured the OSC Server to discard messages that arrive too late). Otherwise the OSC Time Tag represents a time in the future, and the OSC server must store the OSC Bundle until the specified time and then invoke the appropriate OSC Methods.

Time tags are represented by a 64 bit fixed point number. The first 32 bits specify the number of seconds since midnight on January 1, 1900, and the last 32 bits specify fractional parts of a second to a precision of about 200 picoseconds. This is the representation used by Internet NTP timestamps. The time tag value consisting of 63 zero bits followed by a one in the least significant bit is a special case meaning "immediately."
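
A C sketch of this representation (the helper name and the microsecond interface are our illustration; the 2,208,988,800-second constant is the well-known offset between the NTP epoch of 1900 and the Unix epoch of 1970):

```c
#include <assert.h>
#include <stdint.h>

/* Seconds between the NTP epoch (1900-01-01) and the Unix epoch
 * (1970-01-01): 70 years, 17 of which are leap years. */
#define NTP_UNIX_OFFSET 2208988800ULL

/* The special "immediately" value: 63 zero bits followed by a one. */
#define OSC_TIMETAG_IMMEDIATE ((uint64_t)1)

/* Build a 64-bit OSC time tag from Unix seconds plus a fraction of a
 * second expressed in microseconds. */
uint64_t osc_timetag(uint64_t unix_seconds, uint32_t microseconds) {
    uint64_t seconds = unix_seconds + NTP_UNIX_OFFSET;
    /* Scale microseconds into a 32-bit binary fraction of a second. */
    uint64_t fraction = ((uint64_t)microseconds << 32) / 1000000ULL;
    return (seconds << 32) | (fraction & 0xFFFFFFFFULL);
}
```

The stated precision follows from the fraction field: one unit in the low 32 bits is 1/2^32 of a second, about 233 picoseconds.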

OSC Messages in the same OSC Bundle are atomic; their corresponding OSC Methods should be invoked in immediate succession as if no other processing took place between the OSC Method invocations.

When an OSC Address Pattern is dispatched to multiple OSC Methods, the order in which the matching OSC Methods are invoked is unspecified. When an OSC Bundle contains multiple OSC Messages, the sets of OSC Methods corresponding to the OSC Messages must be invoked in the same order as the OSC Messages appear in the packet.

When bundles contain other bundles, the OSC Time Tag of the enclosed bundle must be greater than or equal to the OSC Time Tag of the enclosing bundle. The atomicity requirement for OSC Messages in the same OSC Bundle does not apply to OSC Bundles within an OSC Bundle.

OSC Application Areas

This document compiled by Matt Wright and Adrian Freed lists some of the ways in which OSC has been used, organized into "application area" categories, with examples.

Sensor/Gesture-Based Electronic Musical Instruments

A human musician interacts with sensor(s) that detect physical activity such as motion, acceleration, pressure, displacement, flexion, keypresses, switch closures, etc. The data from the sensor(s) are processed in real time and mapped to control of electronic sound synthesis and processing.

Diagram of processes (ovals) and data (rectangles) flow in a sensor-based musical instrument.
This kind of application is often realized with Heterogeneous Distributed Multiprocessing on Local Area Networks, e.g., with the synth control parameters sent over the LAN to a dedicated "synthesis server," or with the sensor measurements sent over the LAN from a dedicated "sensor server". There have also been many realizations of this paradigm using OSC within a single machine.

  • Wacom tablet controlled "scrubbing" of sinusoidal models synthesized on a synthesis server. (Wessel et al. 1998)
  • The MATRIX ("Multipurpose Array of Tactile Rods for Interactive eXpression") consists of a 12x12 array of spring-mounted rods each able to move vertically. An FPGA samples the 144 rod positions at 30 Hz and transmits them serially to a PC that converts the sensor data to OSC messages used to control sound synthesis and processing. (Overholt 2001)
  • In a project at the MIT Media Lab (Jehan and Schoner 2001), the analyzed pitch, loudness, and timbre of a real-time input signal control sinusoids+noise additive synthesis. In one implementation, one machine performs the real-time analysis and sends the control parameters over OSC to a second machine performing the synthesis.
  • The Slidepipe
  • Three projects at UIUC are based on systems consisting of real-time 3D spatial tracking of a physical object, processed by one processor that sends OSC to a Macintosh running Max/MSP for sound synthesis and processing:
    • In the eviolin project (Goudeseune et al. 2001), a Linux machine tracks the spatial position of an electric violin and maps the spatial parameters in real-time to control processing of the violin's sound output with a resonance model.
    • In the Interactive Virtual Ensemble project (Garnett et al. 2001), a conductor wears wireless magnetic sensors that send 3D position and orientation data at 100 Hz to a wireless receiver connected to an SGI Onyx. This machine processes the sensor data to determine tempo, loudness, and other parameters from the conductor; these parameters are sent via OSC to Max/MSP sound synthesis software.
    • VirtualScore is an immersive audiovisual environment for creating 3D graphical representations of musical material over time (Garnett et al. 2002). It uses a CAVE to render 3D graphics and to receive orientation and location information from a wand and a head tracker. Both real-time gestures from the wand and stored gestures from the “score” go via OSC to the synthesis server.
  • In Stanford’s CCRMA’s Human/Computer Interaction seminar (Music 250a), students connect sensors to a special development board containing an Atmel AVR microcontroller which sends OSC messages over a serial connection to Pd (Wilson et al. 2003).
  • Projects using La Kitchen's Kroonde (wireless) and Toaster (wired) general-purpose multichannel sensor-to-OSC interfaces.
  • Projects using IRCAM's EtherSense sensors-to-OSC digitizing interface
  • The BuckyMedia project uses 3D accelerometers employing OSC over WLAN (box developed by f0am) to transmit the movements of several geodesic, tensile or synetic structures for audiovisual interpretation.

Mapping nonmusical data to sound

This is almost the same as the "Sensor/Gesture-Based Electronic Musical Instruments" application area above, except that the intended user isn't necessarily a musician (though the end result may be intended to be musical). Therefore the focus tends to be more on fun and experimentation rather than musical expression, and the user often interacts directly with the computer's user interface instead of special musical controllers.

  • Picker is software for converting visual images into OSC messages for control of sound synthesis
  • Sodaconstructor is software for building, simulating, and manipulating mass/spring models with gravity, friction, stiffness, etc. Parameters of the real-time state of the model (e.g., locations of particular masses) can be mapped to OSC messages for control of sound synthesis.
  • SpinOSC is software for building models of spinning objects. Properties such as size, rotation speed, etc. can be sent as OSC messages.
  • Stanford’s CCRMA’s Circular Optical Object Locator (Hankins et al. 2002) is based on a rotating platter upon which users place opaque objects. A digital video camera observes the platter and custom image-processing software outputs data based on the speed of rotation, the positions of the objects, etc. A separate computer running Max/MSP receives this information via OSC and synthesizes sound.
  • GulliBloon is a message-centric communication framework for advanced pseudo-realtime sonic and visual content generation, built with OSC. Clients register with a central multiplexing server to subscribe to particular data streams or to broadcast their own data.
  • The LISTEN project aims to "augment everyday environments through interactive soundscapes": users wear motion-tracked wireless headphones and receive individual spatial sound based on their individual spatial behavior. This interview discusses artistic aspects.

Multiple-User Shared Musical Control

A group of human players (not necessarily each skilled musicians) each interact with an interface (e.g., via a web browser) in real-time to control some aspect(s) of a single shared sonic environment. This could be thought of as a client/server model in which multiple clients interact with a single sound server.

Multiple players influence a common synthetic sound output


  • In Randall Packer's, Steve Bradley's, and John Young's "collaborative intermedia work" Telemusic #1 (Young 2001), visitors to a web site interact with Flash controls that affect sound synthesis in a single physical location.
  • In the Tgarden project (see the f0.am or Georgia Tech sites) visitors in a space collectively affect the synthesized sound indirectly through physical interaction with sensor-equipped objects such as special clothing and large balls (as in "Mapping nonmusical data to sound").
  • PhopCrop is a prototype system in which multiple users create and manipulate objects in a shared virtual space governed by laws of "pseudophysics." Each object has both a graphical and sonic representation.
  • Grenzenlose Freiheit is an interactive sound installation using OSC with WLAN'd PDAs as a sound control interface for the audience.

Web interfaces


Networked LAN Musical Performance

A group of musicians operate a group of computers that are connected on a LAN. Each computer is somewhat independent (e.g., it produces sound in response to local input) yet the computers control each other in some ways (e.g., by sharing a global tempo clock or by controlling some of each others' parameters.) This is somewhat analogous to multi-player gaming.

Each player can control some of the parameters of every other player

  • At ICMC 2000 in Berlin (http://www.audiosynth.com/icmc2k), a network of about 12 Macintoshes running SuperCollider synthesized sound and changed each others’ parameters via OSC, inspired by David Tudor’s composition "Rainforest."
  • The Meta-Orchestra project (Impett and Bongers, 2001) is a large local-area network that uses OSC.
  • Simulus use OSC over WiFi to synchronise clock and tempo information between SuperCollider and AudioMulch in their live performances. See (Bencina, 2003) for a discussion of MIDI clock synchronisation techniques which have since been applied to OSC.

WAN performance and Telepresence

A group of musicians in different physical locations play together as a sort of "musical conference call". Control messages and/or audio from each player go out to all the other sites. Sound is produced at each site to represent the activities of each participant.

  • The Hub's projects (Chris Brown, Mike Berry, Grainwave...)
  • Quintet.net
  • Randall Packer's projects

Virtual Reality


  • CREATE's Distributed Sensing, Computation, and Presentation ("DSCP") systems. Inputs are multiple VR head sensors, hand trackers, etc., from multiple users in a shared virtual world. Dozens of computers interpret gestures, run simulations, render audio+video. Everything communicates with CORBA and OSC. (Pope 2002).
  • UCLA projects

Wrapping Other Protocols Inside OSC

People often convert data from other protocols into OSC for reasons including easier network transport, homogeneity of message formats, compatibility with existing OSC servers, and the possibility of self-documenting symbolic parameter names.

  • MIDI over OSC (e.g., for WAN performance). For example, Michael Zbyszynski's Remote MIDI patches for Max, C. Ramakrishnan's Occam (OSC->MIDI), G. Kling's Macco (MIDI->OSC, part of CSL).
  • Converting "messy", "inconvenient" data from sensors to OSC format (e.g., for Sensor/Gesture-Based Electronic Musical Instruments)


Bencina, R. (2003), PortAudio and Media Synchronisation. In Proceedings of the Australasian Computer Music Conference, Australasian Computer Music Association, Perth, pp. 13-20.

Garnett, G.E., Jonnalagadda, M., Elezovic, I., Johnson, T. and Small, K., Technological Advances for Conducting a Virtual Ensemble, in International Computer Music Conference, (Habana, Cuba, 2001), 167-169.

Garnett, G.E., Choi, K., Johnson, T. and Subramanian, V., VirtualScore: Exploring Music in an Immersive Virtual Environment, in Symposium on Sensing and Input for Media-Centric Systems (SIMS), (Santa Barbara, CA, 2002), 19-23.

Goudeseune, C., Garnett, G. and Johnson, T., Resonant Processing of Instrumental Sound Controlled by Spatial Position, in CHI '01 Workshop on New Interfaces for Musical Expression (NIME'01), (Seattle, WA, 2001), ACM SIGCHI.

Hankins, T., Merrill, D. and Robert, J., Circular Optical Object Locator, Proc. Conference on New Interfaces for Musical Expression (NIME-02), (Dublin, Ireland, 2002), 163-164.

Impett, J. and Bongers, B., Hypermusic and the Sighting of Sound - A Nomadic Studio Report, Proc. International Computer Music Conference, (Habana, Cuba, 2001), ICMA, 459-462.

Jehan, T. and Schoner, B., An Audio-Driven Perceptually Meaningful Timbre Synthesizer, in Proc. International Computer Music Conference, (Habana, Cuba, 2001), 381-388.

Overholt, D., The MATRIX: A Novel Controller for Musical Expression, Proc. CHI '01 Workshop on New Interfaces for Musical Expression (NIME'01), (Seattle, WA, 2001).

Pope, S.T. and Engberg, A., Distributed Control and Computation in the HPDM and DSCP Projects, in Proc. Symposium on Sensing and Input for Media-Centric Systems (SIMS), (Santa Barbara, CA, 2002), 38-43.

Wessel, David, Matthew Wright, and Shafqat Ali Khan. Preparation for Improvised Performance in Collaboration with a Khyal Singer, in Proc. International Computer Music Conference (Ann Arbor, Michigan, 1998), ICMA, 497-503.

Wilson, Scott, Michael Gurevich, Bill Verplank, and Pascal Stang. Microcontrollers in Music HCI Instruction: Reflections on Our Switch to the Atmel AVR Platform, In Proc. of the Conference on New Interfaces for Musical Expression, (Montreal, 2003) 24-29.

Young, J.P., Using the Web for Live Interactive Music, Proc. International Computer Music Conference, (Habana, Cuba, 2001), 302-305.

OSC Developer Resources

Resources for OSC software development and research.

CNMAT Software & Library Downloads

CNMAT's open-source library for constructing OSC packets: This is all you need if you want your application to be able to format OSC packets for sending over the network.

CNMAT's sendOSC and dumpOSC programs: The Unix program sendOSC allows the user to type in message addresses and arguments via a no-frills text interface, and formats and sends these messages to the desired IP address and port number. The Unix program dumpOSC listens for OpenSoundControl messages on the given port and prints them out in a simple ASCII format. It's a very useful tool for client debugging. Source code is available for these programs. They're also available as compiled binaries for OSX.

The OpenSound Control Kit: consists of an open-source library, API, documentation, and other goodies that implement most of the features of OSC and make it fairly easy for developers to add OSC support to their applications. This Kit was first described in a paper we presented at ICMC 98. Here are the relevant sections from that paper:

2. Introduction to the OpenSound Control Kit

The OSC Kit (http://www.cnmat.berkeley.edu/OpenSoundControl/Kit) implements as many of the features of an OSC-addressable application as possible. (We have also made available a library that constructs OSC via a procedural interface that hides the OSC byte format.) The following issues are handled internally and need not concern users of the Kit: byte format of OSC, time tags and scheduling of messages in the future, atomicity of messages with the same time tag, pattern matching, memory management of OSC objects, and automatic answering of certain queries.

The Kit is available as both C and C++ libraries. The APIs use an object-oriented style with opaque objects; the C version represents objects as pointers to structs. All communication between the Kit and the rest of the application is via arguments and return values; the interface does not use global variables.

3. Interface to OSC-addressable features

OSC is a tool that is useful only insofar as there are interesting features that can be controlled by it. Therefore, the Kit's API for making features controllable via OSC is designed to be as straightforward and convenient as possible. This paper will use the term "user" to refer to the OSC client, presumably a musician, who controls some feature via OSC, and "implementor" to refer to the programmer who implements that feature and makes it OSC-addressable.

The hierarchical OSC address space is modeled as an object-oriented tree of containers, each of which contains subcontainers (for tree structure) and methods, which can implement the actions that OSC messages call for, e.g., updating the value of a parameter. A container's methods and subcontainers share a single namespace. The root of this tree, corresponding to the address "/," is returned by the procedure that initializes the address space. All operations on this tree, including adding or removing a container or method, are O(1) and can be performed dynamically with no risk of compromising reactive real-time performance.

Each method contains a callback procedure, written by the implementor, which the Kit will invoke at the time that the OSC message is to take effect. The arguments to a callback procedure are:

  • void *context - Supplied by the implementor when the method was added to the address space
  • int arglen - Number of bytes of argument data that was sent by the user
  • void *const args - The argument data itself
  • OSCTimeTag when — The time tag used to schedule this message
  • NetworkReturnAddressPtr returnAddr - An opaque object that can be used to send an OSC message back to the user.

For maximum efficiency, the Kit never copies argument data in memory. The args pointer points to data in the buffer where the packet was received. It is the implementor's job to interpret this data according to the argument types expected by this method. Most methods take a single number or a list of numbers of the same type (e.g., 3 floats) as arguments, so their callback procedures simply cast args to the appropriate pointer type and treat it as an array. The Kit provides helper procedures for dealing with OSC-style (4-byte aligned) ASCII strings, including one that turns args and arglen into an array of char * pointers. Callback procedures typically copy their argument data into wherever they keep the state of the application, or compute new values based on the arguments to the callback. They should not store any pointer derived from the args pointer, because eventually the buffer holding the packet will be reused.
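
As an illustration of this callback style, the following C sketch uses simplified stand-ins for the Kit's opaque types and a hypothetical `SetVoiceParams` callback (none of these names are the Kit's actual API). For brevity the sketch assumes the argument data is already in host byte order; a real callback must convert the big-endian wire data first:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the Kit's opaque types (illustration only). */
typedef uint64_t OSCTimeTag;
typedef void *NetworkReturnAddressPtr;

/* Per-voice state passed as the callback's context pointer; registering
 * the same callback with several Voice contexts handles polyphony. */
typedef struct { float freq; float amp; } Voice;

/* A callback in the style described above: it casts the raw argument
 * bytes to the float array it expects and copies them into its state.
 * It must copy, not retain the args pointer, because the packet buffer
 * holding the arguments is eventually reused. */
void SetVoiceParams(void *context, int arglen, const void *args,
                    OSCTimeTag when, NetworkReturnAddressPtr returnAddr) {
    (void)when; (void)returnAddr;
    if (arglen != 2 * (int)sizeof(float)) return;  /* expect 2 floats */
    const float *f = (const float *)args;
    Voice *v = (Voice *)context;
    v->freq = f[0];                    /* copy out of the packet buffer */
    v->amp  = f[1];
}
```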

When a single address pattern expands into multiple method addresses in the hierarchy, each of their callback procedures is called with the same args and arglen arguments. The context pointer allows for an object-oriented style that can handle polyphony and other "multiple instances of the same functionality with different state" features by registering the same callback procedure with multiple contexts.

The OSC Kit automatically detects the standard OSC queries and generates a response. The raw data needed as the answer to queries like "what are the argument types for this message" or "give me the documentation for this feature" are provided by the implementor when registering containers and methods.

4. Interfaces for Adding OSC to a Real-Time System

There are many ways that the OSC Kit needs to tie into the rest of a reactive real-time system. It must interact with the network services to send and receive packets. It interacts with the scheduler and control flow of the overall real-time system to ensure that the Kit has enough time to process incoming messages but that it defers processing of messages scheduled to occur in the future until the system has some otherwise idle time. Finally, the Kit needs memory to store the address space hierarchy and to receive and process incoming messages; this memory must be available dynamically in real-time, but the Kit does not assume that the real-time system will necessarily have a real-time memory allocator.

All of these interfaces were designed with the following goals:

  • Minimal assumptions about what the rest of the system will look like
  • Maximum real-time performance
  • Simplicity and ease of use
  • Maximum flexibility to change underlying implementations in the future (e.g., for performance optimization) without breaking code written to the API

4.1 Interface to low-level networking code

Network services are usually provided by the operating system via an API. These APIs generally expect to be given a buffer of memory in which to place incoming data from the network. So the OSC Kit manages a pool of PacketBuffer objects that consist mainly of a large buffer for this purpose. (They also store an implementation-dependent network return address.) The Kit never copies OSC data in memory; OSC data is parsed in the PacketBuffer, and eventually a number of callback procedures are invoked with args values that point into the PacketBuffer. Because an OSC packet may contain time-tagged messages to take effect in the future, these PacketBuffers cannot be reused until the last message in the packet takes effect.
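One way to implement this reuse constraint (a sketch under assumed names, not the Kit's actual code) is to give each PacketBuffer a count of messages still pending, returning the buffer to the free list only when the last scheduled message has been invoked:

```c
#include <stddef.h>

#define OSC_BUFFER_SIZE 4096
#define NUM_PACKET_BUFFERS 4

typedef struct PacketBuffer {
    char data[OSC_BUFFER_SIZE];      /* raw bytes from the network */
    int pendingMessages;             /* messages not yet invoked */
    struct PacketBuffer *nextFree;   /* free-list link */
} PacketBuffer;

PacketBuffer packetPool[NUM_PACKET_BUFFERS];
PacketBuffer *freePacketList = NULL;

void InitPacketPool(void) {
    freePacketList = NULL;
    for (int i = 0; i < NUM_PACKET_BUFFERS; i++) {
        packetPool[i].nextFree = freePacketList;
        freePacketList = &packetPool[i];
    }
}

PacketBuffer *AllocPacketBuffer(void) {
    PacketBuffer *p = freePacketList;
    if (p != NULL) {
        freePacketList = p->nextFree;
        p->pendingMessages = 0;
    }
    return p;                        /* NULL: caller must drop the packet */
}

/* Called as each message from this packet takes effect; the buffer
   becomes reusable only after the last one. */
void MessageInvoked(PacketBuffer *p) {
    if (--p->pendingMessages == 0) {
        p->nextFree = freePacketList;
        freePacketList = p;
    }
}
```

A packet containing a bundle with time tags in the future would hold its buffer out of the free list until every deferred message in it has been invoked.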

4.2 Control Flow/Scheduling Model

The Kit does not include its own scheduler; we assume that the design and implementation of the main part of the sound processing application will determine the overall flow of control. The OSC Kit does its work when the main part of the application calls procedures in the Kit's scheduling API. Low-latency digital sound processing requires a scheduling model in which code to compute output samples is run frequently, leaving many short periods of time for the processor to do other things like process OSC input. Therefore, the Kit's scheduling API is based on the assumption that its procedures will be called often and that they should do a small amount of work and return quickly.

Here is a gross approximation of the inner loop of an OSC-addressable sound synthesizer:

while (1) {
    OSCTimeTag now = GetCurrentTime();
    while (WeHaveEnoughTimeToInvokeMessages()) {
        if (!OSCInvokeMessagesThatAreReady(now)) break;
    }
    while (NetworkPacketWaiting()) {
        OSCPacketBuffer p = OSCAllocPacketBuffer();
        if (!p) {
            /* No free PacketBuffer: drop this packet */
        } else {
            ReceivePacketFromNetwork(p);
            OSCAcceptPacket(p);
        }
    }
    while (TimeLeftBeforeWeHaveToDoSomething()) {
        if (!OSCBeProductiveWhileWaiting()) break;
    }
}
OSCAcceptPacket "accepts" a newly received packet. When a time tag indicates a message that should take effect immediately (or should already have taken effect), the Kit has no choice but to process it immediately: parse it, pattern-match the address pattern against the addresses in the current hierarchy, and invoke the necessary callback procedures. When a bundle's time tag indicates a time still in the future, there is time between receipt of the bundle and when it must be executed. In this case, OSCAcceptPacket defers all work, inserting the bundle into the priority queue of messages to be invoked in the future. This queue uses a standard heap data structure [Bentley 86] with O(log(n)) insertion and deletion, but it is easy for the implementor to substitute a custom queue, e.g., to take advantage of knowing that the main scheduler always operates on fixed-size time blocks [Dannenberg 89].
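The default queue can be sketched as an ordinary binary min-heap keyed by time tag (hypothetical names; the Kit's real queue stores whole bundles, not bare tags):

```c
#include <stdint.h>

typedef uint64_t OSCTimeTag;     /* OSC time tags are 64-bit fixed point */

#define QUEUE_CAPACITY 64

/* Min-heap of time tags: O(log n) insertion and delete-min. */
OSCTimeTag heap[QUEUE_CAPACITY];
int heapSize = 0;

int QueueInsert(OSCTimeTag t) {
    if (heapSize == QUEUE_CAPACITY) return 0;        /* fail gracefully */
    int i = heapSize++;
    heap[i] = t;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {   /* sift up */
        OSCTimeTag tmp = heap[i];
        heap[i] = heap[(i - 1) / 2];
        heap[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
    return 1;
}

OSCTimeTag QueueDeleteMin(void) {
    OSCTimeTag min = heap[0];
    heap[0] = heap[--heapSize];
    int i = 0;
    for (;;) {                                       /* sift down */
        int l = 2 * i + 1, r = 2 * i + 2, smallest = i;
        if (l < heapSize && heap[l] < heap[smallest]) smallest = l;
        if (r < heapSize && heap[r] < heap[smallest]) smallest = r;
        if (smallest == i) break;
        OSCTimeTag tmp = heap[i];
        heap[i] = heap[smallest];
        heap[smallest] = tmp;
        i = smallest;
    }
    return min;
}
```

OSCInvokeMessagesThatAreReady would repeatedly delete the minimum while its time tag is less than or equal to the current time.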

OSCBeProductiveWhileWaiting performs a small amount of "background" processing on messages in the queue. The idea is to invoke this procedure when the processor would otherwise be idle. (In a thread-based system, a low-priority thread would continuously call this procedure.) This procedure's return value indicates whether there is more work to do, so it can be called as many times as possible during these idle times.

OSCInvokeMessagesThatAreReady takes the current time as an argument, and invokes the callbacks whose time has arrived. (This is the Kit's only interface to the part of the system that knows what time it is.) This should be called frequently to ensure low control latency. Atomicity is guaranteed by this procedure, because it invokes all of the callbacks with a particular time tag before returning. (In a thread-based system, the thread invoking the callbacks must prevent the audio-generating thread from running until OSCInvokeMessagesThatAreReady returns.)

We have integrated the OSC Kit into a real-time system running under IRIX and based upon the select() system call. An example application using this arrangement is available for download. We have also designed a thread-based architecture using the Kit, with separate threads for accepting incoming packets, background processing, producing the list of callback procedures that are ready to be invoked, and invoking callback procedures. Pseudocode for the thread design is also available for download.

4.3 Memory Model

The OSC system needs dynamic memory allocation for the address space, for PacketBuffers, for the priority queue, and for lists of methods that match address patterns. To avoid the problems associated with real-time memory management, the OSC Kit preallocates pools of fixed-size memory chunks for each of these objects. A very fast custom memory manager provides O(1) allocation and freeing via these pools. Users of the Kit pass arguments to the initialization procedures that say how many of each kind of object to allocate.

The Kit does not expose implementation details such as the contents of these data structures, so to help users of the Kit decide how many objects to preallocate, it provides a procedure that takes the same arguments as the initialization procedure and returns the number of bytes of memory that would be allocated.

Initialization procedures also take two function pointers as arguments: a pointer to the "initialization-time memory allocator" and a pointer to the "real-time memory allocator." Both of these procedures, like malloc(), take a number of bytes as argument and return a pointer to that much free memory. The Kit invokes the initialization-time allocator only from the initialization procedure. It invokes the real-time allocator if any of the preallocated object pools runs out. If the real-time allocator fails, the Kit has to drop a message, refuse to add to the address space, or otherwise fail in a graceful manner. If a system does not have real-time memory allocation, the real-time allocator can simply be a procedure that always returns 0. Because the memory allocator used by the Kit is an argument rather than an internal procedure, it is possible to tune the memory system without recompiling the Kit.
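The two-allocator scheme might be sketched as follows (hypothetical names, not the Kit's actual API): initialization stores both malloc-like function pointers, uses the first only during setup, and falls back to the second, which may always return 0 on systems without real-time allocation, when a pool is exhausted:

```c
#include <stdlib.h>
#include <stddef.h>

/* Both allocators have the same shape as malloc(). */
typedef void *(*OSCAllocator)(size_t numBytes);

OSCAllocator initTimeAlloc;
OSCAllocator realTimeAlloc;
char *poolStart, *poolNext, *poolEnd;

/* A real-time allocator for systems without one: always fails,
   forcing the Kit to fail gracefully instead of blocking. */
void *NoRealtimeAlloc(size_t numBytes) { (void)numBytes; return NULL; }

/* Initialization: grab one pool up front via the init-time allocator. */
int KitInit(OSCAllocator ia, OSCAllocator rta, size_t poolBytes) {
    initTimeAlloc = ia;
    realTimeAlloc = rta;
    poolStart = poolNext = (char *)initTimeAlloc(poolBytes);
    if (poolStart == NULL) return 0;
    poolEnd = poolStart + poolBytes;
    return 1;
}

/* O(1) allocation from the preallocated pool; fall back to the
   real-time allocator when the pool runs out.  A NULL result means
   the caller must drop a message or otherwise fail gracefully. */
void *KitAlloc(size_t numBytes) {
    if ((size_t)(poolEnd - poolNext) >= numBytes) {
        void *p = poolNext;
        poolNext += numBytes;
        return p;
    }
    return realTimeAlloc(numBytes);
}
```

For example, KitInit(malloc, NoRealtimeAlloc, 65536) configures a system where all memory is claimed at startup and pool exhaustion simply yields NULL.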

Although much more sophisticated designs would be possible, this system performs well and allows for the important case of static overall memory limits. The ability to convert free memory for one kind of object into needed memory for another kind of object could avoid certain "not enough memory" situations, but managing pools of different-sized objects would drastically increase the complexity of the memory allocator.

Guide to OSC Libraries

There are many implementations of OSC in the form of C language libraries. How do you choose which one to use for your project?

Some relevant features for comparison include:

- license
- which OSC features are implemented
- which flavors of "C" are supported (e.g., C++)
- age
- size
- documentation


Nicholas J Humfrey said (on osc_dev):

"Liblo, the Lite OSC library, is an implementation of the Open Sound
Control protocol for POSIX systems*. It is written in ANSI C99 and
released under the GNU General Public Licence. It is designed to make
developing OSC applications as easy as possible."

Liblo: Lightweight OSC API


I would not recommend using the OSC-Kit: the code is old and not well supported, and newer implementations are available.



oscpack is written in C++ and distributed under a BSD-style license.



WOscLib is written in C++ and released under the GNU LGPL.

(from http://wosclib.sourceforge.net/doc):

This work is a re-implementation and (hopefully) modernization of the classic OSC-Kit, which was originally provided by Matt Wright and was written in pure C (see http://www.cnmat.berkeley.edu/OpenSoundControl).
Matt's kit uses lots of global variables, a fast but user-unfriendly memory-allocation (and de-allocation) scheme, makes use of non-type-safe OSC callback functions, has no real exception handling, and its modularity is heavily restricted by the C design. There is also no documentation based on a modern auto-doc system (e.g., doxygen), which makes it harder for newcomers to get an OSC system running in 10 minutes, since they have to read all (or at least some of) the source files first.

A summary of the reasons for the re-implementation:

- Higher-level programming language (C++), and therefore higher productivity.
- Enhanced modularity.
- Enhanced code reuse.
- Elimination of global variables and functions, to facilitate multiple OSC servers in the same process.
- Good exception handling.
- Type-safe interfaces for OSC arguments in OSC methods.
- Better (and dynamic) management of OSC methods.
- Good documentation.
- A less complex OSC-system implementation.


OSC Developers Mailing List

create.ucsb.edu currently supports the OSC Developers' List.

- The OSC_dev Info Page
- Archives of the OSC_dev List

Proceedings of the 2004 Open Sound Control Conference

Open Sound Control Conferences bring together OSC developers, practitioners, researchers, and industry representatives to focus on new work on the protocol and related research areas in multimedia.


Friday July 30 2004, 9am - 5pm
Hewlett Packard Auditorium (Room #306, Soda Hall)
on the UC Berkeley campus

Presented by the UC Berkeley Center for New Music and Audio Technology (CNMAT).
Thanks to the UC Discovery Grant Program for making this event possible.


8:30 - 9:00 am - Registration

9:00 - 9:10 am - Welcome and overview of conference: David Wessel

9:10 - 9:25 am - Brief overview of OSC and its application areas, Matt Wright, Center for New Music and Audio Technologies (CNMAT), UC Berkeley

9:25 - 9:55 am - Keynote address: OSC and Digital Lifestyle Aggregation, Marc Canter, Broadband Mechanics

9:55 - 10:55 am - Paper Session I: Implementations of OSC, (session chair: Matthew Wright)

10:55 - 11:10 am - Break

11:10 - 12:00 pm - Paper Session II: OSC Hardware, (session chair Adrian Freed)

12:00 - 1:00 pm - Lunch (on-site, included in conference registration)

1:00 - 2:30 pm - Paper Session III: OSC-related research, (session chair David Wessel)

2:30 - 2:45 pm - Break

2:45 - 3:45 pm - Poster Session: Gallery of Projects enabled by OSC

3:45 - 4:00 pm - Break

4:00 - 4:50 pm - Presentation of Draft Proposals from standardization working groups

  • Bidirectional XML mapping, Ben Chun, MIT Media Lab (alum)
  • A Query System for Open Sound Control, Andrew Schmeder, CNMAT
  • Invitations to form working groups:
    • Binary file format
    • time tags and synchronization
    • schemas (mechanism for publishing and standardizing address spaces and semantics)
    • OSC hardware kit
    • regular expressions

4:50 - 5:00 pm - Conference wrap-up and future directions for OSC, David Wessel, Director, and Matt Wright, Musical Systems Designer, CNMAT