Many spatialization algorithms use a self-contained syntax and storage format, so that control messages (e.g. trajectories to move a virtual sound source) programmed for one application are incompatible with any other implementation.

This lack of standardization complicates the portability of compositions and requires manual synchronization and conversion of control data, a time-consuming affair. Incompatible data formats also hinder collaboration between researchers and institutions.
The general idea is to develop a format to describe, store and share spatial audio scenes across 2D/3D audio applications and concert venues.

We therefore call for the collaborative development of SpatDIF, a format that describes spatial audio information in a structured way and supports both real-time and non-real-time applications.


Goals

Platform independence: ideally, any 3D audio rendering algorithm on any computer platform should be able to interpret SpatDIF;

Easily understandable syntax: to prevent misunderstandings when stored data are shared;

Extensibility: descriptors can easily be added to extend the specification, especially while SpatDIF is still under development;

Free and open source: to increase the acceptance and widespread usage of this new format;

Easy to connect: with interfaces, controllers and sensors for real-time control of spatialization;

Use of existing standards: to focus on conceptual rather than technical development.
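To make the goals above concrete, the following Python fragment sketches what a structured, platform-independent description of a simple source trajectory could look like, using OSC-style address strings for real-time control. The `/spatdif/...` address scheme and parameter names are illustrative assumptions for this sketch, not part of any finalized specification.

```python
import math

def spatdif_position_messages(source="source1", steps=4, radius=1.0):
    """Generate OSC-style position messages moving a source on a circle.

    NOTE: the /spatdif/... address layout used here is a hypothetical
    example of a structured, human-readable syntax; the actual SpatDIF
    descriptor names may differ.
    """
    messages = []
    for i in range(steps):
        angle = 2 * math.pi * i / steps
        x = radius * math.cos(angle)
        y = radius * math.sin(angle)
        # Each message: (address, x, y, z) in Cartesian coordinates.
        messages.append((f"/spatdif/source/{source}/position", x, y, 0.0))
    return messages

for msg in spatdif_position_messages():
    print(msg)
```

Because each message is self-describing text plus plain numbers, the same stream could be stored to a file for non-real-time playback or sent over a network for live control, which is the dual use the goals call for.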

Non-Goals

SpatDIF is not ...

a general music notation system; for that, you may like MusicXML

a sound synthesis language

a 3D graphic format

primarily made for computer games; for that, you may like OpenAL or irrKlang

Use Cases

SpatDIF is under development for the following user scenarios.