Mappings

Digital synthesis has freed instrument design from being tied to the physical method of generating sound. Any arbitrary interface can therefore be mapped to any given synthesis algorithm; indeed, the mapping itself can be designed to suit the goals of the designer[Hunt et al.(2002)Hunt, Wanderley, and Paradis]. A number of strategies have been used to derive mappings. The most straightforward method is to think in terms of controls and the parameters they should control, but this often leads to direct one-to-one mappings of controls to parameters, which can be a limited way of turning gestures into sound. Using the velocity or acceleration of a given control, for example, can provide much more compelling gestural control. Another classic mapping strategy is to create a multi-dimensional "timbre space" which the musician navigates[Vertegaal(1994)]. In this method, a few dimensions of timbre are chosen and then mapped into a space which the user can navigate. A. Cont, T. Coduys, and C. Henry present a novel approach to mapping, using neural networks to create mappings based on learning gestures from the user[Cont et al.(2004)Cont, Coduys, and Henry]. Their software, written in Pd, makes designing mappings an iterative process in which the user ranks the desirability of a given gesture, leading eventually to a chosen array of performance gestures.
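
The following sketch, written in Python rather than as a Pd patch, illustrates the difference between a direct mapping and one derived from the velocity and acceleration of a control. The controller values and the assignment of position, velocity, and acceleration to pitch, loudness, and brightness are invented for illustration only.

    # Sketch: deriving velocity and acceleration from a raw control stream.
    # In Pd the same idea would be built from difference and scaling objects;
    # the parameter assignments below are hypothetical examples.

    class GestureMapper:
        def __init__(self, sample_period=0.01):
            self.dt = sample_period      # seconds between controller readings
            self.prev_pos = 0.0
            self.prev_vel = 0.0

        def update(self, position):
            """Return (position, velocity, acceleration) for one reading."""
            velocity = (position - self.prev_pos) / self.dt
            acceleration = (velocity - self.prev_vel) / self.dt
            self.prev_pos, self.prev_vel = position, velocity
            return position, velocity, acceleration

    mapper = GestureMapper()
    for pos in (0.10, 0.12, 0.20, 0.35):          # raw fader positions, 0..1
        p, v, a = mapper.update(pos)
        # Direct mapping: position -> pitch.
        # Gestural mapping: velocity -> loudness, acceleration -> brightness.
        print(f"pitch={p:.2f}  loudness={abs(v):.2f}  brightness={abs(a):.2f}")
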

Before controller data can be mapped, the output from many devices needs to be scaled, smoothed, or otherwise processed in order to provide good control. At the most basic level, the range coming from the input device has to be scaled to match the parameters being controlled. Data from high resolution devices such as mice and tablets can be jerky and seemingly erratic; running the data stream through an [average] object smooths it out. Making it a weighted average with the [weight( message ameliorates the added latency caused by the averaging. When working with sensors and electronics, the output of the sensors must often be smoothed before it can be properly sampled, since the resolution of the microprocessor is limited. This must be done electronically, using an integrator, for example.
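
A minimal sketch of the scaling and smoothing step, again in Python rather than Pd: the input and output ranges are arbitrary examples, and the weighted average is implemented here as a simple exponential moving average, which trades smoothness against latency in the same way that weighting the [average] object does.

    def scale(x, in_lo, in_hi, out_lo, out_hi):
        """Linearly rescale x from the controller's range to the parameter's range."""
        return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

    class Smoother:
        """Weighted (exponential) moving average: a weight near 1.0 favors the
        newest sample, lowering latency; a weight near 0.0 smooths more heavily."""
        def __init__(self, weight=0.5):
            self.weight = weight
            self.value = None

        def update(self, x):
            if self.value is None:
                self.value = x
            else:
                self.value = self.weight * x + (1.0 - self.weight) * self.value
            return self.value

    # Example: a jittery 7-bit controller stream scaled to a filter cutoff in Hz.
    smooth = Smoother(weight=0.3)
    for raw in (64, 70, 66, 90, 88):
        cutoff = scale(smooth.update(raw), 0, 127, 20.0, 20000.0)
        print(round(cutoff, 1))
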

OSC provides a framework for abstracting the mapping process, which can help clarify the problems of mapping [Wright et al.(2001)Wright, Freed, Lee, Madden, and Momeni]. For example, the output of the controller and the input of the synthesizer can be mapped using a descriptive OSC name space, allowing the instrument designer to more easily focus on the mapping without having to think about the implementation details of the controller or the synthesizer.
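
As a sketch of how an OSC name space can decouple the controller from the synthesizer, the code below receives descriptive controller messages and forwards them to synthesizer parameters through a single mapping table. It uses the python-osc package; the addresses and port numbers are made up for the example and are not part of any standard name space.

    # Sketch: routing a descriptive controller name space to a synth name space.
    # Requires the python-osc package; addresses and ports are illustrative only.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    from pythonosc.udp_client import SimpleUDPClient

    synth = SimpleUDPClient("127.0.0.1", 9001)     # synthesizer's OSC input

    # The mapping lives in one table, so it can be redesigned without touching
    # either the controller or the synthesizer implementation.
    MAPPING = {
        "/controller/fader/1":  "/synth/filter/cutoff",
        "/controller/fader/2":  "/synth/amp/level",
        "/controller/button/1": "/synth/env/trigger",
    }

    def route(address, *args):
        target = MAPPING.get(address)
        if target is not None:
            synth.send_message(target, list(args))

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(route)

    server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
    server.serve_forever()                          # listen for controller messages
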
