AOS will be in Macerata (May 2-6, 2014) at the Youbiquity Festival for a workshop in which we will learn how to create a ubiquitous soundscape and installation: an immersive geography of sound.
“When you listen carefully to the soundscape it becomes quite miraculous.”
––R. Murray Schafer
From the Youbiquity website:
An immersive workshop whose objective is to create a Ubiquitous Soundscape: a sonic landscape which can be experienced using Augmented Reality, and which can be produced collaboratively through sound sampling and the audio representation of data and information.
Participants will learn how to design a specific Augmented Reality smartphone application (iOS and Android), on which to publish their Ubiquitous Soundscape, created through sound samples of any kind and the audio representation of data and information. All of this will form an immersive experience, in which it will be possible to walk through the sounds disseminated across natural and urban spaces.
A result of the workshop will be participation in the second volume of the Read/Write Reality publications (you can find the first Read/Write Reality book, about the creation of an Augmented Reality movie, on Lulu), and a final show/exhibit/installation, ubiquitously distributed through the streets of beautiful Macerata.
To take part in the workshop you can contact: firstname.lastname@example.org +39 349 6441703
How do you create a ubiquitous Soundscape?
The Soundscape. The sound or combination of sounds which arises from an immersive environment.
This definition of soundscape comes from Canadian composer R. Murray Schafer, who identified three main elements of each place's soundscape: the Keynote Sounds, created by nature, geography and climate, which live in the background of our conscious perception most of the time; the Sound Signals, which are the ones we consciously listen to; and the Soundmark, a term derived from landmark, which is the sound unique to an area.
Bernie Krause classified the elements of the soundscape according to their originating source: the Geophony of a place, generated by non-biological sources; the Biophony, as generated by non-human living beings; and the Anthrophony, generated by human beings.
Both of these definitions can be updated to acknowledge that entirely new dimensions of space have now entered our realm of perception.
Digital data, information and communication have become ubiquitously available and accessible, and everything we do generates data and information somewhere.
We have learned to use all these additional sources of information to transform the ways in which we communicate, work, collaborate, learn, express ourselves and our emotions, relate and consume. Ubiquitous information has entered our daily lives, blurring the boundaries between the digital and the physical so much that it makes progressively less sense to draw the distinction in the first place.
In RWR UbiquitousSound we wish to address the phenomenology of the Ubiquitous Soundscape.
Our aim is to design a natural way to create and interact with digitally and ubiquitously produced sound in the environment.
As happens with the biophony, geophony and anthrophony of places, we want to create an Infophony of space, which we can walk through, orient ourselves in, and experience. We wish to describe and implement the parts of our soundscape which can be created through Ubiquitous Publishing techniques: from social networks, from data sets, and from the digital information which we constantly produce, through our daily lives, from all the places in the world. We want to make this information physical, evolving, emergent, experienceable, immersive, complex, just like the rest of the soundscape.
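To give a concrete sense of what an Infophony could be built from, here is a minimal sonification sketch in Python. It is purely illustrative, not the workshop's actual toolkit: the function name, the pitch range and the note duration are all our own assumptions. It maps a series of data values to pitches between two frequencies and renders each value as a short sine-wave tone:

```python
import math

def sonify(values, sample_rate=44100, note_dur=0.25,
           f_min=220.0, f_max=880.0):
    """Map each data point to a pitch between f_min and f_max (Hz)
    and render it as a short sine-wave tone, note_dur seconds long.
    Returns a flat list of audio samples in the range [-1.0, 1.0]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    samples = []
    for v in values:
        # linear mapping: smallest value -> f_min, largest -> f_max
        freq = f_min + (v - lo) / span * (f_max - f_min)
        n = int(sample_rate * note_dur)
        samples.extend(
            math.sin(2 * math.pi * freq * t / sample_rate)
            for t in range(n)
        )
    return samples

# four data points become four rising and falling tones
tones = sonify([3, 7, 1, 9])
print(len(tones))  # 4 notes * 0.25 s * 44100 Hz = 44100 samples
```

In a real Ubiquitous Soundscape these rendered tones would then be placed at positions in physical space, alongside recorded sound samples.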
We want to create an explicit bridge between the physical and digital realms of our lives, through sound, allowing us to create information ubiquitously, and to experience it immersively.
What we will do
We will create an Augmented Reality application which will allow us to experience the immersive Ubiquitous Soundscape by wearing headphones.
We will create the application together, co-designing its elements. The application will allow us to load sound samples and sound representations of datasets and information, and to map them to a physical space. Headphones will then be used to experience the soundscape in an immersive way: walking up to the sounds and away from them, achieving a new form of sound orientation through the Ubiquitous Soundscape, in the physical world.
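The core of such an experience is attenuating each placed sound by the listener's distance from it. The sketch below is a hypothetical illustration of that logic, not the actual application code: it assumes GPS positioning, a great-circle distance computation, and a linear rolloff whose parameters we chose for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def gain_for(listener, source, full_volume_m=5.0, silence_m=100.0):
    """Linear rolloff: full volume within full_volume_m of the
    sound source, fading to silence at silence_m. Both positions
    are (lat, lon) tuples; returns a gain in [0.0, 1.0]."""
    d = haversine_m(*listener, *source)
    if d <= full_volume_m:
        return 1.0
    if d >= silence_m:
        return 0.0
    return 1.0 - (d - full_volume_m) / (silence_m - full_volume_m)

# example with illustrative coordinates near Macerata
listener = (43.3006, 13.4531)
sound = (43.3009, 13.4531)  # a few tens of metres away
print(gain_for(listener, sound))  # partial volume, between 0 and 1
```

In the application, a gain like this would be recomputed for every placed sound as the listener moves, and fed to the audio mixer, so that walking toward a sound makes it louder and walking away makes it fade.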
We will create our own Ubiquitous Soundscapes.
We will showcase them in a final performance through the streets of Macerata, and through an exhibit.
Who is this workshop for
Any artist, designer, hacker, architect, or anyone else interested in exploring the possibilities opened up by creating ubiquitous sound experiences using samples, data and information.
Although many technologies will be used, no previous technological knowledge is required. The workshop is for everyone. Of course, people with additional technological expertise will be able to appreciate additional levels of detail.
What you need
Your laptop. All your smartphones (iOS or Android).
Optional: sound-related technologies (digital recorders, effects, controllers, software, microphones…).
Publication and digital distribution
Read/Write Reality Ubiquitous Sound will also be a digital publication about the results of the workshop, with all participants included as authors.
Produced by AOS (Art is Open Source) in collaboration with Teatro Rebis, Youbiquity and Macerata Racconta, this publication will include the critical theoretical approaches used during the workshop, exercises, as well as the description of the techniques and tools used. A digital book for designers, artists, architects, hackers, communicators, ethnographers and developers wishing to expand their perspectives on ubiquitous publishing.