How to make a ubiquitous soundscape using augmented reality: Read/Write Reality, Ubiquitous Sound at Youbiquity!

AOS will be in Macerata (May 2-6 2014) at the Youbiquity Festival for a workshop in which we will learn how to create a ubiquitous soundscape and installation: an immersive geography of sound.

“When you listen carefully to the soundscape it becomes quite miraculous.”

––R. Murray Schafer

From the Youbiquity website:

An immersive workshop whose objective is to create a Ubiquitous Soundscape: a sonic landscape which can be experienced using Augmented Reality, and which can be produced collaboratively, through sound sampling and the audio representation of data and information.

 

Participants will learn how to design a specific Augmented Reality smartphone application (iOS and Android), on which to publish their Ubiquitous Soundscape, created through sound samples of any kind and the audio representation of data and information. All of this will form an immersive experience, in which it will be possible to walk through the sounds disseminated across natural and urban spaces.

A result of the workshop will be participation in the second volume of the Read/Write Reality publications (you can find the first Read/Write Reality book on Lulu, which was about the creation of an Augmented Reality Movie), and a final show/exhibit/installation, ubiquitously distributed through the streets of beautiful Macerata.

Here is the Program and info for the Ubiquitous Sound workshop

To take part in the workshop you can contact: youbiquity.giorgio@gmail.com  +39 349 6441703

How do you create a Ubiquitous Soundscape?

The Soundscape. The sound or combination of sounds which arises from an immersive environment.

This definition of soundscape comes from Canadian composer R. Murray Schafer, who identified three main elements of each place’s soundscape: the Keynote Sounds, created by nature, geography and climate, which live in the background of our conscious perception most of the time; the Sound Signals, which are the ones we consciously listen to; and the Soundmark, derived from “landmark”, the sound which is unique to an area.

Bernie Krause classified the elements of the soundscape according to their originating source: the Geophony of a place, generated by non-biological sources; the Biophony, generated by non-human living beings; and the Anthrophony, generated by human beings.

Both of these definitions can be updated to account for the fact that entirely new dimensions of space have now entered our realm of perception.

Digital data, information and communication have become ubiquitously available and accessible, and everything we do generates data and information somewhere.

We have learned to use all these additional sources of information to transform the ways in which we communicate, work, collaborate, learn, express ourselves and our emotions, relate and consume. Ubiquitous information has entered our daily lives, blurring the boundaries between what is digital and what is physical, so much so that making the distinction in the first place is progressively losing its sense.

In RWR UbiquitousSound we wish to address the phenomenology of the Ubiquitous Soundscape.

Our aim is to design a natural way to create and interact with digitally and ubiquitously produced sound in the environment.

Just as places have their biophony, geophony and anthrophony, we want to create an Infophony of space, which we can walk through, orient ourselves in, and experience. We wish to describe and implement the parts of our soundscape which can be created through Ubiquitous Publishing techniques: from social networks, from data sets, and from the digital information which we constantly produce, in all the places of the world, through our daily lives. We want to make this information physical, evolving, emergent, experienceable, immersive and complex, just like the rest of the soundscape.
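
As a rough illustration of what we mean by turning ubiquitously produced data into sound, here is a minimal sketch (in Python, using only the standard library, and not part of the workshop's actual toolchain): a hypothetical series of numeric values, for example hourly counts of posts produced from a place, is mapped onto pitches and written out as a short sine-wave audio file. The data, filename and mapping are purely illustrative assumptions.

    import math, struct, wave

    # Hypothetical data: e.g. hourly counts of posts produced from one place.
    counts = [3, 8, 15, 42, 27, 11, 5, 19]

    SAMPLE_RATE = 44100
    NOTE_SECONDS = 0.5

    def tone(freq, seconds, rate=SAMPLE_RATE, amplitude=0.4):
        """Generate one sine tone as a list of 16-bit samples."""
        n = int(seconds * rate)
        return [int(amplitude * 32767 * math.sin(2 * math.pi * freq * i / rate))
                for i in range(n)]

    # Map each value linearly onto a frequency between 220 Hz and 880 Hz.
    lo, hi = min(counts), max(counts)
    samples = []
    for value in counts:
        freq = 220 + (value - lo) / (hi - lo or 1) * (880 - 220)
        samples.extend(tone(freq, NOTE_SECONDS))

    with wave.open("infophony-sketch.wav", "w") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(SAMPLE_RATE)
        out.writeframes(struct.pack("<%dh" % len(samples), *samples))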

We want to create an explicit bridge between the physical and digital realms of our lives, through sound, allowing us to create information ubiquitously, and to experience it immersively.

What we will do

We will create an Augmented Reality application which will allow us to experience the immersive Ubiquitous Soundscape by wearing headphones.

We will create the application together, also co-designing its elements. The application will allow us to load sound samples and sound representations of datasets and information, and to map them onto a physical space. Headphones will then be used to experience the soundscape in an immersive way: walking up to the sounds and away from them, achieving a new form of sound-based orientation through the Ubiquitous Soundscape, in the physical world.
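
To give an idea of the basic mechanism behind this kind of experience, here is a minimal sketch (not the actual application code) of how geolocated sound sources could be faded in and out according to the listener's distance from them. The source names, coordinates and attenuation curve are illustrative assumptions; on the smartphone, the GPS callback would feed the listener's position into a function like this and the resulting gains would drive the audio mixer.

    import math

    # Hypothetical geolocated sound sources: (name, latitude, longitude).
    SOURCES = [
        ("fountain-sample", 43.3007, 13.4530),
        ("dataset-sonification", 43.3012, 13.4541),
    ]

    AUDIBLE_RADIUS_M = 80.0  # beyond this distance a source is silent

    def distance_m(lat1, lon1, lat2, lon2):
        """Approximate distance in meters between two GPS points (haversine)."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def gains(listener_lat, listener_lon):
        """Return a 0..1 volume for every source, fading linearly with distance."""
        result = {}
        for name, lat, lon in SOURCES:
            d = distance_m(listener_lat, listener_lon, lat, lon)
            result[name] = max(0.0, 1.0 - d / AUDIBLE_RADIUS_M)
        return result

    # Example: a listener position roughly between the two sources.
    print(gains(43.3009, 13.4535))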

We will create our own Ubiquitous Soundscapes.

We will showcase them in a final performance through the streets of Macerata, and through an exhibit.

Who is this workshop for

Any artist, designer, hacker, architect, or anyone else interested in exploring the possibilities opened up by the opportunity to create ubiquitous sound experiences using samples, data and information.

Although many technologies will be used, no previous technological knowledge is required. The workshop is for everyone. Of course, people with additional technological expertise will be able to appreciate additional levels of detail.

What you need

Your laptop. All your smartphones (iOS or Android).

Optional: sound-related technologies (digital recorders, effects, controllers, software, microphones…).

Publication and digital distribution

Read/Write Reality Ubiquitous Sound will also be a digital publication about the results of the workshop, with all the participants included as authors.

Produced by AOS (Art is Open Source) in collaboration with Teatro Rebis, Youbiquity and Macerata Racconta, this publication will include the critical and theoretical approaches used during the workshop, the exercises, as well as a description of the techniques and tools used. A digital book for designers, artists, architects, hackers, communicators, ethnographers and developers wishing to expand their perspectives on ubiquitous publishing.

Drawing after Drawing: the many lives of an ancient medium

Il Disegno dopo il Disegno

The book “Il Disegno dopo il Disegno: le molte vite di un medium antico” (“Drawing after Drawing: the many lives of an ancient medium”) has just been published by Pisa University Press, edited by Valeria Bruni, Stefano Socci and Franco Speroni.

The book contains essays by Alberto Abruzzese, Giuseppe Andreani, Alessandro Bernardi, Valeria Bruni, Massimo Carboni, Marco Cianchi, Giovanni Fiorentino, Gino Frezza, Francesco Galluzzi, Andrea Granchi, Salvatore Iaconesi, Lorenzo Imbesi, Anna Luppi, Roberto Maragliano, Ruggero Pierantoni, Cristina Reggio, Carlo Sini, Stefano Socci, Franco Speroni, Tommaso Tozzi, Laura Vecere.

S. Iaconesi (2013). “Remixing the Dots: Disegno Memetico ed Evoluzione Culturale” in V. Bruni (ed.), S. Socci (ed.), F. Speroni (ed.) “Il Disegno dopo il Disegno: le molte vite di un medium antico”. Pisa, Italy: Pisa University Press. ISBN 978-88-6741-172-6.

In the book, we present “Remixing the dots: Memetic Drawing and Cultural Evolution”.

In the essay we start from the analysis of drawing and sketching in the histories and economies of human ideas.

We start off by observing the evolution of trends found in patent drawings, highlighting the different ways in which drawings and sketches have established relationships with the ideas they try to explain and wish to represent, and how these relationships have transformed over time.

We then move on to describe the transformation of these techniques, including their shift from a focus on pictorial techniques to the progressive adoption of cut-and-paste and the emergence of diagrams and visualizations. In this analysis, drawing becomes a performance of conceptual abstraction whose main purpose is to represent ideas, knowledge and information in factual, possibilistic, recombinant ways.

The contribution ends with an analysis of computational and collaborative processes, and of their role in establishing a new form of performance, oriented not only towards personal expression but also towards amplifying drawing’s role as an enabler of relationships and mutual interconnections between human beings and information.

The chapter closes with an operative hypothesis for a performance of this kind, enacted through Augmented Reality and, more generally, through the idea of Ubiquitous Publishing. A possible answer is what we are formalising in the Remixing the Dots Augmented Reality application, which will be out very soon.

So stay tuned for the updates!

The future of academic discussions, on Limina

Limina n.2

Our article “Utopian Architectures and the Dictatorship of the Imaginary. A Selection of Topics in Favor of Holistic Education Paths, and the Role of the Fish Eye in the Observation of Reality” (by Salvatore Iaconesi and Stefano Bonifazi) has just been published in Issue n.2 of Limina (page 183), by the Planetary Collegium’s M-Node.

It is a peculiar article, as it uses a novel system which has been designed and proposed to suggest new forms of publishing that can be used to structure academic discussions, including their outputs in the form of papers and articles.

THE PROBLEM

“Classical” scientific publications are created by crystallizing the results of the research of one or more academics/scientists/researchers at one specific moment in time. This loses the information about the dynamics through which these results have been produced, their connection to the evolution of the research process, the relations and interactions that occurred among all the parties engaged, and their unfolding through time and relational space, including the possibility of representing the network of contributions (be they practical, relational, theoretical, operational, thematic…) which have led to the scenario described in the “paper”.

The “paper” or article is, basically, a narrative, structured along the lines defined by good academic and scientific practice, which loses all dynamic information about the research process it describes, and which is also very difficult to connect to the further development of that process.

THE SOLUTION

To produce the paper we used a system called Knowners, an Open Source WordPress plugin which allows you to represent the network of relations running among the content produced by multiple authors operating on the same publishing space (implemented through a website running the WordPress CMS).

During the research process all the activities which formed the research were added to the Architon website:

http://architon.artisopensource.net/

As the various parts of the research were added to the system, a network formed, which can be seen on the home page of the site.

Architon’s main network

The various elements of the network are calculated in real time by a keyword-based and natural-language-based algorithm which analyzes the content added to the platform, and uses the themes and tags found in the various information bits (parts of articles, texts, data and metadata added to the system in the form of posts and multimedia elements) to describe the relations running among all elements and their authors (which can be more than one, in the form of multiple WordPress users or commenters).
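
As an illustration of the general idea (this is not the actual Knowners code), such a relation network can be sketched by connecting every pair of content elements that share at least one tag or theme, and weighting the link by how many they share. The posts and tags below are invented.

    from itertools import combinations

    # Hypothetical content elements and the themes/tags attached to them.
    posts = {
        "utopian-architectures": {"utopia", "architecture", "education"},
        "fish-eye-observation":  {"perception", "education", "complexity"},
        "holistic-paths":        {"education", "utopia"},
    }

    # Build weighted edges: two posts are related if they share tags.
    edges = []
    for (a, tags_a), (b, tags_b) in combinations(posts.items(), 2):
        shared = tags_a & tags_b
        if shared:
            edges.append((a, b, len(shared), sorted(shared)))

    for a, b, weight, shared in sorted(edges, key=lambda e: -e[2]):
        print(a, "--", b, " weight:", weight, " shared:", ", ".join(shared))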

Each element of the visualization is interactive, allowing users to decide the focus of their navigation and to easily traverse the topics and relationship networks defined through the research.

So, as the research process unfolds, its content network unfolds as well, preserving the time scheme (in the form of the timestamps generated when creating the content elements) and the relational network running between the authors and other contributors (e.g. commenters) taking part in the research in various roles, as well as between the various pieces of content, making it possible to highlight:

  • the themes touched by the research, and their relations
  • the contributions of each author in relation to each theme
  • the time-based dimension of the research process

The system also makes it possible to keep track of the future development of the research: by simply including a QRCode and a link in the graphic layout of the paper, readers can connect directly to the online system and thus see its updated status in real time, and even contribute to it, making all this wealth of information not only immediately accessible, but also interactive and participatory.
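
The QRCode printed in the paper can be generated directly from the address of the online system (here the Architon site mentioned above); a minimal sketch, assuming the third-party qrcode Python package is installed:

    import qrcode  # pip install qrcode[pil]

    # Link the printed paper to the living online research system.
    img = qrcode.make("http://architon.artisopensource.net/")
    img.save("architon-qrcode.png")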

The scientific article becomes alive: a relational network which evolves in time and in which anyone can participate.

FUTURE STEPS

We’re developing the Knowners system to allow interconnection between different systems.

Imagine two teams of researchers working on similar issues, maybe from different points of view.

Imagine they work using this kind of system to produce the content and the outputs of the research.

Since the two teams share some of the same issues, some of their content, keywords and themes will match, with the resulting relations-network reflecting the differences of their approaches, methodologies and results.

We’re developing the feature which will make it possible to interconnect two or more of these Knowners systems, so that different research projects can be visualized on the same graph.

This would bring an incredible result: the possibility of immediately and visually comparing research projects operating on related themes, and of visually understanding and interacting with their interrelations, dependencies and mutual interactions.

Also: the fact that all this works on standard, open, consolidated protocols (such as the ones used by web systems like WordPress, including RSS, pingback, XML-RPC, Atom, etc.) opens the door to incredible possibilities, such as the one we are developing, in which, as soon as someone produces scientific content about a theme relevant to and interrelated with your research, your visualized graph transforms to reflect it, showing who is working on your same themes, how they are doing so, and how their work relates to, complements or diverges from yours.
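
A rough sketch of what this interconnection could look like at the protocol level: reading the standard RSS feeds of two WordPress-based research sites and matching the category/tag terms they expose. The feed URLs are hypothetical, and the sketch assumes the third-party feedparser package.

    import feedparser  # pip install feedparser

    # Hypothetical feeds of two Knowners-based research sites.
    FEEDS = [
        "http://architon.artisopensource.net/feed/",
        "http://another-research-site.example.org/feed/",
    ]

    def terms(feed_url):
        """Collect the category/tag terms exposed by a WordPress RSS feed."""
        feed = feedparser.parse(feed_url)
        found = set()
        for entry in feed.entries:
            for tag in entry.get("tags", []):
                found.add(tag.get("term", "").lower())
        return found

    a, b = (terms(url) for url in FEEDS)
    print("Shared themes between the two research networks:", sorted(a & b))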

AOS at Roma Contemporary: Ubiquitous Publishing, cities and bodies

 

We will be at Roma Contemporary, at the MACRO Museum, Testaccio, Rome, on May 26th 2012, at 5pm, to give a presentation about the scenarios of Ubiquitous Publishing and the transformations which it brings to cities and human beings.

We will be together with Dario Salani, presenting his Prinp self-publishing house, Valentina Tanni, art critic extraordinaire, and Chiara Passa, who will use her wonderful projects to show even more radically fascinating scenarios.

Be there!

Layers, a workshop on ubiquitous publishing at Ualuba

Layers, a workshop on ubiquitous publishing

 

We will be at Ualuba, in Brescia, Italy, on May 19th-20th for LAYERS, an intensive workshop on Ubiquitous Publishing.

 

LAYERS

SALVATORE IACONESI & ORIANA PERSICO

May 19+20 2012
16 hours / 2 days / 1 week
from 9am to 6pm
intensive workshop (registration needed)

at:

Centro Arti&Tecnologie
via Forcello 38/a
25124 Brescia
Italy

 

WORKSHOP DESCRIPTION

The spaces of contemporary cities are covered by membranes of digital information.

The wide and ubiquitous availability and accessibility of digital technologies and networks transform our perception of spaces.

Ubiquitous publishing technologies and methodologies – such as augmented reality, location based applications, digital tagging and near-field computing – make it possible to design natural interaction systems in which content, information and experiences become accessible through bodies, objects and architectural spaces.

In the workshop we will design and build a ubiquitous cinematographic experience: an augmented reality movie, disseminated in the city and accessible by traversing its spaces.

 

Minimum requirements:

The workshop is designed to be accessible even to people who have never had any experience in technological design and development.

Main requirements: curiosity, desire to learn and, most of all, to work in collaborative groups.

The workshop is also designed to provide insights about novel uses of technologies to people who already have experience with Java, C++/Objective-C, graphics, animation, mathematical models, environmental and architectural design.

Program:

  • augmented reality context: design methodology for physical spaces which include ubiquitous interactive experiences;
  • interactive ecosystems: design of interactive ecosystems which traverse media and physical spaces;
  • content management systems: how to transform a plain content management system (we will use a WordPress installation during the workshop) into a system which can manage ubiquitous content (location-based, tag-based, augmented reality), optimized for use on multiple devices (iPhone, iPad, Android, tablet computers); see the sketch after this list;
  • design of ubiquitous narratives: what a ubiquitous narrative is and how to design one; non-linear, emergent, multi-author, disseminated in space;
  • accessibility and usability: digital inclusion and alternative strategies; how to include people who do not own smartphones in the experiences;
  • technologies: cocos3D, cocos2D, Android SDK, iOS SDK, OpenGL ES, OpenFrameworks, Processing, Qualcomm AR SDK, PHP, SQL;
  • implementation of a ubiquitous cinematographic experience
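
As a taste of the content-management point in the program above, here is a minimal sketch of how location-based content could be filtered: each post carries latitude/longitude metadata, and only the ones within a given radius of the visitor's position are returned. The post data and field names are illustrative assumptions, not the actual WordPress code used in the workshop.

    import math

    # Hypothetical posts with latitude/longitude metadata, as a CMS plugin
    # might store them alongside each content element.
    POSTS = [
        {"title": "scene-01", "lat": 45.5390, "lon": 10.2200},
        {"title": "scene-02", "lat": 45.5412, "lon": 10.2235},
        {"title": "credits",  "lat": 45.5500, "lon": 10.2400},
    ]

    def nearby(lat, lon, radius_m=150.0):
        """Return posts whose location lies within radius_m of (lat, lon)."""
        results = []
        for post in POSTS:
            # Equirectangular approximation: accurate enough at city scale.
            dx = math.radians(post["lon"] - lon) * math.cos(math.radians(lat))
            dy = math.radians(post["lat"] - lat)
            d = 6371000.0 * math.hypot(dx, dy)
            if d <= radius_m:
                results.append((post["title"], round(d)))
        return results

    # A mobile client would call an endpoint like this with its GPS position.
    print(nearby(45.5400, 10.2210))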