Augmented Reality Art: the Emotional Compass featured in a new book

The Emotional Compass is featured in a chapter of the new “Augmented Reality Art” book from Springer, edited by Vladimir Geroimenko, with contributions from Mark Skwarek, Tamiko Thiel, Gregory L. Ulmer, John Craig Freeman, Conor McGarrigle, Patrick Lichty, Geoffrey Alan Rhodes, Jacob Garbe, Todd Margolis, Kim Vincs, Alison Bennett, John McCormick, Jordan Beth Vincent, Stephanie Hutchison, Ian Gwilt, Judson Wright, Nathan Shafer, Salvatore Iaconesi, Oriana Persico, Dragoş Gheorghiu and Livia Ştefan, Simona Lodi, Margaret Dolinsky, and Damon Loren Baker.

view Augmented Reality Art on Springer

In the book our contribution is titled: “An Emotional Compass: Emotions on Social Networks and a new Experience of Cities”.

cite as:

Iaconesi, S. and Persico, O. (2014). “An Emotional Compass: Emotions on Social Networks and a new Experience of Cities” in Augmented Reality Art: From an Emerging Technology to a Novel Creative Medium, part of the Springer Series on Cultural Computing, Geroimenko, Vladimir (Ed.). New York: Springer. ISBN 978-3-319-06202-0.

Here is a short sample of the introduction of the chapter:

“The map is not the territory.” (Korzybski, 1933)


“The map is not the thing mapped.” (Bell, 1933)


“The tale is the map that is the territory.” (Gaiman, 2006)


“We say the map is different from the territory. But what is the territory? The territory never gets in at all. […] Always, the process of representation will filter it out so that the mental world is only maps of maps, ad infinitum.” (Bateson, 1972)


When we experience territories, we create stories. We model these stories using mental maps: one person’s perception of their own world, influenced by that person’s culture, background, mood and emotional state, and by their instantaneous goals and objectives.


If we move along the streets of our city in a rush, trying to find a certain type of shop or building, our experience will be different from the one we would have had if we were searching for something else.


Focus will change. We will see certain things and not notice others which we would have noticed otherwise. Some things we will notice because they are familiar or common, or because we associate them with our cultures, memories and narratives. This whole process goes on continuously as our feelings, emotions, objectives and daily activities change, creating the tactics according to which we traverse places and spaces to do the things we do.


In the density of cities, this process happens for potentially millions of people at the same time. In The Image of the City (Lynch, 1960), Lynch described cities as complex time-based media: symphonies produced by millions of people at the same time in their polyphonic ways of acting, moving, interpreting, perceiving and transforming the environment around themselves: a massive, emergent, real-time, dissonant and randomly harmonic work of time-based art with millions of authors that change all the time.


In this, our mental maps – the personal representations of the city which we build in our minds to navigate it and fulfill our needs and desires – live a complex life as our perception joins the great performance of the city.


Dissonance is the essence of the city itself, and represents its complexity, density and opportunities for interaction.


Augmented Reality Art

How to make a ubiquitous soundscape using augmented reality: Read/Write Reality, Ubiquitous Sound at Youbiquity!

AOS will be in Macerata (May 2–6 2014) at the Youbiquity Festival for a workshop in which we will learn how to create a ubiquitous soundscape and installation: an immersive geography of sound.

“When you listen carefully to the soundscape it becomes quite miraculous.”

––R. Murray Schafer

From the Youbiquity website:

An immersive workshop whose objective is to create a Ubiquitous Soundscape: a sonic landscape which can be experienced using Augmented Reality, and which can be produced collaboratively through sound sampling and the audio representation of data and information.


Participants will learn how to design a specific Augmented Reality smartphone application (iOS and Android) on which to publish their Ubiquitous Soundscape, created through sound samples of any kind and the audio representation of data and information. All of this will form an immersive experience, in which it will be possible to walk through the sounds disseminated across natural and urban spaces.

One result of the workshop will be participation in the second volume of the Read/Write Reality publications (you can find the first Read/Write Reality book on Lulu, which was about the creation of an Augmented Reality movie), and a final show/exhibit/installation, ubiquitously distributed through the streets of beautiful Macerata.

Here is the Program and info for the Ubiquitous Sound workshop

To take part in the workshop you can contact: +39 349 6441703

How do you create a ubiquitous Soundscape?

The Soundscape. The sound or combination of sounds which arises from an immersive environment.

This definition of soundscape comes from Canadian composer R. Murray Schafer, who identified three main elements of each place’s soundscape: the Keynote Sounds, created by nature, geography and climate, which live in the background of our conscious perception most of the time; the Sound Signals, which are the ones we consciously listen to; and the Soundmark (from “landmark”), the sound which is unique to an area.

Bernie Krause classified the elements of the soundscape according to their originating source: the Geophony of a place, generated by non-biological sources; the Biophony, as generated by non-human living beings; and the Anthrophony, generated by human beings.

Both of these definitions can be updated to account for the fact that entirely new dimensions of space have now entered our realm of perception.

Digital data, information and communication have become ubiquitously available and accessible, and everything we do generates data and information somewhere.

We have learned to use all these additional sources of information to transform the ways in which we communicate, work, collaborate, learn, express ourselves and our emotions, relate and consume. Ubiquitous information has entered our daily lives, blurring the boundaries between what is digital and what is physical, so much that it is progressively losing sense to make the distinction in the first place.

In RWR UbiquitousSound we wish to address the phenomenology of the Ubiquitous Soundscape.

Our aim is to design a natural way to create and interact with digitally and ubiquitously produced sound in the environment.

As happens with the biophony, geophony and anthrophony of places, we want to create an Infophony of space, which we can walk through, orient ourselves in, and experience. We wish to describe and implement the parts of our soundscape which could be created through Ubiquitous Publishing techniques: from social networks, data sets, and the digital information which we constantly produce, through our daily lives, from all the places in the world. We want to make this information physical, evolving, emergent, experienceable, immersive, complex, just as the rest of the soundscape.
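The extended taxonomy sketched above (geophony, biophony, anthrophony, plus the Infophony of digitally produced sound) can be written down as a small data structure; the category descriptions come from the text, while the tag-based classifier is purely an illustrative sketch of ours:

```python
from enum import Enum

class SoundSource(Enum):
    """Krause's taxonomy, extended with the Infophony proposed above."""
    GEOPHONY = "non-biological sources (wind, rain, rivers)"
    BIOPHONY = "non-human living beings (birds, insects)"
    ANTHROPHONY = "human beings (voices, traffic, machines)"
    INFOPHONY = "sonified digital data (social networks, sensors, datasets)"

def classify(sample_tags):
    """Toy classifier: map free-form tags attached to a sound sample
    to categories of the extended taxonomy (lookup table is illustrative)."""
    lookup = {
        "wind": SoundSource.GEOPHONY, "rain": SoundSource.GEOPHONY,
        "birdsong": SoundSource.BIOPHONY, "insects": SoundSource.BIOPHONY,
        "traffic": SoundSource.ANTHROPHONY, "voices": SoundSource.ANTHROPHONY,
        "tweet-stream": SoundSource.INFOPHONY, "sensor-data": SoundSource.INFOPHONY,
    }
    return {tag: lookup.get(tag) for tag in sample_tags}
```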

We want to create an explicit bridge between the physical and digital realms of our lives, through sound, allowing us to create information ubiquitously, and to experience it immersively.

What we will do

We will create an Augmented Reality application which will allow us to experience the immersive Ubiquitous Soundscape by wearing headphones.

We will create the application together, also co-designing its elements. The application will allow us to load sound samples and sound-representations of datasets and information, and to map them to a physical space. Headphones will then be used to experience the soundscape in an immersive way: walking up to the sounds and away from them, achieving a new form of sound orientation through the Ubiquitous Soundscape, in the physical world.
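A rough way to picture how walking toward and away from sounds could work: anchor each sample to a coordinate and attenuate its gain with the listener’s distance. This is only an illustrative sketch (the haversine distance and the linear 50-meter falloff are our assumptions, not the application’s actual audio engine):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gain_for(listener, sound_anchor, radius_m=50.0):
    """Gain in [0, 1]: full volume at the anchor, silence beyond radius_m.
    listener and sound_anchor are (lat, lon) tuples."""
    d = haversine_m(*listener, *sound_anchor)
    return max(0.0, 1.0 - d / radius_m)
```

Feeding this gain to each sample’s audio channel as the GPS position updates would produce the effect of physically approaching and leaving sounds in space.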

We will create our own Ubiquitous Soundscapes.

We will showcase them in a final performance through the streets of Macerata, and in an exhibit.

Who is this workshop for

Any artist, designer, hacker, architect, or anyone else interested in exploring the possibilities brought on by the opportunity to create ubiquitous sound experiences using samples, data and information.

Although many technologies will be used, no previous technological knowledge is required. The workshop is for everyone. Of course, people with additional technological expertise will be able to appreciate additional levels of detail.

What you need

Your laptop. All your smartphones (iOS or Android).

Optional: sound-related technologies (digital recorders, effects, controllers, software, microphones…).

Publication and digital distribution

Read/Write Reality Ubiquitous Sound will also be a digital publication about the results of the workshop, with all the participants included as authors.

Produced by AOS (Art is Open Source) in collaboration with Teatro Rebis, Youbiquity and Macerata Racconta, this publication will include the critical theoretical approaches used during the workshop, exercises, as well as the description of the techniques and tools used. A digital book for designers, artists, architects, hackers, communicators, ethnographers and developers wishing to expand their perspectives on ubiquitous publishing.

Ubiquitous Humanity: at iPompei for the next step of smart communities

Back in the city of Pompei for the next step for the future of our cities.

We will be in Pompei on June 3rd and 4th for iPompei, an event organized by the Public Administration of the City of Pompei together with the MIBAC (Italy’s Ministry for Cultural and Artistic Heritage), UNESCO, and MIUR (Italy’s Ministry for Education and Scientific Research)  to present the second phase of the Ubiquitous Pompei project, together with a series of additional initiatives.

Ubiquitous Graffiti

As you might remember from our previous activities, the Ubiquitous Pompei project engaged the high school students of the city of Pompei, providing them with technologies through which they have been able to start a participatory process: designing their vision of the digital city and beginning to implement the first services they imagined.

The project has been really successful so far, as the students skillfully engaged with the opportunities offered by ubiquitous technologies and created mobile applications and web systems which foster active citizen participation, as well as the emergence of new opportunities for public life.

The idea of creating a ubiquitous digital infrastructure for their city has been truly insightful for the students. They have imagined tools for everyday life which allow people to engage with the important themes of the city, to observe their societies and environments as they live, in real-time, and to promote the new opportunities which emerge by combining public, participatory city governance and decision-making processes, open data, and the possibility to relate to fellow citizens who share the same interests and visions, collaborating with them on the design and implementation of new opportunities.

the first phase of the project

This has been a truly important action, as it was designed to activate the young students of Pompei’s high schools, and to bring them into direct contact with the public administration to pragmatically suggest new visions.

We promoted a peer-to-peer education and knowledge model, in which we acted as technological facilitators. We created a series of technological tools which students could use to design and assemble their ideas for services and citizen-centered processes.

Students learned about the possibilities offered by the technologies and autonomously designed their visions and services, with our help on the technical and technological side.

In the next step of the project, several innovations will take place:

  • students will engage the rest of the population: further assuming the role of city designers, they will actively involve the citizens of Pompei, collecting their requirements and visions for the digital city
  • these ideas and requirements will form the specifications for the next step of the services and citizen-tools which will be produced in the next phase of the project
  • everything will be produced and implemented, and presented before the end of 2012

This project has been a real breakthrough, with innovative ideas springing up at each phase and quickly turning into real services which can be freely used by the rest of the population.

the first phase of the project

The project has been chosen by Italy’s Digital Agenda as a best practice for the thematic tables which are leading the design of the policies that will shape the country’s digital future. A team of consultants of the Ministry of Education and Research (specifically, Damien Lanfrey and Dario Carrera) has been following us closely, providing fundamental insights about the strategies which could be used to further enhance this project and to enable it to scale nationwide.

All this, together, has led to the second stage of the project, which will be presented in Pompei on June 3rd and 4th: the start of the city-wide process which will let the specifications of the next stage emerge and, then, of the next phase of design and implementation; and the start of the phase through which the project will form its strategy for scalability, engaging other schools and the other public administrations which have already shown interest in the process.

Community Development

The process will begin with citizens.

The MIUR has kindly provided us with the Ubiquitous Italy platform on IdeaScale to start the public discussion with citizens.

We will keep you updated.

On Sunday June 3rd we will be at the City Hall (4pm–6pm) for an event dedicated to the whole population of the city of Pompei: a workshop in which we will start the participatory design process of the digital city.

On Monday June 4th, at 12am, we will be at the City Hall again for a meeting with the media and press, where, together with the City Administration and the MIUR, we will officially present the next stages of the project.

More info can be found here:


Here is the presentation that we will give during the event:



Reinvent Reality at FADfest, a conference on Open Design and Shared Creativity, Barcelona

Reff, RomaEuropa FakeFactory

We will be joining in the conversation at Open Design / Shared Creativity Conference in Barcelona, on July 2nd and 3rd to present a scenario for radical innovation, based on a radically open process.

In 2008 we created REFF, RomaEuropa FakeFactory: a meta-brand (an open source brand, which anyone sharing its ethical approach can use, with the results benefiting the whole p2p network of individuals and groups who participate in the brand itself), conceived to confront Italy’s difficult scenario for public policies on arts and creativity.

When we created it, we chose to give it a peculiar form: an institution.

A Fake Institution, more precisely.

As Orson Welles might have said: “fake” is real.

Meaning that reinventing reality can become a tool for constructivist action on society.

This is precisely what we tried to do with REFF: to create a p2p ecosystem dedicated to the systematic reinvention of reality through critical practices of remix, re-contextualization, re-enactment, mash-up.

We used these terms – which are classically associated with media – in real-life, in the space of the city and of the networks of relationships which describe our societies.

The structure of the initiative has been natively peer-to-peer: the whole “system” was designed as a tool for the expression of a network of peers, used to add meaningful layers of information and opportunities for action to our ordinary reality.

This, in our perspective, is a wonderful definition for Augmented Reality, and AR has, in fact, been a great tool for REFF, which used it as a metaphor for its practices.

And this is, for us, a highly effective scenario for creating opportunities in our near future, beyond the scenarios of crisis: bringing the possibilities opened up by technologies into networked models which are deeply rooted in our cities and communities, in which the basic definitions of our societies – such as public space, production, distribution, revenue, governance, policy-making, decision-making and many others – can be redefined to become natively networked processes, creating new schemes for sustainability, sociality and development.

This is the point of view which we will present at FADfest, at the Open Design / Shared Creativity Conference.

Open Design / Shared Creativity Conference is an international forum that seeks to explore and debate the emerging landscape of openness and exchange that is taking shape around practices such as open code, creative commons licensing, co-creation, de-localisation and collaboration.

Digital technology and social networks have reached a point of maturity from which a new industrial culture is emerging, revolutionising the processes of creation, mediation, distribution and consumption. Taking design in all its expressions and forms as a starting point, the conference will be an important international forum of ideas, working platforms and specialised practices that are transforming the articulation of design with society, economy and culture.

Designers, architects, artists, editors, web activists, programmers, curators, lawyers and cultural analysts will explore over two days the reality and the potential of open design culture, from new business models to the most experimental creative practices.

Augmented Reality: the Augmented City, communication and citizenship

On May 17th 2012, we took part in prof. Marco Stancati’s Media Planning course at La Sapienza University of Rome, with a lecture on the scenarios offered by Augmented Reality for the creation of novel opportunities for communication and business.

HERE you can find some information about our lecture and the MediaPlanning course.

HERE you can download the slides we used for the lecture

(The slides are a lighter version of the ones we used in class, which were full of videos and hi-res images: please feel free to contact us should you want the original ones)

In the lecture, we started from a series of definitions of what, in our times, can be considered “Augmented Reality”.

Augmented Reality in the city

In our definitions we chose to describe a wider form of the term, not limiting it to the set of applications to which we’ve all become accustomed, and abandoning for a moment the vision of people happily strolling through cities with their smartphones raised in front of their faces.

Nonetheless we used classical examples of AR to introduce a possible evolution of what is/will be possible in our cities using ubiquitous technologies.

We focused on the idea of the Augmented City.

augmented city and its many voices

In this vision of the city, many subjects (individuals, organizations and, through sensors, the city itself) add layers of digital information, in real-time. We can access and experience these sets of information in multiple ways, and we can also use them to compose, dynamically, our personal vision of the city, by remixing, re-arranging, re-combining and mashing up all the information layers which are available.

This is a very interesting situation for cities and their citizens, as it enables the creation of entirely new scenarios for communication, business and personal expression.

It also opens up possibilities which will probably have a high impact on the ways in which, for example, enterprises design their own products, and the ways in which they create the strategies according to which products and services are communicated, marketed, monitored.

We discussed the following scenario, among the many possible:

augmenting the voices on products

The physical packaging of products usually hosts information and messages which are created by a very limited number of voices (e.g. the manufacturer, or the marketing team).

The drawing depicts a scenario which is becoming progressively more frequent: a multiplicity of subjects are now able to join the Brand in adding digital information to products and services, using Augmented Reality, QRCodes, computer vision, tagging (e.g. RFID) and location-based technologies. We call this Ubiquitous Publishing.

For example, in the Squatting Supermarkets project we used products’ packaging as a visual reference for critical Augmented Reality experiences. In the performance, people could use their smartphones to “look at” products on a supermarket shelf. When they did, several layers of information became available:

  • a map, created using MIT’s Open Source Map, showing where the product and its components came from, where its materials came from, where it had been processed and assembled, and in which places it stopped during transportation; the product becomes a map of the planet highlighting all the places which it touched during its manufacturing and distribution processes;
  • a series of visualizations showing the product’s composition, the percentages of organics, chemicals, fat, aromas… all shown through interactive information visualizations.
visualizations in Squatting Supermarkets

One of the visualizations was analyzed in deeper detail. On the top right of the previous image is a timeline of the real-time conversations about the product: the timeline scrolls left and right, and each colored block represents a conversation and its general sentiment (meaning: the sentiment which is most represented in the conversation); green, yellow and red encode positive, mixed and negative sentiments.

So: while strolling through the aisles of a supermarket, you take a picture of your favorite product and you are able to see what people on social networks are saying about it, in real-time.
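As a sketch of how the colored blocks could be computed, here is a toy version of the mapping from a conversation’s sentiments to its block color; the thresholds and the “dominant sentiment” rule are illustrative assumptions of ours, not the actual implementation used in Squatting Supermarkets:

```python
def block_color(sentiments):
    """Map the sentiments of one conversation (scores in [-1, 1])
    to the dominant color of its timeline block:
    green = mostly positive, red = mostly negative, yellow = mixed."""
    pos = sum(1 for s in sentiments if s > 0.2)
    neg = sum(1 for s in sentiments if s < -0.2)
    if pos > 2 * neg:
        return "green"
    if neg > 2 * pos:
        return "red"
    return "yellow"

def timeline(conversations):
    """One colored block per conversation, in chronological order."""
    return [block_color(c) for c in conversations]
```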

We observed that this possibility (publishing real-time, user-generated information using ubiquitous publishing techniques and accessible information visualizations) describes an interesting loop which we are able to close.

We can (and do) harvest user-generated information in real-time about our topics of interest (from blogs, websites, social networks and social media sites) and publish it when and where it is most useful.

VersuS, the real-time lives of cities

The image above shows the experiment we performed with the VersuS project during the city-wide riots taking place in Rome on October 15th 2011.

The 3D surface covering the map of the city of Rome shows the intensities of the social network conversations taking place during the protests and riots. The image is part of a real-time visualization through which we have been able to observe how social media conversations closely followed the evolution of the protest.
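The intensity surface can be thought of as geotagged posts binned onto a grid laid over the map; here is a minimal, hypothetical sketch (the cell size and corner coordinates are placeholders, not the parameters actually used in VersuS):

```python
from collections import Counter

def intensity_grid(posts, lat0, lon0, cell_deg=0.005):
    """Bin geotagged posts into grid cells and count them.
    posts: iterable of (lat, lon) pairs; lat0/lon0: south-west corner
    of the map; cell_deg: cell size in degrees (~500 m at Rome's latitude).
    Returns a Counter mapping (row, col) -> number of posts, which can
    drive the height of the 3D surface drawn over the map."""
    grid = Counter()
    for lat, lon in posts:
        row = int((lat - lat0) / cell_deg)
        col = int((lon - lon0) / cell_deg)
        grid[(row, col)] += 1
    return grid
```

Rendered in real-time, the cell counts become the peaks and valleys of the surface: the denser the conversations in an area, the taller the surface above it.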

By using harvested conversations (from Facebook, Twitter, Flickr and Foursquare) we have been able to analyze what was being said and where, and to demonstrate that a massive amount of useful information was being published by users: about violence, injuries, possible escape routes, missing people. All this information could have been collected, organized and accessed by protesters through specifically designed interfaces to achieve important, pragmatic results such as avoiding being hurt, finding safe escape routes from the riots, or finding friends who were lost in them.

We were able to design, for example, the simple Augmented Reality application for smartphones pictured below:

Augmented Reality for Riots

This experimental interface shows how a rioter could have visualized, on the screen of a smartphone, the degree of safety in the direction he/she was facing, as inferred from geo-referenced social media conversations.

An immediate, easy to use tool to achieve important goals.
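A hypothetical sketch of the inference behind such an interface: keep the geo-referenced messages that fall within a cone around the compass heading, and average their sentiment. The bearing formula is standard; the cone width and the scoring rule are our illustrative assumptions, not the application’s actual logic:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

def safety_in_direction(user, heading_deg, messages, cone_deg=45.0):
    """Average the sentiment of geotagged messages lying within a cone
    around the direction the user is facing.
    user: (lat, lon); messages: list of (lat, lon, sentiment in [-1, 1]).
    Returns a score in [-1, 1] (negative = reports of danger), or None
    if no message falls inside the cone."""
    scores = []
    for lat, lon, sentiment in messages:
        diff = abs((bearing_deg(*user, lat, lon) - heading_deg + 180) % 360 - 180)
        if diff <= cone_deg / 2:
            scores.append(sentiment)
    return sum(scores) / len(scores) if scores else None
```

On screen, the score could simply drive the color of the direction indicator: green when the cone ahead averages positive, red when it averages negative.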

During the lesson we focused on how it would be possible to use these technological opportunities to conceive and enact innovative communication practices.

We described a couple of scenarios, which we can imagine being applied in different forms, ranging from the creation of scenarios for the public lives of cities and their citizens, to the needs of communicators for their work with enterprises and administrations, to the needs of marketing and advertising.

In synthesis, we imagined a novel, more extensive definition of Augmented Reality, according to which a loop is formed between the digital and physical worlds.

In this definition of AR it is possible:

  • to harvest user-generated (as well as database and sensor generated) real-time information about relevant places/topics/products/services,
  • to process it using techniques such as Natural Language Processing and Sentiment Analysis,
  • to publish it ubiquitously, where/when it is most useful, using interfaces and interaction schemes which ensure accessibility and usability (including smartphone apps, urban screens, wearable technologies, digital networked devices, information displays…) and
  • to provide ways according to which users are able both to contribute to the flow of information and to re-assemble and re-interpret it, creating additional points of view.
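The four steps of this loop can be summarized in a skeleton like the following; the sample posts, the keyword-based “sentiment” step and the publishing stub are toy placeholders for the real techniques (harvesting APIs, Natural Language Processing, AR publishing) listed above:

```python
def harvest():
    """Step 1 - harvest: stand-in for pulling geo-referenced, real-time
    posts from social networks, databases or sensors."""
    return [
        {"text": "love this square, great concert", "lat": 41.9, "lon": 12.5},
        {"text": "avoid the station, huge crowd", "lat": 41.91, "lon": 12.49},
    ]

def analyze(post):
    """Step 2 - process: toy keyword sentiment standing in for real NLP."""
    positive = {"love", "great", "safe"}
    negative = {"avoid", "crowd", "danger"}
    words = set(post["text"].replace(",", " ").split())
    post["sentiment"] = len(words & positive) - len(words & negative)
    return post

def publish(post):
    """Step 3 - publish ubiquitously: stand-in for pushing the annotated
    post to an AR layer, urban screen or app at its coordinates."""
    return (post["lat"], post["lon"], post["sentiment"])

def contribute(layer, user_post):
    """Step 4 - users feed their own content back into the loop."""
    return layer + [publish(analyze(user_post))]

# Run the loop once: harvest -> analyze -> publish.
layer = [publish(analyze(p)) for p in harvest()]
```

Each published tuple carries coordinates and a score, which is exactly the kind of geo-referenced layer the AR interfaces above could render.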