VivaCosenza: how to transform a city event into a real-time participatory performance

Realtime VivaCosenza


VivaCosenza Performance Lab is an international event about art and performance that will be held on December 8th and 9th, 2012 in Cosenza, an ancient and beautiful city in the south of Italy.

The event will feature multiple international artists, a city-wide forum engaging the whole population in cultural design and activities dedicated to the creation of public strategies and policies, as well as a series of innovative scenarios dedicated to education, for high school and university students.

At AOS we have been invited to design the digital life of the festival. A first, early version of the website which will host this part of the initiative can be seen here: http://vivacosenza.it/viz

We decided to create tools which students and citizens can use to enact the real-time, participatory narratives of the event, as a fundamental part of all of the education, communication and cultural formats designed for the festival.

Using a series of open technologies which we had developed for the ConnectiCity and VersuS projects, we have set up a system able to capture in real time all of the social network activity of citizens, students, visitors, organizations and institutions of the city of Cosenza, as well as of the people who will use social networks to communicate about the festival and the city from other locations.

A set of language-based technologies will then classify all this information in real time, identifying the themes, issues and subjects it talks about.

Special focus will be given to the projects created by high-school and university students, who have been asked to create communication formats for the festival, dealing with the arts, food culture and new forms of journalism and storytelling. The contents created in these formats will be given special prominence, and the best ones will be awarded a prize and taken into consideration for further development in next year’s edition.

Moreover, all of the emergent communication generated in real time during the festival will be captured from social networks and visualized online, on smartphone/tablet applications, and through projection mapping in a public space in the city, so that all citizens will be able to experience the digital life of the city directly from public space.

The objective of the platform is to understand the ways in which these kinds of technologies can be used to transform the life of the citizens of the city, to imagine, design and enact novel participatory approaches.

In this, we suggest a new role for institutions, who become promoters and maintainers of new forms of expression which are available and accessible to everyone.

Just as we used technology to create an infrastructure for expression, to be used by students to create their own formats, we imagine a “city as a platform” (for example as we suggested in Trieste a few weeks ago), where ubiquitous infrastructure (both cultural and technological) is made accessible and usable through public policies, enabling citizens and city dwellers to have the tools to design and build their own digital, cultural, business, communication, storytelling and envisioning ecosystems.

We will start from scratch with the students and have, thus, set up a basic set of technologies for them to use as building blocks for their communication and storytelling formats.

For example, we have set up a platform which captures all city-relevant public content generated on social networks (relevant either because it was generated in the city, or because it discusses city-relevant issues).

Here below you can see a visualization of the data being captured by the system in real time:

Data being captured and visualized in Cosenza in realtime


The green dots show topic clusters (larger means “more important”), while the red dots show user clusters, connected to the topics they are discussing.
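As a rough illustration of the structure behind this visualization, the bipartite mapping between user clusters and topic clusters can be sketched as follows. The data and the ranking rule are invented for illustration; the actual clustering pipeline is more sophisticated.

```python
# Hypothetical sketch: a bipartite graph linking users to the topics
# they discuss, with "larger means more important" expressed as degree.
from collections import defaultdict

# (user, topic) pairs as they might arrive from the capture pipeline
captured = [
    ("user_a", "food culture"), ("user_b", "food culture"),
    ("user_b", "street art"), ("user_c", "festival"),
    ("user_a", "festival"), ("user_d", "festival"),
]

# group users under each topic they mention
topic_users = defaultdict(set)
for user, topic in captured:
    topic_users[topic].add(user)

# rank topics by how many distinct users discuss them
ranking = sorted(topic_users, key=lambda t: len(topic_users[t]), reverse=True)
print(ranking[0])  # festival
```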

Data can be analyzed according to time, using timelines such as the one below:

the Digital Days of the city of Cosenza


Users can also be analyzed by their activity (how much content they produce on social networks) and by the topics they discuss, as seen in the two images below.

Digital Citizens in Cosenza


 

What digital citizens discuss in Cosenza


For example, topic clusters can be organized into easy-to-access groups, thus establishing multiple possible participatory communication formats.

Here, for example, we have assembled some for the beginning of the festival (the bars are almost empty for now, as the festival has not yet begun); by simply clicking them, people will access what students and city dwellers have produced, shared and communicated in the specific format, across social networks and sites.

some formats, dedicated to the festival


It must be highlighted that these technologies capture in real time the public communications which citizens publish on social networks (for VivaCosenza we will be using Facebook, Twitter, Instagram and YouTube). So we will capture all (and only) those messages which are intended as public by their publishers (users/citizens).

Yet this is a delicate issue, as the definitions of privacy and of public/private spaces are rapidly changing, and people often have a hard time understanding the reach and scope of visibility of the messages they post online.

We will use this occasion to also explore these important issues: we do not wish to promote a novel form of Panopticon, but a cultural approach according to which individuals and groups can freely decide what and how to communicate, to whom it should be visible and accessible, and how to use this information to create opportunities for collaboration, sustainable business, social innovation and art.

So heads up, and come to VivaCosenza Performance Lab!

Updates on Enlarge Your Consciousness in 4 Days 4 Free

Enlarge Your Consciousness in 4 Days 4 Free


Here are some updates to the project Enlarge Your Consciousness in 4 Days 4 Free

Here is the event on the website of BTF Gallery

Here on Artribune

Here is the official announcement on ArteFiera OFF

Here is a wonderful article on D’Ars Magazine

And on this issue of Espoarte you can find an article about the project

 

Here is a slideshow of the backstage preparations for the exhibit:

 

 

and here’s a slideshow about the exhibit:

 

 

 

More information and materials about the work are coming up in the next few days.

Enlarge Your Consciousness in 4 Days 4 Free

“The most exciting attractions are between two opposites that never meet.” — Andy Warhol

EYCI4D4F Diagram 1


Introduction

In this essay, we describe the ideas which led us to participate in the “Enlarge Your Consciousness in 4 Days 4 Free” project, together with Mezzapelle-Deriu.

The mutation of the human being in contemporary times is characterized by drastic speed and by powerful, ubiquitous effects, which are transforming not only ourselves, but also the form and function of the whole planet, including the ways in which we learn, communicate, relate, work, think. And the ways in which we experience emotions.

Art is Open Source has often dealt with the theories and practices of human emotion.

Emotions are not “action”, yet they are the energy that creates action. They are the tool/effect through which we experience the world and with which we decide to take action, and in which direction.

And emotions have profoundly changed in the last decade or so, due to our renewed experience of the world, our re-built perception of space and time, our re-created ways of establishing presence, identity, relations, collaborations. Due to the digital membrane which has been covering our whole planet and which is now becoming indistinguishable from the rest of the planet itself.

Multiple types of discussion can be started while engaging these issues: from the most futuristic to the most critical. Incredibly positive scenarios are perfectly matched by horrible ones.

In this never-ending struggle between adoption and critique, we choose the way of Nature. The way of Nature, in the sense that it is useless and impossible for us human beings to “decide” what is natural, what is unnatural, what is good and what is bad. What we can do, as free human beings, is to observe the constant, fluid, continuous mutation which we experience, and adopt ethical approaches in making our own decisions.

Human beings, the planet and Nature, change, mutate. This mutation includes all the technologies, networks, dangers and opportunities which we’re currently facing. We can observe, try to gain the best possible understanding of things (from our point of view, determined by personal history, cultural background… ), share knowledge, information and perspectives with people, and act.

Enlarge Your Consciousness in 4 Days 4 Free is about this.

A project through which we reflect on the mutation of human emotions.

Background

The web is increasingly relied upon as a reflection of reality (Bray et al, 2007).

This fact gives rise to great challenges for human beings, who are in a state of great transformation of the ways in which they perceive their identity, privacy, relationships, societies, cities, and in which they perceive their presence and role in the planet.

Every action we perform in our daily lives has measurable effects in terms of digital information: whether we turn the lights on in our living room, buy an apple at the supermarket, use our mobile phone to contact our friends to decide to go see a movie and, in possibly more explicit ways, whenever we study, work and entertain ourselves using one of the multiple internet-aware processes which are progressively more present in our daily routine.

It is possible to recognize the fact that a digital information membrane has covered the totality of our world (Pickles, 2004, Mitchell 2005, Zook & Graham 2007), mutating our perception of the spaces, times and modalities in which we conduct our lives.

It is possible to describe the emergence of novel forms of sensoriality through which we experience the world, deeply connected to digital interactions, technologies and networks, or even externalized onto digital devices (McLuhan, 1964; de Kerckhove, 1997).

Simple experiments allow us to gain awareness of this: a mobile phone call will force us to move through space when network coverage is absent, the phone acting as an additional sense outside the conventional boundaries of our bodies, externalized onto the device, which makes us aware of electromagnetic fields in specific ranges of frequency.

Just as our brains have shown themselves able to mutate and adapt to drastic effects of impairment or damage (Doidge, 2007), we are experiencing deep changes due to this re-structuring of reality, integrating the digital layers of the world into our common perception.

This process has already taken place to a certain degree, as we completely take for granted a series of manifestations of this neo-reality in the tasks we face each day.

Younger generations show distinct transformations in the ways in which they learn, focus, relate, collaborate, work (Turkle, 1995), and in the ways in which they perceive their own identity, privacy, and the definitions of public and private spaces (as in West, Lewis and Currie, 2009; Pearson, 2009; Thompson, 2011; among many others).

The continuous processes through which we simultaneously construct and experience our reality (de Certeau, 1984) see specific effects from these mutations, as our perception digitally changes, and our ways of constructing/interacting with the world progressively adopt digital tools and have digital characteristics.

For example, the idea of recognizing urban environments described by Lynch in 1960 updates to the concept of DigiPlace expressed by Zook and Graham in 2007.

In this, emotions play a crucial role.

In Myers’ definition (2004), emotion involves “physiological arousal, expressive behaviors, and conscious experience”. It is the way in which we relate to the world and the processes which take place in it: it is not action, but the thrust which creates it.

This centrality of emotions has led multiple scholars and practitioners to place the study of emotion at focal points in multiple disciplines, across Neurobiology, Social Sciences, Cognitive Sciences, Psychology, Computer Science, Robotics, Ethnography, Economy, Design, Architecture.

Theory of emotions is crucial in the analysis of organizational processes, design and multiple areas of communication.

Classical research on the Theory of Emotions has produced multiple approaches and classifications, such as those found in Descartes (1649; 1989 edition), Spinoza (1677; 2006 edition), Hobbes (1651; 1976 edition), Plutchik (1980), Ekman (1999) and Prinz (2004), describing evolutionary, social, psychological, dimensional and other types of models, with reference to the general nature of human beings or to the specifics of different cultures around the planet.

EYCI4D4F Diagram 3


Emotions are understood to be the turning point according to which we identify, use and create information.

“The central focus of a unified theory of information behavior is the process by which users adapt to the information environment and make use of it for personal and social purposes. By making this adaptation process explicit, the model reveals how the ubiquitous information environment can be viewed as an affective information environment because all information needs, seeking, reception, and use is processed through emotions.” Diane Nahl, Dania Bilal (2007)

Therefore, emotions are placed at the center of strategies and design processes, as both tools and measures of experience.

Our mutated perception of the world through technologies and networks has changed our emotional approaches as well: the fact that we experience the world and enact our actions using digital tools (or, more generally, using modalities with clearly identifiable digital characteristics, direct or indirect) also shifts our emotional domains online.

Designers have incorporated the affective dimensions of technology to the extent that the expression “emotional design” has become identified in ergonomics as “Kansei Engineering” or “pleasurable engineering” (Green & Jordan, 2002; Grimsaeth, 2005; Jordan, 2000).

According to Don Norman, “the focus of emotional design is to make our lives more pleasurable” (Van Hout, 2004).

Yet the experience of the merged analog-digital reality which emerges from observing the contemporary world is profoundly different from the preceding one.

“When reading fiction or watching a movie we enter the imaginary world even if we remain aware of its imaginary nature. We suspend disbelief and though, on one level, we accept the fictional reality of the characters, on another we recognize that the situation is make-believe. In cyberspace this recognition is often absent.” Aharon Ben-Ze’ev, 2004.

This comment from Ben-Ze’ev describes, in synthesis, the different directions along which the observation of human experience can move.

In the observation of emotions, it is possible to see how human beings adopt a constructivist approach in their experience.

Identity, public/private spaces and privacy are all objects of personal creation, thanks to the characteristics of the media and tools which take part in the process.

The possibility of freely creating digital content and of attaching it to objects and spaces transforms the world into a public, accessible, free read/write platform (Iaconesi, Persico, 2011).

This modality progressively takes hold of our daily lives.

As Turkle (1995) tells us, there are users who are “logged on to one MUD or another for at least forty hours a week. It seems misleading to call what [they do] there playing. [They spend their] time constructing a life that is more expansive than the one [they live] in physical reality.”

 

Enlarge Your Consciousness

Enlarge Your Consciousness in 4 Days 4 Free grabs emotions in real time from social networks and uses them to gain a better understanding of the ways in which human beings have been transformed by digital technologies and networks.

A real-time process has been designed to extract public information from multiple social networks. Specifically, the following social networks are used:

  • Twitter
  • Facebook
  • Flickr
  • FourSquare

Each social network requires specific modalities to be able to read information from it.

For example, Twitter allows the usage of public APIs of multiple types to query its real-time systems and access information that can be freely used in applications and mash-ups, as long as a series of requirements is met (including correct mentioning of sources, presentation details, restrictions on the types of practices allowed on the data, etc.). Using these APIs it is possible to capture, in real time, the content produced by users relative to specific keywords, timeframes, geographical locations, hashtags, etc.
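The polling pattern common to these APIs can be sketched as follows. The endpoint is simulated with a stand-in function, and the cursor mechanism shown here (process only messages newer than the last seen ID) is an assumption for illustration, not the actual API contract.

```python
# Minimal polling sketch (hypothetical endpoint): repeatedly query a
# search service for new messages matching a keyword, keeping a cursor
# so each message is processed only once.
def fetch_messages(query, since_id):
    # Stand-in for a real API call; returns (id, text) pairs newer than since_id.
    sample = [(1, "loving #vivacosenza"), (2, "performance tonight"), (3, "great show")]
    return [(i, t) for i, t in sample if i > since_id]

def poll(query, since_id=0):
    new = fetch_messages(query, since_id)
    if new:
        # advance the cursor past the newest message we have seen
        since_id = max(i for i, _ in new)
    return new, since_id

batch, cursor = poll("#vivacosenza")       # first poll: everything is new
batch2, cursor = poll("#vivacosenza", cursor)  # second poll: nothing new
print(len(batch), len(batch2))  # 3 0
```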

Foursquare offers a similar mechanism, allowing developers to access real-time data using public APIs. Using these techniques it is possible to extract real-time information about the places people visit (check-ins), the information they value or suggest (tips), and other information which can easily be inferred by analyzing the data (for example, a great deal of information can be understood by analyzing the times and time-patterns according to which people access different locations, characterising them as workplaces, entertainment venues, commercial places, etc.).

Flickr also offers extensive support to developers, allowing them to access a variety of APIs which permit multiple types of real-time searches and the creation of wonderful meta-services.

Facebook is the most difficult social network from which to harvest information without breaking any law :) 

While it offers multiple forms of integration with other applications (e.g. the possibility for users to connect their social network presence to online applications, obtaining a variety of different results), Facebook seems oriented toward keeping the possibility of systematically observing societies and communities exclusively for itself.

Luckily, this enforcement is not too strict and, together with our lawyers and with the support of the international developer community (including some people at Facebook itself, who proved very helpful, even after our explicit statement that “we are trying to produce a system which lawfully extracts information about the emotions of online users, for non-commercial goals, in respect of your terms of service, and with the sole objective of producing tools for art and scientific research”), we found a workable approach.

It turns out that by using a combination of the functions offered by the Graph API, together with careful tuning, it is possible to capture, anonymize and process information from Facebook in ways which have proved very useful for our research.
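The exact anonymization procedure is not detailed here; one plausible sketch, assuming a keyed-hash approach, maps each user identifier to a stable pseudonym that cannot be reversed without the secret salt:

```python
# Hypothetical anonymization step: a keyed hash (HMAC-SHA256) turns a
# user ID into a short, stable pseudonym, so activity can be tracked
# across messages without retaining the original identifier.
import hashlib
import hmac

SECRET_SALT = b"keep-this-out-of-version-control"  # assumed secret key

def anonymize(user_id: str) -> str:
    digest = hmac.new(SECRET_SALT, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]  # short stable pseudonym

a = anonymize("facebook:1234567890")
b = anonymize("facebook:1234567890")
print(a == b, a != "facebook:1234567890")  # True True
```

The same input always yields the same pseudonym, which preserves the ability to count per-user activity, while the raw identifier never enters the processing pipeline.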

Several automatic processes have been set up to capture information from the aforementioned social networks using these techniques.

The texts, comments, tips, image/video captions published by online users were anonymized and processed using Natural Language Analysis.

Multiple techniques have been developed and documented for analyzing textual information, so as to extract valuable information from it.

It is now common practice to process texts to extract information about the emotions and issues engaged by user contributions to online discussions, sometimes even identifying the places being discussed when explicit geographical coordinates (from GPS or Assisted GPS) are not provided in the message payload.

In EYCI4D4F we decided to avoid keyword-based analysis, as it often leads to multiple problems:

  • words are often used in multiple ways, causing erroneous interpretations
  • words are invented all the time, even simply through creative spelling
  • human beings are really creative, and tend to express emotions in multiple ways
  • the same words can represent entirely different meanings in two different cultures

Information is processed using Natural Language Analysis, applying techniques derived from existing, highly effective ones, such as those described in the research of Gentile/Lanfranchi/others, Leidner/Lieberman, Quin/Xiao/others and Shi/Barker, mentioned in the references at the bottom of this article.

The processing techniques were prepared using a set of linguistic templates (similar to regular expressions), created in 29 languages, to identify syntactical/structural text patterns which would highlight a user expressing an emotional condition.

This approach allowed us to systematically single out messages expressing emotions, with a high success rate (around 93%).

Using a large vocabulary (this, too, in 29 languages, including around 25,000 elements) of words related to specific emotions, we have been able to classify messages into 16 emotion categories: the 8 base emotions of Robert Plutchik’s classification, each at two intensity levels. In the resulting classification, each message was associated with a weighting parameter describing the degree to which a certain emotion was expressed. Each message could be associated with more than one emotion (in accordance with Plutchik’s classification, which represents complex emotions as linear combinations of base ones).

The results, thus, looked like:

[user XYZ][message KWX][JOY:n1; SURPRISE: n2...]

In this structure:

  • XYZ is an anonymized version of the user identification strings used on social networks
  • KWX is a reference number of the content, to be able to identify user activity and relational activity
  • n1, n2… are numbers from 1 to 1000 describing the intensity with which each emotion has been identified in the message
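A minimal sketch of this two-stage process (template detection followed by vocabulary weighting) might look like the following. The template, vocabulary and weights are invented for illustration; the real system uses templates and vocabularies in 29 languages.

```python
# Sketch of the two-stage emotion classification: a linguistic template
# (regex-like) first detects that an emotion is being expressed, then a
# weighted vocabulary assigns an intensity (1-1000) to a base emotion.
import re

# Hypothetical English template: "I feel/am/was (so|really) <word>"
TEMPLATE = re.compile(r"\bi (?:feel|am|was) (?:so |really )?(\w+)", re.I)

VOCABULARY = {  # word -> (base emotion, intensity 1-1000); illustrative only
    "happy": ("JOY", 800),
    "amazed": ("SURPRISE", 700),
    "scared": ("FEAR", 750),
}

def classify(user, msg_id, text):
    match = TEMPLATE.search(text)
    if not match:
        return None  # no emotional expression detected
    word = match.group(1).lower()
    if word not in VOCABULARY:
        return None  # expression detected but word unknown
    emotion, weight = VOCABULARY[word]
    return (user, msg_id, {emotion: weight})

print(classify("XYZ", "KWX", "I am so happy about tonight!"))
# ('XYZ', 'KWX', {'JOY': 800})
```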

This information was continuously captured from social networks.

A series of services was designed so that they could be periodically queried (polling) to get constant updates on the most recent emotions captured from social networks.

These services were used to pilot a series of information visualizations and a physical installation.

A first visualization was the one shown in the video below:

Here, messages are shown at the top of the screen, together with the color blocks representing the emotions which were found in the message.

As soon as a new message is captured, it is added to the central visualization, and connected through color-coded curves to the blocks representing the single base emotions. If the message expresses complex emotions, more than one connection is made.

At the bottom, a bar graph shows the recent intensities of the base emotions. The values of the bar graph are used in an additive sound synthesis process to generate the ever-changing sounds which could be heard at the exhibit at BTF Gallery in Bologna for the presentation of the project.

Here below is a sample of a few minutes of the generated sounds:

EYCI4D4F generative sounds
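The additive synthesis step described above can be sketched as follows; each base emotion drives the amplitude of one sine partial. The partial frequencies and sample rate are assumptions for illustration, not the ones used at the exhibit.

```python
# Additive synthesis sketch: emotion intensities become the amplitudes
# of sine partials, which are summed sample by sample.
import math

SAMPLE_RATE = 8000  # Hz, assumed
PARTIALS = {"JOY": 220.0, "FEAR": 330.0, "SURPRISE": 440.0}  # Hz, assumed

def synthesize(intensities, seconds=0.5):
    n = int(SAMPLE_RATE * seconds)
    total = sum(intensities.values()) or 1  # normalize so output stays in [-1, 1]
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = sum(intensities.get(e, 0) / total * math.sin(2 * math.pi * f * t)
                for e, f in PARTIALS.items())
        samples.append(s)
    return samples

wave = synthesize({"JOY": 600, "SURPRISE": 200})
print(len(wave))  # 4000
```

As the bar-graph values change, the relative amplitudes of the partials shift, producing a continuously evolving timbre.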

Another visualization can be seen in the following video:

Here each block represents a single emotion, as captured in real time from social networks. In the visualization each block was very small, giving a sense of the enormous amount of data being captured.

Another visualization allowed viewers to understand the sequences of emotions expressed by users:

EYCI4D4F diagram 4


Here three levels showed how one emotion evolved into another for multiple users in the most recent few minutes, effectively showing the trends of complex emotions expressed by individuals.

A further visualization showed the geographical distributions of emotions around the world:

EYCI4D4F diagram 2


 

The information about the most recent emotions received from the harvesting system was transformed into signals which powered the motion of the installation.

In the installation, 16 jellies were created and associated with the two intensity levels of the 8 base emotions in Plutchik’s classification.

EYCI4D4F installation


Each jelly was installed onto a silicone base, and a stepper motor was connected to its bottom, so that it would receive mechanical stimulation from it.

Whenever an emotion was sensed, a signal was sent to the respective motor, thus causing the vibration of the jelly.
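The mapping from a sensed emotion to one of the 16 motors can be sketched like this; the ordering of the base emotions and the intensity threshold are assumptions for illustration.

```python
# Sketch of the addressing implied above: 8 base emotions at two
# intensity levels address 16 jellies/motors.
BASE_EMOTIONS = ["JOY", "TRUST", "FEAR", "SURPRISE",
                 "SADNESS", "DISGUST", "ANGER", "ANTICIPATION"]

def motor_index(emotion, intensity, threshold=500):
    # low-intensity jellies occupy slots 0-7, high-intensity ones 8-15
    level = 1 if intensity >= threshold else 0
    return BASE_EMOTIONS.index(emotion) + 8 * level

print(motor_index("FEAR", 300), motor_index("FEAR", 900))  # 2 10
```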

A video projector mounted on the ceiling of the exhibition space projected onto the jelly the profile image of the user who generated the emotion.

EYCI4D4F installation


The result was a matrix showing, in real time, the expression of emotions on social networks, through a suggestive, poetic physical visualization, also alluding to the variability and instability of human emotions through the quality of the jellies’ motion.

 

Conclusions

One aspect of this project was considered striking by everyone involved: how easy it had been to capture and process all this information from unaware internet users.

The captured information was public, to all effects. Yet the messages publicly expressed on social networks engage important themes, and describe in great detail the approaches each user adopts in confronting news, relationships and multiple subjects, also describing the users’ tastes, likes, dislikes, wishes, desires and, as we have learned, emotions.

This “public intimacy” represents a fundamental issue for research and discussion of the contemporary era, also because it represents the main driver of online service providers’ business models: the possibility to harvest, process, classify and sell this information in multiple ways still represents the biggest money-making methodology which is available to anyone deciding to create a business using technologies and networks.

The modalities according to which this information is captured are also remarkable.

Internet users continuously sign complicated “Terms of Service” agreements when they access online services: these texts are complex and long, and people rarely read them and understand them even less.

While there is a general understanding that the information produced through our behavior is the object of service providers’ business, this notion substantially gets lost within what is perceived as a public, open, transparent set of platforms, in which people perform routine activities without worrying too much about the implications their actions could have.

To highlight these issues, we decided to add a final part to the project.

EYCI4D4F users for sale at 9.99 euros


A set of boxes was designed, each containing the profile of a single, random social network user. 100 boxes of this type were produced, randomly selecting users whose emotions came up while processing data for the visualizations and installation.

Each box contained a link and a QR code, leading to an address at which a small interface showed the profile image of the user (without showing any other data which could be used to identify him/her) together with the list of their most recent emotions expressed on social networks.

The user was transformed into a sort of social-network-mediated-tamagotchi.

We put the boxes on sale for 9.99 euros.

EYCI4D4F users on sale


 

Users on sale for 9.99 euros. Business as usual. 

 

REFERENCES

 

  • Ben-Ze’ev, A. (2004) Love online: emotions on the internet. Cambridge: Cambridge University Press.
  • Bray, D. A., Chidambaram, L., Epstein, M., Hill, T., Thomas, D., Venkatsubramanyan, S., Watson, R. T. (2007). The Web as a Digital Reflection of Reality. Communications of the Association for Information Systems, Vol. 18, No. 28. Available at SSRN: http://ssrn.com/abstract=961088
  • de Certeau, M. (1984). The Practice of Everyday Life. Berkeley: University of California Press.
  • de Kerckhove, D. (1997). The Skin of Culture: investigating the new electronic reality. UK: Kogan Page Publishers.
  • de Spinoza, B. (2006). The Ethics. Fairford: Echo Library.
  • Descartes, R. (1989). The passions of the soul. Indianapolis: Hackett Publishing.
  • Doidge, N. (2007). The Brain that Changes Itself: Stories of Personal Triumph from the frontiers of brain science. USA: Viking.
  • Ekman, P. (1999). Handbook of Cognition and Emotion. Sussex: John Wiley & Sons.
  • Gentile, L., Lanfranchi, V., Mazumdar, S., Ciravegna, F. (2011). Extracting Semantic User Networks from Informal Communication Exchanges, in The Semantic Web. ISWC 2011, Lecture Notes in Computer Science, Volume 7031/2011, pp. 209-224. New York: Springer Link.
  • Green, W. S., Jordan, P. W. (Eds.) (2002). Pleasure with products: Beyond usability. New York: Taylor & Francis.
  • Grimsaeth, K. (2005). Kansei engineering: Linking emotions and product features. Accessed December 15, 2011, from http://www.ivt.ntnu.no/ipd/fag/PD9/2005/artikler/PD9%20Kansei%20Engineering%20K_Grimsath.pdf
  • Hobbes, T. (1976). Leviathan. Forgotten Books. Accessed January 12, 2012, from http://www.forgottenbooks.org/info/9781605069777
  • Iaconesi, S. , Persico, O. (2011). RWR Read/Write Reality Vol. 1. Rome: FakePress Publishing.
  • Jordan, P.W. (2000). Designing pleasurable products: An introduction to the new human factors. Philadelphia: Taylor & Francis.
  • Leidner, J. L., Lieberman, M. D. (2011). Detecting geographical references in the form of place names and associated spatial natural language. SIGSPATIAL Special, Newsletter, Special Issue, Volume 3, Issue 2, pp. 5-11. New York: ACM.
  • Lynch, K. (1960). The image of the city. Cambridge Mass., USA: MIT Press.
  • McLuhan, M. (1964). Understanding Media: the extensions of Man. Canada: McGraw-Hill.
  • Mitchell, W. (2005). Placing words: symbols, space, and the city. Cambridge Mass., USA: MIT Press.
  • Myers, David G. (2004). Theories of Emotion. Psychology: Seventh Edition. New York, NY: Worth Publishers
  • Nahl, D., Bilal, D. (2007). Information and emotion: the emergent affective paradigm in information behavior research and theory. Medford, New Jersey: Information Today, Inc.
  • Pearson, E. (2009). All the World Wide Web’s a stage: The performance of identity in online social networks. First Monday, Vol. 14, N. 3. Chicago: University of Illinois Press.
  • Pickles, J. (2004). A history of spaces: cartographic reason, mapping, and the geo-coded world. New York, USA: Routledge.
  • Plutchik, R. (1980). Emotion: Theory, research and experience: Vol. 1. Theories of emotion. New York: Academic Press.
  • Prinz, J. (2004). Gut Reactions: A Perceptual Theory of Emotion. Oxford: Oxford University Press.
  • Quin, T., Xiao, R., Fang, L., Xie, X., Zhang, L. (2010). An efficient location extraction algorithm by leveraging web contextual information. GIS ’10 Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, pp. 53-60. 2010. New York: ACM.
  • Shi, G., Barker, K. (2011). Thematic data extraction from Web for GIS and applications. Spatial Data Mining and Geographical Knowledge Services (ICSDM), 2011 IEEE International Conference on, Proceedings, pp. 273-278. Fuzhou: IEEE.
  • Thompson, J. B. (2011). Shifting Boundaries of Public and Private Life. Theory, Culture & Society, Vol. 28, N. 4. Cambridge, UK: Sage.
  • Turkle, S. (1995). Life on the screen: identity in the age of the Internet. New York: Simon & Schuster.
  • Van Hout, M. (2004). Getting emotional with… Donald Norman. Design & Emotion. Accessed December 15, 2011, from www.design-emotion.com/2004/12/15/getting-emotional-with-donald-norman
  • West, A., Lewis, J., Currie, P. (2009). Students’ Facebook ‘friends’: public and private spheres. New York: Taylor & Francis.
  • Zook, M., Graham, M. (2007). From Cyberspace to DigiPlace: Visibility in an Age of Information and Mobility. In H. J. Miller (Ed.), Societies and Cities in the Age of Instant Access. London, UK: Springer.
  • Zook, M., Graham, M. (2007). Mapping DigiPlace: Geocoded Internet Data and the Representation of Place. Environment and Planning B: Planning and Design, 34. doi: 10.1068/b3311

AOS and FakePress Publishing: a showcase at The Others, Turin, with Prinp.com

AOS and FakePress Publishing have been invited by Prinp.com to showcase our projects in which the idea of “publishing” mutates through technologies such as augmented reality and ubiquitous devices and networks. These projects reconsider the ways in which human beings perceive their cities and spaces, and the ways in which they live, work, entertain themselves, relate to each other, and perceive their bodies and identities.

We will present some of the most recent and interesting projects by FakePress Publishing, featuring disseminated books, global publications, ubiquitous conversations and literary works in which thousands of authors contribute in real time from all over the planet.

The presentation will take place at the “The Others” art fair, during Turin’s art week, on February 4th at 10pm (but please check the Fair’s website for updated information on location and times).

Prinp.com is an innovative service dedicated to expanding the possibilities for artists, galleries, institutions and other organizations to produce high-quality art books.

Wearing Emotions by FakePress, presented at the IV10 conference in London, July 2010

The video shows the presentation of the paper titled “Wearing Emotions: Physical representation and visualization of human emotions using wearable technologies”, presented at the IV10 (Information Visualisation 2010) conference at London South Bank University, on July 26th, 2010.

The paper and presentation describe the research, design and implementation of wearable devices able to display human emotions – be they individual, group or global – on physical bodies.
The devices created in the process have been used in three artistic performances, serving both as proofs of concept and as innovative forms of artistic and aesthetic expression: the Talkers performance, OneAvatar and Conference Biofeedback.

The slides from the presentation can be found here:
http://www.slideshare.net/xdxd/wearing-emotions-sifppresentation

The video can be found here:

http://vimeo.com/13779500

or here:

http://www.archive.org/details/WearingEmotionsByFakepressPresentedAtTheIv10ConferenceInLondonJuly

On Art is Open Source:

http://www.artisopensource.net/2010/07/31/wearing-emotions-by-fakepress-presented-at-the-iv10-conference-in-london-july-2010/

OneAvatar, wearable technologies connecting body and avatar

Reference links:

the IV10 conference website:
http://www.graphicslink.co.uk/IV10/

Art is Open Source:
http://www.artisopensource.net/

FakePress:
http://www.fakepress.it/

Talkers Performance:
http://www.artisopensource.net/talkers/

OneAvatar:
http://www.artisopensource.net/OneAvatar/

Conference Biofeedback:
http://www.flickr.com/photos/xdxd_vs_xdxd/sets/72157622816765253/