When documentary makers and others talk about “data telling a story” they’re usually referring to data visualizations or data exploration tools like the one in Watch Dogs. But with the rise of ubiquitous computing creating a range of initiatives – smart cities (also called connected cities), connected homes, wearables, the Internet of Things (IoT) – the world is turning into a mesh of inputs and outputs that creates a programmable transmedia storytelling layer.
One way to imagine this is to think of a single computer image that we know is made up of millions of pixels. Now imagine that the world is the image and each pixel is represented by a computing device. Just as each pixel is individually addressable and must be changed in coordination with the other pixels to present a new image, so each connected computing device can be changed to create a new reality.
Pixel stands for “picture element”. I’m going to use stel to stand for “storytelling element”: any addressable digital technology that might send or receive data for the purpose of communicating a coherent story.
Usually in transmedia storytelling we talk about channels (video, audio, image), platforms (YouTube, Spotify, Flickr) and formats (pervasive game, treasure hunt, immersive theatre). To be specific about the “stel”, we could say that it’s a single multi-channel device but on its own not a platform – stels must be meshed together to create an addressable platform which, for the sake of this blog post, we could unimaginatively call StelNet.
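To make the pixel analogy concrete, here’s a minimal sketch of how a mesh of individually addressable stels might look in code. Everything here is invented for illustration – the `Stel` class, its methods, and the cue names are assumptions, not any real StelNet or Conducttr API:

```python
# Hypothetical sketch: each "stel" is an individually addressable device,
# and StelNet is the mesh that coordinates them, just as an image
# coordinates millions of pixels.

class Stel:
    """One storytelling element: an addressable multi-channel device."""
    def __init__(self, address):
        self.address = address
        self.cue = None  # e.g. a color, sound, or vibration pattern


class StelNet:
    """The mesh: stels only become a platform once addressable together."""
    def __init__(self, addresses):
        self.stels = {a: Stel(a) for a in addresses}

    def set_cue(self, address, cue):
        self.stels[address].cue = cue      # change one "pixel"

    def broadcast(self, cue):
        for stel in self.stels.values():   # redraw the whole "image"
            stel.cue = cue


net = StelNet(range(1000))       # the fixed-location devices across London
net.broadcast("calm-blue")       # a city-wide story beat
net.set_cue(42, "alert-red")     # one device changed in isolation
```

The point of the sketch is the addressing: just as a new image is produced by changing pixels in coordination, a new “reality” is produced by changing device cues in coordination.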
In this example, StelNet represents 1000 fixed location multi-channel devices across London and 500,000 mobile stels worn by participating, opt-in audience members. The mobile stels range from Fitbits and Android watches to purpose-made StelWear for those who’ve signed up for maximum immersion.
StelNet is capable of outwardly communicating mood to citizens through color, sound and image and inwardly communicating the mood of citizens through noise level & frequency, traffic flow, air pollution, weather conditions, size of congregations outside pubs and public spaces, personal biometrics and of course location-filtered sentiment on social media. Using an invisible, coordinating, storytelling intelligence (such as Conducttr) the experience designer can tell broadcast and personalized stories across the mesh of devices.
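The coordinating intelligence described above could be sketched as a simple loop: read the inward-facing signals, collapse them into a city mood, then push outward cues that are either broadcast or personalized. This is a toy illustration under my own assumptions – the function names, sensor readings, and threshold are invented and bear no relation to Conducttr’s actual product:

```python
# Hypothetical sketch of a coordinating storytelling intelligence.
# Readings are normalized 0..1 (0 = negative signal, 1 = positive).

def city_mood(readings):
    """Collapse sensor readings into a single mood label."""
    score = sum(readings.values()) / len(readings)
    return "upbeat" if score >= 0.5 else "gloomy"

def tell_story(devices, readings, personal_cues):
    """Send each device a personalized beat if one exists for its owner,
    otherwise the broadcast city-mood story."""
    mood = city_mood(readings)
    for device in devices:
        device["cue"] = personal_cues.get(device["owner"],
                                          f"city-mood:{mood}")
    return mood

devices = [
    {"id": "bus-stop-42", "owner": None, "cue": None},    # fixed, public
    {"id": "wristband-7", "owner": "john", "cue": None},  # mobile, personal
]
readings = {"noise": 0.7, "traffic": 0.4, "sentiment": 0.8}

mood = tell_story(devices, readings, {"john": "supplies-arrived"})
```

Here the bus stop receives the broadcast mood while John’s wristband gets his personal story beat – the same split between city-wide and individual storytelling the paragraph above describes.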
John, for example, plays Gratitude World 5 (GWV), a long-running sci-fi strategy game in which he must build a self-sustaining community on a space station orbiting the earth while repelling aliens who try to build bases across London. The aliens feed on negative energy and are hence hyperactive during bad weather, traffic snarl-ups and reports of local council corruption. The GWV dashboard presents easy-to-digest game-related information that’s been gathered from StelNet, allowing John to make intelligent decisions about when to plan cargo shipments, personnel transfers, etc.
On his way to work, StelNet signals to John the status of his earlier decisions and the progress of the game. Wrist vibrations indicate the successful arrival of new supplies, and the blue StelNet lights at bus stops show that for now the combat situation is under control. A nearby digital display, paid for by advertising, can be swiped to gain 90-second access to the community channel that shows John how his mood compares to the city as a whole. He’s found that smiling more and nodding to strangers helps his mood but also has a knock-on effect in raising the mood of the city. He needs more people to feel good about themselves today to prevent aliens re-establishing a command post near his fictional recruitment site.
By combining imagination with well-timed cues across a city of connected devices, many people will soon be living in multiple alternative realities.
Come discuss these ideas and more at the Conducttr Conference, Oct 17th in London, UK