About three weeks ago now—yeah, I know, but I’ve been busy—Chris Crawford delivered a ‘Masterclass On Interactivity’ at Swansea Metropolitan University.
Chris began with a light-hearted look back at the history of computing and, simultaneously, back over his career. Whilst offering a gentle introduction to the presentation and a chance to get to know him, the opening section did make one major point: that interactivity is what defines modern computing and, by extension, new media in general. The computer is an interaction machine.
Having set out his stall Chris went on to discuss the concept of interactivity. Firstly he said that the best example of interactivity—to which all machine interactions strive—was a human conversation: real-time, using all our senses, pure improvisation. From this observation he derives what I think is the best definition of interactivity I’ve come across: interactivity occurs when computer and user alternately listen, think, and speak.
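Chris's definition can be sketched as a loop. The code below is my own illustration, not anything Chris showed: the hypothetical `think` function stands in for whatever processing the machine does between listening and speaking.

```python
def converse(utterances, think):
    """Run one listen-think-speak cycle per user utterance.

    `utterances` stands in for the user's side of the conversation;
    `think` is any function mapping what was heard to a reply.
    """
    replies = []
    for heard in utterances:       # the machine listens
        thought = think(heard)     # the machine thinks
        replies.append(thought)    # the machine speaks
    return replies

# A deliberately dumb `think` step -- the weakest link in the chain:
echo = converse(["hello", "how are you?"],
                lambda heard: f"You said: {heard}")
```

The quality of the whole exchange is bounded by the weakest of the three steps, which is the point Chris develops next.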
The quality of the interaction is defined by the weakest element in that chain. For example, modern computer games are very good at ‘speaking to us’—they look fabulous and they sound fabulous—but they’re not so good at thinking: very often the characters or the basic game AI are actually pretty dumb. Call of Duty: Modern Warfare 2 is a perfect example.
Computers are also not very good at ‘listening’ to us. Interaction with a computer is usually limited to a surprisingly small range of gestures and actions: pointing, clicking, dragging, etc. Whilst multi-touch and gestural interfaces are widening that vocabulary, it remains very limited compared to what is possible with natural language. Chris suggested a Linguistic User Interface as being the future, in turn paving the way for the social aspects of interaction (and, by extension, the social aspects of gaming, evolving into what he calls “interactive storytelling”).
Although computers are good at ‘thinking’, Chris argued that the main limitation of computing was that it currently only used a small number of the “mental modules” we possess, the main ones being spatial reasoning, hand/eye coordination, resource management, and problem-solving. Crucially, our all-important social reasoning module was not challenged at all.
Summing up the first half of the presentation, Chris suggested that our current generation of computer games have developed as far as they can go, and that a separate industry will emerge exploiting the social aspect of the technologies.
After lunch Chris began with the human predilection for describing experience in terms of things rather than as a system of processes (nouns rather than verbs, data rather than algorithms). Interactivity is communication through process. He went on to talk about interactive storytelling environments where each use generates a new narrative instance, as opposed to our current paradigm where stories are fixed within a medium (novels, comics, films, TV programmes). Chris argued that these interactive stories—hypernarratives—would never achieve the polish of the story fixed within its medium, but that they would have much greater emotional impact because of their personal, individually generated, meaning.
For the final section of the afternoon, Chris talked about what the requirements were for the designer of these new interactive storytelling environments. This was Chris at his most overtly evangelical, throwing wide the doors of learning and revealing an endless landscape for exploration and discovery. Using Erasmus as an example, he very cleverly and humorously showed how little information there was on the Internet compared to that encoded in books. He showed how you could use equations that describe natural processes to model human interaction (for example, human attraction and repulsion convincingly modelled using spring compression equations). He tried to get us to look at the processes underlying the world we live in, not its surface features.
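Chris didn’t show code, but the spring idea is easy to sketch. In this illustration (my own, not his) the ‘emotional distance’ between two characters is treated as a spring with a preferred rest length: the spring pulls them together when they drift too far apart and pushes them apart when they get too close.

```python
def spring_force(distance, rest_length, stiffness=1.0):
    """Hooke's-law force on the relationship 'spring'.

    Positive -> attraction (the characters are farther apart than
    they 'want' to be); negative -> repulsion (too close for comfort).
    """
    return stiffness * (distance - rest_length)

def step(distance, rest_length, stiffness=1.0, dt=0.1):
    """Move the pair one small time-step along the force."""
    return distance - dt * spring_force(distance, rest_length, stiffness)

# Two characters who prefer a social distance of 2.0 units,
# starting out as strangers at distance 5.0:
d = 5.0
for _ in range(100):
    d = step(d, rest_length=2.0)
```

Run long enough, the distance settles at the rest length: a natural process standing in, quite convincingly, for a social one.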
Inevitably I have only offered a very brief overview of the contents of Chris’s Masterclass On Interactivity. The presentation was funny, inspirational, thought-provoking, and very, very smart. Though Chris spoke for about five hours, there was barely a moment that was less than engaging, and the whole audience was gripped throughout. As much as anything else, it was a masterclass on giving a presentation.
Thanks Chris. A privilege.
Jim who? Jim Reekes is the gentleman who designed many of the Mac system sounds, including that start-up sound. This is a recent interview with him (posted February 11th of this year) from a Dutch TV programme called One More Thing:
A fascinating and amusing insight into the machinations of corporate culture. Interesting also to associate a personality to the sounds our machines make and which we inevitably take for granted.
[Via Create Digital Music]
On Tuesday 9th March I attended the Computational Turn conference at Swansea University. Very good it was too, with a wide range of speakers packed into a single day, each taking a different approach to the main theme. Some of the papers were very challenging, and—whilst not all were of particular interest to me—many shone light into areas I had barely perceived previously, let alone considered in any deliberate way. The highlights were the day’s two keynote speakers: N. Katherine Hayles opening the conference and Lev Manovich closing it.
Hayles outlined the rationale for the “computational turn.” She began by asking how many books we could read in a lifetime. If we read one a day between the ages of 15 and 85, that turns out to be 25,550. Not many compared to the total number of books available. The question becomes: what if we could analyze a whole corpus of books—all the books ever written on WWII, say, or all the books written about Aristotle—using computers? What would this type of mass analysis reveal?
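The arithmetic behind Hayles’s figure is straightforward (ignoring leap days):

```python
# One book a day from age 15 to age 85: 70 years of daily reading.
books_in_a_lifetime = 70 * 365
```

Seventy years of uninterrupted daily reading yields 25,550 books, which is the scale against which any whole-corpus analysis has to be judged.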
Of course the next question would have to be, an analysis on what basis? Computers can’t “read” in the same way humans can. They may be able to detect patterns in the data—frequency, repetition, structure—but that is a far cry from the type of hermeneutic interpretation that humans are so good at. Quoting Tim Lenoir, she suggests that we “forget meaning and follow the data streams.” Starting with meaning always embodies too many assumptions: if we start with the analytics we can work out what it all means later. She then went on to illustrate her thesis by showing the initial results of her computational analysis of Danielewski’s Only Revolutions.
The Q&A session ranged across a wide range of topics, all of which Hayles dealt with expertly:
- Nigel Thrift’s “technological unconscious” was discussed, the observation that assumptions and limitations are embedded within the technologies we use which are largely unnoticed and unseen. (An idea that seems very close to McLuhan’s theories about media.)
- There was talk of the “adaptive unconscious,” which posits a mind that is effectively a type of internal distributed network where the unconscious is not a Freudian dark place but an active participant in cognition and decision-making.
- There was talk of the “Baldwin Effect,” an elaboration on evolutionary theory which suggests that specific inherited traits are emphasized by cultural behaviour.
- Finally, Hayles talked of culture moving from a deep-attention mode (related to print) into a hyper-attention mode (related to electronic media).
All heady stuff. How some of these issues relate to the computational turn I’m not quite sure, but the whole session was never less than stimulating.
Lev Manovich’s talk was mainly concerned with his projects, all of which are related to visualizations of large bodies of visual data: one million Manga pages, all 3480 Time magazine covers, Vertov movies, the way saturation changes over time in modern painting. He also showed off the Cultural Analytics software his Software Studies initiative has been developing. Here’s one of his Manga visualizations (stolen from his CultureVis photostream):
The accompanying text reads:
X axis: Grey scale standard deviation (measured per page)
Y axis: Entropy (measured per page)
This visualization shows how cultural analytics approach allows us to map continuous style space of a cultural data set. In the current visualization, the pages which have more contrast appear on the right; the pages which have no grey tones but only black and white are on the bottom right; and the pages which have a full range of grey tone (and thus more “realism” ) on the top. Every page in the dataset is situated in the space defined by these extremes.
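Manovich’s exact pipeline isn’t given, but both axis measurements are standard image statistics, and a sketch makes the plot’s geometry concrete. Here each page is assumed to arrive as a 2-D array of greyscale values in the 0–255 range; the function names are my own.

```python
import numpy as np

def page_features(page):
    """Map a greyscale page (2-D array of 0-255 values) to the two
    axes of the visualization: (standard deviation, entropy)."""
    page = np.asarray(page, dtype=float)
    std = page.std()                        # X axis: tonal contrast
    counts, _ = np.histogram(page, bins=256, range=(0, 256))
    p = counts / counts.sum()               # grey-level probabilities
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()       # Y axis: tonal richness
    return std, entropy

# A pure black-and-white page (bottom right of the plot): high
# contrast, but at most 1 bit of entropy ...
bw = np.random.default_rng(0).choice([0, 255], size=(64, 64))
# ... versus a page using the full range of grey tones (top of
# the plot): up to 8 bits of entropy.
grey = np.random.default_rng(0).integers(0, 256, size=(64, 64))
```

Pages with only black and white pixels score high on the X axis and low on the Y axis, exactly the bottom-right corner the caption describes, while full-greyscale pages climb toward the top.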
Here’s another example (from here) showing a subset of the Time magazine covers mapped out in the Cultural Analytics software:
The accompanying text reads:
Exploring a set of 450 Time covers (sampled from the complete set of 4553 covers 1923-2009 by taking every 10th image). Mousing over points reveals larger images and metadata.
I’ve only really presented here the bookends of the Computational Turn conference. There was much else of value, some of which I intend to follow up in my own work. A special thanks must go to Dr. David Berry for organizing the conference, for attracting such marvellous speakers to Swansea, and for the invitation. Thanks also to Sian Rees for coordinating the event and for providing such a warm welcome.
Craig Mod has just published a thoughtful, insightful, and beautifully-presented essay on the future of books in the digital era, using the emergence of devices like the Kindle and the iPad as his focus:
In printed books, the two-page spread was our canvas. It’s easy to think similarly about the iPad. Let’s not. The canvas of the iPad must be considered in a way that acknowledges the physical boundaries of the device, while also embracing the effective limitlessness of space just beyond those edges.
We’re going to see new forms of storytelling emerge from this canvas. This is an opportunity to redefine modes of conversation between reader and content. And that’s one hell of an opportunity if making content is your thing.
This essay could usefully be cross-referenced with Part 2 of Scott McCloud’s Reinventing Comics from 2000. In other words, some of what’s on offer here is not that new. However, the distinction between Formless and Definite Content is new (to me, at least) and provides a convincing armature around which the essay revolves. And if you need convincing about the inevitability of the move away from printed matter, here it is.
An excellent piece of work, highly recommended. The page must die!
Notes from a presentation made by Bruce Sterling on 6th February 2010 at the Transmediale Futurity Now! festival in Berlin. The theme is “atemporality,” the sense that new media has moved us beyond modernism, beyond postmodernism, beyond all the “grand narratives” of traditional historical discourse. Sterling asks how we survive in this new environment and offers a range of never-less-than interesting and stimulating strategies for designers, artists, and academics. Here are a couple of taster quotes:
1) The Frankenstein Mashup (aka sampling, collage, bricolage):
So how do we just — like — sound out our new scene? What can we do to liven things up, especially as creative artists? Well, the immediate impulse is going to be the Frankenstein Mashup. Because that’s the native expression of network culture. The Frankenstein Mashup is to just take elements of past, present, and future and just collide ‘em together, in sort of a collage. More or less semi-randomly, like a Surrealist “exquisite corpse.” You can do useful and interesting things in that way, but I don’t really think that offers us a great deal. Even when it’s done very deftly, it tends to lead to the kind of levelling blandness of “World Music.” That kind of world music that’s middle-of-the-road disco music which includes pygmy nose-flutes or sitars. This kind of thing is tragically easy to do, but not really very effective. It’s cheap to do. It’s very punk rock. It’s very safety pins and plastic bags. But it’s missing a philosophical high-end…
2) Generative Art:
Then there are other elements which are native to our period that didn’t really work before, such as generative art. I take generative art quite seriously. I’d like to see it move into areas like generative law, or maybe generative philosophy. The thing I like about generative art is that it drains human intentionality out of the art project. Say, in generative manufacturing, you are writing code for a computer fabricator, and you yourself don’t know the outcome of this code. You do not know how it will physically manifest itself. Therefore you end up with creative objects that are bleached of human intent. Now there is tremendous artistic intent — within the software. But the software is not visible in the finished generative product. To me, it’s of great interest that these objects and designs and animations and so forth now exist among us. Because they are, in a strange way, divorced from any kind of historical ideology. They are just not human.
3) Gothic High-Tech vs Favela Chic:
We are in a period which I think is dominated by two great cultural signifiers. An analog system that belonged to our parents, which has been shot full of holes. It is the symbol of the ruined castle. Gothic High-Tech. The ruins of the unsustainable. And the other symbol is the favela slum, Favela Chic, the informalized, illegalized, heavily networked structure of the emergent new order. The things that the twenty first century is doing that are genuinely novel, that have not been domesticated or brought into sociality. The Gothic High-Tech and the Favela Chic. These are very obvious to me, as a novelist and creative artist. Perhaps you won’t see things this way — but I think the life-span of this will be about ten years. A new generation will arise who does not need things explained to them in this way. They will not wonder at a slogan like “Futurity Now!” because they will have never known anything different.
[Video originally included here has been taken down.]
The defining element of the desktop GUI is the icon, which, although it often has a name, is above all a picture that performs or receives an action. These actions give the icon its meaning. As elements in a true picture writing, icons do not merely remind the user of documents and programs, but function as documents and programs. Reorganizing files and activating programs is writing, just as putting alphabetic characters in a row is writing. Rather like the religious relics after which they are named, computer icons are energy units that focus the operative power of the machine into visible and manipulable symbols. Computer icons also remind us of the cultural functions of Hebrew letters in the Cabala or of alchemical and other signs invoked by such Renaissance magi as Giordano Bruno. Magic letters and signs were often objects of meditation, as they were in the logical diagrams of the medieval Raymond Llul, and they were also believed to have operational powers. As functioning representations in computer writing, electronic icons realize what magic signs in the past could only suggest.
Jay David Bolter
What exactly is an interface anyway? In its simplest sense, the word refers to software that shapes the interaction between user and computer. The interface serves as a kind of translator, mediating between the two parties, making one sensible to the other. In other words, the relationship governed by the interface is a semantic one, characterized by meaning and expression rather than physical force. Digital computers are “literary machines,” as hypertext guru Ted Nelson calls them. They work with signs and symbols, although this language, in its most elemental form, is almost impossible to understand. A computer thinks—if thinking is the right word for it—in tiny pulses of electricity, representing either an “on” or an “off” state, a zero or a one. Humans think in words, concepts, images, sounds, associations. A computer that does nothing but manipulate sequences of zeros and ones is nothing but an exceptionally inefficient adding machine. For the magic of the digital revolution to take place, a computer must also represent itself to the user, in a language that the user understands.
Representing all that information is going to require a new visual language, as complex and meaningful as the great metropolitan narratives of the nineteenth-century novel.
Put simply, the importance of interface design revolves around this apparent paradox: we live in a society that is increasingly shaped by events in cyberspace, and yet cyberspace remains, for all practical purposes, invisible, outside our perceptual grasp. Our only access to this parallel universe of zeros and ones runs through the conduit of the computer interface, which means that the most dynamic and innovative region of the modern world reveals itself only through the anonymous middlemen of interface design.
[Quote adapted from Johnson, S. (1997) Interface Culture. Harper Collins (pp.14-19).]
When reading a book or even a sentence, there is a beginning step. A book and a sentence both have a beginning that is formally denoted. There is a middle, and, hopefully, there is a solution to a problem that is posed. The reader is recognizing symbols and making associations. The reader controls the pacing, the level of participation, and the dwell-time. But, essentially, the parts that interest the reader are the symbols and finding the solution to the problem: that is, making meaning.
Launching an application follows the same steps as reading, with the user of the program recognizing symbols for the sake of solving a problem. The user determines the pacing, the level of participation, and the dwell-time, but in the end is only concerned with the symbols and the solution to the problem.
Simply put, running an application is an interactive form of reading.
[Quote adapted from Meadows, M. (2003) Pause & Effect: The Art of Interactive Narrative. New Riders (pp.25-26).]
The emerging sense of a self-directed, self-aware person takes place within the context of symbolic systems that are increasingly only internally referential. Awareness is not of the world but of the systems of mediated representation. An increase in personal knowledge about the world equates with the extension of mind ever deeper into the mediated systems of representation and meaning. Individual choice and personal freedom thus become based on the ability to discriminate between a limited number of elements presented and represented in the mediated world, whether shampoo or political candidates.
I am encouraged to frame my experiences into the shop-worn clichés of a language that drones perpetually through the airwaves and over the broadband connections.
Rather than arising out of local, human experience elaborated though conversations with other people, language now comes prepackaged and reflects not the need of human beings but the values of capital, the machine, and the technological system.