The Computational Turn

Posted by PH on March 17, 2010
Digital Literacy, Education, Narrative, Visual Culture

On Tuesday 9th March I attended the Computational Turn conference at Swansea University. Very good it was too: a wide range of speakers, packed into a single day, each bringing a different approach to the main theme. Some of the papers were very challenging, and—whilst not all were of particular interest to me—many shone light into areas I had barely perceived previously, let alone considered in any deliberate way. The highlights of the day were its two keynote speakers: N. Katherine Hayles opening the conference and Lev Manovich closing it.

N. Katherine Hayles


Hayles outlined the rationale for the “computational turn.” She began by asking how many books we could read in a lifetime. If we read one a day between the ages of 15 and 85, that turns out to be 25,550. Not many compared to the total number of books available. The question becomes, what if we could analyze a whole corpus of books—all the books ever written on WWII, say, or all the books written about Aristotle—using computers? What would this type of mass analysis reveal?
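Her back-of-the-envelope figure is easy to verify:

```python
# One book a day from one's 15th birthday to one's 85th: 70 years of reading.
# (Leap days are ignored, as the round figure in the talk evidently does.)
books_per_lifetime = (85 - 15) * 365
print(books_per_lifetime)  # 25550
```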

Of course the next question would have to be, an analysis on what basis? Computers can’t “read” in the same way humans can. They may be able to detect patterns in the data—frequency, repetition, structure—but that is a far cry from the type of hermeneutic interpretation that humans are so good at. Quoting Tim Lenoir, she suggested that we “forget meaning and follow the data streams.” Starting with meaning always embodies too many assumptions: if we start with the analytics, we can work out what it all means later. She then went on to illustrate her thesis by showing the initial results of her computational analysis of Danielewski’s Only Revolutions.

The Q&A session ranged across a wide range of topics, all of which Hayles dealt with expertly:

  • Nigel Thrift’s “technological unconscious” was discussed: the observation that the technologies we use embed assumptions and limitations that go largely unnoticed and unseen. (An idea that seems very close to McLuhan’s theories about media.)
  • There was talk of the “adaptive unconscious,” which posits a mind that is effectively a type of internal distributed network where the unconscious is not a Freudian dark place but an active participant in cognition and decision-making.
  • There was talk of the “Baldwin Effect,” an elaboration on evolutionary theory which suggests that specific inherited traits are emphasized by cultural behaviour.
  • Finally, Hayles talked of culture moving from a deep-attention mode (related to print) into a hyper-attention mode (related to electronic media).

All heady stuff. How some of these issues relate to the computational turn I’m not quite sure, but the whole session was never less than stimulating.

Lev Manovich


Lev Manovich’s talk was mainly concerned with his projects, all of which involve visualizations of large bodies of visual data: one million Manga pages, all 4,553 Time magazine covers, Vertov movies, the way saturation changes over time in modern painting. He also showed off the Cultural Analytics software his Software Studies Initiative has been developing. Here’s one of his Manga visualizations (stolen from his CultureVis photostream):

Visualization of 50,000 Manga pages


The accompanying text reads:

X axis: Grey scale standard deviation (measured per page)
Y axis: Entropy (measured per page)

This visualization shows how cultural analytics approach allows us to map continuous style space of a cultural data set. In the current visualization, the pages which have more contrast appear on the right; the pages which have no grey tones but only black and white are on the bottom right; and the pages which have a full range of grey tone (and thus more “realism”) on the top. Every page in the dataset is situated in the space defined by these extremes.
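For readers curious how such axes might be computed, here is a minimal sketch (not Manovich’s actual Cultural Analytics code, and the function name is my own) of the two per-page measures the caption names: grey-scale standard deviation and Shannon entropy of the intensity histogram.

```python
import numpy as np

def page_features(gray):
    """Compute the caption's two axes for one page.

    gray: 2-D array of pixel intensities in [0, 255].
    Returns (std_dev, entropy).
    """
    gray = np.asarray(gray, dtype=np.float64)
    # X axis: grey-scale standard deviation, measured per page.
    std_dev = gray.std()
    # Y axis: Shannon entropy of the 256-bin intensity histogram.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 = 0)
    entropy = -(p * np.log2(p)).sum()
    return std_dev, entropy

# A pure black-and-white page gives low entropy (bottom of the map);
# a page using the full range of grey tones gives high entropy (top).
bw_page = np.array([[0, 255], [255, 0]])
ramp_page = np.arange(256).reshape(16, 16)
```

With only two intensity values in equal proportion, `bw_page` has an entropy of exactly 1 bit, while the 256-tone `ramp_page` reaches the maximum of 8 bits—matching the caption’s placement of stark black-and-white pages at the bottom and full-grey-range pages at the top.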

Here’s another example (from here) showing a subset of the Time magazine covers mapped out in the Cultural Analytics software:

Time Magazine covers


The accompanying text reads:

Exploring a set of 450 Time covers (sampled from the complete set of 4553 covers 1923-2009 by taking every 10th image). Mousing over points reveals larger images and metadata.

I’ve only really presented here the bookends of the Computational Turn conference. There was much else of value, some of which I intend to follow up in my own work. A special thanks must go to Dr. David Berry for organizing the conference, for attracting such marvellous speakers to Swansea, and for the invitation. Thanks also to Sian Rees for coordinating the event and for providing such a warm welcome.

