Thanks to some curious emails and a couple of dormant Google Alerts, it's come to my attention that the Travel Time Tube Map I made a few years ago has had a sudden resurgence of internet fame. My original blog post informs me that it's over 5 years old. Wow!
I'm not sure who rediscovered it first, but thanks to everyone who's linked to it so far including Fast Co. Design, Creativity Online, Wired UK, PSFK, Roomthily, Inteloquent, OpenStreetMap and numerous Twitter and Facebook users.
The map has been picked up by a few books and exhibitions over the years, including the wonderful Form + Code by Casey Reas and Chandler McWilliams. If you're interested in how this kind of work gets made then the book is essential.
If you're interested in a more thorough theoretical exploration of isochrones I can recommend Nicholas Street's Time Contours paper on the subject. If you find yourself yearning for an even deeper treatment of transit data, look around for people like Mike Frumin who take research far more seriously than I do!
If you want to play around with this code for yourself, it should be relatively easy to fix up for current versions of Processing (probably just the fonts will need updating; please leave a comment if there's anything else) and you can get the data here.
I've had a few requests to update the map with current data, including the East London Line and Heathrow Terminal 5, as well as suggestions to include the overground in south London and elsewhere. Sadly I haven't found a coherent and consistent data source that I could drop in as a replacement for my hand-edited original. The official Transport for London data sources on data.gov.uk look promising, and I've had a couple of under-the-table offers from people with access to timetable data, but these all require more time and effort than I have for the map at the moment. In future I'd like to move the map to a more 201x format like Canvas or SVG, perhaps porting it to Processing.js. Perhaps an app? One day...
If you're the kind of (mainly 2d) graphics programmer that I am, the thing you find most attractive about Processing is the one-click publishing to make a webpage and show people what you've been doing. Everything else after that is a bonus.
If you're not that kind of programmer, and the web isn't your primary concern, then you should definitely check out LÖVE. It looks like they're having a lot of fun over there, and Lua is just nicely mind-bending enough but still familiar if you're coming from Java or Actionscript.
The first and last time I'll cut and paste a press release on this blog. Casey Reas writes:
We've just posted Processing 1.0 at http://processing.org/download. We're so excited about it, we even took time to write a press release.
CAMBRIDGE, Mass. and LOS ANGELES, Calif. - November 24, 2008 - The Processing project today announced the immediate availability of the Processing 1.0 product family, the highly anticipated release of industry-leading design and development software for virtually every creative workflow. Delivering radical breakthroughs in workflow efficiency - and packed with hundreds of innovative, time-saving features - the new Processing 1.0 product line advances the creative process across print, Web, interactive, film, video and mobile.
Whups! That's not the right one. Here we go:
Today, on November 24, 2008, we launch the 1.0 version of the Processing software. Processing is a programming language, development environment, and online community that since 2001 has promoted software literacy within the visual arts. Initially created to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing quickly developed into a tool for creating finished professional work as well.
Processing is a free, open source alternative to proprietary software tools with expensive licenses, making it accessible to schools and individual students. Its open source status encourages the community participation and collaboration that is vital to Processing's growth. Contributors share programs, contribute code, answer questions in the discussion forum, and build libraries to extend the possibilities of the software. The Processing community has written over seventy libraries to facilitate computer vision, data visualization, music, networking, and electronics.
Students at hundreds of schools around the world use Processing for classes ranging from middle school math education to undergraduate programming courses to graduate fine arts studios.
+ At New York University's graduate ITP program, Processing is taught alongside its sister project Arduino and PHP as part of the foundation course for 100 incoming students each year.
+ At UCLA, undergraduates in the Design | Media Arts program use Processing to learn the concepts and skills needed to imagine the next generation of web sites and video games.
+ At Lincoln Public Schools in Nebraska and the Phoenix Country Day School in Arizona, middle school teachers are experimenting with Processing to supplement traditional algebra and geometry classes.
Tens of thousands of companies, artists, designers, architects, and researchers use Processing to create an incredibly diverse range of projects.
+ Design firms such as Motion Theory provide motion graphics created with Processing for the TV commercials of companies like Nike, Budweiser, and Hewlett-Packard.
+ Bands such as R.E.M., Radiohead, and Modest Mouse have featured animation created with Processing in their music videos.
+ Publications such as the journal Nature, the New York Times, Seed, and Communications of the ACM have commissioned information graphics created with Processing.
+ The artist group HeHe used Processing to produce their award-winning Nuage Vert installation, a large-scale public visualization of pollution levels in Helsinki.
+ The University of Washington's Applied Physics Lab used Processing to create a visualization of a coastal marine ecosystem as a part of the NSF RISE project.
+ The Armstrong Institute for Interactive Media Studies at Miami University uses Processing to build visualization tools and analyze text for digital humanities research.
Processing was founded by Ben Fry and Casey Reas in 2001 while both were John Maeda's students at the MIT Media Lab. Further development has taken place at the Interaction Design Institute Ivrea, Carnegie Mellon University, and UCLA, where Reas is chair of the Department of Design | Media Arts. Miami University, Oblong Industries, and the Rockefeller Foundation have generously contributed funding to the project.
The Cooper-Hewitt National Design Museum (a Smithsonian Institution) included Processing in its National Design Triennial. Works created with Processing were featured prominently in the Design and the Elastic Mind show at the Museum of Modern Art. Numerous design magazines, including Print, Eye, and Creativity, have highlighted the software.
For their work on Processing, Fry and Reas received the 2008 Muriel Cooper Prize from the Design Management Institute. The Processing community was awarded the 2005 Prix Ars Electronica Golden Nica award and the 2005 Interactive Design Prize from the Tokyo Type Director's Club.
The Processing website (www.processing.org) includes tutorials, exhibitions, interviews, a complete reference, and hundreds of software examples. The Discourse forum hosts continuous community discussions and dialog with the developers.
Extremely well done and congratulations to all involved!
Eric is in Minneapolis at the moment talking about our work at the University of Minnesota. The talk has been in the works for a while but nicely coincides with W(e are )here, an exhibition we're participating in, organised by Solutions Twin Cities.
We've prepared a special version of Trulia Hindsight for the show, using the experimental version of Modest Maps I made for Processing in February and animating data for around 1 million homes using OpenGL. We're not ready to distribute the data to a wider audience yet, but here's an example animation from the application:
Since Mike simultaneously outed me and out-did me by linking to the Processing folder of the Modest Maps source, I thought I'd better post a version of the library I've been working on so that I can stop thinking about it for a while.
Modest Maps is a BSD-licensed display and interaction library for tile-based maps in Flash (ActionScript 2.0 and ActionScript 3.0) and Python...
One of the sad things about using the OpenGL rendering option in Processing is the lack of control over line weights, caps and joins. This week I allowed myself to get distracted by this issue for a little bit too long, but I did succeed in solving it, at last:
I've put some code about line caps and joins in Processing & OpenGL up on Processing Hacks in case it's something that bothers you too. I've not tested it extensively so I'd welcome bug fixes and suggestions there, or in the comments here. One thing I'm interested in doing next is extending (or reimplementing) BasicStroke to generate shapes for thick polylines with varying line thickness. If that's something you've done before, please let me know.
I'm using Java's BasicStroke to generate the path outlines, and GLU's tessellators to generate a mesh that will fill correctly in OpenGL. I "borrowed" the code for the latter from Ben Fry's PGraphicsOpenGL font rendering – thanks Ben!
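The BasicStroke half of that trick is easy to try on its own. Here's a minimal sketch (class and method names are my own, not from the Processing Hacks code): stroke a polyline with a chosen weight, cap and join, then flatten the resulting outline into straight segments – those contours are exactly what you'd hand to GLU's tessellator to get a fillable mesh.

```java
import java.awt.BasicStroke;
import java.awt.Shape;
import java.awt.geom.Path2D;
import java.awt.geom.PathIterator;

public class StrokeOutline {

    // Stroke an L-shaped polyline with round caps and joins, flatten the
    // outline to line segments, and count the resulting vertices. The
    // flattened contours are what GLU's tessellator would consume.
    static int countOutlineVertices(float weight) {
        Path2D.Float line = new Path2D.Float();
        line.moveTo(0, 0);
        line.lineTo(100, 0);
        line.lineTo(100, 100);

        BasicStroke stroke = new BasicStroke(weight,
                BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND);
        Shape outline = stroke.createStrokedShape(line);

        // Flatten any curved cap/join geometry with a 0.1px tolerance.
        PathIterator it = outline.getPathIterator(null, 0.1);
        float[] coords = new float[6];
        int vertices = 0;
        for (; !it.isDone(); it.next()) {
            if (it.currentSegment(coords) != PathIterator.SEG_CLOSE) {
                vertices++;
            }
        }
        return vertices;
    }

    public static void main(String[] args) {
        System.out.println("10px round-capped outline has "
                + countOutlineVertices(10f) + " vertices");
    }
}
```

The round caps and joins come out as flattened arcs, which is why the vertex count balloons well past the three points of the original polyline.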
Towards the end of the project, I needed a few more of the pixelly images at short notice, to illustrate the essays and about pages on the site. Rather than bother Geraldine, I reached for Processing to see if I could match the look of the homepage imagery that she had created.
I came up with an applet that used a bit of blurring, a bit of distance fall-off, and a bit of perlin noise to create the effect that we were looking for (decorative, obscured, but related to the overall site). Here's an example of the imagery created from a picture of Scott Snibbe's Three Drops that we used on the page for Jennifer Frazier's essay:
You can see the full applet and source code here, and I think it's a good example of how a generative solution to design elements can keep a project flexible right up until launch (and beyond).
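The noise-plus-fall-off part of that recipe is simple to sketch outside Processing too. This is an illustrative stand-in, not the applet's actual code: it uses smoothed value noise (a rough substitute for Processing's Perlin-style noise()) modulated by distance from the image centre, and all names are hypothetical.

```java
import java.awt.image.BufferedImage;
import java.util.Random;

public class NoiseFalloff {
    // A coarse random lattice; bilinear interpolation over it gives
    // smoothed value noise -- a stand-in for Processing's noise().
    static double[][] grid = new double[17][17];

    static double noise(double x, double y) {
        int xi = (int) x, yi = (int) y;
        double fx = x - xi, fy = y - yi;
        double top = grid[xi][yi] + (grid[xi + 1][yi] - grid[xi][yi]) * fx;
        double bot = grid[xi][yi + 1] + (grid[xi + 1][yi + 1] - grid[xi][yi + 1]) * fx;
        return top + (bot - top) * fy;
    }

    static BufferedImage render(int size, long seed) {
        Random rng = new Random(seed);
        for (int i = 0; i < 17; i++)
            for (int j = 0; j < 17; j++)
                grid[i][j] = rng.nextDouble();

        BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        double cx = size / 2.0, cy = size / 2.0;
        double maxDist = Math.hypot(cx, cy);
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                double n = noise(x * 16.0 / size, y * 16.0 / size);
                // Distance fall-off: fade the noise toward the corners.
                double fade = 1.0 - Math.hypot(x - cx, y - cy) / maxDist;
                int g = (int) (n * fade * 255);
                img.setRGB(x, y, (g << 16) | (g << 8) | g);
            }
        }
        return img; // a blur pass could follow to soften the result
    }

    public static void main(String[] args) {
        BufferedImage img = render(256, 42L);
        System.out.println("corner " + (img.getRGB(0, 0) & 0xFF)
                + " vs centre " + (img.getRGB(128, 128) & 0xFF));
    }
}
```

In the real applet the noise was sampled over a source photograph rather than a blank canvas, which is what kept the output "related to the overall site" while still obscuring it.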
I've been thinking recently about data visualisation approaches. One that I'm very fond of in the early stage of a project is to figure out a way to arrange the whole thing on screen – or a representative sample of it – and figure out what meaningful segments you can mark on top of it.
That was the rationale behind our Trulia Hindsight movies (show something about everything) and our charts of a day of activity on Digg. It's definitely an approach I've found easiest in Processing, although using it recently I've missed the instant mouse-driven interactivity of Flash or HTML.
My colleague Shawn just posted a visualisation he made to help debug another visualisation he's working on at the moment:
Shawn's chart is basically a simple scatter plot of cabspotting data points (by cab ID and time), except that he's also overlaid some of the connections between the data to show how far back and forward he has to look to accurately predict a cab's location. And the whole thing moves beautifully, showing up bad data and highlighting good data as it goes. Hopefully we'll get a video up soon.
In my post about the good people at Yahoo's design research group in September I suggested that some of their visualisations remind me of the movie War Games. I love the movie, but I continue to think that certain kinds of accidental visual resonance should be avoided. The 'incoming' visualisations by the good people at Dopplr have this problem too.
Today, Mike sent me the above image that Gem ffffound showing the devastation caused by the Oakland Hills firestorm in 1991. It's shocking, stunning and scary all at once to see so many homes ablaze like that. Mike pointed out that it looks like some of the work from our Trulia Hindsight project at Stamen.
Thankfully I think Mike was referring to the early prototypes I made in Processing using additive blending and a red-through-blue colour range. I've uploaded a movie of one of these prototypes to Vimeo so you can get an idea of what we're talking about:
The fact that certain parts of the movie looked like San Francisco was burning, or being bombed, was definitely a problem we had to avoid for the final piece. It's something I wouldn't want to be thinking about accidentally if I was trying to find out about real estate in the area. What we want is to make something that can illustrate the effects of real devastation if we want it to, without emotionally swindling you if you just want to think about urban growth. That's why we knocked out the red and orange hues in the colour range, added a drop shadow and ditched the additive blending. Ultimately, it was more appropriate to show data on the map than in the map.
So, if you want to you can look up some of the areas of Oakland affected by the fires in 1991, such as this example, and spot the clear rebuilding activity in 1992. With luck, the animation will illustrate some of the devastation caused by the fires, without looking like a simulated disaster.