Random Etc. Notes to self. Work, play, and the rest.

Archive for February 2008

Modest Maps vs Processing

Update 2011-10-10: This project is now hosted on GitHub; please download the latest version there and file an issue if it's not working for you. Thanks!

Since Mike simultaneously outed me and out-did me by linking to the Processing folder of the Modest Maps source, I thought I'd better post the version of the library I've been working on so that I can stop thinking about it for a while.


Modest Maps is a BSD-licensed display and interaction library for tile-based maps in Flash (ActionScript 2.0 and ActionScript 3.0) and Python...

...And Processing


Correlation = Causation

Ben Tesch overlays the timeline of technology adoption and the timeline of cool and concludes that the clothes dryer was responsible for the rise of soul music. I like it.

I'm reminded of my reaction to the wall of statistics about the world before and after Bill Clinton's presidency, at his library in Little Rock, Arkansas. We're told that, among other things, the national debt, the number of nuclear warheads and world poverty all went down, but that AIDS increased. I'm led to believe that nuclear warheads prevent AIDS, or possibly that AIDS prevents nuclear warheads. Clearly.

What’s Your Online Carbon Footprint?

I just came across a blog post from last May by Rolf Kersten about your CO2 footprint when using the internet. I was particularly intrigued by his estimate of the carbon footprint of a Google search: 6.8 grams of CO2 each.

Feeling Guilty

A while ago the 'news' went around that Google could supposedly cut the energy consumption of millions of monitors by changing their website background to black. It turned out to be true only for old CRT monitors, and a bit silly all in all. But I sometimes do a couple of hundred Google searches a day, never mind the other web-based services that are off consuming power on my behalf, and Rolf's post reminded me of an idea I had that I think could really work.

What if Google replaced the "I'm Feeling Lucky" button with one that said "I'm Feeling Patient" instead, and then waited for a convenient moment to perform my search instead of performing it instantly? Would they (could they) reduce the number of servers needed for search if they did that? And will there ever be a point where increased efficiency doesn't get used up by doing more instead of being used as an opportunity to cut back?

Feeling Patient

I know Gavin Bell has been thinking about these ideas too, wondering how to measure the energy consumption of web services in general, including the effects of mod_gzip on the power consumption of nosy routers that inspect packets at every hop. I wonder about the environmental cost of indexing all those mailing lists I leave archived and unread in my GMail.

Clearly some of these things pale into insignificance when compared with the environmental impact of air travel, long commutes, badly insulated homes, old power grids and so on, but I wonder if they start to be worth thinking about at a company operating on Google's scale.

New Processing Hack: Line Caps and Joins in OpenGL

One of the sad things about using the OpenGL rendering option in Processing is the lack of control over line weights, caps and joins. This week I allowed myself to get distracted by this issue for a little bit too long, but I did succeed in solving it, at last:

OpenGL stroke caps and joins

I've put some code for line caps and joins in Processing & OpenGL up on Processing Hacks, in case it's something that bothers you too. I've not tested it extensively, so I'd welcome bug fixes and suggestions there, or in the comments here. One thing I'm interested in doing next is extending (or reimplementing) BasicStroke to generate shapes for thick polylines with varying line thickness. If that's something you've done before, please let me know.

I'm using Java's BasicStroke to generate the path outlines, and GLU's tessellators to generate a mesh that will fill correctly in OpenGL. I "borrowed" the code for the latter from Ben Fry's PGraphicsOpenGL font rendering – thanks Ben!
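For the curious, the BasicStroke half of that recipe is plain Java and works outside Processing too. Here's a minimal sketch of it (the class and method names are mine, not from the hack itself): build a stroke with the cap and join you want, ask it for the outline of a thin polyline, then walk the resulting path – in the hack, those path segments are what get handed to GLU's tessellator to become GL triangles.

```java
import java.awt.BasicStroke;
import java.awt.Shape;
import java.awt.geom.Path2D;
import java.awt.geom.PathIterator;

public class StrokeOutline {

    // Return the closed outline of `path` stroked at the given weight,
    // with round caps and joins. This Shape is what you'd tessellate.
    public static Shape outline(Path2D path, float weight) {
        BasicStroke stroke = new BasicStroke(
            weight, BasicStroke.CAP_ROUND, BasicStroke.JOIN_ROUND);
        return stroke.createStrokedShape(path);
    }

    public static void main(String[] args) {
        // A two-segment polyline with a 90° corner.
        Path2D.Float line = new Path2D.Float();
        line.moveTo(0, 0);
        line.lineTo(100, 0);
        line.lineTo(100, 100);

        Shape fat = outline(line, 10f);

        // Walk the outline; each segment here would be fed to the
        // GLU tessellator to produce a correctly-filled mesh.
        PathIterator it = fat.getPathIterator(null);
        float[] coords = new float[6];
        int segments = 0;
        while (!it.isDone()) {
            it.currentSegment(coords);
            segments++;
            it.next();
        }
        System.out.println("outline segments: " + segments);
    }
}
```

Swapping the cap and join constants (CAP_BUTT, CAP_SQUARE, JOIN_MITER, JOIN_BEVEL) is all it takes to get the different line endings you'd expect from the 2D renderer.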

ArtNano (notes on Processing for design elements)

As I mentioned in my previous post, we were working with Geraldine Sarmiento on the ArtNano site, and she came up with the pixellised imagery you see throughout.

Towards the end of the project, I needed a few more of the pixelly images at short notice, to illustrate the essays and about pages on the site. Rather than bother Geraldine, I reached for Processing to see if I could match the look of the homepage imagery that she had created.

I came up with an applet that used a bit of blurring, a bit of distance fall-off, and a bit of Perlin noise to create the effect that we were looking for (decorative, obscured, but related to the overall site). Here's an example of the imagery created from a picture of Scott Snibbe's Three Drops that we used on the page for Jennifer Frazier's essay:

Three Drops pixels applet

You can see the full applet and source code here, and I think it's a good example of how a generative solution to design elements can keep a project flexible right up until launch (and beyond).
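The applet itself is Processing, but the recipe translates to plain Java easily enough. Here's a rough sketch of the same idea under two substitutions of my own: a per-block colour average stands in for Processing's blur(), and seeded random jitter stands in for its Perlin noise() – neither is the original applet's code.

```java
import java.awt.image.BufferedImage;
import java.util.Random;

public class Pixellise {

    // Downsample `src` into cell-sized blocks, fading each block by its
    // distance from the image centre plus a little random jitter.
    public static BufferedImage pixellise(BufferedImage src, int cell, long seed) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Random rng = new Random(seed);
        double maxDist = Math.hypot(w / 2.0, h / 2.0);

        for (int y = 0; y < h; y += cell) {
            for (int x = 0; x < w; x += cell) {
                // Average the block's colour (a crude stand-in for blurring).
                long r = 0, g = 0, b = 0, n = 0;
                for (int yy = y; yy < Math.min(y + cell, h); yy++) {
                    for (int xx = x; xx < Math.min(x + cell, w); xx++) {
                        int rgb = src.getRGB(xx, yy);
                        r += (rgb >> 16) & 0xff;
                        g += (rgb >> 8) & 0xff;
                        b += rgb & 0xff;
                        n++;
                    }
                }
                // Distance fall-off from the centre, with jitter standing
                // in for Processing's noise().
                double d = Math.hypot(x + cell / 2.0 - w / 2.0,
                                      y + cell / 2.0 - h / 2.0) / maxDist;
                double fade = Math.max(0, 1.0 - d) * (0.7 + 0.3 * rng.nextDouble());
                int rr = (int) (fade * r / n);
                int gg = (int) (fade * g / n);
                int bb = (int) (fade * b / n);
                int rgb = (rr << 16) | (gg << 8) | bb;

                // Paint the whole block with its faded average colour.
                for (int yy = y; yy < Math.min(y + cell, h); yy++)
                    for (int xx = x; xx < Math.min(x + cell, w); xx++)
                        out.setRGB(xx, yy, rgb);
            }
        }
        return out;
    }
}
```

The nice thing about parameterising it this way (cell size, fall-off, jitter) is exactly the flexibility mentioned above: regenerating a batch of images with a different look is a one-line change.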

ArtNano (notes on WordPress as a CMS)

I recently finished helping out on the back-end of the NISE Network's ArtNano site for San Francisco's Exploratorium. Working on a 'straight-up' website is a rare departure for us at Stamen, but we're big fans of the Exploratorium and were delighted at the opportunity to work with them.

ArtNano: New Approaches for Visualizing the Nanoscale

The ArtNano site features contributions from several artists tasked with exploring the nanoscale. My favourites are Semiconductor's gorgeously glitchy 200 Nanowebbers video, Scott Snibbe's multi-scale Three Drops installation and Santiago Ortiz's Time Spiral. Also featured are artworks by Eric Heller, Victoria Vesna and Stephanie Maxwell.

If you're interested in that sort of thing, read the full version of this post for some notes on using the WordPress blogging platform as a CMS for a small website project like ArtNano.


Step 1: Can We Show It All?

I've been thinking recently about data visualisation approaches. One that I'm very fond of in the early stage of a project is to find a way to arrange the whole dataset on screen – or a representative sample of it – and then work out what meaningful segments you can mark on top of it.

That was the rationale behind our Trulia Hindsight movies (show something about everything) and our charts of a day of activity on Digg. It's definitely an approach I've found easiest in Processing, although using it recently I've missed the instant mouse-driven interactivity of Flash or HTML.

My colleague Shawn just posted a chart he made to help debug a visualisation he's working on at the moment:

Shawn's chart is basically a simple scatter plot of cabspotting data points (by cab ID and time), except that he's also overlaid some of the connections between the data to show how far back and forward he has to look to accurately predict a cab's location. And the whole thing moves beautifully, showing up bad data and highlighting good data as it goes. Hopefully we'll get a video up soon.

In the meantime, be sure to read Shawn's description and keep an eye out for the final debugged visualisation at MoMA soon!

De-Po Skinny Theme: Thanks, Derek!

One of those tedious admin posts.

I was tired of the tiny green-blue text and general busyness of my previous theme, so I've stripped everything back today and gone with Derek Powazek's De-Po Skinny.

Doubtless I'll want to mess with the look of it in the future, but this will definitely work for now. Flickr photos at the top, too!