Frontier Nerds: An ITP Blog

The Two Cultures

Eric Mika

[Images: an electromagnetic spectrum poster; the interior of the Guggenheim Museum]

9 AM class on the left. 3 PM class on the right.

The Scarlet S

Eric Mika

[Image: toasters flying amidst Mario Brothers clouds]

The remark I remember most from my undergraduate thesis critique came from a fellow student. A film major, I think.

“I’d like to have this as a screensaver!”

She meant it as a compliment, but at the time I took offense. I thought the screensaver was the epitome of digital banality.

But in the last few years, I’ve come around a bit.

I think the late 80s / early 90s era screensavers now read as a refreshing double-negation of the recent 8 bit / animated gif fixation. (Although I’m not sure if there’s a way to revisit / reconsider the After Dark heyday without collapsing in a pile of insincerity.)

I can’t think of a course or context at ITP where calling something out as screensaver-esque would go over particularly well (although opportunities abound). The more general new media community seems divided on the aesthetic / contextual parallels between screen-based art and screensavers.

Some forsake the association, others mine the medium for nostalgic kitsch, and a few embrace the idea and use the screensaver as a means of distributing their work. I expected most to fall into the first category, but some Googling around on the topic suggests that NMSAS (New Media Screensaver Anxiety Syndrome) doesn’t run as deep as I initially expected.

Nevertheless, some choice quotes emerged from the sift. Sources are linked, and if anyone wants to comment / update their opinion — some of the quotes are years old — I’m game for a conversation.

Let’s open the floor with three quotes from Marius Watz:

“The curse of generative art: ‘So you make screensavers?’”
    – Marius Watz

“To the frustration of many digital artists, screensavers have much in common with generative art. They often rely on some kind of ruleset to allow infinite animation and to avoid burning a single image into the screen because of repetition. Many classic screensavers use mathematical formulas like Bezier or Lissajous curves. But most screensavers are created by programmers, not designers, hence the bad reputation they have as cultural artifacts.”
    – Marius Watz

“As for the complaint that generative art is simply decorative, fit only for screensavers or wallpaper patterns, it is hardly worth answering. Such a position would invalidate most of art history.”
    – Marius Watz

“The other hurdle is the ‘screen saver’ comparison. Society has chosen to consider screen savers with very little regard — they are temporary visuals. Another challenge for the legitimacy of this type of art.”
    – Steve Sacks

“Interactive art usually presents more abstract and complex concepts but has terrible interface / interaction. Ok so ‘that’s not the point of the work’ you say. But what irks me is that there is very little work that addresses this. As a result many people will dismiss much interactive art as just a screensaver or digital toy.”
    – Tom Betts

“We don’t want it to look like an iTunes screensaver.”
    – Matt Checkowski on the Opera of the Future

“This generative ‘art’ seemed better suited to screensavers or abstract desktops than canvases — a fact confirmed by Davis’s own Reflect app for the iPhone.”
    – Daniel West on Joshua Davis (flamewar ensues)

“But for me it is just a screen saver, since there is no story.”
    – Mauro Martino

“Neither [Casey Reas nor Sol LeWitt] creates interactive works per se, but they are touchstones for anyone interested in the algorithmic art as something other than a screen saver.”
    – Joshua Noble

“The idea behind a work can sometimes be more compelling than what actually appears on the screen. And for viewers without a thorough grounding in technology — or advanced math — the most innovative visual programs can seem like little more than high-end screensavers.”
    – Susan Delson

“Clouds also became a popular desktop and screen saver at some point. I read it on the Internet, like, ‘Here’s instructions on how to take this and make a screen saver.’ I just surfed on it a while back. I was like, Wow, it probably would look nice on the desktop or whatever.”
    – Cory Arcangel

Research on Relative Motion Tracking

Eric Mika

Project Context

My thesis project has undergone a major shift in the last week. I’m moving away from the post-apocalyptic pirate internet, and towards something completely different: A means of projecting content onto surfaces that makes the projection appear intrinsic to the surface.

Imagine a hand-held projector that you can sweep across a room, kind of like a flashlight. As it moves, the projected content appears stuck to the wall, the floor, etc. For example, you could add something to the scene in a particular location — a bit of text, perhaps.

After adding the text, you could sweep the projector to a different part of the wall. The text would appear to go out of view once it left the throw area of the projector, but if you were to move the projector back towards the spot where you initially added the text, you would see the words come back into view. The words are stuck to their environment — the projection is just an incidental way of exploring the space and revealing its content.

Two recent technologies make this a particularly ripe time for the project. The Kinect gives cheap 3D scene information, which can improve the quality of motion tracking and automate the projection mapping process. And new pico-projectors that run on battery power and weigh significantly less than their conference-table counterparts mean that carrying a projector around and using it to explore a space is no longer an entirely ridiculous proposition. This whole idea, which I’m currently calling Thesis II (for personal reasons), will be written up in more detail soon.

Fronts of Inquiry

The creative challenge for the next twelve weeks is to conceive of and build an application that demonstrates the usefulness and creative possibilities of this tool.

The technical challenges are twofold. First, I need a way to track the relative motion between the projector and the projection surface (generally a wall) — I’ll refer to this as relative motion tracking. Second, I need a way to dynamically distort the projected image to match the geometry of the projection surface. This is similar in concept to projection mapping, except the projection surface isn’t static, so I’ll call this dynamic projection mapping. The calculations for both of these steps need to happen in less than 20 milliseconds (an update rate of at least 50 frames per second) if the effect is going to work and feel fluid.

Other people are already working on dynamic projection mapping, and from a technical standpoint it’s both more familiar ground and less essential to the final project than relative motion tracking. Dynamic projection mapping is “nice to have” and will contribute significantly to the quality of the project, but relative motion tracking is the technology the project depends on to work at all. So, this paper will focus on research into means of relative motion tracking, and on which (if any) existing open-source projects could be adapted for this application.

Similar Projects

At the most basic level, I need to find a way to take a camera feed and determine how content in the scene is moving. Traditionally, this is called camera tracking — a form of deriving structure from motion. The process goes something like this: First, the software identifies feature points within each frame — these are generally areas of high contrast, which are relatively easy to pick out algorithmically. On the next frame, the software finds another batch of feature points, and then does correspondence analysis between the feature points in the current frame and those in the previous frame. From this information, the movement of the camera can be inferred. (E.g. if a feature point is at pixel [5, 100] in frame one, and then moves to pixel [10, 80] in frame two, we can guess that the view shifted about [5, −20] pixels between frames.) It’s a bit more complicated than that, because of the parallax effect — points closer to the camera will appear to move more than points further away from the camera. The software can take this into account, and build a rough point cloud of the scene.
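To make that detect / correspond / infer loop concrete, here is a minimal sketch of the idea in Python with OpenCV (Python rather than Processing purely for brevity; the camera index and tracker parameters are placeholder assumptions). It finds feature points, tracks them into the next frame with Lucas-Kanade optical flow, and takes the median displacement as a crude, parallax-ignoring estimate of the view shift:

```python
# Minimal sketch: estimate frame-to-frame camera shift from tracked features.
# Assumes OpenCV (pip install opencv-python) and a webcam at index 0.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera found")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1. Identify feature points: corners / high-contrast areas that are
    #    easy to pick out algorithmically.
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                     qualityLevel=0.01, minDistance=8)
    if points is None:
        prev_gray = gray
        continue

    # 2. Correspondence analysis: track each point into the current frame
    #    with pyramidal Lucas-Kanade optical flow.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    good_old = points[status.flatten() == 1]
    good_new = new_points[status.flatten() == 1]

    # 3. Infer the motion: the median displacement of all tracked points is
    #    a rough (parallax-ignoring) estimate of how the view shifted.
    if len(good_new) > 0:
        shift = np.median(good_new - good_old, axis=0).ravel()
        print("view shifted ~ [%.1f, %.1f] px" % (shift[0], shift[1]))

    prev_gray = gray
```

A real tracker would reject outliers and solve for rotation and depth as well as translation, but this detect / track / infer skeleton is the core that full camera-tracking packages elaborate on.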

This process has applications in special effects and film / post-production. If you have a shot with a lot of camera movement and you need to add an explosion to the scene, camera tracking gives exactly the information you need to position the explosion in a believable way from frame to frame. Because of this demand, there are a few über-expensive closed-source software packages designed to perform camera tracking reliably. Boujou, for example, sets you back about $10,000. There is, however, a free and open-source option called PTAM (Parallel Tracking and Mapping for Small AR Workspaces), which can perform similar tracking.

Caveats

The PTAM code seems like the right starting point for my own adaptation of this concept, but there are a few caveats that make me nervous about just how much of a head start the code will give me. First, PTAM and similar camera tracking software is designed for use on high-contrast two-dimensional RGB bitmaps — basic still film frames. In contrast, the grayscale depth map coming from the Kinect is relatively low contrast, and areas of high contrast are probably best avoided in the feature detection process, since they represent noisy edges between depths. I probably will not be able to use the Kinect’s RGB data, because it’s going to be filled with artifacts from the projection. Also, since the Kinect already gives us a point cloud, I don’t need any of the depth-calculation features from PTAM. Because of these issues, I will probably start by skimming through the PTAM source code to get an idea of their approach to the implementation, and then seeing how PTAM behaves when fed the grayscale depth map from a Kinect. From there, I will probably experiment with simpler feature extraction and tracking algorithms in Processing that make the most of the Kinect’s depth data. (This code would be destined for an eventual port to C++.)
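As a first pass at that experiment, here is a small sketch (Python with OpenCV again, standing in for the eventual Processing prototype) of one plausible approach: normalize the depth map to grayscale, mask out the steep depth discontinuities, and only then run feature detection. The `depth` array is a hypothetical stand-in for a Kinect frame, and the gradient threshold is an arbitrary assumption:

```python
# Sketch: feature detection on a depth map while avoiding noisy depth edges.
# `depth` stands in for one 640x480 Kinect depth frame (11-bit values);
# random data is used here so the sketch runs without hardware.
import cv2
import numpy as np

depth = np.random.randint(0, 2048, (480, 640)).astype(np.float32)

# Normalize the low-contrast depth map into an 8-bit grayscale image.
gray = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Measure depth discontinuities: steep gradients mark the noisy edges
# between depths that feature detection should avoid.
grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
grad_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
edge_strength = cv2.magnitude(grad_x, grad_y)

# Keep only smooth interior regions; eroding the keep-mask widens the
# excluded margin around each discontinuity. The threshold of 50 is a guess.
mask = (edge_strength < 50).astype(np.uint8) * 255
mask = cv2.erode(mask, np.ones((5, 5), np.uint8))

# Detect features only inside the smooth regions of the depth image.
points = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01,
                                 minDistance=8, mask=mask)
print("found %d candidate features" % (0 if points is None else len(points)))
```

Whether corner-style features are even the right primitive for smooth depth data is an open question; this just demonstrates the masking strategy.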

Rough Thesis Production Schedule

Eric Mika

A bit of optimism…

[Images: thesis production schedule charts for February, March, April, and May]

Loose bedside inventory

Eric Mika

This is an experimental essay composed over winter break

One white cap, striated, counter-clockwise, childproof. One inch, three tenths, twenty hundredths and seven thousandths in diameter, seven tenths, eighty hundredths and one thousandth in height. Twenty-four extra-strength Excedrin, caplets. Seven hundred ninety-four thousandths in length. Three hundred eighty-five thousandths in width. Two hundred forty-one thousandths in height. Engraved with an “E”, does not stand for ecstasy. Superdome-like in cross section.

One white label, three thousandths in thickness, affixed to Soviet-green translucent bottle. Promises for headaches, colds, arthritis, muscle aches, sinusitis, toothache, premenstrual and menstrual cramps delineated with bullets. You can call them at 800 468 7746. Possibly made in New Jersey.

Clear plastic blister on printed cardboard backing, nineteen thousandths thick. Contents formerly four, now two. Cylindrical, five hundred fifty-six thousandths in diameter. Length measures one thousand nine hundred ninety-one thousandths, and one point five volts. The positive nip, two hundred twelve thousandths in diameter. Forty-four thousandths in height. Do not connect improperly. Made in U.S.A. Ne pas installer de manière inappropriée. Fabriqué aux É.-U.

Sixteen thousandths of cardboard, folded over. One thousand five hundred twenty-one thousandths in width. One thousand eight hundred ninety-two thousandths in height (closed). Tapered, one hundred twenty-three thousandths at one end, two hundred sixty-nine at the other. Profile like an arrow loop. Contents formerly twenty, now sixteen. Tip contains red phosphorus, potassium chlorate, sulfur and starch, a neutralizer, siliceous filler, diatomite and glue. Certain family members consider this a delicacy. Made in New Haven.