Frontier Nerds: An ITP Blog

Media Controller Progress

Eric Mika

Yin, Arturo, and I have started implementing a tap-inspired musical shoe. Right now we’re working on a single shoe, with five force-sensitive resistors (FSRs) on the bottom designed to capture the nuances of a footfall and translate them into sounds (either sampled or synthesized; we’re not sure yet).

Here’s how the system looks so far:

On the bottom of the shoe, there’s a layer of thin foam designed to protect the FSRs and evenly distribute the weight of the wearer. We ran into some trouble with the foam tearing on cracks in the floor, so we’re looking for a way to improve durability or maybe laminate something over the foam to protect it.

An Arduino connected to a breadboard with myriad wires
Force sensors taped to the bottom of a shoe
Close-up of force sensors taped to a shoe
Testing sound-reactive shoes with a laptop

The challenge now is to find a way to work with all of the data coming in from each foot: how best to map the FSRs’ responses to the wearer’s movements in an intuitive and transparent way. On the technical front, we’ll need to make the system wireless so the wearer has freedom of movement, and find a subtler way to route the sensor wires. There are also concerns about the long-term durability of the FSRs, so we might need to make them easily replaceable. This could be tricky, since each sensor is buried under foam and tape…

We’ve written some very basic code for now, just enough to get the signal from each FSR and graph the response in Processing.

Here’s the Arduino code:

// Multi-channel analog serial reader.

// Adapted from "sensor reader".
// Reads whichever pins are specified in the sensor pin array
// and sends them out to serial in a period-delimited format.

// Read the inputs from the following pins.
int sensorPins[] = {0, 1, 2, 3, 4, 5};

// Specify the length of the sensorPins array.
int sensorCount = 6;

void setup() {
  // Configure the serial connection:
  Serial.begin(9600);
}

void loop() {
  // Loop through all the sensor pins, and send
  // their values out to serial.
  for (int i = 0; i < sensorCount; i++) {
    // Send the value from the sensor out over serial.
    Serial.print(analogRead(sensorPins[i]), DEC);

    if (i < (sensorCount - 1)) {
      // Separate each value with a period.
      Serial.print(".");
    }
    else {
      // If it's the last sensor, skip the
      // period and send a line feed instead.
      Serial.println();
    }
  }

  // Optionally, let the ADC settle.
  // I skip this, but if you're feeling superstitious...
  // delay(10);
}
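For reference, each line the Arduino sends is six period-separated decimal readings followed by a newline. The values here are arbitrary examples:

512.487.23.0.1019.340
510.490.25.0.1020.338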

And the Processing code:

// Multi-channel serial scope.

// Takes a string of period-delimited analog values (0-1023) from the serial
// port and graphs each channel.

// Import the Processing serial library.
import processing.serial.*;

// Create a variable to hold the serial port.
Serial myPort;

int graphXPos;

void setup() {
  // Change the size to whatever you like, the
  // graphs will scale appropriately.
  size(1200, 512);

  // List all the available serial ports.
  println(Serial.list());

  // Initialize the serial port.
  // The port at index 0 is usually the right one, though you might
  // need to change this based on the list printed above.
  myPort = new Serial(this, Serial.list()[0], 9600);

  // Read bytes into a buffer until you get a linefeed (ASCII 10):
  myPort.bufferUntil('\n');

  // Set the graph line color.
  stroke(0);
}

void draw() {
  // Nothing to do here.
}

void serialEvent(Serial myPort) {
  // Read the serial buffer.
  String myString = myPort.readStringUntil('\n');

  // Make sure you have some bytes worth reading.
  if (myString != null) {
    // Make sure there's no white space around the serial string.
    myString = trim(myString);

    // Turn the string into an array, using the period as a delimiter.
    int[] sensors = int(split(myString, '.'));

    // Find out how many sensors we're working with.
    int sensorCount = sensors.length;

    // Again, make sure we're working with a full package of data.
    if (sensorCount > 1) {
      // Loop through each sensor value, and draw a graph for each.
      for (int i = 0; i < sensorCount; i++) {
        // Set the offset based on which channel we're drawing.
        int channelXPos = graphXPos + (i * (width / sensorCount));

        // Map the value from the sensor to fit the height of the window.
        int sensorValue = round(map(sensors[i], 0, 1024, 0, height));

        // Draw a line to represent the sensor value.
        line(channelXPos, height, channelXPos, height - sensorValue);
      }

      // At the edge of the screen, go back to the beginning:
      if (graphXPos >= (width / sensorCount)) {
        // Reset the X position.
        graphXPos = 0;

        // Clear the screen.
        background(255);
      }
      else {
        // Increment the horizontal position for the next reading.
        graphXPos++;
      }
    }
  }
}

Something About Simultaneity

Eric Mika

I was particularly interested in the discussion of bullet time last week — such a surreal way to traverse a moment.

On an individual basis, executing the effect itself is now within reach of DIYers. The rig used to shoot the following was built by the Graffiti Research Lab for $8,000 in 2008.

Light painting

The end-point for the glut of earth-centric images and data on the web seems to be a whole-earth snapshot, representing both the present moment and any desired point in (digitally sentient) history. Could we build a navigable world-wide instant if we had enough photos? Could the process be automated? Things like Street View certainly generate an abundance of photographs, but they’re all displaced in time.

I searched pretty thoroughly and was surprised by how few efforts have been made to synchronize photographs in time (though plenty of effort has gone into the geographic space front). Flickr has put together a clock of sorts. It’s interesting, but it only spans a day’s time and doesn’t layer multiple images on a current moment.

Flickr clock visualization

Still, Flickr’s a great source for huge volumes of photos all taken at the same instant. (Another 5,728 posted in the last minute.)

I wanted to see if the beginnings of this world-scale bullet-time snapshot could be constructed using publicly available tools, so I set out to write a little app via the Flickr API to grab all of the photos taken at a particular second. With this, it’s conceivable (though not yet true in practice) that we could build a global bullet-time view (of sorts) of any moment.

~I ran into some grief with the Flickr API: it doesn’t seem to allow second-level search granularity, although seconds data is definitely in their databases. So as an alternative I went for upload date, where seconds data is available through the API (at least for the most recent items).~
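For the curious, here’s roughly what that upload-date query looks like against today’s API, written with modern Processing’s JSON helpers. The API key and timestamp are placeholders, and this is a sketch of the approach rather than the original app’s code:

// Fetch every photo uploaded to Flickr during a single second.
// YOUR_API_KEY and the timestamp below are placeholders.
String apiKey = "YOUR_API_KEY";

void setup() {
  // The second we're interested in, as a Unix timestamp.
  long second = 1257894000L;

  // flickr.photos.search treats min/max_upload_date as an
  // inclusive range, so equal bounds select a one-second window.
  String url = "https://api.flickr.com/services/rest/" +
    "?method=flickr.photos.search" +
    "&api_key=" + apiKey +
    "&min_upload_date=" + second +
    "&max_upload_date=" + second +
    "&extras=date_upload,date_taken,owner_name" +
    "&per_page=500&format=json&nojsoncallback=1";

  JSONObject response = loadJSONObject(url);
  JSONArray photos = response.getJSONObject("photos").getJSONArray("photo");

  println(photos.size() + " photos uploaded during that second:");
  for (int i = 0; i < photos.size(); i++) {
    JSONObject photo = photos.getJSONObject(i);
    println(photo.getString("title") + " by " + photo.getString("ownername"));
  }
}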

2022 Update: Upon revisiting the source code, it’s now possible to find matching photo capture times instead of just using the upload times. The latest version of the algorithm identifies collisions (sketched below) if:

  • The capture times match down to the second.
  • The photo’s owner is different from those already in the collision collection (to keep the results interesting).
  • The photo’s capture time and upload time don’t match (based on minutes and seconds, to deal with the lack of time zone consistency between these fields).
  • The capture times have a high confidence value according to Flickr’s data.
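In rough code, the test looks something like the function below. The Photo fields are hypothetical stand-ins for however the API response gets parsed, and I’m reading Flickr’s datetakengranularity value of 0 as the high-confidence case:

// Hypothetical parsed photo record; the field names are stand-ins.
class Photo {
  String owner;
  long takenTime;        // capture time as a Unix timestamp
  long uploadTime;       // upload time as a Unix timestamp
  int takenGranularity;  // 0 means exact to the second, per Flickr
}

boolean isCollision(Photo candidate, Photo anchor, ArrayList<Photo> collected) {
  // The capture times must match down to the second.
  if (candidate.takenTime != anchor.takenTime) return false;

  // The owner must be new to this collision collection.
  for (Photo p : collected) {
    if (p.owner.equals(candidate.owner)) return false;
  }

  // Discard photos whose capture and upload times agree on minutes and
  // seconds; that "capture" time is probably just the upload timestamp.
  if (candidate.takenTime % 3600 == candidate.uploadTime % 3600) return false;

  // Require second-level confidence in the capture time.
  return candidate.takenGranularity == 0;
}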

Kudos to Flickr for keeping their API working with almost no breaking changes for more than a decade.

Here’s a rough cut:

Temporal Collisions

This could be taken even further if the search was narrowed to those images with geotag metadata. With enough of that data, you could construct a worldwide snapshot of any given second with spatial mapping, bringing us closer still to the whole-earth snapshot.
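If I’m reading the API docs right, the narrowing itself is a small change: flickr.photos.search accepts a has_geo=1 parameter, and asking for extras=geo returns each photo’s coordinates, along the lines of:

&has_geo=1&extras=geo,date_upload,date_taken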

Some efforts have been made to represent the real-time flow of data, but they generally map very few items to the current moment, and don’t allow navigation to past moments.


Update: Here’s the reason Flickr’s API refused to give me all the photos within the bounds of a second: “A tag, for instance, is considered a limiting agent as are user defined min_date_taken and min_date_upload parameters — If no limiting factor is passed we return only photos added in the last 12 hours (though we may extend the limit in the future).”

Street View Disaster

Eric Mika

An unlikely glitch in a Google Street View sequence captured on Route 30 in Rankin, Pennsylvania.

The current theory is that an orange bag snagged on the camera array. This would also explain the blurred, unusually large text overlaying the view. Maybe the Street View-blocking British villagers could deploy a barrage of wind-borne privacy prophylactics in their next campaign.

Further discussion is available in the Reddit comment thread.

Here’s a screen capture for posterity, in case the glitch is fixed:

Street View frame with a strange orange hue

Interaction Observations

Eric Mika

Pick a piece of interactive technology in public, used by multiple people. Write down your assumptions as to how it’s used, and describe the context in which it’s being used. Watch people use it, preferably without them knowing they’re being observed. Take notes on how they use it, what they do differently, what appear to be the difficulties, what appear to be the easiest parts. Record what takes the longest, what takes the least amount of time, and how long the whole transaction takes. Consider how the readings from Norman and Crawford reflect on what you see.

Overview of Tony Rosenthal's Alamo sculpture
Detail of graffiti on Tony Rosenthal's Alamo sculpture
Detail of Tony Rosenthal's Alamo sculpture's pivot bearing
Plaque of Tony Rosenthal's Alamo sculpture

[ I’ll write up the commentary soon… ]

Making XBees Talk

Eric Mika

Arturo, Yin, and I are all hoping we can cut the wires between an Arduino and a laptop for our upcoming media controller project. We might even have a legitimate reason to bother: our controller is wearable, and the wearer should be able to dance / spin / walk and generally move freely.

So I picked up a couple of XBees in hopes of figuring out which end is up.

After about three hours of fiddling, we got the wireless working: sending serial from the Arduino to my laptop. Our system only needs to send data one way, from the microcontroller to a PC, so we didn’t fuss with sending data the other way. (Though I’d guess it’s as simple as sending serial out from Processing, probably with the ID of the XBee or something similar.)

Here’s what we did and where we ran into trouble:

  1. First we amassed the parts: an Arduino, two XBee modules, a shield to bring the XBee’s tight pin spacing into alignment with a standard breadboard (not pictured), and a small board that takes the XBee from serial to USB via an FTDI chip. The USB board is kind of cheating (I’d learn more by wiring that circuit myself), but it’s very handy.

Two XBee radios and an XBee to USB adapter next to an Arduino

  2. We assembled a wired version of the circuit I want to make wireless. It’s the most basic thing imaginable: the AnalogInSerial example from the Arduino IDE. I made sure this was sending out serial like it should before moving on to the next step.

An Arduino connected to a breadboard and a potentiometer

  3. Next I wired the XBee onto the breadboard. Note that the XBee’s pins are spaced more tightly (2 mm per pin) than the 0.1" spacing on a standard breadboard, so to drop an XBee onto a breadboard you’ll need a small shield to compensate for the difference. (I used a $10 shield from the NYU computer store, but you can assemble one for a few bucks less by picking up two 10-pin sockets, an XBee breakout board, and a strip of break-away headers from Sparkfun.)

    Note that the XBee wants 3.3 volts of power. I’m not sure whether the 5 volts we normally work with on the Arduino is enough to fry a $30 XBee module, but I didn’t want to find out. My Arduino already has a 3.3 volt source, so I just used that, but if it’s not available on your Arduino you can wire one up easily enough using a voltage regulator.

An Arduino connected to a potentiometer and a breadboard with an XBee radio

  4. Getting the XBees to communicate required some firmware voodoo. Note: this only applies if you have a Series 2 XBee! I made the mistake of buying an XBee Series 2 instead of the simpler Series 1. In certain cases the Series 2 would be a better call (it supports mesh networking, for example), but for this particular application it just introduced additional complexity and confusion. Limor Fried has a great matrix of XBees on her website illustrating the differences between the various models. If you’re working with a Series 1, there are a lot of better places than this blog post to get started. (Like Chapter 6 of Tom Igoe’s book.)

    Changing the XBee firmware requires a Windows-only app called XCTU. (And if you haven’t already installed the drivers for the FTDI serial-to-USB bridge, you’ll need to download those as well.) Luckily, XCTU works fine in a Windows virtual machine on the Mac.

    We used XCTU to configure one of the XBees with the “Coordinator AT” firmware and the other with the “Router / End Device AT” firmware. We also made sure that both modules’ Personal Area Network (PAN) IDs matched (though we changed them from the default for good measure). Even so, the firmware update process gave us trouble. The USB Explorer board we used to connect the XBees to serial for firmware updates doesn’t have a reset switch, so resetting the XBee (something XCTU requests pretty often) was accomplished by very carefully shorting the reset pin to ground with a length of wire. This is probably a very bad idea, since shorting the wrong pin could have unexpected consequences. Perhaps as a result, one of the firmware update procedures went terribly wrong and left the XBee bricked. It took some very strange incantations to bring the XBee back to life, but eventually we were able to restore the firmware to the factory settings and proceed as planned.

  5. Finally, once all the firmware was in alignment, the “Router” XBee plugged into the Arduino started flashing twice a second to indicate that it was connected to the “Coordinator” attached to my laptop. Then serial started flowing in over the air.

    We used a Processing app to graph the signal from the potentiometer. This is where we ran into some issues that have yet to be resolved… it seems like some data is lost in transit, since we sometimes get packets of bytes that either terminate prematurely or run too long (past where the newline should have been). We could write some error correction into the Processing side, but before we do that we want to make sure this issue isn’t the result of a misconfiguration or some other unexpected variable. (We already tried changing the PAN IDs in case there was a collision with a nearby XBee, but that didn’t fix the glitches.) There’s also some noticeable lag between turning the pot and the results showing up on the screen. I’m not sure if this is just how things are with wireless, or if we botched a setting somewhere. The video below illustrates the irregular spikes and drops in the serial stream:


Update

Adding delay(10) to Arduino’s loop seems to have fixed the glitches and lag. That was easy…

Tom shared a few other approaches that could have worked and might still be a good idea to implement. We could write some kind of error correction: bookend each string of data in unique values, and only use the data if it starts and ends with the bookend values as expected.
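A minimal sketch of that check on the Processing side, assuming we pick "<" and ">" as the bookend values (my choice for illustration, not something we’ve implemented) and that the Arduino wraps each frame to match:

// Drop-in replacement for the scope sketch's serialEvent().
void serialEvent(Serial myPort) {
  String frame = myPort.readStringUntil('\n');
  if (frame == null) return;

  frame = trim(frame);

  // Only accept frames with both bookends intact; anything else
  // was probably mangled in transit, so throw it away.
  if (!frame.startsWith("<") || !frame.endsWith(">")) return;

  // Strip the bookends and parse the readings as before.
  int[] sensors = int(split(frame.substring(1, frame.length() - 1), '.'));
  // ... graph the values ...
}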

Another alternative is to switch to a call-and-response communication method, so the XBees won’t strain over sending bytes full-bore… instead, our Processing app could mediate the exchange by only requesting data when it’s needed.
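On the Processing side, a call-and-response version might look something like the sketch below. The request byte 'R' is arbitrary, and the Arduino would need matching code that waits for that byte before sending a frame:

import processing.serial.*;

Serial myPort;

void setup() {
  // Serial setup, as in the scope sketch.
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');

  // Kick off the exchange by requesting the first frame.
  myPort.write('R');
}

void draw() {
  // Nothing to do here.
}

void serialEvent(Serial myPort) {
  String frame = myPort.readStringUntil('\n');
  if (frame != null) {
    // ... parse and graph the frame as before ...
  }

  // Request the next frame only after this one has been handled,
  // so the XBees never send faster than we can read.
  myPort.write('R');
}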


Recommended resources

Cai Stuart-Maver wrote the life-saving tutorial for Series 2 XBees. (Formerly hosted at humboldt.edu, but the file has since 404’d. I’ve mirrored it on my own server in case anyone needs it.)

Rob Faludi knows all.

Justin Downs hosts a bunch of XBee documentation.