
Lab: Transistors

The transistor lab explains how to control a higher voltage circuit with a lower voltage one. In this case, I used the Arduino and a transistor to start and stop a DC motor running on a separate 9V power source.

The circuit looks like this. The diode across the transistor prevents current from flowing back through the circuit while the motor (now a generator) spins down after power is cut.

A little code sets the motor starting and stopping. I initially tried to run the motor on the Arduino's 5V power supply, but as soon as the motor switched on, the Arduino would brown out and reset. Hooking up an external 9V supply fixed this. (However, judging from the frantic sounds the motor produced, 9V might have been more than it wanted.)
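
For the on / off part, a minimal sketch along these lines does the job (pin 9 and the one-second interval are assumptions for illustration, not a record of the exact circuit):

    // Start and stop a DC motor through a transistor.
    // Digital pin 9 drives the transistor's base through
    // a resistor. (The pin choice is an assumption.)
    int motorPin = 9;

    void setup() {
      pinMode(motorPin, OUTPUT);
    }

    void loop() {
      digitalWrite(motorPin, HIGH);  // transistor conducts; motor runs
      delay(1000);
      digitalWrite(motorPin, LOW);   // transistor off; motor spins down
      delay(1000);
    }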

Using a potentiometer to control a PWM signal to the transistor allowed for speed control.
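
Something like this sketch captures the idea (again, the pin numbers are assumptions; the transistor has to hang off one of the PWM-capable pins):

    // Read a potentiometer and use it to set the motor's speed.
    int potPin = 0;    // analog input reading the pot (assumed)
    int motorPin = 9;  // PWM-capable pin driving the transistor (assumed)

    void setup() {
      pinMode(motorPin, OUTPUT);
    }

    void loop() {
      // Scale the pot's 0-1023 reading down to the 0-255 PWM range.
      int speed = analogRead(potPin) / 4;
      analogWrite(motorPin, speed);
    }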

November 7 2009 at 6 PM

Rich 3D Modeling: Scrounging up Data

Google Earth:

Google Street View:

Grand Theft Auto IV:

Google Earth's take on the scene sums up the problems with the state of 3D representations of our environment. Since the Brooklyn Bridge enjoys celebrity status, it gets lovingly hand-modeled, either by Google or by crowd-sourced obsessives. But the rest of the scene is literally flat. Street View isn't exactly 3D, but it shows some of the other data Google has on hand that might go into improving the quality of the ground-level experience in Google Earth. The Grand Theft Auto IV screenshot provides some calibration as to how much detail current hardware can render in real time; although much of the scene is fictional, it highlights the details Google misses, and underscores the fact that the bottleneck to an improved model is data rather than processing power.

So where is this data going to come from? How are we going to fill in the gaps between items that are epic enough to fuss over by hand (like the bridge) and the aerial photography that's supposed to pass for a backdrop? Where are the cars and the people and the telephone wires that humbly contribute to our sense of place?

I'm not sure the answers to these questions are being pursued very rigorously, but there are a couple of approaches to populating the current, rather sparse 3D representations of the world.

Google's recently released Building Maker makes it easier to add buildings, but it won't let you draw a mailbox or a tree.

In lieu of crowd-sourcing, a site called UpNext created a (most likely) hand-built 3D approximation of Manhattan that's more navigable (but not necessarily more detailed) at street level than Google Earth's take on the same space.

A low-budget means of building 3D models can be found in Friedrich Kirschner's Inkscanning project. A cheap camera takes a series of photographs as an object (or person) is submerged into an opaque liquid. The photographic slices are then reconstituted to create a three-dimensional point cloud. Not necessarily scalable, but an interesting example of simple and cheap technology doing the work of much more expensive LIDAR or industrial-grade 3D scanning systems.

Ogle is a legally unsustainable but nevertheless interesting approach to extracting existing 3D data from closed systems. It basically scrapes geometry out of the OpenGL pipeline after it's been rendered, and dumps it into standard 3D model formats.

Here it is extracting data from Google Earth for 3D printing:

A few papers from the 2008 3DPVT conference also show promise in improving the models we have, or building new ones more easily.

Martin Schneider and Reinhard Klein's paper proposes improving the texturing of terrain height maps by merging several orthographic photographs of the same terrain. It could be of use to Google, since they have access to ideal source images. The technique isn't necessarily applicable to urban areas, but it could improve models of rural ones.

The top row shows a region textured without the technique; the bottom row shows the same region with the improved texturing proposed in the paper.

November 5 2009 at 7 AM

Whip vs. Chip

I've run into some debate on the floor about which of the two flavors of XBee antenna gives the most range: whip (left) or chip (right)?

I can't get a straight answer from the floor, but science delivers thanks to MaxStream's research on the issue. Turns out the antenna type only makes a difference for long-range outdoor applications:

XBee Antenna Performance
Antenna          Outdoor (ft.)   Indoor (ft.)
XBee Chip        470             80
XBee Whip        845             80
XBee-PRO Chip    1690            140
XBee-PRO Whip    4382            140

(Source: Page 3 of Application Note XST-AN019a.)

Photos: Whaleforset, Blankdots

October 30 2009 at 12 AM

Geocolor

Clever visualization of Harvard's campus. Cartographer Andy Woodruff harvested geotagged photographs via Flickr's API, extracted each frame's most prominent colors, and then merged and mapped these values to create a top-down average of street-level colors. Bricks make red, parks make green, etc.

Full post at Cartogrammar.

October 29 2009 at 12 PM

More Media Controller Progress

Slow progress on the shoe music media controller.

Arturo attached the FSRs to the second shoe, with some tweaks to their position in hopes of getting better readings. Tom recommended placing the sensors inside the shoe, but we're sticking with the external approach for now since we have it working on the other shoe.

Wear patterns in the shoe informed sensor placement:

I added a multiplexer to the circuit so that we will be able to get analog readings from all 10 FSRs from a single Arduino.

Also, the wireless serial glitches were fixed by adding delay(10) to the Arduino main loop.

October 28 2009 at 2 AM

Multiplexing

When the Arduino's inputs and outputs just aren't numerous enough, a multiplexer saves the day.

A multiplexer's basically just a soft switch: you send it a set of digital on / off values to specify which channel you want to read from, take a look at the "common" pin with an analogRead() on the Arduino, and then rinse and repeat for the next channel (and the next...) until you have all the values you want.

In this way, you can collect readings from a bunch of sensors using just one of the Arduino's analog pins (plus three or four digital control pins). You can think of the multiplexer as a basic parallel-to-serial converter: it lets you take a bunch of different signals and put them into a single stream of data.

In short, it's an easy way to add inputs and outputs to your Arduino.

It's also possible to string several together to get control of even more inputs and outputs. More about this, including schematics, is available on the Arduino website.

Multiplexers come in a variety of flavors... 8 channel varieties (like the 4051) use a 3-bit digital channel picker, while 16 channel models like the 4067 shown below use a 4-bit channel picker. Everything you need to know about the pins and channel encoding is available in the datasheet.

The circuit I'm using for the media controller project looks like this. Not all of the sensors are hooked up, but it gives the idea.

Multiplexer circuit overview

Note: The resistor between the common pin and ground is critical. I was getting noisy readings at first; the resistor fixed it:

Multiplexer circuit detail

Here's some Arduino code to read each channel from a multiplexer and then send each value out over serial in a comma-delimited format:

    // 4067 16 Channel Multiplexer

    // Based on the code for the 8 channel 4051 available here:
    // http://www.arduino.cc/playground/Learning/4051

    // Here's a handy datasheet for the 4067:
    // http://www.sparkfun.com/datasheets/IC/CD74HC4067.pdf

    // How many of the 16 channels do we want to actually read?
    // Set this to a lower number to read a fraction of the channels.
    int activeChannels = 16;

    // Keep count of which channel we're reading.
    int channel = 0;

    // Set the common input / output pin.
    int pinCommon = 0;

    // Set which pins tell the multiplexer which input to read from.
    int pinS0 = 2;
    int pinS1 = 3;
    int pinS2 = 4;
    int pinS3 = 5;

    void setup() {
      // Set the switch pins to output.
      pinMode(pinS0, OUTPUT);
      pinMode(pinS1, OUTPUT);
      pinMode(pinS2, OUTPUT);
      pinMode(pinS3, OUTPUT);

      // Fire up the serial.
      Serial.begin(9600);
    }

    void loop() {
      // Loop through the channels and read each one.
      for (channel = 0; channel < activeChannels; channel++) {

        // Break the channel number into its four bits and send
        // them to the multiplexer's select pins, connecting the
        // common pin to the channel we want. Per the datasheet,
        // S0 is the least significant bit, so this reads the
        // channels in order (0, 1, 2, ...).
        digitalWrite(pinS0, (channel >> 0) & 1);  // bit 1 (LSB)
        digitalWrite(pinS1, (channel >> 1) & 1);  // bit 2
        digitalWrite(pinS2, (channel >> 2) & 1);  // bit 3
        digitalWrite(pinS3, (channel >> 3) & 1);  // bit 4 (MSB)

        // Read the common pin and send it out over serial.
        Serial.print(analogRead(pinCommon), DEC);

        if (channel < (activeChannels - 1)) {
          // Separate each channel with a comma.
          Serial.print(",");
        }
        else {
          // If it's the last channel, skip the
          // comma and send a line feed instead.
          Serial.println();
        }
      }

      // Take a breath.
      delay(10);
    }

October 28 2009 at 1 AM

Thoughts on Appropriation

This week's readings:
The Ecstasy of Influence
The Rights of Molotov Man
Artist Admits Using Other Photo for ‘Hope’ Poster
Marshall McLuhan's Understanding Media

I'm not sure how much more is worth saying about appropriation, orifices like the RIAA / MPAA, and the sad state of copyright law. Lawrence Lessig's free culture pitch articulates the history of copyright and the necessity of a commons with plenty of precision and conviction. Clearly, any spirit of creative protection in copyright law has lost out to greed. Our best (and last) hope is either the noble contrarians at Creative Commons, or the collective realization that we're all felons in the eyes of copyright law, so the law had better change. I wouldn't bet on either.

So, Marshall McLuhan's work held my interest more tenaciously than Susan Meiselas's self-righteous kvetching or Shepard Fairey's (perhaps predictable?) back-stabbing dishonesty.

McLuhan's take on the significance of how we communicate (rather than what we communicate) is often hailed as creepily prescient of modern times. I should reserve judgment until I've finished the book, but once again I think the web and computation have disrupted the thesis. Unlike print, or radio, or television, or film, computational media lend themselves to transmogrification between traditional forms. How could you begin to classify the web as hot or cold when it entangles so many divergent media into one? At ITP, it seems like our mode of production emphasizes creation and manipulation of media over content, a lateral move that might underscore McLuhan's "medium is the message" conclusion.

[ For discussion: The problem with digital abstraction... who can own a sequence of bits, when the content actually lies in the interpretation of that data, and not necessarily in the sequence itself? For example, I could write a song that happened to use the exact same bit sequence that describes Meiselas's Molotov Man. When the bits are played as an mp3, it's one thing; when interpreted as, say, a jpeg, they become something else. Who owns what? Can you really own a string of 0s and 1s that could have been generated and interpreted in any number of ways? ]

October 26 2009 at 12 AM

Making XBees Talk

Arturo, Yin, and I are all hoping we can cut the wires between an Arduino and a laptop for our upcoming media controller project. We might even have a legitimate reason to bother: our controller is wearable, and the wearer should be able to dance / spin / walk and generally move freely.

So I picked up a couple of XBees in hopes of figuring out which end is up.

After about three hours of fiddling, we got the wireless working — sending serial from the Arduino to my laptop. Our system only needs to send data one way from the microcontroller to a PC, so we didn't worry about / fuss with sending data the other way. (Though I'd guess it's as simple as sending serial out from Processing, probably with the ID of the XBee or something similar.)

Here's what we did and where we ran into trouble:

1. First we amassed the parts... an Arduino, two XBee modules, a shield to bring the XBee's tight pin spacing into alignment with a standard breadboard (not pictured), and a small board that's supposed to take the XBee from serial to USB via an FTDI chip. This is kind of cheating (I'd learn more by wiring this circuit myself) ... but it's very handy.



2. We assembled a wired version of the circuit I want to make wireless. It's the most basic thing imaginable: The AnalogInSerial example from the Arduino IDE. I made sure this was sending out serial like it should before moving on to the next step.



3. Next I wired the XBee onto the breadboard. Note that the XBee's pins are spaced more tightly (2 mm per pin) than the 0.1 inch spacing on a standard breadboard, so you'll need a small shield to compensate for the difference. (I used a $10 shield from the NYU computer store, but you can assemble one for a few bucks less by picking up two 10-pin sockets, an XBee breakout board, and a strip of break-away headers from Sparkfun.)

Note that the XBee wants 3.3 volts of power. I'm not sure if the 5 volts we normally work with on Arduino is enough to fry a $30 XBee module, but I didn't want to find out. My Arduino already has a 3.3 volt source, so I just used that, but if it's not available on your Arduino you can wire one up easily enough using a voltage regulator.


4. Getting the XBees to communicate required some firmware voodoo. Note: This only applies if you have a Series 2 XBee! I made the mistake of buying an XBee Series 2 instead of the simpler Series 1. In certain cases, it seems like the Series 2 would be a better call (for example, it supports mesh networking) but for this particular application it just introduced additional complexity and confusion. Limor Fried has a great matrix of XBees on her website illustrating the differences between various models. If you're working with a Series 1, there are a lot of better places than this blog post to get started. (Like Chapter 6 of Tom Igoe's book.)

Changing the XBee firmware requires a Windows-only app called XCTU. (And if you haven't already installed the drivers for the FTDI serial-usb bridge, you'll need to download those as well.) Luckily, it works fine in a Windows virtual machine on the Mac.

We used XCTU to configure one of the XBees with the "Coordinator AT" firmware and the other with the "Router / End Device AT" firmware. We also made sure that both modules' Personal Area Network (PAN) IDs matched (though we changed them from the default for good measure). The firmware upgrade process itself was finicky. The USB Explorer board we used to connect the XBees to serial for firmware updates doesn't have a reset switch, so resetting the XBee (something XCTU requests pretty often) was accomplished by very carefully shorting the reset pin to ground with a length of wire. This is probably a very bad idea, since shorting the wrong pin could have unexpected consequences. Perhaps as a result of this, one of the firmware update procedures went terribly wrong and left the XBee bricked. It took some very strange incantations to bring the XBee back to life, but eventually we were able to restore the firmware to the factory settings and proceed as planned.

5. Finally, once all the firmware was in alignment, the "Router" XBee plugged into the Arduino started flashing twice a second to indicate that it was connected to the "Coordinator" attached to my laptop. Then serial started flowing in over the air. We used a Processing app to graph the signal from the potentiometer. This is where we ran into some issues that have yet to be resolved... it seems like some data is lost in the signal, since sometimes we get packets of bytes that either terminate prematurely or run too long (past where the newline should have been). We could write some error correction into the Processing side, but before we do that we want to make sure this issue isn't the result of a misconfiguration or some other unexpected variable. (We already tried changing the PAN IDs in case there was a collision with a nearby XBee, but that didn't fix the glitches.) There's also some noticeable lag between turning the pot and the results showing up on the screen. I'm not sure if this is just how things are with wireless, or if we botched a setting somewhere. The video below illustrates the irregular spikes and drops in the serial stream:



Update:
Adding delay(10) to Arduino's loop seems to have fixed the glitches and lag. That was easy...

Tom shared a few other approaches that could have worked and might still be a good idea to implement:
We could have written some kind of error correction... bookend each string of data in unique values, and only use the data if it starts and ends with the bookend values as expected.
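
On the Arduino side, the bookending might look something like this (a sketch of the idea; the '<' and '>' markers and the analog pin are arbitrary choices, not something we've actually run):

    // Wrap each reading in bookend characters. The receiving
    // Processing app can then throw away any line that doesn't
    // start with '<' and end with '>' before parsing it.
    int sensorPin = 0;  // assumed analog input

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      Serial.print('<');                         // opening bookend
      Serial.print(analogRead(sensorPin), DEC);  // the payload
      Serial.println('>');                       // closing bookend + line feed
      delay(10);
    }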

Another alternative is to switch to a call-response communication method, so the XBees won't strain over sending bytes full-bore... instead our Processing app could have mediated the exchange by only requesting data when it was needed.
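
The Arduino half of that exchange could be as simple as the following (again, a sketch of the pattern rather than tested code): wait for any byte from the laptop, then answer with a single reading.

    int sensorPin = 0;  // assumed analog input

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      // Only send data when the laptop asks for it.
      if (Serial.available() > 0) {
        Serial.read();  // consume the request byte
        Serial.println(analogRead(sensorPin), DEC);
      }
    }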



Recommended resources:
Cai Stuart-Maver wrote the life-saving tutorial for Series 2 XBees. (Formerly hosted at humboldt.edu, but the file has since 404'd. I've mirrored it on my own server in case anyone needs it.)
Rob Faludi knows all.
Justin Downs hosts a bunch of XBee documentation.

October 25 2009 at 10 PM

Media Controller Progress

Yin, Arturo, and I have started implementing a tap-inspired musical shoe. Right now we're working on a single shoe, with 5 force sensitive resistors (FSRs) on the bottom designed to capture the nuances of a footfall and translate them into sounds (either sampled or synthesized; we're not sure yet).

Here's how the system looks so far:

On the bottom of the shoe, there's a layer of thin foam designed to protect the FSRs and evenly distribute the weight of the wearer. We ran into some trouble with the foam tearing on cracks in the floor, so we're looking for a way to improve durability or maybe laminate something over the foam to protect it.

The challenge now is to find a way to work with all of the data coming in from each foot... how best to map the FSRs' responses to the wearer's movements in an intuitive and transparent way. On the technical front, we're going to need to make the system wireless so the wearer has freedom of movement, and find a way to route the sensor wires more subtly. There are also concerns about the long-term durability of the FSRs, so we might need to make them easily replaceable. This could be tricky since each sensor is buried under foam and tape...

We've written some very basic code for now, just enough to get the signal from each FSR and graph the response in Processing.

Here's the Arduino code:

    // Multi-channel analog serial reader.

    // Adapted from "sensor reader".
    // Reads whichever pins are specified in the sensor pin array
    // and sends them out to serial in a period-delimited format.

    // Read the inputs from the following pins.
    int sensorPins[] = { 0, 1, 2, 3, 4, 5 };

    // Specify the length of the sensorPins array.
    int sensorCount = 6;

    void setup() {
      // Configure the serial connection:
      Serial.begin(9600);
    }

    void loop() {
      // Loop through all the sensor pins, and send
      // their values out to serial.
      for (int i = 0; i < sensorCount; i++) {

        // Send the value from the sensor out over serial.
        Serial.print(analogRead(sensorPins[i]), DEC);

        if (i < (sensorCount - 1)) {
          // Separate each value with a period.
          Serial.print(".");
        }
        else {
          // If it's the last sensor, skip the
          // period and send a line feed instead.
          Serial.println();
        }
      }

      // Optionally, let the ADC settle.
      // I skip this, but if you're feeling superstitious...
      // delay(10);
    }

And the Processing code:

    // Multi-channel serial scope.

    // Takes a string of period-delimited analog values (0-1023) from the serial
    // port and graphs each channel.

    // Import the Processing serial library.
    import processing.serial.*;

    // Create a variable to hold the serial port.
    Serial myPort;

    int graphXPos;

    void setup() {
      // Change the size to whatever you like, the
      // graphs will scale appropriately.
      size(1200, 512);

      // List all the available serial ports.
      println(Serial.list());

      // Initialize the serial port.
      // The port at index 0 is usually the right one, though you might
      // need to change this based on the list printed above.
      myPort = new Serial(this, Serial.list()[0], 9600);

      // Read bytes into a buffer until you get a linefeed (ASCII 10):
      myPort.bufferUntil('\n');

      // Set the graph line color.
      stroke(0);
    }

    void draw() {
      // Nothing to do here.
    }

    void serialEvent(Serial myPort) {
      // Read the serial buffer.
      String myString = myPort.readStringUntil('\n');

      // Make sure you have some bytes worth reading.
      if (myString != null) {

        // Make sure there's no white space around the serial string.
        myString = trim(myString);

        // Turn the string into an array, using the period as a delimiter.
        int sensors[] = int(split(myString, '.'));

        // Find out how many sensors we're working with.
        int sensorCount = sensors.length;

        // Again, make sure we're working with a full package of data.
        if (sensorCount > 1) {

          // Loop through each sensor value, and draw a graph for each.
          for (int i = 0; i < sensorCount; i++) {

            // Set the offset based on which channel we're drawing.
            int channelXPos = graphXPos + (i * (width / sensorCount));

            // Map the value from the sensor to fit the height of the window.
            int sensorValue = round(map(sensors[i], 0, 1024, 0, height));

            // Draw a line to represent the sensor value.
            line(channelXPos, height, channelXPos, height - sensorValue);
          }

          // At the edge of the screen, go back to the beginning:
          if (graphXPos >= (width / sensorCount)) {
            // Reset the X position.
            graphXPos = 0;

            // Clear the screen.
            background(255);
          }
          else {
            // Increment the horizontal position for the next reading.
            graphXPos++;
          }
        }
      }
    }

October 24 2009 at 3 AM

Something About Simultaneity

I was particularly interested in the discussion of bullet time last week — such a surreal way to traverse a moment.

On an individual basis, executing the effect itself is now within reach of DIYers. The rig used to shoot the following was built by the Graffiti Research Lab for $8000 in 2008.

The end-point for the glut of earth-centric images and data on the web seems to be a whole-earth snapshot, representing both the present moment and any desired point in (digitally sentient) history. Could we build a navigable world-wide instant if we had enough photos? Could the process be automated? Things like Street View certainly generate an abundance of photographs, but they're all displaced in time.

I searched pretty thoroughly and was surprised by how few efforts have been made to synchronize photographs in time (though plenty of effort has been made on the geographic front). Flickr has put together a clock of sorts. It's interesting, but it only spans a day and doesn't layer multiple images on a current moment.

Still, Flickr's a great source for huge volumes of photos all taken at the same instant. (Another 5,728 posted in the last minute.)

I wanted to see if the beginnings of this world-scale bullet-time snapshot could be constructed using publicly available tools, so I set out to write a little app via the Flickr API to grab all of the photos taken at a particular second. With this, it's conceivable (though not yet true in practice) that we could build a global bullet-time view (of sorts) of any moment.

I ran into some grief with the Flickr API: it doesn't seem to allow second-level search granularity, although seconds data is definitely in their databases. So as an alternative I went with upload date, where seconds data is available through the API (at least for the most recent items).

Here's a rough cut.

This could be taken even further if the search were narrowed to those images with geotag metadata. With enough of that data, you could construct a worldwide snapshot of any given second with spatial mapping, bringing us closer still to the whole-earth snapshot.

Some efforts have been made to represent the real-time flow of data, but they generally map very few items to the current moment, and don't allow navigation to past moments. For example:

TwitPic
flickrvision
twittervision


Update: Here's the reason Flickr's API refused to give me all the photos within the bounds of a second:
"A tag, for instance, is considered a limiting agent as are user defined min_date_taken and min_date_upload parameters — If no limiting factor is passed we return only photos added in the last 12 hours (though we may extend the limit in the future)."

October 22 2009 at 3 AM