
Big Screens Ideas

I have a couple of ideas for big screens that attempt to visualize and force into perspective large volumes of real-time data. For a little context, here’s an archive of past big screens projects.

Private Radio

Private Radio concept still

Anyone carrying a cell phone has a radio signature… whether they like it or not they are emitting and receiving radio waves as various gadgets talk to the web.

I’d like to fill the IAC with a network of antennas to pick up chatter from GSM / CDMA / WIFI wavelengths and map the audience’s radio presence to a visualization on the screen.

Ideally the antennas would have some sense of the location of different levels of signal strength throughout the room, which could in turn create regions of high and low radio concentration. If someone receives or places a call, presumably they would create a localized spike in activity.

WiFi packet sniffers also give access to huge volumes of real-time data, although the vast majority is just the machine-machine chatter necessary to keep networks alive.

The scale of the screen would be used as a 1:1 real-time heat map of radio activity in the space, possibly with node-style connections drawn between maxima. This map would then be overlaid with data collected at different wavelengths streaming across the screen horizontally.

I’m not completely sure of the technical feasibility of this project, and the hardware involved might be expensive (at best) or logistically / technically untenable (at worst) — I plan to speak with Eric Rosenthal and Rob Faludi for a reality check.

Real Time Web Clock

Real Time Web Clock concept still

Our daily use of the web consists of a call / response model that makes the web seem relatively stable and even a bit static. However, new content is dumped onto it at such a remarkable rate that it might be more useful to think of the web as a real-time stream.

To put this into context: 100 years of video was uploaded to YouTube today. 7309 edits were made to Wikipedia in the last hour. 4,459 photos were uploaded to Flickr in the last minute. Around 600 tweets were posted in the last second. For every second that passes on the clock, 4.5 hours are spent on Facebook.

I’d like to make a linear, timeline style clock that runs for exactly three minutes, starting with a blank screen and rapidly filling with real-time web content of various types.

The clock would probably be arranged by duration and depth. The first layer would be tenths of a second, the next would be individual seconds, and the back layer would be minutes. The clock wouldn’t “tick” but scroll smoothly in real time. The layers would combine to create a parallax effect and build up a wall of content and noise over the course of three minutes.

And for good measure, here’s one more idea that’s more of a vague pipe dream than an actual plan:

Live Coding
Has this ever been done before at the IAC? Is 3 minutes enough time to do anything? Presumably you could run a Python interpreter on top of Processing or something of the sort and distribute fresh strings of code to each Mac Pro using a socket server. Crashes and restarting would be problematic, and the Big Screens audience might not be nerdy enough to enjoy a process instead of a product.


Feedback:

Patrick: Use a prop to stage the radio scanning. An airport-security-style wand or kiosk?

Niel: Finding the wavelength of various web 2.0 services… interleave and audio.

September 24 2010 at 11 AM

Driving Force Paper Proposal

Synthetic biology stands to have a major influence on the course of technology over the next 5 – 15 years. Specifically, continuing decreases in the cost of DNA synthesis will allow for more experimentation with life’s building blocks by an increasingly diverse group of scientists and amateurs. The core uncertainty surrounding synthetic biology is not “if” or “when”, but rather how this newfound control over the stuff of life will factor into the future. The answer holds implications for a wide swath of fields from energy policy to artificial intelligence to bioterrorism.

The field’s most recent milestone was the creation of a self-replicating bacterial cell from a completely synthetic genome. This proves the basic viability of synthetic biology’s promise. A few other factors will work to compound the field’s influence: the creation of abstractions above the protein / DNA level will allow biological processes and characteristics to be treated as basic functional units in the design of new life. This abstraction process is already underway at the BioBricks Foundation and similar initiatives.

Research will consist primarily of a review of the scientific literature on the topic — both technical material and bioethics-related commentary will be of interest. Statistical analysis of historical costs for the technical procedures associated with synthetic biology — perhaps most importantly, DNA synthesis — should reveal trends and allow for projections regarding critical cost milestones. Finally, interviews with researchers and amateurs working on the forefront of the field will round out my understanding of the role synthetic biology will play in shaping our future.

September 24 2010 at 5 AM

Foamcore Mouse

Original Apple Desktop Bus mouse / Finished foam core mouse

To get acquainted with three-dimensional prototyping in foam core, I created a model of the first mouse I ever used, the Apple Desktop Bus mouse. The mouse was first released in 1986 alongside the Apple IIGS.

I don’t have the original mouse on hand, so I used a combination of memory and photographs to reconstruct the approximate dimensions and proportions. (It might have been more interesting to have worked completely from memory, since I haven’t used one of these vintage mice in at least 18 years.)

I drew up the plans in Adobe Illustrator, printed them to scale, and then used the scale print to guide the cutting process for the model mouse.

Foam core plans / The final model

Original mouse photo by Pinot & Dita

September 22 2010 at 3 PM

Foam Phone

The finished foam phone

To get acquainted with prototyping with 2” blue insulating foam, I decided to build a large-scale model of a classic phone-booth telephone handset.

The process was relatively simple. Each step is documented below.


First, I cut two pieces of 2” thick foam down to the approximate size of the handset, and then joined the pieces using transfer tape.

Joining the pieces


Next, I sketched the basic outline of a two-dimensional version of the phone, and did a rough cut on the band saw.

Cutting plan, including relief cuts / First two dimensions of cuts


With a basic two-dimensional version of the phone in hand, I sketched out the third dimension and made the corresponding cuts on the band saw.

Planned cuts on the next plane / Finished cuts in three dimensions


And finally, the ear and microphone cups were sketched and cut. I removed a wedge of foam from each disk on the belt sander to make sure they would mate to the handset at a slight angle. A drill press took care of the holes in each disk.

Preparing the ear cups / Ear cups ready for attachment


I used another round of transfer tape to attach the disks to the handset. About 20 minutes of sanding and finishing work leaves the finished phone:

The final foam phone


I learned a few things about the material that will guide any future use:

  • Higher speed tools do cleaner, more consistent work — the belt sander and band saw avoid tearing / chunking the foam the way hand tools do.

  • Extra-wide transfer tape is worth the up-front expense for larger projects.

  • The foam seems to have a grain. Sanding in certain directions minimizes chunking. I haven’t figured out how to identify the grain.

  • Relief cuts make shorter work of tight curves.

September 22 2010 at 12 AM

Geo Bot Postmortem

My work on the graph bot ended up veering a bit from my initial plans — rather than constrain several automatons via lengths of string, I worked instead towards a group of drawing machines that would chart their course through a room by excreting yarn in their wake. The intention was both to capture the criss-cross of attention on the web and to visualize larger patterns in the geographic distribution of that activity.

Although I eventually became less and less convinced of the conceptual merits of the project (for which I have no one to blame but myself), it was nevertheless a useful exercise in combining techniques from a number of disciplines.

A picture of the device’s guts is, I suppose, an appropriate place to start, since I spent an inordinate amount of time on this aspect of the project, chasing down minor details rather than reconsidering a more elegant approach to the entire concept.

The underside of the Geo Bot.

Here’s how the project’s requirements break down:

  • A mobile robot platform, associated circuit building and firmware development, a rudimentary navigation system, wireless communication and power.
  • A yarn storage and excretion mechanism that can reliably dole out yarn at a range of speeds.
  • Centralized control software and associated connections to live data sources on the web.

More to come on the process and discoveries made along the way.

May 7 2010 at 8 PM

Human vs. Computational Strategies for Face Recognition

Face recognition is one of the mechanical turk’s canonical fortes — reliably identifying faces from a range of perspectives is something we do without a second thought, but it proves to be excruciatingly tricky for computers. Why are our brains so good at this? How, exactly, do they work? How do computational strategies differ from biological ones? Where do they overlap?

Behold: Chapter 15 of the Handbook of Face Recognition explores these questions in some detail, describing theories of how the human brain identifies and understands faces. A few highlights from the chapter follow:

First, a few semantic nuances:
Recognition: Have I seen this face before?
Identification: Whose face is it?
Stimulus factors: Facial features
Photometric factors: Amount of light, viewing angle

The Thatcher Illusion: Processing is biased towards typical views

Thatcher Effect

Categorization

Beyond the basic physical categorizations — race, gender, age — we also associate emotional / personality characteristics with the appearance of a face. The use of these snap judgments was found to improve identification rates over those achieved with physical characteristics alone.

Prototype Theory of Face Recognition

Unusual faces were found to be more easily identified than common ones. The ability to recognize atypical faces implies a prototypical face against which others are compared. Therefore recognition may involve positioning a particular face relative to the average, prototypical face. The greater the distance, the higher the accuracy. (The PCA / eigenface model implements this idea.)

This also has implications for the other-race effect, which describes the difficulty humans have with identifying individuals of races to which they are not regularly exposed. However, the PCA approach to face recognition actually does well with minority faces, since they exist outside the cluster of most faces and therefore have fewer neighbors and lower odds of misidentification.
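To make the prototype / eigenface idea concrete, here’s a minimal sketch in Python (not from the chapter) using scikit-learn. The face data below is a placeholder array standing in for flattened grayscale images; a face’s “distinctiveness” is just the distance of its projection from the mean face, and pushing a projection further from the mean gives a crude caricature.

# Minimal eigenface sketch: "distinctiveness" as distance from the prototype (mean) face.
# `faces` is a placeholder for an (n_samples, n_pixels) array of flattened grayscale images.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))   # stand-in data; swap in real face images

pca = PCA(n_components=20)
coords = pca.fit_transform(faces)    # each face as coordinates in "face space"

# PCA centers the data, so the mean (prototypical) face sits at the origin;
# a face's distinctiveness is simply its distance from that origin.
distinctiveness = np.linalg.norm(coords, axis=1)
most_typical = faces[np.argmin(distinctiveness)]
most_unusual = faces[np.argmax(distinctiveness)]

# A crude caricature: push a face's projection further from the mean and reconstruct it.
caricature = pca.inverse_transform(coords[:1] * 1.5)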

Caricature

The prototype theory suggests that amplification of facial features should improve recognition and identification even further.

Here’s an example, the original face is at left, and a caricature based on amplifying the face’s distance from the average is at right:

Face and caricature

This also opens the possibility of an anti-caricature, or anti-face, which involves moving in the opposite direction, back past the average, and amplifying the result.

The original face is at left, the anti-face is at right:

Face and Anti-face

Interestingly, caricaturization also seems to age the subject, supporting the notion that age brings distinction:

Caricature aging

Prosopagnosia

Prosopagnosia is a condition affecting some stroke / brain injury victims that destroys the ability to identify faces while leaving other visual recognition tasks intact. This suggests that face identification and recognition are concentrated in one area of the brain, pointing to a modular approach to processing.

(Images: Handbook of Face Recognition)

April 15 2010 at 12 PM

Geo Graph Bot Platform

I’ve created a quick hardware sketch of the Geo Graph Bot:

Current Revision

The bot receives commands over the air to steer, turn, etc. The wheels are too small, and the 9V battery is too weak for the steppers, so it’s not quite as fast / maneuverable as I expect the final version to be. Still, it works.

Here’s what it looks like in motion (it’s receiving commands wirelessly from a laptop):

Pending Modifications

Much of this version was limited by the supplies I had on hand. Several elements will change once the rest of the parts come in:

  • It still needs the compass modules. (And accompanying auto-steering code.)
  • Larger wheels (from 2” diameter to 4” or 5”) should increase speed and improve traction.
  • The whole thing will be powered by a 12v 2000mAh NiMH rechargeable battery. (Instead of a pair of 9Vs.)
  • There will be a mechanism for the excretion of yarn to graph the bot’s path.
  • Also planning on some kind of aesthetically satisfying enclosure once I have the final dimensions.
  • I will use my own stepper drivers instead of the Adafruit motor shield.

I’m reducing the scope slightly from the originally planned three bots to just two. The parts turned out to be more expensive than I anticipated, so my initial goal is to prepare two bots, and then, if time and finances allow, create a third. Part of the idea, after all, is to create a reusable platform.

Steppers vs. DC Motors

I agonized a bit about whether to use stepper motors or DC motors to drive the bot’s wheels.

Plain DC motors seem to have some advantages in terms of control (you aren’t dealing with a digital signal), and since steering will be accomplished via a feedback loop from the compass data, their lack of precision probably would not be a big issue.

However, I already had steppers on hand, so I ended up using them instead. Steppers have a few advantages of their own. For one, there’s no need for gearing — in this case, the motor drives the wheels directly. Second, I have finer control over how far the bot travels and how it steers (assuming traction is good), so the platform itself will be more flexible for future (unknown) applications.

The big issue with steppers is that the Arduino code that drives them is all written in a blocking way… that is, you can’t run any other code while the motors are running. This was a problem, since I needed each bot to perform a number of tasks in the background while it’s driving around: it needs to receive data from the control laptop, monitor the compass heading, reel out yarn, etc.

For now, I’m using some work-around code that uses a timer to call the stepping commands only when necessary, leaving time for other functions. This might not hold up once the main loop starts to get weighed down with other stuff, so I might end up writing an interrupt-driven version of the stepper library.
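The shape of that workaround, sketched in Python for readability (the real code is Arduino C, and the names here are made up): keep a timestamp for when the next step is due, fire a single step only when that moment has passed, and let everything else run in between.

# Rough illustration of the timer-based workaround (the real version lives in Arduino firmware).
# Instead of blocking inside a stepping loop, only step when the interval has elapsed.
import time

STEP_INTERVAL = 0.002          # seconds between motor steps (sets speed)
next_step_at = time.monotonic()

def step_motor():
    pass                       # placeholder for a single-step pulse to the driver

def do_other_work():
    pass                       # read radio commands, poll the compass, feed yarn...

while True:
    now = time.monotonic()
    if now >= next_step_at:
        step_motor()
        next_step_at = now + STEP_INTERVAL
    do_other_work()            # runs between steps instead of being starved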

April 12 2010 at 5 PM

Haiku Laureate

And now for something completely banal…

Concept

Haiku Laureate generates haiku about a particular geographic location.

For example, the address “Washington D.C.” yields the following haiku:

the white house jonas
of washington president
and obama tree

Much of the work we’ve created in Electronic Text has resulted in output that’s interesting but very obviously of robotic origin. English language haiku has a very simple set of rules, and its formal practice favors ambiguous and unlikely word combinations. These conventions / constraints give haiku a particularly shallow uncanny valley; low-hanging fruit for algorithmic mimicry.

Haiku Laureate takes a street address, a city name, etc. (anything you could drop into Google maps), and then asks Flickr to find images near that location. It skims through the titles of those images, building a list of words associated with the location. Finally, it spits them back out using the familiar three-line 5-7-5 syllable scheme (and a few other basic rules).

The (intended) result is a haiku specifically for and about the location used to seed the algorithm: The code is supposed to become an on-demand all-occasion minimally-talented poet laureate to the world.

Execution

The script breaks down into three major parts: Geocoding, title collection, and finally haiku generation.

Geocoding:

Geocoding takes a street address and returns latitude and longitude coordinates. Google makes this easy: their Maps API exposes a geocoder that returns XML, and it works disturbingly well. (e.g. a query as vague as “DC” returns a viable lat / lon.)

This step leaves us with something like this:

721 Broadway, New York NY is at lat: 40.7292910 lon: -73.9936710
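For reference, here’s roughly what that geocoding step looks like in Python. This sketch uses Google’s current JSON geocoding endpoint rather than the 2010-era XML one the script actually called, and you’d supply your own API key:

# Hypothetical sketch of the geocoding step using Google's JSON geocoding endpoint.
import requests

def geocode(address, api_key):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": api_key},
    )
    location = resp.json()["results"][0]["geometry"]["location"]
    return location["lat"], location["lng"]

# lat, lon = geocode("721 Broadway, New York NY", "YOUR_API_KEY")
# -> approximately (40.729, -73.994)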

Title Collection:

Flickr provides a real glut of geocoded data through their API, and much of it is textual — tags, comments, descriptions, titles, notes, camera metadata, etc. I initially intended to use tag data for this project, but it turned out that harvesting words from photo titles was more interesting and resulted in more natural haiku. The script passes the lat / lon coordinates from Google to Flickr’s photo search function, specifying an initial search radius of 1 mile around that point. It reads through a bunch of photo data, storing all the title words it finds along the way, and counting the number of times each word turned up.

If we can’t get enough unique words within a mile of the original search location, the algorithm tries again with a progressively larger search radius until we have enough words to work with. Asking for around 100 - 200 unique words works well. (However, for rural locations, the search radius sometimes has to grow significantly before enough words are found.)

The result of this step is a dictionary of title words, sorted by frequency. For example, here’s the first few lines of the list for ITP’s address:

{"the": 23, "of": 16, "and": 14, "washington": 12, "village": 11, "square": 10, "park": 10, "nyu": 9, "a": 9, "new": 8, "in": 8, "greenwich": 8, "street": 6, "webster": 6, "philosophy": 6, "hall": 6, "york": 6, [...] }

Haiku Generation:

This list of words is passed to the haiku generator, which assembles the words into three-line 5-7-5 syllable poems.

Programmatic syllable counting is a real problem — the dictionary-based lookup approach doesn’t work particularly well in this context due to the prevalence of bizarre words and misspellings on the web. I ended up using a function from the nltk_contrib library which uses phoneme-based tricks to give a best guess syllable count for non-dictionary words. It works reasonably well, but isn’t perfect.

Words are then picked from the top of the list to assemble each line, using care to produce a line of the specified syllable count. This technique alone created mediocre output — it wasn’t uncommon to get lines ending with “the” or a line with a string of uninspired conjunctions. So I isolated these problematic words into a boring_words list — consisting mostly of prepositions and conjunctions — which was used to enforce a few basic rules: First, each line is allowed to contain only one word from the boring word list. Second, a line may not end in a boring word. This improved readability dramatically. Here’s the output:

the washington square
of village park nyu new street
and greenwich webster
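Here’s a stripped-down sketch of those line-building rules. The syllable counter is a crude vowel-group heuristic standing in for the nltk_contrib function, and the boring-word list is abbreviated:

# Simplified sketch of the haiku assembly rules described above.
import re

BORING_WORDS = {"the", "of", "and", "a", "in", "at", "to", "from", "with", "on"}

def count_syllables(word):
    # Crude heuristic: count groups of vowels. Stands in for the nltk_contrib lookup.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def build_line(words, target_syllables):
    """Pick words (most frequent first) until the syllable target is hit exactly."""
    line, used_boring, total = [], 0, 0
    for word in words:
        boring = word in BORING_WORDS
        if boring and used_boring:
            continue                      # rule 1: at most one boring word per line
        s = count_syllables(word)
        if total + s > target_syllables:
            continue
        line.append(word)
        total += s
        used_boring += boring
        if total == target_syllables:
            if line[-1] in BORING_WORDS:  # rule 2: never end a line on a boring word
                total -= count_syllables(line.pop())
                used_boring -= 1
                continue
            return " ".join(line)
    return " ".join(line)                 # fall through if we run out of words

word_list = ["washington", "square", "village", "park", "nyu", "the", "of", "new"]
print("\n".join(build_line(word_list, n) for n in (5, 7, 5)))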



More Sample Output

A few more works by the Haiku Laureate:

Chicago, IL
chicago lucy
trip birthday with balloons fun
gift unwraps her night

Gettysburg
the gettysburg view
monument and from devils
of den sign jess square

Dubai
Dubai Museum Bur
in Hotel The Ramada
with Dancing Room Tour

Tokyo
tokyo shinjuku
metropolitan the night
from government view

Canton, KS
jul thu self me day
and any first baptist cloud
the canton more up

Las Vegas, NV
and eiffel tower
in flamingo from view glass
at caesars palace

eve revolution
trails fabulous heralds blue
emptiness elton

monorail hide new
above bird never jasmine
path boy cleopatra

I’ve also attached a list of 150 haiku about New York generated by the haiku laureate.

Note that the Haiku Laureate isn’t limited to major cities… just about any first-world address will work. Differences in output can be seen at distances of just a few blocks in densely populated areas.

Source Code

The code is intended for use on the command line. You’ll need your own API keys for Google Maps and Flickr.

The script takes one or two arguments. The first is the address (in quotes), and the second is the number of haiku you would like to receive about the particular location.

For example: $ python geo_haiku.py "central park, ny" 5

This will return five three-line haiku about Central Park.

The source is too long to embed here, but it’s available for download.


April 8 2010 at 8 PM

Geo Graph Bots

Proposal

A trio of small, wheeled robots, each beholden to a particular geo-tagged social web service, tethered together with elastic string, each attempting to pull the others towards the physical location of the most recent event on its particular social network.

A number of web services — Flickr, Twitter, etc. — receive updates with geo-tagged data at a remarkable rate. The proposed robots will receive wireless updates from a laptop with this latitude and longitude information (probably on the order of a few times per second). Using this data and an onboard compass, they will steer toward the location of the most recent photograph / tweet / whatever, and then drive furiously in this direction. This will continue until they receive the latest geo data a bit later, at which point they will set a new course and proceed in that direction.

Since the three bots will be tethered to one another with a length of string, the hope is that they will occasionally get pulled in one direction or another by their neighbors, and perhaps eventually get tangled in the string to the point where they can’t move at all.

Alternately, the bots could lay down string in their wake… sketching their path, overlap, etc.

Parts List

  • 3x bot chassis (probably laser cut wood or plexi)
  • 6x stepper motors
  • 6x wheels
  • 3x small casters
  • 3x arduinos
  • 3x digital compass modules
  • 4x xBees (3 for the bots, 1 for the laptop)
  • 1x xBee explorer
  • 1x length of elastic string (6 feet?)
  • 3x eyelets (for string)
  • 3x rechargeable batteries
April 8 2010 at 6 PM

How to Hack Toy EEGs

Arturo Vidich, Sofy Yuditskaya, and I needed a way to read brains for our Mental Block project last fall. After looking at the options, we decided that hacking a toy EEG would be the cheapest / fastest way to get the data we wanted. Here’s how we did it.


The Options

A non-exhaustive list of the consumer-level options for building a brain-computer interface:

  • Open EEG: Plans and software for building an EEG from scratch. Attention / meditation values: no. EEG power band values: yes (roll your own FFT). Raw wave values: yes. Cost: $200+.
  • Force Trainer: Levitating ball game from Uncle Milton. Attention / meditation values: yes. EEG power band values: no. Raw wave values: no. Cost: $75 (street).
  • Mind Flex: Levitating ball game from Mattel. Attention / meditation values: yes. EEG power band values: yes. Raw wave values: no. Cost: $80 (street).
  • MindSet: Official headset from NeuroSky. Attention / meditation values: yes. EEG power band values: yes. Raw wave values: yes. Cost: $200.

Open EEG offers a wealth of hardware schematics, notes, and free software for building your own EEG system. It’s a great project, but the trouble is that the hardware costs add up quickly, and there isn’t a plug-and-play implementation comparable to the EEG toys.

The NeuroSky MindSet is a reasonable deal as well — it’s wireless, supported, and plays nicely with the company’s free developer tools.

For our purposes, though, it was still a bit spendy. Since NeuroSky supplies the EEG chip and hardware for the Force Trainer and Mind Flex toys, these options represent a cheaper (if less convenient) way to get the same data. The silicon may be the same between the three, but our tests show that each runs slightly different firmware which accounts for some variations in data output. The Force Trainer, for example, doesn’t output EEG power band values — the Mind Flex does. The MindSet, unlike the toys, also gives you access to raw wave data. However, since we’d probably end up running an FFT on the wave anyway (and that’s essentially what the EEG power bands represent), we didn’t particularly miss this data in our work with the Mind Flex.

Given all of this, I think the Mind Flex represents a sweet spot on the price / performance curve. It gives you almost all of the data the MindSet does for less than half the cost. The hack and accompanying software presented below work fine for the Force Trainer as well, but you’ll end up with less data since the EEG power values are disabled in the Force Trainer’s firmware from the factory.

Of course, the Mind Flex is supposed to be a black-box toy, not an officially supported development platform — so in order to access the actual sensor data for use in other contexts, we’ll need to make some hardware modifications and write some software to help things along. Here’s how.

But first, the inevitable caveat: Use extreme caution when working with any kind of voltage around your brain, particularly when wall power is involved. The risks are small, but to be on the safe side you should only plug the Arduino + Mind Flex combo into a laptop running on batteries alone. (My thanks to Viadd for pointing out this risk in the comments.) Also, performing the modifications outlined below means that you’ll void your warranty. If you make a mistake you could damage the unit beyond repair. The modifications aren’t easily reversible, and they may interfere with the toy’s original ball-levitating functionality.

However, I’ve confirmed that when the hack is executed properly, the toy will continue to function — and perhaps more interestingly, you can skim data from the NeuroSky chip without interfering with gameplay. In this way, we’ve confirmed that the status lights and ball-levitating fan in the Mind Flex are simply mapped to the “Attention” value coming out of the NeuroSky chip.


The Hardware

Here’s the basic layout of the Mind Flex hardware. Most of the action is in the headband, which holds the EEG hardware. A microcontroller in the headband parses data from the EEG chip and sends updates wirelessly to a base station, where a fan levitates the ball and several LEDs illuminate to represent your current attention level.

Mind Flex Schematic

This schematic immediately suggests several approaches to data extraction. The most common strategy we’ve seen is to use the LEDs on the base station to get a rough sense of the current attention level. This is nice and simple, but five levels of attention just doesn’t provide the granularity we were looking for.

A quick aside: Unlike the Mind Flex, the Force Trainer has some header pins (probably for programming / testing / debugging) which seem like an ideal place to grab some data. Others have reported success with this approach. We could never get it to work.

We decided to take a higher-level approach by grabbing serial data directly from the NeuroSky EEG chip and cutting the rest of the game hardware out of the loop, leaving a schematic that looks more like this:

Mind Flex Schematic Hacked

The Hack

Parts list:

  • 1 x Mind Flex
  • 3 x AAA batteries for the headset
  • 1 x Arduino (any variety), with USB cable
  • 2 x 12” lengths of solid core hookup wire (around #22 or #24 gauge is best).
  • A PC or Mac to monitor the serial data

Software list:

  • Arduino IDE
  • Arduino Brain Library
  • Processing
  • Processing Brain Grapher (the visualizer sketch)

The video below walks through the whole process. Detailed instructions and additional commentary follow after the video.

Step-by-step:

1. Disassembly.

Grab a screwdriver and crack open the left pod of the Mind Flex headset. (The right pod holds the batteries.)

Mind Flex internal layout



2. The T Pin.

The NeuroSky Board is the small daughterboard towards the bottom of the headset. If you look closely, you should see conveniently labeled T and R pins — these are the pins the EEG board uses to communicate serially to the microcontroller on the main board, and they’re the pins we’ll use to eavesdrop on the brain data. Solder a length of wire (carefully) to the “T” pin. Thin wire is fine, we used #24 gauge. Be careful not to short the neighboring pins.

The T Pin / T Pin with soldered lead




3. Common ground.

Your Arduino will want to share ground with the Mind Flex circuit. Solder another length of wire to ground — any grounding point will do, but using the large solder pad where the battery’s ground connection arrives at the board makes the job easier. A note on power: We’ve found the Mind Flex to be inordinately sensitive to power… our initial hope was to power the NeuroSky board from the Arduino’s 3.3v supply, but this proved unreliable. For now we’re sticking with the factory configuration and powering the Arduino and Mind Flex independently.

Ground lead



4. Strain relief and wire routing.

We used a dab of hot glue to act as strain relief for the new wires, and drilled a hole in the case for the two wires to poke through after the case was closed. This step is optional.

Strain relief / Wire routing



5. Hook up the Arduino.

The wire from the Mind Flex’s “T” pin goes into the Arduino’s RX pin. The ground goes… to ground. You may wish to secure the Arduino to the side of the Mind Flex as a matter of convenience. (We used zip ties.)

Finished hack

That’s the extent of the hardware hack. Now on to the software. The data from the NeuroSky is not in a particularly friendly format. It’s a stream of raw bytes that will need to be parsed before they’ll make any sense. Fate is on our side: the packets coming from the Mind Flex match the structure from NeuroSky’s official Mindset documentation. (See the mindset_communications_protocol.pdf document in the Mindset developer kit if you’re interested.) You don’t need to worry about this, since I’ve written an Arduino library that makes the parsing process as painless as possible.

Essentially, the library takes the raw byte data from the NeuroSky chip, and turns it into a nice ASCII string of comma-separated values.



6. Load up the Arduino.

Download and install the Arduino Brain Library — it’s available here. Open the BrainSerialOut example and upload it to your board. (You may need to disconnect the RX pin during the upload.) The example code looks like this:

#include <Brain.h>

// Set up the brain parser, pass it the hardware serial object you want to listen on.
Brain brain(Serial);

void setup() {
  // Start the hardware serial.
  Serial.begin(9600);
}

void loop() {
  // Expect packets about once per second.
  // The .readCSV() function returns a string (well, char*) listing the most recent brain data, in the following format:
  // "signal strength, attention, meditation, delta, theta, low alpha, high alpha, low beta, high beta, low gamma, high gamma"
  if (brain.update()) {
    Serial.println(brain.readCSV());
  }
}



7. Test.

Turn on the Mind Flex, make sure the Arduino is plugged into your computer, and then open up the Serial Monitor. If all went well, you should see the following:

Brain library serial test

Here’s how the CSV breaks down: “signal strength, attention, meditation, delta, theta, low alpha, high alpha, low beta, high beta, low gamma, high gamma”

(More on what these values are supposed to mean later in the article. Also, note that if you are hacking a Force Trainer instead of a Mind Flex, you will only see the first three values — signal strength, attention, and meditation.)

If you put the unit on your head, you should see the “signal strength” value drop to 0 (confusingly, this means the connection is good), and the rest of the numbers start to fluctuate.
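If you’d rather watch the stream from a script than from the Arduino serial monitor, a few lines of Python with pyserial will also do it. This isn’t part of the original toolchain, and the port name below is just an example:

# Quick way to watch the parsed CSV from Python instead of the serial monitor.
# Requires pyserial; the port name is an example and will differ per machine.
import serial

FIELDS = ["signal strength", "attention", "meditation", "delta", "theta",
          "low alpha", "high alpha", "low beta", "high beta",
          "low gamma", "high gamma"]

port = serial.Serial("/dev/tty.usbmodem1411", 9600)  # hypothetical port name
while True:
    values = port.readline().decode("ascii", errors="ignore").strip().split(",")
    if len(values) == len(FIELDS):  # Mind Flex packets; a Force Trainer sends only 3 values
        print(dict(zip(FIELDS, values)))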



8. Visualize.

As exciting as the serial monitor is, you might think, “Surely there’s a more intuitive way to visualize this data!” You’re in luck: I’ve written a quick, open-source visualizer in Processing which graphs your brain activity over time (download). It’s designed to work with the BrainSerialOut Arduino code you’ve already loaded.

Download the code, and then open up the brain_grapher.pde file in Processing. With the Mind Flex plugged in via USB and powered on, go ahead and run the Processing sketch. (Just make sure the Arduino IDE’s serial monitor is closed, otherwise Processing won’t be able to read from the Mind Flex.) You may need to change the index of the serial list array in the brain_grapher.pde file, in case your Arduino is not the first serial object on your machine:

serial = new Serial(this, Serial.list()[0], 9600);

You should end up with a screen like this:

Processing visualizer test


About the data

So what, exactly, do the numbers coming in from the NeuroSky chip mean?

The Mind Flex (but not the Force Trainer) provides eight values representing the amount of electrical activity at different frequencies. This data is heavily filtered / amplified, so where a conventional medical-grade EEG would give you absolute voltage values for each band, NeuroSky instead gives you relative measurements which aren’t easily mapped to real-world units. A rundown of the bands involved, along with a grossly oversimplified summary of the associated mental states:

  • Delta: deep, dreamless sleep
  • Theta: drowsiness, daydreaming, meditation
  • Low / high alpha: relaxed, calm wakefulness
  • Low / high beta: active thinking, focus, alertness
  • Low / high gamma: higher-level cognitive processing

In addition to these power-band values, the NeuroSky chip provides a pair of proprietary, black-box data values dubbed “attention” and “meditation”. These are intended to provide an easily-grokked reduction of the brainwave data, and they’re what the Force Trainer and Mind Flex actually use to control the game state. We’re a bit skeptical of these values, since NeuroSky won’t disclose how they work, but a white paper they’ve released suggests that the values are at least statistically distinguishable from nonsense.

Here’s the company line on each value:

  • Attention:

    Indicates the intensity of a user’s level of mental “focus” or “attention”, such as that which occurs during intense concentration and directed (but stable) mental activity. Distractions, wandering thoughts, lack of focus, or anxiety may lower the Attention meter levels.

  • Meditation:

    Indicates the level of a user’s mental “calmness” or “relaxation”. Meditation is related to reduced activity by the active mental processes in the brain, and it has long been an observed effect that closing one’s eyes turns off the mental activities which process images from the eyes, so closing the eyes is often an effective method for increasing the Meditation meter level. Distractions, wandering thoughts, anxiety, agitation, and sensory stimuli may lower the Meditation meter levels.

At least that’s how it’s supposed to work. We’ve found that the degree of mental control over the signal varies from person to person. Ian Cleary, a peer of ours at ITP, used the Mind Flex in a recent project. He reports that about half of the people who tried the game were able to exercise control by consciously changing their mental state.

The most reasonable test of the device’s legitimacy would be a comparison with a medical-grade EEG. While we have not been able to test this ourselves, NeuroSky has published the results of such a comparison. Their findings suggest that the NeuroSky chip delivers a comparable signal. Of course, NeuroSky has a significant stake in a positive outcome for this sort of test.

And there you have it. If you’d like to develop hardware or software around this data, I recommend reading the documentation that comes with the brain library for more information — or browse through the visualizer source to see how to work with the serial data. If you make something interesting using these techniques, I’d love to hear about it.


March 2013 Update:

Almost three years on, I think I need to close the comments since I don’t have the time (or hardware on hand) to keep up with support. Please post future issues on the GitHub page of the relevant project:

Arduino Brain Library
https://github.com/kitschpatrol/Arduino-Brain-Library

Processing Brain Grapher
https://github.com/kitschpatrol/Processing-Brain-Grapher

Most issues I’m seeing in the comments seem like the result of either soldering errors or compatibility-breaking changes to the Processing and Arduino APIs. I’ll try to stay ahead of these on GitHub and will be happy to accept pull requests to keep the code up to date and working.

Thanks everyone for your feedback and good luck with your projects.

April 7 2010 at 2 PM