
Godspeed, Comp Cameras

Computational Cameras and I have parted ways. I’m sure I’ll end up doing my share of pixel munging as I start work on Thesis II.

February 9 2011 at 8 AM

Street View Automatic

Why is it always daytime in Google Street View?

The disagreement between Street View’s 100:0 ratio of light to dark and my window’s less optimistic 50:50 ratio has been particularly jarring lately. What a tax on our brittle circadian rhythms!

I have created a bookmarklet to solve the simpler (street view) half of this disparity. Now, you can push a button to instantly cast any Street View scene into a weak approximation of darkness. The degree of night is based on what time it actually is in the corner of the world you’re viewing, combined with information on when the sun will rise or set.
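The mapping from clock time to darkness can be sketched in a few lines. This is my own reconstruction in Java, not the bookmarklet's actual JavaScript; the function name and the linear twilight ramp are assumptions for illustration:

```java
import java.time.LocalTime;

public class Darkness {
    // Returns 0.0 in full daylight and 1.0 at full night, with a linear
    // "twilight" ramp of twilightMinutes centered on sunrise and sunset.
    static double darknessFactor(LocalTime now, LocalTime sunrise,
                                 LocalTime sunset, int twilightMinutes) {
        double t = now.toSecondOfDay() / 60.0;       // minutes since midnight
        double rise = sunrise.toSecondOfDay() / 60.0;
        double set = sunset.toSecondOfDay() / 60.0;
        double half = twilightMinutes / 2.0;
        if (t < rise - half || t > set + half) return 1.0;              // night
        if (t < rise + half) return (rise + half - t) / twilightMinutes; // dawn
        if (t > set - half) return (t - (set - half)) / twilightMinutes; // dusk
        return 0.0;                                                      // day
    }

    public static void main(String[] args) {
        LocalTime rise = LocalTime.of(7, 0);
        LocalTime set = LocalTime.of(17, 0);
        System.out.println(darknessFactor(LocalTime.NOON, rise, set, 60));     // prints 0.0
        System.out.println(darknessFactor(LocalTime.MIDNIGHT, rise, set, 60)); // prints 1.0
    }
}
```

The resulting factor could then drive the opacity of a dark overlay on the Street View scene.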

An open Street View window, left unattended, will now cycle from day, to night, and back again, indefinitely. No longer will you leave the house under the false promise of daylight at your destination.

The algorithm is operating on the four sample scenes above. If you’d like to give it a try, drag and drop the link below into your bookmarks bar (for quick access) or right click and add it to your bookmarks (for less obtrusive access).


drag and drop the below link to your bookmarks bar

Street View Automatic



Next, navigate to Google Maps, and get into a Street View as you would otherwise. Once the view has loaded, give the new Street View Automatic link in your bookmarks bar a click to show the scene in its true (and current) light. Of course, if it’s actually daytime, you won’t see much change at all. The code also won’t work on embedded maps or portable devices.

My thanks to Jonathan Stott of Earthtools for making his excellent lat / lon to local time and sunrise / sunset API services available free of charge to the public.

January 30 2011 at 9 PM

Ten Face-Related Ideas and One Implementation

This post is in progress!

Film Faceprints (implemented and shown below) Run face detection on film frames, grabbing full-size frames in which non-face areas are masked off. Average these frames together to generate a single-frame representation of presence and characters over the duration of the film. This leaves you with a kind of thumbprint of the film and its characters. The results are kind of anticlimactic: there are only vague shadows of faces. A failed experiment, but it brings to mind some more interesting directions of approach to the content. (Animating the accumulation of the average, scaling all of the faces to the same dimension before averaging — or maybe ditching the averaging idea and trying a grid arrangement that would reduce a film’s narrative to a series of faces.)
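The averaging itself is just a running per-pixel mean over the masked frames. A minimal Java sketch of that accumulator (plain brightness arrays standing in for Processing's PImage pixels; the class and method names are mine, not the actual source):

```java
public class FrameAverager {
    private final double[] sum; // running per-pixel sum over all frames seen
    private int frameCount = 0;

    FrameAverager(int pixelCount) {
        sum = new double[pixelCount];
    }

    // Accumulate one frame. Pixels outside detected face rectangles are
    // assumed to have already been masked to 0 by the face detector pass.
    void addFrame(int[] maskedPixels) {
        for (int i = 0; i < maskedPixels.length; i++) {
            sum[i] += maskedPixels[i];
        }
        frameCount++;
    }

    // The faceprint so far: the mean value of each pixel.
    double[] average() {
        double[] avg = new double[sum.length];
        for (int i = 0; i < sum.length; i++) {
            avg[i] = sum[i] / frameCount;
        }
        return avg;
    }
}
```

Because non-face pixels contribute zeros on most frames, only regions where faces recur across the film accumulate enough signal to show up in the average — which explains the vague-shadow results.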

Abe Lincoln / Yellow Submarine

Titicut Follies is at top. Bottom left is an excerpt from a film about Abraham Lincoln, bottom right is Yellow Submarine.

Perhaps more interesting are the algorithm’s leftovers. As it runs, the latest faces are dumped into a buffer and drawn to the screen. A couple of averages in progress are shown below:

Titicut Follies in progress / Yellow Submarine in progress

Here is the rather messy source code.

Quantify Contact Run face detection on the contents of your computer screen. Log how often faces are encountered in web browsing / photo editing / whatever. In this way the relative loneliness of extended sessions in front of a machine could be quantified.

Curb Paranoia Implement a face-detecting and obfuscating filter at a very low level (somewhere in the camera driver, probably). Pseudo-privacy protection.

Quantifibate Run face detection on your laptop’s camera all the time. Since computers tend to be left on, “uptime” doesn’t say much about the hours per day sunk into these machines. Face detection could give more accurate statistics about presence / attention.

Tenso Automate the face swapping / tenso meme.

Almost Face Go through large sets of face-tagged images (an iPhoto library, for example) and hand-pick all of the false positives to build a collection of almost-faces.

Street View Process Google Street View panoramas for faces. The hit rate might be a bit low since Google blurs faces, but it would be interesting to build a map of geolocated faces.

A few more to come…



January 27 2011 at 11 AM

Your World of Text

I spent twenty minutes trying to remember the name of this brilliant, unmoderated, real-time, infinitely-large canvas of collaborative and anti-collaborative text. It’s Your World of Text by Andrew Badr. The window above is live… anything you type is published instantly. If you run out of room, you can scroll to a fresh plot of the page à la Google Maps.

Even more brilliant, Andrew released the source a while back. It’s interesting to see that it’s built on Django, and that clients keep in sync by polling the server instead of receiving pushed data from the server via Comet or a hidden socket.

January 26 2011 at 9 PM

Upload to Flickr from Processing

About The PImage Uploader
I’ve attached a quick Processing sketch that uploads PImages from a camera directly to Flickr each time you click the mouse.

The actual upload process is pretty simple — it just involves posting a bunch of bytes over HTTP to a specific URL. The hard part is getting Flickr to believe that you are who you say you are so that it will accept the images you upload.

That’s where this code is meant to help. In order to upload images to a Flickr account, your app will need write permission. In order to get write permission, you’ll need to go through the authentication process.

Basically, the first time your app wants to upload it will open up a URL on the Flickr website prompting you to log in and “allow” the app to do what it wants to do. You may be familiar with this procedure if you’ve had to authenticate third party apps that tie into Flickr (such as iPhoto or a desktop flickr uploader). In the case of the attached code, Processing opens the authentication link for you, and then gives you 15 seconds to approve the app on Flickr’s website before continuing on its way.

After this, it stores the authentication data in a text file (called token.txt) local to the Processing sketch, so that you won’t have to go through the online authentication process each time you run the app. I’ve encapsulated this process into a single function called authenticate() to make things as simple as possible. If the token is lost or becomes corrupted, the app will automatically try to fetch a new one the next time it runs. (Note that you should not distribute any sketches with your own generated token file!)

The code makes use of a Flickr library for Java called flickrj. Since flickrj is a generic Java library and isn’t designed specifically for Processing, its use is not quite as intuitive as you’re accustomed to. For one, the steps to use the library with your sketch are a bit different. Instead of putting files in your ~/Documents/Processing/libraries folder, you’ll need to download the .jar file from the flickrj website and drag and drop it onto your sketch window. This creates a folder called “code” inside your sketch folder with a copy of the .jar file inside for your sketch to reference as needed.

If you prefer, you can create the folder and copy the .jar file manually. You’ll end up with the same setup as if you dragged and dropped the file. Also note that you’ll never see anything appear in the “import” menu list since flickrj wasn’t built with Processing in mind. The flickrj jar is included in the zipped uploader code below to make your life easier.


The API / Library Conundrum
The amount of code and number of steps involved in getting the necessary authorization is kind of ridiculous. It’s easy to imagine a range of places to improve upon the library.

Flickrj is a pretty direct mirror of the official Flickr API, and that’s how most API libraries are designed. It seems aimed at experienced Java programmers working on large-scale projects rather than the quick and dirty sketches typical of Processing work. It’s tough to find exactly the right balance between a library that makes sense relative to the official API, and one that adds new features or code and leverages the paradigms of a particular programming language or framework.

For example, a Processing-specific library might incorporate a threaded image downloader that could return arrays of PImages from a given query. It could also wrap up the authorizations into a few lines of code as outlined in this post. These Processing-esque abstractions on top of Flickr’s own API abstractions add a lot of code and maintenance liabilities to our hypothetical library — but it would certainly open things up for beginner coders.
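To make the threaded-downloader idea concrete, here's a rough sketch of what the wrapper's core might look like. Everything here is hypothetical — the FlickrFriendly name, the injectable fetch function (which lets the network call be faked), the API shape — none of it is part of flickrj:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

class FlickrFriendly {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    // url -> pixel array; injected so the network layer can be swapped or faked
    private final Function<String, int[]> fetch;

    FlickrFriendly(Function<String, int[]> fetch) {
        this.fetch = fetch;
    }

    // Download every URL in parallel and return the images in request order,
    // blocking until all are done. (A friendlier version might hand results
    // to a callback instead of blocking the sketch's draw loop.)
    List<int[]> fetchAll(List<String> urls) throws Exception {
        List<Future<int[]>> futures = new ArrayList<>();
        for (String url : urls) {
            futures.add(pool.submit(() -> fetch.apply(url)));
        }
        List<int[]> images = new ArrayList<>();
        for (Future<int[]> f : futures) {
            images.add(f.get()); // wait for each download to finish
        }
        pool.shutdown();
        return images;
    }
}
```

The hard design question is whether this blocking-but-parallel style, or a fully asynchronous callback style, better matches how beginners reason about a Processing sketch.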

My Processing to-do list is pretty long, but I’ll add a new Flickr library filed under “maybe someday”.


The Code
The core of the sketch is shown below, but note that it will be easiest to download flickr_uploader.zip for testing since it includes the flickrj library. The code looks a bit lengthy and convoluted, but it mostly consists of helper functions to take care of the authentication process and image compression to make the upload process as simple as possible — and the helper functions should be reusable without modification, so all you really need to worry about is creating the Flickr object, calling the authentication function, and then uploading to your heart’s desire.

// Simple sketch to demonstrate uploading directly from a Processing sketch to Flickr.
// Uses a camera as a data source, uploads a frame every time you click the mouse.

import processing.video.*;
import javax.imageio.*;
import java.awt.image.*;
import com.aetrion.flickr.*;

// Fill in your own apiKey and secretKey values.
String apiKey = "********************************";
String secretKey = "****************";

Flickr flickr;
Uploader uploader;
Auth auth;
String frob = "";
String token = "";

Capture cam;

void setup() {
  size(320, 240);

  // Set up the camera.
  cam = new Capture(this, 320, 240);

  // Set up Flickr.
  flickr = new Flickr(apiKey, secretKey, (new Flickr(apiKey)).getTransport());

  // Authentication is the hard part.
  // If you're authenticating for the first time, this will open up
  // a web browser with Flickr's authentication web page and ask you to
  // give the app permission. You'll have 15 seconds to do this before
  // the Processing app gives up waiting for you.

  // After the initial authentication, your info will be saved locally in a text file,
  // so you shouldn't have to go through the authentication song and dance more than once.
  authenticate();

  // Create an uploader.
  uploader = flickr.getUploader();
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);
    text("Click to upload to Flickr", 10, height - 13);
  }
}

void mousePressed() {
  // Upload the current camera frame.
  println("Uploading");

  // First compress it as a JPEG.
  byte[] compressedImage = compressImage(cam);

  // Set some metadata.
  UploadMetaData uploadMetaData = new UploadMetaData();
  uploadMetaData.setTitle("Frame " + frameCount + " Uploaded from Processing");
  uploadMetaData.setDescription("To find out how, go to http://frontiernerds.com/upload-to-flickr-from-processing");
  uploadMetaData.setPublicFlag(true);

  // Finally, upload.
  try {
    uploader.upload(compressedImage, uploadMetaData);
  }
  catch (Exception e) {
    println("Upload failed");
  }

  println("Finished uploading");
}

// Attempts to authenticate. Note this approach is bad form,
// it uses side effects, etc.
void authenticate() {
  // Do we already have a token?
  if (fileExists("token.txt")) {
    token = loadToken();
    println("Using saved token " + token);
    authenticateWithToken(token);
  }
  else {
    println("No saved token. Opening browser for authentication");
    getAuthentication();
  }
}

// FLICKR AUTHENTICATION HELPER FUNCTIONS

// Attempts to authenticate with a given token.
void authenticateWithToken(String _token) {
  AuthInterface authInterface = flickr.getAuthInterface();

  // Make sure the token is legit.
  try {
    authInterface.checkToken(_token);
  }
  catch (Exception e) {
    println("Token is bad, getting a new one");
    getAuthentication();
    return;
  }

  auth = new Auth();

  RequestContext requestContext = RequestContext.getRequestContext();
  requestContext.setSharedSecret(secretKey);
  requestContext.setAuth(auth);

  auth.setToken(_token);
  auth.setPermission(Permission.WRITE);
  flickr.setAuth(auth);
  println("Authentication success");
}


// Goes online to get user authentication from Flickr.
void getAuthentication() {
  AuthInterface authInterface = flickr.getAuthInterface();

  try {
    frob = authInterface.getFrob();
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  try {
    URL authURL = authInterface.buildAuthenticationUrl(Permission.WRITE, frob);

    // Open the authentication URL in a browser.
    open(authURL.toExternalForm());
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  println("You have 15 seconds to approve the app!");
  int startedWaiting = millis();
  int waitDuration = 15 * 1000; // wait 15 seconds
  while ((millis() - startedWaiting) < waitDuration) {
    // just wait
  }
  println("Done waiting");

  try {
    auth = authInterface.getToken(frob);
    println("Authentication success");
    // This token can be used until the user revokes it.
    token = auth.getToken();
    // Save it for future use.
    saveToken(token);
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  // Complete authentication.
  authenticateWithToken(token);
}

// Writes the token to a file so we don't have
// to re-authenticate every time we run the app.
void saveToken(String _token) {
  String[] toWrite = { _token };
  saveStrings("token.txt", toWrite);
}

boolean fileExists(String filename) {
  File file = new File(sketchPath(filename));
  return file.exists();
}

// Loads the token string from a file.
String loadToken() {
  String[] toRead = loadStrings("token.txt");
  return toRead[0];
}

// IMAGE COMPRESSION HELPER FUNCTION

// Takes a PImage and compresses it into a JPEG byte stream.
// Adapted from Dan Shiffman's UDP Sender code.
byte[] compressImage(PImage img) {
  // We need a buffered image to do the JPEG encoding.
  BufferedImage bimg = new BufferedImage(img.width, img.height, BufferedImage.TYPE_INT_RGB);

  img.loadPixels();
  bimg.setRGB(0, 0, img.width, img.height, img.pixels, 0, img.width);

  // Output streams to catch the compressed image as bytes.
  ByteArrayOutputStream baStream = new ByteArrayOutputStream();
  BufferedOutputStream bos = new BufferedOutputStream(baStream);

  // Turn the BufferedImage into a JPEG and put it in the BufferedOutputStream.
  // Requires try/catch.
  try {
    ImageIO.write(bimg, "jpg", bos);
    bos.flush(); // make sure any buffered bytes reach the underlying array
  }
  catch (IOException e) {
    e.printStackTrace();
  }

  // Return the compressed image as a byte array.
  return baStream.toByteArray();
}

December 17 2010 at 3 PM

Spring Thesis Plans

THE POST-APOCALYPTIC PIRATE INTERNET
For background on the basic idea of the post-apocalyptic pirate internet, please read an earlier post on the subject

Problem: The centrally-distributed internet is fragile and politically fickle
The web’s current implementation is built from millions of geographically dispersed clients communicating with a handful of extremely high-density data centers. Despite the many ⇔ many ideals of the web, the infrastructure looks more like many ⇒ one ⇒ many. This topology means that there are points in the network of significant vulnerability: Backbone fiber, ISP central offices, data centers, etc. all represent potential choke points in the web. The destruction of physical infrastructure or installation of firewalls to screen and censor data at one of these points could snuff access to the web. That would be a shame, since the web is arguably the most significant aggregation of knowledge and culture humanity has ever assembled.

How could this knowledge be protected, and how could the current freedom of expression and exchange enjoyed on the centralized web reemerge under a distributed model that is technically immune to data loss and censorship?

Solution: Distributed, mesh-networked backups of the entire web
I propose a distributed backup system for the web to ensure the survival of data and continuation of the platform’s ideals in the face of a political or infrastructural apocalypse.

The basic unit of the post-apocalyptic pirate internet is the “backup node”. These are relatively small, suitcase-sized computers with lots of storage space. Servers, basically. They’re designed for use by consumers of average technical aptitude. Backup nodes would sit in the corner of a room and sip data from the internet to build a backup of some portion of the web. If and when the centralized web infrastructure falls apart, the backup nodes would be poised to respond by automatically transforming from data aggregators to data distributors. Requests for web data in the absence of centralized infrastructure (post-apocalypse) would instead be fulfilled by the backup nodes — at least to the extent that backups are available.

The technical infrastructure of the post-apocalyptic pirate internet has two basic components. The first is physical: local storage nodes — hard disks, flash memory, etc. — on which fragments of the web will be backed up and paired with a supporting computer and interface (most likely a browser). The second is ethereal: wireless communication which will enable the formation of mesh network between physically proximate nodes. This would give apocalypse survivors access to more than just the data stored on their local node. In this sense, a new internet would take shape as the backup nodes enmeshed — an internet that was not vulnerable to centralized oversight or obstruction.

Execution: Research demand and feasibility, then build a backup node
First I’ll have to figure out how / why, exactly, such a system could / should be built. How would the content of the backups be curated? By some distributed democratic means? By the usage patterns of the backup node’s owner? There’s a judgment to be made in deciding between saving the data people actually interact with on a daily basis (say, Twitter), and the data that actually carries forward knowledge essential to civilization (OpenCourseWare comes to mind).

What role will the backup nodes play before the apocalypse? Will they be seemingly dormant black boxes going about their work without human intervention, or will they become distribution points for content censored from the centralized web (Wikileaks would be the example of the day)?

Marina has encouraged me to focus on the conceptual justifications for the system instead of technical implementation. However I’m personally interested in creating at least one actual node to demonstrate the concept. I understand the futility of the gesture, since the pirate internet would require thousands of backup nodes to be built, sold, and operated if it was going to actually protect (and eventually distribute) an appreciable amount of data. A single node is not particularly useful. Nevertheless, I’d like to end the semester with more than an exhaustive string of justifications / marketing material for something that doesn’t actually exist.

December 8 2010 at 3 PM

NIME is Coming

NIME 2010 Poster

December 8 2010 at 3 PM

Signs of the Apocalypse

A glut of headlines relevant to the post-apocalyptic pirate internet have popped up over the last few weeks. Here’s a quick review with commentary.

This first batch is regarding the temporary loss of major online repositories for “user generated content” (to invoke the cliché). Another post discussing the Wikileaks saga in the context of the post-apocalyptic pirate internet is forthcoming.


Tumblr, the celebrated blogging platform, was down for about 24 hours on December 5th. This was their longest outage to date.

Tumblr outage

Users’ trust is shaken by this sort of thing, and a day after the outage they released a backup application that lets users save all of their Tumblr posts to their hard disks.

Here’s the official line:

Unlike other publishing sites’ approach to backups, our goal was to create a useful copy of your blog’s content that can be viewed on any computer, burned to a CD, or hosted as an archive of static HTML files.
Wherever possible, we use simple file formats. Our backup structure is optimized for Mac OS X’s Spotlight for searching and Quick Look for browsing, and we’ll try to use the same structure and achieve the same benefits on other platforms.

To me this reads more like, “Keep uploading! If we implode, we won’t take your data with us.”

The backup app strikes me as a Hail Mary decision executed in the interest of damage control (with the side effect of actually being good news for the survivability of the 2+ billion posts Tumblr hosts on their servers). There’s a tension on social media websites between giving users access to their own data (in the form of database dumps) and maximizing “lock in” — since giving users downloadable access to their data can provide an easy means of egress from one service and migration to a competitor. (cf. Facebook’s recent decision to let users dump their data in one step.)

Of course, like most prophylactics, the download tool would only be useful in the context of the post-apocalyptic pirate internet if 100% of Tumblr publishers used it 100% of the time. Nevertheless, the fact that this piece of preservationist infrastructure was officially released suggests that some portion of the Tumblr staff / users are paranoid enough to prepare for a data or infrastructure related disaster. The app also implicitly migrates the worst-case backup burden from the host to the client. (e.g. “Oops, we lost everything… what, you didn’t back up your posts?”) This represents a significant shift in one of the basic contracts of Web 2.0, which is the idea that “files” as we know them on our PCs don’t exist, you don’t have to worry about which directory things go in, you don’t plan for a day when you’ll need to open Word 3.0 files, and you certainly don’t have to back up. The understanding between consumer and provider is that once something’s uploaded, it’s safe from loss due to technical failure — every bit tucked away in multi-million-dollar data centers and placed under the careful watch of bespectacled geeks pacing up and down miles of server racks.

Of course, that’s not how things work out, but the cloud = safe truism is one that will need to be proven catastrophically false before the basic tenet of the post-apocalyptic pirate internet — that local bits are safe bits — can take hold.


Another outage of reasonably high profile (although certainly not on the scale of Tumblr) struck GitHub on November 14th. A botched command by a systems administrator wiped out a database and destroyed some data along the way. The site was unusable for about three hours.

GitHub is much more esoteric than Tumblr, but for the uninitiated it’s basically a web site layering social-networking tools on top of Git. Git, in turn, is a piece of software that runs locally on your computer to keep track of collaborations around / revisions to source code written in the course of developing software.

Anyway, here’s what bad news looked like, as delivered by GitHub’s mascot, the Octocat:

The nature of Git (the version-control system) means that even a total loss of GitHub (the community built on Git) would be inconvenient, but not catastrophic. When you’re working with a Git repository, you have a local copy on your hard disk that is periodically updated and synced to the GitHub server.

If 50 people are working on a particular project, then 50 copies of that project exist on local hard disks in one corner of the world or another. Thus the degree to which a project is insured against disaster rises in proportion to its popularity / number of collaborators.
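Back of the envelope: if each copy independently survives a disaster with probability s, the chance that at least one of n copies survives is 1 − (1 − s)^n, which climbs toward certainty quickly as collaborators are added. A quick illustration (the numbers are mine, purely hypothetical):

```java
public class Redundancy {
    // Probability that at least one of n independent copies survives,
    // given that each copy survives with probability s.
    static double survival(double s, int n) {
        return 1.0 - Math.pow(1.0 - s, n);
    }

    public static void main(String[] args) {
        // Even with pessimistic 50/50 odds per copy:
        System.out.printf("1 copy:    %.4f%n", survival(0.5, 1));  // 0.5000
        System.out.printf("10 copies: %.4f%n", survival(0.5, 10)); // 0.9990
    }
}
```

This is exactly the redundancy the pirate internet's backup nodes would need to reproduce deliberately — Git gets it for free as a side effect of normal use.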

So there are two particularly great things about the Git + GitHub combination that should be kept in mind as plans for the post-apocalyptic pirate internet are drawn up:

  1. The same basic software (Git) is running on both your own computer and GitHub’s servers. In this sense, GitHub makes the most of the web when it’s available (by adding a social layer to Git), but Git itself doesn’t completely melt down in the absence of GitHub. In short, Git’s use of the centralized web is value added, not mission critical.

  2. Local backups are generated automatically in the course of using GitHub — unlike Tumblr’s proposed solution, which calls on users to make a conscious decision to back up at regular intervals if they want the safety of their data.

December 8 2010 at 1 PM

It Talks: Text to Speech in Processing

The Mac has a really great text-to-speech (TTS) engine built right in, but at first glance it’s only available at Apple’s whim in specific contexts — e.g. via a menu command in TextEdit, or system-wide through the accessibility settings. Seems grim, but we’re in luck — Apple, in their infinite generosity, have given us a command line program called “say”, which lets us invoke the TTS engine through the terminal. It’s super simple to use: just type the command and then the text you want, e.g. say cosmic manifold.

So that’s great, now what if we wanted to make a Processing sketch talk to us? In Java, as in most languages, there are ways to send commands to the terminal programmatically. By calling Runtime.getRuntime().exec("some command"); we can run any code we want on the terminal from within Processing. So to invoke the TTS engine from a Processing sketch, we can just build the say ... command line instruction in a string, pass that to the runtime’s exec call, and let the system handle the TTS conversion.

I’ve put together a small Processing class that makes it easy to add speech to your Processing sketches. It only works on Mac OS, won’t work in a web applet, and has only been tested in Mac OS 10.6. (I think the list of voices has changed since 10.5.)

Note that since the class is quite simple and really just wraps up a few functions, I’ve set it up for static access, which means that you should never need to instantiate the class by calling something like TextToSpeech tts = new TextToSpeech() — and in fact that would be a Bad Idea. Instead, you can access the methods any time without any prior instantiation using static-style syntax, e.g. TextToSpeech.say("cosmic manifold");.

Here’s the class and a sample sketch:

// Processing Text to Speech
// Eric Mika, Winter 2010
// Tested on Mac OS 10.6 only, possibly compatible with 10.5 (with modification)
// Adapted from code by Denis Meyer (CallToPower)
// Thanks to Mark Triant for the inspiring sample text

String script = "cosmic manifold";
int voiceIndex;
int voiceSpeed;

void setup() {
  size(500, 500);
}

void draw() {
  background(0);

  // Set the voice based on mouse Y.
  voiceIndex = round(map(mouseY, 0, height, 0, TextToSpeech.voices.length - 1));

  // Set the voice speed based on mouse X.
  voiceSpeed = mouseX;

  // Help text.
  fill(255);
  text("Click to hear " + TextToSpeech.voices[voiceIndex] + "\nsay \"" + script + "\"\nat speed " + mouseX, 10, 20);

  fill(128);
  text("Mouse X sets voice speed.\nMouse Y sets voice.", 10, 65);
}

void mousePressed() {
  // Say something.
  TextToSpeech.say(script, TextToSpeech.voices[voiceIndex], voiceSpeed);
}


// The text to speech class.
import java.io.IOException;

static class TextToSpeech extends Object {

  // Store the voices; makes for nice auto-complete in Eclipse.

  // male voices
  static final String ALEX = "Alex";
  static final String BRUCE = "Bruce";
  static final String FRED = "Fred";
  static final String JUNIOR = "Junior";
  static final String RALPH = "Ralph";

  // female voices
  static final String AGNES = "Agnes";
  static final String KATHY = "Kathy";
  static final String PRINCESS = "Princess";
  static final String VICKI = "Vicki";
  static final String VICTORIA = "Victoria";

  // novelty voices
  static final String ALBERT = "Albert";
  static final String BAD_NEWS = "Bad News";
  static final String BAHH = "Bahh";
  static final String BELLS = "Bells";
  static final String BOING = "Boing";
  static final String BUBBLES = "Bubbles";
  static final String CELLOS = "Cellos";
  static final String DERANGED = "Deranged";
  static final String GOOD_NEWS = "Good News";
  static final String HYSTERICAL = "Hysterical";
  static final String PIPE_ORGAN = "Pipe Organ";
  static final String TRINOIDS = "Trinoids";
  static final String WHISPER = "Whisper";
  static final String ZARVOX = "Zarvox";

  // Throw them in an array so we can iterate over them / pick at random.
  static String[] voices = {
    ALEX, BRUCE, FRED, JUNIOR, RALPH, AGNES, KATHY,
    PRINCESS, VICKI, VICTORIA, ALBERT, BAD_NEWS, BAHH,
    BELLS, BOING, BUBBLES, CELLOS, DERANGED, GOOD_NEWS,
    HYSTERICAL, PIPE_ORGAN, TRINOIDS, WHISPER, ZARVOX
  };

  // This sends the "say" command to the terminal with the appropriate args.
  static void say(String script, String voice, int speed) {
    try {
      Runtime.getRuntime().exec(new String[] {"say", "-v", voice, "[[rate " + speed + "]]" + script});
    }
    catch (IOException e) {
      System.err.println("IOException");
    }
  }

  // Overload the say method so we can call it with fewer arguments and basic defaults.
  static void say(String script) {
    // 200 seems like a reasonable default speed.
    say(script, ALEX, 200);
  }

}

December 6 2010 at 11 AM

Rough Thesis Proposal

Let’s suppose the internet stops working tomorrow. Not the hard disks, just the wires. Government firewall, tiered service, cut fiber, ISP meltdown — pick a scenario.

How much would be lost? How much could be recovered? What would you need to rebuild?

We have the infrastructure — computing power, abundant storage, networking — but not the know-how, organization, will, or sense of impending disaster required to start building a decentralized web. We need a way to easily rearrange our consumer electronics (which are optimized for consumption) into a network that can’t be centrally controlled or destroyed (and is therefore optimized for creation and distribution). Most importantly, the ubiquity and overlap of consumer-created wireless networks in urban areas means that mesh-based networks with thousands of nodes should be feasible without any reliance on centralized network infrastructure.

Hence the post-apocalyptic pirate web kit: everything you need to bootstrap a decentralized web in a single package. This could take several forms, but my initial thinking suggests a suitcase full of hard disks, wireless connections, and processing power designed to restore fragments of the web to nearby users and act as a node in a broader network of data survivalists.

The hard disks would be pre-loaded with an archival version of the web. The whole of Wikipedia’s English text content, for example, is readily available for download and amounts to about 5 terabytes. This could fit on three hard disks, which cost about $100 each, and together displace about as much physical space as a loaf of bread.
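The disk count is simple arithmetic (assuming ~2 TB drives, roughly what $100 bought at the time):

```java
public class DiskMath {
    // How many whole disks of a given capacity are needed
    // to hold an archive of a given size.
    static int disksNeeded(double archiveTb, double perDiskTb) {
        return (int) Math.ceil(archiveTb / perDiskTb);
    }

    public static void main(String[] args) {
        // ~5 TB of Wikipedia text on 2 TB disks.
        System.out.println(disksNeeded(5.0, 2.0)); // prints 3
    }
}
```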

In its dormant form, the post-apocalyptic pirate web kit is something you might leave plugged in at the corner of the room — it could sit there indefinitely, like a fire extinguisher. The kit could automatically crawl the web and keep its archival mirror as fresh as possible.

When and if disaster strikes, the kit would be ready to switch into a server node and thus preserve our way of internet life. (So that we might continue with a spirit of bold curiosity for the adventure ahead!)

November 30 2010 at 10 AM