Pets, Plants, and Computer Vision

Hack the Museum

July 29th, 2013 | Posted by admin in audio | code | Detroit | Fun! | python | Uncategorized - (Comments Off on Hack the Museum)

My friend talked me into participating in the HackTheMuseum hack-a-thon at the Henry Ford Museum during the Detroit Maker Faire. The hack-a-thon was intended to showcase a new API for the museum’s digital assets. We did a recon trip the week before the hack-a-thon and outlined what we thought would make a great app. The newer exhibits at the Henry Ford are actually really slick: you can search through the physical and digital collections on large touch screens installed in the museum and curate your own collections for later retrieval at home. After our recon trip we distilled a few key design elements that we wanted in our app.

  1. We specifically didn’t want to duplicate the functionality of the existing infrastructure. Given the time constraints we really couldn’t do it better.
  2. We noticed a couple of visually impaired visitors who could have benefited from audio tours. Unlike a lot of other museums, the Henry Ford doesn’t currently offer them.
  3. We didn’t want to write a mobile application. People go to museums to experience artifacts they can’t see anywhere else. The museum should be for enjoying these objects, not staring face-down into a cell phone or iPad.
  4. The museum sees a wide variety of visitors: visitors of all ages, nationalities, races, abilities, and socio-economic backgrounds. We didn’t want to assume that our users had expensive cellphones with equally expensive data plans, or the tech savvy to operate them. My experience is that my parents, the kids I coach, and my family can all operate MP3 players and send MMS text messages. We wanted to stick to a medium that could see wide adoption.
  5. Every good idea we had involved some sort of mapping element, which really isn’t available in the Henry Ford Museum API. Every experience we could think of started with, “Where is cool thing X at the museum?” Having a map is an indispensable part of the museum experience: there is simply too much to see in one day, so you pick the stuff that really interests you.

The night before the hack-a-thon I came up with an idea that I pitched to the team: create custom maps and audio tours for the museum and then deliver them over a variety of media. Have a smartphone? Great, here is a map image or dynamic map and a SoundCloud link / dynamic HTML audio. Have a dumb phone? Get the content delivered via SMS. Are you a teacher who wants to customize a visit? Awesome, basic MP3 players are dirt cheap; you can get a set for a classroom for half the price of one iPod, load up a tour, and hand them over to the kids.

When the hack-a-thon started we decided to narrow our focus to a single exhibit so we could do the mapping and build out the content reasonably well. We needed to manually determine the positions of a bunch of exhibits. I started on the text-to-speech side. My first task was to get the data I wanted to put in the tour from the museum API. After a bit of massaging I was able to pull out the data I wanted: the exhibit location descriptions, the display titles, the various abstracts/copy about the artifacts, and the thumbnail images. I wrapped this into a function that would pull down this info based on an ID number, and then made another function that would stitch the various bits of text together into a cohesive body text.
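A minimal sketch of that pair of helpers, assuming a hypothetical get_artifact(artifact_id) wrapper around the museum API (the endpoint URL and field names below are placeholders, not the real API):

```python
import requests

API_BASE = "https://example-museum-api.org/artifacts"  # placeholder, not the real endpoint


def get_artifact(artifact_id):
    """Pull the fields we care about for one artifact (hypothetical endpoint and field names)."""
    record = requests.get("{0}/{1}".format(API_BASE, artifact_id)).json()
    return {
        "title": record.get("displayTitle", ""),
        "location": record.get("locationDescription", ""),
        "abstracts": record.get("abstracts", []),
        "thumbnail": record.get("thumbnailUrl", ""),
    }


def build_tour_text(artifact):
    """Stitch the text fields into one cohesive body for text-to-speech."""
    parts = [artifact["title"], "You can find this in " + artifact["location"] + "."]
    parts.extend(artifact["abstracts"])
    return " ".join(p for p in parts if p)
```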

Python is my hacking language of choice, so I needed to figure out how to take the large body of text I could generate and turn it into speech. There is pyttsx for text-to-speech, but it only outputs to the sound card. You could probably snag the raw WAV data with a bit of hacking, but that would take a while. There are also a bunch of web APIs, but I figured I could do a decent job locally; I also had hopes of composing the text-to-speech with audio from the museum API, but that never materialized. I decided to just use the Python subprocess module to call the espeak Linux command line utility to generate the speech WAV files. I then ran ffmpeg over the raw WAV files to convert them to MP3. Finally I used the SoundCloud API to upload the result. Pretty? No, but it worked, that’s why they call it a hack-a-thon.
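Roughly, the subprocess glue looks like this (a sketch, not the exact hack-a-thon code; espeak’s -w flag writes the speech straight to a WAV file instead of the sound card):

```python
import subprocess


def text_to_mp3(text, basename):
    """Generate speech with espeak, then convert the WAV to MP3 with ffmpeg."""
    wav_path = basename + ".wav"
    mp3_path = basename + ".mp3"
    # espeak -w writes the synthesized speech to a WAV file
    subprocess.check_call(["espeak", "-w", wav_path, text])
    # -y overwrites any existing output file
    subprocess.check_call(["ffmpeg", "-y", "-i", wav_path, mp3_path])
    return mp3_path
```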


An example “hacked” map of a custom museum tour.

With the sound data taken care of I then proceeded to work on mapping. I had already pulled down the exhibit thumbnails, so all I had to do was map each exhibit’s location onto the map image. I just tossed our hand-rolled map points into a Python dict keyed on ID and made it a class member. Serious duct tape coding to be sure; in real life the museum would provide this info, or you could toss it in your favorite data store. I then used a bit of SimpleCV magic to build alpha maps of the thumbnails and blit them onto the map. I also played with generating an animated GIF of the tour, but a lot of the GIFs were really big even after I ran them through gifsicle. GIFs don’t render on old cell phones and imgur kept shitting the bed on big files, so I cut it.
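The gist of the compositing step, sketched here with Pillow rather than SimpleCV (the coordinates are made up, and the paste-with-alpha call stands in for the hand-rolled dict and SimpleCV blitting I actually used):

```python
from PIL import Image

# Hand-mapped exhibit positions on the map image, keyed by artifact ID (made-up values).
EXHIBIT_POSITIONS = {
    "12345": (220, 140),
    "67890": (410, 305),
}


def render_tour_map(map_path, thumbnails, out_path, thumb_size=(64, 64)):
    """Paste each exhibit thumbnail onto the base map at its hand-mapped position."""
    base = Image.open(map_path).convert("RGBA")
    for artifact_id, thumb_path in thumbnails.items():
        if artifact_id not in EXHIBIT_POSITIONS:
            continue
        thumb = Image.open(thumb_path).convert("RGBA").resize(thumb_size)
        # The thumbnail's alpha channel acts as the mask, like the alpha maps built in SimpleCV.
        base.paste(thumb, EXHIBIT_POSITIONS[artifact_id], mask=thumb)
    base.save(out_path)
```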

To deliver the content I used the Twitter API and the Twilio API. Twilio delivers data via SMS but not MMS, so I was also hoping to use the MoGreet API to deliver images and sounds directly, but I just plain ran out of time. I had some bits of code for the Twilio API, so I just stuck with it. The last step was to link the front end UI with the back end bits; to do this I just used Python’s BaseHTTPRequestHandler and HTTPServer.
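Sending a tour link over SMS with Twilio boils down to a couple of lines. A sketch using the present-day twilio-python client (the 2013-era client had a different class name, and the credentials and numbers are obviously placeholders):

```python
from twilio.rest import Client

# Placeholder credentials; in practice these come from your Twilio console / environment.
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
AUTH_TOKEN = "your_auth_token"
TWILIO_NUMBER = "+15550001111"


def send_tour_sms(to_number, map_url, audio_url):
    """Text a visitor the links to their custom map and audio tour."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    body = "Your custom tour: map {0} / audio {1}".format(map_url, audio_url)
    client.messages.create(to=to_number, from_=TWILIO_NUMBER, body=body)
```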

Like I said, it isn’t pretty but it works. That is why they call it a hack-a-thon. You can check out the code on Github. The Henry Ford Museum Asset API is only available within the museum so I grabbed a screencast video (sorry no audio). You can also hear some of the auto generated audio above.

We did reasonably well, all things considered. We tied for second place at the hack-a-thon and I got a swell hard hat and a few tickets to the Henry Ford Museum and Greenfield Village. The entire team did a great job. It was our first hack-a-thon ever, we were fairly front-end heavy as a team, and I have minimal back-end experience.

Tapsterbot Mark I

July 10th, 2013 | Posted by admin in automation | Automation Alley | demo | Fun! | pics or it didn't happen | python | robots - (Comments Off on Tapsterbot Mark I)

I’ve been working in my spare time on creating a clone of Jason Huggins’ tapsterbot, a parallel robot. I wanted a friendly desktop robot that I could play with to prototype some computer vision applications. Jason was kind enough to open source the code, the BOM, and all of the design files in a handy GitHub repo. To build the robot I got a membership to the All Hands Active hackerspace here in Ann Arbor so I could fab the parts; all I really needed was a 3D printer and a laser cutter. The robot has a really simple design that only requires a few nuts and bolts, three $8 servos, and an Arduino for the controller.

Once I got the parts it took me a little over a day to build the thing. I had a few slip-ups along the way, so I want to collect all my knowledge in a blog post. Jason provided me with a ton of awesome photos of the robot in action so I could figure out how to piece it together. One critical detail is how to correctly mount the robot’s arms onto the servos, and Jason has provided an awesome video that shows you how to do just that. I now have everything assembled correctly, and I plan to take it all apart and provide step-by-step instructions on how to put everything together.

Currently the robot runs using node.js, and I am working on a Python port using pyFirmata. With any luck I should have that work done within the next week and be able to show some more impressive demos. The first thing I want to do is build a path planning algorithm so I can prevent the tapsterbot from accidentally crushing its own arms or swinging into the legs that support it (I’ve already broken an arm). I’ve been reading up on the robot’s inverse kinematics, but I am not sure it lends itself to a closed form solution.
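Driving the three servos from Python with pyFirmata is pleasantly short. A sketch of what the port boils down to (the serial port path and pin numbers are assumptions about my wiring, not part of Jason’s design files):

```python
from pyfirmata import Arduino

SERVO_PINS = (9, 10, 11)  # assumed wiring: one PWM-capable digital pin per arm
PORT = "/dev/ttyACM0"     # serial port will vary by machine


def connect(port=PORT):
    """Open the board and configure the three arm servos."""
    board = Arduino(port)
    # 'd:<pin>:s' asks Firmata for a digital pin in servo mode
    return board, [board.get_pin("d:{0}:s".format(p)) for p in SERVO_PINS]


def set_arm_angles(servos, angles):
    """Write an angle (0-180 degrees) to each shoulder servo."""
    for servo, angle in zip(servos, angles):
        servo.write(angle)
```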

Solving Autostereograms AKA Magic Eyes

July 10th, 2013 | Posted by admin in birds | computer vision | demo | Fun! | OpenCV | pics or it didn't happen | python | signal processing | SimpleCV - (Comments Off on Solving Autostereograms AKA Magic Eyes)

shark

This week I’ve been playing with autostereograms, also called Magic Eye images. Magic Eye images were big in the 90s when I was a kid/teen, and every mall had a kiosk peddling framed copies. I wanted to see if I could reconstruct the depth map from the image using a little bit of image processing. Autostereograms work because your eye/brain is really into creating stereo depth maps: if you set your eyes’ focus at a point behind the image, your brain basically goes a bit haywire and tries to build a depth map in the plane of the image. Getting your vergence point to sit behind the image plane takes some practice, so if at first you don’t see the image, keep trying. I really recommend reading the Wikipedia article linked above; it is really well written and has a lot of fantastic diagrams.

HORSE

To do this project I created a small data set of “wall-eyed” random-dot autostereograms. There are other kinds of stereograms that can be viewed in different ways, but I felt the random-dot ones would be slightly easier to decode. The basic premise is that for every small set of horizontal pixels there is a corresponding set of pixels some distance away along the same row. The distance between the matching segments is what your brain uses to get the depth map. The matching is periodic, with a period related to the vergence distance you must view the image at. Figuring out the period of the image is easy: if you look at the image you can basically see columns of pixels. For most autostereograms there are between 6 and 20 columns in a normal image; the horse image above has seven instances of the repeating pattern. If you have an image that is 600 pixels wide, with about six columns, the pixel (or set of pixels) at [0,0] will have a correspondence at roughly [100+d,0], where d is the depth value.

BIRD

I baked up a naive algorithm in about 90 minutes and had an early prototype. The basic idea is that you iterate over the rows in the image, and for a small chunk of pixels in each row (roughly ten pixels) you search a window around where the correspondence should exist, and record the best offset in a depth map. So for our example image 600 pixels wide, you would try to match pixels [0:10,0] with [100:110,0], [101:111,0], and so on until you found a decent match. For my first pass I used the grayscale sum of absolute pixel differences. You could do a correlation, but I thought the simple solution should suffice. It is worth noting that you can also move up to frequency space and do a multiplication of the spectra, but that seemed like a lot of work. I googled a bit and found this example that does just that. That solution seems to get stronger edges and work on a few more kinds of stereograms, but I would argue mine gets better depth maps.
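A minimal NumPy sketch of that naive matcher (not the repo code itself; the period is assumed to be known in advance, e.g. roughly the image width divided by the number of visible columns):

```python
import numpy as np


def depth_map_sad(gray, period, patch=10, search=40):
    """Naive autostereogram depth extraction via sum of absolute differences.

    gray   : 2D grayscale array
    period : approximate horizontal repeat distance in pixels
    """
    h, w = gray.shape
    img = gray.astype(np.float32)
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(0, w - period - search - patch, patch):
            ref = img[y, x:x + patch]
            best_d, best_err = 0, np.inf
            # Slide a window around the expected correspondence at x + period
            for d in range(-search // 2, search // 2):
                x2 = x + period + d
                err = np.abs(ref - img[y, x2:x2 + patch]).sum()
                if err < best_err:
                    best_err, best_d = err, d
            depth[y, x:x + patch] = best_d
    return depth
```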

My first pass, using naive Python looping, worked, but it was as slow as molasses in January, so I decided to see if I could speed things up. The first speed hack I tried was to use an integral image. An integral image is an image where each pixel holds the sum of all the pixels above and to its left. Integral images are great if you want to calculate lots of different average values across an image really fast, and they are what make Haar cascades and face detection possible. Once the integral is computed, the sum (and average) of any rectangular area in the image can be computed with just four lookups and three additions/subtractions, which is a decent time savings. I modified my code and got maybe a 10-20% speed up (I didn’t benchmark it). Since the operations are done row-wise and each row is independent of the next, the algorithm is really well suited to parallelization, so I decided to try my hand at doing some image processing with the Python multiprocessing library. It took me about an hour to chunk out the code and get everything running, but it did improve performance significantly (a little less than 4x). I need to go back and refactor the code to deal with some bounds issues, which are causing the horizontal lines in the image, and perhaps use shared memory, but the results are well worth the effort. You can take a look at the code for yourself at this repo; I’ve put a gist of the code below for reference.
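For reference, the integral-image trick itself is only a few lines of NumPy (a sketch; the zero padding is what makes the four-lookup formula work at the image borders):

```python
import numpy as np


def integral_image(img):
    """Summed-area table, zero-padded so lookups at row/col 0 are trivial."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)
    return ii


def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four lookups on the padded table."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```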

If I get some more time I want to see how much of a speed up Numba can give the naive implementation and possibly do some benchmarking of the different approaches. I also need to remove the banding caused by the multiprocessing “chunking.” The algorithm’s performance seems to be very dependent on the search window size, so I would like to find a more robust way of determining it, possibly by looking at the low end of the FFT spectrum.

Web Scraping with BeautifulSoup and Python

July 1st, 2013 | Posted by admin in Ann Arbor | artificial intelligence | audio | birds | classification | code | Fun! | machine learning | Michigan | python | signal processing | Uncategorized - (Comments Off on Web Scraping with BeautifulSoup and Python)

Tufted Titmouse – I love these guys, so cute.

I’ve been working on building an automated bird call recognition system using Python. The first step in getting everything working is to pull down a data set of bird calls from which to train and test my ideas, which means I need a lot of bird calls. It just so happens that there are a couple of large repositories of this type of data, including the Xeno-Canto library and the Cornell Ornithology Library. The only problem is that the data lives in websites with embedded players, and I need the raw files. I decided to try and write a basic web scraper to pull down the data I wanted. To do this I first created a list of the scientific names of all of the song birds that I am pretty sure live in my neighborhood (at least the common ones I’ve seen before). I checked a couple of websites to cross-check my assumptions and developed the following list:

To do the scraping I used the BeautifulSoup Python library to help me navigate the DOM of the xeno-canto bird library. The code works by crafting a query for each bird species and parsing the DOM to look for the xc-button-audio class on a div element. That div carries a data attribute called data-xc-filepath which points to the MP3 file URL. My friend Ben helped me figure out that last little bit, as I am barely competent as a web monkey. Once I have the MP3 file URL I use os.system to call wget on it. I also do some bookkeeping to keep all the bird calls in different directories and to navigate the multiple pages of results. If I get some more time I will try to extract all of the metadata and save it to a CSV file, but for right now this works. You can see the code below:
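The embedded gist doesn’t survive in this copy, so here is a stripped-down sketch of the approach (the search URL format is an assumption; the xc-button-audio class and data-xc-filepath attribute are the bits described above):

```python
import os

import requests
from bs4 import BeautifulSoup


def scrape_species(scientific_name, out_dir, pages=1):
    """Download MP3s for one species from xeno-canto search result pages."""
    os.makedirs(out_dir, exist_ok=True)
    for page in range(1, pages + 1):
        # Assumed query format; the real search URL may differ.
        url = "https://www.xeno-canto.org/explore?query={0}&pg={1}".format(
            scientific_name.replace(" ", "+"), page)
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        for div in soup.find_all("div", class_="xc-button-audio"):
            mp3_url = div.get("data-xc-filepath")
            if mp3_url:
                # Mirrors the original os.system + wget approach.
                os.system("wget -P {0} {1}".format(out_dir, mp3_url))
```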

Now that I have the data I need to figure out how to extract individual calls from each file, since a single file can contain multiple calls. My working idea is to look for regions where the peak-to-peak audio values are low enough to be considered silence, and use those silence intervals to split the files into individual calls. Once I have the short individual calls I am thinking I will run an FFT over the audio and then bin different regions of the spectrum to create a feature vector. I will also probably keep some information about the call length and maybe try to determine the “warblyness” of the call (i.e. how many different sub-tweets make up a call). It may be useful to do the binned FFT over fixed time slices of the call (e.g. break the call up into five time chunks and get the FFT values for each chunk). I also have an idea that a binary descriptor could compress each time slice if I set an appropriate threshold at each frequency (e.g. use a 32-bit int to encode whether each of 32 chunks of the frequency space is discernible). If I can get that idea to work I could probably encode each call very succinctly in only a few bytes of memory. Once I have my features I suspect that a K Nearest Neighbors classifier will work reasonably well; I may combine the KNN with a final correlation against the truth data to choose between the K best matches.
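A quick sketch of what that binned-FFT binary descriptor might look like (everything here, from the number of bins to the threshold, is just the idea above translated into NumPy, not something tested against real calls):

```python
import numpy as np


def call_descriptor(samples, time_chunks=5, freq_bins=32, threshold=0.1):
    """Encode a single call as one 32-bit int per time chunk.

    Each bit marks whether the energy in one of 32 frequency bands is
    above a fraction of that chunk's peak band energy.
    """
    descriptor = []
    for chunk in np.array_split(samples.astype(np.float64), time_chunks):
        spectrum = np.abs(np.fft.rfft(chunk))
        bands = np.array([b.sum() for b in np.array_split(spectrum, freq_bins)])
        bits = bands > threshold * bands.max()
        # Pack the 32 booleans into a single integer, low band = low bit.
        descriptor.append(int(sum(1 << i for i, on in enumerate(bits) if on)))
    return descriptor
```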