The Mug of Baba Yaga, Part I: The Saturn Mug

Before there was Boba Fett, there was Baba Yaga. Baba who? Baba Yaga is the Slavic witch who lives in a hut with the legs of a chicken. I’ve always liked that image of a hut walking around on chicken legs, so I got it into my head to make a coffee mug homage to the old witch’s estimable hut. I call it the Mug of Baba Yaga.

But this was really just a wisp of an idea and nothing more. I had no clear idea how to make such a thing. Then one day I saw that Shapeways, the 3D printing company, lets you print things in porcelain. Give them a mug design and they can render a drinkworthy object. So printing the object became possible. But how about modeling it in the first place? My friend David Wey recommended that I learn Blender for this purpose. The Blender software is free, and there are plenty of YouTube videos that will teach you the basics.

Still, it was too much of a stretch for me to go directly to my chicken-legged vessel. I needed a simple test case. With a little work, I was able to create the model on the left. I call it my Saturn Mug because it’s just a mug with a ring around it. Or rather, through it. It’s an absurd test object. I made it because I wanted to know if the process would work. What would the print look like?

I exported it as an STL file and sent it off to Shapeways.


So that’s what I ordered on the left and that’s what I got back in the mail on the right. It came out pretty well, eh? You can even buy it, if you want.

I’m ready to rumble. Bring on the chicken legs.


Affective Maps Are Coming

Traffic apps like Waze use mobile phone data to determine where traffic is coagulating. StreetBump, an app developed in Boston, uses not only your phone’s location but also its accelerometer to detect violent bumps as you drive. From this, the city can locate car-eating potholes without forsaking the comforts of City Hall. SeeClickFix is an app that lets people report the potholes directly.
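The StreetBump idea can be sketched in a few lines. This is a toy version, not the app’s actual algorithm: the threshold value and the sample data below are made up for illustration. The core trick is just flagging moments when total acceleration deviates sharply from gravity, then keeping the GPS fix for each flagged moment.

```python
import math

GRAVITY = 9.81           # m/s^2
BUMP_THRESHOLD = 4.0     # deviation from gravity that counts as a jolt (arbitrary)

def is_bump(sample):
    """Flag a sample whose total acceleration deviates sharply from gravity."""
    x, y, z = sample
    magnitude = math.sqrt(x * x + y * y + z * z)
    return abs(magnitude - GRAVITY) > BUMP_THRESHOLD

def bump_locations(samples):
    """Return the GPS fixes for samples that look like pothole hits."""
    return [(lat, lon) for (accel, lat, lon) in samples if is_bump(accel)]

# Hypothetical readings: ((x, y, z) acceleration, latitude, longitude)
readings = [
    ((0.1, 0.2, 9.8), 42.3601, -71.0589),    # smooth road
    ((1.5, -2.0, 16.4), 42.3611, -71.0599),  # violent jolt: candidate pothole
]
print(bump_locations(readings))  # -> [(42.3611, -71.0599)]
```

A real app would also have to filter out speed bumps, slammed trunks, and dropped phones, which is where the crowd part matters: one jolt is noise, fifty jolts at the same spot is a pothole.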


Now here’s a story about yet another way to crowdsource a map of the city. If you’re riding your bike and you feel unsafe, you can push a special yellow button on the handlebars. The idea is that, if enough people do this, city planners will find out which roads are unsafe for cyclists. Or at least where people FEEL unsafe. I find this last application especially interesting, because it results in a map that displays how the environment makes people feel. The landscape thus revealed is subjective. Subjective, but extremely useful. It is an affective map.

In this case the affect is one-dimensional and based on explicit reporting: I feel safe/I don’t feel safe. But suppose your phone could infer your mood at any given time. If it reported it back to a Big Affective Map, we could get a sense of where the world makes people feel good and where it makes them feel bad. As with all big data problems, once you get enough data, you can smooth out the vicissitudes of any one person’s moods and see the micromood impact of every place people go. What’s your neighborhood’s Spatially Averaged Net Affect? What is the Integrated Affective Cost of your jog?
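The averaging step is simple to picture. Here is a minimal sketch of a “Spatially Averaged Net Affect” computation, under invented assumptions: mood reports arrive as (latitude, longitude, score) triples with scores from -1 (bad) to +1 (good), and we bucket them into a fixed grid and average per cell.

```python
from collections import defaultdict

CELL = 0.01  # grid cell size in degrees; an arbitrary choice

def cell_of(lat, lon):
    """Snap a coordinate to its grid cell."""
    return (round(lat / CELL), round(lon / CELL))

def net_affect(reports):
    """Average the mood scores landing in each grid cell."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for lat, lon, score in reports:
        key = cell_of(lat, lon)
        sums[key] += score
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Hypothetical reports: two mixed moods downtown, one happy jogger uptown.
reports = [
    (40.71, -74.00, 0.5),
    (40.71, -74.00, -0.1),
    (40.80, -73.95, 1.0),
]
print(net_affect(reports))
```

With enough reports per cell, individual bad days wash out and what remains is the place itself. The Integrated Affective Cost of a jog would just be this map sampled along the route and summed.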

If your phone isn’t smart enough to figure out your mood now, it will be soon enough, particularly considering the various wearables, paste-ables, and insertables that people are acquiring. Heart rate, blood pressure, galvanic skin response… soon your phone is going to know your mood better than you do.


One way to use this information is to layer it on top of routing software. There is already an app that tries to find the prettiest route from A to B (as opposed to simply the shortest or fastest). But who decides what is pretty? An affective map would be the ideal way to achieve this goal.

Affective maps are coming, and they’re bound to have endless uses. Imagine doing physical A/B testing on civic improvements. What kind of playground equipment do the kids like best? What yields more happiness per person-square foot: a baseball diamond or a soccer field? What architects should command the highest fees? And just imagine the effect on real estate prices or university ratings.

I believe that affective street maps are just the beginning. We’ll be able to apply deep affective mapping to artifacts and individuals. With the advent of digital manufacturing, it’s not a stretch to say that we’ll be able to mutate and modulate made objects continuously based on how well they perform affectively across geographic and cultural domains. If you’re from Ohio, you’ll probably like this kind of stapler. I can’t explain why, but our big computer says that it’s true. And I bet it’s right.


Thanks to Google we understand the power of a text search. All I have to do is search for “name of Alexander the Great’s horse” and I’ll quickly be reminded that it was Captain Crunch. I’m sorry, wait a second… I mean Bucephalus. But the success of textual searching has caused us to forget other kinds of searching. Google actually has a “search by image” feature, but it’s a little cumbersome to use, since you have to upload the image that you want to search for (unless you already have a URL for it).

A new site called Terrapattern introduces an extremely efficient form of searching: if Google Maps is essentially one great big image, why not just click on something and tell it to “find me more stuff that looks like this”? It’s a good example of something that’s nontrivial to think up, hard to implement, but brilliant to use.
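The shape of that search is worth spelling out. Terrapattern represents each map tile as a feature vector learned by a neural network; a click then becomes a nearest-neighbor lookup. The sketch below is a toy stand-in: the tile names and three-element vectors are invented, and real systems use thousands of dimensions and an approximate index, but the ranking idea is the same.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(query, tiles, k=2):
    """Rank tile ids by visual similarity to the query vector."""
    ranked = sorted(tiles, key=lambda t: cosine(query, t[1]), reverse=True)
    return [tile_id for tile_id, _ in ranked[:k]]

# Made-up tiles: in practice these vectors come from a trained network.
tiles = [
    ("golf_course_1", [0.9, 0.1, 0.0]),
    ("parking_lot",   [0.1, 0.8, 0.3]),
    ("golf_course_2", [0.8, 0.2, 0.1]),
]

# Click on a golf course, get back the golf courses.
print(most_similar([0.9, 0.1, 0.05], tiles))  # -> ['golf_course_1', 'golf_course_2']
```

The hard part isn’t this ranking step; it’s producing feature vectors good enough that “looks like a golf course” survives the compression, and doing the lookup fast over millions of tiles.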


Right now they’ve only got a few cities. I’m looking at the New York dataset, and it’s terrific fun to, for example, click on a golf course and have it instantly find you every golf course in the New York metropolitan area. Football fields, soccer fields, baseball fields… sports fields are especially easy and entertaining targets. But you can try things like airports and sewage treatment plants.

You have to consider that Google and Facebook can already do this with faces, but they are restrained by good taste and the prospect of legal trouble and a public relations nightmare. Still, the implications are far-reaching. Click on a house you like, and you’ll see a dozen more just like it. Click on a Mayan ruin and the computer might just find you a ruin unknown to science. Or maybe by the time you get there, some robot anthropologists will already be digging.