The Mug of Baba Yaga, Part I: The Saturn Mug

Before there was Boba Fett, there was Baba Yaga. Baba who? Baba Yaga is the Slavic witch who lives in a hut with the legs of a chicken. I’ve always liked that image of a hut walking around on chicken legs, so I got it into my head to make a coffee mug homage to the old witch’s estimable hut. I call it the Mug of Baba Yaga.

But this was really just a wisp of an idea and nothing more. I had no clear idea how to make such a thing. Then one day I saw that Shapeways, the 3D printing company, lets you print things in porcelain. Give them a mug design and they can render a drinkworthy object. So printing the object became possible. But how about modeling it in the first place? My friend David Wey recommended that I learn Blender for this purpose. The Blender software is free, and there are plenty of YouTube videos that will teach you the basics.

Still, it was too much of a stretch for me to go directly to my chicken-legged vessel. I needed a simple test case. With a little work, I was able to create the model on the left. I call it my Saturn Mug because it’s just a mug with a ring around it. Or rather, through it. It’s an absurd test object. I made it because I wanted to know if the process would work. What would the print look like?

I exported it as an STL file and sent it off to Shapeways.
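For the curious, the export step can be scripted from Blender’s Python console. Here’s a rough sketch for a Blender 2.8x-era build; the object name and file path are made up, and newer Blender versions have since renamed the STL operator.

```python
import bpy

# Select only the mug (the object name "SaturnMug" is hypothetical).
bpy.ops.object.select_all(action='DESELECT')
mug = bpy.data.objects["SaturnMug"]
mug.select_set(True)
bpy.context.view_layer.objects.active = mug

# Write an STL of just the selected object, ready to upload to Shapeways.
bpy.ops.export_mesh.stl(filepath="/tmp/saturn_mug.stl", use_selection=True)
```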

[Image: the Saturn Mug model and the printed porcelain mug]

So that’s what I ordered on the left and that’s what I got back in the mail on the right. It came out pretty well, eh? You can even buy it, if you want.

I’m ready to rumble. Bring on the chicken legs.

[Image: sketch]

Affective Maps Are Coming

Traffic apps like Waze use mobile phone data to determine where traffic is coagulating. StreetBump, an app developed in Boston, uses not only your phone’s location but also its accelerometer to detect violent bumps as you drive. From this data the city can locate car-eating potholes without forsaking the comforts of City Hall. SeeClickFix is an app that lets people report the potholes directly.
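To make the idea concrete, here’s a toy sketch of the accelerometer trick. This is not StreetBump’s actual algorithm, and the threshold and readings are invented: watch for moments when the vertical acceleration spikes well past ordinary road vibration, and log the GPS fix when it does.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    lat: float       # GPS latitude at the moment of the reading
    lon: float       # GPS longitude
    accel_z: float   # vertical acceleration in m/s^2, gravity already removed

def detect_bumps(samples, threshold=4.0):
    """Return the locations where the vertical jolt exceeds the threshold."""
    return [(s.lat, s.lon) for s in samples if abs(s.accel_z) > threshold]

ride = [
    Sample(42.3601, -71.0589, 0.3),
    Sample(42.3602, -71.0591, 5.2),   # a violent jolt: probably a pothole
    Sample(42.3603, -71.0593, 0.4),
]
print(detect_bumps(ride))  # [(42.3602, -71.0591)]
```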

[Image: cyclist safety map]

Now here’s a story about yet another way to crowdsource a map of the city. If you’re riding your bike and you feel unsafe, you can push a special yellow button on the handlebars. The idea is that, if enough people do this, city planners will find out which roads are unsafe for cyclists. Or at least where people FEEL unsafe. I find this last application especially interesting, because it results in a map that displays how the environment makes people feel. The landscape thus revealed is subjective. Subjective, but extremely useful. It is an affective map.

In this case the affect is one-dimensional and based on explicit reporting: I feel safe/I don’t feel safe. But hypothetically, let’s suppose your phone can infer your mood at any given time. If it reported that mood back to a Big Affective Map, we could get a sense of where the world makes people feel good and where it makes them feel bad. As with all big data problems, once you get enough data, you can smooth out the vicissitudes of any one person’s moods and see the micromood impact of every place people go. What’s your neighborhood’s Spatially Averaged Net Affect? What is the Integrated Affective Cost of your jog?
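Here’s a hypothetical sketch of how a Spatially Averaged Net Affect might be computed, assuming mood reports arrive as (latitude, longitude, mood) samples with mood on a -1 (miserable) to +1 (delighted) scale. Bin the reports into a coarse grid and average within each cell, so any one person’s bad day washes out in the crowd.

```python
from collections import defaultdict

def net_affect_map(reports, cell_deg=0.01):
    """reports: iterable of (lat, lon, mood); returns {grid cell: mean mood}."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for lat, lon, mood in reports:
        cell = (round(lat / cell_deg), round(lon / cell_deg))  # roughly 1 km bins
        totals[cell] += mood
        counts[cell] += 1
    return {cell: totals[cell] / counts[cell] for cell in totals}

# Three made-up reports: two from one neighborhood, one from another.
reports = [
    (40.7128, -74.0060, 0.8),
    (40.7129, -74.0061, -0.2),
    (40.7306, -73.9352, -0.6),
]
print(net_affect_map(reports))  # one cell averages to ~0.3, the other to -0.6
```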

If your phone isn’t smart enough to figure out your mood now, it will be soon enough, particularly considering the various wearables, paste-ables, and insertables that people are acquiring. Heart rate, blood pressure, galvanic skin response… soon your phone is going to know your mood better than you will.

[Image: happy map]

One way to use this information is to layer it on top of routing software. There is already an app that tries to find the prettiest route from A to B (as opposed to simply the shortest or fastest). But who decides what is pretty? An affective map would be the ideal way to achieve this goal.
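One plausible way to wire an affective map into routing (a sketch, not how any existing app does it): treat each street segment’s crowd-sourced affect as a penalty that inflates its length, then run an ordinary shortest-path search over the inflated costs. The graph, the affect scores, and the alpha knob below are all invented for illustration.

```python
import heapq

def edge_cost(length_m, affect, alpha=2.0):
    """Inflate a segment's length when its crowd-sourced affect is negative."""
    return length_m * (1.0 + alpha * max(0.0, -affect))

# graph[node] = list of (neighbor, length in meters, affect in [-1, 1])
graph = {
    "home":              [("park_path", 900, 0.7), ("highway_underpass", 600, -0.8)],
    "park_path":         [("office", 700, 0.5)],
    "highway_underpass": [("office", 500, -0.4)],
    "office":            [],
}

def happiest_route(graph, start, goal):
    """Dijkstra over affect-inflated edge costs; returns (cost, path)."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, length, affect in graph[node]:
            heapq.heappush(frontier, (cost + edge_cost(length, affect), nbr, path + [nbr]))
    return float("inf"), []

print(happiest_route(graph, "home", "office"))
# The shorter underpass route loses to the longer but happier park route.
```

With alpha at zero you get the plain shortest route; crank it up and the router starts trading distance for pleasantness.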

Affective maps are coming, and they’re bound to have endless uses. Imagine doing physical A/B testing on civic improvements. What kind of playground equipment do the kids like best? What yields more happiness per person-square-foot: a baseball diamond or a soccer field? Which architects should command the highest fees? And just imagine the effect on real estate prices or university ratings.

I believe that affective street maps are just the beginning. We’ll be able to apply deep affective mapping to artifacts and individuals. With the advent of digital manufacturing, it’s not a stretch to say that we’ll be able to mutate and modulate made objects continuously based on how well they perform affectively across geographic and cultural domains. If you’re from Ohio, you’ll probably like this kind of stapler. I can’t explain why, but our big computer says that it’s true. And I bet it’s right.

Terrapattern

Thanks to Google we understand the power of a text search. All I have to do is search for “name of Alexander the Great’s horse” and I’ll quickly be reminded that it was Captain Crunch. I’m sorry, wait a second… I mean Bucephalus. But the success of textual searching has caused us to forget other kinds of searching. Google actually has a “search by image” feature, but it’s a little cumbersome to use, since you have to upload the image that you want to search for (unless you already have a URL for it).

A new site called Terrapattern introduces an extremely efficient form of searching: if Google Maps is essentially one great big image, why not just click on something and tell it to “find me more stuff that looks like this”? It’s a good example of something that’s nontrivial to think up, hard to implement, but brilliant to use.
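The general recipe behind “find me more stuff that looks like this” is worth spelling out. This is a sketch of the standard approach, not necessarily Terrapattern’s exact pipeline: precompute a feature vector for every map tile (typically with a convolutional neural network), then answer a click by returning the tiles whose vectors sit closest to the clicked tile’s vector. The random vectors below stand in for real image features.

```python
import numpy as np

def top_k_similar(query_id, tile_features, k=5):
    """tile_features: {tile_id: 1-D feature vector}. Returns the k tiles whose
    vectors are most cosine-similar to the query (excluding the query itself)."""
    q = tile_features[query_id]
    q = q / np.linalg.norm(q)
    scores = []
    for tile_id, vec in tile_features.items():
        if tile_id == query_id:
            continue
        scores.append((float(np.dot(q, vec / np.linalg.norm(vec))), tile_id))
    return [tile_id for _, tile_id in sorted(scores, reverse=True)[:k]]

# Toy example: random "embeddings" standing in for CNN features of map tiles.
rng = np.random.default_rng(0)
tiles = {f"tile_{i}": rng.normal(size=128) for i in range(1000)}
print(top_k_similar("tile_0", tiles))
```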

[Image: Terrapattern]

Right now they’ve only got a few cities. I’m looking at the New York dataset, and it’s terrific fun to, for example, click on a golf course and have it instantly find you every golf course in the New York metropolitan area. Football fields, soccer fields, baseball fields… sports fields are especially easy and entertaining targets. But you can also try things like airports and sewage treatment plants.

Consider that Google and Facebook can already do this with faces, but they are held back by good taste, the prospect of legal trouble, and the threat of a public relations nightmare. Still, the implications are far-reaching. Click on a house you like, and you’ll see a dozen more just like it. Click on a Mayan ruin and the computer might just find you a ruin unknown to science. Or maybe by the time you get there, some robot anthropologists will already be digging.


Blog Booking with BlogBooker

[Image: book]

I began writing on this site in 1996 and continued until early 2014, a span of something like 18 years. I took a long break from 2014 to 2016, and during that time I thought it would be fun to print out everything I wrote as a book. I found a good service, called BlogBooker, to take my blog text and turn it into book form suitable for publishing with Lulu.

Then I went to Lulu and printed the whole thing in four separate books. More than a thousand pages, as it turned out. I don’t expect them to sell, but Lulu, as a standard thing, puts your book on sale. So hey, if you’re interested, you can look at them and buy them here!

It’s very satisfying to see all that thinking unrolled on paper. Think to Ink. Bit to It. Word to Flesh. It’s all there. Except for this. This part is not in the book. But it will be.

Data Pollution and Glitchy Reality

This is a terrific short video on the information pollution we will soon be facing in a mixed-reality world. Freeze the frame every now and then and admire all the details they put into this. There’s a lot of depth here.

We’re all familiar with the annoying load-time lag and stagger of a mobile website as ads and images drop into place. This video gives a sense of how it will feel in the future. If you’re augmenting reality, then reality itself will be the thing that feels glitchy and slow. A computer virus may make you feel as sick as a biological virus.

[Video: HYPER-REALITY from Keiichi Matsuda on Vimeo]

LangFocus and the Zany Basque Language

It’s a great time for bite-size learning. Our collective attention span is waning. We have little appetite for paying close attention to one thing for long periods of time. This observation isn’t novel, but most observers forget to mention something else: there’s never been a better time to be an impatient learner. My 13-year-old daughter watches almost no television, but she watches plenty of YouTube videos. She watches the goofy ones and the cat videos, but a lot of what she watches (and a lot of what I watch) are remarkably well-made informative videos created by passionate individuals. Here are a few, just off the top of my head.

If you sat down and watched all the videos made by just these four people, I promise you would learn a lot.

Here’s one that just caught my eye: LangFocus. I noticed it because I have a fascination with languages, and Basque is well-known as a bizarro language. So I landed here:

[Video: Basque – A Language of Mystery]

As I watched it, I asked myself this question: is watching this video any more informative or any faster than reading a Wikipedia article on the same topic? The script for a video like this is actually quite short. Wikipedia will deliver more data, and more quickly too if you read it top to bottom in one go. But that’s just the problem. We won’t read the article from top to bottom. We’ll read the first paragraph and then skim down the headings and look at a few pictures. The video, on the other hand, has the ability to stick us in place. It’s a better babysitter for your pathetic attention than a book. A well-crafted video can lead us through the material in such a way as to be inspirational, or at least arresting. It’s a “point of entry” for learning. If you get engaged, then you may become willing to do more strenuous forms of research. It’s a vector for learning. The mosquito that delivered the knowledge disease.

Beyond all this is the presenter’s voice and personality. Once you get to know someone like C.G.P. Grey, you appreciate more and more his odd take on the world, and you’re happy to follow him to unusual topics that you wouldn’t ordinarily seek out.

So what about Basque? Nobody has any idea where Basque came from. LangFocus auteur Paul Jorgensen steps us through the basics, but what I really like is when he carefully breaks down the grammar for us word by word.

[Image: Basque grammar breakdown]

I get to hear it pronounced and then explained. That’s good stuff!

Now I’m a fan of LangFocus, and I plan to watch a lot more of its videos. Time to get busy. Those YouTube videos aren’t going to watch themselves.