Peak Twilight

Happy Crepusculus! Tonight is the earliest sunset of the year: 4:12:02 PM. At least it is for me and everybody else at my latitude. This image, taken last week, shows the exact moment of every sunset for the week preceding and following today’s early sunset.

sunset.png

Almost every sunset falls between 4:12 and 4:13. It’s like the sun is standing still! We should give this season a special name to honor this remarkable observation. We’ll call it Sun-Still. No, how about Sun-No-Go? No. How about something fancy and Latin sounding, something derived from sun (sol) and standing still (sistere): solsistere. Sol-sister? Okay fine, let’s just shorten that to solstice. I’m sure everyone will figure out what it means.

If you already know about the solstice but are surprised that it’s happening as early as December 8th, I should point out that this is merely the earliest sunset. The latest sunrise comes in January, leaving the shortest day on December 21st where it belongs. If it seems surprising that the earliest sunset and the latest sunrise don’t coincide, you can blame the equation of time: the combination of the earth’s axial tilt and its slightly elliptical orbit makes solar noon drift later on the clock all through December, dragging sunrise and sunset along with it.
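
In case you want to check for yourself, here’s a back-of-the-envelope sketch in Python. It uses standard textbook approximations for the solar declination and the equation of time; the latitude (42 degrees north) and the day range are my assumptions, not the data behind the image above. Even with these crude formulas, the minimum should land within a day or so of December 8th.

```python
# A back-of-the-envelope sketch (textbook approximations, not an ephemeris):
# estimate the clock time of sunset near 42 degrees north and watch the
# minimum land in early December, well before the solstice.
import math

LATITUDE = 42.0  # degrees north, roughly New England (my assumption)

def declination(day):
    """Approximate solar declination (degrees) for a given day of the year."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day + 10)))

def equation_of_time(day):
    """Approximate equation of time (minutes): sundial time minus clock time."""
    b = math.radians(360.0 / 364.0 * (day - 81))
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

def sunset_clock_minutes(day):
    """Minutes after midnight (local mean time) when the sun sets."""
    phi = math.radians(LATITUDE)
    delta = math.radians(declination(day))
    half_day = math.degrees(math.acos(-math.tan(phi) * math.tan(delta))) * 4.0
    return 720.0 - equation_of_time(day) + half_day  # solar noon drifts later

earliest = min(range(320, 366), key=sunset_clock_minutes)
print("Earliest sunset falls on day-of-year", earliest)  # early December
```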

In the meantime, I’m more than happy to celebrate the slow retreat of sunset. Today may not be the actual solstice, but it’s worth observing for its own merits, so I’ve given it the name Crepusculus (more Latin: twilight = crepusculum).

Happy Crepusculus!

The Objective Function of the Good Life

I read an interview with baseball stats guru Bill James in which he said something like this: we know a lot about how to optimize play for a single team, but we don’t know how to optimize a league. For one team, the goal is simple: win the championship. Anyone good with stats, optimization, and machine learning can help you solve that problem, given enough data. But what about an entire sport? Suppose you want to optimize Major League Baseball. What do you optimize for? Do you want every single game to be a 50/50 toss-up? Probably not. Do you want one or two teams to dominate season after season? Probably not. Should you try to maximize revenue? Happy owners? Happy fans? Happy wealthy fans? Happy advertisers? It’s easy to see how any of these might have nasty consequences if sufficiently amplified.
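
Just to make the puzzle concrete, here’s a toy sketch. Every term and weight in it is invented; the point is only that “optimize the league” means nothing until you commit to some mix like this, and the optimizer will dutifully chase whatever you pick.

```python
# A toy sketch of the problem above. The terms and weights are invented;
# the same season scores well or badly depending on what you said you wanted.
import statistics

def league_objective(win_pcts, revenue, weights):
    parity = -statistics.pstdev(win_pcts)   # reward every game being a toss-up
    dynasty = max(win_pcts)                  # reward having a dominant team
    return (weights["parity"] * parity
            + weights["dynasty"] * dynasty
            + weights["revenue"] * revenue)

season = {"win_pcts": [0.62, 0.55, 0.51, 0.48, 0.44, 0.40], "revenue": 9.5}

print(league_objective(**season, weights={"parity": 10, "dynasty": 0, "revenue": 1}))
print(league_objective(**season, weights={"parity": 0, "dynasty": 10, "revenue": 1}))
```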

In general it’s easier to describe specific undesirable outcomes than universal desirable ones.

In an age of machine intelligence, this becomes increasingly important. Machines and data can help you achieve marvelous things, but only if you have a clearly defined goal, a test to tell you whether any given outcome is better than another. This puzzle is the idea behind Nick Bostrom’s paperclip maximizer. If you give a sufficiently powerful artificial intelligence the goal of making paperclips, it will chew through the galaxy grinding matter into paperclips, humanity be damned. Bostrom’s scenario sounds silly, but the idea behind it is serious. If you have the power to optimize the human condition, what are you optimizing for? Okay, so we’re not going to make paperclips. But we’re going to make something. What?

paperclip

I recently listened to a Long Now talk by Brian Christian. The topic was Algorithms to Live By. It’s really good, tackling the increasingly fraught intersection (or collision) of computer science and the real world. One gets the sense that computer science isn’t ready for it, and neither is the real world. Christian takes on several topics, but the most profound one was related to Bill James’s question about baseball: At the highest levels in life, what is the objective function of the Good? It’s clear the answer isn’t to maximize quarterly profits for big corporations. But that’s the world we’re busy building, because we know how.

We’re amazingly good at answering questions. But we’re not so good at coming up with good questions.

Alan’s Color Project on the Move

Here’s a picture of a famous handshake from May of 2008.

alan-color-project

Okay, I’ll confess right away that it’s not really a very famous handshake, but it does signify and commemorate the moment when I told my friend Alan Kennedy (that’s him on the right) that I would host his nascent color project on my website. As beers were involved, I may have said something irresponsible like “You’ll be famous!”

But in that Andy Warhol/internet sense, Alan’s Color Project has become well known. I don’t know of anything like it. Launched in October of 2008, it displays color-related idioms in many languages. Over the years it has come to contain more than 1500 idioms in 44 different languages. Está tudo azul! (Portuguese for “everything is blue,” meaning all is well.) De toekomst ziet er rooskleurig uit! (Dutch for “the future looks rosy.”)

One of the keys to its success is the fact that everyone who reads it is encouraged to contribute to it, whether by adding a new item or fixing an existing one. Add one yourself!

Now I am (ahem) tickled pink to announce that Alan has a new home for the project. He’s taking over the administration of the site on a WordPress blog here: Alan S. Kennedy’s Color/Language Project. You can reacquaint yourself with his Linguistic Facts About Color or jump straight to the great list itself.

colors

Over to you, Alan. It’s been an honor.

The Perils of (Brain) Porn

I learned a valuable lesson in college. Don’t trust scientific papers. Not overmuch, anyway. As they say in the Royal Society: Nullius in verba, which I will loosely translate as “Don’t take my word for it.”

As an undergrad, I took a psychology class with Julian Jaynes. He was the author of the book The Origin of Consciousness in the Breakdown of the Bicameral Mind. It presents an idiosyncratic theory about how consciousness first arose in the human species. But since nobody really has any clue how consciousness got started, or even what it is, there’s plenty of room for quirky ideas. Incredibly, despite being 40 years old, the book has never gone out of print.

The short version of his theory is that consciousness as we know it arose only a few thousand years ago. Before that, humans were “bicameral,” meaning one half of the human brain was giving orders to the other. As Jaynes says, “[For bicameral humans], volition came as a voice that was in the nature of a neurological command, in which the command and the action were not separated, in which to hear was to obey.” In other words, all humans used to behave like schizophrenics listening to hallucinated voices which compelled them to act.

For my class with Jaynes, I wrote a paper about schizophrenic hallucinations. This was the idea: if we could see that, during the auditory hallucinations of a schizophrenic, it actually did look like one side of the brain was “talking” and the other “listening,” that might provide some indirect evidence that Jaynes was onto something. But how could you observe such a thing? The answer, it seemed, was to use a new (at the time) brain-imaging technology called PET, or Positron Emission Tomography. PET makes beautiful color images of the brain at work. Like this.

brain-scan

Journals are suckers for beautiful color images of brains. It sure looks important, doesn’t it? Some people call this brain porn.

So anyway, I was able to dig up a paper that imaged the brains of schizophrenics as they were hearing voices. At first I had only the reference, not the full paper, so I called the author. Actually, I called his office. The author had already departed for another position, but his former officemate picked up the phone and kindly agreed to forward the paper to me. Then he said these words: “I wouldn’t trust the results of that paper if I were you.” Oh? Please go on. “I think his software is no good. The results you see could have more to do with bad programming than brain activity.” I suppose one might say the paper did demonstrate brain pathology, only in the investigator rather than the patient. But I took the point.

I was always grateful for the candor of that anonymous officemate, and I always remembered the lesson. These memories came back to me recently because a similar situation has come up with a brain-imaging technique called fMRI. Here’s a headline for you: Bug in fMRI software calls 15 years of research into question. If the researchers’ concerns prove true, as many as 40,000 papers could be invalidated. Exclamation point! And here’s some good background on the same topic from the New Yorker: Neuroscience Fiction.
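
I can’t speak to the specifics of that bug, but the underlying statistical hazard is easy to demonstrate. Here’s a toy simulation, which has nothing to do with the real fMRI pipeline: threshold enough voxels of pure noise without correcting for multiple comparisons and you will always find something that looks like activation.

```python
# A toy illustration of the hazard, not the actual fMRI bug: with enough
# voxels and no correction for multiple comparisons, pure noise yields
# "activation," pretty picture included.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50_000                      # a modest brain volume
noise = rng.standard_normal(n_voxels)  # no real signal anywhere, by construction

threshold = 3.1                        # roughly p < 0.001, one-tailed
false_hits = int(np.sum(noise > threshold))

print("'Active' voxels found in pure noise:", false_hits)
# Expect on the order of 50 of them: enough to draw a colorful blob or two.
```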

The march of science is, as they say in the business, nonmonotonic. Beware of pretty pictures and obfuscated code.

SoHo, Soho, and So On

During the 1980s, my sister lived in the East Village in New York. When I visited, she would fill me in on cool Manhattan stuff like the names of the various neighborhoods. This is Chelsea, there’s Tribeca, and here’s SoHo. SoHo, she informed me, stands for South of Houston Street. Oh, and remember that Houston is pronounced more like the building where you live than the city in Texas. Good to know. But I also know that Americans have the habit of naming things after places in Europe. And Soho also happens to be a neighborhood in London. SoHo sounded suspiciously like a legitimizing back-construction.

Name recycling is rampant in the United States. Generally speaking it has two roots: homesickness and marketing.

Where I live in New England (New + “England”), recycled European place names are mostly due to straight-up homesickness. Let’s imagine you’re originally from Cambridge in England. So you call your squalid new colonial outpost Cambridge, even though there is no river Cam and no bridge over it. You miss your old home terribly, and the nostalgic gesture is meant to cheer you up. Maybe it does, maybe it doesn’t. It hardly matters. You’ll be dead from a fever in a week anyway.

athens

The other kind of renaming is lamer still. Let’s imagine you’re not originally from Athens, Greece. In fact, you’ve never even been there. You read about it in a book, and you think appropriating its name might help legitimize your otherwise dismal and insubstantial hamlet in, say, north Georgia. In marketing terms, this is simple brand theft, akin to calling your local fizzy wine “champagne.” Wikipedia tells me there are no fewer than 21 towns named Athens in America. The phrase “the Athens of America,” while poetic, is alarmingly nonspecific.

Does any of this apply to SoHo? That is, SoHo, the neighborhood in New + “York”. Was it a genuinely new construction? Or a back reference to Soho in London? This topic is treated by the always entertaining 99 Percent Invisible: The SoHo Effect. It appears to be a new construction originating with an urban planner named Chester Rapkin. But I’d bet a lot of money that it caught on because people were familiar with the London neighborhood of the same name. Whether intentional or not, it looks like a case of incidental nominal legitimization.

This brings us to one of the more interesting things about word and phrase etymology. Naming is thermodynamic. It occurs, if it occurs at all, in many brains. And each brain has its own reason for swallowing and digesting the name it has been fed. For instance, in researching the many Berlins of America, I came across a marvelous example of ambiguous nomenclature. Of the two founders of Berlin, Ohio, one came from Berlin, Germany, and the other came from Berlin, Pennsylvania. Which city gave Berlin, Ohio, its name? It depends on whom you ask. Both answers are true. The founders probably agreed on the name only because each had a different Berlin in mind. This sort of thing happens all the time.

Naming is a funny business. You can sometimes work out how a name got started, but you can never say for certain why it spread.

The Mug of Baba Yaga, Part I: The Saturn Mug

Before there was Boba Fett, there was Baba Yaga. Baba who? Baba Yaga is the Slavic witch who lives in a hut with the legs of a chicken. I’ve always liked that image of a hut walking around on chicken legs, so I got it into my head to make a coffee mug homage to the old witch’s estimable hut. I call it the Mug of Baba Yaga.

But this was really just a wisp of an idea and nothing more. I had no clear idea how to make such a thing. Then one day I saw that Shapeways, the 3D printing company, lets you print things in porcelain. Give them a mug design and they can render a drinkworthy object. So printing the object became possible. But how about modeling it in the first place? My friend David Wey recommended that I learn Blender for this purpose. The Blender software is free, and there are plenty of YouTube videos that will teach you the basics.

Still, it was too much of a stretch for me to go directly to my chicken-legged vessel. I needed a simple test case. With a little work, I was able to create the model on the left. I call it my Saturn Mug because it’s just a mug with a ring around it. Or rather, through it. It’s an absurd test object. I made it because I wanted to know if the process would work. What would the print look like?

I exported it as an STL file and sent it off to Shapeways.
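
If you’re curious what the modeling step looks like under the hood, here’s a rough sketch using Blender’s Python API. The dimensions are guesses, I built the real model by hand rather than by script, and the exact name of the STL export operator varies between Blender versions.

```python
# A sketch only: rough Saturn Mug geometry via Blender's Python API (bpy).
# Dimensions are guesses; the real model was built by hand. Run inside Blender.
import bpy

# The mug body: a plain cylinder, about 8 cm wide and 10 cm tall.
bpy.ops.mesh.primitive_cylinder_add(radius=0.04, depth=0.10, location=(0, 0, 0))

# The ring, passing through the body like Saturn's.
bpy.ops.mesh.primitive_torus_add(major_radius=0.07, minor_radius=0.005,
                                 location=(0, 0, 0.02))

# Export the scene as an STL for the 3D-printing service.
# (Operator name may differ in newer Blender releases.)
bpy.ops.export_mesh.stl(filepath="saturn_mug.stl")
```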

saturn-mug

So that’s what I ordered on the left and that’s what I got back in the mail on the right. It came out pretty well, eh? You can even buy it, if you want.

I’m ready to rumble. Bring on the chicken legs.

sketch

Affective Maps Are Coming

Traffic apps like Waze use mobile phone data to determine where traffic is coagulating. StreetBump, an app developed in Boston, uses not only your phone’s location but also its accelerometer to detect violent bumps as you drive. From this, the city can locate car-eating potholes without forsaking the comforts of City Hall. SeeClickFix is an app that lets people report the potholes directly.
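
The detection side is conceptually simple. Here’s a sketch of the general idea, not StreetBump’s actual algorithm: log the location whenever the vertical accelerometer reading jumps well beyond gravity. The sample data and the threshold are invented for illustration.

```python
# A sketch of the general idea, not StreetBump's actual algorithm: flag the
# location whenever the vertical acceleration jumps well beyond gravity.

GRAVITY = 9.8     # m/s^2
THRESHOLD = 4.0   # how far beyond gravity counts as a "violent bump" (made up)

# (latitude, longitude, vertical acceleration in m/s^2) samples while driving
samples = [
    (42.3601, -71.0589, 9.7),
    (42.3602, -71.0591, 15.2),   # ouch
    (42.3603, -71.0593, 9.9),
    (42.3604, -71.0595, 10.4),
]

suspected_potholes = [
    (lat, lon) for lat, lon, accel in samples
    if abs(accel - GRAVITY) > THRESHOLD
]
print(suspected_potholes)  # -> [(42.3602, -71.0591)]
```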

cyclist-safety-map

Now here’s a story about yet another way to crowdsource a map of the city. If you’re riding your bike and you feel unsafe, you can push a special yellow button on the handlebars. The idea is that, if enough people do this, city planners will find out which roads are unsafe for cyclists. Or at least where people FEEL unsafe. I find this last application especially interesting, because it results in a map that displays how the environment makes people feel. The landscape thus revealed is subjective. Subjective, but extremely useful. It is an affective map.

In this case the affect is one-dimensional and based on explicit reporting: I feel safe/I don’t feel safe. But hypothetically, let’s suppose your phone can infer your mood at any given time. If it reported that mood back to a Big Affective Map, we could get a sense of where the world makes people feel good and where it makes them feel bad. As with all big-data problems, once you get enough data, you can smooth out the vicissitudes of any one person’s moods and see the micromood impact of every place people go. What’s your neighborhood’s Spatially Averaged Net Affect? What is the Integrated Affective Cost of your jog?
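
Here’s a toy sketch of what a Spatially Averaged Net Affect might look like in code. The reports, the grid size, and the mood scale are all invented; a real system would also have to worry about sampling bias and privacy.

```python
# A toy sketch of a "Spatially Averaged Net Affect" map: bin geotagged mood
# reports into a coarse grid and average them. All data here is invented.
from collections import defaultdict

reports = [  # (latitude, longitude, mood from -1 bad to +1 good)
    (42.3601, -71.0589, +0.6),
    (42.3605, -71.0601, +0.2),
    (42.3550, -71.0700, -0.4),
    (42.3552, -71.0698, -0.7),
]

CELL = 0.005  # grid cell size in degrees, roughly half a kilometer

totals = defaultdict(lambda: [0.0, 0])
for lat, lon, mood in reports:
    cell = (round(lat / CELL), round(lon / CELL))
    totals[cell][0] += mood
    totals[cell][1] += 1

for cell, (mood_sum, count) in totals.items():
    print(cell, "net affect:", round(mood_sum / count, 2))
```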

If your phone isn’t smart enough to figure out your mood now, it will be soon enough, particularly considering the various wearables, paste-ables, and insertables that people are acquiring. Heart rate, blood pressure, galvanic skin response… soon your phone is going to know your mood better than you will.

happy-map

One way to use this information is to layer it on top of routing software. There is already an app that tries to find the prettiest route from A to B (as opposed to simply the shortest or fastest). But who decides what is pretty? An affective map would be the ideal way to decide.
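
Here’s a sketch of how that layering might work: ordinary shortest-path search, with each street’s cost blending its length and how it makes people feel. The toy graph, the affect scores, and the blending weight are all invented.

```python
# A sketch of affective routing: plain Dijkstra shortest-path search, but each
# street's cost blends its length with how bad it makes people feel.
import heapq

# node -> list of (neighbor, length in km, unpleasantness from 0 to 1)
streets = {
    "A": [("B", 1.0, 0.1), ("C", 0.8, 0.9)],
    "B": [("D", 1.0, 0.1)],
    "C": [("D", 0.7, 0.8)],
    "D": [],
}

AFFECT_WEIGHT = 2.0  # extra "kilometers" charged for a fully unpleasant km

def nicest_route(start, goal):
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length, unpleasant in streets[node]:
            blended = length * (1.0 + AFFECT_WEIGHT * unpleasant)
            heapq.heappush(frontier, (cost + blended, nxt, path + [nxt]))
    return None

print(nicest_route("A", "D"))  # prefers the longer but nicer A-B-D route
```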

Affective maps are coming, and they’re bound to have endless uses. Imagine doing physical A/B testing on civic improvements. What kind of playground equipment do the kids like best? What yields more happiness per person-square-foot: a baseball diamond or a soccer field? Which architects should command the highest fees? And just imagine the effect on real estate prices or university ratings.

I believe that affective street maps are just the beginning. We’ll be able to apply deep affective mapping to artifacts and individuals. With the advent of digital manufacturing, it’s not a stretch to say that we’ll be able to mutate and modulate made objects continuously based on how well they perform affectively across geographic and cultural domains. If you’re from Ohio, you’ll probably like this kind of stapler. I can’t explain why, but our big computer says that it’s true. And I bet it’s right.