Wrong way down a one-way street.

I actually keep a list of topics to blog about, because it’s a lot easier to think of something interesting than it is to painstakingly convert that thought into prose. And I’m lazy.

Anyway, about six months ago, I thought of the following topic. I procrastinated like crazy on it (like I have the other dozen or so topics I have buffered up), until just last week, when I read an article that described exactly the phenomenon I wanted to write about. Good thing… saved me a lot of work. So here’s the topic:

Imagine you’re good friends with someone. You’ve known each other a long time, and you’ve had ample time and evidence to come to believe that your friend is a good person.

Then one day your friend does something mean. Not anything earth-shattering, but definitely something that, if observed in isolation, you’d attribute to a mean person. And there’s the crux of it. Do you

  a. weigh the action against the body of evidence you’ve accrued and chalk it up to an anomaly. Your underlying impression of your friend is only marginally altered for the worse.
  b. assume that your friend, like most people, puts on a public persona, and view the action as an insight past this persona and into his or her inner, true personality. Your underlying impression of your friend is radically altered for the worse.

I’ve always felt that people disproportionately opt for choice (b). And this article my brother sent me confirms this feeling:

With warmth, the inverse applies. Someone who does something nice, like helping an elderly pedestrian across an intersection, is not necessarily seen as a generally nice person. But a single instance of negative-warmth behavior—kicking a dog, say—is likely to irredeemably categorize the perpetrator as a cold person.

In other words, people feel that a single positive-competent, or negative-warmth, act reveals character. “You can purposely present yourself as warm—you can control that,” Cuddy explains. “But we feel that competence can’t be faked. So positive competence is seen as more diagnostic. On the other hand, being a jerk—well, we’re not very forgiving of people who act that way.”

I guess I understand the rationale, but it makes one huge assumption: that people are drastically different on the inside from how they act on the outside. Furthermore, I’d wager that if, before the friend committed the act, you’d asked someone who chose (b) whether that was the case — that he didn’t really know his friend’s “inner” personality — he’d have vehemently denied it.

I prefer to believe the alternative, that any given person has some variation in his behavior (chemical, situational, whatever), and occasionally you’ll see different aspects of it. In the end it just makes sense to consider the totality of your friend’s behavior. Anything in isolation is simply insufficient. But am I still subject to the phenomenon described in the article, even though I am aware of it?


I really want to do a song-a-day blog kind of thing. These are songs I own on CD that I’ve ripped. What’s a good medium? (Not LJ, I don’t think.) Buzz/Facebook? If I embed a flash player (even possible in those media?) is that more legal than just linking to a file? Help me…

Posted in Uncategorized | 2 Comments

Another one

Wow. I just found out that Aimee Mann provides the female vocals on Rush’s “Time Stand Still”. Given how many times I’ve heard (a) that song and (b) her albums, you’d think I’d have figured it out on my own. But I didn’t. I’m still surprised, hearing it now.

Posted in Uncategorized | Comments Off on Another one

Weather Permitting

I’ve been to a lot of weddings in my day — 27 by last count.* The one we went to last weekend might have had the most distinctive atmosphere, literally. It was on Hang Glider Site #3 at Mount Tam, and the conditions were pretty much whiteout fog. I think we were standing on an outcropping overlooking the sea, but who knows. Anyway, it was super awesome. Congrats to Dave and Natalie!

A while back, I discussed The Meaning of Liff, in which real place names are associated with “things that there aren’t any words for yet”, and suggested some additional examples. Here are some more:

LUCKNOW (n.)
The experience, marked by a quiet elation, of reading a New Yorker article about a random topic and realizing that it is going to be ridiculously interesting.

NOME (n.)
The combination of irritation and anger that results when one fills a bowl with cereal only to realize that one is out of milk.

HELL’S KITCHEN (n.)
Where such an event takes place.

Okay, that last one’s a cheap shot, but I couldn’t resist.

—–
* Unfortunately, we’re at the point in our lives where we sadly can’t attend all of them anymore. Which reminds me: you know you’ve grown up when (a) the thing you lack is no longer opportunity or money, but time; and (b) you have no desire to be famous.

Posted in Uncategorized | 4 Comments

Is Catfish real?

Casey Affleck has already admitted that I’m Still Here, his bizarre “documentary” about Joaquin Phoenix, was staged, essentially a self-destructive, two-year-long performance piece.

What about Catfish?

The filmmakers insist that it is. Discerning viewers beg to differ.

Is it life imitating art, or is it merely art imitating art?

I haven’t seen Catfish, and don’t intend to. But hey, it’s the internet, so I believe that makes me fully qualified to offer an opinion: the footage is, for the most part, real, but the filmmakers knew what was going on early on and manipulated the events to their liking.

Does it matter whether it’s real? Yes, it does. All documentaries expose the viewpoints of their creators to some extent, but my guess is that this one crosses far enough over the line of manipulation to make it worthless.

Posted in Uncategorized | Comments Off on Is Catfish real?

Yosemite 2010

A few weekends ago my college roommate James came out to visit. We went to Yosemite for four days. It was pretty sweet. Here are some pictures; I picked ones that I liked, so they’re not necessarily comprehensive, and some might look very similar to each other.

On Saturday, we did a 14-mile hike from the valley floor up to Glacier Point, then down the Panorama Trail to Nevada and Vernal Falls, and back down to the valley. The first 5 miles were pretty strenuous, with a 3200′ altitude gain, but after that it was very pleasant and quite beautiful. Probably the most scenic hike in Yosemite. The domes are immense. It’s hard to appreciate their size until you happen to notice a tiny-looking tree or three on the top to put things in perspective.

On Sunday, we checked out a number of spots on Tioga Rd: Olmsted Point, Tenaya Lake, and Tuolumne Meadows.

Because we didn’t get permits in advance, we waited till Monday for our most challenging hike, up to Half Dome. It was about 17 miles roundtrip, with a 4900′ altitude gain. We made great time, about 3.5 hours, up to the base of Half Dome (the top of Sub Dome). But the cables up Half Dome were crowded, and it took us an hour to get up to the top. By the way, I last did Half Dome 8 years ago, and in the intervening time I had forgotten how intense the cables are. Even now it doesn’t seem that bad… but when you’re actually on the cables, leveraging yourself up and down steep, steep granite rendered slick by thousands of human feet, your adrenaline is pumping.

Third photo here courtesy of James. On the last day, we checked out Mariposa Grove, home of some huge sequoias. That last photo is of a staghorn lichen (thanks Chris!) on a sequoia.

Yosemite is a pretty dangerous place, especially in bad weather. Luckily for us, the weather was perfect. If you want to read about some amazing search-and-rescues, though, check out the Friends of YOSAR rescue page. Read some harrowing stories here, here, here, and here. Oh, and there’s a good photo showing the steepness of the cable ascent (this shot is taken pointing almost straight down; note, if you can, how small the people and supplies at the bottom are, and imagine climbing it without a carabiner or anything else to tie you on).

Posted in Uncategorized | 2 Comments

Why has laptop technology stagnated?

I used to use a laptop every day. In my mind, the most important aspects of a laptop are: screen size and resolution; keyboard quality; thickness and weight; and battery life.

If you asked someone today to choose a top-of-the-line laptop, he’d likely pick a MacBook Pro, the 15” model of which has the following key features:

  • 15.4” screen with 1440×900 resolution
  • Excellent keyboard
  • 0.95” thick
  • 5.6 lbs
  • 8-9 hour claimed battery life

Surely an impressive machine. However, in 2003 I got an IBM Thinkpad T40, which has the following corresponding specs:

  • 14.1” screen with 1400×1050 resolution
  • Amazing keyboard
  • 1.0” thick
  • 4.9 lbs
  • 4:40 real-world battery life

It’s remarkably similar, and actually better on several counts. Sure,

  • The processor is slower, but it was state-of-the-art at the time, and was probably more power-hungry.
  • The screen is smaller, but it’s higher resolution. Also, the 13.3” MacBook Pro weighs 4.5 lbs, so 14.1”/4.9 is still very competitive.
  • “Macs have better build quality, blah blah”. Does not appear to be the case, at least for this example. The T40 in question was used every day as my primary computer for years. It’s been dropped, while open and running, at least 5 times, with no ill effects. In fact, it is still running, seven years on, as a streaming Netflix server for our TV. It won’t die.

What gives? I know battery technology hasn’t improved much. But what about everything else? Shouldn’t new laptops be half the weight, or have fold-up “retina display” screens, or something?

Okay, here’s the most advanced laptop I could find with an optical drive… the Lenovo ThinkPad T410s.

  • 14” screen with 1440×900 resolution
  • Apparently great keyboard
  • 0.83” thick
  • 3.9 lbs
  • 4:10 real-world battery life

I guess that’s decent. But still. 7 years ago, I listened to music on a portable abacus, and now I have it piped directly into the aural processing unit of my brain. Why can’t laptops catch up?

Edit: In responding to a comment, I came across a better laptop with an optical drive, the ThinkPad X301:

  • 13.3” screen with 1440×900 resolution
  • Apparently great keyboard
  • 0.7” thick
  • 2.9-3.3 lbs
  • 3:40 real-world battery life

That’s not bad. Too bad it starts at like $2,500.

Posted in Uncategorized | 11 Comments

A layperson’s guide to P ?= NP

You might have heard whispers about a computer scientist posting a game-changing paper on the internet a few weeks ago. Vinay Deolalikar’s attempt at a P != NP proof might net him a million dollars.

More likely, you’ve never heard about P and NP and have no idea what I’m talking about. If you’re curious at all about one of the most fundamental problems in computer science, and why proving a simple statement is worth $1M to anybody, or just want to understand a bit about how CS people think about computers, here’s a layperson’s guide to P ?= NP.

How hard is a problem?
Let’s say you have to look up a name in the phone book. You might start at the beginning of the book and thumb through the letters until you find the first letter of the name you’re looking for, and repeat for the second letter amongst the names that start with the first, and so on.

“Let’s see… Smith. [thumbs through to S] A, B, C, …. Q, R, S… okay, S. [now thumbs through S to Sm] Sa, Sb, … Sl, Sm… great. Sma… “

Or you might flip the book open to the middle and see if the name you’re looking for comes before or after the names on the open page, and focus on the first or second half of the book, respectively. You’d repeat this halving procedure, getting down to smaller and smaller sections of the phone book, until you’ve found the name you’re looking for.

“Let’s see… Smith… [opens book to Nelson] that’s after Nelson… [opens halfway between Nelson and the end, gets Talbot] before Talbot, so between Nelson and Talbot… [halfway between Nelson and Talbot: Quinn] after Quinn…”

Which way is easier? You may have an intuitive idea. Imagine for a minute that your job is to look up names in a phone book all day, as fast as possible. Now you care a lot more about which of these two methods, or algorithms, you should use. You’d probably want something better than a rough guess; you’d want a quantified understanding of difficulty.
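If you want to play with this yourself, here’s a minimal sketch of the two strategies in Python (my own toy code; the phone book is just a sorted list of names, and each function also counts how many names it examines):

```python
def linear_lookup(names, target):
    # thumb through the book one name at a time
    steps = 0
    for i, name in enumerate(names):
        steps += 1
        if name == target:
            return i, steps
    return -1, steps

def binary_lookup(names, target):
    # repeatedly halve the section of the book that could hold the name
    steps = 0
    lo, hi = 0, len(names) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        steps += 1
        if names[mid] == target:
            return mid, steps
        elif names[mid] < target:
            lo = mid + 1    # the name comes after this page
        else:
            hi = mid - 1    # the name comes before this page
    return -1, steps

book = sorted(["Adams", "Baker", "Nelson", "Quinn", "Smith", "Talbot"])
print(linear_lookup(book, "Smith"))   # (4, 5): found at index 4 after 5 looks
print(binary_lookup(book, "Smith"))   # (4, 2): found at index 4 after 2 looks
```

Run both on bigger and bigger books and compare the step counts.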

There is a field of computer science called complexity theory that deals with classifying problems by difficulty. Classification by difficulty is very useful. For instance,

  • If you have two proposed algorithms for solving the same problem (as with the phone book lookup above), you can determine which is more efficient, and thus which one to use.
  • If you have a fixed amount of time to prime your thrusters before the space shuttle launches, or else it’ll blow up, you may want to prove that the priming algorithm will take no more than that fixed amount of time to run.
  • If you are searching for an efficient solution to a difficult problem, you may be able to expose the problem as something that is simply too hard to solve by traditional means – one that could literally take longer than the age of the universe to solve, depending on the parameters. Knowing this could save you a lot of time fruitlessly looking for a solution.

What does difficulty mean when solving a problem? If you’re a human, you might think it’s a measure of hard work, intuition, and insight. Of these three, computers are really only good at hard work, so computer scientists measure difficulty in two primary ways, time and space.

Say you’re trying to solve a problem, and you’re using a whiteboard for your step-by-step scratch work. Time represents the number of times you write or erase something on the whiteboard as you work through your solution, and space represents the size of the whiteboard you’d need to fit in all of your scratch work (assuming that you could erase work you don’t need anymore along the way). Analogously, for a computer, time is the number of instructions it executes, and space is the amount of memory (RAM) it needs.

For this essay, we’ll only deal with time.[1] How do we measure how much time a given algorithm takes? There are a couple of different ways to measure time: what’s the fastest the algorithm could run? The average time it takes over different inputs? For simplicity, we’ll focus on the worst-case time: the longest the algorithm could take to run. That way, no matter what parameters we are given, our thrusters will be primed before the space shuttle launches.

And then there’s the issue of size. Looking up a name in the Nome, Alaska phone book takes less time than looking one up in the New York City phone book, right? The time it takes to solve a problem with a given algorithm depends on the size of the input to the problem (what you are asked to do – look a name up in Nome, or NYC?). If the input is a list of words, the size is the number of words. If the input is a single number, the size is the number of digits it has. And so on.

Let’s take a simple example. Say you want to sort a list of n numbers from smallest to biggest:

4  13  21  5  6  9  18  12  7  0

(Here, n is 10, but imagine that it could be anything.) How might you sort the numbers? Here’s one way: scan through the list and find the smallest number. In this case, that’s 0. Take that number out of the list and put it aside. Now, scan through the remaining n-1 numbers to find the next smallest number – in this case, 4. Put that next to the 0. Keep repeating this process, removing one number at a time, until you have no numbers left in the list, and a sorted pile of numbers to the side. Perfect.
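Here’s that procedure as a short Python sketch, a direct transcription of the steps above:

```python
def selection_sort(numbers):
    remaining = list(numbers)   # work on a copy of the list
    sorted_pile = []
    while remaining:
        smallest = min(remaining)      # scan for the smallest number
        remaining.remove(smallest)     # take it out of the list
        sorted_pile.append(smallest)   # put it aside with the others
    return sorted_pile

print(selection_sort([4, 13, 21, 5, 6, 9, 18, 12, 7, 0]))
# [0, 4, 5, 6, 7, 9, 12, 13, 18, 21]
```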

How much time did that take? Well, you looked through the n numbers and found the smallest one, and pulled it aside; you repeated this process for each of the n numbers in the list. So you took n*n = n^2 steps to sort the list.[2] We say that sorting numbers with this algorithm, called selection sort, takes n^2 time. If there are 100 items in the list, it’ll take around 100*100 = 10,000 time steps to sort them. That’s pretty neat! You give me an input – the list of numbers to sort – and I can give you a limit on how long it’ll take me to sort them, without having to do the actual sorting at all.

Here’s another sorting algorithm. Given a list of n numbers, we permute them (jumble them up) in a random order and see if this new order is sorted or not. If it is, great! We’re done. If not, we jumble them up again in a new random order and repeat. This algorithm will eventually find the right sorted order. But how long will it take? Well, to permute the numbers, we select one of the n to be the first number in the new list. Then of the remaining n-1 numbers, we select another, and so on. The total number of permutations is n*(n-1)*(n-2)*…*1, otherwise written as n! (factorial). So in the worst case, when we come across the right sorted order last, this permutation sort algorithm takes n! time to sort. If there are 100 items in the list, that’s 100!, or 100*99*98*…, which is big – something like 1 followed by 158 zeros.
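A sketch of permutation sort, too. One hedge: random jumbling, as described, could revisit the same order twice, so this version steps through the n! orderings systematically instead, which matches the worst-case analysis above:

```python
from itertools import permutations

def is_sorted(numbers):
    return all(a <= b for a, b in zip(numbers, numbers[1:]))

def permutation_sort(numbers):
    # try the n! orderings one at a time until one of them is sorted
    for candidate in permutations(numbers):
        if is_sorted(candidate):
            return list(candidate)

print(permutation_sort([4, 13, 21, 5, 0]))   # fine for 5 numbers; hopeless for 100
```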

So you just saw two algorithms for sorting numbers, with vastly different execution times. How difficult, then, is the problem of sorting? We say that a problem is as difficult as the most efficient algorithm we know of to solve it. So in some sense, it doesn’t matter that permutation sort is really slow; selection sort does a better job, so sorting is at worst an n^2 problem. This is a crucial point: if the only algorithm you know of is permutation sort, you think sorting is a really hard problem. When you discover selection sort, suddenly sorting looks easy and life is good. Our classification of problems, then, is often limited by our own creativity and knowledge. There are many problems out there that might be easier – that might be solved with more efficient algorithms – if only we could come up with these better algorithms to solve them.[3]

In fact, we know of even faster algorithms that can sort in n*log(n) time, which is faster than n^2, so sorting really takes no more than n*log(n) time in the general case.
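One classic n*log(n) algorithm is merge sort: split the list in half, sort each half the same way, and merge the two sorted halves. A minimal sketch:

```python
def merge_sort(numbers):
    if len(numbers) <= 1:
        return list(numbers)
    mid = len(numbers) // 2
    left = merge_sort(numbers[:mid])
    right = merge_sort(numbers[mid:])
    merged = []
    while left and right:
        # repeatedly take the smaller front item of the two sorted halves
        # (popping from the front is wasteful, but fine for a sketch)
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(merge_sort([4, 13, 21, 5, 6, 9, 18, 12, 7, 0]))
```

The halving is where the log(n) comes from, for the same reason the halving method cut down the phone book search.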

What are P and NP?
So far we’ve shown how problems can be classified by the amount of time it takes to solve them with the most efficient algorithms available. From here we can provide a very simple explanation for P:

P is the set of problems that can be solved in polynomial time.

A polynomial looks like n^k: n, n^2, n^100, etc. So sorting, which takes n^2 time, is in P. So are searching, greatest common divising, zip data compression, multiplying, primality checking,[4] and a host of other common problems. In fact, with few exceptions, the world of computers is built on algorithms that run in polynomial time. In addition to simple stuff like sorting, generic algorithmic sledgehammers like dynamic and linear programming, which solve problems from spell checking to resource allocation, are in P.

Other problems are exponential: their times look like 2^n, 100^n, and so on.

We consider polynomial problems to be easy: if n is 100, then n^2 is 10,000, which isn’t too bad. Exponential problems, on the other hand, are very hard: if n is 100, then 2^n is 1 followed by 30 zeros. So if you have a problem, you’d better hope it’s in P and not exponential, or something even worse.[5] In a broad sense, problems in P are considered practical, and problems harder than P are cause for concern.
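Two lines of Python make the gap vivid:

```python
n = 100
print(n ** 2)   # 10000: easy
print(2 ** n)   # 1267650600228229401496703205376: roughly 10^30 steps
```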

NP is a little more nuanced.

NP is the set of problems that can be checked in polynomial time.

By checking I mean: if I pose a problem and you provide a solution, I can check if your solution is correct relatively quickly, even if solving the problem took a lot of time.

For instance, if I give you a really complicated maze, and you give it back to me with a line drawn from start to finish, I don’t have to solve the whole maze myself to verify your answer; I can just check that your line goes from start to finish without crossing over any other lines, which is a lot easier.

Or: if I ask for the prime factors of a large number, and you give me a list of supposed factors back, it’s easy for me to check if you’re right. If the numbers you give me are prime (which I can check in P), and when I multiply them (which I can also do in P) I get my original number, you’re right. Otherwise you’re wrong.

In each of these cases, I don’t have to do the work of solving the problem myself. I can perform a simple check of your answer to ensure that it is correct.
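Here’s that factoring check as a Python sketch, with naive trial division standing in for the fancier polynomial-time primality tests (see footnote [4]):

```python
def is_prime(k):
    # trial division: simple but slow; real polynomial-time tests are cleverer
    if k < 2:
        return False
    return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

def check_factorization(number, claimed_factors):
    if not all(is_prime(f) for f in claimed_factors):
        return False               # every claimed factor must be prime
    product = 1
    for f in claimed_factors:
        product *= f
    return product == number       # ...and they must multiply back to the number

print(check_factorization(221, [13, 17]))   # True: you found my factors
print(check_factorization(221, [13, 16]))   # False: 16 isn't even prime
```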

There are many other interesting problems in NP, including:

  • Given a bunch of integers, can you pick some of them that together add up to 0?
  • Given a map of roads and cities, is there a trip that visits all the cities exactly once that is shorter than X miles long?

Both of these problems are very easily checkable once you have a potential solution in hand.
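The first one is known as subset sum; a quick Python sketch shows the gap between solving by brute force and checking (the input list is made up for illustration):

```python
from itertools import combinations

def solve_subset_sum(numbers):
    # brute force: try every non-empty subset; there are about 2^n of them
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return subset
    return None

def check_subset_sum(numbers, proposed):
    # the fast check: the picked numbers come from the list (ignoring
    # repeated values, for simplicity) and they add up to 0
    return (len(proposed) > 0 and sum(proposed) == 0
            and all(x in numbers for x in proposed))

nums = [3, -7, 12, 5, -1, 9, -4]
print(solve_subset_sum(nums))                 # (5, -1, -4), found the slow way
print(check_subset_sum(nums, (5, -1, -4)))    # True, verified the fast way
```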

Note that NP specifies nothing about how hard it is to actually solve one of its problems; some problems might be easy to solve and others hard. We really don’t know. In fact, it is this unknown — how hard it is to solve problems that are easily checkable — that is at the heart of P ?= NP.

What does “P ?= NP” mean?
Read “P ?= NP” as “Does P equal NP?” Are the problems in P the same as the problems in NP? In other words, is every problem that is checkable in polynomial time also solvable in polynomial time?[6]

Here’s another way to phrase the question. For certain problems in NP, the best known algorithms involve exhaustively enumerating every possible combination of inputs – an exponential task – to find a solution (much like permutation sort, above), even though checking that solution is ultimately easy to do. P ?= NP asks “Are there some problems for which brute-force (exhaustive) searching for a solution is the best possible algorithm, even if the solution is easy to check?” In other words, are some problems so hard that brute-force is the best we can do?

You can begin to see why this is a hard question. You have to show that either:

  1. All problems that are quickly checkable are quickly solvable (P = NP). And your proof better hold for problems that we haven’t even thought up yet.
  2. A single problem that’s quickly checkable is not quickly solvable (P != NP). Sure, you can easily show that the best known algorithm for this problem is not in P, but you also have to show that no algorithm for solving the problem in P can possibly exist, whether we know of it yet or not.

We know of many problems in NP that may or may not be in P. (I’ll describe one in the next section.) The solutions to these problems can be checked in polynomial time, but arriving at the solutions using the best algorithms we’ve got takes exponential time. We simply do not know whether polynomial time algorithms exist to solve these problems. No one has invented (or discovered) one yet, and we don’t know whether that’s because none exist, or because we just haven’t been smart enough to find one.

If it were ever shown that P = NP, the world would be a different place. Many problems that we consider totally out of reach could be solved. Sure, a few that we hope are hard (such as ones that would allow a hacker to decode your credit card number when you buy something on Amazon) would become simple, which would be bad. But on the whole, it would be an amazing broadening of powers for computers everywhere.

Life couldn’t be that easy, could it? Intuitively, most theorists believe that P != NP: that some problems are hard to solve, even if they are easy to check. But no one has been able to show it yet. For the record, Deolalikar claims to show that in fact P != NP. The current consensus is that his proof is incorrect, even though the underlying claim is most likely true.

And that’s P ?= NP.

(Feel free to stop reading here.)

NP-completeness
Of course, there’s more to the story than that. Thousands of papers have been published about P and NP. One of the more intriguing discoveries is the notion of NP-completeness. A problem is NP-complete if it is in NP (checkable in P) and “as hard as” any other problem in NP.

What does that mean? It’s pretty astounding. If I have an NP-complete problem, and you give me any other problem in NP, I can convert its inputs[7] into a formulation of my problem and then solve that problem.

So if you give me an NP problem about scheduling traffic, and I have an NP-complete problem that finds cliques of friends amongst all the relationships on Facebook, I can solve your problem by somehow converting your arrangement of cars into a social graph. These conversions can be technical and complicated, but the bottom line is that I can solve your problem with my problem, and so mine is as hard to solve as yours. (If it were easier, then I wouldn’t be able to solve your problem, right?) Furthermore, this property holds for any problem you can find in NP; my problem is as hard as anything else out there.

There are many NP-complete problems, each of which can be used to solve the others; you could say they’re all equally hard.[8] They allow for some interesting implications:

  1. If you find a single NP-complete problem that is in P, then all NP problems are in P, since they’re all no harder than that single problem.
  2. Conversely, if you find a single NP problem that’s not in P, then all NP-complete problems are not in P, since they’re all at least as hard as that single problem.

Despite their significance, NP-complete problems are not bizarre theoretical constructs. I’ll describe one, just to show you how simple they can be.

Imagine you’re going on a camping trip and have a knapsack that can hold a given amount of weight, say p pounds. You have a whole bunch of gear that you’d like to take along: a sleeping bag (4 lbs), a stove (1 lb), a tent (6 lbs), and so on. What items do you put in your knapsack so that you get the closest to p pounds of stuff in there without going over?

Amazingly, the best known way to get the correct answer is to enumerate combinations of items until you’ve found the solution,[9] an exponential task. No one, to date, can do better. If you can, you win a million bucks. More amazingly, this knapsack problem is NP-complete, simple though it is. You can transform any other problem in NP (factoring, summing to 0, route finding, etc.) to a knapsack formulation and solve it.
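Here’s that brute-force enumeration as a Python sketch; the gear list and the 10-pound limit are made up for illustration:

```python
from itertools import combinations

def best_knapsack(items, limit):
    # items is a list of (name, pounds); try all combinations of items
    # and keep the heaviest load that doesn't exceed the limit
    best = []
    for size in range(1, len(items) + 1):
        for combo in combinations(items, size):
            weight = sum(pounds for _, pounds in combo)
            if weight <= limit and weight > sum(p for _, p in best):
                best = list(combo)
    return best

gear = [("sleeping bag", 4), ("stove", 1), ("tent", 6), ("lantern", 2)]
print(best_knapsack(gear, 10))   # [('sleeping bag', 4), ('tent', 6)]
```

With 4 items that’s 15 combinations; with 100 items it’s about 2^100 of them. That’s the exponential blowup in action.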

On the other side of things, people have tried to approximate the knapsack problem with polynomial time algorithms, since solving it outright takes so long. If you can’t get the correct answer efficiently, might as well see how close you can get, right? So rather than enumerate all combinations of items, these algorithms heuristically pick and choose certain combinations, sacrificing total correctness for speed. And they’ve succeeded: the best polynomial approximations can get arbitrarily close to the right answer, though not all the way there. So the problem is NP-complete – super hard – but also very easy to approximate. Crazy.[10]

In addition to knapsack, the two problems I described above as in NP, integer summing and shortest trip, are also NP-complete. But other hard problems, such as the prime factorization one, are not known to be NP-complete. It’s a mysterious world out there.

If you find any of this interesting, Wikipedia is a good place to start. Oh yeah, bonus points if you can figure out which phone book lookup method is better.


[1] Note that since it takes one unit of time to write something down in one unit of space, space requirements can never exceed time requirements.

[2] Well, not quite. The first time you scanned the list for the smallest number, you looked through n numbers. But the second time, you’d already taken a number out, so there were only n-1 numbers. And so on. The real number of steps is n + (n-1) + (n-2) + … + 1 = (n^2+n)/2. In such cases, we look for the “biggest” term – the one that will yield the largest result as n gets larger – and discard the others as relatively insignificant. In this case, n^2 is the biggest term, so we forget about the rest.

[3] In certain cases it’s possible to prove a lower bound on problem difficulty. That is, you could prove that a problem can’t take less than a certain amount of time, say t. If you know of an algorithm that takes t time, then you can rest easy, because you know that no other faster algorithm can exist. Here’s one example: finding the smallest number in a list of n numbers must take at least n time steps: one each to examine every number in the list. If you ever try to take fewer than n steps, you can’t be examining all the numbers, and I’ll foil you by giving you an input list in which the smallest number is one of the ones you didn’t look at. Boom – you’ve overlooked the smallest number in the list and your algorithm is incorrect. Many problems are complex enough, however, that proving a useful lower bound is very difficult.

[4] Interestingly, primality checking was only shown to be in P in 2002!

[5] Wait a minute, you say. If my problem takes n^100 time, it’s in P, but it’ll still take really long! You’re right: in practice, the actual exponents and constants of the polynomial (in this case, the number 100) make a difference. An n^100 problem is a lot harder than an n^2 problem. And in fact, there are certain problems (such as linear programming) for which an exponential algorithm can find a solution faster than a polynomial algorithm in many common cases. But as n gets really large, even the largest polynomial is smaller than the smallest exponential: n^1000000 is smaller than 2^n (or even 1.0001^n) for very large values of n. And geeky computer scientists like to think very big.

[6] We know that the reverse is true: problems solvable in polynomial time are checkable in polynomial time; just solve them again and see if your result agrees with the proposed one. In other words, we know that all the problems in P are in NP. But we don’t know whether all the problems in NP are in P.

[7] For the record, this conversion process must itself take no more than polynomial time.

[8] You can actually differentiate among NP-complete problems by how well you can approximate them with faster algorithms, but that’s a topic for another time. (If you are surprised and curious about the fact that problems that are equally hard to solve can be approximated to different degrees, then start reading up, and welcome to the world of theoretical computer science!)

[9] Without bogging you down with details, I’ll acknowledge that there are other known ways to solve this problem, but they’re all still exponential in the size of the input.

[10] Just for kicks, one more twist: if you add just one more constraint to the knapsack, for instance giving it a volume limit in addition to a weight limit, you can no longer approximate it to an arbitrary degree of accuracy.

Posted in Uncategorized | 3 Comments

O RILY?

Rick Reilly is one of the most celebrated sportswriters of my lifetime. He’s the guy who writes the back-page column for whatever magazine he works for. He’s been voted National Sportswriter of the Year eleven times.

More critical acclaim, from his own bio page:

He is the winner of the 2009 Damon Runyon Award for Outstanding Contributions to Journalism, an honor previously won by Jimmy Breslin, Tim Russert, Bob Costas, Mike Royko, George Will, Ted Turner and Tom Brokaw, among others. Three times his columns have been read into the record in the U.S. Congress.

He is “the Tiger Woods of sports columnists,” says Bloomberg News.

As for his sports writing, the New York Daily News called him “one of the funniest humans on the planet.” Publishers Weekly called him, “an indescribable amalgam of Dave Barry, Jim Murray, and Lewis Grizzard, with the timing of Jay Leno and the wit of Johnny Carson.”

He is, in a word, impressive.

He’s also one of the worst, most unpleasant writers I’ve ever read. In content alone, he’s mean-spirited, self-satisfied, sanctimonious, painfully unfunny (oh yeah, and still mean-spirited), and flat out lame. When you get past his pathetic attempts to adopt the Sports Guy’s style (with more faux-authority and far less authenticity), you find out that he’s just a giant smarmy douchebag. Oh yeah, and he’s also technically terrible. Purple prose, painful metaphors, hyperbole at every turn.

It’s kind of astounding to read his columns and then his C.V. and try to reconcile the two. Am I the only one who is flabbergasted by his accolades? What am I missing?

(For the record, I like Bill Simmons.)

Edit: Apparently I’m not alone. And there’s even this.

Posted in Uncategorized | 3 Comments

Crimson

Saw Morcheeba last night at the Fillmore on tour in support of their new album. After a seven-year hiatus, Skye is finally back to front the band. My favorite show of theirs was in 2003 — their last tour before she left — at the Warfield, and she was in top form.

One of the best songs from last evening was “Crimson”, the first track on their new album. It was a surprisingly powerful rendition, with a searing guitar solo by Ross Godfrey.

Here are two live versions from previous shows on YouTube. Of course, neither is able to recreate my mental image of the song or the solo, but hey, what can you do. The first has the best video and sound quality, but the person taking the video mistakenly focuses on the bass player during the guitar solo — funny, but also a little distracting.

Posted in Uncategorized | 1 Comment

Pop (music) quiz, hot shot

A year ago this past Sunday, Bekah and I got married. It was awesome.

I try not to post personal stuff here, but I will talk about the music, which we had a lot of fun selecting. We played Sigur Ros’s heavenly “Svefn-G-Englar” as we walked down the aisle, had some friends cover Iron & Wine’s “Resurrection Fern” during the ceremony, and used songs by Leo Kottke, Luna, and the Magnetic Fields for the slide show.

The dinner and reception were in a barn, and we rolled our own music for that, too. I’ve dug up as much as I could of the playlists (we changed them on the fly, and I only have a portion of the originals), and offer you a challenge: name as many artists and song titles as you can, from only the first 10 seconds of each song. Of course, no Googling or other cheating. That ruins the fun!

Here are the 10-second snippets from the reception, where we stuck with favorites for the most part. There are 48, so the clip is around eight minutes long.

[embedded audio player]

Fairly standard stuff, on the whole. Give yourself half a point for each artist and half a point for each song title you can name. Rate yourself as follows:

0-15 points: Really? Stop listening to indie music and turn on MTV!
16-35 points: Solid. What I’d expect out of someone in my age group who listens to the radio.
36-48 points: Very impressive. You even got the obscure ones!

If you thought that was easy, here’s more of a challenge. We veered off into indie/folk territory for the background music during the dinner – aiming for quiet, pretty songs that weren’t sad (which eliminates about 90% of quiet, pretty songs).

[embedded audio player]

0-4 points: Really? Stop watching MTV and listen to some other music! Just kidding.
5-8 points: Solid for a casual listener. You probably got the ones your parents listened to.
9-13 points: Good for a folkie.
14-22 points: Very impressive. We seem to have the same taste in quiet music…
23-26 points: Amazing. You are a god among mortals.

I did leave one song out, because it’s very hard to get – it’s never even been released on an actual album, as far as I know. One bonus point each for the artist, the title, and the original artist of this song. No Googling!

[embedded audio player]

If you’d like to check your answers, post them in the comments below. But to avoid ruining the fun for everyone else, make your text white, as follows: (1) Click on the “Don’t auto-format” checkbox. (2) Surround your comment with a <div> tag, like so:

<div style="color: white">
My answers here
….
</div>
Posted in Uncategorized | Comments Off on Pop (music) quiz, hot shot