Monday, October 25, 2010

An Open Letter to NPR

I just received an e-mail from my local NPR affiliate, WNYC, asking me to "double my pledge". This is the e-mail I sent in response:

To Whom It May Concern:

I will not be "doubling my pledge", or indeed renewing my pledge next year when the anniversary rolls around. The reason for my change of heart is the firing of Juan Williams. This was an inexplicable, execrable move that I simply cannot support. Please make it clear to your own management and on up the chain that there is at least a portion of your listenership that does not believe that what Williams had to say was even close to a firing offense, and that firing him as a result illustrates NPR's disrespect and intolerance for opposing viewpoints.

Sincerely,

- James

UPDATE: To their credit, WNYC responded. Some boilerplate aside, here's the important part:

As the local New York broadcaster of NPR programming, we are always interested to hear our listeners' comments about their programming content and their organization. However, it is important to understand that our organization does not have influence in the content of NPR programming, with their staff, or in their personnel decisions. We pay NPR for the programming and news that we broadcast and, as such, are separate independent organizations.

It's not directly WNYC's fault that NPR fired Juan Williams. But they're not exactly independent, either. WNYC is part of the NPR family. I don't give money to NPR; I give to WNYC. And it seems perfectly reasonable to cut off funding to WNYC in this case. I want them upset with NPR.

The logic seems to me stronger than when people boycott a company for advertising on a show they find objectionable. The company isn't directly responsible for the show's content, but they know the basic message and can sever the relationship if it grows too burdensome. I'm putting pressure (a tiny amount in my case, but hopefully my case is but one among thousands) on WNYC so that they will pressure NPR.

Friday, October 22, 2010

Solving Linear Equations Just Got Much Faster

In computer science, we measure the efficiency of an algorithm in a couple of ways, but one of the most important is called "asymptotic complexity", usually written in something called big-O notation. This notation captures an algorithm's scalability as the problem size gets larger. For example, an algorithm with O(n) performance takes about twice as long to run if the problem size doubles. An algorithm with O(n^2) performance would take four times as long.

You can get a feel for this without computers. Most of us have sorted books alphabetically before, by title or by author. If you have just a few books, say ten or fewer, you can usually eyeball it and get them sorted without using any special algorithm. What you are really doing here (I think) is a sort of mental version of what we CS types call a "selection sort" - you are finding the alphabetically first book, placing it first, then finding the next one and placing it, and so on. With a small set of books, that works fine. But suppose you have 1,000 books to sort. Now the problem is much harder: you can't store the 1,000 books in your memory, so the only practical way to carry out that algorithm is to pick up the first book at hand and then start checking all other books. As you find one alphabetically earlier, put down the book you're holding and pick up the new one, then continue until you've looked at all the books. The book you're holding is the alphabetically first one, so place it on the shelf. Great! Now repeat another 999 times.
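
For the programmers in the audience, here's what that shelf-by-shelf procedure looks like as code. This is just my own minimal Python sketch of the idea, with a list standing in for the pile of books:

```python
def selection_sort(books):
    """Repeatedly find the alphabetically first remaining book
    and shelve it - the pick-up-and-compare procedure above."""
    for shelf_spot in range(len(books)):
        first = shelf_spot
        # Check every remaining book against the one we're "holding".
        for j in range(shelf_spot + 1, len(books)):
            if books[j] < books[first]:
                first = j
        # Shelve the winner by swapping it into place.
        books[shelf_spot], books[first] = books[first], books[shelf_spot]
    return books

print(selection_sort(["Melville", "Austen", "Dickens", "Borges"]))
# ['Austen', 'Borges', 'Dickens', 'Melville']
```

The two nested loops are where the pain comes from: for each of the n shelf spots, you scan through roughly n books.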

This takes a while. In big-O terms, it's an O(n^2) algorithm, which means that running it for 1,000 books takes about (1,000/10)^2 = 100^2 = 10,000 times longer than running it for 10 books. Not very practical!

You probably intuitively understand this, because if you were really given the job of sorting 1,000 books, you wouldn't do it this way. An alternate (and much faster) way is to start by sorting the books by first letter: you put all the books with authors starting with A in one pile, all the books with authors starting with B in another pile, and so on. Eventually you have 26 piles, some bigger and some smaller. The average pile size will be 1,000/26, or about 40 books. If you just ran selection sort on the piles, each would take about 4^2 = 16 times longer than sorting 10 books, and you'd have to do it 26 times, so you end up doing about 26 x 16 = 416 times more work than sorting 10 books. But that's a 24x improvement over just running selection sort on all 1,000. (To the detail-oriented: yes, it's true that since the piles are different sizes, you won't actually do quite this well in practice. But you can easily make the large piles small by running our "binning" algorithm on the second letter for those large piles: take the S pile and make new piles for authors starting Sa, Sb, Sc, and so on.) (Also: what about the time needed to split the 1,000 books into bins in the first place? The good news here is that deciding which pile a book goes into is easy: just look at the book. So you look at each book once. That takes some time, but it's not significant compared to the rest of the algorithm, which involves comparing pairs of books. In big-O terms, this is an O(n) contribution to the overall algorithm, so it doesn't change the overall complexity.)
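
Here's the binning idea as another quick sketch of my own, reusing the selection_sort function from above:

```python
from collections import defaultdict

def bin_then_sort(books):
    """Split books into (up to) 26 piles by first letter - a single
    O(n) pass - then selection-sort each small pile."""
    piles = defaultdict(list)
    for book in books:                 # look at each book exactly once
        piles[book[0].upper()].append(book)
    shelf = []
    for letter in sorted(piles):       # at most 26 piles, A to Z
        shelf.extend(selection_sort(piles[letter]))
    return shelf
```

Each pile still gets the slow quadratic algorithm, but on a problem roughly 1/26th the size, which is where the 24x savings comes from.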

So, now you understand how complexity works. We care a lot about the complexity of algorithms that we use all the time. For example, above I talked about sorting, because we sort all the time. You can hardly take two steps in your average code base without tripping over a sort. It turns out that sorting (assuming nothing about the data being sorted) takes O(n log n) time, meaning that it scales almost linearly: that log n term rises slowly, but it's there. O(n log n) is slower than O(n), but it's much, much faster than O(n^2). Suppose an algorithm takes 1 second with 1,000 data points. Then an O(n) algorithm would take 1,000 seconds with 1 million data points. An O(n log n) algorithm would take 2,000 seconds (we'll use base-10 logs here: the log factor grows from 3 to 6, contributing a factor of 2 on top of the factor of 1,000). But an O(n^2) algorithm would take 1 million seconds.
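
If you want to check that arithmetic yourself, a few lines of Python will do it (base-10 logs, as above):

```python
import math

small, large = 1_000, 1_000_000
for name, cost in [("O(n)",       lambda n: n),
                   ("O(n log n)", lambda n: n * math.log10(n)),
                   ("O(n^2)",     lambda n: n ** 2)]:
    # Each algorithm is calibrated to take 1 second at n = 1,000.
    print(f"{name:12} {cost(large) / cost(small):>12,.0f} seconds")
# O(n)                1,000 seconds
# O(n log n)          2,000 seconds
# O(n^2)          1,000,000 seconds
```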

Another problem that comes up a lot in CS (not as much as sorting, but a lot) is solving systems of linear equations. Here the measure of problem size is "number of variables", and we'd like to solve systems with lots of variables: millions, even billions of them. The naive, basic algorithm for solving them is something called Gaussian elimination, and it's an O(n^3) algorithm - that's cubed. As you can imagine, this performs pretty badly with large numbers of variables. With a million variables, an O(n^3) algorithm will run a billion times slower than it would with 1,000 variables. So if you could solve a system of a thousand variables in a microsecond, it would take 1,000 seconds - almost 17 minutes - to solve one with a million variables. There are some algorithms that solve linear systems slightly faster than this, but improvements have been slow.
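
For the curious, here's roughly what that naive algorithm looks like. This is my own bare-bones Python sketch - real code would call a tuned library routine like LAPACK, and note this is the old O(n^3) baseline, not the new algorithm discussed below. The three nested loops are where the cube comes from:

```python
def gauss_solve(A, b):
    """Solve Ax = b by naive Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):                        # n elimination steps...
        # Pivot on the largest entry in this column, for stability.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for row in range(col + 1, n):           # ...each touching ~n rows...
            factor = M[row][col] / M[col][col]
            for k in range(col, n + 1):         # ...of ~n entries apiece
                M[row][k] -= factor * M[col][k]
    # Back-substitution is comparatively cheap: only O(n^2).
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```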

So along come these Carnegie Mellon researchers. They've figured out a clever algorithm that reduces the complexity of solving (certain kinds of) systems of linear equations to O(n(log n)^2) time. It's hard to express how huge an advance this is. Take our example above, where solving a system of 1,000 variables takes a microsecond and solving a system of a million variables takes 17 minutes. With the new algorithm (scaled to the same baseline, and again using base-10 logs), you could solve the larger problem in about 4,000 microseconds - just 4 milliseconds. Or you could solve an even larger problem - a billion variables - in about 9 seconds. This is an extraordinary achievement.

Read the whole story here.

UPDATE: I meant to mention this in the original post, but forgot. It's worth worrying a bit about the constant. The problem with "asymptotic complexity" is that it only gives an ordering of runtimes at very large problem sizes. For small problem sizes, the constant factor, or other lower-order terms, can dominate the actual run-time.

The constant shows up as follows: suppose that algorithm A has runtime of approximately n seconds for problem size n, and that algorithm B has runtime of 10^-10 n^2 seconds for problem size n. Then algorithm A has O(n) complexity and B has O(n^2) complexity. But for problem size n=1,000, A takes 1,000 seconds while B takes only 10^-4 seconds. For problem size n=1,000,000, A takes 1,000,000 seconds while B takes 100 seconds. As you can see, B is much faster over a very wide range of problem sizes - the crossover doesn't come until n = 10^10. So even though B is "slower" in the asymptotic sense, it's probably faster in most real cases.

Lower-order terms can matter, too: suppose algorithm A has runtime of 10^-6 n + 2 seconds, while algorithm B has runtime of 10^-6 n^2 seconds. Both A and B have the same constant. But for problem size n = 1,000, A takes about 2 seconds while B takes 1 second: the added 2 seconds for A (presumably for setup and precomputation, or something) dominates the runtime. For much larger problems, of course, A may still be faster: its asymptotic complexity is O(n) compared to B's O(n^2).
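
To see the effect concretely, here are those two cost models from the last example plugged into Python. Solving 10^-6 n + 2 = 10^-6 n^2 puts the crossover - where A's better asymptotics finally pay off - at around n = 1,415:

```python
def runtime_A(n): return 1e-6 * n + 2    # O(n), plus a 2-second setup term
def runtime_B(n): return 1e-6 * n ** 2   # O(n^2), same leading constant

for n in [1_000, 1_415, 10_000, 1_000_000]:
    print(f"n={n:>9,}  A: {runtime_A(n):>12.3f}s  B: {runtime_B(n):>12.3f}s")
# B wins below n ~ 1,415; A wins above it, and the gap grows fast.
```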

So this new algorithm has great asymptotic complexity, but it's worth waiting until we see the constants, the lower-order terms, and other practical details (like how well the algorithm fits into cache, how well it can be parallelized, etc.) before declaring it a big win.

Thursday, October 21, 2010

Another Entry in the Black Book of Communism

This is truly awful.

A pregnant woman in south China was detained, beaten and forced to have an abortion just a month before her due date because the baby would have violated the country's one-child limit, her husband said Thursday.

Forced abortions are bad enough. But a forced abortion of an eight-month fetus is indistinguishable from murder (even if you don't think the abortion of, say, a one-month fetus is murder). A fetus at that stage is perfectly viable outside the womb. The Chinese cold-bloodedly killed a baby in their mad pursuit of social policy.

NPR Fires Juan Williams

NPR has fired Juan Williams for his remarks about Muslims on The O'Reilly Factor.

Obviously, NPR is within its rights to do so, being a private organization. (Well, sort of. Doesn't NPR receive government funding? The government certainly twists the arms of other organizations that act in ways it doesn't like if they receive funding.) But there is a difference between having a right to do something and something being the right thing to do. You might imagine, from the fact of his termination, that Williams must have said some truly awful things about Muslims. Here's what he said:

But when I get on the plane, I got to tell you, if I see people who are in Muslim garb and I think, you know, they are identifying themselves first and foremost as Muslims, I get worried. I get nervous.

Many Americans share these views. And why wouldn't they? Muslims are responsible for most plane hijackings around the world, including four particular ones nine years ago that you might remember. Williams isn't making a policy proposal, for crying out loud. He's airing his personal fears. Surely NPR wouldn't fire someone for giving a personal, mainstream opinion. Maybe it was this (which O'Reilly said, but Williams agreed with):

The cold truth is that in the world today jihad, aided and abetted by some Muslim nations, is the biggest threat on the planet.

That's controversial, surely. You could easily disagree with it. But is it a firing offense?

It's a shame that NPR felt the need to take such an extraordinary step about such ordinary statements.

UPDATE:

Here's video of Juan Williams' side of things:

If Williams said something bigoted here, we should remember this the next time a person of color admits to experiencing a frisson of fear upon seeing a white policeman. Apparently, such an admission can get you fired from NPR.

The Closing of the American Mind (Jazz Version)

Whenever I listen to my Ella Fitzgerald CD, I want to comment on this. Here are the opening lyrics of the Cole Porter song Just One of Those Things:

As Dorothy Parker once said to her boy friend,
"Fare thee well,"
As Columbus announced when he knew he was bounced,
"It was swell, Isabelle, swell,"
As Abélard said to Héloïse,
"Don't forget to drop a line to me, please,"
As Juliet cried in her Romeo's ear,
"Romeo, why not face the fact, my dear?"

This was written in 1935, the depths of the Depression. At that time, these references were considered intelligible enough to use in popular culture. Can you imagine such a thing today?

Tuesday, October 19, 2010

On Nuclear Accidents

Just read an interesting article by Ed Grabianowski on io9 about nuclear accidents and near-disasters. Couple of quotes:

We spent the Cold War in perpetual fear that the U.S. and U.S.S.R. would start an intentional nuclear conflict. The truth is, we came far closer to blowing ourselves up with nuclear weapons than we ever came to WWIII.... The Russians either lost a nuclear sub, lost a sub with nuclear weapons on board, had a nuclear sub's reactor melt down, or all three roughly every other week.... The sad lesson is that we have less to fear from naked aggression than we do from incompetence and bad engineering.

I don't mean to diminish the danger of nuclear accidents, and certainly there was a learning curve involved here - a very dangerous learning curve (it's notable that all the incidents listed occurred between 1950 and 1979, and all the ones in which radiation was released occurred before 1966). But Grabianowski betrays his own politics with some of these claims. How can he know we came "closer to blowing ourselves up than to WWIII"? There were several incidents that came close to sparking WWIII, too - which was "closer"?

It's tempting to think these engineering mishaps were all near-disasters, but they may not have been as close as the author suggests. Grabianowski himself points out that the Russians had accidents all the time that never had the dire consequences he imagines (other than Chernobyl). That strikes me as evidence that nuclear engineering is not as prone to actual disaster scenarios as the author thinks.

Grabianowski's closing quote, that we have less to fear from aggression than from incompetence, is utter nonsense. Even restricting our historical view just to nuclear weaponry, it's hard to ignore the fact that far more people have been killed by atomic weapons than by atomic accidents. In fact, it's possible that more people were killed on 9/11 than by Chernobyl, the worst nuclear accident in history. (Wikipedia says: "It is estimated that there will ultimately be a total of 4,000 deaths attributable to the accident, due to increased cancer risk." But these estimates are notoriously unreliable, and impossible to prove. In any case, the numbers are comparable.) The twentieth century witnessed hundreds of millions die as a result of naked aggression, and two or three orders of magnitude fewer due to engineering mishaps.

The numerical comparison demonstrates the absurdity of the claim. But the philosophy behind it is also wrongheaded. Does Grabianowski believe that our military is more dangerous in peacetime than it is valuable in wartime? Having a nuclear-armed military runs the risk of accident. Building and operating nuclear power plants runs similar risks. But in both cases, there is significant return on this investment. In the field of nuclear technology, it is much easier to imagine disaster scenarios than to actually find them.

We should be cautious, but not allow our caution to stop us moving at all. Doing nothing, or worse, doing the wrong things, also runs risks. Suppose the Global Warming people are right. Then we should be building nuclear plants at breakneck speed. The risk of accident should be overwhelmed by the risk (according to the most dire claims of the Warmingists) of warming. But in many cases those very environmentalists block nuclear power, preferring pie-in-the-sky plans involving solar or wind power. The result is what no one wants: ever greater dependency on fossil fuels.

Grabianowski probably was not thinking all of these things when he wrote the article. I have no reason to believe that he is a nuclear-power-protesting environmentalist. Nonetheless, his article feeds a nucleophobia that we as a society must learn to get over.

Friday, October 15, 2010

That Living Constitution

I had a discussion earlier in the week with a liberal (he would prefer me to say "progressive") friend of mine who was astonished at my declaration that the left was responsible for judicial activism in the Supreme Court. Surely, he said, the left would merely say the same thing about the right. This was a phenomenon that was universal, and that both sides practiced in an effort to subvert the other, nothing but further evidence that the Court has become politicized.

He's right: the left would say the same thing. But they'd be wrong. Judicial activism - by which I mean judges writing decisions using their own preferences rather than their interpretation of the Constitution - stems from Progressive impatience with Constitutional restraint. We can go over the whole history of this another time, but a Corner post by John Derbyshire really gets to the root of it.

A reader writes to Derb:

let me relate the story to you of Justice Ruth Bader Ginsburg’s visit to my alma mater. (For the record, I was not in attendance at her lecture, but I heard the same story from multiple trusted colleagues). A student asked Justice Ginsburg about the use of the "intermediate scrutiny" test for equal protection review. Intermediate scrutiny is the standard by which courts review challenges to government classifications based on gender. (The court uses "rational basis" review for what it calls "non-suspect" classifications, such as wealth, which is a relatively easy test for the government to pass, and a "strict scrutiny" test for "suspect classifications," such as race, which is a much harder standard for the government to clear). The basis of the question was Justice Ginsburg’s majority opinion in US v. Virginia (the case that forced Virginia Military Institute to become a co-ed institution), in which she applied a version of intermediate scrutiny that had never been applied before and which seemed to many observers much more like a strict scrutiny review (in other words, she held the government to a higher standard than precedent seemed to dictate). The student wanted to know how Justice Ginsburg arrived at the use of this "revised" standard.

Her answer astonishes me to this day. She told those assembled that the justices do not use the analytical framework to reach the results in a given case, but that they decide the result first and then fit the opinion into the existing framework.

(Emphasis mine.)

This is a stunning admission. I do not believe you would hear anything like it from any originalist on the Court.

Wednesday, October 13, 2010

Practical Passwords

According to this article on Web password standards, I am one of the "stupid people" who maintain unsafe password practices. Here are some of the findings and my responses:

"4 in 10 respondents shared passwords with at least one person in the past year."
I haven't done this one, but I've encouraged my wife to give me a couple of her passwords to help debug some problems she's having. I don't think she's making a huge mistake here. How many of those 4 in 10 shared passwords with a trusted loved one?
"Nearly as many people use the same password to log into multiple Web sites, which could expose their information on each of the sites if one of them becomes compromised."
I use the same password all the time (I have about five that I generally use, across dozens of sites requiring a password). Sure, it's true that this means if one site is compromised then so might others. But the person getting that password would have to know which sites, and my usernames on those sites. That's not necessarily all that easy.
"Almost half of all users never use special characters (e.g. ! ? & #) in their passwords, a simple technique that makes it more difficult for criminals to guess passwords."
Not necessarily. Most use of special characters is in "133t" spelling: instead of your password being "password", it'll be "p455w0rd" or "p@ssw0rd" or something like that. I've written dictionary attack software. Adding 133t spelling adds some words to check, but it's not that big a deal (there's a sketch of how a cracker handles this below the list). It's true that a password like "5*(AJS*&1" is hard to crack, but then so is "jka82pma8". Furthermore, most sites lock you out after a small number of wrong guesses, so dictionary attacks aren't really very effective.
"30 percent [of young people] logged into a site requiring a password over public WiFi (vs. 21 percent overall)."
Again, this is no big deal... if you do it over SSL encryption. True, sending a password on an unencrypted public WiFi channel is risky. Don't do that. But your bank's Web site is SSL-encrypted, so don't worry about that.
"And 30 percent remember their passwords by writing them down and hiding them somewhere like a desk drawer."
No less a security guru than Bruce Schneier has endorsed this practice, and I completely agree. Hackers can't hack your desk, or your wallet. You're far better off having hard passwords written down in a fairly secure location than really bad passwords (e.g. "123456") that you can remember.
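
On the 133t point above: here's roughly what a cracker does with substitutions. The substitution table below is illustrative rather than exhaustive, but the point survives any reasonable table: each common word balloons into only a few dozen variants, which is nothing to a program testing millions of guesses.

```python
from itertools import product

# A typical substitution table - illustrative, not exhaustive.
LEET = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s5$", "t": "t7"}

def leet_variants(word):
    """Generate every 133t-spelling of a dictionary word by trying
    each letter with all of its common substitutions."""
    choices = [LEET.get(ch, ch) for ch in word.lower()]
    return ["".join(combo) for combo in product(*choices)]

variants = leet_variants("password")
print(len(variants))            # 54 - a trivial multiplier on the dictionary
print("p455w0rd" in variants)   # True
```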

There are, of course, good password practices and bad. But I'm not convinced that surveys like this really reveal that people are as stupid as the surveyors think (people may be, but not for the reasons put forth in the survey).

Having a different password for every site, as they recommend, is a wonderful idea. It's also totally impractical. They're correct, too, that having a single password you use for every site is a bad idea. The happy medium is to have a few passwords. Use one you can easily remember for all the stuff you don't care much about: Facebook, Slashdot, etc. Use a somewhat harder one for online stores (and don't trust any online store that makes it easy to retrieve your credit card number). And use your best one - maybe here it makes sense to go the "one per site" route - for online banking and brokerage accounts.

Monday, October 11, 2010

Who is Tom Donilon?

He's the new National Security Advisor, replacing Marine General James Jones. And his appointment demonstrates President Obama's unseriousness about national security. Donilon is a professional lobbyist, not a national security expert (although he has been an Obama advisor on national security since the transition in 2008-09, and worked in President Clinton's State Department). Should we take this as a small but clear signal that Obama will not be practicing Clintonian triangulation after he loses control of Congress in November? The appointment of a party hack to a critical post would seem to indicate as much.

Friday, October 8, 2010

Congrats to Liu Xiaobo

This year's winner of the Nobel Peace Prize is a poke in the eye of China's Communist regime. Finally!

China is trying to block news of the announcement from its own citizens, using the Great Firewall of China and even radio jamming. They claim to be upset because Liu is a "criminal". But he's no murderer - all he's guilty of is "inciting subversion of state power", which is the sort of crime that in the United States is likely to get you elected. In China they throw you in jail.

I don't know much about Liu's story, but if China is throwing this big a hissy fit about his winning the Nobel Prize, he must be a good choice.

Thursday, October 7, 2010

Joining the Blog Bomb for Geert Wilders

Geert Wilders is currently on trial in the Netherlands for his short film Fitna. The film can be viewed in its entirety here. Please be advised that it contains graphic scenes of violence.

The film depicts Islam in a very bad light. By interspersing verses from the Koran with scenes of Islamic violence and hatred, it creates the impression that they are related. (I suspect there is something to this, by the way. But that is a question I have dealt with in other blog posts.) One of the most damning indictments in the film is its videos of Muslim clerics speaking in Arabic, subtitled in English. As the invaluable Middle Eastern Media Research Institute (MEMRI) has illustrated time and again, it's a common terrorist tactic to say one thing to Western audiences and quite another to Arabic ones. Fitna provides a similar service.

The Netherlands is prosecuting Wilders under its hate speech code. Does the film fall under the purview of the code? Perhaps: I am not an expert on Dutch law, needless to say. But if you believe in freedom of speech (as the Dutch government clearly does not), this film is exactly the sort of thing that must be protected. It is political speech, protest speech. Note that the film has not been attacked as libelous. Inaccuracy is not part of the complaint.

Furthermore, its message is merely that Islam is dangerous and that "Islamization" should be stopped. There is no call to answer violence with violence. This is not hatred. It may be wrong; you may disagree with the message. But silencing the messenger is an offense against one of the basic freedoms of Western civilization.

Monday, October 4, 2010

Enviro-Fascists Show Their Colors

This is a real ad promoting an environmentalist carbon-control program. It's "voluntary", but as you can see from the ad, they'd prefer that it wasn't.

The most telling point, I think, comes at the end, when the voice-over actress gets the button treatment. Just goes to show that no matter how much you do, it isn't enough. You have to be 100% committed to environmentalism, all the time, or they want you dead (or, at least, out of the way - let's give them the benefit of the doubt).

It all reflects a very Tom Friedman-esque viewpoint: if only we had more fascism and not so much democracy, we could really get some things done!