Tuesday, July 7, 2009

Israel Eyes Cyberwar Attack on Iran by Reuters


RAMAT HASHARON, Israel - In the late 1990s, a computer specialist from Israel's Shin Bet internal security service hacked into the mainframe of the Pi Glilot fuel depot north of Tel Aviv.

It was meant to be a routine test of safeguards at the strategic site. But it also tipped off the Israelis to the potential such hi-tech infiltrations offered for real sabotage.

"Once inside the Pi Glilot system, we suddenly realised that, aside from accessing secret data, we could also set off deliberate explosions, just by programming a re-route of the pipelines," said a veteran of the Shin Bet drill.

So began a cyberwarfare project which, a decade on, is seen by independent experts as the likely new vanguard of Israel's efforts to foil the nuclear ambitions of its arch-foe Iran.

The appeal of cyber attacks was boosted, Israeli sources say, by the limited feasibility of conventional air strikes on the distant and fortified Iranian atomic facilities, and by U.S. reluctance to countenance another open war in the Middle East.

"We came to the conclusion that, for our purposes, a key Iranian vulnerability is in its on-line information," said one recently retired Israeli security cabinet member, using a generic term for digital networks. "We have acted accordingly."

Cyberwarfare teams nestle deep within Israel's spy agencies, which have rich experience in traditional sabotage techniques and are cloaked in official secrecy and censorship.

They can draw on the know-how of Israeli commercial firms that are among the world's hi-tech leaders and whose staff are often veterans of elite military intelligence computer units.

"To judge by my interaction with Israeli experts in various international forums, Israel can definitely be assumed to have advanced cyber-attack capabilities," said Scott Borg, director of the U.S. Cyber Consequences Unit, which advises various Washington agencies on cyber security.

Technolytics Institute, an American consultancy, last year rated Israel the sixth-biggest "cyber warfare threat", after China, Russia, Iran, France and "extremist/terrorist groups".

The United States is in the process of setting up a "Cyber Command" to oversee Pentagon operations, though officials have described its mandate as protective, rather than offensive.

CORRUPT, CRASH

Asked to speculate about how Israel might target Iran, Borg said malware -- a commonly used abbreviation for "malicious software" -- could be inserted to corrupt, commandeer or crash the controls of sensitive sites like uranium enrichment plants.

Such attacks could be immediate, he said. Or they might be latent, with the malware loitering unseen and awaiting an external trigger, or pre-set to strike automatically when the infected facility reaches a more critical level of activity. As Iran's nuclear assets would probably be isolated from outside computers, hackers would be unable to access them directly, Borg said. Israeli agents would have to conceal the malware in software used by the Iranians or discreetly plant it on portable hardware brought in, unknowingly, by technicians.

"A contaminated USB stick would be enough," Borg said.

Ali Ashtari, an Iranian businessman executed as an Israeli spy last year, was convicted of supplying tainted communications equipment for one of Iran's secret military projects.

Iranian media quoted a security official as saying that Ashtari's actions "led to the defeat of the project with irreversible damage". Israel declined all comment on the case.

"Cyberwar has the advantage of being clandestine and deniable," Borg said, noting Israel's considerations in the face of an Iranian nuclear programme that Tehran insists is peaceful.

"But its effectiveness is hard to gauge, because the targeted network can often conceal the extent of damage or even fake the symptoms of damage. Military strikes, by contrast, have an instantly quantifiable physical effect."

Israel may be open to a more overt strain of cyberwarfare. Tony Skinner of Jane's Defence Weekly cited Israeli sources as saying that Israel's 2007 bombing of an alleged atomic reactor in Syria was preceded by a cyber attack which neutralised ground radars and anti-aircraft batteries.

"State of War," a 2006 book by New York Times reporter James Risen, recounted a short-lived plan by the CIA and its Israeli counterpart Mossad to fry the power lines of an Iranian nuclear facility using a smuggled electromagnetic-pulse (EMP) device.

A massive, nation-wide EMP attack on Iran could be effected by detonating a nuclear device at atmospheric height. But while Israel is assumed to have the region's only atomic arms, most experts believe they would be used only in a war of last resort.


How Many Nukes Does It Take To Defend America? by Brian Palmer


President Obama, in Russia this week, announced an agreement to reduce American and Russian nuclear warhead stockpiles to a range between 1,500 and 1,675 for each country. How did negotiators arrive at these numbers?
By counting up potential targets for a nuclear strike and then negotiating around that number. U.S. military planners dream up a variety of hypothetical conflicts with other nuclear powers and determine how many warheads would be required to destroy all the most important targets in each scenario. The estimate is periodically adjusted downward, as planners eliminate targets to accommodate the president's desire to reduce stockpiles and their own changing views about how much deterrence is truly required. The president then consults allies—like Japan and South Korea—under the U.S. protective nuclear umbrella before entering into negotiations with Russia. Recent treaties have specified acceptable ranges for warhead stockpiles, with the United States tending to stick around the upper limit and Russia the lower limit. (U.S. military planners are more conservative than their Russian counterparts, in part because more countries rely on American protection.)
The first stage in planning for a reduction of the nuclear arsenal takes the form of the Nuclear Posture Review, a periodic policy analysis conducted by the Department of Defense and several other agencies. This report informs the president of the current status and needs of the nuclear program. The president then issues vague guidelines to the secretary of defense about the purpose of the nuclear weapon program, such as whether a pre-emptive strike might ever be employed. Finally, the Pentagon issues a confidential set of strike options detailing how we might be willing to use our nukes.
Next, the strike options go over to the U.S. Strategic Command, where military planners apply them to hypothetical conflicts with six different adversaries: Russia, China, North Korea, Iran, Syria, and a nonstate actor resembling al-Qaida. Within each simulation, the planners count up potential targets in four categories: 1) military forces; 2) weapons of mass destruction infrastructure, like launch bases and storage facilities; 3) military and national leadership; and 4) war-supporting infrastructure, such as factories, rail lines, and power plants. The number of warheads necessary to destroy or cripple these targets is calculated, taking into account the possibility of mechanical failure. (Planners assume that 15 percent of the nuclear weapons will turn out to be duds.) The calculations also take stock of the need for redundancy, so there will be enough nukes for an attack even in the aftermath of a disabling first strike by an opponent.
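
To make the arithmetic concrete: a dud rate and a redundancy requirement inflate a warhead count in a simple, mechanical way. The sketch below is purely illustrative — the real planning process is classified, and the target count, per-target allocation and redundancy multiplier are invented numbers, not figures from the article.

    import math

    def warheads_required(targets, per_target=1, dud_rate=0.15, redundancy=2.0):
        """Rough estimate of warheads needed to cover a hypothetical target list.

        targets    -- number of aim points in the scenario (invented)
        per_target -- warheads assigned to each aim point (invented)
        dud_rate   -- assumed fraction of weapons that fail (15% per the article)
        redundancy -- cushion for surviving a disabling first strike (invented)
        """
        effective = targets * per_target / (1.0 - dud_rate)  # compensate for duds
        return math.ceil(effective * redundancy)

    # Example with made-up inputs: 600 aim points, one warhead each.
    print(warheads_required(600))  # -> 1412
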
While the plans do not envision simultaneous nuclear conflict with all six adversaries, the military does plan for the possibility that one nuclear power might take advantage of the conflict between two others, either through blackmail or an actual strike.
Under the 2002 SORT treaty, the last bilateral agreement, the United States and Russia were limited to between 1,700 and 2,200 operationally deployed strategic warheads apiece. This limitation refers only to warheads currently mounted on ICBMs, in submarines, or waiting to be loaded onto long-range bombers. Not included are strategic warhead reserves (many of which can be put into action within a few days) or the smaller, tactical nukes that can be delivered by cruise missiles or fighter jets. Currently, the United States possesses about 500 tactical nuclear weapons, compared with roughly 3,000 for the Russians.

Friday, July 3, 2009

When Our Brains Short-Circuit by Nicholas Kristof


Our political system sometimes produces such skewed results that it’s difficult not to blame bloviating politicians. But maybe the deeper problem lies in our brains.

Evidence is accumulating that the human brain systematically misjudges certain kinds of risks. In effect, evolution has programmed us to be alert for snakes and enemies with clubs, but we aren’t well prepared to respond to dangers that require forethought.

If you come across a garter snake, nearly all of your brain will light up with activity as you process the “threat.” Yet if somebody tells you that carbon emissions will eventually destroy Earth as we know it, only the small part of the brain that focuses on the future — a portion of the prefrontal cortex — will glimmer.

“We humans do strange things, perhaps because vestiges of our ancient brain still guide us in the modern world,” notes Paul Slovic, a psychology professor at the University of Oregon and author of a book on how our minds assess risks.

Consider America’s political response to these two recent challenges:

1. President Obama proposes moving some inmates from Guantánamo Bay, Cuba, to supermax prisons from which no one has ever escaped. This is the “enemy with club” threat that we have evolved to be alert to, so Democrats and Republicans alike erupt in outrage and kill the plan.

2. The climate warms, ice sheets melt and seas rise. The House scrounges a narrow majority to pass a feeble cap-and-trade system, but Senate passage is uncertain. The issue is complex, full of trade-offs and more cerebral than visceral — and so it doesn’t activate our warning systems.

“What’s important is the threats that were dominant in our evolutionary history,” notes Daniel Gilbert, a professor of psychology at Harvard University. In contrast, he says, the kinds of dangers that are most serious today — such as climate change — sneak in under the brain’s radar.

Professor Gilbert argues that the threats that get our attention tend to have four features. First, they are personalized and intentional. The human brain is highly evolved for social behavior (“that’s why we see faces in clouds, not clouds in faces,” says Mr. Gilbert), and, like gazelles, we are instinctively and obsessively on the lookout for predators and enemies.

Second, we respond to threats that we deem disgusting or immoral — characteristics more associated with sex, betrayal or spoiled food than with atmospheric chemistry.

“That’s why people are incensed about flag burning, or about what kind of sex people have in private, even though that doesn’t really affect the rest of us,” Professor Gilbert said. “Yet where we have a real threat to our well-being, like global warming, it doesn’t ring alarm bells.”

Third, threats get our attention when they are imminent, while our brain circuitry is often cavalier about the future. That’s why we are so bad at saving for retirement. Economists tear their hair out at a puzzlingly irrational behavior called hyperbolic discounting: people’s preference for money now rather than much larger payments later.

For example, in studies, most Americans prefer $50 now to $100 in six months, even though that represents a 100 percent return.

Fourth, we’re far more sensitive to changes that are instantaneous than those that are gradual. We yawn at a slow melting of the glaciers, while if they shrank overnight we might take to the streets.

In short, we’re brilliantly programmed to act on the risks that confronted us in the Pleistocene Age. We’re less adept with 21st-century challenges.

At the University of Virginia, Professor Jonathan Haidt shows his Psychology 101 students how evolution has prepared us to fear some things: He asks how many students would be afraid to stand within 10 feet of a friend carrying a pet boa constrictor. Many hands go up, although almost none of the students have been bitten by a snake.

“The objects of our phobias, and the things that are actually dangerous to us, are almost unrelated in the modern world, but they were related in our ancient environment,” Mr. Haidt said. “We have no ‘preparedness’ to fear a gradual rise in the Earth’s temperature.”

This short-circuitry in our brains explains many of our policy priorities. We Americans spend nearly $700 billion a year on the military and less than $3 billion on the F.D.A., even though food poisoning kills more Americans than foreign armies and terrorists. We’re just lucky we don’t have a cabinet-level Department of Snake Extermination.

Still, all is not lost, particularly if we understand and acknowledge our neurological shortcomings — and try to compensate with rational analysis. When we work at it, we are indeed capable of foresight: If we can floss today to prevent tooth decay in later years, then perhaps we can also drive less to save the planet.

Nuclear Bomb Tests Behind Ivory Dating by Julian Rush


Selling ivory is not illegal, providing it is from an elephant that died before 1947. But until now, proving the age of an item was notoriously difficult, relying on expert opinion.

Around the world, forgers have become adept at faking modern carvings to make them look old. Conservationists argued that sales on auction sites like eBay created a market that encouraged the forgers - and the poachers who kill elephants to meet the demand.

But now a sophisticated forensic scientific technique has been used for the first time in a court case of a woman accused of illegally offering carved ivory items for sale on eBay.

Following a tip-off from the National Wildlife Crime Unit, Hampshire Police raided the home of a woman from Aldershot in April 2007 and found 34 items. She was found not guilty at Winchester Crown Court last week.

The technique - radiocarbon dating - is set to become an important weapon in the international fight against the illegal trade in animal parts and products that some suggest is worth billions of pounds a year.

For this case, Hampshire police called in the TRACE Wildlife Forensics Network, which put them in touch with scientists in Scotland who specialise in radiocarbon dating. The technique is more often used to date bones for archaeologists or rocks for geologists, and it relies on the radioactive decay of carbon-14.

Carbon-14 is a radioactive isotope of normal carbon and it occurs naturally in small quantities. It's taken up into the tissue and bones of every living plant and animal on Earth during life. At death, uptake stops and the carbon-14 decays at a known rate, so by measuring the ratio of carbon-14 to stable carbon (which doesn't decay), scientists can determine the age of the sample.
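
As a minimal sketch of that calculation — assuming the standard carbon-14 half-life of roughly 5,730 years, and leaving aside the calibration corrections real laboratories apply:

    import math

    HALF_LIFE_YEARS = 5730.0  # approximate half-life of carbon-14

    def age_from_c14_fraction(fraction_remaining):
        """Years since death, given the surviving fraction of the original carbon-14."""
        return -HALF_LIFE_YEARS * math.log(fraction_remaining) / math.log(2.0)

    # Example: a sample retaining 90% of its original carbon-14
    print(round(age_from_c14_fraction(0.90)))  # roughly 870 years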

That's fine for dating Neanderthal bones or an Egyptian mummy.

Every Mushroom Cloud Has A Silver Lining

But in the 1950s and 60s, the fall-out from atmospheric nuclear bomb tests suddenly added extra carbon-14 to the atmosphere. All of us now have elevated levels of carbon-14 in our bodies as a result. And so too does every elephant, rhinoceros or walrus that's been alive since 1950. 

"If we find that the level of carbon-14 is enriched, then we know that elephant was alive in the nuclear era and therefore the ivory is illegal." Professor Gordon Cook, of the Scottish Universities Environmental Research Centre, who dated the ivory in this case, told Channel 4 News.

The spike is so high and so clear they can identify any organism that was alive after 1950.

Which, by chance, coincides almost exactly with the date for the age of ivory that can be legally sold. 
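
The screening logic is therefore a simple threshold test. The sketch below is a hedged illustration only — the sample values are invented, and real casework involves careful measurement and calibration:

    PRE_BOMB_BASELINE = 1.0  # carbon-14 "fraction modern" before the atmospheric tests

    def shows_bomb_pulse(fraction_modern, baseline=PRE_BOMB_BASELINE):
        """True if the measured carbon-14 level is enriched above the pre-bomb baseline."""
        return fraction_modern > baseline

    # Invented sample measurements for illustration
    for label, f14c in [("carving A", 0.93), ("carving B", 1.18)]:
        if shows_bomb_pulse(f14c):
            print(label, "-> post-1950, cannot be legally sold")
        else:
            print(label, "-> no bomb-pulse enrichment detected")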

Though the woman was not convicted, TRACE, an international collaboration of campaigners, enforcement agencies and forensic scientists set up in 2006, believes the technique can be used successfully in future against the illegal wildlife trade like that in tiger body parts, rhino horn and scrimshaw. 

"We're now able to fully enforce the wildlife trade legislation. It opens the door for police to go after people trading illegally in ivory." said Dr. Ross McEwing of TRACE.

Professor Gordon Cook says the "nuclear bomb test" has wider uses. "It has huge potential. We've also looked at human teeth and we think we can tie down the year of birth by measuring the carbon-14. That, for something like mass graves, could be very important."

In January 2009, after a global campaign by environmentalists, eBay finally banned all sale of ivory.

Thursday, July 2, 2009

The Triumph of the Random by Leonard Mlodinow


It was the summer of 1945, and World War II had ended. Former soldiers, including famous baseball stars, streamed back into America and into American life. Yankee slugger Joe DiMaggio was trying to be Yankee fan Joe DiMaggio, sneaking into a mezzanine seat with his 4-year-old son, Joe Jr., before rejoining his team. A fan noticed him, then another. Soon throughout the stadium people were chanting “Joe, Joe, Joe DiMaggio!” DiMaggio, moved, gazed down to see if his son had noticed the tribute. He had. “See, Daddy,” said the little DiMaggio, “everybody knows me!”

We all interpret the events around us according to our own worldview. By adulthood we’ve either gotten beyond the me-me-me context of 4-year-olds, or gone into politics. But in drawing conclusions about the data we encounter in sports, business, medicine and even our personal lives, we often make errors as significant as that of Joe Jr.

This holiday weekend—the Fourth of July—kicks off the home stretch of a two-month period that made Joe DiMaggio Sr. an icon of American culture. In 1941, a few months before Joe Jr. was born, and sandwiched between the day Hitler’s insane deputy Rudolf Hess parachuted into Scotland on an unauthorized peace mission and the day a secret British report concluded that the Allies could complete an atomic bomb ahead of Germany, there was a period of 56 successive Yankee games in which Joltin’ Joe had at least one hit.

DiMaggio’s hitting streak was an inspiration in troubled times. The pursuit of any record comes with pressure—Roger Maris lost some of his hair during his attempt to break Babe Ruth’s home-run record in 1961—but most records forgive you an off day as long as you compensate at other times. Not so with a streak, which demands unwavering performance. And so DiMaggio’s streak has been interpreted as a feat of mythic proportion, seen as a heroic, even miraculous, spurt of unrivaled effort and concentration.

But was it? Or was this epic moment simply a fluke?

Recent academic studies have questioned whether DiMaggio’s streak is unambiguous evidence of a spurt of ability that exceeded his everyday talent, rather than an anomaly to be expected from some highly talented player, in some year, by chance, something like the occasional 150-yard drive in golf that culminates in a hole in one. No one is saying that talent doesn’t matter. They are just asking whether a similar streak would have happened sometime in the history of baseball even if each player hit with the unheroic and unmiraculous—but steady—ability of an emotionless robot.

That randomness naturally leads to streaks contradicts people’s intuition. If we were to picture randomness, we might think of a graph that looks jerky, not smooth like a straight line. But random processes do display periods of order. In a toss of 100 coins, for example, the chances are more than 75% that you will see a streak of six or more heads or tails, and almost 10% that you’ll produce a streak of 10 or more. As a result a streak can look quite impressive even if it is due to nothing more than chance.
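
A short Monte Carlo sketch (plain Python, nothing from the article beyond the 100-toss setup) makes the claim easy to check for yourself:

    import random

    def longest_run(n_tosses):
        """Length of the longest run of identical outcomes in n fair coin tosses."""
        longest, current, previous = 0, 0, None
        for _ in range(n_tosses):
            toss = random.random() < 0.5
            current = current + 1 if toss == previous else 1
            previous = toss
            longest = max(longest, current)
        return longest

    trials = 20_000
    runs = [longest_run(100) for _ in range(trials)]
    print("run of 6 or more: ", sum(r >= 6 for r in runs) / trials)   # around 0.8
    print("run of 10 or more:", sum(r >= 10 for r in runs) / trials)  # a little under 0.1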

A few years ago Bill Miller of the Legg Mason Value Trust Fund was the most celebrated fund manager on Wall Street because his fund outperformed the broad market for 15 years straight. It was a feat compared regularly to DiMaggio’s, but if all the comparable fund managers over the past 40 years had been doing nothing but flipping coins, the chances are 75% that one of them would have matched or exceeded Mr. Miller’s streak. If Mr. Miller was really merely the goddess of Fortune’s lucky beneficiary, then one would expect that once the streak ended there would be no carryover of his apparent golden touch. In that expectation Mr. Miller did not disappoint: in recent years his fund has significantly lagged the market as he bet on duds like AIG, Bear Stearns, Merrill Lynch & Co. and Freddie Mac.

Of course a 10- or 15-game streak is a far cry from one of 56 games. This is where DiMaggio’s great ability plays a role, for if we are to compare DiMaggio’s performance to a coin, it must be a weighted coin. With a lifetime batting average of .325, DiMaggio had a better-than-75% chance of getting a hit in a game, while a balanced coin has but a 50% chance of success. Moreover, each year for over a century, hundreds of players have sought to achieve a streak such as DiMaggio’s. All those factors increase the odds that such a streak could have occurred by chance alone.
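
The "weighted coin" figure follows from a back-of-envelope calculation. The sketch below assumes roughly four at-bats per game — an assumption of mine, not a number from the article:

    batting_average = 0.325   # DiMaggio's lifetime average, per the article
    at_bats_per_game = 4      # assumed typical figure

    # Chance of at least one hit in a game, treating each at-bat as independent
    p_hit_in_game = 1 - (1 - batting_average) ** at_bats_per_game
    print(f"{p_hit_in_game:.1%}")  # about 79%, i.e. better than 75%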

It’s not just the statisticians who wonder whether our heroes achieve records more often than coins. Psychologists, and, increasingly, economists, also puzzle over the seemingly discrete worlds of chance and perception. The fusion of those worlds was sanctified when half of the 2002 Nobel Prize in economics was awarded to psychologist Daniel Kahneman “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.”

Research into why people misinterpret streaks dates to 1985, and a paper co-authored by Mr. Kahneman’s regular collaborator, the late Amos Tversky, in the journal Cognitive Psychology. (No one doubts that, had he lived, Mr. Tversky would have shared in the prize). The paper was titled “The hot hand in basketball: On the misperception of random sequences.” Everyone who has ever played basketball knows the feeling of being “in the zone.” Your hand is on fire. You can’t miss. But are you feeling a true increase in ability, or is your mind inferring it because you just took a bunch of shots that, for whatever reason, went in?

If a person tossing a coin weighted to land on heads 80% of the time produces a streak of 10 heads in a row, few people would see that as a sign of increased skill. Yet when an 80% free throw shooter in the NBA has that level of success, people have a hard time accepting that it isn’t. The Cognitive Psychology paper, and the many that followed, showed that despite appearances, the “hot hand” is a mirage. Such hot and cold streaks are identical to those you would obtain from a properly weighted coin.

Why do people have a hard time accepting the slings and arrows of outrageous fortune? One reason is that we expect the outcomes of a process to reflect the underlying qualities of the process itself. For example, if an initiative has a 60% chance of success, we expect that six out of every 10 times such an initiative is undertaken, it will succeed. That, however, is false. In order to warrant confidence that results reflect a deeper truth, you need many more trials than 10. In fact, one of the most counterintuitive features of randomness is that for a small number of trials, the results of a random process typically do not reflect the underlying probabilities.

For example, suppose we undertake an analysis of the resources, effort and ability of all the companies in the Fortune 500 and determine that every company has the same 60% chance of success in any given year. If we observe all the companies over a period of five years and the underlying probability of success were reflected in each company’s results, then over the five-year period every company would have three good years.

The mathematics of chance indeed dictate that in this situation the odds of a company having zero, one, two, four or five good years are lower than the odds of having three. Nevertheless it is not likely that a company will have three out of five good years—because there are so many of those misleading outcomes, their combined odds add up to twice the odds of having exactly three. That means that of the 500 companies, two-thirds will experience results that belie their underlying potential. In fact, according to the rules of randomness, nearly 50 of the companies will have a streak of either five good years, or five bad years, even if their corporate capacities were no better or worse than their counterparts’. And so if you judged the companies by their five-year results alone, you would probably over- or underestimate their true value.
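
Those figures can be checked directly with the binomial distribution, under the example's own assumptions (500 identical companies, each with a 60% chance of a good year):

    from math import comb

    p, years, companies = 0.6, 5, 500

    p_exactly_3 = comb(years, 3) * p**3 * (1 - p)**2   # about 0.35
    p_other = 1 - p_exactly_3                          # about 0.65, roughly twice as likely
    p_streak = p**years + (1 - p)**years               # five good years or five bad years

    print(f"exactly 3 good years out of 5: {p_exactly_3:.1%}")
    print(f"any other outcome:             {p_other:.1%}")
    print(f"expected companies with a five-year streak: {companies * p_streak:.0f}")  # about 44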

In sports, the championship contenders are usually pretty evenly matched. But in baseball, even if one assumes that the better team has a lopsided 55/45 edge over the inferior one, the lesser team will win the seven-game World Series 40% of the time. That might seem counterintuitive, but you can look at it as follows. If you play a best-of-one game series, then, by our assumption, the lesser team will win 45% of the time. Playing a longer series will cut down that probability. The problem is that playing a seven-game series only cuts it down to 40%, which isn’t cutting it down by much. What if you demand that the lesser team win no more than 5% of the time—a constraint called statistical significance? The World Series would have to be the best of 269 games, and probably draw an audience the size of that for Olympic curling. Swap baseball for marketing, and you find a mistake often made by marketing departments: assuming that the results of small focus groups reflect a trend in the general population.
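
The series arithmetic is easy to reproduce, treating each game as an independent 55/45 contest as the example assumes:

    from math import comb

    def underdog_wins_series(p=0.45, games=7):
        """Probability the weaker team (per-game win probability p) takes a best-of-N series."""
        wins_needed = games // 2 + 1
        # Sum over series that end on game k, with the underdog winning the final game.
        return sum(comb(k - 1, wins_needed - 1)
                   * p**wins_needed * (1 - p)**(k - wins_needed)
                   for k in range(wins_needed, games + 1))

    print(f"best of 7:   {underdog_wins_series(0.45, 7):.1%}")    # roughly 39%
    print(f"best of 269: {underdog_wins_series(0.45, 269):.1%}")  # roughly 5%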

We find false meaning in the patterns of randomness for good reason: we are animals built to do just that. Suppose, for example, that you sit a subject in front of a light which flashes red twice as often as green, but otherwise without pattern. After the subject watches for a while, you offer the subject a reward for each future flash correctly predicted. What is the best strategy?

A nonhuman animal in this situation will always guess red, the more frequent color. A different strategy is to match your proportion of red and green guesses to the proportion you observed in the past, that is, two reds for every green. If the colors come in some pattern that you can figure out, this strategy will enable you to be right every time. But if the colors come without pattern you will do worse. Most humans try to guess the pattern, and in the process allow themselves to be outsmarted by a rat. (Those trying to time the market lately might wish they had let the rat take charge.) Looking for order in patterns has allowed us to understand the patterns of the universe, and hence to create modern physics and technology; but it also sometimes compels us to submit bids on eBay because we see the face of Jesus in a slice of toast.
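
For the record, the expected accuracy of the two strategies — always guessing the frequent color versus matching the observed proportions — works out as follows, assuming the light really is patternless and red two-thirds of the time:

    p_red = 2 / 3  # red flashes twice as often as green

    always_red = p_red                       # guess red every time
    matching = p_red**2 + (1 - p_red)**2     # guess red 2/3 of the time, green 1/3

    print(f"always guess red:     {always_red:.1%}")  # about 66.7%
    print(f"probability matching: {matching:.1%}")    # about 55.6%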

Another reason we reject the power of randomness is our need for control. DiMaggio’s streak affects us because we all appreciate struggle and effort, triumphing despite huge odds. The notion that we might not have control over our environment, on the other hand, causes us to shudder.

Many studies illustrate how this basic aspect of human nature translates to a misperception of chance. For example, a group of Yale students were asked to predict the result of a series of coin tosses. The tosses were secretly rigged so that each student would have some success initially, but end up with a 50% success rate. The students were obviously aware of the random nature of their task. Yet when asked whether their performance would be hampered by distraction, and whether it would improve with practice, a significant number indicated that it would. Their deep-seated need for control trumped their intellectual understanding of the situation.

What about DiMaggio’s streak? There are many subtleties in randomness. For example, do you model a player as having a fixed batting average—that is, probability of a hit—or do you allow for the average to vary within the season, or even game to game? How do you treat the variation in at-bats, walks, etc.? The analyses can get long and the number of data needed unwieldy, so the jury is still out, but one of the studies, by Samuel Arbesman and Stephen H. Strogatz of Cornell University, attacked the problem by having a computer generate a mock version of each year in baseball from 1871 to 2005, based on the players’ actual statistics from that year. The scientists had the computer repeat the process 10,000 times, generating in essence 10,000 parallel histories of the sport.

The researchers found that 42% of the simulated histories had a streak of DiMaggio’s length or longer. The longest record streak was 109 games, the shortest, 39. In those 10,000 universes, many other players held the record more often than DiMaggio. Ty Cobb, for example, held it nearly 300 times.
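
For readers who want to tinker, here is a much-simplified stand-in for that exercise — it is not the Arbesman-Strogatz model, and every input (the per-game hit probability, the number of player-seasons) is an assumption chosen only for illustration:

    def p_run_at_least(run_length, n_games, p):
        """Probability of a hit streak of at least run_length in n_games independent games."""
        # state[j] = probability of no qualifying streak so far, with current streak length j
        state = [1.0] + [0.0] * (run_length - 1)
        for _ in range(n_games):
            nxt = [0.0] * run_length
            nxt[0] = sum(state) * (1 - p)        # a hitless game resets the streak
            for j in range(1, run_length):
                nxt[j] = state[j - 1] * p        # a game with a hit extends it
            state = nxt                          # mass reaching run_length has "escaped"
        return 1 - sum(state)

    p_game = 0.79                 # assumed chance of at least one hit per game
    per_season = p_run_at_least(56, 154, p_game)
    player_seasons = 100 * 130    # rough guess: ~100 everyday players over ~130 seasons

    print(f"one season:           {per_season:.2e}")
    print(f"somewhere in history: {1 - (1 - per_season) ** player_seasons:.0%}")  # ~40% with these made-up inputs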

DiMaggio’s streak, for better or worse, defined his life. Decades later, constantly hounded by autograph seekers, he wrote, “If I thought this would be taking place due to the streak, I would have stopped hitting at 40 games.” He died just 10 years ago, at age 84. Joe Jr., who had trouble coping with his father’s fame, fell into drug and alcohol abuse. He died five months after his father.

People are remembered—and often rewarded—not for their usual level of talent or hard work, but for their singular achievements, the ones that stand out in memory. It does no harm to view those achievements as heroic. But it does harm us to make investments or other decisions on the basis of misunderstanding. And it can be sad or even tragic when we interpret plans or people as failures simply because they did not succeed. Extraordinary events, both good and bad, can happen without extraordinary causes, so it is best to remember the other factor that is always present—the factor of chance.

Saturday, June 27, 2009

Internet Groans Under Weight of Michael Jackson Traffic by Jacqui Cheng


The news of pop icon Michael Jackson's collapse and subsequent death sent ripples across the Web on Thursday afternoon, affecting numerous services and sparking yet another spam campaign. Twitter, Google, Facebook, various news sites, and even iTunes were practically crushed under the weight of the sudden spike in Internet traffic. The phenomenon may not be new on an individual level, but combined across services, it was truly one of the most significant in recent memory.

When news first broke that Jackson had collapsed in his home, Twitter was immediately abuzz. There were several points when the Ars staff observed between 6,000 and 13,000 new tweets per minute mentioning Michael Jackson before Twitter began to melt down—all before anyone other than TMZ.com was reporting his death. Of course, most of us are intimately familiar with the famed Fail Whale at this point, though Twitter's meltdown was mostly reflected in a major slowing of the service and the inability to send new tweets.

In fact, Twitter cofounder Biz Stone told the L.A. Times that the news of Jackson's passing caused the biggest spike in tweets per second since the US Presidential Election. (Similarly, Facebook—also known as Wannabe Twitter—saw a spike in status updates that was apparently three times more than average for the site, though a spokesperson said the site remained free of performance issues.)

Google, on the other hand, began receiving so many searches for news about Jackson that it caused the search engine to believe it was under attack. The site went into self-protection mode, throwing up CAPTCHAs and malware alerts to users trying to find news. A Google spokesperson described the incident as "volcanic" compared to other major news events, confirming that there was a service slowdown for some time.

The spammers have come out in droves as well (never assume that someone isn't working on a way to instantaneously exploit the death of a major celebrity). Security researchers at Sophos put up a warning this morning saying that the first wave of spam messages has gone out claiming to have "vital information" regarding Jackson's death. There doesn't appear to be any call to action or URL; the message is simply meant as a way to harvest e-mail addresses if recipients make the mistake of replying. We're sure this trend will continue in the coming days, not just about Michael Jackson, but also actress Farrah Fawcett (who also passed earlier in the day on Thursday), and former Tonight Show sidekick Ed McMahon.

Finally, Apple's iTunes Store appeared to experience some slowdowns upon the confirmation of Jackson's passing, though the service has been running smoothly since. Users, however, are paying (literally) tribute to MJ, as several of Jackson's hit singles are climbing into the iTunes Top 10, including "Man In the Mirror," "Thriller," and "Don't Stop 'Til You Get Enough." This is pretty impressive considering that he has made the list with three songs in less than 24 hours, competing with the likes of Black Eyed Peas' "Boom Boom Pow" (which has been close to the top of the iTunes Top 10 for what seems like three millennia—seriously, please quit buying it so it goes away).

Coupled with news that fans have been gathering in cities across the US to perform the renowned (if not a bit morbid, all things considered) Thriller zombie dance, we can't help but feel as if the user-driven age of the Internet will keep his memory alive in ways that past music icons have not had. (Only a couple of Ars staffers—we're looking at you Bangeman and Timmer—are old enough to remember the passing of Elvis Presley in 1977, which dominated newspapers and TV for days afterwards.) YouTube even has a Michael Jackson spotlight on its front page right now, so if you're feeling nostalgic, head on over and check out MJ's smooth moves from the days of yore. R.I.P., King of Pop.

Tuesday, June 23, 2009

The Ins and Outs of Borderline Tennis Calls by Alan Schwarz


When a line judge at Wimbledon rules on a hair-splittingly close call and says the ball is out, the inevitably disgruntled player should not only consider challenging the call for review by the digital replay system.

He should also consult a recent issue of Current Biology.

A vast majority of near-the-line shots called incorrectly by Wimbledon line judges have come on balls ruled out that were actually in, according to a study published in October by researchers at the University of California-Davis. To the vision scientist, the finding added to the growing knowledge of how the human eye and brain misperceive high-speed objects. To the tennis player, it strongly suggests which calls are worth challenging and which are best left alone.

The researchers identified 83 missed calls during the 2007 Wimbledon tournament. (Some were challenged by players and overruled, and others were later identified as unquestionably wrong through frame-by-frame video.) Seventy of those 83 calls, or 84 percent, were on balls ruled out — essentially, shots that line judges believed had traveled farther than they actually did.

Called perceptual mislocalization by vision scientists, this subconscious bias is known less formally to Wimbledon fans as “You cannot be serious!” — John McEnroe’s infamous dissent when, yes, a 1981 shot was ruled out. Now that players can resort to a more evolved appeal procedure, the researchers’ discovery suggests that players should generally use their limited number of challenges on questionable out calls rather than on those that are called in, because such out calls have a far better chance of being discovered as mistaken on review, then overturned.

“What we’re really interested in is how visual information is processed, and how it can be used to a player’s advantage,” said David Whitney, an associate professor at U.C.-Davis’s Center for Mind and Brain and the paper’s lead author. “There is a delay of roughly 80 to 150 milliseconds from the first moment of perception to our processing it, and that’s a long time. That’s one reason why it’s so hard to catch a fly — the fly’s ability to dance around is faster than our ability to determine where it is.”

This is the third Wimbledon in which players can challenge questionable calls for review by the Hawk-Eye system, which uses high-speed video cameras to record balls’ flight. (About 25 percent of all challenges result in overturned calls.) There is no cost to the player when a call is proved correct, but after three such episodes in a set a player may not challenge again. Whether through strategy or residual tennis etiquette, most players leave many challenges unused.

Theoretically, line judges should be equally prone to call an out ball in as they are an in ball out. But when objects travel faster than humans’ eyes and brains can precisely track them — for example, Andy Roddick’s 150-mile-per-hour serves — they are left having to fill in the gaps in their perception. In doing so they tend to overshoot the object’s actual location and think it traveled slightly farther than it truly did.

Both the successful challenges and the overlooked mistakes that the researchers later identified were several times more likely to come on “long” calls than “in” calls. (The same pattern existed at Wimbledon last year, Whitney said, although the paper did not present that data.) So players are better off using as many challenges as possible on balls called out, because those are the calls most likely to be wrong; if a player thinks an “in” call was wrong, chances are his own eyes were as fooled as line judges’ sometimes are.

Tennis officials are, in effect, already told to compensate for this mislocalization effect, even if they do not know why. Published instructions for United States Tennis Association line judges tell them to “focus your eyes on the portion of the line where the ball will land,” rather than attempt to track the ball in flight. “Get to the spot well before the ball arrives,” they are advised.

Rich Kaufman, the association’s director of officials and a linesman and chair umpire from 1976 to 1997, said that “one of the hardest things to teach new linesmen is to take their eye off the ball.”

“I once asked an eye doctor, then what am I seeing on a bounce?” Kaufman said. “The doctor said that’s your brain working — you think you see the initial point of impact but it’s the blur of the entry and exit of the ball.”

A player using his knowledge of this effect in challenging calls could see a benefit of about one or two overturned points per match, Whitney said, plus any psychological boost from feeling vindicated rather than robbed. But Whitney added that understanding how the brain misperceives visual stimuli can help in more real-life matters, like the design and placement of high-speed safety equipment, automobile brake lights and warning signs of all types.

As for Wimbledon, it appears as if the new information can only help players, not the judges who vex them. Kaufman said: “You have to call what you see. Or what you think you see.”

Monday, June 22, 2009

Why the Eyes Have It by Christopher Chabris


Why are we humans so good at seeing in color? Why do we have eyes on the front of our heads rather than on the sides, like horses? And how is it that we find it so easy to read when written language didn’t even exist until a few thousand years ago—a virtual millisecond in evolutionary time?
Most of us, understandably, have never given much thought to questions like these. What is surprising is that most cognitive scientists haven’t either. People who study the brain generally ask how it works the way it does, not why. But Mark Changizi, a professor at Rensselaer Polytechnic Institute and the author of “The Vision Revolution,” is indeed a man who asks why, and lucky for us: His ideas about the brain and mind are fascinating, and his explanations for our habits of seeing are, for the most part, persuasive.
Mr. Changizi takes care not to call himself a practitioner of evolutionary psychology. This is the one discipline of the mind sciences that focuses on why questions, but it often answers them by telling just-so stories that cannot be disproved. (Why do men have better spatial ability than women? Because a long time ago, in Africa, men needed spatial skills to track prey and to kill at a distance—a plausible theory but one that is difficult to test with experiments.) Instead Mr. Changizi calls himself a “theoretical neuroscientist,” seeking explanations for the design of the mind that are based on mathematical and physical analysis. He has his own stories, it is true, but they are grounded solidly in neuroscience, and they are backed up by data about a surprising range of human activities, from the colors found in historical costumes to the correspondence between the shapes found in written letters and the shapes found in nature.
Let’s start with the question of color. It is such a natural part of our visual experience that we don’t stop to wonder why we can see it at all. Without color television there would have been no “Miami Vice,” of course, but were we really missing out on so much when we had only black and white? The consensus explanation for our superior ability to perceive color is that primates evolved it to see fruit—you can’t order dinner if you can’t read the menu.
Mr. Changizi thinks otherwise. He proposes that color vision is useful for distinguishing the changes in other people’s skin color—changes that are caused by shifts in the volume and oxygenation levels of the blood. Such shifts, like blushing, often signal emotional states. The ability to see them is adaptive because it helps an observer to “read” states of mind and states of health in others, information that is in turn useful for predicting their behavior.
Our brains evolved in a time when people lived their entire lives without ever seeing someone with a skin color different from their own. Thus the skin color we grow up seeing, Mr. Changizi says, is “neutral” to us: It serves as a kind of baseline from which we notice even minor deviations in tint or hue. Almost every language has distinct words for some 11 basic colors, but none of them aptly describe the look of skin, which seems colorless (except in our recent multicultural societies, where skin color is newly prominent). As one might expect, primates without color vision tend to have furry faces and hands and thus less need to perceive skin color; primates with color vision are more “naked” in this respect, humans most of all.
Conventional wisdom may be similarly misleading when it comes to binocular vision. It is said that we have two forward-facing eyes, which send our brains two separate images of almost everything in our field of view so that the brain can compare those images to estimate the distance of objects—a generally useful thing to know. But people who are blind in one eye, Mr. Changizi notes, can perform tasks like driving a car by using other cues that help them to judge distance. He offers a different explanation: that two eyes give us a sort of X-ray vision, allowing us to see “through” nearby objects to what is beyond.
You can experience this ability yourself by closing one eye and holding your forefinger near your face: It will appear in your field of vision, of course, and it will block what lies beyond or behind it. If you open both eyes, though, you will suddenly perceive your finger as transparent—that is, you will see it and see, unblocked, the full scene in front of you. Mr. Changizi observes that an animal in a leafy environment, with such an ability, gains an advantage: It can lurk in tall grass and still see what is “outside” its hiding place. He correlates the distance between the eyes and the density of vegetation in the habitats of animals and finds that animals with closer-set eyes do tend to live in forests rather than on plains.
As for reading, Mr. Changizi stops to observe how remarkable this ability is and how useful, giving us access to the minds of dead people (i.e., deceased writers) and permitting us to take in words much faster than we can by merely listening to them. He claims that we learn to read so easily because the symbols in our written alphabets have evolved, over many generations, to resemble the building blocks of natural scenes—exactly what previous millennia of evolution adapted the brain to perceive quickly. A “T,” for example, appears in nature when one object overlaps another, like a stone lying on top of a stick. With statistical analysis, Mr. Changizi finds that the contour patterns most common in nature are also most common in letter shapes.
Mr. Changizi has more to say about our visual experience—about optical illusions, for instance, which he sees as artifacts of a trick the brain uses to cope with the one-tenth of a second it takes to process the light that hits our eyes and to determine what is actually in front of us. He calls for a new academic discipline of “visual linguistics,” and he tells us why there are no species with just one eye.
What does all this add up to? Provocative hypotheses but not settled truth—at least not yet. As a theoretician, Mr. Changizi leaves it to others to design experiments that might render a decisive verdict. Someone else will have to study how accurately people can perceive mental states from shifting skin tones, and someone else will have to determine whether, in most cases, looking at another person’s skin adds any useful information to what is easily known from facial expression, tone of voice and body language.
Still, the novel ideas that Mr. Changizi outlines in “The Vision Revolution”—together with the evidence he does present—may have a big effect on our understanding of the human brain. Their implication is that the environments we evolved in shaped the design of our visual system according to a set of deep principles. Our challenge now is to see them clearly.

Thursday, June 18, 2009

I never thought I'd be rooting for Iran by Bradley Burston


I am in awe of the courage of the people of Iran. They are giving the world hope. They are teaching a shocking lesson about truth. They embody freedom. And, perhaps hardest to grasp, for those of us who live in the Middle East, they are putting their very lives on the line not for the sake of some ferociously sectarian End of Days, but for the most profoundly radical notion of all - a better life.

Every person who has taken to the streets to demand what their government promised them - free and fair elections - did so knowing that police or secret police could arrest them, act to cripple their careers, or outright gun them down.

Tuesday, June 16, 2009

Don't Call What Happened in Iran Last Week an Election by Christopher Hitchens


For a flavor of the political atmosphere in Tehran, Iran, last week, I quote from a young Iranian comrade who furnishes me with regular updates:
I went to the last major Ahmadinejad rally and got the whiff of what I imagine fascism to have been all about. Lots of splotchy boys who can't get a date are given guns and told they're special.
It's hard to better this, either as an evocation of the rancid sexual repression that lies at the nasty core of the "Islamic republic" or as a description of the reserve strength that the Iranian para-state, or state within a state, can bring to bear if it ever feels itself even slightly challenged. There is a theoretical reason why the events of the last month in Iran (I am sorry, but I resolutely decline to refer to them as elections) were a crudely stage-managed insult to those who took part in them and those who observed them. And then there is a practical reason. The theoretical reason, though less immediately dramatic and exciting, is the much more interesting and important one.
Iran and its citizens are considered by the Shiite theocracy to be the private property of the anointed mullahs. This totalitarian idea was originally based on a piece of religious quackery promulgated by the late Ayatollah Ruhollah Khomeini and known as velayat-e faqih. Under the terms of this edict—which originally placed the clerics in charge of the lives and property of orphans, the indigent, and the insane—the entire population is now declared to be a childlike ward of the black-robed state. Thus any voting exercise is, by definition, over before it has begun, because the all-powerful Islamic Guardian Council determines well in advance who may or may not "run." Any newspaper referring to the subsequent proceedings as an election, sometimes complete with rallies, polls, counts, and all the rest of it, is the cause of helpless laughter among the ayatollahs. ("They fell for it? But it's too easy!") Shame on all those media outlets that have been complicit in this dirty lie all last week. And shame also on our pathetic secretary of state, who said that she hoped that "the genuine will and desire" of the people of Iran would be reflected in the outcome. Surely she knows that any such contingency was deliberately forestalled to begin with.
In theory, the first choice of the ayatollahs might not actually "win," and there could even be divisions among the Islamic Guardian Council as to who constitutes the best nominee. Secondary as that is, it can still lead to rancor. After all, corrupt systems are still subject to fraud. This, like hypocrisy, is the compliment that vice pays to virtue. With near-incredible brutishness and cruelty, then, the guardians moved to cut off cell-phone and text-message networks that might give even an impression of fairness and announced through their storm-troop "revolutionary guards" that only one form of voting had divine sanction. ("The miraculous hand of God," announced Supreme Leader Ali Khamenei, had been present in the polling places and had announced a result before many people had even finished voting. He says that sort of thing all the time.)
The obvious evidence of fixing, fraud, and force to one side, there is another reason to doubt that an illiterate fundamentalist like Mahmoud Ahmadinejad could have increased even a state-sponsored plebiscite-type majority. Everywhere else in the Muslim world, in every election in the last two years, the tendency has been the other way. In Morocco in 2007, the much-ballyhooed Justice and Development Party wound up with 14 percent of the vote. In Malaysia and Indonesia, the predictions of increased market share for the pro-Sharia parties were likewise falsified. In Iraq this last January, the local elections penalized the clerical parties that had been making life a misery in cities like Basra. In neighboring Kuwait last month, the Islamist forces did poorly, and four women—including the striking figure of Rola Dashti, who refuses to wear any headgear—were elected to the 50-member parliament. Most important of all, perhaps, Iranian-sponsored Hezbollah was convincingly and unexpectedly defeated last week in Lebanon after an open and vigorous election, the results of which were not challenged by any party. And, from all I hear, if the Palestinians were to vote again this year—as they were at one point supposed to do—it would be highly improbable that Hamas would emerge the victor.
Yet somehow a senile and fanatical religious clique that has failed even to condition the vote in a country like Lebanon, where it has proxy and surrogate parties under arms, is able to reward itself by increasing its "majority" in a festeringly bankrupt state where it controls the media and enjoys a monopoly of violence. I think we should deny it any official recognition of this consolation. (I recommend a reading of "Neither Free Nor Fair: Elections in the Islamic Republic of Iran" and other productions of the Abdorrahman Boroumand Foundation. This shows that past penalties for not pleasing the Islamic Guardian Council have included more than mere disqualification and have extended to imprisonment and torture and death, sometimes in that order. A new movie by Cyrus Nowrasteh, The Stoning of Soraya M., will soon show what happens to those who dare to dissent in other ways and are dealt with by Ahmadinejad's "grass roots" fanatics.)
Mention of the Lebanese elections impels me to pass on what I saw with my own eyes at a recent Hezbollah rally in south Beirut, Lebanon. In a large hall that featured the official attendance of a delegation from the Iranian Embassy, the most luridly displayed poster of the pro-Iranian party was a nuclear mushroom cloud! Underneath this telling symbol was a caption warning the "Zionists" of what lay in store. We sometimes forget that Iran still officially denies any intention of acquiring nuclear weapons. Yet Ahmadinejad recently hailed an Iranian missile launch as a counterpart to Iran's success with nuclear centrifuges, and Hezbollah has certainly been allowed to form the idea that the Iranian reactors may have nonpeaceful applications. This means, among other things, that the vicious manipulation by which the mullahs control Iran can no longer be considered their "internal affair." Fascism at home sooner or later means fascism abroad. Face it now or fight it later. Meanwhile, give it its right name.

Monday, June 15, 2009

Will the Recession Make Europe's Militaries Weaker? by Thomas Valasek


The economic crisis has wracked government budgets across Europe, as revenues have fallen and spending on stimulus and bailouts has soared. Already, there are signs that defense spending across the continent will suffer. Finance ministers will be looking for ways to reduce deficits and debt, and military budgets are a tempting target.
Such budget cuts will have some salutary effects: Defense establishments, with their resistance to civilian oversight and emphasis on continuity, tend to get bloated in times of relative plenty. It often takes a crisis to force meaningful reforms. But cuts also threaten to sap the effectiveness of European fighting forces and leave parts of the world exposed to insecurity.
The easiest portion of the budget to cut is operations. But it's also the most important portion. Withdrawing soldiers from faraway places plays well at home and requires no layoffs, but it means fewer troops in some of the world's most imperiled regions. Poland announced in April that it would withdraw from all U.N. peacekeeping operations. While the Poles may be no less safe, fragile countries such as Chad and Lebanon still need foreign troops to keep the peace.
Rather than withdrawing from conflict zones, European countries and agencies should stop sending overlapping missions to the same trouble spots. Both the EU and NATO sent missions to Sudan in 2007, and three different forces are currently fighting piracy off the coast of Somalia. Better to roll those operations into one; the current duplication wastes taxpayer money.
As defense ministries slash their budgets, their instinct will be to cut multinational weapons programs and make any purchases domestically so as to protect jobs at home. But that carries risks. Many truly necessary systems, such as transport airplanes, are so expensive and complex that they are best funded and shared between countries.
Granted, many past collaborative programs have been disastrous, such as the seven-nation plan to develop the A400M military transport aircraft. A modern-day Spruce Goose, the plane cannot fly because its engines, made by a four-nation European consortium, lack the proper certification; the plane is also said to be too heavy.
But the trouble with the A400M lies not in the collaborative nature of the program. The plane is a failure because the governments behind it have been more concerned with securing production jobs than with obtaining a good product. In return for investing in the aircraft, each has demanded that a commensurate number of production jobs go to its country. As a result, bits of the plane are being built all over Europe -- and not necessarily in the countries most qualified to do the job.
European governments must be smarter. They should accept that it makes more sense to order the needed parts from the plant with the most relevant technical expertise. The governments also need to be more ready to buy off-the-shelf components, rather than try to generate jobs by manufacturing parts from scratch.
The impact of the budget cuts -- particularly the reductions in personnel and equipment -- also threatens to turn some European militaries into showcase forces, incapable of deploying abroad and thus irrelevant to most EU and NATO operations. It makes little sense, for example, for all but a very few allies to keep tanks unless they are upgraded to be able to operate in faraway places such as Afghanistan and unless the governments have access to aircraft big enough to transport the tanks. As an excellent new study commissioned by the Nordic governments concluded, "small and medium-sized countries lose their ability to maintain a credible defence" when certain units shrink too much.
There are two ways to avoid such outcomes while cutting budgets. Some of the key equipment that makes modern warfare possible -- such as planes providing air-to-ground surveillance or military transport -- needs to be jointly owned. NATO operates a common fleet of aircraft that coordinates air traffic, and the alliance plans to buy transport airplanes for its members to use. This arrangement allows militaries of smaller and poorer European states, like the new allies in Eastern Europe, to take part in complex operations in distant places.
But that alone will not generate enough savings. Indeed, the time has come for European governments to consider abandoning parts of their national forces and infrastructure and to form joint units with their neighbors. Modern militaries do virtually all their fighting abroad and in coalition with others. If they lack the money to equip and deploy their soldiers overseas, they need to consider radical cost-saving measures. More governments should do as Belgium, the Netherlands, and Luxembourg did -- they merged parts of their air forces -- or emulate the Nordic countries, which are considering joining their amphibious units.
Most European governments have, in the past, found it too difficult to part with the cherished symbol of national sovereignty that is a proper army or an air force. But the practical value of such military services in Europe is often negligible. As the recession deepens, defense ministers across Europe should see the crisis as an opportunity to combine certain units and programs across countries. This will save money, which could be put to use properly training and equipping forces for EU and NATO operations.

The Underworked American by the Economist


AMERICANS like to think of themselves as martyrs to work. They delight in telling stories about their punishing hours, snatched holidays and ever-intrusive BlackBerrys. At this time of the year they marvel at the laziness of their European cousins, particularly the French. Did you know that the French take the whole of August off to recover from their 35-hour work weeks? Have you heard that they are so addicted to their holidays that they leave the sick to die and the dead to moulder?
There is an element of exaggeration in this, of course, and not just about French burial habits; studies show that Americans are less Stakhanovite than they think. Still, the average American gets only four weeks of paid leave a year compared with seven for the French and eight for the Germans. In Paris many shops simply close down for August; in Washington, where the weather is sweltering, they remain open, some for 24 hours a day.
But when it comes to the young the situation is reversed. American children have it easier than most other children in the world, including the supposedly lazy Europeans. They have one of the shortest school years anywhere, a mere 180 days compared with an average of 195 for OECD countries and more than 200 for East Asian countries. German children spend 20 more days in school than American ones, and South Koreans over a month more. Over 12 years, a 15-day deficit means American children lose out on 180 days of school, equivalent to an entire year.
American children also have one of the shortest school days, six-and-a-half hours, adding up to 32 hours a week. By contrast, the school week is 37 hours in Luxembourg, 44 in Belgium, 53 in Denmark and 60 in Sweden. On top of that, American children do only about an hour’s-worth of homework a day, a figure that stuns the Japanese and Chinese.
Americans also divide up their school time oddly. They cram the school day into the morning and early afternoon, and close their schools for three months in the summer. The country that tut-tuts at Europe’s mega-holidays thinks nothing of giving its children such a lazy summer. But the long summer vacation acts like a mental eraser, with the average child reportedly forgetting about a month’s-worth of instruction in many subjects and almost three times that in mathematics. American academics have even invented a term for this phenomenon, “summer learning loss”.
This pedagogical understretch is exacerbating social inequalities. Poorer children frequently have no one to look after them in the long hours between the end of the school day and the end of the average working day. They are also particularly prone to learning loss. They fall behind by an average of over two months in their reading. Richer children actually improve their performance.
The understretch is also leaving American children ill-equipped to compete. They usually perform poorly in international educational tests, coming behind Asian countries that spend less on education but work their children harder. California’s state universities have to send over a third of their entering class to take remedial courses in English and maths. At least a third of successful PhD students come from abroad.
A growing number of politicians from both sides of the aisle are waking up to the problem. Barack Obama has urged school administrators to “rethink the school day”, arguing that “we can no longer afford an academic calendar designed for when America was a nation of farmers who needed their children at home ploughing the land at the end of each day.” Newt Gingrich has trumpeted a documentary arguing that Chinese and Indian children are much more academic than American ones.
These politicians have no shortage of evidence that America’s poor educational performance is weakening its economy. A recent report from McKinsey, a management consultancy, argues that the lagging performance of the country’s school pupils, particularly its poor and minority children, has wreaked more devastation on the economy than the current recession.
Learning the lesson
A growing number of schools are already doing what Mr Obama urges, and experimenting with lengthening the school day. About 1,000 of the country’s 90,000 schools have broken the shackles of the regular school day. In particular, charter schools in the Knowledge is Power Programme (KIPP) start the school day at 7.30am and end at 5pm, hold classes on some Saturdays and teach for a couple of weeks in the summer. All in all, KIPP students get about 60% more class time than their peers and routinely score better in tests.
Still, American schoolchildren are unlikely to end up working as hard as the French, let alone the South Koreans, any time soon. There are institutional reasons for this. The federal government has only a limited influence over the school system. Powerful interest groups, most notably the teachers’ unions, but also the summer-camp industry, have a vested interest in the status quo. But reformers are also up against powerful cultural forces.
One is sentimentality; the archetypical American child is Huckleberry Finn, who had little taste for formal education. Another is complacency. American parents have led grass-roots protests against attempts to extend the school year into August or July, or to increase the amount of homework their little darlings have to do. They still find it hard to believe that all those Chinese students, beavering away at their books, will steal their children’s jobs. But Huckleberry Finn was published in 1884. And brain work is going the way of manual work, to whoever will provide the best value for money. The next time Americans make a joke about the Europeans and their taste for la dolce vita, they ought to take a look a bit closer to home.

Monday, June 8, 2009

Rising Above I.Q. by Nicholas D. Kristof

In the mosaic of America, three groups that have been unusually successful are Asian-Americans, Jews and West Indian blacks — and in that there may be some lessons for the rest of us.
Asian-Americans are renowned — or notorious — for ruining grade curves in schools across the land, and as a result they constitute about 20 percent of students at Harvard College.
As for Jews, they have received about one-third of all Nobel Prizes in science awarded to Americans. One survey found that a quarter of Jewish adults in the United States have earned a graduate degree, compared with 6 percent of the population as a whole.
West Indian blacks, those like Colin Powell whose roots are in the Caribbean, are one-third more likely to graduate from college than African-Americans as a whole, and their median household income is almost one-third higher.
These three groups may help debunk the myth of success as a simple product of intrinsic intellect, for they represent three different races and histories. In the debate over nature and nurture, they suggest the importance of improved nurture — which, from a public policy perspective, means a focus on education. Their success may also offer some lessons for you, me, our children — and for the broader effort to chip away at poverty in this country.
Richard Nisbett cites each of these groups in his superb recent book, “Intelligence and How to Get It.” Dr. Nisbett, a professor of psychology at the University of Michigan, argues that what we think of as intelligence is quite malleable and owes little or nothing to genetics.
“I think the evidence is very good that there is no genetic contribution to the black-white difference on I.Q.,” he said, adding that there also seems to be no genetic difference in intelligence between whites and Asians. As for Jews, some not-very-rigorous studies have found modestly above-average I.Q. for Ashkenazi Jews, though not for Sephardic Jews. Dr. Nisbett is somewhat skeptical, noting that these results emerge from samples that may not be representative.
In any case, he says, the evidence is overwhelming that what is distinctive about these three groups is not innate advantage but rather a tendency to get the most out of the firepower they have.
One large study followed a group of Chinese-Americans who initially did slightly worse on the verbal portion of I.Q. tests than other Americans and the same on math portions. But beginning in grade school, the Chinese outperformed their peers, apparently because they worked harder.
The Chinese-Americans were only half as likely as other children to repeat a grade in school, and by high school they were doing much better than European-Americans with the same I.Q.
As adults, 55 percent of the Chinese-American sample entered high-status occupations, compared with one-third of whites. To succeed in a profession or as managers, whites needed an average I.Q. of about 100, while Chinese-Americans needed an I.Q. of just 93. In short, Chinese-Americans managed to achieve more than whites who on paper had the same intellect.
A common thread among these three groups may be an emphasis on diligence or education, perhaps linked in part to an immigrant drive. Jews and Chinese have a particularly strong tradition of respect for scholarship, with Jews said to have achieved complete adult male literacy — the better to read the Talmud — some 1,700 years before any other group.
The parallel force in China was Confucianism and its reverence for education. You can still sometimes see in rural China the remains of a monument to a villager who triumphed in the imperial exams. In contrast, if an American town has someone who earns a Ph.D., the impulse is not to build a monument but to pass the hat.
Among West Indians, the crucial factors for success seem twofold: the classic diligence and hard work associated with immigrants, and intact families. The upshot is higher family incomes and fathers more involved in child-rearing.
What’s the policy lesson from these three success stories?
It’s that the most decisive weapons in the war on poverty aren’t transfer payments but education, education, education. For at-risk households, that starts with social workers making visits to encourage such basic practices as talking to children. One study found that a child of professionals (disproportionately white) has heard about 30 million words spoken by age 3; a black child raised on welfare has heard only 10 million words, leaving that child at a disadvantage in school.
The next step is intensive early childhood programs, followed by improved elementary and high schools, and programs to defray college costs.
Perhaps the larger lesson is a very empowering one: success depends less on intellectual endowment than on perseverance and drive. As Professor Nisbett puts it, “Intelligence and academic achievement are very much under people’s control.”

Sunday, June 7, 2009

Cage Match by Dahlia Lithwick


The public-opinion two-step on the wisdom of closing the prison camp at Guantanamo is fascinating, and not just because Americans are now inclined to keep the detention facility there open forever. The current legal meltdown about what to do with prisoners still at Guantanamo shows that, contrary to popular belief, Americans care a good deal about prisons, prisoners, and prison reform, but only when the inmates threaten to tumble out into their backyards.
But there's the rub: We already have a prison problem, and it's already in our backyards. That's what Sen. James Webb, D-Va., wants us to understand as he launches an ambitious new effort to reform U.S. prisons nationwide. It's not quite as dramatic as the prospect of Abu Zubaydah escaping from the Supermax prison in Colorado and rampaging through the Rockies, but the U.S. prison crisis gets worse every year, and nobody seems to mind. Webb has decided to try to reignite the subject of prison reform, because he's convinced that when it comes to the prison problem, Americans need only know how to count.
Here are the facts about America's prisons, according to Webb:
The United States, with 5 percent of the world's population, houses nearly 25 percent of the world's prisoners. As Webb has explained it, "Either we're the most evil people on earth, or we're doing something wrong." We incarcerate 756 inmates per 100,000 residents, nearly five times the world average. At this point, approximately one in every 31 adults in the United States is in prison, jail, or on supervised release. Local, state, and federal spending on corrections now amounts to about $70 billion per year and has increased 40 percent over the past 20 years.
Webb has no problem locking up the serious baddies. In fact, he wants to reform the justice system in part so that we can incapacitate the worst of the worst. But Webb wants us to recognize that warehousing the nation's mentally ill and drug addicts in crowded correctional facilities tends to create a mass of meaner, more violent, less employable people at the exits. And unlike Guantanamo, there are always going to be exits.
The Justice Department estimates that 16 percent of the adult inmates in American prisons—more than 350,000 of those incarcerated—suffer from mental illness; the percentage among juveniles is even higher. And 2007 Justice statistics showed that nearly 60 percent of the state prisoners serving time for a drug offense had no history of violence and four out of five drug arrests were for drug possession, not sales. Webb also reminds us that while drug use varies little by ethnic group in the United States, African-Americans—estimated at 14 percent of regular drug users—make up 56 percent of those in state prison for drug crimes. We know all of this. The question is how long we want to avoid dealing with it.
Why does the senator from one of the country's most rabid lock-'em-up states believe that with two wars raging, an economy collapsing, and America's Next Top Model beckoning seductively, Americans are truly ready to grapple with his new legislation—the National Criminal Justice Commission Act of 2009—which establishes a blue-ribbon commission to review the nation's entire prison system?
Fear-based policies only get you so far, and when it comes to drugs and prisons, it's time to start thinking about reality. Webb says he looks forward to the challenge of communicating the problem to Americans and working together to solve it. He suspects that if Americans actually have the reality-based conversation about our disastrous prison policies, we'll understand that the trends all move in very dangerous directions: We lock up more people for less violent crime at ever greater expense, ultimately breeding more dangerous criminals and ignoring the worst.
The Guantanamo problem we've finally started to grapple with in a pragmatic, rather than symbolic, way—it's a dangerous place with some dangerous people—is a mere speck in the eye of America's larger prison program. An AP story last week described a small Montana town that was more than willing to take all of the Guantanamo prisoners and incarcerate them because, ultimately, a jail is a jail and prisoners are prisoners. If we are so worried about locking up a few terrorists for life in maximum-security U.S. jails, shouldn't we be giving at least some thought to the folks already there? As Dennis Jett observed recently in the Miami Herald, "even if everyone at Guantánamo were transferred to a U.S. prison it would amount to an increase of less than one hundredth of one percent in the total number incarcerated in this country."
Compared with the powder keg of our domestic prison system, Guantanamo actually starts to look pretty benign. And if we are going to have a huge national panic attack about detaining dangerous individuals after 9/11, let's be honest that the danger of a handful of Guantanamo prisoners "rejoining the battlefield" or escaping from maximum-security prisons is far more remote than the crisis now festering in our own jails and prisons. Americans who claim to be worried about allowing alleged terrorists into their own backyards would be well advised to recognize what's already happening in their own backyards. The U.S. prison system as it now exists makes even less sense than the prison camp at Guantanamo. And unlike Guantanamo, no matter what we may wish, it won't be contained, ignored, or walled off forever.