Wednesday, August 18, 2010

Please link/subscribe to me!

A cordial request: if you read or feed this blog, please link, subscribe, etc. in whatever format is right for you. I'd like to bump up my PageRank etc. a little. I promise I won't do this often.

Thank you! Here's a puppy:


We now return to our regularly scheduled social/computer-sciency blogging.

Tuesday, August 17, 2010

Probabilistic computing?

Hat tip SMCISS.

For over 60 years, computers have been based on digital computing principles. Data is represented as bits (0s and 1s). Boolean logic gates perform operations on these bits. A processor steps through many of these operations serially in order to perform a function. However, today’s most interesting problems are not at all suited to this approach.

Here at Lyric Semiconductor, we are redesigning information processing circuits from the ground up to natively process probabilities: from the gate circuits to the processor architecture to the programming language. As a result, many applications that today require a thousand conventional processors will soon run in just one Lyric processor, providing 1,000x efficiencies in cost, power, and size.
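Lyric hasn't published its circuit designs, but the core idea -- gates that operate on probabilities instead of bits -- is easy to sketch. Here's a toy illustration in Python (the gate functions and the independence assumption are mine, not Lyric's):

```python
# Toy sketch: a "probabilistic gate" propagates probabilities, not bits.
# This is NOT Lyric's design (they haven't published details); it just
# illustrates processing uncertainty natively instead of as 0s and 1s.

def and_gate(a: int, b: int) -> int:
    """Conventional Boolean AND: inputs and output are 0 or 1."""
    return a & b

def p_and_gate(p_a: float, p_b: float) -> float:
    """Probabilistic AND: inputs are P(a=1) and P(b=1), assumed
    independent. Output is P(a AND b = 1)."""
    return p_a * p_b

def p_or_gate(p_a: float, p_b: float) -> float:
    """Probabilistic OR under the same independence assumption."""
    return p_a + p_b - p_a * p_b

# Feed uncertain inputs straight through the "circuit":
print(p_and_gate(0.9, 0.8))   # ~0.72: both inputs must be "on"
print(p_or_gate(0.9, 0.8))    # ~0.98: at least one input is "on"
```

A conventional circuit would have to represent those 0.9s as long bit strings and grind through arithmetic; a probability-native gate does it in one step.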


Sunday, August 15, 2010

Psychologists say that power corrupts, via WSJ

Psychologists say that power corrupts, via WSJ. I know that this thesis is pretty widely accepted, but I want to snipe at it. I think we may be mistaking correlation for causation. The general arc of my argument is that thuggishness may (sometimes) lead to power, instead of the other way around.

First, all the cited psych studies showing "nice guys finish first" are from areas with dense social networks (dorms, sororities, chimp clans) and repeated person-to-person interactions. Not surprisingly, likability translates to popularity here. But I don't see any particular reason why that conclusion should translate to accumulation of power through job interviews or elections. I'm going out on a limb here, but perhaps the qualities that make frat boys popular are not the same qualities that put CEOs in power.

Second, at risk of alienating a lot of my friends in political psychology, I'm not a big fan of gimmicky priming studies. For instance: tell me about a time when you felt powerful, then write the letter E on your head. You wrote it backwards? See -- power corrupts.

Err... run that by me again? At what point does this experiment measure "power" and "corruption," and where does it demonstrate a causal relationship between the two? Sure, the researchers who did this study can do a song and dance about it, but at the end of the day, there are a lot of leaps in the logic. The study is gimmicky. Interesting, maybe, but nowhere close to conclusive.

I completely accept the notion that our minds move between different mental states and that primed manipulations can nudge us from one state to another. But I'm not persuaded that we've mapped the set of states and priming cues in enough detail to draw general conclusions -- particularly when treatments and responses are measured in ways that never occur in normal human experience.

Third, the idea that transparency might fix things is nice, but lacking evidence. This is a place where careful study of incentives and strategy could add a lot of leverage. Go game theory!

Parting shot: I'm not anti-niceness or pro-corruption, but I think we often demonize authority and authority figures. There are an awful lot of social problems that are best resolved through the benevolent use of authority, and I know plenty of people who live up to that charge. To the extent that power has a corrupting tendency, we should be looking for, promoting, and celebrating those who are proof against it.

Saturday, August 14, 2010

Wacom tablet: post-purchase rationalization



So I bought a Wacom Bamboo tablet last week. It's a nice digital pen tablet, with a drawing area about the size of a postcard -- a good size for sketching & gesture-based interfaces. It plugs into a USB port and (after some fiddling with the drivers) works very nicely with Windows, Firefox, Inkscape, etc.

That said, I'm not really sure why I bought the thing. I do a lot of writing and programming, which leads to a lot of typing, but not much clicking and dragging. Sure, the touchpad on my notebook is small, but it's not really $60 small. I blame postpartum sleep deprivation. It might also have something to do with watching this TED talk.

Anyway, in an effort to assuage my post-purchase cognitive dissonance, I've been telling myself that if I can improve the speed and accuracy of my clicks by just a fraction, this tablet thing will easily pay itself off in productivity in the long run. Right?

To bring some proof to that claim, I dug up this flash-based target practice game. Little targets fly around the screen and you try to click on them: score points for every target you hit; bonus points for consecutive bulls-eyes; miss too many times and the game is over. Great sound effects. This is high brainpower stuff.

I played three rounds in time trial mode, using the touchpad. Mean score 438. Then I played thrice with the tablet. Mean score 685!

To make sure this wasn't just an improvement in my reflexes and strategy (shoot the spring targets at the apex; don't waste a shot on a yo-yo target that might be yanked), I employed an interrupted time series design and played six more rounds. Mean score with the touchpad: 407. Mean score with the tablet: 977!

With that kind of performance improvement, the tablet was clearly worth it. Minority report, here I come.

Thursday, August 12, 2010

TED talk on origami

Robert Lang, physicist and origami master, speaks at TED. He talks about a fascinating intersection of art and math: origami.

This is a great, visual talk that links ideas you would never expect to come together: water bombs, ibex, and spiders turn into satellites, heart valves, and airbags, using the math of trees and circle packing. A really great example of the unexpected practical value of mathematics.

Hat tip to my brother Matthew for putting me onto Lang's work.

Monday, August 9, 2010

P != NP? The biggest mathematical problem in computer science may be solved

Banks, businesses, and credit card users can rest easy: public key encryption is still secure. Mathematicians breathe a sigh of relief: computers aren't going to replace you by churning out brute-force proofs. Vinay Deolalikar, a researcher at HP Labs, is claiming to have proved the most important conjecture in mathematical computer science: P != NP.

Some links, and then onto an explanation of why this is all a big deal.

What's the difference between P and NP?
Suppose you have a friend -- a real whiz at math and logic. "I can answer any question you ask*," she promises, "it just might take some time." Before you ask her to solve your sudoku, route the world's airline traffic, or crack your bank's electronic encryption key, it's worth figuring out "Just how long is it going to take?"

For many kinds of problems, the answer depends on how many moving parts there are. To cite a famous example, suppose I'm a traveling salesman, using planes, trains, and automobiles to get between cities. I have n cities to visit. What is the fastest route that lets me visit every city?

Intuitively, the more cities there are, the harder this problem is going to be. If n=1, the answer is trivial. I start in the single city and just stay there -- no travel time. If n=2, the answer is easy. It's equal to the amount of time between the two cities (or double that, if I'm required to make a round trip.) If n=3, it starts to get complicated because I have several paths to check. If n=100, then solving the problem could take a long time. Even your math whiz friend might have to chew on it for a while.
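Here's what the guess-and-check approach looks like for tiny n -- a sketch with made-up travel times. The n! permutations are exactly why n=100 is hopeless:

```python
import itertools

def best_route(travel_time: dict, cities: list) -> tuple:
    """Brute-force traveling salesman: try every ordering of the cities.
    travel_time[(a, b)] is the time from a to b. With n cities there are
    on the order of n! orderings, so this only works for small n."""
    best, best_time = None, float("inf")
    start = cities[0]  # fix the starting city
    for perm in itertools.permutations(cities[1:]):
        route = (start,) + perm
        total = sum(travel_time[(route[i], route[i + 1])]
                    for i in range(len(route) - 1))
        if total < best_time:
            best, best_time = route, total
    return best, best_time

# Hypothetical one-way travel times between three cities (hours):
times = {("A", "B"): 2, ("B", "A"): 2,
         ("A", "C"): 5, ("C", "A"): 5,
         ("B", "C"): 1, ("C", "B"): 1}
print(best_route(times, ["A", "B", "C"]))  # (('A', 'B', 'C'), 3)
```

Three cities means checking 2 orderings. A hundred cities means checking 99! of them.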

In the world of computer science, P-type problems are the fast, easy, tractable ones. They can be solved in "polynomial time": if the problem has more moving parts, it will take longer to solve, but not a whole lot longer. Mathematically speaking, a problem belongs to complexity class P if you can write a function f(n) = n^k and prove that a problem with n elements can be solved in f(n) operations or less. Constructing proofs of this kind is a big part of what mathematical computer scientists do.

If we could prove that k=3 for the traveling salesman problem -- we can't, by the way -- then we could guarantee that a computer could solve the 100-city problem in 1 million time steps or less. If your math whiz friend uses an Intel chip running millions of operations per second, that doesn't look so bad. For P problems, we can put a bound on running time, and know that as n increases, the performance bound increases at worst by the exponent k.

NP-type problems can get harder, because the definition is looser. Instead of requiring your friend to come up with the answer in polynomial time, we only require her to verify an answer in polynomial time. ("My boss says I should go to Albuquerque, then Boston, then Chicago, then Denver. Is that really the fastest route?" "Hang on. I'll get back to you in n^k seconds.") If a problem is of type NP, then we can find the right answer using a "brute force," guess-and-check strategy. Unfortunately, as the number of moving parts increases, the number of possible answers explodes, and the time required to solve the problem can get a lot longer.
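To make the find-versus-verify distinction concrete, here's a sketch of a polynomial-time verifier for the travel problem. It checks the decision version ("does this route come in under budget T?"), which is the form that fits the NP definition; the travel times are made up:

```python
def verify_route(travel_time: dict, route: list, budget: float) -> bool:
    """Check a claimed answer in O(n) time: sum the legs of the proposed
    route and see whether it comes in under budget. Verifying is fast
    even when *finding* the best route is slow -- that's the essence
    of NP."""
    total = sum(travel_time[(route[i], route[i + 1])]
                for i in range(len(route) - 1))
    return total <= budget

# Hypothetical one-way travel times (hours):
times = {("A", "B"): 2, ("B", "C"): 1, ("A", "C"): 5}
print(verify_route(times, ["A", "B", "C"], budget=3))  # True
```

Checking one proposed route takes n-1 additions. Finding the best route, as far as anyone knows, requires checking an exploding number of them.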

Finding the solution to NP problems when n is large can take a ridiculously long time. Some NP problems are so computationally intensive that you could take all the computers in the world, multiply their processing power by a billion, run them in parallel for the lifespan of the universe, and still not find the solution. Problems of this type are perfectly solvable, but too complicated to be solved perfectly.
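The arithmetic behind that claim is easy to run. With ballpark numbers (all of the constants below are rough assumptions, not measurements), even a 40-city traveling salesman problem outruns a universe's worth of brute-force computation:

```python
import math

# Rough, assumed constants -- ballpark figures for illustration only:
computers = 1e9        # order-of-magnitude count of computers on Earth
ops_per_sec = 1e9      # ~1 billion operations per second each
boost = 1e9            # "multiply their processing power by a billion"
universe_sec = 4e17    # ~13 billion years, in seconds

total_ops = computers * ops_per_sec * boost * universe_sec  # ~4e44 ops

# Distinct routes through n cities, fixing the starting city: (n-1)!
routes_40 = math.factorial(39)  # ~2e46

print(routes_40 > total_ops)  # True: 40 cities is already out of reach
```

39! is about 2 x 10^46 routes, versus roughly 4 x 10^44 total operations. One route per operation is wildly optimistic, and brute force still loses by two orders of magnitude at n=40, never mind n=100.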

There's a third class of problems, called "NP-hard," that are even worse. Things are bad enough already that I'm not going to get into them here. You may also hear about "NP-complete" problems. These are problems that are both NP and NP-hard. (See this very helpful Venn diagram from wikipedia.)



The P !=NP Conjecture
But there's an intriguing catch. Until last week, nobody was sure whether any truly NP problems existed.** Maybe, just maybe, some mathematical or computational trick would allow us to convert tricky NP problems into much easier P problems.

Most computer scientists didn't believe that this was possible -- but they weren't sure. Thus, P!=NP was a longstanding conjecture in the field. (By the way, "!=" means "does not equal" in many programming languages). Many proofs had been attempted to show that P=NP, or P!=NP, but all of them had failed. In 2000, the Clay Mathematics Institute posed P!=NP as one of seven of the most important unsolved problems in mathematics. They offered a million dollar prize for a proof or refutation of the conjecture. The prize has gone unclaimed for a decade.

Last week, Vinay Deolalikar (cautiously) began circulating a 100-page paper claiming to prove that P!=NP. His proof hasn't been published or confirmed yet, but if he's right, he's solved one of the great open problems in mathematics.

So what?
Let's assume for the moment that Deolalikar has dotted his i's and crossed his t's. What would a P!=NP proof mean?

One immediate implication is in code-breaking. Public key encryption -- the basis for most online banking, e-commerce, and secure file transfers -- relies on the assumption that certain mathematical problems are more than P-difficult. Until last week, that was just an assumption. There was always the possibility that someone would prove P=NP and open the door to breaking these codes easily. A P!=NP proof closes that door, though it wouldn't by itself prove that the specific problems underlying encryption, like factoring, are hard. (Other threats to security could still come from number theory or quantum computing.) For wikileaks, credit card companies, and video pirates, this is reason to celebrate.

On the downside, a proof would put a computationally cheap solution to the traveling salesman problem (and every other NP-but-not-P problem) permanently out of reach. This includes a host of practical problems in logistics, engineering, supply chain management, communications, and network theory. Would-be optimizers in those areas will sigh, accept some slack in their systems, and continue looking for "good enough" solutions and heuristics. Expect a little more attention to clever approximations, parallel processing, and statistical approaches to problem solving.

A P!=NP proof can also help explain the messiness and confusion of politics and history. Recent work in game theory demonstrates that finding stable solutions in strategic interactions with multiple players (i.e. solving a game) is a computationally difficult problem. If it turns out that reaching equilibrium is NP but not P, that would put bounds on the quality of likely policy solutions. For instance, we can pose the problem of writing legislation and building a coalition to ratify it as an optimization problem over policy outcomes, constrained by legislators' preferences. If finding an optimal solution takes more than polynomial time, all the polling, deliberation, and goodwill in the world might still not be enough to arrive at a perfect (i.e. Nash and/or Pareto-optimal) solution. In the meantime, politicians will make mistakes, politics will stay messy, and history will stay unpredictable.
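For a feel of what "solving a game" means computationally, here's a brute-force sketch that enumerates the pure-strategy Nash equilibria of a tiny two-player game. The payoff matrix is the standard prisoner's dilemma, chosen just as an example; real political games have vastly more players and strategies, which is where the enumeration explodes:

```python
# Brute-force game solving: check every strategy profile and keep the
# ones where neither player can gain by deviating unilaterally (the
# pure-strategy Nash equilibria). Payoffs below are the classic
# prisoner's dilemma, used purely for illustration.

payoffs = {  # (row strategy, col strategy) -> (row payoff, col payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(r: str, c: str) -> bool:
    """True if neither player can improve by switching strategy alone."""
    u_r, u_c = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
    col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
    return row_ok and col_ok

print([(r, c) for r in strategies for c in strategies if is_nash(r, c)])
# [('defect', 'defect')]
```

With two players and two strategies there are only 4 profiles to check. With n players and k strategies each, there are k^n profiles -- the same kind of combinatorial explosion as the traveling salesman's routes.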

Finally, a P!=NP proof puts us in an interesting place philosophically, especially when it comes to understanding intelligence. As with engineering, optimal solutions to many problems will always be out of reach for artificial intelligence. Unless computation becomes much cheaper, a computer will never play the perfect game of chess, let alone the perfect game of go. Other difficult (and perhaps more useful) problems will also stay out of reach.

For those who take a materialistic view of human intelligence -- that is, your mind/consciousness operates within a brain and body composed of matter -- a P!=NP proof puts the same constraint on our intelligence. As fancy and impressive as our neural hardware is, it's bound by the laws of physics and therefore the laws of algorithmic computation. If that's true, then there are difficult problems that no amount of information, expertise, or intuition will ever solve.

The jury is still out on Deolalikar's paper. But if he's right, this isn't guesswork or extrapolation -- it's a matter of proof.

* Your friend solves logic puzzles; she's not an oracle. To give you the right answer, she needs to have all the relevant information when she begins, so it's no good asking her to pick winning horses in the Kentucky Derby, or guess how many fingers you're holding up behind your back.

** To be precise here, all P problems are also NP. If I can give you the solution in polynomial time, I can also verify the solution in polynomial time. When I say "truly NP," I mean problems that are NP but not P, such as NP-complete problems. This is the class of problems whose existence has been unclear.

Sunday, August 8, 2010

Inkscape

If you haven't tried Inkscape yet, you should. It's an open-source graphics program, like Illustrator, but cheap. It's very functional, totally free, and the built-in documentation is pretty solid. Since it's open source with a good user base, I expect development to come along rapidly. Worth a look.
Here's a link to the Inkscape showcase.

Here's a link to an essential tool in Inkscape: converting bitmaps to vector graphics.

Here's me showing off the result of an hour playing around with bitmaps, fonts, and vector paths: a spoof on the Baby Einstein logo.

Saturday, August 7, 2010

"Fail small and fast"


Here's an interesting interview with Google's Peter Norvig on Slate's The Wrong Stuff (found via Marginal Revolution). The topic is Google's approach to innovation, error, and failure. I really like the admonition to "fail small and fast."

Just to chip in my own two cents -- I've discovered that academic culture* is intolerant towards failure, errors, and mistakes in general. (You learn this fast when you have a lot of bad ideas.) We treat research like building a space shuttle, not looking for a decent restaurant. We do a great job critiquing ideas and tearing apart arguments, but we're often not very good about brainstorming or being willing to fail publicly. We're very reluctant to accept the null hypothesis.

Why? It's always struck me as really odd that an occupation that is ostensibly about new ideas, innovation, and pushing the status quo ends up taking such a slow-moving, conservative approach. I'm still not sure why. Some working hypotheses:
  • Like politicians, professors' careers live and die by reputation. We can't afford to fail publicly. Those who do lose credibility and are forgotten.
  • Grad school selects for the risk-averse. Bigger bets and more daring would benefit the field, but those sorts of people channel themselves into jobs that pay more and don't take 5 extra years of schooling to enter.
  • Committee decisions favor the cautious. Decisions about funding, tenure, and publication are all made by committee, usually anonymously.
  • The old guard kills innovation. If you made your name building up theory X, and some upstart comes along arguing that theory Y is better, it's only natural to resist. I don't believe the strong, cynical version of this hypothesis, but I've seen a number of examples of generational inertia.
  • The publication scale makes big bets risky. It takes several years to publish a book. If you have to write one book to graduate and another to get tenure, you don't have a lot of time to write failed books. Smaller increments might allow for more failure and therefore more innovation.

*This may be limited to political science; I'm not as immersed in the norms of other disciplines.

Thursday, August 5, 2010

Link mishmash

Four links worth a quick visit.

1. "Minimum parking requirements act like a fertility drug for cars." Interesting NYTimes editorial on the consequences of government-mandated parking. I'm interested in getting the libertarian take on this one. Do minimum parking requirements distort the market for land use by overproducing parking lots and roads, or do they facilitate commerce by lowering transaction costs? Hat tip: David Smith.

2. An infographic from Bloomberg Businessweek on negative buzz, doping, and Lance Armstrong's reputation. The interesting thing here is the source: automated buzz tracking services applying sentiment analysis to the blogosphere. This is a technology I plan to use in my dissertation.

3. The myth of a conservative corporate America - A nice infographic on corporate donations to politics. Good use of FEC data.

4. Some good resources for screen scraping

"Information is good"



Picking up a thread from the monkey cage (great blog--worth a look), Slate has an article by Washington Post reporter Anne Applebaum arguing that "WikiLeaks' data-dump reporting simply makes a case for the existence of the mainstream media."

The article is a quick read. I want to pick it apart, and see where it takes us.

Things Applebaum gets:
  • Facts need interpretation to be useful.
  • In this instance WikiLeaks provided lots of facts and very little interpretation.
Things Applebaum misses:
  • "Mainstream media" is not the only institution that can provide useful interpretation, context, expertise, etc.
  • Transparency (making data public) breaks up mainstream media's monopoly on interpretation by allowing others (e.g. bloggers) to tell different stories.
  • Transparency allows savvy readers to come to their own conclusions, instead of relying on reporters to decide which facts are important. Most people won't bother to do this. But it's useful to have multiple perspectives.
My takeaways:
  • Data dumps don't replace mainstream media -- but they do threaten its monopoly on interpretation. The slightly defensive tone of the article suggests that Applebaum agrees, whether or not she wants to admit it. Don't get me wrong. I'm not a Big Media hater, but I am a big fan of transparent reporting.
  • On a more philosophical level, the difference between raw data and interpretation leads to some interesting ideas about "information." Raw data is not actionable. Is it still information? For most people, the dense, jargon-laden text of military reports doesn't mean anything. Reading the reports doesn't give you any traction to change your opinion or do anything differently -- they are "uninformative." A few experts (e.g. troops in Afghanistan, military experts) know how to unpack and sift through these reports. Apparently, what constitutes "information" is in the eye of the beholder.