"Welcome to an Internet without privacy, and we've ended up here with hardly a fight." ~ Bruce Schneier
"Welcome to an Internet without privacy, and we've ended up here with hardly a fight." ~ Bruce Schneier
Why bother with artificial intelligence when we're still pretty incompetent with natural intelligence? And yet the fact that a venture is ill advised has never stopped us before.
We aspire to control others without being able to control ourselves.
We judge others more harshly than we judge ourselves.
We take more readily than we give.
Let's talk for a moment about our brain. No, not "our brain" as in us, the crosier of Popehat. (Some blogs have a staff; we have a crosier.) I mean "our brain" as in us, the species homo sapiens somewhat laughably sapiens.
What I want to say is this: we're certainly not going to let the fact that we're baffled by our real brains impede us from trying to build fake ones, right? Perhaps aiming for artifice in matters brainial will help us grasp things actually intracranial.
Of course, if we really knew how to exercise the natural contents of our collective brainboxen, then faced with the prospect of artificial intelligence, we'd all be running around screaming, "No! Stop! Skynet! Nexus!" (Of course, some of us would be doing it with the intonations of Gene Wilder's Willy Wonka, but hey.) We'd all recognize that if we can so easily rationalize our own hypocrisy, then even if we had an anthrobotic system that was tweaked to honor the n laws of robotics, someone somewhere would hack hypocrisy and rationalization right into it. Next stop, SHODAN.
Anyhow, we are blissfully oblivious to risks. And thanks to functional MRI and kindred advances in technology, such as electron microscopy and laser-scanning light microscopy, we (as a species) now stand at the threshold of understanding the brain's architecture and adaptability. We have begun to recognize that "neural circuits tell activity how to propagate, and neural activity tells circuits how to change". It's a great time to be alive, if only for the advent of much better sci-fi.
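That quoted two-way street is more concrete than it sounds, and you can watch it happen in a dozen lines of code. Below is a minimal Hebbian toy of my own devising – emphatically not Hawkins' model, not anyone's actual cortex – in which the weights (the "circuit") determine how activity propagates, and correlated activity strengthens the weights in turn:

```python
# A minimal Hebbian toy: the weights (the "circuit") tell activity how to
# propagate, and correlated activity tells the weights how to change.
# My own illustration of the quoted principle -- not anyone's brain model.

def step(weights, inputs, rate=0.1):
    # Circuit tells activity how to propagate:
    drive = sum(w * x for w, x in zip(weights, inputs))
    output = 1 if drive > 0.5 else 0
    # Activity tells circuit how to change (fire together, wire together):
    return [w + rate * x * output for w, x in zip(weights, inputs)]

weights = [0.3, 0.3, 0.3]
for _ in range(10):
    weights = step(weights, inputs=[1, 1, 0])    # inputs 0 and 1 keep firing
print([round(w, 2) for w in weights])            # [1.3, 1.3, 0.3]
```

The two inputs that kept firing got wired in; the silent one stayed put. A cartoon, obviously, which is rather the point.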
So what would a computer program based on the way our brains actually work be like? Not one inspired by cheesy 1980s intuitions about fuzzy logic, but a rigorous adaptation of principles actually embedded in our wetware?
Happily, thanks to Jeff Hawkins (the dude who founded Palm and Handspring) we can now begin to understand the answer to that question.
Effective immediately, I'm resigning from this place. I've appreciated the many opportunities I've received here and the chance to cover some important stories, but there are too many constraints on my writing. Now don't get the wrong idea. This is an amicable parting. Readers who've enjoyed my work here are welcome to follow me at my new site, where I'll address issues that really matter, to me and, I suspect, to most of you. Why, in the past couple of days, I've written my best work, on topics I could never have covered here. Just look at the most recent post titles!
Our planet is doomed.
That rich plutocrat is a successful businessman and inventor, but is he really qualified to be President?
Exactly what does that other rich plutocrat mean when he says he's "experiencing the nightlife of this city"?
I will not kneel before you.
I dreamt that the Sun turned red.
The iPhone 5? Not for me! When you really need it, a payphone is still the way to go.
This will be a deeply personal blog, yet one of interest to readers across the globe. I'll be applying my unique insights to problems of interest to all humanity and beyond, in the insightful and penetrating way you've come to expect from my work here, but with no holds barred and no subject out of bounds.
Won't you join me?
The internet is pretty slick. Every attached computer has a unique address sort of like a phone number. (Sometimes, entire sub-networks lurk behind a single address through the miracles of NAT and routing and such, just as entire switchboards of phones may lie behind the phone number of a main switchboard, but that's another story.)
Thanks to Transmission Control Protocol (TCP), files can be sent from one address to another with amazing efficiency. The brilliance of TCP's design lies in this: the rate at which stuff is sent automatically throttles up or down in response to feedback from the receiving end, in the form of acknowledgments and how quickly they arrive!
Let's break it down. TCP is cool because "transmission control" sounds like "mission control" and that sounds like something NASA would have. But TCP is also cool because of how it works. Grossly simplified, it works like this: the source sends packets, and the destination sends back an acknowledgment ("ack") for each packet that arrives. Acks coming back fast? The source ramps up. Acks coming back slow, or not at all? The source throttles down, or stops and retries. Nobody dictates the transfer rate in advance; the network's capacity is discovered on the fly.
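If you like your protocols executable, here's a toy sketch of that feedback loop in Python – not a real TCP stack, just the rhythm of it (slow start, then additive increase, then multiplicative decrease on a timeout), with a link capacity I made up for illustration:

```python
# A toy sketch of TCP-style rate control -- not a real TCP stack, just the
# feedback loop described above. The link capacity is a made-up number.

def probe_the_link(capacity=24, rounds=30):
    window = 1           # packets sent per round
    threshold = 16       # slow-start threshold (an arbitrary initial guess)
    for t in range(rounds):
        timed_out = window > capacity         # crude stand-in for missing acks
        print(f"round {t:2d}: window={window:3d}{'  TIMEOUT' if timed_out else ''}")
        if timed_out:
            threshold = max(window // 2, 1)   # remember half the failed window
            window = 1                        # multiplicative decrease: start over
        elif window < threshold:
            window = min(window * 2, threshold)   # slow start: double each round
        else:
            window += 1                       # congestion avoidance: add one

probe_the_link()
```

Run it and you get the famous sawtooth: ramp, overshoot, collapse, ramp again. Keep the sawtooth in mind for the next bit.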
Now, here's the trippy science factoid du jour: researchers at Leland Stanford Junior University have discovered that Harvester Ants (including, apparently, the most venomous insect in the world) have been using TCP all along… behind Vint Cerf's and Bob Kahn's backs! Says the press release:
…the rate at which harvester ants – which forage for seeds as individuals – leave the nest to search for food corresponds to food availability.
A forager won't return to the nest until it finds food. If seeds are plentiful, foragers return faster, and more ants leave the nest to forage. If, however, ants begin returning empty handed, the search is slowed, and perhaps called off.
…They also found that the ants followed two other phases of TCP. One phase is known as slow start, which describes how a source sends out a large wave of packets at the beginning of a transmission to gauge bandwidth; similarly, when the harvester ants begin foraging, they send out foragers to scope out food availability before scaling up or down the rate of outgoing foragers.
Another protocol, called time-out, occurs when a data transfer link breaks or is disrupted, and the source stops sending packets. Similarly, when foragers are prevented from returning to the nest for more than 20 minutes, no more foragers leave the nest.
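Those three phases are concrete enough to run. Below is a hypothetical toy of my own – the update rule and every number in it are my guesses, not the researchers' model – in which foragers returning with seeds play the role of acks and twenty minutes of silence triggers the time-out:

```python
import random

# A toy "anternet": outgoing foragers throttle on returning foragers, the way
# a TCP sender throttles on acks. Purely illustrative; the numbers and the
# update rule are my own guesses, not the researchers' model.

def forage(seed_density, minutes=60):
    rate = 2                  # foragers leaving the nest per minute (slow start)
    silent = 0                # minutes since a forager last came back with food
    for _ in range(minutes):
        returned = sum(random.random() < seed_density for _ in range(rate))
        silent = 0 if returned else silent + 1
        if silent >= 20:
            rate = 0                          # time-out: call off the search
        elif returned:
            rate = min(rate + returned, 100)  # food coming back: ramp up
        elif rate > 0:
            rate = max(rate - 1, 1)           # empty-mandibled: throttle down
    return rate

print("seed-rich patch:", forage(seed_density=0.8))
print("seed-poor patch:", forage(seed_density=0.01))
```

On a seed-rich patch the colony ramps up like a connection in slow start; on a barren one it goes quiet and stays quiet.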
Further research into what these critters might teach us will be undertaken at the newly funded FourmiLab. Meanwhile, I leave you with a meditation on Proverbs 6:6 by e. e. cummings: go(perpe)go from his 1935 manuscript No Thanks (in George James Firmage, ed., E. E. Cummings: Complete Poems, 1904-1962, Revised, NY: Norton, 1994, p. 403 or thereabouts).
About 2 months ago, Cisco pushed to its consumer-grade routers a firmware upgrade that stripped away the ability to log into and configure the routers locally. Instead, consumers thus upgefirmed were treated to a Cloud Connect signup page where they could establish an account that would centralize management of consumers' routers in Cisco's servana.
By the fifth of July, Cisco had backpedaled. "Did we say mandatory? Did we push that firmware? Oopsie. Our bad." They then made it clear that any consumer could opt out and maintain local control of his consumer-grade router by simply following the friendly instructions, which begin "We are sorry to see you downgrading to our Classic software (non-Cloud)…."
Now, via Ars Technica, comes word of the latest fad in centralized management of the people's resources.
…wireless researchers in Germany proposed a way to improve the communications abilities of first responders…: creating an “emergency switch” that lets government employees disable the security mechanisms in the wireless routers people have set up in their own homes. This would allow first responders to use all the routers within range to enhance the capabilities of the mesh networks that allow them to communicate with each other.
…The residents’ wireless traffic would still remain private, in theory…
This even though bandwidth is already set aside for that purpose.
I, for one, regret that I have but one subnet to allocate for my country. But just to hedge, I'll be printing up a selection of bumper stickers and t-shirts featuring salient slogans:
Anyhow, I'm all for it. First Defenders, after all. And The Children.
What could possibly go wrong?
One hesitates to suggest that there could be a good higher than threatening to bomb one's political opponents, but human survival off this planet, indeed, human expansion into and conquest of the galaxy, may be one of those things.
This is one small step for free enterprise, one giant leap for mankind. The government won't ensure that humans escape this planet before the comet hits, giant tsunamis strike, the core reverses polarity, or the Daleks arrive. The government couldn't find a clue if Colonel Mustard was appointed head of Homeland Security.
Private enterprise will save us, even if it has to destroy the earth to do so.
SHOULD APPELLATE JUDGES BE REQUIRED TO TAKE COURSES IN BASIC SCIENCE? More importantly, should they be required to pass a course on the scientific method and its application to everyday problems?
Daubert and Kumho Tire have been criticized on the ground that they require too much scientific training on the part of judges. But if the recent "pit bull" decision from the Maryland Court of Appeals is any indicator, the problem is that we don't require enough scientific training of our judges. Or any at all.
WHICH MEDICAL PROCEDURES SHOULD YOU CONSIDER REJECTING DURING A PREGNANCY? Personally I trust this sort of advice a lot more when it comes from Consumer Reports than when it comes from a panel of experts appointed by the government. That's because unlike the government, Consumer Reports has never lied to me.
SPEAKING OF DINOSAURS … Former Soviet automaker Lada has announced that it will end manufacture of the Riva, a.k.a. The People's Volkswagen. The Riva was, until now, the longest continuously produced model of car on the planet. Over the years Lada produced millions of Rivas, which had dozens of happy owners.
Long live the revolution!
FASTER PLEASE: in the hunt for New Therapeutic Uses for Existing Molecules, big Bubble Ed and big Pharma are BFFs.
Remember the last time you read Harper's magazine? You know, the magazine with the lists of interesting statistics that your friends in college quoted when they wanted you to know that four years after the speech 36% of George Bush's "Thousand Points of Light" were unemployed, or that for the cost of Bill Clinton's state dinner for Boris Yeltsin three million Russian babies could have been given a year's supply of infant formula.
Of course you don't remember the last time you read Harper's. That's because it's behind a paywall.
It's behind a paywall because, according to Harper's publisher John R. MacArthur, no one in the history of the internet has ever made money by "giving away" words and ideas in return for advertising. It's because the very act of displaying words on a monitor cheapens them, and cheapens the writer, transforming talented journalists and opinion leaders into monkeys desperately tap-tap-tapping away at disease-ridden keyboards with vile, feces-encrusted paws.
In fact, as MacArthur recently lectured students at the Columbia Journalism School, the internet has all but destroyed their once noble profession. Even Harper's is infected by the rot: it's behind a paywall. According to MacArthur, the one way to save writing of quality is to unplug the internet entirely, and to return to selling magazines on paper only, as God intended them to be read and as the droll wits behind MacArthur's favorite journal of sophisticated French humour have done, much to their profit!
As for the notion that the internet democratizes opinion, MacArthur wishes it known that this is a myth, because people who use the internet have no opinions worth reading. No doubt if called to speak at a flight school, MacArthur would agree with his kindred spirit Sideshow Bob:
Aaah for the days when aviation was a gentleman's pursuit — back before every Joe Sweatsock could wedge himself behind a lunch tray and jet off to … Raleigh-Durham.
I recommend every word of MacArthur's lecture, for its comedic value alone. In many respects it reads as a parody of Luddism until one reflects that MacArthur admits, quite openly, that his magazine cannot earn a red cent through the internet.
And don't get him started on the evils of Xerox Machines!
With the Facebook Timeline just around the corner, and with Steve Jobs shuffling off this mortal coil, I'd like to consider what makes some technologies so different, so appealing.
Last night I asked my art history students what was distinctive about the contribution of Steve Jobs. A few compared him to inventors such as Edison or Tesla. A few looked for an answer in his emphasis on design. I joined the second group and challenged the first by pointing out (as The Economist had already done with great clarity) that Jobs had invented none of the technologies or devices for which he's best known: the mouse-driven computer, the digital audio player, the smart phone, and the tablet. But I also pressed that second group with a follow-up question: if his contribution had to do with design, not invention, then just what was the nature of his contribution to design?
The ensuing discussion was brief and stimulating. After the students had shared their views, I shared mine: I think Steve Jobs emphasized machine beauty with such focus and force that he made the artificiality of devices disappear. Calling him "The Magician", The Economist ascribes to him the ability to connect emotion to technology:
"His great achievement was to combine an emotional spark with computer technology, and make the resulting product feel personal."
Almost. It is the relationship we have with ourselves and our own capabilities that is emotional and personal; Jobs introduced into this already extant feedback loop a device which amplifies our self-signal without getting in our way. Rather than wallow in the narcissism of self-admiration as we see our latent powers amplified, we call the device itself cool. But whenever we call a device cool, what we mean is that it can easily make us more powerful in a way we desire. And that's cool.
What is machine beauty? The clearest and most useful answer to this question comes from David Gelernter (innovator and former patent-holder of the Lifestream technology, which has been at the center of consequential litigation involving Apple). Many stakeholders have by now laid claim to this concept, and perhaps we'll have a post here someday on the idiocy of many software patents, the Peter/Paul problems in patent granting, and the incoherence of the very idea of a software patent. For now, though, I want to bracket out the question of Apple's possible employment of Microsoftian market practices. Gelernter is noteworthy here not just because of his technological innovation, but also because he thinks deeply about the usability of machines, about art, and about beauty.
In his terse, punchy book Machine Beauty, Gelernter proposes a simple definition of the factor that distinguishes great technologies: machine beauty is the well-balanced integration of simplicity and power. Consider technologies that consist of devices. A device may be powerful but not simple; it requires the user to learn, study, and practice. A device may be simple but not powerful; it's hardly worthy of attention, so weak is the signal it delivers. And a device may be neither. But the device that manages to empower the user with virtually no learning curve is machine-beautiful.
The iPhone exemplifies this delicate balance. One day there was no iPhone; the next day there was an iPhone. And on that next day, children and elders, techies and Luddites, the deft and the daft – these were all standing around Apple Store displays and using the iPhone, with no instruction, to do things they wanted to do that they had previously been unable to do so efficiently, transparently, and enjoyably. Machine beauty.
Here, then, is a third question: why do we value technologies that are machine-beautiful?
I think it's easier to frame an answer to this question if we think about technologies in the way I recommended in my earlier post on Rodin's The Burghers of Calais:
I prefer to emphasize that technology always stands in a certain relation to the people who use it: technology is anything that amplifies what the human body can already do. A club amplifies the ability to punch. A gun amplifies the ability to throw. A telephone amplifies the ability to shout. A motor vehicle amplifies the ability to run. Clothing amplifies the protective and insulating qualities of skin. Architecture, oddly enough, is large, static, communal clothing. Telecast media amplify vision or audition. The hard drive and RAM of a computer amplify the ability to remember and to calculate. And so on.
Any technology may be understood this way, and therefore anything that acts as a force multiplier on what humans in general can already do may be construed as a technology.
If we take technology in general as any means of converting our existing capabilities into superpowers, then the appeal of a machine-beautiful device is immediately apparent: the power of the device makes us harder, better, faster, stronger, and the simplicity of the device spares us from having to think too much about the device itself. The technology is a nearly transparent biomodification that empowers us to do with facility from now on what we could do only at great pains before.
The distinctive contribution of Steve Jobs, as I see it, is that he created a new class of consumer citizens: the Cybourgeoisie.
Conventional wisdom changes over time.
There are two ways to discuss this: the crude and the technical.
If you're crude, it's fun to discuss things the technical way; if you're technical, it's fun to discuss it crudely. I'm a bit crude and a bit technical, so I'll share my thoughts on how conventional wisdom changes in both ways.
The crude first.
There are two folk sayings (one is actually a Gandhi quote, but that makes it sound a bit high-falutin', so let's just ignore that weird old sexually hung-up dude and call it a folk saying). Anyway:
Science doesn't advance when minds are changed; it advances when old scientists die.
First they ignore you, then they laugh at you, then they fight you, then you win.
The point being: contrary to the nice crisp models of the scientific process, (a) the more data you get to support your side, the more vehement the other side gets, and (b) there's no amount of data that can convince some people. You just have to wait until they get somewhat less attractive, and corpsified, and gross, and then continue the conversation over their age-withered remains.
Now the technical:
Thomas Kuhn's The Structure of Scientific Revolutions is one of those books that everyone with pretensions to intellectualism should read.
For that matter, so is C.P. Snow's essay "The Two Cultures".
The difference is that I've actually read The Structure of Scientific Revolutions. It's not quite as deep – nor as original – as its reputation suggests, nor could it be. The name of the book has become something of a totem – loaded (not "freighted". I hate that term. Unless there are actual, literal forklifts or cranes involved, you can stick your "freighted" right next to your "fraught" in your hipster-pretentious-J-school three-ring binder, and shelve it next to Salon.com and the NYT style pages).
Uh…where was I?
Right, right. "The Structure of Scientific Revolutions". "Freighted". "Hipsters".
Anyway, the name of the book is loaded with a lot of cultural signifiers and baggage, because that's what pretentious intellectuals do, and because the book is a convenient stick in the dirt and thus its title is as good a phrase as any to label that patch of ground.
The patch of ground being the social process by which conventional wisdom changes.
Kuhn argues (to simplify) that at any given point in time there is a dominant theory. If the theory is hugely dominant, and there are no observed problems with it, there's little action, and no one much cares.
Had any rousing debates about electron shells, the mass of a neutron, or the photovoltaic effect recently?
Nor have I.
However, from time to time, a theory that was dominant gets some countervailing data piled up against it.
…and then a bit more.
…and then a bit more.
In theory there's no difference between the model of the scientific process and the actual practice of science.
…but in actual practice there is.
In theory academics of whatever stripe – physicists, chemists, economists, political scientists – would welcome contrary opinions and contrary data.
We all know what we really see, though: anger, fear, and outrage.
This is because the theory of the scientific process oversimplifies: it forgets that academics are first and foremost humans, and humans are the end product of a whole butt-load of tribal living.
…and when it comes to tribal living, the powerful get first choice of meat and first choice of nubile hunter-gatherers-of-the-curvy-variety.
Thus we humans can be fairly prickly about power, status, and signaling (you can Google up Tyler Cowen and Robin Hanson on your own). When it comes to power dynamics in the nerd – ah – academic set, there's something a lot worse than being challenged by the first-row, second-seat sax player, or having your rook snatched by the kid with an Elo score one notch down from yours. Those challenges cost you only a rank or two. The thing that's a lot worse is being kicked out of the group altogether: being made a laughingstock and mocked as utterly, entirely wrong.
And, of course, this is exactly what the scientific process – as it's SUPPOSED to work – threatens to do to non-ideal actual-human-meat academics.
So the Old Guard fight as hard and as long as possible…and they get more and more angry as the evidence piles up against them.
…and eventually they expire, and the old much-hated ideas are allowed to be spoken in public.
Paul Graham touched on this in his essay What You Can't Say:
To launch a taboo, a group has to be poised halfway between weakness and power. A confident group doesn't need taboos to protect it. It's not considered improper to make disparaging remarks about Americans, or the English. And yet a group has to be powerful enough to enforce a taboo.
Anyway, having quoted two bumper stickers, one philosopher of science, and one start-up millionaire (as well as mocking a universally-beloved 20th century saint), I arrive at my point:
After almost 150 years, the idea of the universal welfare state may be crumbling before our eyes.
The welfare state – at least the American version of it – is like a shark that must constantly swim forward or die. It's like an embezzling employee who must not only show up at work every day to cover her tracks, but must steal more and more to cover the old debts plus new expenses. It's like a Ponzi scheme.
In fact, it's not like a Ponzi scheme, it is a Ponzi scheme. Both at a concrete level and at a conceptual meta-level.
The American welfare state must constantly grow because it is as much a social system of outrage, signaling, and demonstrated "compassion" (those damn dirty apes – uh, I mean "humans" – again) as it is an economic arrangement. If you're a good progressive and you're born into a system that already has emergency health care for the poor, welfare, free schooling for all, etc., etc. ad nauseam, then how do you demonstrate to the 20th century version of the hunter-gatherers-of-the-curvy-variety that you're a good person with all the right opinions and thus deserve a bunch of crazy monkey sex worthy of a Dan Savage column?
You agitate for even more welfare statism.
(I note that I could have merely referenced the hedonic treadmill to explain all of this, but that wouldn't have allowed me to use the phrase "crazy monkey sex", and I bet Ken two free hours of dental-expert-witnessing that I could work that phrase into every single post for the remainder of the year. I won't even tell you what I get if I win.)
And thus we run into the problem we see today: the economic meltdown.
As Margaret Thatcher famously said, "the problem with socialism is that eventually you run out of other people's money".
Bush bought an election or two by buying off the votes of the elderly with prescription drug benefits.
Obama's been trying to buy himself a second term since Day One by buying GM from its creditor/owners and handing it to the unions…along with a thousand other equally stupid schemes.
…but the moment of reckoning that many of us have seen coming since the 1980s has finally arrived.
We've run out of other people's money.
You can see the graphs everywhere in the economic blogosphere: expenditures racing far beyond revenues and never ever ever being caught.
This leads us to the second shoe drop, which is only a few years away: the point where the government is not just spending wildly more than it takes in, but the point where it becomes physically impossible to even keep up with the interest payments on the debt.
And then the – uh – gripping foot drops a third shoe: as the market sees this point coming, it gets more and more leery of lending any more money to the government, thus bidding up the interest that the government must pay in order to borrow additional dollars.
This is basically a tripwire: as soon as the apocalypse can be seen on the horizon we're rapidly accelerated towards it.
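If you'd rather see the tripwire as arithmetic than as rhetoric, a toy loop will do it. Every number below is a round figure I invented for illustration – emphatically not a forecast: spending grows faster than revenue, the gap is borrowed, and lenders tack a premium onto the interest rate as the debt piles up:

```python
# A toy debt spiral with round numbers invented for illustration (not a
# forecast). Spending outgrows revenue, the gap is borrowed, and lenders
# demand a premium as debt piles up relative to revenue.

revenue, spending, debt = 100.0, 120.0, 300.0
for year in range(1, 31):
    rate = 0.03 + 0.01 * max(debt / revenue - 3.0, 0.0)  # nervous-lender premium
    interest = debt * rate
    debt += (spending - revenue) + interest              # borrow to cover the gap
    revenue *= 1.02                                      # revenue grows slowly
    spending *= 1.04                                     # spending grows faster
    if interest > revenue:
        print(f"year {year}: interest alone ({interest:.0f}) exceeds revenue ({revenue:.0f})")
        break
```

With those made-up constants, the interest bill alone outruns revenue in under twenty years. Fiddle the numbers and you move the date, not the destination – so long as the spending line grows faster than the revenue line.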
So, back to Kuhn and others: this has all been clear to some folks for a quarter century or more, but it's finally becoming more and more clear to the average man in the street.
In a better world the Krugmans and others would say "hmm…this isn't how I expected things to play out; perhaps my theories are wrong".
…but the Krugmans and others are afraid of losing their status and their access to crazy monkey sex (although I think the NYT still pays in dollars and suggests that columnists go procure on their own…although I admit that that may change as the dollar devalues).
Over the last few decades libertarianism / governmental minimalism / the night watchman state has gone from being a term that most folks had never heard of, to being a concept that just a bunch of low-status geeks and freaks chatted about in between rolling the d20, to being denounced as a virulent / arrogant / hateful / racist concept.
The welfare state is dying, the evidence is becoming more and more clear, the Chief Monkeys are losing their power, and the world is about to undergo the kind of intellectual revolution and tumult that it only sees once every few centuries or so.
Punctuated equilibrium – it's not just for meatspace evolution any more.
P.S. Hi. I'm Clark. Nice to meet y'all.
If CERN is correct, the little neutrino the Europeans just measured breaking the speed of light means that everything anyone knows is wrong.
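Well, wrong by about two and a half parts in a hundred thousand, anyway. Here's a back-of-the-envelope check using the widely reported figures – a roughly 730 km flight from CERN to Gran Sasso, arriving about 60 nanoseconds early:

```python
# Back-of-the-envelope check of the widely reported OPERA figures:
# ~730 km from CERN to Gran Sasso, arrival ~60 nanoseconds early.

c = 299_792_458                  # speed of light, m/s
baseline_m = 730_000.0           # approximate baseline, meters
early_s = 60e-9                  # approximate early arrival, seconds

light_time = baseline_m / c      # time light needs for the trip
excess = early_s / light_time    # fractional excess speed, (v - c) / c
print(f"light time: {light_time * 1e3:.3f} ms; (v - c)/c is roughly {excess:.1e}")
```

Small enough that a subtle timing error could manufacture it; large enough to rewrite the textbooks if it survives.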