Today that sounds absurdly modest. It’s hard to recapture how futuristic it was at the time. The post-Berners-Lee world of 2010, if we could have imagined it forty years ago, would have seemed shattering. Anybody with a cheap laptop computer and a Wi-Fi connection can enjoy the illusion of bouncing dizzily around the world in full color, from a beach webcam in Portugal to a chess match in Vladivostok, and Google Earth actually lets you fly the full length of the intervening landscape, as if on a magic carpet. You can drop in for a chat at a virtual pub in a virtual town whose geographical location is so irrelevant as to be literally nonexistent (and the content of whose LOL-punctuated conversation, alas, is likely to be of a driveling fatuity that insults the technology that mediates it).
“Pearls before swine” overestimates the average chat room conversation, but it is the pearls of hardware and software that inspire me: the Internet itself and the World Wide Web, succinctly defined by Wikipedia as “a system of interlinked hypertext documents accessed via the Internet.” The Web is a work of genius, one of the highest achievements of the human species, whose most remarkable quality is that it was constructed not by one individual genius such as Tim Berners-Lee or Steve Wozniak or Alan Kay, nor by a top-down company such as Sony or IBM, but by an anarchistic confederation of largely anonymous units located (irrelevantly) all over the world. It is Project MAC writ large. Suprahumanly large. Moreover, there is not one massive central computer with lots of satellites, as in Project MAC, but a distributed network of computers of different sizes, speeds, and manufacturers—a network that nobody, literally nobody, ever designed or put together but which grew, haphazardly, organically, in a way that is not just biological but specifically ecological.
Of course there are negative aspects, but they are easily forgiven. I’ve already referred to the lamentable content of many chat room conversations. The tendency to flaming rudeness is fostered by the convention—whose sociological provenance we might discuss one day—of anonymity. Insults and obscenities to which you would not dream of signing your real name flow gleefully from the keyboard when you are masquerading online as “TinkyWinky” or “FlubPoodle” or “ArchWeasel.”
And then there is the perennial problem of sorting out true information from false. Fast search engines tempt us to see the entire Web as a gigantic encyclopedia, while forgetting that traditional encyclopedias were rigorously edited and their entries composed by chosen experts. Having said that, I am repeatedly astounded by how good Wikipedia can be. I calibrate Wikipedia by looking up the few things I really do know about (and may indeed have written the entry for in traditional encyclopedias)—say, evolution or natural selection. I am so impressed by these calibratory forays that I go with some confidence to entries where I lack firsthand knowledge (which was why I felt able to quote Wikipedia’s definition of the Web, above). No doubt mistakes creep in or are even maliciously inserted, but the half-life of a mistake, before the natural correction mechanism kills it, is encouragingly short. Nevertheless, the fact that the wiki concept works—even if only in some areas, such as science—flies so flagrantly in the face of all my prior pessimism that I am tempted to see it as a metaphor for all that deserves optimism about the World Wide Web.
Optimistic we may be, but there is a lot of rubbish on the Web—more than in printed books, perhaps because they cost more to produce (and, alas, there’s plenty of rubbish there, too). But the speed and ubiquity of the Internet actually help us to be on our critical guard. If a report on one site sounds implausible (or too plausible to be true), you can quickly check it on several more. Urban legends and other viral memes are helpfully cataloged on various sites. When we receive one of those panicky warnings (often attributed to Microsoft or Symantec) about a dangerous computer virus, we do not spam it to our entire address book but instead Google a key phrase from the warning itself. It usually turns out to be, say, Hoax Number 76, its history and geography having been meticulously tracked.
Perhaps the main downside of the Internet is that surfing can be addictive and a prodigious time waster, encouraging a habit of butterflying from topic to topic rather than attending to one thing at a time. But I want to leave negativity and naysaying and end with some speculative—perhaps more positive—observations. The unplanned worldwide unification that the Web is achieving (a science fiction enthusiast might discern the embryonic stirrings of a new life-form) mirrors the evolution of the nervous system in multicellular animals. A certain school of psychologists might see it as mirroring the development of each individual’s personality, as a fusion among split and distributed beginnings in infancy.
I am reminded of an insight that comes from Fred Hoyle’s science fiction novel The Black Cloud. The cloud is a superhuman interstellar traveler whose “nervous system” consists of units that communicate with one another by radio—orders of magnitude faster than our puttering nerve impulses. But in what sense is the cloud to be seen as a single individual rather than a society? The answer is that interconnectedness that is sufficiently fast blurs the distinction. A human society would effectively become one individual if we could read one another’s thoughts through direct, high-speed, brain-to-brain transmission. Something like that may eventually meld the various units that constitute the Internet.
This futuristic speculation recalls the beginning of my essay. What if we look forty years into the future? Moore’s Law will probably continue for at least part of that time, enough to wreak some astonishing magic (as it would seem to our puny imaginations if we could be granted a sneak preview today). Retrieval from the communal exosomatic memory will become dramatically faster, and we shall rely less on the memory in our skulls. At present, we still need biological brains to provide the cross-referencing and association, but more sophisticated software and faster hardware will increasingly usurp even that function.
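To put a rough number on that magic: assuming, purely for illustration, that computing capacity keeps doubling every two years (one common reading of Moore’s Law; the doubling period is an assumption, not a forecast), forty years of such growth compounds roughly as in this back-of-the-envelope Python sketch:

    # Illustrative arithmetic only: forty years at an assumed two-year
    # doubling period compounds to about a million-fold increase.
    years = 40
    doubling_period_years = 2                 # assumed, not a forecast
    doublings = years / doubling_period_years
    factor = 2 ** doublings
    print(f"{doublings:.0f} doublings in {years} years -> about {factor:,.0f}x")

A factor of about a million in raw capacity is the kind of headroom such “astonishing magic” would have to work with.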
The high-resolution color rendering of virtual reality will improve to the point where the distinction from the real world becomes unnervingly hard to notice. Large-scale communal games such as Second Life will become disconcertingly addictive to many ordinary people who understand little of what goes on in the engine room. And let’s not be snobbish about that. For many people around the world, “first life” reality has few charms, and, even for those more fortunate, active participation in a virtual world is more intellectually stimulating than the life of a couch potato slumped in idle thrall to Big Brother. To intellectuals, Second Life and its souped-up successors will become laboratories of sociology, experimental psychology, and their successor disciplines yet to be invented and named. Whole economies, ecologies, and perhaps personalities will exist nowhere other than in virtual space.
Finally, there may be political implications. Apartheid South Africa tried to suppress opposition by banning television and eventually had to give up. It will be more difficult to ban the Internet. Theocratic or otherwise malign regimes, such as Iran and Saudi Arabia today, may find it increasingly hard to bamboozle their citizens with their evil nonsense. Whether, on balance, the Internet benefits the oppressed more than the oppressor is controversial and at present may vary from region to region (see, for example, the exchange between Evgeny Morozov and Clay Shirky in Prospect, November–December 2009).
It is said that Twitter played an important part in the unrest surrounding the election in Iran in 2009, and news from that faith pit encouraged the view that the trend will be toward a net positive effect of the Internet on political liberty. We can at least hope that the faster, more ubiquitous, and above all cheaper Internet of the future may hasten the long-awaited downfall of ayatollahs, mullahs, popes, televangelists, and all who wield power through the control (whether cynical or sincere) of gullible minds. Perhaps Tim Berners-Lee will one day earn the Nobel Peace Prize.
Let Us Calculate
Frank Wilczek
Physicist, MIT; 2004 Nobel laureate in physics; author, The Lightness of Being: Mass, Ether, and the Unification of Forces
Apology: The question “How is the Internet changing the way you think?” is a difficult one for me to answer in an interesting way. The truth is, I use the Internet as an appliance, and it hasn’t profoundly changed the way I think—at least not yet. So I’ve taken the liberty of interpreting the question more broadly, as “How should the Internet, or its descendants, affect how people like me think?”
If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, to sit down to the slates, and to say to each other (with a friend as witness, if they liked): “Let us calculate.” —Leibniz (1685)
Clearly Leibniz was wrong here, for without disputation philosophers would cease to be philosophers. And it is difficult to see how any amount of calculation could settle, for example, the question of free will. But if we substitute, in Leibniz’s visionary program, “sculptors of material reality” for “philosophers,” then we arrive at an accurate description of an awesome opportunity—and an unanswered challenge—that faces us today. This opportunity began to take shape roughly eighty years ago, as the equations of quantum theory reached maturity.
The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. —P. A. M. Dirac (1929)
A lot has happened in physics since Dirac’s 1929 declaration. Physicists have found new equations that reach into the heart of atomic nuclei. High-energy accelerators have exposed new worlds of unexpected phenomena and tantalizing hints of nature’s ultimate beauty and symmetry. Thanks to that new fundamental understanding, we know how stars work and how a profoundly simple but profoundly alien fireball evolved into the universe we inhabit today. Yet Dirac’s bold claim holds up: While the new developments provide reliable equations for objects smaller and conditions more extreme than we could handle before, they haven’t changed the rules of the game for ordinary matter under ordinary conditions. On the contrary, the triumphant march of quantum theory far beyond its original borders strengthens our faith in its soundness.
What even Dirac probably did not foresee, and what transforms his philosophical reflection of 1929 into a call to arms today, is that the limitation of being “much too complicated to be soluble” could be challenged. With today’s chips and architectures, we can start to solve the equations for chemistry and materials science. By orchestrating the power of billions of tomorrow’s chips, linked through the Internet or its successors, we should be able to construct virtual laboratories of unprecedented flexibility and power. Instead of mining for rare ingredients, refining, cooking, and trying various combinations scattershot, we will explore for useful materials more easily and systematically, by feeding multitudes of possibilities, each defined by a few lines of code, into a world-spanning grid of linked computers.
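What such a search might look like, shrunk to a single machine, is sketched below. Everything in it is an invented placeholder: the three-parameter “recipes,” the toy figure of merit standing in for an expensive quantum-mechanical simulation, and the local process pool standing in for a world-spanning grid. It is meant only to make the shape of the workflow concrete, not to suggest an actual materials-science code.

    # Toy "virtual laboratory": enumerate candidate recipes, farm them out
    # to workers, and keep the best. The candidates and the scoring function
    # are invented placeholders, not chemistry.
    from itertools import product
    from multiprocessing import Pool

    def score(candidate):
        """Stand-in for an expensive quantum-mechanical simulation of one recipe."""
        a, b, c = candidate
        return -((a - 0.3) ** 2 + (b - 0.7) ** 2 + (c - 0.5) ** 2)  # toy figure of merit

    if __name__ == "__main__":
        grid = [i / 10 for i in range(11)]          # 0.0, 0.1, ..., 1.0
        candidates = list(product(grid, repeat=3))  # 1,331 candidate "recipes"
        with Pool() as pool:                        # local stand-in for a world grid
            scores = pool.map(score, candidates)
        best_score, best_candidate = max(zip(scores, candidates))
        print("best recipe:", best_candidate, "score:", round(best_score, 3))

The point of the sketch is the architecture: each candidate is a few numbers, each evaluation is independent, and so the work parcels out across as many machines as one can link together.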
What might such a world grid discover? Some not unrealistic possibilities: friendlier high-temperature superconductors that would enable lossless power transmission, levitated supertrains, and computers that aren’t limited by the heat they generate; superefficient photovoltaics and batteries that would enable cheap capture and flexible use of solar energy and wean us off carbon burning; superstrong materials that could support elevators running directly from Earth to space.
The prospects we can presently foresee, exciting as they are, could be overmatched by discoveries not yet imagined. Beyond technological targets, we can aspire to a comprehensive survey of physical reality’s potential. In 1964, Richard Feynman posed this challenge: “Today, we cannot see whether Schrödinger’s equation contains frogs, musical composers, or morality—or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.”
How far can we see today? Not all the way to frogs or to musical composers (at least not good ones), for sure. In fact, only very recently did physicists succeed in solving the equations of quantum chromodynamics (QCD) to calculate a convincing proton, by using the fastest chips, big networks, and tricky algorithms. That might sound like a paltry beginning, but it’s actually an encouraging show of strength, because the equations of QCD are much more complicated than the equations of quantum chemistry. And we’ve already been able to solve those more tractable equations well enough to guide several revolutions in the material foundations of microelectronics, laser technology, and magnetic imaging. But all these computational adventures, while impressive, are clearly warm-up exercises. To make a definitive leap into artificial reality, we’ll need both more ingenuity and more computational power.
Fortunately, both could be at hand. The SETI@home project has enabled people around the world to donate their idle computer time to sift radio waves from space, advancing the search for extraterrestrial intelligence. In connection with the Large Hadron Collider (LHC) project, CERN—where, earlier, the World Wide Web was born—is pioneering the GRID computer project, a sort of Internet on steroids that will allow many thousands of remote computers and their users to share data and allocate tasks dynamically, functioning in essence as one giant brain. Only thus can we cope—barely!—with the gush of information that collisions at the LHC will generate. Projects like these are the shape of things to come.
Pioneering programs allowing computers to play chess by pure calculation debuted in 1958; they rapidly became more capable, beating masters (1978), grandmasters (1988), and world champions (1997). In the later steps, a transition to massively parallel computers played a crucial role. Those special-purpose creations are mini-Internets (actually mini-GRIDs), networking dozens or a few hundred ordinary computers. It would be an instructive project today to set up a SETI@home-style network or a GRID client that could beat the best stand-alones. Players of this kind, once created, would scale up smoothly to overwhelming strength, simply by tapping into ever larger resources.
In the more difficult game of calculating quantum reality, we, with the help of our silicon friends, currently play like weak masters. We know the rules and make some good moves, but we often substitute guesswork for calculation, we miss inspired possibilities, and we take too long doing it. To improve, we’ll need to make the dream of a world GRID into a working reality. To prune the solution space, we’ll need better ways of parceling out subtasks so that they don’t require intense communication, better ways of exploiting the locality of the underlying equations, and better ways of building in physical insight. These issues have not received the attention they deserve, in my opinion. Many people with the requisite training and talent feel it’s worthier to discover new equations, however esoteric, than to solve equations we already have, however important their application.
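As a concrete illustration of the locality point (again an invented toy, not a real physical model): if the equations couple only neighboring sites of a lattice, the lattice can be cut into tiles, each tile handed to a separate worker that reads only its own block plus a one-site fringe, and the partial results summed at the end with almost no communication between workers.

    # Toy illustration of parceling by locality: a nearest-neighbor "energy"
    # on an N x N lattice splits into independent tile-sized subtasks because
    # each bond involves only adjacent sites. The interaction rule is invented
    # for illustration.
    import random
    from multiprocessing import Pool

    N = 400      # lattice is N x N sites
    TILE = 100   # each subtask covers a TILE x TILE block

    random.seed(0)  # deterministic, so spawned workers rebuild the same lattice
    lattice = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(N)]

    def tile_energy(origin):
        """Sum the bonds whose upper/left site lies in this tile: purely local work."""
        r0, c0 = origin
        total = 0
        for r in range(r0, r0 + TILE):
            for c in range(c0, c0 + TILE):
                if r + 1 < N:
                    total += lattice[r][c] * lattice[r + 1][c]
                if c + 1 < N:
                    total += lattice[r][c] * lattice[r][c + 1]
        return total

    if __name__ == "__main__":
        origins = [(r, c) for r in range(0, N, TILE) for c in range(0, N, TILE)]
        with Pool() as pool:                  # each tile is an independent subtask
            parts = pool.map(tile_energy, origins)
        print("total energy:", sum(parts))    # cross-tile bonds are counted exactly once

For simplicity every worker here rebuilds the whole lattice, but each call to tile_energy touches only its own block and one extra row and column beyond it; that is the kind of structure a real world grid would need to exploit.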
People respond to the rush of competition and the joy of the hunt. Some well-designed prizes for milestone achievements in the simulation of matter could have a substantial effect by focusing attention and a bit of glamour toward this tough but potentially glorious endeavor. How about, for example, a prize for calculating virtual water that boils at the same temperature as real water?
The Waking Dream
Kevin Kelly
Editor-at-large, Wired; author, What Technology Wants
We already know that our use of technology changes how our brains work. Reading and writing are cognitive tools that change the way in which the brain processes information. When psychologists use neuroimaging technology such as MRI to compare the brains of literates and illiterates working on a task, they find many differences—and not just when the subjects are reading.
Researcher Alexandre Castro-Caldas discovered that the brain’s interhemispheric processing was different for those who could read and those who could not. A key part of the corpus callosum was thicker in literates, and “the occipital lobe processed information more slowly in [individuals who] learned to read as adults compared to those [who] learned at the usual age.”* Psychologists Feggy Ostrosky-Solís, Miguel Arellano García, and Martha Pérez subjected literates and illiterates to a battery of cognitive tests while measuring their brain waves and concluded that “the acquisition of reading and writing skills has changed the brain organization of cognitive activity in general . . . not only in language but also in visual perception, logical reasoning, remembering strategies, and formal operational thinking.”*
If alphabetic literacy can change how we think, imagine how Internet literacy and ten hours a day in front of one kind of screen or another are changing our brains. The first generation to grow up screen literate is just reaching adulthood, so we have no scientific studies of the full consequence of ubiquitous connectivity, but I have a few hunches based on my own behavior.
When I do long division, or even multiplication, I don’t try to remember the intermediate numbers; long ago I learned to write them down. Because of paper and pencil, I am “smarter” in arithmetic. Similarly, I no longer try to remember facts, or even where I found the facts. I have learned to summon them on the Internet. Because the Internet is my new pencil and paper, I am “smarter” in factuality.
But my knowledge is now more fragile. For every accepted piece of knowledge I find, there is, within easy reach, someone who challenges the fact. Every fact has its antifact. The Internet’s extreme hyperlinking highlights those antifacts as brightly as the facts. Some antifacts are silly, some are borderline, and some are valid. You can’t rely on experts to sort them out, because for every expert there is an equal and countervailing antiexpert. Thus anything I learn is subject to erosion by these ubiquitous antifacts.