Is the Internet Changing the Way You Think?


  And then somehow the creature became autonomous, an ordinary part of our universe. We are no longer surprised, no longer engaged in so much meta-analysis. We are dependent. Some of us are addicted to this marvelous tool, this multifaceted medium that is—as predicted even ten years ago—concentrating all of communication, knowledge, entertainment, business. I, like many of us, spend so many hours before a computer screen, typing away, even when surrounded by countless books, that it is hard to say exactly how the Internet has affected me. The Internet is becoming as ordinary as the telephone. Humans are good at adapting to the technologies we create, and the Internet is the most malleable, the most human of all technologies—just as it can also alienate us from everything we lived before now.

  I waver between these two positions—at times gratefully dependent on this marvel, at times horrified at what this dependence signifies. Too much concentrated in one place, too much accessible from one’s house, the need to move about in the real world nearly nil, the rapid establishment of social networking Websites changing our relationships, the reduction of three-dimensionality to that flat screen. Rapidity, accessibility, one click for everything: Where has slowness gone, and tranquility, solitude, quiet? The world I took for granted as a child, and which my childhood books beautifully represented, jars with the brand-new world of artificial glare and electrically created realities—faster, louder, unrelated to nature, self-contained.

  The technologies we create always have an impact on the real world, but rarely has a technology had such an impact on minds. We know what’s happening to those who were born after the advent of the Internet; for those, like me, who started out with typewriters, books, slowness, reality measured by geographical distance and local clocks, the emerging world is very different indeed from the world we knew.

  I am of that generation for which adapting to computers was welcome and easy but for which the pre-Internet age remains real. I can relate to those who call the radio “the wireless,” and I admire people in their seventies and eighties who communicate by e-mail, because they come from further away still. Perhaps the way forward would be to emphasize the teaching of history in schools, to develop curricula on the history of technology, to remind today’s children that their technology, absolutely embracing as it feels, is relative and does not represent the totality of the universe. Millions of children around the world don’t need to be reminded of this—they have no access to technology at all, many not even to modern plumbing—but those who do should know how to place this tool historically and politically.

  As for me, I am learning how to make room for the need to slow down and disconnect without giving up my addiction to Google, e-mail, and rapidity. I was lucky enough to come from somewhere else, from a time when information was not digitized. And that is what perhaps enables me to use the Internet with a measure of wisdom.

  The Greatest Detractor to Serious Thinking Since Television

  Leo Chalupa

  Ophthalmologist and neurobiologist, University of California, Davis

  The Internet is the greatest detractor to serious thinking since the invention of television. It can devour time in all sorts of frivolous ways, from chat rooms to video games. And what better way to interrupt one’s thought processes than by an intermittent stream of incoming e-mail messages? Moreover, the Internet has made interpersonal communication much more circumscribed than in the pre-Internet era. What you write today may come back to haunt you tomorrow. The brouhaha in late 2009 following the revelations of the climate scientists’ e-mails is an excellent case in point.

  So while the Internet provides a means for rapidly communicating with colleagues globally, the sophisticated user will rarely reveal true thoughts and feelings in such messages. Serious thinking requires honest and open communication, and that is simply untenable on the Internet by those who value their professional reputation.

  The one area in which the Internet could be considered an aid to thinking is the rapid procurement of new information. But even this is more illusory than real. Yes, the simple act of typing a few words into a search engine will virtually instantaneously produce links related to the topic at hand. But the vetting of the accuracy of information obtained in this manner is not a simple matter. What one often gets is no more than abstract summaries of lengthy articles. As a consequence, I suspect that the number of downloads of any given scientific paper has little relevance to the number of times the entire article has been read from beginning to end. My advice is that if you want to do some serious thinking, then you’d better disconnect the Internet, phone, and television set and try spending twenty-four hours in absolute solitude.

  The Large Information Collider, BDTs, and Gravity Holidays on Tuesdays

  Paul Kedrosky

  Editor, Infectious Greed; senior fellow, Kauffman Foundation

  Three friends have told me recently that over the latest holiday they unplugged from the Internet and had big, deep thoughts. This worries me. First, three data points means it’s a trend, so maybe I should be doing it. Second, I wonder if I could disconnect from the Internet long enough to have big, deep thoughts. Third, like most people I know, I worry that even if I disconnect long enough, my info-krill-addled brain is no longer capable of big, deep thoughts (which I will henceforth call BDTs).

  Could I quit? At some level, it seems a silly question, like asking how I feel about taking a breathing hiatus or if on Tuesdays I would give up gravity. The Internet no longer feels voluntary when it comes to thinking. Instead it feels more like the sort of thing that, when you make a conscious effort to stop doing it, bad things happen. As a kid, I once swore off gravity and jumped from a barn haymow, resulting in a sprained ankle. Similarly, a good friend of mine sometimes asks fellow golfers before a swing whether they breathe in or breathe out. The next swing is inevitably horrible, as the golfer sends a ball screaming into the underbrush.

  Could I quit the Internet if it meant I would have more BDTs? Sure, I suppose I could, but I’m not convinced the BDTs would occur to me. First, the Internet is, for me, a kind of internal cognition-combustion engine, something that vastly accelerates my ability to travel vast landscapes. Without it, I’d have a harder time comparing, say, theories about complexity, cell phones, and bee colony collapse rather than writing an overdue paper or counting hotel room defaults in California versus Washington State. (In case you’re curious, there are roughly twice as many defaulted hotel rooms in California as there are total hotel rooms in Seattle.)

  Like most people I know, I worry noisily and loudly that the Internet has made me incapable of having BDTs. I feel sure that I used to have such things, but for some reason I no longer do. Maybe the Internet has damaged me—I’ve informed myself to death!— to the point that I don’t know what big, deep thoughts are. Or maybe the brain chemicals formerly responsible for their emergence are now doing something else. Then again, this smacks of historical romanticism, like remembering the skies as always being blue and summers as eternal when you were eight years old.

  So, as much as I kind of want to believe people who say they have big, deep thoughts when they disconnect from the Web, I don’t trust them. It’s as though a doctor had declared himself Amish for the day and headed to the hospital by horse and buggy with a hemorrhaging patient. Granted, you could do that and some patients might even survive, but it isn’t prudent or necessary. It seems, instead, a public exercise in macho symbolism—like Iggy Pop carving something in his chest, a way of bloodily demonstrating that you’re different—or even a sign of outright crankishness. Look at me! I’m thinking! No Internet!

  If we know anything about knowledge, about innovation, and therefore about coming up with BDTs, it is that these things are cumulative, accretive processes of happening upon, connecting, and assembling: an infinite Erector set, not just a few pretty I-beams strewn about on a concrete floor. But if BDTs were just about connecting things, then the Internet would only be mildly interesting in changing the way I think. Libraries connect things, people connect things, and connections can even happen (yes) while sitting disconnected from the Internet under an apple tree somewhere. Here’s the difference: The Internet increases the speed and frequency of these connections and collisions while dropping the cost of both to near zero.

  It is that combination—cheap connections plus cheap collisions—that has done violence to the way I think. It’s like having a private particle accelerator on my desktop, a way of throwing things into violent juxtaposition, with the resulting collisions reordering my thinking. The result is new particles—ideas!—some of which are BDTs and many of which are nonsense. But the democratization of connections, collisions, and therefore thinking is historically unprecedented. We are the first generation to have the information equivalent of the Large Hadron Collider for ideas. And if that doesn’t change the way you think, nothing will.

  The Web Helps Us See What Isn’t There

  Eric Drexler

  Engineer, molecular technologist; author, Engines of Creation

  As the Web becomes more comprehensive and searchable, it helps us see what’s missing in the world. The emergence of more effective ways to detect the absence of a piece of knowledge is a subtle and slowly emerging contribution of the Web, yet important to the growth of human knowledge. I think we all use absence detection when we try to squeeze information out of the Web. It’s worth considering both how that works and how the process could be made more reliable and user-friendly.

  The contributions of absence detection to the growth of shared knowledge are relatively subtle. Absences themselves are invisible, and when they are recognized (often tentatively), they usually operate indirectly, by influencing the thinking of people who create and evaluate knowledge. Nonetheless, the potential benefits of better absence detection can be measured on the same scale as the most important questions of our time, because improved absence detection could help societies blunder toward somewhat better decisions about those questions.

  Absence detection boosts the growth of shared human knowledge in at least three ways:

  Development of knowledge. Generally, for shared knowledge to grow, someone must invest effort to develop a novel idea into something more substantial (resulting in a blog post, a doctoral dissertation, or whatever). A potential knowledge creator may need some degree of confidence that the expected result doesn’t already exist. Better absence detection can help build that confidence—or drop it to zero and abort a costly duplication.

  Validation of knowledge. For shared knowledge to grow, something that looks like knowledge must gain enough credibility to be treated as knowledge. Some knowledge is born with credibility, inherited from a credible source, yet new knowledge, supported by evidence, can be discredited by arguments backed by nothing but noise. A crucial form of evidence for a proposition is sometimes the absence of credible evidence against it.

  Destruction of antiknowledge. Shared knowledge can also grow through removal of antiknowledge—for example, by discrediting false ideas that displaced or discredited true ones. Mirroring validation, a crucial form of evidence against the credibility of a proposition is sometimes the absence of credible evidence for it.

  Identifying what is absent by observation is inherently more difficult than identifying what is present, and conclusions about absences are usually substantially less certain. The very idea runs counter to the adage that absence of evidence is not evidence of absence, being based instead on the principle that absence of evidence sometimes is evidence of absence. This can be obvious: What makes you think there’s no elephant in your room? Of course, good intellectual housekeeping demands that reasoning of this sort be used with care. Perceptible evidence must be comprehensive enough that a particular absence, in a particular place, is significant: I’m not at all sure that there’s no gnat in my room, and I can’t be entirely sure that there’s no elephant in my neighbor’s yard.

  Reasonably reliable absence detection through the Web requires both good search and dense information, and this is one reason why the Web becomes effective for the task only slowly, unevenly, and almost imperceptibly. Early on, an absence in the Web shows a gap in the Web; only later does an absence begin to suggest a gap in the world itself.

  I think there’s a better way to detect absences, one that bypasses ad hoc search by creating a public place where knowledge comes into focus. We could benefit immensely from a medium that is as good at representing factual controversies as Wikipedia is at representing factual consensus.

  What I mean by this is a social software system and community much like Wikipedia—perhaps an organic offshoot—that would operate to draw forth and present what is, roughly speaking, the best evidence on each side of a factual controversy. To function well, it would require a core community that shares many of the Wikipedia norms but would invite advocates to present a far-from-neutral point of view. In an effective system of this sort, competitive pressures would drive competent advocates to participate, and incentives and constraints inherent in the dynamics and structure of the medium would drive advocates to pit their best arguments head-to-head and point by point against the other side’s best arguments. Ignoring or caricaturing opposing arguments simply wouldn’t work, and unsupported arguments would become more recognizable.

  Success in such an innovation would provide a single place to look for the best arguments that support a point in a debate, and with these, the best counterarguments—a single place where the absence of a good argument would be good reason to think that none exists.

  The most important debates could be expected to gain traction early. The science of climate change comes to mind, but there are many others. The benefits of more effective absence detection could be immense and concrete.

  Knowledge Without, Focus Within, People Everywhere

  David Dalrymple

  Eighteen-year-old PhD student; researcher, MIT’s Mind Machine Project

  Filtering, not remembering, is the most important skill for those who use the Internet. The Internet immerses us in a milieu of information—for almost twenty years it has been impossible for any Web user to read every available page—and there’s more each minute: Twitter alone processes hundreds of tweets every second, from all around the world, all visible for anyone, anywhere, who cares to see. Of course, the majority of this information is worthless to the majority of people. Yet anything we care to know—What’s the function for opening files in Perl? How far is it from Hong Kong to London? What’s a power law?—is out there somewhere.

  I see today’s Internet as having three primary, broad consequences: (1) information is no longer stored and retrieved by people but is managed externally, by the Internet; (2) it is increasingly challenging and important for people to maintain their focus in a world where distractions are available everywhere; and (3) the Internet enables us to talk to and hear from people around the world effortlessly.

  Before the Internet, most professional occupations required a large body of knowledge, accumulated over years or even decades of experience. But now anyone with good critical thinking skills and the ability to focus on the important information can retrieve it on demand from the Internet instead of from her own memory. However, those with wandering minds, who might once have been able to focus by isolating themselves with their work, now often find they must do their work with the Internet, which simultaneously furnishes a panoply of unrelated information about their friends’ doings, celebrity news, limericks, and millions of other sources of distraction. How well an employee can focus might now be more important than how knowledgeable he is. Knowledge was once an internal property, and focus on the task at hand could be imposed externally; with the Internet, knowledge can be supplied externally but focus must be achieved internally.

  Separable from the intertwined issues of knowledge and focus is the irrelevance of geography in the Internet age. On the transmitting end, the Internet allows many types of professionals to work in any location—from their home in Long Island, from their condo in Miami, in an airport in Chicago, or even in flight on some airlines—wherever there’s an Internet connection. On the receiving end, it allows for an Internet user to access content produced anywhere in the world with equal ease. The Internet also enables groups of people to assemble based on interest, rather than on geography—collaboration can take place between people in Edinburgh, Los Angeles, and Perth nearly as easily as if they lived in neighboring cities.

  In the future, we’ll see the development of increasingly subconscious interfaces. Already, making an Internet search is something many people do without thinking about it, like making coffee or driving a car. Within the next fifty years, I expect the development of direct neural links, making the data that are available at our fingertips today available at our synapses in the future, and making virtual reality feel more real than our current sensory perception. Information and experience could be exchanged between our brains and the network without any conscious action. And at some point knowledge may be so external that all knowledge and experience will be universally shared, and the only notion of an “individual” will be a particular focus—a point in the vast network that concerns itself only with a specific subset of the information available.

  In this future, knowledge will be fully outside the individual, focus will be fully inside, and everybody’s selves will truly be spread everywhere.