Trends in Technology
By J. Anthony Vittal
The Exponential Information Environment
We are drowning in information—largely because of increasing access to information and ways to describe, manipulate, and distribute it. For example, more than 2.7 billion searches are performed on Google alone every month. Before the advent of Internet search engines, there was nothing to facilitate broad-based information searches; research that used to require weeks or months to complete, constrained by access and language, now takes minutes or hours to accomplish. Then there is the “noise” factor—the vast increase in interpersonal communication driven by technology (e.g., the number of text messages sent and received every day exceeds the population of the planet).
Let’s look at some related statistics:
- The English language now has about 540,000 words—about five times as many as in Shakespeare's time.
- More than 3,000 books are published daily around the world.
- Estimates indicate that a week's worth of the New York Times—just one newspaper—contains more information than a person was likely to encounter in a lifetime in the 18th century, when our nation was founded.
- Estimates indicate that 1.5 exabytes (1.5 × 10^18 bytes) of unique new information will be generated worldwide this year—more than in the last 5,000 years combined.
- The amount of new technical information is doubling every two years. By 2010, the amount of new technical information is predicted to double every 72 hours. In real-world terms, this means that half of what current college freshmen learn as freshmen will be outdated by their junior year.
- Third-generation fiber optics recently tested by NEC and Alcatel will carry 10 terabits (10 trillion bits) of data per second over each strand of fiber. That’s the equivalent of 1,900 CDs or 150 million simultaneous telephone calls, every second. Data transmission capacity is currently tripling about every six months and is expected to continue doing so for at least the next 20 years. Because the fiber already is in place, the increase in throughput comes from improving the switching equipment at the transmitting and receiving ends. Therefore, the marginal cost of these improvements is virtually nil.
- With improvements in transmission and storage, e-paper ultimately will be cheaper than real paper.
- In 2006, 47 million laptops were shipped. The “$100 Laptop Project” aims to ship 50–100 million laptops a year to children in underdeveloped countries.
- Predictions are that, by 2013, a supercomputer will be built that exceeds the computational capability of the human brain. By 2023, when today’s first graders will be graduating from college and entering the job market, a computer that exceeds the computational capability of the human brain will cost $1,000. Some think that, by 2049, a $1,000 computer will exceed the computational capabilities of the human race.
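The fiber-optic equivalences above are easy to sanity-check with back-of-the-envelope arithmetic. A short sketch (assuming a standard ~650 MB CD and a 64 kbit/s digital voice channel—figures not stated in the column itself):

```python
# Sanity check of the 10 Tbit/s fiber figures cited above.
# Assumptions (not stated in the column): a CD holds roughly 650 MB,
# and an uncompressed digital phone call occupies a 64 kbit/s channel.

link_bits_per_sec = 10e12    # 10 terabits per second per fiber strand
cd_bytes = 650e6             # ~650 MB per CD
call_bits_per_sec = 64e3     # 64 kbit/s per voice channel

cds_per_second = link_bits_per_sec / 8 / cd_bytes        # bits -> bytes -> CDs
simultaneous_calls = link_bits_per_sec / call_bits_per_sec

print(f"CDs per second:     {cds_per_second:,.0f}")        # ~1,900
print(f"Simultaneous calls: {simultaneous_calls/1e6:,.1f} million")  # ~156 million
```

Both results land close to the column’s round numbers, so the quoted equivalences hold up.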
What does all of this mean? For starters, the personal notebook computer not only is replacing print media (books, magazines, and newspapers), but already has the ability to put entire libraries in the hands of everyone who has access to one. It already far exceeds the capabilities envisioned for the “ultimate book” predicted a generation ago. Translation algorithms allow information published in one language to be readily accessible in others. One of the many online translation applications (babelfish.altavista.com) provides free translation of blocks of text and whole web pages among English, Chinese (both traditional and simplified), Dutch, French, German, Greek, Italian, Japanese, Korean, Portuguese, Russian, and Spanish, among other languages.
The combination of access to information and the ability to store, manipulate, and interpret it will be the great leveler across human society. No longer will one require a formal education and a facility with languages to have access to knowledge. Supercomputers will provide the ability to interpret and synthesize that knowledge. The danger will lie in filtering the knowledge, the interpretations, and the syntheses, as they are distributed to those who seek them in an effort to exercise control. Therein lies one of the great challenges to humanity.
Another challenge will be the storage of all of this information in a way that will survive and that will not overwhelm our planet. Current information storage technology depends for the most part on magnetic and optical storage media, which require significant amounts of power to read, write, and maintain. Server bays require power to run the servers, operate the storage drives, and keep them cooled to optimal operating temperatures. Data transmission also requires significant amounts of power in the aggregate to push data—whether wirelessly between wireless access points and connected devices (computers, print servers, etc.), or among satellites and ground stations, or across the Internet.
With power generation already a significant contributor to global warming, and with demand for power growing exponentially even as we need to curb those emissions, we need to find alternate ways to store and process data. Some alternatives that are coming onto the market (more about them in my next column) include:
- Solid-state drives (SSDs)
- Ultra-low-voltage CPUs
- LED backlights for LCD displays
- Improved batteries
We all need to start thinking long-term about the demands these technologies are placing on our environment and the extent to which our reliance on these technologies is rendering our civilization ever more fragile. After all, there is no utility at all in having all of the information in the world at your fingertips, if you have no way to access it, either because you have no way to power your own computer, or there is no power available to run the network or the servers where the information is stored. Just thinking about this makes me want to find a pedal-operated 110V generator to use to charge my portable electronics—“just in case.”
J. Anthony Vittal (email@example.com) is in private practice with The Vittal Law Firm based in Los Angeles, California. A former member of the ABA Standing Committee on Technology and Information Systems, a member of the editorial boards for Tech eReport and the Technology & Practice Guide issues of GP|SOLO, and a member of various technology-oriented committees of ABA Sections, he speaks and writes frequently on legal technology topics.