Jean-François Lyotard, a prominent postmodernist, has this to say about the status of knowledge in modernity:
“Our working hypothesis is that the status of knowledge is altered as societies enter what is known as the postindustrial age and cultures enter what is known as the postmodern age. This transition has been under way since at least the end of the 1950s, which for Europe marks the completion of reconstruction. The pace is faster or slower depending on the country, and within countries it varies according to the sector of activity: the general situation is one of temporal disjunction which makes sketching an overview difficult. A portion of the description would necessarily be conjectural. At any rate, we know that it is unwise to put too much faith in futurology.
“Rather than painting a picture that would inevitably remain incomplete, I will take as my point of departure a single feature, one that immediately defines our object of study. Scientific knowledge is a kind of discourse. And it is fair to say that for the last forty years the “leading” sciences and technologies have had to do with language: phonology and theories of linguistics, problems of communication and cybernetics, modern theories of algebra and informatics, computers and their languages, problems of translation and the search for areas of compatibility among computer languages, problems of information storage and data banks, telematics and the perfection of intelligent terminals, to paradoxology. The facts speak for themselves (and this list is not exhaustive).
“These technological transformations can be expected to have a considerable impact on knowledge. Its two principal functions – research and the transmission of acquired learning – are already feeling the effect, or will in the future. With respect to the first function, genetics provides an example that is accessible to the layman: it owes its theoretical paradigm to cybernetics. Many other examples could be cited. As for the second function, it is common knowledge that the miniaturisation and commercialisation of machines is already changing the way in which learning is acquired, classified, made available, and exploited. It is reasonable to suppose that the proliferation of information-processing machines is having, and will continue to have, as much of an effect on the circulation of learning as did advancements in human circulation (transportation systems) and later, in the circulation of sounds and visual images (the media).
“The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The “producers” and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as “knowledge” statements.
“We may thus expect a thorough exteriorisation of knowledge with respect to the “knower,” at whatever point he or she may occupy in the knowledge process. The old principle that the acquisition of knowledge is indissociable from the training (Bildung) of minds, or even of individuals, is becoming obsolete and will become ever more so. The relationship of the suppliers and users of knowledge to the knowledge they supply and use is now tending, and will increasingly tend, to assume the form already taken by the relationship of commodity producers and consumers to the commodities they produce and consume – that is, the form of value. Knowledge is and will be produced in order to be sold, it is and will be consumed in order to be valorised in a new production: in both cases, the goal is exchange.
“Knowledge ceases to be an end in itself, it loses its ‘use-value.’”
What does this have to do with traditionalists?
1. Statements of value/belief/commitment/understanding/wisdom/faith, unless they have “exchange-value,” are no longer considered to be knowledge. They may be considered opinions, or preferences, or emotions, but they aren’t knowledge.
2. Traditions are replaced by a new tradition, defined by the commodification and exchange of knowledge. The process of commodification itself, and the place of value within that process, becomes the new tradition.
3. Lyotard correctly predicts the compression of knowledge into useable units of exchange. Today, we speak of “sound bites,” or “factoids.” This phenomenon is not anomalous, and will intensify.
4. The instruments by which to manipulate and exchange compressed units of knowledge at the highest possible velocity become indispensable: computers and their paraphernalia.
5. No person, group, or government can possibly store, monitor, or absorb all currently exchangeable units of knowledge. The bravest efforts are conducted by the NSA and Google (for entirely different purposes). Google, for example, has no access to the vast data banks of commercial proprietary information; it is limited to information that is actually “published” for general consumption, a practice that will become more and more restricted as knowledge commodification intensifies and its exchange value grows. In other words, knowledge will be hoarded, despite righteous protests to the contrary. Encryption and private channels of knowledge exchange will grow and become ubiquitous.
Michael Crichton has been complaining for some time about the consequences of these developments, and points to the “commercialization of scientific research,” the “patenting of life,” and the “patenting of disease.”
It won’t be long before an individual or group attempts to copyright (or patent) his/her/its entire “life”—as a collection of discrete data bits—in all its dimensions and possible expressions.
6. The basic principle of intelligibility within scientific discourse is becoming the dynamic of information. In biology, a physical explanation of evolution (natural selection) is being challenged by an information-based explanation (genes aren’t understood as molecules, but as bits of information, and obey the principles of information). Or, in the physical sciences, the old paradigm was: The universe is composed of two fundamental constituents: matter and energy. This old paradigm is being challenged by a new one: The universe is composed of three fundamental constituents: matter, energy, and information (or perhaps: the principles of information are the basic principles of the physical universe, expressed in matter and energy).
7. Human experience is stripped of its depth and texture, and is reduced to units of experience that can be expressed through compressed information bits (and marketed as such). Human experience therefore becomes disjointed, episodic, and random. Traditional expressions of human experience—rituals, customs, traditions, habits—become meaningless. Individual and communal narrative dies.
8. Human beings are reduced to separate data bits—consumer habits, debt rating, identifying information, etc. Or: Human beings are reduced to information processing centers; human consciousness and experience are the product of hardware (the organic brain) and software (material processes within the brain, known colloquially as “thinking”).
9. These knowledge bits are subjected to statistical analysis which provides a comprehensive view of “reality.” These snapshots of “reality” are reported as reliable reflections of actual life to governments, policy makers, the media, and consumers.
10. Traditional understandings, experiences, and distinctions—among individuals, groups, nations, races, ethnic groups—are simply invisible in this new knowledge ecology.
11. Memory disappears. All knowledge is tentative and disposable, depending on its exchange value at any given moment.
12. Oral or communal traditions disappear, unless “recorded.”
13. One would expect sects or congregations within Christianity, as well as some Christian scholarship, to adopt this new paradigm and its methods.
How much “knowledge” is there?
http://evans-experientialism.freewebspace.com/lesk.htm
This links to an article by Michael Lesk, written sometime in the 1990s, asking the question: How much knowledge is there?
By “knowledge,” Lesk means useable digital bits of information.
In 2000, Lesk was head of the Division of Information and Intelligent Systems at the National Science Foundation.
Here is some of what Lesk has to say:
How much information is there?
The 20-terabyte size of the Library of Congress is widely quoted and as far as I know is derived by assuming that LC has 20 million books and each requires 1 MB. Of course, LC has much other stuff besides printed text, and this other stuff would take much more space.
Thirteen million photographs, even if compressed to a 1 MB JPG each, would be 13 terabytes. The 4 million maps in the Geography Division might scan to 200 TB. LC has over five hundred thousand movies; at 1 GB each they would be 500 terabytes (most are not full-length color features). Bulkiest might be the 3.5 million sound recordings, which at one audio CD each, would be almost 2,000 TB. This makes the total size of the Library perhaps about 3 petabytes (3,000 terabytes).
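Lesk’s back-of-the-envelope totals are easy to reproduce. The sketch below assumes roughly 550 MB per audio CD (a conversion Lesk does not state explicitly); with that assumption, the components sum to a little under three petabytes:

```python
# Reproducing Lesk's Library of Congress estimate.
# All sizes in terabytes (TB); 1 TB = 1,000 GB = 1,000,000 MB (decimal units).
MB_PER_TB = 1_000_000
GB_PER_TB = 1_000

books      = 20_000_000 * 1 / MB_PER_TB    # 20 TB of printed text at 1 MB/book
photos     = 13_000_000 * 1 / MB_PER_TB    # 13 TB of 1 MB JPEGs
maps       = 200                           # 200 TB (Lesk's scanning estimate)
movies     = 500_000 * 1 / GB_PER_TB       # 500 TB at 1 GB per movie
recordings = 3_500_000 * 550 / MB_PER_TB   # ~1,925 TB at ~550 MB per CD (assumed)

total_tb = books + photos + maps + movies + recordings
print(f"Total: {total_tb:,.0f} TB ≈ {total_tb / 1_000:.1f} PB")  # Total: 2,658 TB ≈ 2.7 PB
```

Sound recordings dominate the total, which is why Lesk rounds the whole Library to “about 3 petabytes.”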
How much computer storage is there?
Note that these numbers added up are all comparable to the size of the numbers for the total amount of information in the world. So the implication is that in the year 2000 we will be able to save in digital form everything we want to – including digitizing all the phone calls in the world, all the sound recordings, and all the movies. We’ll probably even be able to do all the home movies in digital form. We can save on disk everything that has any contact with professional production or approval. Soon after the year 2000 the production of disks and tapes will outrun human production of information to put on them. Most computer storage units will have to contain information generated by computer; there won’t be enough of anything else.
How much total human memory is there?
With something like 6 billion people on earth, that makes the total memory of all the people now alive about 1,200 petabytes. To the accuracy with which these calculations are being done, the results are comparable. We can store digitally everything that everyone remembers. For any single person, this isn’t even hard. Landauer estimated that people only take in and remember about a byte a second; a typical lifetime is 25,000 days or 2 billion seconds (counting time asleep). The result is 2 gigabytes, or something that fits on a laptop drive.
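Two distinct estimates are folded into that paragraph. The quoted world total of 1,200 petabytes implies roughly 200 MB of retained memory per person (a per-person figure consistent with Landauer’s work but inferred here, not stated in the excerpt), while the byte-per-second intake over a 25,000-day life gives the separate ~2 GB lifetime figure:

```python
# Two estimates implicit in Lesk's paragraph (per-person figure is inferred).
PEOPLE = 6_000_000_000  # world population circa 2000

# (a) A world total of 1,200 PB implies ~200 MB retained per person.
per_person_bytes = 200 * 10**6
world_total_pb = PEOPLE * per_person_bytes / 10**15
print(f"World total: {world_total_pb:,.0f} PB")  # World total: 1,200 PB

# (b) Lifetime intake at one byte per second over 25,000 days.
seconds = 25_000 * 86_400       # ≈ 2.16 billion seconds, counting sleep
lifetime_gb = seconds / 10**9   # ≈ 2.2 GB — "fits on a laptop drive"
print(f"Lifetime intake: {lifetime_gb:.1f} GB")
```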
Can we digitize a complete human life?
Would it be hard to remember every word you heard in your lifetime, including the ones you forgot? The average American spends 3,304 hours per year with one or another kind of media. [Census 1995]. 1,578 hours are with TV; adding in 12 hours a year of movies, at 120 words per minute that’s 11 million words, perhaps 50 megabytes of Ascii. And 354 hours a year of reading newspapers, magazines and books at 300 words per minute reading speed would be another 32 megabytes of text. In seventy years of life you would be exposed to around six gigabytes of Ascii; today you can buy 23 gigabyte disk drives.
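The lifetime-exposure arithmetic checks out if one assumes an average word of about five bytes of ASCII (an assumption; Lesk does not state the words-to-bytes conversion):

```python
# Reproducing Lesk's lifetime media-exposure estimate.
BYTES_PER_WORD = 5  # assumed average, including spaces

tv_hours      = 1_578 + 12               # TV plus movies, hours per year
tv_words      = tv_hours * 60 * 120      # at 120 words per minute
reading_hours = 354                      # newspapers, magazines, books
reading_words = reading_hours * 60 * 300 # at 300 words per minute

yearly_mb = (tv_words + reading_words) * BYTES_PER_WORD / 10**6
total_gb  = 70 * yearly_mb / 1_000       # seventy years of exposure
print(f"TV/movies: {tv_words / 1e6:.0f} million words/year; "
      f"lifetime total ≈ {total_gb:.1f} GB")
```

The result lands at roughly six gigabytes, matching Lesk’s “around six gigabytes of Ascii” against the 23 GB drives then on sale.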
Two years ago I heard Ted Nelson at a conference suggest that we should keep the entire record of everyone’s life – all the home snapshots, videos and the like. Some six-year-old, he said, is going to grow up to be President; and then the historians will wish we knew absolutely everything about his or her life. The only way to do this is to save everything about everyone’s life. I laughed, but it’s indeed possible. Whether it is worthwhile is another question: are we better off having all possible information and giving it the most sketchy consideration, or having less information but trying to analyze it better? Computers do not use log tables, and chess computers have dictionaries of opening and endgame positions but not whole games. We need to understand our ability to model more complex situations to know how to make best use of stored information.
What about learning and education?
Could we simply make a wearable device that would record everything? Yes, if either (a) we had decent speech recognition and OCR, or (b) books move to electronic form and TV sets provide access to the closed-captioned Ascii form of the scripts. Perhaps both of these choices are likely in the near future. School children no longer need to do arithmetic without calculators; perhaps they will soon no longer need to memorize anything either.
Units of Information
As an addendum to these observations, Lesk provides the names of the units of very large storage sizes:
gigabyte 1,000 megabytes
terabyte 1,000 gigabytes
petabyte 1,000 terabytes
exabyte 1,000 petabytes
Cold Water on the Information Revolution
The German philosopher Babette Babich casts cold water on the information age as one of postmodern liberation:
“. . . a very postmodern incredulity should be reserved for proposals concerning the postmodern status of knowledge, particularly science, certainly information science. For, apart from Lyotard’s (or Baudrillard’s or McLuhan’s or any other media expert’s) ecstatic enthusiasm for the liberating virtues of the information revolution, the idea itself is patently overblown.
Virtual reality by another name is the simulacrum. The thing about the simulacrum (a computer game, surround sound, multi-media computer graphics) is that it is very manifestly a substitute, like driving a play automobile at a video arcade.
The computer image is coded – read and interpreted with perfectly hermeneutic alacrity – as it is in every other sphere of “real world” perception, but coded as unreal, as an image. It takes away not at all from the realistic charge (or kick) of such virtual images that they are palpably inferior (impalpable) substitutes. For the kick is exactly that they be as good as they are. “Surround sound” sounds as if one might be in a live concert. To sound this way, of course, given the accoutrements of the ordinary living room, drapes, couches, carpets, and given the distractions of a picture window or a nearby kitchen conversation, it has to be, and it is, larger than life. It is in this overwhelming imaging that the realism of the substitute consists.
The issue here does not concern the message but the medium and the consequences of an automatic credulity, better a belief not in metanarrative as such but in megabytes and still and yet in redemption through technology, ever more rarified to the internet, to email, to spreadsheets, to wordprocessing and the labile and virtual text.
Lyotard, Habermas, Rorty, Taylor and so many others tell us in very different ways that the current information age is the age of liberation. Liberation, for enlightenment thinking, is exactly progress. But the point is to press a question against our credulity in automation, as a credulity in the electronic order.”
http://evans-experientialism.freewebspace.com/babich03.htm