than) one result. That is, only one web page in the world (at least as indexed by Google) will happen to have the combination of words you have entered in the search box” (http://www.googlewhack.com/). Some examples of past Googlewhacks that have been successful include word pairs such as: comparative unicyclist, maladroit wheezer, blithering clops, and demurrable insufficiencies. Both the term and the occupation of Googlewhacking are the inventions of Gary Stock, Chief Innovation Officer, Nexcerpt, Inc. (http://www.googlewhack.com/ and http://www.unblinking.com).
The raison d'être of this phenomenon lies with the information overload issue: the number of hits returned for a given search term is enormous and yet not particularly useful. For example, what results from typing “knowledge management”? It is interesting to compare the results to the concept analysis technique that was presented in chapter 1. For example, Weinberger (1998) used the keywords human, user, change management, knowledge worker, and person, and kept a tally of the number of hits returned using those key terms. This can then be compared to the hits obtained when technology-related key terms are used, such as processor, RAID, mouse, Internet, and repository. The number of hits obtained with KM technology terms far exceeds the number of hits obtained with nontechnology terms. This is partially due to the fact that there are possibly more technology publications, but it illustrates that the “human” is often the last thing considered as organizations change their technology. This is a key reason why many technology initiatives fail: the human element is neglected.
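As a rough illustration of this hit-tallying comparison, the following Python sketch queries a web search API for each term and records the estimated result count. It is only a sketch: it assumes Google's Custom Search JSON API, and the endpoint, the key and engine-id parameters, and the searchInformation.totalResults response field should all be verified against the current API documentation; any search service that reports an estimated hit count could be substituted.

# Sketch: tally estimated hit counts for human-oriented versus
# technology-oriented KM terms, after Weinberger's comparison.
# Assumes Google's Custom Search JSON API; the endpoint and response
# fields below should be checked against the current documentation.
import requests

API_KEY = "YOUR_API_KEY"      # placeholder credential
ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder custom search engine id
ENDPOINT = "https://www.googleapis.com/customsearch/v1"

HUMAN_TERMS = ["human", "user", "change management", "knowledge worker", "person"]
TECH_TERMS = ["processor", "RAID", "mouse", "Internet", "repository"]

def estimated_hits(term):
    """Return the engine's estimated total result count for one term."""
    params = {"key": API_KEY, "cx": ENGINE_ID, "q": term}
    response = requests.get(ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    info = response.json().get("searchInformation", {})
    return int(info.get("totalResults", 0))

def tally(terms):
    """Map each term to its estimated hit count."""
    return {term: estimated_hits(term) for term in terms}

if __name__ == "__main__":
    print("Human-oriented terms:", tally(HUMAN_TERMS))
    print("Technology terms:", tally(TECH_TERMS))

Running such a tally at different points in time would also show how unstable these counts are, which is itself a reminder that search engine hit counts are estimates rather than exhaustive measures.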
To make matters worse, there is a common misconception that the commercial search engines perform an objective and exhaustive search of all things digital and that the hits are ranked, that is, that the first hit is the most relevant to what you were looking for. Nothing, of course, could be further from the truth. Introna and Nissenbaum (2000) argue that search engines raise not merely technical issues but also political ones. Their study of search engines suggests that they systematically exclude (in some cases by design and in some by accident) certain sites and certain types of sites in favor of others, systematically giving prominence to some at the expense of others. Such biases would narrow the web's functioning in society and run counter to the basic architecture of the web, as well as to the values and ideals that have fueled widespread support for its growth and development. It is doubtful that the market mechanism could serve as an acceptable corrective.
There are thus political as well as technical issues associated with search engines that systematically exclude certain sites in favor of others, whether by accident or by design. Users are largely ignorant of what goes on under the hood, and this is compounded by an unusually high degree of trust in what the computer says. Lawrence and Giles (1999) con-