THE ANALYSIS OF SEARCH FAILURES IN CHESHIRE
by
Yasar Tonta
A rough draft research proposal
Berkeley
21 February 1991
It has long been observed that two types of search failure occur in online information retrieval systems: the retrieval of non-relevant documents (precision failure) and the failure to retrieve relevant documents (recall failure). Detailed intellectual analysis of the reasons why these two kinds of search failures occur is rarely conducted, however.
The present study will investigate the probable causes of search failures in CHESHIRE, a "third generation" experimental online catalog system. The rigorous analysis of search failures in CHESHIRE will be based on transaction log data, which will allow us to study users' search behavior unobtrusively.
The findings of such a study will shed light on the probable causes of search failures in CHESHIRE in particular and in other online catalog systems in general. The results will help improve our understanding of the role of natural query languages and indexing in online catalogs. Furthermore, the findings will provide insight that can be incorporated into "relevance feedback" algorithms, which facilitate the trial-and-error process by allowing users to modify their queries interactively on the basis of the results obtained during the initial run.
Statement of the Problem
Studies show that users frequently fail to retrieve relevant materials in online catalogs. A considerable proportion of searches in online catalogs (up to 40% of all searches, on average) retrieve nothing. Users find subject searching in online catalogs especially difficult, and they experience the most serious problems in formulating their queries in a way "acceptable" to the system. There is reason to believe that most search failures can be attributed to the "brittle" and unforgiving query languages and controlled vocabularies employed in current online catalogs. Query languages based on strict rules and controlled vocabularies are far more complicated than users can grasp in a short period of time.
From the users' point of view it is certainly preferable to be able to express their information needs in their own natural language terms. However, most, if not all, current online catalogs lack such facilities, and the role of natural query languages in search success in online catalogs is yet to be thoroughly investigated. The literature also lacks studies that analyze the match between users' vocabulary and that of the system.
The CHESHIRE experimental online catalog system, which will be used in this study, allows users to enter search queries in natural language form. Unlike current online catalogs, CHESHIRE is designed to accommodate probabilistic information retrieval techniques; retrieval is not based on Boolean logic and simple keyword matching. CHESHIRE also provides relevance feedback mechanisms whereby users' relevance judgments on retrieved documents can be incorporated into subsequent searches in order to increase search success. CHESHIRE therefore presents unique opportunities to study search failures and the role of query languages and indexing in third generation online catalogs.
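By way of illustration only, a classic formulation of relevance feedback from the SMART experiments (Rocchio, 1971; Ide, 1971) modifies the query on the basis of the documents the user has judged; CHESHIRE's own probabilistic feedback method need not take exactly this vector space form:

$$ Q_1 = \alpha Q_0 + \beta \frac{1}{|R|} \sum_{d \in R} d \;-\; \gamma \frac{1}{|\bar{R}|} \sum_{d \in \bar{R}} d $$

where $Q_0$ is the initial query vector, $R$ and $\bar{R}$ are the sets of retrieved documents judged relevant and non-relevant, and $\alpha$, $\beta$, $\gamma$ are weighting constants.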
Objectives of the Study
The purpose of the present study is to:
1. analyze the search failures in CHESHIRE so as to identify the probable causes and to improve the retrieval effectiveness;
2. identify the role of relevance feedback in improving the retrieval effectiveness in CHESHIRE;
3. ascertain the extent to which users' natural language-based queries match the titles of the documents and the Library of Congress Subject Headings (LCSH) attached to them;
4. identify the role of natural language-based user interfaces in improving the retrieval effectiveness in CHESHIRE;
5. measure the retrieval effectiveness in CHESHIRE in terms of precision and recall.
Hypotheses
The main hypotheses of this study are as follows:
1. Search failures occur in the CHESHIRE experimental online catalog system;
2. A match between users' vocabulary and the titles of, and the LCSH assigned to, documents will reduce search failures and improve retrieval effectiveness in CHESHIRE;
3. The relevance feedback process will reduce search failures and enhance retrieval effectiveness in CHESHIRE.
Design of the Study
The following major tasks and activities need to be carried out in order to test the hypotheses of this study and to address the research questions.
1. The first major task is to identify the appropriate retrieval and relevance feedback algorithms to be used in CHESHIRE for this research. Several retrieval algorithms based on both the vector space and probabilistic models are currently available in CHESHIRE, and studies conducted on the system have shown that they operate at essentially identical levels of retrieval effectiveness. The most appropriate retrieval algorithm(s) will be chosen for the study in consultation with Professors Cooper and Larson.
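Purely as an illustration of the two model families (not the actual CHESHIRE formulas): a vector space algorithm typically ranks documents by a similarity coefficient such as the cosine measure, while a probabilistic algorithm ranks them by an estimate of the probability of relevance given the query:

$$ \mathrm{sim}(q,d) = \frac{\sum_{t} w_{t,q}\, w_{t,d}}{\sqrt{\sum_{t} w_{t,q}^{2}}\;\sqrt{\sum_{t} w_{t,d}^{2}}} \qquad \text{versus ranking by } P(\mathrm{relevant} \mid q, d), $$

where $w_{t,q}$ and $w_{t,d}$ are the weights of term $t$ in the query and the document, respectively.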
2. Since CHESHIRE is an experimental and constantly evolving online catalog, arrangements will be needed to ensure that the retrieval rules and algorithms do not change during the study. Gathering comparable data depends a great deal on a consistent and stable research environment in which the rules would not be changed without notification. [I need to discuss this with Professor Larson. It is likely that my research will increase the workload on CHESHIRE. Furthermore, we have to make sure that the users will get access to the same "version" of CHESHIRE every time they dial up and use the system. I do not know the specifics yet.]
3. In order to capture users' interaction with CHESHIRE (search queries entered, clusters retrieved, users' relevance judgments of the retrieved records, and the records themselves), a series of programs is needed. [I was told that some programs to record the transaction logs are already available on CHESHIRE. Writing some new programs or modifying the existing ones might be necessary.]
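A minimal sketch of the kind of record such a logging program would need to capture for each interaction; the field names below are illustrative assumptions, not CHESHIRE's actual log format:

from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class TransactionLogEntry:
    user_id: str                   # password-based identifier of the participant
    timestamp: datetime            # when the query was submitted
    query_text: str                # the natural language query as typed
    retrieved_ids: List[str]       # identifiers of records returned, in ranked order
    judged_relevant: List[str]     # records the user marked relevant
    judged_nonrelevant: List[str]  # records the user marked non-relevant
    feedback_iteration: int = 0    # 0 = initial search, 1 = after relevance feedback

def append_entry(log: List[TransactionLogEntry], entry: TransactionLogEntry) -> None:
    """Add one interaction to the in-memory transaction log."""
    log.append(entry)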
4. The next step will be to gather search queries that are based on real information needs of the users. This step will involve several activities and decisions.
First, a decision has to be made as to how many search queries will be needed to obtain reliable results. The more queries we have, the better; some 200 queries will be aimed for.
Second, a user group has to be identified. The likely candidates are the doctoral students at SLIS. However, doctoral students alone might not generate enough queries for this research, in which case the masters students can also be considered. In any case, the users will be notified and asked to participate.
Third, each user has to be issued a password in order to get access to CHESHIRE. Although CHESHIRE is available on the Internet and can be accessed by anyone, passwords will identify the users for data gathering purposes. Passwords will also trigger the transaction log programs so that we do not have to record transactions belonging to anonymous users.
Fourth, a training program for the users needs to be prepared on the use of CHESHIRE (getting access to it, issuing queries in natural language form, making relevance judgments, and using the relevance feedback mechanism). Some handouts can be prepared; an online tutorial is another possibility. The researcher will be more than happy to provide one-on-one training. The addition of masters students will introduce a different type of demand in terms of training: namely, the researcher has to consult with L200 professors in order to have the masters students use CHESHIRE. We may need to lecture briefly and issue passwords during a lecture. The masters students should be advised that they can use CHESHIRE for their annotated bibliography homework, since the CHESHIRE database contains more or less all the records that MELVYL does in library and information studies. This will not only provide a wide variety of search queries but will also ensure that the queries are genuine ones.
Fifth, a detailed letter will be sent to the users explaining the research project and what is expected of them. This letter will also elaborate on several issues such as how to formulate queries, how to make relevance judgments, and how to incorporate the relevance feedback mechanism in subsequent searches. Depending on the circumstances, a nominal incentive will be offered to the participating users.
Sixth, after a brief trial period the transactions will be recorded, say, throughout the fall semester.
From time to time reminders will be mailed to the users to reinforce their participation in the study.
The number of search queries accumulated can also be monitored from time to time.
5. Data analysis will be the next step in the project. Once enough queries have been gathered, the transaction logs will be printed. The following tasks need to be completed during the analysis:
All the search queries should be identified. For each query, the number of records for which the user made relevance judgments should be identified for the first retrieval; this will later allow us to calculate the precision ratio for each query. If the user opted for the relevance feedback process, the number of records for which relevance judgments were made should likewise be identified for the second retrieval. (Users will not be required to judge a certain number of records before proceeding to the relevance feedback step. In other words, one user may decide to perform a relevance feedback search after judging only one record, whereas another may go through the whole list and quit without performing a relevance feedback search at all. During training, however, we will make sure that users know how and are encouraged to perform relevance feedback searches.)
The records retrieved for each query and subsequently judged by the user should be documented for the purposes of evaluation.
6. Evaluation of data. As mentioned before, the precision ratios for both the first and second retrievals will be calculated. It should be noted that these relevance judgments are made by the users themselves.
Next, the recall ratio for each query will be calculated. Calculating recall will, however, require a somewhat different method than calculating precision. We think that the intention of the user can be determined from the natural language search query; the titles and subject headings of the records retrieved and judged relevant by the user can also be checked to confirm that intention. The search query will then be rerun in CHESHIRE in order to find more relevant documents in the database, using several search tactics employed in previous information retrieval experiments. By finding additional relevant documents in the database, we can determine a "minimum" recall ratio for each query.
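In terms of the counts available from the transaction logs and the reruns, the two ratios would be computed roughly as follows (a sketch of the intended calculation):

$$ \text{Precision} = \frac{\text{retrieved records judged relevant by the user}}{\text{retrieved records judged by the user}}, \qquad \text{Recall}_{\min} = \frac{\text{relevant records found in the original retrieval}}{\text{relevant records found in the original retrieval and the reruns}}. $$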
Precision/recall graphs can be drawn for all the queries in the first and second retrievals. Any improvement in the precision/recall ratios due to the relevance feedback effect can be seen from these graphs.
Although the relevance judgments in the second run will be made by the researcher, we think that the search queries and the records judged relevant by the user will make the context clear enough for the researcher to make sound relevance judgments when the search is repeated. Since this research is based on transaction log analysis, it is difficult to go back to the user and ask him or her to make relevance judgments for the same query after a certain period of time. Even though each query could be tracked and its "owner" determined very easily from the transaction logs, privacy considerations prevent us from doing so. Moreover, the user may or may not remember the search query and how he or she decided on relevance at the time. We believe, however, that the researcher can make consistent relevance judgments and determine the recall ratio for each query.
The analysis of each query and the retrieved records will provide clues as to the probable causes of search failures in CHESHIRE. Comparing search terms with the titles and LCSH will furnish further evidence to help explain the failures. Furthermore, by checking the records retrieved during the reruns, the researcher will be able to gather additional evidence about the causes of search failures.
Next, the search terms in the queries will be compared with the titles of the documents and the LCSH assigned to them in order to determine the match between users' vocabulary and that of the system. The results can be tabulated for each query, and we will examine whether there is a correlation between success rates and whether or not the search queries matched the system's vocabulary.
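A rough sketch of how the overlap between a user's query vocabulary and a record's title words and LCSH might be tabulated; the tokenization, the stopword list, and the example data are assumptions for illustration only:

import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "or", "for", "to"}

def terms(text):
    """Lowercase word tokens with stopwords removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

def vocabulary_match(query, title, subject_headings):
    """Fraction of query terms appearing in the record's title or LCSH."""
    query_terms = terms(query)
    record_terms = terms(title)
    for heading in subject_headings:
        record_terms |= terms(heading)
    return len(query_terms & record_terms) / len(query_terms) if query_terms else 0.0

# Example: one query matched against one retrieved record (invented data).
print(vocabulary_match("failures in online catalog subject searching",
                       "Subject Access in the Online Catalog",
                       ["Online library catalogs", "Subject cataloging"]))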
In order to identify the role of natural language-based user interfaces in retrieval effectiveness, some queries can also be searched on MELVYL using detailed search tactics. Although the results will not be directly comparable to those obtained in CHESHIRE, the individual records can be compared to see whether additional records are retrieved by either system.
Determining the role of relevance feedback in improving retrieval effectiveness in CHESHIRE is one of the most difficult problems to be tackled. Because the database in CHESHIRE is quite large (30,000+ records) compared with the test collections used in the past, it is impractical to trace the movement of each record between the first and second retrievals.
One way to overcome this problem would be to monitor the movements of, say, the top 30 documents retrieved during the first retrieval. In fact, "experience with the CHESHIRE system has indicated that the ranking mechanism is working quite well, and the top ranked clusters provide the largest numbers of relevant items" (Larson, 1989, p. 133). If chosen, this strategy will allow us to base our findings on top ranked records only.
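A sketch of how the movement of the top ranked records between the two retrievals could be traced, assuming each retrieval yields a ranked list of record identifiers; the cutoff of 30 comes from the strategy above, everything else is illustrative:

def rank_movement(first_ranking, second_ranking, cutoff=30):
    """Map each of the top `cutoff` record ids from the first retrieval to a
    (rank_before, rank_after) pair; rank_after is None if the record fell
    outside the portion of the second ranking that was captured."""
    position_after = {rec_id: rank for rank, rec_id in enumerate(second_ranking, start=1)}
    return {rec_id: (rank, position_after.get(rec_id))
            for rank, rec_id in enumerate(first_ranking[:cutoff], start=1)}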
However, opting for such a strategy might present some difficulties during the evaluation. First, since retrieval in CHESHIRE is not based on Boolean logic, for each query literally the whole collection (30,000+ records) will be ranked. Although it is highly unlikely for records at the bottom of the ranking to pop up to the top of the list during the second retrieval, the number of records to be screened during the second retrieval may well exceed, say, 30. [I should discuss this issue with Professor Larson and see the hard figures attached to each ranked record. Probability of relevance figures might tell us something about the likelihood of bottom-ranked records moving up. Depending on this, we can determine a threshold for the number of records to be monitored. Theoretically, it is still possible to come up with search queries (e.g., "libraries") that will retrieve thousands of records in CHESHIRE with probability of relevance figures very close to each other. Another example would be a completely failed search in which the user retrieves nothing similar to what he or she expects, but nevertheless judges an off-the-track record relevant just to give it a shot. With such feedback, the system might bring up some records from the bottom of the ranked list which have nothing to do with the first retrieval. But these are fairly stretched examples, and they may not represent the way things work in CHESHIRE, I hope.]
Second, the relevance feedback algorithm would have to be changed so that the top ranked records remain fairly stable, in order to trace improvements in the rankings. In other words, the user would be presented with some of the same records after the relevance feedback process. Yet this is not desirable at all because (a) a user may not want to judge a record he or she has already seen; (b) the actual relevance feedback effect cannot be determined easily because of the "noise" created by already-seen records; (c) the relevance feedback algorithm itself would have to be changed; and (d) the calculation of precision/recall ratios for the second retrievals will be complicated and inflated by the confounding effect of records that were retrieved in the first place. Third, the relevance feedback process should bring records that are similar to ... [too bad that I can't remember what I was going to say here because I went back to fix something else in the previous paragraphs!!]
7. Various statistical tests are intended to measure the significance of the average difference in retrieval effectiveness values between the two retrievals. Significance tests will measure the probability that "the two sets of values obtained from two separate runs are actually drawn from samples which have the same characteristics" (Ide, 1971, p. 344). The t test and the Wilcoxon signed rank test will be used for the evaluation of the findings.
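A sketch of how the paired tests might be run on the per-query effectiveness values; the use of the scipy library and the sample figures below are assumptions for illustration, not results:

# Paired significance tests on per-query precision for the initial retrieval
# and the retrieval after relevance feedback. The values are invented.
from scipy.stats import ttest_rel, wilcoxon

precision_first  = [0.40, 0.25, 0.60, 0.10, 0.50]
precision_second = [0.55, 0.30, 0.65, 0.20, 0.45]

t_stat, t_p = ttest_rel(precision_second, precision_first)
w_stat, w_p = wilcoxon(precision_second, precision_first)

print(f"paired t test:        t = {t_stat:.3f}, p = {t_p:.3f}")
print(f"Wilcoxon signed rank: W = {w_stat:.3f}, p = {w_p:.3f}")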
The correlation between search failures and the match between users' natural language query terms and the titles of documents and LCSH will also be examined.
Some qualitative techniques can be used during the intellectual analysis of search failures in CHESHIRE. [It is embarrassing to admit but I don't have a clue (read "scud") about these qualitative techniques.]
Literature Review
To be completed.
Definitions and Limitations
Definitions of various terms such as "search failure," "recall," "precision," "relevance feedback," and "natural language-based queries" will be given in this section. Limitations of the study in terms of the number of search queries, the relevance feedback judgments, and the techniques to be used in determining the improvement due to the relevance feedback process will be explained in detail.
Timetable
A chart showing the beginning and ending dates of major tasks and brief description of each activity will be prepared.
Select Bibliography
Bates, Marcia J. 1972. "Factors Affecting Subject Catalog Search Success," Unpublished Doctoral Dissertation. University of California, Berkeley.
___________. 1977a. "Factors Affecting Subject Catalog Search Success," Journal of the American Society for Information Science 28(3): 161-169.
___________. 1977b. "System Meets User: Problems in Matching Subject Search Terms," Information Processing and Management 13: 367-375.
__________. 1986. "Subject Access in Online Catalogs: a Design
Model," Journal of American Society for Information Science
37(6): 357-376.
___________. 1989a. "The Design of Browsing and Berrypicking Techniques for the Online Search Interface," Online Review 13(5): 407-424.
__________. 1989b. "Rethinking Subject Cataloging in the Online Environment," Library Resources and Technical Services 33(4): 400-412.
Besant, Larry. 1982. "Early Survey Findings: Users of Public Online Catalogs Want Sophisticated Subject Access," American Libraries 13: 160.
Blazek, Ron and Dania Bilal. 1988. "Problems with OPAC: a Case Study of an Academic Research Library," RQ 28:169-178.
Borgman, Christine L. 1986. "Why are Online Catalogs Hard to Use? Lessons Learned from Information-Retrieval Studies," Journal of the American Society for Information Science 37(6): 387-400.
Borgman, Christine L. "End User Behavior on an Online Information Retrieval System: A Computer Monitoring Study," in: International Conference on Research and Development in Information Retrieval. 6th Annual International ACM SIGIR Conference. Edited by Jennifer J. Kuehn. New York: ACM, 1983. pp.162-176.
Buckley, Chris. 1987. Implementation of the SMART Information Retrieval System. Ithaca, N.Y.: Cornell University, Department of Computer Science.
Byrne, Alex and Mary Micco. 1988. "Improving OPAC Subject Access: The ADFA Experiment," College & Research Libraries 49(5): 432-441.
Campbell, Robert L. 1990. "Developmental Scenario Analysis of Smalltalk Programming," in Empowering People: CHI '90 Conference Proceedings, Seattle, Washington, April 1-5, 1990. Edited by Jane Carrasco Chew and John Whiteside. New York: ACM, 1990, pp.269-276.
Carlyle, Allyson. 1989. "Matching LCSH and User Vocabulary in the Library Catalog," Cataloging & Classification Quarterly 10(1/2): 37-63, 1989.
Chan, Lois Mai. 1986a. Library of Congress Subject Headings: Principles and Application. 2nd edition. Littleton, CO: Libraries Unlimited, Inc.
_________. 1986b. Improving LCSH for Use in Online Catalogs. Littleton, CO: Libraries Unlimited, Inc.
Cochrane, Pauline A. and Karen Markey. 1983. "Catalog Use Studies - Since the Introduction of Online Interactive Catalogs: Impact on Design for Subject Access," Library and Information Science Research 5(4): 337-363.
Cooper, Michael D. 1991. "Failure Time Analysis of Office System Use," Journal of the American Society for Information Science (to appear in 1991).
Cooper, Michael D. and Cristina Campbell. 1989. "An Analysis of User Designated Ineffective MEDLINE Searches," Berkeley, CA: University of California at Berkeley, 1989.
Dale, Doris Cruger. 1989. "Subject Access in Online Catalogs: An Overview Bibliography," Cataloging & Classification Quarterly 10(1/2): 225-251, 1989.
Flanagan, John C. 1954. "The Critical Incident Technique," Psychological Bulletin 51(4): 327-358, July 1954.
Frost, Carolyn O. 1987a. "Faculty Use of Subject Searching in Card and Online Catalogs," Journal of Academic Librarianship 13(2): 86-92.
Frost, Carolyn O. 1989. "Title Words as Entry Vocabulary to LCSH: Correlation between Assigned LCSH Terms and Derived Terms From Titles in Bibliographic Records with Implications for Subject Access in Online Catalogs," Cataloging & Classification Quarterly 10(1/2): 165-179, 1989.
Frost, Carolyn O. and Bonnie A. Dede, 1988. "Subject Heading Compatibility between LCSH and Catalog Files of a Large Research Library: a Suggested Model for Analysis," Information Technology and Libraries 7: 292-299, September 1988.
Frost, Carolyn O. 1987b. "Subject Searching in an Online Catalog," Information Technology and Libraries 6: 61-63.
Gerhan, David R. 1989. "LCSH in vivo: Subject Searching Performance and Strategy in the OPAC Era," Journal of Academic Librarianship 15(2): 83-89.
Hancock, Micheline. 1987. "Subject Searching Behaviour at the Library Catalogue and at the Shelves: Implications for Online Interactive Catalogues," Journal of Documentation 43(4): 303-321.
Hartley, R.J. 1988. "Research in Subject Access: Anticipating the User," Catalogue and Index (88): 1, 3-7.
Hildreth, Charles R. 1989. Intelligent Interfaces and Retrieval Methods for Subject Searching in Bibliographic Retrieval Systems. Washington, DC: Cataloging Distribution Service, Library of Congress.
Holley, Robert P. 1989. "Subject Access in the Online Catalog," Cataloging & Classification Quarterly 10(1/2): 3-8, 1989.
Hays, W.L. and R.L. Winkler. 1970. Statistics: Probability, Inference and Decision. Vol. II. New York: Holt, Rinehart and Winston, 1970. (pp.236-8 for Wilcoxon sign tests in IR research.)
Ide, E. 1971. "New Experiments in Relevance Feedback," in Salton, Gerard, ed. The SMART Retrieval System: Experiments in Automatic Document Processing. Englewood Cliffs, N.J.: Prentice-Hall. pp. 337-354.
Kaske, Neal K. 1988a. "A Comparative Study of Subject Searching in an OPAC Among Branch Libraries of a University Library System," Information Technology and Libraries 7: 359-372.
___________. 1988b. "The Variability and Intensity over Time of Subject Searching in an Online Public Access Catalog," Information Technology and Libraries 7: 273-287.
Kaske, Neal K. and Sanders, Nancy P. 1980. "Online Subject Access: the Human Side of the Problem," RQ 20(1): 52-58.
__________. 1983. A Comprehensive Study of Online Public Access Catalogs: an Overview and Application of Findings. Dublin, OH: OCLC. (OCLC Research Report # OCLC/OPR/RR-83-4)
Kern-Simirenko, Cheryl. 1983. "OPAC User Logs: Implications for Bibliographic Instruction," Library Hi Tech 1: ??, Winter 1983.
Kinsella, Janet and Philip Bryant. 1987. "Online Public Access Catalog Research in the United Kingdom: An Overview," Library Trends 35: ??.
Klugman, Simone. 1989. "Failures in Subject Retrieval," Cataloging & Classification Quarterly 10(1/2): 9-35, 1989.
Kretzschmar, J.G. 1987. "Two Examples of Partly Failing Information Systems," in: Wise, John A. and Anthony Debons, eds. Information Systems: Failure Analysis. Berlin: Springer Verlag, 1987.
Lancaster, F.W. 1968. Evaluation of the MEDLARS Demand Search Service. Washington, DC: US Department of Health, Education and Welfare, 1968.
Lancaster, F.W. 1969. "MEDLARS: Report on the Evaluation of Its Operating Efficiency," American Documentation 20(2): 119-142, April 1969.
Larson, Ray R. 1986. "Workload Characteristics and Computer System Utilization in Online Library Catalogs." Doctoral Dissertation, University of California at Berkeley, 1986. (University Microfilms No. 8624828)
___________. 1989. "Managing Information Overload in Online Catalog Subject Searching,"In: ASIS '89 Proceedings of the 52nd ASIS Annual Meeting Washington, DC, October 30-November 2, 1989. Ed. by Jeffrey Katzer et al. Medford, NJ: Learned Information. pp. 129-135.
___________. 1991a. "Classification Clustering, Probabilistic Information Retrieval and the Online Catalog," Library Quarterly 61, April 1991. [in press]
___________. 1991b. "The Decline of Subject Searching: Long Term Trends and Patterns of Index Use in an Online Catalog," Journal of the American Society for Information Science (submitted for publication, 1991).
___________. 1991c. "Evaluation of Advanced Information Retrieval Techniques in an Experimental Online Catalog," Journal of the American Society for Information Science (submitted for publication, 1991).
Larson, Ray R. and V. Graham. 1983. "Monitoring and Evaluating MELVYL," Information Technology and Libraries 2: 93-104.
Lawrence, Gary S. 1985. "System Features for Subject Access in the Online Catalog," Library Resources and Technical Services 29(1): 16-33.
Lawrence, Gary S., V. Graham and H. Presley. 1984. "University of California Users Look at MELVYL: Results of a Survey of Users of the University of California Prototype Online Union Catalog," Advances in Library Administration 3: 85-208.
Lewis, David. 1987. "Research on the Use of Online Catalogs and Its Implications for Library Practice," Journal of Academic Librarianship 13(3): 152-157.
Markey, Karen. 1980. Analytical Review of Catalog Use Studies. Dublin, OH: OCLC, 1980. (OCLC Research Report # OCLC/OPR/RR-80/2.)
_________. 1983. The Process of Subject Searching in the Library Catalog: Final Report of the Subject Access Research Project. Dublin, OH: OCLC.
_________. 1984. Subject Searching in Library Catalogs: Before
and After the Introduction of Online Catalogs. Dublin, OH: OCLC.
_________. 1985. "Subject Searching Experiences and Needs of Online Catalog Users: Implications for Library Classification," Library Resources and Technical Services 29: 34-51.
_________. 1986. "Users and the Online Catalog: Subject Access Problems," in Matthews, J.R. (ed.) The Impact of Online Catalogs pp.35-69. New York: Neal-Schuman, 1986.
_________. 1988. "Integrating the Machine-Readable LCSH into Online Catalogs," Information Technology and Libraries 7: 299-312.
Matthews, Joseph R. 1982. A Study of Six Public Access Catalogs: a Final Report Submitted to the Council on Library Resources, Inc. Grass Valley, CA: J. Matthews and Assoc., Inc.
Matthews, Joseph, Gary S. Lawrence and Douglas Ferguson (eds.) 1983. Using Online Catalogs: a Nationwide Survey. New York: Neal-Schuman.
Mitev, N.N., G.M. Venner and S. Walker. 1985. Designing an Online Public Access Catalogue. (Library and Information Research Report 39) London: British Library, 1985.
Naharoni, A. 1980. "An Investigation of W.T. Grant as Information System Failure," Ph.D. Dissertation, University of Pittsburgh, Pittsburgh, PA, 1980.
Nielsen, Brian. 1986. "What They Say They Do and What They Do: Assessing Online Catalog Use Instruction Through Transaction Monitoring," Information Technology and Libraries 5: 28-34, March 1986.
Norman, D.A. 1980. Errors in Human Performance. San Diego, CA: University of California, 1980.
Norman, D.A. 1983. "Some Observations on Mental Models," in: Stevens, A.L. and D. Gentner, eds. Mental Models. Hillsdale, NJ: Erlbaum, 1983.
Pease, Sue and Gouke, Mary Noel. 1982. "Patterns of Use in an Online Catalog and a Card Catalog," College and Research Libraries 43(4): 279-291.
Penniman, W.D. and W.D. Dominic. 1980. "Monitoring and Evaluation of On-line Information System Usage," Information Processing & Management 16(1): 17-35, 1980.
Penniman, W. David. 1975. "A Stochastic Process Analysis of On-line User Behavior," Information Revolution: Proceedings of the 38th ASIS Annual Meeting, Boston, Massachusetts, October 26-30, 1975. Volume 12. Washington, DC: ASIS, 1975. pp.147-148.
Peters, Thomas A. 1989. "When Smart People Fail: An Analysis of the Transaction Log of an Online Public Access Catalog," Journal of Academic Librarianship 15(5): 267-273, November 1989.
Reason, J. and K. Mycielska. 1982. Absent-Minded? The Psychology of Mental Lapses and Everyday Errors. Englewood Cliffs, NJ: Prentice Hall, 1982.
Rocchio, Jr., J.J. 1971. "Relevance Feedback in Information Retrieval," in Salton, Gerard, ed. The SMART Retrieval System: Experiments in Automatic Document Processing. Englewood Cliffs, N.J.: Prentice-Hall. pp. 313-323.
Salton, G. 1971. "Relevance Feedback and the Optimization of Retrieval Effectiveness," in Salton, Gerard, ed. The SMART Retrieval System: Experiments in Automatic Document Processing. Englewood Cliffs, N.J.: Prentice-Hall. pp. 324-336.
Salton, Gerard, ed. 1971. The SMART Retrieval System: Experiments in Automatic Document Processing. Englewood Cliffs, N.J.: Prentice-Hall.
Salton, Gerard and Chris Buckley. 1990. "Improving Retrieval Performance by Relevance Feedback," Journal of the American Society for Information Science 41(4): 288-297.
Shepherd, Michael A. 1981. "Text Passage Retrieval Based on Colon Classification: Retrieval Performance," Journal of Documentation 37(1): 25-35, March 1981.
Shepherd, Michael A. 1983. "Text Retrieval Based on Colon Classification: Failure Analysis," Canadian Journal of Information Science 8: 75-82, June 1983.
Svenonius, Elaine. 1986. "Unanswered Questions in Controlled Vocabularies," Journal of the American Society for Information Science.
Svenonius, Elaine and H. P. Schmierer. 1977. "Current Issues in the Subject Control of Information," Library Quarterly 47: 326-346.
Swanson, Don R. 1977. "Information Retrieval as a Trial-and-Error Process," Library Quarterly 47(2): 128-148.
Tague, J. and J. Farradane. 1978. "Estimation and Reliability of Retrieval Effectiveness Measures," Information Processing and Management 14: 1-16, 1978.
Tolle, John E. 1983. Current Utilization of Online Catalogs: Transaction Log Analysis. Dublin, OH: OCLC, 1983.
Tolle, John E. 1983. "Transaction Log Analysis: Online Catalogs," in: International Conference on Research and Development in Information Retrieval. 6th Annual International ACM SIGIR Conference. Edited by Jennifer J. Kuehn. New York: ACM, 1983. pp.147-160.
Users Look at Online Catalogs: Results of a National Survey of Users and Non-Users of Online Public Access Catalogs. 1982. Berkeley, CA: The University of California.
University of California Users Look at MELVYL: Results of a Survey of Users of the University of California Prototype Online Union Catalog. 1983. Berkeley, CA: The University of California.
Van der Veer, Gerrit C. 1987. "Mental Models and Failures in Human-Machine Systems," in: Wise, John A. and Anthony Debons, eds. Information Systems: Failure Analysis. Berlin: Springer Verlag, 1987.
Van Pulis, N. and L.E. Ludy. 1988. "Subject Searching in an Online Catalog with Authority Control," College & Research Libraries 49: 523-533, 1988.
Van Rijsbergen, C.J. 1979. Information Retrieval. 2nd ed. London: Butterworths.
Walker, Stephen and R. de Vere. 1990. Improving Subject Retrieval in Online Catalogues. 2: Relevance Feedback and Query Expansion. (British Library Research Paper, no. 72) London: British Library, 1990.
Wilson, Patrick. 1983. "The Catalog as Access Mechanism: Background and Concepts," Library Resources and Technical Services 27(1): 4-17.
Wilson, Sandra R. and Norma Starr-Schneidkraut. 1989. Use of the Critical Incident Technique to Evaluate the Impact of MEDLINE. (Final Report) Draft August 11, 1989. Contract No. N01-LM-8-3529. Bethesda, MD: National Library of Medicine.
Wise, John A. and Anthony Debons, eds. 1987. Information Systems: Failure Analysis. Berlin: Springer Verlag, 1987.