
By Christine Frank and Bill Karnoscak


This paper presents information on the design, results and implications of a survey conducted by the Educational Media and Technologies Section (EMTS) of the Medical Library Association (MLA). Conducted between 1996 and 1997, the survey examined learning resource center (LRC) practices at academic health sciences centers and hospitals.

Background
There have been several attempts to gather academic health sciences LRC data over the past 20 years. The Association of Academic Health Sciences Libraries (AAHSL) began providing basic information about non-print formats in its first Annual Medical School Library Survey of 1977-78 (see Reference 1). That survey reported counts of AV items and titles, and over the next decade LRC-specific data expanded to include:

  1. Numbers of AV program, AV serial, and computer software titles
  2. AV collection development budget
  3. AV/microcomputer professional, non-professional, and hourly staff.
In 1991, however, AAHSL stopped breaking out a number of specialized data categories in the interest of streamlining data collection for member libraries. Although this was a positive move in many respects, it eliminated the few benchmarking categories specific to academic health sciences LRCs.

The Academic Libraries Survey of the Integrated Postsecondary Education Data System (IPEDS) (see Reference 2) reports on budgets, titles, and volumes of various non-print and computer formats. In most cases, however, data are available only for a handful of independent health sciences university libraries, such as the Library of Rush University in Chicago and the Scott Memorial Library of Thomas Jefferson University in Philadelphia. For the most part, IPEDS treats the health sciences library as a branch of the main campus library; its data are merged with those of the university library system and thus "lost."

In the early 1990s there were several efforts by health sciences LRC librarians to gather more targeted data regarding services offered and physical facilities (see References 3 and 4). Faced with having to periodically scramble for comparative LRC data, the authors urged EMTS to consider sponsoring a national survey that would yield an in-depth look at health science LRC operations.

Survey Methodology
An EMTS-sponsored taskforce was appointed to develop a survey instrument. Between 1995 and 1996, the taskforce developed a series of questions, which were pilot-tested at ten EMTS members' libraries during the summer of 1996. In November of that year, a revised survey was sent to a merged list of 213 EMTS and AAHSL member libraries. Follow-up surveys were sent to non-respondents in February 1997. Survey items included questions about the following categories: institutional profile, collection development budget, other departmental expenditures, personnel, collection size, facilities usage, information services, and services offered.

Results
The original survey and detailed Excel-formatted results are available on the EMTS website.

Institutional Profiles
Various questions were asked to help categorize the respondents (Institutional Profile). Ninety-six sites replied, a response rate of 45%. The vast majority of respondents (74%) represented AAHSL libraries. The AAHSL sites were all affiliated with a medical school and, in many cases, with additional educational programs. The remaining sites included ten independent medical school LRCs, six hospitals, two Area Health Education Centers (AHECs), and a handful of other specialty LRCs. Eighty-four percent of the respondents were affiliated with a library; the remaining 16% were independent units. Thirty-five percent of the respondents were the sole provider of LRC services; the remaining 65% had other LRC units within their institution.

The survey was intended to serve the entire EMTS membership. This proved problematic when examining staffing resources in hospital LRCs, as most hospitals did not have separate full-time staff devoted solely to LRCs. There were also too few specialty (non-medical school) LRCs in the aggregate to yield meaningful comparative data among them.

Collection Development Expenditures
Respondents were asked to separate their expenditures by format type: audiovisuals, computer-assisted instruction (CAI), and computer applications software (Expenditures). The average amount either spent or budgeted by each site for collection development was $18,849. Because of the way they tracked expenditures, not all sites could break down their budgets by the format types prescribed in the survey. For the 71 sites that could, the average amounts spent by format type were: audiovisuals, $9,648; CAI, $9,358; computer applications, $3,529. Answers ranged from zero to $80,700. In several cases no collection development expenditures existed because academic departments paid for AVs and software; although materials were being provided, the LRCs were not actually paying for them.
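One consequence of the partial breakdowns noted above is that the per-format averages are computed over a different (smaller) N than the overall average, so the two need not agree. A minimal sketch of that calculation, using hypothetical figures rather than the survey's actual data:

```python
# Hypothetical expenditure records: None marks a site that reported only
# a total and could not break its budget down by format.
sites = [
    {"total": 18000, "av": 9000, "cai": 6000, "apps": 3000},
    {"total": 25000, "av": None, "cai": None, "apps": None},
    {"total": 12000, "av": 7000, "cai": 4000, "apps": 1000},
]

# The overall average uses every site that reported a total.
overall_avg = sum(s["total"] for s in sites) / len(sites)

# Per-format averages use only the sites with a breakdown (a smaller N),
# which is why they need not sum to the overall average.
breakdown = [s for s in sites if s["av"] is not None]
av_avg = sum(s["av"] for s in breakdown) / len(breakdown)
cai_avg = sum(s["cai"] for s in breakdown) / len(breakdown)
```

Here the overall average covers three sites while the per-format averages cover only two, mirroring the 71-of-96 split in the survey.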

Other Departmental Expenditures
The sites were queried about other expenditures in the areas of equipment, equipment leasing, total salaries, and recurring operating budget (Other Expenditures). The average amount spent or budgeted for equipment was $25,796 (N=76). The average total salary budget was $113,156 (N=64). Low response rates for equipment leasing (N=3) and recurring operating budget (N=39) rendered the comparative data insufficient for further analysis.

Personnel
Questions were asked about the performance of LRC functions by LRC staff or by staff located in other units (Personnel). The following pattern emerged for the average number of FTE LRC-based staff: professional staff, 1.4; non-professional staff, 1.8; student/part-time staff, 1.8; total staff, 5.

These figures alone do not tell the complete staffing story. To the question, "Are some LRC functions performed by other units within the Library?" 81% of library-affiliated LRCs answered "yes." This implies that in many institutions the aggregate activity devoted to LRC operations extends beyond the four walls of the physical LRC.

The follow-up question was "If yes, write FTE count of staff in other units under Professional and/or Non-Professional in appropriate slots: (If only a percentage of their time is spent in LRC activities, please note as such)." Forty of the 66 responses to this question were discarded: their staff count totals were so large that it was assumed these respondents had submitted actual head counts rather than the percentage of their staffs' time expended on LRC-related activities.
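The screening described above amounts to a plausibility filter on reported FTE totals. A sketch of that idea; the numeric cutoff here is purely illustrative, not the criterion the taskforce actually applied:

```python
# Hypothetical screening of follow-up responses: sites were asked for the
# FTE *fraction* of other-unit staff time spent on LRC work, but some
# apparently reported whole head counts instead.  The cutoff below is
# illustrative only.
def screen_fte(responses, cutoff=5.0):
    """Split responses into plausible FTE fractions and implausible totals."""
    kept, discarded = [], []
    for fte_total in responses:
        (kept if fte_total <= cutoff else discarded).append(fte_total)
    return kept, discarded

kept, discarded = screen_fte([0.5, 1.25, 12.0, 2.0, 30.0])
# kept -> [0.5, 1.25, 2.0]; discarded -> [12.0, 30.0]
```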

Because of the difficulties with this question, only the frequency of "supplemental LRC support" within specific library departments was reported. This practice occurred most frequently in Cataloging (54), followed by Systems (44), Acquisitions (42), Circulation (35), Interlibrary Loan (35), and lastly Reference (30). Notably, three academic library respondents had no dedicated LRC staff at all; LRC functions were integrated entirely into library staff duties.

Collection Size
Collection size is the most traditional benchmark for LRCs. The average collection size was 2,051 titles. The collection size percentages by format were as follows:

  • AV 91%
  • CAIs & Applications 4.75%
  • Subscriptions 0.75%
  • Models 0.50%
  • Realia 0.25%
  • Other* 2.75%

*Answers for this category are detailed in the Collection Size spreadsheet on the EMTS website.

It is not surprising that AV formats still comprise the predominant format in LRC collections: they have formed the backbone of these collections far longer than the more recently emerging electronic formats. Because the survey did not ask how many items were added to collections within the previous fiscal year, it was not possible to track the shift from traditional AV formats to emerging electronic formats.

Facilities
Institutions were queried about their hardware resources for public use: IBM-compatible computers, Macintosh computers, videocassette players, videodisc players, audiocassette players, slide/tape or slide viewers, overhead projectors, laptop computers, and LCD panels. By the time of the survey, however, the most prevalent facilities benchmark had become the number of public workstations, so a combined PC/Macintosh count was the only equipment figure compared across institutions. The average number of workstations per site was 36.

Other notable findings about facilities included the number of group viewing rooms, which ranged from 0 to 14 with an average of 3.5. One third of sites reported 24-hour access to workstations. Questions about networking and printing facilities were deemed to have only short-term significance.

Usage
Respondents were asked which usage measurement methods they employed (Usage). Usage tracking was reported as follows:

  • Audiovisual usage:
    1. AV software transactions (57%): This measurement remains the closest match to traditional print circulation and was employed by the largest number of sites.
    2. AV software "usage" (19%): We defined usage as transactions plus additional viewers. This measure most accurately reflects AV circulation because audiovisuals are often used by groups of people, although the initial transaction is usually recorded as one individual checkout.
    3. AV software outside the LRC (20%): This can be an important figure because at many institutions, AVs are viewed by groups offsite. A significant segment of one's client base is lost if external counts are not maintained.
    4. AV hardware transactions (30%).
    5. LRC gate count (22%).
  • Computer usage:
    1. Hours computers are used (18%).
    2. Number of times computers are logged onto (25%).
    3. CAI software transactions (16%).

The most surprising finding was that a number of LRCs did not measure their audiovisual (N=37) or computer (N=68) activity at all.
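The "usage" measure described earlier (transactions plus additional viewers) reduces to a simple tally over a check-out log. A minimal sketch; the record layout and titles are hypothetical:

```python
# Hypothetical check-out log: one record per transaction, noting any
# additional viewers present at a group viewing.
checkouts = [
    {"title": "Cardiac Auscultation", "extra_viewers": 3},
    {"title": "Suturing Basics", "extra_viewers": 0},
    {"title": "Cardiac Auscultation", "extra_viewers": 5},
]

# Traditional circulation count: one per check-out.
transactions = len(checkouts)

# "Usage" as defined in the survey: each transaction plus its added viewers.
usage = sum(1 + c["extra_viewers"] for c in checkouts)
```

With this log, three transactions represent eleven actual viewers, which is exactly the gap between the transaction and usage measures that the survey distinguished.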

Information Services
The question about information services revealed that only 50% of the respondents kept records of reference questions. The average number of reference questions answered by that 50% was 11,612. The average number of reference transactions reported by AAHSL libraries for the equivalent time frame was 30,711. Given these figures, one could infer that LRC reference activity makes up more than a third of a library's total information services effort. This is a significant expenditure of energy that goes unrecorded at half the sites surveyed.

Checklist of Services
The final portion of the survey presented respondents with an array of services and asked them to check off which ones they offered (Checklist of Services). Although there is no such thing as a typical LRC, a profile of health sciences LRCs might be developed by grouping the services provided in order from most to least often reported.

Conclusions
As a result of this survey, more detailed information about academic health sciences LRCs exists than ever before. Although the survey population is a subset of academic learning resource centers, the findings might have implications for the broader population of academic LRCs. In future surveys of the field, we recommend focusing on the data that can best assist individual sites in benchmarking productivity. Data we recommend collecting for productivity inputs and outputs include: primary users, hours, staff size, budget, total collection size, number of workstations, collection use, computer use, and information services activity. We would also include services from the checklist that represent staff activity measurable by clients' use, such as interlibrary loan, advance reservation of materials, classes, and test grading.

As underscored throughout this paper, LRCs gather this information in disparate ways, if at all. Only one usage output measurement, AV software transactions, was used at a majority of sites. Only 50% of sites reported their reference activity. Categories such as LRC budget, collection size, and staff size, which have been tracked by AAHSL for nearly 20 years, are only productivity INPUTS. LRC circulation and reference activity (as well as other services), the OUTPUTS, must be tracked for productivity benchmarking to be fully realized.

In addition to the emphasis on gathering output data, it is essential that both inputs and outputs be counted in a standardized fashion so that they can be compared accurately across institutions. These are some strategies that might effect a unified approach to data collection:

  1. Discussion among experts -- LRC librarians -- and a consensus about the best definition for each data category, especially in light of the problems that have surfaced about usage, staffing and budget categories. This inter-institutional exchange of ideas could take place on electronic discussion lists and face-to-face at professional meetings.
  2. Intra-institutional discussion between LRC librarians and their supervisors about the value of spending staff time in more detailed data collection.
  3. Employment of software programs that capture computer use by clients. For example, CentaMeter is a monitoring program that provides several ways of counting usage, including hourly usage and the number of events (programs launched) for each software title. Given the tracking options available in computer monitoring software, it is imperative that LRC librarians agree on a standardized method for reporting computer use (hourly usage, number of events, or both).
  4. Advance warning to LRC librarians about what data will be collected and how they will be defined so that sites have the option to align (or untangle) their record keeping to gather at least a fiscal year's worth of data before a survey is distributed.
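The standardized reporting urged in strategy 3 could derive both candidate measures (hourly usage and number of launch events) from the same launch log. A sketch under an assumed log format of (title, session minutes) pairs; no particular monitoring product's output is implied:

```python
from collections import defaultdict

# Hypothetical monitor log: (software title, session length in minutes).
log = [
    ("MEDLINE tutorial", 42),
    ("Anatomy CAI", 15),
    ("MEDLINE tutorial", 30),
]

events = defaultdict(int)   # number of launches per title
minutes = defaultdict(int)  # total connect time per title

for title, session_minutes in log:
    events[title] += 1
    minutes[title] += session_minutes

# Report hourly usage alongside event counts, per title.
hours = {title: m / 60 for title, m in minutes.items()}
```

Reporting both figures per title, rather than one or the other, would let sites compare whichever measure a future survey settles on.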

Of the four strategies proposed, the use of computer monitoring programs is the key component in implementing a unified approach to reporting output data, because computer usage will become an increasingly important output activity for LRCs. It is also highly probable that gathering computer usage data will become easier as the concept of "collection" changes. Instead of circulating items from a collection's stacks, more programs will likely be housed on computers, networked either throughout the institution or via the Web.

Web technology and the issue of access vs. ownership have already made an impact on other library resources, such as e-texts and e-journals. One can expect the same to happen to LRC resources. Audiovisuals are morphing into computer multimedia. We expect soon to see an increase in live-action and animated instructional programs leased over the Web instead of being purchased and housed on a shelf or resident on a network server. One current example is the Integrated Medical Curriculum (imcm) Web site by the Gold Standard Multimedia Network. A number of Web use monitoring programs already exist, including Web Trends and Site Server Express, that might be employed for tracking program usage once our collections are "housed" on the Web.

Networking among LRC librarians is the unifying theme as we continue to struggle with reporting meaningful benchmarking data. Sharing information about the computer monitoring programs used at our respective institutions is one example of how an ongoing public discussion could uncover solutions to data collection in the technology-driven context of health sciences LRCs.

Amid the economic pressures of shrinking budgets and staff cutbacks, LRC efforts must be tracked in order to prove productivity and justify budgets. To borrow from the patient record-keeping model, "If you didn't chart it, you didn't do it." Not only must LRCs show they "did it," they must also demonstrate their efficiency in comparison with other LRCs. This requires that data be recorded uniformly across institutions, so that sites can compare apples to apples and bytes to bytes.

References
1. Association of Academic Health Sciences Library Directors. Annual Statistics of Medical School Libraries in the United States and Canada, 1977-1978. Houston: Houston Academy of Medicine-Texas Medical Center Library, 1978.

2. National Center for Education Statistics. "Academic Libraries Data." IPEDS 1990-91 Academic Libraries. Last updated 2 February 1994 (accessed 26 August 1999).

3. Futrelle, Diane F. "Results of the Learning Resources Services Questionnaire." Paper presented at the annual meeting of the Medical Library Association, Detroit, Michigan, 5 June 1991.

4. Anderson, P.F. "Survey: LRCs/Computer Labs." Medlib-L posting, 15 October 1992 (accessed 26 August 1999).

Acknowledgments
The authors gratefully acknowledge the support and expertise of the following individuals: Alexandra Dimitroff, University of Wisconsin at Milwaukee, for her consultation on effective survey design; Janis Brown, University of Southern California, and Dave Piper, University of Arizona, for their extensive contributions as EMTS taskforce members; Bill Fleming, Rush University, Chicago, Illinois, for automating the survey's data collection process and formatting the results; Trudy Gardner, Rush University, for demonstrating her belief in the importance of the survey by dedicating staff time and facilities of the Library of Rush University to the effort; and Trudy and Bill both for their input on earlier incarnations of this paper.

Christine Frank is Associate Director for Information Services, Library of Rush University, Rush-Presbyterian-St. Luke's Medical Center, Chicago, Illinois. Bill Karnoscak is Manager, McCormick Educational Technology Center, Library of Rush University.

Copyright 1999 Christine Frank, Bill Karnoscak. All rights reserved. Commercial use requires permission of the author and the editor of this journal.

The author and editors do not maintain links to World Wide Web resources.

ISSN 1069-6792
Revised: 10/5/99