Comparing Apples to Oranges? Community College and University Cooperation in Virtual Reference
Sara Memmott, Eastern Michigan University,
Ypsilanti, MI
Mary Kickham-Samy, Macomb Community College,
Warren, MI
Sandra C. McCarthy, Washtenaw Community College,
Ann Arbor, MI
Christine Tobias, Michigan State University Libraries,
East Lansing, MI
Ann Walaskay, Oakland Community College,
Farmington Hills, MI
Arlene Weismantel, Michigan State University Libraries, East Lansing, MI
Virtual reference cooperatives are designed so that librarians from a variety of institutions can
help library users from any of the member institutions with their research needs. However, some
librarians are concerned that they may not have the expertise or resources to answer questions
from institutions that they perceive to differ significantly from their own.
Research Help Now <http://www.researchhelpnow.org> is a virtual reference cooperative of 14
community college and university libraries in Michigan. In this study, Research Help Now
librarians analyzed, and then classified by type, a sample of questions submitted from March to
May 2009, in order to determine the similarities and differences between questions asked by
community college and university users of the service.
Research Questions
Do the questions asked by community college and university users of a virtual reference service
differ significantly? Specifically:
1. How frequently do these library users ask questions that require local knowledge to
answer?
2. Do university library users ask more complex questions than community college library
users?
3. Do their questions require access to specialized knowledge and skills?
Librarians are sometimes apprehensive about answering questions submitted by students of other
institutions, or about allowing librarians from other institutions to answer questions submitted
by users of their own institution. Knowledge of the types of questions being asked allows
librarians to be better prepared for and more confident about virtual reference collaboration.
Image: "Apples & Oranges – They Don't Compare" by Mike Johnson, TheBusyBrain.com, Creative
Commons Attribution License. http://www.flickr.com/photos/thebusybrain/2492945625/
Classifying Questions Asked
Categorizing the questions asked is the first step in comparing questions received from these two
groups of library users. Most studies of online reference attempt to categorize questions in some
way, and a number of these base their classification on Katz's categories for reference questions
(Luo, 2007). Warner (2001) offers an alternate model for classifying reference statistics. This
model has been used in one previous reference study that included email as well as in-person
questions (Meserve, 2009). Studies have shown that the Warner model can be used consistently
among various library staff as a system for categorizing questions received (Henry & Neville,
2008; Neville & Henry, 2009). Warner uses four levels to categorize questions by the resources
and skill level required to answer a question:
• Level I: Non-resource-based. Questions that do not require a resource to answer.
• Level II: Skill-based. "How-to" questions that require a demonstration to answer and
might be addressed by a well-developed set of directions; the same question should
always get the same answer.
• Level III: Strategy-based. Questions that require the formulation of a strategy and the
selection of resources to locate an answer. May require individualized subject approaches.
• Level IV: Consultation. Usually longer encounters outside of regular desk duty. (No
questions in this study received this classification.)
The local nature of reference questions is also an important factor in virtual reference
cooperatives. Kwon's study of collaborative chat reference in a public library system (Kwon,
2007) distinguished between local and non-local questions. A local question involves circulation
or other information about a particular library; a non-local question does not require local
knowledge to answer.
A few previous studies have categorized questions by subject (Marsteller & Neuhaus, 2001;
Horowitz et al., 2005), providing an additional facet of classification for the study of users'
needs.
Research Design
The researchers analyzed a sample of 240 virtual reference chat transcripts collected from March
to May 2009: 120 questions from community college users and 120 from university users.
Questions were divided into sets of 60 transcripts. Each set was independently analyzed and
categorized by two librarians (one from a university, one from a community college). Next, the
two librarians reading each set identified and discussed their differences. A third librarian settled
discrepancies when necessary.
Questions were classified by:
1) Level (Difficulty)
2) Local or Non-local
3) Subject
1) Question Level/Difficulty
This facet of categorization classifies how difficult or complex a question may be for a librarian
to answer, based on the resources or skills required to answer the question. Definitions used
were taken from Warner, "A new classification for reference statistics."
Level I: Non-resource-based:
Questions that do not require a resource to answer.
Examples from this study:
o "what is the fine when you have an over due book"
o "Is there any place to fax something here in the main lib?"
o "how long does it take to get a book out of storage?"
o "Can i check out a book if i am not enrolled in the spring semester?"
Level II: Skill-based:
Questions that require a demonstration to answer (i.e., "how-to" questions that might be
answered by a well-developed set of directions; the same question should always get the
same answer).
Examples from this study:
o "how do i look for e-books"
o "How do I use inter-library loan?"
o "how can I search and borrow a DVD from wcc lib?"
o "Hi... can I access JSTOR articles from my computer through the library's
website, or do I have to come in?"
Level III: Strategy-based
Questions that require the formulation of a strategy to locate an answer and require
selection of resources. May require individualized subject approaches.
Examples from this study:
o "i need help finding academic articles about developmental math classes and their
controversies"
o "I am doing a research project on the media's coverage of the Vietnam War and
how it swayed public opinion of the war. I am looking for news coverage of the
Tet Offensive but I'm having a hard time finding broadcasts from 1968. Is there a
way to find broadcasts through the library?"
o "I am looking for information on family therapy models where would be the best
place to look? I have tried SW Abstracts and Pych Info"
Level IV: Consultation
Usually longer encounters outside of regular desk duty. May be for the selection of
curriculum materials. The librarian will often have to research recommendations or
prepare reports for consultation work.
(None of the questions in this study received this classification.)
2) Local or Non-local
Local: questions that involve circulation, policies, or other information about a particular
library or institution.
Examples from this study:
o "Say, is the Orchard Ridge Campus a Wi-fi hot-spot? I figure a reference librarian
would know...."
o "How many books can you check out from the library at a time?"
o "is there a tutor for photoshop ??"
Non-local: a question that is not primarily about local policies. Research questions were
categorized as non-local.
Examples from this study:
o "i need information on how green technology can be an economic stimulus"
o "What would be the best database to find information on Assisted Suicide"
o "I was just wondering if there were any journals articles on Greek Mythology in
the library or on the web that you might know of?"
3) Subject
The researchers were also interested in the subject areas of research questions, and the number of
questions that required subject knowledge compared to the number that focused on policies,
procedures, or general research skills. Questions that were not about a subject were categorized
as "None." For example, a known item search for a book was categorized as "None" even if the
subject of the book was apparent, because the question did not require or assume any familiarity
with the subject.
The subject categories used:
• None
• Agriculture
• Applied Technologies
• Art & Architecture
• Business & Management
• Education
• Engineering
• Health Sciences
• Humanities
• Social Sciences
• Sciences & Mathematics
Results
1) Level of Difficulty
Patrons Level I Level II Level III
Community College 24% 45% 31%
University 26% 56% 18%
All 25% 50% 25%
2) Local vs Non-local Questions
Patrons Non-local Local
Community College 68% 27%
University 48% 53%
All 58% 42%
3) Subject of Questions
Patrons Subject-based No Subject
Community College 41% 59%
University 23% 77%
All 32% 68%
Conclusions
Community college library users seem to ask more Level III and subject-based questions than
university library users, suggesting that community college students may be more likely to need
help with complex tasks, such as the process of writing a research paper. University library users
seem to ask more Local, Level II, and non-subject-based questions. This suggests that university
students may be more likely to need help navigating complex institutions and procedures. These
results counter the expectation that university library users would ask a higher percentage of
complex questions than community college users.
While these two groups of library users may have slightly different needs, the difference is not
great enough to impede collaboration among libraries. Community college and university
librarians and library users are not apples and oranges; a better comparison may be apples and
pears. Understanding the differences and similarities between these two groups of library users
can help address the anxiety some librarians have when beginning virtual reference
collaboration. Community college librarians need not be apprehensive that many questions from
university library users will be complex research questions. University librarians can be
prepared to assist community college library users with questions about the research process.
Local knowledge (information about a local library's policies and procedures) is important
when answering questions in a virtual reference collaborative. This need can be addressed by
providing easy-to-find, clear, and complete information about library policies, both on library
web sites and within virtual reference systems (such as QuestionPoint's policy pages).
Anticipating these needs will assist not just librarians from collaborating institutions, but also
local librarians and library users themselves.
View a copy of the poster:
http://www.slideshare.net/smemmott
References
Arnold, J., & Kaske, N. (2005). Evaluating the quality of a chat service. Portal: Libraries and
the Academy, 5(2), 177-193.
Desai, C. (2003). Instant messaging reference: How does it compare? The Electronic Library,
21(1), 21-30.
Fennewald, J. (2006). Same questions, different venue. The Reference Librarian, 46(95), 21-35.
Foley, M. (2002). Instant messaging reference in an academic library: A case study. College &
Research Libraries, 63(1), 36-45.
Henry, D., & Neville, T. (2008). Testing classification systems for reference questions.
Reference & User Services Quarterly, 47(4), 364-373.
Horowitz, L., Flanagan, P., & Helman, D. (2005). The viability of live online reference: An
assessment. Portal: Libraries and the Academy, 5(2), 239-258.
Houlson, V., McCready, K., & Pfahl, C. (2009). A window into our patron's needs: Analyzing
data from chat transcripts. Internet Reference Services Quarterly, 11(4), 19-39.
Johnson, C. (2004). Online chat reference: Survey results from affiliates of two universities.
Reference & User Services Quarterly, 43(3), 237-247.
Kibbee, J., Ward, D., & Ma, W. (2002). Virtual service, real data: Results of a pilot study.
Reference Services Review, 30(1), 25-36.
Kwon, N. (2006). User satisfaction with referrals at a collaborative virtual reference service.
Information Research, 11(2), 246.
Kwon, N. (2007). Public library patrons' use of collaborative chat reference service: The
effectiveness of question answering by question type. Library & Information Science
Research, 29(1), 70-91.
Luo, L. (2008). Chat reference evaluation: A framework of perspectives and measures. Reference
Services Review, 36(1), 71-85.
Marsteller, M., & Mizzy, D. (2003). Exploring the synchronous digital reference interaction for
query types, question negotiation, and patron response. Internet Reference Services
Quarterly, 8(1-2), 149-165. doi:10.1300/J136v08n01_13
Marsteller, M., & Neuhaus, P. (2001). The chat reference experience at Carnegie Mellon
University. http://www.contrib.andrew.cmu.edu/~matthewm/ALA_2001_chat.html
Meert, D., & Given, L. (2009). Measuring quality in chat reference consortia: A comparative
analysis of responses to users' queries. College & Research Libraries, 70(1), 71-84.
Meserve, H., Belanger, S., Bowlby, J., & Rosenblum, L. (2009). Reference & User Services
Quarterly, 48(3), 247.
Neville, T., & Henry, D. (2009). Reference classification - is it time to make some changes?
Reference & User Services Quarterly, 48(4), 372-383.
Patterson, R. (2001). Live virtual reference: More work and more opportunity. Reference
Services Review, 29(3), 204-210.
Pomerantz, J. (2005). A conceptual framework and open research questions for chat-based
reference service. Journal of the American Society for Information Science and Technology,
56(12), 1288-1302.
Pomerantz, J. (2005). A linguistic analysis of question taxonomies. Journal of the American
Society for Information Science, 56(7), 715-728.
Pomerantz, J. (2006). Collaboration as the norm in reference work. Reference and User Services
Quarterly, 46(1), 45-55.
Pomerantz, J., Luo, L., & McClure, C. (2006). Peer review of chat reference transcripts:
Approaches and strategies. Library & Information Science Research, 28(1), 24-48.
Pomerantz, J., Mon, L., & McClure, C. (2008). Evaluating remote reference service: A practical
guide to problems and solutions. Portal: Libraries and the Academy, 8(1), 15-30.
Sears, J. (2001). Chat reference service: An analysis of one semester's data. Issues in Science and
Technology Librarianship, 32(Fall).
Warner, D. (2001). A new classification for reference statistics. Reference & User Services
Quarterly, 41(1), 51-55.
White, M., Abels, E., & Kaske, N. (2003). Evaluation of chat reference service quality: Pilot
study. D-Lib Magazine, 9(2).