The document summarizes a study conducted by Becky Skeen, Liz Woolcott, and Andrea Payant at Utah State University on assessing communication patterns within their cataloging and metadata services department. They used interaction logs filled out by staff weekly and an anonymous survey distributed to other library departments. The study found lower-than-expected interaction with other technical services units and higher interaction with Special Collections. It also contradicted stereotypes of catalogers as withdrawn by finding that most interactions were social. The data analysis tools used included Excel, Qualtrics, Tableau, and OpenRefine. Conducting this assessment on a regular basis and expanding the research were recommended to provide more useful insights into communication over time.
Charting Communication: Assessment and Visualization Tools for Mapping the Communication Patterns of Cataloging and Metadata Units
1. Becky Skeen
Special Collections/Archives Cataloging Librarian
Utah State University
becky.skeen@usu.edu
Liz Woolcott
Head of Cataloging and Metadata Services
Utah State University
liz.woolcott@usu.edu
@lizwoolcott
Andrea Payant
Metadata Librarian
Utah State University
andrea.payant@usu.edu
@rusros25
3. Challenges of Cataloging and Metadata Units
• Shrinking staff and changing roles
• Complex & “behind-the-scenes” work
• Cataloger stereotype
• Siloed work environment
• Misconceptions from a lack of understanding of what we do in the library
The Problem
4. Cataloging and Library Assessment
• Interaction log
• Staff survey
• Compare and assess the results
Our Answer
5. Tools and Logistics
• Log created in Excel
• User friendly
• Succinct
– Standardized
– Single page
– Logbook style
– Multiple choice whenever possible
– Color-coded
• Turned in weekly
• Data input by student workers into spreadsheet
Interaction Logs
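As a rough illustration only, here is what one standardized, multiple-choice log entry might look like once keyed into the spreadsheet; the column names and values below are hypothetical, not the exact fields on the USU form:

```python
import csv

# Hypothetical columns for one standardized log entry; the actual
# USU form fields may have differed.
FIELDS = ["date", "time_of_day", "other_unit", "reason", "initiated_by"]

# Multiple-choice values keep entries consistent and easy to tally later.
rows = [
    {"date": "2017-03-06", "time_of_day": "afternoon",
     "other_unit": "Special Collections and Archives",
     "reason": "social", "initiated_by": "CMS"},
]

with open("interaction_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```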
11. Purpose: Quantify (where possible) the perceptions
of the Cataloging and Metadata Services (CMS)
unit that were held by other library units.
Survey
12. Survey asked about:
• Frequency of interaction
• Reasons for interaction
• What CMS could do better
• Unit of respondent
Structure of Survey
16. What are the typical reasons you interact with a
Cataloging and Metadata Services staff member?
Reasons
17. In what ways does the Cataloging and Metadata
Services department help facilitate your day-to-day
work?
General Perceptions
18. What can the Cataloging and Metadata Services
department do better to help you?
General Perceptions
● Education
● CMS work
● Systems
● Collections
● Patrons
19. Data Analysis Tools
Quantitative Data:
• Excel
• Qualtrics
• Tableau
• OpenRefine

Qualitative Data:
• Qualtrics
• Excel
• OpenRefine
• Controlled Vocabularies
22. • Interaction deficits we didn’t expect
– Lack of interaction with other technical services units
• Example: Collection Development
• Higher frequency of interactions
– Special Collections and Archives
• Possible reasons: Cataloger embedded in SCA, digitization
of SCA materials, and SCA public service desk
• Contradicting Stereotypes
– Quiet and withdrawn? No - most interactions were social
Surprises
23. • Cataloging Assistants
– Lack of interaction of every kind
• Day of the week/Time of day
– Interactions were 2-3 times more likely on Mondays and
Tuesdays
– Most occur in the afternoon
Surprises
24. • Context is important
– We should expand this research further
• Inside and outside our library
• Lack of complete anonymity may have influenced results
• Third party asking the questions may have elicited
different responses
• More useful if conducted more than once per year, with combined analysis
• Changing division and unit structure
Major Takeaways
Our interaction assessment alone revealed our own observed patterns of interaction, but we needed to supplement that with information about perceptions of the unit, too. We then compared the perceptions and the observed interactions together to see what picture emerged.
Just after finishing the interaction assessment, which recorded our interactions with other library units, we sent out a survey to gather the perceptions of our unit held by other library staff and units.
The survey was at most ten questions long; the exact number varied based on how respondents answered earlier questions. While we had more questions than these, here are the four general areas I will go over today. All questions were optional, and multiple-choice questions included an “I prefer not to say” option, because we wanted to accommodate all levels of comfort in contributing feedback to the unit.
For this presentation I will focus on the responses to the questions listed here and compare them to the actual interactions to demonstrate gaps in communication and how we planned to approach this issue.
First, I will just sketch out the demographics of the respondents. We had a fairly good response rate with 49% of all of the library staff responding. (And by library staff, we mean all library employees - faculty, professional, and para-professional.)
The graph you see here is broken down by unit. The blue bar represents the number of responses to the survey from someone identifying as part of that unit. The yellow bar shows how many people were in that unit at the time of the survey. As you can see, we had stellar participation from the Reference and Instruction department (who now go by Learning and Engagement). Other units such as Collection Development and Government Information had a 100% completion rate, which we really appreciated.
At the other end of the spectrum, we didn’t have any engagement from the Systems department or most of our tech services counterparts (with the exception of Collection Development). As Becky noted, our interactions with the rest of our division were much lower than we anticipated - and when paired with this lack of participation in the survey, we identified an area where we could concentrate on improving communication.
We also wanted to measure how our actual interaction data matched up against everyone’s perceptions of their interactions with the CMS unit. So, the survey asked the following question [NOTED ON SLIDE]
Now, this graph demonstrates both sets of data - the survey and the interaction assessment. It is really dense, so I apologize in advance if it is a lot to take in. We will have an article coming out at the end of July in Cataloging and Classification Quarterly about this project, and we will include this graph along with explanations if you want to dive in further.
But to point out the highlights, the top bars demonstrate the actual frequency of our interactions, split by department. They show how often we interacted with a department during every week of our month-long assessment. The dark blue shows the percentage of days on which we had interactions. The green shows how many weeks we had at least one interaction. And the light blue bar shows the percentage of monthly interactions. So if we had daily interactions each week, then naturally we had 100% weekly and monthly interactions. You will note that we had daily interactions with our administration and our Special Collections, and far fewer interactions with our Government Documents, Resource Sharing (ILL), and our distance libraries.
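For readers who want to reproduce this kind of frequency summary, here is a minimal pandas sketch; the data, column names, and period lengths are invented for illustration, not the authors' actual code:

```python
import pandas as pd

# Invented log entries; the real assessment covered one month.
log = pd.DataFrame({
    "date": pd.to_datetime([
        "2017-03-06", "2017-03-07", "2017-03-13",
        "2017-03-20", "2017-03-27", "2017-03-06",
    ]),
    "unit": ["Special Collections"] * 5 + ["Administration"],
})

n_workdays = 20   # workdays in the observation period
n_weeks = 4       # weeks in the observation period

# Percentage of days and of weeks with at least one interaction, per unit.
summary = log.groupby("unit").agg(
    pct_days=("date", lambda d: round(d.nunique() / n_workdays * 100)),
    pct_weeks=("date", lambda d: round(d.dt.isocalendar().week.nunique()
                                       / n_weeks * 100)),
)
print(summary)
```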
The bottom graph shows how often respondents to our survey felt they interacted with the CMS unit. Please note that the survey provided four options for our respondents instead of three: daily, weekly, monthly, and intermittent. The pie graphs are split not by how often they interacted but by the percentage of respondents who chose one of those four options. So, for instance, two-thirds of our administration respondents thought they interacted on a daily basis and one-third felt that they interacted on a weekly basis. Compared to the bar graph above, two-thirds of them were correct. (Note that our Associate Dean for Technical Services at the time was technically in the Administration unit, and daily interactions with her were reasonable. Since then, our unit has been moved to an entirely different division (Special Collections and Archives), but the near-daily interaction still occurs.)
Overall, just about every unit underestimated how often it interacted with the CMS unit. All of the units that had daily or near daily interactions with the unit came closest to accurately predicting their interaction patterns - with the exception of Collection Development, which had 80% daily interactions - one of our highest - but most of the staff predicted only monthly interactions. I have to note that this was particularly surprising as they share a workspace with us. This was another red flag that told us we needed increased or more efficient communication with that unit. Similarly, the Acquisitions unit expected only a weekly interaction with the catalogers, but came in at over 70% daily interactions.
The outlier was Government Documents, which overestimated how often they interacted with CMS, predicting that they had at least weekly interactions; during the observation period, we logged interactions in only 25% of the weeks. For this stat, we noted that we may need to better meet expectations and make sure that someone talks with our Gov Docs staff more regularly.
Ok, one last big spreadsheet graph. We compared the reasons we recorded for interactions with the perceptions of the survey respondents. Like the frequency, the units that had the most accurate views of their interaction reasons were those that interacted the most.
There were ten categories used to group reasons for interactions in both the survey and the interaction logs: database questions, drop off/pick up materials, general library issues, meetings, patron-driven questions, procedures and workflow issues, public service desks, requesting assistance, social, and other.
This evaluation did not compare the percentages for the recorded or perceived reasons for interaction because those percentages measure two different values, but there are still some valuable patterns that emerge. For recorded interactions (this blue column shown here), the percentages indicate what portion of all of the interactions that category encompassed for each unit. For the perceived interactions (the green columns shown here), the percentage indicated what portion of the survey respondents in each unit selected that category in the survey. Respondents in the green column were not asked to weigh how often that interaction occurred, just whether or not they felt they interacted with a CMS staff member for the reason presented.
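To make the two computations concrete, here is a toy sketch with invented numbers and hypothetical column names; it is not the authors' pipeline, just an illustration of the distinction:

```python
import pandas as pd

# Recorded side: one row per logged interaction.
recorded = pd.DataFrame({
    "unit": ["Acquisitions"] * 10,
    "reason": ["social"] * 4 + ["meetings"] * 3
              + ["procedures and workflow issues"] * 3,
})
# Share of all recorded interactions with each unit, per reason.
recorded_pct = (recorded.groupby("unit")["reason"]
                        .value_counts(normalize=True) * 100)

# Perceived side: each respondent could tick any number of reasons.
perceived = pd.DataFrame({
    "respondent": [1, 1, 2, 3],
    "unit": ["Acquisitions"] * 4,
    "reason": ["social", "meetings", "social", "social"],
})
# Share of each unit's respondents who selected a given reason at all.
n_respondents = perceived.groupby("unit")["respondent"].nunique()
perceived_pct = (perceived.groupby(["unit", "reason"])["respondent"]
                          .nunique()
                          .div(n_respondents, level="unit") * 100)

print(recorded_pct, perceived_pct, sep="\n\n")
```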
Most interesting is that people think they ask CMS about our databases (ILS or digital library) or patron-driven questions, but they actually don't. So either the databases rarely caused problems, or those issues were not making it to the CMS staff.
Now I’ll give just a few examples of our qualitative data questions. These questions were answered in a free-text format.
For this question [NOTE QUESTION] most of the respondents replied that CMS staff helped make material findable. Comments included: [NOTE COMMENTS]
Additional ideas expressed included appreciation of the work that the CMS unit did in helping to teach others about the impact of cataloging and metadata on search and retrieval as well as the renewed emphasis in the unit on streamlining and updating workflows.
This statement is a great example of just how many ideas could be presented in one free-text statement. When asked what CMS could do to better help out the library staff, most of the respondents indicated they just wanted to know more about what the unit did... or how we did what we did... or how all of the systems integrate and work... or what kinds of collections we had... etc. One way that we processed these free-text fields is by doing what catalogers do best - we assigned controlled vocabularies.
The bulleted list here is an example of how we might code a response so that we could analyze the frequency of ideas for analysis later. Clearly our respondent here wanted us to do All. The. Things.
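A minimal sketch of the tallying step, using our five controlled terms but invented coded responses, might look like this:

```python
from collections import Counter

# Each free-text response is coded with one or more controlled terms
# (Education, CMS work, Systems, Collections, Patrons); the coded
# lists below are invented for illustration.
coded_responses = [
    ["Education", "CMS work", "Systems", "Collections", "Patrons"],
    ["Education", "Systems"],
    ["CMS work"],
]

# Flatten the coded lists and count how often each idea appears.
frequency = Counter(term for terms in coded_responses for term in terms)
for term, count in frequency.most_common():
    print(f"{term}: {count}")
```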
So what did we do to analyze and visualize the data to map out our interaction patterns? For the most part, we used the data analysis and graphing tools in Excel. They were pretty simple. (Sometimes, as with the frequency graph, we made two individual graphs, saved them as images, and then stitched them together in Paint.)
We also used OpenRefine to help facet and collect similar types of data.
For the survey data, we relied a lot on Qualtrics. For multiple choice type questions, Qualtrics provides the capacity to graph these simple data points - and even compare across questions that have multiple choice answers.
For the complex comparative data, we primarily downloaded the data set from Qualtrics into Excel and, as Becky mentioned, we put the paper form of the interaction assessment into Excel. We then took both of those and used Tableau to compare the results. Tableau produced the great shaded spreadsheets you saw in the reasons for interaction graph.
For the qualitative data, we had to take a different route. Qualtrics doesn’t really analyze that data well, but it will put it into Word Clouds that can sometimes be helpful. Since our free text fields were often paragraphs long, this didn’t help us for many of the questions. For that, we focused on the controlled vocabularies I just described and then faceted them in OpenRefine or Excel.
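For those more comfortable scripting than using OpenRefine, an analogous facet-and-count can be done in pandas; the delimiter and data below are assumptions for illustration:

```python
import pandas as pd

# Invented responses with semicolon-delimited controlled terms;
# one free-text answer can carry several coded ideas.
df = pd.DataFrame({
    "response_id": [1, 2, 3],
    "codes": ["Education; Systems", "Education; Collections", "Patrons"],
})

# Split each cell and explode so every coded term gets its own row,
# then count; this is roughly what an OpenRefine text facet shows.
facets = (df.assign(code=df["codes"].str.split("; "))
            .explode("code")["code"]
            .value_counts())
print(facets)
```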
We gathered a lot of useful data...then what?
We were surprised by some of the results of our analysis.
The most surprising trend was the relative infrequency of interactions within the technical services division, in which the CMS unit resided at the time of our assessment. This division was also composed of the Collection Development, Materials and Acquisitions, and Resource Sharing and Document Delivery units. Our analysis showed that we had little to no interaction with Collection Development. This was particularly striking given that the two units shared the same physical space, which suggested that much of the communication difference was related to workflow rather than proximity.

Another result that stood out was the high frequency of interactions with one unit in particular: Special Collections and Archives. A few factors may contribute to this, the most obvious being that we have one cataloger embedded in the SCA unit, whose physical location naturally leads to more interactions on a daily basis. Another explanation is the library's continued emphasis on digitizing and creating metadata for the unique materials found in SCA. Finally, half of the CMS staff provide weekly reference assistance at the SCA public service desk, which opens up the opportunity for more interactions between the two units.

The most common reason for interaction was unexpected. Contrary to the stereotypical view of the quiet and withdrawn cataloger, the most common reason a CMS staff member interacted with library colleagues was social. Almost a quarter of interactions were defined as social, and when further broken down by who initiated them, social interactions were twice as likely to be initiated by a CMS unit member as by non-CMS library colleagues.
The data highlighted the lack of interactions between the CMS cataloging assistants and the rest of the library staff. While their work is more day-to-day and less focused on planning or project management, the three cataloging assistants showed a surprising paucity of every kind of interaction, including social. This demonstrated a need to create more opportunities for the cataloging assistants to engage with colleagues in other parts of the library. Also, when considering the days of the week on which most interactions occur, the results strongly indicated that communication and engagement are two to three times more likely to take place on Mondays and Tuesdays. Similarly, most interactions take place in the afternoons, with intermittent interactions in the early morning (before 8 am) or in the evenings (after 5 pm). The date and time data were useful in deciding how to approach new services that the CMS unit offers, particularly what times of the day were most important for unit members to be available to answer questions, participate in meetings, and receive or deliver library acquisitions.
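A day-of-week and time-of-day breakdown like the one described above can be sketched as follows; the timestamps are invented, and the real log captured broad time-of-day categories rather than exact clock times:

```python
import pandas as pd

# Invented timestamps for illustration only.
log = pd.DataFrame({"when": pd.to_datetime([
    "2017-03-06 14:10", "2017-03-06 15:30", "2017-03-07 13:05",
    "2017-03-08 09:45", "2017-03-13 16:20", "2017-03-14 07:50",
])})

# Interactions per weekday, e.g. to spot a Monday/Tuesday skew.
by_weekday = log["when"].dt.day_name().value_counts()

# Bucket hours into periods comparable to the log's categories.
by_period = pd.cut(log["when"].dt.hour,
                   bins=[0, 8, 12, 17, 24],
                   labels=["early morning", "morning",
                           "afternoon", "evening"],
                   right=False).value_counts()
print(by_weekday, by_period, sep="\n\n")
```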
Surveying the landscape of library peer perceptions provided a valuable self-reflection exercise for the Cataloging and Metadata Services unit. In many ways, it encouraged the unit to begin to reimagine itself and its role in the larger framework of the library. The data we gathered, admittedly, only illustrates the communication patterns of the CMS staff and could benefit from greater context; seeing how it compares to the broader interaction trends among all library staff, and to other institutions, would be an intriguing study to undertake in the future. We also came to realize that although our interaction assessment and survey were anonymous, that anonymity may not have been complete. The size of the group we gathered feedback from, and the fact that we were the ones who conducted the research, may have influenced the results; a third party asking the same questions might have elicited different responses. It also would have been more useful to conduct the survey more than once throughout the year, so that the results could be compared for more in-depth analysis. Finally, we learned that the stability of unit and division structures can affect this kind of research: as we conducted our assessment, certain individuals' alignment within units and divisions was in flux, and this caused us some difficulty analyzing the data.