As researchers take a more active approach to managing their reputation, what can the data generated by their activities tell us about the best ways to present research online? Many different parties across the scholarly communications community are seeking to understand the data in their respective systems, to determine cause and effect across a range of activities and outcomes. What pitfalls must be avoided, and how can we better integrate our efforts to maximize understanding of the tools to which researchers are turning to support career progression?
6. [Chart] How do academics broadly rank activities in terms of contribution to reputation? (n = 2,748)
Activities charted (0–100%), from most to least contribution: Publication; Presenting at conferences; Collaboration; Winning funding / grants; Peer-reviewing; Teaching; Winning awards / prizes; Editorship; Mentoring; Community contribution (e.g. …); Consultancy; Communicating via social …; Engaging with the media; Industry engagement; Commercializing your …; Blogging.
Contributes most: publication, speaking, collaboration, reviewing.
Contributes least: blogging, industry, media, social media.
7. [Same chart as the previous slide]
“When you see the CVs of big academics, they’ve done all these things. It wasn’t a strategy – they just did them.”
8. @charlierapple #uksg16
“Academia is a meritocracy, but it’s also about reputation management. More senior academics might not see this, but as a junior academic – and a woman – proactively managing your reputation is really important.”
13. [Chart] To what extent do you think more could be done to increase the visibility, usage or impact of the work you publish, on or after publication? (n = 2,900)
Visibility, usage or impact could be significantly improved: 49.91%.
Visibility, usage or impact could be somewhat improved: 38.50%.
Visibility, usage or impact could not be improved: 3.72%.
I don’t know: 7.86%.
14. [Chart] In which of the following ways do you currently create awareness of or share materials relating to your work? (n = 2,826)
Channels charted (0–100%): Conferences / meetings; Academic networking / profile sites (e.g. …); Conversations with colleagues; Institutional websites / repositories; Email; Social networking sites (e.g. LinkedIn, Twitter, …); Your own blog / website; Subject-based websites / repositories (e.g. arXiv, …); Posts on other blogs / websites; Discussion lists; Multimedia sharing sites (e.g. Slideshare, YouTube).
15. Actions = data = answers?
Thanks to the Altmetrics Research Team at the Centre for HEalthy and Sustainable CitieS (CHESS), Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore. The Altmetrics team at CHESS is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its Science of Research, Innovation and Enterprise programme (SRIE Award No. NRF2014-NRF-SRIE001-019). Any opinions, findings, conclusions and/or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Singapore National Research Foundation.
16. Brief background: what is Kudos?
Plain language explanations; trackable links for sharing; a range of metrics against which to map efforts to explain and share.
18. Open vs closed communications
OPEN channels – nothing to restrict visibility of sharing of work, except the time and effort involved in finding / following / filtering.
MEDIATED channels – possible, but less likely, for publisher / institution to have visibility of sharing.
CLOSED channels – very unlikely that publisher / institution will be connected with the researcher and have visibility of sharing efforts here.
23. [Chart] Channels used: % of publications shared in each channel (Facebook, Twitter, LinkedIn, Others; n = 4,610 publications with share actions in Kudos; a publication can be shared in more than one channel)
Facebook is more commonly used for sharing academic work than you might expect.
26. Can attention drive action? Yes!
[Chart] Median full text downloads: control group (n = 4,858) 121; treatment group (n = 4,866) 149.
Proactively explaining and sharing work increases downloads by 23%.
27. Thanks to our survey partners.
Find me at stand 50. charlie@growkudos.com / blog.growkudos.com
Editor’s notes
My name is Charlie Rapple and I’m the co-founder of Kudos which provides tools to help researchers – and their institutions, publishers and societies – increase the reach and impact of published work.
I’m here today to talk through findings of a range of projects I’ve been involved with in the last year, all looking broadly at what reputation means to researchers, why it matters to them, what they do to increase their visibility, and the impact of these efforts.
I thought it was worth going back to basics to start with, and considering the simple question of ”what is reputation” – when I talk to researchers on this topic, I always start exploring from this rudimentary point, because I’m curious about how they interpret this potentially complex term. Sometimes my findings validate reasonably obvious expectations, but other times I find myself completely surprised by something they share.
For example, in the context of why reputation is important,
One researcher made the relatively obvious point to me that “reputation opens doors”.
But what surprised me was another researcher saying that reputation is not about helping you get the job, or the impression you make when you get to interview – it’s about whether you’ll even get a chance to apply – whether you even know that job exists. She flagged up how many things – jobs, funding, opportunities for volunteering, networking – just don’t get advertised; those “recruiting” turn to who they know – so you’ve got to be “known”.
So putting those two points together: Reputation is a mechanism for opening doors, without which you wouldn’t even see the doors, let alone be able to open them. Reputation opens secret doors!
Another thing that is unexpected and interesting is that younger academics are starting to talk in terms of brand. As a marketer I’ve often been hesitant about using the “brand” word with academics for fear that it’s a commercial, almost dirty word. So I’m fascinated that academics are beginning to use these terms for themselves, and to consider it necessary to think about, build and manage their own personal brand.
One early career researcher I interviewed explained this nicely; she said: ”People want to pigeon-hole you... I applied for a postdoc a few years ago and they said “I don’t quite know who you are”. So you can’t spread yourself too widely – you need to be seen as something specific – and it takes a bit of thought. When I started my current job, I worked to “brand” myself as a drylands specialist working on past and present environmental change and challenges.”
I think the lesson here for those of us gathered at UKSG is that, if academics are embracing the concept of brand, so should we – both in terms of supporting their efforts, and in terms of rigorously exploring and managing our own brands.
So a big part of what I’m doing day-to-day at the moment is understanding the shifts in how people create and manage their reputation or brand.
This is from a survey I’ve just completed in the last month. About 3,000 academics responded, about half of whom are still completing their PhD or within 10 years of doing so; just under half were in Europe (about 20% in each of Asia and North America). About three quarters were STM, a quarter social sciences, and because we allowed people to choose multiple options, another 15% arts and humanities!
The survey findings broadly align with the interviews I’ve been talking about – with a few interesting differences:
in the interviews, publication is almost taken as read; people actually don’t mention it upfront when you ask what they do to build their reputation – although it does of course come up when you probe
I think people don’t necessarily think of things like publication, peer review and teaching as reputational activities. When given a list that puts those activities in that context, they do rank them highly. But ask them conversationally about things they do to build reputation, and they focus more on activities that might be more explicitly categorised as “outreach” or networking.
So, for example, community contribution figured more highly in my interviews than it scored in the survey, as did use of social / academic networks.
I think that reflects too the very specific question I asked in the survey (please rank these activities in terms of how they contribute to your reputation) versus the more general conversations I had with people in person, where they were just telling me about things they do. So the sense is that people are trying things even though they aren’t really sure that they contribute that much.
Something that comes up time and again is how proactive or strategic researchers need to be in managing their reputation.
One of my interviewees made two thought-provoking comments here: firstly “When you see the CVs of big academics, they’ve done all these things. It wasn’t a strategy, they just did them.”
She later said: “Academia is a meritocracy, but it’s also about reputation management. More senior academics might not see this, but as a junior academic – and a woman – proactively managing your reputation is really important.”
I think between those two comments lies another useful thread for us to pull: for people at the earlier stages of their career, the academic space is so crowded now, so competitive, that just waiting for opportunities to drop into your lap isn’t going to work – you have to be more strategic, and more proactive in building up your brand. It isn’t necessarily an easy process: the “old” ways of sharing about your work or participating in the debates in your field tend to favour established academics, so in turn to be favoured by those in senior positions – who then frown upon things like social media because they don’t have personal experience of its value in helping to progress academic discourse.
And that’s precisely why we should encourage, enable and support younger academics to use social media – and in particular to use tools that help get them out of the echo chambers of things like ResearchGate and Academia, which are still only peer to peer, still just online iterations of offline cliques. We have to support and encourage people in reaching broader audiences.
People I talked to felt empowered by the ability that social media gives them to transcend the boundaries of their personal network, or even that of their supervisors.
They also pointed out that social media helps those outside the system – people without an institutional affiliation or from less well-funded countries who may not be able to attend conferences but can participate remotely thanks to social media and video streaming.
Social media gives everyone an equal voice, and an equal opportunity to participate.
A quick comment now on publication as an activity, as that was top of the list and is naturally of interest to this audience.
One academic talked to me about the “bubble” – the ideal or fantasy world, here represented by a unicorn on roller skates – in this world, you will commonly hear academics say things like “impact factors don’t mean anything” or “it’s about the quality of the work, not the brand of the journal.”
But then outside the bubble, a good publication is still what makes you – one interviewee talked about a temp in her lab who no-one paid any attention to until they found out she had a paper in Nature, and then she was treated like a “rock star”.
So if publications are such a big contributor to reputation, how satisfied are researchers with the visibility, usage and impact of their publications?
Our survey says… not very!
You can see here that fully 88% of our survey respondents thought more could be done to increase the visibility, usage or impact of their work.
When we did this survey in 2013, it was 84% so this is a persistently high proportion of people who expect more, and there are implications for all of us in terms of growing the support and guidance we give them. Our survey did show that they feel themselves to be primarily responsible for the visibility of their work, but that more support would be welcomed, particularly from institutions.
And of course, it’s not ALL about publications. There are lots of other outputs from people’s work, and lots of ways to share them. Our challenge as a community is to help make these activities more trackable, in terms of their effect on impact and reputation, and more joined up, in terms of enabling the full research story to be easily explored across different formats. At the moment, researchers are experimenting with new approaches but with little more than anecdotal evidence to help them determine which of these activities are most effective.
That then is the challenge we’ve been trying to address at Kudos, by developing a central service through which people explain and share their research, so we can build up and analyse a dataset of what works. The project I’m now talking about was carried out by the CHESS Centre at the Nanyang Technological University.
To give you the background to the study, in terms of what people do with Kudos: academics claim their publications, add plain language explanations, and generate trackable links to underpin whatever sharing they do, and then we can map their communications efforts directly to a range of publication metrics.
We can then roll up and report on these actions at the overall level, but also to institutions with which they’re affiliated, or publishers of the works in question, or societies of which they are members – so that these organisations can also learn more about which communications activities and channels are most popular, and most effective, and shape their own activities, and their support for researchers, accordingly.
So the idea is that we are joining the dots between ”communications around my work”, on the left, with “metrics about the performance of my work”, on the right – so people can get beyond metrics (which measure where you are) to learn more about how you have got there.
We’re trying to build up a “big data set” so the next time a researcher asks you “what network or platform should I use to build visibility of my work” there will be broad data set from which to draw answers.
Just before I move onto our analysis of that data, I also wanted to put a placeholder in your minds about open vs closed communications.
The different ways that researchers share their works are on a spectrum, from completely open channels such as Twitter – where (typically) anyone can find and follow you – to closed channels such as ResearchGate (where only people with an academic email account can register) or email (where only the specific people you mail will see your work).
So Facebook, for example, is much more of a closed channel than, say, Twitter. It is much less likely, as institutions or publishers, that we are going to be personally connected on Facebook to the researchers we are involved with. So it’s difficult to know if researchers communicate their work via Facebook: you might see (for example) via Altmetric that there is lots of Facebook conversation around an article, but you can’t easily track that back specifically to any initial action taken by the researcher to increase the visibility of their work.
Enough background! I know we have some researchers in the audience who will be very familiar with the challenges of data analysis.
But for those who aren’t I thought I’d briefly talk through a few of the ways in which it is very difficult to try and answer what seem to be simple questions!
First of all, you always wish you had more data.
We’ve got over a million publications in Kudos now,
But by the time you filter this (for example, on the actions you want to consider, or for a comprehensive data set across both metadata and metrics), we’re down to a data set for this study of around 5,000 publications.
Then you have to look at the characteristics of the data – everything from publication type to the frequency of outreach actions taken or the age of the publications.
And then set up a control group – for which you randomly sample the rest of the data you have, and then check to see that it broadly shares the same characteristics.
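As a rough sketch of that sampling-and-checking step – the records and the "age" characteristic below are invented for illustration, not the Kudos data:

```python
import random
import statistics

random.seed(42)

# Hypothetical records: (publication_id, age of publication in years)
treatment = [(f"t{i}", random.randint(1, 5)) for i in range(100)]
pool = [(f"p{i}", random.randint(1, 5)) for i in range(5000)]

# Randomly sample a control group of equal size from the remaining data
control = random.sample(pool, k=len(treatment))

# Check the control group broadly shares the treatment group's
# characteristics (here, just mean publication age)
t_age = statistics.mean(age for _, age in treatment)
c_age = statistics.mean(age for _, age in control)
print(f"mean age - treatment: {t_age:.2f}, control: {c_age:.2f}")
```

In practice you would compare several characteristics (publication type, subject, metrics coverage), and re-sample if the groups diverge.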
Once you’ve got valid datasets, you have to start analysing them – and there’s a wealth of different tests for doing this.
For example, in part of our analysis, we started by using standard t-tests to compare the means.
But the t-test assumes the data are approximately normally distributed,
whereas our data actually had some quite extreme outliers, which can distort the results.
So we switched to the Mann-Whitney U test, which is based on ranks (in effect comparing medians) and is therefore not distorted by outliers.
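A minimal sketch of that comparison using SciPy – the download counts are invented to show how one extreme outlier affects each test:

```python
import numpy as np
from scipy import stats

# Invented download counts: the treatment group contains one extreme outlier
treatment = np.array([10, 11, 12, 13, 14, 15, 16, 900])
control = np.array([8, 9, 9, 10, 10, 11, 11, 12])

# Welch's t-test compares means, so the single outlier swamps the result
t_stat, t_p = stats.ttest_ind(treatment, control, equal_var=False)

# Mann-Whitney U works on ranks: 900 is merely "largest", not "huge"
u_stat, u_p = stats.mannwhitneyu(treatment, control, alternative="two-sided")

print(f"t-test p = {t_p:.3f}; Mann-Whitney p = {u_p:.3f}")
```

Here the t-test fails to find a significant difference (the outlier inflates the variance), while the rank-based test detects the systematically higher treatment values.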
For those of you who’ve started to glaze over, let me give you a plain language analogy:
Dave Nicholas at CIBER calls this the Bill Gates problem – imagine a bus full of passengers whose average earnings you are calculating. If you look at the mean, you’re taking the total income of the people on the bus and dividing it by the number of passengers. Seems sensible …
Until Bill Gates gets on the bus, and suddenly on average everyone’s a multi-millionaire.
This is why we used the median rather than the mean – it excludes the non-typical and, in this case at least, made the analysis more meaningful.
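The bus analogy in code, with invented passenger earnings:

```python
import statistics

# Invented annual earnings (USD) of five bus passengers
earnings = [28_000, 31_000, 35_000, 40_000, 45_000]

# Bill Gates gets on the bus
earnings.append(100_000_000_000)

mean = statistics.mean(earnings)      # dragged up by the one outlier
median = statistics.median(earnings)  # still reflects a typical passenger
print(f"mean = {mean:,.0f}; median = {median:,.0f}")
```

The mean leaps to billions of dollars while the median stays at 37,500 – the same reason the download analysis reports medians.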
So what have we found? Here are some of our early highlights.
We started by separating out the main social media channels that researchers can post to directly from Kudos: I was taken aback to learn that academics are more likely to do their sharing via Facebook – really surprising, as you might expect them to use Facebook more for personal updates, and Twitter or LinkedIn for posting about their work. The boundaries between professional and personal networks are blurring.
This is why I wanted to emphasise the point about closed vs open channels. Given that Facebook is towards the closed end, and LinkedIn is on the “mediated” part of the spectrum, that means a high proportion of sharing is happening in channels where the fact of that sharing has been hard to monitor, let alone its effect.
It becomes all the more important for publishers and institutions to use tools that help surface these activities, and help you understand what role your own authors and researchers have played when their work performs well against metrics.
The next interesting thing we found was that the highest correlation between shares and share referrals was for LinkedIn.
The numbers here represent the strength of correlation between somebody sharing in that channel and people clicking on a share in that channel, with 1 indicating a perfect positive correlation – for the statisticians among you, this is based on a Pearson product–moment correlation.
What this tells us is that people might do more of their sharing via Facebook, but actually they are more likely to get clicks on the links they share via LinkedIn.
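For illustration, here is a Pearson product–moment correlation computed over invented per-publication share and click counts (not the Kudos data):

```python
import numpy as np
from scipy import stats

# Invented per-publication counts: shares made in a channel, and
# click-throughs ("share referrals") received via those shares
shares = np.array([1, 2, 2, 3, 4, 5, 6, 8])
clicks = np.array([0, 1, 2, 2, 3, 4, 5, 7])

# Pearson correlation: 1 = perfect positive linear relationship
r, p_value = stats.pearsonr(shares, clicks)
print(f"r = {r:.2f} (p = {p_value:.4f})")
```

A high r for a channel means that sharing there reliably translates into clicks, which is the sense in which LinkedIn came out strongest in our data.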
A final area we looked at is whether efforts to explain and share work led to an increase in metrics that aren’t directly related to the processes of sharing – for example, whether explaining and sharing correlates to an increase in downloads of full text on the publisher site.
I’d been thinking a lot about how metrics relate to each other and had proposed that they too are on a spectrum, and I mapped that to the old marketing “AIDA” framework (or Attention, Interest, Desire, Action).
So while researchers (and institutions, and publishers) are certainly interested in the attention end of the spectrum, they continue (rightly or wrongly) to be most impressed or persuaded by metrics further along – so for example, those senior academics who don’t “get” social media do value traditional media coverage, and of course they value citations. And even those who understand better the value of attention metrics still care about things like full text downloads – and want to understand the connection between their efforts to create attention, and whether that helps to drive interest, desire and action.
To which our answer is, yes, it does.
We analysed full text downloads for publications where the authors had used the Kudos toolkit to explain and share their work, and compared this to our control group where the Kudos tools had not been used.
We found that downloads of the full text, on the publisher site, were 23% higher for those publications where authors had used the Kudos toolkit to explain and share their work.
So this is starting to become the kind of evidence we need to persuade those senior academics who frown upon communications efforts by their more junior colleagues – because, in as much as we can correlate downloads to readership, we’re starting to see that outreach efforts are correlated with improved readership, which is key to the creation of any future impact, whether academic or otherwise.
So in conclusion:
Reputation opens secret doors
Some people are very good at building it – we need to level the playing field – this is no longer optional stuff for academics.
Tools, guidance, support to help everyone “compete” equally
We are beginning to have the evidence that will help justify time spent using such tools and undertaking such activities, to those senior academics who are sceptical
New modes of outreach do drive meaningful metrics
Usage ≠ impact but is a vital basis on which impact is built: if people don’t read your work, they can’t apply or cite it.
People who helped us circulate the survey