
Altmetrics: painting a broader picture of impact

3,462 views


Presentation for Academic Publishing in Europe 9 (APE2014)

Published in: Education, Technology
  • @DavidColquhoun That's weird... it disappeared. I'm reposting:

    David Colquhoun said:
    I'm sorry, but I think we are a long way apart. You say 'you've used them as evidence of scholarly impact assessment yourself'. The whole point of my last comment was that my blog is not scholarly. The blog is just a hobby. If you want to see my real science, look here http://www.onemol.org.uk/?page_id=10 (our 1982 paper in Proc Roy Soc B was 59 pages with 400-odd equations - not exactly pop stuff). You could also look at my Google Scholar entry, but that would reveal a lot of citations for some very trivial papers (though you would have to read the papers, and know the field, to tell which). Of course I agree that publishing will soon all be web-based, but that is a totally different question. It's to do with cost, open access and escaping the grip of glamour journals. It has nothing whatsoever to do with how you assess the merit of people's work. I repeat, I think that metrics in general, and altmetrics in particular, alter behaviour, provide perverse incentives, and are a corrupting influence. I think it might help if you consider the examples that we gave in http://www.dcscience.net/?p=6369 If you think they are atypical, please produce some evidence to that effect.
  • Hi David,

    OK, so we're not that far apart.

    For me, Google stats and other just plain hit measures are part of altmetrics. And you've used them as evidence of scholarly impact assessment yourself.

    3.5k visitors to a specialized blog would be interesting. This is why it's important to combine numbers with qualitative approaches.

    I would disagree with you that blogs can't be part of doing science. There are many specialist blogs that are extremely useful for communicating science. One of my favorites is http://lambda-the-ultimate.org, a programming languages blog. I think it's not a waste of time to write up what you're doing on a blog.

    In general, I think things are moving towards science communication using the web. Paul Krugman has a nice reflection on this with respect to economics (http://krugman.blogs.nytimes.com/2013/12/17/the-facebooking-of-economics/).

    Finally, by being careful, what I mean is that using metrics with social systems is tricky, as you've pointed out. For example, it's no good to just count numbers. An interesting take on this is here: http://www.wired.com/business/2014/01/quants-dont-know-everything/
  • Well I agree that hits on one's blog are interesting, from a narcissistic point of view. I get some gratification that (according to Statcounter) my blog has been viewed 3.5 million times, and have even cited that number when UCL decided to submit the blog as part of UCL's submission to the Research Excellence Framework as evidence of 'impact'. But my blog has next to nothing to do with my science (though it does rely to some extent on statistical knowledge gained in the course of my real work).

    If I were to blog about matrix algebra, stochastic processes and maximum likelihood fitting of single molecule records, I'd be lucky to get 3.5k readers, never mind 3.5 million. The blog is a fun hobby for my 'retirement', but I could not possibly have found time to do it when I was doing real science. The fact that it gets so many readers tells you nothing whatsoever about my science.

    In any case there is no need to have bibliometricians to count blog hits. They come free with Google Analytics, Statcounter etc.

    Finally, you say one should be 'careful in all assessment procedures'. I've often heard that said, but I have no idea what 'being careful' means in practice. If you haven't got good data about what the numbers measure, how can you be 'careful' about how you use them?
  • Hmm... usefulness doesn't always imply prediction. For example, I find it useful to know how many people are reading my blog, downloading my slides or software, or using the platforms I've built, and who they are. I think telling people about it isn't a bad thing.

    It would be odd to promote anybody just based on the number of hits on a blog or the fact that they wrote a paper with a catchy title. But if part of my scholarship is outreach to a certain community, and I can demonstrate that with statistics about my blog readership, I think that's useful.

    I actually think bibliometricians (which, btw, I'm not) are concerned with the misuse of these indicators. See again [1]. Also check out [2] for some current thinking on the efficacy of these measures.

    Finally, +1 for being careful in all assessment procedures. I think I emphasize that throughout the talks I've given on this topic.

    [1] http://www.slideshare.net/paulwouters1/issi2013-wg-pw
    [2] https://openaccess.leidenuniv.nl/bitstream/handle/1887/20468/CWTS-WP-2013-002.pdf?sequence=1
  • Perhaps I should have said experimental scientists: those who are adding real knowledge of the natural world. I don't think that bibliometrics will qualify as a science until such time as it is shown that your measures predict something useful. That's not the case at the moment. There is an ever-increasing number of different measures and next-to-no evidence that any of them predict anything useful, like how to pick, or promote, a candidate. It really is thoroughly irresponsible to promote methods when their usefulness simply isn't known.

    What you haven't done in your reply is to respond to the particular papers that we picked out for analysis. They suggest strongly to me that, if anything, altmetrics scores will be highest for trivial papers with buzzwords in the titles that probably haven't been read by those who promote them. That is a very serious matter, because it encourages practices which I would regard as verging on corruption.

    A constructive approach would be to do research on the corrupting influence of metrics. I presume that bibliometricians are not enthusiastic about doing that because it might put them out of business (much like homeopaths). There is a real risk that use of bibliometrics will result in the best young scientists being fired. If the methods in use at Imperial College had been used to evaluate Bert Sakmann (Nobel Prize 1991) he would probably have been fired before he'd had a chance, as I showed in http://www.dcscience.net/?p=182

Altmetrics: painting a broader picture of impact

  1. Altmetrics: building a broader picture of impact. Paul Groth @pgroth, Web & Media Group, Department of Computer Science, The Network Institute, VU University Amsterdam. http://www.few.vu.nl/~pgroth #APE2014
  2. Research Project Grants: applications, awards, and success rates. NIH Data Book (http://report.nih.gov/ndb/index.aspx). Data provided by the Division of Information Services, Reporting Branch.
  3. "Outside letters basically trump everything," says Robert Simoni, chairman of the biology department at Stanford University in California. Metrics: Do metrics matter? Nature 2010. http://doi.org/10.1038/465860a
  4. "Imagine how the academic appointment process might change if search and review committees had access - within an appropriately tagged or linked online CV, for example, or via the ORCID system - to information about the specific contributions made by a candidate to each of his/her works, including contributions that might not otherwise have qualified for 'authorship' status?" Point of view: Faculty appointments and the record of scholarship, Amy Brand. http://dx.doi.org/10.7554/eLife.00452
  5. Point of view: Faculty appointments and the record of scholarship, Amy Brand. http://dx.doi.org/10.7554/eLife.00452. Opportunities: • Individuals and institutions need better tools for curating and networking their own record of scholarship • Institutions need more information about scholarly contribution • ALMs that reliably differentiate sources of input (general; academic; expert; etc.) would be more useful. Slide 3: http://article-level-metrics.plos.org/files/2013/10/Brand.pptx
  7. Altmetrics is the study and use of scholarly impact measures based on activity in online tools and environments. http://doi.org/10.1371/journal.pone.0048753
  8. http://blog.peerj.com/post/65345738206/changing-the-currency-of-science-to-solve-our-greatest
  9. Thanks Ian Mulvany
  11. "It took approximately a generation (20 years) for bibliographic citation analysis to achieve acceptability as a measure of academic impact." (Vaughan and Shaw, 2003)
  12. The research is happening: http://ploscollections.org/altmetrics (Birkholz et al. 2013) (Fausto et al. 2012) http://jasonpriem.org/self-archived/5uni-poster.png http://asis.org/Bulletin/Apr-13/AprMay13_Piwowar.html
  13. http://www.cwts.nl/pdf/CWTS-WP-2013-003.pdf
  14. http://www.slideshare.net/paulwouters1/issi2013-wg-pw
  15. Bottom line: use altmetrics as evidence in a larger story
  16. Examples
  17. Published AND discussed AND cited
  19. Summary: a broader view • Different research artifacts – papers, preprints, slides, videos, code, data • Different measures – usage, mentions, views, sharing • Different stories – progress so far, workshop impact, outreach
  20. Conclusion • Altmetrics is still developing – but useful today • Allows us to build a broader picture of impact – using a variety of artifacts & measures • Final thought: research artifacts exist in a network, and we're starting to connect it
  21. Thanks. Collaborators: Peter van den Besselaar, Julie Birkholz, Frank van Harmelen, Shenghui Wang, Rinke Hoekstra, Thomas Gurney, Mike Taylor, Anita de Waard, Jason Priem, Dario Taraborelli, Cameron Neylon, Ian Mulvany
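The "broader view" in the closing slides — different artifacts carrying different measures, kept separate as evidence rather than collapsed into one score — could be organized as a small data structure. This is a hypothetical Python sketch, not code from the talk; the artifact names and counts are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """One research artifact (paper, slides, code, data) with its raw measures."""
    name: str
    kind: str                                      # e.g. "paper", "slides", "code"
    measures: dict = field(default_factory=dict)   # measure name -> count

def summarize(artifacts):
    """Group counts by artifact kind, keeping each measure separate
    so the numbers stay readable as evidence, not a single score."""
    summary = {}
    for a in artifacts:
        per_kind = summary.setdefault(a.kind, {})
        for measure, count in a.measures.items():
            per_kind[measure] = per_kind.get(measure, 0) + count
    return summary

# Invented example portfolio
portfolio = [
    Artifact("APE2014 talk", "slides", {"views": 3462, "downloads": 120}),
    Artifact("Altmetrics paper", "paper", {"citations": 40, "mentions": 15}),
    Artifact("Analysis scripts", "code", {"forks": 8, "downloads": 300}),
]

print(summarize(portfolio))
```

The point of the shape is the talk's: each kind of artifact keeps its own kinds of measures, and the narrative ("progress so far", "outreach") is layered on top by a human, not computed.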