The dos and don'ts in individual-level bibliometrics

Wolfgang Glänzel 1, Paul Wouters 2
1 Centre for R&D Monitoring and Dept MSI, KU Leuven, Belgium
2 Centre for Science and Technology Studies, Leiden University, The Netherlands
Introduction

In the last quarter of the 20th century, bibliometrics evolved from a
sub-discipline of library and information science into an instrument for
evaluation and benchmarking (G, 2006; W, 2013).
• As a consequence, several scientometric tools came to be used in contexts for
which they were not designed (e.g., the JIF).
• Due to the dynamics of evaluation, the focus has shifted away from macro
studies towards meso- and micro-level studies of both actors and topics.
• More recently, the evaluation of research teams and individual scientists
has become a central issue in services based on bibliometric data.
• The rise of social networking technologies, in which all types of activities are
measured and monitored, has promoted auto-evaluation with tools such as
Google Scholar, Publish or Perish and Scholarometer.
Introduction

There is no single typical individual-level bibliometrics, since the goals
differ: they range from the assessment of individual proposals or an
applicant's oeuvre, through intra-institutional research coordination, to the
comparative evaluation of individuals and the benchmarking of research teams.
As a consequence, common standards for all tasks at the individual level
do not (yet) exist.
☞ Each task and each concrete field of application requires flexibility
on the part of bibliometricians, but also maximum precision and accuracy.
In the following we summarise some important guidelines for the use
of bibliometrics in the context of the evaluation of individual scientists,
leading to ten dos and ten don'ts in individual-level bibliometrics.
Ten things you must not do …

1. Don't reduce individual performance to a single number
• Research performance is influenced by many factors such as age,
time window, position and research domain. Even within the same scholarly
environment and position, interaction with colleagues, co-operation,
mobility and activity profiles might differ considerably.
• A single number (even if based on sound methods and correct data)
cannot suffice to reflect the complexity of research activity,
its background and its impact adequately.
• Using such numbers to score or benchmark researchers requires taking
the working context of the researcher into consideration.
Ten things you must not do …

2. Don't use IFs as measures of quality
• Once created to supplement ISI's Science Citation Index, the IF
evolved into an evaluation tool and seems to have become the
"common currency of scientific quality" in research evaluation,
influencing scientists' funding and careers (S, 2004).
• However, the Impact Factor is by no means a performance measure
of individual articles, nor of the authors of these papers (e.g., Seglen,
1989, 1997).
• Most recently, campaigns against the use of the IF in individual-level
research evaluation have emerged from scientists (who feel they are the
victims of evaluation) and from bibliometricians themselves (e.g.,
B  HL, 2012; B, 2013).
◦ The San Francisco Declaration on Research Assessment (DORA) has
started an online campaign against the use of the IF for the evaluation of
researchers and research groups.
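For reference, the standard two-year impact factor of a journal j in census year y is a journal-level average (this formula is standard background knowledge, not taken from the slide):

$$
\mathrm{IF}_j(y) \;=\; \frac{\text{citations received in year } y \text{ by items published in } j \text{ in years } y-1,\, y-2}{\text{number of citable items published in } j \text{ in years } y-1,\, y-2}
$$

Because within-journal citation distributions are highly skewed, this average is typically driven by a few highly cited papers and says little about any single article or author.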
Ten things you must not do …

3. Don't apply (hidden) "bibliometric filters" for selection
• Weights, thresholds or filters are defined for in-house evaluation or
for preselecting material for external use.
Some examples:
◦ A minimum IF might be required for inclusion in official publication
lists.
◦ A minimum h-index is required for receiving a doctoral degree or for
a grant application to be considered.
◦ A certain number of citations is necessary for promotion or for the
possible approval of applications.
This practice is sometimes questionable: if filters are set, they should
always support human judgement and not pre-empt it.
☞ The psychological effect of using such filters should also not be
underestimated.
Ten things you must not do …

4. Don't apply arbitrary weights to co-authorship
A known issue in bibliometrics is how to properly credit authors for their
contribution to papers they have co-authored.
• There is no general solution to the problem.
• Only the authors themselves can judge their own contribution.
• In some cases, pre-set weights based on the sequence of
co-authors are defined and applied as strict rules.
• The sequence of co-authors, as well as the special "function" of the
corresponding author, does not always reflect the size of their actual
contribution.
• Most algorithms are, in practice, rather arbitrary and at this level
possibly misleading (see the sketch below).
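To make the arbitrariness concrete, here is a minimal Python sketch (with a hypothetical four-author paper) of three credit-allocation schemes found in the literature; the point is how differently they divide credit for the same byline:

```python
def full_counting(n_authors):
    """Every co-author receives full credit (1.0 each)."""
    return [1.0] * n_authors

def fractional_counting(n_authors):
    """Each of the n co-authors receives credit 1/n."""
    return [1.0 / n_authors] * n_authors

def harmonic_counting(n_authors):
    """Position-dependent credit: the k-th author gets (1/k) / H(n)."""
    h_n = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / k) / h_n for k in range(1, n_authors + 1)]

n = 4  # hypothetical four-author paper
for scheme in (full_counting, fractional_counting, harmonic_counting):
    print(scheme.__name__, [round(c, 3) for c in scheme(n)])
# full_counting        [1.0, 1.0, 1.0, 1.0]
# fractional_counting  [0.25, 0.25, 0.25, 0.25]
# harmonic_counting    [0.48, 0.24, 0.16, 0.12]
```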
Ten things you must not do …

5. Don't rank scientists according to one indicator
• It is legitimate to rank candidates who have been short-listed, e.g.,
for a job position, according to relevant criteria, but the ranking should
not be based merely on bibliometrics.
• Internal or public ranking of research performance without any
particular practical goal (such as a candidateship) is problematic.
• There are also ethical issues and possible repercussions of the
emerging "champions-league mentality" on scientists' research
and communication behaviour (e.g., G  D, 2003).
• A further negative effect of ranking lists (as easily accessible,
ready-made data) is that they could be used for decision-making in
contexts other than those they were prepared for.
Ten things you must not do …

6. Don't merge incommensurable measures
• This problematic practice often begins with output reporting by the
scientists themselves.
◦ Citation counts appearing in CVs or applications are sometimes based
on different sources (WoS, Scopus, Google Scholar).
• Incommensurable sources combined with inappropriate reference
standards make bibliometric indicators almost completely useless
(cf. W, 1993).
• Do not allow users to merge bibliometric results from different
sources without having checked their compatibility (see the sketch below).
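A guard of the kind the slide recommends might look as follows; this is a minimal sketch, and the record format and source labels are assumptions for illustration:

```python
def merge_citation_counts(records):
    """Sum citation counts, refusing to aggregate across databases.

    records: list of dicts such as {"doi": ..., "source": ..., "citations": ...}
    """
    sources = {r["source"] for r in records}
    if len(sources) > 1:
        raise ValueError(f"incommensurable sources, refusing to merge: {sorted(sources)}")
    return sum(r["citations"] for r in records)

records = [
    {"doi": "10.1000/x1", "source": "WoS", "citations": 12},
    {"doi": "10.1000/x2", "source": "Google Scholar", "citations": 57},
]
try:
    merge_citation_counts(records)
except ValueError as err:
    print(err)  # incommensurable sources, refusing to merge: ['Google Scholar', 'WoS']
```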
Ten things you must not do …

7. Don't use flawed statistics
• Thresholds and reference standards for assignment to performance
classes are proven tools in bibliometrics (e.g., for identifying
industrious authors, or uncited and highly cited papers).
◦ This might even be more advantageous than using the original
observations.
• However, the recent literature offers a plethora of formulas for
"improved" measures or composite indicators that lack any serious
mathematical background.
• Small datasets are typical of this aggregation level: this can increase
bias or produce serious errors, and standard (mathematical) statistical
methods are often at or beyond their limits here.
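The small-sample problem is easy to demonstrate. The sketch below simulates skewed citation counts (a lognormal model, an assumption for illustration only) and compares the stability of the mean citation rate for small and large publication sets:

```python
import random

random.seed(42)

def simulated_citations():
    # Heavy-tailed counts, as empirical citation distributions typically are.
    return int(random.lognormvariate(1.0, 1.2))

for n_papers in (5, 5, 5, 200, 200, 200):
    sample = [simulated_citations() for _ in range(n_papers)]
    print(f"n={n_papers:3d}  mean citations per paper = {sum(sample) / n_papers:5.2f}")
# The three n=5 means scatter widely; the n=200 means are far more stable.
# Averages over a handful of papers are dominated by chance and outliers.
```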
Ten things you must not do …

8. Don't blindly trust one-hit wonders
• Do not evaluate scientists on the basis of one top paper, and do not
encourage scientists to prize visibility over targeting in their
publication strategy.
◦ Breakthroughs are often based on a single theoretical concept or way
of viewing the world. They may be published in a paper that then
attracts star attention.
◦ However, breakthroughs may also be based on a life-long piecing
together of evidence published in a series of moderately cited papers.
☞ Always weigh the importance of highly cited papers against the
value of sustained publishing. Don't look at top performance only;
consider the complete life's work or the research output created in the
time window under study.
Ten things you must not do …

9. Don't compare apples and oranges
• Figures are always comparable. But are their contents?
• Normalisation might help make measures comparable, but only like
with like.
• Research and communication are structured differently in different
domains. The analysis of research performance in the humanities,
mathematics and the life sciences needs different concepts and
approaches.
◦ Simply weighting publication types (monographs, articles, working
papers, etc.) and normalising citation rates will merely cover up, not
eliminate, the differences.
Ten things you must not do …

10. Don't allow deadlines and workload to compel you to drop
good practices
• Reviewers and users in research management are often overloaded
by the flood of submissions, applications and proposals, combined with
tight deadlines and a lack of personnel.
◦ Readily available data like IFs, gross citation counts and the h-index
are sometimes used to make decisions on proposals and candidates.
• Don't give in to time pressure and heavy workload when you have
responsible tasks in research assessment: the careers of scientists
and the future of research teams are at stake. Don't allow
tight deadlines to compel you to reduce evaluation to the use of
"handy" numbers.
Ten things you might do …

1. Individual-level bibliometrics, too, is statistics
• Basic counts (numbers of publications and citations) are important
measures in bibliometrics at the individual level.
• All statistics derived from these counts require a sufficiently large
publication output to allow valid conclusions.
• If this requirement is met, standard bibliometric techniques can be
applied, but special caution is always called for at this level:
◦ A longer publication period might also cover different phases of career
progression and activity dynamics in the academic life of a scientist.
◦ Assessment, external benchmarking and comparisons require the use
of appropriate reference standards, notably in interdisciplinary
research or pluridisciplinary activities (a common construction is
sketched below).
◦ Special attention should be paid to group authorship (group
composition and the contribution credit assigned to the author).
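One widely used way to build such a reference standard (a sketch of common practice, not something the slide prescribes) is the field- and age-normalised citation score of a set of $N$ papers:

$$
\mathrm{NCS} \;=\; \frac{1}{N}\sum_{i=1}^{N}\frac{c_i}{e_i},
$$

where $c_i$ is the number of citations received by paper $i$ and $e_i$ is the expected citation rate of papers of the same field, publication year and document type. For the small $N$ typical of individuals, the caveats of Don't #7 apply with full force.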
Ten things you might do …

2. Analyse the collaboration profiles of researchers
• Bibliometricians might analyse a scientist's position among
his/her collaborators and co-authors. In particular, the following
questions can be answered:
◦ Do authors preferably work alone, work in stable teams, or prefer
occasional collaboration?
◦ Who are the collaborators, and are the scientists 'junior', 'peer'
or 'senior' partners in these relationships?
• This might help recognise the scientist's own role in his/her research
environment, but final conclusions should be drawn in combination
with "qualitative methods".
Ten things you might do …

3. Always combine quantitative and qualitative methods
At this level of aggregation, the combination of bibliometrics with
traditional qualitative methods is not only important but indispensable.
• On the one hand, bibliometrics can be used to supplement the
sometimes subjectively coloured qualitative methods by providing
"objective" figures to underpin or confirm arguments, or to make the
assessment more concrete.
• On the other hand, if discrepancies between the two methods are found,
try to investigate and understand the possible reasons for the
differing results.
☞ This might even enrich and improve the assessment.
Ten things you might do …

4. Use citation context analysis
The concept of "citation context" analysis was first introduced in 1973 by
M and later suggested for use in Hungary (B, 2006).
• Here, citation context does not refer to the position at which a citation
is placed in an article, or its distance from other citations in the same
document; it covers the textual and content-related environment of the
citation in question.
• The aim is to show that a research result is not only referred to, but is
actually used in colleagues' research and/or discussed in the scholarly
literature. ⇒ The context might be positive or negative.
"Citation context" represents an approach in between qualitative and
quantitative methods and can be used in the case of individual proposals
and applications.
Ten things you might do …

5. Analyse subject profiles
Many scientists do research in an interdisciplinary environment. Even
their reviewers might work in different panels. The situation is even
worse for "polydisciplinary" scientists.
In principle, three basic approaches are possible:
1. Considering all activities as one total activity and "defining" an
adequate topic for benchmarking
2. Splitting up the profile into its components (which might, of course,
overlap) for assessment
3. Neglecting activities outside the actual scope of the assessment
Which of the above models should be applied depends on the task.
More research on these issues is urgently needed.
Ten things you might do …

6. Make an explicit choice between oeuvre and time-window analysis
The complete oeuvre of a scientist can serve as the basis of an individual
assessment. This option should preferably not be used in comparative
analyses.
• The reasons are differences in age, profile and position, and the
complexity of a scientist's career.
Time-window analysis is better suited for comparison, provided, of course,
that like is compared with like and that the publication periods and
citation windows conform (see the sketch below).
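A minimal sketch of a conforming selection: a fixed publication period combined with a fixed citation window per paper (the record format is a hypothetical construction for illustration):

```python
from dataclasses import dataclass

@dataclass
class Paper:
    year: int                  # publication year
    citations_by_year: dict    # {citing year: citations received that year}

def windowed_impact(papers, pub_start, pub_end, window=3):
    """Papers published in [pub_start, pub_end] and the citations they
    received within `window` years of publication (publication year included)."""
    selected = [p for p in papers if pub_start <= p.year <= pub_end]
    cites = sum(
        c for p in selected
        for y, c in p.citations_by_year.items()
        if p.year <= y < p.year + window
    )
    return len(selected), cites

papers = [
    Paper(2006, {2006: 1, 2007: 4, 2008: 3, 2012: 9}),  # late citations excluded
    Paper(2008, {2008: 0, 2009: 2, 2010: 5}),
]
n, cites = windowed_impact(papers, pub_start=2005, pub_end=2008)
print(f"{n} papers, {cites} citations within the 3-year window")  # 2 papers, 15 citations
```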
Ten things you might do …

7. Combine bibliometrics with career analysis
This applies to assessment on the basis of a scientist's oeuvre.
• Bibliometrics can be used to zoom in on a scientist's career. Here, the
evolution of publication activity, citation impact, mobility and
changing collaboration patterns can be monitored.
• It is not easy to quantify these observations, and the purpose is not to
build indicators for possible comparison but to use bibliometric data
to depict, visually and numerically, important aspects of the progress
of a scientist's career.
Some preliminary results have been published by Z  G (2012).
Ten things you might do …

8. Clean bibliographic data carefully and use external sources
Bibliometric data at this level are extremely sensitive. This implies that
the input data, too, must be absolutely clean and accurate.
• To achieve this, publication lists and CVs should be used where
possible. This is important for two reasons:
◦ External sources help improve the quality of the data.
◦ Responsibility is shared with the authors or institutes.
• If the assessment is not confidential, the researchers themselves might
be involved in the bibliometric exercise.
• Otherwise, scientists might be asked to provide data according to a
given standard protocol, which can and should be developed in
interaction between the user and the bibliometricians.
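One way to operationalise the use of researcher-supplied lists (a minimal sketch; the field names and matching rules are assumptions) is to reconcile the CV against database records and report anything that cannot be matched for manual checking:

```python
def normalise(title):
    """Crude title normalisation: lowercase, alphanumerics only."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def reconcile(cv_items, db_records):
    """Match CV items to database records by DOI first, then by title."""
    by_doi = {r["doi"]: r for r in db_records if r.get("doi")}
    by_title = {normalise(r["title"]): r for r in db_records}
    matched, unmatched = [], []
    for item in cv_items:
        hit = by_doi.get(item.get("doi")) or by_title.get(normalise(item["title"]))
        (matched if hit else unmatched).append(item)
    return matched, unmatched

cv = [
    {"title": "On the Use of Indicators", "doi": "10.1000/abc"},
    {"title": "A Paper Missing from the Database", "doi": None},
]
db = [{"title": "On the use of indicators", "doi": "10.1000/abc"}]
ok, todo = reconcile(cv, db)
print(len(ok), "matched;", len(todo), "to be verified by hand")  # 1 matched; 1 to be verified by hand
```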
Ten things you might do …

9. Even some "don'ts" are not taboo if properly applied
There is no reason to condemn the often incorrectly used Impact Factor
and h-index. They can provide supplementary information if they are
used in combination with qualitative methods and are not used as the
only decision criterion.
Example:
• Good practice (h-index as a supporting argument):
"The exceptionally high h-index of the applicant confirms his/her
international standing, attested to by our experts."
• Questionable use (h-index as the decision criterion):
"We are inclined to support this scientist because his/her h-index
distinctly exceeds that of all other applicants."
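For completeness, the h-index itself is simple to compute; the sketch below spells out the standard definition in code, which also underscores how little context the single number carries:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3: one heavily cited paper barely moves h
```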
Ten things you might do …

10. Help users to interpret and apply your results
At any level of aggregation, bibliometric methods should be
well documented. This applies above all to the level of individual
scientists and research teams.
• Bibliometricians should support users in a transparent manner so as to
guarantee the replicability of bibliometric data.
• They should issue clear instructions concerning the use and
interpretation of their results.
• They should also stress the limitations of the validity of these results.
Conclusions

• The (added) value of, or damage done by, bibliometrics in
individual-level evaluation depends on how and in what context
bibliometrics is applied.
• In most situations, the context should determine which bibliometric
methods are used and how they are applied.
• Soundness and validity of methods are all the more necessary at the
individual level, but not yet sufficient: accuracy, reliability and
completeness of sources are an absolute imperative at this level.
• We recommend always using individual-level bibliometrics on the
basis of the particular research portfolio. The best way to do this
may be to design individual researcher profiles that combine
bibliometrics with qualitative information about careers and
working contexts. Such a profile includes the research mission and
goals of the researcher.
Acknowledgement

The authors would like to thank I R and J G for
their contribution to the idea of a special session on this important issue,
as well as the organisers of the ISSI 2013 conference for giving us
the opportunity to organise this session.
We also wish to thank L W and R C for their
useful comments.

Contenu connexe

Tendances

Intellectual Honesty and Research Integrity.pptx
Intellectual Honesty and Research Integrity.pptxIntellectual Honesty and Research Integrity.pptx
Intellectual Honesty and Research Integrity.pptxsheelu57
 
Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Michaela Kurschildgen
 
Presentation on journal suggestion tool and journal finder
Presentation on journal suggestion tool and journal finderPresentation on journal suggestion tool and journal finder
Presentation on journal suggestion tool and journal findershilpasharma203749
 
Research metrics Apr2013
Research metrics Apr2013Research metrics Apr2013
Research metrics Apr2013Naz Torabi
 
Intellectual honesty and research integrity abu saleh
Intellectual honesty and research integrity abu salehIntellectual honesty and research integrity abu saleh
Intellectual honesty and research integrity abu salehAbuSaleh51
 
Micro-Teaching on RESEARCH METRICS in the Refresher Course on Digital Transf...
Micro-Teaching onRESEARCH METRICS in  the Refresher Course on Digital Transf...Micro-Teaching onRESEARCH METRICS in  the Refresher Course on Digital Transf...
Micro-Teaching on RESEARCH METRICS in the Refresher Course on Digital Transf...Surendra Kumar Pal
 
PLAGIARISM DETECTION & MANAGEMENT USING TURNITIN
PLAGIARISM DETECTION & MANAGEMENT USING TURNITINPLAGIARISM DETECTION & MANAGEMENT USING TURNITIN
PLAGIARISM DETECTION & MANAGEMENT USING TURNITINDr.Kamran Ishfaq
 
Writing a literature review
Writing a literature reviewWriting a literature review
Writing a literature reviewHazel Hall
 
RESEARCH METRICES 25.11.21.pptx
RESEARCH METRICES  25.11.21.pptxRESEARCH METRICES  25.11.21.pptx
RESEARCH METRICES 25.11.21.pptxmahitha22
 
RESEARCH METRICS h-INDEX.pptx
RESEARCH METRICS h-INDEX.pptxRESEARCH METRICS h-INDEX.pptx
RESEARCH METRICS h-INDEX.pptxdrpvczback
 
Vinay BHU PPT_Predatory Journals.ppt
Vinay BHU PPT_Predatory Journals.pptVinay BHU PPT_Predatory Journals.ppt
Vinay BHU PPT_Predatory Journals.pptVinay Kumar
 
Open Access Publishing
Open Access PublishingOpen Access Publishing
Open Access PublishingETH-Bibliothek
 

Tendances (20)

Understanding the Basics of Journal Metrics
Understanding the Basics of Journal MetricsUnderstanding the Basics of Journal Metrics
Understanding the Basics of Journal Metrics
 
Intellectual Honesty and Research Integrity.pptx
Intellectual Honesty and Research Integrity.pptxIntellectual Honesty and Research Integrity.pptx
Intellectual Honesty and Research Integrity.pptx
 
Presentation on literature review
Presentation on literature reviewPresentation on literature review
Presentation on literature review
 
L 9 (10-05-21) literature review in research methodology - copy
L 9 (10-05-21) literature review in research methodology - copyL 9 (10-05-21) literature review in research methodology - copy
L 9 (10-05-21) literature review in research methodology - copy
 
Publishing Your Research
Publishing Your Research Publishing Your Research
Publishing Your Research
 
Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators Scopus: Research Metrics and Indicators
Scopus: Research Metrics and Indicators
 
Presentation on journal suggestion tool and journal finder
Presentation on journal suggestion tool and journal finderPresentation on journal suggestion tool and journal finder
Presentation on journal suggestion tool and journal finder
 
Research Methodology-02: Quality Indices
Research Methodology-02: Quality IndicesResearch Methodology-02: Quality Indices
Research Methodology-02: Quality Indices
 
Research metrics Apr2013
Research metrics Apr2013Research metrics Apr2013
Research metrics Apr2013
 
Intellectual honesty and research integrity abu saleh
Intellectual honesty and research integrity abu salehIntellectual honesty and research integrity abu saleh
Intellectual honesty and research integrity abu saleh
 
Micro-Teaching on RESEARCH METRICS in the Refresher Course on Digital Transf...
Micro-Teaching onRESEARCH METRICS in  the Refresher Course on Digital Transf...Micro-Teaching onRESEARCH METRICS in  the Refresher Course on Digital Transf...
Micro-Teaching on RESEARCH METRICS in the Refresher Course on Digital Transf...
 
PLAGIARISM DETECTION & MANAGEMENT USING TURNITIN
PLAGIARISM DETECTION & MANAGEMENT USING TURNITINPLAGIARISM DETECTION & MANAGEMENT USING TURNITIN
PLAGIARISM DETECTION & MANAGEMENT USING TURNITIN
 
Writing a literature review
Writing a literature reviewWriting a literature review
Writing a literature review
 
Research metrices (cite score)
Research metrices (cite score)Research metrices (cite score)
Research metrices (cite score)
 
RESEARCH METRICES 25.11.21.pptx
RESEARCH METRICES  25.11.21.pptxRESEARCH METRICES  25.11.21.pptx
RESEARCH METRICES 25.11.21.pptx
 
Predatory publishing
Predatory publishingPredatory publishing
Predatory publishing
 
RESEARCH METRICS h-INDEX.pptx
RESEARCH METRICS h-INDEX.pptxRESEARCH METRICS h-INDEX.pptx
RESEARCH METRICS h-INDEX.pptx
 
Vinay BHU PPT_Predatory Journals.ppt
Vinay BHU PPT_Predatory Journals.pptVinay BHU PPT_Predatory Journals.ppt
Vinay BHU PPT_Predatory Journals.ppt
 
Open Access Publishing
Open Access PublishingOpen Access Publishing
Open Access Publishing
 
REDUNDANT PUBLICATION IN RESEARCH
REDUNDANT PUBLICATION IN RESEARCHREDUNDANT PUBLICATION IN RESEARCH
REDUNDANT PUBLICATION IN RESEARCH
 

En vedette

Author Level Bibliometrics
Author Level BibliometricsAuthor Level Bibliometrics
Author Level BibliometricsPaul Wouters
 
конкурс
конкурсконкурс
конкурсimigalin
 
The Mathematics Syllabus that has been adopted worldwide
The Mathematics Syllabus that has been adopted worldwideThe Mathematics Syllabus that has been adopted worldwide
The Mathematics Syllabus that has been adopted worldwideDavid Yeng
 
Henryk Konrad
Henryk KonradHenryk Konrad
Henryk Konradpzgomaz
 
Webfil digital _ fail safe multiplexer __ ufsbi for sge
Webfil   digital  _ fail safe multiplexer __ ufsbi for sgeWebfil   digital  _ fail safe multiplexer __ ufsbi for sge
Webfil digital _ fail safe multiplexer __ ufsbi for sgepuneet kumar rai
 
WeiResearch Social Interest Graph
WeiResearch Social Interest GraphWeiResearch Social Interest Graph
WeiResearch Social Interest GraphRyan Xia
 
Nuovi schemi bilancio, nota e relazione sulla gestione 2015
Nuovi schemi bilancio, nota e relazione sulla gestione 2015Nuovi schemi bilancio, nota e relazione sulla gestione 2015
Nuovi schemi bilancio, nota e relazione sulla gestione 2015Alessandro Porro
 
DeVry BUSN379 Week 5 homework es sample
DeVry BUSN379 Week 5 homework es sampleDeVry BUSN379 Week 5 homework es sample
DeVry BUSN379 Week 5 homework es sampleKim4142
 
Vikas_Bajpai_090915
Vikas_Bajpai_090915Vikas_Bajpai_090915
Vikas_Bajpai_090915Vikas Bajpai
 
TEKS NEGOSIASI
TEKS NEGOSIASITEKS NEGOSIASI
TEKS NEGOSIASISri Utanti
 
Opportunities for children everywhere to receive quality education
Opportunities for children everywhere to receive quality educationOpportunities for children everywhere to receive quality education
Opportunities for children everywhere to receive quality educationDavid Yeng
 
Collingwood Library 2010
Collingwood Library 2010Collingwood Library 2010
Collingwood Library 2010MarciaMcGinley
 
Billgatesppt 101108135150-phpapp02
Billgatesppt 101108135150-phpapp02Billgatesppt 101108135150-phpapp02
Billgatesppt 101108135150-phpapp02Seth Sibangan
 

En vedette (17)

Author Level Bibliometrics
Author Level BibliometricsAuthor Level Bibliometrics
Author Level Bibliometrics
 
Gail presentation
Gail presentationGail presentation
Gail presentation
 
конкурс
конкурсконкурс
конкурс
 
The Mathematics Syllabus that has been adopted worldwide
The Mathematics Syllabus that has been adopted worldwideThe Mathematics Syllabus that has been adopted worldwide
The Mathematics Syllabus that has been adopted worldwide
 
Henryk Konrad
Henryk KonradHenryk Konrad
Henryk Konrad
 
Webfil digital _ fail safe multiplexer __ ufsbi for sge
Webfil   digital  _ fail safe multiplexer __ ufsbi for sgeWebfil   digital  _ fail safe multiplexer __ ufsbi for sge
Webfil digital _ fail safe multiplexer __ ufsbi for sge
 
WeiResearch Social Interest Graph
WeiResearch Social Interest GraphWeiResearch Social Interest Graph
WeiResearch Social Interest Graph
 
Nuovi schemi bilancio, nota e relazione sulla gestione 2015
Nuovi schemi bilancio, nota e relazione sulla gestione 2015Nuovi schemi bilancio, nota e relazione sulla gestione 2015
Nuovi schemi bilancio, nota e relazione sulla gestione 2015
 
DeVry BUSN379 Week 5 homework es sample
DeVry BUSN379 Week 5 homework es sampleDeVry BUSN379 Week 5 homework es sample
DeVry BUSN379 Week 5 homework es sample
 
Peta watak
Peta watakPeta watak
Peta watak
 
Vikas_Bajpai_090915
Vikas_Bajpai_090915Vikas_Bajpai_090915
Vikas_Bajpai_090915
 
TEKS NEGOSIASI
TEKS NEGOSIASITEKS NEGOSIASI
TEKS NEGOSIASI
 
Opportunities for children everywhere to receive quality education
Opportunities for children everywhere to receive quality educationOpportunities for children everywhere to receive quality education
Opportunities for children everywhere to receive quality education
 
Exposicion lalo
Exposicion laloExposicion lalo
Exposicion lalo
 
Collingwood Library 2010
Collingwood Library 2010Collingwood Library 2010
Collingwood Library 2010
 
raf sistemleri
raf sistemleriraf sistemleri
raf sistemleri
 
Billgatesppt 101108135150-phpapp02
Billgatesppt 101108135150-phpapp02Billgatesppt 101108135150-phpapp02
Billgatesppt 101108135150-phpapp02
 

Similaire à The dos and don'ts in individudal level bibliometrics

Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)Aboul Ella Hassanien
 
Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009William Kritsonis
 
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comCalais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comWilliam Kritsonis
 
Research and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxResearch and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxVijayKumar17076
 
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Nicolas Robinson-Garcia
 
Durham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesDurham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesJamie Bisset
 
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...hashem Al-Shamiri
 
2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences ResearchNUI Galway
 
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Yasar Tonta
 
DORA and the reinvention of research assessment
DORA and the reinvention of research assessmentDORA and the reinvention of research assessment
DORA and the reinvention of research assessmentMark Patterson
 
Quantitative research
Quantitative researchQuantitative research
Quantitative researchTooba Kanwal
 
TYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfTYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfJubilinAlbania
 
In metrics we trust?
In metrics we trust?In metrics we trust?
In metrics we trust?ORCID, Inc
 
Altmetrics: An Overview
Altmetrics: An OverviewAltmetrics: An Overview
Altmetrics: An OverviewPallab Pradhan
 
Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are suzannewarch
 
Types of business research methods
Types of business research methodsTypes of business research methods
Types of business research methodsLal Sivaraj
 

Similaire à The dos and don'ts in individudal level bibliometrics (20)

Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)Journal and author impact measures Assessing your impact (h-index and beyond)
Journal and author impact measures Assessing your impact (h-index and beyond)
 
Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009Calais, gerald j[1]. h index-nftej-v19-n3-2009
Calais, gerald j[1]. h index-nftej-v19-n3-2009
 
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.comCalais, Gerald j[1]. Teacher Education, www.nationalforum.com
Calais, Gerald j[1]. Teacher Education, www.nationalforum.com
 
Preparing research proposal icphi2013
Preparing research proposal icphi2013Preparing research proposal icphi2013
Preparing research proposal icphi2013
 
Research and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptxResearch and Publication Ethics_Misconduct.pptx
Research and Publication Ethics_Misconduct.pptx
 
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...Unveiling the Ecosystem of Science: How can we characterize and assess divers...
Unveiling the Ecosystem of Science: How can we characterize and assess divers...
 
Anu digital research literacies
Anu digital research literaciesAnu digital research literacies
Anu digital research literacies
 
Durham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library SlidesDurham Part Time Distance Research Student 2019: Sample Library Slides
Durham Part Time Distance Research Student 2019: Sample Library Slides
 
Lern, jan 2015, digital media slides
Lern, jan 2015, digital media slidesLern, jan 2015, digital media slides
Lern, jan 2015, digital media slides
 
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
Alternatives to Empirical Research (Dissertation by literature review) Dec 23...
 
2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research2012.06.07 Maximising the Impact of Social Sciences Research
2012.06.07 Maximising the Impact of Social Sciences Research
 
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
Can Bibliometric and Scientometric Measures be Used to Assess Research Quali...
 
DORA and the reinvention of research assessment
DORA and the reinvention of research assessmentDORA and the reinvention of research assessment
DORA and the reinvention of research assessment
 
Quantitative research
Quantitative researchQuantitative research
Quantitative research
 
TYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdfTYPES_OFBUSINESSRESEARCH.pdf
TYPES_OFBUSINESSRESEARCH.pdf
 
In metrics we trust?
In metrics we trust?In metrics we trust?
In metrics we trust?
 
Altmetrics: An Overview
Altmetrics: An OverviewAltmetrics: An Overview
Altmetrics: An Overview
 
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
Health Evidence™ Quality Assessment Tool (Sample Answers - May 10, 2018 webinar)
 
Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are Develop three research questions on a topic for which you are
Develop three research questions on a topic for which you are
 
Types of business research methods
Types of business research methodsTypes of business research methods
Types of business research methods
 

Dernier

Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobeapidays
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWERMadyBayot
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businesspanagenda
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesrafiqahmad00786416
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...apidays
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FMESafe Software
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...apidays
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingEdi Saputra
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAndrey Devyatkin
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...DianaGray10
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProduct Anonymous
 

Dernier (20)

Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 

The dos and don'ts in individudal level bibliometrics

  • 1.    ’     1 ,  2 1Centre for R&D Monitoring and Dept MSI, KU Leuven, Belgium 2Centre for Science and Technology Studies, Leiden University, The Netherlands
  • 2.  Introduction  In the last quarter of the 20th century, bibliometrics evolved from a sub-discipline of library and information science to an instrument for evaluation and benchmarking (G, 2006; W, 2013). • As a consequence, several scientometric tools became used in a context for which they were not designed (e.g., JIF). • Due to the dynamics in evaluation, the focus has shied away from macro studies towards meso and micro studies of both actors and topics. • More recently, the evaluation of research teams and individual scientists has become a central issue in services based on bibliometric data. • The rise of social networking technologies in which all types of activities are measured and monitored has promoted auto-evaluation with tools such as Google Scholar, Publish or Perish, Scholarometer. G  W, The dos and don’ts, Vienna, 2013 2/25
  • 3.  Introduction  In the last quarter of the 20th century, bibliometrics evolved from a sub-discipline of library and information science to an instrument for evaluation and benchmarking (G, 2006; W, 2013). • As a consequence, several scientometric tools became used in a context for which they were not designed (e.g., JIF). • Due to the dynamics in evaluation, the focus has shied away from macro studies towards meso and micro studies of both actors and topics. • More recently, the evaluation of research teams and individual scientists has become a central issue in services based on bibliometric data. • The rise of social networking technologies in which all types of activities are measured and monitored has promoted auto-evaluation with tools such as Google Scholar, Publish or Perish, Scholarometer. G  W, The dos and don’ts, Vienna, 2013 2/25
  • 4. Introduction There is not one typical individual-level bibliometrics since there are different goals, which range from the individual assessment of proposals or the oeuvre of applicants over intra-institutional research coordination to the comparative evaluation of individuals and benchmarking of research teams. As a consequence, common standards for all tasks at the individual level do not (yet) exist. ☞ Each respective task, the concrete field of application requires a kind of flexibility on the part of bibliometricians but also the maximum of precision and accuracy. In the following we will summarise some important guidelines for the use of bibliometrics in the context of the evaluation of individual scientists, leading to ten dos and ten don’ts in individual level bibliometrics . G  W, The dos and don’ts, Vienna, 2013 3/25
  • 5. Introduction There is not one typical individual-level bibliometrics since there are different goals, which range from the individual assessment of proposals or the oeuvre of applicants over intra-institutional research coordination to the comparative evaluation of individuals and benchmarking of research teams. As a consequence, common standards for all tasks at the individual level do not (yet) exist. ☞ Each respective task, the concrete field of application requires a kind of flexibility on the part of bibliometricians but also the maximum of precision and accuracy. In the following we will summarise some important guidelines for the use of bibliometrics in the context of the evaluation of individual scientists, leading to ten dos and ten don’ts in individual level bibliometrics . G  W, The dos and don’ts, Vienna, 2013 3/25
  • 6. Ten things you must not do …
1. Don't reduce individual performance to a single number
• Research performance is influenced by many factors such as age, time window, position and research domain. Even within the same scholarly environment and position, interaction with colleagues, co-operation, mobility and activity profiles might differ considerably.
• A single number (even if based on sound methods and correct data) cannot suffice to reflect the complexity of research activity, its background and its impact adequately.
• Using such numbers to score or benchmark researchers requires taking the working context of the researcher into consideration (see the sketch below).
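As an illustration (not part of the original slides), the following minimal Python sketch reports a profile of complementary indicators rather than collapsing performance into one score; the citation counts are invented example data.

```python
# A minimal sketch, assuming invented citation counts: report a profile of
# complementary indicators instead of one single number.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

def indicator_profile(citations):
    n = len(citations)
    return {
        "publications": n,
        "total_citations": sum(citations),
        "citations_per_paper": sum(citations) / n if n else 0.0,
        "h_index": h_index(citations),
        "uncited_share": sum(1 for c in citations if c == 0) / n if n else 0.0,
    }

if __name__ == "__main__":
    example = [0, 1, 3, 3, 7, 12, 40]  # hypothetical oeuvre
    for key, value in indicator_profile(example).items():
        print(f"{key:20s} {value}")
```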
  • 7. Ten things you must not do …
2. Don't use IFs as measures of quality
• Originally created to supplement ISI's Science Citation Index, the IF evolved into an evaluation tool and seems to have become the "common currency of scientific quality" in research evaluation, influencing scientists' funding and careers (S, 2004).
• However, the Impact Factor is by no means a performance measure of individual articles, nor of the authors of these papers (e.g., S, 1989, 1997).
• Most recently, campaigns against the use of the IF in individual-level research evaluation have emerged both from scientists (who feel themselves victims of evaluation) and from bibliometricians themselves (e.g., B  HL, 2012; B, 2013).
◦ The San Francisco Declaration on Research Assessment (DORA) has started an online campaign against the use of the IF for the evaluation of researchers and research groups.
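A small illustration of why a journal-level mean says little about individual articles: citation distributions are highly skewed, so most papers lie below the mean. The numbers below are invented.

```python
# Illustrative sketch with invented numbers: an IF-like journal mean
# misrepresents the typical article because the distribution is skewed.

from statistics import mean, median

# hypothetical citation counts of one journal's articles in the IF window
citations = [0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 15, 120]

if_like_mean = mean(citations)
print(f"journal mean (IF-like):  {if_like_mean:.1f}")
print(f"journal median:          {median(citations)}")
below = sum(1 for c in citations if c < if_like_mean)
print(f"articles below the mean: {below} of {len(citations)}")
```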
  • 8. Ten things you must not do …
3. Don't apply (hidden) "bibliometric filters" for selection
• Weights, thresholds or filters are defined for in-house evaluation or for preselecting material for external use. Some examples:
◦ A minimum IF might be required for inclusion in official publication lists.
◦ A minimum h-index is required for receiving a doctoral degree or for considering a grant application.
◦ A certain number of citations is necessary for promotion or for possible approval of applications.
This practice is sometimes questionable: if filters are set, they should always support human judgement and not pre-empt it (see the sketch below).
☞ The psychological effect of using such filters should also not be underestimated.
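A hedged sketch of the "support, don't pre-empt" principle: if a threshold is used at all, let it flag cases for closer human reading rather than silently discarding them. The threshold value and record format are assumptions for illustration only.

```python
# Hypothetical sketch: a threshold that only flags cases for human review.
# No applicant is removed by the code; the flag informs the reviewer.

THRESHOLD_H = 10  # arbitrary example cutoff, not a recommendation

applicants = [
    {"name": "A", "h_index": 22},
    {"name": "B", "h_index": 9},
    {"name": "C", "h_index": 11},
]

for person in applicants:
    flag = "review closely" if person["h_index"] < THRESHOLD_H else "meets screen"
    print(f"{person['name']}: h={person['h_index']:2d} -> {flag}")
```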
  • 9. Ten things you must not do …
4. Don't apply arbitrary weights to co-authorship
A known issue in bibliometrics is how to properly credit authors for their contribution to papers they have co-authored.
• There is no general solution to the problem.
• Only the authors themselves can judge their own contribution.
• In some cases, pre-set weights based on the sequence of co-authors are defined and applied as strict rules.
• The sequence of co-authors, as well as the special "function" of the corresponding author, does not always reflect the size of the real contribution.
• Most algorithms are, in practice, rather arbitrary and at this level possibly misleading (see the comparison below).
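The following sketch compares three common credit schemes (full, fractional, harmonic counting). None of them is endorsed here; the point is that the schemes disagree, so any pre-set weighting is to some degree arbitrary.

```python
# Sketch: three co-authorship credit schemes give an author in the same
# byline position very different shares of credit.

def full_credit(position, n):
    return 1.0

def fractional_credit(position, n):
    return 1.0 / n

def harmonic_credit(position, n):
    # weight 1/position, normalised over all n authors
    return (1.0 / position) / sum(1.0 / k for k in range(1, n + 1))

n_authors = 4
for pos in range(1, n_authors + 1):
    print(f"author {pos}: full={full_credit(pos, n_authors):.2f} "
          f"fractional={fractional_credit(pos, n_authors):.2f} "
          f"harmonic={harmonic_credit(pos, n_authors):.2f}")
```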
  • 10. Ten things you must not do …
5. Don't rank scientists according to one indicator
• It is legitimate to rank candidates who have been short-listed, e.g., for a job position, according to relevant criteria, but the ranking should not be based merely on bibliometrics.
• Internal or public ranking of research performance without any particular practical goal (such as a candidateship) is problematic.
• There are also ethical issues and possible repercussions of the emerging "champions-league mentality" on scientists' research and communication behaviour (e.g., G  D, 2003).
• A further negative effect of ranking lists (as easily accessible and ready-made data) is that they could be used for decision-making in contexts other than those they were prepared for.
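One concrete way to see the problem, with invented data: the same three researchers end up in a different order depending on which single indicator is chosen.

```python
# Invented example: single-indicator rankings are unstable across indicators.

researchers = {
    "A": {"publications": 40, "citations": 300, "h_index": 12},
    "B": {"publications": 15, "citations": 450, "h_index": 9},
    "C": {"publications": 60, "citations": 280, "h_index": 10},
}

for indicator in ("publications", "citations", "h_index"):
    order = sorted(researchers, key=lambda r: researchers[r][indicator],
                   reverse=True)
    print(f"ranked by {indicator:12s}: {' > '.join(order)}")
```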
  • 11. Ten things you must not do …
6. Don't merge incommensurable measures
• This problematic practice often begins with output reporting by the scientists themselves.
◦ Citation counts appearing in CVs or applications are sometimes based on different sources (WoS, Scopus, Google Scholar).
• Incommensurable sources combined with inappropriate reference standards make bibliometric indicators almost completely useless (cf. W, 1993).
• Do not allow users to merge bibliometric results from different sources without having checked their compatibility.
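A minimal sketch of such a compatibility check, with an assumed record format: refuse to add up citation counts whose source databases differ, instead of silently summing WoS, Scopus and Google Scholar numbers.

```python
# Hypothetical sketch: block merging of citation counts from mixed sources.

records = [
    {"title": "Paper 1", "citations": 34, "source": "WoS"},
    {"title": "Paper 2", "citations": 51, "source": "Scopus"},
    {"title": "Paper 3", "citations": 98, "source": "Google Scholar"},
]

def merged_total(recs):
    sources = {r["source"] for r in recs}
    if len(sources) > 1:
        raise ValueError(f"incommensurable sources: {sorted(sources)}")
    return sum(r["citations"] for r in recs)

try:
    print(f"total citations: {merged_total(records)}")
except ValueError as err:
    print(f"refusing to merge -- {err}")
```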
  • 12. Ten things you must not do …
7. Don't use flawed statistics
• Thresholds and reference standards for the assignment to performance classes are proven tools in bibliometrics (e.g., for identifying industrious authors, or uncited and highly cited papers).
◦ This might even be more advantageous than using the original observations.
• However, the recent literature offers a plethora of formulas for "improved" measures or composite indicators lacking any serious mathematical background.
• Small datasets are typical of this aggregation level: this may increase bias or result in serious errors, and standard (mathematical) statistical methods are often at or beyond their limits here.
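To make the small-sample point concrete, a bootstrap sketch with an invented six-paper oeuvre: the confidence interval around the mean citation rate is so wide that the point estimate should not be over-read.

```python
# Sketch (invented data): bootstrap CI of the mean citation rate for a
# small oeuvre, showing how uncertain individual-level statistics are.

import random

random.seed(42)
citations = [0, 2, 3, 5, 9, 31]  # hypothetical small oeuvre
n, runs = len(citations), 10_000

means = sorted(sum(random.choices(citations, k=n)) / n for _ in range(runs))
low, high = means[int(0.025 * runs)], means[int(0.975 * runs)]
print(f"mean = {sum(citations)/n:.1f}, "
      f"95% bootstrap CI approx. [{low:.1f}, {high:.1f}]")
```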
  • 13. Ten things you must not do …
8. Don't blindly trust one-hit wonders
• Do not evaluate scientists on the basis of one top paper, and do not encourage scientists to prize visibility over targeting in their publication strategy.
◦ Breakthroughs are often based on a single theoretical concept or way of viewing the world. They may be published in a paper that then attracts star attention.
◦ However, breakthroughs may also be based on a life-long piecing together of evidence published in a series of moderately cited papers.
☞ Always weigh the importance of highly cited papers against the value of sustained publishing. Don't look at top performance only; consider the complete life work or the research output created in the time window under study.
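An invented contrast that the slide's advice guards against: two oeuvres with the same citation total but very different shapes. A top-paper-only view favours A; a sustained-output view favours B; only the full distribution tells the whole story.

```python
# Invented example: equal totals, very different publication profiles.

oeuvre_a = [150, 2, 1, 0, 0, 0]      # one-hit wonder
oeuvre_b = [30, 28, 25, 24, 23, 23]  # sustained impact

for name, cites in (("A", oeuvre_a), ("B", oeuvre_b)):
    print(f"{name}: total={sum(cites)}, max={max(cites)}, "
          f"papers with >= 10 citations: {sum(c >= 10 for c in cites)}")
```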
  • 15. Ten things you must not do …
9. Don't compare apples and oranges
• Figures are always comparable. And their contents?
• Normalisation might help make measures comparable, but only like with like.
• Research and communication in different domains are structured differently. The analysis of research performance in the humanities, mathematics and the life sciences needs different concepts and approaches.
◦ Simply weighting publication types (monographs, articles, working papers, etc.) and normalising citation rates will merely cover up, but not eliminate, the differences.
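A minimal sketch of field normalisation, with invented reference values: dividing a paper's citations by the mean citation rate of its own field makes numbers comparable across fields, yet, as the slide warns, it does not remove the structural differences between, say, mathematics and the life sciences.

```python
# Sketch with hypothetical field reference values: the same raw count of
# 6 citations means very different things in different fields.

FIELD_MEAN = {"mathematics": 3.2, "cell biology": 18.5}  # invented

papers = [
    {"field": "mathematics", "citations": 6},
    {"field": "cell biology", "citations": 6},
]

for p in papers:
    ncs = p["citations"] / FIELD_MEAN[p["field"]]
    print(f"{p['field']:12s}: {p['citations']} citations -> "
          f"normalised score {ncs:.2f}")
```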
  • 16. Ten things you must not do …
10. Don't allow deadlines and workload to compel you to drop good practices
• Reviewers and users in research management are often overwhelmed by the flood of submissions, applications and proposals, combined with tight deadlines and a lack of personnel.
◦ Readily available data such as IFs, gross citation counts and the h-index are sometimes used to make decisions on proposals and candidates.
• Don't give in to time pressure and heavy workload when you have responsible tasks in research assessment, with the careers of scientists and the future of research teams at stake, and don't allow tight deadlines to compel you to reduce evaluation to the use of "handy" numbers.
  • 17. Ten things you might do …
1. Individual-level bibliometrics is also statistics
• Basic measures (numbers of publications and citations) are important measures in bibliometrics at the individual level.
• All statistics derived from these counts require a sufficiently large publication output to allow valid conclusions.
• If this condition is met, standard bibliometric techniques can be applied, but special caution is always called for at this level:
◦ A longer publication period might also cover different career stages and activity dynamics in the academic life of scientists.
◦ Assessment, external benchmarking and comparisons require the use of appropriate reference standards, notably in interdisciplinary research or pluridisciplinary activities.
◦ Special attention should be paid to group authorship (group composition and the contribution credit assigned to the author).
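A hedged sketch of both cautions combined: guard the statistic behind a minimum sample size and express it against a reference standard. Both the minimum and the reference value below are illustrative assumptions, not fixed rules.

```python
# Sketch: refuse statistics on outputs too small for valid conclusions,
# and normalise against an (assumed) field reference value.

MIN_PAPERS = 20        # assumed minimum for meaningful statistics
REFERENCE_MEAN = 9.4   # hypothetical expected citation rate of the field

def mean_normalised_citation_rate(citations):
    if len(citations) < MIN_PAPERS:
        raise ValueError(
            f"only {len(citations)} papers; too few for valid conclusions")
    return (sum(citations) / len(citations)) / REFERENCE_MEAN

try:
    print(mean_normalised_citation_rate([4, 9, 0, 13]))
except ValueError as err:
    print(f"refused: {err}")
```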
  • 18. Ten things you might do …
2. Analyse collaboration profiles of researchers
• Bibliometricians might analyse a scientist's position among his/her collaborators and co-authors. In particular, the following questions can be answered:
◦ Do authors prefer to work alone, work in stable teams, or collaborate occasionally?
◦ Who are the collaborators, and are the scientists 'junior', 'peer' or 'senior' partners in these relationships?
• This might help recognise the scientist's own role in his/her research environment, but final conclusions should be drawn in combination with "qualitative methods".
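A simple sketch of the first question, with invented author lists: counting how often each co-author recurs across the oeuvre separates solo work, stable partnerships and occasional collaboration.

```python
# Sketch (invented data): a basic collaboration profile from co-author lists.

from collections import Counter

FOCAL = "Smith"
papers = [
    ["Smith"], ["Smith", "Lee"], ["Smith", "Lee", "Kim"],
    ["Smith", "Lee"], ["Smith", "Novak"],
]

coauthors = Counter(a for p in papers for a in p if a != FOCAL)
solo = sum(1 for p in papers if p == [FOCAL])
stable = [a for a, k in coauthors.items() if k >= 3]

print(f"solo papers: {solo} of {len(papers)}")
print(f"recurring partners (>= 3 joint papers): {stable}")
print(f"occasional partners: {[a for a, k in coauthors.items() if k < 3]}")
```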
  • 19. Ten things you might do …
3. Always combine quantitative and qualitative methods
At this level of aggregation, the combination of bibliometrics with traditional qualitative methods is not only important but indispensable.
• On the one hand, bibliometrics can be used to supplement the sometimes subjectively coloured qualitative methods by providing "objective" figures to underpin and confirm arguments, or to make the assessment more concrete.
• On the other hand, if discrepancies between the two methods are found, try to investigate and understand the possible reasons for the differing results.
☞ This might even enrich and improve the assessment.
  • 20. Ten things you might do …
4. Use citation context analysis
The concept of "citation context" analysis was first introduced in 1973 by M and later suggested for use in Hungary (B, 2006).
• Here citation context does not mean the position at which a citation is placed in an article, or its distance from other citations in the same document. It covers the textual and content-related environment of the citation in question.
• The aim is to show that a research result is not merely referred to, but is indeed used in colleagues' research and/or is the subject of scholarly discussion.
⇒ The context might be positive or negative.
"Citation context" represents an approach in between qualitative and quantitative methods and can be used in the case of individual proposals and applications.
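A toy sketch of the idea: classify the sentence surrounding a citation by simple cue words. Real citation-context studies use far richer linguistic analysis; the cue lists and sentences below are illustrative assumptions only.

```python
# Toy sketch: crude positive/negative classification of citation contexts.

POSITIVE_CUES = ("builds on", "confirms", "extends", "following")
NEGATIVE_CUES = ("contradicts", "fails to", "in contrast to", "disputed")

def classify_context(sentence):
    s = sentence.lower()
    if any(cue in s for cue in NEGATIVE_CUES):
        return "negative"
    if any(cue in s for cue in POSITIVE_CUES):
        return "positive"
    return "neutral/perfunctory"

examples = [
    "Our model builds on the estimator of [12].",
    "This result contradicts the findings reported in [12].",
    "Several methods exist [5, 12, 19].",
]
for sentence in examples:
    print(f"{classify_context(sentence):22s} <- {sentence}")
```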
  • 22. Ten things you might do …
5. Analyse subject profiles
Many scientists do research in an interdisciplinary environment; even their reviewers might work in different panels. The situation is still more difficult for "polydisciplinary" scientists. In principle, three basic approaches are possible:
1. Considering all activities as one total activity and "defining" an adequate topic for benchmarking
2. Splitting up the profile into its components (which might, of course, overlap) for assessment
3. Neglecting activities outside the actual scope of assessment
Which of the above models should be applied depends on the task. More research on these issues is urgently needed.
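A small sketch of the second approach, with invented subject labels: splitting a publication profile into subject components, which, as the slide notes, may overlap because a paper can carry several labels.

```python
# Sketch of profile splitting (invented labels); components may overlap.

papers = [
    {"title": "P1", "subjects": {"physics"}},
    {"title": "P2", "subjects": {"physics", "computer science"}},
    {"title": "P3", "subjects": {"computer science"}},
]

components = {}
for p in papers:
    for subject in p["subjects"]:
        components.setdefault(subject, []).append(p["title"])

for subject, titles in sorted(components.items()):
    print(f"{subject:18s}: {titles}")
```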
  • 24. Ten things you might do …
6. Make an explicit choice for oeuvre or time-window analysis
The complete oeuvre of a scientist can serve as the basis of an individual assessment. This option should, however, not be used in comparative analysis.
• The reason lies in differences in age, profile and position, and in the complexity of a scientist's career.
Time-window analysis is better suited for comparison, provided, of course, that like is compared with like and the publication periods and citation windows conform.
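A minimal sketch of such a time-window setup, with invented records: fix a publication window and a uniform citation window so that like is compared with like.

```python
# Sketch (invented data): fixed publication window, uniform citation window.

PUB_WINDOW = (2005, 2008)  # publication years included
CITATION_YEARS = 3         # citations counted in the 3 years after publication

# hypothetical: (publication year, citations per year after publication)
papers = [
    (2004, [1, 2, 2, 1]),
    (2006, [0, 3, 4, 6]),
    (2007, [2, 2, 1, 0]),
]

selected = [
    (year, sum(per_year[:CITATION_YEARS]))
    for year, per_year in papers
    if PUB_WINDOW[0] <= year <= PUB_WINDOW[1]
]
for year, cites in selected:
    print(f"published {year}: {cites} citations within {CITATION_YEARS} years")
```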
  • 26. Ten things you might do …
7. Combine bibliometrics with career analysis
This applies to assessment on the basis of a scientist's oeuvre.
• Bibliometrics can be used to zoom in on a scientist's career. Here the evolution of publication activity, citation impact, mobility and changing collaboration patterns can be monitored.
• It is not easy to quantify the observations, and the purpose is not to build indicators for possible comparison, but to use bibliometric data to depict, visually and numerically, important aspects of the progress of a scientist's career.
Some preliminary results have been published by Z  G (2012).
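A hedged sketch of such a depiction, with invented data: aggregating output and impact per year gives a numeric and crude visual picture of a career trajectory without reducing it to one comparative indicator.

```python
# Sketch (invented data): a per-year career trajectory, not an indicator.

papers = [  # hypothetical (publication year, citations) pairs
    (2005, 3), (2005, 0), (2007, 12), (2008, 5), (2008, 7), (2010, 1),
]

by_year = {}
for year, cites in papers:
    pubs, total = by_year.get(year, (0, 0))
    by_year[year] = (pubs + 1, total + cites)

for year in sorted(by_year):
    pubs, total = by_year[year]
    print(f"{year}: {pubs} papers, {total} citations, bar: {'#' * pubs}")
```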
  • 27. Ten things you might do …
8. Clean bibliographic data carefully and use external sources
Bibliometric data at this level are extremely sensitive. This implies that the input data, too, must be absolutely clean and accurate.
• To achieve clean data, publication lists and CVs should be used where possible. This is important for two reasons:
◦ External sources help improve the quality of the data sources.
◦ Responsibility is shared with the authors or institutes.
• If the assessment is not confidential, researchers themselves might be involved in the bibliometric exercise.
• Otherwise, scientists might be asked to provide data according to a given standard protocol, which can and should be developed in interaction between the user and bibliometricians.
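A hedged sketch of the simplest such cross-check: matching a database record set against the researcher's own publication list by normalised title. Real cleaning also matches DOIs, years and author-name variants; the titles below are invented.

```python
# Sketch: cross-check CV titles against database records by normalised title.

import re

def norm(title):
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

cv_titles = ["On the Skewness of Citations", "A Note on h-type Indices"]
db_titles = ["On the skewness of citations.", "Co-authorship networks revisited"]

cv, db = {norm(t) for t in cv_titles}, {norm(t) for t in db_titles}
print("matched:        ", sorted(cv & db))
print("missing in DB:  ", sorted(cv - db))  # follow up with the researcher
print("unclaimed in DB:", sorted(db - cv))  # possible homonym -> verify
```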
  • 29. Ten things you might do …
9. Even some "don'ts" are not taboo if properly applied
There is no reason to condemn the often incorrectly used Impact Factor and h-index. They can provide supplementary information if they are used in combination with qualitative methods and are not used as the only decision criterion. Example:
• Good practice (h-index as supporting argument): "The exceptionally high h-index of the applicant confirms his/her international standing attested to by our experts."
• Questionable use (h-index as decision criterion): "We are inclined to support this scientist because his/her h-index distinctly exceeds that of all other applicants."
  • 31. Ten things you might do …
10. Help users to interpret and apply your results
At any level of aggregation, bibliometric methods should be well documented. This applies above all to the level of individual scientists and research teams.
• Bibliometricians should support users in a transparent manner so as to guarantee the replicability of bibliometric data.
• They should issue clear instructions concerning the use and interpretation of their results.
• They should also stress the limitations of the validity of these results.
  • 32. Conclusions
• The (added) value of, or damage done by, bibliometrics in individual-level evaluation depends on how and in what context bibliometrics is applied.
• In most situations, the context should determine which bibliometric methods should be applied, and how.
• Soundness and validity of methods are all the more necessary at the individual level, but not yet sufficient. Accuracy, reliability and completeness of the sources are an absolute imperative at this level.
• We recommend always using individual-level bibliometrics on the basis of the particular research portfolio. The best way to do this may be to design individual researcher profiles combining bibliometrics with qualitative information about careers and working contexts. Such a profile includes the research mission and goals of the researcher.
  • 34. Acknowledgement
The authors would like to thank I R and J G for their contribution to the idea of a special session on this important issue, as well as the organisers of the ISSI 2013 conference for having given us the opportunity to organise this session. We also wish to thank L W and R C for their useful comments.