Feature Scoring in Application Development and DevOps
Presented by Eriawan Kusumawardhono
About Eriawan
• Based in Indonesia
• MVP since 2012, focusing on Developer Technologies (F#/C#/VB, .NET Core, Azure DevOps, open source)
• LinkedIn: https://www.linkedin.com/in/eriawan-kusumawardhono/
• GitHub: eriawan
• Member of the .NET Foundation’s OSS Project onboarding committee. Yes, please ping me for support for your OSS .NET project on GitHub
Main course today
1. Introduction to feature scoring
2. Elements of feature scoring
3. Best practices
Introduction to Feature Scoring
...and its relation to greenfield software development
What is feature scoring?
• A metric to measure the relevance, usability, and perception of an application’s features, from development through the operational stages of the application
• Each feature of the application must be measurable, in the sense that it must be easily understood and must not leave room for ambiguity for the developers and the rest of the stakeholders (users, operations/infrastructure departments, and other optional but potentially decisive parties such as project owners)
• The measurement is taken at more than one point in time, because we track the metric in terms of how the feature performs over its lifetime (a sketch follows this list)
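A minimal sketch, in Python, of how such repeated measurements could be recorded over time. The slides do not prescribe a data model, so the class and field names here (ScoreEvent, Feature, stage, delta) are illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ScoreEvent:
    """One measurement of a feature at one point in time."""
    when: date
    stage: str   # e.g. "development", "SIT", "UAT", "production"
    delta: int   # points added or subtracted by this measurement

@dataclass
class Feature:
    """A feature whose score is tracked from development to operations."""
    name: str
    history: list = field(default_factory=list)

    def record(self, event: ScoreEvent) -> None:
        self.history.append(event)

    def score(self) -> int:
        # The current score is the running total of all measurements so far.
        return sum(e.delta for e in self.history)

login = Feature("Login with SSO")
login.record(ScoreEvent(date(2021, 3, 1), "SIT", -10))  # bug found in SIT
login.record(ScoreEvent(date(2021, 4, 2), "SIT", +10))  # same bug solved
print(login.score())  # 0
```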
“Start with a brand new language and you essentially start
with minus 1,000 points. And now, you’ve got to win back your
1,000 points before we’re even talking. Lots of languages
never get to more than minus 500. Yeah, they add value but
they didn’t add enough value over what was there before.”
- Anders Hejlsberg, Microsoft Technical Fellow
How a feature relates to feature scoring
• A feature must be general yet quickly understandable, as this is one of the requirements
• This means a feature must be drilled down from a business use case to at least a technical use case, and both the development team and the other stakeholders must be informed
• For the development team, this covers all features of the application under development
• For the infrastructure or operations department, the focus can be on how the feature is turned into metrics
Why feature scoring is a good fit for greenfield software development
1. Greenfield development starts from “0”, analogous to the everyday example of opening a new field for planting
2. All defined features start from “0” or from a negative score, depending on your actual needs (explained next)
3. All features start with only a global (general) overview, so a business use case and a technical use case must be defined first
Feature scoring is used for (from the start of development)
• Measuring how a feature performs against the matrix of requirements (business use case and technical use case), test results (e.g. SIT and UAT), and actual feedback and performance after deployment to production
• The measurement starts early in development. If a feature has been defined as done in the requirements, in development, and in SIT and UAT testing but still suffers from a bad user experience and bugs, its score will go negative, but this should not be the main concern
• Feature performance should be checked against how the feature was categorized with the MoSCoW method at the beginning. For example: if a feature was categorized as “Should have” but performs badly because users are not using it much, then this feature will get minus points (see the sketch after this list)
• Therefore, a prioritization of feature score weights must be defined
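A small Python sketch of the “minus points for an underused ‘Should have’ feature” rule above. The usage threshold and parameter names are illustrative assumptions; the -15 value matches the sample weighting table later in this deck:

```python
# Sketch: a "Should have" feature that users are not using much
# receives minus points. Threshold and names are assumptions.

SHOULD_HAVE_PENALTY = -15  # from the sample weighting table

def usage_adjustment(category: str, weekly_active_users: int,
                     minimum_expected: int) -> int:
    """Return the score adjustment for a feature's observed usage."""
    if category == "Should have" and weekly_active_users < minimum_expected:
        return SHOULD_HAVE_PENALTY
    return 0

print(usage_adjustment("Should have", weekly_active_users=12,
                       minimum_expected=100))  # -15
```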
MoSCoW method
https://www.productplan.com/glossary/moscow-prioritization/
Elements of Feature scoring
Fundamental elements of feature scoring
• A feature performance score is a combination of the matrix of requirements (met or not), test results in SIT/UAT, bug reports from users, and usability in production (see the sketch after this list)
• Usability in production is gathered from reports from actual users and by examining telemetry, for example by examining the application log
• Prioritization
• Weight scale factors
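One way those elements could be combined into a single score, sketched in Python. The slides only say that weight scale factors exist, so the equal default weights and the function name are assumptions:

```python
# Combine the four elements above into one score, assuming each element
# is already expressed as signed points. Weights are hypothetical.

def feature_score(requirements_met: int, test_results: int,
                  bug_reports: int, production_usability: int,
                  weights=(1.0, 1.0, 1.0, 1.0)) -> float:
    parts = (requirements_met, test_results, bug_reports, production_usability)
    return sum(w * p for w, p in zip(weights, parts))

# Example: requirements fully met (+10), UAT found issues (-15),
# two solved bugs netting to 0, telemetry showing healthy usage (+10).
print(feature_score(10, -15, 0, 10))  # 5.0
```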
Feature weight prioritization
• The general software architecture must come first, treated as features
• Categorize each feature with the MoSCoW method, so that “Must have” features are measured first
• For bugs related to a feature, the minus points get larger when the feature has a lesser priority in the MoSCoW method, especially for features categorized as “Could have” and below
• Any solved “Must have” or “Should have” bug is counted as a positive score equal to the related bug’s negative score, so the bug ends with a total net score of “0” (see the worked example after this list)
• Further weighting is open for customization
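A worked example of the netting rule above, in Python. The -10 and -15 values come from the sample weighting table on a later slide; the (category, state) layout is an assumption made for this sketch:

```python
# "Net score of 0": an open bug on a "Must have" feature costs -10,
# and solving it awards +10, so the bug nets out to zero.

OPEN_BUG_SCORE = {"Must have": -10, "Should have": -15}

def net_bug_score(entries):
    """Sum open-bug penalties and the rewards for solving them."""
    total = 0
    for category, state in entries:
        penalty = OPEN_BUG_SCORE[category]
        total += penalty if state == "open" else -penalty  # solving reverses it
    return total

history = [
    ("Must have", "open"),    # bug reported: -10
    ("Must have", "solved"),  # same bug fixed: +10
]
print(net_bug_score(history))  # 0
```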
Basic considerations for standard weighting
1. All bugs must be assigned a basic MoSCoW category: at least Must Have, Should Have, or Could Have.
2. In addition to the MoSCoW category, it is recommended that all bugs also be tagged with one of two criticality categories: “Showstopper” (or Critical) or Functional. Showstopper/Critical means the feature does not work at all, whether because an error is shown in the first place or the app hangs. Showstopper bugs must be given the highest priority and a large negative score.
3. Bugs that come from telemetry/app logging must be further categorized as “Showstopper” or Functional, with an additional category of External. An example: abrupt app timeout logs when the app runs and tries to communicate with a third-party server.
4. Bugs that come from automated UI tests must be categorized as Functional bugs first, because the tests run unattended. Automated UI tests always carry the lowest score. (A tagging sketch follows this list.)
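A sketch of this two-dimensional tagging in Python. The enum and member names are assumptions; only the category labels themselves come from the slides:

```python
from enum import Enum

class MoSCoW(Enum):
    MUST_HAVE = "Must Have"
    SHOULD_HAVE = "Should Have"
    COULD_HAVE = "Could Have"

class Criticality(Enum):
    SHOWSTOPPER = "Showstopper"  # feature does not work at all
    FUNCTIONAL = "Functional"    # feature works, but incorrectly
    EXTERNAL = "External"        # e.g. third-party server timeouts in logs

class Source(Enum):
    MANUAL = "Manual"              # SIT/UAT or user report
    TELEMETRY = "Telemetry"        # from app logging
    AUTOMATED_UI = "Automated UI"  # unattended; always lowest score

# Every bug carries one tag from each dimension:
bug = (MoSCoW.MUST_HAVE, Criticality.SHOWSTOPPER, Source.MANUAL)
print(bug[0].value, "/", bug[1].value, "/", bug[2].value)
```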
Sample basic weighting (MoSCoW and criticality/severity)
MoSCoW Method                                           Score
Must Have                                               -10
Should Have                                             -15
Could Have                                              -25
Telemetry (log of error)                                -20
Solved Must Have                                        +10
Solved Should Have                                      +15
Solved Could Have                                       +15
Telemetry of Could Have shows consistent frequent usage +10
Solved telemetry error                                  +20

Criticality                                             Score
Showstopper                                             -30
Functional                                              -10
Solved Showstopper                                      +30
Solved Functional                                       +10
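The same table expressed as Python lookup dictionaries, so a feature’s running score can be computed mechanically. The point values come straight from the table; the event-name keys and the score_events helper are assumptions made for this sketch:

```python
MOSCOW_SCORES = {
    "Must Have bug": -10,
    "Should Have bug": -15,
    "Could Have bug": -25,
    "Telemetry error": -20,
    "Solved Must Have": +10,
    "Solved Should Have": +15,
    "Solved Could Have": +15,
    "Could Have telemetry shows frequent usage": +10,
    "Solved telemetry error": +20,
}

CRITICALITY_SCORES = {
    "Showstopper": -30,
    "Functional": -10,
    "Solved Showstopper": +30,
    "Solved Functional": +10,
}

def score_events(events):
    """Sum table values for a list of (table_name, event_name) pairs."""
    tables = {"moscow": MOSCOW_SCORES, "criticality": CRITICALITY_SCORES}
    return sum(tables[name][event] for name, event in events)

# A Must Have showstopper is found (-10 - 30), then solved (+10 + 30):
history = [
    ("moscow", "Must Have bug"), ("criticality", "Showstopper"),
    ("moscow", "Solved Must Have"), ("criticality", "Solved Showstopper"),
]
print(score_events(history))  # 0
```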
Demo using Azure DevOps
Relevance in software development
• Is feature scoring important? For product-based application development, it is, because application software as a product has defined features. All of these features must have histories: bugs, test results, and actual feedback in production
Best practices of Feature scoring
Known best practices 1/2
• Feature scoring is highly usable in product-based software development, especially in DevOps, because the software must be improved continuously and, at the same time, tracked not just for functionality but for the total value of its features, from development through deployment to usage in production
• Feature scoring must at least have the MoSCoW method in place. This helps the prioritization of work (especially when fixing bugs) and also gauges the real value of each feature.
• Therefore, it can happen that a “Should have” feature gets promoted to “Must have”, especially if it has proven to be of high value through a combination of high usage, ease of use, and positive feedback from actual users after deployment to production (a promotion-rule sketch follows this list)
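A sketch of that promotion rule in Python: a “Should have” feature moves up to “Must have” once production data proves its value. All thresholds and parameter names are illustrative assumptions; the slides give none:

```python
def maybe_promote(category: str, weekly_active_users: int,
                  task_success_rate: float, feedback_score: float) -> str:
    high_usage = weekly_active_users >= 500
    easy_to_use = task_success_rate >= 0.9   # users complete the task
    well_received = feedback_score >= 4.0    # e.g. average rating out of 5
    if category == "Should have" and high_usage and easy_to_use and well_received:
        return "Must have"
    return category

print(maybe_promote("Should have", 1200, 0.95, 4.4))  # Must have
```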
Known best practices 2/2
• Why are lesser MoSCoW priorities punished more than “Must have”? Because priorities are often tied to the actual work that should be prioritized first, and speed of delivery means delivering the value that matters most! It is therefore common in some teams, such as the .NET development team at Microsoft, to start each feature in new software development at -1000 points instead of 0 points, as new development always relates to workforce allocation, prioritization, schedule, and allocated budget.
Cultural best practices for Feature Scoring
1. Always say no to any feature that is part of “Will not have” when prioritizing work in the current sprint/phase.
2. Any exclusion of a feature caused by a scoring-based demotion from “Could have” to “Will not have” (based on the MoSCoW method) must be treated as part of a documented change request, because any promotion/demotion will always change prioritization in the development.
3. The architecture of the software must always be included among the features. For example: supporting an OS that is current according to the company that creates it is part of “Must have”, whereas supporting an OS that is almost out of support should not be considered “Must have”. For example, in 2021 supporting Windows 7 adds more burden and risk, as it is not supported by Microsoft, whereas supporting Windows Server 2012 and later in 2021 is a must have.
4. Feature scoring can be used for brownfield development, but every current feature must be inventoried from 0, including documentation of each feature. Software requirements gathering must therefore be restarted and adjusted to support feature scoring; otherwise, feature scoring will no longer be usable even after the MoSCoW categories have been defined.