4. Software Project Management
The Four P’s:
People — the most important element of a successful project
Product — the software to be built
Process — the set of framework activities and software engineering tasks to
get the job done
Project — all work required to make the product a reality
5. Estimation
Options for achieving reliable cost and effort estimates
Delay estimation until late in the project (we could achieve 100% accurate
estimates after the project is complete, but by then the estimate is useless)
Base estimates on similar projects that have already been completed
Use relatively simple decomposition techniques to generate project cost and effort
estimates
Use one or more empirical estimation models for software cost and effort estimation
6. Estimation - Approaches
Decomposition techniques
These take a "divide and conquer" approach
Cost and effort estimation are performed in a stepwise fashion by breaking down a
project into major functions and related software engineering activities
Empirical estimation models
Offer a potentially valuable estimation approach if the historical data used to seed the
estimate is good
7. Software sizing
The accuracy of a software project estimate is predicated on a number of
things:
The degree to which you have properly estimated the size of the product to be built;
The ability to translate the size estimate into human effort, calendar time, and
dollars (a function of the availability of reliable software metrics from past projects);
The degree to which the project plan reflects the abilities of the software team; and
The stability of product requirements and the environment that supports the software
engineering effort
8. Approaches to Software Sizing
“Fuzzy logic” sizing
Uses the approximate reasoning techniques that are the cornerstone of fuzzy logic. To
apply this approach, the planner must identify the type of application, establish its
magnitude on a qualitative scale, and then refine the magnitude within the original
range.
Function point sizing
Develop estimates of the information domain characteristics
9. Approaches to Software Sizing
Standard component sizing
Estimate the number of occurrences of each standard component
Use historical project data to determine the delivered LOC size per standard component
Change sizing
Used when changes are being made to existing software
Estimate the number and type of modifications that must be accomplished
Types of modifications include reuse, adding code, changing code, and deleting code
An effort ratio is then used to estimate each type of change and the size of the change
10. Problem-Based Estimation
Start with a bounded statement of scope
Decompose the software into problem functions that can each be estimated
individually
Compute an LOC or FP value for each function
Derive cost or effort estimates by applying the LOC or FP values to your baseline
productivity metrics (e.g., LOC/person-month or FP/person-month)
Combine function estimates to produce an overall estimate for the entire project
11. Problem-Based Estimation
In general, the LOC/pm and FP/pm metrics should be computed by project domain
Important factors are team size, application area, and complexity
LOC and FP estimation differ in the level of detail required for decomposition with each
value
For LOC, decomposition of functions is essential and should go into considerable detail (the more
detail, the more accurate the estimate)
For FP, decomposition focuses on the five information domain characteristics (external inputs,
external outputs, external inquiries, internal logical files, and external interface files) and the 14
complexity adjustment factors.
12. Problem-Based Estimation
For both approaches, the planner uses lessons learned to estimate an
optimistic, most likely, and pessimistic size value for each function or count
(for each information domain value)
Then the expected size value S is computed as follows:
S = (S_opt + 4*S_m + S_pess) / 6
Historical LOC or FP data is then compared to S in order to cross-check it
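The three-point expected-size computation above can be sketched in a few lines of Python (the sample LOC figures below are hypothetical):

```python
def expected_size(s_opt, s_m, s_pess):
    """Weighted average of optimistic, most likely, and pessimistic
    size estimates: S = (S_opt + 4*S_m + S_pess) / 6."""
    return (s_opt + 4 * s_m + s_pess) / 6

# Hypothetical function sized at 4,600 / 6,900 / 8,600 LOC:
s = expected_size(4600, 6900, 8600)   # 6800.0 LOC
```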
13. LOC Based Estimation
A size-oriented measure is derived by considering the size of the software
that has been produced.
The organization builds simple records of size measures for its software
projects, based on past project experience.
LOC is a direct measure of software size.
14. LOC Based Estimation
Project | LOC    | Effort | Cost | Doc. (Pgs.) | Errors | Defects | People
ABC     | 10,000 | 20     | 170  | 400         | 100    | 12      | 4
PQR     | 20,000 | 60     | 300  | 1000        | 129    | 32      | 6
XYZ     | 35,000 | 65     | 522  | 1290        | 280    | 87      | 7
15. LOC Based Estimation
Size Measure:
Size = kilo lines of code (KLOC)
Effort = person-months
Productivity = KLOC / person-month
Quality = no. of faults / KLOC
Cost = cost / KLOC
Documentation = pages of documentation / KLOC
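As a quick illustration, two of the measures above can be computed for project ABC from the earlier table (10,000 LOC, 20 person-months of effort, 100 errors):

```python
# Project ABC figures, taken from the size-oriented measurement table above.
kloc = 10.0        # 10,000 LOC expressed in KLOC
effort_pm = 20     # person-months
errors = 100

productivity = kloc / effort_pm   # 0.5 KLOC per person-month
quality = errors / kloc           # 10 errors per KLOC
```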
16. LOC Based Estimation
Standard:
Don’t count blank lines
Don’t count comments
Count everything else.
Advantages:
Artifact of software development which is easily counted
Many existing methods use LOC as a key input
A large body of literature and data based on LOC already exists.
17. LOC Based Estimation
Disadvantages:
The measure is dependent on the programming language.
Well-designed but shorter programs are penalized by the measure.
It does not accommodate non-procedural languages.
In the early stages of development it is difficult to estimate LOC.
18. LOC Based Estimation - Example
Function   | LOC
Function 1 | 2340
Function 2 | 5380
Function 3 | 6800
Function 4 | 3350
Function 5 | 4950
Function 6 | 2140
Function 7 | 8400
19. LOC Based Estimation - Example
Given Statement:
Average Productivity = 620 LOC / pm
Labour rate = Rs. 8000 pm.
Find the Total estimated project cost and effort.
Cost / LOC
Total estimated project cost
Total estimated project effort
20. LOC Based Estimation - Example
Solution:
Total estimated LOC = 33,360
Cost / LOC = 8000 / 620 ≈ Rs. 13
Total estimated project cost = 33,360 × 13 = Rs. 4,33,680
Total estimated project effort = 33,360 / 620 ≈ 54 person-months
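The arithmetic of the worked example can be reproduced directly (rounding Cost/LOC to Rs. 13 as the slide does):

```python
# LOC per function, productivity, and labour rate from the example above.
loc_per_function = [2340, 5380, 6800, 3350, 4950, 2140, 8400]
productivity = 620        # LOC per person-month
labour_rate = 8000        # Rs. per person-month

total_loc = sum(loc_per_function)                 # 33,360 LOC
cost_per_loc = round(labour_rate / productivity)  # Rs. 13
total_cost = total_loc * cost_per_loc             # Rs. 4,33,680
effort = round(total_loc / productivity)          # 54 person-months
```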
21. Function Oriented Metrics
It is based on functionality of the delivered application.
It is generally independent of the programming language used.
Function points are derived using:
Countable (direct) measures of the software's information domain
Assessments of software complexity
22. Function Oriented Metrics
Information domain characteristics:
No. of user inputs – each distinct input that provides application data to the
software is counted.
No. of user outputs – each output that provides application data to the user is counted.
No. of user inquiries – each on-line input that results in the generation of
some immediate software response in the form of an output is counted.
23. Function Oriented Metrics
Information domain characteristics:
No. of files: Each logical master File.
No. of external interfaces: All machine readable interfaces that are
used to transmit information to another system are counted.
Criteria: Simple, Average and Complex.
24. COCOMO I Model
COCOMO – COnstructive COst MOdel
It predicts the effort and schedule of a software product based on the size of
the software.
Developed in 1981 by Barry Boehm.
Models: Basic, Intermediate and Detailed.
25. COCOMO I Model
COCOMO – COnstructive COst Model:
Classes of software projects – Organic Mode:
If the project deals with developing a well-understood application program.
The size of the development team is reasonably small, and the team members
are experienced in developing similar projects.
Example: simple inventory management systems
26. COCOMO I Model
COCOMO – COnstructive COst Model:
Classes of software projects –Semidetached:
If the development team consists of a mixture of experienced and inexperienced staff.
Team members may have limited experience with related systems and may be unfamiliar
with some aspects of the system being developed.
Examples: Developing a new operating system (OS) and a Database Management
System (DBMS)
27. COCOMO I Model
COCOMO – COnstructive COst Model:
Classes of software projects – Embedded:
If the software being developed is strongly coupled to complex hardware, or
if the strict regulations on the operational method exist.
Example: ATM
28. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Basic Model:
E = a_b × (KLOC)^b_b
D = c_b × (E)^d_b
P = E / D
E is the effort applied in person-months
D is the development time in chronological months
KLOC is the estimated number of delivered lines of code for the project
29. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Basic Model:
Software project | a_b | b_b  | c_b | d_b
Organic          | 2.4 | 1.05 | 2.5 | 0.38
Semi-detached    | 3.0 | 1.12 | 2.5 | 0.35
Embedded         | 3.6 | 1.20 | 2.5 | 0.32
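The Basic model equations can be sketched directly from the coefficient table (the 32 KLOC input below is a hypothetical example, not from the slides):

```python
# (a_b, b_b, c_b, d_b) per project class, from the Basic COCOMO table above.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # E, person-months
    duration = c * effort ** d    # D, chronological months
    people = effort / duration    # P, average staff size
    return effort, duration, people

e, dur, p = basic_cocomo(32, "organic")   # ~91 pm, ~14 months, ~6.6 people
```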
30. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Basic Model – Merits:
Good for quick, early, rough order-of-magnitude estimates of a software project.
Models: Basic Model – Limitations:
It does not consider hardware constraints, personnel quality and experience,
or modern techniques and tools.
31. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Intermediate Model:
It makes use of a set of “cost driver attributes” to compute the cost of software.
Cost Drivers:
Product attributes
Hardware attributes
Personnel attributes
Project attributes
32. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Intermediate Model - Cost Drivers:
Product attributes: required software reliability, size of application data base and complexity of the product.
Hardware attributes: run-time performance constraints, memory constraints, volatility of the virtual machine
environment and required turnaround time.
Personnel attributes: analyst capability, software engineer capability, applications experience, virtual machine
experience and programming language experience.
Project attributes: use of software tools, application of software engineering methods and required
development schedule
33. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Intermediate Model - Cost Drivers:
Each of the 15 attributes is rated on a six-point scale that ranges from "very
low" to "extra high"
Based on the rating, an effort multiplier is determined from tables published
by Boehm [BOE81]; the product of all effort multipliers is the effort
adjustment factor (EAF). Typical values for EAF range from 0.9 to 1.4.
34. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Intermediate Model - Cost Drivers:
E = a_i × (KLOC)^b_i × EAF
E is the effort applied in person-months
KLOC is the estimated number of delivered lines of code for the project.
Duration and person estimate is same as in basic COCOMO model.
35. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Intermediate Model - Cost Drivers:
Software project | a_i | b_i
Organic          | 3.2 | 1.05
Semi-detached    | 3.0 | 1.12
Embedded         | 2.8 | 1.20
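The Intermediate model multiplies the nominal effort by the EAF; in the sketch below, the 8 KLOC size and EAF of 1.1 are assumed example values:

```python
# (a_i, b_i) per project class, from the Intermediate COCOMO table above.
COEFFS_I = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_effort(kloc, mode, eaf):
    a, b = COEFFS_I[mode]
    return a * kloc ** b * eaf   # E, person-months

# Hypothetical 8 KLOC organic project; EAF is the product of the 15 multipliers.
e = intermediate_effort(8, "organic", 1.1)   # ~31 person-months
```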
37. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Intermediate Model - Merits:
This model can be applied to almost the entire software product for easy,
rough cost estimation during the early stages.
It yields more accurate cost estimates than the basic model.
38. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Intermediate Model - Demerits:
A product with many components is difficult to estimate
39. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Detailed:
It uses the same equations for estimation as the Intermediate Model.
It can estimate the effort, duration and persons of each of development
phases, subsystems and modules.
40. COCOMO I Model
COCOMO – COnstructive COst Model:
Models: Detailed:
Phase                                    | Very Low | Low  | Nominal | High | Very High
Requirements Planning and Product Design | 1.80     | 0.85 | 1.00    | 0.75 | 0.55
Detailed Design                          | 1.35     | 0.85 | 1.00    | 0.90 | 0.75
Code and Unit Test                       | 1.35     | 0.85 | 1.00    | 0.90 | 0.75
Integrate and Test                       | 1.50     | 1.20 | 1.00    | 0.85 | 0.70
41. COCOMO II Model
It is applied for modern software development practices.
Sub Models:
Application Composition Model - Used when software is composed from existing parts.
Early design model - Used when requirements are available but design has not yet started
The reuse model - Used to compute the effort of integrating reusable components
Post-architecture level - Used once the system architecture has been designed and more
information about the system is available
43. COCOMO II Model
Sub Models - Application Composition Model:
The application-composition model is used to estimate the effort required for
prototyping projects and for projects in which existing software components
are reused.
• Formula:
• PM = ( NAP × (1 - %reuse/100) ) / PROD
• PM is the effort in person-months, NAP is the number of application points and PROD is the
productivity.
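The application-point formula can be sketched as follows (the 100 application points, 20% reuse, and nominal productivity of 13 NOP/month are assumed example inputs):

```python
def application_composition_pm(nap, pct_reuse, prod):
    """PM = (NAP * (1 - %reuse/100)) / PROD"""
    return (nap * (1 - pct_reuse / 100)) / prod

# Hypothetical: 100 application points, 20% reuse, nominal PROD of 13 NOP/month.
pm = application_composition_pm(100, 20, 13)   # ~6.15 person-months
```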
45. COCOMO II Model
Sub Models - Application Composition Model:
Object point productivity:
Developer's experience and capability | Very low | Low | Nominal | High | Very high
ICASE maturity and capability         | Very low | Low | Nominal | High | Very high
PROD (NOP/month)                      | 4        | 7   | 13      | 25   | 50
46. COCOMO II Model
Sub Models - Early design Model:
Estimates can be made after the requirements have been agreed.
The estimate is based on function points.
PM = A × Size^B × M
A = 2.94, Size is in KLOC, and B varies from 1.1 to 1.24 depending on the project
47. COCOMO II Model
Sub Models - Early design Model:
Multipliers: Multipliers reflect the capability of the developers, the non-functional requirements,
the familiarity with the development platform, etc.
RCPX - product reliability and complexity;
RUSE - the reuse required;
PDIF - platform difficulty;
PREX - personnel experience;
PERS - personnel capability;
SCED - required schedule;
FCIL - the team support facilities
48. COCOMO II Model
Sub Models – The reuse Model:
The model covers code that is reused without change and code that has to be
adapted to integrate it with new code.
Two types of reusable code:
Black-box reuse, where the code is not modified; an effort estimate (PM) is computed.
White-box reuse, where the code is modified before integration.
49. COCOMO II Model
Sub Models – The reuse Model:
For generated code:
PM = (ASLOC × AT/100) / ATPROD
ASLOC is the number of lines of generated code
AT is the percentage of code automatically generated.
ATPROD is the productivity of engineers in integrating this code.
50. COCOMO II Model
Sub Models – The reuse Model:
Adapted (white-box) reused code is converted into an equivalent number of
lines of new code:
ESLOC = ASLOC × (1 - AT/100) × AAM
AAM is the adaptation adjustment multiplier computed from the cost of changing the reused
code
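Both reuse-model formulas can be sketched together (the ASLOC, AT, ATPROD, and AAM values below are assumed example inputs):

```python
def generated_code_pm(asloc, at, atprod):
    """Effort for automatically generated code: PM = (ASLOC * AT/100) / ATPROD."""
    return (asloc * at / 100) / atprod

def equivalent_new_sloc(asloc, at, aam):
    """Equivalent new lines for adapted reused code:
    ESLOC = ASLOC * (1 - AT/100) * AAM."""
    return asloc * (1 - at / 100) * aam

# Hypothetical: 20,000 adapted SLOC, 30% auto-generated,
# ATPROD = 2400 SLOC/month, AAM = 0.25.
pm = generated_code_pm(20_000, 30, 2400)       # 2.5 person-months
esloc = equivalent_new_sloc(20_000, 30, 0.25)  # 3,500 equivalent new lines
```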
51. COCOMO II Model
Sub Models – Post-architecture level:
Uses the same formula as the early design model but with 17 rather than 7
associated multipliers.
The code size is estimated as:
Number of lines of new code to be developed;
Estimate of equivalent number of lines of new code computed using the reuse model;
An estimate of the number of lines of code that have to be modified according to
requirements changes.
52. COCOMO II Model
Sub Models – Post-architecture level – Scale:
Precedentedness – reflects the previous experience of the organisation with this type of
project. Very low means no previous experience; Extra high means the organisation is
completely familiar with this application domain.
Development flexibility – reflects the degree of flexibility in the development process. Very
low means a prescribed process is used; Extra high means the client only sets general goals.
Architecture/risk resolution – reflects the extent of risk analysis carried out. Very low means
little analysis; Extra high means a complete and thorough risk analysis.
Team cohesion – reflects how well the members of the development team know each other and
work together. Very low means very difficult interactions; Extra high means an integrated and
effective team with no communication problems.
Process maturity – reflects the process maturity of the organisation. The computation of this
value depends on the CMM Maturity Questionnaire, but an estimate can be achieved by
subtracting the CMM process maturity level from 5.
53. COCOMO II Model
Sub Models – Post-architecture level – Cost Attributes:
Product:
Reliability
Complexity
Size of the data
Amount of documentation used
54. COCOMO II Model
Sub Models – Post-architecture level – Cost Attributes:
Computer:
Execution Time
Volatility of development platform
Memory Constraints
55. COCOMO II Model
Sub Models – Post-architecture level – Cost Attributes:
Personnel:
Project Analyst capability
Programmer capability
Personnel continuity
Programmer’s experience
Experience of languages and tools that are used.
Analyst’s experience
56. COCOMO II Model
Sub Models – Post-architecture level – Cost Attributes:
Project:
Use of software tools
Project schedule compression
Quality of inter-site and multi-site working
57. Project Scheduling
On large projects, hundreds of small tasks must occur to accomplish a
larger goal
Some of these tasks lie outside the mainstream and may be completed without
impacting the project completion date
Other tasks lie on the critical path; if these tasks fall behind schedule, the
completion date of the entire project is put at risk
58. Project Scheduling
Project manager's objectives
Define all project tasks
Build an activity network that depicts their interdependencies
Identify the tasks that are critical within the activity network
Build a timeline depicting the planned and actual progress of each task
Track task progress to ensure that delay is recognized "one day at a time"
To do this, the schedule should allow progress to be monitored and the project to be
controlled
59. Project Scheduling
Typical problems in project development stage are:
People may leave or be absent
Hardware may fail
Software resources may not be available
60. Project Scheduling
Resources required for the project are –
Human effort
Sufficient disk space on server
Specialized hardware
Software Technology
Travel allowance required by the project staff.
61. Project Scheduling - Principles
Compartmentalization
The project must be compartmentalized into a number of manageable activities, actions, and
tasks; both the product and the process are decomposed.
Interdependency
The interdependency of each compartmentalized activity, action, or task must be determined
Some tasks must occur in sequence while others can occur in parallel
Some actions or activities cannot commence until the work product produced by another is
available
62. Project Scheduling - Principles
Time allocation
Each task to be scheduled must be allocated some number of work units
In addition, each task must be assigned a start date and a completion date that
are a function of the interdependencies
Start and stop dates are also established based on whether work will be
conducted on a full-time or part-time basis
63. Project Scheduling - Principles
Effort validation
Every project has a defined number of people on the team
As time allocation occurs, the project manager must ensure that no more than
the allocated number of people have been scheduled at any given time
Defined responsibilities
Every task that is scheduled should be assigned to a specific team member
64. Project Scheduling - Principles
Defined outcomes
Every task that is scheduled should have a defined outcome for software projects such
as a work product or part of a work product
Work products are often combined in deliverables
Defined milestones
Every task or group of tasks should be associated with a project milestone
A milestone is accomplished when one or more work products has been reviewed for
quality and has been approved
65. Relationship Between People and Effort
People work on the software project doing various activities such as
requirements gathering, design, analysis, coding and testing.
Common management myth: If we fall behind schedule, we can
always add more programmers and catch up later in the project.
66. Relationship Between People and Effort
Putnam first studied how much staffing is required for software projects,
building on the work of Norden, who had earlier investigated staffing
patterns.
67. Relationship Between People and Effort
[Figure: the Putnam-Norden curve of effort/cost versus development time,
marking the impossible region, t_minimum, and the points (t_optimal,
E_optimal) and (t_theoretical, E_theoretical).]
68. Relationship Between People and Effort
Delaying project delivery can reduce costs significantly, as shown in the
equation E = L^3 / (P^3 × t^4) and in the curve on the previous slide
E = development effort in person-months
L = source lines of code delivered
P = productivity parameter (ranging from 2000 to 12000)
t = project duration in calendar months
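Because effort varies with t^-4 in this equation, even a modest schedule extension cuts effort sharply. A small sketch (the L and P inputs are hypothetical and cancel in the ratio):

```python
def putnam_effort(loc, p, t):
    """E = L**3 / (P**3 * t**4), as defined above."""
    return loc ** 3 / (p ** 3 * t ** 4)

# Extending a 12-month schedule by 10% (to 13.2 months) reduces effort by
# about 32%, regardless of the L and P values chosen:
ratio = putnam_effort(33_000, 5000, 13.2) / putnam_effort(33_000, 5000, 12)
```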
69. 40-20-40 Distribution of Effort
A recommended distribution of effort across the software process is
40% (analysis and design), 20% (coding), and 40% (testing)
70. Task Sets
A task set is a collection of software engineering work tasks, milestones
and work products that must be accomplished to complete a particular
project.
No single task set is appropriate for all projects
The task set should provide enough discipline to achieve high
software quality.
71. Task Sets
Types of Software Projects:
Concept development projects: Explore some new business concept or application of some new
technology
New application development: Undertaken as a consequence of a specific customer request
Application enhancement: Occur when existing software undergoes major modifications to
function, performance, or interfaces that are observable by the end user
Application maintenance: Correct, adapt, or extend existing software in ways that may not be
immediately obvious to the end user
Reengineering projects: Undertaken with the intent of rebuilding an existing (legacy) system in
whole or in part
72. Task Sets
Factors that Influence a Project’s Schedule
Size of the project,
Number of potential users,
Mission criticality,
Application long life,
Stability of requirements
73. Task Sets
Factors that Influence a Project’s Schedule
Ease of customer/developer communication,
Maturity of applicable technology,
Performance constraints,
Embedded and non-embedded characteristics,
Project staff,
Reengineering factors
74. Task Sets
Purpose:
It is a graphic representation of the task flow for a project
It depicts task length, sequence, concurrency, and dependency
Points out inter-task dependencies to help the manager ensure continuous
progress toward project completion
75. Task Sets
Purpose – Critical Path:
A single path leading from start to finish in a task network
It contains the sequence of tasks that must be completed on schedule if the
project as a whole is to be completed on schedule
It also determines the minimum duration of the project
76. Example Task Network
[Figure: task network with durations — Task A: 3, Task B: 3, Task C: 7,
Task D: 5, Task E: 8, Task F: 2, Task G: 3, Task H: 5, Task I: 4, Task J: 5,
Task K: 3, Task L: 10, Task M: 0, Task N: 2; the dependency arrows did not
survive extraction.]
Where is the critical path and what tasks are on it?
77. Example Task Network with Critical Path Marked
[Figure: the same task network with the critical path highlighted.]
Critical path: A-B-C-E-K-L-M-N
78. Time Line Chart
Also called a Gantt chart
All project tasks are listed in the far left column
The length of a horizontal bar on the calendar indicates the duration of the task
When multiple bars occur at the same time interval on the calendar, this implies
task concurrency
A diamond in the calendar area of a specific task indicates that the task is a
milestone; a milestone has a time duration of zero
79. Time Line Chart
Task # | Task Name            | Duration | Start | Finish | Pred.
A      | Establish increments | 3        | 4/1   | 4/3    | None
B      | Analyze Inc One      | 3        | 4/4   | 4/6    | A
C      | Design Inc One       | 8        | 4/7   | 4/14   | B
D      | Code Inc One         | 7        | 4/15  | 4/21   | C
E      | Test Inc One         | 10       | 4/22  | 5/1    | D
F      | Install Inc One      | 5        | 5/2   | 5/6    | E
G      | Analyze Inc Two      | 7        | 4/7   | 4/13   | A, B
H      | Design Inc Two       | 5        | 4/14  | 4/18   | G
I      | Code Inc Two         | 4        | 4/19  | 4/22   | H
J      | Test Inc Two         | 6        | 5/2   | 5/7    | E, I
K      | Install Inc Two      | 2        | 5/8   | 5/9    | J
L      | Close out project    | 2        | 5/10  | 5/11   | F, K
(Calendar scale: 4/1 to 6/3)
80. Time Line Chart
[Figure: task network with durations — A. Establish Increments: 3;
B. Analyze Inc One: 3; C. Design Inc One: 8; D. Code Inc One: 7;
E. Test Inc One: 10; F. Install Inc One: 5; G. Analyze Inc Two: 7;
H. Design Inc Two: 5; I. Code Inc Two: 4; J. Test Inc Two: 6;
K. Install Inc Two: 2; L. Close out Project: 2.]
Task network and the critical path: A-B-C-D-E-J-K-L
81. Tracking Schedule
The project manager must define the project schedule and then track it.
Tracking the schedule means determining whether the project's tasks and
milestones are being accomplished as the project proceeds.
82. Tracking Schedule
Conduct periodic project status meetings in which each team member reports progress and
problems
Evaluate the results of all reviews conducted throughout the software engineering process
Determine whether formal project milestones (i.e., diamonds) have been accomplished by the
scheduled date
Compare actual start date to planned start date for each project task listed in the timeline chart
Meet informally with the software engineering team to obtain their subjective assessment of
progress to date and problems on the horizon.
Use earned value analysis to assess progress quantitatively
83. Earned Value Analysis
A measure of progress obtained by assessing the percent completeness of a
project.
EVA acts as a quantitative measure of software project progress.
The total hours to do the whole project are estimated, and every task
is given an earned value based on its estimated percentage of the total.
84. Earned Value Analysis
Planned Value (PV): the planned cost of the work. The planned value is
developed by first determining all of the work that must be accomplished
for a successful project result.
Actual Cost (AC): the actual amount that the business has expended on the
project.
Budget at Completion (BAC): the total budget for the project.
85. Earned Value Analysis
Budgeted cost of work scheduled (BCWS) - effort planned; the value
of BCWS is the sum of the BCWSi values of all the work tasks that
should have been completed by that point of time in the project
schedule.
Budgeted cost of work performed (BCWP) - The sum of the BCWS
values for all work tasks that have actually been completed by a point
of time on the project schedule
86. Earned Value Analysis
Schedule Variance (SV) - Variance from the planned schedule
SV = BCWP – BCWS
Actual cost of work performed (ACWP) - the sum of the effort actually expended on work tasks that have been completed by time t
Cost Variance (CV) - an absolute indication of cost savings (against
planned costs) or shortfall at a particular stage of a project.
CV = BCWP – ACWP
87. Earned Value Analysis
Schedule performance index (SPI = BCWP / BCWS) - an indication of the
efficiency with which the project is utilizing scheduled resources.
SPI close to 1.0 indicates efficient execution of the project schedule
Cost performance index (CPI = BCWP / ACWP) close to 1.0 provides a strong
indication that the project is within its defined budget
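The earned-value indicators defined above can be computed in a few lines (the BCWS/BCWP/ACWP figures below are hypothetical person-hour values at some point in time t):

```python
# Hypothetical person-hour figures at a point in time t.
bcws, bcwp, acwp = 160.0, 148.0, 155.0

sv = bcwp - bcws    # schedule variance: -12.0 (behind schedule)
cv = bcwp - acwp    # cost variance: -7.0 (over budget)
spi = bcwp / bcws   # schedule performance index: 0.925
cpi = bcwp / acwp   # cost performance index: ~0.955
```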
88. Project Plan
Project planning is an iterative process and is completed only when the
project itself is complete.
The process is iterative because new information becomes available at
each phase of the project's development.
89. Project Plan - Structure
Introduction – Goals, Objectives and Constraints are described
Project Organization – No. of people involved along with their roles.
Risk Analysis – All possible risks should be identified. The possible risk reduction
strategies must be decided.
Requirements – It specifies hardware and software requirements
Work breakdown – To define project work breakdown
Project Schedule – The tentative schedule of project activities must be determined.
Report Generation – The structure of the project report is decided.
90. Risk Management
Risk - the uncertainty that may arise from choices made and past actions;
a risk is something that can cause heavy losses.
Risk Management - the process of making decisions based on an evaluation
of the factors that threaten the business.
91. Risk Management
Process:
The risk management process continues until the project is completed
successfully.
The risk planning phase is required to minimize or avoid the risks.
These risks are then monitored and mitigated in the risk monitoring phase.
Risk management is an iterative process.
92. Risk Management
Two characteristics of risk
Uncertainty – the risk may or may not happen, that is, there are no
100% risks (those, instead, are called constraints)
Loss – the risk becomes a reality and unwanted consequences or
losses occur
93. Risk Management - Types
Project risk:
It affects budget, schedule, staffing, resources and requirements.
When project risks become severe, the total cost of the project increases.
Technical risk:
They threaten the quality and timeliness of the software to be produced
If they become real, implementation may become difficult or impossible
94. Risk Management - Types
Business Risk
They threaten the feasibility of the software to be built
If they become real, they risk the project or the product
Market risk - building an excellent product or system that no one really wants
Strategic risk - building a product that no longer fits into the overall business
strategy for the company
Sales risk – building a product that the sales force doesn't understand how to sell
95. Risk Management - Types
Business Risk
Management risk – losing the support of senior management due to a change
in focus or a change in people
Budget risk – losing budgetary or personnel commitment
96. Risk Management - Types
Known risks - Those risks that can be uncovered after careful evaluation
of the project plan, the business and technical environment in which the
project is being developed, and other reliable information sources (e.g.,
unrealistic delivery date)
Predictable risks - Those risks that are extrapolated from past project
experience (e.g., staff turnover)
Unpredictable risks - Those risks that can and do occur, but are extremely
difficult to identify in advance
97. Risk Management – Risk Identification
Risk identification is a systematic attempt to specify threats to the
project plan
By identifying known and predictable risks, the project manager takes
a first step toward avoiding them when possible and controlling them
when necessary
98. Risk Management – Risk Identification
Approaches:
Generic risks - Risks that are a potential threat to every software project
Product-specific risks - Risks that can be identified only by those with a
clear understanding of the technology, the people, and the environment that is
specific to the software that is to be built.
99. Risk Management – Risk Identification
Step 1: Preparation of risk item check list:
Product size – risks associated with overall size of the software to be built
Business impact – risks associated with constraints imposed by
management or the marketplace
Customer characteristics – risks associated with sophistication of the
customer and the developer's ability to communicate with the customer in a
timely manner
100. Risk Management – Risk Identification
Step 1: Preparation of risk item check list:
Process definition – risks associated with the degree to which the software process has
been defined and is followed
Development environment – risks associated with availability and quality of the tools to
be used to build the project
Technology to be built – risks associated with complexity of the system to be built and
the "newness" of the technology in the system
Staff size and experience – risks associated with overall technical and project experience
of the software engineers who will do the work
101. Risk Management – Risk Identification
Step 2: Creating risk components and drivers list:
The set of risk components and drivers list is prepared along with their
probability of occurrence.
Then their impact on the project can be analysed.
102. Risk Management – Risk Identification
Risk Components and Drivers:
Performance risk - the degree of uncertainty that the product will meet its
requirements and be fit for its intended use
Cost risk - the degree of uncertainty that the project budget will be maintained
Support risk - the degree of uncertainty that the resultant software will be easy to
correct, adapt, and enhance
Schedule risk - the degree of uncertainty that the project schedule will be
maintained and that the product will be delivered on time
103. Risk Management – Risk Identification
Risk Components and Drivers:
The impact of each risk driver on the risk component is divided into
one of four impact levels - Negligible, marginal, critical, and
catastrophic
Risk drivers can be assessed as impossible, improbable, probable, and
frequent
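One way to combine the four probability categories and four impact levels above is a simple scoring matrix. This is a minimal sketch: the numeric scores and the high/medium/low thresholds are illustrative assumptions, not part of the original material.

```python
# Probability categories and impact levels taken from the slides;
# the numeric ranks assigned to them are illustrative assumptions.
PROBABILITY = {"impossible": 0, "improbable": 1, "probable": 2, "frequent": 3}
IMPACT = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}

def qualitative_rating(probability: str, impact: str) -> str:
    """Return a coarse rating for one risk driver (thresholds are assumed)."""
    score = PROBABILITY[probability] * IMPACT[impact]
    if score >= 8:
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```

For example, a "frequent" driver with "catastrophic" impact rates "high", while an "improbable" driver with "marginal" impact rates "low".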
104. Risk Management – Risk Identification
Assess Overall Project Risk:
Have top software and customer managers formally committed to support the
project?
Are end-users enthusiastically committed to the project and the system/product to be
built?
Are requirements fully understood by the software engineering team and its
customers?
Have customers been involved fully in the definition of requirements?
105. Risk Management – Risk Identification
Assess Overall Project Risk:
Do end-users have realistic expectations?
Is the project scope stable?
Does the software engineering team have the right mix of skills?
Are project requirements stable?
106. Risk Management – Risk Identification
Assess Overall Project Risk:
Does the project team have experience with the technology to be
implemented?
Is the number of people on the project team adequate to do the job?
Do all customer/user constituencies agree on the importance of the project
and on the requirements for the system/product to be built?
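The checklist questions above can be scored mechanically: the more of them that are answered "no", the higher the overall project risk. The sketch below assumes that heuristic; the abbreviated question list and the thresholds are illustrative.

```python
# Abbreviated overall-project-risk checklist (paraphrased from the slides).
CHECKLIST = [
    "Have top software and customer managers formally committed to support the project?",
    "Are end-users enthusiastically committed to the project?",
    "Are requirements fully understood by the team and its customers?",
    "Is the project scope stable?",
]

def overall_risk(answers: dict) -> str:
    """answers maps each question to True (yes) / False (no).
    Unanswered questions are treated as "no" (a conservative assumption)."""
    noes = sum(1 for q in CHECKLIST if not answers.get(q, False))
    fraction = noes / len(CHECKLIST)
    if fraction > 0.5:
        return "high"
    if fraction > 0.25:
        return "moderate"
    return "low"
```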
107. Risk Management – Risk Projection
Risk projection (or estimation) attempts to rate each risk in two ways
The probability that the risk is real
The consequence of the problems associated with the risk, should it occur.
108. Risk Management – Risk Projection
The project planner, managers, and technical staff perform four risk
projection steps:
Establish a scale that reflects the perceived likelihood of a risk (e.g., 1-low, 10-high)
Describe the consequences of the risk
Estimate the impact of the risk on the project and product
Note the overall accuracy of the risk projection so that there will be no
misunderstandings
109. Risk Management – Risk Projection
The intent of these steps is to consider risks in a manner that leads to
prioritization
By prioritizing risks, the software team can allocate limited resources
where they will have the most impact
110. Risk Management – Risk Projection
A risk table provides a project manager with a simple technique for
risk projection
Risk Summary                          Risk Category   Probability   Impact         RMMM
Skilled staff may not be available    Staff           50%           Catastrophic
Team size may be insufficient         Staff           62%           Critical
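A risk table like the one above is easy to build and sort programmatically. In this sketch the field names and the tie-breaking rule (probability first, then impact severity) are assumptions; the two entries mirror the table.

```python
# Rank of each impact level, most severe last (assumed ordering).
IMPACT_RANK = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}

risks = [
    {"summary": "Skilled staff may not be available", "category": "Staff",
     "probability": 0.50, "impact": "catastrophic"},
    {"summary": "Team size may be insufficient", "category": "Staff",
     "probability": 0.62, "impact": "critical"},
]

# Sort highest probability first, breaking ties by impact severity,
# so the top rows of the table are the risks to manage first.
risks.sort(key=lambda r: (r["probability"], IMPACT_RANK[r["impact"]]),
           reverse=True)
```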
111. Risk Management – Risk Projection
Assessing Risk Impact:
Its nature – This indicates the problems that are likely if the risk occurs
Its scope – This combines the severity of the risk (how serious is it?) with its
overall distribution (how much of the project will be affected?)
Its timing – This considers when and for how long the impact will be felt
The overall risk exposure formula is RE = P x C, where P = the probability of
occurrence for a risk and C = the cost to the project should the risk actually occur
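The RE = P x C formula can be checked with a short worked example; the probability and cost figures below are illustrative assumptions.

```python
def risk_exposure(probability: float, cost: float) -> float:
    """RE = P x C: expected cost to the project should the risk occur."""
    return probability * cost

# e.g., a 50% chance that skilled staff are unavailable, with an assumed
# $40,000 cost of hiring and training replacements:
re = risk_exposure(0.50, 40_000)  # RE = 20,000.0
```

Summing RE over all risks in the risk table gives a rough figure that can be compared against the project's contingency budget.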
113. Risk Management – RMMM
Risk mitigation (i.e., avoidance) - Preventing risks from occurring.
Communicate with the concerned staff to find out probable risks.
Find out and eliminate all those causes that can create risk before the project starts
Develop an organizational policy that helps the project continue even
if some staff leave the organization
Everybody in the project team should be acquainted with the current development
activity.
114. Risk Management – RMMM
Risk mitigation (i.e., avoidance) - Preventing risks from occurring.
Maintain the corresponding documents in a timely manner.
Conduct timely reviews in order to speed up the work
Provide additional staff, if required, for every critical activity during
software development.
115. Risk Management – RMMM
Risk Monitoring:
The attitude and behavior of the team members as project pressure
varies.
The degree to which the team performs with the spirit of "teamwork".
The type of co-operation among the team members
The types of problems that are occurring
Availability of jobs within and outside the organization.
116. Risk Management – RMMM
Risk Monitoring – Objectives:
To check whether the predicted risks actually occur
To ensure that the steps defined to avoid the risks are being applied properly
To gather the information which can be useful for analyzing the risk
117. Risk Management – RMMM
RMMM Plan:
Risk Information Sheet
Project Name: <name>
Risk Id: <#>   Date: <date>   Probability: <%>   Impact: <level>
Origin: <source>   Assigned to: <owner>
Description: <description of risk identified>
Refinement / Context: <associated information for risk refinement>
Mitigation / Monitoring: <enter the mitigation / monitoring steps taken>
Trigger / Contingency Plan: <if risk mitigation fails, the plan for handling the risk>
Status: <history of the risk>
Approval: <signature>   Closing Date: <date>
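The risk information sheet maps naturally onto a simple record type. This is a sketch only: the field names follow the sheet above, but the types and defaults are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RiskInformationSheet:
    """One entry in the RMMM plan (field names follow the slide's sheet)."""
    risk_id: str
    date: str
    probability: float      # probability of occurrence, 0.0-1.0 (assumed scale)
    impact: str             # e.g. "negligible" ... "catastrophic"
    origin: str = ""
    assigned_to: str = ""
    description: str = ""
    refinement: str = ""            # associated information for risk refinement
    mitigation_monitoring: str = "" # mitigation / monitoring steps taken
    contingency_plan: str = ""      # plan used if risk mitigation fails
    status: list = field(default_factory=list)  # history of the risk
```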
118. CASE Tools
CASE tools automate project management activities and manage all the
work products.
Importance:
To perform various software lifecycle activities such as analysis, design,
coding and testing.
To reduce the amount of effort
To produce high-quality software efficiently
120. CASE Tools
Building Blocks of CASE:
Computer aided software engineering can be as simple as a single tool that
supports a specific software engineering activity.
It can be as complex as a complete "environment" that encompasses tools, a
database, people, hardware, a network, operating systems, standards, and
myriad other components.
121. CASE Tools
Building Blocks of CASE:
The integration framework is a collection of specialized programs that
enables individual CASE tools to communicate with one another, to create a
project database, and to exhibit the same look and feel to the end-user.
Portability services allow CASE tools and their integration framework to
migrate across different hardware platforms and operating systems without
significant adaptive maintenance.
122. CASE Tools
Taxonomy of CASE Tools:
Business process engineering and tools: To model the business
information flow.
Process modeling and management tools: To model software processes; a
process must be understood before it can be modeled.
Project planning tools: To plan and schedule projects. Ex: PERT
123. CASE Tools
Taxonomy of CASE Tools:
Risk Analysis tools: To identify potential risks. Useful for building the risk
table.
Project Management tools: To track the progress of the project.
Requirements tracing tools: To trace requirements from the original
customer request through to the delivered system
Metrics and Management tools: Provide an overall measure of quality. Ex:
LOC / person-month