Software Project Management Question Answer 2022
VIVEKANANDHA COLLEGE OF TECHNOLOGY
FOR WOMEN
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
MG6088 SOFTWARE PROJECT MANAGEMENT
Question Bank
REGULATION -2013
IV- YEAR –CSE (VIII-SEMESTER)
STAFF INCHARGE HOD DEAN
UNIT- II
MG6088 Software Project Management
UNIT II
PROJECT LIFE CYCLE AND EFFORT ESTIMATION
PART A
1. What is SCRUM? (Nov/Dec 2018) (APR/MAY 2018)(NOV/Dec 2019)
Answer:
• Scrum is a framework within which software can be developed efficiently through
teamwork. It is based on agile principles.
• Scrum supports continuous collaboration among the customer, the team members,
and other relevant stakeholders.
2. Expand RAD. Is it incremental model? Justify. (Nov/Dec 2018)
Answer:
• RAD stands for Rapid Application Development. It is a type of incremental
model: in the RAD model, the components or functions are developed in parallel as if
they were mini-projects.
• The developments are time-boxed, delivered, and then assembled into a working
prototype.
3. Identify the uses of RAD model. (APR/MAY2019)
Answer:
• Rapid application development (RAD) describes a method of software
development which heavily emphasizes rapid prototyping and iterative
delivery.
• The RAD model is, therefore, a sharp alternative to the typical waterfall
development model, which often focuses largely on planning and sequential
design practices.
Unit II Syllabus: Software process and process models – Choice of process models – Incremental
delivery – Rapid Application Development – Agile methods – Extreme Programming – SCRUM –
Managing interactive processes – Basics of software estimation – Effort and cost estimation
techniques – COSMIC Full Function Points – COCOMO II: A Parametric Productivity Model –
Staffing Pattern.
4. Brief about two ways of setting objectives. (APR/MAY 2018)
Answer:
• For any project to be a success, the project objectives should be clearly defined,
and the key people involved in the project must know about them. Only then
will they work with a single focus and concentration towards achieving
the objectives.
• These key people or organisations are called 'stakeholders'. A stakeholder is
defined as a person or organisation that is directly or indirectly affected by
the project.
• A software project is normally complex in nature and involves many
people drawn from various specialities.
• It is always advisable to set sub-objectives for every individual or
group of individuals in line with the main objectives. Sub-objectives are more
meaningful to a team because they are under its control and achievable by it.
• Workable goal setting should always follow the SMART approach.
5. What is rapid application development? (NOV /DEC 2017)
Answer:
RAD, or Rapid Application Development, is an adaptation of the
waterfall model; it targets developing software in a short span of time.
RAD follows the iterative method.
The SDLC RAD model has the following phases:
• Business Modeling
• Data Modeling
• Process Modeling
• Application Generation
• Testing and Turnover
6. Outline the advantage of agile unified process. (NOV /DEC 2017)
Answer:
1. Agile methodology has an adaptive approach that can respond to the
changing requirements of the clients.
2. Direct communication and constant feedback from the customer representative leave
no space for any guesswork in the system.
7. What is the function of spiral model? (APR/MAY 2017)
Answer:
• The spiral model is similar to the incremental model, with more emphasis
placed on risk analysis.
• A software project repeatedly passes through the phases of planning, risk
analysis, engineering, and evaluation in iterations (called spirals in this model).
8. What is activity model? (APR/MAY 2017)
Answer:
• The activity model indicates the set of activities needed to turn a set of inputs
(capital, raw materials and labour) into the firm’s value proposition (benefits
to customers).
• Examples of such activities include product development, purchasing,
manufacturing, marketing and sales and service delivery.
9. What is a software process model?
Answer:
• A Process Model describes the sequence of phases for the entire lifetime of a
product.
• Therefore it is sometimes also called Product Life Cycle.
• This covers everything from the initial commercial idea until the final de-
installation or disassembling of the product after its use.
10. What are the phases in a software process model?
Answer:
• There are three main phases:
∗ Concept phase
∗ Implementation phase
∗ Maintenance phase
• Each of these main phases usually has some sub-phases, such as a requirements
engineering phase, a design phase, a build phase, and a testing phase.
• The sub-phases may occur in more than one main phase, each with
specific characteristics depending on the main phase.
11. List various software process models.
Answer:
• Waterfall model,
• Spiral model,
• V-model,
• Iterative model,
• Agile model and RAD model.
12. Define Agile Methods.
Answer:
• Agile model is a combination of iterative and incremental process models with
focus on process adaptability and customer satisfaction by rapid delivery of
working software product.
• Agile Methods break the product into small incremental builds. These builds
are provided in iterations. Every iteration involves cross functional teams
working simultaneously on various areas like planning, requirements
analysis, design, coding, unit testing, and acceptance testing.
13. List out the various agile approaches.
Answer:
• Crystal Technologies
• Atern (formerly DSDM)
• Feature-Driven Development
• Scrum
• Extreme Programming (XP)
14. What is extreme programming?
Answer:
• Extreme programming (XP) is a software development methodology, which is
intended to improve software quality and responsiveness to changing
customer requirements.
• As a type of agile software development, it advocates frequent "releases" in
short development cycles, to improve productivity and introduce checkpoints
at which new customer requirements can be adopted.
15. Write about the COCOMO model.
Answer:
• COCOMO stands for Constructive Cost Model.
• It refers to a group of models.
• The basic model was built around the equation:
Effort = c × (size)^k
• where Effort is measured in pm, the number of 'person-months'.
16. Define organic mode.
Answer:
• Organic mode is the case when a relatively small team develops software in a
highly familiar in-house environment, the system being developed is small,
and the interface requirements are flexible.
17. Give an idea about parametric models.
Answer:
• Some models focus on task or system size, e.g. Function Points.
FPs were originally used to estimate lines of code rather than effort.
• Other models focus on productivity, e.g. COCOMO.
• In these, lines of code (or FPs, etc.) are an input.
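To illustrate how a size measure such as function points can feed a size-based model, here is a minimal sketch that converts an FP count into a KLOC estimate. The LOC-per-FP factors and the function name are illustrative assumptions for this example, not authoritative values.

```python
# Illustrative sketch only: the LOC-per-FP conversion factors below are
# assumed ballpark figures for this example, not authoritative values.
LOC_PER_FP = {"C": 128, "Java": 53, "Python": 42}

def estimated_kloc(function_points, language):
    """Convert a function-point count into a size estimate in KLOC."""
    return function_points * LOC_PER_FP[language] / 1000.0

print(estimated_kloc(200, "Java"))  # 200 Java FPs -> about 10.6 KLOC
```

The resulting KLOC figure could then be used as the size input to a productivity model such as COCOMO.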
18. What is the use of COCOMO model and its types?
Answer:
• COCOMO predicts the effort and schedule for a software product development
based on inputs relating to the size of the software and a number of cost
drivers that affect productivity.
• COCOMO has three different models that reflect the complexity:
• The Basic Model
• The Intermediate Model
• The Detailed Model
19. Write any two advantages of function point analysis.
Answer:
• Improved project estimating
• Understanding project and maintenance productivity
• Managing changing project requirements
• Gathering user requirements.
20. Define application composition.
Answer:
• In application composition, the external features of the system that the users
will experience are designed. Prototyping will typically be employed to do this,
with small applications that can be built using high-productivity application-
building tools; development can stop at this point.
21. Determine the stages of estimation carried out in a software project.
(APR/MAY 2019)
Answer:
1. Scoping.
2. Decomposition.
3. Sizing.
4. Expert and Peer Review.
5. Estimation Finalization.
22. Give examples for rapid application development.
Answer:
Some of the tools that can be used in RAD are those that are strong in
automated code generation, such as:
• Super Mojo by Penumbra
• Microsoft LightSwitch
• Visual Studio
• WaveMaker
• Delphi RAD Studio
• Lazarus IDE
PART B
1) How is the cost estimation of Agile projects done? Explain in detail.
(Nov/Dec 2018) (13m)
Answer:
• Agile software development methodology is a process for developing software
(like other software development methodologies – Waterfall model, V-Model,
Iterative model etc.)
• However, Agile methodology differs significantly from other methodologies. In
English, Agile means ‘ability to move quickly and easily’ and responding
swiftly to change – this is a key aspect of Agile software development as well.
• Cost estimation in software engineering is the process of predicting the
resources (money, time, and people) necessary to finish a project within
the defined scope.
• Accurate estimates help everyone involved in the project: for example, project
owners can decide whether to take on the project.
Brief overview of Agile Methodology
Why agile estimates?
There are two generally accepted methodologies for developing software:
waterfall, also known as the traditional model, and agile. Their
approaches to estimating projects are quite different.
Let us explain using a real-world
example of two similar projects by the Federal Bureau of Investigation (FBI)
that were developed with the two different methodologies and resulted in vastly
different outcomes.
Waterfall vs Agile
The waterfall methodology is a sequential design process, meaning each
phase is started and completed before moving on to the next. Once a phase
is completed, developers can’t go back to a previous step.
• With agile development, planning consists of three main building blocks:
scope (requirements), resources (software development budget), and
time. With waterfall, the scope is fixed while the budget and time are
flexible, to make sure all required functionality is delivered. This means that
the project is not considered finished until it meets all requirements.
Agile development, on the other hand, is quality-driven, meaning that
developers aim to maximize value while sticking to a fixed budget and
schedule.
• In traditional software development methodologies like Waterfall model, a
project can take several months or years to complete and the customer may
not get to see the end product until the completion of the project.
• At a high level, non-Agile projects allocate extensive periods of time for
Requirements gathering, design, development, testing and UAT, before finally
deploying the project.
• In contrast to this, Agile projects have Sprints or iterations which are shorter
in duration (Sprints/iterations can vary from 2 weeks to 2 months) during
which pre-determined features are developed and delivered.
• Agile projects can have one or more iterations and deliver the complete
product at the end of the final iteration.
Example of Agile software development
• Example: Google is working on a project to come up with a competing product
for MS Word that provides all the features of MS Word plus any
other features requested by the marketing team.
• The final product needs to be ready in 10 months. Let us see how this
project would be executed in the traditional and Agile methodologies.
• In the traditional waterfall model, at a high level, the project team would spend
15% of the time on requirements gathering and analysis (1.5 months), 20%
on design (2 months), 40% on coding and unit testing (4 months), and
20% on system and integration testing (2 months).
• At the end of this cycle, the project may also have 2 weeks of user acceptance
testing by the marketing team. In this approach, the customer does not get to
see the end product until the end of the project, when it becomes too late to
make significant changes. (Figure: project schedule in traditional software
development.)
• With the Agile development methodology, each project is broken up into
several 'iterations'.
• All Iterations should be of the same time duration (between 2 to 8 weeks).
• At the end of each iteration, a working product should be delivered.
• In simple terms, in the Agile approach the project will be broken up into 10
releases (assuming each iteration is set to last 4 weeks).
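The schedule arithmetic in the example above can be sketched in a few lines of Python. The function names and the 4-weeks-per-month simplification are our own illustrative choices, not part of any standard method:

```python
# Sketch of the schedule arithmetic above (names are illustrative).
def waterfall_schedule(total_months, phase_shares):
    """Split a project duration across waterfall phases by percentage share."""
    return {phase: total_months * share for phase, share in phase_shares.items()}

def agile_iterations(total_months, iteration_weeks, weeks_per_month=4):
    """Number of fixed-length iterations that fit in the project."""
    return (total_months * weeks_per_month) // iteration_weeks

shares = {"requirements": 0.15, "design": 0.20,
          "coding_and_unit_testing": 0.40, "system_and_integration_testing": 0.20}
print(waterfall_schedule(10, shares))   # 1.5, 2.0, 4.0 and 2.0 months
print(agile_iterations(10, 4))          # 10 iterations of 4 weeks each
```

This reproduces the figures quoted in the text: a 10-month waterfall project splits 1.5/2/4/2 months across its phases, while the same 10 months hold ten 4-week Agile iterations.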
• Rather than spending 1.5 months on requirements gathering, in Agile
software development, the team will decide the basic core features that are
required in the product and decide which of these features can be developed
in the first iteration.
• Any remaining features that cannot be delivered in the first iteration will be
taken up in the next iteration or subsequent iterations, based on priority.
• At the end of the first iteration, the team will deliver working software with
the features that were finalized for that iteration.
• There will be 10 iterations, and at the end of each iteration the customer is
delivered working software that is incrementally enhanced and updated
with the features shortlisted for that iteration.
2) Explain estimation through COCOMO II model (Nov/Dec 2018)(Nov/Dec
2019) (13m)
Answer:
COCOMO: a parametric model
• COCOMO (Constructive Cost Model) was proposed by Boehm.
According to him, any software development project can be classified into
one of the following three categories based on the development
complexity: organic, semidetached, and embedded.
• The classification is done considering the characteristics of the product as
well as those of the development team and development environment.
Usually these three product classes correspond to application, utility and
system programs, respectively.
• Data processing programs are normally considered to be application
programs. Compilers, linkers, etc., are utility programs. Operating
systems, real-time system programs, etc., are system programs.
• The definitions of organic, semidetached, and embedded systems are
elaborated below.
• Organic: A development project can be considered of organic type, if the
project deals with developing a well understood application program, the
size of the development team is reasonably small, and the team members
are experienced in developing similar types of projects.
• Semidetached: A development project can be considered of semidetached
type, if the development consists of a mixture of experienced and
inexperienced staff.
• Team members may have limited experience on related systems but may
be unfamiliar with some aspects of the system being developed.
• Embedded: A development project is considered to be of embedded type
if the software being developed is strongly coupled to complex hardware,
or if stringent regulations on the operational procedures exist.
• According to Boehm, software cost estimation should be done through
three stages: Basic COCOMO, Intermediate COCOMO, and Complete
COCOMO.
Basic COCOMO Model
• The basic COCOMO model gives an approximate estimate of the project
parameters.
• The basic COCOMO estimation model is given by the following
expressions:
Effort = a × (KLOC)^b PM
Tdev = 2.5 × (Effort)^c months
Where:
• KLOC is the estimated size of the software product expressed in kilo lines
of code; a, b, c are constants for each category of software product; Tdev
is the estimated time to develop the software, expressed in months; Effort
is the total effort required to develop the software product, expressed in
person-months (PMs).
• The values of the constants a, b, c for each mode are:
Organic: a = 2.4, b = 1.05, c = 0.38
Semidetached: a = 3.0, b = 1.12, c = 0.35
Embedded: a = 3.6, b = 1.20, c = 0.32
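The basic COCOMO expressions can be sketched directly in Python. The constants below are Boehm's commonly published values for the three modes; the function and variable names are illustrative:

```python
# Basic COCOMO sketch. Constants (a, b, c) per mode are Boehm's
# commonly published values; names are illustrative.
CONSTANTS = {
    "organic":      (2.4, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (3.6, 1.20, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, development time in months)."""
    a, b, c = CONSTANTS[mode]
    effort = a * kloc ** b      # Effort = a * (KLOC)^b
    tdev = 2.5 * effort ** c    # Tdev = 2.5 * (Effort)^c
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")
```

For a 32 KLOC organic project this gives roughly 91 person-months of effort and about 14 months of development time.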
Intermediate COCOMO Model
• The basic COCOMO model assumes that effort and development time are
functions of the product size alone. However, many other project
parameters apart from the product size affect the development effort and
time required for the product. Therefore, in order to obtain an accurate
estimation of the effort and project duration, the effect of all relevant
parameters must be taken into account.
• The intermediate COCOMO model recognizes this fact and refines the
initial estimate obtained using the basic COCOMO expressions by using a
set of 15 cost drivers (multipliers) based on various attributes of software
development.
• For example, if modern programming practices are used, the initial
estimates are scaled downward by multiplication with a cost driver having
a value less than 1.
• Each of the 15 attributes receives a rating on a six-point scale that ranges
from "very low" to "extra high" (in importance or value). An effort multiplier
corresponding to each rating is applied, and the product of all effort
multipliers results in an Effort Adjustment Factor (EAF).
• EAF is used to refine the estimates obtained by basic COCOMO as
follows:
Effort|corrected = Effort * EAF
Tdev|corrected = 2.5 * (Effort|corrected)^c
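The EAF correction can be sketched as follows. The multiplier values in the example are invented for illustration; a real intermediate COCOMO estimate reads them from the published cost-driver table. The exponent c = 0.38 assumes an organic product.

```python
# Hedged sketch of the intermediate COCOMO refinement: the EAF is the
# product of all effort multipliers and corrects the basic estimate as
#   Effort|corrected = Effort * EAF
#   Tdev|corrected = 2.5 * (Effort|corrected)^c
# The multiplier values below are invented illustrative numbers.

def apply_eaf(basic_effort_pm, multipliers, c=0.38):
    """Refine a basic COCOMO effort estimate with cost-driver multipliers."""
    eaf = 1.0
    for m in multipliers:
        eaf *= m                       # product of all effort multipliers
    effort = basic_effort_pm * eaf     # Effort|corrected
    tdev = 2.5 * effort ** c           # Tdev|corrected
    return eaf, effort, tdev

# Example: two drivers scale effort down (values < 1), one scales it up.
eaf, effort, tdev = apply_eaf(91.3, [0.91, 0.87, 1.15])
```

Multipliers below 1 (e.g. modern programming practices) shrink the estimate, while those above 1 (e.g. tight constraints) inflate it, exactly as the text describes.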
Complete COCOMO Model
• Both the basic and intermediate COCOMO models consider a software
product as a single homogeneous entity. However, most large systems are
made up of several smaller sub-systems, each of which could be of
organic, semidetached, or embedded type.
• The complete COCOMO model takes into account these differences in
characteristics of the subsystems and estimates the effort and
development time as the sum of the estimates for the individual
subsystems. This approach reduces the percentage of error in the final
estimate.
• The following development project can be considered as an example
application of the complete COCOMO model. A distributed Management
Information System (MIS) product for an organization having offices at
several places across the country can have the following sub-components:
Database part
Graphical User Interface (GUI) part
Communication part
• Of these, the communication part can be considered as embedded software.
The database part could be semi-detached software, and the GUI part organic
software.
• The costs for these three components can be estimated separately, and
summed up to give the overall cost of the system.
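The sub-system summation idea can be sketched for the MIS example above. The KLOC sizes below are hypothetical, chosen only to show each part being estimated under its own mode and the results summed.

```python
# Hedged sketch of the complete COCOMO idea for the MIS example: each
# sub-system is estimated with the basic effort expression for its own
# mode, and the results are summed. The KLOC sizes are invented.

COEFFS = {"organic": (2.4, 1.05),
          "semidetached": (3.0, 1.12),
          "embedded": (3.6, 1.20)}

def subsystem_effort(kloc, mode):
    """Basic COCOMO effort (person-months) for one sub-system."""
    a, b = COEFFS[mode]
    return a * kloc ** b

subsystems = {
    "GUI":           (12, "organic"),       # GUI part: organic
    "Database":      (20, "semidetached"),  # database part: semi-detached
    "Communication": (8,  "embedded"),      # communication part: embedded
}

total = sum(subsystem_effort(kloc, mode) for kloc, mode in subsystems.values())
```

Estimating each part separately avoids applying the embedded-mode penalty to the whole product, which is why the complete model reduces the error in the final estimate.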
3) i) Explain the steps involved for Extreme programming. (APR/MAY 2018)
(10m)
Answer:
• The Extreme Programming technique is very helpful when there are
constantly changing demands or requirements from the customers, or
when they are not sure about the functionality of the system.
• It advocates frequent "releases" of the product in short development
cycles, which inherently improves the productivity of the system and also
introduces a checkpoint where any customer requirements can be easily
implemented. XP develops software keeping the customer as the target.
Phases of eXtreme programming:
• There are 6 phases available in Agile XP method, and those are
explained as follows:
• Planning
∗ Identification of stakeholders and sponsors
∗ Infrastructure Requirements
∗ Security related information and gathering
∗ Service Level Agreements and its conditions
• Analysis
∗ Capturing of Stories in Parking lot
∗ Prioritize stories in Parking lot
∗ Scrubbing of stories for estimation
∗ Define Iteration SPAN(Time)
∗ Resource planning for both Development and QA teams
• Design
∗ Break down of tasks
∗ Test Scenario preparation for each task
∗ Regression Automation Framework
• Execution
∗ Coding
∗ Unit Testing
∗ Execution of Manual test scenarios
∗ Defect Report generation
∗ Conversion of Manual to Automation regression test cases
∗ Mid Iteration review
∗ End of Iteration review
• Wrapping
∗ Small Releases
∗ Regression Testing
∗ Demos and reviews
∗ Develop new stories based on the need
∗ Process Improvements based on end of iteration review
comments
• Closure
∗ Pilot Launch
∗ Training
∗ Production Launch
∗ SLA Guarantee assurance
∗ Review SOA strategy
∗ Production Support
∗ There are two storyboards available to track the work on a daily basis,
and those are listed below for reference.
Story Cardboard
∗ This is a traditional way of collecting all the stories on a board in the
form of sticky notes to track daily XP activities. As this manual activity
involves more effort and time, it is better to switch to an online form.
Online Storyboard
∗ Online tool Storyboard can be used to store the stories. Several teams
can use it for different purposes.
Roles
Customer
∗ Writes User Stories and specifies Functional Tests
∗ Sets priorities, explains stories
∗ May or may not be an end-user
∗ Has authority to decide questions about the stories !
Programmer
∗ Estimates stories
∗ Defines Tasks from stories, and estimates
∗ Implements Stories and Unit Tests
Coach
∗ Watches everything, makes sure the project stays on course
∗ Helps with anything
Tracker
∗ Monitors the Programmers' progress, takes action if things seem to
be going off track.
∗ Actions include setting up a meeting with the Customer, or asking the
Coach or another Programmer to help
Tester
∗ Implements and runs Functional Tests (not Unit Tests!)
∗ Graphs results, and makes sure people know when test results
decline.
Doomsayer
∗ Ensures that everybody knows the risks involved
∗ Ensures that bad news isn't hidden, glossed over, or blown out of
proportion
Manager
∗ Schedules meetings (e.g. Iteration Plan, Release Plan), makes sure the
meeting process is followed, records results of meeting for future
reporting, and passes to the Tracker
∗ Possibly responsible to the Gold Owner.
∗ Goes to meetings, brings back useful information
Gold Owner
∗ The person funding the project, which may or may not be the same as the
Customer
ii) List all its advantages and disadvantages. (APR/MAY 2018)(6m)
Answer:
The successful use of XP is based on certain conditions. If these do not exist, then
its practice could be difficult. These conditions include the following.
• There must be easy access to users, or at least a customer representative who
is a domain expert. This may be difficult where developers and users belong to
different organizations.
• Development staff need to be physically located in the same office.
• As users find out about how the system will work only by being presented
with working versions of the code, there may be communication problems if
the application does not have a visual interface.
• For work to be sequenced into small iterations, it must be possible to
break the system functionality into relatively small and self-contained
components.
• Large, complex systems may initially need significant architectural effort. This
might preclude the use of XP.
XP also has some intrinsic potential problems, particularly with regard to its
reliance on tacit expertise and knowledge as opposed to externalized knowledge in
the shape of documentation.
• There is a reliance on high-quality developers, which makes software
development vulnerable if staff turnover is significant.
• Even where staff retention is good, once an application has been developed
and implemented, the tacit personal knowledge of the system may decay.
This might make it difficult, for example, for maintenance staff without
documentation to identify which bits of the code to modify to implement a
change in requirements.
• Some software development environments have focused on encouraging code
reuse as a means of improving software development productivity. Such a
policy would seem to be incompatible with XP.
4) What are the components of staffing? Explain the methods of Staffing level
estimation. (APR/MAY 2018) (13m)
Answer:
• For an organization to achieve success, it has to strategize its system
of staffing. Staffing, as it is known in the Human Resources
Management profession, is composed of three elements:
1. Recruitment
2. Selection
3. Employment
• Resource allocation in software development is important and many methods
have been proposed. Related empirical research is still scarce, and evidence is
required to validate the theoretical methods. The staffing pattern can be used
as a metric of resource distribution among project phases, and its effect on
software quality and productivity can be verified using real project data. The
main findings are:
(1) there exist different staffing patterns in reality;
(2) the staffing pattern has significant effect on software quality (post-
release defect density);
(3) the staffing pattern has no significant effect on productivity;
(4) the effort invested on test, document or code inspection possibly
explains the effect of staffing pattern on software quality;
(5) the effort consumed by rework perhaps counteracts the effect of other
potential factors on productivity.
• Preliminary heuristics are suggested to resource allocation practices.
Staffing level estimation
Once the effort required to develop a software has been determined, it is necessary
to determine the staffing requirement for the project.
Norden’s Work
Norden studied the staffing patterns of several R & D projects. He found that the
staffing pattern can be approximated by the Rayleigh distribution curve. Norden
represented the Rayleigh curve by the following equation:
E = (K / td²) * t * e^(−t² / (2 td²))
Where E is the effort required at time t. E is an indication of the number of
engineers (or the staffing level) at any particular time during the duration of the
project, K is the area under the curve, and td is the time at which the curve attains
its maximum value.
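Norden's Rayleigh curve can be sketched numerically. K and td below are invented illustrative values; the snippet only shows how staffing builds up, peaks at t = td, and then tails off.

```python
import math

# Hedged sketch of Norden's Rayleigh staffing curve,
#   E(t) = (K / td^2) * t * exp(-t^2 / (2 * td^2)),
# where K is the area under the curve (total effort) and td is the time
# of peak staffing. The values of K and td are invented.

def staffing_level(t, K, td):
    """Staffing level (effort rate) at time t on a Rayleigh curve."""
    return (K / td ** 2) * t * math.exp(-t ** 2 / (2 * td ** 2))

K, td = 100.0, 10.0                 # e.g. 100 PM total, peak at month 10
peak = staffing_level(td, K, td)    # the curve attains its maximum at t = td
```

Evaluating the curve on either side of td (say at t = 5 and t = 15) gives lower staffing levels than at td, matching the Rayleigh shape described in the text.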
Putnam’s Work
Putnam studied the problem of staffing of software projects and found that
software development has characteristics very similar to the R & D projects
studied by Norden, and that the Rayleigh-Norden curve can be used to relate the
number of delivered lines of code to the effort and the time required to develop the
project. By analyzing a large number of army projects, Putnam derived the
following expression:
L = Ck * K^(1/3) * td^(4/3)
The various terms of this expression are as follows:
• K is the total effort expended (in PM) in the product development and L is the
product size in KLOC.
• td corresponds to the time of system and integration testing. Therefore, td can be
approximately considered as the time required to develop the software.
• Ck is the state of technology constant and reflects constraints that impede the
progress of the programmer. Typical values of Ck = 2 for poor development
environment (no methodology, poor documentation, and review, etc.), Ck = 8 for
good software development environment (software engineering principles are
adhered to), Ck = 11 for an excellent environment (in addition to following software
engineering principles, automated tools and techniques are used). The exact value of
Ck for a specific project can be computed from the historical data of the
organization developing it.
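Putnam's expression can be rearranged to estimate the total effort K from L, Ck, and td: K = (L / (Ck * td^(4/3)))^3. The input values below are hypothetical, chosen only to illustrate the rearrangement.

```python
# Hedged sketch: solving Putnam's L = Ck * K^(1/3) * td^(4/3) for the
# total effort K, given product size L (KLOC), technology constant Ck,
# and development time td. Input values are illustrative.

def putnam_effort(L, Ck, td):
    """Total effort K implied by Putnam's equation."""
    return (L / (Ck * td ** (4 / 3))) ** 3

# 100 KLOC product, good development environment (Ck = 8), td = 2:
K = putnam_effort(100, 8, 2)
```

Because K varies as td to the power −4, even a modest schedule compression sharply increases the required effort, which is the well-known consequence of Putnam's model.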
Staffing patterns as follows:
(1) Rapid-team-buildup pattern (abbreviated Rapid for later reference). The staffing
levels peak in requirement phase, and decrease in later phases. This might mean
the culture of excessive documentation or design, leading to low ability to respond
rapidly for requirement change. Another possible reason would be to outsource part
of the system to other organization for design and development. In both situations,
we suppose the software quality and productivity are low.
(2) Fix-staff pattern (abbreviated Fix). The team size is fixed or stable across
project lifecycle. It is likely that the same team has done all work. Due to sufficient
learning time and communication within the team, we assumed that high software
quality will be yielded as a result. It is hard to assess its productivity: perhaps
people work efficiently due to effective communication; perhaps this effect cannot
counteract the excessive human resource investment over a prolonged
duration.
(3) Design-construction-centric pattern (abbreviated Design). The staffing levels
are high in design and construction phases, and low in other phases. The reason for
low staffing level in requirement and test phases may be indifference to them, or
mature/simple product. The software quality and productivity might be low for the
former possibility, and high for the latter.
(4) Implementation-centric pattern (abbreviated Implement). The staffing levels
are high in construction and test phases, and low in other phases. The reason for
low staffing level in requirement and design phases may be indifference to them, or
merely providing coding and testing services in IT outsourcing market. The software
quality and productivity would be low for the former possibility, and high for the
latter.
(5) Test-centric pattern (abbreviated Test). The staffing levels are relatively low in
early phases, but increase in test and transition phases. The increasing level in test
phase may be the result of intensive testing by a special team (in-house or
outsourcing), or fire fighting for too many bugs. The software quality may be high,
because the staffing level in early phases is also stable and the situation is the same
as Fix-staff pattern. Moreover, it is possible that high staffing level in test or
transition phase detects and removes more defects before delivery. Similar to Fix-
staff pattern, its productivity is hard to be determined.
(6) Classical-Rayleigh pattern (abbreviated Rayleigh). Similar to classical Rayleigh
curve, the staffing levels are low at the beginning, gradually increase, peak at
construction phase, and drop at later phases. It is the most common pattern in our
data set. We assumed that the productivity is high in this situation, but the yield
may be of low quality owing to insufficient communication inside the team.
(7) Minimum-design pattern (abbreviated Mini Design). The trend of staffing level
is an “M” shape with 2 peaks at requirement and construction or test phases.
Possibly the design phase is minimized due to indifference to it, architecture reuse,
mature or simple product. It is also possible that several senior developers are
branched from the whole team (and the design runs in parallel with other phases)
rather than a turnover of staff.
5) Discuss the spiral software development life cycle model with diagrammatic
illustration. What are the spiral model strengths? What are the spiral model
deficiencies? When to use the spiral model? Discuss. (NOV /DEC 2017) (13m)
Or
Outline the spiral SDLC model with a diagram. What are the strengths of the
spiral model? What are the deficiencies of the spiral model . When to use the
spiral model?(Nov/Dec 2019) (13m)
Answer:
Features
1. It is an evolutionary model.
2. Incorporates the controlled and systematic aspects of the waterfall model.
3. Incorporates the evolutionary process (iterative nature) of prototyping.
4. While early iterations result in prototypes and documentation (paper models),
later versions result in a full working product.
5. It allows rapid development if appropriate 4GL languages are used.
6. The stages of product development are divided into 4, also called "TASK REGIONS",
i.e.,
a. Concept development
b. Product development
c. Product Enhancement
d. Product Maintenance
7. Each task region has the framework activities like communication, planning,
modelling, construction, deployment and customer feedback incorporated.
APPLICABILITY
• Large or high risk projects
• When technology is new and proof of concept is required
• Requirements are fuzzy
• Same products for multiple customers
Example:
Computation-intensive systems like decision support systems, aerospace, defense
and engineering projects.
Merits
1. Customer and software engineer are in touch with each other in every task region
as communication is mandatory.
2. Prototyping at each stage helps to reduce risk as customer sees the system
evolving.
3. Planning happens at every stage.
4. Estimation and control at every stage
5. This model is a combination of various other models, so it carries with it the
advantages of these models, viz.,
Waterfall: all defined steps are followed for each task region
Incremental: allows incremental releases
Prototyping: early iterations result in prototypes
Demerits
1. This model is not suitable for small low risk projects as it would be an expensive
affair.
2. Not practiced as widely as other models.
3. If the customer keeps changing requirements, the number of spirals increases
and the software project manager may never be able to close the project.
6) Explain the steps in the COCOMO II effort estimation technique. (NOV /DEC
2017) or
Outline the strategies for software effort estimation techniques.
(APR/MAY2019)(Nov/Dec 2019)
Answer:
Repeat the same answer from Q.NO: 2
7) Explain COCOMO II model. (APR/MAY 2017)
Answer:
Repeat the same answer from Q.NO: 2
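The core of the COCOMO II answer referred to above is the post-architecture effort equation PM = A × Size^E × ∏EM, with E = B + 0.01 × ΣSF. A minimal sketch, using the published COCOMO II.2000 calibration constants (A = 2.94, B = 0.91) and made-up nominal ratings for the scale factors and effort multipliers:

```python
# Illustrative COCOMO II (post-architecture) effort estimate.
# A and B are the published COCOMO II.2000 calibration constants;
# the scale-factor and effort-multiplier inputs below are hypothetical.

A = 2.94  # multiplicative constant (person-months)
B = 0.91  # baseline scale exponent

def cocomo2_effort(ksloc, scale_factors, effort_multipliers):
    """Effort in person-months: PM = A * Size^E * product(EM)."""
    e = B + 0.01 * sum(scale_factors)   # five scale factors set the exponent
    eaf = 1.0
    for em in effort_multipliers:       # seventeen cost drivers
        eaf *= em
    return A * (ksloc ** e) * eaf

# 50 KSLOC, five scale factors rated at 3.79 each, all cost drivers nominal:
pm = cocomo2_effort(50, [3.79] * 5, [1.0] * 17)
print(round(pm))  # roughly 217 person-months
```

Because E comes out slightly above 1, doubling the size more than doubles the effort, which is the diseconomy of scale COCOMO II is designed to capture.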
8) Discuss extended function point with an example. (APR/MAY 2017) (13m)
Answer:
• Function Point Analysis (FPA) is one of the most widely used methods to
determine the size of software projects. FPA originated at a time when
only a mainframe environment was available.
• Sizing of specifications was typically based on functional decomposition
and modeled data. Nowadays, development methods like Object Oriented,
Component Based and RAD are applied more often. There is also more
attention on architecture and the use of client server and multitier
environments. Another development is the growth in complexity caused
by more integrated applications, real-time applications, embedded
systems and combinations of these. FPA was not designed to cope with these
newer development approaches.
• The Common Software Measurement International Consortium (COSMIC),
aimed to develop, test, bring to market and to seek acceptance of a new
software sizing method to support estimating and performance
measurement (productivity, time to market and quality). The
measurement method must be applicable for estimating the effort for
developing and maintaining software in various software domains. Not
Page 25
only business software (MIS) but also real time software (avionics,
telecom, process control) and embedded software (mobile phones,
consumer electronics) can be measured.
• The basis for measurement must be found, just as in FPA, in the user
requirements the software must fulfil. The result of the measurement
must be independent of the development environment and the method
used to specify these requirements. Size depends only on the user requirements.
COSMIC Concepts
• The Functional User Requirements (FUR) are, according to the definition
of a functional size measurement method, the basis for measurement.
They specify user’s needs and procedures that the software should fulfil.
• The FUR are analysed to identify the functional processes. A Functional
Process is an elementary component of a set of FUR. It is triggered by one
or more events in the world of the user of the software being measured.
The process is complete when it has executed all that is required to be
done in response to the triggering event.
• Each functional process consists of a set of sub processes that are either
movements or manipulations of data. Since no one knows how to
measure data manipulation, and since the aim is to measure ‘data
movement rich’ software, the simplifying assumption is made that each
functional process consists of a set of data movements.
• A Data Movement moves one Data Group. A Data Group is a unique
cohesive set of data (attributes) specifying an ‘object of interest’ (i.e.
something that is ‘of interest’ to the user). Each Data Movement is
counted as one CFP (COSMIC function point).
• COSMIC recognises 4 (types of) Data Movements:
1. Entry moves data from outside into the process
2. Exit moves data from the process to the outside world
3. Read moves data from persistent storage to the process
4. Write moves data from the process to persistent storage.
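Because every Data Movement counts as one CFP, sizing a functional process reduces to counting its Entries, Exits, Reads and Writes. A minimal sketch (the waypoint process and its movements are hypothetical):

```python
# Sketch of COSMIC sizing: each data movement counts as 1 CFP.
# The functional process and its movements below are invented for illustration.

MOVEMENT_KINDS = {"Entry", "Exit", "Read", "Write"}

def process_size_cfp(movements):
    """Size of one functional process = number of its data movements."""
    assert all(m in MOVEMENT_KINDS for m in movements)
    return len(movements)

# Hypothetical process "enter navigation waypoint": the pilot's input is an
# Entry, the current flight plan is Read, the updated plan is a Write, and
# the confirmation shown back is an Exit.
waypoint_entry = ["Entry", "Read", "Write", "Exit"]
print(process_size_cfp(waypoint_entry))  # 4 CFP
```

The smallest possible process is 2 CFP (a triggering Entry plus at least one Exit or Write), which is why the COSMIC scale is continuous with no upper limit.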
Page 26
• From a pure size measurement point of view, the most important
improvements of the COSMIC method compared with using traditional
Function Points are as follows
∗ The COSMIC method was designed to measure the functional
requirements of software in the domains of business application, real-
time and infrastructure software (e.g. operating systems, web
components, etc.), in any layer of a multilayer architecture and at any
level of decomposition.
∗ Traditional Function Points were designed to measure only the
functionality ‘seen’ by human users of business software in the
application layer.
∗ Traditional Function Points use a size scale with a limited range of
possible sizes for each component. COSMIC functional processes are
measured on a continuous size scale with a minimum of 2 CFP and no
upper size limit.
∗ Modern software can have extremely large processes. Individual
functional processes of roughly 100 CFP have been measured in avionics
software systems and in public national insurance systems. Traditional
Function Points can therefore give highly misleading sizes for certain
types of software, which means that great care must be taken when using
these sizes for performance measurement or estimating.
∗ The COSMIC method gives a much finer measure of the size of any
changes to be made to software than traditional function points. The
smallest change that can be measured with the COSMIC method is 1
CFP.
Page 27
Users of the COSMIC method have reported the following benefits,
compared with using '1st generation' methods:
∗ Easy to learn and stable due to the principles-based approach, hence
'future proof' and cost-effective to implement;
∗ Well-accepted by project staff due to the ease of mapping of the method’s
concepts to modern software requirements documentation methods, and
to its compatibility with modern software architectures;
∗ Improves estimating accuracy, especially for larger software projects;
∗ Possible to size requirements automatically that are held in CASE tools;
∗ Reveals real performance improvement where using traditional function
points has not indicated any improvement, due to their inability to recognise
how software processes have increased in size over time;
∗ Sizing with COSMIC is an excellent way of controlling the quality of the
requirements at all stages as they evolve.
9). Explain five major components of function point analysis.(APR/MAY2019)
(13m)
Answer:
One of the initial design criteria for function points was to provide a mechanism
that both software developers and users could utilize to define functional
requirements.
It was determined that the best way to gain an understanding of the users'
needs was to approach their problem from the perspective of how they view the
results an automated system produces.
Therefore, one of the primary goals of Function Point Analysis is to evaluate a
system's capabilities from a user's point of view.
To achieve this goal, the analysis is based upon the various ways users interact
with computerized systems. From a user's perspective a system assists them in
doing their job by providing five (5) basic functions.
Two of these address the data requirements of an end user and are referred to
as Data Functions.
The remaining three address the user's need to access data and are referred to
as Transactional Functions.
Page 28
The Five Components of Function Points
Data Functions
• Internal Logical Files
• External Interface Files
Transactional Functions
• External Inputs
• External Outputs
• External Inquiries
Internal Logical Files
The first data function allows users to utilize data they are responsible for
maintaining. For example, a pilot may enter navigational data through a
display in the cockpit prior to departure.
The data is stored in a file for use and can be modified during the mission.
Therefore the pilot is responsible for maintaining the file that contains the
navigational information.
Logical groupings of data in a system, maintained by an end user, are
referred to as Internal Logical Files (ILF).
External Interface Files
The second Data Function a system provides an end user is also related to
logical groupings of data. In this case the user is not responsible for
maintaining the data.
The data resides in another system and is maintained by another user or
system. The user of the system being counted requires this data for
reference purposes only.
For example, it may be necessary for a pilot to reference position data from a
satellite or ground-based facility during flight.
The pilot does not have the responsibility for updating data at these sites but
must reference it during the flight.
Groupings of data from another system that are used only for reference
purposes are defined as External Interface Files (EIF).
The remaining functions address the user's capability to access the data
contained in ILFs and EIFs.
Page 29
This capability includes maintaining, inquiring and outputting of data.
These are referred to as Transactional Functions.
External Input
The first Transactional Function allows a user to maintain Internal Logical
Files (ILFs) through the ability to add, change and delete the data.
For example, a pilot can add, change and delete navigational information
prior to and during the mission.
In this case the pilot is utilizing a transaction referred to as an External
Input (EI). An External Input gives the user the capability to maintain the
data in ILF's through adding, changing and deleting its contents.
External Output
The next Transactional Function gives the user the ability to produce
outputs. For example a pilot has the ability to separately display ground
speed, true air speed and calibrated air speed.
The results displayed are derived using data that is maintained and data
that is referenced. In function point terminology the resulting display is
called an External Output (EO).
External Inquiries
The final capability provided to users through a computerized system
addresses the requirement to select and display specific data from files.
To accomplish this a user inputs selection information that is used to
retrieve data that meets the specific criteria.
In this situation there is no manipulation of the data. It is a direct retrieval
of information contained on the files.
For example if a pilot displays terrain clearance data that was previously set,
the resulting output is the direct retrieval of stored information. These
transactions are referred to as External Inquiries (EQ).
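The counts of these five components are combined into an Unadjusted Function Point (UFP) total by weighting each occurrence according to its complexity. A small sketch using the standard IFPUG low/average/high weights; the component counts themselves are invented:

```python
# Unadjusted Function Points (UFP) from the five FPA components.
# Weights are the standard IFPUG low/average/high values;
# the component counts below are hypothetical.

WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},
    "EO":  {"low": 4, "avg": 5,  "high": 7},
    "EQ":  {"low": 3, "avg": 4,  "high": 6},
    "ILF": {"low": 7, "avg": 10, "high": 15},
    "EIF": {"low": 5, "avg": 7,  "high": 10},
}

def ufp(counts):
    """counts: {component: {complexity: number_of_occurrences}}"""
    return sum(WEIGHTS[comp][cx] * n
               for comp, by_cx in counts.items()
               for cx, n in by_cx.items())

# 3 simple inputs, 2 average outputs, 1 average inquiry,
# 2 average internal files, 1 simple external interface file:
total = ufp({"EI": {"low": 3}, "EO": {"avg": 2}, "EQ": {"avg": 1},
             "ILF": {"avg": 2}, "EIF": {"low": 1}})
print(total)  # 9 + 10 + 4 + 20 + 5 = 48
```

The UFP total is then typically adjusted by a Value Adjustment Factor derived from the general system characteristics to give the final function point count.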
10). Explain the function of incremental delivery process model with neat
sketch. (APR/MAY2019) (13m)
Answer:
Incremental Model is a process of software development where
requirements are broken down into multiple standalone modules of software
Page 30
development cycle. Incremental development is done in steps from analysis,
design, implementation, testing/verification, through maintenance.
Each iteration passes through the requirements, design, coding and
testing phases, and each subsequent release of the system adds function to
the previous release until all designed functionality has been implemented.
Characteristics of an Incremental model include:
• System development is broken down into many mini development
projects
• Partial systems are successively built to produce a final total system
• Highest priority requirement is tackled first
• Once the requirement is developed, requirements for that increment
are frozen
Incremental Phases – activities performed in the incremental phases
Page 31
As each successive version of the software is constructed and delivered,
the feedback of the customer is taken and incorporated in the next
version. Each version of the software has more additional features than
the previous ones.
Types of Incremental model
1. Staged Delivery Model – Construction of only one part of the project at a
time.
Requirement Analysis – Requirements and specifications of the software are collected
Design – Some high-end functions are designed during this stage
Code – Coding of the software is done during this stage
Test – Once the system is deployed, it goes through the testing phase
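The staged approach above (highest-priority requirement first, increment frozen once planned) can be sketched as a simple partitioning of prioritized requirements into capacity-limited increments; all names, priorities and effort figures are hypothetical:

```python
# Sketch of staged incremental delivery: sort requirements by priority,
# fill each increment up to an effort capacity, then freeze it.
# Requirement names, priorities and effort figures are hypothetical.

def plan_increments(requirements, capacity):
    """requirements: list of (name, priority, effort); higher priority first."""
    ordered = sorted(requirements, key=lambda r: -r[1])
    increments, current, used = [], [], 0
    for name, _, effort in ordered:
        if current and used + effort > capacity:
            increments.append(current)   # freeze this increment
            current, used = [], 0
        current.append(name)
        used += effort
    if current:
        increments.append(current)
    return increments

reqs = [("login", 9, 4), ("reports", 4, 8), ("search", 7, 5), ("export", 2, 3)]
print(plan_increments(reqs, 10))
# [['login', 'search'], ['reports'], ['export']]
```

Each inner list is a frozen mini development project: its requirements do not change once the increment starts, while lower-priority work waits for a later release.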
Page 32
2. Parallel Development Model – Different subsystems are developed at the
same time. It can decrease the calendar time needed for the development,
i.e. TTM (Time to Market), if enough resources are available.
When to use this model:
1. Funding schedule, risk, program complexity, or need for early
realization of benefits.
2. When requirements are known up-front.
3. When projects have lengthy development schedules.
4. Projects with new technology.
Advantages
• Error reduction (core modules are used by the customer from the
beginning of the phase and then these are tested thoroughly)
• Uses divide and conquer for breakdown of tasks.
• Lowers initial delivery cost.
• Incremental resource deployment.
Disadvantages
Page 33
• Requires good planning and design.
• Total cost is not lower.
• Well defined module interfaces are required.
----------------------------------------------UNIT II-----------------------------------------------