Running head: MTEST TEST DESIGN
Test Case MTEST Test Design
Name of the Student
Name of the University
Author Note
Question 1: Real-world example of software failure
Software failures can arise from multiple sources. One is updating a software application without measures in place to test it after the changes are applied, through retesting and regression testing (Harman, Jia & Zhang, 2015). Another important aspect is the security setup of an application, since applications with known vulnerabilities are the first to be targeted and exploited by attackers (Böhme & Paul, 2015). The glitches and problem areas of an application generate undesired output and make the software unreliable.
A software failure resulted in thousands of dropped calls for third-party call centres that are in the business of assigning and directing help calls to the 911 emergency line. According to reports, the software used for tracking and assigning calls contained a counter that topped out at 40 million. The moment the 40 millionth call was placed, all subsequent calls hit a bottleneck, cutting off more than 11 million citizens in more than seven states from the 911 hotline for emergency help calls.
The software in question was Intrado's automated system for assigning uniquely identifiable codes to incoming calls before passing them on, which helps track phone calls moving into and out of the system.
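The failure mode can be illustrated with a short sketch. The code below is a minimal illustration of a call router whose tracking counter has a fixed ceiling; the class, names and error handling are assumptions made for this example and do not reflect Intrado's actual implementation.

```python
# Minimal sketch of the failure mode: a router whose call counter has a
# fixed ceiling. All names and the error behaviour are illustrative
# assumptions, not Intrado's actual design.

CALL_ID_CEILING = 40_000_000  # the reported hard cap on the call counter


class CallRouter:
    def __init__(self):
        self.calls_assigned = 0

    def assign_call(self, caller: str) -> int:
        """Assign a uniquely identifiable tracking code to an incoming call."""
        if self.calls_assigned >= CALL_ID_CEILING:
            # Once the counter tops out, every subsequent call is rejected
            # instead of being passed on to a dispatcher.
            raise RuntimeError("call counter exhausted: call dropped")
        self.calls_assigned += 1
        return self.calls_assigned


router = CallRouter()
router.calls_assigned = CALL_ID_CEILING  # simulate the 40 millionth call having been placed
try:
    router.assign_call("next emergency caller")
except RuntimeError as exc:
    print(exc)  # -> call counter exhausted: call dropped
```

A boundary value test at the counter ceiling, of the kind discussed in Question 3, would have exposed this defect before deployment.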
Question 2: Identifying the test cases according to the scenarios created based on requirements of the software
Equivalence class partitioning, error guessing, negative test cases and boundary value analysis are the techniques used to identify the test cases for MTEST. The list of test cases for the MTEST marking and grading application is given below.
Design of MTEST Test Cases

Test Case# | Scenario Description | Test Case
MTA001 | Validate validity of title record sets | Verify that title records having data are accepted by MTEST
MTA003 | Validate question number (1 - 999) | Verify that an error code is raised when the question count equals 0
MTA005 | Validate question number (1 - 999) | Verify execution of input files when the question count is 1
MTA006 | Validate question number (1 - 999) | Verify execution of input files when the question count is 999
MTA007 | Validate question number (1 - 999) | Verify that an error code is raised when the question count equals 1000
MTA008 | Validate count of record sets in columns 10 - 59 | Verify error for multiple record sets for questions 1 - 50
MTA009 | Validate count of record sets in columns 10 - 59 | Verify presence of a single record between columns 10 and 59 when the count of questions is 1 - 50
MTA010 | Validate count of record sets in columns 10 - 59 | Verify presence of two records between columns 10 and 59 when the count of questions is 51 - 100
MTA011 | Validate count of record sets in columns 10 - 59 | Verify presence of three records between columns 10 and 59 when the count of questions is 101 - 150
MTA012 | Validate student name between columns 1 and 9 of student records | Verify that an error code is raised for non-registered student names in columns 1 - 9 of the student record(s)
MTA013 | Validate column 80 data for question and student records | Verify that column 80 contains 2 for question records and 3 for student records
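Rows from this table can be traced to executable checks once the input format is pinned down. The following is a minimal sketch against a hypothetical validate_record helper and an assumed student registry, following the 80-column layout implied by MTA012 and MTA013 (student name in columns 1 - 9, record type in column 80); it is not MTEST's actual code.

```python
# Hypothetical validator for MTEST's 80-column records, sketched from the
# table above: columns 1-9 hold the student name on student records and
# column 80 holds the record type (2 = question record, 3 = student record).

REGISTERED_STUDENTS = {"JSMITH", "ADOE"}  # assumed registry, for illustration only


def validate_record(record: str) -> list[str]:
    """Return a list of error codes for a single 80-column record."""
    errors = []
    if len(record) != 80:
        errors.append("ERR_LENGTH")
        return errors
    record_type = record[79]  # column 80
    if record_type not in ("2", "3"):
        errors.append("ERR_RECORD_TYPE")  # exercised by MTA013
    if record_type == "3":
        name = record[0:9].strip()  # columns 1-9 of a student record
        if name not in REGISTERED_STUDENTS:
            errors.append("ERR_UNREGISTERED_STUDENT")  # exercised by MTA012
    return errors


# MTA013: column 80 must be 2 (question record) or 3 (student record)
assert validate_record("QDATA".ljust(79) + "5") == ["ERR_RECORD_TYPE"]
# MTA012: a non-registered name in columns 1-9 of a student record fails
assert "ERR_UNREGISTERED_STUDENT" in validate_record("UNKNOWN  ".ljust(79) + "3")
# A registered student record passes
assert validate_record("JSMITH   ".ljust(79) + "3") == []
```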
Question 3: Boundary Value Analysis (BVA) test cases of MTEST
BVA, or boundary value analysis, is a software testing technique in which the software is tested using values at the extreme ends of the equivalence class partitions, that is, the boundary values derived from the specified requirements (Ruzhansky & Tokmagambetov, 2017). These tests thus indicate the reliability of a given piece of software.
In the case of the marking application MTEST, the boundary values based on the above definition are identified to be 0, 1, 999 and 1000 (Tarasov et al., 2016). The test cases for boundary value analysis of MTEST are:
Verify that an error code is raised when the question count equals 0
Verify that an error code is raised when the question count equals 1000
Verify execution of input files when the question count is 1
Verify execution of input files when the question count is 999
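These four cases map directly onto a parametrised test. The sketch below assumes a hypothetical validate_question_count function standing in for MTEST's actual input check on the question count:

```python
# Boundary value analysis for the question count (valid range 1-999).
# validate_question_count is a hypothetical stand-in for MTEST's real check.
import pytest


def validate_question_count(count: int) -> bool:
    """Accept counts in the valid range 1-999, reject everything else."""
    return 1 <= count <= 999


@pytest.mark.parametrize("count,expected", [
    (0, False),     # just below the lower boundary: error code expected
    (1, True),      # lower boundary: input file should execute
    (999, True),    # upper boundary: input file should execute
    (1000, False),  # just above the upper boundary: error code expected
])
def test_question_count_boundaries(count, expected):
    assert validate_question_count(count) is expected
```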
Question 4: Test cases based on the error guessing testing technique for the MTEST marking and grading application
Error guessing is a type of software testing in which test cases are designed according to the guesses made by test analysts about when and where the concerned application may generate erroneous data (Matalonga, Rodrigues & Travassos, 2015). In this process the testing experience of the testers comes into play, with the analysts relying on their ability to anticipate defects from prior experience in order to identify the problematic sections of a software product.
Test cases designed using this technique for the application MTEST are:
Verify error for multiple record sets for questions 1 - 50
Verify that an error code is raised for non-registered student names in columns 1 - 9 of the student record(s)
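These two guesses can likewise be sketched as checks. The helpers below are hypothetical stand-ins, built on the assumption (from the table above) that each answer record holds 50 answers:

```python
# Error-guessing checks against hypothetical helpers. The guessed faults are
# that MTEST mishandles surplus answer records for small exams (MTA008) and
# accepts student names that are not in the registry (MTA012).


def expected_record_count(question_count: int) -> int:
    """Each answer record holds 50 answers, so 1-50 questions need 1 record."""
    return (question_count + 49) // 50


def check_answer_records(question_count: int, records_supplied: int) -> bool:
    return records_supplied == expected_record_count(question_count)


def is_registered(name: str, registry: set[str]) -> bool:
    return name.strip() in registry


# Guess: a tester supplies two answer records for a 30-question exam.
assert check_answer_records(30, 2) is False  # error code expected (MTA008)
assert check_answer_records(30, 1) is True

# Guess: a student name outside the registry slips through.
assert is_registered("GHOST    ", {"JSMITH", "ADOE"}) is False  # error code expected (MTA012)
```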
References
Böhme, M., & Paul, S. (2015). A probabilistic analysis of the efficiency of automated
software testing. IEEE Transactions on Software Engineering, 42(4), 345-360.
Garousi, V., & Mäntylä, M. V. (2016). When and what to automate in software testing? A
multi-vocal literature review. Information and Software Technology, 76, 92-117.
Harman, M., Jia, Y., & Zhang, Y. (2015, April). Achievements, open problems and
challenges for search based software testing. In 2015 IEEE 8th International
Conference on Software Testing, Verification and Validation (ICST) (pp. 1-12). IEEE.
Matalonga, S., Rodrigues, F., & Travassos, G. H. (2015, June). Matching context aware
software testing design techniques to ISO/IEC/IEEE 29119. In International
Conference on Software Process Improvement and Capability Determination (pp. 33-
44). Springer, Cham.
Ruzhansky, M., & Tokmagambetov, N. (2017). Nonharmonic analysis of boundary value
problems without WZ condition. Mathematical Modelling of Natural
Phenomena, 12(1), 115-140.
Tarasov, V., Tan, H., Ismail, M., Adlemo, A., & Johansson, M. (2016). Application of
inference rules to a software requirements ontology to generate software test cases.
In OWL: Experiences and Directions–Reasoner Evaluation (pp. 82-94). Springer,
Cham.