This document discusses the test case design for the MTEST marking and grading application, including a real-world example of software failure, the identification of test cases from software requirements, boundary value analysis, and the error guessing testing technique.
MTEST Test Design
Name of the Student
Name of the University
Author Note
Question 1: Real-world example of software failure

Software failures can arise from multiple sources. One is updating an application without measures in place, such as retesting and regression testing, to verify the application after the changes are applied (Harman, Jia & Zhang, 2015). Another is the security setup of an application: software with known vulnerabilities is targeted and exploited first by attackers (Böhme & Paul, 2015). Glitches and problem areas in an application generate undesired output and make the software unreliable.

One such software failure caused thousands of dropped calls for the third-party call centres that assign and direct help calls to the 911 emergency line. According to the reports, the software used to track and assign calls contained a counter that topped out at 40 million. The moment the 40-millionth call was placed, all subsequent calls hit a bottleneck, cutting off more than 11 million citizens across seven states from the 911 emergency hotline. The software in question was Intrado's automated system, which assigns a uniquely identifiable code to each incoming call before passing it on, which is what allows calls to be tracked as they move in and out of the system.
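The reported failure mode, a tracking counter with a hard upper bound, can be illustrated with a short sketch. The CallRouter class and the MAX_CALL_ID constant below are hypothetical stand-ins written for this document, a minimal sketch of the defect rather than Intrado's actual code.

```python
# A minimal sketch (not Intrado's actual code) of how a hard ceiling on a
# tracking counter can silently take down a call-routing system.

MAX_CALL_ID = 40_000_000  # the ceiling reported in the 911 incident


class CallRouter:
    """Assigns a uniquely identifiable code to every incoming call."""

    def __init__(self) -> None:
        self.next_id = 0

    def assign_code(self) -> int:
        if self.next_id >= MAX_CALL_ID:
            # The defect: once the ceiling is hit, no code is ever issued
            # again, so every later call is dropped instead of routed.
            raise RuntimeError("call dropped: tracking counter exhausted")
        self.next_id += 1
        return self.next_id
```

Once `assign_code` starts raising, every subsequent call fails the same way, which matches the reported pattern of all calls after the 40-millionth being dropped.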
Question 2: Identifying test cases from scenarios based on the requirements of the software

Equivalence class partitioning, error guessing, negative testing and boundary value analysis are the techniques used to identify the test cases for MTEST. The list of test cases for the MTEST marking and grading application is given below.

Design of MTEST Test Cases

Test Case# | Scenario | Test Case
MTA001 | Validate validity of title record sets | Verify that title records containing data are accepted by MTEST
MTA003 | Validate question number (1 - 999) | Verify that an error code is present when the question count equals 0
MTA005 | Validate question number (1 - 999) | Verify execution of input files when the question count is 1
MTA006 | Validate question number (1 - 999) | Verify execution of input files when the question count is 999
MTA007 | Validate question number (1 - 999) | Verify that an error code is present when the question count equals 1000
MTA008 | Validate count of record sets in columns (10 - 59) | Verify an error for multiple record sets for questions 1 - 50
MTA009 | Validate count of record sets in columns (10 - 59) | Verify the presence of a single record between columns 10 and 59 when the count of questions is 1 - 50
MTA010 | Validate count of record sets in columns (10 - 59) | Verify the presence of two records between columns 10 and 59 when the count of questions is 51 - 100
MTA011 | Validate count of record sets in columns (10 - 59) | Verify the presence of three records between columns 10 and 59 when the count of questions is 101 - 150
MTA012 | Validate student name between columns 1 and 9 of student records | Verify that an error code is present for non-registered student names in columns (1 - 9) of the student record(s)
MTA013 | Validate column 80 data for question and student records | Verify that column 80 contains 2 for question records and 3 for student records
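As one illustration of how such cases can be automated, the sketch below encodes test case MTA013. The record_type function and the fixed-width handling are assumptions made for this example; the requirement itself only states that column 80 holds 2 for question records and 3 for student records.

```python
# A hypothetical check behind test case MTA013: column 80 of every
# 80-column record must contain 2 for question records and 3 for
# student records. record_type is not MTEST's actual code.

def record_type(record: str) -> str:
    """Classify an 80-column MTEST record by the type code in column 80."""
    if len(record) != 80:
        raise ValueError("MTEST records must be exactly 80 columns wide")
    code = record[79]  # column 80 in 1-based column numbering
    if code == "2":
        return "question"
    if code == "3":
        return "student"
    raise ValueError(f"unknown record type code: {code!r}")


def test_mta013_column_80_codes() -> None:
    question_record = "x" * 79 + "2"                     # code 2 in column 80
    student_record = "JONES".ljust(9) + "x" * 70 + "3"   # name in columns 1-9
    assert record_type(question_record) == "question"
    assert record_type(student_record) == "student"
```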
Question 3: Boundary Value Analysis (BVA) test cases for MTEST

Boundary value analysis (BVA) is the software testing technique in which the software is tested using the values at the extreme ends of the equivalence class partitions, that is, the boundary values derived from the specified requirements (Ruzhansky & Tokmagambetov, 2017). Such tests therefore indicate the reliability of the software.

For the MTEST marking application, the boundary values identified from the definition above are 0, 1, 999 and 1000 (Tarasov et al., 2016). The BVA test cases for MTEST, exercised in the sketch following this list, are:

• Verify that an error code is present when the question count equals 0
• Verify that an error code is present when the question count equals 1000
• Verify execution of input files when the question count is 1
• Verify execution of input files when the question count is 999
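A minimal pytest sketch of these four boundary cases is given below. validate_question_count is a hypothetical stand-in for MTEST's input validation, assumed here to raise ValueError for counts outside the specified 1 - 999 range.

```python
import pytest

# Boundary value sketch for the four cases above; validate_question_count
# is an assumed stand-in, not MTEST's actual validation routine.


def validate_question_count(count: int) -> int:
    if not 1 <= count <= 999:
        raise ValueError("question count must lie between 1 and 999")
    return count


@pytest.mark.parametrize("count", [1, 999])    # values just inside the range
def test_boundary_values_accepted(count):
    assert validate_question_count(count) == count


@pytest.mark.parametrize("count", [0, 1000])   # values just outside the range
def test_boundary_values_rejected(count):
    with pytest.raises(ValueError):
        validate_question_count(count)
```

Testing exactly at 0, 1, 999 and 1000 is what distinguishes BVA from picking arbitrary representatives of the valid and invalid partitions.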
Question 4: Test cases based on the error guessing technique for the MTEST marking and grading application

Error guessing is the type of software testing in which test cases are designed from the guesses made by test analysts about when and where the application is likely to generate erroneous data (Matalonga, Rodrigues & Travassos, 2015). Here the testers' experience comes into play: the analysts rely on their ability to anticipate defects, based on prior experience, in order to identify the problematic sections of the software. The test cases designed for MTEST using this technique, checked in the sketch below, are:

• Verify an error for multiple record sets for questions 1 - 50
• Verify that an error code is present for non-registered student names in columns (1 - 9) of the student record(s)
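The sketch below shows how these two guessed faults might be checked. The REGISTERED_STUDENTS roster and both function names are illustrative assumptions; the one-record rule for 1 - 50 questions follows test cases MTA008 and MTA009 above.

```python
# Error guessing sketches for the two cases above. The roster and the
# helper functions are assumptions made for illustration only.

REGISTERED_STUDENTS = {"JONES", "SMITH"}  # hypothetical registration roster


def check_student_name(record: str) -> str:
    """Reject student records whose name (columns 1-9) is not registered."""
    name = record[:9].strip()
    if name not in REGISTERED_STUDENTS:
        raise ValueError(f"non-registered student name: {name!r}")
    return name


def check_record_sets(question_count: int, record_sets: int) -> None:
    """Guessed fault: extra answer-record sets accepted for 1-50 questions."""
    if 1 <= question_count <= 50 and record_sets != 1:
        raise ValueError("exactly one record set is expected for 1-50 questions")
```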
References

Böhme, M., & Paul, S. (2015). A probabilistic analysis of the efficiency of automated software testing. IEEE Transactions on Software Engineering, 42(4), 345-360.

Garousi, V., & Mäntylä, M. V. (2016). When and what to automate in software testing? A multi-vocal literature review. Information and Software Technology, 76, 92-117.

Harman, M., Jia, Y., & Zhang, Y. (2015, April). Achievements, open problems and challenges for search based software testing. In 2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST) (pp. 1-12). IEEE.

Matalonga, S., Rodrigues, F., & Travassos, G. H. (2015, June). Matching context aware software testing design techniques to ISO/IEC/IEEE 29119. In International Conference on Software Process Improvement and Capability Determination (pp. 33-44). Springer, Cham.

Ruzhansky, M., & Tokmagambetov, N. (2017). Nonharmonic analysis of boundary value problems without WZ condition. Mathematical Modelling of Natural Phenomena, 12(1), 115-140.

Tarasov, V., Tan, H., Ismail, M., Adlemo, A., & Johansson, M. (2016). Application of inference rules to a software requirements ontology to generate software test cases. In OWL: Experiences and Directions – Reasoner Evaluation (pp. 82-94). Springer, Cham.