Design of Test Cases on MTEST Test
Name of the Student
Name of the University
Author Note
Question 1 - Real-world software failure example
In 2014, bar exam candidates across the United States went through a nightmarish experience while trying to upload their exam files, generating large-scale panic over the states' upload deadlines. The problem was caused by a failure in the Examplify exam software built by ExamSoft.
From the very start of the bar exams, exam takers had to endure large-scale downtime of the ExamSoft software's Examplify upload portals, which delayed the law exams across several jurisdictions (Casey, Casey and Griffin 2018). Many exam takers were completely prevented from submitting any of the work they had done. The whole fiasco arose from ExamSoft's release of a set of updates to the application just a few days ahead of the law exams.
To prevent such software failures from negatively affecting academic institutions, exam administration and the activities of businesses, it is essential to test software before any update that makes major changes to the application goes live. Therefore, the grading and marking application MTEST, which attempts to automate the grading of student answers, must be tested against the requirements that have been specified.
Question 2a - Identifying Test Cases for MTEST
To design test cases for the MTEST grading and marking application, a range of software testing techniques has been employed (Abdelfattah et al. 2016). These techniques include equivalence class partitioning, boundary value testing, negative test cases and error guessing, applied according to the parameters and requirements of the MTEST application (Bjarnason et al. 2015). The identified test cases are listed below; a sketch of how the first two might be automated follows the list:
1. Verify that valid title records are accepted
2. Verify that title records contain data within columns 1 to 80
3. Verify that an error message is shown when the value in the 'No. of questions' column is '0'
4. Verify that an error message is shown for the value '1000' in the 'No. of questions' column
5. Verify that the value '1' in the 'No. of questions' column is accepted
6. Verify that the value '999' in the 'No. of questions' column is accepted
7. Verify that the first record is present for single-question exams
8. Verify that an error message is shown for an empty question set
9. Verify that a third record exists for 101 to 150 questions
10. Verify that a second record exists for 51 to 100 questions
11. Verify successful grading for existing student data in columns 1 to 9
12. Verify that an error message is shown for empty columns 1 to 9
13. Verify that an error message is shown for values other than '2' and '3' in column 80 of record sets
14. Verify that an error message is shown for a value other than '2' in column 80 for exam questions
15. Verify that an error message is shown for a value other than '3' in column 80 for student answers.
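Since the assignment brief does not define a programmatic interface for MTEST, the following is only a minimal sketch of how test cases 1 and 2 might be automated with pytest; the validate_title_record function is a hypothetical stand-in that encodes the 80-column rule rather than MTEST's real validator.

```python
def validate_title_record(record: str) -> str:
    """Hypothetical stand-in for MTEST's title-record check.

    A title record must carry its data within columns 1 to 80.
    Returns 'OK' on acceptance, otherwise an error message.
    """
    if not record.strip():
        return "ERROR: title record is empty"
    if len(record) > 80:
        return "ERROR: title record data must fit within columns 1 to 80"
    return "OK"


def test_valid_title_record_is_accepted():        # test case 1
    assert validate_title_record("Midterm Exam, Spring Term") == "OK"


def test_title_record_limited_to_80_columns():    # test case 2
    assert validate_title_record("X" * 81).startswith("ERROR")
```

Running pytest against this file would execute both checks; in practice the stand-in would be replaced by a call into MTEST itself.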
Question 2b - Test Cases based on Boundary Value Analysis
Boundary value analysis is a software testing technique in which test cases are designed around the 'boundary values' of a specified input range (Hackbusch 2015). The concept builds on the idea that the values lying at the extreme ends of a range, the boundary values or boundary test vectors, are the ones most likely to expose defects.
Test cases based on the boundary value testing technique for the grading and marking application MTEST might include the following scenarios: empty input files, records with missing titles, a single-question exam, a 50-question exam, a 51-question exam, a 999-question exam and a 0-question exam. The boundary value test vectors for such cases are 0, 50, 51, 100, 101 and so on up to 999 (Ahmed, Abdulsamad and Potrus 2015). These vectors arise because MTEST groups questions into records of 50 questions each and supports exams of up to 999 questions in total.
The test cases following the boundary value analysis technique are listed below, with a sketch of how they might be parameterized after the list:
Verify that an error message is shown when the value in the 'No. of questions' column is '0'
Verify that an error message is shown for the value '1000' in the 'No. of questions' column
Verify that the value '1' in the 'No. of questions' column is accepted
Verify that the value '999' in the 'No. of questions' column is accepted
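A minimal sketch of these four boundary checks as parameterized pytest tests follows, together with the record boundaries at 50, 51 and 101. The accepts_question_count and answer_records_needed helpers are assumptions standing in for MTEST's real behaviour; only the valid range of 1 to 999 questions and the 50-question record size come from the requirements.

```python
import math

import pytest

MIN_QUESTIONS, MAX_QUESTIONS = 1, 999   # valid range per the requirements
QUESTIONS_PER_RECORD = 50               # questions grouped per record


def accepts_question_count(n: int) -> bool:
    """Stand-in for MTEST's check on the 'No. of questions' column."""
    return MIN_QUESTIONS <= n <= MAX_QUESTIONS


def answer_records_needed(n: int) -> int:
    """Number of 50-question records an n-question exam occupies."""
    return math.ceil(n / QUESTIONS_PER_RECORD)


@pytest.mark.parametrize("n, expected", [
    (0, False),     # just below the lower boundary -> error message
    (1, True),      # lower boundary -> accepted
    (999, True),    # upper boundary -> accepted
    (1000, False),  # just above the upper boundary -> error message
])
def test_no_of_questions_boundaries(n, expected):
    assert accepts_question_count(n) == expected


@pytest.mark.parametrize("n, records", [
    (50, 1),    # a 50-question exam fits in one record
    (51, 2),    # 51 questions spill into a second record
    (101, 3),   # 101 to 150 questions require a third record
])
def test_record_count_boundaries(n, records):
    assert answer_records_needed(n) == records
```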
Question 2c – Test Cases based on the Error Guessing technique
The error guessing technique is about anticipating the errors an application is likely to exhibit when executing a particular functionality. In tests like these, the most error-prone parts of the application are identified first, and the tester then designs test cases to expose the respective errors (Matalonga, Rodrigues and Travassos 2015). Here the tester relies heavily on previous experience, individual skill and intuition. Test cases based upon the error guessing technique are as follows, with a sketch of how they might be checked after the list -
Verify that an error message is shown for values other than '2' and '3' in column 80 of record sets
Verify that an error message is shown for a value other than '2' in column 80 for exam questions
Verify that an error message is shown for a value other than '3' in column 80 for student answers.
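These guesses all target column 80, which the requirements use as a record-type flag ('2' for exam questions, '3' for student answers). A minimal sketch follows; the classify_record helper is hypothetical and simply encodes that flag convention so the error-guessing inputs can be exercised.

```python
def classify_record(record: str) -> str:
    """Hypothetical stand-in that classifies a record by column 80
    (string index 79) and reports an error for any other value."""
    if len(record) != 80:
        return "ERROR: record must be exactly 80 columns"
    flag = record[79]
    if flag == "2":
        return "exam question"
    if flag == "3":
        return "student answer"
    return f"ERROR: unexpected value {flag!r} in column 80"


# Error-guessing inputs a tester might try against the column-80 rule:
assert classify_record("Q1 What is 2 + 2?".ljust(79) + "2") == "exam question"
assert classify_record("A1 4".ljust(79) + "3") == "student answer"
assert classify_record("bad flag".ljust(79) + "9").startswith("ERROR")  # bad flag
assert classify_record("too short").startswith("ERROR")                 # wrong length
```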
References
Abdelfattah, A., Haidar, A., Tomov, S. and Dongarra, J., 2016, June. Performance, design,
and autotuning of batched GEMM for GPUs. In International Conference on High
Performance Computing (pp. 21-38). Springer, Cham.
Ahmed, B.S., Abdulsamad, T.S. and Potrus, M.Y., 2015. Achievement of minimized
combinatorial test suite for configuration-aware software functional testing using the cuckoo
search algorithm. Information and Software Technology, 66, pp.13-29.
Bjarnason, E., Unterkalmsteiner, M., Engström, E. and Borg, M., 2015, May. An industrial
case study on test cases as requirements. In International Conference on Agile Software
Development (pp. 27-39). Springer, Cham.
Casey, K., Casey, M. and Griffin, K., 2018. Academic integrity in the online environment: Teaching strategies and software that encourage ethical behavior. Institute for Global Business Research, Nashville, TN, USA, p.58.
Hackbusch, W., 2015. Hierarchical matrices: algorithms and analysis (Vol. 49). Heidelberg:
Springer.
Matalonga, S., Rodrigues, F. and Travassos, G.H., 2015, June. Matching context aware
software testing design techniques to ISO/IEC/IEEE 29119. In International Conference on
Software Process Improvement and Capability Determination (pp. 33-44). Springer, Cham.