Software Testing Report: Analysis of Testing Techniques and Results

Test Report
The test results clearly show that the structural test generation technique used (a combination of the Branch and Condition/Decision techniques) was not sufficient to identify all the defects. In
fact, it helped find only a single ambiguous error, which had to be tested further to understand its
impact. The chart (Appendix A) shows the number of bugs found with each technique used.
Considering these findings, I would suggest that structural testing is only useful when the
actual (final) source code is available. While the technique makes full code coverage easy to achieve,
it does not easily allow the tester to identify and understand edge cases in the control structures,
nor does it guarantee the absence of defects (Yang et al. 2006). However, while the test
cases show that this technique is not particularly good at finding unexpected bugs outside the
requirement scope, it is arguably the best technique for checking whether a program conforms to its
specification.
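To illustrate what branch plus condition/decision coverage asks of a test set, the sketch below uses a hypothetical validate_age() function written for this report; the function, its limits, and the pytest-style tests are illustrative assumptions, not the program that was actually graded.

```python
# Minimal sketch of branch and condition/decision coverage (pytest-style).
# validate_age() and its 18..65 range are assumptions for illustration only.

def validate_age(age):
    """Accept ages in the inclusive range 18..65."""
    if age < 18 or age > 65:
        return "rejected"
    return "accepted"

def test_both_conditions_false():
    # age < 18 is False and age > 65 is False -> decision False (else branch)
    assert validate_age(30) == "accepted"

def test_first_condition_true():
    # age < 18 is True -> decision True (if branch)
    assert validate_age(10) == "rejected"

def test_second_condition_true():
    # age > 65 is True -> decision True through the other condition
    assert validate_age(80) == "rejected"
```

Together these three tests take every branch and drive each individual condition, as well as the overall decision, to both true and false, which is the coverage target the combined technique aims for.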
The Error Guessing technique proved to be much more effective in this task, mainly because it
leveraged human intuition and experience. The goal of the tests generated with this technique
was to probe flaws in the program requirements/documentation and the program's logic in extreme cases. While
the success of this technique in this particular case is more of an exception, or luck, as
Kuckis (2013) would put it, I would argue that when time is not a constraint, this technique
makes it possible to generate very creative tests that are quite useful to have in the test suite.
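The sketch below shows the kind of tests error guessing tends to produce, using a hypothetical parse_quantity() function; both the function and the guessed inputs (empty input, whitespace, negative values, very large values) are assumptions drawn from common failure patterns, not the tests from this assignment.

```python
# Minimal sketch of error-guessing tests (pytest-style).
# parse_quantity() is a hypothetical function used for illustration only.

import pytest

def parse_quantity(text):
    """Parse a positive integer quantity from user input."""
    value = int(text.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

def test_empty_string_is_rejected():
    with pytest.raises(ValueError):
        parse_quantity("")

def test_whitespace_only_is_rejected():
    with pytest.raises(ValueError):
        parse_quantity("   ")

def test_negative_number_is_rejected():
    with pytest.raises(ValueError):
        parse_quantity("-5")

def test_very_large_number_still_parses():
    # Python integers are unbounded, so an "overflow" guess should still work.
    assert parse_quantity("99999999999999999999") == 99999999999999999999
```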
Lastly, Boundary Value Analysis proved to be satisfactory in testing edge input cases. By executing
just three tests, it made it possible to find a new defect that was not identified with any other
technique. The technique is very easy to apply and its error-finding capacity is high (Müller
2005). I would say that boundary value analysis is very important in any test suite and
should probably be used as the first option when generating tests.
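A three-test boundary value set of the kind described above is sketched below for a hypothetical apply_discount() function with an assumed limit of 100 units; the function and the boundary are illustrative, not taken from the tested program.

```python
# Minimal sketch of boundary value analysis (pytest-style) around an assumed
# upper limit of 100; apply_discount() is a hypothetical example function.

def apply_discount(quantity):
    """Orders of more than 100 units receive a 10% discount."""
    return 0.10 if quantity > 100 else 0.0

def test_just_below_boundary():
    assert apply_discount(99) == 0.0

def test_on_boundary():
    assert apply_discount(100) == 0.0

def test_just_above_boundary():
    assert apply_discount(101) == 0.10
```

Testing just below, on, and just above the boundary is what keeps the technique cheap while still catching off-by-one defects such as using > where >= was intended.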
In conclusion, I am very disappointed in how unsuccessful structural testing was in
this task. However, my understanding is that, given the circumstances, it was pure luck
that error guessing performed much better. The effectiveness of Boundary Value Analysis, on the
other hand, was predictable, and I am completely satisfied with its results.
Word Count: 376
References
Kuckis, T. 2013. Error Guessing. Available at: www.elen.ktu.lt/studentai/lib/exe/fetch.php?
media=error_guessing.ppt (Accessed: 08 December 2015)
Müller, T., et al. 2005. Certified Tester Foundation Level Syllabus. Available at:
http://www.bcs.org/upload/pdf/istqbsyll.pdf (Accessed: 08 December 2015)
Yang, Q., Li, J. J. and Weiss, D. 2006. A survey of coverage based testing tools. In: AST '06:
Proceedings of the 2006 International Workshop on Automation of Software Test. New York, NY, USA:
ACM, pp. 99–103.