Saturday, February 2, 2008

Test Cases, Suites, Scripts, Scenarios

A test case is a software testing document that consists of an event, action, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as "for condition x, your derived result is y", whereas other test cases describe the input scenario and the expected results in more detail. A test case can occasionally be a series of steps (though often the steps are kept in a separate test procedure that can be exercised against multiple test cases, as a matter of economy), but it has one expected result or expected outcome.

Optional fields include a test case ID, a test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result.

These records can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated them, and the system configuration used to generate them. These past results would usually be stored in a separate table.
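The fields described above can be sketched as a simple record type. This is an illustrative sketch, not the schema of any particular test management tool; all field names here are assumptions.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class TestCase:
    case_id: str                         # test case ID
    requirement: str                     # related requirement(s)
    input_data: Any                      # the input ("condition x")
    expected_result: Any                 # the expected result ("derived result y")
    actual_result: Optional[Any] = None  # filled in when the test is run
    automatable: bool = False            # could this test be automated?
    automated: bool = False              # has it been automated?

    def passed(self) -> bool:
        """A case passes when the actual result matches the expected one."""
        return self.actual_result == self.expected_result

# Record a run: the system under test produced 4 for input 2.
tc = TestCase("TC-001", "REQ-42", input_data=2, expected_result=4)
tc.actual_result = 2 * 2
print(tc.passed())   # True
```

In a real repository these records would live in a spreadsheet or database table rather than in code, with past results kept in a separate results table as noted above.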

A test script is the combination of a test case, a test procedure, and test data. Initially the term referred to the work products created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.
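The three ingredients of a test script can be seen in a minimal automated sketch: a procedure (the steps to execute), test data (the rows to feed it), and the per-row expected results that come from the test cases. The doubling function here is a hypothetical stand-in for the system under test.

```python
def procedure(x):
    """Stand-in for the system under test: doubles its input."""
    return x * 2

# Test data: (input, expected result) pairs drawn from the test cases.
test_data = [(1, 2), (3, 6), (5, 10)]

for given, expected in test_data:
    actual = procedure(given)   # execute the procedure with this data row
    status = "PASS" if actual == expected else "FAIL"
    print(f"input={given} expected={expected} actual={actual} -> {status}")
```

The same structure works for a manual script: the procedure becomes written steps, and the data table tells the tester which values to enter on each pass.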

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, and a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the tests that follow.
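As one concrete illustration, Python's standard `unittest` library makes the case/suite relationship explicit: individual test methods are the cases, and a `TestSuite` groups them so they can be run together.

```python
import unittest

class ArithmeticTests(unittest.TestCase):
    """Each test method is an individual test case."""

    def test_addition(self):
        self.assertEqual(2 + 2, 4)

    def test_multiplication(self):
        self.assertEqual(3 * 3, 9)

# A suite is a collection of test cases run as a unit.
suite = unittest.TestSuite()
suite.addTest(ArithmeticTests("test_addition"))
suite.addTest(ArithmeticTests("test_multiplication"))

runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(suite)
print(result.wasSuccessful())   # True
```

Framework details vary (JUnit, NUnit, and others have equivalent suite concepts), but the grouping idea is the same.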

Collections of test cases are sometimes incorrectly termed a test plan. They might correctly be called a test specification. If a sequence is specified, it can be called a test script, scenario, or procedure.

Developers are well aware of which test plans will be executed, because this information is made available to them. This makes developers more cautious when writing their code and ensures that their code is not subjected to any surprise test cases or test plans.

Software Testing Methods

White box and black box testing are terms used to describe the point of view that a test engineer takes when designing test cases.


Black box testing treats the software as a black box, without any knowledge of how the internals behave. It aims to test the functionality according to the requirements. Thus, the tester inputs data and only sees the output from the test object. This level of testing usually requires thorough test cases to be provided to the tester, who can then simply verify that for a given input, the output value (or behaviour) is the same as the expected value specified in the test case.
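A black-box check can be sketched as follows. The cases pair inputs with expected outputs taken from the requirements, and make no reference to how the function under test is implemented; the leap-year function here is just a stand-in for the black box.

```python
def leap_year(year):
    """The "black box" under test; the tester never reads this body."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# (input, expected output) pairs derived purely from the requirements.
cases = [(2000, True), (1900, False), (2004, True), (2001, False)]

for year, expected in cases:
    assert leap_year(year) == expected, f"failed for {year}"
print("all black-box cases passed")
```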

White box testing, however, is when the tester has access to the internal data structures, code, and algorithms. For this reason, unit testing and debugging can be classified as white-box testing and it usually requires writing code, or at a minimum, stepping through it, and thus requires more knowledge of the product than the black-box tester. If the software in test is an interface or API of any sort, white-box testing is almost always required.
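The contrast with the black-box sketch is where the test cases come from: in a white-box approach, the tester reads the source and chooses inputs so that every branch is exercised at least once. A minimal sketch, with a hypothetical function under test:

```python
def classify(n):
    """Function under test; the tester designs cases from these branches."""
    if n < 0:
        return "negative"   # branch 1
    elif n == 0:
        return "zero"       # branch 2
    else:
        return "positive"   # branch 3

# One input per branch, chosen by inspecting the code above.
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
print("all three branches covered")
```

Coverage tools automate the bookkeeping, reporting which branches a test run actually reached.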

In recent years the term grey box testing has come into common usage. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level. Manipulating input data and formatting output do not qualify as grey-box because the input and output are clearly outside of the black-box we are calling the software under test. This is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test.

Grey box testing could be used in the context of testing a client-server environment: the tester controls the input, inspects the value in a SQL database and the output value, and then compares all three (the input, the SQL value, and the output) to determine whether the data was corrupted during database insertion or retrieval. (Wikipedia)
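That three-way comparison can be sketched with an in-memory SQLite database standing in for the server-side store; the table and column names are illustrative only.

```python
import sqlite3

# Set up a throwaway database standing in for the server side.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

original = ("alice", 100)   # controlled input
conn.execute("INSERT INTO accounts VALUES (?, ?)", original)

# Grey-box step: peek directly at the value stored in the database.
stored = conn.execute("SELECT name, balance FROM accounts").fetchone()

# Output as the client would retrieve it (here, the same query path).
retrieved = stored

# Data survived insertion and retrieval only if all three agree.
print(original == stored == retrieved)   # True
conn.close()
```

The black-box tester would only see `original` and `retrieved`; it is the direct inspection of `stored` that makes this grey box.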

Measuring Software Testing

Usually, quality is constrained to topics such as correctness, completeness, and security, but it can also include more technical requirements as described in the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. For banking systems in particular, quality is also constrained by each country's banking regulations.


Software Testing Measurements

There are a number of common software measures, often called "metrics", which are used to measure the state of the software or the adequacy of the testing:

  • Bugs found per tester per unit time (day/week/month)
  • Total bugs found in a release
  • Total bugs found in a module / feature
  • Bugs found / fixed per build
  • Number of customer-reported bugs, as a measure of testing effectiveness
  • Bug trend over the period of a release (bugs should converge towards zero as the project gets closer to release; if more cosmetic bugs are found closer to release, the number of critical bugs found is used instead of the total number of bugs found)
  • Number of test cases executed per person per unit time
  • % of test cases executed so far, total passed, total failed
  • Test coverage
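A few of the metrics above are simple ratios. The following sketch computes pass/fail percentages and bugs per tester per week from made-up numbers (not real project data):

```python
# Hypothetical counts for one test cycle.
executed, passed, failed = 200, 170, 30
bugs_found, testers, weeks = 45, 3, 5

pct_pass = 100 * passed / executed          # % of executed cases that passed
pct_fail = 100 * failed / executed          # % of executed cases that failed
bugs_per_tester_week = bugs_found / (testers * weeks)

print(f"pass rate: {pct_pass:.1f}%")                      # 85.0%
print(f"fail rate: {pct_fail:.1f}%")                      # 15.0%
print(f"bugs/tester/week: {bugs_per_tester_week:.1f}")    # 3.0
```

Trend metrics such as bug convergence are the same arithmetic applied per build or per week and plotted over time.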

How Do Software Defects Arise?

The International Software Testing Qualifications Board says that software faults occur through the following process:

A human being can make an error (mistake), which produces a defect (fault, bug) in the code, in software or a system, or in a document. If a defect in code is executed, the system will fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.

A fault can also turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data or interacting with different software. (Wikipedia)
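The defect/failure distinction, and the role of changed source data, can be seen in a small sketch. The function below contains a defect (a missing guard) from day one, but it only turns into a failure when the environment changes and an empty batch of data arrives; the scenario is invented for illustration.

```python
def average_price(prices):
    # Defect: no guard against an empty list (latent from the start).
    return sum(prices) / len(prices)

# With the original data source, the defect is never executed as a failure.
print(average_price([10, 20, 30]))   # 20.0

# A changed environment: the new data feed can deliver an empty batch.
try:
    average_price([])
except ZeroDivisionError:
    print("latent defect became a failure when the source data changed")
```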


A problem with software testing is that testing all combinations of inputs and preconditions is not feasible when testing anything other than a simple product. This means that the number of defects in a software product can be very large and defects that occur infrequently are difficult to find in testing.
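The scale of the problem is easy to quantify. Even a function taking just two 32-bit integer inputs has more input combinations than could ever be executed; the throughput figure below is a deliberately optimistic assumption.

```python
# Input space of a function taking two 32-bit integers.
combinations = (2 ** 32) ** 2          # every possible pair = 2**64

tests_per_second = 1_000_000_000       # optimistic: a billion tests/second
seconds_per_year = 60 * 60 * 24 * 365

years = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations:.3e} input combinations")
print(f"~{years:.0f} years to test them all at a billion tests per second")
```

Adding a third parameter or any precondition state multiplies the space again, which is why test design focuses on selecting representative cases rather than exhausting the input space.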

More significantly, parafunctional dimensions of quality (for example, usability, scalability, performance, compatibility, and reliability) can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

Software Testing VS Debugging

Informally, software testing (or just "testing") is the process of uncovering evidence of defects in a software system. A defect can be introduced during any phase of the software development life cycle (SDLC) or maintenance, and results from one or more bugs: mistakes, misunderstandings, omissions, or even misguided intent on the part of developers.

Testing comprises the efforts to find defects. It does not include the efforts associated with tracking down bugs and fixing them; in other words, testing does not include debugging or the repair of bugs.

Testing is important because it substantially contributes to ensuring that a software application does everything it is supposed to do. Some testing efforts extend the focus to ensure that an application does nothing more than it is supposed to do. In many cases, testing makes a significant contribution to guarding users against software failures that can result in a loss of time, property, customers, or life.