Thursday, July 10, 2008
Effective Software Testing (Test Planning)
The cornerstone of a successful test program is effective test planning.
Proper test planning requires an understanding of the corporate culture
and its software-development processes, in order to adapt or suggest
improvements to processes as necessary.
Planning must take place as early as possible in the software life cycle,
because lead times must be considered for implementing the test
program successfully. Gaining an understanding of the task at hand early
on is essential in order to estimate required resources, as well as to get
the necessary buy-in and approval to hire personnel and acquire testing
tools, support software, and hardware. Early planning allows for testing
schedules and budgets to be estimated, approved, and then incorporated
into the overall software development plan.
Lead times for procurement and preparation of the testing environment,
and for installation of the system under test, testing tools, databases, and
other components must be considered early on.
No two testing efforts are the same. Effective test planning requires a
clear understanding of all the elements that can affect the testing goal.
Additionally, experience and an understanding of the testing discipline
are necessary, including best practices, testing processes, techniques,
and tools, in order to select the test strategies that can be most
effectively applied and adapted to the task at hand.
During test-strategy design, risks, resources, time, and budget
constraints must be considered. An understanding of estimation
techniques and their implementation is needed in order to estimate the
required resources and functions, including number of personnel, types
of expertise, roles and responsibilities, schedules, and budgets.
There are several ways to estimate testing efforts, including ratio
methods and comparison with past efforts of similar scope. Proper
estimation allows an effective test team to be assembled (not an easy
task if it must be built from scratch) and allows project delivery
schedules to reflect the work of the testing team as accurately as possible.
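As a rough illustration of the ratio method, a back-of-the-envelope calculation along the following lines can be used; all of the figures and the ratio below are purely hypothetical and would come from an organization's own history of comparable projects.

    # Rough sketch of a ratio-based test-effort estimate (all figures hypothetical).
    def estimate_test_effort(dev_effort_hours, test_to_dev_ratio):
        """Scale projected development effort by a ratio observed on past projects."""
        return dev_effort_hours * test_to_dev_ratio

    # Example: comparable past efforts needed roughly 0.4 hours of testing per
    # hour of development; the new project is estimated at 5000 development hours.
    dev_hours = 5000
    ratio = 0.4                    # derived from past efforts of similar scope
    test_hours = estimate_test_effort(dev_hours, ratio)

    testers = 3                    # planned size of the test team
    hours_per_week = 35            # productive hours per tester per week
    weeks = test_hours / (testers * hours_per_week)
    print(f"{test_hours:.0f} test hours, roughly {weeks:.1f} weeks with {testers} testers")

Comparison to past efforts works the same way, except that the ratio is replaced by actual figures from a completed project of similar scope and complexity.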
Tuesday, July 8, 2008
Beware of Developing and Testing Based on an Existing System
In many software-development projects, a legacy application already exists, with little or no existing requirement documentation, and is the basis for an architectural redesign or platform upgrade. Most organizations in this situation insist that the new system be developed and tested based exclusively on continual investigation of the existing application, without taking the time to analyze or document how the application functions. On the surface, it appears this will result in an earlier delivery date, since little or no effort is "wasted" on requirements reengineering or on analyzing and documenting an application that already exists, when the existing application in itself supposedly manifests the needed requirements.
Unfortunately, in all but the smallest projects, the strategy of using an existing application as the requirements baseline comes with many pitfalls and often results in few (if any) documented requirements, improper functionality, and incomplete testing.
Although some functional aspects of an application are self-explanatory, many domain-related features are difficult to reverse-engineer, because it is easy to overlook business logic that may depend on the supplied data. As it is usually not feasible to investigate the existing application with every possible data input, it is likely that some intricacy of the functionality will be missed. In some cases, the reasons for certain inputs producing certain outputs may be puzzling, and will result in software developers providing a "best guess" as to why the application behaves the way it does. To make matters worse, once the actual business logic is determined, it is typically not documented; instead, it is coded directly into the new application, causing the guessing cycle to perpetuate.
Aside from business-logic issues, it is also possible to misinterpret the meaning of user-interface fields, or miss whole sections of the user interface completely. Many times, the existing baseline application is still live and under development, probably using a different architecture along with an older technology (for example, desktop vs. Web versions); or it is in production and under continuous maintenance, which often includes defect fixing and feature additions for each new production release. This presents a "moving-target" problem: Updates and new features are being applied to the application that is to serve as the requirements baseline for the new product, even as it is being reverse-engineered by the developers and testers for the new application. The resulting new application may become a mixture of the different states of the existing application as it has moved through its own development life cycle.
Finally, performing analysis, design, development, and test activities in a "moving-target" environment makes it difficult to properly estimate time, budgets, and staffing required for the entire software development life cycle. The team responsible for the new application cannot effectively predict the effort involved, as no requirements are available to clarify what to build or test. Most estimates must be based on a casual understanding of the application's functionality that may be grossly incorrect, or may need to suddenly change if the existing application is upgraded. Estimating tasks is difficult enough when based on an excellent statement of requirements, but it is almost impossible when so-called "requirements" are embodied in a legacy or moving-target application.
On the surface, it may appear that one of the benefits of building an application based on an existing one is that testers can compare the "old" application's output over time to that produced by the newly implemented application, if the outputs are supposed to be the same. However, this can be unsafe: What if the "old" application's output has been wrong for some scenarios for a while, but no one has noticed? If the new application is behaving correctly, but the old application's output is wrong, the tester would document an invalid defect, and the resulting fix would incorporate the error present in the existing application.
Even if testers decide they cannot rely on the "old" application for output comparison, problems remain: if they execute their test procedures and the output differs between the two applications, the testers are left wondering which output is correct. If the requirements are not documented, how can a tester know for certain which output is correct? The analysis that should have taken place during the requirements phase to determine the expected output is now in the hands of the tester.
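One way to make such comparisons less risky is to treat every mismatch as an item for analysis rather than as an automatic defect against the new application. The sketch below is only illustrative; legacy_app and new_app are hypothetical stand-ins for whatever interface each system exposes, and the scenarios would come from the documented baseline discussed later in this post.

    # Hypothetical comparison harness: run the same scenarios against the legacy
    # and the new application and collect mismatches for analysis, rather than
    # assuming either side is automatically correct.
    def compare_outputs(scenarios, legacy_app, new_app):
        """Return the scenarios whose outputs differ between the two systems."""
        discrepancies = []
        for scenario in scenarios:
            legacy_result = legacy_app(scenario)   # placeholder call into the old system
            new_result = new_app(scenario)         # placeholder call into the new system
            if legacy_result != new_result:
                discrepancies.append({
                    "scenario": scenario,
                    "legacy": legacy_result,
                    "new": new_result,
                    # A domain expert decides which side is correct; the mismatch
                    # alone does not say where the defect lies.
                    "resolution": "needs analysis",
                })
        return discrepancies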
Although basing a new software development project on an existing application can be difficult, there are ways to handle the situation. The first step is to manage expectations. Team members should be aware of the issues involved in basing new development on an existing application. The following list outlines several points to consider.
Use a Fixed Application Version
All stakeholders must understand why the new application must be based on one specific version of the existing software, and they must agree to this condition. The team must select a version of the existing application on which the new development is to be based, and use only that version for the initial development.
Working from a fixed application version makes tracking defects more straightforward, since the selected version of the existing application will determine whether there is a defect in the new application, regardless of upgrades or corrections to the existing application's code base. It will still be necessary to verify that the existing application is indeed correct, using domain expertise, as it is important to recognize if the new application is correct while the legacy application is defective.
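Once a baseline version is fixed and the expected behavior of each scenario has been documented (see the next step), discrepancies between the two systems can be triaged deterministically. The helper below is only a sketch of that decision, not part of any particular tool.

    # Hypothetical triage helper: with a fixed baseline version and a documented
    # expected result, a discrepancy can be assigned to one side deterministically.
    def triage(expected, legacy_output, new_output):
        if legacy_output == expected and new_output == expected:
            return "no discrepancy"
        if new_output == expected:
            return "legacy defect: do not 'fix' the new application to match the old one"
        if legacy_output == expected:
            return "defect in the new application"
        return "neither output matches the documented expectation; revisit the requirement"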
Document the Existing Application
The next step is to have a domain or application expert document the existing application, writing at least a paragraph on each feature and supplying various testing scenarios and their expected output. Preferably, a full analysis would be done on the existing application, but in practice this can add considerable time and personnel to the effort, which may not be feasible and is rarely funded. A more realistic approach is to document the features in paragraph form, and to create detailed requirements only for complex interactions that require detailed documentation.
It is usually not enough to document only the user interface(s) of the current application. If the interface functionality doesn't show the intricacies of the underlying functional behavior inside the application and how such intricacies interact with the interface, this documentation will be insufficient.
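A lightweight way to capture this is one structured record per feature: a short summary, concrete scenarios with expected output, and any behavior that is not visible from the interface alone. The example below is entirely made up and only shows the shape such a record might take.

    # Hypothetical documentation record for one feature of the existing application.
    feature_doc = {
        "feature": "customer discount calculation",
        "summary": ("Orders over a configurable threshold receive a tiered discount; "
                    "the tier table lives in the database, not in the user interface."),
        "scenarios": [
            {"input": {"order_total": 500,  "customer_type": "retail"},
             "expected_output": "no discount applied"},
            {"input": {"order_total": 5000, "customer_type": "retail"},
             "expected_output": "5% discount line item"},
            {"input": {"order_total": 5000, "customer_type": "wholesale"},
             "expected_output": "12% discount line item"},
        ],
        # Behavior that cannot be seen from the user interface alone:
        "notes": "Discount tiers are re-read nightly; a mid-day tier change takes effect the next day.",
    }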
Document Updates to the Existing Application
Updates (that is, additional or changed requirements) for the existing baseline application from this point forward should be documented for reference later, when the new application is ready to be upgraded. This will allow stable analysis of the existing functionality, and the creation of appropriate design and testing documents. If applicable, requirements, test procedures, and other test artifacts can be used for both products.
If updates are not documented, development of the new product will become "reactive": Inconsistencies between the legacy and new products will surface piecemeal; some will be corrected while others will not; and some will be known in advance while others will be discovered during testing or, worse, during production.
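One hypothetical way to keep such updates from getting lost is a simple log keyed to the fixed baseline version, revisited when the new application is ready to absorb the changes; the entry below is purely illustrative, and every identifier in it is invented.

    # Illustrative update-log entry for the existing baseline application.
    baseline_updates = [
        {
            "baseline_version": "legacy-app 3.2.1",   # fixed version the new work is based on
            "introduced_in": "legacy-app 3.3.0",      # release that changed the behavior
            "change": "bulk-order discount threshold lowered from 5000 to 2500",
            "affected_requirements": ["REQ-101"],     # hypothetical requirement id
            "applied_to_new_application": False,
        },
    ]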
Implement an Effective Development Process Going Forward
Even though the legacy system may have been developed without requirements, design or test documentation, or any system-development processes, whenever a new feature is developed for either the existing or the new application, developers should make sure a system-development process has been defined, is communicated, is followed, and is adjusted as required, to avoid perpetuating bad software engineering practices.
After following these steps, the feature set of the application under development will have been outlined and quantified, allowing for better organization, planning, tracking, and testing of each feature.
Monday, June 16, 2008
Ensure That Requirement Changes Are Communicated
When test procedures are based on requirements, it is important to keep test-team members informed of changes to the requirements as they occur. This may seem obvious, but it is surprising how often test procedures are executed that differ from an application's implementation, because the implementation has been changed to reflect updated requirements. Many times, the testers responsible for developing and executing the test procedures are not notified of requirements changes, which can result in false reports of defects and in valuable time lost on unnecessary research.
There can be several reasons for this kind of process breakdown, such as:
Undocumented Changes
Someone (a project manager, customer, or requirements analyst) may have instructed the developer to implement a feature change without agreement from other stakeholders, and the developer may have implemented the change without communicating or documenting it. A process needs to be in place that makes it clear to developers how and when requirements can be changed. This is commonly handled through a Change Control Board, an Engineering Review Board, or some similar mechanism, discussed below.
Outdated Requirements Documentation
An oversight on the testers' part, or poor configuration management, may cause a tester to work with an outdated version of the requirements documentation when developing the test plan or procedures. Updates to requirements need to be documented, placed under configuration-management control (baselined), and communicated to all stakeholders involved.
Software Defect
The developer may have implemented a requirement incorrectly, although the requirements documentation and the test documentation are correct. In this case, a defect report should be written. However, if a requirements-change process is not being followed, it can be difficult to tell which of the aforementioned scenarios is actually occurring. Is the problem in the software, the requirements, the test procedures, or all of the above? To avoid guesswork, all requirements changes must be openly evaluated, agreed upon, and communicated to all stakeholders. This can be accomplished by having a requirements-change process in place that facilitates the communication of any requirement changes to all stakeholders.
If a requirement needs to be corrected, the change process must take into account the ripple effect upon design, code, and all associated documentation, including test documentation. To manage this process effectively, any changes should be baselined and versioned in a configuration-management system. The change process outlines when, how, by whom, and where change requests are initiated. The process might specify that a change request can be initiated during any phase of the life cycle: during any type of review, walk-through, or inspection, or during requirements, design, coding, defect-tracking, or testing activities, or any other phase.
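As a small illustration of why baselining and versioning matter for the test team, a check along the following lines can flag test procedures written against a requirement version that is older than the current baseline. All identifiers, versions, and fields here are invented for the example.

    # Sketch of a traceability check: flag test procedures that reference a
    # requirement version older than the current baseline, so they are reviewed
    # and updated before execution.
    requirements_baseline = {      # requirement id -> current baselined version
        "REQ-101": 3,
        "REQ-102": 1,
    }

    test_procedures = [            # each procedure records the version it was written against
        {"id": "TP-001", "requirement": "REQ-101", "written_against_version": 2},
        {"id": "TP-002", "requirement": "REQ-102", "written_against_version": 1},
    ]

    for tp in test_procedures:
        baseline = requirements_baseline[tp["requirement"]]
        if tp["written_against_version"] < baseline:
            print(f"{tp['id']} is based on {tp['requirement']} "
                  f"v{tp['written_against_version']}; current baseline is v{baseline}")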