Thursday, July 10, 2008
Effective Software Testing (Test Planning)
The cornerstone of a successful test program is effective test planning.
Proper test planning requires an understanding of the corporate culture
and its software-development processes, in order to adapt or suggest
improvements to processes as necessary.
Planning must take place as early as possible in the software life cycle,
because lead times must be considered for implementing the test
program successfully. Gaining an understanding of the task at hand early
on is essential in order to estimate required resources, as well as to get
the necessary buy-in and approval to hire personnel and acquire testing
tools, support software, and hardware. Early planning allows for testing
schedules and budgets to be estimated, approved, and then incorporated
into the overall software development plan.
Lead times for procurement and preparation of the testing environment,
and for installation of the system under test, testing tools, databases, and
other components must be considered early on.
No two testing efforts are the same. Effective test planning requires a
clear understanding of all parts that can affect the testing goal.
Additionally, experience and an understanding of the testing discipline
are necessary, including best practices, testing processes, techniques,
and tools, in order to select the test strategies that can be most
effectively applied and adapted to the task at hand.
During test-strategy design, risks, resources, time, and budget
constraints must be considered. An understanding of estimation
techniques and their implementation is needed in order to estimate the
required resources and functions, including number of personnel, types
of expertise, roles and responsibilities, schedules, and budgets.
There are several ways to estimate testing efforts, including ratio
methods and comparison to past efforts of similar scope. Proper
estimation allows an effective test team to be assembled (not an easy task if it must be done from scratch) and allows project delivery schedules to reflect the work of the testing team as accurately as possible.
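As a rough illustration of these two approaches, here is a minimal sketch in Python; the ratio, hours, and feature counts are made-up placeholder values, not figures from any real project.

```python
# Hypothetical illustration of two common estimation approaches:
# a ratio method and a comparison to a past effort of similar scope.

def estimate_by_ratio(dev_hours: float, test_to_dev_ratio: float = 0.4) -> float:
    """Estimate test hours as a fixed ratio of development hours.

    The 0.4 ratio is an assumed placeholder; real ratios come from an
    organization's own historical data.
    """
    return dev_hours * test_to_dev_ratio


def estimate_by_comparison(past_test_hours: float,
                           past_feature_count: int,
                           new_feature_count: int) -> float:
    """Scale a similar past project's test effort by relative scope."""
    return past_test_hours * (new_feature_count / past_feature_count)


if __name__ == "__main__":
    print("Ratio method:     ", estimate_by_ratio(dev_hours=2000))
    print("Comparison method:", estimate_by_comparison(
        past_test_hours=700, past_feature_count=35, new_feature_count=50))
```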
Tuesday, July 8, 2008
Beware of Developing and Testing Based on an Existing System
In many software-development projects, a legacy application already exists, with little or no existing requirement documentation, and is the basis for an architectural redesign or platform upgrade. Most organizations in this situation insist that the new system be developed and tested based exclusively on continual investigation of the existing application, without taking the time to analyze or document how the application functions. On the surface, it appears this will result in an earlier delivery date, since little or no effort is "wasted" on requirements reengineering or on analyzing and documenting an application that already exists, when the existing application in itself supposedly manifests the needed requirements.
Unfortunately, in all but the smallest projects, the strategy of using an existing application as the requirements baseline comes with many pitfalls and often results in few (if any) documented requirements, improper functionality, and incomplete testing.
Although some functional aspects of an application are self-explanatory, many domain-related features are difficult to reverse-engineer, because it is easy to overlook business logic that may depend on the supplied data. As it is usually not feasible to investigate the existing application with every possible data input, it is likely that some intricacy of the functionality will be missed. In some cases, the reasons for certain inputs producing certain outputs may be puzzling, and will result in software developers providing a "best guess" as to why the application behaves the way it does. To make matters worse, once the actual business logic is determined, it is typically not documented; instead, it is coded directly into the new application, causing the guessing cycle to perpetuate.
Aside from business-logic issues, it is also possible to misinterpret the meaning of user-interface fields, or miss whole sections of user interface completely. Many times, the existing baseline application is still live and under development, probably using a different architecture along with an older technology (for example, desktop vs. Web versions); or it is in production and under continuous maintenance, which often includes defect fixing and feature additions for each new production release. This presents a "moving-target" problem: Updates and new features are being applied to the application that is to serve as the requirements baseline for the new product, even as it is being reverse-engineered by the developers and testers for the new application. The resulting new application may become a mixture of the different states of the existing application as it has moved through its own development life cycle.
Finally, performing analysis, design, development, and test activities in a "moving-target" environment makes it difficult to properly estimate time, budgets, and staffing required for the entire software development life cycle. The team responsible for the new application cannot effectively predict the effort involved, as no requirements are available to clarify what to build or test. Most estimates must be based on a casual understanding of the application's functionality that may be grossly incorrect, or may need to suddenly change if the existing application is upgraded. Estimating tasks is difficult enough when based on an excellent statement of requirements, but it is almost impossible when so-called "requirements" are embodied in a legacy or moving-target application.
On the surface, it may appear that one of the benefits of building an application based on an existing one is that testers can compare the "old" application's output over time to that produced by the newly implemented application, if the outputs are supposed to be the same. However, this can be unsafe: What if the "old" application's output has been wrong for some scenarios for a while, but no one has noticed? If the new application is behaving correctly, but the old application's output is wrong, the tester would document an invalid defect, and the resulting fix would incorporate the error present in the existing application.
Even if testers do rely on the "old" application for output comparison, problems remain: if they execute their test procedures and the output differs between the two applications, the testers are left wondering which output is correct. If the requirements are not documented, how can a tester know for certain which output is correct? The analysis that should have taken place during the requirements phase to determine the expected output is now in the hands of the tester.
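One cautious way to use the old application's output is to treat any mismatch as a question for analysis rather than as an automatic defect. Below is a minimal, hypothetical comparison-harness sketch; the scenario and the two run functions are stand-ins for whatever actually drives each application.

```python
# Minimal sketch of an output-comparison harness for a legacy vs. new
# application. The run_legacy/run_new callables are hypothetical stand-ins
# for whatever actually drives each system (API call, CLI, UI automation).

from typing import Callable, Dict, Any

def compare_outputs(scenarios: Dict[str, Any],
                    run_legacy: Callable[[Any], Any],
                    run_new: Callable[[Any], Any]) -> None:
    for name, inputs in scenarios.items():
        old_out = run_legacy(inputs)
        new_out = run_new(inputs)
        if old_out == new_out:
            print(f"{name}: MATCH")
        else:
            # A mismatch is not automatically a defect in the new system;
            # the legacy output itself may be wrong, so route it to analysis.
            print(f"{name}: MISMATCH - needs analysis "
                  f"(legacy={old_out!r}, new={new_out!r})")

if __name__ == "__main__":
    # Hypothetical stand-ins for the two applications.
    legacy = lambda x: x["amount"] * 0.1          # legacy rounding behaviour
    new = lambda x: round(x["amount"] * 0.1, 2)   # new, possibly corrected
    compare_outputs({"fee-calculation": {"amount": 19.99}}, legacy, new)
```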
Although basing a new software development project on an existing application can be difficult, there are ways to handle the situation. The first step is to manage expectations. Team members should be aware of the issues involved in basing new development on an existing application. The following list outlines several points to consider.
Use a Fixed Application Version
All stakeholders must understand why the new application must be based on one specific version of the existing software, and must agree to this condition. The team must select a version of the existing application on which the new development is to be based, and use only that version for the initial development.
Working from a fixed application version makes tracking defects more straightforward, since the selected version of the existing application will determine whether there is a defect in the new application, regardless of upgrades or corrections to the existing application's code base. It will still be necessary to verify that the existing application is indeed correct, using domain expertise, as it is important to recognize if the new application is correct while the legacy application is defective.
Document The Existing Application
The next step is to have a domain or application expert document the existing application, writing at least a paragraph on each feature and supplying various testing scenarios and their expected output. Preferably, a full analysis would be done on the existing application, but in practice this can add considerable time and personnel to the effort, which may not be feasible and is rarely funded. A more realistic approach is to document the features in paragraph form, and create detailed requirements only for complex interactions that require detailed documentation.
It is usually not enough to document only the user interface(s) of the current application. If the interface functionality doesn't show the intricacies of the underlying functional behavior inside the application and how such intricacies interact with the interface, this documentation will be insufficient.
Document Updates To The Existing Application
Updates (that is, additional or changed requirements) for the existing baseline application from this point forward should be documented for reference later, when the new application is ready to be upgraded. This will allow stable analysis of the existing functionality, and the creation of appropriate design and testing documents. If applicable, requirements, test procedures, and other test artifacts can be used for both products.
If updates are not documented, development of the new product will become "reactive": Inconsistencies between the legacy and new products will surface piecemeal; some will be corrected while others will not; and some will be known in advance while others will be discovered during testing or, worse, during production.
Implement an Effective Development Process Going Forward
Even though the legacy system may have been developed without requirements, design or test documentation, or any system-development processes, whenever a new feature is developed for either the previous or the new application, developers should make sure a system-development process has been defined, communicated, followed, and adjusted as required, to avoid perpetuating bad software-engineering practices.
After following these steps, the feature set of the application under development will have been outlined and quantified, allowing for better organization, planning, tracking, and testing of each feature.
Monday, June 16, 2008
Ensure That Requirement Changes Are Communicated
When test procedures are based on requirements, it is important to keep test-team members informed of changes to the requirements as they occur. This may seem obvious, but it is surprising how often test procedures are executed that differ from an application's implementation because the implementation has been changed due to updated requirements. Many times, the testers responsible for developing and executing the test procedures are not notified of requirements changes, which can result in false reports of defects and the loss of required research and valuable time.
There can be several reasons for this kind of process breakdown, such as:
Undocumented Changes
Someone (a project manager, customer, or requirements analyst) may have instructed the developer to implement a feature change without agreement from other stakeholders, and the developer may have implemented the change without communicating or documenting it. A process needs to be in place that makes it clear to developers how and when requirements can be changed. This is commonly handled through a Change Control Board, an Engineering Review Board, or some similar mechanism, discussed below.
Outdated Requirements Documentation
An oversight on the tester's part or poor configuration management may cause a tester to work with an outdated version of the requirements documentation when developing a test plan or procedures. Updates to requirements need to be documented, placed under configuration-management control (baselined), and communicated to all stakeholders involved.
Software Defect
The developer may have implemented a requirement incorrectly, although the requirement documentation and the test documentation are correct. In this last case, a defect report should be written. However, if a requirement-change process is not being followed, it can be difficult to tell which of the aforementioned scenarios is actually occurring. Is the problem in the software, the requirements, the test procedures, or all of the above? To avoid guesswork, all requirement changes must be openly evaluated, agreed upon, and communicated to all stakeholders. This can be accomplished by having a requirement-change process in place that facilitates the communication of any requirement changes to all stakeholders.
If a requirement needs to be corrected, the change process must take into account the ripple effect upon design, code, and all associated documentation, including test documentation. To effectively manage this process, any changes should be baselined and versioned in a configuration-management system. The change process outlines when, how, by whom, and where change requests are initiated. The process might specify that a change request can be initiated during any phase of the life cycle: during any type of review, walk-through, or inspection; during requirements, design, code, defect-tracking, or testing activities; or during any other phase.
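To make such a change process concrete, here is a minimal sketch of a change-request record that captures who raised the change, in which phase, and which downstream artifacts must be revisited; the field names and example values are illustrative, not a prescribed standard.

```python
# Hypothetical sketch of a requirement change-request record.
# Field names, statuses, and example values are illustrative only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    request_id: str
    requirement_id: str
    initiated_by: str          # e.g. project manager, analyst, tester
    phase: str                 # phase in which the change was raised
    description: str
    affected_artifacts: List[str] = field(default_factory=list)
    status: str = "proposed"   # proposed -> approved/rejected -> implemented

    def approve(self) -> None:
        """Record Change Control Board approval."""
        self.status = "approved"

cr = ChangeRequest(
    request_id="CR-042",
    requirement_id="REQ-17",
    initiated_by="business analyst",
    phase="testing",
    description="Clarify rounding rule for fee calculation.",
    affected_artifacts=["design spec", "fee module", "test procedure TP-17"],
)
cr.approve()
print(cr)
```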
Thursday, June 5, 2008
Design Test Procedures as soon as Requirements are available
Moving the test-procedure development effort closer to the requirements phase of the process, rather than waiting until the software-development phase, allows test procedures to provide benefits to the requirements-specification activity. During the course of developing a test procedure, certain oversights, omissions, incorrect flows, and other errors may be discovered in the requirements document, as testers attempt to walk through an interaction with the system at a very specific level, using sets of test data as input. If a problem is uncovered in a requirement, that requirement will need to be reworked to account for this discovery. The earlier in the process such corrections are incorporated, the less likely it is that the corrections will affect software design or implementation.
As I mentioned before, early detection equates to lower cost. If a requirement defect is discovered in later phases of the process, all stakeholders must change the requirement, design, and code, which will affect budget, schedules, and possibly morale. However, if the defect is discovered during the requirements phase, repairing it is simply a matter of changing and reviewing the requirement text.
The process of identifying errors or omissions in a requirement through test procedure definition is referred to as Verifying the Requirement’s Testability. If not enough information exists, or the information provided in the specification is too ambiguous to create a complete test procedure with its related test cases for relevant paths, the specification is not considered to be testable, and may not be suitable for Software Development. Whether a test can be developed for a requirement is a valuable check and should be considered part of the process of approving a requirement as complete.
If a requirement cannot be verified, there is no guarantee that it will be implemented correctly. Being able to develop a test procedure that includes data inputs, steps to verify the requirement, and known expected outputs for each related requirement helps assure requirement completeness, by confirming that no important requirement information is missing that would make the requirement difficult or even impossible to implement correctly, and untestable. Developing test procedures for requirements early on allows for early discovery of non-verifiability issues.
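As a small sketch of what this looks like in practice, the example below turns a hypothetical requirement (a 10% discount on orders over 100) into a procedure with concrete data inputs and known expected outputs; the requirement, its identifier, and the function under test are invented for illustration.

```python
# Sketch: verifying a hypothetical requirement with concrete inputs and
# known expected outputs. REQ-5 (10% discount on orders over 100) and
# apply_discount() are illustrative assumptions, not a real specification.

def apply_discount(order_total: float) -> float:
    """Implementation under test (stand-in)."""
    return order_total * 0.9 if order_total > 100 else order_total

# Each row: (input, expected output) derived while the requirement is written.
REQ_5_CASES = [
    (100.00, 100.00),   # boundary: no discount at exactly 100
    (100.01, 90.009),   # just above the boundary: discount applies
    (250.00, 225.00),
]

def test_req_5_discount():
    for order_total, expected in REQ_5_CASES:
        actual = apply_discount(order_total)
        assert abs(actual - expected) < 1e-9, (order_total, actual, expected)

if __name__ == "__main__":
    test_req_5_discount()
    print("REQ-5 procedure passed for all cases")
```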
Developing test procedures after a software build has been delivered to the testing team also risks incomplete test-procedure development, because of the intense time pressure to complete the product's testing cycle. This can manifest in various ways. For example, a test procedure might be missing entirely, or it may not be thoroughly defined, omitting certain paths or data elements that may make a difference in the test outcome. As a result, defects might be missed, or the requirement may be incomplete, as described earlier, and not support the definition of the necessary test procedures, or even proper software development. Incomplete requirements often result in incomplete implementation.
Early evaluation of the testability of an application's requirements can be the basis for defining a testing strategy. While reviewing the testability of requirements, testers might determine, for example, that using a capture/playback tool would be ideal, allowing some of the tests to be executed in an automated fashion. Determining this early allows enough lead time to evaluate and implement automated testing tools.
Sunday, June 1, 2008
Verifying Requirements (Cont)
Testers need guidelines to ensure the quality of requirements they verify during the requirements phase. The following checklist can be used by testers during the requirements phase to verify the quality of each requirement. Using this checklist is a first step toward trapping requirement-related defects as early as possible, so they do not propagate to subsequent phases, where they would be more difficult and expensive to find and correct. All stakeholders responsible for requirements should verify that requirements possess the following attributes:
Correctness
Correctness of a requirement is judged based on what the user wants. The following questions need to be considered: Are the rules and regulations stated correctly? Are the standards being followed? Does the requirement exactly reflect the user's request? It is imperative that the end user, or a suitable representative, be involved during the requirements phase.
Completeness
Completeness ensures that no necessary elements are missing from the requirement. The goal is to avoid omitting requirements simply because no one has asked the right question or examined all of the pertinent source documents. Testers should insist that associated nonfunctional requirements, such as performance, security, usability, compatibility, and accessibility, are described along with each functional requirement. Nonfunctional requirements are usually documented in two steps:
A System-wide specification is created that defines the Nonfunctional Requirements that apply to the system.
Each requirement description should contain a section titled “Nonfunctional Requirements” documenting any specific nonfunctional needs of that particular requirement that deviate from the system-wide nonfunctional specification.
I will explain more details about functional and nonfunctional requirements in another post.
Consistency
Consistency verifies that there are no internal or external contradictions among the elements within the work products, or between work products. We should ask the question: "Does the specification define every essential subject-matter term used within the specification?" From this we can determine whether the elements used in the requirement are clear and precise. Without clear and consistent definitions, determining whether a requirement is correct becomes a matter of opinion.
Testability or Verifiability
Testability of the requirement confirms that it is possible to create a test for the Requirement, and that an expected result is known and can be programmatically or visually verified. If a requirement cannot be tested or otherwise verified, this fact and its associated risks must be stated, and the requirement must be adjusted if possible so that it can be tested.
Feasibility
Feasibility of a requirement ensures that it can be implemented given the budget, schedules, technology, and other resources available.
Necessity
Necessity verifies that every requirement in the specification is relevant to the system. To test for relevance or necessity, testers check the requirement against the stated goals for the system: "Does this requirement contribute to those goals?", "Would excluding this requirement prevent the system from meeting those goals?", "Are any other requirements dependent on this requirement?". Some irrelevant requirements are not really requirements, but proposed solutions.
Prioritization
Prioritization allows everyone to understand the relative value to stakeholders of each requirement. A scale from 1 to 5 can be used to specify the level of reward for good performance and penalty for bad performance on a requirement. If a requirement is absolutely vital to the success of the system, then it has a penalty of 5 and a reward of 5. A requirement that would be nice to have but is not really vital might have a penalty of 1 and a reward of 3. This knowledge can be used to make prioritization and trade-off decisions when the time comes to design the system. This approach needs to balance the perspective of the user against the cost and technical risk associated with a proposed requirement.
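A minimal sketch of this penalty/reward bookkeeping, with assumed scores and a simple combined ranking (the ranking formula is just one possible choice):

```python
# Sketch of the 1-5 penalty/reward prioritization described above.
# The requirements and their scores are assumed values for illustration.

requirements = {
    "REQ-1 process payment":      {"reward": 5, "penalty": 5},  # vital
    "REQ-9 export report as PDF": {"reward": 3, "penalty": 1},  # nice to have
}

# Rank by the combined score (one simple possibility, not a prescribed rule).
for req_id, scores in sorted(
        requirements.items(),
        key=lambda item: item[1]["reward"] + item[1]["penalty"],
        reverse=True):
    total = scores["reward"] + scores["penalty"]
    print(f"{req_id}: reward={scores['reward']} penalty={scores['penalty']} "
          f"combined={total}")
```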
Unambiguousness
Unambiguousness ensures that requirements are stated in a precise and measurable way.
Traceability
This ensures that each requirement is identified in such a way that it can be associated with all parts of the system where it is used. For any change to a requirement, is it possible to identify all parts of the system where this change has an effect?
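A minimal sketch of such a traceability lookup, using hypothetical requirement and artifact identifiers:

```python
# Sketch of a simple requirements traceability matrix. Identifiers are
# hypothetical; a real matrix would come from the project's own artifacts.

trace_matrix = {
    "REQ-17": {
        "design": ["DD-4.2 fee calculation"],
        "code": ["billing/fees.py"],
        "tests": ["TP-17", "TC-17-01", "TC-17-02"],
    },
}

def impact_of_change(requirement_id: str) -> None:
    """List every artifact that must be revisited if the requirement changes."""
    for artifact_type, artifacts in trace_matrix.get(requirement_id, {}).items():
        print(f"{requirement_id} -> {artifact_type}: {', '.join(artifacts)}")

impact_of_change("REQ-17")
```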
Tuesday, May 20, 2008
Verifying Requirements
The most important aspect of specifying requirements is defining a quality measure for each requirement. Once a quality measure is specified for a requirement, any solution that meets this measure will be acceptable, and any solution that does not meet the measure will not be acceptable. Quality measures are used to test the new system against the requirements.

Attempting to define the quality measure for a requirement helps to rationalize fuzzy requirements. For example, everyone would agree with a statement like "the system must provide good value", but each person may have a different interpretation of "good value". In devising the scale that must be used to measure "good value", it becomes necessary to identify what that term means. Sometimes requiring the stakeholders to think about a requirement in this way will lead to an agreed-upon quality measure. In other cases, there may be no agreement on a quality measure. One solution would be to replace one vague requirement with several unambiguous requirements, each with its own quality measure.

It is important that guidelines for requirements development and documentation be defined at the outset of the project. In all but the smallest programs, careful analysis is required to ensure that the system is developed properly. Use cases are one way to document functional requirements, and can lead to more thorough system designs and test procedures.

In addition to functional requirements, it is also important to consider nonfunctional requirements, such as performance and security, early in the process: they can determine the technology choices and areas of risk. Nonfunctional requirements do not endow the system with any specific functions, but rather constrain or further define how the system will perform any given function. Functional requirements should be specified along with their associated nonfunctional requirements.
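To illustrate, the sketch below replaces the vague "good value" statement with two measurable checks; the metrics and thresholds are invented examples, not recommendations.

```python
# Sketch: replacing one vague requirement ("the system must provide good
# value") with measurable quality checks. Thresholds are invented examples.

def meets_response_time(samples_ms, limit_ms=2000, percentile=0.95):
    """True if at least `percentile` of sampled response times are within limit_ms."""
    within = sum(1 for s in samples_ms if s <= limit_ms)
    return within / len(samples_ms) >= percentile

def meets_cost_per_transaction(total_cost, transactions, max_cost=0.05):
    """True if average processing cost per transaction stays under max_cost."""
    return (total_cost / transactions) <= max_cost

print(meets_response_time([350, 420, 1800, 2500, 900, 600, 700, 400, 300, 1100]))
print(meets_cost_per_transaction(total_cost=4200.0, transactions=100_000))
```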
Monday, May 12, 2008
Involve Testers from the beginning
Now I will describe the first step of effective software testing in the requirements phase. Testers need to be involved from the beginning of a project's life cycle so they can understand exactly what they are testing and can work with other stakeholders to create testable requirements.
A requirement can be considered testable if it is possible to design a procedure in which the functionality being tested can be executed, the expected output is known, and the output can be programmatically or visually verified. Testers need a solid understanding of the product, so they can devise better and more complete test plans, designs, procedures, and cases. Early test-team involvement can eliminate confusion about functional behavior later in the project life cycle. In addition, early involvement allows the test team to learn over time which aspects of the application are the most critical to the end user and which are the highest-risk elements. This knowledge enables testers to focus on the most important parts of the application first, avoiding over-testing rarely used areas and under-testing the more important ones.
Some organizations regard testers strictly as consumers of the requirements and other software-development work products, requiring them to learn the application and domain as software builds are delivered to them, instead of involving them during the earlier phases. This may be acceptable in a smaller project in a small company, but in a big company, especially a financial company like mine with a complex environment, it is not realistic to expect testers to find all significant defects if their first exposure to the application is after it has already been through requirements, analysis, design, and some software implementation. Testers need deeper knowledge that can come only from understanding the thought process used during the specification of product functionality. Such understanding not only increases the quality and depth of the test procedures developed, but also allows testers to provide feedback regarding the requirements. The earlier in the life cycle a defect is discovered, the cheaper it will be to fix. In my first few days as a software tester, I was given many documents containing requirements, analysis, and design to learn. And I had to discuss with other testers, the test leader, the test manager, and the business analyst to make the best test design, which results in the best test plan.
Monday, May 5, 2008
Effective Software Testing (Requirements Phase)
Requirements Phase
The most effective testing programs start at the beginning of a project, long before any program code has been written. The requirements documentation is verified first; then, in the later stages of the project, testing can concentrate on ensuring the quality of the application code. Expensive reworking is minimized by eliminating requirements-related defects early in the project's life, prior to detailed design or coding work.
The requirements specifications for a software application or system must ultimately describe its functionality in great detail. One of the most challenging aspects of requirements development is communicating with the people who are supplying the requirements. Each requirement should be stated precisely and clearly, so it can be understood in the same way by everyone who reads it.
If there is a consistent way of documenting requirements, it is possible for the stakeholders responsible for requirements gathering to effectively participate in the requirements process. As soon as a requirement is made visible, it can be tested and clarified by asking the stakeholders detailed questions. A variety of requirements tests can be applied to ensure that each requirement is relevant, and that everyone has the same understanding of its meaning.
Monday, April 28, 2008
Effective Software Testing (Cont)
To ensure software application reliability and project success, software testing plays a very crucial role. Everything can and should be tested. The steps below (as far as I know, based on theory and implementation in my company) help make software testing effective:
Requirements Phase
Involve Testers from the beginning
Verify the Requirements
Design Test Procedures as soon as Requirements are available
Ensure that Requirement changes are communicated
Beware of Developing and Testing based on an Existing System
Test Planning
Understand the task at hand and The Related Testing Goal
Consider The Risks
Base Testing efforts on a Prioritized feature schedule
Keep Software issues in mind
Acquire Effective Test Data
Plan The Test Environment
Estimate Test Preparation and Execution Time
The Testing Team
Define Roles and Responsibilities
Require a Mixture of Testing skills, subject-matter expertise, and experience
Evaluate the Testers' Effectiveness
The System Architecture
Understand the Architecture and Underlying Components
Verify that the System Supports Testability
Use Logging to Increase System Testability
Verify that the System Supports Debug and Release Execution Modes
Test Design and Documentation
Divide and Conquer
Mandate the use of Test Procedure Template and other Test Design standards
Derive Effective Test Cases from Requirements
Treat Test Procedures as 'Living' Documents
Utilize System Design and Prototypes
Use proven Testing Techniques when designing Test Cases scenarios
Avoid including constraints and detailed data elements within Test Procedures
Apply Exploratory Testing
Unit Testing
Structure the Development Approach to support Effective Unit Testing
Develop Unit Tests in Parallel with or before the Implementation
Make Unit-Test Execution Part of the Build Process
Automated Testing Tools
Know the different types of Testing Support Tools
Consider building a Tool instead of buying one
Know the impact of Automated Tools on The Testing Effort
Focus on the needs of your organization
Test the Tools on an Application Prototype
Automated Testing
Do not rely solely on capture/playback
Develop Test Harness when necessary
Use proven Test Script Development techniques
Automate Regression Tests when feasible
Implement Automated Builds and Smoke Tests
Non Functional Testing
Do not make Non Functional Testing an Afterthought
Conduct Performance Testing with Production-Sized Databases
Tailor usability Test to the intended audience
Consider all aspects of Security for specific requirements and system-wide
Investigate the System's Implementation to plan for concurrency Tests
Set up an efficient environment for Compatibility Testing
Managing Test Execution
Clearly define the beginning and end of The Test Execution Cycle
Isolate the Test Environment from the Development Environment
Implement a Defect Tracking Life Cycle
Track the execution of The Testing Program
Monday, April 21, 2008
Effective Software Testing
What is Effective Software Testing?
How do we measure ‘Effectiveness’ of Software Testing?
The effectiveness of Testing can be measured if the goal and purpose of the testing effort is clearly defined. Some of the typical Testing goals are:
Testing in each phase of the development cycle to ensure that "bugs" (defects) are eliminated as early as possible
Testing to ensure that no "bugs" creep through into the final product
Testing to ensure the reliability of the software
Above all, testing to ensure that user expectations are met
The effectiveness of testing can be measured with the degree of success in achieving the above goals.
Steps to Effective Software Testing:
Several factors influence the effectiveness of the software testing effort, which ultimately determines the success of the project.
A) Coverage:
The testing process and the test cases should cover
All the scenarios that can occur when using the software application
Each business requirement that was defined for the project
Every line of code written for the application, at the appropriate levels of testing
There are various levels of testing which focus on different aspects of the software application. The various levels of testing, based on the V-Model we discussed earlier, are:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
The goal of each testing level is slightly different, thereby ensuring overall project reliability.
Each Level of testing should provide adequate test coverage.
Unit testing should ensure each and every line of code is tested
Integration Testing should ensure the components can be integrated and all the interfaces of each component are working correctly
System Testing should cover all the “paths”/scenarios possible when using the system
System testing is done in an environment that is similar to the production environment, i.e. the environment where the product will finally be deployed.
There are various types of System Testing possible which test the various aspects of the software application.
B) Test Planning and Process:
To ensure effective testing, proper test planning is important.
An effective testing process comprises the following steps:
Test Strategy and Planning
Review the Test Strategy to ensure it is aligned with the Project Goals
Design/Write Test Cases
Review Test Cases to ensure proper Test Coverage
Execute Test Cases
Capture Test Results
Track Defects
Capture Relevant Metrics
Analyze
Having followed the above steps for the various levels of testing, the product is rolled out.
It is not uncommon to see various "bugs"/defects even after the product is released to production. An effective testing strategy and process helps to minimize or eliminate these defects. The extent to which it eliminates these post-production defects (design defects, coding defects, etc.) is a good measure of the effectiveness of the testing strategy and process.
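One common way to quantify this is defect removal efficiency: the share of all known defects that were found before release. A minimal sketch with assumed counts:

```python
# Sketch: defect removal efficiency (DRE) as one effectiveness measure.
# The defect counts are assumed for illustration.

def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

print(f"DRE = {defect_removal_efficiency(190, 10):.0%}")  # 95% with these counts
```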
Tuesday, April 15, 2008
Certification in Software Testing and Software Quality Assurance
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification currently offered actually requires the applicant to demonstrate the ability to test software. No certification is based on a widely accepted body of knowledge. No certification board decertifies individuals. This has led some to declare that the testing field is not ready for certification.[5] Certification itself cannot measure an individual's productivity, skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.[6]
Certifications can be grouped into two types: exam-based and education-based. Exam-based certifications require passing an exam, for which candidates can also prepare by self-study: e.g. ISTQB or QAI. Education-based certifications are instructor-led sessions, where each course has to be passed, e.g. IIST (International Institute for Software Testing).
Software Testing Certifications:
- CSTE offered by the Quality Assurance Institute (QAI)
- CSTP offered by the International Institute for Software Testing
- CSTP (TM) (Australian Version) offered by the K. J. Ross & Associates
- CATe offered by the International Institute for Software Testing
- ISEB offered by the Information Systems Examinations Board
- ISTQB offered by the International Software Testing Qualification Board
- CSQE offered by the American Society for Quality (ASQ)
- CSQA offered by the Quality Assurance Institute (QAI)
Testing Team Structure and Responsibilities
In my company we have our own testing team, as mentioned in a previous post. Each position has its own roles and responsibilities. Below I briefly describe the roles and responsibilities of each position:
1. Project Manager
Responsibilities:
- Initiate Testing Project
- Managing the Testing Project and Resource Allocation
- Test Project Planning, Executing, Monitoring/Controlling, Reporting/Closing
2. Business Analyst
Responsibilities:
- Analyze Business Process, Business Requirement, Functional Specification
- Participate in Preliminary Planning
3. Developer
Responsibilities:
- Develop the system/application
- Business Analyst and Test Leader Interaction
4. Testing Quality Assurance (QA)
Responsibilities:
- Tracking and Ensuring the Test Team complies with the standard Test Process
- Highlighting non-compliance issues to the Test Management Team
5. Test Leader
Responsibilities:
- Analyzing Test Requirements
- Designing Test Strategy and Test Methodology
- Designing Test Suites, Test Cases, Test Data
6. Tester
Responsibilities:
- Test Preparation
- Test Execution
- Raising and Tracking Defects
7. Customer
Responsibilities:
- Initiate Project
- Initiate Requirement
- End User of the System/Application
Saturday, April 5, 2008
Roles in software testing
Software testing can be done by software testers. Until the 1980s the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing (see D. Gelperin and W.C. Hetzel), different roles have been established: test lead/manager, tester, test designer, test automator/automation developer, and test administrator.
Participants of testing team:
- Testers
- Developer
- Business Analyst
- Customer
- Information Service Management
- Test Manager
- Senior Organization Management
- Quality team
In my company, we have our own Organizational Structure for the Testing Team.
Participants of Testing Team in my company:
1. Unit Head (Project Manager)
2. Business Analyst
3. Developer
4. Testing Quality Assurance (QA)
5. Test Leader (Relationship Manager)
6. Tester (My Position)
Level of Testing
Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
Functional testing tests at any level (class, module, interface, or system) for proper functionality as defined in the specification.
System testing tests a completely integrated system to verify that it meets its requirements.
System integration testing verifies that a system is integrated to any external or third-party systems defined in the system requirements.
Non-functional testing validates whether the quality-of-service (sometimes called non-functional requirements) parameters defined at the requirements stage are met by the final product.
Acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed as part of the hand-off process between any two phases of development.
But in my company, we simplify the levels of testing:
- Unit Testing
- Functional Testing
- System Integration Testing (SIT)
- User Acceptance Testing (UAT)
Saturday, March 15, 2008
V - Model : Validation Phases
Validation Phases
Unit Testing
In the V-model of software development, unit testing is the first stage of the dynamic testing process. It involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white box. It is done using the unit test design prepared during the module design phase. This may be carried out by software testers, software developers, or both.
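For a concrete feel of this level, here is a minimal, self-contained example using Python's built-in unittest module; the fee-calculation function is an invented unit under test, not part of any real system.

```python
# Minimal, self-contained illustration of unit-level testing using Python's
# built-in unittest module. calculate_fee() is a hypothetical unit under test.
import unittest

def calculate_fee(amount: float) -> float:
    """Unit under test: 1% fee, rejects negative amounts (invented example logic)."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * 0.01, 2)

class CalculateFeeTests(unittest.TestCase):
    def test_typical_amount(self):
        self.assertEqual(calculate_fee(250.0), 2.5)

    def test_zero_amount(self):
        self.assertEqual(calculate_fee(0.0), 0.0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            calculate_fee(-1.0)

if __name__ == "__main__":
    unittest.main()
```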
Integration Testing
In integration testing the separate modules will be tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black box as the code is not directly checked for errors. It is done using the integration test design prepared during the architecture design phase. Integration testing is generally conducted by software testers.
System Testing
System testing will compare the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated several errors may arise. Testing done at this stage is called system testing.
User Acceptance Testing
Acceptance testing:
- To determine whether a system satisfies its acceptance criteria or not.
- To enable the customer to determine whether to accept the system or not.
- To test the software in the "real world" by the intended audience.
Purpose of acceptance testing:
- To verify the system or changes according to the original needs.
Procedures for conducting the acceptance testing:
Define the acceptance criteria:
- Functionality requirements.
- Performance requirements.
- Interface quality requirements.
- Overall software quality requirements.
Develop an acceptance plan:
- Project description.
- User responsibilities.
- Acceptance description.
- Execute the acceptance test plan.
V - Model : Verification Phases
Verification Phases
Requirement Analysis
In this phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform; however, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, physical, interface, performance, data, and security requirements, etc., as expected by the user. It is the document the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.
System Design
System engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue, a resolution is found, and the user requirements document is edited accordingly. The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows, and reports for better understanding. Other technical documentation, like entity diagrams and the data dictionary, will also be produced in this phase. The documents for system testing are prepared in this phase.
Architecture Design
This phase can also be called high-level design. The baseline in selecting the architecture is that it should realize all the requirements. The high-level design typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.
Module Design
This phase can also be called low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document or program specification will contain a detailed functional logic of the module in pseudocode, database tables with all elements (including their type and size), all interface details with complete API references, all dependency issues, error message listings, and complete inputs and outputs for the module. The unit test design is developed in this stage.
Coding
This is the bottom of the V. The module design is converted into code by the developers, following the adopted coding standards, before the ascent through the validation phases begins.
Tuesday, March 11, 2008
V Model
V-Model
The V-model can be said to have developed as a result of the evolution of software testing. Various testing techniques were defined, and various kinds of testing were clearly separated from each other, which led to the waterfall model evolving into the V-model. The tests in the ascending (validation) hand are derived directly from their design or requirements counterparts in the descending (verification) hand. The 'V' can also stand for the terms Verification and Validation.
The V-Model is more helpful and profitable to companies, as it reduces the time for the whole development of a new product and can also be applied to some complex maintenance projects.
Saturday, February 2, 2008
Test Cases, Suites, Scripts, Scenarios
A test case is a software testing document, which consists of an event, action, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as "for condition x your derived result is y", whereas other test cases describe the input scenario and the expected results in more detail. A test case can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result.

These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results. These past results would usually be stored in a separate table.

The term test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests. Collections of test cases are sometimes incorrectly termed a test plan. They might correctly be called a test specification. If the sequence is specified, it can be called a test script, scenario, or procedure.

The developers are well aware of what test plans will be executed, and this information is made available to them. This makes the developers more cautious when developing their code, and ensures that the developers' code is not put through any surprise test cases or test plans.
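As a small sketch of these terms in code form, the example below models a test case with the fields discussed (ID, related requirement, steps, test data, expected and actual result) and groups cases into a suite that records the system configuration; the structure and field names are illustrative, not a standard schema.

```python
# Illustrative sketch of a test case and a test suite as data structures.
# Field names mirror the elements discussed above; they are not a standard.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    case_id: str
    related_requirement: str
    steps: List[str]
    test_data: dict
    expected_result: str
    actual_result: Optional[str] = None   # filled in during execution

@dataclass
class TestSuite:
    name: str
    system_configuration: str
    cases: List[TestCase] = field(default_factory=list)

suite = TestSuite(
    name="Login regression",
    system_configuration="staging, build 2008-03-14",
    cases=[TestCase(
        case_id="TC-LOGIN-01",
        related_requirement="REQ-AUTH-3",
        steps=["Open login page", "Enter valid credentials", "Submit"],
        test_data={"user": "demo", "password": "secret"},
        expected_result="User is redirected to the dashboard",
    )],
)
print(len(suite.cases), "case(s) in suite:", suite.name)
```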