Wednesday, November 11, 2009
11. Types of Defects
After receiving all the defect reports, the development team analyses these defects to check whether they are genuine, i.e., the team identifies the type of each defect by going through the explanation the test engineer provided for it.
If a defect is accepted, the defect tracking team categorizes it as one of the following:
i. Test Procedure related defects.
ii. Test data (or) Test Input Related defects.
iii. Coding related defects.
iv. Hardware related defects (or) Infrastructure related defects.
11.1. Test Procedure related defects:
Defects related to the test steps in the test procedure, i.e., defects in the testing process itself.
11.2. Test data related defects:
Defects in the data or input values given by the test engineer to test a requirement. The input values for a requirement come from the specification provided by the client or customer for that particular requirement.
(Or)
Test data related defects are defects that occur in the data given to test a requirement.
11.3. Coding related defects:
Defects that occur in the programming logic.
11.4. Hardware related defects:
Defects that occur in the hardware configuration, i.e., in hardware such as scanners, printers, RAM, ROM, etc.
10. Roles & Responsibilities
10.1. Test Associate:
Reporting To:
Team Lead of a project
Responsibilities:
i. Design and develop test conditions and cases with associated test data based upon requirements
ii. Design test scripts
iii. Execute the testware (conditions, cases, test scripts etc.) with the test data generated
iv. Review testware, record defects, retest and close defects
v. Prepare reports on test progress
10.2. Test Engineer:
Reporting To:
Team Lead of a project
Responsibilities:
i. Design and develop test conditions and cases with associated test data based upon requirements
ii. Design test scripts
iii. Execute the testware (conditions, cases, test scripts etc.) with the test data generated
iv. Review testware, record defects, retest and close defects
v. Prepare reports on test progress
10.3. Senior Test Engineer:
Reporting To:
Team Lead of a project
Responsibilities:
i. Collect requirements from the users, evaluate them and send them out for team discussion
ii. Prepare the high-level design document, incorporating the feedback received on it, and initiate the low-level design document
iii. Assist in preparing the test strategy document and drawing up the test plan
iv. Prepare business scenarios and supervise test case preparation based on the business scenarios
v. Maintain the run details of the test execution; review test conditions/cases and test scripts; defect management
vi. Prepare test deliverable documents and the defect metrics analysis report
10.4. Test Lead:
Reporting To:
Test Manager
Responsibilities:
i. Technical leadership of the test project including test approach and tools to be used
ii. Preparation of test strategy
iii. Ensure entrance criteria prior to test start-off
iv. Ensure exit criteria prior to completion sign-off
v. Test planning including automation decisions
vi. Review of design documents (test cases, conditions, scripts)
vii. Preparation of test scenarios and configuration management and quality plan
viii. Manage test cycles
ix. Assist in recruitment
x. Supervise test team
xi. Resolve team queries/problems
xii. Report and follow up on test system outages/problems
xiii. Client interface
xiv. Project progress reporting
xv. Defect Management
xvi. Staying current on latest test approaches and tools, and transferring this knowledge to test team
xvii. Ensure test project documentation
10.5. Test Manager:
Reporting To:
Management
Responsibilities:
i. Liaison for interdepartmental interactions: Representative of the testing team
ii. Client interaction
iii. Recruiting, staff supervision, and staff training.
iv. Test budgeting and scheduling, including test-effort estimations.
v. Test planning including development of testing goals and strategy.
vi. Test tool selection and introduction.
vii. Coordinating pre and post test meetings.
viii. Test program oversight and progress tracking.
ix. Use of metrics to support continual test process improvement.
x. Test process definition, training and continual improvement.
xi. Test environment and test product configuration management.
xii. Nomination of staff for training
xiii. Cohesive integration of test and development activities.
xiv. Mail Training Process for training needs, if required
xv. Review of the proposal
9. Regression Testing & ReTesting.
9. Regression Testing and Re-testing:
“Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.”
“Regression Testing is the process of testing the changes to computer programs to make sure that the older programs still work with the new changes.”
“When making improvements on software, retesting previously tested functions to make sure adding new features has not introduced new problems.”
Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components. Four things can happen when a developer attempts to fix a bug. Three of these things are bad, and one is good:
i. The fix resolves the problem (the good outcome)
ii. The fix fails to resolve the problem
iii. The fix resolves the problem, but something that used to work now fails
iv. The fix fails to resolve the problem, and something that used to work now fails
Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing. A regression test selection technique chooses, from an existing test set, the tests that are deemed necessary to validate modified software.
There are three main groups of test selection approaches in use:
Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun.
Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.
Safe approaches attempt instead to select every test that would cause the modified program to produce different output than the original program.
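As a toy illustration of the coverage-based selection idea (the class, component names and test names below are hypothetical, not part of any real tool), one can keep a map from program components to the tests that exercise them, and select every test that touches a changed component:

import java.util.*;

// Toy sketch of coverage-based regression test selection.
// All names are hypothetical.
public class RegressionSelector
{
    // Which tests exercise which component (built from coverage data).
    private final Map<String, Set<String>> testsByComponent = new HashMap<>();

    public void record(String component, String test)
    {
        testsByComponent.computeIfAbsent(component, k -> new HashSet<>()).add(test);
    }

    // Select every test that exercises at least one changed component.
    public Set<String> select(Collection<String> changedComponents)
    {
        Set<String> selected = new TreeSet<>();
        for (String c : changedComponents)
        {
            selected.addAll(testsByComponent.getOrDefault(c, Collections.emptySet()));
        }
        return selected;
    }

    public static void main(String[] args)
    {
        RegressionSelector s = new RegressionSelector();
        s.record("login", "TC_01");
        s.record("login", "TC_02");
        s.record("reports", "TC_03");
        // Only the login module changed, so only its tests are selected.
        System.out.println(s.select(List.of("login"))); // prints [TC_01, TC_02]
    }
}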
9.1. Factors favouring automation of regression testing:
i. Ensure consistency
ii. Speed up testing to accelerate releases
iii. Allow testing to happen more frequently
iv. Reduce costs of testing by reducing manual labour
v. Improve the reliability of testing
vi. Define the testing process and reduce dependence on the few who know it
9.2. Tools used in Regression testing:
i. WinRunner from Mercury
ii. e-Tester from Empirix
iii. WebFT from RadView
iv. SilkTest from Segue
v. Rational Robot from Rational
vi. QARun from Compuware
8. Review
8.1. Definition:
Review is a process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.
8.2. Types of Reviews:
There are three general classes of reviews:
Informal / peer reviews
Semiformal / walk-through
Formal / inspections.
8.2.1. Walkthrough:
“A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review. “
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required. Walkthroughs are led by the author of the document and are educational in nature, so communication is predominantly one-way. Typically they entail dry runs of designs, code and scenarios/test cases.
8.2.2. Inspection:
A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality.
Led by trained moderator (not author), has defined roles, and includes metrics and formal process based on rules and checklists with entry and exit criteria.
8.2.3. Informal Review:
Unplanned and Undocumented
Useful, Cheap and widely used
In contrast with walkthroughs, communication is very much two-way in nature
8.2.4. Technical Review:
Technical reviews are also known as peer reviews, as it is vital that participants are drawn from the 'peer group' rather than including managers.
i. Documented
ii. Defined fault detection process
iii. Includes peers and technical experts
iv. No management participant
8.3. Comparison of review types:
Activities in review: planning, overview meeting, review meeting and follow-up.
Deliverables in review: product changes, source document changes and improvements.
Factors for pitfall of review: lack of training, documentation and management support.
8.4. Review of the Requirements / Planning and Preparing Acceptance Test:
The test activities must start at the beginning of the project. These first activities are:
i. Fixing the test strategy and test concept
ii. Risk analysis
iii. Determining criticality
iv. Expense of testing
v. Test intensity
vi. Drawing up the test plan
vii. Organizing the test team
viii. Training the test team, if necessary
ix. Establishing monitoring and reporting
x. Providing required hardware resources (PC, database, …)
xi. Providing required software resources (software version, test tools, …)
These activities lay the foundations for a manageable and high-quality test process. A test strategy is determined after a risk evaluation, a cost estimate and test plan are developed, and progress monitoring and reporting are established. During the development process all plans must be updated and completed, and all decisions must be checked for validity. In a mature development process, reviews and inspections are carried out throughout the whole process. The review of the requirements document answers questions like: Are all customers' requirements fulfilled? Are the requirements complete and consistent? And so on. It is a look back to fix problems before going on in development. But just as important is a look forward. Ask questions like: Are the requirements testable? Are they testable with defensible expenditure? If the answer is no, then there will be problems implementing these requirements. If you have no idea how to test some requirements, then it is likely that you have no idea how to implement them. At this stage of the development process all the knowledge for the acceptance tests is available and to hand, so this is the best place for doing all the planning and preparation for acceptance testing. For example, one can:
i. Establish priorities of the tests depending on criticality
ii. Specify (functional and non-functional) test cases
iii. Specify and, if possible, provide the required infrastructure
At this early stage all of the acceptance test preparation can be finished.
8.5. Review of the Specification / Planning and Preparing System Test:
In the review meeting of the specification documents, ask questions like: Is the specification testable? Is it testable with defensible expenditure? Only such specifications can realistically be implemented and used for the next steps in the development process. There must be a re-work of the specifications if the answers to these questions are no. Here all the knowledge for the system tests is available and to hand. Tasks in planning and preparing for system testing include:
i. Establishing priorities of the tests depending on criticality
ii. Specifying (functional / non-functional) system test cases
iii. Defining and establishing the required infra-structure
As with the acceptance test preparation, all of the system test preparation is finished at this early development stage.
Review of the Architectural / Detailed Design; Planning and Preparing Integration/Unit Tests:
During the review of the architectural design one can look forward and ask questions like: What about the testability of the design? Are the components and interfaces testable? Are they testable with defensible expenditure? If the components are too expensive to test, a re-work of the architectural design has to be done before going further in the development process. Also at this stage all the knowledge for integration testing is available, so all preparation, like specifying control-flow and data-flow integration test cases, can be achieved. The corresponding activities can be carried out in the same way for the review of the detailed design and the unit tests.
8.6. Roles and Responsibilities:
In order to conduct an effective review, everyone has a role to play. More specifically, there are certain roles that must be played, and reviewers cannot switch roles easily.
The basic roles in a review are:
i. The moderator
ii. The recorder
iii. The presenter
iv. Reviewers
8.6.1. Moderator:
The moderator makes sure that the review follows its agenda and stays focused on the topic at hand. The moderator ensures that side-discussions do not derail the review, and that all reviewers participate equally.
8.6.2. Recorder:
The recorder is an often overlooked, but essential part of the review team. Keeping track of what was discussed and documenting actions to be taken is a full-time task. Assigning this task to one of the reviewers essentially keeps them out of the discussion. Worse yet, failing to document what was decided will likely lead to the issue coming up again in the future. Make sure to have a recorder and make sure that this is the only role the person plays.
8.6.3. Presenter:
The presenter is often the author of the artifact under review. The presenter explains the artifact and any background information needed to understand it (although if the artifact is not self-explanatory, it probably needs some work). It’s important that reviews not become “trials”: the focus should be on the artifact, not on the presenter. It is the moderator’s role to make sure that participants (including the presenter) keep this in mind. The presenter is there to kick off the discussion, to answer questions and to offer clarification.
8.6.4. Reviewer:
Reviewers raise issues. It’s important to keep focused on this, and not get drawn into side discussions of how to address the issue. Focus on results, not the means.
7. Types of Testing
7.1. Compliance Testing:
Involves test cases designed to verify that an application meets specific criteria, such as processing four-digit year dates, properly handling special data boundaries and other business requirements.
7.2. Intersystem Testing / Interface Testing:
“Integration testing where the interfaces between system components are tested”
Intersystem testing is designed to verify that the interconnections between applications function correctly.
Applications are frequently interconnected with other systems. The interconnection may be data coming into the system from another application or leaving for another application, frequently in multiple cycles. Intersystem testing involves operating multiple systems in the test. The basic need for an intersystem test arises whenever there is a change in parameters between application systems, or where multiple systems are integrated in cycles.
7.3. Parallel Testing:
The process of comparing test results of processing production data concurrently in both the old and new systems.
Process in which both the old and new modules run at the same time so that performance and outcomes can be compared and corrected prior to deployment; commonly done with modules like Payroll.
Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison.
7.4. Database Testing:
The database component is a critical piece of any data-enabled application. Today’s intricate mix of client-server and Web-enabled database applications is extremely difficult to test productively.
Testing at the data access layer is the point at which your application communicates with the database. Tests at this level are vital to improve not only your overall test strategy, but also your product’s quality.
Database testing includes the validation of database stored procedures, database triggers, database APIs, backup, recovery, security and database conversion.
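As a minimal sketch of a test at the data access layer (the JDBC URL, credentials, table and expected row count below are all hypothetical), one might verify that the database returns the data the test expects:

import java.sql.*;

// Minimal JDBC sketch of a database test; the URL, credentials,
// table name and expected count are hypothetical.
public class CustomerTableTest
{
    public static void main(String[] args) throws SQLException
    {
        String url = "jdbc:oracle:thin:@dbhost:1521:orcl"; // hypothetical database
        try (Connection con = DriverManager.getConnection(url, "testuser", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM customers"))
        {
            rs.next();
            int actual = rs.getInt(1);
            int expected = 125; // known row count in the prepared test data
            System.out.println(actual == expected ? "PASS" : "FAIL: got " + actual);
        }
    }
}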
7.5. Manual support Testing:
Manual support testing involves all functions performed by people in preparing data for, and using data from, the automated system. The objectives of manual support testing are to:
Verify that the manual-support procedures are documented and complete
Determine that manual-support responsibilities have been assigned
Determine that manual-support people are adequately trained
Manual support testing involves first the evaluation of the adequacy of the process and second the execution of the process. The method of testing may differ, but the objective remains the same.
7.6. Ad-hoc Testing:
“Testing carried out using no recognised test case design technique.”
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.
7.7. Configuration Testing:
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
7.8. Pilot Testing:
Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Often is considered a Move-to-Production activity for ERP releases or a beta test for commercial products. Typically involves many users, is conducted over a short period of time and is tightly controlled.
7.9. Automated Testing:
Software testing that utilizes a variety of tools to automate the testing process, where the importance of having a person manually testing is diminished. Automated testing still requires a skilled quality assurance professional, with knowledge of the automation tool and the software being tested, to set up the tests.
7.10. Load Testing:
Load Testing involves stress testing applications under real-world conditions to predict system behaviour and performance, and to identify and isolate problems. Load testing applications can emulate the workload of hundreds or even thousands of users, so that you can predict how an application will work under different user loads and determine the maximum number of concurrent users accessing the site at the same time.
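As a toy sketch of the idea (the target URL and user count are hypothetical; real load tools add ramp-up control, timing statistics and reporting), each thread below emulates one concurrent user:

import java.net.HttpURLConnection;
import java.net.URL;

// Toy load-test sketch: each thread emulates one concurrent user.
public class MiniLoadTest
{
    public static void main(String[] args) throws InterruptedException
    {
        int users = 50; // hypothetical number of concurrent users
        Thread[] threads = new Thread[users];
        for (int i = 0; i < users; i++)
        {
            threads[i] = new Thread(() -> {
                try
                {
                    long start = System.currentTimeMillis();
                    HttpURLConnection con = (HttpURLConnection)
                            new URL("http://example.com/login").openConnection();
                    int code = con.getResponseCode();
                    long elapsed = System.currentTimeMillis() - start;
                    System.out.println("status=" + code + " time=" + elapsed + "ms");
                }
                catch (Exception e)
                {
                    System.out.println("request failed: " + e.getMessage());
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join(); // wait for all simulated users to finish
    }
}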
7.11. Stress and Volume Testing:
“Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.”
“Volume Testing: Testing where the system is subjected to large volumes of data. “
Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.
Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transactions processing systems capturing real time sales or could be database updates and or data retrieval.
7.12. Usability Testing:
“Testing the ease with which users can learn and use a product.”
All aspects of user interfaces are tested:
Display screens
messages
report formats
navigation and selection problems
7.13. Environmental Testing:
These tests check the system’s ability to perform at the installation site.
Requirements might include tolerance for
heat
humidity
chemical presence
portability
electrical or magnetic fields
Disruption of power, etc.
7.14. Active Testing:
In active testing the tester introduces test data and analyzes the results. For example, we fill the tank of a car with one litre of petrol and check its mileage.
7.15. Passive Testing:
Passive testing is monitoring the results of a running system without introducing any special test data. For example, an engine is running and we listen to its sound to assess the noise it produces.
7.16. CLIENT / SERVER TESTING:
This type of testing is usually done for 2-tier applications (usually developed for a LAN). Here we have a front-end and a back-end.
The application launched on the front-end has forms and reports which monitor and manipulate data.
E.g.: applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder etc. The back-end for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL or Quadbase.
7.17. WEB TESTING:
This is done for 3-tier applications (developed for the Internet / intranet / extranet). Here we have a browser, a web server and a DB server.
The applications accessible in the browser would be developed in HTML, DHTML, XML, JavaScript etc. (we can monitor through these applications).
Applications for the web server would be developed in Java, ASP, JSP, VBScript, JavaScript, Perl, Cold Fusion, PHP etc. (all the manipulations are done on the web server with the help of these programs).
The DB server would run Oracle, SQL Server, Sybase, MySQL etc. (all data is stored in the database available on the DB server).
Sunday, November 8, 2009
6. STLC Process
The STLC process is a guideline for testing a particular application.
The STLC is included in the system testing process: in system testing, the test engineers test the developed software by following the STLC phases, beginning with test initiation.
6.1. Test Initiation Phase:
In this phase, project manager category people are involved. They receive the reviewed BRS & SRS documents and prepare a testing document.
a. Test Strategy Document:
This is a company-level document prepared by the project manager category people; it defines the testing approach.
The first stage is the formulation of a test strategy. A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used. A test strategy should ideally be organization-wide, being applicable to all of an organization's software developments.
Developing a test strategy which efficiently meets the needs of an organization is critical to the success of software development within the organization. The application of a test strategy to a software development project should be detailed in the project's software quality plan.
IEEE format for the Test Strategy Document:
i. Scope And Objective:
Scope means the purpose or need for testing the developed project, i.e., what is the need of testing? (Or) why do we require testing for the developed project?
The need of testing in this project is to validate the developed software with respect to the customer specification, i.e., to make the developed software meet the customer expectations.
The objective of testing in this project is to find as many defects as possible while testing the developed software.
ii. Budget (or) Business issues:
This component defines how much budget is allocated for testing in this project.
iii. Roles and responsibilities:
The names of the test engineer jobs in a testing team and their responsibilities. The job names are the various levels of test engineer in a testing team, such as senior test engineer and junior test engineer.
iv. Communication & status reports:
Communication defines the way of communication between the roles in a testing team, and the way the testing team communicates with the others working on this project.
Status reporting means the test engineers report their daily status to the test lead.
v. Test Automation & Tools:
It defines whether automation testing is needed in this project and, if it is, whether the required automation tool is available in our organization.
vi. Change & Configuration Management:
Change means changes or modifications done to the test deliverables.
Configuration management means maintaining all the test deliverables; modifications done to the test deliverables have to be maintained in the organization's database for future reference.
Change & configuration management means the project manager gives the information regarding the changes in the test deliverables and maintains these test deliverables for future reference.
vii. Risk & Assumptions:
The list of analyzed risks and the solutions for the testing team to overcome them while testing the developed software. The risks and their solutions are prepared by the project manager by analyzing the risks.
viii. Training plan:
The number of training sessions that the testing team requires to understand the requirements developed in the project properly.
6.2. Test Plan Phase:
A test plan states what the items to be tested are, at what level they will be tested, in what sequence they are to be tested, and how the test strategy will be applied to the testing of each item, and it describes the test environment.
Test Plan Document will be divided into
· Master Test Plan.
· Detailed Test Plan.
6.2.1. Master Test Plan:
The Master Test Plan is the high-level view of the testing approach.
a. Testing Team Formation:
The Test Manager concentrates on the following factors to form the testing team for the corresponding project.
The Test Manager checks the availability of test engineers (selection is made in a 3:1 ratio).
b. Identifying Tactical Risks:
While writing the test plan, the author concentrates on identifying risks w.r.t. team formation:
Lack of knowledge of Testers in the domain.
Lack of Budget.
Lack of Resources.
Delays in Delivery.
Lack of Test Data.
Lack of Development Process Rigor.
Lack of communication (In between testing team and development team)
After completing team formation and risk identification, the author (Test Manager or Test Lead) starts writing the test plan document.
Here are the steps of writing the test plan.
6.2.2. Detailed test plan:
This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as:
“A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.”
… (ANSI/IEEE Standard 829-1983)
This standard specifies the following test plan outline:
a. Test Plan Identifier:
· A unique identifier
b. Introduction:
· Summary of the items and features to be tested
· Need for and history of each item (optional)
· References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards
· References to lower level test plans
c. Test Items:
Test items and their version
Characteristics of their transmittal media
References to related documents such as requirements specification, design specification, users guide, operations guide, installation guide
References to bug reports related to test items
Items which are specifically not going to be tested (optional)
d. Features to be tested:
All software features and combinations of features to be tested
References to test-design specifications associated with each feature and combination of features
e. Features Not to Be Tested:
All features and significant combinations of features which will not be tested
The reasons these features won’t be tested
f. Approach:
Overall approach to testing
For each major group of features or combinations of features, specify the approach
Specify major activities, techniques, and tools which are to be used to test the groups
Specify a minimum degree of comprehensiveness required
Identify which techniques will be used to judge comprehensiveness
Specify any additional completion criteria
Specify techniques which are to be used to trace requirements
Identify significant constraints on testing, such as test-item availability, testing resource availability, and deadline
g. Item Pass/Fail Criteria:
Specify the criteria to be used to determine whether each test item has passed or failed testing
h. Test Deliverables:
Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
Identify test input and output data
Identify test tools (optional)
i. Testing Tasks:
Identify tasks necessary to prepare for and perform testing
Identify all task interdependencies
Identify any special skills required
j. Environmental Needs:
Specify the level of security required
Identify special test tools needed
Specify necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed
Identify any other testing needs
Identify the source for all needs which are not currently available
k. Responsibilities:
Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
Identify groups responsible for providing the test items identified in the Test Items section
Identify groups responsible for providing the environmental needs identified in the Environmental Needs section
l. Staffing and Training Needs:
Specify staffing needs by skill level
Identify training options for providing necessary skills
m. Schedule:
Specify test milestones
Specify all item transmittal events
Estimate time required to do each testing task
Schedule all testing tasks and test milestones
For each testing resource, specify its periods of use
n. Risks and Contingencies:
Identify the high-risk assumptions of the test plan
Specify contingency plans for each
o. Approvals:
Specify the names and titles of all persons who must approve the plan
Provide space for signatures and dates
6.3. Test Design Phase:
6.3.1. Test Scenario Document:
A set of test cases that ensure that the business process flows are tested from end to end. They may be independent tests or a series of tests that follow each other, each dependent on the output of the previous one.
Test scenario templates:
- S.no
- Module
- Requirements
- Test Scenario.
- Test Case.
6.3.2. Test Case Document:
It is a group of steps that is to be executed to check the functionality of a specific object.
The main objective of writing test case is to validate the test coverage of an application.
Test case Templates:
- Test Case id.
- Test Case Description.
- Step name.
- Step Description.
- Test data (or) Test Input.
- Expected Result.
- Actual Result.
- Status.
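For illustration, a filled-in test case for a hypothetical login screen might look like this (all values are made up):
- Test Case id: TC_Login_01
- Test Case Description: Verify login with valid credentials.
- Step name: Login step 1.
- Step Description: Enter a valid user name and password and click Login.
- Test data: user name = "testuser1", password = "pass@123".
- Expected Result: The home page is displayed.
- Actual Result: The home page is displayed.
- Status: Pass.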
6.4. Execute Test Case Phase:
Executing all the test cases based on the functional specification.
6.5. Test Report Phase:
6.5.1. Defect Report Document:
The document that contains information regarding accepted defects and rejected defects, defects corrected, and the status of each defect.
IEEE format for defect report document:
a. Defect id (or) name:
A unique number or name must be given to the defect by the test engineer for future reference.
b. Defect description (or) introduction:
A brief summary or a brief description of the identified defects.
c. Severity:
It means the seriousness of the defect in terms of functionality.
i. High severity:
The software build is not working correctly due to the occurrence of the defect, and the remaining testing cannot continue until the defect is resolved.
Eg: login.
ii. Medium severity:
The software build is not working correctly due to the occurrence of the defect, but the remaining testing can continue; the defect must still be resolved completely.
iii. Low severity:
The build has a defect, but it may or may not be resolved.
Eg: unwanted options available in the application.
d. Priority:
The importance of resolving the defect, in view of its severity.
(Or)
It is nothing but how fast the bug should be fixed, in terms of severity.
e. Reproducible:
It indicates whether the defect can be reproduced consistently during execution.
Two options are available to mention this
Yes: the defect can be reproduced. Attach the test procedure in this component and send it to the defect tracking team.
No: the defect cannot be reproduced. Attach the test procedure and a snapshot and forward them to the development team.
f. Status: the status will be New.
g. Tested by: the tester's name should be mentioned.
h. Fixed by: the developer's name should be mentioned.
i. Reported on:
The date on which the defect was reported.
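For illustration, a defect report for a hypothetical login defect might be filled in as follows (all values are made up):
a. Defect id: DEF_023
b. Defect description: Login fails for a valid user name that contains a dot.
c. Severity: High (testing cannot continue past the login screen).
d. Priority: High.
e. Reproducible: Yes (test procedure attached).
f. Status: New.
g. Tested by: (tester's name)
h. Fixed by: (developer's name)
i. Reported on: 11-Nov-2009.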
6.5.2. Tools Used:
Tools that are used to track and report defects are,
a. Clear Quest (CQ)
It belongs to the Rational Test Suite and is an effective tool in defect management. CQ functions on a native Access database and maintains a common database of defects. With CQ the entire defect process can be customized. For example, a process can be designed in such a manner that a defect, once raised, needs to be authorized and then fixed before it can attain the status of retesting. Such a systematic defect flow process can be established and the history for it can be maintained. Graphs and reports can be customized and metrics can be derived from the maintained defect repository.
b. Test Director (TD):
Test Director is an automated test management tool developed by Mercury Interactive to help organize and manage all phases of the software testing process, including planning, creating tests, executing tests, and tracking defects. Test Director enables us to manage user access to a project by creating a list of authorized users and assigning each user a password and a user group, so that precise control can be exercised over the kinds of additions and modifications a user can make to the project. Apart from manual test execution, the WinRunner automated test scripts of the project can also be executed directly from Test Director.
Test Director activates WinRunner, runs the tests, and displays the results. Apart from the above, it is used:
To report defects detected in the software.
As a sophisticated system for tracking software defects.
To monitor defects closely from initial detection until resolution.
To analyze our testing process by means of various graphs and reports.
c. Defect Tracker:
Defect Tracker is a tool developed by Maveric Systems Ltd., an independent software testing company in Chennai, for defect management. This tool is used by the testing team to manage, track and report defects effectively.
6.6. Test Closure Phase:
a. Sign Off :
Sign-off criteria: in order to acknowledge the completion of the test process and certify the application, the following have to be completed.
All passes have been completed
All test cases should have been executed
All defects raised during the test execution have either been closed or deferred
b. Authorities:
The following personnel have the authority to sign off the test execution process
Client: The owners of the application under test
Project manager: Maveric Personnel who managed the project
Project Lead: Maveric Personnel who managed the test process
c. Deliverables:
The following are the deliverables to the clients:
i. Test Strategy
ii. High Level Test Conditions or Scenarios and Test Conditions document
iii. Consolidated defect report
iv. Weekly Status report
v. Traceability Matrix
vi. Test Acceptance/Summary Report.
d. Metrics:
i. Defect Metrics:
Analysis on the defect report is done for management and client information. These are categorized as
ii. Defect age:
Defect age is the time duration between the point of introduction of a defect and the point of closure of that defect. This gives a fair idea of the defect set to be included in the smoke test during regression.
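For example, a defect introduced on 1 March and closed on 8 March has a defect age of 7 days; modules whose defects show long ages are good candidates for the regression smoke set.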
iii. Defect Analysis:
The analysis of the defects can be done based on the severity, occurrence and category of the defects. As an example, defect density is a metric which gives the ratio of defects in specific modules to the total defects in the application. Further analysis and derivation of metrics can be done based on the various components of defect management.
Friday, October 23, 2009
5. Testing Techniques
The testing techniques are:
5.1. Black-Box Testing.
5.2. White-Box Testing.
5.3. Grey Box Testing.
5.1. Black-Box Testing:
Black box testing treats the system as a “black box”, so it doesn’t explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal working of the “black box” or application.
The main focus in black box testing is on the functionality of the system as a whole.
Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box method. We need to cover the majority of test cases so that most of the bugs are discovered by black box testing.
a. Tools used for Black Box testing:
Black box testing tools are mainly record and playback tools. They are used for regression testing, to check whether a new build has introduced any bug into previously working functionality. These record and playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript or Perl.
b. Advantages of Black Box Testing
· Tester can be non-technical.
· Used to find contradictions between the actual system and the specifications.
· Test cases can be designed as soon as the functional specifications are complete.
c. Disadvantages of Black Box Testing:
· The test inputs need to be drawn from a large sample space.
· It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
· There is a chance of leaving some paths unidentified during this testing.
5.1.1. Types of Black-box Testing:
i. Boundary Value Analysis:
Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA extends equivalence partitioning: test both sides of each boundary, look at output boundaries for test cases too, and test min, min-1, max, max+1 and typical values.
a. BVA techniques:
1. Number of variables: for n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.
Advantages of Boundary Value Analysis:
1. Robustness testing: Boundary Value Analysis plus values that go beyond the limits (min-1, min, min+1, nominal, max-1, max, max+1)
2. Forces attention to exception handling
b. Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed boundary values.
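As a minimal sketch (the validateAge method and the 18 to 60 range below are hypothetical), BVA for a single variable places tests at and around each boundary:

// BVA sketch for a hypothetical validateAge(int) that accepts 18..60.
// The tests cover min-1, min, min+1, a nominal value, max-1, max and max+1.
public class AgeValidatorBvaTest
{
    static boolean validateAge(int age) // hypothetical method under test
    {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args)
    {
        int[] inputs =       {17,    18,   19,   35,   59,   60,   61};
        boolean[] expected = {false, true, true, true, true, true, false};
        for (int i = 0; i < inputs.length; i++)
        {
            boolean actual = validateAge(inputs[i]);
            System.out.println("age=" + inputs[i]
                    + (actual == expected[i] ? " PASS" : " FAIL"));
        }
    }
}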
ii. Equivalence Class Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
How this partitioning is performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
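Continuing the hypothetical 18 to 60 age field from the BVA sketch above, equivalence partitioning yields one valid class (18 to 60) and two invalid classes (below 18, above 60), so one representative value per class is enough:

// Equivalence partitioning sketch for the same hypothetical 18..60 age field:
// one representative value per class instead of every possible input.
public class AgeValidatorEpTest
{
    static boolean validateAge(int age) // hypothetical method under test
    {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args)
    {
        System.out.println(!validateAge(10) ? "PASS" : "FAIL"); // invalid: below the range
        System.out.println(validateAge(30) ? "PASS" : "FAIL");  // valid: inside the range
        System.out.println(!validateAge(75) ? "PASS" : "FAIL"); // invalid: above the range
    }
}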
iii. Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases that cover the paths where errors are likely.
5.2. White Box Testing:
White box testing (WBT) is also called Structural or Glass box testing.
White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations perform according to the specification and that all internal components have been adequately exercised.
5.2.1. Types of white-box testing:
· Basic path testing
· Control structure testing
· Program technique testing
· Mutation testing
i. Basic Path Testing:
White box testers use this technique to ensure that, during execution, the program covers all the independent paths defined in it, i.e., the program has to be executed according to the number of independent paths defined in it.
To implement this technique, the programmers follow these steps:
Step 1: prepare a program with respect to the design logic.
Step 2: prepare a flowchart for that program.
Step 3: calculate the cyclomatic complexity.
Step 4: run the program as many times as needed to cover all the independent paths.
a. Cyclomatic Complexity:
Cyclomatic complexity is a measure of the number of independent paths in a flow graph.
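For a program flow graph with E edges and N nodes, a standard formula is V(G) = E - N + 2; equivalently, V(G) is the number of binary decision points plus one. For example, a method whose only control structure is a single if-else statement has V(G) = 2, so at least two test runs are needed to cover its independent paths.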
ii. Control Structure Testing:
Validating every input statement and output statement correctness for a control structure.
a. Branch testing:
also called Decision Testing
Definition: "For every decision, each branch needs to be executed at least once."
Shortcoming: it ignores the implicit paths that result from compound conditionals.
It treats a compound conditional as a single statement. (We count each branch taken out of the decision, regardless of which condition led to the branch.)
This example has two branches to be executed:
IF ( a equals b) THEN
statement 1
ELSE
statement 2
END IF
This example also has just two branches to be executed, despite the compound conditional:
IF ( a equals b AND c less than d ) THEN
statement 1
ELSE
statement 2
END IF
This example has three branches to be executed:
IF ( a equals b) THEN
statement 1
ELSE
IF ( c equals d) THEN
statement 2
ELSE
statement 3
END IF
END IF
Obvious decision statements are if, for, while, switch.
Subtle decisions are return (Boolean expression), ternary expressions, and try-catch.
For this course you don't need to write test cases for IOException and OutOfMemory exception.
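As a small Java sketch of the compound-conditional point above (the method and values are made up), two tests are enough to execute both branches, even though they do not exercise every combination of the simple conditions:

// Branch-testing sketch: two tests cover the true and false branches
// of the compound conditional, but not all condition combinations.
public class BranchDemo
{
    static String check(int a, int b, int c, int d)
    {
        if (a == b && c < d)
        {
            return "statement 1";
        }
        else
        {
            return "statement 2";
        }
    }

    public static void main(String[] args)
    {
        System.out.println(check(1, 1, 2, 3)); // true branch: both conditions true
        System.out.println(check(1, 2, 2, 3)); // false branch: a == b is false
        // The combination a == b true, c < d false is left untested by branch coverage.
    }
}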
b. Condition testing:
Validating a simple Boolean “if” condition with respect to its input statements and output statements correctness.
(Or)
Condition testing is a test construction method that focuses on exercising the logical conditions in a program module.
Errors in conditions can be due to:
Boolean operator error
Boolean variable error
Boolean parenthesis error
Relational operator error
Arithmetic expression error
Definition: "For a compound condition C, the true and false branches of C and every simple condition in C need to be executed at least once."Multiple-condition testing requires that all true-false combinations of simple conditions be exercised at least once. Therefore, all statements, branches, and conditions are necessarily covered.
c. Dataflow testing:
Validating the flow of data with respect to a control statement.
(Or)
Selects test paths according to the location of definitions and use of variables. This is a somewhat sophisticated technique and is not practical for extensive use. Its use should be targeted to modules with nested if and loop statements.
d. Loop testing:
Validating the looping control structures for its defined no of iterations.
(Or)
Loops are fundamental to many algorithms and need thorough testing.
There are four different classes of loops: simple, concatenated, nested, and unstructured.
Examples:
Create a set of tests that force the following situations:
Simple loops, where n is the maximum number of allowable passes through the loop:
Skip the loop entirely
Only one pass through the loop
Two passes through the loop
m passes through the loop, where m < n
n-1, n, and n+1 passes through the loop
Nested loops:
Start with the inner loop. Set all other loops to minimum values.
Conduct simple loop testing on the inner loop.
Work outwards.
Continue until all loops are tested.
Concatenated loops:
If the loops are independent, use simple loop testing.
If dependent, treat as nested loops.
Unstructured loops:
Don't test - redesign.
public class loopdemo
{
    private int[] numbers = {5,-3,8,-12,4,1,-20,6,2,10};

    /** Compute the total of the positive numbers among the first numItems in the array
     * @param numItems how many items to total, maximum of 10.
     */
    public int findTotal(int numItems)
    {
        int total = 0;
        // guard: requests for 0 items or more than 10 items yield 0
        if (numItems > 0 && numItems <= 10)
        {
            for (int count = 0; count < numItems; count++)
            {
                // only positive numbers contribute to the total
                if (numbers[count] > 0)
                {
                    total = total + numbers[count];
                }
            }
        }
        return total;
    }
}
public void testOne()
{ loopdemo app = new loopdemo();
assertEquals(0, app.findTotal(0));
assertEquals(5, app.findTotal(1));
assertEquals(5, app.findTotal(2));
assertEquals(17, app.findTotal(5));
assertEquals(26, app.findTotal(9));
assertEquals(36, app.findTotal(10));
assertEquals(0, app.findTotal(11));
}
iii. Program technique Testing:
During this testing the programmers calculate the execution time of a program using monitors. If the execution time is not acceptable, the programmers change the structure of the program without disturbing its functionality, i.e., if a program takes too long to complete its execution, the programmers reduce the internal steps of the program without disturbing its external functionality.
iv. Mutation testing:
Mutation means a change in a program. In mutation testing, the programmers make changes to a tested program to estimate the correctness and completeness of the program's testing, i.e., they make modifications within the program and validate the modified program to check whether the tests detect the modifications.
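As a tiny illustration (the code is made up): if an original check age >= 18 is mutated to age > 18, a test at the boundary value 18 produces different results on the original and the mutant, so that test "kills" the mutant; a suite that kills all such mutants gives more confidence in the completeness of the testing:

// Mutation-testing sketch: the mutant changes >= to >.
// A boundary test input (age = 18) distinguishes the original from the mutant.
public class MutationDemo
{
    static boolean originalIsAdult(int age) { return age >= 18; }
    static boolean mutantIsAdult(int age)   { return age > 18;  }

    public static void main(String[] args)
    {
        int age = 18; // boundary test input
        boolean sameResult = originalIsAdult(age) == mutantIsAdult(age);
        System.out.println(sameResult
                ? "mutant survives: the test suite is too weak"
                : "mutant killed: this test detects the change");
    }
}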
5.2.2. Why we do White Box Testing?
To ensure:
That all independent paths within a module have been exercised at least once.
All logical decisions are verified on their true and false values.
All loops are executed at their boundaries and within their operational bounds, and internal data structures are validated.
5.2.3. Need of White Box Testing?
To discover the following types of bugs:
Logical errors tend to creep into our work when we design and implement functions, conditions or controls that are out of the mainstream of the program
Design errors due to differences between the logical flow of the program and the actual implementation
Typographical and syntax errors
Skills Required:
We need to write test cases that ensure the complete coverage of the program logic. For this we need to know the program well, i.e., we should know the specification and the code to be tested, and have knowledge of programming languages and logic.
5.2.4. Limitations of WBT:
It is not possible to test each and every path of the loops in a program, which means exhaustive testing is impossible for large systems.
This does not mean that WBT is not effective: selecting important logical paths and data structures for testing is practically possible and effective.
5.3. Grey Box Testing:
Grey box testing is a newer term, which evolved due to the different architectural usage of systems. It is a combination of both black box and white box testing. The tester should have knowledge of both the internals and externals of the function.
The tester should have a good knowledge of white box testing and complete knowledge of black box testing.
Grey box testing is especially important for Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces.