Wednesday, November 11, 2009

18. Interview Questions

1. What is Software Testing?
“The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results..."
2. What is the Purpose of Testing?
i. To uncover hidden errors.
ii. To achieve the maximum usability of the system.
iii. To demonstrate the expected performance of the system.
3. What types of testing do testers perform?
Black-box and white-box testing are the basic types of testing that testers perform. Apart from these, they also perform many other tests, such as:
i. Ad-hoc testing
ii. Cookie testing
iii. CET (Customer Experience Test)
iv. Client-Server Test
v. Configuration Tests
vi. Compatibility testing
vii. Conformance Testing
viii. Depth Test
ix. Error Test
x. Event-Driven
xi. Full Test
xii. Negative Test
xiii. Parallel Testing
xiv. Performance Testing
xv. Recovery testing
xvi. Sanity Test
xvii. Security Testing
xviii. Smoke testing
xix. Web Testing
4. What is the Outcome of Testing?
A stable application, performing its task as expected.
5. What is the need for testing?
The primary need is to verify that the functionality satisfies the requirements, and to answer two questions:
A. Is the system doing what it is supposed to do?
B. Is the system refraining from doing what it is not supposed to do?
6. What are the entry criteria for Functionality and Performance testing?
Functional testing:
Functional Spec / BRS (CRS) / User Manual, plus an integrated application that is stable enough for testing.
Performance Testing:
The same baseline documents mentioned above, plus a good, healthy application that can withstand rigorous performance testing.
7. What is test metrics?
After the actual testing is done, an evaluation is performed on the testing outputs to extract information about the health of the application. A software metric is any type of measurement that relates to a software system, process or related documentation.
E.g.: size of code and the number of bugs found against it, or the number of bugs reported per day. Other common metrics (a small computation sketch follows this list):
i. Number of conditions/cases tested per day
ii. Test efficiency
iii. Total number of tests executed
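As a hedged illustration, here is a minimal Python sketch of how such metrics can be derived from raw testing outputs. The field names and the test-efficiency formula are illustrative assumptions, not a standard; adapt them to your own reporting template.

# Minimal sketch: derive simple test metrics from daily testing outputs.
# The formulas below are assumptions for illustration, not a standard.
def test_metrics(bugs_reported, cases_executed, cases_passed, days):
    return {
        "bugs_per_day": bugs_reported / days,
        "cases_per_day": cases_executed / days,
        # Test efficiency taken here as the pass rate of executed cases.
        "test_efficiency_pct": 100.0 * cases_passed / cases_executed,
        "total_tests_executed": cases_executed,
    }

print(test_metrics(bugs_reported=18, cases_executed=240, cases_passed=210, days=5))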
8. Why do you go for White box testing, when Black box testing is available?
The objective of black-box testing is to certify the commercial (business) aspects and the functional (technical) aspects of an application. Loops, structures, arrays, conditions, files, etc. are very micro-level elements, but they are the foundation of any application; white-box testing examines these elements directly and tests them.
9. What are the entry criteria for Automation testing?
The application should be stable.
A clear design and flow of the application is needed.
10. When to start and Stop Testing?
If we follow the Waterfall model, then testing can be started only after coding. If the V model is followed, then testing can start at the design phase itself. Regardless of the model, the following criteria should be considered:
To start:
i. When the test environment is supportive enough for testing.
ii. When the application study gives enough confidence.
To Stop:
i. After full coverage of the scope of testing.
ii. After gaining enough confidence in the health of the application.
11. What is Quality?
“Fitness to use”
“A journey towards excellence”
12. What is Baseline document, Can you say any two?
A baseline document is one from which the tester builds an understanding of the application before actual testing starts. Two examples:
i. Functional Specification
ii. Business Requirement Document
13. What is verification?
A tester uses verification methods to ensure the system complies with organizational standards and processes, relying on reviews or other non-executable methods applied to software, hardware, documentation and personnel.
“Are we building the product right?”

14. What is validation?
Validation physically ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed or evaluated.
“Are we building the right product?”
15. What is quality assurance?
A planned and systematic pattern for all actions necessary to provide adequate
confidence that the item or product conforms to established technical requirements.
16. What is quality control?
Quality Control is defined as a set of activities or techniques whose purpose is to ensure that all quality requirements are being met. In order to achieve this purpose, processes are monitored and performance problems are solved.
17. What are SDLC and TDLC?
SDLC (Software Development Life Cycle) and TDLC (Testing Development Life Cycle) describe, respectively, the flow and process by which software development and testing should be done.
TDLC is an informal concept and is also referred to as TLC.
18. What are the Qualities of a Tester?
i. Should be a perfectionist
ii. Should be tactful and diplomatic
iii. Should be innovative and creative
iv. Should be relentless
v. Should possess negative thinking with good judgment skills
vi. Should possess the attitude to break the system
19. What are the various levels of testing?
i. Unit Testing
ii. Integration testing
iii. System Testing
iv. User Acceptance Testing
20. Name some testing types which you have learnt or experienced.
Mentioning any 5 or 6 types related to the company's profile is good in an interview:
i. Ad-hoc testing
ii. Cookie Testing
iii. CET (Customer Experience Test)
iv. Client-Server Test
v. Configuration Tests
vi. Compatibility testing
vii. Conformance Testing
viii. Depth Test
ix. Error Test
x. Event-Driven
xi. Full Test
xii. Negative Test
xiii. Parallel Testing
xiv. Performance Testing
xv. Recovery testing
xvi. Sanity Test
xvii. Security Testing
xviii. Smoke testing
xix. Web Testing
21. What exactly is Heuristic checklist approach for unit testing?
A heuristic is a method in which the most appropriate of several solutions, found by alternative methods, is selected at successive stages of testing. The checklist prepared to guide this process is called a heuristic checklist.
22. After completing testing, what would you deliver to the client?
The test deliverables, namely:
i. Test plan
ii. Test data
iii. Test design documents (conditions/cases)
iv. Defect reports
v. Test closure documents
vi. Test metrics
23. What is a Test Bed?
The elements that support the testing activity before actual testing starts, such as test data and data guidelines, are collectively called a test bed.
24. What is a Data Guideline?
Data guidelines are used to specify the data required to populate the test bed and prepare test scripts. They include all data parameters required to test the conditions derived from the requirement/specification. In short, the documents that support the preparation of test data are called data guidelines.
25. Why do you go for Test Bed?
When a test condition is executed, its result has to be compared with the expected result. Test data is needed for this comparison, and that is the role of the test bed, where the test data is made ready.
26. What is Severity and Priority and who will decide what?
Severity:

How much the bug found affects the system's function/performance. Usually divided into Emergency, High, Medium, and Low.
Priority:
The order in which bugs should be solved for the benefit of the system's health. Normally it starts with Emergency as first priority and ends with Low as last priority.
27. Can automation testing replace manual testing? If so, how?
Automated testing can never replace manual testing, as these tools follow the GIGO (garbage in, garbage out) principle of computer tools and lack creativity and innovative thinking.
However, automation:
1. Speeds up the process, and follows a clear process which can be reviewed easily.
2. Is better suited for regression testing of manually tested applications, and for performance testing.
28. What is a test case?
A Test Case gives values / qualifiers to the attributes that the test condition can have. Test cases, typically, are dependent on data / standards.
A Test Case is the end state of a test condition, i.e., it cannot be decomposed or
broken down further. Test Case design techniques for Black box Testing.
i. Decision table
ii. Equivalence Partitioning Method
iii. Boundary Value Analysis
iv. Cause Effect Graphing
v. State Transition Testing
vi. Syntax Testing
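As a hedged illustration of two of these techniques, here is a minimal Python sketch of equivalence partitioning and boundary value analysis for a field that accepts ages 18 to 60. The field and its limits are hypothetical, chosen only to demonstrate how the techniques pick test values.

# Minimal sketch: equivalence partitioning and boundary value analysis
# for a hypothetical age field that must accept values 18..60.
LOW, HIGH = 18, 60

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence partitioning: one representative value per partition.
partitions = {"below_range": 10, "in_range": 35, "above_range": 70}

# Boundary value analysis: values at and immediately around each boundary.
boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for name, value in partitions.items():
    print("partition", name, value, "->", is_valid_age(value))
for value in boundaries:
    print("boundary", value, "->", is_valid_age(value))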
29. What is a test condition?
A Test Condition is derived from a requirement or specification. It includes all possible combinations and validations that can be attributed to that requirement/specification.
30. What is the test script?
A test script contains the navigation steps, instructions, data and expected results required to execute the test case(s). Any test script should describe how to navigate through the application, even for a new user.
31. What is the test data?
The values that are given at expected places (fields) in a system to verify its functionality are made ready in a document called test data.
32. What is an Inconsistent bug?
A bug that does not occur in a definable pattern and cannot be reliably caught, even if a process is followed. It may or may not occur when tested with the same scenario.
33. What is the difference between Re-testing and Regression testing?
Re-testing:
To check a particular bug and its dependencies after it is said to be fixed.
Regression testing: To check the effect of added or new functionality on the existing system.
34. What are the different types of testing techniques?
i. White box
ii. Black box
iii. Gray Box
35. What are the different types of test case techniques?
Test case design techniques for black-box testing:
i. Decision table
ii. Equivalence Partitioning Method
iii. Boundary Value Analysis
iv. Cause-Effect Graphing
v. State Transition Testing
vi. Syntax Testing
36. What are the risks involved in testing?
i. Resource Risk (A. Human Resource B. Hardware resource C. Software resource)
ii. Technical risk
iii. Commercial Risk
37. Differentiate Test bed and Test Environment?
A test bed holds only the testing documents which support testing, including test data, data guidelines, etc.
A test environment includes all supportive elements, namely hardware, software, tools, browsers, servers, etc.
38. What is the difference between defect, error, bug, failure, fault?
Error:

“An undesirable deviation from requirements.”
Any problem, or the cause of many problems, that stops the system from performing its functionality is referred to as an error.
Bug:
Any missing functionality, or any action performed by the system that is not supposed to be performed, is a bug.
“An error found BEFORE the application goes into production.”
Any of the following may be the reason a bug is born:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality
Defect:
A defect is a variance from the desired attribute of a system or application.
“An error found AFTER the application goes into production.”
Defect will be commonly categorized into two types:
1. Defect from product Specification
2. Variance from customer/user expectation.
Failure:
When any expected action fails to happen, it can be referred to as a failure; in other words, the absence of the expected response to a request.
Fault:
This term is generally used in hardware terminology: a problem that causes the system not to perform its task or objective.
39. What is the difference between quality and testing?
“Quality means giving the user more comfort in using the system with all its expected characteristics.” It is usually described as a journey towards excellence. Testing is an activity done to achieve quality.
40. What is the difference between White & Black Box Testing?
White box: Structural tests verify the structure of the software itself and require complete access to the object's source code. This is known as ‘white box’ testing because you see into the internal workings of the code.
Black Box: Functional tests examine the observable behavior of software as evidenced by its outputs without reference to internal functions. Hence ‘black box’ testing. If the program consistently provides the desired features with acceptable performance, then specific source code features are irrelevant. It's a pragmatic and down-to-earth assessment of software.
41. What is the difference between Quality Assurance and Quality Control?
QA:
The study of the process followed in project development.
QC: The study of the project itself for its function and specification.
42. What is the difference between Testing and debugging?
Testing
is done to find bugs
Debugging is an art of fixing bugs.
Both are done to achieve the quality
43. What is the difference between bug and defect?
Bug:
Any Missing functionality or any action that is performed by the system which is not supposed to be performed is a Bug.
“An error found BEFORE the application goes into production.”
Any of the following may be the reason a bug is born:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality
Defect:
A defect is a variance from the desired attribute of a system or application.
“An error found AFTER the application goes into production.”
Defect will be commonly categorized into two types:
1. Defect from product Specification
2. Variance from customer/user expectation
44. What is the difference between verification and validation?
Verification:

The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
In other words, we can describe verification as
“Are we building the product right?”
A tester uses verification methods to ensure the system complies with organizational standards and processes, relying on reviews or other non-executable methods applied to software, hardware, documentation and personnel.
Validation:
The process of evaluating software at the end of the software development process to ensure compliance with the software requirements. The techniques for validation are testing, inspection and reviewing.
In other words, we can describe validation as
“Are we building the right product?”
Validation physically ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed or evaluated.
45. What is the difference between a Functional Specification and a Business Requirement Specification?
A functional specification is more technical; it holds the properties of a field and its
functionality dependencies, e.g. size, and whether the type of data is numeric or alphabetic.
A Business Requirement Specification is more business-oriented and throws light mainly on needs or requirements.
46. What is the difference between unit testing and integration testing?
Unit Testing:
Testing of a single unit of code, module or program. It is usually done by the developer of the unit. It validates that the software performs as designed. The deliverable of unit testing is a software unit ready for testing with other system components.
Integration Testing: Testing of related programs, modules or units of code. It validates that multiple parts of the system perform as per specification.
The deliverable of integration testing is parts of the system ready for testing with other portions of the system. (A minimal code contrast of the two levels follows.)
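As a hedged illustration, here is a minimal sketch of the two levels using Python's standard unittest module. The functions under test are hypothetical stand-ins for real application code.

# Minimal sketch: a unit test exercises one unit in isolation; an
# integration test exercises the interface between combined units.
import unittest

def compute_tax(amount):           # a single unit of logic
    return round(amount * 0.1, 2)

def invoice_total(amount):         # integrates the unit with other logic
    return amount + compute_tax(amount)

class UnitLevel(unittest.TestCase):
    def test_compute_tax_alone(self):
        self.assertEqual(compute_tax(100), 10.0)

class IntegrationLevel(unittest.TestCase):
    def test_units_work_together(self):
        self.assertEqual(invoice_total(100), 110.0)

if __name__ == "__main__":
    unittest.main()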
47. What is difference between Volume & Stress?
Volume testing increases the volume of data up to the maximum capacity the system can withstand.
Stress testing is the combination of volume and load: not only is the data volume increased, the number of users is increased as well. The objective is to check the extent to which the system can bear the increasing load and volume.
48. What is the difference between Stress & Load Testing?
Stress testing is the combination of volume and load: not only is the data volume increased, the number of users is increased as well. The objective is to check the extent to which the system can bear the increasing load and volume.
Load testing increases the number of users up to the maximum capacity the system can withstand.
49. What is the difference between Client Server & Web Based Testing?
Client-server testing needs a client-server environment: one system to make requests and another to respond to those requests.
Web-based testing normally applies to WWW sites and is done to check their stability and functionality once online.
50. What is the Difference between Code Walkthrough & Code Review?
Both are almost the same, except for one issue: a walkthrough need not be done by people inside the team or by those who have deep knowledge of the system.
A review is highly recommended to be done by people at a higher level in the team, or by those who have good knowledge of the application.
51. What is the difference between walkthrough and inspection?
Walkthrough:

In a walkthrough session, the material being examined is presented by its author and evaluated by a team of reviewers.
A walkthrough is generally carried out without any plan or preparation. The aim of this review is to enhance the process carried out in the production environment.
Inspections:
Design and code inspection was first described by Fagan.
There are three separate inspections performed:
i. Following design, but prior to implementation.
ii. Following implementation, but prior to unit testing.
iii. Following unit testing; this last inspection was not considered to be cost-effective in discovering errors.
52. What is the Difference between SIT & IST?
SIT (System Integration Testing) can be done while the system is in the process of being integrated.
IST (Integrated System Testing) needs an integrated system of unit-level components with independent functionality; it checks the system's workability after integration and compares it with the behavior before integration.
53. What is the Difference between static and dynamic?
· Static testing: testing performed without executing the application and without expecting any response to a specific request; it is done based on structures, algorithms, logic, etc.
· Dynamic testing: testing performed on a system that responds to specific requests. Above all, this testing cannot be done without executing the application.
54. What are the Minimum requirements to start testing?
i. Baseline Documents.
ii. Stable application.
iii. Enough hardware and software support (e.g. browsers, servers, and tools).
iv. Optimal availability of resources.
55. What is Smoke Testing & when it will be done?
A quick-and-dirty test that checks whether the major functions of a piece of software work, without bothering with finer details. It is typically done on each new build, before any detailed testing begins. The term originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. (A small sketch follows.)
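As a hedged illustration, here is a minimal smoke-test sketch for a web build, assuming the third-party requests library is installed; the URLs are hypothetical placeholders for the application's major entry points.

# Minimal smoke test: quick pass/fail on the major pages of a build.
import requests

MAJOR_PAGES = [
    "https://example.com/",       # home page loads (hypothetical URL)
    "https://example.com/login",  # login page loads (hypothetical URL)
]

def smoke_test(urls):
    for url in urls:
        try:
            response = requests.get(url, timeout=10)
            status = "PASS" if response.status_code == 200 else "FAIL"
        except requests.RequestException as exc:
            status = f"FAIL ({exc})"
        print(f"SMOKE {status}: {url}")

if __name__ == "__main__":
    smoke_test(MAJOR_PAGES)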
56. What is Ad hoc testing? When it can be done?
Ad-hoc testing is appropriate, and very often applied, when the tester wants to become familiar with the product, or in an environment where the technical/testing materials are not 100% complete. It is largely based on a general understanding of software product functionality and testing, and on normal human common sense.
It can be performed even when baseline documents are not available.
57. What is cookie testing?
A cookie is a text file normally written by web applications to store your login ID, password validation and details about your session. Cookies are stored on the client machine. Cookie testing mainly verifies whether cookies are being written correctly (one such check is sketched below).
Importance of cookie testing:
To evaluate the performance of a web application.
To assure the health of a WWW application where many cookies are involved.
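As a hedged illustration, here is a minimal sketch of one cookie check, assuming the third-party requests library and a hypothetical login URL. It verifies that the server writes at least one cookie and prints the attributes a cookie test would inspect.

# Minimal sketch: verify a cookie is written and inspect its attributes.
import requests

def check_session_cookie(login_url):
    session = requests.Session()
    session.get(login_url, timeout=10)
    for cookie in session.cookies:
        print(cookie.name, "expires:", cookie.expires, "secure:", cookie.secure)
    assert len(session.cookies) > 0, "no cookie was written by the server"

check_session_cookie("https://example.com/login")  # hypothetical URL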
58. What is security testing?
To test how the system can defend itself against external attacks, and how much it can withstand before attackers break the system and stop it from performing its assigned task.
Many critical software applications and services have integrated security measures against malicious attacks. The purposes of security testing of these systems include identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.
59. What is database testing?
Database testing demonstrates the back-end response to front-end requests: how the back end stores data, retrieves it, and supports the front end when needed.
60. What is the relationship between Quality & Testing?
Quality is a journey towards excellence; Testing is the way of achieving quality.
61. How do you determine what is to be tested?
The scope of testing should be created based on the requirements or needs given by the end user or client; based on these, the testing scope is decided.
62. How do you go about testing a project?
i. System study
ii. Understanding the application
iii. Test environment setup
63. What is the Initial Stage of testing?
Testing starts right from understanding the application and clarifying its ambiguities, and continues through test initiation, which encloses the test process, the preparation of test data and data guidelines, and the test design, which is finally executed.
64. What is Web Based Application Testing?
Web Based testing normally goes with 3W sites testing, done to check its stability and functionality when goes online.
65. What is Client Server Application Testing?
Client server needs a Client server environment that is a system to Request and another to respond to its request.
66. What is the use of Functional Specification?
A functional specification is a baseline document prepared from a technical perspective; it says how the system should behave in an ideal scenario, covering everything from syntax to functionality and dependencies.
E.g.: for password and user ID fields, it specifies how many characters of which type of data the field should accept, and from where the field gets input and to where it gives output.
67. Why do we prepare test condition, test cases, test script (Before Starting Testing)?
These are test design documents used to execute the actual testing; without them, execution of testing is impossible. This execution will ultimately find the bugs to be fixed, so we have to prepare these documents.
68. Is it not waste of time in preparing the test condition, test case & Test Script?
No document prepared in any process is a waste of time. Test design documents in particular play a vital role in test execution and can never be called a waste of time, as proper testing cannot be done without them.
69. How do you go about testing of Web Application?
To approach web application testing, the first attack on the application should be on its performance behavior, as that is very important for a web application, and then on the transfer of data between the web server, front-end server, security server and back-end server.
70. How do you go about testing of Client Server Application?
To approach a client-server environment, we can trace the data transfer back and forth, check compatibility, verify the individual behavior of the client and the server, and then compare the two.
71. What is meant by Static Testing?
The structure of a program, program logic, condition coverage, code coverage, etc. can be tested. It is analysis of a program carried out without executing the program.
72. Can the static testing be done for both Web & Client Server Application?
Yes, it can be done regardless of the type of application, but it depends on the application's individual structure and behavior.
73. In the Static Testing, what all can be tested?
i. Functions,
ii. Conditions
iii. Loops
iv. Arrays
v. Structures
74. Can test condition, test case & test script help you in performing the static testing?
Static testing is done based on functions, conditions, loops, arrays and structures, so these documents are hardly needed; static testing can be done without them.
75. What does dynamic testing mean?
Testing a dynamic application, i.e., a system that responds to user requests, by executing it is called dynamic testing.
76. Is the dynamic testing a functional testing?
Yes. Regardless of whether the testing is static or dynamic, if the application's functionalities are exercised with the aim of satisfying the need, it comes under functional testing.
77. Is the Static testing a functional testing?
Yes. Regardless of whether the testing is static or dynamic, if the application's functionalities are exercised with the aim of satisfying the need, it comes under functional testing.
78. What is the functional testing you perform?
I have done conformance testing, regression testing, workability testing, function validation and field-level validation testing.
79. What is meant by Alpha Testing?
Alpha testing is testing of product or system at developer’s site by the customer.
80. What kind of documents do you need for functional testing?
The functional specification is the ultimate document: it expresses all the functionalities of the application. Other documents, like the user manual and the BRS, are also needed for functional testing.
A gap analysis document will add value in understanding the expected and existing systems.
81. What is meant by Beta Testing?
Beta testing is user acceptance testing done with the objective of meeting all user needs; usually end users or testers are involved in performing it.
E.g.: after completion, a product is given to customers for trial as a beta version, and feedback from users and important suggestions that will add quality are acted upon before release.
82. At what stage the unit testing has to be done?
Unit testing can be done after the coding of individual functionalities is complete.
For example, if an application has 5 functionalities that work together and they have been developed individually, unit testing can be carried out on each of them before their integration.
Who can perform the Unit Testing?
Both developers and testers can perform this unit-level testing.
83. What is the testing that a tester performs at the end of Unit Testing?
Integration testing will be performed after unit testing to ensure that unit tested modules get integrated correctly.
84. What are the things, you prefer & Prepare before starting Testing?
Study the application, understand the application's expected functionalities, and prepare the test plan, the ambiguity/clarification document and the test design documents.
85. What is Integration Testing?
Integration testing exercises several units that have been combined to form a module, subsystem, or system. Integration testing focuses on the interfaces between units, to make sure the units work together. The nature of this phase is certainly 'white box', as we must have certain knowledge of the units to recognize if we have been successful in fusing them together in the module.
86. What is Incremental Integration Testing?
Incremental Integration Testing is an approach in which we integrate the modules from top to bottom (or bottom to top), adding and testing one increment at a time.
87. What is meant by System Testing?
The system test phase begins once modules are integrated enough to perform tests in a whole system environment. System testing can occur in parallel with integration test, especially with the top-down method.
88. What is meant by SIT?
System Integration Testing is done after the completion of unit-level testing; the application is integrated together after its individual components' functionalities are assured.
89. When do you go for Integration Testing?
When all the separate units have been assured to perform well in unit-level testing, the application is recommended for integration; after these units are integrated, integration testing can be performed on the application.
90. Can the System testing be done at any stage?
No. The system as a whole can be tested only when all modules are integrated and all modules work correctly.
System testing should be done before UAT (User Acceptance Testing) and after integration testing.
91. What are stubs & drivers?
Driver programs provide emerging low-level modules with simulated inputs and the necessary resources to function. Drivers are important for bottom-up testing, where you have a complete low-level module, but nothing to test it with.
Stubs simulate sub-programs or modules while testing higher-level routines.
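As a hedged illustration, here is a minimal Python sketch of both ideas; the function names are hypothetical. The stub stands in for an unwritten lower-level service, while the driver exercises a finished low-level module that has no real caller yet.

# Stub: simulates a called sub-module while testing a higher-level routine.
def get_exchange_rate_stub(currency):
    return 1.25  # canned answer instead of the real (unwritten) service

def convert(amount, currency, rate_lookup=get_exchange_rate_stub):
    # Higher-level routine under test; the stub satisfies its dependency.
    return amount * rate_lookup(currency)

# Driver: feeds simulated inputs to a complete low-level module
# (bottom-up testing) when no real caller exists yet.
def driver():
    for amount in (0, 10, 99.5):
        print(amount, "->", convert(amount, "EUR"))

driver()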
92. What are the concepts of top-down and bottom-up in integration testing?
There are two approaches to integration testing: if the functionality sequence is mapped and tracked from top to bottom, it is called the top-down method; the reverse is the bottom-up model.
93. What is the final Stage of Integration Testing?
All the individual units are integrated together and perform a task, as a system or as part of the system, as they are expected to.
94. Where in the SDLC, the Testing Starts?
It depends upon the software model we follow. If we use the Waterfall model, then testing comes into the picture only after coding is done. If we follow the V model, then testing can be started at the design phase itself. UAT test cases can be written from the URS/BRS, and system test cases can be written from the SRS.
95. What is the Outcome of Integration Testing?
At the completion of integration testing, all the unit-level functionalities or sub-modules are integrated together, and the result should finally work as a whole system, as expected.
96. What is meant by GUI Testing?
Testing the front-end user interfaces of applications that use GUI support systems and standards, such as MS Windows.
97. What is meant by Back-End Testing?
Database testing, also called back-end testing, checks whether database elements are accessed by the front end whenever required, as desired.
98. What are the features, you take care in Prototype testing?
Prototype testing is carrying out testing by the same method repeatedly to understand the system's behavior; here, full coverage of functionality should be ensured while following the same process each time.
99. What is Mutation testing & when can it be done?
Mutation testing is a powerful fault-based testing technique for unit-level testing. Since it is fault-based, it is aimed at exercising and uncovering specific kinds of faults, namely simple syntactic changes to a program. Mutation testing is based on two assumptions: the competent programmer hypothesis and the coupling effect. The competent programmer hypothesis assumes that competent programmers tend to write nearly "correct" programs. The coupling effect states that a set of test data that can uncover all simple faults in a program is also capable of detecting more complex faults. Mutation testing injects faults into code to determine optimal test inputs.
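As a hedged illustration of the idea, here is a tiny hand-rolled mutant; real mutation tools automate the fault injection, and the functions below are purely illustrative.

# Minimal sketch: inject one simple syntactic fault (a mutant) and see
# whether the existing test data "kills" it (detects the difference).
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b  # simple syntactic change: '+' mutated to '-'

test_inputs = [(2, 3), (0, 0), (-1, 1)]

killed = any(original(a, b) != mutant(a, b) for a, b in test_inputs)
print("mutant killed" if killed else "mutant survived: tests are too weak")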
100. What is Compatibility Testing?
Testing to ensure compatibility of an application with different browsers, Operating Systems, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
101. What is Usability Testing?
Usability testing is a core skill because it is the principal means of finding out whether a system meets its intended purpose. All other skills that we deploy or cultivate aim to make usability (and, ultimately, use) successful.
It is the process of testing the effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in the application. Synonymous with "ease of use".
102. What is the Importance of testing?
Software testing is oriented to detecting defects, and is often equated with finding bugs. Testing is mainly done to make things go wrong: to determine whether things happen when they shouldn't, or don't happen when they should. Testing also demonstrates that the product performs each intended function, with the aim of showing that the product is free from defects.
103. What is meant by regression Testing?
Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components. Four things can happen when a developer attempts to fix a bug. Three of these things are bad, and one is good:
1. The fix works and nothing else breaks (good).
2. The fix does not work.
3. The fix works, but something that previously worked now breaks.
4. The fix does not work, and something that previously worked breaks as well.
Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing.
104. When we prefer Regression & what are the stages where we go for Regression Testing?
We prefer regression testing to provide confidence that changes are correct and have not affected the flow or functionality of an application that was modified or had bugs fixed in it.
Stages where we go for Regression Testing are: -
Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be re-run to verify that the application works fine.
Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.
Safe approaches instead attempt to select every test that will cause the modified program to produce different output than the original program.
105. What is performance testing?
An important phase of the system test is the often-called load, volume or performance test; stress tests try to determine the failure point of a system under extreme pressure. Stress tests are most useful when systems are being scaled up to larger environments or being implemented for the first time. Web sites, like any other large-scale system that requires multiple accesses and processing, contain vulnerable nodes that should be tested before deployment. Unfortunately, most stress testing can only simulate loads on various points of the system and cannot truly stress the entire network as the users would experience it. Fortunately, once stress and load factors have been successfully overcome, it is only necessary to stress test again if major changes take place.
A drawback of performance testing is that it can easily confirm that the system can handle heavy loads, but cannot so easily determine whether the system is producing the correct information. In other words, processing incorrect transactions at high speed can cause much more damage and liability than simply stopping or slowing the processing of correct transactions.
Performance testing can be applied to understand your application or WWW site's scalability, or to benchmark the performance in an environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use applications. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.
The following three types highly influence Performance of an application.
Load testing, Volume testing, Stress Testing
Stress testing is the combination of both load and volume.
106. What performance testing can be done manually, and what automatically?
This sort of testing is particularly useful for identifying performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.
Manually: load, stress and volume testing are the types that can be done manually.
Automated: load, stress and volume testing are the types usually done automatically, using automation tools and skills.
107. What is Volume, Stress & Load Testing?
Volume testing: testing the application under varying data loads, keeping the number of users constant, and finding the response time and the system's withstanding capability while the data volume is varied until the saturation point is reached.
Load testing: testing the application under a constant load while varying the number of users, thereby finding the response time and the system's withstanding capability as the users are varied until the saturation point is reached.
Stress testing: testing the application under varying loads while simultaneously varying the number of users, thereby finding the response time and the system's withstanding capability as both load and users are varied until the saturation point is reached. (A small load-ramp sketch follows.)
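As a hedged illustration of ramping up virtual users, here is a minimal Python sketch assuming the third-party requests library and a hypothetical target URL. Real load tests use dedicated tools; this only shows the shape of the idea.

# Minimal sketch: increase concurrent virtual users and watch the
# average response time climb toward the saturation point.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # hypothetical target

def one_user(_):
    start = time.time()
    requests.get(URL, timeout=30)
    return time.time() - start

for users in (1, 5, 10, 25):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(one_user, range(users)))
    print(f"{users:3d} users: avg response {sum(times)/len(times):.2f}s")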
108. What is a Bug?
Bug: -
“An error found BEFORE the application goes into production.”
109. What is a Defect?
Defect: -
“An error found AFTER the application goes into production.”
110. What is the defect Life Cycle?
1. The test team reports the defect (defect status: Open).
2. The test lead authorizes the bugs found (status: Open).
3. The development team reviews the defect (status: Open).
4. The development team marks the defect as authorized or unauthorized (status: Open for authorized defects, Rejected for unauthorized ones).
5. The development team fixes or defers the authorized defects (status: Fixed for fixed bugs, Deferred for deferred ones).
6. The fixed bugs are re-tested by the test team: on closure the status is made Closed; if the defect still remains, it is re-raised (reopened); and any new bugs found are sent to the development team with status Open.
The above cycle flows on continuously until all the bugs in the application get fixed. (A sketch of these transitions as a state table follows.)
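As a hedged illustration, the cycle above can be written down as a state-transition table; the representation is only illustrative, and status names vary between defect-tracking tools.

# Minimal sketch: the defect life cycle as allowed state transitions.
ALLOWED = {
    "Open":     {"Rejected", "Fixed", "Deferred"},  # development review
    "Fixed":    {"Closed", "Reopened"},             # retest outcome
    "Reopened": {"Fixed", "Deferred"},
    "Deferred": {"Open"},                           # picked up later
    "Rejected": set(),
    "Closed":   set(),
}

def move(state, new_state):
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "Open"
for nxt in ("Fixed", "Reopened", "Fixed", "Closed"):
    state = move(state, nxt)
    print("defect is now", state)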
111. What is the Priority in fixing the Bugs?
Priority: -
A value given to bugs by both testers and developers (but mostly the development team takes care of this). It mainly deals with what importance the developer should give to each bug, i.e. critical bugs should be solved first, and then the major bugs can be taken care of.
112. Explain the Severity you rate for the bugs found?
i. Emergency
ii. High (Very High & high)
iii. Medium
iv. Low (Very Low & Low)
Testers rate severity based on the nature of the defect found in the application. Severity can be rated as Critical, Major or Minor.
E.g.: when the user is not able to proceed, or the system crashes, so that the tester cannot continue further testing (such bugs are rated Critical).
E.g.: when the user adds a record, then tries to view the same record, and the details displayed in the fields are not the same values the user provided (such bugs are rated Major).
E.g.: mostly field-level validation (FLV) bugs and some functional bugs (related to value display etc.) are rated Minor.
113. Difference between UAT & IST?
UAT: -
1. Done using the BRD.
2. Done with live data.
3. Testing is done in the user's style.
4. Testing is done at the client's place.
5. Testing is done by the real users or some third-party testers.
IST: -
1. Done using the FS.
2. Done with simulated data.
3. Testing is done in a controlled way.
4. Testing is done offsite.
5. Testing is done in the testers' company.
114. What is meant by UAT?
Traditionally, this is where the users ‘get their first crack’ at the software. Unfortunately, by this time, it's usually too late. If the users have not seen prototypes, been involved with the design, and understood the evolution of the system, they are inevitably going to be unhappy with the result. If you can perform every test as user acceptance tests, you have
a much better chance of a successful project
User Acceptance testing is done to achieve the following:-
User-specified requirements have been satisfied.
Functionality behaves as per the supporting documents.
The expected performance has been achieved.
The end user is comfortable using the application.
115. What all are the requirements needed for UAT?
The Business Requirement Document is required for the testers to perform UAT.
The application should be stable (i.e., all the modules should have been tested at least once after integrating the modules).
116. What are the docs required for Performance Testing?
A benchmark is the basic document required for performance testing. It details the response time, transaction time, data transfer time and virtual memory within which the application should work.
117. What is risk analysis?
Risk analysis is a series of steps that helps the software or testing team to understand and manage uncertainty. It is a process of evaluating risks, threats, controls and vulnerabilities.
Threat: something capable of exploiting a vulnerability in the security of a computer system or application.
Vulnerability: a design, implementation, or operations flaw that may be exploited by a threat.
Control: anything that tends to cause the reduction of risk.
118. How to do risk management?
Risk management is done by identifying the risks involved in the project and finding mitigations for the risks found. Risk mitigation is the solution for an identified risk.
119. What are test closure documents?
i. Test Conditions
ii. Test Case
iii. Test Plan
iv. Test Strategy
v. Traceability Matrix
vi. Defect Reports
vii. Test Closure Document
viii. Test Data
(The Above Mentioned Deliverables are based on the deliverables accepted by the Testing Team & mentioned in the Test Strategy)
120. What is Traceability matrix?
Traceability Matrix: -
Throughout the testing life cycle of the project, a traceability matrix is maintained to ensure that the verification and validation of the testing are complete. (A minimal sketch follows.)
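As a hedged illustration, a traceability matrix can be thought of as a mapping from each requirement to the test cases that cover it; the IDs below are hypothetical.

# Minimal sketch: a traceability matrix as requirement -> test cases.
traceability = {
    "REQ-001 login":         ["TC-01", "TC-02"],
    "REQ-002 password rule": ["TC-03"],
    "REQ-003 logout":        [],  # coverage gap, visible at a glance
}

for requirement, cases in traceability.items():
    status = ", ".join(cases) if cases else "NOT COVERED"
    print(f"{requirement}: {status}")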
121. What ways can be followed for defect management?
Reporting the Bugs through the Defect Report (Excel Template)
Any in-house tool built within the company may also be used.
Commonly available tools like TEST DIRECTOR can also be employed

17. Questions

Q1: Why does software have bugs?
Ans:

i. Miscommunication or no communication - in understanding the application requirements.
ii. Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
iii. Programming errors - programmers "can" make mistakes.
iv. Changing requirements - a redesign, rescheduling of engineers, effects on other projects, etc.
v. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
vi. Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
vii. Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented, which results in bugs.
viii. Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.
Q2: What does "finding a bug" consist of?
Ans:
Finding a bug consists of a number of steps that are performed:
i. Searching for and locating a bug
ii. Analyzing the exact circumstances under which the bug occurs
iii. Documenting the bug found
iv. Reporting the bug and if necessary, the error is simulated
v. Testing the fixed code to verify that the bug is really fixed
Q3: What will happen about bugs that are already known?
Ans:
When a program is sent for testing (or a website given) a list of any known bugs should accompany the program. If a bug is found, then the list will be checked to ensure that it is not a duplicate. Any bugs not found on the list will be assumed to be new.
Q4: What's the big deal about 'requirements'?
Ans:
Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without such documentation, there is no clear-cut way to determine whether a software application is performing correctly.
Q5: What can be done if requirements are changing continuously?
Ans:
It's helpful if the application's initial design allows for some adaptability, so that changes made later do not require redoing the application from scratch. To make changes easier for the developers, the code should be well commented and well documented. Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
Q6: When to stop testing?
Ans:
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be performed.
Common factors in deciding when to stop testing are:
i. Deadlines achieved (release deadlines, testing deadlines, etc.)
ii. Test cases completed with certain percentage passed
iii. Test budget depleted
iv. Coverage of code/functionality/requirements reaches a specified point
v. Defect rate falls below a certain level
vi. Beta or Alpha testing period ends
Q7: How does a client/server environment affect testing?
Ans:
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/ stress/ Performance testing may be useful in determining client/server application Limitations and capabilities.
Q8: Does it matter how much the software has been tested already?
Ans:
No. It is up to the tester to decide how much to test the software before it is tested. An initial assessment of the software is made, and it is classified into one of three possible stability levels:
i. Low stability (bugs are expected to be easy to find, indicating that the program has not been tested or has only been very lightly tested)
ii. Normal stability (normal level of bugs, indicating a normal amount of programmer testing)
iii. High stability (bugs are expected to be difficult to find, indicating already well tested)
Q9: How is testing affected by object-oriented designs?
Ans:
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black-box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.
Q10: Will automated testing tools make testing easier?
Ans:
A tool set that allows controlled access to all test assets promotes better communication between all the team members, and will ultimately break down the walls that have traditionally existed between various groups.
Automated testing tools are only one part of a unique solution to achieving customer success. The complete solution is based on providing the user with principles, tools, and services needed to efficiently develop software.
Q11: Why outsource testing?
Ans:
Skill and expertise - developing and maintaining a team that has the expertise to thoroughly test complex and large applications is expensive and effort-intensive. Testing a software application now involves a variety of skills.
i. Focus - Using a dedicated and expert test team frees the development team to focus on sharpening their core skills in design and development, in their domain areas.
ii. Independent assessment - Independent test team looks afresh at each test project while bringing with them the experience of earlier test assignments, for different clients, on multiple platforms and across different domain areas.
iii. Save time - Testing can go in parallel with the software development life cycle to minimize the time needed to develop the software.
iv. Reduce Cost - Outsourcing testing offers the flexibility of having a large test team, only when needed. This reduces the carrying costs and at the same time reduces the ramp up time and costs associated with hiring and training temporary personnel.
Q12: What steps are needed to develop and run software tests?
Ans:
The following are some of the steps needed to develop and run software tests:
i. Obtain requirements, functional design, and internal design specifications and other necessary documents
ii. Obtain budget and schedule requirements
iii. Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
iv. Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
v. Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
vi. Determine test environment requirements (hardware, software, communications, etc.)
vii. Determine test-ware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
viii. Determine test input data requirements; identify tasks, those responsible for tasks, and labor requirements
ix. Set schedule estimates, timelines, milestones
x. Determine input equivalence classes, boundary value analyses, error classes; prepare test plan document and have needed reviews/approvals
xi. Write test cases
xii. Have needed reviews/inspections/approvals of test cases
xiii. Prepare test environment and test ware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
xiv. Obtain and install software releases
xv. Perform tests
xvi. Evaluate and report results
xvii. Track problems/bugs and fixes
xviii. Retest as needed
xix. Maintain and update test plans, test cases, test environment, and test ware through life cycle
Q13: What is a Test Strategy and Test Plan?
Ans:
A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used. A test strategy should ideally be organization-wide, being applicable to all of an organization's software developments. Developing a test strategy which efficiently meets the needs of an organization is critical to the success of software development within the organization. The application of a test strategy to a software development project should be detailed in the project's software quality plan.
The next stage of test design, which is the first stage within a software development project, is the development of a test plan. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment.
A test plan may be project wide, or may in fact be a hierarchy of plans relating to the various levels of specification and testing:
i. An Acceptance Test Plan, describing the plan for acceptance testing of the software. This would usually be published as a separate document, but might be published with the system test plan as a single document.
ii. A System Test Plan, describing the plan for system integration and testing. This would also usually be published as a separate document, but might be published with the acceptance test plan.
iii. A Software Integration Test Plan, describing the plan for integration of tested software components. This may form part of the Architectural Design Specification.
iv. Unit Test Plan(s), describing the plans for testing of individual units of software. These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfils the requirements or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this means the Requirements Specification.

16. 10 Tips

16. 10 Tips you should read before automating your testing work:
I was getting too many questions on when and how to automate the testing process. Instead of answering them individually, I thought it would be better to have some discussion here. I will put down my thoughts about when to automate, how to automate, and whether we should automate our testing work at all. I know some of our readers are smarter than me, so it is always a good idea to start a meaningful discussion on such a vast topic to get in-depth ideas and thoughts from experts from different areas, drawing on their experience in automation testing.

Why Automation testing?
1) You have some new releases and bug fixes in a working module. How will you ensure that the new bug fixes have not introduced any new bug in previously working functionality? You need to test the previous functionality as well. Will you manually test all the module functionality every time you have a bug fix or new functionality added? You might do it manually, but then you are not testing effectively: effective in terms of company cost, resources, time, etc. Here comes the need for automation.
- So automate your testing procedure when you have a lot of regression work.

2) You are testing a web application where there might be thousands of users interacting with your application simultaneously. How will you test such a web application? How will you create that many users manually and simultaneously? A very difficult task if done manually.
- Automate your load testing work, creating virtual users to check the load capacity of your application.

3) You are testing an application where code changes frequently. The GUI stays almost the same, but functional changes are frequent, so testing rework is high.
- Automate your testing work when your GUI is almost frozen but you have a lot of frequent functional changes.
What are the risks associated with automation testing?
There are some distinct situations where you can think of automating your testing work. I have covered some of the risks of automation testing here. If you have taken the decision to automate, or are going to take it soon, then think about the following scenarios first.
1) Do you have skilled resources?
For automation you need people with some programming knowledge. Think of your resources. Do they have sufficient programming knowledge for automation testing? If not, do they have the technical capability or programming background to adapt easily to new technologies? Are you going to invest money to build a good automation team? Only if your answer is yes should you think of automating your work.
2) Initial cost for Automation is very high:
I agree that manual testing has a high cost associated with hiring skilled manual testers. But if you are thinking automation will be the solution for you, think twice. The initial setup cost of automation is very high: the costs associated with purchasing the automation tool, training, and maintaining test scripts all add up. There are many unsatisfied customers regretting their decision to automate their work. If you are spending too much and getting merely some good-looking testing tools and some basic automation scripts, then what is the use of automation?
3) Do not think of automating your UI if it is not fixed:
Beware before automating the user interface. If the user interface is changing extensively, the cost associated with script maintenance will be very high. Basic UI automation is sufficient in such cases.
4) Is your application stable enough to automate further testing work?

It would be a bad idea to automate testing work early in the development cycle (unless it is an agile environment). Script maintenance cost will be very high in such cases.
5) Are you thinking of 100% automation?
Please stop dreaming. You cannot automate 100% of your testing work. Certainly there are areas like performance testing, regression testing and load/stress testing where you have a chance of reaching close to 100% automation. But in areas like user interface, documentation, installation, compatibility and recovery, testing must be done manually.
6) Do not automate tests that run once:
Identify application areas and test cases that might be run only once and are not included in regression. Avoid automating such modules or test cases.
7) Will your automation suite have a long lifetime?
Every automation script suite should have a long enough lifetime that its building cost is definitely less than the cost of manual execution. It is a bit difficult to analyze the effective cost of each automation script suite. As a rough guide, your automation suite should be used or run at least 15 to 20 times on separate builds (a general assumption; it depends on the specific application's complexity) to give good ROI.
Here is the conclusion:
Automation testing is the best way to accomplish most testing goals and make effective use of resources and time. But you should be cautious before choosing an automation tool, and be sure to have skilled staff before deciding to automate your testing work; otherwise your tool will remain on the shelf, giving you no ROI. Handing expensive automation tools to unskilled staff will lead to frustration. Before purchasing an automation tool, make sure it is the best fit for your requirements. You cannot have a tool that matches your requirements 100%, so find out the limitations of the tool that best matches your requirements, and then use manual testing techniques to overcome those tool limitations. An open source tool is also a good option to start with automation. To know more on choosing automation tools, read my previous posts here and here.
Instead of relying 100% on either manual or automation, use the best combination of manual and automation testing. This is the best solution (I think) for every project. An automation suite will not find all the bugs and cannot be a replacement for real testers. Ad-hoc testing is also necessary in many cases.

15. CMM Levels

15. CMM Levels:

SEI, the Software Engineering Institute at Carnegie Mellon University, was initiated by the U.S. Defense Department to help improve the software development process.
SEI invented the Capability Maturity Model (CMM). Nowadays CMM is called CMMI (Capability Maturity Model Integration).
Software organizations can receive CMMI ratings by undergoing assessments by qualified auditors.
A level is given to a software organization depending upon the quality of the software the organization produces.
Level 1:
In a Level 1 company, the individuals must put in extra effort to complete a project successfully. The success may not be repeatable, and the project may or may not be completed.
Level 2:
It consists of software project tracking, requirements management, realistic planning and configuration management processes. The successful practices can be repeated.
Level 3:
It consists of standard software development and maintenance processes, integrated throughout the organization.
Level 4:
At this level, measurements are used to track productivity, processes and products. In a Level 4 company the quality is high.
Level 5:
At this level the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.
Here success and quality are high, and both testing and development are fully satisfied.

14. Test Strategy

14. Test Strategy:
Test strategies are of two types:
Static testing
Dynamic testing

14.1. Static testing:
The source code is checked before it is executed.
14.2. Dynamic testing:
The source code is executed and checked; the actual results are then compared with the expected results.
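
As a rough illustration of the difference, the sketch below checks a tiny function both ways; the add() function and its expected value are invented for the example.

    # A rough illustration; the add() function and its expected value are
    # invented for this example.
    import ast

    source = "def add(a, b):\n    return a + b\n"

    # Static testing: examine the source without executing it, e.g. parse it
    # and confirm it defines the expected function.
    tree = ast.parse(source)
    assert any(isinstance(node, ast.FunctionDef) and node.name == "add"
               for node in tree.body)

    # Dynamic testing: execute the code and compare actual with expected results.
    namespace = {}
    exec(source, namespace)
    actual, expected = namespace["add"](2, 3), 5
    assert actual == expected, f"expected {expected}, got {actual}"
    print("static and dynamic checks passed")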

13. Bug Life Cycle

13. Bug Life Cycle (or) Defect life cycle:

Introduction:
A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application Under Test (AUT).

Bug Life Cycle:
In the software development process, a bug has a life cycle. The bug must go through the life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states in its life cycle.

The different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed



Description of Various Stages:
1. New:

When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open:

After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to “OPEN”.

3. Assign:

Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now “ASSIGN”.

4. Test:

Once the developer fixes the bug, he assigns it to the testing team for the next round of testing. Before releasing the build with the bug fixed, he changes the state of the bug to “TEST”. This indicates that the bug has been fixed and released to the testing team.

5. Deferred:

A bug changed to the deferred state is expected to be fixed in a later release. There are many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time for the release, or the bug may not have a major effect on the software.

6. Rejected:

If the developer feels the bug is not genuine, he rejects it. The state of the bug is then changed to “REJECTED”.

7. Duplicate:

If the bug is reported twice, or two bugs describe the same issue, the status of one of them is changed to “DUPLICATE”.

8. Verified:
Once the bug is fixed and its status is changed to “TEST”, the tester retests it. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened:
If the bug still exists after the developer has fixed it, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

10. Closed:
Once the bug is fixed, it is tested by the tester. If the tester feels the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
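
Since the life cycle is essentially a state machine, it can be sketched directly in code. The transition map below is illustrative only; the exact transitions allowed vary from tracker to tracker.

    # The life cycle above sketched as a state machine. The transition map is
    # illustrative only; allowed transitions vary from tracker to tracker.
    VALID_TRANSITIONS = {
        "NEW":       {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
        "OPEN":      {"ASSIGN"},
        "ASSIGN":    {"TEST", "DEFERRED", "REJECTED"},
        "TEST":      {"VERIFIED", "REOPENED"},
        "VERIFIED":  {"CLOSED"},
        "REOPENED":  {"ASSIGN"},
        "DEFERRED":  {"OPEN"},
        "DUPLICATE": set(),  # terminal
        "REJECTED":  set(),  # terminal
        "CLOSED":    set(),  # terminal
    }

    def move(current, target):
        """Change a bug's state, enforcing the life cycle."""
        if target not in VALID_TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current} -> {target}")
        return target

    state = "NEW"
    for step in ("OPEN", "ASSIGN", "TEST", "REOPENED",
                 "ASSIGN", "TEST", "VERIFIED", "CLOSED"):
        state = move(state, step)
    print("bug reached", state)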

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.

12. Types of Risks & Solutions

12. Types of risks & its Solutions:
The types of risks a tester will face while testing applications are:
Lack of time (Buddy Testing).
Lack of documentation (Exploratory Testing).
Lack of domain knowledge (Pair Testing).

12.1. Buddy Testing (Due to lack of time):
Buddy means a group of programmers and testers. In this kind of testing, the coding and testing go on in parallel due to lack of time for testing; as the developers write the programs to build the functionality, the testers test them in parallel.
(Or) Due to lack of time and lack of test data, test engineers are grouped with developers to conduct tests on the application as early as possible. “Buddy means a group of programmers and testers.”
(Or) Buddy testing is a technique adopted in unplanned testing situations, where a developer and a tester work together to develop and test the application at a rapid pace. Each does his own job, and as they work together the turnaround between raising and fixing bugs is reduced, helping to meet the deadline.

12.2. Exploratory Testing (Due to lack of Documentation):
Due to lack of documentation, the testing team depends on the available documents, discussions within the testing team, internet browsing, previous experience, and contact with the client or customer who specified the requirements, in order to understand the customer requirements.
(Or)
It is a kind of testing conducted when documentation is lacking. Whenever we do not have sufficient documentation for the testing process, such as for preparing test scenarios and test cases, we go for the following techniques:
1) Getting information from the previous projects
2) Taking suggestions from the seniors and leads or managers

12.3. Pair Testing (Due to lack of Domain knowledge):
Due to lack of domain knowledge, the test lead defines pairs consisting of a senior test engineer and a junior test engineer, so that they can share knowledge during testing. A pair means a senior test engineer and a junior test engineer. Lack of domain knowledge in a testing team means no training sessions were provided for the test engineers to understand the project under development.

11. Types of Defects

11. Types of defects:
After receiving all the defect reports, the development team analyses these defects for validity; i.e., the development team analyses the defects to identify the type of each defect by going through the explanation provided by the test engineer for the corresponding defect.
If a defect is accepted, the defect tracking team categorizes the defects as:

i. Test Procedure related defects.
ii. Test data (or) Test Input Related defects.
iii. Coding related defects.
iv. Hardware related defects (or) Infrastructure related defects.

11.1. Test Procedure related defects:
The defects related to the test steps in the test procedure, i.e., defects in the testing process.
11.2. Test data related defects:
Test data means the data or input values given by the test engineer to test a requirement. The input values or data for a requirement come from the specification given by the client or customer for that particular requirement.
(Or)
Test data related defects are defects in the data given to test a requirement.
11.3. Coding related defects:
The defects occurred in the programming logic.
11.4. Hardware related defects:
The defects that occur in the hardware configuration.
Hardware such as scanners, printers, RAM, ROM, etc.

10. Roles & Responsibilities

10. Roles & Responsibilities:

10.1. Test Associate:

Reporting To:
Team Lead of a project
Responsibilities:
i. Design and develop test conditions and cases, with associated test data, based upon requirements
ii. Design test scripts
iii. Execute the testware (conditions, cases, test scripts, etc.) with the generated test data
iv. Review testware, record defects, retest and close defects
v. Prepare reports on test progress

10.2. Test Engineer:

Reporting To:
Team Lead of a project
Responsibilities:
i. Design and develop test conditions and cases, with associated test data, based upon requirements
ii. Design test scripts
iii. Execute the testware (conditions, cases, test scripts, etc.) with the generated test data
iv. Review testware, record defects, retest and close defects
v. Prepare reports on test progress


10.3. Senior Test Engineer:

Reporting To:
Team Lead of a project
Responsibilities:
i. Responsible for collecting requirements from the users, evaluating them and sending them out for team discussion
ii. Preparation of the high-level design document, incorporating the feedback received on it, and initiating the low-level design document
iii. Assisting in the preparation of the test strategy document and drawing up the test plan
iv. Preparation of business scenarios; supervision of test case preparation based on the business scenarios
v. Maintaining the run details of the test execution; review of test conditions/cases and test scripts
vi. Defect management
vii. Preparation of test deliverable documents and defect metrics analysis report

10.4. Test Lead:

Reporting To:
Test Manager
Responsibilities:
i. Technical leadership of the test project including test approach and tools to be used
ii. Preparation of test strategy
iii. Ensure entrance criteria prior to test start-off
iv. Ensure exit criteria prior to completion sign-off
v. Test planning including automation decisions
vi. Review of design documents (test cases, conditions, scripts)
vii. Preparation of test scenarios and configuration management and quality plan
viii. Manage test cycles
ix. Assist in recruitment
x. Supervise test team
xi. Resolve team queries/problems
xii. Report and follow up on test system outages/problems
xiii. Client interface
xiv. Project progress reporting
xv. Defect Management
xvi. Staying current on latest test approaches and tools, and transferring this knowledge to test team
xvii. Ensure test project documentation

10.5. Test Manager:

Reporting To:
Management
Responsibilities:
i. Liaison for interdepartmental interactions: Representative of the testing team
ii. Client interaction
iii. Recruiting, staff supervision, and staff training.
iv. Test budgeting and scheduling, including test-effort estimations.
v. Test planning including development of testing goals and strategy.
vi. Test tool selection and introduction.
vii. Coordinating pre- and post-test meetings.
viii. Test program oversight and progress tracking.
ix. Use of metrics to support continual test process improvement.
x. Test process definition, training and continual improvement.
xi. Test environment and test product configuration management.
xii. Nomination of training
xiii. Cohesive integration of test and development activities.
xiv. Mail Training Process for training needs, if required
xv. Review of the proposal

9. Regression Testing & ReTesting.

9. Regression Testing and Re-testing:
“Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.”

“Regression Testing is the process of testing the changes to computer programs to make sure that the older programs still work with the new changes.”
“When making improvements on software, retesting previously tested functions to make sure adding new features has not introduced new problems.”

Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components. Four things can happen when a developer attempts to fix a bug. Three of these things are bad, and one is good:

i. The fix is correct and breaks nothing else (the good outcome).
ii. The fix fails to correct the bug.
iii. The fix corrects the bug but breaks something that previously worked.
iv. The fix fails to correct the bug and also breaks something else.

Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing. A regression test selection technique chooses, from an existing test set, the tests that are deemed necessary to validate modified software.
There are three main groups of test selection approaches in use:
Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun.
Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.
Safe approaches instead attempt to select every test that will cause the modified program to produce different output than the original program.
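
As a toy sketch of the coverage approach, the snippet below reruns every test that exercises a changed component; the test and component names are invented for the example.

    # A toy sketch of the coverage approach: rerun every test whose covered
    # components intersect the change set. Test and component names are invented.
    TEST_COVERAGE = {
        "test_login":        {"auth", "session"},
        "test_checkout":     {"cart", "payment"},
        "test_add_to_cart":  {"cart"},
        "test_profile_edit": {"auth", "profile"},
    }

    def select_regression_tests(changed_components, coverage=TEST_COVERAGE):
        changed = set(changed_components)
        return sorted(test for test, comps in coverage.items() if comps & changed)

    # Example: the cart and payment modules were modified in this build.
    print(select_regression_tests({"cart", "payment"}))
    # -> ['test_add_to_cart', 'test_checkout']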

9.1. Factors favouring automation of regression testing:
i. Ensure consistency
ii. Speed up testing to accelerate releases
iii. Allow testing to happen more frequently
iv. Reduce costs of testing by reducing manual labour
v. Improve the reliability of testing
vi. Define the testing process and reduce dependence on the few who know it

9.2. Tools used in Regression testing:
i. WinRunner from Mercury
ii. e-Tester from Empirix
iii. WebFT from RadView
iv. SilkTest from Segue
v. Rational Robot from Rational
vi. QARun from Compuware

8. Review

8. Review :

8.1. Definition:
A review is a process or meeting during which a work product, or a set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

8.2. Types of Reviews:
There are three general classes of reviews:
Informal / peer reviews
Semiformal / walk-through
Formal / inspections.

8.2.1. Walkthrough:

“A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review. “

A walkthrough is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required. Walkthroughs are led by the author of the document and are educational in nature; communication is therefore predominantly one-way. Typically they entail dry runs of designs, code and scenarios/test cases.

8.2.2. Inspection:
A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).

An inspection is more formalized than a walkthrough, typically with 3-8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what is missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality.

An inspection is led by a trained moderator (not the author), has defined roles, and includes metrics and a formal process based on rules and checklists with entry and exit criteria.

8.2.3. Informal Review:

Unplanned and undocumented
Useful, cheap and widely used
In contrast with walkthroughs, communication is very much two-way in nature

8.2.4. Technical Review:

Technical reviews are also known as peer reviews, as it is vital that participants are drawn from the 'peer group' rather than including managers.
i. Documented
ii. Defined fault detection process
iii. Includes peers and technical experts
iv. No management participant


8.3. Comparison of review types:

8.4. Activities performed during review:

Activities in a review: planning, overview meeting, review meeting and follow-up.
Deliverables of a review: product changes, source document changes and improvements.
Factors for the pitfall of a review: lack of training, documentation and management support.
Review of the Requirements / Planning and Preparing Acceptance Test
At the beginning of the project the test activities must start. These first activities are:
i. Fixing of test strategy and test concept
ii. Risk analysis
iii. Determining criticality
iv. Estimating the expense of testing
v. Determining the test intensity
vi. Draw up the test plan
vii. Organize the test team
viii. Training of the test team - If necessary
ix. Establish monitoring and reporting
x. Provide required hardware resources (PC, data base, …)
xi. Provide required software resources (software version, test tools, …)
The activities include the foundations for a manageable and high-quality test process. A test strategy is determined after a risk evaluation, a cost estimate and test plan are developed, and progress monitoring and reporting are established. During the development process all plans must be updated and completed, and all decisions must be checked for validity. In a mature development process, reviews and inspections are carried out throughout the whole process. The review of the requirements document answers questions like: Are all customer requirements fulfilled? Are the requirements complete and consistent? And so on. It is a look back to fix problems before going on with development. But just as important is a look forward. Ask questions like: Are the requirements testable? Are they testable with defensible expenditure? If the answer is no, then there will be problems implementing these requirements. If you have no idea how to test some requirements, it is likely that you have no idea how to implement them. At this stage of the development process all the knowledge for the acceptance tests is available and to hand, so this is the best place to do all the planning and preparation for acceptance testing.
For example, one can:
Establish priorities of the tests depending on criticality
Specify (functional and non-functional) test cases
Specify and, if possible, provide the required infrastructure
At this early stage, all of the acceptance test preparation can be completed.

8.5. Review of the Specification / Planning and Preparing System Test:
In the review meeting of the specification documents, ask questions like: Is the specification testable? Is it testable with defensible expenditure? Only such specifications can realistically be implemented and used for the next steps in the development process. There must be a rework of the specifications if the answers to these questions are no. Here all the knowledge for the system tests is available and to hand. Tasks in planning and preparing for system testing include:
i. Establishing priorities of the tests depending on criticality
ii. Specifying (functional / non-functional) system test cases
iii. Defining and establishing the required infra-structure
As with the acceptance test preparation, all of the system test preparation is finished at this early development stage.

Review of the Architectural Design / Detailed Design; Planning and Preparing Integration/Unit Tests:
During the review of the architectural design, one can look forward and ask questions like: What about the testability of the design? Are the components and interfaces testable? Are they testable with defensible expenditure? If the components are too expensive to test, a rework of the architectural design has to be done before going further in the development process. At this stage all the knowledge for integration testing is also available, and all preparation, such as specifying control-flow and data-flow integration test cases, can be done. The corresponding review and test preparation activities can likewise be carried out at the level of detailed design and unit tests.

8.6. Roles and Responsibilities:
In order to conduct an effective review, everyone has a role to play. More specifically, there are certain roles that must be played, and reviewers cannot switch roles easily. The basic roles in a review are:
i. The moderator
ii. The recorder
iii. The presenter
iv. Reviewers

8.6.1. Moderator:
The moderator makes sure that the review follows its agenda and stays focused on the topic at hand. The moderator ensures that side-discussions do not derail the review, and that all reviewers participate equally.

8.6.2. Recorder:
The recorder is an often overlooked, but essential part of the review team. Keeping track of what was discussed and documenting actions to be taken is a full-time task. Assigning this task to one of the reviewers essentially keeps them out of the discussion. Worse yet, failing to document what was decided will likely lead to the issue coming up again in the future. Make sure to have a recorder and make sure that this is the only role the person plays.

8.6.3. Presenter:
The presenter is often the author of the artifact under review. The presenter explains the artifact and any background information needed to understand it (although if the artifact is not self-explanatory, it probably needs some work). It is important that reviews not become “trials”: the focus should be on the artifact, not on the presenter. It is the moderator's role to make sure that participants (including the presenter) keep this in mind. The presenter is there to kick off the discussion, to answer questions and to offer clarification.

8.6.4. Reviewer:
Reviewers raise issues. It’s important to keep focused on this, and not get drawn into side discussions of how to address the issue. Focus on results, not the means.

7. Types of Testing

7. Types of Testing:

7.1. Compliance Testing:
Involves test cases designed to verify that an application meets specific criteria, such as processing four-digit year dates, properly handling special data boundaries and other business requirements.
7.2. Intersystem Testing / Interface Testing:
“Integration testing where the interfaces between system components are tested”
Intersystem testing is designed to check and verify that the interconnections between applications function correctly.
Applications are frequently interconnected with other systems. The interconnection may be data coming into the system from another application or leaving for another application, frequently in multiple cycles. Intersystem testing involves operating multiple systems under test. The basic need for an intersystem test arises whenever parameters change between application systems, or where multiple systems are integrated in cycles.
7.3. Parallel Testing:
The process of comparing test results of processing production data concurrently in both the old and new systems.
Process in which both the old and new modules run at the same time so that performance and outcomes can be compared and corrected prior to deployment; commonly done with modules like Payroll.
Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison.

7.4. Database Testing:
The database component is a critical piece of any data-enabled application. Today's intricate mix of client-server and Web-enabled database applications is extremely difficult to test productively.
Testing at the data access layer is the point at which your application communicates with the database. Tests at this level are vital to improve not only your overall test strategy, but also your product's quality.
Database testing includes the validation of database stored procedures, database triggers, database APIs, backup, recovery, security and database conversion.
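
A small, self-contained sketch of one such check, validating a database trigger against an in-memory SQLite database; the table, trigger and values are invented for the example.

    # Validating a database trigger against an in-memory SQLite database.
    # Table, trigger and values are invented for the example.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL);
        CREATE TABLE audit_log (account_id INTEGER, old_balance INTEGER, new_balance INTEGER);
        CREATE TRIGGER log_balance_change AFTER UPDATE OF balance ON accounts
        BEGIN
            INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
        END;
    """)
    conn.execute("INSERT INTO accounts VALUES (1, 100)")
    conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")

    # The test: the trigger should have written exactly one audit row.
    rows = conn.execute("SELECT * FROM audit_log").fetchall()
    assert rows == [(1, 100, 250)], rows
    print("trigger behaved as expected:", rows)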

7.5. Manual support Testing:
Manual support testing covers all functions performed by people in preparing data for, and using data from, the automated system. The objectives of manual support testing are to:
Verify that the manual-support procedures are documented and complete
Determine that the manual-support responsibilities have been assigned
Determine that the manual-support people are adequately trained
Manual support testing involves first the evaluation of the adequacy of the process and second the execution of the process. The method of testing may vary, but the objective remains the same.

7.6. Ad-hoc Testing:
“Testing carried out using no recognised test case design technique.”
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

7.7. Configuration Testing:
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.

7.8. Pilot Testing:
Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Often is considered a Move-to-Production activity for ERP releases or a beta test for commercial products. Typically involves many users, is conducted over a short period of time and is tightly controlled.

7.9. Automated Testing:
Software testing that utilizes a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional, with knowledge of the automation tool and of the software being tested, to set up the tests.

7.10. Load Testing:
Load testing involves stressing applications under real-world conditions to predict system behaviour and performance, and to identify and isolate problems. Load-testing applications can emulate the workload of hundreds or even thousands of users, so that you can predict how an application will work under different user loads and determine the maximum number of concurrent users accessing the site at the same time.
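
A bare-bones sketch of emulating concurrent virtual users with a thread pool is shown below; the target URL and user count are placeholders, and a real load test would use a dedicated tool rather than this hand-rolled loop.

    # Emulating concurrent virtual users with a thread pool. The URL is a
    # placeholder; a real load test would use a dedicated tool instead.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://example.com/"  # placeholder target
    VIRTUAL_USERS = 25

    def one_user(user_id):
        start = time.perf_counter()
        with urlopen(URL, timeout=10) as response:
            response.read()
        return user_id, time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(one_user, range(VIRTUAL_USERS)))

    latencies = [elapsed for _, elapsed in results]
    print(f"{len(results)} virtual users, "
          f"max latency {max(latencies):.2f}s, "
          f"avg {sum(latencies) / len(latencies):.2f}s")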

7.11. Stress and Volume Testing:
“Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.”

Volume Testing: Testing where the system is subjected to large volumes of data. “

Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears, and then exceeds, capacity.
Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction-processing systems capturing real-time sales, or systems performing database updates and/or data retrieval.

7.12. Usability Testing:
“Testing the ease with which users can learn and use a product.”

All aspects of user interfaces are tested:
Display screens
Messages
Report formats
Navigation and selection problems

7.13. Environmental Testing:
These tests check the system’s ability to perform at the installation site.
Requirements might include tolerance for
heat
humidity
chemical presence
portability
electrical or magnetic fields
disruption of power, etc.
7.14. Active Testing:
In active testing, the tester introduces the test data and analyzes the results. For example, we fill the tank of a car with 1 liter of petrol and measure its mileage.
7.15. Passive Testing:
Passive testing is monitoring the results of a running system without introducing any special test data. For example, an engine is running and we listen to its sound to assess the noise the engine produces.
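
The contrast can be sketched loosely in code: active testing feeds the system chosen inputs and checks the answers, while passive testing only observes what the running system emits. The function and log lines below are invented.

    # Active testing injects chosen inputs and checks the answers; passive
    # testing only observes output. The function and log lines are invented.
    def active_test(system_under_test):
        assert system_under_test(2, 2) == 4  # we choose the input and the check

    def passive_test(log_lines):
        return sum(1 for line in log_lines if "ERROR" in line)  # observe only

    active_test(lambda a, b: a + b)
    observed = passive_test(["INFO started", "ERROR timeout", "INFO done"])
    print("errors observed while monitoring:", observed)
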
7.16. CLIENT / SERVER TESTING:
This type of testing is usually done for 2-tier applications (usually developed for a LAN), where we have a front-end and a back-end.
The application launched on the front-end will have forms and reports for monitoring and manipulating data.
E.g.: applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc. The back-end for these applications could be MS Access, SQL Server, Oracle, Sybase, MySQL or Quadbase.

7.17. WEB TESTING:
This is done for 3-tier applications (developed for the Internet / intranet / extranet). Here we have a browser, a web server and a DB server.
The applications accessible in the browser are developed in HTML, DHTML, XML, JavaScript, etc. (we can monitor through these applications).
Applications for the web server are developed in Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP, etc. (all the manipulations are done on the web server with the help of these programs).
The DB server hosts Oracle, SQL Server, Sybase, MySQL, etc. (all data is stored in the database available on the DB server).

Sunday, November 8, 2009

6. STLC Process

6. STLC Process: (Software Test Life Cycle):
The STLC process is one of the guidelines for testing a particular application.
The STLC is included in the system testing process. In system testing, the test engineers test the developed software by following the STLC phases: Test Initiation, Test Planning, Test Design, Test Execution, Test Reporting and Test Closure.


6.1. Test Initiation Phase:
In this phase, project-manager-level people are involved. They receive the reviewed BRS & SRS documents and prepare the testing documents.
a. Test Strategy Document:
This is a company-level document, prepared by project-manager-level people, which defines the testing approach.

The first stage is the formulation of a test strategy. A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used. A test strategy should ideally be organization-wide, applicable to all of an organization's software developments.

Developing a test strategy which efficiently meets the needs of an organization is critical to the success of software development within the organization. The application of a test strategy to a software development project should be detailed in the project's software quality plan.

IEEE format for the Test Strategy document:
i. Scope And Objective:
Scope means the purpose of, or need for, testing the developed project; i.e., what is the need for testing, or why do we require testing for the developed project?
The need for testing in this project is to validate the developed software against the customer specification, i.e., to make the developed software meet the customer's expectations.

The objective of testing in this project is to find as many defects as possible while testing the developed software.

ii. Budget (or) Business issues:
This component defines how much budget is allocated for testing in this project.

iii. Roles and responsibilities:
The names of the jobs of the test engineers in the testing team and their responsibilities. The names of the jobs are the various levels of test engineer in a testing team, such as senior test engineer and junior test engineer.
iv. Communication & status reports:
Communication defines the way the roles within the testing team communicate with each other, and the way the testing team communicates with the others working on the project.

Status reporting means the test engineers reporting their daily status to the test lead.
v. Test Automation & Tools:
It defines the need for automation testing in this project and, if automation testing is required, whether the particular automation tool is available in our organization for this project.
vi. Change & Configuration Management:
Change means changes or modifications done to the test deliverables.
Configuration management means maintaining all the test deliverables; the modifications done to the test deliverables have to be maintained in the organization's database for future reference.

Change & configuration management means the project manager gives information regarding changes to the test deliverables and maintains these test deliverables for future reference.
vii. Risk & Assumptions:
The list of analyzed risks, and the solutions for the testing team to overcome these risks while testing the developed software in future. The risks and the solutions for them are prepared by the project manager by analyzing the risks.
viii. Training plan:
The number of training sessions the testing team requires to understand the requirements developed in the project properly and perfectly.

6.2. Test Plan Phase:
A test plan states what the items to be tested are, at what level they will be tested, in what sequence they are to be tested, and how the test strategy will be applied to the testing of each item, and it describes the test environment.
Test Plan Document will be divided into
Ø Master Test Plan.
Ø Detailed Test Plan.

6.2.1. Master Test Plan:
The Master Test Plan is the high-level view of the testing approach.
a. Testing Team Formation:
The Test Manager concentrates on the factors below to form the testing team for the corresponding project.
The Test Manager checks the availability of test engineers (selection is made in a 3:1 ratio).
b. Identifying Tactical Risks:
While writing the test plan, the author concentrates on identifying risks with respect to team formation:
Lack of knowledge of the testers in the domain.
Lack of budget.
Lack of resources.
Delays in delivery.
Lack of test data.
Lack of development process rigor.
Lack of communication (between the testing team and the development team).
After completing team formation and risk identification, the author (Test Manager or Test Lead) starts writing the test plan document.
Here are the steps of writing the test plan.

6.2.2. Detailed test plan:
This is a summary of ANSI/IEEE Standard 829-1983. It describes a test plan as:
“A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.”
… (ANSI/IEEE Standard 829-1983)
This standard specifies the following test plan outline:
a. Test Plan Identifier:
· A unique identifier
b. Introduction:
· Summary of the items and features to be tested
· Need for and history of each item (optional)
· References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards
· References to lower level test plans
c. Test Items:
Test items and their version
Characteristics of their transmittal media


References to related documents such as requirements specification, design specification, users guide, operations guide, installation guide
References to bug reports related to test items
Items which are specifically not going to be tested (optional)
d. Features to be tested:
All software features and combinations of features to be tested
References to test-design specifications associated with each feature and combination of features
e. Features Not to Be Tested:
All features and significant combinations of features which will not be tested
The reasons these features won’t be tested
f. Approach:
Overall approach to testing
For each major group of features or combinations of features, specify the approach
Specify major activities, techniques, and tools which are to be used to test the groups
Specify a minimum degree of comprehensiveness required
Identify which techniques will be used to judge comprehensiveness
Specify any additional completion criteria
Specify techniques which are to be used to trace requirements
Identify significant constraints on testing, such as test-item availability, testing resource availability, and deadline
g. Item Pass/Fail Criteria:
Specify the criteria to be used to determine whether each test item has passed or failed testing
h. Test Deliverables:
Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
Identify test input and output data
Identify test tools (optional)
i. Testing Tasks:
Identify tasks necessary to prepare for and perform testing
Identify all task interdependencies
Identify any special skills required
j. Environmental Needs:
Specify the level of security required
Identify special test tools needed
Specify necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed
Identify any other testing needs
Identify the source for all needs which are not currently available
k. Responsibilities:
Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
Identify groups responsible for providing the test items identified in the Test Items section
Identify groups responsible for providing the environmental needs identified in the Environmental Needs section
l. Staffing and Training Needs:
Specify staffing needs by skill level
Identify training options for providing necessary skills


m. Schedule:
Specify test milestones
Specify all item transmittal events
Estimate time required to do each testing task
Schedule all testing tasks and test milestones
For each testing resource, specify its periods of use
n. Risks and Contingencies:
Identify the high-risk assumptions of the test plan
Specify contingency plans for each
o. Approvals:
Specify the names and titles of all persons who must approve the plan
Provide space for signatures and dates

6.3. Test Design Phase:
6.3.1. Test Scenario Document:

A set of test cases that ensure that the business process flows are tested from end to end. They may be independent tests or a series of tests that follow each other, each dependent on the output of the previous one.
Test scenario templates:

  • S.no
  • Module
  • Requirements
  • Test Scenario.
  • Test Case.

6.3.2. Test Case Document:
It is a group of steps that is to be executed to check the functionality of a specific object.

The main objective of writing test case is to validate the test coverage of an application.

Test case Templates:

  • Test Case id.
  • Test Case Description.
  • Step name.
  • Step Description.
  • Test data (or) Test Input.
  • Expected Result.
  • Actual Result.
  • Status.
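
For illustration, here is one filled-in test case following the template above; the application, step and data values are invented purely to show how the fields fit together, with the status derived by comparing the actual and expected results.

    # One filled-in test case following the template above; the application,
    # step and data values are invented to show how the fields fit together.
    test_case = {
        "Test Case id": "TC_LOGIN_001",
        "Test Case Description": "Verify login with valid credentials",
        "Step name": "Step 1",
        "Step Description": "Enter user id and password, click Login",
        "Test data (or) Test Input": {"user": "demo_user", "password": "demo_pass"},
        "Expected Result": "Home page is displayed",
        "Actual Result": "Home page is displayed",
    }
    test_case["Status"] = ("Pass" if test_case["Actual Result"] ==
                           test_case["Expected Result"] else "Fail")
    print(test_case["Test Case id"], "->", test_case["Status"])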

6.4. Execute Test Case Phase:
Executing all the test cases based on the functional specification.

6.5. Test Report Phase:

6.5.1. Defect Report Document:
The document that contains the information regarding accepted and rejected defects, defects corrected, and the status of each defect.

IEEE format for defect report document:


a. Defect id (or) name:
A unique number or name must be given to the defect by the test engineer for future reference.
b. Defect description (or) introduction:
A brief summary or description of the identified defect.
c. Severity:
It means the seriousness of the defect in terms of functionality.
i. High severity:
The software build is not working correctly due to the occurrence of the defect, and the remaining testing cannot continue until that defect is resolved.
Eg: login fails.

ii. Medium severity:
The software build is not working correctly due to the occurrence of the defect, but the remaining testing can continue; the defect must still be resolved completely.
iii. Low severity:
The build has a defect, but it may or may not be resolved.
Eg: unwanted options available in the application.
d. Priority:
The importance of resolving the defect, in terms of severity.
(Or)
It is nothing but how fast the bug should be fixed, in terms of severity.
e. Reproducible:
It means whether, during execution, the defect can be reproduced again and again or not.
Two options are available to mention this:
Yes: the defect can be reproduced again. Attach the test procedure in this component and send it to the defect tracking team.
No: the defect cannot be reproduced again. Attach the test procedure and a snapshot, and forward them to the development team.
f. Status: the status will be “New”.
g. Tested by: the tester's name should be mentioned.
h. Fixed by: the developer's name should be mentioned.
i. Reported on:
The date on which the defect report was reported.

6.5.2. Tools Used:
Tools that are used to track and report defects are,
a. Clear Quest (CQ)
It belongs to the Rational Test Suite and is an effective tool in defect management. CQ functions on a native-access database and maintains a common database of defects. With CQ the entire defect process can be customized. For example, a process can be designed in such a manner that a defect, once raised, must be authorized and then fixed before it can attain the status of retesting. Such a systematic defect flow process can be established, and the history for it can be maintained. Graphs and reports can be customized, and metrics can be derived from the maintained defect repository.

b. Test Director (TD):
Test Director is an automated test management tool developed by Mercury Interactive to help organize and manage all phases of the software testing process, including planning, creating tests, executing tests, and tracking defects. Test Director enables us to manage user access to a project by creating a list of authorized users and assigning each user a password and a user group, so that perfect control can be exercised over the kinds of additions and modifications a user can make to the project. Apart from manual test execution, the WinRunner automated test scripts of the project can also be executed directly from Test Director.
Test Director activates WinRunner, runs the tests, and displays the results. Apart from the above, it is used:
To report defects detected in the software.
As a sophisticated system for tracking software defects.
To monitor defects closely from initial detection until resolution.
To analyze our testing process by means of various graphs and reports.


c. Defect Tracker:
Defect Tracker is a tool developed by Maveric Systems Ltd., an independent software testing company in Chennai, for defect management. This tool is used by the testing team to manage, track and report defects effectively.


6.6. Test Closure Phase:
a. Sign Off :

Sign-off criteria: in order to acknowledge the completion of the test process and certify the application, the following must be completed:
All passes have been completed
All test cases have been executed
All defects raised during test execution have either been closed or deferred
b. Authorities:
The following personnel have the authority to sign off the test execution process
Client: The owners of the application under test
Project manager: Maveric Personnel who managed the project
Project Lead: Maveric Personnel who managed the test process
c. Deliverables:
The following are the deliverables to the clients:
i. Test Strategy
ii. High-level test conditions or scenarios and test conditions document
iii. Consolidated defect report
iv. Weekly status report
v. Traceability matrix
vi. Test acceptance/summary report
d. Metrics:
i. Defect Metrics:
Analysis of the defect report is done for management and client information. The defect metrics are categorized as:

ii. Defect age:
Defect age is the time duration between the point of introduction of a defect and the point of closure of the defect. This gives a fair idea of the defect set to be included in the smoke test during regression.
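
As a quick worked example (with invented dates), defect age is a simple date difference:

    # Defect age as defined above, computed as a date difference (invented dates).
    from datetime import date

    introduced = date(2009, 10, 1)   # point of introduction (hypothetical)
    closed = date(2009, 10, 15)      # point of closure (hypothetical)
    print(f"defect age: {(closed - introduced).days} days")  # -> 14 days
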
iii. Defect Analysis:
The analysis of the defects can be done based on the severity, occurrence and category of the defects. As an example, defect density is a metric which gives the ratio of defects in specific modules to the total defects in the application. Further analysis and derivation of metrics can be done based on the various components of defect management.
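
And a similarly small worked example of defect density as defined here, with invented module names and counts:

    # Defect density as defined here: each module's share of the application's
    # total defects. Module names and counts are invented.
    defects_per_module = {"login": 12, "payments": 30, "reports": 8}
    total = sum(defects_per_module.values())

    for module, count in defects_per_module.items():
        print(f"{module}: {count}/{total} = {count / total:.0%} of all defects")
    # payments stands out at 60%, suggesting where to focus regression effort.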