Friday, October 23, 2009

5. Testing Techniques

5. Testing Techniques:
The testing techniques covered here are:
5.1. Black-Box Testing.
5.2. White-Box Testing.
5.3. Grey Box Testing.

5.1. Black-Box Testing:
Black box testing treats the system as a “black box”, so it does not explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the “black box”, i.e. the application.
The main focus in black box testing is on the functionality of the system as a whole.
Each testing method has its own advantages and disadvantages; there are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box method, so we need to cover the majority of test cases so that most of the bugs will be discovered by black box testing.
a. Tools used for Black Box testing:
Black box testing tools are mainly record-and-playback tools. These tools are used for regression testing, i.e. to check whether a new build has introduced any bugs into previously working application functionality. Record-and-playback tools record test cases in the form of scripts in languages such as TSL, VBScript, JavaScript, or Perl.


b. Advantages of Black Box Testing
· The tester can be non-technical.
· Used to verify contradictions between the actual system and the specifications.
· Test cases can be designed as soon as the functional specifications are complete.
c. Disadvantages of Black Box Testing:
· The test inputs need to be drawn from a large sample space.
· It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
· There is a chance of leaving some paths unidentified during this testing.
5.1.1. Types of Black-box Testing:
i. Boundary Value Analysis:
Many systems have a tendency to fail at the boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside the boundaries, typical values, and error values. BVA extends equivalence partitioning: test both sides of each boundary, look at output boundaries for test cases too, and test min, min - 1, max, max + 1, and typical values.
a. BVA techniques:
1. Number of variables: for n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.
Advantages of Boundary Value Analysis:
1. Robustness testing - boundary value analysis plus values that go beyond the limits.
2. Tests Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1 (see the sketch below).
3. Forces attention to exception handling.
b. Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed ranges, i.e. known boundaries.
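As an illustration, here is a minimal Java sketch that enumerates the seven robustness probe values for one variable; plain BVA would omit min - 1 and max + 1. The range 18..60 for a hypothetical "age" field is invented for the example:

import java.util.Arrays;
import java.util.List;

public class BvaDemo {
    /** Robustness BVA values for an integer range [min, max]:
     *  min-1, min, min+1, a nominal value, max-1, max, max+1. */
    static List<Integer> bvaValues(int min, int max) {
        int nominal = min + (max - min) / 2;
        return Arrays.asList(min - 1, min, min + 1, nominal,
                             max - 1, max, max + 1);
    }

    public static void main(String[] args) {
        // Hypothetical "age" field accepting 18..60.
        System.out.println(bvaValues(18, 60)); // [17, 18, 19, 39, 59, 60, 61]
    }
}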
ii. Equivalence Class Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
How this partitioning is performed while testing (a sketch of the first rule follows the list):
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
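A minimal sketch of rule 1 in Java, assuming a hypothetical input field that must accept quantities from 1 to 100: one representative value is drawn from the valid class and one from each of the two invalid classes.

public class EquivalencePartitionDemo {
    /** Hypothetical validator: accepts quantities from 1 to 100 inclusive. */
    static boolean isValidQuantity(int q) {
        return q >= 1 && q <= 100;
    }

    public static void main(String[] args) {
        // Rule 1: a range yields one valid and two invalid classes,
        // so three representative test cases are enough.
        System.out.println(isValidQuantity(50));   // valid class 1..100   -> true
        System.out.println(isValidQuantity(0));    // invalid: below range -> false
        System.out.println(isValidQuantity(101));  // invalid: above range -> false
    }
}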
iii. Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases aimed at the application paths where errors are likely to lurk.
5.2. White Box Testing:
White box testing (WBT) is also called Structural or Glass box testing.
White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised.
5.2.1. Types of white-box testing:
· Basic Path Testing:
· Control Structure testing:
· Program technique testing:
· Mutation Testing:
i. Basic Path Testing:
White box testers use this technique to examine the execution of a program and ensure that it covers all the independent paths defined in it; that is, the program has to be executed as many times as it has independent paths.
To implement this technique, programmers follow these steps:
Step 1: prepare a program with respect to the design logic.
Step 2: prepare a flowchart for that program.
Step 3: calculate the cyclomatic complexity.
Step 4: run the program more than once, enough to cover all the independent paths.

a. Cyclomatic Complexity:
Cyclomatic complexity is a measurement for finding the number of independent paths in a flow graph. For a connected flow graph it can be computed as V(G) = E - N + 2 (edges minus nodes plus two), which for simple predicates equals the number of decision points plus one.
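A small illustration in Java (the method itself is invented for the example):

public class CyclomaticDemo {
    /** Two decision points (one while, one if), so the cyclomatic
     *  complexity is V(G) = 2 + 1 = 3, and basis path testing must
     *  exercise three independent paths:
     *  1) empty array, 2) a non-positive element, 3) a positive element. */
    static int sumOfPositives(int[] values) {
        int total = 0;
        int i = 0;
        while (i < values.length) {   // decision 1
            if (values[i] > 0) {      // decision 2
                total += values[i];
            }
            i++;
        }
        return total;
    }
}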
ii. Control Structure Testing:
Validating the correctness of every input statement and output statement in a control structure.
a. Branch testing:
also called Decision Testing
Definition: "For every decision, each branch needs to be executed at least once."
Shortcoming - ignores implicit paths that result from compound conditionals.
Treats a compound conditional as a single statement. (We count each branch taken out of the decision, regardless of which condition led to the branch.)
This example has two branches to be executed:
IF ( a equals b) THEN
statement 1
ELSE
statement 2
END IF

This example also has just two branches to be executed, despite the compound conditional:
IF ( a equals b AND c less than d ) THEN
statement 1
ELSE
statement 2
END IF

This example has three branches to be executed:
IF ( a equals b) THEN
statement 1
ELSE
IF ( c equals d) THEN
statement 2
ELSE
statement 3
END IF
END IF
Obvious decision statements are if, for, while, switch.
Subtle decisions are return (Boolean expression), ternary expressions, and try-catch; a short sketch of these follows. For this course you don't need to write test cases for IOException and OutOfMemoryError.
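A short Java illustration of these subtle decision points; each method hides a two-way branch even though no if keyword appears (the method names are invented for the example):

public class SubtleBranches {
    // return of a Boolean expression: two branches (true / false).
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // ternary expression: two branches.
    static String label(int n) {
        return (n % 2 == 0) ? "even" : "odd";
    }

    // try-catch: the normal path and the exception path are two branches.
    static int parseOrZero(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return 0;
        }
    }
}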

b. Condition testing:
Validating a simple Boolean “if” condition with respect to its input statements and output statements correctness.
(Or)
Condition testing is a test construction method that focuses on exercising the logical conditions in a program module.
Errors in conditions can be due to:
Boolean operator error
Boolean variable error
Boolean parenthesis error
Relational operator error
Arithmetic expression error
Definition: "For a compound condition C, the true and false branches of C and every simple condition in C need to be executed at least once."Multiple-condition testing requires that all true-false combinations of simple conditions be exercised at least once. Therefore, all statements, branches, and conditions are necessarily covered.
c. Dataflow testing:
Validating the flow of data with respect to a control statement.
(Or)
Selects test paths according to the locations of definitions and uses of variables. This is a somewhat sophisticated technique and is not practical for extensive use; it should be targeted at modules with nested if and loop statements.



d. Loop testing:
Validating the looping control structures for their defined number of iterations.
(Or)
Loops are fundamental to many algorithms and need thorough testing.
There are four different classes of loops: simple, concatenated, nested, and unstructured.
Examples:
Create a set of tests that force the following situations:
Simple loops, where n is the maximum number of allowable passes through the loop:
Skip the loop entirely.
Only one pass through the loop.
Two passes through the loop.
m passes through the loop, where m < n.
n - 1, n, and n + 1 passes through the loop.
Nested loops:
Start with the inner loop. Set all other loops to minimum values.
Conduct simple loop testing on the inner loop.
Work outwards.
Continue until all loops are tested.
Concatenated loops:
If the loops are independent, use simple loop testing.
If dependent, treat them as nested loops.
Unstructured loops:
Don't test - redesign.
public class loopdemo
{
    private int[] numbers = {5,-3,8,-12,4,1,-20,6,2,10};

    /** Compute total of numItems positive numbers in the array
     * @param numItems how many items to total, maximum of 10.
     */
    public int findTotal(int numItems)
    {
        int total = 0;
        if (numItems > 0 && numItems <= 10) {
            for (int count = 0; count < numItems; count++) {
                if (numbers[count] > 0) {   // only positive values are totalled
                    total = total + numbers[count];
                }
            }
        }
        return total;
    }
}
public void testOne()
{
    loopdemo app = new loopdemo();
    assertEquals(0, app.findTotal(0));    // skip the loop entirely
    assertEquals(5, app.findTotal(1));    // one pass through the loop
    assertEquals(5, app.findTotal(2));    // two passes
    assertEquals(17, app.findTotal(5));   // m passes, where m < n
    assertEquals(26, app.findTotal(9));   // n - 1 passes
    assertEquals(36, app.findTotal(10));  // n passes
    assertEquals(0, app.findTotal(11));   // n + 1 passes (input rejected)
}

iii. Program technique Testing:
During this testing, the programmers calculate the execution time of a program using monitors. If the execution time is not acceptable, the programmers change the structure of the program without disturbing its functionality; i.e. if a program takes too long to complete its execution, the programmers reduce the internal steps of the program without disturbing its external functionality.
iv. Mutation testing:
Mutation means a change in a program. In mutation testing, the programmers make changes to a tested program in order to estimate the correctness and completeness of the program's testing; i.e. they make small modifications within the program and validate the modified program to check whether the existing tests detect the modification. A sketch follows.
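Here is that sketch, a hand-made mutant pair (real mutation tools such as PIT generate and run mutants automatically; this code is only an illustration): the mutant flips one relational operator, and an adequate test suite should "kill" it by failing against it.

public class MutationDemo {
    /** Original unit under test. */
    static int max(int a, int b) {
        return (a > b) ? a : b;
    }

    /** Mutant: '>' changed to '<'. */
    static int maxMutant(int a, int b) {
        return (a < b) ? a : b;
    }

    public static void main(String[] args) {
        // A test that kills the mutant: it passes on the original
        // and fails on the mutated version.
        System.out.println(max(5, 3) == 5);       // true  (original passes)
        System.out.println(maxMutant(5, 3) == 5); // false (mutant is killed)
    }
}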

5.2.2. Why we do White Box Testing?
To ensure:
That all independent paths within a module have been exercised at least once.
That all logical decisions have been verified on their true and false values.
That all loops execute at their boundaries and within their operational bounds.
That internal data structures are valid.
5.2.3. Need of White Box Testing?
To discover the following types of bugs:

Logical errors, which tend to creep into our work when we design and implement functions, conditions, or controls that are out of the mainstream of the program.
Design errors, due to differences between the logical flow of the program and the actual implementation.
Typographical errors and syntax errors.
Skills Required:
We need to write test cases that ensure complete coverage of the program logic. For this we need to know the program well, i.e. we should know the specification and the code to be tested, and we need knowledge of programming languages and logic.
5.2.4. Limitations of WBT:
It is not possible to test each and every path of the loops in a program; this means exhaustive testing is impossible for large systems.
This does not mean that WBT is not effective: selecting important logical paths and data structures for testing is practically possible and effective.
5.3. Grey Box Testing:
Grey box testing is a newer term, which evolved due to the varied architectural usage of systems. It is simply a combination of both black box and white box testing: the tester should have knowledge of both the internals and the externals of the function under test.
The tester should have good knowledge of white box testing and complete knowledge of black box testing.
Grey box testing is especially important for Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces.

Thursday, October 22, 2009

3. SDLC Process (Software Development Life Cycle)

3.1. Requirement Analysis:

Roles:
Business Analyst (B.A), Engagement Manager (E.M)
Process:
First of all, the business analyst takes an appointment with the customer, collects the templates from the company, meets the customer on the appointed day, gathers the requirements with the help of the template, and comes back to the company with the requirements documents. Once the requirement document has come to the company, the engagement manager checks whether the customer has given any extra requirements or confused requirements. In the case of extra requirements, he negotiates the excess cost of the project; in the case of confused requirements, he is responsible for prototype demonstration and for gathering clear requirements.
Proof: The proof document of this phase is the Requirement Document. It is called by different names in different companies:
1. FRS (Functional Requirements Specification)
2. CRS (Customer Requirement Specification)
3. URS (User Requirement Specification)
4. BDD (Business Design Document)
5. BD (Business Document)
6. BRS (Business Requirement Specification)
Some companies may maintain the overall business flow information in one document and the detailed functional requirement information in another document.

3.1.1. BRS (Business Requirement Specification)
Roles:
Business Analyst (B.A), Project Manager (P.M)
Process:
BRS is developed by Business Analyst.
BRS stands for Business Requirement Specification. Initially the client gives the requirements in their own format; these are then converted into a standard format that software people can understand.

In the BRS the requirements are defined in a general format, whereas in the SRS the requirements are divided into modules, and each module specifies how many interfaces and screens it contains.





3.1.2. Analysis:
(a) Tasks:

1. Feasibility Study.
2. Tentative planning.
3. Technology Selection.
4. Requirement Analysis.

(b)Roles:
System Analyst (S.A), Project Manager (P.M), and Team Manager (T.M)
Process:
1. Feasibility Study: a detailed study of the requirements in order to check whether the requirements are possible to implement or not.
2. Tentative Planning: in this step the resource planning and the time planning (scheduling) are done provisionally.
3. Technology Selection: the list of all the technologies required to accomplish this project successfully is analyzed and listed in this step.
4. Requirement Analysis: the list of all the requirements needed to accomplish this project.

3.1.3. SRS (Software Requirement Specification):
Roles:

Project Manager (P.M)
Process:
The SRS document is one of the client requirement documents.
The SRS is often referred to as the "parent" document because all subsequent project management documents, such as design specifications, statements of work, software architecture specifications, testing and validation plans, and documentation plans, are related to it.

An SRS is basically an organization's understanding (in writing) of a customer or potential client's system requirements and dependencies at a particular point in time (usually) prior to any actual design or development work. It's a two-way insurance policy that assures that both the client and the organization understand the other's requirements from that perspective at a given point in time.

It's important to note that an SRS contains functional and nonfunctional requirements only; it doesn't offer design suggestions, possible solutions to technology or business issues, or any other information other than what the development team understands the customer's system requirements to be.
A well-designed, well-written SRS accomplishes four major goals:
* It provides feedback to the customer. An SRS is the customer's assurance that the development organization understands the issues or problems to be solved and the software behavior necessary to address those problems. Therefore, the SRS should be written in natural language in an unambiguous manner that may also include charts, tables, data flow diagrams, decision tables, and so on.
* It decomposes the problem into component parts. The simple act of writing down software requirements in a well-designed format organizes information, places borders around the problem, solidifies ideas, and helps break down the problem into its component parts in an orderly fashion.
* It serves as an input to the design specification. As mentioned previously, the
SRS serves as the parent document to subsequent documents, such as the software design specification and statement of work. Therefore, the SRS must contain sufficient detail in the functional system requirements so that a design solution can be devised.
* It serves as a product validation check. The SRS also serves as the parent document for testing and validation strategies that will be applied to the requirements for verification.

3.2. Software Design:
The development process is the process by which the user requirements are elicited, and software satisfying these requirements is designed, built, tested, and delivered to the customer.

3.2.1. HLD (High Level Designed Document):
a. Purpose of this Document
The HLD is also called the Architectural Design Document, System Design Document, or Macro-Level Design.

It is the phase of the life cycle in which a logical view of the computer implementation of the solution to the customer requirements is developed. It gives the solution at a high level of abstraction. The solution contains two main components:
The functional Architecture of the application and the
Database design.

(Or)
This High-Level Design (HLD) document specifies the implementation, including intercomponent dependencies, and provides sufficient design detail that any product based on this HLD will satisfy the product requirements.
(Or)
This is the first place other developers/maintainers will look to learn how your project works. It must provide a comprehensive outline of the entire design. The design lead is responsible for writing the overview. Write this section as HTML and place it in the javadoc overview file.


3.2.2. LLD (Low Level Designed Document):
The LLD is also called the detailed design or micro-level design.

During the detailed design phase, the view of the application developed during high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications. For every program a unit test is created. Important activities in the detailed design stage include the identification of common routines and programs, development of skeleton programs, and development of utilities and tools for productivity improvement.
(Or)

This document describes each and every module in an elaborate manner, so that the programmer can directly code the program based on it. There will be at least one document for each module, and there may be more for a module. The LLD will contain:
- detailed functional logic of the module, in pseudo code
- database tables, with all elements, including their type and size
- all interface details with complete API references (both requests and responses)
- all dependency issues
- error message listings
- complete inputs and outputs for a module

3.3 Coding:
This coding part is done only by programmers, that is, developers.

First the developer writes the code for the application or product based on the Low-Level Design (LLD) document, because this document is the detailed document derived from the SRS document. After writing the code for the LLD design, the programmer writes the code for the HLD design.



3.4. Testing:
3.4.1. Levels of Testing:
3.4.1.1. Unit Testing:
Unit testing is also called component testing.

Unit testing is defined as testing an individual module. The testing is done on a unit, the smallest piece of software. Unit testing is done to verify that the unit satisfies its functional specification or its intended design structure.
(Or)
This is the first and the most important level of testing. As soon as the programmer develops a unit of code, the unit is tested for various scenarios. As the application is built, it is much more economical to find and eliminate bugs early on; hence unit testing is the most important of all the testing levels. As the software project progresses, it becomes more and more costly to find and fix bugs.
In most cases it is the developer’s responsibility to deliver Unit Tested Code.
a. Benefits of Unit Testing:
Assurance of working components before integration
Tests are repeatable - every time you change something, you can rerun your suite of tests to verify that the unit still works (a sketch follows this list).
Tests can be designed to ensure that the code fulfills the requirements.
All debugging is separated from the code.
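A minimal JUnit-style sketch of such a repeatable unit test, assuming JUnit 4 on the classpath (the add method is invented for the example):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    // Hypothetical unit under test.
    static int add(int a, int b) { return a + b; }

    @Test
    public void addSatisfiesItsSpecification() {
        assertEquals(5, add(2, 3));   // typical case
        assertEquals(0, add(0, 0));   // boundary case
        assertEquals(-1, add(2, -3)); // negative operand
    }
}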

b. Component Test Process:
a) Component Test Planning;
b) Component Test Specification;
c) Component Test Execution;
d) Component Test Recording;
e) Checking for Component Test Completion.

Component Test Planning shall begin the test process and Checking for Component Test Completion shall end it; these activities are carried out for the whole component. Component Test Specification, Component Test Execution, and Component Test Recording may, however, on any one iteration, be carried out for a subset of the test cases associated with a component. Later activities for one test case may occur before earlier activities for another. Whenever an error is corrected by making a change or changes to the test materials or the component under test, the affected activities shall be repeated.



3.4.1.2. Integration Testing:
“Testing performed to expose faults in the interfaces and in the interaction between integrated components”
Testing of combined parts of an application to determine whether they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Objective:
The typical objectives of software integration testing are to:
Cause failures involving the interactions of the integrated software components when running on a single platform.
Report these failures to the software development team so that the underlying defects can be identified and fixed.
Help the software development team to stabilize the software so that it can be successfully distributed prior to system testing.
Minimize the number of low-level defects that will prevent effective system and launch testing.
Entry criteria:
The integration team is adequately staffed and trained in software integration testing.
The integration environment is ready.
The first two software components have:
Ø Passed unit testing.
Ø Been ported to the integration environment.
Ø Been integrated.
Documented Evidence that component has successfully completed unit test.
Adequate program or component documentation is available
Verification that the correct version of the unit has been turned over for integration.
Exit criteria:
1. A test suite of test cases exists for each interface between software components.
2. All software integration test suites successfully execute (i.e., the tests completely execute and the actual test results match the expected test results).
3. Successful execution of the integration test plan
4. No open severity 1 or 2 defects
Component stability
Guidelines:
The iterative and incremental development cycle implies that software integration testing is regularly performed in an iterative and incremental manner.
Software integration testing must be automated if adequate regression testing is to occur.
Software integration testing can elicit failures produced by defects that are difficult to detect during system or launch testing once the system has been completely integrated.

3.4.1.2.1. Incremental Integration Testing
“Integration testing where system components are integrated into the system one at a time until the entire system is integrated”
Continuous testing of an application as new functionality is added. It requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; it is done by programmers or by testers.

Incremental integration testing is divided into 3 types:
a.) Top down Integration.
b.) Bottom up Integration.
c.) Sandwich Integration.

a.) Top down Integration
“An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level component has been tested.”

Modules are integrated by moving down the program design hierarchy.
Top-down integration can proceed depth first or breadth first.





Steps:
The main control module is used as the test driver, with stubs for all subordinate modules.
Replace the stubs either depth first or breadth first.
Replace the stubs one at a time.
Test after each module is integrated.
Use regression testing (conducting all or some of the previous tests) to ensure that new errors are not introduced.
This verifies major control and decision points early in the design process.

b.) Bottom up Integration:
“An approach to integration testing where the lowest level components are tested first then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.”

Begin construction and testing with atomic modules (the lowest-level modules).
Use a driver program to test.





Steps:
1. Low-level modules are combined into clusters (builds) that perform specific software subfunctions.
2. A driver program is developed to test, and the cluster is tested.
3. Driver programs are removed and clusters are combined, moving upwards in the program structure.

i. Stub and Drivers:

Stubs:
Stubs are program units that stand in for other (more complex) program units that are directly referenced by the unit being tested.
Stubs are usually expected to provide the following:
An interface that is identical to the interface that will be provided by the actual program unit, and the minimum acceptable behaviour expected of the actual program unit (this can be as simple as a return statement).
Drivers:
Drivers are programs or tools that allow a tester to exercise and examine, in a controlled manner, the unit of software being tested.
A driver is usually expected to provide the following:
A means of defining, declaring, or otherwise creating any variables, constants, or other items needed in the testing of the unit; a means of monitoring the states of these items; and any input and output mechanisms needed in the testing of the unit.
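A minimal Java sketch of both ideas, assuming a unit under test named OrderProcessor that depends on a tax service which has not yet been written (all names are invented for the example):

// Interface the real (not yet available) collaborator will implement.
interface TaxService {
    double taxFor(double amount);
}

// Stub: identical interface, minimum acceptable behaviour.
class TaxServiceStub implements TaxService {
    public double taxFor(double amount) {
        return 0.0; // simplest possible canned answer
    }
}

// Unit under test, written against the interface.
class OrderProcessor {
    private final TaxService tax;
    OrderProcessor(TaxService tax) { this.tax = tax; }
    double totalWithTax(double amount) {
        return amount + tax.taxFor(amount);
    }
}

// Driver: creates the data, invokes the unit, and monitors the result.
public class OrderProcessorDriver {
    public static void main(String[] args) {
        OrderProcessor unit = new OrderProcessor(new TaxServiceStub());
        double result = unit.totalWithTax(100.0);
        System.out.println(result == 100.0 ? "PASS" : "FAIL: " + result);
    }
}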

c.) Sandwich Testing:
This testing is also called bidirectional testing or hybrid testing.
It is a combination of both bottom-up and top-down testing, meeting at a target layer in the middle.



3.4.1.3. Smoke Testing:
Smoke testing is done by developers; it is one of the last checks performed before the build is handed to the testers.
Smoke testing is also called build verification testing.
A build is nothing but an executable (.exe) file; it contains the source code compiled into executable code.
When a build is received, a smoke test is run to ascertain if the build is stable and it can be considered for further testing.
Smoke testing can be done for testing the stability of any interim build.
Smoke testing can be executed for platform qualification tests.

3.4.1.4. Sanity Testing:
Sanity Testing is one of the basic tests for testers.
Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the reported issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing, and a group of test cases related to the changes made to the application is executed.
Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.

3.4.1.5. System Testing:
“System testing is the process of testing an integrated system to verify that it meets specified requirements".

System test Entrance Criteria:
Successful execution of the Integration test cases
No open severity 1 or 2 defects
75-80% of total system functionality and 90% of major functionality delivered
System stability for 48-72 hours to start test

System Test Exit Criteria:
Successful execution of the system test cases, and documentation that shows coverage of requirements and high-risk system components
System meets pre-defined quality goals
100% of total system functionality delivered

3.4.1.5.1. Types of system testing:
a.) Functional Testing.
b.) Non-Functional Testing.


A.) Functional Testing:
Functional testing can be subdivided into two types:
Ø Functionality Testing.
Ø Sanitation Testing.

a. Functionality Testing:
Functionality testing can be done using several kinds of coverage:
i. GUI Coverage.
ii. Input Domain Coverage.
iii. Output Domain Coverage.
iv. Error Handling Coverage.
v. Database Coverage.
vi. Order of Functionality.

i.) GUI Coverage:
Under GUI coverage the testers check the properties of the particular application.
Properties are
1. Height.
2. Width.
3. Text.
4. Length.
5. Start position (X).
6. End position (Y).

ii.) Input Domain Coverage:
Under input domain coverage the testers check the inputs to the particular application, based on the customer or client requirements, verifying whether each input is accepted by the application or not.
iii.) Output Domain Coverage:
Under output domain coverage the testers check the output of the particular application for the inputs given under input domain coverage.
The output is nothing but the customer's expectation.
iv.) Error handling Coverage:
Under error handling coverage the testers check whether the application has the capability to move from an abnormal state back to a normal state. An abnormal state is nothing but an error-handling state.

v.) Database Coverage:
Database coverage is also called backend coverage.
Database coverage is nothing but verifying the values stored in and retrieved from the database.
Database Coverage can be Sub Divided into
· Data Validation.
· Data Integrity.
Data Validation:
It is nothing but checking whether a new value entered in the front end is actually entered in the database.
Data Integrity:
It is nothing but checking whether modified values are stored in the database correctly. A hedged sketch of such a check follows.
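A sketch of the data validation check using plain JDBC (the connection URL, credentials, table, column, and expected value are all invented for the example, and a JDBC driver is assumed to be on the classpath); the data integrity check would run the same query again after a front-end update:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DbCoverageCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string, table, and column names.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost/appdb", "tester", "secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT email FROM customers WHERE id = ?")) {
            ps.setInt(1, 42);
            // Data validation: the value just entered on the front end
            // should now exist in the backend table.
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next() && "new.user@example.com".equals(rs.getString("email"))) {
                    System.out.println("Data validation PASS");
                } else {
                    System.out.println("Data validation FAIL");
                }
            }
        }
    }
}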



vi.) Order of Functionality:
Checking the functionality of the particular application in order, that is, checking the inputs to the application in procedural order.
If you do not check the inputs in procedural order, the application will show an error message.

b. Sanitation Testing:
Sanitation testing is nothing but checking the extra functionalities in the particular application, because unchecked extra functionality can affect the main functionality as well.
That is why the tester first checks all the main functionalities that are specified in the customer requirement document, and only then checks the extra functionalities that are not mentioned in the customer requirement document.

B.) Non Functional Testing:
Non-functional testing is nothing but checking the quality characteristics of the application, such as its performance.
Non functional testing can be sub divided into
i.) Usability testing.
ii.) Recovery testing.
iii.) Security testing.
iv.) Compatibility testing.
v.) Configuration testing.
vi.) Data-Volume Testing.
vii.) Performance testing.
i.) Usability Testing:
Usability testing is also called user-interface testing.
Usability testing is nothing but a user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck? Basically, system navigation is checked in this testing.
User-friendliness means:
1. Easy to use.
2. Easy to follow.
3. Easy to understand.
4. An attractive look and feel.
(Or)
“Testing the ease with which users can learn and use a product.”
All aspects of user interfaces are tested:
1. Display screens.
2. Messages.
3. Report formats.
4. Navigation and selection problems.

ii.) Recovery testing:
“Testing aimed at verifying the system's ability to recover from varying degrees of failure.”

Recovery is the ability to restart operations after the integrity of the application has been lost. The process normally involves reverting to a point where the integrity of the system is known, and then reprocessing transactions up until the point of failure. The importance of recovery will vary from application to application.
Objectives:
Recovery testing is used to ensure that operations can be continued after a disaster. Recovery testing not only verifies the recovery process, but also the effectiveness of the component parts of that process. Specific objectives of recovery testing include verifying that:
1. Adequate backup data is preserved.
2. Backup data is stored in a secure location.
3. Recovery procedures are documented.
4. Recovery personnel have been assigned and trained.
5. Recovery tools have been developed and are available.

When to use Recovery Testing:
Recovery testing should be performed whenever the user of the application states that the continuity of operation of the application is essential to the proper functioning of the user area. The user should estimate the potential loss associated with the inability to recover operations over various time spans. The amount of the potential loss should determine both the amount of resources put into disaster planning and the amount put into recovery testing.

iii.) Security testing:
“Testing whether the system meets its specified security objectives.”
Security is a protection system that is needed both to secure confidential information and, for competitive purposes, to assure third parties that their data will be protected. Protecting the confidentiality of information is designed to protect the resources of the organization. Security testing is designed to evaluate the adequacy of the protective procedures and countermeasures.

Objectives:
Security defects do not become as obvious as other types of defects. Therefore, the objectives of security testing are to identify defects that are very difficult to identify. Even failures in the security system operation may not be detected, resulting in a loss or compromise of information without the knowledge of that loss. The security testing objectives include:
Determining that adequate attention has been devoted to identifying security risks
Determining that a realistic definition and enforcement of access to the system have been implemented
Determining that sufficient expertise exists to perform adequate security testing
Conducting reasonable tests to ensure that the implemented security measures function properly

When to Use security Testing:
Security testing should be used when the information and/or assets protected by the application system are of significant value to the organization. The testing should be performed both prior to the system going into an operational status and after the system is placed into an operational status. The extent of testing should depend on the security risks, and the individual assigned to conduct the test should be selected based on the estimated sophistication that might be used to penetrate security.

iv.) Compatibility testing:
Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application's compatibility with the computing environment. Computing environment may contain some or all of the below mentioned elements:
Operating systems (MVS, UNIX, Windows, etc.)
Other System Software (Web server, networking/ messaging tool, etc.)
Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)

v.) Configuration testing:
This testing is also called hardware compatibility testing. During this testing the tester validates how well the current project supports different types of hardware, such as different printers, network interface cards (NIC), and topologies. This testing is also called hardware testing or portability testing.


vi.) Data-Volume testing:
Data Volume testing is also called as Storage testing (Or) Memory Testing.
Volume testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size, or it could also be the size of an interface file that is the subject of volume testing.
For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (this could be any file, such as .dat or .xml); this interaction could be reading from and/or writing to the file. You will create a sample file of the size you want and then test the application's functionality with that file in order to test the performance.
In short, volume testing means testing the software with a large volume of data in the database.

vii.) Performance testing:
It is nothing but checking the time response of the particular application. Time response is nothing but how much time the application takes to perform a particular task.
(Or)
“Testing conducted to evaluate the compliance of a system or component with specified performance requirements.”
Performance testing is designed to test the run-time performance of software within the context of an integrated system. It is not until all system elements are fully integrated and certified as free of defects that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and often require both hardware and software instrumentation; that is, it is necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals and log events. By instrumenting the system, the tester can uncover situations that lead to degradation and possible system failure.

Performance testing can be sub divided into
Load Testing.
Stress Testing.

Load testing:
Load testing involves exercising applications under real-world conditions to predict system behaviour and performance and to identify and isolate problems. Load testing tools can emulate the workload of hundreds or even thousands of users, so that you can predict how an application will work under different user loads and determine the maximum number of users accessing the site concurrently.

Stress Testing:
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
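Dedicated tools (e.g. LoadRunner, JMeter) are normally used for load and stress testing; the minimal Java sketch below only shows the underlying idea of emulating concurrent users and measuring response times (the user count and the simulated call are invented for the example):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class LoadTestSketch {
    public static void main(String[] args) throws InterruptedException {
        int users = 100;                 // simulated concurrent users
        AtomicLong worst = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                long start = System.nanoTime();
                callApplication();       // the operation under load
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                worst.accumulateAndGet(elapsedMs, Math::max);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Worst response time: " + worst.get() + " ms");
    }

    // Stand-in for a real request to the application under test.
    static void callApplication() {
        try { Thread.sleep(10); } catch (InterruptedException e) { }
    }
}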

3.4.1.6. User Acceptance Testing:
“Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component.”

User Acceptance Testing (UAT) is performed by Users or on behalf of the users to ensure that the Software functions in accordance with the Business Requirement Document. UAT focuses on the following aspects:
1. All functional requirements are satisfied
2. All performance requirements are achieved
3. Other requirements like transportability, compatibility, error recovery etc. are satisfied.
4. The acceptance criteria specified by the user are met.
Entry Criteria
1. SIT must be completed.
2. Availability of a stable test environment with the latest version of the application.
3. Test cases prepared by the testing team to be reviewed and signed off by the project coordinator (AGM-Male).
4. All user IDs requested by the testing team to be created and made available to the testing team one week prior to the start of testing.
Exit Criteria
1. All test scenarios/conditions will be executed, and reasons will be provided for untested conditions arising out of the following situations:
Ø Non-availability of the functionality.
Ø Functionality deferred to a future release.
2. All defects reported are in the ‘Closed’ or ‘Deferred’ status. The client team should sign off the ‘Deferred’ defects.

User Acceptance can be sub divided into
Ø Alpha Testing.
Ø Beta Testing.

a. Alpha Testing:
Alpha testing is conducted at the developer's site by a customer. The customer uses the software with the developer 'looking over the shoulder' and recording errors and usage problems. Alpha testing is conducted in a controlled environment.

b. Beta Testing:
Beta testing is conducted at one or more customer sites by end users. It is 'live' testing in an environment not controlled by the developer. The customer records and reports difficulties and errors at regular intervals.

3.5. Implementation:
Implementation is nothing but installing the developed application at the client or customer site and putting it to work in the customer's environment.

3.6. Maintenance:
The application is maintained using the following methods:
Ø Bug Fixing.
Ø Enhancement.
Ø Upgradation.

3.6.1. Bug Fixing:
If, after implementation, the customer or client finds a defect, we need to clear the bug; that is nothing but bug fixing.

3.6.2. Enhancement:
Adding extra features or modifying the already developed application is called enhancement.

3.6.3. Upgradation:
After adding extra features, the version of the particular application is updated accordingly.

2. Quality Assurance, Quality Control, Verification & Validation

2. Quality Assurance, Quality Control, Verification & Validation

2.1. Quality Assurance
“A planned and systematic pattern for all actions necessary to provide adequate confidence that the item or product conforms to established technical requirements”

2.2. Quality Control
“QC is a process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.”

“Quality Control is defined as a set of activities or techniques whose purpose is to ensure that all quality requirements are being met. In order to achieve this purpose, processes are monitored and performance problems are solved.”

2.3. Verification
“The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.”
(Or)
“Are we building the Product Right?”

2.4. Validation
Determination of the correctness of the products of software development with respect to the user needs and requirements.

(Or)
“Are we building the Right Product”?



Difference Table:

Quality Assurance:

1. A study of the process followed in project development.

Quality Control:

1. A study of the project for its function and specification.

Verification:

1. The process of determining whether the output of one phase of development conforms to its previous phase.

2. Verification is concerned with phase containment of errors.

Validation:

1. The process of determining whether a fully developed system conforms to its SRS document.

2. Validation is concerned with the final product being error-free.

1. Manual Testing Fundamentals

1. Testing Fundamentals
1.1. Definition

“The process of exercising software to verify that it satisfies specified requirements and to detect errors.”
(Or)
“Testing is the process of executing a program with the intent of finding errors”
(Or)
Testing identifies faults, whose removal increases the software quality by increasing the software’s potential reliability. Testing is the measurement of software quality. We measure how closely we have achieved quality by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability and testability.
(Or)
Software testing is a process to check whether our application or product meets the client or customer requirements or not.

1.2. Objective

· Testing is a process of executing a program with intent of finding an error.
· A good test is one that has a high probability of finding an as-yet-undiscovered error.
· A successful test is one that uncovers an as-yet-undiscovered error.
· Testing should also aim at suggesting changes or modifications if required, thus adding value to the entire process.
· The objective is to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort.
· Demonstrating that the software application appears to be working as required by the specification.
· Meeting performance requirements.
· Estimating software reliability and software quality based on the data collected during testing.

1.3. Benefits of Testing
· Increase accountability and Control.
· Cost reduction.
· Time reduction.
· Defect reduction.
· Increase productivity of the Software developers.
· Quantitative Management of Software delivery.

1.4. Defect:
If the software misses some feature or function that is present in the requirements, it is called a defect.
(Or)
It is nothing but a mismatch between the requirement the developer implemented and the customer-specified requirement; that mismatch is called a defect.
(Or)
Missing a customer requirement during development.

1.5. Bug:
A fault in a program which causes the program to perform in an unintended or unanticipated manner.
(Or)
Bug means deviation from the expectation.
(or)
Actual result not equal to expected result.