Thursday, October 22, 2009

3. SDLC Process (Software Development Life Cycle)

3.1. Requirement Analysis:

Roles:
Business Analyst (B.A), Engagement Manager (E.M)
Process:
First, the Business Analyst takes an appointment with the customer, collects the requirement-gathering templates from the company, meets the customer on the appointed day, gathers the requirements with the help of the templates, and returns to the company with the requirement documents. Once the requirement document reaches the company, the Engagement Manager checks whether the customer has given any extra or confused requirements. In the case of extra requirements, he negotiates the excess cost of the project. In the case of confused requirements, he is responsible for demonstrating a prototype and gathering clear requirements.
Proof: The proof document of this phase is the Requirement Document. It is known by different names in different companies:
1. FRS (Functional Requirements Specification)
2. CRS (Customer Requirement Specification)
3. URS (User Requirement Specification)
4. BDD (Business Design Document)
5. BD (Business Document)
6. BRS (Business Requirement Specification)
Some companies may maintain the overall business flow information in one document and the detailed functional requirement information in another.

3.1.1. BRS (Business Requirement Specification)
Roles:
Business Analyst (B.A), Project Manager (P.M)
Process:
The BRS is developed by the Business Analyst.
BRS stands for Business Requirement Specification. Initially the client gives the requirements in their own format; these are then converted into a standard format that software people can understand.

In the BRS the requirements are defined in a general format, whereas in the SRS the requirements are divided into modules, and each module specifies how many interfaces and screens it contains.





3.1.2. Analysis:
(a) Tasks:

1. Feasibility Study.
2. Tentative planning.
3. Technology Selection.
4. Requirement Analysis.

(b) Roles:
System Analyst (S.A), Project Manager (P.M), and Team Manager (T.M)
Process:
1. Feasibility Study : It is a detailed study of the requirements in order to check whether the requirements are possible or not.
2. Tentative Planning : In this stage the resource planning and the time planning (scheduling) are done provisionally.
3. Technology Selection : The list of all the technologies required to accomplish the project successfully is analyzed and listed out in this stage.
4. Requirement Analysis : The list of all the requirements needed to accomplish the project is gathered and analyzed.

3.1.3. SRS (Software Requirement Specification):
Roles:

Project Manager (P.M)
Process:
The SRS document is one of the client requirement documents.
The SRS is often referred to as the "parent" document because all subsequent project management documents, such as design specifications, statements of work, software architecture specifications, testing and validation plans, and documentation plans, are related to it.

An SRS is basically an organization's understanding (in writing) of a customer or potential client's system requirements and dependencies at a particular point in time (usually) prior to any actual design or development work. It's a two-way insurance policy that assures that both the client and the organization understand the other's requirements from that perspective at a given point in time.

It's important to note that an SRS contains functional and nonfunctional requirements only; it doesn't offer design suggestions, possible solutions to technology or business issues, or any other information other than what the development team understands the customer's system requirements to be.
A well-designed, well-written SRS accomplishes four major goals:
* It provides feedback to the customer. An SRS is the customer's assurance that the development organization understands the issues or problems to be solved and the software behavior necessary to address those problems. Therefore, the SRS should be written in natural language in an unambiguous manner that may also include charts, tables, data flow diagrams, decision tables, and so on.
* It decomposes the problem into component parts. The simple act of writing down software requirements in a well-designed format organizes information, places borders around the problem, solidifies ideas, and helps break down the problem into its component parts in an orderly fashion.
* It serves as an input to the design specification. As mentioned previously, the
SRS serves as the parent document to subsequent documents, such as the software design specification and statement of work. Therefore, the SRS must contain sufficient detail in the functional system requirements so that a design solution can be devised.
* It serves as a product validation check. The SRS also serves as the parent document for testing and validation strategies that will be applied to the requirements for verification.

3.2. Software Design:
The development process is the process by which the user requirements are elicited, and software satisfying these requirements is designed, built, tested, and delivered to the customer.

3.2.1. HLD (High Level Designed Document):
a. Purpose of this Document
The HLD is also called the Architectural Design Document, the System Design Document, or the Macro-Level Design.

It is the phase of the life cycle in which a logical view of the computer implementation of the solution to the customer requirements is developed. It gives the solution at a high level of abstraction. The solution contains two main components: the functional architecture of the application, and the database design.

(Or)
This High-Level Design (HLD) document specifies the implementation, including intercomponent dependencies, and provides sufficient design detail that any product based on this HLD will satisfy the product requirements.
(Or)
This is the first place other developers/maintainers will look to learn how your project works. It must provide a comprehensive outline of the entire design. The design lead is responsible for writing the overview. Write this section as HTML and place it in the javadoc overview file.


3.2.2. LLD (Low Level Designed Document):
The LLD is also called the Detailed Design or the Micro-Level Design.

During the detailed design phase, the view of the application developed during high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications. For every program, a unit test is created. Important activities in the detailed design stage include the identification of common routines and programs, the development of skeleton programs, and the development of utilities and tools for productivity improvement.
(Or)

This document describes each and every module in an elaborate manner, so that the programmer can directly code the program based on it. There will be at least one document for each module, and there may be more than one. The LLD will contain:
- Detailed functional logic of the module, in pseudo code.
- Database tables, with all elements including their type and size.
- All interface details with complete API references (both requests and responses).
- All dependency issues.
- Error message listings.
- Complete inputs and outputs for the module.

3.3. Coding:
This coding part is done only by programmers, also known as developers.

First the developer writes the code for the application (or product) based on the Low-Level Design (LLD) document, because this is the detailed document derived from the SRS. After writing the code for the LLD, the programmer writes the code for the HLD.



3.4. Testing:
3.4.1. Levels of Testing:
3.4.1.1. Unit Testing:
Unit testing is also called Component Testing.

Unit testing is defined as testing an individual module. The testing is done on a unit, the smallest piece of software. Unit testing is done to verify that the unit satisfies its functional specification or its intended design structure.
(Or)
This is the first and most important level of testing. As soon as the programmer develops a unit of code, the unit is tested for various scenarios. It is much more economical to find and eliminate bugs early, as the application is being built; hence unit testing is the most important of all the testing levels. As the software project progresses, it becomes more and more costly to find and fix bugs.
In most cases it is the developer’s responsibility to deliver Unit Tested Code.
a. Benefits of Unit Testing:
Assurance of working components before integration
Tests are repeatable - Every time you change something you can rerun your suite of tests to verify that the unit still works.
Tests can be designed to ensure that the code fulfills the requirements.
All debugging is separated from the code.
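
As an illustration of these points, here is a minimal sketch of a repeatable unit test using Python's unittest framework (the add function and its expected values are hypothetical, not from any particular project):

import unittest

def add(a, b):
    # Unit under test: a deliberately small, self-contained function.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()  # rerun the whole suite after every change

Because the suite is automated, it can be rerun after every change to verify that the unit still works.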

b. Component Test Process:
a) Component Test Planning;
b) Component Test Specification;
c) Component Test Execution;
d) Component Test Recording;
e) Checking for Component Test Completion.

The generic component test process works as follows. Component Test Planning shall begin the test process and Checking for Component Test Completion shall end it; these activities are carried out for the whole component. Component Test Specification, Component Test Execution, and Component Test Recording may, however, on any one iteration, be carried out for a subset of the test cases associated with a component. Later activities for one test case may occur before earlier activities for another. Whenever an error is corrected by making a change or changes to the test materials or to the component under test, the affected activities shall be repeated.



3.4.1.2. Integration Testing:
“Testing performed to expose faults in the interfaces and in the interaction between integrated components”
Testing of combined parts of an application to determine whether they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Objective:
The typical objectives of software integration testing are to:
Cause failures involving the interactions of the integrated software components when running on a single platform.
Report these failures to the software development team so that the underlying defects can be identified and fixed.
Help the software development team to stabilize the software so that it can be successfully distributed prior to system testing.
Minimize the number of low-level defects that will prevent effective system and launch testing.
Entry criteria:
The integration team is adequately staffed and trained in software integration testing.
The integration environment is ready.
The first two software components have:
Ø Passed unit testing.
Ø Been ported to the integration environment.
Ø Been integrated.
Documented evidence that the component has successfully completed unit testing.
Adequate program or component documentation is available.
Verification that the correct version of the unit has been turned over for integration.
Exit criteria:
1. A test suite of test cases exists for each interface between software components.
2. All software integration test suites successfully execute (i.e., the tests completely execute and the actual test results match the expected test results).
3. Successful execution of the integration test plan.
4. No open severity 1 or 2 defects.
5. Component stability.
Guidelines:
The iterative and incremental development cycle implies that software integration testing is regularly performed in an iterative and incremental manner.
Software integration testing must be automated if adequate regression testing is to occur.
Software integration testing can elicit failures produced by defects that are difficult to detect during system or launch testing once the system has been completely integrated.

3.4.1.2.1. Incremental Integration Testing
“Integration testing where system components are integrated into the system one at a time until the entire system is integrated”
Continuous testing of an application as new functionality is added. This requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; it is done by programmers or by testers.

Incremental integration testing is divided into three types:
a.) Top down Integration.
b.) Bottom up Integration.
c.) Sandwich Integration.

a.) Top down Integration
“An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level component has been tested.”

Modules are integrated by moving down the program design hierarchy.
Top-down integration can proceed depth-first or breadth-first.





Steps:
Main control module used as the test driver, with stubs for all subordinate modules.
Replace stubs either depth-first or breadth-first.
Replace stubs one at a time.
Test after each module is integrated.
Use regression testing (re-running all or some of the previous tests) to ensure that new errors are not introduced.
This approach verifies major control and decision points early in the design process.

b.) Bottom up Integration:
“An approach to integration testing where the lowest level components are tested first then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.”

Begin construction and testing with atomic modules (lowest level modules).
Use a driver program to test.





Steps:
1. Low-level modules are combined into clusters (builds) that perform specific software sub-functions.
2. A driver program is developed to test each cluster, and the cluster is tested.
3. Driver programs are removed and clusters are combined, moving upwards in the program structure.

i. Stub and Drivers:

Stubs:
Stubs are program units that stand in for other (more complex) program units that are directly referenced by the unit being tested.
Stubs are usually expected to provide the following:
An interface that is identical to the interface that will be provided by the actual program unit, and the minimum acceptable behaviour expected of the actual program unit (this can be as simple as a return statement).
Drivers:
Drivers are programs or tools that allow a tester to exercise and examine, in a controlled manner, the unit of software being tested.
A driver is usually expected to provide the following:
A means of defining, declaring, or otherwise creating any variables, constants, or other items needed in the testing of the unit; a means of monitoring the states of these items; and any input and output mechanisms needed in the testing of the unit.
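
As a rough sketch of how a stub and a driver fit together (the module names and the canned price here are hypothetical), consider a unit that depends on a pricing module that has not been written yet:

# Unit under test: depends on a lower-level pricing unit that is not ready yet.
def checkout_total(items, pricing_service):
    # Sum the prices of the given items using the supplied pricing service.
    return sum(pricing_service.price_of(item) for item in items)

# Stub: same interface as the real pricing unit, minimum acceptable behaviour.
class PricingServiceStub:
    def price_of(self, item):
        return 10.0  # canned response; as simple as a return statement

# Driver: creates the test data, exercises the unit, and monitors the result.
def run_driver():
    stub = PricingServiceStub()
    total = checkout_total(["pen", "book"], stub)
    assert total == 20.0, "expected 20.0, got %s" % total
    print("checkout_total passed with the stubbed pricing unit")

if __name__ == "__main__":
    run_driver()

The stub lets the higher-level unit be tested before the real pricing unit exists (top-down), while the driver plays the role of the missing caller (bottom-up).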

c.) Sandwich Testing:
This testing is also called Bidirectional Testing or Hybrid Testing.
It is a combination of both bottom-up and top-down testing, applied layer by layer from both ends of the hierarchy.



3.4.1.3. Smoke Testing:
Smoke testing is done by developers; it is one of the last checks before a build is handed over to the testers.
Smoke testing is also called Build Verification Testing.
A build is nothing but an executable (.exe) file, the compiled, executable form of the source code.
When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing.
Smoke testing can be done to test the stability of any interim build.
Smoke testing can be executed for platform qualification tests.
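
As a rough sketch (the build path ./app and the --version flag are assumptions for illustration, not part of any standard), an automated smoke check can simply launch the new build and verify that it starts and exits cleanly:

import subprocess

def smoke_test(build_path="./app"):
    # Launch the freshly built executable and check that it runs at all.
    try:
        result = subprocess.run([build_path, "--version"],
                                capture_output=True, timeout=30)
    except (OSError, subprocess.TimeoutExpired) as exc:
        return False, "build failed to launch: %s" % exc
    if result.returncode != 0:
        return False, "build exited with code %d" % result.returncode
    return True, "build is stable enough for further testing"

if __name__ == "__main__":
    ok, message = smoke_test()
    print(("PASS: " if ok else "FAIL: ") + message)

If this check fails, the build is rejected without spending any further testing effort on it.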

3.4.1.4. Sanity Testing:
Sanity Testing is one of the basic tests for testers.
Once a new build is obtained with minor revisions, instead of doing a thorough regression, sanity testing is performed to ascertain that the build has indeed rectified the reported issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing, and a group of test cases is executed that relates to the changes made to the application.
Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.

3.4.1.5. System Testing:
“System testing is the process of testing an integrated system to verify that it meets specified requirements".

System test Entrance Criteria:
Successful execution of the Integration test cases
No open severity 1 or 2 defects
75-80% of total system functionality and 90% of major functionality delivered
System stability for 48-72 hours before testing can start

System Test Exit Criteria:
Successful execution of the system test cases, and documentation that shows coverage of requirements and high-risk system components
System meets pre-defined quality goals
100% of total system functionality delivered

3.4.1.5.1. Types of system testing:
a.) Functional Testing:
b.) Non Functional Testing:


A.) Functional Testing:
Functional testing can be subdivided into two types:
Ø Functionality Testing.
Ø Sanitation Testing.

a. Functionality Testing:
Functionality testing can be done by using the following coverage areas:
i. GUI Coverage.
ii. Input Domain Coverage.
iii. Output Domain Coverage.
iv. Error Handling Coverage.
v. Database Coverage.
vi. Order of Functionality.

i.) GUI Coverage:
Under GUI coverage the testers check the properties of the objects in the application.
The properties include:
1. Height.
2. Width.
3. Text.
4. Length.
5. Start Position (X).
6. End Position (Y).
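
As a rough illustration (assuming a desktop GUI built with Python's tkinter; the Login button and its expected property values are hypothetical), a GUI coverage check can assert such properties directly:

import tkinter as tk

root = tk.Tk()
root.withdraw()  # the window need not be shown for a property check

# Hypothetical widget under test, with its expected properties.
login_button = tk.Button(root, text="Login", width=12, height=2)

assert login_button.cget("text") == "Login", "text property mismatch"
assert int(login_button.cget("width")) == 12, "width property mismatch"
assert int(login_button.cget("height")) == 2, "height property mismatch"

print("GUI property checks passed")
root.destroy()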

ii.) Input Domain Coverage:
Under input domain coverage the testers check the inputs to the application against the customer or client requirements.
They check whether each input is accepted by the application or not.
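
For example (a sketch with a hypothetical rule that a username must be 4 to 12 alphanumeric characters), input domain coverage walks through valid, invalid, and boundary values:

import re

def is_valid_username(name):
    # Hypothetical requirement: 4 to 12 alphanumeric characters.
    return bool(re.fullmatch(r"[A-Za-z0-9]{4,12}", name))

# Boundary and representative values drawn from the input domain.
cases = {
    "abc": False,        # below the minimum length
    "abcd": True,        # lower boundary
    "a" * 12: True,      # upper boundary
    "a" * 13: False,     # above the maximum length
    "user name": False,  # disallowed character
}

for value, expected in cases.items():
    assert is_valid_username(value) == expected, "unexpected result for %r" % value
print("input domain checks passed")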
iii.) Output Domain Coverage:
Under output domain coverage the testers check the output of the application for the inputs given under input domain coverage.
The output is nothing but the customer's expectation.
iv.) Error Handling Coverage:
Under error handling coverage the testers check whether the application has the capability to move from an abnormal state back to a normal state. An abnormal state is nothing but an error-handling state.
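
A minimal sketch (the divide function and its error behaviour are hypothetical): the test checks both that the error is reported and that the application returns to a normal state afterwards:

import unittest

def divide(a, b):
    # Hypothetical unit: must reject division by zero instead of crashing.
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

class TestErrorHandling(unittest.TestCase):
    def test_abnormal_state_is_reported(self):
        with self.assertRaises(ValueError):
            divide(10, 0)

    def test_returns_to_normal_state(self):
        # After the error, a valid call must still succeed.
        self.assertEqual(divide(10, 2), 5)

if __name__ == "__main__":
    unittest.main()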

v.) Database Coverage:
Database coverage is also called backend coverage.
Database coverage is nothing but checking the values entered into and retrieved from the database.
Database coverage can be subdivided into:
· Data Validation.
· Data Integrity.
Data Validation:
It is nothing but checking whether a new value entered at the front end is stored as the same value in the database or not.
Data Integrity:
It is nothing but checking whether modified values are stored in the database or not.
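
A rough sketch of both checks, using an in-memory SQLite database (the users table and the save/rename helpers are hypothetical stand-ins for the real front end):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def save_user(name):  # hypothetical front-end insert path
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return cur.lastrowid

def rename_user(user_id, name):  # hypothetical front-end update path
    conn.execute("UPDATE users SET name = ? WHERE id = ?", (name, user_id))

# Data validation: the value entered at the front end reaches the database unchanged.
user_id = save_user("Ravi")
(stored,) = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
assert stored == "Ravi", "data validation failed"

# Data integrity: the modified value is stored in the database.
rename_user(user_id, "Ravi Kumar")
(stored,) = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
assert stored == "Ravi Kumar", "data integrity failed"
print("database coverage checks passed")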



vi.) Order of Functionality:
This checks the functionality of the application in order, that is, it checks that the inputs to the application are given according to the prescribed procedure.
If the inputs are not given in the prescribed order, the application will show an error message.

b. Sanitation Testing:
Sanitation testing is nothing but checking for extra functionalities in the application, because unchecked extra functionality can affect the main functionality as well.
That is why the tester first checks all the main functionalities that are listed in the customer requirement document, and only then checks the extra functionalities that are not mentioned in it.

B.) Non Functional Testing:
Non-functional testing is nothing but checking the performance characteristics of the application rather than its functionality.
Non-functional testing can be subdivided into:
i.) Usability testing.
ii.) Recovery testing.
iii.) Security testing.
iv.) Compatibility testing.
v.) Configuration testing.
vi.) Data-Volume Testing.
vii.) Performance testing.
i.) Usability Testing:
Usability Testing is also called User-Interface Testing.
Usability testing is nothing but a user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck? Basically, system navigation is checked in this testing.
User friendliness means
1. Easy to use.
2. Easy to follow.
3. Easy to understand.
4. Attractive look and feel.
(Or)
“Testing the ease with which users can learn and use a product.”
All aspects of user interfaces are tested:
1. Display screens.
2. Messages.
3. Report formats.
4. Navigation and selection problems.

ii.) Recovery testing:
“Testing aimed at verifying the system's ability to recover from varying degrees of failure.”

Recovery is the ability to restart operations after the integrity of the application has been lost. The process normally involves reverting to a point where the integrity of the system is known, and then reprocessing transactions up until the point of failure. The importance of recovery will vary from application to application.
Objectives:
Recovery testing is used to ensure that operations can be continued after a disaster. Recovery testing not only verifies the recovery process, but also the effectiveness of the component parts of that process. Specific objectives of recovery testing include:
1. Adequate backup data is preserved.
2. Backup data is stored in a secure location.
3. Recovery procedures are documented.
4. Recovery personnel have been assigned and trained.
5. Recovery tools have been developed and are available.

When to use Recovery Testing:
Recovery testing should be performed whenever the user of the application states that the continuity of operation of the application is essential to the proper functioning of the user area. The user should estimate the potential loss associated with the inability to recover operations over various time spans. The amount of the potential loss should determine both the amount of resources to be put into disaster planning and the amount to be put into recovery testing.

iii.) Security testing:
“Testing whether the system meets its specified security objectives.”
Security is a protection system that is needed for both secure confidential information and for competitive purposes to assure third parties their data will be protected. Protecting the confidentiality of the information is designed to protect the resources of the organization. Security testing is designed to evaluate the adequacy of the protective procedures and countermeasures.

Objectives:
Security defects do not become as obvious as other types of defects. Therefore, the objectives of security testing are to identify defects that are very difficult to identify. Even failures in the security system operation may not be detected, resulting in a loss or compromise of information without the knowledge of that loss. The security testing objectives include:
Determining that adequate attention has been devoted to identifying security risks.
Determining that a realistic definition and enforcement of access to the system has been implemented.
Determining that sufficient expertise exists to perform adequate security testing.
Conducting reasonable tests to ensure that the implemented security measures function properly.

When to Use security Testing:
Security testing should be used when the information and/or assets protected by the application system are of significant value to the organization. The testing should be performed both prior to the system going into an operational status and after the system is placed into an operational status. The extent of testing should depend on the security risks, and the individual assigned to conduct the test should be selected based on the estimated sophistication that might be used to penetrate security.

iv.) Compatibility testing:
Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application's compatibility with the computing environment. Computing environment may contain some or all of the below mentioned elements:
Operating systems (MVS, UNIX, Windows, etc.)
Other System Software (Web server, networking/ messaging tool, etc.)
Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)

v.) Configuration testing:
This testing is also called Hardware Compatibility Testing. During this testing the tester validates how well the current project is able to run on different types of hardware, such as different types of printers, network interface cards (NIC), and network topologies. This testing is also called hardware testing or portability testing.


vi.) Data-Volume testing:
Data volume testing is also called Storage Testing or Memory Testing.
Volume testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size, or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (this could be any file, such as .dat or .xml); this interaction could be reading from and/or writing to the file. You will create a sample file of the size you want and then test the application's functionality with that file in order to test the performance. In short, volume testing means testing the software with a large volume of data in the database.
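
A rough sketch of the interface-file case (the 100 MB size and the process_file function are assumptions for illustration):

import os
import tempfile
import time

def process_file(path):
    # Hypothetical application function under test: consumes the interface file.
    with open(path, "rb") as f:
        while f.read(1024 * 1024):  # read the file in 1 MB chunks
            pass

# Create a sample interface file of the size named in the requirement.
with tempfile.NamedTemporaryFile(suffix=".dat", delete=False) as f:
    f.truncate(100 * 1024 * 1024)  # 100 MB
    path = f.name

start = time.perf_counter()
process_file(path)
elapsed = time.perf_counter() - start
print("processed the 100 MB interface file in %.2f s" % elapsed)
os.unlink(path)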

vii.) Performance testing:
It is nothing but checking the response time of the application. Response time is nothing but how much time the application takes to perform a particular task.
(Or)
“Testing conducted to evaluate the compliance of a system or component with specified performance requirements.”
Performance testing is designed to test the run-time performance of software within the context of an integrated system. It is not until all system elements are fully integrated and certified as free of defects that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and often require both hardware and software infrastructure; that is, it is necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals and log events. By instrumenting the system, the tester can uncover situations that lead to degradation and possible system failure.
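
A minimal sketch of a response-time check (the handle_request function and the one-second requirement are hypothetical):

import time

def handle_request():
    # Hypothetical operation whose response time is specified.
    time.sleep(0.1)  # stands in for the real work

REQUIREMENT_SECONDS = 1.0  # assumed performance requirement

start = time.perf_counter()
handle_request()
elapsed = time.perf_counter() - start

assert elapsed <= REQUIREMENT_SECONDS, "too slow: %.3f s" % elapsed
print("response time %.3f s is within the requirement" % elapsed)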

Performance testing can be subdivided into:
Load Testing.
Stress Testing.

Load testing:
Load Testing involves stress testing applications under real-world conditions to predict system behaviour and performance and to identify and isolate problems. Load testing applications can emulate the workload of hundreds or even thousands of users, so that you can predict how an application will work under different user loads and determine the maximum number of concurrent users accessing the site at the same time.
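
A rough sketch of emulating concurrent users with threads (the URL and the user count are hypothetical; real load tests normally use dedicated tools):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"  # hypothetical application under test
USERS = 50                      # emulated concurrent users

def one_user():
    # One simulated user: request the page and time the response.
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start
    except OSError:
        return None  # failures under load are counted, not raised

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(lambda _: one_user(), range(USERS)))

ok = [t for t in results if t is not None]
print("%d/%d requests succeeded" % (len(ok), USERS))
if ok:
    print("average response time %.3f s" % (sum(ok) / len(ok)))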

Stress Testing:
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

3.4.1.6. User Acceptance Testing:
“Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component.”

User Acceptance Testing (UAT) is performed by Users or on behalf of the users to ensure that the Software functions in accordance with the Business Requirement Document. UAT focuses on the following aspects:
1. All functional requirements are satisfied
2. All performance requirements are achieved
3. Other requirements like transportability, compatibility, error recovery, etc. are satisfied.
4. The acceptance criteria specified by the user are met.
Entry Criteria
1. SIT must be completed.
2. Availability of a stable test environment with the latest version of the application.
3. Test cases prepared by the testing team to be reviewed and signed off by the Project Coordinator (AGM-Male).
4. All user IDs requested by the testing team to be created and made available to the testing team one week prior to the start of testing.
Exit Criteria
1. All test scenarios/conditions are executed, and reasons are provided for untested conditions arising out of the following situations:
Ø Non-availability of the functionality.
Ø Deferred to a future release.
2. All defects reported are in the ‘Closed’ or ‘Deferred’ status. The client team should sign off the ‘Deferred’ defects.

User Acceptance can be sub divided into
Ø Alpha Testing.
Ø Beta Testing.

a. Alpha Testing:
Alpha testing is conducted at the developer's site by a customer. The customer uses the software with the developer 'looking over the shoulder' and recording errors and usage problems. Alpha testing is conducted in a controlled environment.

b. Beta Testing:
Beta testing is conducted at one or more customer sites by end users. It is 'live' testing in an environment not controlled by the developer. The customer records and reports difficulties and errors at regular intervals.

3.5. Implementation:
Implementation is nothing but installing the developed application at the client or customer's site and getting the application working there.

3.6. Maintenance:
We maintain the application by using the following methods:
Ø Bug Fixing.
Ø Enhancement.
Ø Upgradation.

3.6.1. Bug Fixing:
If, after implementation, the customer or client finds a defect, then we need to clear that bug; this is nothing but bug fixing.

3.6.2. Enhancement:
Adding extra features or modifying the already developed application is called enhancement.

3.6.3. Upgradation:
After adding extra features, the version name of the application is changed; this is nothing but upgradation.
