Wednesday, November 11, 2009

17. Questions

Q1: Why does software have bugs?
Ans:

i. Miscommunication or no communication - developers may not fully understand the application's requirements.
ii. Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
iii. Programming errors - programmers, like everyone else, can make mistakes.
iv. Changing requirements - a redesign, rescheduling of engineers, effects on other projects, etc.
v. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
vi. Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
vii. Poorly documented code - it is tough to maintain and modify code that is badly written or poorly documented; the result is bugs.
viii. Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.
Q2: What does "finding a bug" consist of?
Ans:
Finding a bug consists of a number of steps:
i. Searching for and locating a bug
ii. Analyzing the exact circumstances under which the bug occurs
iii. Documenting the bug found
iv. Reporting the bug and, if necessary, simulating the error
v. Testing the fixed code to verify that the bug is really fixed (a regression-test sketch follows this list)
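Below is a minimal sketch of step v in Python: a regression test that reproduces the original failing circumstances and verifies the fix. The parse_price function and bug #123 are hypothetical, used only for illustration.

    # Hypothetical example: bug #123 was "commas in a price string raise
    # ValueError". The test reproduces those exact circumstances so the
    # fix stays verified on every future run.

    def parse_price(text):
        """Parse a price string like '$1,234.50' into a float."""
        return float(text.replace("$", "").replace(",", ""))

    def test_parse_price_handles_thousands_separator():
        assert parse_price("$1,234.50") == 1234.50

    if __name__ == "__main__":
        test_parse_price_handles_thousands_separator()
        print("bug fix verified")
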
Q3: What will happen about bugs that are already known?
Ans:
When a program is sent for testing (or a website is given), a list of any known bugs should accompany it. If a bug is found, the list is checked to ensure that it is not a duplicate. Any bug not found on the list is assumed to be new.
Q4: What's the big deal about 'requirements'?
Ans:
Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear and documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without such documentation, there is no clear-cut way to determine whether a software application is performing correctly.
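By contrast, a testable requirement such as 'a search completes in under 2 seconds' can be checked mechanically. A minimal sketch, assuming a hypothetical search() function standing in for the real application call:

    import time

    def search(query):
        # Stand-in for the real application call (an assumption of this sketch).
        time.sleep(0.1)
        return ["result for " + query]

    def test_search_meets_response_time_requirement():
        start = time.perf_counter()
        results = search("testing")
        elapsed = time.perf_counter() - start
        assert elapsed < 2.0, f"took {elapsed:.2f}s, requirement is < 2s"
        assert results, "search must return at least one result"

    if __name__ == "__main__":
        test_search_meets_response_time_requirement()
        print("requirement met")
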
Q5: What can be done if requirements are changing continuously?
Ans:
It's helpful if the application's initial design allows for some adaptability so that changes made later do not require redoing the application from scratch. To make changes easier for the developers, the code should be well commented and well documented. Use rapid prototyping whenever possible to help customers feel sure of their requirements and to minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases; this is not easily done, so the best bet may be to minimize the detail in the test cases or set up only higher-level, generic test plans.
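One way to build that flexibility in is to drive a single generic test from a data table, so a requirements change means editing data rather than rewriting test logic. A sketch, with a hypothetical apply_discount() function and illustrative expected values:

    def apply_discount(price, percent):
        # Hypothetical function under test.
        return round(price * (1 - percent / 100), 2)

    # Expected values live in one table that is cheap to update when
    # requirements change.
    CASES = [
        (100.00, 10, 90.00),
        (59.99, 0, 59.99),
        (20.00, 50, 10.00),
    ]

    def test_apply_discount():
        for price, percent, expected in CASES:
            assert apply_discount(price, percent) == expected, (price, percent)

    if __name__ == "__main__":
        test_apply_discount()
        print("all cases pass")
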
Q6: When to stop testing?
Ans:
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be performed.
Common factors in deciding when to stop testing are listed below (a sketch of such exit criteria follows the list):
i. Deadlines achieved (release deadlines, testing deadlines, etc.)
ii. Test cases completed with a certain percentage passed
iii. Test budget depleted
iv. Coverage of code/functionality/requirements reaches a specified point
v. Defect rate falls below a certain level
vi. Beta or Alpha testing period ends
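Here is a small sketch of how several of these criteria might be encoded as an explicit exit check; the thresholds shown are illustrative assumptions, not universal values:

    def ready_to_stop(pass_rate, coverage, weekly_defect_rate, budget_left):
        """Return True when the agreed exit criteria are all met."""
        return (
            pass_rate >= 0.95            # ii. test cases passed
            and coverage >= 0.80         # iv. code/requirement coverage
            and weekly_defect_rate <= 2  # v. defect rate low enough
            and budget_left >= 0         # iii. budget not depleted
        )

    print(ready_to_stop(pass_rate=0.97, coverage=0.85,
                        weekly_defect_rate=1, budget_left=5000))  # True
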
Q7: How does a client/server environment affect testing?
Ans:
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Testing requirements can therefore be extensive. When time is limited (as it usually is), the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities.
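A minimal load-test sketch in that spirit, using only the Python standard library; the endpoint URL is a placeholder assumption, and the script expects a server to actually be running there:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/health"  # hypothetical server endpoint

    def timed_request(_):
        start = time.perf_counter()
        with urlopen(URL, timeout=5) as response:
            response.read()
        return time.perf_counter() - start

    def run_load_test(concurrent_clients=20):
        # Fire requests from many clients at once and report the worst case.
        with ThreadPoolExecutor(max_workers=concurrent_clients) as pool:
            timings = list(pool.map(timed_request, range(concurrent_clients)))
        print(f"max response time: {max(timings):.3f}s "
              f"under {concurrent_clients} concurrent clients")

    if __name__ == "__main__":
        run_load_test()
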
Q8: Does it matter how much the software has been tested already?
Ans:
No. It is up to the tester to decide how much testing is needed, regardless of any testing already done. An initial assessment of the software is made, and it is classified into one of three possible stability levels:
i. Low stability (bugs are expected to be easy to find, indicating that the program has not been tested or has only been very lightly tested)
ii. Normal stability (normal level of bugs, indicating a normal amount of programmer testing)
iii. High stability (bugs are expected to be difficult to find, indicating already well tested)
Q9: How is testing affected by object-oriented designs?
Ans:
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black-box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.
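For example, object-oriented white-box tests can target one class at a time, exercising its methods and internal state transitions. A sketch with a hypothetical ShoppingCart class:

    class ShoppingCart:
        def __init__(self):
            self._items = {}

        def add(self, sku, quantity=1):
            self._items[sku] = self._items.get(sku, 0) + quantity

        def total_items(self):
            return sum(self._items.values())

    def test_adding_same_sku_accumulates_quantity():
        # White-box knowledge: quantities for a repeated SKU are merged,
        # not stored as duplicate entries.
        cart = ShoppingCart()
        cart.add("ABC-1")
        cart.add("ABC-1", 2)
        assert cart.total_items() == 3

    if __name__ == "__main__":
        test_adding_same_sku_accumulates_quantity()
        print("object-level test passed")
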
Q10: Will automated testing tools make testing easier?
Ans:
A tool set that allows controlled access to all test assets promotes better communication between all the team members, and will ultimately break down the walls that have traditionally existed between various groups.
Automated testing tools are only one part of the solution to achieving customer success. The complete solution is based on providing the user with the principles, tools, and services needed to develop software efficiently.
Q11: Why outsource testing?
Ans:
i. Skill and expertise - developing and maintaining a team that has the expertise to thoroughly test complex and large applications is expensive and effort-intensive; testing a software application now involves a variety of skills.
ii. Focus - using a dedicated and expert test team frees the development team to focus on sharpening their core skills in design and development, in their domain areas.
iii. Independent assessment - an independent test team looks afresh at each test project while bringing the experience of earlier test assignments, for different clients, on multiple platforms, and across different domain areas.
iv. Save time - testing can proceed in parallel with the software development life cycle to minimize the time needed to develop the software.
v. Reduce cost - outsourcing testing offers the flexibility of having a large test team only when needed. This reduces carrying costs and at the same time reduces the ramp-up time and costs associated with hiring and training temporary personnel.
Q12: What steps are needed to develop and run software tests?
Ans:
The following are some of the steps needed to develop and run software tests:
i. Obtain requirements, functional design, and internal design specifications and other necessary documents
ii. Obtain budget and schedule requirements
iii. Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
iv. Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
v. Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
vi. Determine test environment requirements (hardware, software, communications, etc.)
vii. Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
viii. Determine test input data requirements
ix. Identify tasks, those responsible for tasks, and labor requirements
x. Set schedule estimates, timelines, and milestones
xi. Determine input equivalence classes, boundary value analyses, and error classes (a boundary value sketch follows this list)
xii. Prepare the test plan document and have needed reviews/approvals
xiii. Write test cases
xiv. Have needed reviews/inspections/approvals of test cases
xv. Prepare the test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, and set up or obtain test input data
xvi. Obtain and install software releases
xvii. Perform tests
xviii. Evaluate and report results
xix. Track problems/bugs and fixes
xx. Retest as needed
xxi. Maintain and update test plans, test cases, the test environment, and testware through the life cycle
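As promised in step xi above, here is a boundary value analysis sketch for a hypothetical rule 'age must be 18-65 inclusive': test just inside and just outside each boundary.

    def is_eligible(age):
        # Hypothetical rule under test: 18-65 inclusive.
        return 18 <= age <= 65

    BOUNDARY_CASES = [
        (17, False), (18, True),   # lower boundary
        (65, True), (66, False),   # upper boundary
    ]

    def test_age_boundaries():
        for age, expected in BOUNDARY_CASES:
            assert is_eligible(age) == expected, f"age {age}"

    if __name__ == "__main__":
        test_age_boundaries()
        print("boundary cases pass")
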
Q13: What is a Test Strategy and Test Plan?
Ans:
A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques, and tools to be used. A test strategy should ideally be organization-wide, applicable to all of the organization's software development projects. Developing a test strategy that efficiently meets the needs of an organization is critical to the success of software development within the organization. The application of a test strategy to a software development project should be detailed in the project's software quality plan.
The next stage of test design, which is the first stage within a software development project, is the development of a test plan. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment.
A test plan may be project wide, or may in fact be a hierarchy of plans relating to the various levels of specification and testing:
i. An Acceptance Test Plan, describing the plan for acceptance testing of the software. This would usually be published as a separate document, but might be published with the system test plan as a single document.
ii. A System Test Plan, describing the plan for system integration and testing. This would also usually be published as a separate document, but might be published with the acceptance test plan.
iii. A Software Integration Test Plan, describing the plan for integration of tested software components. This may form part of the Architectural Design Specification.
iv. Unit Test Plan(s), describing the plans for testing of individual units of software. These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfills the requirements or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this means the Requirements Specification.
