Software Testing Strategies

    by: Rogers

1: Software Testing Strategies, based on Chapter 13 - Software Engineering: A Practitioner's Approach, 6/e. Copyright © 1996, 2001, 2005 R.S. Pressman & Associates, Inc. For University Use Only. May be reproduced ONLY for student use at the university level when used in conjunction with Software Engineering: A Practitioner's Approach. Any other reproduction or use is expressly prohibited.
2: Software Testing. Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user. Testing uncovers errors and demonstrates requirements conformance and performance; it gives an indication of quality.
3: Who Tests the Software? The developer understands the system but will test "gently" and is driven by "delivery." The independent tester must learn about the system, but will attempt to break it, and is driven by quality.
4: Levels of Testing
• Unit testing
• Integration testing
• Validation testing: focus is on software requirements
• System testing: focus is on system integration
• Alpha/Beta testing: focus is on customer usage
• Recovery testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed
• Security testing: verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
• Stress testing: executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
• Performance testing: tests the run-time performance of software within the context of an integrated system
5: Unit Testing. The software engineer applies test cases to the module to be tested and examines the results. Unit testing focuses on the module interface, local data structures, boundary conditions, independent paths, and error-handling paths.
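A minimal unit-test sketch of those focus areas, assuming pytest and a hypothetical withdraw() function (neither appears on the slides): it exercises the module interface, a boundary condition, and the error-handling paths.

```python
# Hypothetical module under test: withdraw(balance, amount) -> new balance.
import pytest

def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_interface_typical_case():
    # normal path through the module interface
    assert withdraw(100, 30) == 70

def test_boundary_condition():
    # boundary: withdrawing the exact balance leaves zero
    assert withdraw(100, 100) == 0

def test_error_handling_paths():
    # error-handling paths: invalid amount and overdraft both raise
    with pytest.raises(ValueError):
        withdraw(100, 0)
    with pytest.raises(ValueError):
        withdraw(100, 101)
```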
6: Integration Testing Strategies. Options: • the "big bang" approach • an incremental construction strategy (top-down, bottom-up, or sandwich integration).
7: OOT Strategy. Class testing is the equivalent of unit testing: operations within the class are tested and the state behavior of the class is examined. Integration applies three different strategies/levels of abstraction: thread-based testing (integrates the set of classes required to respond to one input or event), use-based testing (integrates the set of classes required to respond to one use case), and cluster testing (integrates the set of classes required to demonstrate one collaboration). (...if there is no nesting of classes ...this is pushing...)
8: Debugging: A Diagnostic Process. Test cases produce results; results point to suspected causes; suspected causes are narrowed to identified causes; corrections are applied and verified with regression tests and new test cases.
9: Software Testing Techniques, based on Chapter 14 - Software Engineering: A Practitioner's Approach, 6/e. Copyright © 1996, 2001, 2005 R.S. Pressman & Associates, Inc. For University Use Only. May be reproduced ONLY for student use at the university level when used in conjunction with Software Engineering: A Practitioner's Approach. Any other reproduction or use is expressly prohibited.
10: What is a "Good" Test? It has a high probability of finding an error, is not redundant, and is neither too simple nor too complex. "Bugs lurk in corners and congregate at boundaries ..." (Boris Beizer). OBJECTIVE: to uncover errors. CRITERIA: in a complete manner. CONSTRAINT: with a minimum of effort and time.
11: Exhaustive Testing. For a small flow graph with a loop executed fewer than 20 times, there are approx. 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!! Where does 10^14 come from? White-Box or Black-Box?
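A quick check of the slide's arithmetic, assuming exactly 10^14 paths and one test per millisecond:

```python
# Back-of-the-envelope check: 10**14 paths at one test per millisecond.
paths = 10 ** 14
seconds = paths / 1_000                # one test per millisecond
years = seconds / (365 * 24 * 3600)    # ignoring leap years
print(f"{years:,.0f} years")           # prints roughly 3,171 years
```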
12: RE in V Model [Chung]. (V-model diagram: the descending, analyze-and-design side covers system requirements, software requirements, preliminary design, detailed design, and code & debug; the ascending, test-and-integrate side covers unit test, component test, software integration, system integration, and acceptance test; the axes are time and level of abstraction.)
13: Software Testing. White-Box Testing: ... our goal is to ensure that all statements and conditions have been executed at least once ... Black-Box Testing: driven by requirements and events; exercises inputs and examines outputs.
14: Basis Path Testing (White-Box Testing). First, we compute the cyclomatic complexity: number of simple decisions + 1, or number of enclosed areas + 1. In this case, V(G) = 4.
15: Basis Path Testing (White-Box Testing). Next, we derive the independent paths. Since V(G) = 4, there are four paths: Path 1: 1,2,4,7,8; Path 2: 1,2,3,5,7,8; Path 3: 1,2,3,6,7,8; Path 4: 1,2,4,7,2,4,...,7,8. Finally, we derive test cases to exercise these paths. A number of industry studies have indicated that the higher V(G), the higher the probability of errors.
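The slide's flow graph is not reproduced here, so the sketch below uses a stand-in function with the same cyclomatic complexity, V(G) = 3 decisions + 1 = 4, and one test case per basis path; the function and its paths are illustrative, not the slide's.

```python
# Stand-in function with cyclomatic complexity V(G) = 3 decisions + 1 = 4.
def classify(x, shout):
    if x < 0:                   # decision 1
        result = "negative"
    elif x == 0:                # decision 2
        result = "zero"
    else:
        result = "positive"
    if shout:                   # decision 3
        result = result.upper()
    return result

# One test case per basis path (four in total, matching V(G) = 4):
assert classify(-1, False) == "negative"
assert classify(0, False) == "zero"
assert classify(7, False) == "positive"
assert classify(7, True) == "POSITIVE"
```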
16: Loop Testing (White-Box Testing). Loop classes: simple loops, nested loops, concatenated loops, unstructured loops. Why is loop testing important?
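As a sketch of why iteration counts matter, the usual guidance for a simple loop with at most n passes is to try 0, 1, 2, a typical m, n-1, n, and (if possible) n+1 iterations; the sum_first() function below is a hypothetical loop under test, not from the slides.

```python
# Simple-loop test values: 0, 1, 2, typical m, n-1, n, n+1 passes.
def sum_first(values, n):
    """Sum at most the first n elements (illustrative loop under test)."""
    total = 0
    for i, v in enumerate(values):
        if i >= n:
            break
        total += v
    return total

n = 5                           # assumed maximum number of useful passes
data = [1, 2, 3, 4, 5]
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    print(passes, sum_first(data, passes))
```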
17: Equivalence Partitioning & Boundary Value Analysis (Black-Box Testing). If x = 5 then ... what would be the equivalence classes? If x > -5 and x < 5 then ...?
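A worked sketch of the "x > -5 and x < 5" case: three equivalence classes (one valid, two invalid) plus boundary values just inside and just outside each edge. The in_range() helper is hypothetical.

```python
# Equivalence classes for "if x > -5 and x < 5":
#   valid:   -5 < x < 5
#   invalid: x <= -5
#   invalid: x >= 5
def in_range(x):
    return -5 < x < 5

# One representative per equivalence class:
assert in_range(0) is True      # valid class
assert in_range(-10) is False   # invalid class (too small)
assert in_range(10) is False    # invalid class (too large)

# Boundary value analysis: just inside and just outside each edge:
for x, expected in [(-5, False), (-4, True), (4, True), (5, False)]:
    assert in_range(x) is expected
```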
18: Comparison Testing (Black-Box Testing). Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems). Separate software engineering teams develop independent versions of an application using the same specification. Each version can be tested with the same test data to ensure that all provide identical output. Then all versions are executed in parallel with real-time comparison of results to ensure consistency.
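A minimal back-to-back comparison sketch, assuming two independently developed versions of the same (here trivial) specification; version_a and version_b are placeholders, not real systems.

```python
# Comparison (back-to-back) testing: feed identical inputs to independently
# developed versions and compare their outputs.
def version_a(items):
    return sorted(items)            # team A's implementation

def version_b(items):
    out = list(items)               # team B's implementation
    out.sort()
    return out

test_data = [[3, 1, 2], [], [5, 5, 1], list(range(10, 0, -1))]
for case in test_data:
    a, b = version_a(case), version_b(case)
    assert a == b, f"versions disagree on {case}: {a} vs {b}"
print("all versions agree on the test data")
```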
19: OOT Test Case Design. Berard [BER93] proposes the following approach:
1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested.
2. A list of testing steps should be developed for each test and should contain [BER94]:
   a. a list of specified states for the object that is to be tested
   b. a list of messages and operations that will be exercised as a consequence of the test (how can this be done?)
   c. a list of exceptions that may occur as the object is tested
   d. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test) {people, machine, time of operation, etc.}
20: OOT Methods: Behavior Testing. The tests to be designed should achieve all-state coverage [KIR94]. That is, the operation sequences should cause the Account class to transition through all allowable states. Is the set of initial input data enough?
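The Account state chart behind this slide is not shown, so the sketch below uses an invented Account with three states ("empty", "active", "closed") and one operation sequence intended to visit every state; the states and operations are assumptions, not [KIR94]'s.

```python
# Invented Account class; not the state chart from the slide.
class Account:
    def __init__(self):
        self.state = "empty"
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
        self.state = "active"

    def withdraw(self, amount):
        self.balance -= amount
        if self.balance == 0:
            self.state = "empty"

    def close(self):
        self.state = "closed"

# One operation sequence intended to drive the object through every state:
acct = Account()
visited = [acct.state]                          # "empty"
acct.deposit(100); visited.append(acct.state)   # "active"
acct.withdraw(100); visited.append(acct.state)  # back to "empty"
acct.close(); visited.append(acct.state)        # "closed"
assert set(visited) == {"empty", "active", "closed"}
```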
21: Omitted Slides
22: Testability
• Operability: it operates cleanly
• Observability: the results of each test case are readily observed
• Controllability: the degree to which testing can be automated and optimized
• Decomposability: testing can be targeted
• Simplicity: reduce complex architecture and logic to simplify tests
• Stability: few changes are requested during testing
• Understandability: of the design
23: Strategic Issues
• Understand the users of the software and develop a profile for each user category.
• Develop a testing plan that emphasizes "rapid cycle testing."
• Use effective formal technical reviews as a filter prior to testing.
• Conduct formal technical reviews to assess the test strategy and test cases themselves.
24: Counting Bugs. Sometimes reliability requirements take the form: "The software shall have no more than X bugs/1K LOC." But how do we measure bugs at delivery time?
Bebugging process: based on a Monte Carlo technique for statistical analysis of random events.
1. Before testing, a known number of bugs (seeded bugs) are secretly inserted.
2. Estimate the number of bugs in the system.
3. Remove (both known and new) bugs.
(# of detected seeded bugs) / (# of seeded bugs) = (# of detected bugs) / (# of bugs in the system)
# of bugs in the system = (# of seeded bugs x # of detected bugs) / (# of detected seeded bugs)
Example: secretly seed 10 bugs; an independent test team detects 120 bugs (6 of them seeded).
# of bugs in the system = 10 x 120 / 6 = 200
# of bugs in the system after removal = 200 - 120 - 4 (the remaining seeded bugs) = 76
But: deadly bugs vs. insignificant ones; not all bugs are equally detectable.
Suggestion [Musa87]: "No more than X bugs/1K LOC may be detected during testing." "No more than X bugs/1K LOC may remain after delivery, as calculated by the Monte Carlo seeding technique."
NFRs: Reliability [Chung, RE Lecture Notes]
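A small calculation sketch of the seeding estimate, plugging in the slide's numbers (10 seeded bugs, 120 detected in total, 6 of them seeded):

```python
# Bebugging estimate using the slide's numbers.
def estimate_total_bugs(seeded, detected_total, detected_seeded):
    # Assumes seeded bugs are found at the same rate as real ones.
    return seeded * detected_total // detected_seeded

seeded, detected_total, detected_seeded = 10, 120, 6
total = estimate_total_bugs(seeded, detected_total, detected_seeded)  # 200
remaining_seeded = seeded - detected_seeded                           # 4
remaining = total - detected_total - remaining_seeded                 # 76
print(total, remaining)
```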
25: Cyclomatic Complexity (White-Box Testing). A number of industry studies have indicated that the higher V(G), the higher the probability of errors. (Chart: number of modules plotted against V(G); modules in the upper V(G) range are more error prone.)
