
Software Testing And Analysis Mauro Pezze Download



Pankratius has co-presented a number of tutorials on mining software engineering data and software testing at past ICSEs. Inexpensive multicore processors with several cores on a chip are standard in PCs, laptops, servers, and embedded devices. Software engineers are now asked to write parallel applications of all sorts, and need to quickly grasp the relevant aspects of general-purpose parallel programming.

This technical briefing outlines state-of-the-art concepts and techniques in multicore software engineering, such as the basics of parallel programming, programming languages, and testing and debugging techniques for multicore software. In addition, it discusses experience reports on the parallelization of real-world applications. Pankratius's current research concentrates on how to make parallel programming easier and covers a range of research topics including auto-tuning, language design, debugging, and empirical studies.

We are moving from a world where we provide domain-specific middleware platforms toward a world of highly heterogeneous and dynamic networked environments. Existing middleware approaches and paradigms are simply unable to cope with the demands of such environments.

Indeed, as we move towards a world of systems of systems, we can say that middleware is in crisis, unable to deliver on its most central promise: interoperability, i.e., enabling independently developed systems to meaningfully work together. This requires a fundamental rethink of the architectural principles and techniques underpinning middleware platforms. We need to turn from relatively static solutions, based on promoting a particular interoperability solution or bridging strategy, to much more dynamic solutions where we generate appropriate machinery for interoperability on the fly.

This promotes an approach that may be termed emergent middleware designed to solve interoperability at runtime according to what is discovered and needed in a given context.


This briefing surveys the state of the art in the area of interoperability, in particular the extensive work on protocol mediation and middleware interoperability. It then concentrates on the key role of models at runtime in meeting the challenges of interoperability in the ever-changing pervasive networking environment, reporting on the results of the CONNECT project, a collaborative initiative bringing together experts in middleware and software engineering, semantic modeling of services, and formal foundations of distributed systems, which together provide key building blocks for enabling emergent middleware.

She has co-authored numerous technical papers in the area of distributed systems and software engineering, and has been involved in a number of European and industrial projects. She has served, and continues to serve, as a PC member, including as PC chair, of leading international events in the areas of distributed systems, middleware, software engineering, and trust management.

This raises the need for methods and tools that support developers in the creation and redistribution of software systems with the ability to properly cope with legal constraints.

We conjecture that legal constraints are another dimension that software analysts, architects, and developers have to consider, making them an important area of future research in software engineering. This technical briefing illustrates the importance of licensing analysis in software analysis, presenting existing techniques to support the checking of licensing inconsistencies in software systems and to recommend appropriate architectural connectors to developers so as to comply with licensing constraints.

The briefing also outlines relevant open research challenges in this area. Biographies: Daniel M. German is an associate professor of computer science at the University of Victoria, Canada. He has authored over 80 journal, conference, and workshop papers.

Further info at turingmachine. His research interests include software maintenance and evolution, reverse engineering, empirical software engineering, search-based software engineering, and service-centric software engineering. He has authored numerous papers in journals, conferences, and workshops.

The key challenge was to integrate this application with their existing IT infrastructure. Additionally, the system had to be extensible and maintainable to allow for feature modification in response to continually evolving requirements. We performed domain analysis to achieve this objective.


We gathered the input information for the analysis from domain experts and a high level abstract requirements document. Since we used an agile development process, we frequently changed the architecture of the system according to our evolving requirements.

This required conducting the domain analysis iteratively, constantly refining and improving it. However, existing domain analysis methods were designed to support traditional software development processes in which requirements are assumed to be present and complete at the beginning. This means that the analysis is performed and completed before implementation starts.


Domain analysis methods have been used to identify reusable parts of a system, including requirements, architecture, and test plans. The identification of reusable parts is considered to benefit the overall quality of the system. Reusability also has a positive effect on cost. In this session, we present the problems we faced.

We present difficulties that might arise while executing existing domain analysis practices in an agile software development environment. The complexity of the iterative domain analysis lies in the discovery and integration of new requirements.


For example, the program in Figure ?? is not a correct implementation of a specification that requires control characters to be identified and skipped. A test suite designed only to adequately cover the control structure of the program will not explicitly include test cases to test for such faults, since no elements of the structure of the program correspond to this feature of the specification.

In practice, control flow testing criteria are used to evaluate the thoroughness of test suites derived from specification-based testing criteria, by identifying elements of the programs not adequately exercised.

Unexecuted elements may be due to natural differences between specification and implementation, or they may reveal flaws of the software or its development process: inadequacy of the specifications that do not include cases present in the implementation; coding practice that radically diverges from the specification; or inadequate specification-based test suites.

Control flow adequacy can be easily measured with automatic tools. The degree of control flow coverage achieved during testing is often used as an indicator of progress and can be used as a criterion of completion of the testing activity. Let T be a test suite for a program P. T satisfies the statement adequacy criterion for P iff, for each statement S of P, there exists at least one test case in T that causes the execution of S. The statement coverage C_Statement of T for P is the fraction of statements of program P executed by at least one test case in T.
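As a sketch, the statement coverage fraction just defined can be computed from execution traces. The function and trace data below are hypothetical (not from the book's tooling), assuming statements are identified by numeric ids and that some instrumentation records which statements each test case executes:

```python
def statement_coverage(statements, executed_by_test):
    """Fraction of statements executed by at least one test case.

    statements: set of statement ids of program P.
    executed_by_test: dict mapping test-case name -> set of statement
    ids that the test case executed (hypothetical trace data).
    """
    executed = set()
    for trace in executed_by_test.values():
        executed |= trace
    return len(executed & statements) / len(statements)

# Hypothetical traces for a 5-statement program:
stmts = {1, 2, 3, 4, 5}
suite = {"t1": {1, 2, 3}, "t2": {1, 4}}
print(statement_coverage(stmts, suite))  # 4 of 5 statements -> 0.8
```

The criterion itself is satisfied exactly when this fraction equals 1.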

The ratio of visited control flow graph nodes to total nodes may differ from the ratio of executed statements to all statements, depending on the granularity of the control flow graph representation. For the standard control flow graph models discussed in Chapter 6, the relation between coverage of statements and coverage of nodes is monotonic. In the limit, statement coverage is 1 exactly when node coverage is 1. Let us consider, for example, the program of Figure ??. The program contains 18 statements.

Coverage is not monotone with respect to the size of test suites, i.e., a smaller test suite may achieve higher coverage than a larger one. In the former example, T1 contains only one test case while T0 contains three, yet T1 achieves a higher coverage than T0.

Test suites used in this chapter are summarized in Table ??. Criteria can be satisfied by many test suites of different sizes. In the former example, both T1 and T2 satisfy the statement adequacy criterion for program cgi_decode, although one consists of a single test case and the other consists of 4 test cases.

Notice that while we typically wish to limit the size of test suites, in some cases we may prefer a larger test suite over a smaller one that achieves the same coverage. A test suite with fewer test cases may be more difficult to generate or may be less helpful in debugging. Let us consider, for example, having missed a fault in the statement at line ??.

On the other hand, a test suite obtained by adding test cases to T2 would satisfy the statement adequacy criterion, but would not have any particular advantage over T2 with respect to the total effort required to reveal and localize faults. Designing complex test cases that exercise many different elements of a unit is seldom a good way to optimize a test suite, although it may occasionally be justifiable when there is a large and unavoidable fixed cost.

Control flow coverage may be measured incrementally while executing a test suite. In this case, the contribution of a single test case to the overall coverage that has been achieved depends on the order of execution of test cases.

The increment of coverage due to the execution of a specific test case does not measure the absolute efficacy of the test case. Measures independent of the order of execution may be obtained by identifying independent statements. However, in practice we are only interested in the coverage of the test suite and in the statements not exercised by the test suite, not in the coverage of individual test cases.

Consider, for example, a faulty program cgi_decode' obtained from program cgi_decode by removing line ??. The control flow graph of program cgi_decode' is shown in Figure ??. In the new program there are no statements following the false branch exiting node D. Thus, a test suite that tests only translation of specially treated characters, but not treatment of strings containing other characters that are copied without change, satisfies the statement adequacy criterion but would not reveal the missing code in program cgi_decode'.

Draft version produced 31st March.

The branch adequacy criterion requires each branch of the program to be exercised by at least one test case. T satisfies the branch adequacy criterion for P iff, for each branch B of P, there exists at least one test case in T that causes execution of B.

This is equivalent to stating that every edge in the control flow graph model of program P belongs to some execution path exercised by a test case in T.

Test suite T2 satisfies the branch adequacy criterion, and would reveal the fault. Intuitively, since traversing all edges of a graph causes all nodes to be visited, test suites that satisfy the branch adequacy criterion for a program P also satisfy the statement adequacy criterion for the same program.
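A minimal sketch of the branch (edge) coverage measure, assuming a hypothetical diamond-shaped control flow graph and test traces given as node sequences (the CFG and traces below are illustrative, not the book's cgi_decode):

```python
def branch_coverage(edges, executed_paths):
    """Fraction of control-flow-graph edges traversed by the test suite.

    edges: set of (src, dst) edges of the CFG.
    executed_paths: list of node sequences, one per test case.
    """
    traversed = set()
    for path in executed_paths:
        traversed.update(zip(path, path[1:]))  # consecutive node pairs
    return len(traversed & edges) / len(edges)

# Hypothetical diamond CFG: A branches to B or C, both rejoin at D.
cfg = {("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")}
print(branch_coverage(cfg, [["A", "B", "D"]]))                   # 0.5
print(branch_coverage(cfg, [["A", "B", "D"], ["A", "C", "D"]]))  # 1.0
```

Full edge coverage here forces both branch outcomes at A, which is exactly why branch adequacy subsumes statement adequacy on the same graph.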

Condition coverage considers this decomposition in more detail, forcing exploration not only of both possible results of a boolean expression controlling a branch, but also of different combinations of the individual conditions in a compound boolean expression. The branch adequacy criterion can be satisfied, and both branches exercised, with test suites in which the first comparison always evaluates to False and only the second is varied.

Such tests do not systematically exercise the first comparison, and will not reveal the fault in that comparison.

Condition adequacy criteria overcome this problem by requiring different elementary conditions of the decisions to be separately exercised. T covers all basic conditions of P, i.e., each basic condition assumes both truth values during the execution of the test cases in T. The basic condition coverage C_BasicCondition of T for P is the fraction of the total number of truth values assumed by the basic conditions of program P during the execution of all test cases in T. Notice that the total number of truth values of all basic conditions is twice the number of basic conditions, since each basic condition can assume the value true or false.
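The basic condition coverage fraction can be sketched as follows, assuming hypothetical instrumentation that records which truth values each basic condition assumed during the run of the whole test suite:

```python
def basic_condition_coverage(observed):
    """C_BasicCondition: fraction of the 2*N possible truth values
    (true and false for each of the N basic conditions) that some
    test case caused a condition to assume.

    observed: dict mapping condition name -> set of booleans seen
    (hypothetical per-condition trace data).
    """
    n = len(observed)
    assumed = sum(len(v & {True, False}) for v in observed.values())
    return assumed / (2 * n)

# Hypothetical run: condition c1 took both values, c2 only True.
print(basic_condition_coverage({"c1": {True, False}, "c2": {True}}))  # 0.75
```

The criterion is satisfied when the fraction reaches 1, i.e., every basic condition has been seen both true and false.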

Three basic conditions correspond to the simple decisions at lines ??; thus they are covered by any test suite that covers all branches. The last two conditions correspond to the compound decision at line ??. In this case, test suites T1 and T3 cover the decisions without covering the basic conditions. However, test suite T1 does not cover the first condition, since it yields only the outcome True.

To satisfy the basic condition adequacy criterion, we need to add an additional test case that produces outcome false for the first condition.

The basic condition adequacy criterion can be satisfied without satisfying branch coverage; thus, the branch and basic condition adequacy criteria are not directly comparable. A more complete extension that includes both the basic condition and the branch adequacy criteria is the compound condition adequacy criterion, which requires a test for each possible combination of basic conditions. The number of test cases required for compound condition adequacy can, in principle, grow exponentially with the number of basic conditions in a decision (all 2^N combinations of N basic conditions), which would make compound condition coverage impractical for programs with very complex conditions.

Short-circuit evaluation is often effective in reducing this to a more manageable number, but not in every case. Consider the number of cases required for compound condition coverage of the following two boolean expressions, each with five basic conditions. MC/DC: The modified condition adequacy criterion requires that each basic condition be shown to independently affect the outcome of each decision. That is, for each basic condition C, there are two test cases in which the truth values of all conditions except C are the same, and the compound condition as a whole evaluates to True for one of those test cases and False for the other.

For modified condition adequacy, only 6 combinations are required.

Here they have been numbered for easy comparison with the previous table. The following table indicates which rows of the former table correspond to the independent effect of each variable. Case 1 appears three times in combination with other cases, showing the effect of conditions a, c, and e.

Note also that this is not the only possible set of test cases to satisfy the criterion; a different selection of boolean combinations could be equally effective.
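The independent-effect requirement can be checked mechanically. The sketch below, using a hypothetical two-condition conjunction rather than the five-condition expressions above, searches a set of test cases for pairs that differ only in one condition and flip the decision outcome:

```python
def shows_independence(rows, decision, i):
    """Return True if some pair of rows differs only in condition i
    and yields different decision outcomes (MC/DC independent effect).

    rows: list of tuples of booleans, one truth value per basic condition.
    decision: function from such a tuple to the decision outcome.
    """
    for a in rows:
        for b in rows:
            if (a[i] != b[i]
                    and all(a[j] == b[j] for j in range(len(a)) if j != i)
                    and decision(a) != decision(b)):
                return True
    return False

# Hypothetical decision with two basic conditions: d = c0 and c1.
d = lambda v: v[0] and v[1]
# Three test cases (N + 1 for N = 2) suffice for MC/DC of a conjunction:
tests = [(True, True), (False, True), (True, False)]
assert all(shows_independence(tests, d, i) for i in range(2))
```

The assertion confirms that each condition independently affects the outcome, matching the N + 1 lower bound discussed in the text.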

The formatting of the line has been altered for readability in this printed form. For example, we may consider case 1 as including the assignment ⟨False; True; True; True⟩ and case 2 as including ⟨True; True; True; True⟩, thereby differing only in the assignment to the Room condition and in the outcome of the evaluation. Prove that the number of test cases required to satisfy the modified condition adequacy criterion for a predicate with N basic conditions is N + 1.

Sometimes, though, a fault is revealed only through exercise of some sequence of decisions, i.e., a particular path through the program. T satisfies the path adequacy criterion for P iff, for each path p of P, there exists at least one test case in T that causes the execution of p.

Part (i) is the control flow graph of the C function cgi_decode, identical to Figure ??. Part (ii) is a tree derived from part (i) by following each path in the control flow graph up to the first repeated node.

To obtain a practical criterion, it is necessary to partition the infinite set of paths into a finite number of classes, and require only that representatives from each class be explored. Useful criteria can be obtained by limiting the number of paths to be covered.

Relevant subsets of paths to be covered can be identified by limiting the number of traversals of loops, the length of the paths to be traversed, or the dependencies among selected paths. The boundary interior criterion groups together paths that differ only in the sub-path they follow when repeating the body of a loop.
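The construction of the tree of sub-paths, each followed up to its first repeated node, can be sketched as follows; the CFG shape below is a hypothetical single-loop graph, not the book's cgi_decode:

```python
def boundary_paths(succ, entry):
    """Enumerate paths from entry, following each path in the CFG up
    to the first repeated node (the tree construction described above).

    succ: dict mapping node -> list of successors (hypothetical CFG).
    """
    complete, stack = [], [[entry]]
    while stack:
        path = stack.pop()
        nexts = succ.get(path[-1], [])
        if not nexts:                     # exit node: path is complete
            complete.append(path)
        for n in nexts:
            if n in path:                 # first repetition: cut here
                complete.append(path + [n])
            else:
                stack.append(path + [n])
    return complete

# Loop-shaped CFG: A -> B; B -> C or D; C loops back to B; D exits.
cfg = {"A": ["B"], "B": ["C", "D"], "C": ["B"], "D": []}
print(sorted(boundary_paths(cfg, "A")))
```

Each enumerated path stands for the class of executions that agree with it up to the first repetition, which is the grouping the boundary interior criterion exploits.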

Figure ??: Initialization of the back pointer is missing, causing a failure only if the search key is found in the second position in the list.


The number of sub-paths that must be covered can grow exponentially in the number of statements and control flow graph nodes, even without any loops at all. Moreover, choosing test data to force execution of one particular path may be very difficult, or even impossible if the conditions are not independent. One can define several small variations on the loop boundary criterion.

For example, we might excuse from consideration loops that are always executed a definite number of times. It is easy enough to define such a coverage criterion for loops, but how can we justify it? Why should we believe that these three cases (zero times through, once through, and several times through) will be more effective in revealing faults than, say, requiring an even and an odd number of iterations?

The intuition is that the loop boundary coverage criteria reflect a deeper structure in the design of a program. This can be seen in their relation to the reasoning we would apply if we were trying to formally verify the correctness of the loop. The basis case of the proof would show that the loop is executed zero times only when its postcondition (what should be true immediately following the loop) is already true.

We would also show that an invariant condition is established on entry to the loop, that each iteration of the loop maintains this invariant condition, and that the invariant together with the negation of the loop test implies the postcondition. The loop boundary criterion does not require us to explicitly state the precondition, invariant, and postcondition, but it forces us to exercise essentially the same cases that we would analyze in a proof.
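The three loop-boundary cases can be illustrated on a small hypothetical linear search, whose loop body is exercised zero times, exactly once, and several times by three test inputs:

```python
def first_index(items, key):
    """Linear search: return the index of key in items, or -1."""
    for i, x in enumerate(items):  # loop body runs at most len(items) times
        if x == key:
            return i
    return -1

# Loop boundary coverage: body executed zero, one, and several times.
assert first_index([], 7) == -1           # zero iterations
assert first_index([7], 7) == 0           # exactly one iteration
assert first_index([1, 2, 7, 9], 7) == 2  # several iterations
```

The empty-list case corresponds to the basis case of the loop proof (the postcondition must already hold), while the one- and many-iteration cases exercise establishment and maintenance of the invariant.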

There are additional path-oriented coverage criteria that do not explicitly consider loops. Among these are criteria that consider paths up to a fixed length. We have stated that coverage of individual LCSAJs is almost, but not quite, equivalent to branch coverage. How can they differ?

The number of paths to be exercised can also be limited by identifying a subset that can be combined in a manner to be described shortly to form all the others.

To be more precise, the sense in which a basis set of paths can be combined to form other paths is to consider each path as a vector of counts indicating how many times each edge in the control flow graph was traversed.

The basis set is combined by adding or subtracting these vectors and not, as one might intuitively expect, by concatenating paths. We can represent one basis set among many possible ones. Cyclomatic testing does not require that any particular basis set be covered. Rather, it counts the number of independent paths that have actually been covered, i.e., how many of the exercised paths are linearly independent. Path-based criteria are not well suited to integration testing or system testing.
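The vector view of paths can be sketched as follows; the CFG edges and paths are hypothetical, and Counter addition stands in for the vector combination of basis paths (concatenation plays no role):

```python
from collections import Counter

def edge_vector(path):
    """Represent a path as a vector of edge traversal counts."""
    return Counter(zip(path, path[1:]))

# Hypothetical paths through a diamond CFG A -> {B, C} -> D -> E:
p1 = edge_vector(["A", "B", "D", "E"])
p2 = edge_vector(["A", "C", "D", "E"])

# Combining basis paths by adding vectors, not by concatenating paths:
combined = p1 + p2
assert combined[("A", "B")] == 1 and combined[("A", "C")] == 1

# A path whose vector equals a combination of others adds no
# independence; here a repeat of p1 contributes nothing new.
assert edge_vector(["A", "B", "D", "E"]) == p1
```

Counting how many exercised paths have linearly independent vectors is exactly the quantity cyclomatic testing reports.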

Open Research Issues: Devising and comparing structural criteria was a hot topic in the 1980s.

Based on this simple observation, a program has not been adequately tested if some of its elements have not been executed.
