
We'll use the V-model shown in the figure as an example. We'll further assume that we are talking about the system test level. In other words, the moment of involvement of testing is at the very start of the project.

Once the test plan is approved, test control begins. Test control continues through to test closure. Analysis, design, implementation, execution, evaluation of exit criteria, and test results reporting are carried out according to the plan. Deviations from the plan are managed. Test analysis starts immediately after or even concurrently with test planning.

Test analysis and test design happen concurrently with requirements, high-level design, and low-level design. Test implementation, including test environment implementation, starts during system design and completes just before test execution begins.

Test execution begins when the test entry criteria are met. More realistically, test execution starts when most entry criteria are met and any outstanding entry criteria are waived. In V-model theory, the entry criteria would include successful completion of both component test and integration test levels.
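As a minimal sketch of how such an entry gate might be evaluated, with hypothetical criteria names and a simple waiver list (none of these details come from the book):

```python
# Hypothetical sketch: deciding whether system test execution can begin, given
# which entry criteria are met and which unmet criteria have been waived.

entry_criteria = {
    "component test level complete": True,
    "integration test level complete": True,
    "test environment ready": True,
    "smoke test of delivered build passed": False,   # not yet met
}

waived = {"smoke test of delivered build passed"}    # waived by the test manager

unmet = {name for name, met in entry_criteria.items() if not met}
blocking = unmet - waived

if not blocking:
    print("Test execution may begin; waived criteria:", sorted(unmet & waived))
else:
    print("Test execution blocked by:", sorted(blocking))
```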

Test execution continues until the test exit criteria are met, though again some of these will often be waived. Evaluation of test exit criteria and reporting of test results occur throughout test execution.

Test closure activities occur after test execution is declared complete. This kind of precise alignment of test activities with each other and with the rest of the system lifecycle absolutely will not just happen. Nor can you expect to be able to instill this alignment continuously throughout the process, without any forethought. Rather, for each test level, no matter what the selected software lifecycle and test process, the test manager must perform this alignment.

Not only must this happen during the test and project planning, but test control includes acting to ensure ongoing alignment. No matter what test process and software lifecycle are chosen, each project has its own quirks. This is especially true for complex projects such as the systems of systems projects common in the military and among RBCS's larger clients.

In such a case, the test manager must plan not only to align test processes, but also to modify them. Off-the-rack process models, whether for testing alone or for the entire software lifecycle, don't fit such complex projects well.

Specific Systems

Learning objectives: Recall of content only

Systems of systems are independent systems tied together to serve a common purpose. Because they are independent and tied together, they often lack a single, coherent user or operator interface, a unified data model, compatible external interfaces, and so forth.

Such projects include the following characteristics and risks:

- The integration of commercial off-the-shelf (COTS) software along with some amount of custom development, often taking place over a long period.
- Significant technical, lifecycle, and organizational complexity and heterogeneity. This organizational and lifecycle complexity can include issues of confidentiality, company secrets, and regulations.
- Different development lifecycles and other processes among disparate teams, especially—as is frequently the case—when insourcing, outsourcing, and offshoring are involved.
- Serious potential reliability issues due to intersystem coupling, where one inherently weaker system creates ripple-effect failures across the entire system of systems.
- System integration testing, including interoperability testing, is essential. Well-defined interfaces for testing are needed.

At the risk of restating the obvious, systems of systems projects are more complex than single-system projects. The complexity increase applies organizationally, technically, process-wise, and team-wise.

Good project management, formal development lifecycles and process, configuration management, and quality assurance become more important as size and complexity increase. Let's focus on the lifecycle implications for a moment. As mentioned before, with systems of systems projects, we are typically going to have multiple levels of integration. First, we will have component integration for each system, and then we'll have system integration as we build the system of systems. We will also typically have multiple version management and version control systems and processes, unless all the systems happen to be built by the same, presumably large, organization and that organization follows the same approach across its software development teams.

This is not something my associates and I commonly see during assessments of large companies, by the way. The duration of these projects tends to be long. I have seen them planned for as long as five to seven years. A system of systems project with five or six systems might be considered relatively short and relatively small if it lasted "only" a year and involved "only" 40 or 50 people.

Across this project, there are multiple test levels, usually owned by different parties. Because of the size and complexity of the project, it's easy for handoff and transfers of responsibility to break down.

So, we need formal information transfer among project members, especially at milestones. Even when we're integrating purely off-the-shelf systems, these systems are evolving.

That's all the more likely to be true with custom systems. So, we have the management challenge of coordinating development of the individual systems and the test analyst challenge of proper regression tests at the system of systems level when things change.

Especially with off-the-shelf systems, maintenance testing can be triggered—sometimes without much warning—by external entities and events such as obsolescence, bankruptcy, or upgrade of an individual system.

If you think of the fundamental test process in a system of systems project, the progression of levels is not two-dimensional. Instead, imagine a sort of pyramidal structure, as shown in the figure "Fundamental test process in a system of systems project." At the base, you have component testing. A separate component test level exists for each system.

Moving up the pyramid, you have component integration testing. A separate component integration test level exists for each system.

Next, you have system testing. A separate system test level exists for each system. You also probably have separate team ownership, because multiple groups often handle component, integration, and system test. Continuing to move up the pyramid, you come to system integration testing.

Now, finally, we are talking about a single test level across all systems. Next above that is systems testing, focusing on end-to-end tests that span all the systems. Finally, we have user acceptance testing. For each of these test levels, while we have single organizational ownership, we probably have separate team ownership.

Simply put, safety-critical systems are those systems upon which lives depend. Failure of such a system—or even temporary performance or reliability degradation or undesirable side effects as support actions are carried out—can injure or kill people, or, in the case of military systems, fail to injure or kill people at a battle-critical juncture. Because defects can cause death, and deaths can cause civil and criminal penalties, proof of adequate testing can be and often is used to reduce liability.

For obvious reasons, various regulations and standards often apply to safety-critical systems. The regulations and standards can constrain the process, the organizational structure, and the product.

Unlike the usual constraints on a project, though, these are constructed specifically to increase the level of quality rather than to enable trade-offs to enhance schedule, budget, or feature outcomes at the expense of quality. Overall, there is a focus on quality as a very important project priority.

There is typically a rigorous approach to both development and testing. Throughout the lifecycle, traceability extends all the way from regulatory requirements to test results. This provides a means of demonstrating compliance. This requires extensive, detailed documentation but provides high levels of auditability, even by non-test experts. Audits are common if regulations are imposed.

Demonstrating compliance can involve tracing from the regulatory requirement through development to the test results. An outside party typically performs the audits. During the lifecycle—often as early as design—the project team uses safety analysis techniques to identify potential problems. Single points of failure are often resolved through system redundancy.

In some cases, safety-critical systems are complex systems or even systems of systems. In other cases, non-safety-critical components or systems are integrated into safety-critical systems or systems of systems. For example, networking or communication equipment is not inherently a safety-critical system, but if integrated into an emergency dispatch or military system, it becomes part of a safety-critical system.

Formal quality risk management is essential in these situations. Fortunately, a number of such techniques exist, such as failure mode and effect analysis; failure mode, effect, and criticality analysis; hazard analysis; and software common cause failure analysis.

Metrics and Measurement

Learning objectives: Recall of content only

Throughout this book, we use metrics and measurement to establish expectations and guide testing by those expectations.

You can and should apply metrics and measurements throughout the software development lifecycle. This is because well-established metrics and measures, aligned with project goals and objectives, will enable test analysts to track and report test and quality results to management in a consistent and coherent way.

A lack of metrics and measurements leads to purely subjective assessments of quality and testing. This results in disputes over the meaning of test results toward the end of the lifecycle.

It also results in a lack of clearly perceived and communicated value, effectiveness, and efficiency for testing. Not only must we have metrics and measurements, but also we need baselines. What is a "good" result for a given metric? An acceptable result? An unacceptable result? Without defined baselines, successful testing is usually impossible.

In fact, when we perform assessments for our clients, we more often than not find ill-defined metrics of test team effectiveness and efficiency, with no baselines and thus unrealistic expectations, which of course aren't met.

There's just about no end to what can be subjected to a metric and tracked through measurement. Consider the following:

- Planned schedule and coverage
- Requirements and their schedule, resource, and task implications for testing
- Workload and resource usage
- Milestones and scope of testing
- Planned and actual costs
- Risks, both quality and project risks
- Defects, including total found, total fixed, current backlog, average closure periods, and configuration, subsystem, priority, or severity distribution

During test planning, we establish expectations, which I mentioned as the baselines previously.
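As a small illustration of the defect metrics in that list (total found, total fixed, current backlog, average closure period), here is a sketch over invented defect records; the field names and dates are assumptions, not anything from the book:

```python
# Hypothetical sketch: a few defect metrics computed from made-up defect records.
from datetime import date

defects = [
    {"id": 1, "opened": date(2023, 1, 10), "closed": date(2023, 1, 14)},
    {"id": 2, "opened": date(2023, 1, 12), "closed": None},             # still open
    {"id": 3, "opened": date(2023, 1, 15), "closed": date(2023, 1, 21)},
]

total_found = len(defects)
fixed = [d for d in defects if d["closed"] is not None]
total_fixed = len(fixed)
backlog = total_found - total_fixed
avg_closure_days = (
    sum((d["closed"] - d["opened"]).days for d in fixed) / total_fixed
    if total_fixed else None
)

print(f"found={total_found} fixed={total_fixed} backlog={backlog} "
      f"average closure={avg_closure_days} days")
```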

As part of test control, we can measure actual outcomes and trends against these expectations. As part of test reporting, we can consistently explain to management various important aspects of the process, product, and project, using objective, agreed-upon metrics with realistic, achievable targets.

When thinking about a testing metrics and measurement program, there are three main areas to consider: definition, tracking, and reporting. Let's start with definition. In a successful testing metrics program, you define a useful, pertinent, and concise set of quality and test metrics for a project. You avoid too large a set of metrics, as this will prove difficult and perhaps expensive to measure while often confusing rather than enlightening the viewers and stakeholders.

You also want to ensure uniform, agreed-upon interpretations of these metrics to minimize disputes and divergent opinions about the meaning of certain measures of outcomes, analyses, and trends.

There's no point in having a metrics program if everyone has an utterly divergent opinion about what particular measures mean. Finally, define metrics in terms of objectives and goals for a process or task, for components or systems, and for individuals or teams. Victor Basili's well-known Goal Question Metric technique is one way to evolve meaningful metrics.

Using this technique, we proceed from the goals of the effort—in this case, testing—to the questions we would have to answer to know if we were meeting those goals—to, ultimately, the specific metrics.

For example, one typical goal of testing is to build confidence. One natural question that arises in this regard is, "How much of the system has been tested?" Metrics for coverage include percentage of requirements covered by tests, percentage of branches and statements covered by tests, percentage of interfaces covered by tests, percentage of risks covered by tests, and so forth.
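For instance, the first of those coverage metrics, percentage of requirements covered by tests, can be computed from a simple traceability mapping; the requirement and test IDs below are invented for illustration:

```python
# Hypothetical sketch: percentage of requirements covered by at least one test.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
tests_to_requirements = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": {"REQ-3"},
}

covered = set().union(*tests_to_requirements.values())
coverage_pct = 100.0 * len(covered & requirements) / len(requirements)
print(f"Requirements coverage: {coverage_pct:.0f}%")   # 75%: REQ-4 is uncovered
```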

Let's move on to tracking. Because tracking is a recurring activity in a metrics program, the use of automated tool support can reduce the time required to capture, track, analyze, report, and measure the metrics.

Be sure to apply objective and subjective analyses for specific metrics over time, especially when trends emerge that could allow for multiple interpretations of meaning. Try to avoid jumping to conclusions, or delivering metrics that encourage others to do so. Be aware of and manage the tendency for people's interests to affect the interpretation they place on a particular metric or measure.

Everyone likes to think they are objective—and, of course, right as well as fair! Finally, let's look at reporting. Most importantly, reporting of metrics and measures should enlighten management and other stakeholders, not confuse or misdirect them.

In part, this is achieved through smart definition of metrics and careful tracking, but it is possible to take perfectly clear and meaningful metrics and confuse people with them through bad presentation. Good testing reports based on metrics should be easily understood, not overly complex and certainly not ambiguous.

The reports should draw the viewer's attention toward what matters most, not toward trivialities. In that way, good testing reports based on metrics and measures will help management guide the project to success. Not all types of graphical displays of metrics are equal—or equally useful. A snapshot of data at a moment in time, as shown in a table, might be the right way to present some information, such as the coverage planned and achieved against certain critical quality risk areas.

A graph of a trend over time might be a useful way to present other information, such as the total number of defects reported and the total number of defects resolved since the start of testing. An analysis of causes or relationships might be a useful way to present still other information, such as a scatter plot showing the correlation or lack thereof between years of tester experience and percentage of bug reports rejected.

Ethics

Learning objectives: Recall of content only

Many professions have ethical standards.

In the context of professionalism, ethics are "rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc." Because, as a test analyst, you'll often have access to confidential and privileged information, ethical guidelines can help you to use that information appropriately.

In addition, you should use ethical guidelines to choose the best possible behaviors and outcomes for a given situation, given your constraints. Note that "Best possible" means for everyone, not just the tester. Let me give you an example of ethics in action. As such, I might and do have insight into the direction of the ISTQB program that our competitors in the software testing consultancy business don't have. In some cases, such as helping to develop syllabi, I have to make those business interests clear to people, but I am allowed to help do so.

I helped write both the Foundation and Advanced syllabi. Direct access to the exam questions would make it all too likely that, consciously or unconsciously, I would warp our training materials to "teach the exam." It's never too early to inculcate a strong sense of ethics. For example, if you are working on a safety-critical system and are asked to quietly cancel some defect reports, that's an ethical problem.

PRODUCT—Certified software testers shall ensure that the deliverables they provide on the products and systems they test meet the highest professional standards possible. For example, if you are working as a consultant and you leave out important details from a test plan so that the client has to hire you on the next project, that's an ethical lapse.

For example, if a project manager asks you not to report defects in certain areas due to potential business sponsor reactions, that's a blow to your independence and an ethical failure on your part if you comply. For example, favoring one tester over another because you would like to establish a romantic relationship with the favored tester's sister is a serious lapse of managerial ethics.

For example, if you have a chance to explain to your child's classmates or your spouse's colleagues what you do, be proud of it and explain the ways software testing benefits society. For example, it is unethical to manipulate test results to arrange the firing of a programmer who you detest.

SELF—Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession. For example, attending courses, reading books, and speaking at conferences about what you do help to advance you—and the profession. This is called doing well while doing good, and fortunately, it is very ethical!

You are working as a test analyst at a bank. At the bank, test analysts work closely with users during user acceptance test. The bank has bought two financial applications as commercial off-the-shelf (COTS) software from large software vendors.

Previous history with these vendors has shown that they deliver quality applications that work on their own, but this is the first time the bank will attempt to integrate applications from these two vendors.

1. Which of the following test levels would you expect to be involved in?

1. Component test
2. Component integration test
3. System integration test
4. Acceptance test

2. Which of the following is necessarily true of safety-critical systems?

1. They are composed of multiple COTS applications.
2. They are complex systems of systems.
3. They are systems upon which lives depend.
4. They are military or intelligence systems.

Chapter 2. Testing Processes

Put the lime in the coconut and drink 'em both together, Put the lime in the coconut, and you'll feel better

This chapter establishes a framework for all the subsequent material in the syllabus and allows you to visualize organizing principles for the rest of the concepts.

There are seven sections:

1. Test Process Models
3. Test Planning and Control
4. Test Analysis and Design
5. Test Implementation and Execution
6. Evaluating Exit Criteria and Reporting
7.

Table 2: Using a table for risk priority

As with the formulas discussed a moment ago, you should tune the table based on historical data. Also, you should incorporate flexibility into this approach by allowing deviation from the aggregate risk priority value in the table based on stakeholder judgment for each individual risk.
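Here is one way the kind of lookup Table 2 describes might be coded; the ratings run from 1 (very high) to 5 (very low), and the cell values below are invented for illustration, not the actual values from the table:

```python
# Hypothetical sketch: deriving an aggregate risk priority from a table of
# likelihood and impact ratings rather than from a formula. Cell values are
# illustrative only and would be tuned against historical data.

RISK_PRIORITY_TABLE = [
    # impact:  1   2   3   4   5
    [          1,  2,  4,  7, 11],   # likelihood 1 (very high)
    [          2,  3,  5,  8, 12],   # likelihood 2
    [          4,  5,  6,  9, 13],   # likelihood 3
    [          7,  8,  9, 10, 14],   # likelihood 4
    [         11, 12, 13, 14, 15],   # likelihood 5 (very low)
]

def risk_priority(likelihood: int, impact: int) -> int:
    """Look up the aggregate risk priority (lower number = higher risk here)."""
    return RISK_PRIORITY_TABLE[likelihood - 1][impact - 1]

print(risk_priority(1, 2))   # very likely, high impact -> 2 (near the top)
print(risk_priority(5, 5))   # very unlikely, very low impact -> 15
```

A stakeholder override, as described above, would simply replace the looked-up value for an individual risk item.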

In Table 3, you see that not only can we derive the aggregate risk rating from a table, we can do something similar for the extent of testing. Based on the risk priority rating, we can now use a table like Table 3 to allocate testing effort. You might want to take a moment to study this table. However, the involvement of the right participants is just as important, and probably more important, than the choice of technique.

The ideal technique without adequate stakeholder involvement will usually provide little or no valuable input, while a less-than-ideal technique, actively supported and participated in by all stakeholder groups, will almost always produce useful information and guidance for testing. What is most critical is that we have a cross-functional team representing all of the stakeholders who have an interest in testing and quality.

This means that we involve at least two general stakeholder groups. One is made up of those who understand the needs of the customers and users. The other includes those who have insight into the technical details of the system.

Table 3: Using a table for extent of testing

We can involve business stakeholders, project funders, and others as well. Through the proper participant mix, a good risk-based testing process gathers information and builds consensus around what not to test, what to test, the order in which to test, and the way to allocate test effort. I cannot overstate the value of this stakeholder involvement. Lack of stakeholder involvement leads to at least two major dysfunctions in the risk identification and analysis.

First, there is no consensus on priority or effort allocation. This means that people will second-guess your testing after the fact. Second, you will find—either during test execution or worse yet after delivery—that there are many gaps in the identified risks, or errors in the assessment of the level of risk, due to the limited perspectives involved in the process.

While we should always try to include a complete set of stakeholders, often not all stakeholders can participate or would be willing to do so. In such cases, some stakeholders may act as surrogates for other stakeholders. For example, in mass-market software development, the marketing team might ask a small sample of potential customers to help identify potential defects that would affect their use of the software most heavily.

In this case, the sample of potential customers serves as a surrogate for the entire eventual customer base. As another example, business analysts on IT projects can sometimes represent the users rather than involving users in potentially distressing risk analysis sessions where we have conversations about what could go wrong and how bad it would be. This technique was developed originally as a design-for-quality technique.

However, you can extend it for risk-based software and systems testing. As with an informal technique, we identify quality risk items, in this case called failure modes. We tend to be more fine-grained about this than we would in an informal approach.

This is in part because, after identifying the failure modes, we then identify the effects those failure modes would have on users, customers, society, the business, and other project stakeholders. This technique has as its strength the properties of precision and meticulousness.

Hazard analysis is similarly precise, but it tends to be overwhelmed by complexity due to the need to analyze the upstream hazards that cause risks. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value.

However, this precision and meticulousness has its weaknesses. It tends to produce lengthy outputs. It is document heavy. The large volume of documentation produced requires a lot of work not only during the initial analysis, but also during maintenance of the analysis during the project and on subsequent projects.

It is also hard to learn, requiring much practice to master. I have used FMEA on a number of projects, and would definitely consider it for high-risk or conservative projects.

However, for chaotic, fast-changing, or prototyping projects, I would avoid it. Failure mode and effect analysis was originally developed to help prevent defects during design and implementation work. I came across the idea initially in D. I later included a discussion of it in my first book, Managing the Testing Process, which as far as I know makes it the first software-testing-focused discussion of the technique.

I discussed it further in Critical Testing Processes as well. Failure Mode and Effect Analysis exists in several variants. Two other variants—at least in naming—exist when the technique is applied to software.

These are software failure mode and effect analysis and software failure mode, effects, and criticality analysis. The changes involved in the criticality analysis are minor and we can ignore them here. In other words, reevaluation of residual risk—on an effect-by-effect basis—is repeated throughout the process.

Since this technique began as a design and implementation technique, ideally the technique is used early in the project. As with other forms of risk analysis, we would expect test analysts and test managers to contribute to the process and the creation of the FMEA document. As with any other risk analysis, test analysts and test managers, like all participants, should be able to apply their knowledge, skills, experience, and unique outlook to help perform the risk analysis itself, following a FMEA approach.

However, it should be applied when appropriate, as it is precise and thorough. Specifically, FMEA makes sense under the following circumstances:

- The software, system, or system of systems is potentially critical and the risk of failure must be brought to a minimum. For example, avionics software, industrial control software, and nuclear control software would deserve this type of scrutiny.
- The system is subject to mandatory risk-reduction or regulatory requirements—for example, medical systems or those subject to ISO.
- The risk of project delay is unacceptable, so management has decided to invest extra effort to remove defects during early stages of the project. This involves using the design and implementation aspects of FMEA more so than the testing aspects.
- The system is both complex and safety critical, so close analysis is needed to define special test considerations, operational constraints, and design decisions. For example, a battlefield command, communication, and control system that tied together disparate systems participating in the ever-changing scenario of a modern battle would benefit from the technique.

As I mentioned earlier, if necessary, you can use an informal quality risk analysis technique first, then augment that to include the additional precision and factors considered with FMEA.

Since FMEA arose from the world of design and implementation—not testing—and since it is inherently iterative, you should plan to schedule FMEA activities very early in the process, even if only preliminary, high-level information is available. For example, a marketing requirements document or even a project charter can suffice to start.

As more information becomes available, and as decisions firm up, you can refine the FMEA based on the additional details. Additionally, you can perform a FMEA at any level of system or software decomposition. In other words, you can—and I have—perform a FMEA on a system, but you can—and I have—also perform it on a subset of system modules during integration testing or even on a single module or component.

Whether you start at the system level, the integration level, or the component level, the process is the same. First, working function by function, quality characteristic by quality characteristic, or quality risk category by quality risk category, identify the failure modes.

A failure mode is exactly what it sounds like: a way in which something can fail. In the next step, we try to identify the possible causes for each failure mode. This is not something included in the informal techniques we discussed before. Why do we do this? Well, remember that FMEA is originally a design and implementation tool. We try to identify causes for failures so we can define those causes out of the design and avoid introducing them into the implementation.

Those effects can be on the system itself, on its users and customers, or on society at large. Remember, this technique is often used for safety-critical systems like nuclear control, where society is indeed affected by failures. We can also assess criticality. Now, we can decide what types of mitigation or risk reduction steps we can take for each failure mode. In our informal approaches to quality risk analysis, we limited ourselves to defining an extent of testing to be performed here. However, in FMEA—assuming we involved the right people—we can specify other design and implementation steps too.

When doing FMEA, there are typically three risk factors used to determine the risk priority number:

- Severity. This is an assessment of the impact of the failure mode on the system, based on the failure mode itself and the effects.
- Priority. This is an assessment of the impact of the failure mode on users, customers, the business, stakeholders, the project, the product, and society, based on the effects.
- Detection. This is an assessment of the likelihood of the problem existing in the system and escaping detection without any additional mitigation. This takes into consideration the causes of the failure mode and the failure mode itself.

People performing a FMEA often rate these risk factors on a numerical scale. You can use a 1 to 10 scale, though a 1 to 5 scale is also common. You can use either a descending or an ascending scale, so long as each of the factors uses the same type of scale, either all descending or all ascending.

In other words, 1 can be the most risky assessment or the least risky, respectively. If you use a 1 to 10 scale, then a descending scale means 10 is the least risky. If you use a 1 to 5 scale, then a descending scale means 5 is the least risky. For ascending scales, the most risky would be 10 or 5, depending on the scale. Personally, I always worry about using anything finer-grained than a five-point scale.

Trying to achieve this degree of precision can also lengthen debates between stakeholders in the risk analysis process, often to little if any benefit. As I mentioned before, you determine the overall or aggregate measure of risk, the risk priority number or RPN, using the three factors. The simplest way to do this—and one in common use—is to multiply the three factors. However, you can also add the factors. You can also use more complex calculations, including the use of weighting to emphasize one or two factors.
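As a small sketch of those three ways of combining the factors, with illustrative ratings on a 1-to-5 scale and weights that are assumptions rather than values from the book:

```python
# Sketch of the RPN calculations described above: multiplicative, additive, and
# weighted. Ratings and weights are illustrative only.

def rpn_multiplicative(severity: int, priority: int, detection: int) -> int:
    return severity * priority * detection

def rpn_additive(severity: int, priority: int, detection: int) -> int:
    return severity + priority + detection

def rpn_weighted(severity: int, priority: int, detection: int,
                 w_sev: float = 2.0, w_pri: float = 1.0, w_det: float = 1.0) -> float:
    # Emphasize severity relative to the other two factors.
    return w_sev * severity + w_pri * priority + w_det * detection

severity, priority, detection = 4, 3, 2      # ratings on a 1-to-5 scale
print(rpn_multiplicative(severity, priority, detection))  # 24
print(rpn_additive(severity, priority, detection))        # 9
print(rpn_weighted(severity, priority, detection))        # 13.0
```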

As with risk priority numbers for the informal techniques discussed earlier, the FMEA RPN will help determine the level of effort we invest in risk mitigation. In fact, multiple levels of risk mitigation could occur, particularly if the RPN is serious enough. Each test case inherits the RPN for the highest-priority risk related to it. We can then sequence the test cases in risk priority order wherever possible. In addition to being precise and thorough—and thus less likely to misassess or omit risks—FMEA provides other advantages.
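A minimal sketch of the inheritance and sequencing just described, assuming invented failure mode and test case names and that a higher RPN means higher risk:

```python
# Hypothetical sketch: each test case inherits the RPN of the riskiest failure
# mode it covers, and test cases run in descending order of inherited RPN.

failure_mode_rpn = {"FM-login-lockout": 48, "FM-report-slow": 12, "FM-ui-typo": 4}

test_case_coverage = {
    "TC-01": ["FM-login-lockout", "FM-ui-typo"],
    "TC-02": ["FM-report-slow"],
    "TC-03": ["FM-ui-typo"],
}

inherited = {
    tc: max(failure_mode_rpn[fm] for fm in fms)
    for tc, fms in test_case_coverage.items()
}

execution_order = sorted(inherited, key=inherited.get, reverse=True)
print(execution_order)   # ['TC-01', 'TC-02', 'TC-03']
```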

FMEA requires detailed analysis of expected system failures that could be caused by software failures or usage errors, resulting in a complete view—if perhaps an overwhelming view—of the potential problems. If FMEA is used at the system level—rather than only at a component level—we can have a detailed view of potential problems across the system. In other words, if we consider systemic risks, including emergent reliability, security, and performance risks, we have a deeply informed understanding of system risks.

Again, those performing and especially managing the analysis can find this overwhelming, and it certainly requires a significant time investment to understand the entire view and its import. The analysis can also provide justification for not doing certain things, for avoiding certain design decisions, for not implementing in a particular way or with a particular technology.

As with any quality risk analysis technique, our FMEA analysis can focus our testing on specific, critical areas of the system. This can have test design implications, too, since you might choose to implement more fine-grained tests to take the finer-grained understanding of risk into account.

There are costs and challenges associated with FMEA, of course. For one thing, you have to force yourself to think about use cases, scenarios, and other realities that can lead to sequences of failures. You can—and should—overcome this challenge, of course. This means that participants and those managing the analysis can find the development and maintenance of these documents a large, time-consuming, expensive investment.

As originally conceived, FMEA works function by function. When looking at a component or a complex system, it might be difficult to define independent functions. Finally, when trying to anticipate causes, it might be challenging to distinguish true causes from intermediate effects. These challenges are in addition to those discussed earlier for quality risk analysis in general. The figure discussed next is a case study of an actual project. This document—and the approach we used—followed the Failure Mode and Effect Analysis approach.

As you can see, we started—at the left side of the figure—with a specific function and then identified failure modes and their possible effects. We listed possible causes to enable bug prevention work during requirements, design, and implementation. Next, we looked at detection methods—those methods we expected to apply anyway for this project.

The more likely the failure mode was to escape detection, the worse the detection number. We calculated a risk priority number based on the severity, priority, and detection numbers. Severity, priority, and detection each ranged from 1 to 5. So the risk priority number ranged from 1 to 125. This particular figure shows the highest-level risk items only because it was sorted by risk priority number.

You can see that we have assigned some additional actions at this point but have not yet assigned the owners. Notice that we can allow any test procedures that cover a risk item to inherit the level of risk from the risk item.

This is a book on advanced software testing for test managers. By that I mean that I address topics that a practitioner who has chosen to manage software testing as a career should know.

I focus on those skills and techniques related to test analysis, test design, test execution, and test results evaluation.

I assume that you know the basic concepts of test engineering, test design, test tools, testing in the software development life cycle, and test management. You are ready to increase your level of understanding of these concepts and to apply them to your daily work as a test professional. As such, it can help you prepare for the Advanced Test Manager exam. You can use this book to self-study for that exam or as part of an e-learning or instructor-led course on the topics covered in that exam.

However, even if you are not interested in ISTQB certification, you will find this book useful to prepare yourself for advanced work in software testing. If you are a test manager, test director, test analyst, technical test analyst, automated test engineer, manual test engineer, or programmer, or in any other field where a sophisticated understanding of software test management is needed, then this book is for you. What should a test manager be able to do? Or, to ask the question another way, what should you have learned to do—or learned to do better—by the time you finish this book?

Manage a testing project by implementing the mission, goals, and testing processes established for the testing organization. Organize and lead risk identification and risk analysis sessions and use the results of such sessions for test estimation, planning, monitoring, and control.

Identify skills and resource gaps in your test team and participate in sourcing adequate resources. In this book, I focus on these main concepts. I suggest that you keep these high-level outcomes in mind as you proceed through the material in each of the following chapters. If you are using this book to prepare for the Advanced Test Manager exam, then I recommend that you read Chapter 8 first and then read the other chapters in order.

If you are using this book to expand your overall understanding of testing to an advanced level but do not intend to take the Advanced Test Manager exam, then I recommend that you read Chapters 1 through 7 only. If you are using this book as a reference, then feel free to read only those chapters that are of specific interest to you. Each of the first seven chapters is divided into sections. For the most part, I have followed the organization of the Advanced Test Manager syllabus to the point of section divisions, but subsections and sub-subsection divisions in the syllabus might not appear in this book.

If you are curious about how to interpret those K2, K3, and K4 tags in front of each learning objective, and how learning objectives work within the ISTQB syllabi, read Chapter 8. Software testing is in many ways similar to playing the piano, cooking a meal, or driving a car. How so? In each case, you can read books about these activities, but until you have practiced, you know very little about how to do it.

I encourage you to practice these concepts with the exercises in the book. Then, make sure you take these concepts and apply them on your projects. You can become an advanced software test management professional only by managing software testing.

Drill Sergeant: [Expressing surprise and looking at a stopwatch.] You are gonna be a general someday, Gump. Now disassemble your weapon and continue!

Forrest Gump displays an innate ability to follow a process accurately and quickly in a scene from the movie Forrest Gump, screenplay by Eric Roth, from the novel by Winston Groom.

The first chapter of the Advanced syllabus is concerned with contextual and background material that influences the remaining chapters. There are eight sections. At the Advanced level, that process has been refined to separate certain activities, thus providing finer-grained resolution on the process as a whole as well as its constituent activities. This fine-grained breakdown allows us to focus refinement and optimization efforts, to tailor the test process better within the software development lifecycle, and to gain better insight into project and product status for responsive, effective test monitoring and control.

The refined activities are as follows:

The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
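As a rough sketch of what reporting against exit criteria might look like, with invented criteria and thresholds:

```python
# Hypothetical sketch: each exit criterion has a target and an actual value;
# testing is complete only when no criterion remains outstanding.

exit_criteria = [
    ("planned tests executed (%)", 100, 97),
    ("test pass rate (%)",          95, 96),
    ("open critical defects",        0,  1),
]

outstanding = [
    name for name, target, actual in exit_criteria
    # For defect counts, lower is better; for the percentages, higher is better.
    if (actual > target if "defects" in name else actual < target)
]

if outstanding:
    print("Exit criteria not yet met:", outstanding)
else:
    print("All exit criteria met; test execution can be declared complete.")
```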

The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report. As at the Foundation level, remember that activities may overlap or be concurrent, in spite of the appearance of sequentiality in the syllabus—and in Figure 1-1.

Therefore, in set theory terms, you can think of the fundamental test process as the union of all tasks that can be important in testing, organized into a hierarchy that corresponds roughly to the timeline of testing on a project following a sequential lifecycle.

Since the fundamental test process was introduced at the Foundation level, I recommend that you review Chapter 1, section 4 of the Foundation syllabus prior to or immediately after going through this chapter. Remember that the Foundation syllabus material may be examined to the extent that it forms an underpinning of the Advanced material. If the Advanced material refines or modifies the Foundation material in any way, then you should expect the exam to follow the Advanced material. While the fundamental test process includes tasks carried out by both managers and testers, in this book I will focus on the management perspective.

That includes management tasks as well as the management of testers who are carrying out tester tasks. K4 Analyze the test needs for a system in order to plan test activities and work products that will achieve the test objectives. Test planning, monitoring, and control are management tasks. Test planning is the initial step of the fundamental test process. It occurs—to a greater or lesser extent and degree of formality—at every test level. Test planning is also an ongoing activity, with planning happening throughout the fundamental test process, even through closure.

Planning should be about how to achieve the test mission and objectives—possibly as tailored for this particular project. Note that Chapter 1 of the syllabus mentions the mission and objectives as being identified in the test strategy, but later in the syllabus it correctly places the main source of the mission and objectives as the test policy document.

So, the mission and objectives address what we want to achieve with testing, while planning addresses how we want to achieve that stuff.

This includes the activities and resources necessary for the testing. Planning must also take into account the need to guide the testing itself, the project of product development or maintenance underway, and short-term and long-term process improvements, so planning should include the gathering and tracking of metrics.

The use of these selected metrics will often require the support of tools, testers skilled in the proper input of the metrics information, and documentation about the metrics and their use, so planning must address these topics as well.

Each of these test strategies has implications for test planning. In risk-based testing, the identified quality and project risks are addressed in the plan, such as increasing test coverage for risk areas that are seen as high risk as well as emphasizing elements of the test process that are seen as particularly effective at mitigating certain kinds of quality risks.

In risk-based testing, the level of risk sets the priority and sequence of tests too; the means by which this prioritization is done must be addressed in planning. When blended test strategies are used—that is, the use of multiple test strategies at once to optimize the effectiveness and efficiency of testing—then the planning must handle the integration of all test activities across all strategies.

So, if a combination of risk-based and reactive testing is adopted, then planning must address when, how, and by whom test charters will be defined. Test planning should span all test levels to avoid a patchwork approach to testing across the project.

Clearly defined goals and objectives, coverage criteria, entry and exit criteria, and techniques across all test levels will ensure optimal benefit from the testing and will maximize the ability to see a coherent picture of the test results across all levels. Testing exists in a complex relationship with other parts of the project, and test inputs and outputs—for example, requirements specifications and test results—can have complex relationships as well, often many-to-many relationships.

Effective test management in general—and test planning specifically—requires a good understanding of these inputs, outputs, and their relationships. Traceability between test inputs such as the test basis, intermediate test work products such as test conditions, and test outputs such as test status reports is essential to achieving test objectives such as confidence building, but it is sufficiently complex that it will not happen by itself or by accident.

It will only happen when properly planned. If tools are needed, planning should address that too. As with any project, the testing part of a project must have a clearly defined scope to stay focused and to avoid missing any important areas. Areas that are out of scope must be clearly stated to manage expectations. Each feature, risk area, or other element listed as within scope should be mappable to test work products such as groups of test conditions, perhaps in a test design specification.
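A minimal sketch of such a mapping, tying test basis items to test conditions and their status so coverage can be reported per requirement; all identifiers are invented:

```python
# Hypothetical sketch of many-to-many traceability from requirements (test basis)
# through test conditions to a reportable status per requirement.

requirement_to_conditions = {
    "REQ-10": ["TCOND-1", "TCOND-2"],
    "REQ-11": ["TCOND-2", "TCOND-3"],   # TCOND-2 traces to two requirements
}

condition_status = {"TCOND-1": "passed", "TCOND-2": "failed", "TCOND-3": "not run"}

for req, conditions in requirement_to_conditions.items():
    statuses = {c: condition_status[c] for c in conditions}
    print(req, "->", statuses)
# REQ-10 -> {'TCOND-1': 'passed', 'TCOND-2': 'failed'}
# REQ-11 -> {'TCOND-2': 'failed', 'TCOND-3': 'not run'}
```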

Like test inputs and outputs, test environments can be quite complex, expensive, and protracted to procure. So, the smart test manager plans carefully, starting with collaboration with other project team members such as architects as to what the test environment should look like.

By Rex Black. This book teaches test managers what they need to know to achieve advanced skills in test estimation, test planning, test monitoring, and test control. Readers will learn how to define the overall testing goals and strategies for the systems being tested. This hands-on, exercise-rich book provides experience with planning, scheduling, and tracking these tasks. You'll be able to describe and organize the necessary testing activities as well as learn to select, acquire, and assign adequate resources for testing tasks. You'll learn how to form, organize, and lead testing teams, and master the organization of communication among the members of the testing teams, and between the testing teams and all the other stakeholders. Additionally, you'll learn how to justify decisions and provide adequate reporting information where applicable. With over thirty years of software and systems engineering experience, author Rex Black is President of RBCS, a leader in software, hardware, and systems testing, and is the most prolific author practicing in the field of software testing today. He has published a number of books on testing that have sold tens of thousands of copies worldwide. Included are sample exam questions, at the appropriate level of difficulty, for most of the learning objectives covered by the ISTQB Advanced Level Syllabus. With a large number of certificate holders and a global presence in over 50 countries, you can be confident in the value and international stature that the Advanced Test Manager certificate can offer you.