
This book teaches test managers what they need to know to achieve advanced skills in test estimation, test planning, test monitoring, and test control. Readers will learn how to define the overall testing goals and strategies for the systems being tested. This hands-on, exercise-rich book provides experience with planning, scheduling, and tracking these tasks.

You'll be able to describe and organize the necessary activities as well as learn to select, acquire, and assign adequate resources for testing tasks. You'll learn how to form, organize, and lead testing teams, and master organizing communication among the members of the testing teams and between the testing teams and all the other stakeholders. I have seen a number of groups that were using Scrum to manage their sprints, but in which strong senior leadership from outside the team determined sprint content.

In a number of these situations, the agile principle of sustainable work pace was violated when management refused to allow actual team velocity—that is, the rate of story points that can be achieved in a sprint—to determine the commitments made for each sprint.
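To make the velocity arithmetic concrete, here is a minimal sketch in Python; the story-point figures and the commitment rule are illustrative assumptions, not data from the text.

```python
# Hypothetical numbers: story points completed in the last four sprints.
completed_story_points = [21, 18, 23, 19]

# Velocity is the rate of story points actually achieved per sprint.
velocity = sum(completed_story_points) / len(completed_story_points)

# A sustainable commitment stays at or below demonstrated velocity;
# management overriding this number is the anti-pattern described above.
next_sprint_commitment = int(velocity)
print(f"Average velocity {velocity:.1f}; commit to at most {next_sprint_commitment} points")
```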

Overriding velocity in this way led to test team crunches at the end of each sprint. Another example of agile models is Extreme Programming, or XP. XP provides a set of practices for programming in agile environments. It includes pair programming, a heavy emphasis on automated unit tests using tools like the x-unit frameworks, and again the concept of self-directed teams. Its originator, Kent Beck, is famous (or infamous, if you prefer) in testing circles for showing up at a testing conference in San Francisco in the late 1990s to proclaim that independent testing teams, and independent testers, were going to be rendered entirely obsolete by agile methods.

Fifteen years later, testing as a profession is more recognized than ever. Testing issues with agile methods are similar to those with iterative lifecycles, though the pace of the iterations makes the issues more acute. In addition, the exact relationship between the testers, the independent test team, and the agile teams is something that varies considerably from one organization to another.

Volume and speed of change. Testers must typically adopt lightweight documentation and rely more heavily on tester expertise than on detailed test cases to keep up. Remaining effective in short iterations. Because time is at a premium, techniques such as risk-based testing can help focus attention on the important areas. Increased regression risk. Because of this, automated regression testing, both at the unit level and at the system level, is important in agile projects.

Inconsistent or inadequate unit testing. When developers short-change unit testing, this creates serious problems because it compounds the increased regression risk and exacerbates the time squeeze inherent in the short iterations. Poor, changing, and missing test oracles and a shifting test basis. Embracing change sometimes degrades into changing the content of a sprint or the way it should work without telling the testers, which can lead to a lot of confusion, ineffectiveness, and inefficiency.

Meeting overload. In some organizations, an improper implementation of agile has led to lots of meetings, reducing the efficiency of the team, including testers. Sprint team siloing.

The agile hype cycle and high expectations. Since agile is relatively new in terms of widespread use, the extent to which people are overhyping what it can accomplish—especially consultants and training companies who benefit the most from this hype—has led to unrealistic expectations.

When these expectations are not achieved, particularly in the area of increased quality, testers are sometimes blamed. Agile methods also bring opportunities for testing. Automated unit testing.

When it is fully employed, the use of automated unit testing results in much higher-quality code delivered for testing. Static code analysis. More and more agile teams use static code analysis tools, and this also improves the quality of code, especially its maintainability, which is particularly important given the increased regression risk.
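To ground the x-unit style mentioned earlier, the following is a minimal sketch using Python's built-in unittest framework; the apply_discount function under test is invented for this illustration.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test, invented for this example."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 15), 85.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()  # the x-unit runner discovers and runs test* methods
```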

Code coverage. Agile teams also use code coverage to measure the completeness of their unit tests and, increasingly, their automated functional tests, which helps quantify the level of confidence we can have in the testing. Continuous integration. The use of continuous integration tools, especially when integrated with automated unit testing, static analysis, and code coverage analysis, increases overall code quality and reduces the incidence of untestable builds delivered for testing.
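As one concrete example of gathering such coverage numbers, the coverage.py package can be driven from Python around a unittest run; this is a sketch, and the "tests" directory name is an assumption.

```python
# A sketch of measuring unit-test completeness with coverage.py
# (pip install coverage); the "tests" directory name is an assumption.
import unittest

import coverage

cov = coverage.Coverage()
cov.start()

# Discover and run the tests while coverage data is being recorded;
# code imported after start() is instrumented.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report()  # prints per-module statement coverage percentages
```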

Automated functional testing. Agile test tools include a number of automated functional testing tools, some quite good and also free. In addition, automated functional tests can often be integrated into the continuous integration frameworks as well. Requirements or user story reviews and test acceptance criteria reviews.

In a properly functioning agile team, the user stories and the acceptance criteria are jointly reviewed by the entire team. Reasonable workload. While some agile teams abuse this, a properly implemented version of agile will try to keep work weeks within a 40-hour limit, including for testers, even at the end of a sprint. Control of technical debt via "fix bugs first." Some agile teams have a rule that if bugs are allowed to escape from one sprint, they are fixed immediately in the next sprint, before any new user stories are implemented, which does an excellent job of controlling technical debt.

Test planning and management should work to manage the issues and challenges while deriving full benefit from the opportunities. Finally, we have the spiral model, where early prototypes are used to design the system. The development work goes through a sequence of prototypes that are tested, then redesigned, reprototyped, and retested, until all of the risky design decisions have been either proven or else disproven and rejected through testing. The spiral model was developed by Barry Boehm.

I have also used it myself on small projects to develop e-learning packages with new technologies. It is quite useful when applied to projects with a large number of unknowns. That said, the spiral model poses some issues for test management. The first issue is that, by its very nature, the designs of the system will change. This means that flexibility is paramount in all test case, test data, test tool, and test environment decisions early in the project. If the test manager commits too heavily to a particular way of generating test data, for example, and the structure of the system data repository changes dramatically, say from a relational database to XML files, that will have serious rework implications in testing.

The second issue is the unique, experimental mode of early testing. Confidence building is typically not an objective. These different test objectives for early testing, evolving into the more typical role of testing in the final stages, require the test manager to change plans and strategies as the project progresses. Again, flexibility is key. This can make estimating and planning the testing work difficult, particularly if other projects are active at the same time.

Again, these are surmountable issues, but they are quite troublesome if not dealt with properly by the test manager. As testing work proceeds, we need to monitor it and, when needed, take control steps to keep it on track for success. During the planning process, the schedule and various monitoring activities and metrics should be defined.

Then, as testing proceeds, we use this framework to measure whether we are on track in terms of the work products and test activities we need to complete. Of course, this framework must be aligned with the test strategy and test policy so that we can measure our success at achieving defined objectives.

On a small or simple project, success is sometimes more self-evident, but on many projects the effort spent defining this framework is valuable. For example, being able to measure the coverage of risks, requirements, supported configurations, and other elements of the test basis can be very helpful in determining whether the product is ready for release.

The work done during test planning in terms of setting up traceability between the test basis, the test conditions, and other test work products will pay off during test control and monitoring. Traceability will allow you to talk to stakeholders in terms of the extent to which quality risks are mitigated, requirements are met, and supported configurations work properly as well as operational business cycles, use cases, and so forth.
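A minimal sketch of what such traceability can look like in practice follows; all risk, requirement, and test-case identifiers are invented for illustration.

```python
# Mapping elements of the test basis (quality risks, requirements) to the
# tests that cover them, so status can be reported in stakeholder terms.
traceability = {
    "RISK-001: data loss on archive":     ["TC-101", "TC-102"],
    "RISK-002: slow response under load": ["TC-201"],
    "REQ-4.2: supported configurations":  [],  # no tests traced yet
}

executed_and_passed = {"TC-101", "TC-201"}

for basis_item, tests in traceability.items():
    if not tests:
        status = "NOT COVERED"
    elif all(t in executed_and_passed for t in tests):
        status = "covered and passing"
    else:
        status = "partially tested"
    print(f"{basis_item}: {status}")
```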

If formal documentation of the system is lacking, then coverage must be established based on targets defined in collaboration with the stakeholders. In risk-based testing, this is done in terms of risks. In agile projects, user stories will provide the basis for testing. Testers should never assume that, since formal documentation of the system is not present, coverage metrics will not be needed.

Testing must connect to the important elements of the system and measure for stakeholders the quality of those elements. Lack of formal documentation can have the salutary effect of driving testers to establish earlier relationships with stakeholders in which the important elements, and how to report test status, are discussed.

So, even when formal documentation such as detailed requirements specifications is available, testers and test managers should make the effort to establish strong relationships with stakeholders right from the start. When testing progresses in a manner that threatens success along one or more of these dimensions, the test manager must take appropriate control actions. As discussed at the Foundation level, those control actions may be local to the test team—in which case test managers can typically act on their own initiative—or involve other project participants—in which case test managers need to coordinate with their coworkers, peers, and senior managers.

In the latter situation, having already built strong stakeholder relationships will help buffer the interactions, because control actions are sometimes taken in response to crises where stress levels can be high.

Proper planning helps set the stage for proper control. Proper planning allows for fine-grained visibility into the activities underway, which means deviations from a successful path are found sooner rather than later. Proper planning also provides better metrics, which allows a determination of the nature of the problem and the best way to solve it to be based on facts and data rather than subjective opinion.

This example, shown in Figure 1-6, is from a large, complex project to develop a system of systems providing an entertainment network in North America. These are the entry criteria for the System Integration Test phase, the last of the formal test phases before delivery of the system for operation.

Entry criterion 1 requires that the Pre-Integration Test phase has exited. This was an informal test phase to check, on a risk-driven basis, whether various intersystem interfaces were working properly. Final, approved versions of the primary specification documents exist and have been provided to the individuals responsible for performing System Integration Testing. The Network Team configures the [entire] live system for testing. In addition, all integrated components, hardware and software, must be of the correct major and minor releases. All Development Teams provide revision-controlled, complete systems for Integration Test, with these products having completed at least one cycle of the applicable System Test.

All Development Teams provide all custom documentation needed [for] the software mentioned in entry criterion 5. Entry criterion 2 requires that we have final, approved versions of the primary specification documents. This has a lifecycle assumption embedded in it, namely that we are following a lifecycle that would produce such documents.

Entry criterion 3 requires approval of the system integration test plan and the system integration tests. The development and project management teams were to review these documents and provide approval, which served to align development and test activities.

Entry criterion 4 requires that the live system be set up for testing. The lifecycle assumption here is that we would have access to the live system and that it would be ready well in advance of going into production so that we could use it for testing.

Categories of Quality Risks

I mentioned that informal risk-based testing tends to rely on a checklist to identify risk items. What are the categories of risks that we would look for? In part, that depends on the level of testing we are considering.

Does the unit handle state-related behaviors properly? Do transitions from one state to another occur when the appropriate events occur? Are the correct actions triggered? Are the correct events associated with each input? Can the unit handle the transactions it should handle, correctly, without any undesirable side effects? What statements, branches, conditions, complex conditions, loops, and other paths through the code might result in failures?

What flows of data into or out of the unit might result in failures? Is the functionality provided to the rest of the system by this component incorrect, or might it have invalid side effects? If this component interacts with the user, might users have problems understanding prompts and messages, deciding what to do next, or feeling comfortable with color schemes and graphics? For hardware components, might this component wear out or fail after repeated motion or use? Are the signals correct and in the correct form?
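To make the state-related questions above concrete, here is a minimal sketch of a state-transition unit test; the turnstile is a standard textbook example, not a system from the text.

```python
import unittest

class Turnstile:
    """Hypothetical unit under test: a two-state turnstile."""
    def __init__(self):
        self.state = "locked"

    def coin(self):
        self.state = "unlocked"  # paying always unlocks

    def push(self):
        self.state = "locked"    # pushing through (or against) locks again

class TurnstileStateTest(unittest.TestCase):
    def test_coin_event_unlocks(self):
        t = Turnstile()
        t.coin()
        self.assertEqual(t.state, "unlocked")

    def test_push_in_locked_state_stays_locked(self):
        t = Turnstile()
        t.push()  # an event in the "wrong" state must not unlock
        self.assertEqual(t.state, "locked")

if __name__ == "__main__":
    unittest.main()
```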

As we move into integration testing, additional risks arise, many in the following areas: Are the interfaces between components well defined? What problems might arise in direct and indirect interaction between components? Again, what problems might exist in terms of actions and side effects, particularly as a result of component interaction?

Are the static data spaces such as memory and disk space sufficient to hold the information needed? Are the dynamic volume conduits such as networks going to provide sufficient bandwidth? Will the integrated component respond correctly under typical and extreme adverse conditions? Can it recover to normal functionality after such a condition? Can the system store, load, modify, archive, and manipulate data reliably, without corruption or loss of data?

What problems might exist in terms of response time, efficient resource utilization, and the like? Again, for this integrated collection of components, if a user interface is involved, might users have problems understanding prompts and messages, deciding what to do next, or feeling comfortable with color schemes and graphics?

Similar issues apply for system integration testing, but we would be concerned with integration of systems, not components. Finally, what kinds of risk might we consider for system and user acceptance testing?

Again, we need to consider functionality problems. At these levels, the issues we are concerned with are systemic. Do end-to-end functions work properly? Are deep levels of functionality and combinations of related functions working? In terms of the whole system's interface to the user, are we consistent? Can the user understand the interface? Do we mislead or distract the user at any point? Trap the user in dead-end interfaces? Considering the states the user, or the objects acted on by the system, might be in, are there potential problems here?

Considering the entire set of data that the system uses—including data it might share with other systems—can the system store, load, modify, archive, and manipulate that data reliably, without corrupting or losing it?

Complex systems often require administration. Databases, networks, and servers are examples. Operations these administrators perform can include essential maintenance tasks.

For example, might there be problems with backing up and restoring files or tables? Can you migrate the system from one version or type of database server or middleware to another? Can storage, memory, or processor capacity be added? Are there potential issues with response time? With behavior under combined conditions of heavy load and low resources? Insufficient static space?

Insufficient dynamic capacity and bandwidth? Will the system fail under normal, exceptional, or heavy load conditions? Might the system be unavailable when needed? Might it prove unstable with certain functions?

Configuration: What installation, data migration, application migration, configuration, or initial conditions might cause problems? Will the system respond correctly under typical and extreme adverse conditions? Can it recover to normal functionality after such a condition? Might its response to such conditions create consequent conditions that negatively affect interoperating or cohabiting applications? Might certain date- or time-triggered events fail? Do related functions that use dates or times work properly together?

Could situations like leap years or daylight saving time transitions cause problems? What about time zones? In terms of the various languages we need to support, will some of those character sets or translated messages cause problems? Might currency differences cause problems? Do latency, bandwidth, or other factors related to the networking or distribution of processing and storage cause potential problems? Might the system be incompatible with various environments it has to work in? Might the system be incompatible with interoperating or cohabiting applications in some of the supported environments?

What standards apply to our system, and might it violate some of those standards? Is it possible for users without proper permission to access functions or data they should not?

Are users with proper permission potentially denied access? Is data encrypted when it should be? Can security attacks bypass various access controls? For hardware systems, will humidity, dust, or heat cause failures, either permanent or intermittent?

Are there problems with power consumption for hardware systems? Do normal variations in the quality of the power supplied cause problems?

Is battery life sufficient? For hardware systems, might foreseeable physical shocks, background vibrations, or routine bumps and drops cause failure? Is the documentation incorrect, insufficient, or unhelpful?

Is the packaging sufficient? Can we upgrade the system? Apply patches? Remove or add features from the installation media? There are certainly other potential risk categories, but this list forms a good starting point.

Documenting Quality Risks

In Figure 2, you see a template that can be used to capture the information you identify in quality risk analysis. In this template, you start by identifying the risk items, using the categories just discussed as a framework. Next, for each risk item, you assess its level of risk in terms of the factors of likelihood and impact.

You then use these two ratings to determine the overall priority of testing and the extent of testing. Finally, if the risks arise from specific ...

Figure 2: A template for capturing quality risk information

First, remember that quality risks are potential system problems that could reduce user satisfaction. Working with the stakeholders, we identify one or more quality risk items for each category and populate the template.

Having identified the risks, we can now go through the list of risk items and assess the level of risk, because we can see the risk items in relation to each other. An informal technique typically uses two main factors for assessing risk. The first is the likelihood of the problem, which is determined mostly by technical considerations. The second is the impact of the problem, which is determined mostly by business or operational considerations.

Both likelihood and impact can be rated on an ordinal scale. A three-point ordinal scale is high, medium, and low.

I prefer to use a five-point scale, from very high to very low. Given the likelihood and impact, we can calculate a single, aggregate measure of risk for the quality risk item. A generic term for this measure of risk is risk priority number. One way to do this is to use a formula to calculate the risk priority number from the likelihood and impact. The risk priority number can be used to sequence the tests.
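A minimal sketch of this calculation and the resulting sequencing follows. It assumes a numeric five-point scale where 5 means very high (some practitioners invert the scale so that 1 means highest risk); the risk items are invented.

```python
# Risk priority number (RPN) via the multiplicative formula described above.
risks = [
    {"item": "data corruption on save",  "likelihood": 2, "impact": 5},
    {"item": "slow search response",     "likelihood": 4, "impact": 3},
    {"item": "misaligned report layout", "likelihood": 3, "impact": 1},
]

for r in risks:
    r["rpn"] = r["likelihood"] * r["impact"]

# Higher RPN first: these risk items get designed and executed earlier.
for r in sorted(risks, key=lambda r: r["rpn"], reverse=True):
    print(f"{r['rpn']:>2}  {r['item']}")
```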

To allocate test effort, I determine the extent of testing. As noted before, while you go through the quality risk analysis process, you are likely to generate various useful by-products. These include implementation assumptions that you and the stakeholders made about the system in assessing likelihood. The by-products also include project risks that you discovered, which the project manager can address. Perhaps most importantly, the by-products include problems with the requirements, design, or other input documents.

We can now avoid having these problems turn into actual system defects. Notice that all three enable the bug-preventive role of testing discussed earlier in this book. In Figure 3, you see an example of an informal quality risk analysis. We have used six quality categories for our framework. Of course, a typical product would have far more total quality risks, and a particularly complex product even more.

Quality Risk Analysis Using ISO 9126

We can increase the structure of an informal quality risk analysis (formalize it slightly, if you will) by using the ISO 9126 standard as the quality characteristic framework instead of the rather lengthy and unstructured list of quality risk categories given on the previous pages. This has some strengths. The ISO 9126 standard provides a predefined and thorough framework. The standard itself, that is, the entire set of documents that the standard comprises, provides a predefined way to tailor it to your organization.

If you use this across all projects, you will have a common basis for your quality risk analyses and thus your test coverage. Consistency in testing across projects provides comparability of results.

Figure 3: Informal quality risk analysis example

This approach also has weaknesses. For one thing, if you are not careful tailoring the quality characteristics, you could find that you are potentially over-broad in your analysis.

That makes you less efficient. For another thing, applying the standard to all projects, big and small, complex and simple, could prove over-regimented and heavyweight from a process point of view. I would suggest that you consider the use of the ISO 9126 structure for risk analysis whenever a bit more formality and structure is needed, or if you are working on a project where standards compliance matters.

I would avoid its use on atypical projects or projects where too much structure, process overhead, or paperwork is likely to cause a problem, relying instead on the lightest-weight informal process possible in such cases.

Quality Risk Analysis Using Cost of Exposure

Another form of quality risk analysis is referred to as cost of exposure, a name derived from the financial and insurance world.

The cost of exposure—or the expected payout in insurance parlance—is the likelihood of a loss times the average cost of such a loss.

Across a large enough sample of risks, over a long enough period, we would expect the total amount lost to tend toward the total of the costs of exposure for all the risks. So, for each risk, we should estimate, evaluate, and balance the costs of testing versus not testing. If the cost of testing were below the cost of exposure for a risk, we would expect testing to save us money on that particular risk.

If the cost of testing were above the cost of exposure for a risk, we would expect testing not to be a smart way to reduce the costs of that risk. This is obviously a very judicious and balanced approach to testing. What could be more practical? That said, it has some problems. In order to do this with any degree of confidence, we need enough data to make reasonable estimates of likelihood and cost. Furthermore, this approach uses monetary considerations exclusively to decide on the extent and sequence of testing.
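Returning to the decision rule above, here is a minimal sketch of the comparison; all likelihoods and costs are invented for illustration.

```python
# All figures invented for illustration.
likelihood_of_loss = 0.15        # estimated probability of a field failure
average_cost_of_loss = 40_000.0  # average cost per such failure
cost_of_testing = 4_500.0        # cost to test this risk area

# Cost of exposure = likelihood times average cost (the expected payout).
cost_of_exposure = likelihood_of_loss * average_cost_of_loss

if cost_of_testing < cost_of_exposure:
    print(f"Testing pays: {cost_of_testing:,.0f} < exposure {cost_of_exposure:,.0f}")
else:
    print(f"Testing does not pay: {cost_of_testing:,.0f} >= {cost_of_exposure:,.0f}")
```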

For some risks, the primary downsides are nonmonetary, or at least difficult to quantify, such as lost business and damage to company image. The accessibility of the technique to the other participants in the risk analysis process is quite valuable.

Quality Risk Analysis Using Hazard Analysis

A hazard is the thing that creates a risk. For example, a wet bathroom floor creates the risk of a broken limb due to a slip and fall. In hazard analysis, we try to understand the hazards that create risks for our systems.

This has implications not only for testing but also for upstream activities that can reduce the hazards and thus reduce the likelihood of the risks. As you might imagine, this is a very exact, cautious, and systematic technique.

Having identified a risk, we then must ask ourselves how that risk comes to be and what we might do about the hazards that create the risk. However, in complex systems there could be dozens or hundreds or thousands of hazards that interact to create risks.

Many of the hazards might be beyond our ability to predict. So, hazard analysis is overwhelmed by excessive complexity and in fact might lead us to think the risks are fewer than they really are. I would consider using this technique on medical or embedded systems projects.

Determining the Aggregate Risk Priority

We are going to cover one more approach for risk analysis in a moment, but I want to return to this issue of using risk factors to derive an aggregate risk priority using a formula.

One such formula multiplies likelihood by impact. Multiplication is also implicit in the cost of exposure technique, where the cost of exposure for any given risk is the product of the likelihood and the average cost of a loss associated with that risk. Multiplication is not the only option, though. For example, Rick Craig uses addition of the likelihood and impact. To see the difference, take a moment to construct two tables. Use likelihood and impact ratings ranging from 1 to 5 for each, and then populate the tables showing all possible risk priority number calculations for all combinations of likelihood and impact.
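The two tables are quick to generate programmatically; this sketch simply enumerates the 25 combinations for each operator (a spreadsheet works just as well).

```python
# Enumerate all 25 likelihood x impact combinations (each rated 1-5)
# under the two aggregation rules discussed in the text.
for name, combine in [("addition", lambda l, i: l + i),
                      ("multiplication", lambda l, i: l * i)]:
    print(f"\n{name}:  impact ->   1   2   3   4   5")
    for likelihood in range(1, 6):
        row = " ".join(f"{combine(likelihood, impact):3d}" for impact in range(1, 6))
        print(f"  likelihood {likelihood}: {row}")
```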

The tables should each have 25 cells. In the case of addition, the risk priority numbers range from 2 to 10, while in the case of multiplication, the risk priority numbers range from 1 to 25. More complex formulas, involving additional risk factors, are also possible. For example, certain test management tools, such as the newer versions of Quality Center, support this.

In these formulas, we can weight some of the factors so that they account for more points in the total risk priority score than others.

In addition to calculating a risk priority number for sequencing of tests, we also need to use risk factors to allocate test effort. We can derive the extent of testing using these factors in a couple of ways. We could try to use another formula. For example, we could take the risk priority number and multiply it by some given number of hours for test design and implementation and some other number of hours for test execution.

Alternatively, we could use a qualitative method where we try to match the extent of testing with the risk priority number, allowing some variation according to tester judgment. If you do choose to use formulas, make sure you tune them based on historical data. Or, if you are time-boxed in your testing, you can use formulas based on risk priority numbers to distribute the test effort proportionally based on risks.
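A minimal sketch of the time-boxed, proportional distribution just described; the budget and risk priority numbers are invented.

```python
# Hypothetical budget and risk priority numbers (RPNs) per risk area.
test_budget_hours = 200
rpns = {"checkout flow": 25, "search": 15, "reporting": 9, "admin screens": 4}

total_rpn = sum(rpns.values())
for area, rpn in rpns.items():
    hours = test_budget_hours * rpn / total_rpn  # proportional share
    print(f"{area:14s} RPN {rpn:2d} -> {hours:5.1f} hours")
```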

Some people prefer to use a table rather than a formula to derive the aggregate risk priority from the factors. Table 2 shows an example of such a table. First you assess the likelihood and impact as before. You then use the table to select the aggregate risk priority for the risk item based on the likelihood and impact scores.
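A minimal sketch of such a lookup table follows; the cell values below are illustrative and are not Table 2 from the text.

```python
# Illustrative (likelihood, impact) -> aggregate priority mapping.
PRIORITY_TABLE = {
    ("high",   "high"):   1,  # test extensively
    ("high",   "medium"): 2,  # test broadly
    ("high",   "low"):    3,  # test opportunistically
    ("medium", "high"):   2,
    ("medium", "medium"): 3,
    ("medium", "low"):    4,  # report bugs only
    ("low",    "high"):   3,
    ("low",    "medium"): 4,
    ("low",    "low"):    5,  # do not test
}

likelihood, impact = "medium", "high"
print(f"Aggregate risk priority: {PRIORITY_TABLE[(likelihood, impact)]}")
```

A table like this trades the precision of a formula for transparency: the other participants in the risk analysis can see, discuss, and negotiate each cell directly.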
