Approaches to unraveling a complex test problem
Disclaimer: Nothing in this article is groundbreaking or new; it takes inspiration from already existing material. It is
just one of many mental models and approaches I use to select a test scope for complex test problems, and it may or may
not be applicable to your context.
When testing a complex system you are often faced with complex test problems. Cause and
effect cannot be deduced in advance, only in retrospect. According to the Cynefin framework,
the general approach to tackle complexity is probe-sense-respond [1]. Try something, analyze
the outcome, and based on that outcome, try something else. This is the basis of all my
approaches to begin unraveling complex test problems. But how do I select my test scope for a
specific complex test problem? The different approaches listed below are not mutually exclusive,
sometimes even overlapping, and I often use several in combination. They give me
direction on what to base my scope selection on, but not exactly what to test, or how to
test it. This is not an exhaustive list, but it covers some of the more common approaches I use.
Our theoretical complex test problem:
A developer has made a code change in a part of the software system with many complicated
dependencies. The developer has designed and executed unit tests for the code change, but is
worried that the code change may have caused regressions, functional or non-functional, in
other parts of the system.
The Explorer
You know very little about the system you are testing, its stakeholders, or anything else (or you
deliberately set that knowledge aside), and you just start exploring, adjusting your path as you learn
and discover new information. This approach is probe-sense-respond in its most basic form. It
could be used the first time you get your hands on a new system, or if you have exhausted all
known leads and want to start over.
The Detective
Often the developer responsible for the code change can give you some insight into what
potential impact the change could have on the system, or at least some understanding of the
risks involved in the code change. In this approach you latch on to those leads and follow them
until you have unearthed any product risks associated with them. Developers are not perfect
oracles, so be prepared that they could be wrong.
The Sculptor
You always create a model of any system you are testing, at least in your head. Sometimes it
can be valuable to explicitly document that model to investigate it thoroughly. What capabilities
does the system have? How does information flow through the system? What are the
interactions and dependencies between different parts of the system? Are there any actions that
need to happen in sequence? Explicitly creating this model can help you catch details that
you might have missed if it lived only in your head, although it can of course turn into needless
documentation if nobody ever consults it.
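As a small illustration, the explicit model can be as basic as a dependency graph. The sketch below uses made-up component names and shows how such a model can answer the question "what could be impacted by this change?":

```python
# A hypothetical system model: for each component, the components
# that depend on it. Names are invented for the example.
dependencies = {
    "billing-core": ["invoice-api", "reporting"],
    "invoice-api": ["web-ui"],
    "reporting": [],
    "web-ui": [],
}

def impacted_by(changed, graph):
    """All components that directly or transitively depend on `changed`."""
    seen, stack = set(), [changed]
    while stack:
        for dependant in graph.get(stack.pop(), []):
            if dependant not in seen:
                seen.add(dependant)
                stack.append(dependant)
    return seen

print(sorted(impacted_by("billing-core", dependencies)))
# ['invoice-api', 'reporting', 'web-ui']
```

Even a toy model like this makes dependencies discussable with the team, which a model that only lives in your head does not.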
The Historian
History repeats itself. Obviously that is not always the case, but sometimes it is worth looking
into. If changes to one part of the system have historically had an impact on another part of the
system, it makes sense to at least check whether that is also the case this time. This could be in the
form of previously reported bugs or test results. History also tells us how we reached a certain
point, and what outcomes different decisions had. This knowledge can help us unveil additional
product risks.
The Architect
Having insight into the architecture of the system, and how different features and changes have
been implemented, is of course extremely valuable. Knowing how the system is built, and in
what programming language, can help you set the right test scope and design the right tests to
mitigate the product risks and find those regressions. Using this approach requires you to not
only have a good understanding of software engineering in general, but also extensive
knowledge of the system in question, which is why this approach is more often a dream than
reality. If you have actually built similar systems yourself, even better.
The Engineer
When you have explored all known product risks, one option is to fall back on different
coverage-based test techniques or heuristics [2], which all testers have learned, to try to uncover
potential risks. Equivalence class analysis, boundary testing, state transition testing and
pairwise testing are some examples of techniques that can be used. Probably used in
combination with some other approach, such as The Economist, explained below.
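As a small illustration of the first two techniques, here is what equivalence partitioning and boundary value analysis could look like for a hypothetical input field that accepts ages between 18 and 65 (the range and the chosen representatives are invented for the example):

```python
# Assumed valid range for this hypothetical input field.
LOWER, UPPER = 18, 65

def equivalence_classes(lower, upper):
    """One representative value per partition: below, inside, above."""
    return [lower - 10, (lower + upper) // 2, upper + 10]

def boundary_values(lower, upper):
    """Values at and immediately around each boundary."""
    return [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]

test_inputs = equivalence_classes(LOWER, UPPER) + boundary_values(LOWER, UPPER)
print(test_inputs)  # [8, 41, 75, 17, 18, 19, 64, 65, 66]
```

Nine inputs cover every partition and every boundary, instead of testing all 48 valid ages plus an unbounded set of invalid ones.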
The Customer
Another option can be to focus on different user types and user behaviours so that you are at
least sure that the common use cases are not affected by the code change. One part of this
approach is covering important use cases, but approaching the system from a user perspective
may also generate good test ideas you would not have come up with otherwise. It does require
some groundwork, though - you need an understanding of the users of your system and their
behaviour. One source of this information could be videos of user tests, which let you study
different behaviours.
The Data Cruncher
If the code change has already gone live to a percentage of the users, maybe in the form of a
live tech test, then looking at the data those users generate may give valuable insights. This
approach depends a lot on whether the system is built in such a way that you can gain this type of
insight. If you can understand how users are actually utilizing the system, and if you can get any
error messages the system generates, then this can greatly facilitate pinpointing potential
regressions.
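As a sketch of what this can look like in practice, the snippet below aggregates error log lines per subsystem to show where errors cluster after the change went live. The log format and subsystem names are invented for the example:

```python
from collections import Counter

# Hypothetical error log lines pulled from users running the new build.
log_lines = [
    "2024-05-01 ERROR payment: timeout contacting gateway",
    "2024-05-01 INFO session started",
    "2024-05-01 ERROR profile: avatar upload failed",
    "2024-05-02 ERROR payment: timeout contacting gateway",
    "2024-05-02 ERROR payment: card declined",
]

# Count errors per subsystem; a spike points at where to focus testing.
errors_by_area = Counter(
    line.split("ERROR ")[1].split(":")[0]
    for line in log_lines
    if " ERROR " in line
)
print(errors_by_area.most_common())  # [('payment', 3), ('profile', 1)]
```

A real pipeline would read from your logging backend rather than a hard-coded list, but the principle is the same: let the aggregated data point you at candidate regressions.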
The Economist
If there are no known product risks, starting with covering high value features may be one way
to go. This approach is basically about regression testing against a priority list based on the value
different features or parts of the system brings to stakeholders, and relevant business risks they
have identified. This approach is probably the one I use most in combination with others.
The Ghost Whisperer
Sometimes you just know what you need to test. Not everything can be explicitly stated. Tacit
knowledge and gut feeling must sometimes be allowed to lead the way. Given time you might be
able to dissect why you had this gut feeling, but we don’t always have the luxury of time.
The Blacksmith
Tools can be very valuable when used correctly. Sometimes this means building a tool that will
help you explore the complex problem more efficiently, like creating a Python script. Sometimes
“used correctly” just means unleashing tools on the system, if they can help you gain a lot of
insight at a low cost. A monkey testing tool or a performance measurement tool could be good
to have just running in the background to help you come up with new test ideas - maybe they
will trigger something and give you a lead.
The Paper Pusher
If you are stuck or out of test ideas, falling back to available requirements documentation or
scripted test cases may be a good palate cleanser. Maybe it will trigger a new test idea or cover
something you had forgotten about.
The Tailgater
If you have an automated test framework, analyzing the test results may give you some idea of
where to start. Even if all the tests have passed, the coverage of the automated tests could help
you set a direction for your testing. You could, for example, start in an area with fewer automated
tests. The same thing goes if you are not the first tester to try to tackle the complex test
problem - look at the previous tester’s result and test scope and set your scope with that in
mind.
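As a sketch, finding the areas with the thinnest automated coverage can be as simple as counting tests per area in an exported result list. The areas and test names below are invented for the example:

```python
from collections import Counter

# Hypothetical export of the automated suite: (area, test name, passed).
results = [
    ("checkout", "test_apply_coupon", True),
    ("checkout", "test_empty_cart", True),
    ("checkout", "test_vat_rounding", True),
    ("profile", "test_rename_user", True),
    ("sync", "test_conflict_merge", True),
]

tests_per_area = Counter(area for area, _, _ in results)

# Areas with the fewest automated tests come first; even when every
# automated test passes, these are candidates for manual exploration.
thinnest_first = sorted(tests_per_area, key=tests_per_area.get)
print(thinnest_first)  # ['profile', 'sync', 'checkout']
```

In a real setup the tuples would come from your CI system's result export, but the ranking idea carries over directly.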
The Psychologist
Everyone has cognitive biases [3]. By understanding these cognitive biases we as testers can
gain insight into where developers may have made mistakes, and this in turn can lead us to
potential regressions in the system. It also helps us avoid making similar mistakes.
The Machine
If our system is communicating with other systems, the interface between the systems may be a
good place to start our testing. Partly because it is important to make sure our system works for
third parties, but also because understanding how other systems communicate with our system
may give valuable insights into what could go wrong.
The Ambulance Chaser
If we have already released our change to a percentage of our users, following up with
customer support can be a quick way to get valuable information. Users may have run into
explicit bugs, but they could also have faced minor symptoms that could lead us to the source.
The Cyborg
Hardware can often have unforeseen impacts on our software system. Even something as
“simple” as a mobile application has to run on a wide range of devices. This approach focuses
on testing those areas which are impacted by different hardware configurations, since the
complexity is often even higher here.
The Clone
Sometimes comparing the current system version with a previous one may reveal regressions,
or give you test ideas that would have been hard to come up with by only looking at the
current version - especially the smaller discrepancies that are hard to spot. Running through test
scenarios (selected by perhaps using The Economist approach) on both versions
simultaneously and comparing outcomes is one way to go about this.
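A minimal sketch of such a back-to-back comparison: run the same scenarios through both versions and diff the outcomes. The two rounding functions below are made-up stand-ins for the previous and current versions of some system behaviour:

```python
# Hypothetical stand-ins for two versions of the same behaviour.
def legacy_round(price):
    return round(price, 2)

def current_round(price):  # the "code change" under test
    return int(price * 100) / 100  # truncates instead of rounding

# Scenarios selected, for example, with The Economist approach.
scenarios = [1.005, 2.675, 3.10, 0.999]

discrepancies = [
    (s, legacy_round(s), current_round(s))
    for s in scenarios
    if legacy_round(s) != current_round(s)
]

for scenario, old, new in discrepancies:
    print(f"input={scenario}: old={old} new={new}")
```

Every discrepancy then becomes a question for a human: is this an acceptable behaviour change, or one of those small regressions that would otherwise have slipped through?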
The Mind Reader
Not all requirements are explicitly written down, and requirements can often be interpreted in a
multitude of ways. Sitting down and talking to the developer about his or her interpretation of the
requirements can be enlightening - it can give more insight into how something was actually
implemented and help reveal discrepancies between what is expected, and what actually is.
Knowing the developers’ starting point and mindset during implementation can help reveal
additional product risks.
Conclusion
So where does this leave us? When I am faced with a complex test problem, I usually bring with
me this toolkit of approaches, and start applying them as the context demands, until the problem
is solved. In a specific context different approaches are more or less valid, and this list is far
from exhaustive, but hopefully it can trigger some ideas on what to base your test scope
selection on when faced with a complex test problem.
Graphic Design: ChatGPT 4.0 & DALL-E