- A Technical Review is a type of peer review, and is considered to be a formal review type, even though no Managers are expected to attend. It involves a structured encounter, in which one or more peers analyse the work with a view to improving the quality of the original work.
- - Ideally led by the Moderator
- - Attended by peers / technical experts
- - Documentation is required
- - No Management presence
- - Decision making
- - Solving technical problems
A walkthrough is a set of procedures and techniques designed for a peer group, led by the author, to review software code. It is considered to be a fairly informal type of review. The walkthrough takes the form of a meeting, normally between one and two hours in length.
- - Led by the Author
- - Attended by a peer group
- - Varying level of formality
- - Knowledge gathering
- - Defect finding
An inspection is a formal type of review. It requires preparation on the part of the review team members before the inspection meeting takes place. A follow-up stage is also a requirement of the inspection. This ensures that any re-working is carried out correctly.
- - Led by a Moderator
- - Attended by specified roles
- - Metrics are included
- - Formal process
- - Entry and Exit Criteria
- - Defect finding
An informal review is an extremely popular choice early in the development lifecycle of both software and documentation. The review is commonly performed by a peer or someone with relevant experience, and should be informal and brief.
- - Low cost
- - No formal process
- - No documentation required
- - Widely used review
V & V
Software Verification and Validation can involve analysis, reviewing, demonstrating or testing of all software developments. This will include the development process and the development product itself. Verification and Validation is normally carried out at the end of the development lifecycle (after all software development is complete), but it can also be performed much earlier in the development lifecycle by simply using reviews.
Validation involves the actual testing. This should take place after the verification phase has been completed.
Validation: confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use have been fulfilled.
Validation: Are we building the right product?
Verification would normally involve meetings and reviews to evaluate the documents, plans, requirements and specifications.
Verification: confirmation by examination and provision of objective evidence that specified requirements have been fulfilled.
Verification: Are we building the product right?
Rational Unified Process (RUP) is an object-oriented and Web-enabled program development methodology. RUP works by establishing four separate phases of development, each of which is organised into a number of separate iterations that must satisfy defined criteria before the next phase is undertaken.
- The four project lifecycle phases:
- inception, elaboration, construction and transition.
V - Model
The V-Model is an industry standard framework that shows clearly the software development lifecycle in relation to testing. It also highlights the fact that the testing is just as important as the software development itself. The relationships between development and testing are clearly defined.
The V-Model raises the visibility of the testing activities, presenting a more balanced approach between development and testing.
Agile Software Development is a conceptual framework for software development that promotes development iterations throughout the life-cycle of the project. Many different types of Agile development methods exist today, but most aim to minimize risk by developing software in short amounts of time. Each period of time is referred to as an iteration, which typically lasts from one to four weeks.
RAD stands for ‘Rapid Application Development’. In order to implement a RAD development, all of the requirements must be known in advance, and with RAD the requirements are formally documented. Each requirement is categorised into individual components; each component is then developed and tested in parallel, all within a set period of time. RAD is considered to be an iterative development model.
What is an Error?
Error: A human action that produces an incorrect result.
An example of an error would be the misspelling of a variable within the program code.
What is a defect?
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system. (A defect may also be termed a ‘fault’ or ‘bug’.)
An example of a defect would be a variable that is never used within the program code.
Component testing is also known as Unit, Module, or Program Testing. In simple terms, this type of testing focuses simply on testing of the individual components themselves.
It is common for component testing to be carried out by the Developer of the software. This, however, provides a very low level of testing independence.
This type of Integration testing is concerned with ensuring the interactions between the software components at the module level behave as expected.
It is commonly performed after any Component Testing has completed, and the behaviour tested may cover both functional and non-functional aspects of the integrated system.
Requirements Based Testing
Requirements-based Testing is simply testing the functionality of the software/system based on the requirements. The tests themselves should be derived from the documented requirements and not based on the software code itself. This method of functional testing ensures that the users will be getting what they want, as the requirements document basically specifies what the user has asked for.
Business Process Functional Testing
Different types of users may use the developed software in different ways. These ways are analysed and business scenarios are then created. User profiles are often used in Business Process Functional Testing. Remember that all of the functionality should be tested for, not just the most commonly used areas.
Testing the ability of the system to be able to bear loads. An example would be testing that a system could process a specified amount of transactions within a specified time period. So you are effectively loading the system up to a high level, then ensuring it can still function correctly whilst under this heavy load.
A program/system may have requirements to meet certain levels of performance. For a program, this could be the speed of which it can process a given task. For a networking device, it could mean the throughput of network traffic rate. Often, Performance Testing is designed to be negative, i.e. prove that the system does not meet its required level of performance.
Stress Testing simply means putting the system under stress. The testing is not normally carried out over a long period, as this would effectively be a form of duration testing. Imagine a system was designed to process a maximum of 1000 transactions in an hour. A stress test would be seeing if the systems could actually cope with that many transactions in a given time period. A useful test in this case would be to see how the system copes when asked to process more than 1000.
A major requirement in today’s software/systems is security, particularly with the internet revolution. Security testing is focused at finding loopholes in the programs security checks. A common approach is to create test cases based on known problems from a similar program, and test these against the program under test.
This is where consideration is taken into account of how the user will use the product. It is common for considerable resources to be spent on defining exactly what the customer requires and how simple it is to use the program to achieve their aims. For example; test cases could be created based on the Graphical User Interface, to see how easy it would be to use in relation to a typical customer scenario.
This type of testing may focus on the actual memory used by a program or system under certain conditions. Also disk space used by the program/system could also be a factor. These factors may actually come from a requirement, and should be approached from a negative testing point of view.
Volume Testing is a form of Systems Testing. Its primary focus is to concentrate on testing the system while it is subjected to heavy volumes of data. Testing should be approached from a negative point of view, attempting to show that the program/system cannot operate correctly when using the volumes of data specified in the requirements.
A complicated program may also have a complicated installation process. Consideration should be made as to whether the program will be installed by a customer or an installation engineer. Customer installations commonly use some kind of automated installation program. This would obviously have to undergo significant testing in itself, as an incorrect installation procedure could render the target machine/system useless.
Documentation in today’s environment can take several forms, as the documentation could be a printed document, an integral help file or even a web page. Depending on the documentation media type, some example areas to focus on could be spelling, usability and technical accuracy.
Recovery Testing is normally carried out by using test cases based on specific requirements. A system may be designed to fail under a given scenario, for example if attacked by a malicious user; the program/system may have been designed to shut down. Recovery testing should focus on how the system handles the failure and how it handles the recovery process.
System Integration Testing
This type of Integration Testing is concerned with ensuring that the interactions between systems behave as expected. It is commonly performed after any Systems Testing has completed. Typically, not all systems referenced in the testing are controlled by the developing organization; some systems may be controlled by other organizations but interface directly with the system under test.
Acceptance testing (also known as User Acceptance Testing) is commonly the last testing performed on the software product before its actual release. It is common for the customer to perform this type of testing, or at least be partially involved. Often, the testing environment used to perform acceptance testing is based on a model of the customer’s environment. This is done to try and simulate as closely as possible the way in which the software product will actually be used by the customer.
Contract & Regulation Acceptance Testing
This type of Acceptance Testing is aimed at ensuring the acceptance criteria within the original contract have indeed been met by the developed software. Normally any acceptance criteria are defined when the contract is agreed. Regulation Acceptance Testing is performed when specific regulations exist that must be adhered to, for example governmental, legal or safety regulations.
This form of acceptance testing is commonly performed by a System Administrator and would normally be concerned with ensuring that functionality such as; backup/restore, maintenance, and security functionality is present and behaves as expected.
Alpha Testing should be performed at the developer’s site, and predominantly performed by internal testers only. Often, other company department personnel can act as testers. The marketing or sales departments are often chosen for this purpose.
Beta Testing (sometimes known as ‘Field testing’) is commonly performed at the customer’s site, and normally carried out by the customers themselves. Potential customers are often eager to trial a new product or new software version. This allows the customer to see any improvements at first hand and ascertain whether or not it satisfies their requirements. On the flip side, it gives invaluable feedback to the developer, often at little or no cost.
Also known as Confirmation testing. It is imperative that when a defect is fixed it is re-tested to ensure the defect has indeed been correctly fixed.
Re-test: “Whenever a defect is detected and fixed then the software should be re-tested to ensure that the original defect has been successfully removed.”
When checking a fixed defect, you can also consider checking that other existing functionality has not been adversely affected by the fix. This is called Regression Testing.
Regression Test: “Regression testing attempts to verify that modifications have not caused unintended adverse side effects in the unchanged software (regression defects) and that the modified system still meets its requirements.”
Test Plan Contents
1. Test Plan identifier
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Item pass/fail criteria
8. Suspension criteria and resumption requirements
9. Test deliverables
10. Testing tasks
11. Environmental needs
12. Responsibilities
13. Staffing and training needs
14. Schedule
15. Risks and contingencies
16. Approvals
Fundamental Test Process
- - Test Planning & Control
- - Test Analysis & Design
- - Test Implementation & Execution
- - Evaluating Exit Criteria & Reporting
- - Test Closure Activities
- Although logically sequential, each of the above activities in the process may overlap
- or occur at the same time.
In simple terms, Integration testing is basically placing the item under test in a Test environment (an environment contains hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test). The main purpose of Integration Testing is to find defects in the interactions between the item under test and the other components or systems it works with. Integration Testing can be thought of as two separate entities, which are Component Integration Testing and Systems Integration Testing.
Top down integration testing is considered to be an incremental integration testing technique. It works by testing the top level module first, and then progressively adds lower level modules each one at a time. Normally near the beginning of the process, the lower level modules may not be available, and so they are normally simulated by stubs which effectively ‘stand-in’ for the lower level modules. As the development progresses, more of the stubs can be replaced with the actual ‘real’ components.
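The role of a stub in top-down integration can be sketched in a few lines. This is a hypothetical example (the order and pricing module names are invented for illustration): the top-level module is tested first, while a stub stands in for a lower-level module that does not yet exist.

```python
# Hypothetical top-down integration sketch: the top-level order module
# is tested before the lower-level pricing module is available.

def pricing_stub(item_code):
    """Stub: returns a fixed, known price instead of real pricing logic."""
    return 10.0

def calculate_order_total(item_codes, pricing=pricing_stub):
    """Top-level module under test; 'pricing' is injected so the stub
    can stand in for the real lower-level module during early testing."""
    return sum(pricing(code) for code in item_codes)

# The top-level logic can be exercised before the real pricing module
# is written; later, the stub is replaced by the 'real' component.
print(calculate_order_total(["A1", "B2", "C3"]))  # 30.0
```

As development progresses, the `pricing` argument would simply be swapped for the real module, replacing the stub.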
Bottom-up integration testing works by first testing each module at the lowest level of the hierarchy. This is then followed by testing the modules that call the previously tested ones; as the calling modules may not yet exist, they are simulated by drivers. The process is repeated until all of the modules have been tested. When code for the remaining modules becomes available, the drivers are replaced with the actual ‘real’ modules. In this approach, the lower level modules are tested thoroughly, with the aim of making sure that the highest-level module is tested to a level that provides reasonable confidence.
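A driver can likewise be sketched as a small piece of temporary calling code. In this hypothetical example (the `parse_price` module is invented for illustration), the driver stands in for the higher-level module that will eventually call the low-level one.

```python
# Hypothetical bottom-up integration sketch: a driver exercises a
# low-level module before its real caller exists.

def parse_price(text):
    """Low-level module under test: converts a '£12.50'-style string."""
    return float(text.lstrip("£"))

def driver():
    """Driver: temporary calling code standing in for the real caller,
    feeding known inputs and checking the expected outputs."""
    cases = {"£12.50": 12.5, "£0.99": 0.99}
    for raw, expected in cases.items():
        assert parse_price(raw) == expected, raw
    return "all low-level cases passed"

print(driver())
```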
This type of testing involves waiting until all modules are available, and then testing all modules at once as a complete system. This method is not widely recommended and is normally performed by inexperienced developers/testers. If testing a simple sequential program, the method can sometimes work, but due to the complexity of modern systems/software, it would more often than not provide meaningless results and require extra investigative work to track down defects.
What is a Failure?
Failure: Deviation of the software from its expected delivery or service.
An example of a failure would be the fact that the warning message is never displayed when it is required.
Quality & Testing
If the tests are well designed, and they pass, then we can say that the overall level of risk in the product has been reduced. If any defects are found, rectified and subsequently successfully tested, then we can say that the quality of the software product has increased. The testing term ‘Quality’ can be thought of as an overall term, as the quality of the software product is dependent upon many factors.
Some areas (or modules) tested may contain significantly more defects than others. By being aware of defect clustering, we can ensure that testing is focused in those areas that contain the most defects. If the same area or functionality is tested again, then previous knowledge gained can be used to great effect as to the potential risk of more defects being found, allowing a more focused test effort.
If we ran the same tests over and over again, we would probably find the amount of new defects found would decrease. This could be due to the fact that all defects found using these test cases had been fixed. So re-running the same tests would not show any new defects. To avoid this, the tests should be regularly reviewed to ensure all expected areas of functionality are covered. New tests can be written to exercise the code in new or different ways to highlight potential defects.
What this method allows you to do is effectively partition the possible program inputs. For each of the above input fields, it should not matter which values are entered, as long as they are within the correct range and of the correct type.
So the point of equivalence partitioning is to reduce the amount of testing by choosing a small selection of the possible values to be tested, as the program will handle them all in the same way.
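The idea can be sketched for a hypothetical input field accepting ages from 18 to 65 (the field and its range are invented for illustration). There are three partitions, and one representative value from each is assumed to behave the same as any other value in that partition.

```python
# Equivalence partitioning sketch for a hypothetical 18-65 age field.

def is_valid_age(age):
    """Validation rule under test."""
    return 18 <= age <= 65

# One representative value per partition is enough, as the program
# handles all values within a partition in the same way:
partitions = {
    "below range (invalid)": 10,
    "within range (valid)": 40,
    "above range (invalid)": 80,
}

for name, value in partitions.items():
    print(name, "->", is_valid_age(value))
```

Three tests here replace the hundred-plus tests that checking every individual age would require.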
Boundary Value Analysis
By the use of equivalence partitioning, a tester can perform effective testing without testing every possible value. This method can be enhanced further by another method called ‘Boundary Value Analysis’. In time, an experienced Tester will often realise that problems can occur at the boundaries of the input and output spaces. When testing only a small number of possible values, the minimum and maximum possible values should be amongst the first items to be tested.
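Continuing the hypothetical 18-65 age field, boundary value analysis tests exactly at each boundary and one step either side of it.

```python
# Boundary value analysis sketch for a hypothetical 18-65 age field:
# each boundary is tested along with the values just outside it.

def is_valid_age(age):
    return 18 <= age <= 65

boundary_values = [17, 18, 19, 64, 65, 66]
expected = [False, True, True, True, True, False]

for value, exp in zip(boundary_values, expected):
    assert is_valid_age(value) == exp, f"unexpected result for {value}"
print("all boundary cases behave as expected")
```

A common defect this catches is an off-by-one comparison, e.g. `<` written where `<=` was intended, which only shows up at the boundary itself.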
Decision Table Testing
Decision tables are a Black-box test design technique used as a way to capture system requirements that may contain logical conditions, and also as a method to document internal system designs. They are created by first analyzing the specification. Conditions and subsequent system actions can then be identified from it. These input conditions and actions are commonly presented in true or false way, referred to as ‘Boolean’.
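A small decision table can be written directly as data, with each rule (column) becoming one test case. The loan-approval conditions below are invented for illustration.

```python
# Decision table sketch: Boolean conditions map to a resulting action.
# Conditions: (has_good_credit, has_deposit) -> action

decision_table = {
    (True,  True):  "approve",
    (True,  False): "refer",
    (False, True):  "refer",
    (False, False): "decline",
}

def decide(has_good_credit, has_deposit):
    """System behaviour under test, driven by the table."""
    return decision_table[(has_good_credit, has_deposit)]

# Each rule of the table becomes one test case:
for conditions, action in decision_table.items():
    assert decide(*conditions) == action
print("all four rules covered")
```

With two Boolean conditions there are exactly 2² = 4 rules, so full rule coverage needs four tests.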
State Transition Testing
This type of Black-box test design technique is based on the concept of ‘states’ and ‘finite-states’, and is based on the tester being able to view the software’s states, transition between states, and what will trigger a state change. Test coverage can include tests designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
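The concept can be sketched with a small hypothetical ATM-card state machine (the states and events are invented for illustration): valid transitions are listed explicitly, and anything else is an invalid transition.

```python
# State transition sketch: (current state, event) -> next state.

transitions = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "enter_pin"): "authenticated",
    ("authenticated", "eject_card"): "idle",
}

def next_state(state, event):
    try:
        return transitions[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in {state}")

# Test covering a typical sequence of states:
state = "idle"
for event in ["insert_card", "enter_pin", "eject_card"]:
    state = next_state(state, event)
assert state == "idle"

# Test exercising an invalid transition (PIN before card):
try:
    next_state("idle", "enter_pin")
except ValueError as exc:
    print(exc)
```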
This testing method involves using a model of the source code which identifies statements. These statements are then categorized as being either ‘executable’ or ‘non-executable’. In order to use this method, the input to each component must be identified. Also, each test case must be able to identify each individual statement. Lastly, the expected outcome of each test case must be clearly defined.
This test method uses a model of the source code which identifies individual decisions, and their outcomes. A ‘decision’ is defined as being an executable statement containing its own logic. This logic may also have the capability to transfer control to another statement. Each test case is designed to exercise the decision outcomes.
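The difference between the two coverage measures above can be shown with one tiny (invented) function: a single input can execute every statement while still leaving one decision outcome unexercised.

```python
# Statement vs decision coverage sketch.

def describe(x):
    label = "non-negative"   # statement
    if x < 0:                # decision, with True and False outcomes
        label = "negative"   # statement
    return label             # statement

# x = -1 executes every statement (100% statement coverage)...
assert describe(-1) == "negative"
# ...but decision coverage also requires the False outcome of x < 0:
assert describe(5) == "non-negative"
print("both decision outcomes exercised")
```

This is why 100% decision coverage implies 100% statement coverage, but not the other way round.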
Use Case Testing
A description of a system’s behaviour as it responds to a request that comes from outside of that system. Use cases effectively describe the interaction between a Primary Actor (the interaction initiator) and the system itself, and are normally represented as a set of clear steps. An Actor is basically something or someone that comes from outside the system and participates in a sequence of activities with the system to achieve some goal.
Absence of errors fallacy
There is no point in developing and testing an item of software, only for the end user to reject it on the grounds that it does not do what was required of it. Considerable time may be spent testing to ensure that no errors are apparent, but it could be a wasted effort if the end result does not satisfy the requirement. Early reviews of requirements and designs can help with highlighting any discrepancies between the customer’s requirements and what is actually being developed.
All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Iterative-incremental Development Model
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system, done as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), Rational Unified Process (RUP) and agile development models.
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.
Test Driven Development
A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
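The test-first idea can be sketched in miniature. In this hypothetical example the assertions below were (conceptually) written first and initially failed; the implementation was then written with just enough logic to make them pass.

```python
# Test Driven Development sketch: tests exist before the code.

def fizzbuzz(n):
    """Implementation written only after the driving tests below."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The test cases that drove the implementation:
assert fizzbuzz(9) == "Fizz"
assert fizzbuzz(10) == "Buzz"
assert fizzbuzz(15) == "FizzBuzz"
assert fizzbuzz(7) == "7"
print("all driving tests pass")
```

In practice these tests would usually live in a test framework and be automated, so they can be re-run after every change.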
Static Analysis is a set of methods designed to analyse software code in an effort to establish whether it is correct, prior to actually running the software. As we already know, the earlier we find a defect, the cheaper it is to fix. So by using Static Analysis, we can effectively examine the program even before it has been run. This will obviously only find a limited number of problems, but at least it is something that can be done very early in the development lifecycle.
This refers to the sequence of events (paths) in the execution through a component or system. Within a programming language, a control flow statement is an instruction that when executed can cause a change in the subsequent control flow to differ from the natural sequential order in which the instructions are listed.
Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do.
Static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation.
Dataflow can be thought of as a representation of the sequence and possible changes of the state (creation, usage, or destruction) of data objects. A good example of dataflow is a spreadsheet. As in a spreadsheet, you can specify a cell formula which depends on other cells; then when any of those cells is updated the first cell's value is automatically recalculated. It's possible for one change to initiate a whole sequence of changes, if one cell depends on another cell which depends on yet another cell, and so on.
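The spreadsheet analogy above can be sketched as a toy dependency chain: cell C depends on B, which depends on A, so one change to A ripples through the whole sequence when values are recalculated.

```python
# Toy spreadsheet illustrating dataflow: a change to one cell
# triggers a sequence of changes in the cells that depend on it.

cells = {"A": 2}
formulas = {
    "B": lambda: cells["A"] * 10,  # B depends on A
    "C": lambda: cells["B"] + 1,   # C depends on B
}

def recalculate():
    # Re-evaluate dependent cells in dependency order.
    for name in ["B", "C"]:
        cells[name] = formulas[name]()

recalculate()
print(cells)       # {'A': 2, 'B': 20, 'C': 21}

cells["A"] = 5     # a single change to A...
recalculate()      # ...ripples through B and then C
print(cells)       # {'A': 5, 'B': 50, 'C': 51}
```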
Why can one Tester find more errors than another Tester in the same piece of software? More often than not this is down to a technique called ‘Error Guessing’. To be successful at Error Guessing, a certain level of knowledge and experience is required. A Tester can then make an educated guess at where potential problems may arise. This could be based on the Tester’s experience with a previous iteration of the software, or simply a level of knowledge in that area of technology. Also called Fault Attack.
This informal test design technique is typically governed by time. It consists of using tests based on a test charter that contains test objectives. It is most effective when there are little or no specifications available. It should only really be used to assist with, or complement, a more formal approach. It can basically ensure that major functionality is working as expected without fully testing it. The tester can also use the information gained while testing to design new and better tests for the future.
Test Execution Schedule
A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.
A Tester normally selects test input data from what is termed an ‘input domain’. Random Testing is simply when the Tester selects data from the input domain ‘randomly’. Questions that arise: Is the chosen data sufficient? Should only valid values be selected? There is little structure involved in ‘Random Testing’. In order to avoid dealing with the above questions, a more structured Black-box Test Design could be implemented instead. However, using a random approach could save valuable time and resources if used in the right circumstances.
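A minimal sketch of random testing, reusing the hypothetical 18-65 age field from the earlier partitioning examples: inputs are drawn at random from the input domain and each result is checked against the expected rule.

```python
# Random testing sketch: inputs drawn at random from the input domain.

import random

def is_valid_age(age):
    """Validation rule under test."""
    return 18 <= age <= 65

random.seed(42)  # fixed seed so the 'random' run is repeatable
for _ in range(20):
    value = random.randint(0, 100)               # the input domain
    assert is_valid_age(value) == (18 <= value <= 65), value
print("20 random inputs checked")
```

Note the questions raised above still apply: nothing guarantees these 20 random values include the boundaries, which is why a structured technique is often preferred.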
Test Procedure Specification
A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.
- Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur. Also called Error Guessing.
- Examples when testing an input box:
- - Entering too many characters
- - Entering no characters
- - Entering invalid characters
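The three input-box attacks listed above can be sketched against a hypothetical validator (the 1-10 alphanumeric rule is invented for illustration); each attack attempts to force a specific failure, here a rejection of the input.

```python
# Fault attack sketch against a hypothetical input-box validator
# that accepts 1-10 alphanumeric characters.

def validate_input(text):
    """Validation rule under attack."""
    return 1 <= len(text) <= 10 and text.isalnum()

attacks = {
    "too many characters": "x" * 50,
    "no characters": "",
    "invalid characters": "abc!@#",
}

for name, payload in attacks.items():
    # Each attack should be rejected by a correct validator.
    print(name, "rejected:", not validate_input(payload))
```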
The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project’s goals and the risk assessment carried out, the starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.
The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs.
Organisations will commonly have different named roles than those listed below, but this will give you an idea of a commonly used set of roles used throughout the world.
- - Manager
- - Moderator
- - Author
- - Reviewer
- - Scribe
Review Process Structure
An example of a typical review process is below. This is probably the most documented review process you will find in the software development world, and is open to interpretation:
- - Planning
- - Kick-off
- - Preparation
- - Meeting
- - Rework
- - Follow-up
- - Exit Criteria
We term an incident any significant, unplanned event that occurs during testing that requires subsequent investigation and/or correction. An incident should be raised when the actual result differs from the expected result. After the inevitable investigation of the incident, there may be a cause other than a software defect, for example:
- - Test environment incorrectly set up
- - Incorrect Test Data used
- - Incorrect Test Specification
The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of code, number of classes or function points).
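A worked example of the calculation, using invented figures: 15 defects found in a component of 3,000 lines of code.

```python
# Defect density worked through for a hypothetical component.

defects_found = 15
lines_of_code = 3000

# Expressed per KLOC (thousand lines of code):
defect_density = defects_found / (lines_of_code / 1000)
print(defect_density, "defects per KLOC")  # 5.0 defects per KLOC
```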
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actual to that which was planned.
A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.
- A Tester’s attributes:
- - Professionalism
- - A critical eye
- - Curiosity
- - Attention to detail
- - Good communication skills
- - Experience
Reviews - Manager
The Manager will be the person who makes the decision to hold the review. The Manager will ultimately decide if the review objectives have been met. Managing people’s time with respect to the review is also a Manager’s responsibility.
Reviews - Moderator
The Moderator effectively has overall control and responsibility of the review. They will schedule the review, control the review, and ensure any actions from the review are carried out successfully. Training may be required in order to carry out the role of Moderator successfully.
Reviews - Author
The Author is the person who has created the item to be reviewed. The Author may also be asked questions within the review.
Reviews - Reviewer
The reviewers (sometimes referred to as checkers or inspectors) are the attendees of the review who are attempting to find defects in the item under review. They should come from different perspectives in order to provide a well balanced review of the item.
Reviews - Scribe
The Scribe (or Recorder) records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.
Project risks are the risks associated with the management and control of the project. Risks that are associated with a project will affect the capability of the project to deliver its objectives. When analyzing and managing the risks, the test manager should follow well-established project management principles.
When referring to product associated risks, we are talking about the risk directly related to the object under test. For example, the risk to people if the product fails (Air Traffic Control Software?). Or the risk that the software does not do what it was designed to do.
Test Design Tools
This type of tool can generate test inputs or test cases from items such as requirements, interfaces, design models or actual code. In some cases, the expected test result may also be generated. Tests created by this tool for state or object models are only really used for verifying the implementation of the model and nothing more, as they would not be sufficient for verifying all aspects of the software/system.
Test Data Preparation Tool
Data can be manipulated from test databases, files or data transmissions to set up and prepare data for specific tests. Advanced types of this tool can utilise a range of database and file formats. An added advantage of these tools is to ensure that live data transferred to a test environment is made anonymous, which is ideal for data protection.
Test Execution Tools
By using a scripting language, Test execution tools allow tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes. In this situation the scripting language allows manipulation of the tests with little effort. Most tools of this type will have dynamic comparison functionality and may provide test logging. Some test execution tools have ‘capture/playback’ functionality. This allows test inputs to be directly captured, and then played back repeatedly.
Test Harness/Unit Test Framework
The purpose of a test harness is to facilitate the testing of components or part of a system by attempting to simulate the environment in which that test object will run. The reason for this could be that the other components of that environment are not yet available and are replaced by stubs and/or drivers. Primarily used by developers, a framework may be created where part of the code, an object, method or function, unit or component can be executed, by calling the object to be tested and/or giving feedback to that object.
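A minimal harness can be sketched as follows. This hypothetical example (the checkout component and payment gateway are invented for illustration) shows the harness replacing an unavailable gateway with a stub and driving the component directly with known inputs.

```python
# Test harness sketch: the component under test depends on a payment
# gateway that is not yet available, so a stub simulates it.

class PaymentGatewayStub:
    """Stands in for the real, unavailable gateway component."""
    def charge(self, amount):
        return {"status": "ok", "charged": amount}

def checkout(basket_total, gateway):
    """Component under test: depends on an external gateway."""
    if basket_total <= 0:
        return "nothing to pay"
    receipt = gateway.charge(basket_total)
    return f"paid {receipt['charged']}"

# The harness acts as the driver, calling the component with known
# inputs in a controlled environment:
assert checkout(25.0, PaymentGatewayStub()) == "paid 25.0"
assert checkout(0, PaymentGatewayStub()) == "nothing to pay"
print("harness checks passed")
```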
Incident Management Tools
This type of tool stores and manages any incident reports. Prioritisation of incident reports can be achieved by this tool. Automatic assigning of actions, and status reporting is also a common feature of this type of tool.
Performance Test Tools
This type of tool comprises two components: load generation and test transaction measurement. Load generation is commonly performed by running the application using its interface or by using drivers. The number of transactions performed this way is then logged. Performance test tools will commonly be able to display reports and graphs of load against response time.
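The two components can be sketched in miniature: concurrent workers generate load, and each transaction's response time is measured. The transaction here is a simulated stand-in (a fixed sleep), not a real request; the worker count and transaction count are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(_):
    """One test transaction: measure its response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated work standing in for a real request
    return time.perf_counter() - start

# Load generation: several concurrent workers issue transactions.
with ThreadPoolExecutor(max_workers=5) as pool:
    times = list(pool.map(transaction, range(20)))

# Test transaction measurement: summarise response times.
average_response = sum(times) / len(times)
```

A real performance tool would plot these measurements against increasing load levels rather than print a single average.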
Dynamic Analysis Tools
Primarily used by developers, dynamic analysis tools gather run-time information on the state of the executing software. These tools are ideally suited for monitoring the use and allocation of memory. Defects such as memory leaks and unassigned pointers can be found, which would otherwise be difficult to find manually. Traditionally, these types of tools are of most use in component and component integration testing.
Review Tools
Review tools are also known as ‘review process support tools’. This type of tool provides features such as storing review comments, supporting review processes, and traceability between documents and source code. A popular use for a review tool is when the review team are at remote locations, as the tool may support online reviews.
Comparison Tools
This type of tool is used to automatically highlight differences between files, databases or test results. They can be useful when multiple complicated sets of results require comparison to see if any changes have occurred. Similarly, databases can also be compared, saving vast amounts of man hours. Off-the-shelf comparison tools can normally deal with a range of file and database formats.
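A minimal sketch of result comparison, using Python's standard difflib to highlight the differences between an expected and an actual result file; the result lines themselves are illustrative.

```python
import difflib

# Expected vs actual results from two test runs (illustrative data).
expected = ["total=100", "status=OK", "rows=42"]
actual = ["total=100", "status=FAIL", "rows=42"]

# Keep only the changed lines from the unified diff, dropping the
# "---"/"+++" file headers and unchanged context lines.
diff = [line for line in difflib.unified_diff(expected, actual, lineterm="")
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
# diff -> ["-status=OK", "+status=FAIL"]
```

An empty diff would indicate that no changes have occurred between the two runs.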
Test Management Tools
Test Management Tools commonly have multiple features. Test management is mainly concerned with the management, creation and control of test documentation. More advanced tools have additional capabilities, such as result logging and test scheduling.
Coverage Measurement Tools
Primarily used by developers, this type of tool provides objective measures of structural test coverage when the actual tests are executed. Before the programs are compiled, they are first instrumented. Once this has been completed they can then be tested. The instrumentation process allows the coverage data to be logged whilst the program is running. Once testing is complete, the logs can provide statistics on the details of the tests covered. Coverage measurement tools can be either intrusive or non-intrusive.
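The instrumentation-and-logging idea can be sketched with Python's trace hook. This is a toy illustration of line coverage, not how a production coverage tool is built; the function under test and its three executable lines are assumptions of the example.

```python
import sys

executed = set()  # line numbers of `classify` hit during the run

def tracer(frame, event, arg):
    # "Instrumentation": log each executed line of the function under test.
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

sys.settrace(tracer)
classify(5)            # exercises only the non-negative branch
sys.settrace(None)

# classify has 3 executable lines; only 2 were hit -> ~67% line coverage.
coverage = len(executed) / 3 * 100
```

Running `classify(-1)` as well would raise the coverage to 100%, which is exactly the statistic a coverage measurement tool reports after a test run.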
Modelling Tools
Primarily a developer-orientated tool, a modelling tool is used to validate models of the software or system. Several different types of modelling tool exist today, capable of finding defects and inconsistencies in state models, object models and data models. Additionally, the tool may also assist with test case generation. Valuable defects can be found using modelling tools, with the added benefit of finding them early in the development lifecycle, which can be cost effective.
Monitoring Tools
Although not strictly a testing tool, monitoring tools can provide information that can be used for testing purposes and which is not available by other means. A monitoring tool is a software tool or hardware device that runs concurrently with the component or system under test and can provide us with continuous reports about system resources, and even warn us about imminent problems. Traceability is normally catered for with this tool by storing software version details.
Security Testing Tools
A security testing tool supports operational security. Security testing has become an important step in testing today’s products. Security tools exist to assist with testing for viruses, denial of service attacks etc. The purpose of this type of tool is to expose any vulnerability of the product. Although not strictly a security tool, a firewall may be used in security testing a system.
Requirements Management Tools
Requirements management tools are designed to support the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitate traceability through layers of requirements and requirements change management. They also allow requirements to be prioritised and enable individual tests to be traceable to requirements, functions and/or features. Traceability is most likely to be reported in a test management progress report.
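Traceability from requirements to tests can be pictured as a simple matrix. The requirement and test case identifiers below are hypothetical; a real tool would maintain this mapping in a database and report on it automatically.

```python
# Hypothetical traceability matrix: requirement id -> test case ids.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # no test yet: should be flagged as uncovered
}

# A progress report would highlight requirements with no traceable test.
uncovered = [req for req, tests in traceability.items() if not tests]
```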
Tool Selection Process
A suggested tool selection and evaluation process is:
- - Create a list of potential tools that may be suitable
- - Arrange for a demonstration or free trial
- - Test the product using a typical scenario (pilot project)
- - Organise a review of the tool
It is a good idea to use a pilot project to test the tool for suitability. The benefits of using a pilot project are: gaining experience with the tool, and identifying any changes in the test process that may be required. Roll-out of the tool should only occur following a successful pilot project or evaluation period.
Review – Planning
Selecting the personnel, allocating roles; defining the entry and exit criteria for more formal review types (e.g. inspection); and selecting which parts of documents to look at.
Review – Kick-off
Distributing documents; explaining the objectives, process and documents to the participants; and checking entry criteria (for more formal review types).
Review - Preparation
Work done by each of the participants on their own, before the review meeting. Noting potential defects, questions and comments.
Review - Meeting
Discussion or logging, with documented results or minutes (for more formal review types). The meeting participants may simply note defects, make recommendations for handling the defects, or make decisions.
Review - Rework
Fixing defects found, typically done by the author.
Review - Follow-up
Checking that defects have been addressed, gathering metrics and checking on exit criteria (for more formal review types).
Review – Exit Criteria
Exit Criteria can take the form of ensuring that all actions are completed, or that any uncorrected items are properly documented, possibly in a defect tracking system.
The Test Leader
The Test leader may also be called a Test manager or Test coordinator. A Test leader’s role can also be performed by a Project manager, a Development manager, a Quality assurance manager, or a manager of a test group. Larger projects may require that the role be split between the roles of Test leader and Test manager. The Test leader will typically plan, whilst the Test manager would monitor and control the testing activities. Ideally, a Test leader would come from a testing background and have a full understanding of how testing is performed.
The Tester
Over recent years the importance of testing has increased, giving rise to a growing number of dedicated professional software testers.
Today, a tester is known as a skilled professional who is involved in the testing of a component or system, and may specialise in areas such as test analysis, test design or test automation.
Risk-Based Testing
An approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.
Data-Driven Testing
A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
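The technique can be sketched as follows. The table would normally live in a spreadsheet or CSV file, and the function under test here (a simple addition) is a hypothetical stand-in for the real application.

```python
# Table of test data: inputs and expected results for each test.
test_table = [
    {"a": 2, "b": 3, "expected": 5},
    {"a": -1, "b": 1, "expected": 0},
    {"a": 0, "b": 0, "expected": 0},
]

def add(a, b):
    """Hypothetical function under test."""
    return a + b

def run_tests(table):
    """Single control script: executes every row in the table."""
    results = []
    for row in table:
        actual = add(row["a"], row["b"])
        results.append(actual == row["expected"])
    return results

outcomes = run_tests(test_table)
```

Adding a new test is just a matter of adding a row to the table; the control script never changes.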
Keyword-Driven Testing
A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
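A minimal sketch of the idea: each row of the data file names a keyword plus its data, and the control script dispatches each keyword to its supporting script. The keywords, fields and the form-like application state are illustrative assumptions.

```python
# Keyword table: each row is (keyword, field, value), as it might
# appear in a data file (illustrative content).
steps = [
    ("enter", "username", "alice"),
    ("enter", "password", "secret"),
    ("verify", "username", "alice"),
]

form = {}  # state of the hypothetical application under test

# Supporting scripts: one handler per keyword.
def do_enter(field, value):
    form[field] = value
    return True

def do_verify(field, value):
    return form.get(field) == value

keywords = {"enter": do_enter, "verify": do_verify}

# Control script: interprets each keyword via its supporting script.
results = [keywords[kw](field, value) for kw, field, value in steps]
```

New behaviour is added by writing a new supporting script, while testers continue to compose tests purely from keywords and data.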
Scripting Language
A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).
Exhaustive Testing
A test approach in which the test suite comprises all combinations of input values and preconditions.
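Even with tiny input domains, the combinations multiply quickly, which is why exhaustive testing is rarely practical. A small sketch, with illustrative input domains:

```python
from itertools import product

# Three inputs with small value domains (illustrative).
browsers = ["chrome", "firefox"]
locales = ["en", "de", "fr"]
logged_in = [True, False]

# Exhaustive test suite: every combination of input values.
suite = list(product(browsers, locales, logged_in))
# 2 * 3 * 2 = 12 combinations; realistic input domains make the
# total explode far beyond what can ever be executed.
```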
Coverage
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.