CTFL PREP

The flashcards below were created by user emsanborn on FreezingBlue Flashcards.

  1. Define test planning
    The activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.
  2. Define test control.
    The ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan.
  3. Define Test Analysis and Design.
    The activity during which general testing objectives are transformed into tangible test conditions and test cases.
  4. What is part of the Test Basis?
    Requirements, software integrity level (risk level), risk analysis reports, architecture, design and interface specifications.
  5. List the Test Analysis and Design Activity major tasks (7).
    • 1. Reviewing TEST BASIS.
    • 2. Evaluating testability of the TEST BASIS and TEST OBJECTS.
    • 3. Identifying and prioritizing TEST CONDITIONS. 
    • 4. Designing and prioritizing high-level test cases.
    • 5. Identifying test data required. 
    • 6. Designing test environment and identifying any required infrastructure and tools. 
    • 7. Creating traceability between test basis and test cases.
  6. Seven Testing Principles

    1. Testing shows presence of defects.
    Testing can show defects are present, but cannot prove there are no defects.
  7. Seven Testing Principles

    2. Exhaustive testing is impossible.
    Testing everything is not feasible.  Instead, risk analysis and priorities should be used to focus testing efforts.
  8. Seven Testing Principles

    3. Early Testing.
    Testing activities should be started as early as possible in the software or system development life cycle.
  9. Seven Testing Principles

    4. Defect clustering.
    Test effort should be focused proportionally to the expected and later observed defect density of modules.
  10. Seven Testing Principles

    5. Pesticide paradox.
    Execution of the same test cases repeatedly will eventually no longer find any new defects.  Test cases need to be regularly reviewed and revised.
  11. Seven Testing Principles

    6. Testing is context dependent.
    e.g., safety-critical software is tested differently from an e-commerce site.
  12. Seven Testing Principles

    7. Absence of errors fallacy.
    Finding and fixing defects does not help if the system build is unusable and does not fulfill the users' needs and expectations.
  13. How much testing is enough?
    Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system.
  14. Define test implementation and execution.
    The activity where test procedures or scripts are specified by combining test cases in a particular order and including any information needed for test execution.  The test environment is set up and tests executed.
  15. Evaluating EXIT CRITERIA and reporting.
    The activity where test execution is assessed against the defined objectives.
  16. Major tasks of evaluating exit criteria are:  (3)
    • 1. Check test logs against exit criteria specified in test planning. 
    • 2. Assess if more tests are needed or if exit criteria should be changed. 
    • 3. Write a test summary report.
  17. List TEST CLOSURE activities.
    • 1. Checking which planned deliverables have been delivered. 
    • 2. Closing of incident reports and reporting those that remain open.
    • 3. Documenting the acceptance of the system. 
    • 4. Finalizing testware, test environment and test infrastructure for later reuse. 
    • 5. Handing testware over to the maintenance organization.
    • 6. Analyzing lessons learned.
    • 7. Using information gathered to improve test maturity.
  18. Who performs ACCEPTANCE TESTING?
    It is the responsibility of the customers or users of a system.
  19. What is the goal of ACCEPTANCE TESTING?
    To establish confidence in the system.  Finding defects is not the main focus.  Acceptance testing may assess the system's readiness for deployment and use.
  20. Define FUNCTIONAL TESTING.
    Testing "what" the system does.  Requirements, use cases, or functional specifications are used in functional testing.  Black box testing in used in functional testing.
  21. Define NON-FUNCTIONAL TESTING.
    Includes but is not limited to performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing.  It is the testing of "how" the system works.
  22. Define STRUCTURAL TESTING.
    White box testing.  Includes statement testing and decision testing, which derive test cases from the internal structure of the code.
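    A minimal Python sketch (the apply_discount function is hypothetical) contrasting the two: one test can execute every statement, but decision testing also requires the false outcome of the condition.

        # Hypothetical function used to contrast statement and decision coverage.
        def apply_discount(total, is_member):
            discount = 0
            if is_member and total > 100:
                discount = 10
            return total - discount

        # This single test executes every statement (100% statement coverage):
        assert apply_discount(200, True) == 190
        # Decision coverage additionally requires the false outcome of the if:
        assert apply_discount(50, True) == 50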
  23. Define STATIC TESTING.
    The manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation, without executing the code.
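    A minimal Python sketch (the classify function is hypothetical) of a defect that static analysis reports without ever executing the code:

        # Hypothetical snippet with a statically detectable defect.
        def classify(x):
            if x > 0:
                return "positive"
            else:
                return "non-positive"
            return "unknown"  # unreachable code: both branches above return,
                              # so static analysis flags this dead statement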
  24. What are the benefits of STATIC TESTING?
    Early detection and correction of defects, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects, and improved communication.
  25. Define Equivalence Partitioning.
    Black box testing technique.  Inputs to the system or software are divided into groups that are expected to exhibit similar behavior.
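    A minimal Python sketch, assuming a hypothetical validate_age function whose valid input range is 18-65: one representative value per partition is enough.

        # Hypothetical function under test: valid ages are 18..65 inclusive.
        def validate_age(age):
            return 18 <= age <= 65

        # One representative value per equivalence partition:
        assert validate_age(10) is False  # partition: below the valid range
        assert validate_age(40) is True   # partition: inside the valid range
        assert validate_age(80) is False  # partition: above the valid range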
  26. Define BOUNDARY VALUE ANALYSIS.
    Black box testing technique.  Test cases are designed to cover valid and invalid boundary values at the edges of equivalence partitions.
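    Continuing the hypothetical validate_age sketch above, boundary value analysis tests the edges of each partition rather than arbitrary representatives:

        # Same hypothetical function: valid ages are 18..65 inclusive.
        def validate_age(age):
            return 18 <= age <= 65

        # Test each boundary value and its nearest invalid neighbor:
        for age, expected in [(17, False), (18, True),   # lower boundary
                              (65, True), (66, False)]:  # upper boundary
            assert validate_age(age) is expected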
  27. Define Decision Table Testing.
    Black box testing technique.  A way of capturing system requirements that contain logical conditions.  Input conditions and actions are most often stated so that they must be Boolean (true or false).
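    A minimal Python sketch, assuming hypothetical loan-screening rules: each column (rule) of the decision table yields at least one test case.

        # Hypothetical decision table: (employed, good_credit) -> action.
        RULES = {
            (True,  True):  "approve",
            (True,  False): "refer",
            (False, True):  "refer",
            (False, False): "reject",
        }

        def decide(employed, good_credit):
            return RULES[(employed, good_credit)]

        # Decision table testing derives one test case per rule:
        assert decide(True, True) == "approve"
        assert decide(True, False) == "refer"
        assert decide(False, True) == "refer"
        assert decide(False, False) == "reject"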