Master Test Plan

Project: Galactic Fruit

Version: 1.2
Date: 2025-11-30
Authors: D. Brooks, C. Rojas

1.0 Introduction

1.1 Purpose

This Master Test Plan (MTP) outlines the comprehensive strategy, objectives, scope, approach, environment, resources, and schedule for the testing activities of the Galactic Fruit game. It serves as the primary agreement between the Quality Assurance team and the developer regarding what will be tested, how quality will be assured for the pre-release build, and the criteria for release readiness. Since formal feature documentation is limited, this plan also defines the exploratory process used to derive the test basis and test conditions (see Section 2.5).

1.2 Acronyms

  • FPS: Frames Per Second
  • UI/UX: User Interface / User Experience
  • SUT: System Under Test
  • GF: Galactic Fruit
  • PC: Personal Computer

1.3 References

  • Steam Store Page
  • In-game Operations Manual
  • Informal Communication With Developer
  • Exploratory Notes

2.0 Approach

2.1 Assumptions and Constraints

2.1.1 Assumptions

  • The game will only be available on the Steam client for PC.
  • Each build of the game will have passed unit and unit-to-unit testing before it is transferred to the system integration testing environment.
  • White box testing and analysis will be performed by the developer.

2.1.2 Constraints

  • Single Developer: Turnaround time for bug fixes may vary depending on complexity.
  • Limited Hardware: Testing is constrained to the specific hardware configurations available to the QA team (see Section 3.3).
  • Human-in-the-loop Automation: Automated tests require a human observer (Oracle) to validate visual outcomes and game states; fully unattended testing is not supported.

2.2 Coverage

Test coverage will be measured by the execution of test cases against identified requirements and features. In the event that coverage levels are not met due to time constraints, the QA Lead will determine if the actual levels are adequate based on risk assessment.

Coverage targets include:

  • 100% of Critical Path (A → B progression) features.
  • 100% of UI/Menu functionality (Start, Options, Quit).
  • 100% of terminal features.
  • 100% of minigame features.
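
Concretely, each target above is evaluated as the percentage of executed test cases among those traced to that feature area. The sketch below (TypeScript, matching the automation stack in Section 2.6) shows that computation over a simple traceability matrix; the field names and feature labels are illustrative, not part of any existing tooling.

  // coverage.ts — per-area coverage from a traceability matrix (illustrative sketch).
  interface TraceEntry {
    feature: string;     // feature area, e.g. "UI/Menu" or "Terminal"
    testCaseId: string;  // e.g. "TC-SMOKE-002"
    executed: boolean;   // has this case been run in the current cycle?
  }

  function coverageByFeature(matrix: TraceEntry[]): Map<string, number> {
    const totals = new Map<string, { run: number; all: number }>();
    for (const e of matrix) {
      const t = totals.get(e.feature) ?? { run: 0, all: 0 };
      t.all += 1;
      if (e.executed) t.run += 1;
      totals.set(e.feature, t);
    }
    // Convert raw counts to a percentage per feature area.
    return new Map([...totals].map(([f, t]) => [f, (100 * t.run) / t.all]));
  }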

2.3 Test Types

The following types of testing will be performed during system integration testing:

  • Functional Testing: Execution of test cases based on gameplay mechanics (e.g., movement, interaction) to ensure they behave as intended.
  • Regression Testing: Re-testing previously validated functionality after a new build is deployed to ensure no new defects have been introduced.
  • Exploratory Testing: Testing the game without a specific plan to identify unexpected issues and validate the overall gameplay experience.
  • Smoke Testing: A quick pass on each new build to confirm it launches and that core functions work before deeper testing begins.
  • Usability Testing: Evaluating whether controls, menus, and progression cues are easy to understand and navigate.
  • Recovery Testing: Ensuring the game can handle unexpected closures (Alt+F4) without corrupting save data.
  • Performance Testing: Ensuring the game runs at a minimum of 60 FPS on our spec hardware.

2.4 Test Data

To perform system integration testing, test data will be supplied from two sources:

  • Fresh Saves: New game states created at the start of each test cycle.
  • Dev Tools: The [F3] dev tool menu, provided by the developer, used to jump directly to late-game sections without playing through the full loop.

2.5 Test Basis Derivation Strategy

In the absence of formal requirement specifications, the Test Basis and subsequent Test Conditions will be derived primarily from Exploratory Notes. This approach allows the QA team to reverse-engineer requirements based on the current state of the application and developer intent.

2.5.1 Definition of Exploratory Notes

Exploratory Notes are unstructured records created during time-boxed exploration sessions. They serve as the raw data for our test planning.

  • Observations: The actual results observed during exploration. These form the base test basis and the foundation for expected outcomes.
  • Inferences: Hypotheses about how the system should work. These are later confirmed by design, developer input, or extended exploration.
  • Questions: Identified gaps in understanding or missing requirement areas. These require clarification to lock in expected behavior.

2.5.2 Derivation Logic

The transformation from notes to formal features follows this logic:

Observations + Answers/Confirmations → Features

Note: If a question has an obvious answer based on domain knowledge or standard gaming conventions (e.g., "Spacebar should jump"), the feature can be derived immediately without a formal Q&A cycle.

2.6 Automation Strategy

To optimize testing efficiency, a custom automation framework utilizing nut.js has been integrated. This framework is designed to handle basic smoke testing, repetitive scenarios (such as terminal commands and menu navigation), and core progression tests (including the intro sequence and stage transitions).

Unlike traditional integration tests, this framework operates as a "Black Box" tool with a "Human-in-the-loop" design, relying on a human tester to act as the test oracle.

Tech Stack

  • Runtime: Node.js
  • Input: Nut.js (Hardware-level Mouse/Keyboard simulation)
  • Integration: PowerShell (Overlay UI, Screen Capture, Window Focus)

The "User Oracle"

Since the automation cannot read game memory (e.g., to verify that an animation played), verification logic is offloaded to the Automation User. The script handles inputs and setup, then pauses to present a screenshot and a specific question; the user's answer serves as the source of truth (the Oracle).
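
As an illustration of this flow, the sketch below drives a single input, captures a screenshot, and pauses for the user's verdict. It assumes the @nut-tree/nut-js package (the exact package name varies across nut.js versions) and Node 18+; the key pressed, wait time, and verification question are placeholders.

  // oracle_step.ts — one human-in-the-loop check (sketch with placeholder values).
  import { keyboard, screen, Key } from "@nut-tree/nut-js";
  import * as readline from "node:readline/promises";
  import { stdin, stdout } from "node:process";

  // Pause the script and let the human act as the Oracle.
  async function askOracle(question: string): Promise<boolean> {
    const rl = readline.createInterface({ input: stdin, output: stdout });
    const answer = await rl.question(`${question} [y/n] `);
    rl.close();
    return answer.trim().toLowerCase().startsWith("y");
  }

  async function main(): Promise<void> {
    keyboard.config.autoDelayMs = 100;                // pace inputs so the game can react
    await keyboard.type(Key.Enter);                   // drive the game, e.g. confirm a menu item
    await new Promise((r) => setTimeout(r, 2000));    // give the game time to settle
    const shot = await screen.capture("oracle-step"); // save evidence for the user to review
    const pass = await askOracle(`Screenshot saved to ${shot}. Did the main menu open?`);
    process.exitCode = pass ? 0 : 1;                  // PASS = 0, FAIL = 1
  }

  main();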

Limitations:

  • The game window must be in focus and running at the expected resolution (1920x1080).
  • Tests are fragile to UI layout changes.
  • Cannot be run "headless" or in background CI/CD pipelines.

3.0 Plan

3.1 Test Team

Name | Role | Responsibilities
Dominic Brooks | QA Lead | Lead all testing activities, including test planning, test automation, test execution, and status reporting. Manage defect triage.
Christopher Rojas | QA Analyst | Design and execute test cases, create defect tickets, perform exploratory testing, and verify bug fixes.

3.2 Major Tasks and Deliverables

Task | Deliverable(s)
Test Planning | Master Test Plan
Test Analysis | Requirements Traceability Matrix, Risk Analysis Report, Test Basis Documentation
Test Design | Test Case Specifications, Test Data Requirements, Test Scenario Documents
Test Implementation | Test Cases (Detailed), Test Environment Setup Documentation
Test Execution | Test Execution Logs, Defect Reports, Test Results, Regression Test Results, Build Reports
Test Closure | Test Summary Report, Test Metrics Report, Release Recommendation, Lessons Learned Document

3.3 Environmental Needs

Hardware

All test cases will be executed on local PC workstations meeting the following minimum specifications:

  • CPU: Ryzen 5 3600 / Intel i5-9400F
  • GPU: NVIDIA GTX 1060 6GB / AMD RX 580
  • RAM: 16 GB DDR4
  • Storage: SSD (SATA or NVMe)

Software

  • OS: Windows 10 (21H2 or later) / Windows 11
  • Platform: Steam Client (Latest Beta or Stable)
  • Drivers: Latest Game Ready Drivers (NVIDIA) / Adrenalin (AMD)
  • Tools: OBS Studio (Recording), ShareX (Screenshots), GitHub Issues (Tracking)

4.0 Features to be Tested

To be completed: the feature list will be populated as features are derived from Exploratory Notes (see Section 2.5).

5.0 Features Not to be Tested

  • Dev Tools: The [F3] menu, including developer and test tools (used only to set up test states; see Section 2.4).

6.0 Testing Procedures

6.1 Test Execution

For each requirement, business process, or system feature to be tested, the tester will execute a set of pre-defined test cases. Each test case will have a series of actions and expected results. As each action is performed, the results are evaluated.

  • If the observed results are equal to the expected results, the test case is marked as PASS.
  • If the observed results differ, the test case is marked as FAIL, and a defect is logged.
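
In other words, a single failing step fails the whole case. A minimal sketch of that aggregation rule, modeling the tester's judgment as a simple string match purely for illustration:

  // verdict.ts — a test case passes only if every step's result matched (sketch).
  interface StepResult {
    action: string;    // what the tester did
    expected: string;  // what should have happened
    observed: string;  // what actually happened
  }

  function caseVerdict(steps: StepResult[]): "PASS" | "FAIL" {
    // Any observed/expected mismatch fails the case; a defect is then logged (Section 6.4).
    return steps.every((s) => s.observed === s.expected) ? "PASS" : "FAIL";
  }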

6.2 Pass/Fail Criteria

To pass the system integration test, the following criteria must be met:

  • All core gameplay loops are functional.
  • No "Critical" (S1) or "High" (S2) defects remain open.
  • The game launches, runs, and closes without crashing.
  • Performance maintains a minimum of 60 FPS on our spec hardware.

6.3 Suspension Criteria

Testing will be suspended if any of the following occur:

  • Build instability causes crashes in more than 25% of test case executions.
  • A critical progression blocker prevents access to the majority of test cases.
  • The test environment (Steam/Hardware) becomes unavailable.

6.4 Defect Management

Defects will be tracked using GitHub Issues. Each defect report must be written clearly and unambiguously.

  • Title: A concise, descriptive summary of the problem (what is wrong, where it occurs).
  • Description: A clear explanation of the defect, including its impact on gameplay or test objectives.
  • Preconditions: Any required setup such as game version, platform/OS, etc.
  • Steps to Reproduce: Numbered, detailed steps to reliably reproduce the defect from a known state.
  • Expected Result: The correct behavior as per requirements or design.
  • Actual Result: The observed incorrect behavior.
  • Attachments: Screenshots or video evidence demonstrating the defect.
  • Additional Information: Any relevant logs, patterns, or workarounds.
  • Severity and Priority: Assigned based on impact and urgency (see below).

Severity Levels

1 - Critical

System crash, data loss, or progression blocker.

2 - High

Major functionality broken. Workaround exists but is difficult.

3 - Medium

Minor functionality issue or glitch. Does not impede progress.

4 - Low

Cosmetic issue (typo, texture clip) or suggestion.

Priority Levels

1 - Immediate

Must be fixed immediately. Blocks further testing.

2 - High

Must be fixed before the next release/build.

3 - Normal

Fix when time permits. Standard queue.
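
Because tracking lives in GitHub Issues, severity and priority can travel with each ticket as labels. The sketch below files a defect via the @octokit/rest client; the repository coordinates, label naming scheme, and defect content are invented placeholders rather than an established project convention.

  // file_defect.ts — filing a labeled defect via the GitHub REST API (sketch).
  import { Octokit } from "@octokit/rest";

  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

  async function fileDefect(): Promise<void> {
    await octokit.rest.issues.create({
      owner: "example-org",      // placeholder
      repo: "galactic-fruit-qa", // placeholder
      title: "Terminal: input hangs after submitting an empty command",
      body: [
        "**Description:** Submitting an empty command freezes the terminal UI.",
        "**Preconditions:** Latest build, Windows 11, Steam stable client.",
        "**Steps to Reproduce:** 1. Open the terminal. 2. Press Enter with no input.",
        "**Expected Result:** An error prompt appears and the terminal stays responsive.",
        "**Actual Result:** The terminal stops accepting input until the game is restarted.",
      ].join("\n"),
      labels: ["severity:1-critical", "priority:1-immediate"], // Section 6.4 levels as labels
    });
  }

  fileDefect();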

6.5 Test Case Requirements

All test cases must follow a standardized structure to ensure consistency, clarity, and full traceability across the testing effort. Each test case must include the following required components:

  • ID: A unique identifier following the project’s naming conventions (e.g., TC-SMOKE-002).
  • Name: A clear, concise title summarizing the scenario being validated.
  • Description: A high-level explanation of the scenario, feature, or user flow under test.
  • Test Objective: The specific purpose of the test case, stating what functionality or behavior is being validated.
  • Priority: Business or risk-based importance (e.g., Critical, High, Medium, Low).
  • Type: The classification of the test (e.g., Smoke, Regression, Functional).
  • Author & Dates: Metadata including test case creator, creation date, and last modified date.
  • Preconditions: Required setup steps, system states, or test environment conditions that must be true before execution.
  • Test Data: Any structured values needed to execute the scenario (e.g., player name, difficulty, inventory items).
  • Environment: The target environment(s) for execution (e.g., Windows, console builds, staging server).
  • Steps: A numbered, ordered list of user actions paired with the expected result for each step. (Each step must define “action” and “expected.”)
  • Postconditions: The expected end state of the game or system once the test case completes successfully.
  • Estimated Execution Time: Approximate time needed to execute the test manually.
  • Tags: Keywords supporting filtering, reporting, grouping, and suite organization (e.g., smoke, navigation, critical-path).
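
This structure maps naturally onto a machine-readable schema, which keeps manual and automated cases consistent. A sketch in TypeScript; the enum values shown are examples drawn from this plan, not an exhaustive list.

  // testcase.ts — the Section 6.5 test case structure as a TypeScript type (sketch).
  interface TestStep {
    action: string;    // the user action performed
    expected: string;  // the expected result for that step
  }

  interface TestCase {
    id: string;                        // e.g. "TC-SMOKE-002"
    name: string;
    description: string;
    objective: string;
    priority: "Critical" | "High" | "Medium" | "Low";
    type: "Smoke" | "Regression" | "Functional" | "Usability" | "Recovery" | "Performance";
    author: string;
    created: string;                   // ISO date, e.g. "2025-11-30"
    lastModified: string;              // ISO date
    preconditions: string[];
    testData?: Record<string, string>; // e.g. { playerName: "Tester01" }
    environment: string[];             // e.g. ["Windows 11 / Steam Stable"]
    steps: TestStep[];                 // each step pairs an action with its expected result
    postconditions: string[];
    estimatedMinutes: number;          // estimated manual execution time
    tags: string[];                    // e.g. ["smoke", "critical-path"]
  }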

7.0 Risks and Contingencies

Risk | Impact | Contingency Plan
Progression Blockers / Soft Locks | Critical. Since Galactic Fruit is a short story game, any bug that halts progression effectively soft-locks the player. | Prioritize extensive testing of all story checkpoints, stage transitions, and mandatory interactions. Maintain detailed logs of progression paths and validate all critical path scenarios.
Control & Progression Intuitiveness | High. Players may struggle to understand controls or how to progress through the story. | Conduct usability testing with fresh testers unfamiliar with the game. Document any confusing sections and provide feedback to the developer for clarity improvements (UI hints, tutorials, etc.).
Minigame & Terminal Integration Issues | High. With many minigames and terminal features in development, integration issues or unexpected combinations may cause progression-halting bugs. | Create a comprehensive test matrix covering all minigame-terminal interactions. Test edge cases and unusual player sequences.

Approvals

Dominic Brooks

QA Lead

Christopher Rojas

QA Analyst