Test Cases Guidelines

The Balean Best Practices for Test Case Design

Introduction

Clear and consistent test cases are essential to ensure software quality, reproducibility of results, and effective collaboration across the team.
These guidelines define Balean’s best practices for designing test cases, prioritizing them, and reporting defects.
Following this standard helps us:

  • Improve communication between testers, developers, and stakeholders
  • Ensure test coverage and traceability of requirements
  • Reduce ambiguity and duplication in testing
  • Provide a consistent format for documenting and tracking results

Standard Test Case Format

Here is a standard format for writing test cases:

Test Case Format:

  • Test Case ID
  • Title
  • Test Scenario
  • Test Steps
  • Prerequisites
  • Test Data
  • Expected Results
  • Actual Results
  • Test Status – Pass/Fail

While writing test cases, remember to include:

  • A reasonable description of the requirement
  • A description of the test process
  • Details related to the testing setup: version of the software under test, browser, OS, environment, date, time, prerequisites, etc.
  • Any related documents or attachments testers will require
  • Alternatives to prerequisites, if they exist
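As a rough sketch, the field list above can be modeled as a structured record. The Python dataclass below is illustrative only — the names and types are assumptions, not part of any Balean tooling:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Status(Enum):
    """Test Status values from the format above."""
    NOT_RUN = "Not Run"
    PASS = "Pass"
    FAIL = "Fail"


@dataclass
class TestCase:
    """One test case, mirroring the standard field list.

    Field names are hypothetical; adapt them to your
    test-management tool's schema.
    """
    case_id: str
    title: str
    scenario: str
    steps: List[str]          # ordered test steps
    prerequisites: str
    test_data: dict           # e.g. {"username": ..., "password": ...}
    expected_results: str
    actual_results: str = ""  # filled in after execution
    status: Status = Status.NOT_RUN
```

Keeping test cases as structured data rather than free text makes it easier to validate that every required field is present before a case is accepted into the suite.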

Test Case Example

Here is a sample case based on a specific scenario:

  • Test Case ID: #TC-001
  • Test Scenario: Verify a successful user login on Gmail.com
  • Test Steps:
    1. The user navigates to Gmail.com
    2. The user enters a registered email address in the “email” field
    3. The user clicks the “Next” button
    4. The user enters the registered password
    5. The user clicks “Sign In”
  • Prerequisites: A registered Gmail ID with a unique username and password
  • Browser: Chrome v86
  • Device: Samsung Galaxy Tab S7
  • Test Data:
    • Username: aquaman@gmail.com
    • Password: aquaman_123
  • Expected/Intended Results: After the username and password are submitted, the page redirects to the user’s inbox, with new emails highlighted at the top.
  • Actual Results: As Expected
  • Test Status – Pass/Fail: Pass

Test Case Prioritization

Prioritization is vital when writing test cases in software testing. Running every test case in a suite requires significant time and effort, and as the number of features grows, testing the entire suite for every build becomes practically impossible. Test case prioritization helps overcome these challenges.

Testing should be automated as much as possible. Automation lifts the limit on how many test cases can run before each build, so the application can be tested more thoroughly. But as with manual testing, time and resources are still finite, so prioritization also helps test automation in two ways:

  1. It will help determine in which order to develop automated tests.
  2. With continuous integration, you want quick and early feedback, so you probably cannot run the entire test suite for every commit; that would take too long. Priority can then be used to build separate suites: a fast suite for quick checks on every commit and a fuller suite for pre-production validation.

Test Case Priority Levels

While deciding how to assign priority to your test cases, consider the following levels:

  • Priority 1: The test cases MUST be executed, or the consequences after release may be severe. These are critical test cases where the chance that a new feature disrupts existing functionality is high.
  • Priority 2: The test cases COULD be executed if enough time is available. These cases are not critical, but executing them is a best practice as a double check before launch.
  • Priority 3: The test cases need NOT be executed prior to the current release. As a best practice, they can be tested shortly after the release of the current software version; nothing depends directly on them.
  • Priority 4: The test cases are NEVER important, as their impact is nearly negligible.
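The priority levels above can drive suite selection in CI. The sketch below is a minimal, hypothetical example of filtering and ordering cases by priority (the names and cutoffs are assumptions, not a prescribed Balean implementation):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PrioritizedCase:
    case_id: str
    priority: int  # 1 = must run ... 4 = negligible impact


def build_suite(cases: List[PrioritizedCase],
                max_priority: int) -> List[PrioritizedCase]:
    """Select every case at or above the priority cutoff,
    ordered so Priority 1 cases run first."""
    selected = [c for c in cases if c.priority <= max_priority]
    return sorted(selected, key=lambda c: c.priority)


# Example: a fast per-commit suite might use cutoff 1, while a
# pre-production validation suite might use cutoff 2 or 3.
```

The same priority field can also decide the order in which automated tests are developed: automate the Priority 1 cases first, since they deliver the most risk reduction per test written.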

What is a Defect or Bug Report?

A Defect Report is a document created during the software testing process to identify, describe, and track any issues or defects found in a software application.

It provides detailed information about the defect: how it was discovered, the environment in which it occurred, the steps to reproduce it, the actions needed to resolve it, and the expected result.

How to Write a Bug Report

Below is a short summary of what to include for each component of a full bug report:

Components of a Defect Report:

  • Defect ID
  • Defect Title
  • Action Steps (Steps to Reproduce):
    The Steps to Reproduce section should list each step required to reproduce the bug in chronological order.
  • Expected Result
    The Expected Result section should describe how the app is intended to behave in the scenario being tested.
    Example: The authorization phase completes successfully, the user is logged into the newly created account, and redirected to the home page of the app.
  • Environment
    Each bug report must include details of the specific environment, device, and operating system used during testing to identify the bug.
  • Actual Result
    This section should expand on the title by stating the behavior observed when the issue occurs.
  • Severity
    • Critical – The bug prevents critical functionality within the app from working. This includes crashing or freezing where no workaround is possible, and a fix is required immediately.
    • High – The bug affects major functionality within the app. However, it can be temporarily avoided with a workaround.
    • Medium – The bug does not cause a failure and does not interfere with normal use of the system. It has an easy workaround.
    • Low – The bug does not affect functionality or data, or require a workaround. It is a result of non-conformance to a standard that does not impact productivity (e.g., typos, aesthetic inconsistencies).
  • Usability – A suggestion that would improve how an app is understood, experienced, or used efficiently.
  • Other Notes/Error Messages
    Include necessary and relevant evidence to show the problem you are describing. Evidence may include: screenshots, videos, or logs (if available).
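
The defect report components above can likewise be captured as structured data. This is an illustrative sketch — the class, field names, and helper method are assumptions for demonstration, not an official schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Severity(Enum):
    """Severity levels as defined above."""
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"


@dataclass
class DefectReport:
    defect_id: str
    title: str
    steps_to_reproduce: List[str]  # chronological order
    expected_result: str
    actual_result: str
    environment: str               # device, OS, app version, etc.
    severity: Severity
    notes: List[str] = field(default_factory=list)  # screenshots, videos, logs

    def needs_immediate_fix(self) -> bool:
        # Per the severity definitions above, only Critical bugs
        # (no workaround possible) require an immediate fix.
        return self.severity is Severity.CRITICAL
```

A structured report makes it straightforward to reject incomplete submissions (e.g., missing environment details or empty reproduction steps) before a defect enters the tracker.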

Last modified August 27, 2025: Test Cases Guidelines creation (0cd180f)