Engineering

All our documentation around engineering processes.

Coding styles

At Balean we strive for readable and maintainable code, which requires all developers to adhere to a shared standard. This guideline describes that coding standard.

Source code style

When writing new source code at Balean, all code must be indented with 2 spaces. Lines should always end with the LF character to ensure the build runs successfully in Linux-based containers. Windows developers should ensure that source code is pushed with the LF line-ending. The default character set is UTF-8; before pushing, ensure that your source files use this character set.

In order to apply this standard to your source code automatically, a file called .editorconfig should be added to all repositories. See Editor config below for more information.

Cross-platform compatibility

Developers are free to work on the system of their choice, whether Windows, macOS, or Linux. Deployment targets, build images, and scripts, however, tend to run on Linux. It is therefore important that all code bases are compatible with Linux while remaining usable on other platforms.

There are certain scripts and config files that are specifically sensitive to the correct encoding and line-ending. Within Balean, the following settings are being used:

  • Text file encoding: UTF-8
  • Line ending: LF (equivalent to \n)
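
To quickly check whether a file matches these settings, the file utility available on most Linux and macOS systems can be used. This is a convenience check, not part of the standard itself:

# Report the detected text encoding of a file
file --mime-encoding build.sh

# List text files under the current directory that still contain CR characters
grep -rlI $'\r' .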

Git settings

Git is the main source of truth and should contain the correct encoding and line-ending, regardless of what individual contributors may use locally. In order to ensure that files are in the right configuration, repositories should have a .gitattributes file that configures the expected line-ending. See also the git-attributes docs.

To configure your local system to normalize files to the proper line-ending, set the core.autocrlf option:

git config --global core.autocrlf true

On Linux and macOS, core.autocrlf input is the common alternative; it normalizes line endings to LF on commit without converting anything on checkout. Neither setting forces normalization of existing text files, but both ensure that text files you introduce to the repository have their line endings normalized to LF when they are added, and that files that are already normalized in the repository stay normalized. See the git docs.

Git recognizes files encoded in ASCII or one of its supersets (such as UTF-8) as text files. Files in certain other encodings (such as UTF-16) are treated as binary.
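
If a repository must track files in another encoding, the working-tree-encoding attribute in .gitattributes can declare it explicitly so git still treats them as text. The *.resx pattern below is a hypothetical example, not part of the Balean standard:

# Store these files as UTF-8 in the repository, check them out as UTF-16
*.resx text working-tree-encoding=UTF-16 eol=crlf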

Example .gitattributes file

# Automatically normalize all text files to lf
* text=auto eol=lf

# Exceptions and specific configurations
*.bat text eol=crlf
*.sh text eol=lf
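
Note that adding a .gitattributes file does not rewrite files that are already committed. To renormalize existing files once (supported since git 2.16):

git add --renormalize .
git commit -m "Normalize line endings"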

VS Code settings

Use the following settings in VS Code for compatibility with Balean’s settings:

"files.encoding": "utf8"
"files.eol": "\n"

IDE Config

Editor config

Adding a file called .editorconfig to your repository will help set the correct coding standard. The editorconfig file is based on an open standard maintained by editorconfig.org.

An example .editorconfig is given below. Adding this file to the root of your repository will automatically apply the Balean standard.

# editorconfig.org

root = true

[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
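
EditorConfig sections can also override the defaults for specific file types. For example, if a repository contains Makefiles, which require tab indentation, a section such as the following (a hypothetical addition, not part of the Balean standard) could be appended:

[Makefile]
indent_style = tab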

Test Case Guidelines

The Balean Best Practices for Test Case Design

Introduction

Clear and consistent test cases are essential to ensure software quality, reproducibility of results, and effective collaboration across the team.
These guidelines define Balean’s best practices for designing test cases, prioritizing them, and reporting defects.
Following this standard helps us:

  • Improve communication between testers, developers, and stakeholders
  • Ensure test coverage and traceability of requirements
  • Reduce ambiguity and duplication in testing
  • Provide a consistent format for documenting and tracking results

Standard Test Case Format

Here is the standard format for writing test cases:

  • Test Case ID
  • Title
  • Test Scenario
  • Test Steps
  • Prerequisites
  • Test Data
  • Expected Results
  • Actual Results
  • Test Status – Pass/Fail

While writing test cases, remember to include:

  • A reasonable description of the requirement
  • A description of the test process
  • Details related to the testing setup: version of the software under test, browser, OS, environment, date, time, prerequisites, etc.
  • Any related documents or attachments testers will require
  • Alternatives to prerequisites, if they exist

Test Case Example

Here is a sample case based on a specific scenario:

  • Test Case ID: #TC-001
  • Test Scenario: Verify successful user login on Gmail.com
  • Test Steps:
    1. The user navigates to Gmail.com
    2. The user enters a registered email address in the “email” field
    3. The user clicks the “Next” button
    4. The user enters the registered password
    5. The user clicks “Sign In”
  • Prerequisites: A registered Gmail ID with a unique username and password
  • Browser: Chrome v86
  • Device: Samsung Galaxy Tab S7
  • Test Data:
    • Username: aquaman@gmail.com
    • Password: aquaman_123
  • Expected/Intended Results: Once username and password are entered, the web page redirects to the user’s inbox, displaying and highlighting new emails at the top.
  • Actual Results: As Expected
  • Test Status – Pass/Fail: Pass

Test Case Prioritization

Prioritization is vital when writing test cases. Running all the test cases in a test suite requires considerable time and effort, and as the number of features grows, testing the entire suite for every build becomes practically impossible. Test case prioritization helps overcome these challenges.

Testing should be automated as much as possible. Automation removes the practical limit on how many test cases can be run before each build, so the application can be tested more thoroughly. But as with manual testing, time and resources are still limited, so prioritization also helps in test automation in two ways:

  1. It will help determine in which order to develop automated tests.
  2. With continuous integration, you want quick and early feedback, so you usually cannot run the entire test suite for every commit; that would take too long. The priority can then be used to build separate suites for quick tests and for pre-production validation, as sketched below.
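
As an illustration, in a Python code base using pytest, priority levels could be expressed as test markers so the CI pipeline can select a fast subset on every commit and the full set before release. The marker and test names below are assumptions for the sketch, not an established Balean convention:

# pytest.ini - register the priority markers (names are illustrative)
[pytest]
markers =
    priority1: critical tests, run on every commit
    priority2: pre-release double checks

# test_login.py - tag each test with its priority
import pytest

@pytest.mark.priority1
def test_login_succeeds_with_valid_credentials():
    ...

@pytest.mark.priority2
def test_login_page_shows_password_hint():
    ...

Quick feedback on every commit then becomes pytest -m priority1, while the pre-production pipeline can run pytest -m "priority1 or priority2".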

Test Case Priority Levels

While deciding how to assign priority to your test cases, consider the following levels:

  • Priority 1: These test cases MUST be executed; skipping them risks serious consequences after the product is released. They are critical cases where the chance of existing functionality being disrupted by a new feature is high.
  • Priority 2: These test cases COULD be executed if enough time is available. They are not critical, but running them is a good practice as a double check before launch.
  • Priority 3: These test cases do NOT need to be executed prior to the current release. They can be tested shortly after the release as a best practice; nothing in the release depends on them directly.
  • Priority 4: These test cases are NEVER important, as their impact is nearly negligible.

What is a Defect or Bug Report?

A Defect Report is a document created during the software testing process to identify, describe, and track any issues or defects found in a software application.

It provides detailed information about the defect: how it was discovered, the environment in which it occurred, steps to reproduce it, the actions needed to resolve it, and the expected result.

How to Write a Bug Report

Below is a short summary of what to include for each component of a full bug report:

Components of a Defect Report:

  • Defect ID
  • Defect Title
  • Action Steps (Steps to Reproduce):
    The Steps to Reproduce section should list each step required to reproduce the bug in chronological order.
  • Expected Result
    The Expected Result section should describe how the app is expected to behave.
    Example: The authorization phase completes successfully, the user is logged into the newly created account, and redirected to the home page of the app.
  • Environment
    Each bug report must include details of the specific environment, device, and operating system used during testing to identify the bug.
  • Actual Result
    This section should expand on the title by stating the behavior observed when the issue occurs.
  • Severity
    • Critical – The bug prevents critical functionality within the app from working. This includes crashing or freezing where no workaround is possible, and a fix is required immediately.
    • High – The bug affects major functionality within the app. However, it can be temporarily avoided with a workaround.
    • Medium – The bug does not cause a failure and does not interfere with normal use of the system. It has an easy workaround.
    • Low – The bug does not affect functionality or data, or require a workaround. It is a result of non-conformance to a standard that does not impact productivity (e.g., typos, aesthetic inconsistencies).
  • Usability – A suggestion that would improve how an app is understood, experienced, or used efficiently.
  • Other Notes/Error Messages
    Include necessary and relevant evidence to show the problem you are describing. Evidence may include: screenshots, videos, or logs (if available).
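
For illustration, a defect report following these components might look like this (all details are hypothetical):

  • Defect ID: #DR-001
  • Defect Title: "Sign In" button unresponsive after entering valid credentials
  • Action Steps (Steps to Reproduce):
    1. Navigate to the sign-in page
    2. Enter a registered email address and password
    3. Click "Sign In"
  • Expected Result: The user is logged in and redirected to the home page
  • Actual Result: The button does not respond and no error message is shown
  • Environment: Chrome v86, Windows 10, staging environment
  • Severity: High – login is blocked in Chrome, but a workaround exists via another browser
  • Other Notes/Error Messages: Browser console shows a JavaScript error; screenshot and console log attached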