QA & Testing


At Webdelo, testing is part of our architecture and culture, not a separate phase. We design systems so that bugs are hard to introduce in the first place. Quality is ensured at every level: from module design to deployment and monitoring.
QA is a team process involving developers, QA engineers, and DevOps specialists. It starts with understanding requirements, continues with writing testable code, and culminates in a CI/CD pipeline that automatically verifies product integrity.
Testing shows how the system behaves under both predictable and unpredictable conditions — including load scenarios not considered at the start. This ensures system stability in production and allows us to deploy changes confidently without risk to users.

General Principles of QA

Product reliability doesn’t begin at release — it starts much earlier, during design and development. To ensure Webdelo systems run with long-term stability, we embed quality principles into architecture, processes, and team culture. In this section, we explain how we build testability from the ground up, set up automation, and establish collaboration between developers and QA engineers at every stage.

The Role of QA in the Development Process

We integrate testing from the very beginning. When an architect designs modules, they build in testability. When a developer writes code, they write tests alongside it. When QA receives a task, they already understand the context, architecture, and goals.

The entry point for QA is not code review — it’s task definition. This reduces the number of bugs and speeds up issue resolution.

How We Test Large-Scale Projects

We structure the process so that each stage logically follows the previous one, verifying critical aspects of the system.

First, the developer covers core business logic with unit tests — these are fast and focus on key calculations and rules. Next come integration tests, which ensure the service interacts correctly with the database, message queues, and external APIs.

After that, we perform manual testing. QA specialists check interfaces, browser behavior, and complex cases where automation may be less effective. Then we run regression tests — revalidating previously working scenarios. During the pre-release phase, we verify that updates don’t break compatibility and that the system can handle load.

Once released, monitoring tools take over: we track logs, errors, and metrics to react promptly if anything goes wrong.

The Role of CI/CD in Quality Assurance

Each code change triggers a chain of automated checks. First, unit, integration, and e2e tests are executed. Then linters and static analysis run to ensure the code meets team standards. After that, frontend builds and regression tests compare results against previous versions.

If everything passes — the code can be merged. If not — the CI pipeline stops. This prevents broken changes from reaching production and speeds up debugging.
A 60% reduction in production bugs thanks to CI/CD.
After implementing automated testing through GitLab CI, including complex scenarios and regression tests, GitLab cut production bugs by roughly 60% and accelerated its releases through reliable automation of every commit.
GitLab

Types of Tests and Their Purpose

We use different types of tests, each responsible for a specific level of reliability.

Unit tests verify individual functions and modules. They run quickly and protect core business logic. Integration tests show how different parts of the system interact — for example, between a service and a database. End-to-end tests simulate user behavior, from logging in to completing an order.

In addition, we perform manual testing to ensure that the interface behaves correctly, looks as intended, and remains stable under non-standard scenarios.

Checklists, Test Cases, and Scenarios

To make sure no critical scenario is missed, we formalize testing into checklists, test cases, and reproducible scenarios, and support them with a consistent toolchain:
  • PHPUnit, Cypress, Playwright, Vitest — for automated testing at all levels, from backend to interface.
  • Postman and Newman — for API testing and automation.
  • Sentry, Prometheus, Grafana — to monitor errors, metrics, crashes, and performance.
  • Loki, Jaeger, Graylog — for logging and tracing requests in distributed systems.
All these tools are integrated into CI/CD. If a test fails — we know immediately.

Regression and Pre-Release Testing

Before release, we conduct a full control cycle:
  • We verify that existing features still work as expected (regression testing).
  • We run migrations on the staging environment and roll them back to confirm reliability.
  • We compare performance metrics before and after changes.
  • We ensure that external services we integrate with remain available and version-compatible.
The final decision rests with QA — no release goes live without their approval.

Collaboration Between QA and Developers

Collaboration between QA and developers is built on clear role distribution and continuous communication.
  • Who writes the tests?
    Developers write unit tests for their business logic — covering services, requests, and controllers. For complex components, they also add integration or e2e tests, especially for critical user scenarios. QA focuses on user flows and logic, building on what developers have already verified.
  • Who performs manual testing?
    Manual testing is QA’s responsibility. This includes visual checks, UX validation, cross-browser testing, mobile views, non-standard cases, and integrations. QA follows checklists, reproduces edge cases, and compares behavior against specifications.
  • Who writes bug reports?
    All bugs are documented by QA. Each report includes reproduction steps, expected and actual results, environment details, and screenshots. Clarity is essential — a good bug report should be self-explanatory. Developers review and either confirm the issue or clarify with QA.
  • How do we collaborate?
    Every merge request passes through QA for review: they check test coverage, behavior, and regression impact. If improvements are needed, QA provides feedback directly. QA also joins task grooming to define testing scenarios in advance. All agreements are logged in bug reports or test documentation.


This workflow helps the team move faster and more efficiently. Everyone knows their responsibilities and works with a shared focus on delivering a stable, reliable result.

Testing in Code: Go

Go is our language of choice for high-load microservices. Its strict typing, simplicity, and high performance require a rigorous testing discipline. Here, it’s especially important to separate core application logic, ensure correct interaction with external services, and set up reliable automated checks for every change.

Unit Tests in Go

We use Go’s standard **testing** package along with the **testify** library for convenient assertions. Unit tests in Go provide a fast and reliable way to ensure that business logic works correctly. We follow a clear naming convention (`TestXxx_WhenYyy_ShouldZzz`) and cover both normal and edge cases — such as empty fields, maximum values, or unexpected inputs.
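
A minimal sketch of what such a test can look like, assuming a hypothetical `ApplyDiscount` pricing rule (defined inline so the example is self-contained):

```go
package pricing

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// ApplyDiscount is a hypothetical business rule used for illustration:
// it reduces a price by a percentage and never goes below zero.
func ApplyDiscount(price, percent float64) float64 {
	result := price * (1 - percent/100)
	if result < 0 {
		return 0
	}
	return result
}

func TestApplyDiscount_WhenPercentIsValid_ShouldReducePrice(t *testing.T) {
	assert.InDelta(t, 90.0, ApplyDiscount(100, 10), 1e-9)
}

func TestApplyDiscount_WhenPercentExceeds100_ShouldNotGoNegative(t *testing.T) {
	assert.Equal(t, 0.0, ApplyDiscount(100, 150))
}
```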

Integration Tests

When we need to test interactions between services, databases, or message brokers, we use **httptest**, **dockertest**, and temporary container-based test configurations. This allows us to spin up PostgreSQL, Redis, Kafka, and other dependencies in an isolated environment. We run the system under conditions as close to production as possible — without using any real user data.
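
A minimal sketch of the in-process variant using **httptest** (the **dockertest** setup for real databases follows the same pattern but is longer); the endpoint and payload here are illustrative:

```go
package client

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestFetchStatus_AgainstFakeAPI(t *testing.T) {
	// An in-process server stands in for the external API;
	// no real network calls or user data are involved.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path != "/status" {
			w.WriteHeader(http.StatusNotFound)
			return
		}
		io.WriteString(w, `{"status":"ok"}`)
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/status")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if string(body) != `{"status":"ok"}` {
		t.Errorf("unexpected body: %s", body)
	}
}
```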

Mocking and Fake Implementations

To test individual parts of the system without involving the entire stack, we substitute real components with specialized stubs that simulate behavior without performing actual actions. For this, we use **gomock** and **mockery**. Since all dependencies are connected via interfaces, we can easily replace a database or an API with a simple mock that returns predictable responses. This allows us to focus on verifying business logic rather than the behavior of the entire system.
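
To show the idea, here is a hand-written stub; in real projects **gomock** or **mockery** generates such doubles from the interface. `UserStore` and `CanPlaceOrder` are hypothetical names used for illustration:

```go
package orders

import "testing"

// UserStore is the dependency boundary: production code depends on this
// interface, so tests can swap the real database for a fake.
type UserStore interface {
	IsBlocked(userID string) (bool, error)
}

// fakeStore returns predictable answers without touching any storage.
type fakeStore struct{ blocked map[string]bool }

func (f *fakeStore) IsBlocked(id string) (bool, error) { return f.blocked[id], nil }

// CanPlaceOrder is a hypothetical business rule under test.
func CanPlaceOrder(s UserStore, userID string) bool {
	blocked, err := s.IsBlocked(userID)
	return err == nil && !blocked
}

func TestCanPlaceOrder_WhenUserIsBlocked_ShouldRefuse(t *testing.T) {
	store := &fakeStore{blocked: map[string]bool{"u1": true}}
	if CanPlaceOrder(store, "u1") {
		t.Error("blocked user must not be able to place orders")
	}
}
```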

Coverage and Testability

We design modules to be easily testable in isolation. This means keeping logic separate, making dependencies replaceable, and ensuring behavior can be reproduced independently.

To understand how well our code is protected by tests, we measure **coverage** — the percentage of code actually executed during testing. For this, we use:
  • `go test -coverprofile=coverage.out` — generates a file showing what percentage of code lines were executed during tests;
  • `go tool cover -html=coverage.out` — opens a color-coded visualization in which tested lines are highlighted and untested ones are not.


This helps us immediately identify weak spots and add the necessary checks.

The goal of tracking coverage is to pinpoint risk areas and growth opportunities. If coverage drops, we revisit the architecture or strengthen the test suite.

CI for Go

Every time a developer pushes changes to the repository, an automated verification sequence is triggered using **GitHub Actions**. It runs step by step:
  • Automatic test execution — via `go test ./...` to ensure existing functionality isn’t broken.
  • Code quality and security checks — `go vet`, `staticcheck`, and `golangci-lint` detect potential issues, incorrect data types, and code style violations.
  • Test coverage evaluation — to assess how well the codebase is tested.


If any check fails, the process stops immediately. The developer receives feedback and can make corrections right away. This prevents bugs from entering the main branch and ensures that only verified, stable code is merged into the system.
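
A trimmed sketch of what such a workflow can look like in GitHub Actions; the step names, action versions, and Go version are illustrative, not our exact configuration:

```yaml
name: go-ci
on: [push, pull_request]

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Run tests
        run: go test ./...
      - name: Static analysis
        run: |
          go vet ./...
          go run honnef.co/go/tools/cmd/staticcheck@latest ./...
      - name: Coverage report
        run: go test -coverprofile=coverage.out ./...
```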

Testing in Code: PHP / Laravel

In Laravel, events, queues, forms, and migrations play a critical role — all of which must be covered by tests to minimize the risk of failures after updates. Each task goes through unit checks, scenario tests, and manual review when necessary. CI immediately reports any failures, while mock services and factory classes help speed up verification. This combination keeps Laravel predictable and stable, even in large-scale projects.

Unit Tests in Laravel

**PHPUnit** is our main tool for automated code verification in Laravel. It ensures that key parts of the system — such as services, forms, validation, and business logic — work as intended. We write tests that confirm each function performs its expected action: for example, an order is created successfully, a user receives a notification, or a form returns an error for invalid data.

Tests are stored alongside the main codebase, following a consistent structure across all projects. This makes it easy for any developer to navigate, understand what each test covers, and run checks quickly. The result is a transparent, maintainable, and efficient testing process.
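
A minimal sketch of such a unit test, using a hypothetical `OrderTotal` service defined inline for illustration:

```php
<?php

use PHPUnit\Framework\TestCase;

// A hypothetical service used for illustration.
final class OrderTotal
{
    public function withVat(float $net, float $rate = 0.20): float
    {
        if ($net < 0) {
            throw new InvalidArgumentException('Price cannot be negative.');
        }

        return round($net * (1 + $rate), 2);
    }
}

final class OrderTotalTest extends TestCase
{
    public function test_it_adds_vat_to_the_net_price(): void
    {
        $this->assertEqualsWithDelta(120.00, (new OrderTotal())->withVat(100.00), 0.001);
    }

    public function test_it_rejects_a_negative_price(): void
    {
        $this->expectException(InvalidArgumentException::class);
        (new OrderTotal())->withVat(-1.00);
    }
}
```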

Feature and Integration Tests

We test the core elements of the application — routes, middleware, and database migrations. This ensures that user requests follow the correct paths, all checks execute properly, and the database structure matches expectations. To keep tests independent from real data and external checks, we use `withoutMiddleware()` to disable middleware and the `RefreshDatabase` trait to reset the database before each test. This provides a clean state and improves test reliability. We also check not only standard scenarios but also edge cases — empty fields, overly long values, invalid inputs — to make sure the application handles any data robustly.
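
A sketch of a typical feature test under these conventions; the `/register` route and field names are hypothetical:

```php
<?php

namespace Tests\Feature;

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class RegistrationTest extends TestCase
{
    use RefreshDatabase; // resets the database before each test

    public function test_registration_rejects_an_empty_email(): void
    {
        $response = $this->post('/register', [
            'email'    => '',
            'password' => 'secret-password',
        ]);

        // Validation should fail on the email field,
        // and no user record should be created.
        $response->assertSessionHasErrors('email');
        $this->assertDatabaseCount('users', 0);
    }
}
```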

CI for PHP

In Laravel projects, we use **GitHub Actions** to automatically validate the code every time someone makes changes. This helps us detect issues early and ensure the application works as expected.

Here’s what happens step by step:
  • Automatic tests: PHPUnit runs first with the `--stop-on-failure` flag, halting on the first error to save time and pinpoint the problem immediately.
  • Static analysis: PHPStan (run at level 8, one of its strictest settings) and Psalm scan the code to detect potential bugs and architectural violations before runtime.
  • Dependency validation: `composer validate` and `composer normalize` ensure the `composer.json` file is properly structured and consistent.


If any of these steps fail, the pipeline stops. This means no faulty code can be merged into the main branch until all issues are resolved. This approach guarantees that everything entering the project has already been tested and is ready for production.
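
A trimmed sketch of what such a pipeline can look like; the action versions, PHP version, and paths are illustrative rather than our exact configuration:

```yaml
name: php-ci
on: [push, pull_request]

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'
      - name: Validate dependencies
        run: composer validate --strict
      - name: Install dependencies
        run: composer install --no-interaction --prefer-dist
      - name: Run tests
        run: vendor/bin/phpunit --stop-on-failure
      - name: Static analysis
        run: vendor/bin/phpstan analyse --level=8 app
```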

Frontend Testing (Vue.js and SPA)

Frontend at Webdelo is a critical part of the system’s operation. It handles forms, manages data exchange with the server, tracks interface states, and responds to user actions. We carefully test every detail — from how individual components like buttons or input fields behave, to how the entire system performs in complex scenarios, such as a user completing a full checkout process. This ensures that the interface remains stable and the system responds predictably under any conditions.

Component Unit Tests

We use Vitest or Jest to test each interface component in isolation. This means checking whether text renders correctly, how elements respond to clicks, whether the right classes activate, and if conditional rendering behaves as expected. These tests don’t connect to the server or database — they focus purely on the component’s visual and logical behavior, independent of the rest of the application. This allows us to quickly detect rendering or interaction issues without spending time on complex environment setup.
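
A minimal sketch of such a test with Vitest and Vue Test Utils; the `Counter` component is hypothetical and defined inline so the test is self-contained:

```ts
import { describe, expect, it } from 'vitest'
import { mount } from '@vue/test-utils'
import { defineComponent, h, ref } from 'vue'

// A hypothetical component, written with a render function
// so the example needs no template compiler.
const Counter = defineComponent({
  setup() {
    const count = ref(0)
    return () =>
      h('button', { onClick: () => count.value++ }, `Clicked ${count.value} times`)
  },
})

describe('Counter', () => {
  it('renders its initial text', () => {
    const wrapper = mount(Counter)
    expect(wrapper.text()).toContain('Clicked 0 times')
  })

  it('updates after a click', async () => {
    const wrapper = mount(Counter)
    await wrapper.trigger('click')
    expect(wrapper.text()).toContain('Clicked 1 times')
  })
})
```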

Integration and End-to-End (E2E) Tests

To ensure users can smoothly navigate the entire application journey, we use Cypress and Playwright. These tools test real-life scenarios such as logging in, submitting forms, editing profiles, and following links. The tests launch the interface in a browser and simulate user actions step by step — as if a real person were using the system. This helps confirm that everything works correctly: buttons respond, pages load, data saves properly, and no errors occur.
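
A short Playwright sketch of such a scenario; the URL, labels, and expected texts are hypothetical:

```ts
import { expect, test } from '@playwright/test'

test('user can log in and reach the dashboard', async ({ page }) => {
  await page.goto('https://app.example.com/login')

  // Fill the form the way a real user would.
  await page.getByLabel('Email').fill('user@example.com')
  await page.getByLabel('Password').fill('correct-horse-battery')
  await page.getByRole('button', { name: 'Log in' }).click()

  // The app should navigate to the dashboard and greet the user.
  await expect(page).toHaveURL(/\/dashboard/)
  await expect(page.getByText('Welcome back')).toBeVisible()
})
```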

API Mocks and State Management

During testing we don’t contact real servers or external APIs. Instead, we create fake responses that mimic real ones and return predefined data. This is called mocking — we substitute dummy responses for REST or gRPC requests. This approach prevents reliance on the internet or third-party services and makes tests fast and stable. On each run, tests receive the same responses and operate under predictable conditions.
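
A minimal sketch of this technique in Vitest, stubbing the global `fetch`; `loadProfile` is a hypothetical helper used for illustration:

```ts
import { afterEach, expect, it, vi } from 'vitest'

// A hypothetical data-fetching helper under test.
async function loadProfile(id: string): Promise<{ name: string }> {
  const res = await fetch(`/api/users/${id}`)
  return res.json()
}

afterEach(() => {
  vi.unstubAllGlobals()
})

it('returns the profile from the mocked API', async () => {
  // Replace the global fetch with a stub that returns a canned response,
  // so the test never touches the network.
  vi.stubGlobal('fetch', vi.fn(async () =>
    new Response(JSON.stringify({ name: 'Ada' }), {
      headers: { 'Content-Type': 'application/json' },
    }),
  ))

  await expect(loadProfile('42')).resolves.toEqual({ name: 'Ada' })
})
```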

Linters and Visual Tests

We use ESLint and Stylelint to automatically check for syntax errors and style inconsistencies, keeping the code clean and uniform. When the interface is particularly complex or visual accuracy is critical, we add screenshot-based tests — they capture screens and compare them pixel by pixel with the reference version. If anything shifts or disappears, the test immediately flags it. This approach helps us catch visual bugs early and maintain design stability.
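
A minimal sketch of a screenshot comparison in Playwright; the page URL and baseline name are hypothetical (the first run records the baseline, later runs diff against it):

```ts
import { expect, test } from '@playwright/test'

test('checkout page matches the visual baseline', async ({ page }) => {
  await page.goto('https://app.example.com/checkout')

  // Fails if the rendered page drifts from the stored reference image
  // by more than the allowed pixel budget.
  await expect(page).toHaveScreenshot('checkout.png', { maxDiffPixels: 100 })
})
```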

CI in Frontend Development

As soon as a developer pushes code to the project, a full set of automated checks starts running:
  • First, the project builds — this ensures the code compiles without errors and works as a complete system.
  • Then the linter runs — it checks for syntax and style issues.
  • Next, unit tests are executed to verify that each part of the interface functions correctly.
  • If visual tests are connected, the system also compares page appearances with reference screenshots.
If any error occurs, the build stops immediately, and the code doesn’t get merged into the main branch. This helps the team detect and fix issues before they reach production.

Conclusion

Testing at Webdelo is a built-in part of every project’s architecture. From the start, we design systems with testability in mind, automate repetitive actions, and use tools that provide real-time visibility into code quality.

A unified approach across all layers — backend, frontend, and DevOps — allows us to deliver stable releases, detect and fix issues faster, and evolve products without risking the production environment. It all works thanks to discipline, clear accountability, and a shared culture of quality within the team.

We view results through the lens of stability, speed, and reliability — because that’s what defines the strength of a digital product.

Want to Discuss Your Project?

Submit Your Request — Let’s Take Your Business to the Next Level Together.

Start a project