Integration Tests¶
A practical guide for writing and running integration tests in orbital-manager-backend.
What is an Integration Test?¶
Integration tests verify that different layers of the application work correctly together — the router, service, and data layers communicating with real infrastructure (database, message broker, cache). They differ from unit tests in scope and purpose:
| | Unit Test | Integration Test |
|---|---|---|
| Scope | Single function or class in isolation | Full request path through the stack |
| Infrastructure | Mocked | Real containers (Postgres, RabbitMQ, etc.) |
| Feedback | Fast, localized failures | Catches wiring and contract bugs |
| When to use | Pure business logic, edge cases | Main logic branches, API contracts, message flows |
Write integration tests for the main logic branches of each endpoint or message flow: the happy path, the principal error cases, and any invariants that would otherwise require mocking several layers at once. Prefer unit tests for isolated calculations or validations where mocking one dependency is sufficient.
Test Environments¶
Testcontainers¶
The test suite uses Testcontainers to spin up real infrastructure containers for every test session. All containers are declared in conftest.py at the project root and started once per pytest run:
| Container | Image | Purpose |
|---|---|---|
| PostgreSQL | postgres:17.4-alpine | Primary database |
| RabbitMQ | rabbitmq:3.13-alpine | Message broker |
| MongoDB | mongo:7 | Document store |
| Redis | redis:5.0.8-alpine | Cache |
You do not need to start any containers manually — pytest handles the full lifecycle.
Docker must be running
Testcontainers pulls and starts Docker images automatically. Make sure Docker Desktop (or equivalent) is running before executing the test suite.
Adding a new resource: If a new piece of infrastructure is needed for tests, add a session-scoped fixture for it in the root conftest.py and wire its connection URL into the configure_test_env fixture. Follow the existing pattern for each container type already present in that file.
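For orientation, a minimal sketch of that pattern, assuming a hypothetical Memcached container (the image, fixture name, and exposed port are illustrative; the existing fixtures in conftest.py are the reference):

import pytest
from testcontainers.core.container import DockerContainer

@pytest.fixture(scope="session")
def memcached_container():
    # Started once per pytest session, stopped when the session ends.
    container = DockerContainer("memcached:1.6-alpine").with_exposed_ports(11211)
    container.start()
    try:
        yield container
    finally:
        container.stop()

configure_test_env would then expose container.get_container_host_ip() and container.get_exposed_port(11211) to the application through environment variables.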
Environment Variables¶
Static test configuration lives in .env.test at the project root. This file is loaded automatically before any test runs and covers values that do not change between sessions, such as JWT secrets and queue names:
# .env.test (excerpt)
ENV=test
JWT_SECRET_KEY=integration-test-secret
RABBITMQ_SERVICES_EXCHANGE=services_test
Dynamic values — database URLs, broker host/port, Redis host — are not in .env.test. They are injected at runtime by the configure_test_env fixture in conftest.py once the containers are live and their ports are known.
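A simplified sketch of how that injection can look (the fixture parameter names and Redis variables are assumptions, not the verbatim fixture):

import os
import pytest

@pytest.fixture(scope="session", autouse=True)
def configure_test_env(postgres_container, redis_container):
    # Containers expose random host ports, so URLs can only be built at runtime.
    host = postgres_container.get_container_host_ip()
    port = postgres_container.get_exposed_port(5432)
    os.environ["POSTGRES_SUPABASE_URL"] = f"postgresql://test:test@{host}:{port}/test"
    os.environ["REDIS_HOST"] = redis_container.get_container_host_ip()
    os.environ["REDIS_PORT"] = str(redis_container.get_exposed_port(6379))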
Overriding a variable in a specific test
You can override any environment variable for the scope of a single test by setting os.environ["KEY"] = "value" in the test body and restoring it in a finally block. This is useful for testing behaviour that depends on a specific configuration value.
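A minimal sketch of that pattern (the variable name and value are purely illustrative):

import os

def test_uses_custom_exchange() -> None:
    previous = os.environ.get("RABBITMQ_SERVICES_EXCHANGE")
    os.environ["RABBITMQ_SERVICES_EXCHANGE"] = "services_override"
    try:
        ...  # exercise the behaviour that reads the overridden value
    finally:
        # Restore the original value so other tests are unaffected.
        if previous is None:
            os.environ.pop("RABBITMQ_SERVICES_EXCHANGE", None)
        else:
            os.environ["RABBITMQ_SERVICES_EXCHANGE"] = previous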
DB Migrations¶
Database schemas are managed with Alembic. All migration files live in a single linear chain under shared/migrations/versions/ and are shared across every service. The run_db_migrations fixture in conftest.py runs alembic upgrade head automatically before any test executes, so the test database is always up to date without any manual steps.
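An illustrative sketch of such a fixture (the real one in conftest.py is authoritative; this sketch assumes it depends on configure_test_env so the database URL is already set):

import pytest
from alembic import command
from alembic.config import Config

@pytest.fixture(scope="session", autouse=True)
def run_db_migrations(configure_test_env):
    # Bring the freshly started test database up to the latest revision.
    alembic_config = Config("shared/migrations/alembic.ini")
    command.upgrade(alembic_config, "head")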
Adding a new migration¶
1. From the project root, generate a new revision file (see the example command after this list).
2. A timestamped file is created in shared/migrations/versions/. Edit the upgrade() and downgrade() functions with your DDL changes.
3. Commit the migration file alongside the feature code that depends on the new schema.
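For example, assuming Alembic is pointed at the shared configuration file (adjust the revision message, and the -c path if your setup differs):

alembic -c shared/migrations/alembic.ini revision -m "describe your schema change"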
Keep the chain linear
Every revision must have exactly one parent. Do not create parallel branches — two migrations sharing the same parent will break the upgrade chain. See Best Practices for the rationale.
The Alembic configuration lives in shared/migrations/alembic.ini. The env.py in the same directory reads POSTGRES_SUPABASE_URL at runtime, so no database URL is hard-coded in the config file.
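A minimal illustration of that pattern (not the verbatim file):

# shared/migrations/env.py (illustrative excerpt)
import os
from alembic import context

config = context.config
config.set_main_option("sqlalchemy.url", os.environ["POSTGRES_SUPABASE_URL"])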
Guide to Writing an Integration Test¶
Step 1: Choose the right test folder¶
Place the test under your service's tests/integration/ directory, mirroring the feature path of the code under test:
self_delivery_service/tests/
├── conftest.py # Session-scoped TestClient
├── integration/
│ ├── conftest.py # Function-scoped TestClient (if needed)
│ ├── courier_location/
│ │ └── test_courier_location_rabbitmq_e2e.py
│ └── utils/
│ └── courier_test_helper.py # Seed / cleanup helpers
Step 2: Use the right fixture scope¶
Two conftest.py files typically exist per service:
- tests/conftest.py — session-scoped TestClient. One app instance shared across all tests; suitable for router and service tests where you fully control state through the HTTP layer.
- tests/integration/conftest.py — function-scoped TestClient. Restarts the app lifespan for every test; use this when the test triggers side effects (RabbitMQ connections, background workers) that must not bleed between runs.
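A sketch of what the function-scoped variant can look like (the application import path is an assumption; check the service's actual conftest.py):

# tests/integration/conftest.py (illustrative)
from collections.abc import Iterator

import pytest
from fastapi.testclient import TestClient

from self_delivery_service.main import app  # assumed import path

@pytest.fixture
def self_delivery_client() -> Iterator[TestClient]:
    # Entering the context manager runs the app lifespan for this test only,
    # so connections and background workers start and stop per test.
    with TestClient(app) as client:
        yield client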
Step 3: Seed test data¶
Never rely on pre-existing database state. Each test must create its own data and clean it up in a finally block. Provide seed and cleanup helpers in tests/integration/utils/:
# tests/integration/utils/courier_test_helper.py
from uuid import UUID

# ensure_async_postgres_pool is the project's shared asyncpg pool helper;
# import it from wherever the service exposes it.

async def seed_user(external_id: UUID) -> int:
    pool = ensure_async_postgres_pool()
    return await pool.fetchval(
        "INSERT INTO marten.users (external_id) VALUES ($1) RETURNING id",
        external_id,
    )

async def cleanup_courier_seed(user_id: int | None) -> None:
    if user_id is None:
        return
    pool = ensure_async_postgres_pool()
    await pool.execute("DELETE FROM self_delivery.couriers WHERE user_id = $1", user_id)
    await pool.execute("DELETE FROM marten.users WHERE id = $1", user_id)
Seed helpers use the raw asyncpg pool (ensure_async_postgres_pool()) directly — not the app's own services or repositories. This keeps seeding independent of the code under test so a bug in a repository cannot hide a real test failure.
Step 4: Run async helpers from a sync test¶
FastAPI's TestClient is synchronous. Use client.portal.call(async_fn, ...) to run coroutines from your test body:
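A minimal sketch, using the seed_user helper from Step 3 and the fixture from Step 6:

from uuid import uuid4

from fastapi.testclient import TestClient

# seed_user is the async helper shown in Step 3 (import omitted).

def test_something(self_delivery_client: TestClient) -> None:
    # portal.call blocks until the coroutine finishes and returns its result.
    user_id = self_delivery_client.portal.call(seed_user, uuid4())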
Step 5: Make the HTTP call and assert¶
response = client.post("/courier/location", headers=headers, json=body)
assert response.status_code == status.HTTP_204_NO_CONTENT
Step 6: Always clean up in finally¶
Wrap every test body in a try/finally block so seeded data is removed even when an assertion fails:
# test_courier_location_rabbitmq_e2e.py (excerpt)
from uuid import uuid4

from fastapi import status
from fastapi.testclient import TestClient

# Imports of the seed/cleanup helpers, request builders, and the
# COURIER_LOCATION_PATH constant are omitted for brevity.

def test_post_courier_location_publishes_to_rabbitmq_broker(
    self_delivery_client: TestClient,
) -> None:
    user_id: int | None = None
    portal = self_delivery_client.portal
    try:
        user_id = portal.call(seed_user, uuid4())
        portal.call(seed_courier_row, user_id, "offline")
        response = self_delivery_client.post(
            COURIER_LOCATION_PATH,
            headers=build_authorization_headers(user_id),
            json=build_location_body(),
        )
        assert response.status_code == status.HTTP_204_NO_CONTENT
        # ... additional assertions
    finally:
        portal.call(cleanup_courier_seed, user_id)
See test_courier_location_rabbitmq_e2e.py for the complete example.
How to Run¶
Run all tests from the project root:
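For example, a plain pytest invocation from the repository root typically collects every service's test suite:

pytest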
Run a single service's tests:
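For example, point pytest at the service directory:

pytest self_delivery_service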
Run a specific file or test by name:
pytest self_delivery_service/tests/integration/courier_location/test_courier_location_rabbitmq_e2e.py
pytest -k test_login_api
Run with verbose output:
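For example, using pytest's standard verbosity flag:

pytest -v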
Last Updated: 2026-04-30