type: concept
created: 2026-04-07T02:00:00+02:00
updated: 2026-04-07T02:00:00+02:00
sources: raw/notes/progress, raw/articles/STATUS, raw/articles/DISCOVERIES
tags: testing pytest benchmarks quality test-suites

Test Suites Overview

abstract
The Paper Surplus Marketplace has 741 passing tests plus 16 slow benchmarks, covering models, API endpoints, services, ingestion, matching, containers, security, integration flows, and performance; the suite runs via pytest on the Django backend.

Test Counts

| Category | Tests | What It Covers |
| --- | --- | --- |
| Phase 0 (models) | 131 | All 13 entity model constraints, FK integrity, enum validation, state machine |
| Phase 1 (API) | 82 | 69+ endpoint CRUD, permissions, pagination, filtering |
| Container Fill | 39 | Bin-packing algorithm, freight estimation, proposal lifecycle |
| Phase 3 (matching) | 79 | 5 scoring dimensions, composite scoring, auto-triggers |
| Phase 4 (visibility) | 22 | Allow/deny rules, scope hierarchy, buyer filtering |
| Phases 5-8 | ~65 | Newsletters, containers, exclusivity, pre-production |
| Phases 9-11 | ~58 | API tests, dashboard, security (audit logging, RBAC) |
| Phases 1-2 (ingestion + bootstrap) | ~59 | File processing, parsing, validation, import commands |
| Phase 12 (benchmarks + integration) | ~30 | 4 E2E integration tests, 16 performance benchmarks |
| Scattered | ~36 | Full re-match benchmark, newsletter visibility, bounce handling, freight estimation |
| MorichalAI migration | 119 | 95 unit + 16 integration + 8 command tests for data import |
| Total | 741 (+16 slow) | |

How to Run

Full Test Suite

cd backend
source venv/bin/activate
python manage.py test

Specific App Tests

python manage.py test apps.surplus
python manage.py test apps.matching
python manage.py test apps.ingestion
python manage.py test apps.containers

Including Slow Benchmarks

The 16 slow benchmarks (performance tests) are excluded from the default test run. To include them:

python manage.py test --include-slow
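
The `--include-slow` flag is project-specific. As a hedged sketch of one common way such gating is implemented (an environment variable standing in for the custom flag; the class and test names here are illustrative, not the project's):

```python
import os
import unittest

# Slow tests are skipped unless the opt-in switch is set. The project
# wires this to a custom --include-slow flag; RUN_SLOW is a stand-in.
RUN_SLOW = os.environ.get("RUN_SLOW") == "1"

@unittest.skipUnless(RUN_SLOW, "slow benchmark; set RUN_SLOW=1 to include")
class FullRematchBenchmark(unittest.TestCase):
    def test_rematch_all_items(self):
        self.assertTrue(True)  # placeholder for the real benchmark body

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(FullRematchBenchmark).run(result)
```

With the switch unset, the benchmark is loaded and counted but reported as skipped rather than run.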

MorichalAI Migration Tests

python manage.py test common.services.morichal_import

What the Tests Cover

Model Tests (Phase 0)

Every model field has validation tests: enum constraints (e.g., PaperType only accepts 11 values), range validators (GSM: 13-500, width: 100-5000mm), FK referential integrity, unique constraints (Mill slug, MatchResult surplus_item+buyer), auto-populated timestamps, and the SurplusItem state machine (valid and invalid transitions).
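
A minimal sketch of the range checks these tests exercise; the bounds come from the note above, but the helper and constant names are illustrative, not the project's actual validators:

```python
# Illustrative range validation mirroring the model constraints described
# above: GSM must fall in 13-500 and width in 100-5000 mm.
GSM_RANGE = (13, 500)
WIDTH_RANGE_MM = (100, 5000)

def validate_range(value, lo, hi, field):
    """Raise ValueError when value falls outside the inclusive [lo, hi] range."""
    if not (lo <= value <= hi):
        raise ValueError(f"{field} must be between {lo} and {hi}, got {value}")
    return value

validate_range(80, *GSM_RANGE, "gsm")           # typical GSM, accepted
validate_range(2100, *WIDTH_RANGE_MM, "width")  # accepted

try:
    validate_range(7, *GSM_RANGE, "gsm")        # below the minimum of 13
except ValueError as exc:
    print(exc)  # gsm must be between 13 and 500, got 7
```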

API Tests (Phase 1)

Full CRUD coverage for all viewsets. Permission tests verify that mill users cannot access buyer-only endpoints and vice versa. Admin can access everything. Tests cover pagination, filtering, ordering, and error responses.
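
The permission rules above can be sketched as a role/access matrix; the role and endpoint-group names are assumptions for illustration, not the project's actual permission classes:

```python
# Hypothetical role/access matrix reflecting what the permission tests
# assert: mills and buyers each see only their own endpoints, admin sees all.
ROLE_ACCESS = {
    "admin": {"mill_endpoints", "buyer_endpoints"},
    "mill": {"mill_endpoints"},
    "buyer": {"buyer_endpoints"},
}

def can_access(role: str, endpoint_group: str) -> bool:
    return endpoint_group in ROLE_ACCESS.get(role, set())

assert can_access("admin", "buyer_endpoints")     # admin sees everything
assert not can_access("mill", "buyer_endpoints")  # mills blocked from buyer-only views
assert not can_access("buyer", "mill_endpoints")  # and vice versa
```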

Matching Tests (Phase 3)

Each of the 5 scoring dimensions (paper type, GSM, width, grade, geography) has individual tests. Composite scoring tests verify the weighted combination. Auto-trigger tests verify that creating/updating a SurplusItem triggers matching. The full re-match benchmark tests system-wide recalculation.
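
As a sketch of the weighted combination being tested, assuming invented weights (the real weighting lives in the matching service and is not stated in this note):

```python
# Illustrative composite over the five scoring dimensions named above.
# The weight values are assumptions, chosen only so they sum to 1.0.
WEIGHTS = {
    "paper_type": 0.30,
    "gsm": 0.25,
    "width": 0.20,
    "grade": 0.15,
    "geography": 0.10,
}

def composite_score(dimension_scores: dict) -> float:
    """Weighted sum of per-dimension scores, each assumed to lie in [0, 1]."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)

perfect = composite_score({d: 1.0 for d in WEIGHTS})  # all dimensions match fully
print(round(perfect, 2))  # 1.0
```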

Ingestion Tests (Phases 1b-2)

File processor tests cover SHA-256 hashing, MIME validation, size limits, and duplicate detection. Parser tests cover column alias resolution and unit conversion. Validator tests cover field-level validation with row-level error reporting. Pipeline tests cover status transitions and commit/reject flows.
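
The SHA-256 duplicate-detection step can be sketched as follows (a minimal stand-in; the real processor also enforces MIME and size checks, and the function names here are illustrative):

```python
import hashlib

# Minimal duplicate-detection sketch of the hashing step the file
# processor tests cover.
def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

seen_hashes: set = set()

def register_upload(data: bytes) -> bool:
    """Return True if byte-identical content was seen before, else record it."""
    digest = content_hash(data)
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False

assert register_upload(b"mill_a_offers.csv contents") is False  # first upload
assert register_upload(b"mill_a_offers.csv contents") is True   # duplicate re-upload
```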

Security Tests (Phase 11)

Audit logging tests verify that sensitive operations are recorded. File upload tests verify type whitelisting and size limits. RBAC tests verify role-based endpoint access across admin, mill, and buyer roles.
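
A sketch of the upload whitelisting behaviour being verified; the allowed types and the 10 MB cap are assumptions, not the project's configured values:

```python
# Illustrative upload whitelist check. ALLOWED_MIME and MAX_UPLOAD_BYTES
# are invented for this sketch; the real limits live in project settings.
ALLOWED_MIME = {
    "text/csv",
    "application/pdf",
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
}
MAX_UPLOAD_BYTES = 10 * 1024 * 1024

def upload_allowed(mime: str, size: int) -> bool:
    return mime in ALLOWED_MIME and 0 < size <= MAX_UPLOAD_BYTES

assert upload_allowed("text/csv", 4_096)
assert not upload_allowed("application/x-msdownload", 4_096)  # executable rejected
assert not upload_allowed("text/csv", 50 * 1024 * 1024)       # over the size cap
```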

Integration Tests (Phase 12)

Four end-to-end tests cover complete business flows: surplus creation -> matching -> offer -> close. These are slower tests that exercise the full stack.
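
The flow above can be modelled as a toy transition table (the state names are illustrative, not the actual SurplusItem state machine values):

```python
# Toy state-flow for the end-to-end path the integration tests walk:
# surplus creation -> matching -> offer -> close.
ALLOWED_TRANSITIONS = {
    ("created", "matched"),
    ("matched", "offered"),
    ("offered", "closed"),
}

def advance(state: str, nxt: str) -> str:
    if (state, nxt) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt

state = "created"
for nxt in ("matched", "offered", "closed"):
    state = advance(state, nxt)
print(state)  # closed
```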

Performance Benchmarks (Phase 12)

16 benchmarks measuring: single match scoring latency, batch scoring throughput (100+ items), full re-match performance, and per-item timing analysis. These are marked as slow tests.
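
A minimal sketch of how per-item latency can be measured; the real benchmarks time the matching service, which is stubbed here with a dummy workload:

```python
import statistics
import time

def benchmark(fn, iterations: int = 100):
    """Return (mean, worst) wall-clock latency in seconds over n runs."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), max(samples)

def fake_match():
    sum(i * i for i in range(1_000))  # stand-in for scoring one item

mean_s, worst_s = benchmark(fake_match)
print(f"mean={mean_s * 1e6:.1f}us worst={worst_s * 1e6:.1f}us")
```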

Test Infrastructure

Sources

Related