Automated API tests for the LI.FI cross-chain aggregation API, built for the QA Engineer Take-Home Assignment. Stack: Playwright Test + TypeScript.
```
tests/
  fixtures/
    test-data.ts          # Constants: chains, tokens, amounts, wallet
    schemas.ts            # JSON Schemas derived from OpenAPI spec
    schema-validators.ts  # Validation wrappers (AJV)
    api-helpers.ts        # Request wrappers
  quote/                  # GET /v1/quote
    quote-happy-path.spec.ts
    quote-validation.spec.ts
    quote-edge-cases.spec.ts
    quote-errors.spec.ts
  advanced-routes/        # POST /v1/advanced/routes
    advanced-routes.spec.ts
  tools/                  # GET /v1/tools
    tools.spec.ts
  tokens/                 # GET /v1/token + /v1/tokens (assignment asks for token search + price)
    tokens.spec.ts
  performance/            # Separate project - not in npm test
    performance.spec.ts
docs/
  TEST_PLAN.md
  BUG_REPORT.md
.github/workflows/
  api-tests.yml           # Functional tests (PRs + manual)
  performance-tests.yml   # Performance tests (manual only)
```
Schema-first validation. Every 200 response is validated against JSON Schemas derived from the official OpenAPI spec, using playwright-schema-validator (AJV).
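The idea behind the schema layer can be sketched as follows. This hand-rolled check is only a stand-in for the real AJV compilation the suite performs, and the field names are a simplified subset of the quote response:

```typescript
// Illustrative stand-in for the AJV check (the suite compiles full JSON
// Schemas derived from the OpenAPI spec; this only shows the shape of
// the assertion, with a simplified field set).
type JsonObject = Record<string, unknown>;

export function assertRequiredFields(body: JsonObject, required: string[]): void {
  const missing = required.filter((key) => !(key in body));
  if (missing.length > 0) {
    throw new Error(`Response missing required fields: ${missing.join(", ")}`);
  }
}

// A 200 response from /v1/quote must at least carry an id, the echoed
// action, and an estimate (simplified here for illustration).
const sampleQuote: JsonObject = {
  id: "0xabc",
  action: { fromChainId: 1, toChainId: 42161 },
  estimate: { fromAmount: "1000000", toAmountMin: "995000" },
};
assertRequiredFields(sampleQuote, ["id", "action", "estimate"]);
```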
Rate limit-aware design. Without an API key, /quote and /advanced/routes are each limited to 75 requests per 2-hour rolling window (per the docs). The suite stays well within both budgets.
| Endpoint | Calls | Limit |
|---|---|---|
| `/quote` | 37 | 75 / 2h |
| `/advanced/routes` | 12 | 75 / 2h |
What this drove:
- retries disabled (`retries: 0`), so failed requests never silently burn quota
- validation tests consolidated into loops
- performance tests isolated in a separate Playwright project and CI workflow
- CI runs only on PRs + manual dispatch, with a concurrency group

With an API key (`LIFI_API_KEY`) the limit rises to 100 req/min, making this a non-issue.
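Those constraints might translate into `playwright.config.ts` roughly as follows (a sketch: project names and paths are illustrative, not the repo's actual config):

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // A retried request would consume more of the 75-request budget,
  // so failures surface immediately instead.
  retries: 0,
  projects: [
    // `npm test` targets only the functional project; performance is opt-in.
    { name: "api", testDir: "./tests", testIgnore: /performance/ },
    { name: "performance", testDir: "./tests/performance" },
  ],
});
```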
Three representative quote pairs. Instead of random token combinations:
- Cross-chain bridge - USDC ETH to USDC ARB (bridge only)
- Same-chain swap - USDC to ETH on Ethereum (DEX only)
- Cross-chain swap - USDC ETH to ETH ARB (bridge + DEX multi-step)
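Those three pairs might be pinned down in `fixtures/test-data.ts` along these lines (a sketch: chain ids 1 and 42161 are the canonical Ethereum/Arbitrum ids, but the actual constants and token addresses live in the repo's fixtures):

```typescript
// Sketch of the pair fixtures (symbols instead of addresses for brevity;
// 1 = Ethereum mainnet, 42161 = Arbitrum One).
export const QUOTE_PAIRS = [
  { name: "cross-chain bridge", fromChain: 1, toChain: 42161, fromToken: "USDC", toToken: "USDC" },
  { name: "same-chain swap",    fromChain: 1, toChain: 1,     fromToken: "USDC", toToken: "ETH" },
  { name: "cross-chain swap",   fromChain: 1, toChain: 42161, fromToken: "USDC", toToken: "ETH" },
] as const;

// Each pair exercises a different routing mode: bridge only,
// DEX only, or a multi-step bridge + DEX route.
```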
No hard-coded bridge names. Routing changes with market conditions (Polymer on April 1st, Mayan Swift on April 3rd for the same request). Tests validate structure and relationships, never specific bridges.
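Concretely, a step assertion might look like this (field names follow the quote response shape but are simplified; the tool name is checked for presence, never for a specific value):

```typescript
// Assert route coherence without naming bridges: every step must report
// *some* tool and positive amounts. Which bridge or DEX produced the
// step is deliberately not asserted.
interface StepLike {
  tool: string;     // e.g. "across", "mayan" - never compared by value
  fromAmount: bigint;
  toAmount: bigint;
}

export function assertStepsCoherent(steps: StepLike[]): void {
  if (steps.length === 0) throw new Error("Route has no steps");
  for (const step of steps) {
    if (step.tool.trim() === "") throw new Error("Step missing tool name");
    if (step.fromAmount <= 0n || step.toAmount <= 0n) {
      throw new Error(`Step via ${step.tool} has non-positive amounts`);
    }
  }
}
```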
BigInt for amounts. While 1M USDC (6 decimals) fits in a standard number, 18-decimal tokens (ETH, for instance) or "whale" transactions quickly exceed Number.MAX_SAFE_INTEGER (2^53 - 1, about 9.007e15), so amount assertions compare BigInt values throughout.
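A quick illustration of why (values chosen for the example):

```typescript
// 1 ETH = 10^18 wei, already past Number.MAX_SAFE_INTEGER (2^53 - 1).
const oneEthWei = 10n ** 18n;
console.log(oneEthWei > BigInt(Number.MAX_SAFE_INTEGER)); // true

// Doubles silently lose precision at this scale; BigInt stays exact.
console.log(Number("1000000000000000001") === Number("1000000000000000000")); // true (!)
console.log(BigInt("1000000000000000001") === BigInt("1000000000000000000")); // false
```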
| Category | Tests | Focus |
|---|---|---|
| Quote - Happy Path | 8 | Schema, amounts, slippage coherence, whale amounts, multi-step |
| Quote - Validation | 7 | Missing fields, bad amounts, bad slippage, fake tokens, unknown chains |
| Quote - Edge Cases | 11 | slippage=0 (intent vs pool), prefer/allow/deny bridges, FASTEST/CHEAPEST, dust |
| Quote - Errors | 3 | Wrong HTTP methods, empty error context |
| Advanced Routes | 12 | 3 pairs, multi-route, step structure, bridge filtering, input validation |
| Tools | 6 | Schema, Solana/Bitcoin/SUI bridge availability |
| Tokens | 5 | Token search by symbol/address, price accuracy, token list |
| Performance | 3 | Concurrent quotes/routes, response time |
| # | Finding | Severity |
|---|---|---|
| BUG-001 | DELETE/PUT on /quote return 1003 instead of 405 | Low |
| BUG-002 | allowBridges=none cross-chain - empty error context | Medium |
| BUG-003 | Same-chain swap executionDuration is 0 | Low |
Plus 4 observations. Details in docs/BUG_REPORT.md.
```sh
npm install

# All API tests (excludes performance)
npm test

# With API key (recommended)
LIFI_API_KEY=your-key npm test

# Performance tests only
npm run test:performance

# Specific endpoint
npm run test:quote
npm run test:routes
npm run test:tools
npm run test:tokens

# HTML report
npm run report
```

| Variable | Default | Description |
|---|---|---|
| `API_BASE_URL` | `https://li.quest` | Base URL |
| `LIFI_API_KEY` | (none) | Raises limit to 100 req/min |
Two workflows:
- `api-tests.yml` - functional tests. Triggers on PRs + manual dispatch. Skips runs when only docs change (paths-filter). A concurrency group cancels overlapping runs. Deliberately not triggered on push: two runs within 2h would blow the unauthenticated quota.
- `performance-tests.yml` - performance tests. Manual dispatch only.

Both use the `github` reporter in CI, so Playwright run-summary annotations with pass/fail counts appear directly in the workflow run.
The "performance" tests are concurrency tests - they assert that the API handles parallel requests without errors, not that it responds within specific latency targets. The only timing assertion (single quote < 15s) is a smoke test, not a performance benchmark.
Playwright is a functional testing tool, and the assignment requires using it. Promise.all validates that concurrent requests succeed, but it doesn't measure latency percentiles, throughput, or degradation under load. k6, Artillery, or Locust would be the right tools for that: they provide virtual users, RPS control, and p50/p95/p99 metrics.
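The concurrency check itself reduces to this pattern (Playwright's request fixture is replaced by a generic async call here so the sketch stands alone):

```typescript
// Fire N requests in parallel and require that every one succeeds.
// This validates concurrent handling, not latency percentiles.
export async function allSucceed(
  call: () => Promise<{ ok: boolean }>,
  parallel: number,
): Promise<boolean> {
  const responses = await Promise.all(
    Array.from({ length: parallel }, () => call()),
  );
  return responses.every((r) => r.ok);
}
```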
- [Test Plan](docs/TEST_PLAN.md) - Scope, approach, test cases, risks
- [Bug Report](docs/BUG_REPORT.md) - 3 bugs + 4 observations