Large Data Volumes

When working with large datasets, a few best practices help you optimize performance and avoid timeouts.

Increase timeouts

Large requests take longer. Make sure your HTTP client uses a sufficiently high timeout:

  • Minimum: 30 seconds
  • Recommended: 60 seconds or higher
# Example with curl (60-second timeout)
curl --max-time 60 "https://YOUR_SERVER-api.mail2many.de/api/v1/subscribers" \
  --user 'mail2many:YOUR_API_KEY' \
  -H "Content-Type: application/json" \
  -H "Accept: application/json"

Loading data (GET)

Use pagination when retrieving large datasets:

# Load data page by page instead of all at once
for page in 1 2 3 4 5; do
  curl "https://YOUR_SERVER-api.mail2many.de/api/v1/subscribers?page=$page&limit=100" \
    --user 'mail2many:YOUR_API_KEY' \
    -H "Content-Type: application/json" \
    -H "Accept: application/json"
done

Recommended page size: 50–100 records per request.
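The loop above fetches a fixed number of pages; in practice you usually page until the API returns fewer records than the page limit. A minimal sketch of that stop condition, with `fetch_page` as a stand-in for the curl call (mocked here against a pretend list of 250 subscribers):

```shell
limit=100
total=250   # pretend the account holds 250 subscribers

fetch_page() {
  # Stand-in for: curl "...?page=$1&limit=$limit" | count of returned records
  local offset=$(( ($1 - 1) * limit ))
  local remaining=$(( total - offset ))
  if (( remaining >= limit )); then
    echo "$limit"
  elif (( remaining > 0 )); then
    echo "$remaining"
  else
    echo 0
  fi
}

page=1
fetched=0
while :; do
  count=$(fetch_page "$page")
  fetched=$(( fetched + count ))
  (( count < limit )) && break   # short page: this was the last one
  page=$(( page + 1 ))
done
echo "fetched $fetched subscribers in $page pages"
```

This avoids hard-coding the page count and stops automatically when the list is exhausted.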

Creating/updating data (POST/PUT)

Split large payloads into multiple requests. Use the upsert route for batch operations.

Not recommended:

# Do not send 10,000 records in one request
curl -X POST "https://YOUR_SERVER-api.mail2many.de/api/v1/subscribers/upsert" \
  --user 'mail2many:YOUR_API_KEY' \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{"subscribers": [... 10000 items ...]}'

Also not recommended: updating 10,000 subscribers one by one.

# Not efficient: 10,000 individual API calls
for subscriber in "${subscribers[@]}"; do
  # $id: the subscriber's ID, extracted from $subscriber
  curl -X PUT "https://YOUR_SERVER-api.mail2many.de/api/v1/subscribers/$id" \
    --user 'mail2many:YOUR_API_KEY' \
    -H "Content-Type: application/json" \
    -H "Accept: application/json" \
    -d "$subscriber"
done

Recommended:

# 10 requests with 1,000 records each via upsert
for i in 1 2 3 4 5 6 7 8 9 10; do
  curl -X POST "https://YOUR_SERVER-api.mail2many.de/api/v1/subscribers/upsert" \
    --user 'mail2many:YOUR_API_KEY' \
    -H "Content-Type: application/json" \
    -H "Accept: application/json" \
    -d '{"subscribers": [... 1000 items ...]}'
done

An ideal batch size is 500–2,000 records per request.
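The chunking itself can be sketched without any API calls. `send_batch` below is a stand-in for the upsert request shown above; in real use it would slice the records and POST them as `{"subscribers": [...]}`:

```shell
batch_size=1000
total_records=10000

send_batch() {
  # Stand-in for the curl upsert call; $1 = start offset, $2 = record count
  echo "upsert records $1..$(( $1 + $2 - 1 ))"
}

sent=0
batches=0
while (( sent < total_records )); do
  count=$(( total_records - sent ))
  if (( count > batch_size )); then
    count=$batch_size
  fi
  send_batch "$sent" "$count"
  sent=$(( sent + count ))
  batches=$(( batches + 1 ))
done
echo "sent $sent records in $batches batches"
```

The same loop handles an uneven final batch (e.g., 10,500 records would yield 10 full batches plus one of 500).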

Optimize request frequency

Not every request needs to run frequently. Review your sync frequency:

  • Frequent changes: hourly or daily
  • Infrequent changes: weekly or monthly
  • One-time imports: spread over multiple days

Fewer requests = less load on your system and on the API.

Limit concurrency

In addition to request-rate limits, a concurrency limit applies: at most 10 parallel requests per API key.

  • Keep workers/threads at a fixed number (e.g., 5–10)
  • Use queueing instead of unlimited parallelism
  • On 429, apply short backoff and retry
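The backoff step in the last bullet can be sketched like this. `do_request` stands in for a curl call that reports the HTTP status code (e.g., `curl -s -o /dev/null -w '%{http_code}' ...`); here it is mocked to answer 429 twice before succeeding:

```shell
attempts=0
do_request() {
  # Mocked request: two 429 responses, then 200.
  attempts=$(( attempts + 1 ))
  if (( attempts < 3 )); then
    status=429
  else
    status=200
  fi
}

delay=1
for try in 1 2 3 4 5; do          # cap the number of retries
  do_request
  [ "$status" != "429" ] && break
  sleep "$delay"                  # short backoff before retrying
  delay=$(( delay * 2 ))          # 1s, 2s, 4s, ...
done
echo "got $status after $try attempts"
```

Capping the retry count and doubling the delay keeps a saturated client from hammering the API while it is rate-limited.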

For header details and reset behavior, see Rate Limiting.

Filtering and sorting

Use filters to load only relevant data:

# Load only subscribers created in the last 7 days
search=[{"createdAt":{"condition":">=","value":"2026-02-25 00:00:00"}}]

This reduces transfer volume and processing time.
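Because the filter value is JSON, it must be URL-encoded in the query string. One way to do that is curl's `-G` with `--data-urlencode`; this sketch only assembles and prints the command (rather than executing it) so the encoding step is easy to inspect. It assumes the endpoint accepts the filter in a `search` query parameter, as in the example above:

```shell
filter='[{"createdAt":{"condition":">=","value":"2026-02-25 00:00:00"}}]'

# -G sends a GET request and moves --data-urlencode values into the query
# string, so the JSON (spaces, quotes, brackets) is encoded safely.
cmd=(curl -G "https://YOUR_SERVER-api.mail2many.de/api/v1/subscribers"
  --data-urlencode "search=$filter"
  --user 'mail2many:YOUR_API_KEY'
  -H "Accept: application/json")

printf '%s\n' "${cmd[*]}"
```

Building the command as an array keeps the quoting intact when you later run it with `"${cmd[@]}"`.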

Best-practices summary

  1. Set timeouts: at least 30 seconds; 60 or more recommended
  2. Use pagination: 50–100 records per page
  3. Batch size: 500–2,000 records per request
  4. Frequency: no more often than necessary
  5. Filter data: request only what you need
  6. Error handling: see Fault Tolerance

See also the Pagination documentation for more details.