Batch Posting
Submit up to 1,000 telemetry points in a single request for maximum throughput.
POST /api/v1/telemetry/batch
Submit a batch of telemetry data.
Request
```http
POST /api/v1/telemetry/batch
Authorization: Bearer <token>
Content-Type: application/json
```

Parameters
The request body is an array of telemetry objects (max 1,000 items).
Each object has the same schema as the single telemetry endpoint.
Example Request
```bash
curl -X POST https://api.constellation-io.com/api/v1/telemetry/batch \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '[
    {
      "timestamp": "2026-01-16T20:00:00Z",
      "node_id": "sat-001",
      "node_type": "satellite",
      "snr_db": 25.5,
      "latency_ms": 12.3,
      "throughput_gbps": 45.6
    },
    {
      "timestamp": "2026-01-16T20:00:00Z",
      "node_id": "sat-002",
      "node_type": "satellite",
      "snr_db": 28.2,
      "latency_ms": 10.1,
      "throughput_gbps": 52.3
    },
    {
      "timestamp": "2026-01-16T20:00:00Z",
      "node_id": "gs-001",
      "node_type": "ground_station",
      "snr_db": 32.1,
      "latency_ms": 5.2,
      "throughput_gbps": 100.0
    }
  ]'
```

Response
{ "success": true, "count": 3, "received_at": "2026-01-16T20:00:00.123Z"}Response Fields
| Name | Type | Description |
|---|---|---|
| success | boolean | Whether the request succeeded |
| count | number | Number of telemetry points processed |
| received_at | string | Server timestamp |
Performance
Batch posting provides significant performance improvements:
| Metric | Single POST | Batch POST |
|---|---|---|
| Throughput | ~1,000 msg/sec | ~50,000 msg/sec |
| Latency (P99) | ~50ms | ~150ms |
| API calls | 1 per message | 1 per 1,000 messages |
How It Works
- Redis Pipeline: Batch operations use Redis pipelining for atomic writes
- Throttled Graph Updates: Graph engine updates throttled to every 15 seconds
- Background Persistence: Database writes handled by background workers
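The throttling step above can be sketched in a few lines. This is an illustrative client-side model only, assuming nothing about the service's internals: the class name, the injectable clock, and the `maybe_update` method are all made up for the example; only the 15-second interval comes from the list above.

```python
import time


class ThrottledUpdater:
    """Run an expensive update at most once per interval; extra calls are skipped.

    Hypothetical sketch of the "throttled graph updates" idea -- the real
    graph engine is server-side and its implementation is not documented here.
    """

    def __init__(self, interval_s=15.0, clock=time.monotonic):
        self.interval_s = interval_s
        self.clock = clock  # injectable for deterministic testing
        self._last_run = None

    def maybe_update(self, update_fn):
        """Call update_fn only if the interval has elapsed; report whether it ran."""
        now = self.clock()
        if self._last_run is None or now - self._last_run >= self.interval_s:
            self._last_run = now
            update_fn()
            return True
        return False
```

With a 15-second interval, a burst of batch posts triggers at most one graph rebuild per window, which is what keeps batch ingestion cheap relative to per-message processing.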
Limits
| Constraint | Limit |
|---|---|
| Max batch size | 1,000 items |
| Max request body | 10 MB |
| Rate limit | 100 requests/minute |
Batches exceeding 1,000 items are automatically truncated.
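Because oversized batches are truncated rather than rejected, it is safer to chunk on the client before posting. A minimal sketch; `post_batch` is a hypothetical helper standing in for the POST call shown earlier:

```python
def chunk(items, size=1000):
    """Yield successive slices of at most `size` items.

    size=1000 matches the documented max batch size, so no slice
    is ever truncated server-side.
    """
    for i in range(0, len(items), size):
        yield items[i:i + size]


# Usage (post_batch is a hypothetical helper that POSTs one batch):
# for batch in chunk(telemetry_points):
#     post_batch(batch)
```

Note that the rate limit still applies per request: at 100 requests/minute and 1,000 items per batch, peak ingestion tops out at 100,000 points per minute per client.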
CLI Example
Post from a JSON file containing an array:
```bash
constellation telemetry post -f batch.json --batch
```

Example batch.json:
[ {"node_id": "sat-001", "node_type": "satellite", "snr_db": 25.5, "timestamp": "2026-01-16T20:00:00Z"}, {"node_id": "sat-002", "node_type": "satellite", "snr_db": 28.2, "timestamp": "2026-01-16T20:00:00Z"}, {"node_id": "sat-003", "node_type": "satellite", "snr_db": 22.1, "timestamp": "2026-01-16T20:00:00Z"}]Or pipe from stdin:
```bash
cat batch.json | constellation telemetry post --stdin --batch
```

Python Example
```python
import requests

telemetry_batch = [
    {
        "timestamp": "2026-01-16T20:00:00Z",
        "node_id": f"sat-{i:03d}",
        "node_type": "satellite",
        "snr_db": 25.0 + (i * 0.1),
        "latency_ms": 10.0 + (i * 0.5),
        "throughput_gbps": 50.0,
    }
    for i in range(100)
]

response = requests.post(
    "https://api.constellation-io.com/api/v1/telemetry/batch",
    headers={"Authorization": f"Bearer {token}"},
    json=telemetry_batch,
)

print(f"Processed {response.json()['count']} telemetry points")
```

Best Practices
1. Batch by Time Window
Group telemetry points by timestamp to maintain data consistency:
```python
from collections import defaultdict

# Group by 1-second windows
batches = defaultdict(list)
for point in telemetry_points:
    window = point["timestamp"][:19]  # Truncate to seconds
    batches[window].append(point)

for window, batch in batches.items():
    post_batch(batch)
```

2. Implement Retry Logic
```python
import time

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5)
session.mount("https://", HTTPAdapter(max_retries=retries))
```

3. Use Compression for Large Batches
```bash
curl -X POST https://api.constellation-io.com/api/v1/telemetry/batch \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -H "Content-Encoding: gzip" \
  --data-binary @batch.json.gz
```

4. Monitor Throughput
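The same compression step can be done in Python with the standard library. This is a sketch of preparing the body only; the endpoint's acceptance of gzip bodies is as shown in the curl example, and the commented-out POST mirrors the earlier `requests` snippet:

```python
import gzip
import json


def compress_batch(batch):
    """Serialize a telemetry batch to gzip-compressed JSON bytes."""
    return gzip.compress(json.dumps(batch).encode("utf-8"))


# Usage with requests (token and telemetry_batch as in the earlier example):
# requests.post(
#     "https://api.constellation-io.com/api/v1/telemetry/batch",
#     headers={
#         "Authorization": f"Bearer {token}",
#         "Content-Type": "application/json",
#         "Content-Encoding": "gzip",
#     },
#     data=compress_batch(telemetry_batch),
# )
```

Telemetry JSON is highly repetitive (same keys in every object), so gzip typically shrinks large batches substantially and helps stay under the 10 MB body limit.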
Check your ingestion rate with the benchmark endpoint:
```bash
curl https://api.constellation-io.com/api/v1/benchmark/results
```

Error Handling
Partial Failures
The batch endpoint is atomic: either all items succeed or none do.
If validation fails for any item, the entire batch is rejected:
{ "success": false, "error": { "code": "VALIDATION_ERROR", "message": "Invalid node_type at index 5", "field": "batch[5].node_type" }}Recommended Error Handling
```python
try:
    response = post_batch(batch)
    response.raise_for_status()
except requests.HTTPError as e:
    if e.response.status_code == 422:
        # Validation error - check individual items
        error = e.response.json()["error"]
        print(f"Validation failed: {error['message']}")
    elif e.response.status_code == 429:
        # Rate limited - back off and retry
        time.sleep(60)
        post_batch(batch)
    else:
        raise
```