Batch Posting

Submit up to 1,000 telemetry points in a single request for maximum throughput.

POST /api/v1/telemetry/batch

Submit a batch of telemetry data.

Request

POST /api/v1/telemetry/batch
Authorization: Bearer <token>
Content-Type: application/json

Parameters

The request body is an array of telemetry objects (max 1,000 items).

Each object has the same schema as the single telemetry endpoint.

Example Request

curl -X POST https://api.constellation-io.com/api/v1/telemetry/batch \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '[
    {
      "timestamp": "2026-01-16T20:00:00Z",
      "node_id": "sat-001",
      "node_type": "satellite",
      "snr_db": 25.5,
      "latency_ms": 12.3,
      "throughput_gbps": 45.6
    },
    {
      "timestamp": "2026-01-16T20:00:00Z",
      "node_id": "sat-002",
      "node_type": "satellite",
      "snr_db": 28.2,
      "latency_ms": 10.1,
      "throughput_gbps": 52.3
    },
    {
      "timestamp": "2026-01-16T20:00:00Z",
      "node_id": "gs-001",
      "node_type": "ground_station",
      "snr_db": 32.1,
      "latency_ms": 5.2,
      "throughput_gbps": 100.0
    }
  ]'

Response

{
  "success": true,
  "count": 3,
  "received_at": "2026-01-16T20:00:00.123Z"
}

Response Fields

Name         Type     Description
success      boolean  Whether the request succeeded
count        number   Number of telemetry points processed
received_at  string   Server timestamp

Performance

Batch posting provides significant performance improvements:

Metric         Single POST      Batch POST
Throughput     ~1,000 msg/sec   ~50,000 msg/sec
Latency (P99)  ~50 ms           ~150 ms
API calls      1 per message    1 per 1,000 messages

How It Works

  1. Redis Pipeline: Batch operations use Redis pipelining so all writes in a batch are applied in a single round trip
  2. Throttled Graph Updates: Graph engine updates are throttled to at most once every 15 seconds
  3. Background Persistence: Database writes are handled by background workers, off the request path
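The throttling in step 2 can be illustrated with a small sketch. This is not the actual graph engine; `ThrottledUpdater` and its `notify` method are hypothetical names showing how per-batch notifications get coalesced into at most one recompute per interval:

```python
import time

class ThrottledUpdater:
    """Illustrative sketch: coalesce per-batch notifications so a
    recompute runs at most once per interval (15 s in the docs)."""

    def __init__(self, interval_s=15.0, clock=time.monotonic):
        self.interval_s = interval_s
        self.clock = clock          # injectable clock, handy for testing
        self._last = -float("inf")  # time of the last recompute
        self.pending = 0            # batches seen since the last recompute

    def notify(self):
        """Called once per ingested batch; returns True only when a
        full graph recompute should actually run."""
        self.pending += 1
        now = self.clock()
        if now - self._last >= self.interval_s:
            self._last = now
            self.pending = 0
            return True
        return False
```

With a 15-second interval, a burst of batches triggers one recompute immediately and then suppresses further recomputes until the window elapses.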

Limits

Constraint        Limit
Max batch size    1,000 items
Max request body  10 MB
Rate limit        100 requests/minute

Batches exceeding 1,000 items are automatically truncated.
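To avoid silent truncation, split large datasets client-side before posting. A minimal sketch (`post_batch` here stands in for whatever helper you use to call the endpoint):

```python
def chunked(items, size=1000):
    """Yield successive slices of at most `size` items so that no
    single request exceeds the documented 1,000-item batch limit."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Usage (post_batch is a hypothetical helper):
# for chunk in chunked(telemetry_points):
#     post_batch(chunk)
```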

CLI Example

Post from a JSON file containing an array:

constellation telemetry post -f batch.json --batch

Example batch.json:

[
  {"node_id": "sat-001", "node_type": "satellite", "snr_db": 25.5, "timestamp": "2026-01-16T20:00:00Z"},
  {"node_id": "sat-002", "node_type": "satellite", "snr_db": 28.2, "timestamp": "2026-01-16T20:00:00Z"},
  {"node_id": "sat-003", "node_type": "satellite", "snr_db": 22.1, "timestamp": "2026-01-16T20:00:00Z"}
]

Or pipe from stdin:

cat batch.json | constellation telemetry post --stdin --batch

Python Example

import requests

telemetry_batch = [
    {
        "timestamp": "2026-01-16T20:00:00Z",
        "node_id": f"sat-{i:03d}",
        "node_type": "satellite",
        "snr_db": 25.0 + (i * 0.1),
        "latency_ms": 10.0 + (i * 0.5),
        "throughput_gbps": 50.0,
    }
    for i in range(100)
]

response = requests.post(
    "https://api.constellation-io.com/api/v1/telemetry/batch",
    headers={"Authorization": f"Bearer {token}"},
    json=telemetry_batch,
)
print(f"Processed {response.json()['count']} telemetry points")

Best Practices

1. Batch by Time Window

Group telemetry points by timestamp to maintain data consistency:

from collections import defaultdict

# Group by 1-second windows
batches = defaultdict(list)
for point in telemetry_points:
    window = point["timestamp"][:19]  # truncate to whole seconds
    batches[window].append(point)

for window, batch in batches.items():
    post_batch(batch)

2. Implement Retry Logic

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# status_forcelist makes the adapter also retry on these HTTP statuses,
# not just on connection errors
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[429, 500, 502, 503])
session.mount("https://", HTTPAdapter(max_retries=retries))

3. Use Compression for Large Batches

curl -X POST https://api.constellation-io.com/api/v1/telemetry/batch \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -H "Content-Encoding: gzip" \
  --data-binary @batch.json.gz
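The same compressed upload can be prepared from Python with the standard library. This sketch only builds the request body; it assumes the server accepts `Content-Encoding: gzip` as the curl example above shows:

```python
import gzip
import json

def compress_batch(batch):
    """Serialize a telemetry batch to JSON and gzip it, producing a
    body suitable for a Content-Encoding: gzip upload."""
    return gzip.compress(json.dumps(batch).encode("utf-8"))

# Usage sketch (url, token, telemetry_batch as in the earlier example):
# requests.post(
#     url,
#     data=compress_batch(telemetry_batch),
#     headers={"Authorization": f"Bearer {token}",
#              "Content-Type": "application/json",
#              "Content-Encoding": "gzip"},
# )
```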

4. Monitor Throughput

Check your ingestion rate with the benchmark endpoint:

curl https://api.constellation-io.com/api/v1/benchmark/results

Error Handling

Partial Failures

The batch endpoint is atomic: either all items succeed or none do.

If validation fails for any item, the entire batch is rejected:

{
  "success": false,
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid node_type at index 5",
    "field": "batch[5].node_type"
  }
}
Handle these errors in client code:

import time

import requests

try:
    response = post_batch(batch)
    response.raise_for_status()
except requests.HTTPError as e:
    if e.response.status_code == 422:
        # Validation error - inspect the reported item
        error = e.response.json()["error"]
        print(f"Validation failed: {error['message']}")
    elif e.response.status_code == 429:
        # Rate limited - back off and retry
        time.sleep(60)
        post_batch(batch)
    else:
        raise
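The single retry above can be generalized to bounded exponential backoff. A minimal sketch, independent of any HTTP library; `fn` is any callable that raises on failure (for example a wrapper around the hypothetical `post_batch` that raises on 429):

```python
import time

def with_backoff(fn, *, max_attempts=4, base_delay=1.0,
                 retry_on=(Exception,), sleep=time.sleep):
    """Call `fn`, retrying on the given exceptions with exponential
    backoff: delays of base_delay, 2x, 4x, ... between attempts.
    Re-raises the last error once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` keeps the helper testable; in production the default `time.sleep` applies the real delays.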