Overview
The most efficient way to collect data for all buoys in a country is to use the ?country= filter on GET /buoys. Unlike calling last_readings in batches, a single country-filtered request returns complete buoy metadata and the latest reading for every active buoy.
Getting all French buoys
```shell
curl -H "Authorization: Bearer YOUR_API_KEY" \
  "https://thesurfkit.com/api/v2/buoys?country=FR"
```
Pass any ISO 3166-1 alpha-2 country code. France has approximately 25–35 active buoys at any given time, so all results fit in the default page.
Response structure:

```json
{
  "status": "success",
  "data": {
    "buoys": [
      {
        "id": 12,
        "name": "Anglet",
        "lat": 43.4832,
        "lng": -1.5586,
        "source": "Candhis",
        "source_identifier": "64002",
        "slug": "anglet",
        "last_reading_time": "2026-03-27T08:00:00Z",
        "readings_count": 142300,
        "last_reading": {
          "significant_height": 1.8,
          "maximum_height": 2.4,
          "period": 9.5,
          "direction": 285,
          "water_temperature": 14.2,
          "time": "2026-03-27T08:00:00Z"
        },
        "timezone": "Europe/Paris"
      }
    ],
    "count": 28
  },
  "meta": {
    "page": 1,
    "per_page": 500,
    "total_pages": 1,
    "timestamp": "2026-03-27T09:00:00Z"
  }
}
```
When ?country= is set, the per-page cap increases to 500 (from the default 100) since the geographic scope already constrains the result set.
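For larger scopes that might exceed even the raised cap, you still need to walk pages using the `meta.total_pages` field shown above. A minimal sketch of the aggregation loop, where `fetch_page` is a stand-in for whatever function performs the authenticated HTTP request and returns the parsed response dict:

```python
def collect_all_buoys(fetch_page):
    """Aggregate buoys across all pages.

    fetch_page(page) must return a parsed response dict with the
    data.buoys list and meta.total_pages fields documented above.
    """
    buoys, page = [], 1
    while True:
        body = fetch_page(page)
        buoys.extend(body["data"]["buoys"])
        if page >= body["meta"]["total_pages"]:
            return buoys
        page += 1
```

Keeping the HTTP call behind `fetch_page` makes the loop trivial to unit-test with canned responses.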
Building a cron job
Here’s a complete cron job pattern to collect the latest readings for all French buoys every 30 minutes:
```python
import json
from datetime import datetime, timezone

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://thesurfkit.com/api/v2"


def collect_france_buoy_readings():
    headers = {"Authorization": f"Bearer {API_KEY}"}
    response = requests.get(
        f"{BASE_URL}/buoys",
        params={"country": "FR"},
        headers=headers,
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()

    buoys = data["data"]["buoys"]
    collected_at = datetime.now(timezone.utc).isoformat()

    # Skip buoys whose last_reading is null (maintenance, transmission gaps).
    readings = [
        {
            "buoy_id": b["id"],
            "buoy_name": b["name"],
            "lat": b["lat"],
            "lng": b["lng"],
            "source": b["source"],
            "timezone": b.get("timezone"),
            "collected_at": collected_at,
            "reading": b.get("last_reading"),
        }
        for b in buoys
        if b.get("last_reading")
    ]

    print(f"Collected {len(readings)} readings from {len(buoys)} buoys")
    return readings


if __name__ == "__main__":
    readings = collect_france_buoy_readings()
    # Save to your database or message queue here
    if readings:
        print(json.dumps(readings[0], indent=2))
```
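As one concrete option for the "save to your database" step, here is a minimal SQLite sink; the table name and schema are illustrative, not part of the API:

```python
import json
import sqlite3


def save_readings(readings, conn):
    """Append collected readings to a SQLite table, one JSON payload per row."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS readings (
               buoy_id INTEGER,
               collected_at TEXT,
               payload TEXT
           )"""
    )
    conn.executemany(
        "INSERT INTO readings (buoy_id, collected_at, payload) VALUES (?, ?, ?)",
        [(r["buoy_id"], r["collected_at"], json.dumps(r)) for r in readings],
    )
    conn.commit()
```

Storing the full record as a JSON blob keeps the schema stable if the API adds fields; promote individual columns only once your queries need them.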
Cron schedule
Buoy readings are typically updated every 30 minutes, so a matching 30-minute polling interval is reasonable; more frequent calls would mostly return duplicate readings and consume your rate limit budget unnecessarily.
```shell
# crontab — run every 30 minutes
*/30 * * * * /usr/bin/python3 /path/to/collect_buoys.py >> /var/log/buoy_collector.log 2>&1
```
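If a run ever takes longer than 30 minutes (slow network, retries), overlapping invocations can double-write. A common guard is `flock(1)`, which skips the new run while the previous one still holds the lock; the lock-file path here is illustrative:

```shell
# crontab — skip this run if the previous one is still in flight
*/30 * * * * /usr/bin/flock -n /tmp/buoy_collector.lock /usr/bin/python3 /path/to/collect_buoys.py >> /var/log/buoy_collector.log 2>&1
```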
Handling missing readings
Some buoys may temporarily have no reading (e.g., maintenance, transmission gaps). The last_reading field will be null in those cases. Always guard against this:
```python
readings = [b for b in buoys if b.get("last_reading") is not None]
```
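Beyond the null guard, a buoy with a transmission gap can also keep reporting a `last_reading` that is hours old. If staleness matters for your use case, you can additionally filter on the reading's `time` field; the two-hour cutoff below is an arbitrary choice, not an API guarantee:

```python
from datetime import datetime, timedelta, timezone


def fresh_readings(buoys, max_age=timedelta(hours=2), now=None):
    """Keep buoys whose last_reading exists and is newer than max_age."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for b in buoys:
        reading = b.get("last_reading")
        if not reading:
            continue
        # Timestamps use a trailing "Z"; normalize for fromisoformat().
        ts = datetime.fromisoformat(reading["time"].replace("Z", "+00:00"))
        if now - ts <= max_age:
            kept.append(b)
    return kept
```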
Supported countries
Any ISO 3166-1 alpha-2 code works. Currently active networks include:
| Code | Country | Primary sources |
|---|---|---|
| FR | France | Candhis, Météo France |
| ES | Spain | Puertos del Estado |
| PT | Portugal | SNIRH |
| US | United States | NOAA/NDBC |
| IS | Iceland | Vegagerðin |
Use GET /api/v2/countries to get the full list of countries with active buoys.