# Streaming & Response Formats

How Plaza streams responses without buffering, and the three response formats (JSON, GeoJSON, CSV).
## How Plaza streams responses

Plaza streams rows from the database cursor directly into the HTTP response via chunked transfer encoding. The full result set is never buffered in server memory. This applies to all response formats (GeoJSON, JSON, CSV).
For small responses — geocoding, routing, isochrones — this is invisible. Your HTTP client buffers the whole thing and you parse it as normal. For large result sets (thousands of features from `/features`, `/search`, or `/datasets/{id}/features`), the chunked encoding lets you process results as they arrive instead of waiting for the entire response to download.
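The mechanics can be demonstrated without a server. The sketch below is self-contained: a generator stands in for the HTTP layer, the response body is hypothetical, and the standard library's `json.JSONDecoder.raw_decode` pulls each complete feature out of the buffer as chunks land. The dedicated streaming parsers recommended under "Streaming by language" do the same job more robustly.

```python
import json

def chunks(body: bytes, size: int = 16):
    """Mimic chunked transfer encoding: deliver the body in pieces."""
    for i in range(0, len(body), size):
        yield body[i : i + size]

# A hypothetical three-feature response body.
body = b'{"type":"FeatureCollection","features":[{"id":1},{"id":2},{"id":3}]}'

decoder = json.JSONDecoder()
buffer = ""
in_array = False
features = []

for chunk in chunks(body):
    buffer += chunk.decode()
    if not in_array:
        bracket = buffer.find("[")
        if bracket == -1:
            continue  # still reading the envelope before the features array
        buffer = buffer[bracket + 1 :]
        in_array = True
    # Pull out every complete feature object currently buffered.
    while True:
        buffer = buffer.lstrip(",")
        try:
            obj, end = decoder.raw_decode(buffer)
        except json.JSONDecodeError:
            break  # partial object: wait for the next chunk
        features.append(obj)
        buffer = buffer[end:]

print([f["id"] for f in features])  # [1, 2, 3]
```

Each feature becomes available as soon as its closing brace arrives, regardless of how far away the end of the response is.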
## Response formats

Endpoints that support the `format` parameter: `/features`, `/search`, `/datasets/{id}/features`.

| Value | Content-Type | Description |
|---|---|---|
| `geojson` | `application/json` | GeoJSON FeatureCollection (default). |
| `json` | `application/json` | Plain JSON array of objects. |
| `csv` | `text/csv` | CSV with header row. One row per feature. |
Endpoints that always return GeoJSON (no `format` parameter): geocoding, reverse geocoding, autocomplete, routing, isochrones.
The PlazaQL query endpoint also always returns GeoJSON — Plaza normalizes every query result to a GeoJSON FeatureCollection regardless of query complexity.
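That uniform envelope means one parsing path handles every query result. For illustration, a minimal sketch of the envelope (the feature content here is made up; the `type` and `features` keys are what the GeoJSON FeatureCollection structure guarantees):

```python
import json

# A hypothetical normalized query result: whatever the query computed,
# the envelope is always a FeatureCollection with a "features" array.
body = json.dumps({
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [-73.99, 40.73]},
            "properties": {"name": "Example Cafe"},
        }
    ],
})

doc = json.loads(body)
assert doc["type"] == "FeatureCollection"
print([f["properties"]["name"] for f in doc["features"]])  # ['Example Cafe']
```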
## When streaming matters

- Queries returning 1,000+ features
- Exporting data for offline processing
- Showing results incrementally in a UI
- Memory-constrained environments
For small result sets, just parse the response normally. You don’t need any of this.
## Streaming by language

Plaza responses are standard HTTP — use any streaming parser that works with chunked transfer encoding. Here's what we recommend for each language and format.
### JavaScript

Browser (GeoJSON / JSON) — use oboe.js. It pattern-matches nodes inside a streaming JSON response, so you get each feature the moment it's fully parsed:
```js
import oboe from "oboe";

oboe({
  url: "https://plaza.fyi/api/v1/features?within=-74.01,40.70,-73.97,40.75&tags[amenity]=cafe&format=geojson",
  headers: { "x-api-key": process.env.PLAZA_API_KEY },
})
  .node("features.*", (feature) => {
    console.log(feature.properties.name);
    return oboe.drop; // free memory after processing each feature
  })
  .done(() => console.log("done"))
  .fail((err) => console.error("stream error:", err));
```

For `format=json` (a plain array), match on `*` instead of `features.*`.
Node.js (GeoJSON / JSON) — use jsonstream-next. Compose it with the fetch response body using `stream.compose` and async-iterate:
```js
import { compose } from "node:stream";
import JSONStream from "jsonstream-next";

const response = await fetch(
  "https://plaza.fyi/api/v1/features?within=-74.01,40.70,-73.97,40.75&tags[amenity]=cafe&format=geojson",
  { headers: { "x-api-key": process.env.PLAZA_API_KEY } },
);

for await (const feature of compose(response.body, JSONStream.parse("features.*"))) {
  console.log(feature.properties.name);
}
```

For `format=json`, use `"*"` instead of `"features.*"`.
CSV (Node.js) — use csv-parser. Same pattern:
```js
import { compose } from "node:stream";
import csvParser from "csv-parser";

const response = await fetch(
  "https://plaza.fyi/api/v1/features?within=-74.01,40.70,-73.97,40.75&tags[amenity]=cafe&format=csv",
  { headers: { "x-api-key": process.env.PLAZA_API_KEY } },
);

for await (const row of compose(response.body, csvParser())) {
  console.log(row.name);
}
```

### Python

GeoJSON / JSON — use ijson. It yields objects matching a prefix path as they're parsed from the stream:
```python
import httpx
import ijson

with httpx.stream(
    "GET",
    "https://plaza.fyi/api/v1/features",
    params={"within": "-74.01,40.70,-73.97,40.75", "tags[amenity]": "cafe", "format": "geojson"},
    headers={"x-api-key": "your-key"},
) as response:
    for feature in ijson.items(response.stream, "features.item"):
        print(feature["properties"]["name"])
```

For `format=json`, use `"item"` as the prefix instead of `"features.item"`.
CSV — the standard library handles this. Wrap the streaming response in `csv.DictReader`:
```python
import csv
import httpx

with httpx.stream(
    "GET",
    "https://plaza.fyi/api/v1/features",
    params={"within": "-74.01,40.70,-73.97,40.75", "tags[amenity]": "cafe", "format": "csv"},
    headers={"x-api-key": "your-key"},
) as response:
    reader = csv.DictReader(response.iter_lines())
    for row in reader:
        print(row["name"])
```

### Go

GeoJSON / JSON — Go's standard library already streams JSON. `json.NewDecoder` reads from an `io.Reader` and decodes tokens incrementally — no extra dependency needed:
```go
decoder := json.NewDecoder(resp.Body)

// Skip to the features array
for {
	t, err := decoder.Token()
	if err != nil {
		log.Fatal(err)
	}
	if t == "features" {
		decoder.Token() // opening bracket
		break
	}
}

// Decode each Feature as it arrives
for decoder.More() {
	var feature geojson.Feature
	if err := decoder.Decode(&feature); err != nil {
		log.Fatal(err)
	}
	fmt.Println(feature.Properties["name"])
}
```

For `format=json`, skip the token-scanning — just open the bracket and decode each element.
CSV — use the standard library's `encoding/csv` package. It reads records one at a time from any `io.Reader`:
```go
reader := csv.NewReader(resp.Body)

header, _ := reader.Read() // first row is the header
nameIdx := slices.Index(header, "name")

for {
	record, err := reader.Read()
	if err != nil {
		break
	}
	fmt.Println(record[nameIdx])
}
```

### curl

Use `--no-buffer` to see results as they arrive:

```sh
curl --no-buffer \
  -H "x-api-key: $PLAZA_API_KEY" \
  "https://plaza.fyi/api/v1/features?within=-74.01,40.70,-73.97,40.75&tags[building]=yes&format=geojson"
```

## Error handling
If the connection drops mid-response, your parser will see an unexpected EOF. To handle this:

- Track the last `osm_id` you processed.
- Retry with a filter that excludes already-seen IDs, or use cursor-based pagination.
- If the same query consistently fails, add tighter spatial filters to reduce the result set.
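A sketch of the resume bookkeeping, with the network factored out. Here `fetch_features` is a hypothetical stand-in for re-issuing your streaming request with already-processed IDs excluded (via whatever filter or cursor your query uses); the flaky generator in the usage part simulates a dropped connection on the first attempt:

```python
def process_with_resume(fetch_features, handle, max_retries=3):
    """Consume a feature stream, resuming after a dropped connection.

    fetch_features(seen_ids) re-issues the streaming request excluding
    already-processed IDs; it may raise ConnectionError mid-iteration.
    """
    seen = set()
    for _attempt in range(max_retries):
        try:
            for feature in fetch_features(seen):
                handle(feature)
                seen.add(feature["osm_id"])  # track progress for resume
            return seen
        except ConnectionError:
            continue  # retry, excluding IDs we already processed
    raise ConnectionError("stream failed after retries")

# Simulated flaky stream: drops after two features on the first attempt.
data = [{"osm_id": i} for i in range(4)]
calls = {"n": 0}

def flaky(seen):
    calls["n"] += 1
    remaining = [f for f in data if f["osm_id"] not in seen]
    for i, f in enumerate(remaining):
        if calls["n"] == 1 and i == 2:
            raise ConnectionError("dropped")
        yield f

out = []
process_with_resume(flaky, out.append)
print(sorted(f["osm_id"] for f in out))  # [0, 1, 2, 3]
```

The second attempt only fetches the two features the first attempt never delivered, so nothing is processed twice.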