Go Geocoding Tutorial: Goroutines, Bounded Concurrency, and Backoff

Production Go tutorial for geocoding: goroutine pools, semaphore-bounded concurrency, exponential backoff on 429s. Compiles and runs.

May 05, 2026

Go is the language to reach for when you want concurrency without async/await ceremony. Goroutines start at roughly 2 KB of stack, channels are typed pipes, and the standard library's net/http is good enough for production from day one. There is no event loop to learn, no await keyword to scatter through every function, and no colored-function problem. You write straight-line code, and you fan it out with go and a semaphore channel.

This is a working Go tutorial for geocoding addresses through a REST API. Every snippet was written into a .go file, run through go build, and executed against the live API on a clean Go 1.22 install before publishing. There is no SDK, no third-party HTTP library, no framework: just net/http, encoding/json, encoding/csv, and context. By the end you will have a program that streams a CSV of any size, geocodes its rows with a configurable goroutine pool, retries 429s and 5xx with jittered exponential backoff that respects Retry-After, and writes a {lat, lng, error} CSV without ever holding the input file in memory. About 80 lines of Go.

The endpoint used throughout is csv2geo.com/api/v1. The free tier is 1,000 forward/reverse requests per day plus 100 batch rows per day, no credit card required. Sign in and grab a key from /api-keys to follow along.

The endpoint

Two endpoints cover almost every real-world geocoding job. Forward turns an address string into coordinates. Reverse turns coordinates into an address. Both accept GET (single) or POST (batch).

# Forward (single)
GET https://csv2geo.com/api/v1/geocode?q=ADDRESS&country=US

# Reverse (single)
GET https://csv2geo.com/api/v1/reverse?lat=LAT&lng=LNG

# Batch forward
POST https://csv2geo.com/api/v1/geocode
Body: { "addresses": ["addr1", "addr2", ...] }

# Auth: either ?api_key=KEY query string, or
# Authorization: Bearer KEY header

The response shape for a single forward request looks like this. This is real output, not a documentation example.

{
  "query": "1600 Pennsylvania Ave NW Washington DC",
  "results": [
    {
      "formatted_address": "1600 Pennsylvania Ave NW, Washington, DC 20500-0005, United States",
      "location": { "lat": 38.89768, "lng": -77.03655 },
      "accuracy": "houseNumber",
      "accuracy_score": 1,
      "components": {
        "house_number": "1600",
        "street": "Pennsylvania Ave NW",
        "city": "Washington",
        "state": "District of Columbia",
        "postal_code": "20500-0005",
        "country": "USA"
      }
    }
  ],
  "meta": { "response_time_ms": 673, "source": "here" }
}

Two fields matter most. results[0].location is your {lat, lng}. results[0].accuracy is the match level: "houseNumber" is rooftop, "street" is a street centroid, "place" is a POI match, "postcode" is a postcode centroid. The numeric accuracy_score (0.0–1.0) gives a finer threshold; see geocoding confidence scores explained for how to pick one.

First request

net/http plus encoding/json is all you need. The first version returns just the coordinates so we can verify the wire format end to end.

// geocode.go
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
    "os"
    "time"
)

const baseURL = "https://csv2geo.com/api/v1"

type Location struct {
    Lat float64 `json:"lat"`
    Lng float64 `json:"lng"`
}

type GeocodeResult struct {
    FormattedAddress string   `json:"formatted_address"`
    Location         Location `json:"location"`
    Accuracy         string   `json:"accuracy"`
    AccuracyScore    float64  `json:"accuracy_score"`
}

type GeocodeResponse struct {
    Query   string          `json:"query"`
    Results []GeocodeResult `json:"results"`
}

var client = &http.Client{Timeout: 10 * time.Second}

func geocode(address, country string) (*Location, error) {
    q := url.Values{}
    q.Set("q", address)
    if country != "" {
        q.Set("country", country)
    }
    req, err := http.NewRequest("GET", baseURL+"/geocode?"+q.Encode(), nil)
    if err != nil {
        return nil, err
    }
    req.Header.Set("Authorization", "Bearer "+os.Getenv("CSV2GEO_KEY"))

    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    if resp.StatusCode >= 300 {
        return nil, fmt.Errorf("http %d", resp.StatusCode)
    }

    var out GeocodeResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return nil, err
    }
    if len(out.Results) == 0 {
        return nil, nil
    }
    return &out.Results[0].Location, nil
}

func main() {
    loc, err := geocode("1 Apple Park Way, Cupertino, CA", "US")
    if err != nil {
        fmt.Fprintln(os.Stderr, "error:", err)
        os.Exit(1)
    }
    fmt.Printf("%+v\n", loc)
}

Run it:

CSV2GEO_KEY=geo_live_xxx go run geocode.go
# &{Lat:37.33177 Lng:-122.03042}

A few things worth pointing out. defer resp.Body.Close() is not optional: if you forget it, the underlying TCP connection cannot be returned to the pool and you will leak file descriptors under load. Use a single package-level http.Client rather than http.Get directly; the default client has Timeout: 0, which means a hung server can block your goroutine forever. Use an Authorization: Bearer header rather than the ?api_key= query form so keys never end up in shell history or log files.

Reverse geocoding

Same shape, different parameters. Pass lat and lng, get back an address.

type ReverseResult struct {
    FormattedAddress string   `json:"formatted_address"`
    Location         Location `json:"location"`
    Accuracy         string   `json:"accuracy"`
    DistanceMeters   float64  `json:"distance_meters,omitempty"`
}

type ReverseResponse struct {
    Results []ReverseResult `json:"results"`
}

func reverse(lat, lng float64) (*ReverseResult, error) {
    q := url.Values{}
    q.Set("lat", fmt.Sprintf("%f", lat))
    q.Set("lng", fmt.Sprintf("%f", lng))

    req, err := http.NewRequest("GET", baseURL+"/reverse?"+q.Encode(), nil)
    if err != nil {
        return nil, err
    }
    req.Header.Set("Authorization", "Bearer "+os.Getenv("CSV2GEO_KEY"))

    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    if resp.StatusCode >= 300 {
        return nil, fmt.Errorf("http %d", resp.StatusCode)
    }

    var out ReverseResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return nil, err
    }
    if len(out.Results) == 0 {
        return nil, nil
    }
    return &out.Results[0], nil
}

Reverse responses include a distance_meters field: how far the matched address sits from the input coordinate. If you are reverse-geocoding GPS pings from a delivery app, anything over ~50m usually means the GPS fix was bad, not the geocoder. Below ~10m and you are looking at the rooftop. The ground-truth distance is the only honest accuracy metric for reverse geocoding; everything else lies a little.
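Those two thresholds turn into a trivial classifier. A sketch; the bucket names are my own and the 10m/50m cutoffs mirror the rules of thumb above, so tune them for your own fleet:

```go
package main

import "fmt"

// classifyFix buckets a reverse-geocode match by distance_meters.
func classifyFix(distanceMeters float64) string {
	switch {
	case distanceMeters < 10:
		return "rooftop" // coordinate lands on the matched parcel
	case distanceMeters <= 50:
		return "nearby" // plausible match with some GPS drift
	default:
		return "suspect_gps" // blame the fix, not the geocoder
	}
}

func main() {
	fmt.Println(classifyFix(3.2))  // rooftop
	fmt.Println(classifyFix(72.0)) // suspect_gps
}
```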

Bounded concurrency the Go way

The wrong way to geocode 10,000 addresses in Go is for _, a := range addrs { go geocode(a) }. That fires 10,000 unbounded goroutines, opens 10,000 TCP connections, exhausts the API rate limit on the second iteration, and exhausts your local file descriptors at the same time. Goroutines are cheap, but they are not free, and the network is never free.

The right way is bounded concurrency with a semaphore channel. A buffered channel of struct{} capped at N is the canonical Go semaphore: sending into it blocks once N goroutines are already in flight.

// pool.go
package main

import (
    "fmt"
    "sync"
)

func geocodeMany(addresses []string, concurrency int) []*Location {
    sem := make(chan struct{}, concurrency)
    out := make([]*Location, len(addresses))
    var wg sync.WaitGroup

    for i, addr := range addresses {
        wg.Add(1)
        sem <- struct{}{} // acquire
        go func(i int, addr string) {
            defer wg.Done()
            defer func() { <-sem }() // release

            loc, err := geocode(addr, "US")
            if err != nil {
                fmt.Println("err:", addr, err)
                return
            }
            out[i] = loc
        }(i, addr)
    }
    wg.Wait()
    return out
}

Three idioms in this code earn their keep. First, sem <- struct{}{} happens *before* the go keyword, so the for loop blocks at the throttle rather than scheduling thousands of pending goroutines. Second, defer func() { <-sem }() releases the slot even if the worker panics; that defer should never be skipped. Third, out[i] = loc writes to a *unique* index per goroutine, so no mutex is needed; concurrent writes to distinct elements of a slice are safe in Go. Capturing i and addr as parameters of the goroutine literal avoids the loop-variable capture bug that bit every Go developer before Go 1.22.

A useful starting rule for concurrency: take your plan's per-minute rate limit, divide by 60, and aim for roughly that many in-flight requests. Free is 100/min, so start at 4. Starter ($49, 1K/min): 16. Growth ($149, 5K/min): 64. Pro ($499, 10K/min): 128, and watch the headers. The full breakdown is in concurrency tuning for geocoding.
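The numbers above follow a pattern: take limit/60, then round down to a power of two (with a floor of 4 for the free tier). A sketch of that rule as a helper; the power-of-two rounding is my reading of the recommended values, not an API requirement:

```go
package main

import "fmt"

// poolSizeFor derives a starting worker count from a per-minute rate
// limit: the largest power of two at or below limit/60, minimum 4.
func poolSizeFor(perMinute int) int {
	n := perMinute / 60
	p := 1
	for p*2 <= n {
		p *= 2
	}
	if p < 4 {
		p = 4 // even the free tier benefits from a little overlap
	}
	return p
}

func main() {
	fmt.Println(poolSizeFor(100))   // 4   (Free)
	fmt.Println(poolSizeFor(1000))  // 16  (Starter)
	fmt.Println(poolSizeFor(5000))  // 64  (Growth)
	fmt.Println(poolSizeFor(10000)) // 128 (Pro)
}
```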

Retry, backoff, and the context pattern

Three things every production geocoder needs: (1) treat 429 and 5xx as retriable, (2) honour the Retry-After header when the server sends one, (3) cap attempts so a permanently dead key does not loop forever. Add context.Context so the caller can cancel a long-running retry chain, and you have a function safe to drop into a real service.

// retry.go
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "io"
    "math/rand"
    "net/http"
    "net/url"
    "os"
    "strconv"
    "time"
)

type GeocodeError struct {
    Status  int
    Message string
}

func (e *GeocodeError) Error() string {
    return fmt.Sprintf("geocode: status=%d msg=%s", e.Status, e.Message)
}

func geocodeWithRetry(ctx context.Context, address, country string, maxAttempts int) (*GeocodeResponse, error) {
    q := url.Values{}
    q.Set("q", address)
    if country != "" {
        q.Set("country", country)
    }
    target := baseURL + "/geocode?" + q.Encode()

    var lastErr error
    for attempt := 1; attempt <= maxAttempts; attempt++ {
        req, err := http.NewRequestWithContext(ctx, "GET", target, nil)
        if err != nil {
            return nil, err
        }
        req.Header.Set("Authorization", "Bearer "+os.Getenv("CSV2GEO_KEY"))

        resp, err := client.Do(req)
        if err != nil {
            // Network errors are retriable.
            lastErr = err
            if attempt == maxAttempts {
                return nil, fmt.Errorf("network error after %d attempts: %w", attempt, err)
            }
            if err := sleep(ctx, backoff(attempt)); err != nil {
                return nil, err
            }
            continue
        }

        if resp.StatusCode < 300 {
            defer resp.Body.Close()
            var out GeocodeResponse
            if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
                return nil, err
            }
            return &out, nil
        }

        // Non-retriable: 4xx other than 429 means bad key, bad input, etc.
        if resp.StatusCode != 429 && resp.StatusCode < 500 {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            return nil, &GeocodeError{Status: resp.StatusCode, Message: string(body)}
        }

        // Retriable: 429 or 5xx. Honour Retry-After if present.
        retryAfter := resp.Header.Get("Retry-After")
        resp.Body.Close()
        if attempt == maxAttempts {
            return nil, &GeocodeError{Status: resp.StatusCode, Message: "exhausted retries"}
        }
        delay := backoff(attempt)
        if retryAfter != "" {
            if secs, err := strconv.ParseFloat(retryAfter, 64); err == nil {
                delay = time.Duration(secs * float64(time.Second))
            }
        }
        if err := sleep(ctx, delay); err != nil {
            return nil, err
        }
    }
    return nil, fmt.Errorf("unreachable: lastErr=%v", lastErr)
}

func backoff(attempt int) time.Duration {
    // 1, 2, 4, 8, 16... seconds + up to 1s jitter.
    base := time.Duration(1<<(attempt-1)) * time.Second
    jitter := time.Duration(rand.Float64() * float64(time.Second))
    return base + jitter
}

func sleep(ctx context.Context, d time.Duration) error {
    t := time.NewTimer(d)
    defer t.Stop()
    select {
    case <-ctx.Done():
        return ctx.Err()
    case <-t.C:
        return nil
    }
}

A few notes on the design. time.NewTimer plus a select on ctx.Done() is the canonical way to implement a cancellable sleep; time.Sleep ignores the context and will keep your goroutine pinned through a deploy or a SIGTERM. The jitter in backoff matters when many workers retry simultaneously; without it, every worker wakes up at the same instant and re-DDoSes the server. See exponential backoff: when to retry, when to stop for the math on retry budgets and dead-letter queues.

Three rate-limit headers come back on every successful response, plus Retry-After on 429s. All four are worth checking in production:

| Header | Meaning |
|---|---|
| X-RateLimit-Limit | Your plan's per-minute ceiling |
| X-RateLimit-Remaining | What you have left in this window |
| X-RateLimit-Reset | Unix seconds until the window resets |
| Retry-After | Sent on 429s: seconds to wait before retrying |

If Remaining drops below 10% of Limit, slow down voluntarily: sleep, lower the semaphore, switch to batch. That is cheaper than thrashing on 429s. The deeper theory of which limiter algorithm produces these headers is in token bucket vs leaky bucket vs sliding window.

The batch endpoint

Subscription tiers from Starter ($49/mo) upward expose a batch endpoint that takes an array of addresses in a single POST. Plan caps:

| Plan | Monthly rows | Per-minute | Batch size |
|---|---|---|---|
| Free | n/a (1K/day API, 100/day batch) | 100 | 100 |
| Starter ($49) | 50,000 | 1,000 | 1,000 |
| Growth ($149) | 250,000 | 5,000 | 5,000 |
| Pro ($499) | 1,000,000 | 10,000 | 10,000 |

// batch.go
package main

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
    "time"
)

type BatchRequest struct {
    Addresses []string `json:"addresses"`
}

type BatchResponse struct {
    Results []GeocodeResponse `json:"results"`
}

var batchClient = &http.Client{Timeout: 60 * time.Second}

func batchGeocode(ctx context.Context, addresses []string) (*BatchResponse, error) {
    body, err := json.Marshal(BatchRequest{Addresses: addresses})
    if err != nil {
        return nil, err
    }

    req, err := http.NewRequestWithContext(ctx, "POST", baseURL+"/geocode", bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Authorization", "Bearer "+os.Getenv("CSV2GEO_KEY"))
    req.Header.Set("Content-Type", "application/json")

    resp, err := batchClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    if resp.StatusCode >= 300 {
        return nil, fmt.Errorf("batch http %d", resp.StatusCode)
    }

    var out BatchResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return nil, err
    }
    return &out, nil
}

Batch responses preserve input order: position N of the response always corresponds to position N of the input. No need to round-trip an id field. Notice the dedicated batchClient with a 60-second timeout: a 1,000-address batch on Growth takes 10–15 seconds end to end, far longer than the 10-second per-request client used for singles.
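Since each plan caps the batch size, a job larger than the cap has to be split into consecutive POSTs. A sketch of the chunking, which preserves order so the position-N guarantee survives across batches:

```go
package main

import "fmt"

// chunk splits addresses into batches of at most size, in order, so a
// job larger than the per-request batch cap becomes several POSTs.
func chunk(addresses []string, size int) [][]string {
	var batches [][]string
	for len(addresses) > size {
		batches = append(batches, addresses[:size])
		addresses = addresses[size:]
	}
	if len(addresses) > 0 {
		batches = append(batches, addresses) // final partial batch
	}
	return batches
}

func main() {
	b := chunk([]string{"a", "b", "c", "d", "e"}, 2)
	fmt.Println(len(b), b[2]) // 3 [e]
}
```

Feed each batch to batchGeocode sequentially, or through the same semaphore pattern as the single-request pool if your per-minute limit has room for parallel batches.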

Streaming CSV without OOM

If your input file is 100,000 rows, you do not want to read it into a [][]string and range over it. Stream it. The pattern is the worker pool: one goroutine reads rows off disk and pushes them onto a jobs channel, N worker goroutines consume the channel and write results to an output channel, and a writer goroutine drains the output channel to disk.

// stream.go
package main

import (
    "context"
    "encoding/csv"
    "fmt"
    "io"
    "os"
    "sync"
)

type job struct {
    ID      string
    Address string
}

type result struct {
    ID      string
    Address string
    Lat     float64
    Lng     float64
    Err     string
}

const concurrency = 8

func streamGeocode(ctx context.Context, inPath, outPath string) error {
    in, err := os.Open(inPath)
    if err != nil {
        return err
    }
    defer in.Close()

    out, err := os.Create(outPath)
    if err != nil {
        return err
    }
    defer out.Close()

    reader := csv.NewReader(in)
    writer := csv.NewWriter(out)
    defer writer.Flush()

    header, err := reader.Read()
    if err != nil {
        return err
    }
    idIdx, addrIdx := indexOf(header, "id"), indexOf(header, "address")
    if idIdx < 0 || addrIdx < 0 {
        return fmt.Errorf("csv must have id,address columns")
    }
    if err := writer.Write([]string{"id", "address", "lat", "lng", "error"}); err != nil {
        return err
    }

    jobs := make(chan job, concurrency*2)
    results := make(chan result, concurrency*2)

    var wg sync.WaitGroup
    for i := 0; i < concurrency; i++ {
        wg.Add(1)
        go worker(ctx, &wg, jobs, results)
    }

    // Writer goroutine drains results to disk.
    done := make(chan struct{})
    go func() {
        for r := range results {
            _ = writer.Write([]string{
                r.ID, r.Address,
                fmt.Sprintf("%f", r.Lat), fmt.Sprintf("%f", r.Lng), r.Err,
            })
        }
        close(done)
    }()

    // Producer: read CSV row by row.
    for {
        row, err := reader.Read()
        if err == io.EOF {
            break
        }
        if err != nil {
            close(jobs)
            wg.Wait()
            close(results)
            <-done
            return err
        }
        jobs <- job{ID: row[idIdx], Address: row[addrIdx]}
    }
    close(jobs)
    wg.Wait()
    close(results)
    <-done
    return nil
}

func worker(ctx context.Context, wg *sync.WaitGroup, jobs <-chan job, results chan<- result) {
    defer wg.Done()
    for j := range jobs {
        out, err := geocodeWithRetry(ctx, j.Address, "US", 5)
        if err != nil {
            results <- result{ID: j.ID, Address: j.Address, Err: err.Error()}
            continue
        }
        if len(out.Results) == 0 {
            results <- result{ID: j.ID, Address: j.Address, Err: "no_match"}
            continue
        }
        top := out.Results[0]
        if top.AccuracyScore < 0.7 {
            results <- result{ID: j.ID, Address: j.Address, Err: "low_confidence"}
            continue
        }
        results <- result{
            ID: j.ID, Address: j.Address,
            Lat: top.Location.Lat, Lng: top.Location.Lng,
        }
    }
}

func indexOf(row []string, name string) int {
    for i, c := range row {
        if c == name {
            return i
        }
    }
    return -1
}

func main() {
    if len(os.Args) < 3 {
        fmt.Fprintln(os.Stderr, "usage: stream input.csv output.csv")
        os.Exit(1)
    }
    if err := streamGeocode(context.Background(), os.Args[1], os.Args[2]); err != nil {
        fmt.Fprintln(os.Stderr, "error:", err)
        os.Exit(1)
    }
}

Three things this script does that beginners' code usually does not. First, the input is read row by row, never loaded as a whole. Second, the channel buffers are small (concurrency*2), so the producer naturally backpressures the workers: if all workers are busy, the producer blocks on the channel send and the rest of the file stays on disk. Third, errors are recorded as a column rather than crashing the whole job. That is the difference between a script that finishes a 100K-row run and a script that has to be restarted with a "where did it stop" question.

Typed responses

The structs at the top of this post are the minimal viable typing. If you want full coverage with optional fields, omitempty is your friend on the marshaling side: it keeps JSON output clean by dropping fields that hold their zero value. Decoding ignores the option entirely; absent fields simply decode to their zero values, so check for the zero explicitly where the distinction matters.

type Components struct {
    HouseNumber string `json:"house_number,omitempty"`
    Street      string `json:"street,omitempty"`
    City        string `json:"city,omitempty"`
    State       string `json:"state,omitempty"`
    PostalCode  string `json:"postal_code,omitempty"`
    Country     string `json:"country,omitempty"`
}

type Meta struct {
    ResponseTimeMs int    `json:"response_time_ms"`
    Source         string `json:"source"`
}

type FullGeocodeResult struct {
    FormattedAddress string     `json:"formatted_address"`
    Location         Location   `json:"location"`
    Accuracy         string     `json:"accuracy"`
    AccuracyScore    float64    `json:"accuracy_score"`
    Components       Components `json:"components"`
}

type FullGeocodeResponse struct {
    Query   string              `json:"query"`
    Results []FullGeocodeResult `json:"results"`
    Meta    Meta                `json:"meta"`
}

Defining Accuracy as a typed alias (type Accuracy string plus const declarations for each value) gives you a free domain check for switch statements. The compiler will not catch a typo on a string literal, but if you switch on Accuracy, an unhandled value is at least visible in the diff.
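That suggestion looks like this in practice. A sketch; the constant names and the Rooftop helper are my own, only the string values come from the API responses above:

```go
package main

import "fmt"

// Accuracy is a named string type so switch statements get a visible
// domain, even though the compiler cannot enforce exhaustiveness.
type Accuracy string

const (
	AccuracyHouseNumber Accuracy = "houseNumber"
	AccuracyStreet      Accuracy = "street"
	AccuracyPlace       Accuracy = "place"
	AccuracyPostcode    Accuracy = "postcode"
)

// Rooftop reports whether a match is precise enough for door-level use.
func (a Accuracy) Rooftop() bool {
	return a == AccuracyHouseNumber
}

func main() {
	a := Accuracy("houseNumber")
	switch a {
	case AccuracyHouseNumber, AccuracyStreet:
		fmt.Println("keep")
	case AccuracyPlace, AccuracyPostcode:
		fmt.Println("review")
	default:
		fmt.Println("unknown accuracy") // at least visible in the diff
	}
}
```

Set the struct field to `Accuracy` instead of `string` (`Accuracy Accuracy \`json:"accuracy"\``) and json decoding keeps working unchanged, since the underlying type is still string.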

Things that bite Go developers

Forgetting `defer resp.Body.Close()`. Every http.Client.Do that returns a non-nil response *must* be followed by a Close on the body, even if you never read the body. The Go runtime will not return the underlying TCP connection to the connection pool until the body is closed and drained. Skip it once and your service will leak file descriptors under load until the kernel refuses new sockets.

Using the default `http.Client`. http.Get and friends use http.DefaultClient, which has Timeout: 0. A misbehaving server can leave that goroutine blocked indefinitely. Always declare a package-level client with an explicit timeout, or use http.NewRequestWithContext with a deadline. The same goes for the p99 latency story: if you have no timeout, you have no p99, you have a tail that runs to infinity.

Unbuffered channel deadlocks. If a worker sends to results and the writer goroutine has not started, the send blocks once any buffer fills. If the producer sends to jobs and no worker is alive, that send blocks too. The concurrency*2 buffer in the streaming example is enough for forward progress; choose a buffer that lets the producer stay a step or two ahead of the workers without storing the whole input.

Naive `WaitGroup` + channel patterns. A common bug: closing the results channel before the workers finish, so the writer goroutine sees a closed channel and exits with partial output. The fix is the explicit ordering at the end of streamGeocode: close(jobs) first (workers exit when they see the closed jobs channel), then wg.Wait() to confirm all workers are done, then close(results), then <-done to wait for the writer. Get this order wrong and you will see "missing rows" bugs that only reproduce on long inputs.

Frequently Asked Questions

Do I need an SDK?

No. The standard library covers everything in this post: net/http, encoding/json, context, time. The "SDK" most teams end up writing is a thin wrapper around geocodeWithRetry, a set of typed response structs, and a worker pool helper. Maintaining ~150 lines of your own code is cheaper than tracking a third-party release cadence and a vendored breaking change every six months.

Should I treat errors as values?

Yes, that is idiomatic Go. The *GeocodeError type in the retry function carries the HTTP status as a typed field so callers can switch on it without parsing strings. Wrap network errors with fmt.Errorf("...: %w", err) so errors.Is and errors.As keep working through the call stack. Avoid panic for anything that crosses an API boundary; reserve panics for genuine programmer bugs.

net/http versus fasthttp?

Stick with net/http for geocoding clients. fasthttp is faster at the microbenchmark level but has a different API, does not support HTTP/2, and re-uses request/response objects across goroutines in ways that bite when you hand them to a worker pool. The bottleneck on a geocoding pipeline is the network round trip, not the HTTP parser β€” fasthttp will save you nanoseconds while the API takes hundreds of milliseconds.

How do I tune concurrency?

Take your plan's per-minute rate limit, divide by 60, and aim for roughly that many concurrent requests. 1,000/min / 60 ≈ 17, so a pool size of 16 is a safe default on Starter. The truth source is the X-RateLimit-Remaining header: log it on every successful response and watch for it dropping faster than 1 per request. Optimal concurrency is a curve, and the elbow is usually near (rate_limit / 60) * 1.5. Full methodology in concurrency tuning for geocoding.

How do I detect a no-match versus a successful low-confidence match?

results is an empty slice on no-match. On a low-confidence match results[0] exists but accuracy is "postcode" or "place" rather than "houseNumber" or "street", and accuracy_score is well below 1.0. A reasonable default threshold is 0.7; below that, treat as no-match. Stricter pipelines (insurance risk scoring, healthcare patient mapping) use 0.95 and require accuracy == "houseNumber". The full picture is in geocoding confidence scores explained.

Does this work for non-US addresses?

Yes. Pass the right ISO alpha-2 code in the country parameter: DE for Germany, GB for the UK, BR for Brazil, JP for Japan. Coverage spans 39 countries today, including the largest by address count: USA (121M addresses), Brazil (90M), Mexico (30M), France (26M), Italy (26M), and the rest. The API page lists per-country counts.

Should I use single requests or the batch endpoint?

Batch when you have 100 or more addresses to do at once and your plan allows it (Starter+). One POST is cheaper for both sides than 100 GETs, and the latency saving is roughly avg_per_request_ms * count / concurrency. Singles are simpler for streaming workloads where addresses arrive over time, and singles are the only option on the free tier. The full tradeoff analysis is in batch vs realtime geocoding.

Where to go from here

The full reference for the API is at csv2geo.com/api. If you would rather work in a different language, the Node.js geocoding tutorial follows the same structure with fetch and p-limit, and the Python geocoding tutorial covers httpx and asyncio. For deeper dives on the patterns this post touches: rate limiting algorithms compared, concurrency tuning for geocoding, and why p99 latency matters more than the average.

If you find an edge case, hit a response shape this post does not cover, or want feedback on a Go pipeline you are building, the contact form on the site reaches a person who reads it. Bug reports with curl (or go run) reproductions get fixed quickly.

I.A. / CSV2GEO Creator


Ready to geocode your addresses?

Use our batch geocoding tool to convert thousands of addresses to coordinates in minutes. Start with 100 free addresses.

Try Batch Geocoding Free →