Back in January 2025 I wrote about influxdb-weather-ingestor, a small Kotlin application I built to capture outdoor temperature data and send it to InfluxDB. At the time it was a fairly straightforward Micronaut app that pulled data from the Meteomatics weather API, looked up coordinates via postcodes.io, and wrote the results to InfluxDB on a schedule.

Since then it’s had quite a few improvements, so I thought it was worth a follow-up post covering what’s changed.

Meteomatics dropped their free tier

The biggest catalyst for change was Meteomatics deprecating their free API accounts. This was a bit of a blow, as the free tier had been perfectly adequate for my use case. Rather than simply switching to a different provider though, I took it as an opportunity to make the weather provider pluggable — so that if it happened again, swapping providers would be trivial.

The application now defines a WeatherClient interface with a single method, and each provider has its own implementation that’s conditionally loaded based on the WEATHER_PROVIDER environment variable. Micronaut’s @Requires annotation makes this very clean — each client and its configuration are only instantiated when the matching provider is selected.
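
Micronaut wires this up declaratively, but the shape of the abstraction can be sketched in plain Kotlin. All the names below are illustrative rather than the application's actual identifiers, and the explicit `when` stands in for what `@Requires` does via dependency injection:

```kotlin
// A minimal sketch of the pluggable-provider pattern. In the real app,
// Micronaut selects the matching bean at startup; here the choice is
// made with an explicit when over the provider name.
interface WeatherClient {
    // Hypothetical signature: fetch the current temperature in °C.
    fun fetchTemperature(latitude: Double, longitude: Double): Double
}

class WeatherApiClient : WeatherClient {
    override fun fetchTemperature(latitude: Double, longitude: Double) = 11.5 // stubbed
}

class MeteomaticsClient : WeatherClient {
    override fun fetchTemperature(latitude: Double, longitude: Double) = 11.5 // stubbed
}

fun selectClient(provider: String): WeatherClient = when (provider) {
    "weatherapi.com" -> WeatherApiClient()
    "meteomatics" -> MeteomaticsClient()
    else -> error("Unknown WEATHER_PROVIDER: $provider")
}
```

With Micronaut the equivalent selection is a `@Requires` annotation on each client bean, keyed on something like the `weather.provider` property, so only the chosen implementation is ever instantiated.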

WeatherAPI.com

After looking at alternatives, I settled on WeatherAPI.com as the new default provider. Like Meteomatics used to, they offer a free tier that’s more than sufficient for periodic temperature checks. The application now defaults to WeatherAPI.com, though the Meteomatics implementation is still there if anyone has a paid account.

The Temperature measurement in InfluxDB now includes a provider tag, so you can see which API sourced each data point — handy if you ever switch providers and want to compare or filter the data.
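
On the wire, that tag is just another entry in InfluxDB's line protocol. The sketch below is illustrative only (the real class is mapped through the InfluxDB client's annotations, and the measurement and field names here are assumptions), but it shows roughly what ends up being written:

```kotlin
// Illustrative only: the application writes via the InfluxDB Kotlin
// client, but the resulting line protocol looks roughly like this.
data class Temperature(
    val provider: String,
    val postcode: String,
    val celsius: Double,
    val timestampNs: Long,
)

// Tag values containing spaces must be escaped with a backslash.
private fun escapeTag(value: String) = value.replace(" ", "\\ ")

fun Temperature.toLineProtocol(): String =
    "temperature,provider=${escapeTag(provider)},postcode=${escapeTag(postcode)} " +
        "value=$celsius $timestampNs"
```

In Flux you can then filter on the tag, e.g. `filter(fn: (r) => r.provider == "weatherapi.com")`, to compare data across providers.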

GraalVM native image

In the original post I mentioned that I’d been unable to compile the application as a GraalVM native image, due to the InfluxDB Kotlin client’s use of reflection at runtime. GraalVM’s ahead-of-time compilation needs to know about reflective access upfront, and the client wasn’t making that easy.

I’m pleased to say I’ve since cracked this — both here and in lan2rf-gateway-stats. The solution was to provide explicit reflection and proxy configuration files that register the Temperature measurement class and the various InfluxDB service interfaces. With those in place, the native image builds cleanly and the Docker image now ships a native binary rather than a JVM application.
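
For reference, a reflection configuration entry looks like the following. The class name is illustrative (not the application's actual package); such files conventionally live under src/main/resources/META-INF/native-image/:

```json
[
  {
    "name": "uk.example.weather.Temperature",
    "allDeclaredConstructors": true,
    "allDeclaredFields": true,
    "allDeclaredMethods": true
  }
]
```

Dynamic proxies for the InfluxDB service interfaces are registered along similar lines in a proxy-config.json alongside it.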

The practical benefits are noticeable: faster startup and significantly lower memory usage, both of which matter when you’re running several of these utilities in a home lab.

Reactive pipeline

The original scheduling mechanism was fairly imperative — a TaskScheduler with a cron expression would fire a Runnable at each interval. It worked, but it wasn’t particularly resilient. If the weather API call failed, the ScheduledExecutorService would silently swallow the exception, and you’d only notice the gap in your data if you happened to be looking at your Grafana dashboard at the right time.

I rewrote the scheduling as a reactive pipeline using Project Reactor. A TemperatureEmitter class now creates a Flux.interval stream that sequentially fetches weather data using concatMap (rather than flatMap, to prevent overlapping executions when the API is slow) and writes measurements to InfluxDB. Errors in either the fetch or write phase are caught with onErrorResume, logged, and the stream continues on the next tick.
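
The shape of that pipeline can be sketched as follows. This is a sketch under assumptions, not the application's actual code: `fetchTemperature()` and `writeToInflux()` are hypothetical helpers returning Monos, and the block needs reactor-core on the classpath:

```kotlin
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
import java.time.Duration

fun startSchedule(interval: Duration) {
    Flux.interval(interval)
        // concatMap waits for each tick's work to finish before starting
        // the next, so a slow API call can never overlap with the next one.
        .concatMap { _ ->
            fetchTemperature()                 // Mono<Temperature>
                .flatMap { writeToInflux(it) } // Mono<Void>
                .onErrorResume { e ->
                    // Contain the error inside this tick: report it and
                    // emit nothing, so the outer interval keeps ticking.
                    System.err.println("Temperature check failed: ${e.message}")
                    Mono.empty()
                }
        }
        .subscribe()
}
```

Handling the error inside the inner publisher matters: if it propagated out to the outer Flux, the interval itself would be cancelled rather than resuming on the next tick.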

One side effect of this change is that the scheduling configuration moved from a cron expression (CHECKS_SCHEDULE_EXPRESSION) to a simpler duration format (CHECKS_CHECK_INTERVAL), defaulting to one minute. I found the duration format much more intuitive for this kind of periodic polling:

CHECKS_CHECK_INTERVAL=30s  # Check every 30 seconds
CHECKS_CHECK_INTERVAL=2m   # Check every 2 minutes
CHECKS_CHECK_INTERVAL=1h   # Check every hour

Health check

I run several of these data collection utilities in containers, and I wanted a way for my container orchestration to know whether the application is actually doing its job, not just whether the process is alive.

The application now exposes a /health endpoint (via Micronaut Management) with a custom TemperatureScheduleHealthIndicator. This reports UP when the reactive temperature subscription is active, and DOWN if it has terminated for any reason. The response includes a subscriptionActive detail, so you can see at a glance whether data is flowing. This is particularly useful if you’re running in Kubernetes or a similar system where you want liveness and readiness probes that reflect application-level health rather than just process-level health.
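
The response shape is roughly as follows. This is illustrative; the exact JSON depends on your Micronaut management configuration and detail visibility settings:

```json
{
  "status": "UP",
  "details": {
    "temperatureSchedule": {
      "status": "UP",
      "details": {
        "subscriptionActive": true
      }
    }
  }
}
```

A Kubernetes liveness probe pointed at /health will then restart the container if the subscription ever dies, rather than leaving a live-but-idle process running.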

Postcode lookup caching

The application resolves a UK postcode to latitude/longitude coordinates on every check interval via the postcodes.io API. Since postcode-to-location mappings don’t change, this was an obvious candidate for caching. I added Caffeine caching via Micronaut’s cache abstraction, with a 24-hour TTL and a maximum cache size of 100 entries. One less external API call per interval, and one less thing that can go wrong.
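
Conceptually this is just memoisation with expiry. Here is a minimal plain-Kotlin sketch of the idea; the real application uses Micronaut's @Cacheable with Caffeine underneath, and `TtlCache` and its names are hypothetical:

```kotlin
// A toy TTL cache: entries expire after ttlMillis, and the oldest entry
// is evicted once maxSize is reached. Caffeine does all of this (and
// much more) for real; this just shows the behaviour being relied on.
class TtlCache<K, V>(private val ttlMillis: Long, private val maxSize: Int = 100) {
    private data class Entry<V>(val value: V, val storedAt: Long)
    private val entries = LinkedHashMap<K, Entry<V>>()

    fun getOrPut(key: K, now: Long = System.currentTimeMillis(), compute: () -> V): V {
        val hit = entries[key]
        if (hit != null && now - hit.storedAt < ttlMillis) return hit.value
        if (entries.size >= maxSize) entries.remove(entries.keys.first()) // evict oldest
        val value = compute()
        entries[key] = Entry(value, now)
        return value
    }
}
```

A 24-hour TTL is generous given postcode coordinates effectively never change, but it keeps the cache honest in the unlikely event one is ever reassigned.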

Error handling improvements

Beyond the reactive pipeline improvements, I fixed a few other error handling issues that had been lurking:

  • Silent exception swallowing — the original ScheduledExecutorService would silently eat exceptions thrown by scheduled tasks. This meant DNS failures or network issues in Docker networks would cause the app to stop collecting data with no indication in the logs. Now properly caught and logged at ERROR level.
  • InfluxDB connection leak — the InfluxDB client wasn’t being closed on application shutdown. Added preDestroy = "close" to the bean definition so InfluxDBClientKotlin.close() is called during graceful shutdown.
  • Unsafe response parsing — the Meteomatics client was using unchecked casts to extract temperature values from the API response. Replaced with safe navigation and descriptive error messages, and added handling for integer temperature values (not just doubles).
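
The shape of that last fix can be sketched with a hypothetical helper. The real client navigates the Meteomatics response structure; this shows only the tolerant numeric handling:

```kotlin
// The API may serialise a whole-degree reading as an integer (7) rather
// than a double (7.3); treat any Number as valid and fail loudly otherwise.
fun parseTemperature(raw: Any?): Double = when (raw) {
    is Number -> raw.toDouble()
    else -> throw IllegalArgumentException(
        "Expected a numeric temperature but got: ${raw?.let { it::class.simpleName } ?: "null"}"
    )
}
```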

CI/CD

The project now has a more mature CI/CD pipeline. Pull requests run unit and integration tests, build the native Docker image, and publish a pre-release version to Docker Hub. A comment is automatically posted on the PR with the image name, tag, and pull command, making it easy for reviewers to test. When a PR is merged to main, a versioned release is published along with a :latest tag, and another comment is posted back on the (now-closed) PR with the production image details — creating a nice audit trail from PR to released artefact.

Getting started

The application is published on Docker Hub: eddgrant/influxdb-weather-ingestor.

If you’re coming from the original version, the main things to be aware of are:

  • The default weather provider is now weatherapi.com — you’ll need a free API key from WeatherAPI.com.
  • The schedule configuration has changed from CHECKS_SCHEDULE_EXPRESSION (cron) to CHECKS_CHECK_INTERVAL (duration), defaulting to 60s.

A minimal run looks like this:
docker run --rm \
  --env CHECKS_CHECK_INTERVAL=60s \
  --env CHECKS_POSTCODE="SW1A 1AA" \
  --env WEATHER_PROVIDER=weatherapi.com \
  --env WEATHER_API_KEY="your-weatherapi-key" \
  --env INFLUXDB_ORG="my-influxdb-org" \
  --env INFLUXDB_BUCKET="weather" \
  --env INFLUXDB_TOKEN="my-influxdb-token" \
  --env INFLUXDB_URL="http://<your-influxdb-host>:8086" \
    eddgrant/influxdb-weather-ingestor:latest

Full details are in the README.

Feedback

Please leave a reaction or comment below if you’ve found this useful. I’d be particularly interested to hear from anyone who was using the Meteomatics integration and has switched to WeatherAPI.com, or if you’ve been running influxdb-weather-ingestor and have suggestions for improvements.

Cheers!

Edd

Support

If you’ve found my writing helpful and would like to show your support, I’d be truly grateful for your contribution.