Feed from luftdaten.info has data loss


The data feed from luftdaten.info is currently experiencing data loss. The last recorded measurement displayed is from 2018-03-18 at 00:26:


Check the platform

A simple “mosquitto_sub -h luftdaten.getkotori.org -t 'luftdaten/#' -v” reveals the problem:

luftdaten/testdrive/earth/43/data.json {"P1":  9.55, "P2": 8.3,  "geohash": "sx8dg9jgecqt", "location_id": 4085, "location_name": "", "sensor_id":  8084, "sensor_type": "SDS011", "time": "2018-03-24T03:08:52Z"}
luftdaten/testdrive/earth/43/data.json {"P1": 42.37, "P2": 15.1, "geohash": "u1j4dxn2x1dm", "location_id": 2097, "location_name": "", "sensor_id":  4166, "sensor_type": "SDS011", "time": "2018-03-24T03:08:52Z"}
luftdaten/testdrive/earth/43/data.json {"P1": 22.0,  "P2": 12.1, "geohash": "u1jhdty6r466", "location_id": 5368, "location_name": "", "sensor_id": 10639, "sensor_type": "SDS011", "time": "2018-03-24T04:12:03Z"}


The field "location_name" is empty. There must be something wrong with the Nominatim reverse geocoder in luftdaten-to-mqtt.

Reproduce locally

Setting up Kotori like

pip install kotori[daq_geospatial] --extra-index-url=https://packages.elmyra.de/elmyra/foss/python/ --upgrade

and then running luftdaten-to-mqtt like

luftdaten-to-mqtt \
    --mqtt-uri mqtt://localhost/luftdaten/testdrive/earth/56/data.json \
    --geohash --reverse-geocode --sensor=297 --dry-run

shows more details about the problem:

2018-03-24 05:56:57,652 [kotori.vendor.luftdaten.luftdaten_api_to_mqtt]
    ERROR  : Reverse geocoding failed: GeocoderInsufficientPrivileges:
    HTTP Error 403: Forbidden. lat=49.625, lon=9.680

2018-03-24 05:56:57,654 [kotori.vendor.luftdaten.luftdaten_api_to_mqtt]
    INFO   : Dry-run. Would publish record:
    {u'P1': 94.72,
     u'P2': 27.75,
     'geohash': 'u0yfk3by867p',
     'location_id': 133,
     'sensor_id': 297,
     'sensor_type': u'SDS011',
     'time': u'2018-03-24T04:55:05Z'}


Because the reverse geocoder fails with "HTTP Error 403: Forbidden", the "location_name" field is missing from the egress record.
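The failure mode can be sketched like this, assuming the enrichment step simply swallows geocoder exceptions; the `enrich` helper and the field names are hypothetical, not the actual Kotori code:

```python
# Hypothetical sketch of the failure mode: when the reverse geocoder
# raises (e.g. HTTP Error 403), the "location_name" key never gets
# added to the record, so it is absent from the egress message.
def enrich(record, reverse_geocode):
    try:
        record["location_name"] = reverse_geocode(record["lat"], record["lon"])
    except Exception:
        # Geocoding failed; publish the record without a location name.
        pass
    return record
```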


Nominatim expects a custom User-Agent HTTP header, otherwise it returns an HTTP 403 error. They have just started blocking requests that do not follow this policy, e.g. those still sending a default User-Agent like "Python-urllib/3.5".
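For illustration, the policy can be demonstrated without geopy, using plain urllib request objects; the URL and the agent string below are just examples:

```python
from urllib.request import Request

# Nominatim rejects requests carrying urllib's default "Python-urllib/x.y"
# User-Agent with HTTP 403; identifying the application works around it.
url = "https://nominatim.openstreetmap.org/reverse?lat=49.625&lon=9.680&format=json"

# This request would go out with the default urllib User-Agent - blocked.
blocked = Request(url)

# This one identifies the application per Nominatim's usage policy.
accepted = Request(url, headers={"User-Agent": "luftdaten-to-mqtt/1.0.0"})
```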

The problem can be fixed on any running DAQ instance by upgrading to geopy 1.12.0 like

/opt/kotori/bin/pip install geopy --upgrade

and then setting an appropriate user agent like

geolocator = Nominatim(user_agent="luftdaten-to-mqtt/1.0.0")

in luftdaten_api_to_mqtt.py, line 284, which usually gets installed at


when using the Kotori Debian package.



Systems fixed

Data acquisition on luftdaten.getkotori.org should work properly again:

$ mosquitto_sub -h luftdaten.getkotori.org -t 'luftdaten/#' -v
luftdaten/testdrive/earth/43/data.json {"P1": 20.8,         "P2": 11.3,       "geohash": "u336rnh19uc9", "location_id": 3643, "location_name": "Zimmermannstra\u00dfe, Steglitz, Berlin, DE", "sensor_id": 7203, "sensor_type": "SDS011", "time": "2018-03-24T05:38:58Z"}
luftdaten/testdrive/earth/43/data.json {"temperature": 2.0, "humidity": 93.1, "geohash": "u336rnh19uc9", "location_id": 3643, "location_name": "Zimmermannstra\u00dfe, Steglitz, Berlin, DE", "sensor_id": 7204, "sensor_type": "DHT22", "time": "2018-03-24T05:38:58Z"}

The first readings (e.g. from Measurements for Zimmermannstraße, Steglitz, Berlin, DE) are already coming back online:



The data feed from luftdaten.info is currently experiencing data loss again. The last recorded measurement displayed is from 2018-03-30 at 15:43:

When looking under the hood, we found that there’s just no data on the MQTT bus.

$ mosquitto_sub -h luftdaten.getkotori.org -t 'luftdaten/#' -v

No output, even after waiting for 30 minutes. Bummer. LuftdatenPumpe must have problems again.

This time it’s different: https://api.luftdaten.info/ currently does not answer any requests. If anybody knows what’s going on, don’t hesitate to share it with us. Maybe it’s also related to Easterhegg ;]?

Good luck and cheers to all of you!


It looks like the API is back

==> Availability of api.luftdaten.info is OK! <==

Info:    HTTP OK: HTTP/1.1 200 OK - 390 bytes in 0.089 second response time 

When:    2018-04-01 12:02:55 +0200
Service: Availability of api.luftdaten.info (Display Name: "Availability of api.luftdaten.info")

and data is arriving again

==> Grafana datasource freshness for luftdaten.info is OK! <==

Info:    OK - Data in luftdaten_testdrive:earth_43_sensors is more recent than 1h

When:    2018-04-01 12:15:20 +0200
Service: Grafana datasource freshness for luftdaten.info (Display Name: "Grafana datasource freshness for luftdaten.info")
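A minimal sketch of such a freshness check: fetch the newest point of the series and compare its age against a staleness threshold. The InfluxQL query text and the helper are assumptions, not the actual monitoring plugin; only the one-hour threshold mirrors the alert above.

```python
from datetime import datetime, timedelta

# Assumed query for obtaining the newest point of the series.
FRESHNESS_QUERY = 'SELECT LAST(P1) FROM "luftdaten_testdrive".."earth_43_sensors"'

def is_fresh(last_point_time, now, max_age=timedelta(hours=1)):
    """Return True when the newest data point is younger than max_age."""
    return now - last_point_time <= max_age
```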

Cheers to the folks at luftdaten.info and happy Easter!

Real resurrection on Easter Sunday! Nice! ;-)

Unfortunately, it still doesn’t work:

connect to address api.luftdaten.info and port 443: Connection refused
HTTP CRITICAL - Unable to open TCP socket

Continued good luck to Stuttgart!

The data feed from luftdaten.info currently has some issues:

> SELECT * FROM luftdaten_info.autogen.earth_43_sensors WHERE time > now() - 7h LIMIT 1;
{empty result}

> SELECT * FROM luftdaten_info.autogen.earth_43_sensors WHERE time > now() - 8h LIMIT 1;

time                P0 P1    P2   durP1 durP2 geohash      humidity location_id location_name                       max_micro min_micro pressure pressure_at_sealevel ratioP1 ratioP2 samples sensor_id sensor_type temperature
----                -- --    --   ----- ----- -------      -------- ----------- -------------                       --------- --------- -------- -------------------- ------- ------- ------- --------- ----------- -----------
1527778432000000000    11.18 9.98             u0y7yx63ktxk          3082        Pfarrgasse, Großostheim, Bayern, DE                                                                           6103      SDS011

We are currently investigating whether this could be related to the work on Hardware upgrade and server migration. We also moved the "luftdaten_info" InfluxDB database from the machine "elbanco" to the machine "eltiempo".


Indeed, we messed up a configuration setting when switching the data feed back to the faster UDP submission. While renaming the database from "luftdaten_testdrive" to "luftdaten_info" yesterday, the data channel had temporarily used the slower HTTP-based data acquisition again.


We will fix the setting and restart the InfluxDB instance after replaying the delta from the new "luftdaten_testdrive" into the designated "luftdaten_info" database.

After replaying

> SELECT * INTO luftdaten_info..earth_43_sensors FROM luftdaten_testdrive..earth_43_sensors;

time written
---- -------
0    264221

the database became unresponsive, so we had to restore from a backup from two days ago.
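In hindsight, replaying roughly 264k points with a single statement puts a lot of pressure on the server. A safer approach is to chunk the `SELECT ... INTO` by time window; below is a minimal sketch generating such chunked statements, where the 6-hour window size is an arbitrary assumption:

```python
from datetime import datetime, timedelta

# Sketch: split one big "SELECT * INTO ..." replay into time-windowed
# statements so each write batch stays small. Window size is a guess.
def chunked_replay_queries(start, end, window=timedelta(hours=6)):
    queries = []
    cursor = start
    while cursor < end:
        upper = min(cursor + window, end)
        queries.append(
            "SELECT * INTO luftdaten_info..earth_43_sensors "
            "FROM luftdaten_testdrive..earth_43_sensors "
            "WHERE time >= '{}Z' AND time < '{}Z'".format(
                cursor.isoformat(), upper.isoformat()))
        cursor = upper
    return queries
```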


Data acquisition from luftdaten.info uses the InfluxDB UDP interface for data submission. To enable this data path, it has to be configured in the InfluxDB configuration file; the configuration binds a UDP port to a specific database, as you can see in this snippet:

# High-volume data feed for luftdaten.info
[[udp]]

  enabled = true

  # UDP port we are listening to
  bind-address = ":4445"

  # Name of the database that will be written to
  database = "luftdaten_info"

  # Will flush after buffering this many points
  batch-size = 5000

  # Will flush each N seconds if batch-size is not reached
  batch-timeout = "30s"

  # Number of batches that may be pending in memory
  batch-pending = 100

  # UDP read buffer size: 8 MB (8*1024*1024)
  read-buffer = 8388608

Here, the "database" setting was simply wrong: it still contained the old database = "luftdaten_testdrive" while we had already migrated to "luftdaten_info".
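A quick smoke test for this data path is to hand-craft a line-protocol datagram and send it to the configured port; whatever arrives there lands in the database the listener is bound to. The helper and the sample values below are illustrative assumptions:

```python
import socket

# Build a minimal InfluxDB line-protocol record: measurement,tags fields.
def make_line(measurement, tags, fields):
    tag_str = ",".join("{}={}".format(k, v) for k, v in sorted(tags.items()))
    field_str = ",".join("{}={}".format(k, v) for k, v in sorted(fields.items()))
    return "{},{} {}".format(measurement, tag_str, field_str)

line = make_line("earth_43_sensors", {"sensor_id": "7203"}, {"P1": 20.8, "P2": 11.3})

# Fire-and-forget datagram to the UDP listener configured above; it gets
# written to whichever database the ":4445" listener is bound to.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(line.encode("utf-8"), ("localhost", 4445))
sock.close()
```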
