Improving the "audiohealth" program


I've been working on a small project with some beekeepers here in the UK to monitor a set of beehives.

During my research I found your community, and the fantastic work you’ve done!

I noticed you had the OSBH ML model and I thought I’d try containerising your version, for ease of use. So I wrapped the audiohealth program into the dockeraudiohealth container image for Docker.

I added to it the possibility of sending the overall result to an MQTT topic (needs further work for robustness).
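In case it helps review, constructing the payload is roughly along the lines of the sketch below. The topic layout and field names here are purely illustrative (my own, not taken from audiohealth), and the actual publish call would go through an MQTT client library such as paho-mqtt:

```python
import json
import time

def build_mqtt_payload(state, confidence=None):
    """Build a JSON payload for the overall analysis result.

    The field names here are illustrative only; they are not
    part of audiohealth itself.
    """
    payload = {
        "state": state,                 # e.g. "active", "missing_queen", "collapsed"
        "timestamp": int(time.time()),  # unix epoch seconds
    }
    if confidence is not None:
        payload["confidence"] = confidence
    return json.dumps(payload)

# The actual publish would then use an MQTT client library, e.g. paho-mqtt:
#   client.publish("hiveeyes/hive1/audiohealth/state", build_mqtt_payload("active"))
```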

I would be interested in your feedback, whether it is of any use, and whether you give permission for this.

I would also be curious to know how successful the model has been in use.




Hi John, OSBH still advertises with the keyword “open source”, but unfortunately the OSBH software is no longer open, see:

So we gave up spending more work and time on audiohealth, because we do not have the sources for the learning algorithm.

In 2017 we did some analysis and also provided our audio samples to OSBH to feed the algorithm, but the classification result was disappointing and unstable, see:

So I’d like to say many thanks for the Docker container, but unfortunately this is a dead-end street, because we have no possibility to train the algorithm again and we do not have the algorithm at all.


Ah, that is indeed unfortunate but thanks for letting me know of your experience.



Hi Clemens

I am getting the same results; the hive I am monitoring has a small colony of bees and is of course quite inactive at this time of year.

It occurred to me that microphones (I’m using an ICS-43434) all have varying frequency response curves. So I took a few of my samples and ran them through Audacity, applying a Filter Curve EQ that inversely approximates the bias of the microphone.


I also amplified the sample across the spectrum, so the peak was 0 dB.

Where I was previously getting a collapsed result, or no response, I now get an active result.

I also tried various amplifications a few dB lower and they worked too - presumably there’s a threshold it has to pass to register.
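The amplification step is simple enough to sketch in code. The following is my own minimal illustration (not code from audiohealth), operating on float samples in [-1.0, 1.0]:

```python
def normalize_peak(samples, target_dbfs=0.0):
    """Scale samples (floats in [-1.0, 1.0]) so the peak sits at target_dbfs.

    0 dBFS means the loudest sample reaches full scale; -3 dBFS leaves
    3 dB of headroom, which is what 'sox ... norm -3' does as well.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20.0)
    gain = target_linear / peak
    return [s * gain for s in samples]
```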

Here’s the filter curve data I used.

Audacity Filter Curve EQ Preset data (import as file)
FilterCurve:f0="26.769588" f1="46.757941" f2="61.017915" f3="99.40182" f4="198.33941" f5="297.55306" f6="403.34899" f7="500.33805" FilterLength="8191" InterpolateLin="0" InterpolationMethod="B-spline" v0="-18.309456" v1="-17.621777" v2="-17.106018" v3="-14.871059" v4="-7.306591" v5="-2.492836" v6="-0.42980003" v7="0.085960388"
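In case anyone wants to re-apply the curve outside Audacity, the preset can be parsed back into (frequency, gain) pairs with a few lines of Python. This is just a convenience sketch of mine, with the points copied from the preset above:

```python
import re

# Curve points copied from the Audacity Filter Curve EQ preset above.
PRESET = (
    'FilterCurve:f0="26.769588" f1="46.757941" f2="61.017915" f3="99.40182" '
    'f4="198.33941" f5="297.55306" f6="403.34899" f7="500.33805" '
    'FilterLength="8191" InterpolateLin="0" InterpolationMethod="B-spline" '
    'v0="-18.309456" v1="-17.621777" v2="-17.106018" v3="-14.871059" '
    'v4="-7.306591" v5="-2.492836" v6="-0.42980003" v7="0.085960388"'
)

def parse_filter_curve(preset):
    """Return the EQ curve as a list of (frequency_hz, gain_db) tuples."""
    freqs = {int(i): float(v) for i, v in re.findall(r'f(\d+)="([^"]+)"', preset)}
    gains = {int(i): float(v) for i, v in re.findall(r'v(\d+)="([^"]+)"', preset)}
    return [(freqs[i], gains[i]) for i in sorted(freqs)]
```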




Dear John,

thank you for signalling interest in this program and for adding the Docker setup and the MQTT publishing feature. I will try to respond to the relevant topics appropriately.


After @einsiedlerkrebs and I started this endeavour full of enthusiasm the other day, based on work by OSBH, we got a bit disappointed, as outlined by @clemens already. However, the project and program are not considered dead, because there is a rare but regular interest in it. We worked together with @Jodaille at How to clone, install and run the "audiohealth" program ff. and @Flat at How to clone, install and run the "audiohealth" program - #44 by Flat ff., where we have been able to make the program ready for Python 3 just a few months ago.

Robust analysis, a.k.a. does the model work?

It also occurred to me that the audio files would need some appropriate preprocessing in order to expect sane results. So, thanks a stack for dedicating some time to this – it would be super sweet if we could document or even integrate the necessary steps back into the program itself. For tracking this important detail, I just created Improve robustness of analysis · Issue #8 · hiveeyes/audiohealth · GitHub.

Docker container image

We will be very happy to integrate the relevant improvements back to the mainline repository and to run automatic Docker image publishing on each release, in order to fulfill your needs and to make the installation procedure for other interested people in the community more convenient. In order to track this, I just created Publish Docker container images · Issue #6 · hiveeyes/audiohealth · GitHub.

MQTT publishing

Excellent! If you feel it is ready, we will be happy if you would submit a patch as well – for tracking it, we created Publish result of analysis to MQTT · Issue #7 · hiveeyes/audiohealth · GitHub. Please let us know if we can help with the robustness details, don’t hesitate to share an early draft.


After @Jodaille and @Flat, not many people showed a strong interest in this program, so the maintenance and improvement rate was poor in general. However, we have been able to make significant improvements on each occasion, that’s why I appreciate your efforts very much, specifically on the topic of having actual sane outcomes from the OSBH model.

I am really looking forward to converge all of your improvements into the program in order to bring it to the next level. While replies from my end might often be delayed, I will always appreciate if we can continue our discussion here, in order to improve the program together.

With kind regards,


Thanks Andreas

I’m aware that I am repeating the same path that your team have already followed with the almost certain disappointment that entails.

I will update the github issues if I make any progress.

Thanks again.



Maybe we can actually improve the system this time to a degree where it will work more reasonably, based on your findings. If you like, you can both use and contribute to the recording samples added by @clemens the other day at Sound Samples and Basic Analysis Hive with Queen vs. Queenless, for verifying your setup.

Maybe @MKO can also chime in and share a raw or curated collection of the recordings he made last season with us, for the same purpose? [1]

Thank you!

  1. At Erschließung von Saraswati für den Betrieb auf einem Industrie PC, mit Upload per SSH+rsync auf Synology NAS, we set the stage for a recording system based on Saraswati [2], thanks to the commitment of @MKO. Because this discussion is mostly in German, please use the translation feature by pressing the globe button below each post. ↩︎

  2. Developing Saraswati: A robust, multi-channel audio recording, transmission and storage system

    The idea for Saraswati is to streamline the acquisition and archival of raw audio material, which can later be used to apply any kinds of manual or automated analysis on – like an improved arecord. Nothing fancy but still useful for having a more robust solution than “just a bunch of shell files”. ↩︎


I hope the below formats correctly!

I’ve amended the docker solution as part of my testing.

  • I have removed the sox transforms
  • I have changed the model sample rate to 8K (same as my recordings)

#command = 'sox "{input}" "{output}" {remix_option} norm -3 sinc 30-3150 rate 6300'.format(input=audiofile, remix_option=remix_option)
command = 'sox "{input}" "{output}" {remix_option}'.format(input=audiofile, remix_option=remix_option)

The reason for that is I have been applying the sox transformation prior to passing the wav file through audiohealth. This allows me to rapidly change the behaviour while testing, without having to rebuild the Docker image.

What am I testing?

I checked the OSBH microphone used in the BuzzBox - it is/was an InvenSense ICS-40300 MEMS.

Comparison of Microphones


ICS-43434 (my recording hardware)

The ICS-40300 has a very flat response under 100 Hz (circa -2 dB @ 10 Hz) compared to the ICS-43434.

  1. To test if this was a significant factor, I took an audio sample (16 bits per sample, sample rate 8000 Hz).

Frequency Analysis:

Note the very flat gain up to 50 Hz.

Submitting this to audiohealth (no sox transformations) we get:

Feb 11 17:07:00 0s - 30s active ===

Feb 11 17:07:00 30s - 40s missing_queen =

Feb 11 17:07:00 40s - 70s active ===

This is actually better than the result I was getting before removing the sox transforms (usually collapsed).

  2. I then added a filter in Audacity based on the frequency response differences (approximately based on the graphs above), to bias the audio towards the response the ICS-40300 might have produced.


The resulting Audacity Filter Curve:

This produces the following:


Note the change in the gain up to 50 Hz.

Submitting this to Audiohealth we get this output:

Feb 11 17:11:58 0s - 70s active =======

  3. Sox transform:

To use this in sox I applied the following to the input file (after some trial and error):

sox input.wav output.wav -S bass 24 10 0.4s

(24 dB gain, centre frequency 10 Hz, width 0.4s)
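As a sanity check on those numbers: a 24 dB gain corresponds to a linear amplitude factor of about 15.85, so the region around 10 Hz gets boosted almost sixteenfold. A tiny helper of mine for converting back and forth:

```python
import math

def db_to_linear(db):
    """Convert a gain in dB to a linear amplitude factor."""
    return 10 ** (db / 20.0)

def linear_to_db(factor):
    """Convert a linear amplitude factor to a gain in dB."""
    return 20.0 * math.log10(factor)

# e.g. the 'bass 24 10' boost: db_to_linear(24) is roughly 15.85
```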

Producing this:


and this result from audiohealth:


Feb 11 18:07:20 0s - 70s active =======

Ideally a FIR filter would be constructed; the sox bass option is far from ideal but easily implemented as a quick test.
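For completeness, here is a rough idea of what constructing such a FIR filter could look like, using the frequency-sampling method on the EQ points from earlier. This is entirely my own sketch (pure Python, not part of audiohealth); in practice something like scipy.signal.firwin2 would be the sensible choice:

```python
import cmath
import math

# EQ points copied from the Audacity preset earlier: (frequency in Hz, gain in dB)
CURVE = [
    (26.769588, -18.309456), (46.757941, -17.621777),
    (61.017915, -17.106018), (99.40182, -14.871059),
    (198.33941, -7.306591), (297.55306, -2.492836),
    (403.34899, -0.42980003), (500.33805, 0.085960388),
]

def interp_gain_db(f, points):
    """Linearly interpolate the (Hz, dB) curve; flat extrapolation beyond the ends."""
    if f <= points[0][0]:
        return points[0][1]
    if f >= points[-1][0]:
        return points[-1][1]
    for (f0, g0), (f1, g1) in zip(points, points[1:]):
        if f0 <= f <= f1:
            return g0 + (g1 - g0) * (f - f0) / (f1 - f0)

def fir_from_curve(points, num_taps=101, fs=8000):
    """Frequency-sampling FIR design: sample the desired magnitude response on a
    DFT grid, impose linear phase, then inverse-DFT to obtain the filter taps."""
    n, half = num_taps, num_taps // 2
    H = [0j] * n
    for k in range(half + 1):
        mag = 10 ** (interp_gain_db(k * fs / n, points) / 20.0)
        H[k] = mag * cmath.exp(-2j * math.pi * k * half / n)  # linear phase: delay of `half` samples
    for k in range(half + 1, n):
        H[k] = H[n - k].conjugate()  # conjugate symmetry yields real-valued taps
    # Inverse DFT; O(n^2), which is fine for a one-off design.
    return [
        (sum(H[k] * cmath.exp(2j * math.pi * k * i / n) for k in range(n)) / n).real
        for i in range(n)
    ]
```

The taps would then be applied by convolution (or handed to the sox `fir` effect). At the DFT grid frequencies the response matches the curve exactly; between grid points it interpolates, so more taps give a tighter fit at the low end.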

Given that I currently only have access to the one hive, the above may just be a one-off result and is about as far from conclusive as is possible to be :).

Some questions:

  • Not knowing the details: is removing the sox transforms a huge mistake?

  • Changing the sample rate of the OSBH model (in the Docker image it is now 8K): is this parameter there to be changed? To my thinking, if the ML model is using spectrum imaging, then not much would change in the sub-3 kHz range?

  • Do we know which microphones were used for the recordings mentioned above? It would be interesting to see if they have any frequency-response bias.

From collapsed to active, a history! (from my Home Assistant)


You can find some samples in this thread: Sound Samples and Basic Analysis Hive with Queen vs. Queenless. I recorded them via mobile phone, I think it was a Sony Xperia Compact.

Thanks Clemens, I couldn’t find any info on which OEM microphone Sony used; looking at the frequency analysis it looks like it might be a candidate for a poor low-frequency response.


I fear it depends not only on the mic, but more on the recording software, which is mostly optimized for speech or music. And in this case I would expect that the MP3 algorithm will also delete “unnecessary” parts. But this is just a guess; I have no real knowledge of what is cancelled and how it affects bee hum.

Great, resurrection of the dead! :-)

Very interesting finding!

For a comparison see also vs.

So for which frequencies is the one better, and for which the other? Or what was the reason for the choice?


The potential conclusion I’m drawing from this (and I stress it could be completely bogus unless we can test the assumption more fully) is that reasonably accurate measurement of the sub-100 Hz frequency range is needed to get a sensible result from the model.

However my one hive test is not sufficient evidence. It’ll probably swarm in May and still say active/queenless :grimacing:


I don’t know - I chose the ICS-43434 because it happened to be an I2S MEMS microphone which works well with the ESP32 MCU I am using.

As a complete guess, the ICS-40300 was chosen for its wide linear response.

On review of the model code I see there are digital filters in the code, so the sample rate has to be consistent (6300 Hz). I have reverted the Docker build accordingly.


I have found some recordings at INMP441-Audiodaten aus einem Bienenvolk im September (INMP441 audio data from a bee colony in September). The INMP441 has since been superseded, see I2S Mikrophone - #10 by clemens, but perhaps the audio data are still valuable.

Hi Clemens

I checked the recordings and attach a LibreOffice document (if you can open it):

analysius of nmp441 recordings.odt (375.7 KB)

and a version in Word format:

analysius of nmp441 recordings.docx (295.9 KB)

Some of the charts got messed up when converted to Word format…

The INMP441 has a similar response to the ICS-43434


Thank you John, I bought some INMP441 breakouts a while ago. They are impressively cheap at a bit more than 3 EUR, while you pay 12 EUR for Pesky’s ICS-43434 breakout at Tindie.


I have been running the Docker version for a week now.

Here are some observations…

Blue = active / lilac = collapsed / green = queen missing.

When it rains, or when it is very windy, that sound is also picked up. You can see in the graphic above that the recent storm caused the status to change from active to collapsed or queen missing.

I am considering how to make the microphone more directional, or using a MEMS microphone which can be made unidirectional, like the ICS-40800 (which would then need conversion to I2S via a codec).


Dear John,

I am very much enjoying reading about your constant updates on this matter. Thank you very much for sharing your insights.

With kind regards,

P.S.: My knowledge is not very profound on this topic, but may I ask if there are any ways to filter out noise coming from rain and storms in the preprocessing step, before submitting the audio to the classification algorithm? Solving this by improving the microphone, as you suggested, is definitely the better solution; I am just asking to explore the possibility.
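To illustrate the kind of preprocessing I have in mind, in the spirit of “turn it off when it rains”: one could gate out recording segments whose broadband level is unusually high before they reach the classifier. This is only a rough idea of mine, not something audiohealth does, and the threshold would certainly need tuning per hive and microphone:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of float samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def drop_noisy_segments(samples, fs, segment_seconds=10, threshold=0.3):
    """Split the recording into fixed-length segments and keep only those
    whose RMS stays below the threshold. Rain and wind tend to be loud and
    broadband; quiet bee hum should pass through untouched."""
    seg_len = int(fs * segment_seconds)
    kept = []
    for start in range(0, len(samples), seg_len):
        segment = samples[start:start + seg_len]
        if segment and rms(segment) < threshold:
            kept.extend(segment)
    return kept
```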

My undergraduate knowledge of DSP is from over twenty years ago now, so I am not sure I have any great answers - other than turning it off when it rains or blows :slight_smile:

When the weather is fine it seems to work great which is something positive! (one hive, one week isn’t much of a sample size though).

Of course my neighbour could have had a wooden hive; they have a metal roof - it's quite shaded (trees).