Audio acquisition and analysis with ARM Cortex-M0 and I2S microphone



Ok, thanks.

When looking at Basic audio recorder for ARM Cortex-M0 and ICS43432 I2S microphone · GitHub again: Why exactly are you calling I2S.end(); there again?


You probably explained it already at I2S.end() hangs · Issue #386 · arduino/ArduinoCore-samd · GitHub :

I want to begin and end I2S within a function, rather than beginning it once and never ending it.

But still: Why actually?


Because I want to do an audio measurement, say, every half hour. In between I’d like to switch off
the I2S/microphone for two reasons:

  • In earlier tests I noticed issues with LMIC (mostly when expecting to receive a join accept message from the TTN gateway). Though I haven’t verified this in any depth, it occurred to me that I2S might mess up the timing in LMIC.
  • Power saving during the time in which the microphone is not used.
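In code, the intended pattern is roughly the following. This is only a sketch: the StubI2S class below is a stand-in I made up so the snippet is self-contained off-target; on the Feather the real Arduino I2S object would take its place.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in for the Arduino I2S object, just to make the snippet
// self-contained; on the board this would be I2S.begin()/read()/end().
struct StubI2S {
    bool running = false;
    bool begin() { running = true; return true; }
    int32_t read() { return running ? 42 : 0; }  // dummy sample value
    void end() { running = false; }
};

// One self-contained measurement: bring the peripheral up, take the
// samples, and shut it down again so it is off between measurements.
std::vector<int32_t> measureOnce(StubI2S &i2s, std::size_t nSamples) {
    std::vector<int32_t> samples;
    if (!i2s.begin()) return samples;  // bail out if init fails
    for (std::size_t i = 0; i < nSamples; ++i)
        samples.push_back(i2s.read());
    i2s.end();  // microphone/peripheral stays off until the next cycle
    return samples;
}
```

On the actual node, a function like this would be invoked from loop() on the half-hour schedule, leaving I2S disabled the rest of the time.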

There are probably other ways of realising the same … I guess …


I see your point. Thanks again.


Also, when reconsidering the code at lines 44-47:

if (!I2S.begin(I2S_PHILIPS_MODE, SAMPLE_RATE, 32)) {
    Serial.println("Failed to initialize I2S!");
    while (1); // do nothing
}
This busy loop also might be the reason for the stalls, right? You might also miss the error message entirely because you are not calling Serial.flush() there; Serial.flush() pauses the program until the transmit buffer has actually been written out to the UART. At least, this is what we learned painfully when attempting to debug under similar conditions.
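To illustrate the point, here is a minimal sketch of that failure path, with plain stdio standing in for Serial (my assumption being that the diagnostic must be flushed before the program halts; on the Arduino side that would be Serial.flush() right before the while (1) loop):

```cpp
#include <cstdio>
#include <string>

// Error path for a failed I2S init. Returns the message so a caller
// (or a test) can check it; on the MCU this would never return because
// of the halt loop.
std::string reportI2sInitFailure(bool beginOk) {
    if (!beginOk) {
        std::string msg = "Failed to initialize I2S!";
        std::fputs(msg.c_str(), stderr);
        std::fputc('\n', stderr);
        std::fflush(stderr);  // Serial.flush() equivalent: block until the
                              // buffer has really been written out
        return msg;           // on the MCU: while (1); would halt here
    }
    return "";
}
```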


I’m not sure how I can edit your Gist, so I edited the file on my GitHub: I added some Serial.println statements. Please have a look to see whether I understood your comment correctly. Running that sketch outputs the following to serial:

Before end()
After end()
Before end()

and then nothing …


Thanks for sharing insights about the runtime behavior of your code, we’ve updated the Gist Basic audio recorder for ARM Cortex-M0 and ICS43432 I2S microphone · GitHub accordingly.


Did you actually read this post (by Andrew J. Fox again), Wouter?


While I perfectly understand this when looking at the problem from an ad hoc perspective, I would like to add some thoughts here. Usually, when putting the MCU into deep sleep, all the peripherals will most probably have been shut down anyway and will have to be initialized again when resuming from hibernation.

So I believe it’s not really appropriate to cycle through I2S.begin() and I2S.end() so quickly in this manner, just to emulate something you will not have in the final version anyway, as this most probably leads to some hiccup among the bus members.

So, unless you have other objections, I would advise getting rid of this I2S.begin() / I2S.end() obstacle until we have produced a stable version.


I’ll give it a try; see if my earlier observation (it standing in the way of LMIC) still stands and if I can work around that…


In tests I did half a year ago, having I2S running would get in the way of successfully joining the TTN network. My assumption was that the I2S library messed with timing, so that LMIC’s receive window for an OTAA join accept from the gateway was slightly out of sync. This brought me to run I2S only for a brief period of time, between LoRa Tx and Rx events.

Now, with this node (same brand/type of board, different “instance”), it does join TTN even when I2S has already been initialized. In the meantime, what has changed as well is that I have a different gateway and it is closer by (so now I can join with SF7 rather than SF11). LoRa is quite sensitive to timing, so, although it works now, I do feel a bit uneasy about not understanding this point. Well, it works now … note to self to revisit this if at some point it has difficulties joining again.


Hi Wouter,

what you are telling me about the LMIC/I2S interaction actually makes absolute sense to me when looking at the code.

The point is that almost everything is timing-critical here. As LMIC introduces its own scheduling/task system, additional care has to be taken in comparison to (and on top of) the regular Arduino HAL main() / loop() lifecycle.

I just thought about isolating the I2S sensor domain first and getting that working. After that, we can dedicate ourselves to the interaction of I2S with LMIC. There is quite a lot to say about that, especially as the FFT computation in between is obviously not a cheap operation, if I’m getting this right?

If you think reading from I2S works stably now, another phone call to talk about how to bring I2S together with LMIC/TTN would be appropriate.



Just thinking out loud: we do not need I2S and LoRa in parallel. We measure and then we submit the data; “listening” in parallel with the audio acquisition is not necessary either. So we can try to keep one sleeping/disabled while the other is working, in case they come into conflict.


Hi Clemens, that’s right. We know when we schedule the next LoRa transmission. We can do the audio stuff before the scheduled LoRa transmission and there shouldn’t be an issue.

When a transmission happens, a window is opened for a few seconds afterwards to listen for downlink messages (such as those needed for MAC commands or application payloads). This window needs to be cleanly timed. I am not sure how much jitter is added by the onI2Sreceive event, but it indeed seems undesirable to have that running.
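To put a number on how tight that timing is: the LoRa symbol duration is 2^SF / BW, so the tolerance for window misalignment shrinks as the spreading factor goes down. A small helper illustrating this (standard LoRa modulation math, not code taken from the sketch):

```cpp
#include <cmath>

// LoRa symbol duration in milliseconds: T_sym = 2^SF / BW.
// At 125 kHz bandwidth: SF7 -> 1.024 ms per symbol, SF11 -> 16.384 ms,
// which is why receive-window alignment gets tighter at low SF.
double loraSymbolTimeMs(int spreadingFactor, double bandwidthHz) {
    return std::pow(2.0, spreadingFactor) / bandwidthHz * 1000.0;
}
```

A window that is off by a few milliseconds is therefore a small fraction of a symbol at SF11 but several symbols at SF7, so any jitter from an I2S interrupt matters more at the lower spreading factors.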

However, I haven’t found a public method (other than end()) to disable it or put it to sleep.


Hi Andreas … thanks; still in the process of verifying that :-)


Dear Clemens,

LoRa and LoRaWAN (via LMIC) are totally different beasts when it comes to scheduling / task lifecycles. Please take that into account. It’s actually not about parallelism (there isn’t actually any), it’s all about timing to get things right.



Thanks for letting us know. I will be happy to support you further once we are through this. Are there any things around reading from I2S I can provide better assistance with? If not: good luck figuring things out, and please don’t hesitate to contact me for even more rubber duck debugging - it really worked well last time, right?


I assume it’s really just that you are calling I2S.begin() after I2S.end(), which most probably makes things go south when done too quickly. Please get rid of this for the first tests, to achieve stable and deterministic results for this building block; those are absolutely required before assembling things into larger systems.

I recognize your motivation for doing this, but I still think it falls into the premature-optimization category with regard to deep sleep mode, which you really should approach differently than by putting I2S.begin() / I2S.end() into the measurement cycle.


Hi Andreas. I continued some tests with the Adafruit_ZeroFFT and I2S libraries. I was utterly confused for a while, but then it turned out that something was wrong with the hardware (I should perhaps improve my ESD-safe working habits), as the microphone output no longer had any relation to the sound coming into it. I had another one (an ICS43434 this time) sitting around, and with that one things made sense again.

This one doesn’t show the jumps in the spectrum seen earlier. It looks promising, though the difference is not understood. The test code is available if you’d like to have a look; I’d welcome comments!


Hi @clemens and @Andreas,

A small update: I did extensive testing with the Adafruit_ZeroI2S library and the new ICS43434 and I don’t see any of the issues noticed earlier with the Arduino I2S library:

  • The spectrum looks nice and clean, without the half, quarter, etc. jumps shown earlier in this thread.
  • The spectrum looks the same for code w/ and w/o LMIC.
  • LMIC joins the network nicely, even if I don’t disable the I2S code temporarily.

W.r.t. the latter point: on this particular hardware, the issue of not being able to join TTN with I2S active was not seen anymore, in contrast to an earlier set of hardware with a different I2S microphone and perhaps a different crystal in the MCU. So it’s hard to say whether Adafruit_ZeroI2S.h is better or worse in that respect.

One more comment: Adafruit_ZeroI2S.h doesn’t implement DMA for the SAMD21. I would imagine that for small sample sizes, recorded in an intermittent, non-streaming manner, DMA is not really needed. @Andreas, does that thinking make sense?

Finally, Adafruit_ZeroFFT.h does its magic on 16-bit data, whereas the microphone delivers 24-bit data. Compressing the range of the microphone data into 16 bits is easy enough, so it’s not really an issue. For the moment I am quite happy that it finally seems to work fine, so I won’t yet start looking into the FFT algorithms provided in arm_math.h that operate on q31_t data.
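For reference, the 24-to-16-bit compression can be as simple as dropping the 8 least significant bits. One caveat I should flag: this assumes the 24-bit sample has already been sign-extended into an int32_t; how the raw 32-bit I2S word is justified depends on the library and the microphone, so that needs checking first.

```cpp
#include <cstdint>

// Compress a sign-extended 24-bit microphone sample into the 16-bit
// range that Adafruit_ZeroFFT expects, by discarding the bottom 8 bits.
int16_t sample24to16(int32_t sample24) {
    return static_cast<int16_t>(sample24 >> 8);
}
```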

UPDATE: GitHub - wjmb/FeatherM0_BEEP is now public. Please note that it is work-in-progress code. Please comment if you feel it could be improved in one way or another!

Best regards,