
The MEMS I2S Microphone in Smart Citizen Kit 2.0

Hi everyone!

As some of you might know, there will be a digital MEMS I2S microphone on the new Smart Citizen Kit 2.0. Through a series of development-insight posts, we will illustrate the process we followed for the selection, implementation and testing of this microphone in the upcoming kit.

Our final choice for the microphone was the INVENSENSE (now TDK) ICS43432: a tiny digital MEMS microphone with I2S output. There is extensive documentation on TDK’s website, and we recommend those well-written documents to anyone interested in the topic.

![Invensense ICS43432](upload://oZeZmmzCxOG8vBIoavYzsWvSh4S.png)

Image credit: Invensense ICS43432

So, to begin this series of posts, we’ll talk about the microphone itself and the I2S protocol. The MEMS microphone contains a transducer element which converts sound pressure into an electrical signal. The sound pressure reaches the transducer through a hole drilled in the package, and the transducer’s signal is sent to an ADC, which outputs either a pulse-density-modulated (PDM) signal or an I2S signal. Since the ADC is already inside the microphone, we get an all-digital audio capture path to the processor, which is less likely to pick up interference from other RF sources, such as the WiFi radio. The I2S option has the advantage of an already decimated output, and since the SAMD21 has an I2S port, we can connect the microphone directly to the microcontroller with no CODEC needed to decode the audio data. Additionally, there is a band-pass filter which removes DC and low-frequency components (at fs = 48 kHz, the filter has its -3 dB corner at 3.7 Hz) as well as high frequencies above 0.5·fs (-3 dB cutoff). Both specifications are important to consider when analysing the data and discarding unusable frequencies. The microphone’s acoustic response has to be considered as well, with the corresponding equalisation during data treatment. We will review these points in dedicated posts.
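
As a quick illustration of what “discarding unusable frequencies” can mean in practice, here is a minimal sketch (not part of the SCK firmware; the sampling rate and FFT size are just assumed values) that checks whether a given FFT bin falls inside the microphone’s band-pass corners:

```cpp
// Minimal sketch (assumed values, not SCK firmware): decide which FFT bins
// fall inside the ICS43432's band-pass response before computing levels.
const float SAMPLE_RATE = 48000.0f;               // fs, assumed here
const int   FFT_SIZE    = 512;                    // FFT length, assumed here
const float BIN_HZ      = SAMPLE_RATE / FFT_SIZE; // width of one bin in Hz

// The filter is roughly flat between the -3 dB corners at ~3.7 Hz and 0.5*fs,
// so the DC bin and anything outside that band should be discarded.
bool binInPassband(int bin) {
  float f = bin * BIN_HZ;
  return (f > 3.7f) && (f < 0.5f * SAMPLE_RATE);
}
```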

![ICS43432 Datasheet](upload://lqTgW59zHw9lf9g9sHHjs6L8GQZ.png)

Image credit: ICS43432 Datasheet - TDK Invensense

The I2S protocol (Inter-IC Sound) is a serial bus interface which consists of a bit clock line or Serial Clock (SCK), a word clock line or Word Select (WS), and a multiplexed Serial Data line (SD). The SD line carries data in two’s complement, MSB first, with a 24-bit word length for the microphone we picked. The WS line indicates which channel (left or right) is being transmitted.
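
As a small illustration (not taken from our code), this is how the signed 24-bit two’s complement sample can be recovered once the I2S peripheral hands it back; we assume here that the word arrives left-justified in a 32-bit slot, which depends on how the port is configured:

```cpp
// Hedged sketch: recover the signed 24-bit sample from a 32-bit I2S word,
// assuming the data is left-justified (MSB-aligned) in the slot.
int32_t decodeSample(int32_t raw32) {
  return raw32 >> 8;  // arithmetic shift keeps the sign of the top 24 bits
}
```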

![ICS43432 Datasheet](upload://neUHSGB6XYzCx4vz3kKHp51OC96.png)

Image credit: I2S bus specification - Philips Semiconductors

In the case of the ICS43432, there is an additional pin, L/R, which selects whether the signal is output on the left or the right channel and enables stereo configurations. When set to left, the data follows WS’s falling edge; when set to right, it follows WS’s rising edge. For the SAMD21 processor, there is a well-developed I2S library that takes care of this configuration.
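
For those who want to try it, here is a minimal sketch of the idea using the stock Arduino I2S library on a SAMD21 board (this is an illustration, not the SCK firmware; wiring and parameters are assumptions to check against your setup):

```cpp
// Minimal sketch (assumptions: Arduino Zero + ICS43432 breakout on the SAMD21
// I2S pins, stock Arduino I2S library; this is not the SCK firmware).
#include <I2S.h>

void setup() {
  Serial.begin(115200);
  // 44.1 kHz, 32-bit slots: the microphone's 24-bit sample sits in the
  // upper bits of each word.
  if (!I2S.begin(I2S_PHILIPS_MODE, 44100, 32)) {
    Serial.println("Failed to initialise I2S");
    while (1);
  }
}

void loop() {
  int32_t sample = I2S.read();   // returns 0 when no sample is available
  if (sample != 0) {
    Serial.println(sample >> 8); // sign-extended 24-bit value
  }
}
```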

To finish off this first post, we would like to highlight that the SD line of the I2S protocol is quite delicate at high frequencies and is largely affected by noise along the path the line follows. If you want to try this at home (for example with an Arduino Zero and an I2S microphone like this one), it is important not to use cables on this line and to connect the output pin directly to the board, to avoid adding extra interfaces along the SD line. One interesting way to see this is that every time the signal meets a medium change, part of it is reflected and part is transmitted, just like any other wave. This means that introducing a cable in the line causes at least three medium changes and a potential signal-quality loss much higher than with a direct connection. Apart from this point, the I2S connection is pretty straightforward, and it is reasonably easy to retrieve data from the line and start playing around with some FFT analysis… we’ll see this in another post!

So, we hope you enjoyed it. Let us know your comments and questions!


Do you have any functional Arduino code for this microphone? I’d also love to know if you have any new knowledge or experience with this device.

I’m developing a cheap noise pollution sensor (it is a public-sector project). I can read this mic (the model is Adafruit’s dev-board version) and do some FFT on the samples, and now I’m trying to apply A-weighting to them. I guess now would be a good moment to check how someone else’s code does this. :slight_smile:

My hardware is Adafruit I2S MEMS Microphone Breakout - SPH0645LM4H and Arduino MKR1000.

(I have also borrowed a Smart Citizen Kit 2.0 for a few weeks, which is how I found this post.)

Hi!

Nice to hear/read you are working on this topic! :slight_smile:

Regarding the libraries: yes, we do. We are developing two libraries on this subject:

  • The one to be integrated into the firmware: it should run on an SCK V2.0, with some modifications concerning the power of the MiCS gas analysers if you don’t have the whole under-development firmware. This one also has a tentative A-FSK communication strategy.
  • A slightly more complex library with more features: this one is intended for more customised, audio-dedicated usage where memory load and interrupts are less of a concern. For example, you can choose buffer sizes, windowing functions (Hann, Hamming, Blackman…), weightings (A, C or Z/none) and apply FIR filters instead of an FFT. The master branch should work on an Arduino Zero with an ICS43432, and the other branches are on their way. One point you should take into account is to treat the samples you receive and convert them to 18 bits instead of the 24 bits we use (see the sketch after this list).
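
Regarding that last point, a hedged sketch of the conversion (the exact alignment of the 18-bit data inside the 32-bit I2S word depends on your configuration, so verify the shift against the SPH0645 datasheet; the library internals may need a different value, as mentioned later in this thread):

```cpp
// Hedged sketch: the SPH0645 outputs 18-bit data (vs. 24-bit on the ICS43432),
// so the raw I2S word needs a different shift before any dB calculations.
// The shift below assumes the valid bits are MSB-aligned in a 32-bit slot.
int32_t to18bit(int32_t raw32) {
  return raw32 >> (32 - 18);   // keep the top 18 bits, sign-extended
}
```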

Another important point is the equalization of the microphone response, which I believe is not as bad for the one you are using as it is for the ICS43432. This means you need to correct the frequency spectrum a posteriori, since the microphone amplifies or attenuates certain frequencies and is not perfectly linear. Of course, this will depend on the accuracy you are aiming for.
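
As an illustration of what that correction looks like in code (the real correction values for the ICS43432 live in the library; the array below is just a placeholder for a measured response):

```cpp
// Illustrative sketch only: equalizing the microphone response means
// subtracting a per-bin correction, measured against a reference microphone,
// from the spectrum in dB. micResponseDb is a placeholder for measured data.
const int N_BINS = 256;            // assumed spectrum length
float micResponseDb[N_BINS];       // deviation of the mic per bin, in dB

void equalizeSpectrum(float *spectrumDb) {
  for (int i = 0; i < N_BINS; i++) {
    spectrumDb[i] -= micResponseDb[i];  // flatten what the mic amplified
  }
}
```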

Finally, the only hard point we have found so far is what we believe to be a DMA issue when managing the buffers at relatively high sampling frequencies (fs = 44.1 kHz). A more detailed description can be found in this issue, and we are still looking into it.

Hope it helped! If you need more info or help with the topic or have some info you’d like to share, do not hesitate to drop a message.

Thanks,

Óscar


Hi again and thanks for the quick response!

I picked up the second library and it runs out of the box in my setup (I compiled the sketch using the Arduino IDE, so I had to rename main.cpp to src.ino and symlink the libraries into Arduino/libraries/), but I have some difficulties interpreting the values printed to the serial console. I put the microphone inside noise-cancelling headphones and played pink noise. The values are quite smooth (except 0 Hz, which is 0, and 86 Hz, which is 58) up to 18000 Hz.

Buffer Results (arduino)
0	0
86	58
172	37
258	41
[...frequencies between 344-19724 here, values are mostly between 10-30, above 18400 between 1-10...]
[...19810-21963 are all 0...]
--
90.66
21604

When I use a tone generator I get a reasonable peak around the frequency used, e.g. at 516 Hz:

[...]
344	5
430	37
516	41   <–––
602	35
689	5
[...]
--
88.74
21604

The second to last number is resultdB, which is always between 88-91 (with some variation).

Now I’m wondering what I should do to get a comparable noise level value from resultdB or from the per-frequency values. I’m not looking for scientific accuracy (yet :), but maybe it would be possible to get dBA-ish values, which I could compare to values produced by a commercial noise meter, the Cesva TA120?

The Cesva sends, once a minute, the dBA average for the last minute and also the values for each of those seconds. (I save all the data into an InfluxDB database for future use and analysis.) I can do the same in this Arduino setup: just take samples for 1000 ms, average them, and then also average 60 seconds of those 1000 ms averages.


Hi! :raised_hand_with_fingers_splayed:

Thanks for coming back and checking out the library! It’s nice to see that you are using it.

Before commenting on your different points, I have a warning I forgot to mention before: if you are using the master branch, it is important (for now) to keep the buffer sizes and sampling frequency as they are, since almost all the corrections for weighting, windowing and equalization are hardcoded. I am working on making this fully flexible, so you can pick them when calling the library, but it’s not there yet (currently it lives in the dev-i2s-dbg branch). Another option, if you need to change these values, is to change the hardcoded parameters in ConstantSounds.h; if you want, I can help you out with it.


Now! I assume from your results (nicely put btw, thanks for the clarity) that you have used the default values. Going through your points:

The values are quite smooth (except 0 Hz, which is 0, and 86 Hz, which is 58) up to 18000 Hz.

That’s probably normal; it can be due to the number of samples and the windowing function used (a more detailed description is given at the end of this post).

Another possible source for this is an issue we saw while testing this microphone on the Arduino Zero: some buffers were shifted by a DC component (not centred on zero). This could explain why you see a big value in the first item of the spectrum. If you want, you can have a look at the actual buffer you are retrieving (_sampleBuffer) and check whether the values always look roughly centred around 0; let me know if you need help doing this.
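
If it helps, a quick way to check this (names like _sampleBuffer and BUFFER_SIZE below are placeholders for whatever the library actually defines) is to print the mean of a captured buffer and see whether it stays near zero:

```cpp
// Quick DC-offset check; _sampleBuffer and BUFFER_SIZE are placeholders for
// the buffer the library actually fills.
const int BUFFER_SIZE = 512;          // assumed buffer length
int32_t _sampleBuffer[BUFFER_SIZE];

float bufferMean() {
  int64_t sum = 0;
  for (int i = 0; i < BUFFER_SIZE; i++) {
    sum += _sampleBuffer[i];
  }
  return (float)sum / BUFFER_SIZE;    // should hover around 0 if no DC shift
}
```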


I’m not looking for scientific accuracy (yet :), but maybe it would be possible to get dBA-ish values, which I could compare to values produced by a commercial noise meter, the Cesva TA120?

In principle you should almost have them, but there are two things you should consider in the workflow:

  1. First, in the signal-acquisition part, if you haven’t already, you need to change the bit length in the code according to the number of bits your microphone has. This has a direct effect on the sound level (since otherwise it is not referenced properly):

Here, you have to input your microphone characteristics:

If I am not wrong, from the datasheet these values should be BIT_LENGTH = 18 and FULL_SCALE_DBSPL = 120 for your microphone:

And here, I think, you have to change the value to *buff = sample >> 13;

  2. The step about Spectrum Normalization relates to the microphone itself and is meant to equalize the microphone response. By microphone response, I mean that the sensor doesn’t pick up all frequencies equally and sometimes amplifies some of them. Therefore, we need to correct this behaviour, and in the library you are using the correction is meant for the ICS43432. For now, I suggest you comment out the lines carrying out that operation in:

Now, depending on the accuracy you aim for, some extra testing is needed to calibrate this Spectrum Normalization or Equalization. This means going into an anechoic chamber or similar, playing white noise through a proper speaker to both the microphone and a reference linear microphone, and characterising the correction by comparing the two. However, this is probably not needed for the microphone you are using and the accuracy you are preliminarily aiming for, since its response is much more linear than the one we are using.

NB: it’s nice that you are using it, since I will try to document all these not-at-all-obvious points on the GitHub page for other people to use as well.

Finally! I would like to ask whether you are using the microphone at 44100 Hz and whether you are experiencing any trouble in the long run, meaning that at some point it collapses or stops returning values. If so, please let me know! :crossed_fingers:

Let me know about your results! :smiley:


We had our noise-sensor workshop yesterday and the participants took their sensor boxes home with them. I tried this branch, but didn’t have time to tweak it enough to get reasonable results, so we had to go with the original “vu-meter” code, which just averages peak-to-peak values from the samples and sends them to the server and database.

However, we still have two MKR1000 boards in our hands and I might have more time to check this. I could, for example, do some long-running tests, try to catch bugs and so on. In any case, we can update most of those noise-sensor boxes during this spring if we get better software for reading the microphone.

So, should I take a closer look at your previous suggestions, or is there something newer to check out?

Our poor code is here:


It is just a starting point that works, but of course we are willing to improve it. :slight_smile:

Hi! :wave:t4::vulcan_salute:t4:

I really apologize for my late response: these have been some busy months.

I understand: it is a little tricky to get it running if you approach the library from scratch, but as soon as I have some more time I will try to make it easier for these purposes.

Nevertheless, for your case, the processing you are doing now seems reasonable; however, you’ll only get dB values on the Z-scale (i.e. without any weighting), which is somewhat representative of relative noise levels but not fully representative of human hearing. If you wanted to perform the analysis on other scales (to obtain dBA or dBC values, for instance), you would need to dig into the world of FFT or FIR filtering. For now, if you are not interested in those, I would suggest you convert the peak-to-peak values to RMS values (for a sine-like signal, RMS is the peak-to-peak value divided by 2·sqrt(2)). Then you could obtain even more representative dB values if you consider the full-scale characteristics of the microphone:

DB = FULL_SCALE_DBSPL - (FULL_SCALE_DBFS - 20*log10(sqrt(2) * RMS));

Where FULL_SCALE_DBSPL is the full-scale value of the sensor in dB SPL (the clipping point of the microphone) and FULL_SCALE_DBFS is the same point expressed in the microphone’s numeric representation (the maximum number it can represent with the number of bits it has, i.e. BIT_LENGTH = 24 for the ICS43432).

For reference, in our case:

const int FULL_SCALE_DBSPL = 120; // FULL SCALE dBSPL (AOP = 116dB SPL)
const double FULL_SCALE_DBFS = 20*log10(pow(2,(BIT_LENGTH))); // BIT LENGTH = 24 for ICS43432
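
Putting it together, a sketch of the whole peak-to-peak to dB calculation (using the ICS43432 values quoted above; swap in BIT_LENGTH = 18 and the full-scale figure from the SPH0645 datasheet for your microphone, and note the sine-wave assumption in the RMS conversion):

```cpp
// Sketch of the full conversion, with the ICS43432 values quoted above.
#include <math.h>

const int    BIT_LENGTH       = 24;
const int    FULL_SCALE_DBSPL = 120;                                // clipping point in dB SPL
const double FULL_SCALE_DBFS  = 20.0 * log10(pow(2.0, BIT_LENGTH)); // same point in counts

// peakToPeak is max(sample) - min(sample) over a measurement window,
// e.g. from the "vu-meter" style loop.
double peakToPeakToDb(double peakToPeak) {
  double rms = peakToPeak / (2.0 * sqrt(2.0));  // assumes a sine-like signal
  return FULL_SCALE_DBSPL - (FULL_SCALE_DBFS - 20.0 * log10(sqrt(2.0) * rms));
}
```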

I would suggest having a look at my previous post’s suggestions, if you haven’t done that yet, but only if you want to dig deeper into the other scales. I could give you a hand checking the results if you have them at hand. If not, and you are OK with dBZ results, I would say it’s fine to keep the analysis as you have it now.

Thanks and apologies for the late response, :pray:t4:

Óscar


Hi. As you mention, the decimation is done inside the microphone, so can I start taking data directly based on the SCK clock? Also, please let me know whether any samples are available for testing.
I have to interface the MEMS mic with the SAMD21 using ASF, not Arduino.

Hi!

Check out the latest updates here: https://github.com/fablabbcn/smartcitizen-kit-20/tree/master/lib/AudioAnalysis

Reading through the FFTAnalyser.cpp, you can use the allocated buffer for testing:

Regards!

Hello, I tried to include the library but I get an error because it does not have the necessary format. Is there a solution to this problem?

Dear @raiquenpazanin,

Sorry for the delay on this.
It is probably better to discuss this directly in the issue tracker of the GitHub repository.

Thanks!

Hello Óscar,

I have been reading through this post with a lot of interest. I have a few (newbie) questions:

Thanks!

Hi, I want to connect this MEMS mic to an Arduino Mega 2560 using the SPI protocol, in this way. Can you suggest a 5-bit binary counter (counting up to 32 clock cycles, then output RCO = High) to achieve the output data shown in the datasheet?
For data in and out of the MEMS mic according to Word Select (WS) and the clock, please see the attachment (mono-channel data, either left or right).

Hello. In my case, it only works well with the AudioInputI2S class. You can’t use it at the same time as the Audio Shield, since they’d both need to be connected to I2S0 RX. I haven’t looked at the AudioControlSGTL5000 class in detail. Maybe you can disable its I2S output? It would also be possible to connect the microphone to I2S1 RX. However, the Quad I2S class doesn’t yet support BCLK = LRCLK*64.