ESP32 ADC DMA generates samples way too fast? (IDFGH-7285) #8874
I think this is an I2S clock issue, since the same prescaler is also used for ADC DMA. The ESP32 ADC uses DMA through I2S, so the I2S clock needs to be configured. From adc_hal.c:
#define I2S_BASE_CLK                (2*APB_CLK_FREQ)
#define SAMPLE_BITS                 16
#define ADC_LL_CLKM_DIV_NUM_DEFAULT 2
#define ADC_LL_CLKM_DIV_B_DEFAULT   0
#define ADC_LL_CLKM_DIV_A_DEFAULT   1

/**
 * For esp32s2 and later chips
 * - Set ADC digital controller clock division factor. The clock is divided from `APLL` or `APB` clock.
 *   Expression: controller_clk = APLL/APB * (div_num + div_a / div_b + 1).
 * - Enable clock and select clock source for ADC digital controller.
 * For esp32, use I2S clock
 */
static void adc_hal_digi_sample_freq_config(adc_hal_context_t *hal, uint32_t freq)
{
#if !CONFIG_IDF_TARGET_ESP32
    uint32_t interval = APB_CLK_FREQ / (ADC_LL_CLKM_DIV_NUM_DEFAULT + ADC_LL_CLKM_DIV_A_DEFAULT / ADC_LL_CLKM_DIV_B_DEFAULT + 1) / 2 / freq;
    //set sample interval
    adc_ll_digi_set_trigger_interval(interval);
    //Here we set the clock divider factor to make the digital clock to 5M Hz
    adc_ll_digi_controller_clk_div(ADC_LL_CLKM_DIV_NUM_DEFAULT, ADC_LL_CLKM_DIV_B_DEFAULT, ADC_LL_CLKM_DIV_A_DEFAULT);
    adc_ll_digi_clk_sel(0);   //use APB
#else
    i2s_ll_rx_clk_set_src(hal->dev, I2S_CLK_D2CLK);    /*!< Clock from PLL_D2_CLK(160M)*/
    uint32_t bck = I2S_BASE_CLK / (ADC_LL_CLKM_DIV_NUM_DEFAULT + ADC_LL_CLKM_DIV_B_DEFAULT / ADC_LL_CLKM_DIV_A_DEFAULT) / 2 / freq;
    i2s_ll_mclk_div_t clk = {
        .mclk_div = ADC_LL_CLKM_DIV_NUM_DEFAULT,
        .a = ADC_LL_CLKM_DIV_A_DEFAULT,
        .b = ADC_LL_CLKM_DIV_B_DEFAULT,
    };
    i2s_ll_rx_set_clk(hal->dev, &clk);
    i2s_ll_rx_set_bck_div_num(hal->dev, bck);
#endif
}

With the configured sampling frequency at 10e3, changing that divider (ADC_LL_CLKM_DIV_NUM_DEFAULT is the variable I tried) slows the real sampling frequency down, the buffer is no longer constantly overflowing, and the channel sequence is correct. But I don't know what exact clock prescaler the code was designed for; the documentation is not very clear...
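For illustration, plugging the ESP32 defaults into the bck expression above with freq = 10e3 gives bck = 160000000 / (2 + 0/1) / 2 / 10000 = 4000. If I2S_RX_BCK_DIV_NUM is only a 6-bit register field (as the ESP32 TRM suggests), a divider of 4000 cannot be represented, which would be consistent with the effective sample rate coming out far too high. |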
Same problem with ADC over I2S on ESP32 after the v4.4.1 release! The sample rate is broken. I had to roll back the IDF version. Please help with this error. |
I can't make sense of the example at all. Is there some more in-depth documentation on what those struct members actually do? E.g. the example initializes the ADC with the following two structs:

#define TIMES 256
adc_digi_init_config_t adc_dma_config = {
    .max_store_buf_size = 1024,
    .conv_num_each_intr = TIMES,
    .adc1_chan_mask = adc1_chan_mask,
    .adc2_chan_mask = adc2_chan_mask,
};

adc_digi_configuration_t dig_cfg = {
    .conv_limit_en = ADC_CONV_LIMIT_EN,
    .conv_limit_num = 250,
    .sample_freq_hz = 10 * 1000,
    .conv_mode = ADC_CONV_MODE,
    .format = ADC_OUTPUT_TYPE,
};

What is this configuration supposed to do? /edit |
I can confirm the test done by @ssymo84. I changed ADC_LL_CLKM_DIV_NUM_DEFAULT to 256 and got stable DMA readings from 6 channels. Using a signal generator (1 kHz) and an FFT, it looks like a configured 300000 Hz actually comes out to about 26100 Hz.
|
According to the API documentation, the maximum sample_freq_hz is 83333 Hz (not that it would change anything; in my opinion that setting is still broken). |
I did spend some time looking into this issue recently and I found out how to get it working right. According to the reference manual for the ESP32, the sampling frequency is set up via the I2S clock dividers, and the formula for the sampling frequency works out to:

f_sample = PLL_D2_CLK / ((N + b/a) * M * bits * ch)
In my test, ch = 6 and bits = 16 (or 12, but in adc_hal.h SAMPLE_BITS is 16, which goes into I2S_SAMPLE_RATE_CONF_REG as I2S_RX_BITS_MOD; this is set in i2s_ll.h line 607). What is left is N, a, b and M. That means four dimensions for finding the right sampling frequency. After discussing this with a friend at work, I ended up with the following program to calculate the sampling frequency:
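(The original program is not preserved in this thread; what follows is a minimal sketch of such a brute-force search, assuming the formula above. The register range limits and the deltaf value are illustrative.)

#include <stdio.h>
#include <math.h>

#define F_PLL 160000000.0   /* PLL_D2_CLK, assumed 160 MHz */
#define BITS  16.0          /* SAMPLE_BITS */
#define CH    6.0           /* channels in the pattern table */

int main(void)
{
    double target = 20480.0;  /* desired sampling frequency, Hz */
    double deltaf = 0.5;      /* accepted offset from target, Hz */

    for (int N = 2; N < 256; N++)             /* I2S_CLKM_DIV_NUM, 8-bit */
        for (int a = 1; a < 64; a++)          /* I2S_CLKM_DIV_A, 6-bit */
            for (int b = 0; b < a; b++)       /* fraction b/a < 1 */
                for (int M = 2; M < 64; M++)  /* I2S_RX_BCK_DIV_NUM, 6-bit */
                {
                    double fs = F_PLL / ((N + (double)b / a) * M * BITS * CH);
                    if (fabs(fs - target) <= deltaf)
                        printf("N=%d a=%d b=%d M=%d -> fs=%.4f Hz\n", N, a, b, M, fs);
                }
    return 0;
}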
Running the program (https://cplayground.com/) gives the following result:
Not all frequencies fit into this register setup, so you might only get a few candidate settings for each target sampling frequency, depending on the deltaf that limits the allowed offset. Running 20480 Hz as the target sampling frequency gives the following:
I tested this a bit and here are some results (1 kHz signal as input). The changes I made were all in adc_hal.c: I defined N, M, a and b in my application and then referenced them with extern:
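Roughly like this (a sketch; the variable names are made up and the values are placeholders to be filled in from the calculator output):

/* In the application: */
uint32_t my_div_num = 81;   /* N */
uint32_t my_div_a   = 1;    /* a */
uint32_t my_div_b   = 0;    /* b */
uint32_t my_bck_div = 1;    /* M */

/* In adc_hal.c: */
extern uint32_t my_div_num, my_div_a, my_div_b, my_bck_div;
/* ...then inside adc_hal_digi_sample_freq_config(), use these in the
 * i2s_ll_mclk_div_t struct in place of the ADC_LL_CLKM_DIV_*_DEFAULT
 * constants, and pass my_bck_div to i2s_ll_rx_set_bck_div_num()
 * instead of the computed bck. */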
Overall, I was able to get spot-on sampling frequencies. I was also able to go much lower in sampling frequency than the documentation allows, e.g. 829.9616 Hz. |
PLL_D2_CLK = PLL_CLK/2 ? |
I find that unlikely. In the original code inside adc_hal.c (around line 101) there is the definition of I2S_BASE_CLK:

#define I2S_BASE_CLK (2*APB_CLK_FREQ)
Inside soc.h, line 221 and 223, APB_CLK_FREQ is defined with an interesting comment that it might be incorrect:
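They look roughly like this (quoting a v4.4-era soc.h; exact line numbers vary by IDF version):

#define CPU_CLK_FREQ  APB_CLK_FREQ      //this may be incorrect, please refer to ESP32_DEFAULT_CPU_FREQ_MHZ
#define APB_CLK_FREQ  ( 80*1000000 )    //unit: Hz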
This means that I2S_BASE_CLK is set to 160 MHz, and that's how I ended up with this number. However, I have set the CPU frequency to 240 MHz in my SDK configuration, so PLL_D2_CLK should actually be that number, but that does not give me the right results. Here is a more advanced version of the program to calculate the register values:
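(That program is also not preserved here; a sketch of a version that tracks the single best register combination, under the same assumed formula, might look like this:)

#include <stdio.h>
#include <math.h>

#define F_PLL 160000000.0
#define BITS  16.0
#define CH    6.0

int main(void)
{
    double target = 20480.0, best_err = 1e12, best_fs = 0;
    int bN = 0, ba = 1, bb = 0, bM = 0;

    for (int N = 2; N < 256; N++)
        for (int a = 1; a < 64; a++)
            for (int b = 0; b < a; b++)
                for (int M = 2; M < 64; M++) {
                    double fs = F_PLL / ((N + (double)b / a) * M * BITS * CH);
                    double err = fabs(fs - target);
                    if (err < best_err) {
                        best_err = err; best_fs = fs;
                        bN = N; ba = a; bb = b; bM = M;
                    }
                }

    printf("target=%.1f Hz -> N=%d a=%d b=%d M=%d (fs=%.4f Hz, err=%.4f Hz)\n",
           target, bN, ba, bb, bM, best_fs, best_err);
    return 0;
}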
|
Thanks @haukurhafsteins for the work. I found a commit in v4.4: cb62457. They increased the minimum sampling rate to 20 kHz and added a function to calculate the best frequency divider coefficients. |
So now the minimum sampling rate is 20 kHz? So if I sample a 10 Hz signal, won't I be unable to collect a full cycle? |
To collect one cycle (one period) of a 10 Hz signal you need 20000 Hz / 10 Hz = 2000 samples. You can also look at it like this: one period of a 10 Hz signal lasts 1/10 s = 0.1 s, and 0.1 s × 20000 samples/s = 2000 samples.
|
This is only true for the ESP32; for all other devices it's 611 Hz. Can anyone explain to me how SOC_ADC_DIGI_DATA_BYTES_PER_CONV and SOC_ADC_DIGI_RESULT_BYTES are related?

/edit

./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 329
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 339
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 6, Value: 113
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 340
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 341
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 6, Value: 112
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 336
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0
./src/adc.cpp:65: Unit: 1, Channel: 6, Value: 112
./src/adc.cpp:65: Unit: 1, Channel: 6, Value: 113
./src/adc.cpp:65: Unit: 1, Channel: 4, Value: 334
./src/adc.cpp:65: Unit: 1, Channel: 7, Value: 0

/edit2
I've currently configured 3x ADC channels measured at 20kHz. The |
I'm trying to search for the answer to this, and the only result I've found so far is this post asking this very question. |
It's actually exactly what the name says it is. SOC_ADC_DIGI_DATA_BYTES_PER_CONV defines how many bytes a single conversion takes up, and SOC_ADC_DIGI_RESULT_BYTES how many bytes the result uses. Those two defines are vastly different across chip types. I assume because of DMA/I2S constraints, but who knows... In the end it doesn't really matter. The only thing to look out for is that the buffer you pass to the ADC is a multiple of SOC_ADC_DIGI_DATA_BYTES_PER_CONV.
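For example, a sketch against the v4.4-era legacy driver (the buffer count and timeout here are arbitrary, and the parsing assumes the ESP32 type1 output format):

#include <stdio.h>
#include "driver/adc.h"
#include "soc/soc_caps.h"

#define N_CONV 256  /* arbitrary number of conversions per read */

static uint8_t buf[N_CONV * SOC_ADC_DIGI_DATA_BYTES_PER_CONV];

static void read_once(void)
{
    uint32_t bytes_read = 0;
    if (adc_digi_read_bytes(buf, sizeof(buf), &bytes_read, 100 /* ms */) == ESP_OK) {
        /* each individual result occupies SOC_ADC_DIGI_RESULT_BYTES */
        for (uint32_t i = 0; i < bytes_read; i += SOC_ADC_DIGI_RESULT_BYTES) {
            adc_digi_output_data_t *p = (adc_digi_output_data_t *)&buf[i];
            printf("Channel: %d, Value: %d\n", p->type1.channel, p->type1.data);
        }
    }
}
|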
Thanks.
Apparently even that is not enough though. |
Okay, I think I finally got it. The whole reason behind ... and that's 4. That's it.
Apparently not. A single conversion still uses the same 2 bytes. The ... To sum up, ... |
Reading the IDF source code suggests that this is a documentation bug, probably a leftover from the days of the (now obsolete) ... As of now, the only way ... |
Environment
Problem Description
I'm trying to do real-time sampling of 4 input signals using the onboard ADCs; as best I can tell, the ADC DMA mode should be a good way to do this, despite the lack of documentation.
So I copied the dma_read example and tweaked it to fit my use case. But as soon as I removed the time-consuming print statements, it was generating far more samples than it seemed like it was supposed to.
(Could this be related to #6691? I don't know the code well enough to guess...)
Expected Behavior
Since I've reduced the config values to sample_freq_hz = 2000 and conv_num_each_intr = 8, I would expect to be getting 16000 bytes per second.
Actual Behavior
I'm getting around 865000 bytes per second.
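(For scale, presumably: 2000 samples/s × 4 channels × 2 bytes per result = 16000 bytes/s expected, while the observed 865000 bytes/s is roughly 54× that.)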
Steps to reproduce
Here's my code (a slight modification of the dma_read example code - I've tweaked the parameters, removed the excess prints, infrequently logged the average number of bytes read, and stripped out the non-ESP32-specific code) https://gist.github.com/elidupree/cfdbff1b909eb8654f8b5ca38642e726
I simply put that code in place of the dma_read main file, then build and flash over USB to an unmodified WeMos D1 Mini ESP32; no extra wiring is needed.
Debug Logs