
Step 3 (event extraction) hangs: the program stops making progress, but no error is reported #22

Closed
xujiameng opened this issue Apr 16, 2023 · 31 comments


@xujiameng

Hello developers of AQuA, and thank you very much for developing such a convenient calcium-event analysis tool. I am a PhD student in biomedical engineering, focusing on the analysis of calcium events in astrocytes.

I have recently been using AQuA to analyze calcium events, but I have run into some problems. First, my environment: for stability I use MATLAB 2018a (versions above 2020 have problems when drawing ROIs), Windows 10 (MATLAB on Linux cannot run AQuA), and 128 GB of RAM. Before analyzing an astrocyte tif file, I use the DeepCAD tool to denoise the images and then NoRMCorre for motion correction.

In my experience, AQuA normally completes a full task in about 4 hours on my computer. However, the run often stops at step 3, where events are extracted: no error is reported, it does not continue to run, and the memory it occupies is not released.

When the prompt "The maximum variable value allowed by the program was exceeded." appeared, I modified the code in "spgtw.m" and inserted the following commands at line 40 to keep the program running, because thrMax occasionally becomes Inf, which makes the command `thrVec = 0:thrMax;` fail.

```matlab
% Guard inserted at line 40 of spgtw.m: if thrMax is Inf, fill missing
% values and recompute a finite thrMax from the 99.9th percentile of dFip.
if isinf(thrMax)
    dFip = fillmissing(dFip,'nearest');
    s00 = s00 + 1;
    thrMax = ceil(quantile(dFip(:),0.999)/s00);
    fprintf('thrMax = %d\n',thrMax)
end
```

But when I do not modify s00 (that is, when snrThr is Inf), the program has the same problem I mentioned: AQuA neither continues to run nor reports an error.

I cannot locate what is wrong; please help!

[screenshot]

@XuelongMi (Collaborator)

Hi Jiameng, I'm Xuelong; I can help you with this. As you know, `s00` in the script represents the estimated noise, and it is calculated by `dFDif = (dF(:,:,1:end-1)-dF(:,:,2:end)).^2; s00 = double(sqrt(median(dFDif(:))/0.9113));`, i.e., from the differences of adjacent time points. I suspect that after your denoising, most adjacent-frame differences within some events are 0, so `s00` is 0 or very small, and `thrMax` becomes a very large number.
Line 42, `tMapMT = burst.getSuperEventRisingMapMultiThr(dFip,m0Msk,thrVec,s00);`, uses `thrVec` in a for-loop: every value in `0:thrMax` produces one iteration. So I think AQuA is still running and simply reports no errors. And, as you know, even when MATLAB is running something, it may not show that it is running.
One way to fix this is to modify line 17 to `s00 = max(s00Bound, double(sqrt(median(dFDif(:))/0.9113)));`, which sets a lower bound `s00Bound` on s00. `s00Bound` can be a value you choose, such as 0.1 or 0.01.
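
For reference, a minimal sketch of the bounded noise estimate described above (variable names taken from spgtw.m; the value of s00Bound is only an example):

```matlab
% Noise estimate from squared differences of adjacent frames; the lower
% bound keeps a denoised (near-noiseless) movie from driving thrMax to Inf.
s00Bound = 0.01;                                  % user-chosen lower bound
dFDif = (dF(:,:,1:end-1) - dF(:,:,2:end)).^2;     % adjacent-frame differences
s00 = max(s00Bound, double(sqrt(median(dFDif(:))/0.9113)));
```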

Besides, we have a development version of AQuA2, which should be faster and more accurate. We plan to write a paper on it this year. If you want to use it before our publication, you can send a request to my advisor at yug@vt.edu and promise to keep it within your own lab; then I can send you the package.

@xujiameng (Author) commented Apr 17, 2023


Hello Xuelong. I followed your suggestion and modified the definition of s00 on line 17 of spgtw.m: `s00Bound = 0.01; s00 = max(s00Bound, double(sqrt(median(dFDif(:))/0.9113)));`
However, AQuA still stalled for a long time in the same way. I traced the stall to line 122 of spgtw.m: `[~, labels1] = aoIBFS.graphCutMex(ss,ee);`. AQuA has been stuck there for nearly 7 hours without moving on to the next command.

I searched for the function `graphCutMex` in the AQuA folder but found only an annotated .m file (AQuA/master/tools/+aoIBFS/graphCutMex.m).
Going through the run logs, I found that graphCutMex sometimes executes quickly, while at other times it does not respond for hours.
I just recalled that the same problem occurred with an astrocyte video before I modified the program: after waiting nearly a day, AQuA still produced no new output.
How can I speed up the graphCutMex computation?

Also, thank you for inviting us to use AQuA2. I am discussing the contents of the official email with my advisor.

@XuelongMi (Collaborator)

I guess the root cause is still the denoising of the video. AQuA estimates the noise itself and uses it to detect signals; when the estimated noise is very small, AQuA may identify very large signals, and many steps become very time-consuming. Besides, `graphCutMex` is a compiled C++ function for solving a min-cut problem, and the stagnation in step 3 is caused by too large an input graph.

Several ways to solve it:

  • Way 1: Skip denoising if the SNR of the video is not very low. Use NoRMCorre directly (if motion is small, this step can also be omitted) and then run AQuA.
  • Way 2: Downsample your preprocessed data, then run AQuA.
  • Way 3: Decrease the graph size for `graphCutMex`. In the "AQuA\src\+burst\se2evt.m" file, set the line-9 parameter `spSz` to a larger value, such as spSz = 200 or more (see the sketch after this list). Also modify lines 73 and 132 of "AQuA\src\+gtw\mov2spSNR.m", changing the for-loop iteration number from 100 to 1000.
  • Way 4: AQuA2 has accelerated that part, and the stagnation won't happen again.
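
For concreteness, a minimal sketch of the first edit in Way 3 (file and line number as given above; spSz = 200 is only an example value, and the mov2spSNR.m loop bounds are changed the same way, from 100 to 1000):

```matlab
% AQuA\src\+burst\se2evt.m, around line 9: a larger super-pixel size
% means fewer nodes in the graph passed to graphCutMex.
spSz = 200;   % try 200 or an even larger value
```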

@xujiameng (Author)


Thank you very much for your detailed guidance. I will try to modify the code according to your suggestions.

I hope this is not too much trouble. I am very interested in AQuA2, but my advisor and I are considering how to commit to "keeping it within our own lab". Do you have any suggestions?

@XuelongMi (Collaborator)

Just send an email to my advisor at yug@vt.edu saying "I promise not to share AQuA2 with other people". The main concern is simply that AQuA2 is not published yet.

@xujiameng (Author)


Hello Xuelong. I spent a day testing, mainly Way 3. Unfortunately, adjusting these parameters did not resolve the problem of the graphCutMex function being very slow at certain times. Also, as expected, the model took more time in the third step ("event"). I only tried three settings of the initial parameters (such as spSz = 200), and it took an enormous amount of time.

For Way 1, I am sure AQuA runs smoothly if I do not denoise the images with DeepCAD, but I am reluctant to give that step up. Here is my reasoning: when I analyzed the raw footage without denoising, I obtained 284k seeds and 23k events (after "clean", "Merge" and "Recon") with default parameters. When I analyzed the denoised video, adjusting only the threshold (other parameters unchanged), the seeds reached 290k but the number of events was only 1.1k. This gives me reason to believe that denoising meaningfully changes AQuA's analysis results.

For Way 2, I downsampled the video from 40 frames per second to 5 and reduced 32-bit data to 8-bit, but this did not fundamentally solve the efficiency problem.
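
For reference, a minimal sketch of the kind of temporal downsampling described (frame averaging by a factor of 8; `mov` is a hypothetical H x W x T movie array, not an AQuA variable):

```matlab
% Temporal downsampling by frame averaging, e.g. 40 fps -> 5 fps (factor 8).
k = 8;
T = floor(size(mov,3)/k)*k;                 % trim T to a multiple of k
mov = mov(:,:,1:T);
movDs = squeeze(mean(reshape(mov, size(mov,1), size(mov,2), k, T/k), 3));

% Optional bit-depth reduction: rescale to [0,255] and store as uint8.
mov8 = uint8(255 * mat2gray(movDs));
```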

This is really frustrating. I think I will have to apply the not-yet-published AQuA2 to analyze calcium events in astrocytes.

@XuelongMi (Collaborator)

You can send an email to my advisor, and then I can send you the development version of AQuA2.

By the way, how large is your dataset? And if you do not need propagation information, maybe we can adjust the GitHub code to skip step 3.

@xujiameng (Author)

Each of my videos is 10 minutes long, with 2-4 cells in the field of view. I won't skip step 3, because it contains important communication information that may be the focus of my analysis.

@XuelongMi (Collaborator)

Would it be convenient for you to send me a small crop of the data (a recording of one minute, or half a minute) so that I can check the code and try to adjust the parameters?

@xujiameng (Author)


Thank you for your support. These materials are not yet public, and I need to ask my supervisor, which may take some time; please understand. I am grateful for your support.

@XuelongMi (Collaborator)

Sure.

@xujiameng (Author)

Hello Xuelong. A question occurred to me this morning: for the high-SNR video "ExVivoSuppRaw" provided on https://github.com/yu-lab-vt/AQuA (https://drive.google.com/file/d/13tNSFQ1BFV__42TY0lZbHd1VYTRfNyfD/view), what method was used to preprocess the images?

Perhaps the problem I encountered is actually very simple: the preprocessing method I chose is not suitable for AQuA.
[screenshot]

@XuelongMi (Collaborator)

For this data, we did no preprocessing and used AQuA directly. For different data, we may adjust the parameters in the GUI.
For your data, if the detected events exceed your expectation (in the case without the denoising step), maybe you could set stricter parameters, such as a higher intensity threshold and a higher z-score threshold.

@xujiameng (Author)

I may not have expressed myself clearly. I mean: does this picture show the original image? Is it this clean without any noise removal or motion correction?

@XuelongMi (Collaborator)

Yes, it is raw data.

@YiqunWang-FDU

I have encountered the same situation. It has now been running in the third step for 4 hours, and the final output of the command line is shown in the figure. Was this problem solved in the end? Can you guide me on how to deal with it? Thank you very much. @xujiameng @XuelongMi
[screenshot]

@XuelongMi (Collaborator)

I need more information to locate the issue you met. Could you send me example data that reproduces it?

Or you can first try the method Jiameng used:
In the "AQuA/src/+gtw/" folder, open the "spgtw.m" file and modify line 17 to
`s00 = max(0.1, double(sqrt(median(dFDif(:))/0.9113)));`
then test again. This modification sets a lower bound on the noise estimated in step 3, so that a zero noise estimate cannot produce an effectively infinite for-loop.
If the issue is still not solved, I may need the data to locate it.

@YiqunWang-FDU

I have already taken this measure; the situation I describe is the result after modifying the code.

@XuelongMi (Collaborator)

That is strange.
Could you add the line `fprintf('sp2graph finished\n')` after line 112 (`[ref,tst,refBase,s,t,idxGood] = gtw.sp2graph(dat,validMap,spLst,spSeedVec(1),gapSeed)`), the line `fprintf('buildGTWGraph finished\n')` after line 121 (`[ ss,ee,gInfo ] = gtw.buildGTWGraph( ref, tst, s, t, smoBase, maxStp, s2);`), and the line `fprintf('Alignment finished\n')` after line 123 (`path0 = gtw.label2path4Aosokin( labels1, ee, ss, gInfo );`) to help me locate the issue?
After the modification, run step 3 again and show me the output of the command window.
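
For clarity, a sketch of how the instrumented section of spgtw.m would look (the surrounding lines are those quoted above; lines between them are unchanged):

```matlab
[ref,tst,refBase,s,t,idxGood] = gtw.sp2graph(dat,validMap,spLst,spSeedVec(1),gapSeed);
fprintf('sp2graph finished\n');
% ... lines 113-120 unchanged ...
[ss,ee,gInfo] = gtw.buildGTWGraph(ref, tst, s, t, smoBase, maxStp, s2);
fprintf('buildGTWGraph finished\n');
[~, labels1] = aoIBFS.graphCutMex(ss,ee);
path0 = gtw.label2path4Aosokin(labels1, ee, ss, gInfo);
fprintf('Alignment finished\n');
```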

@YiqunWang-FDU

In addition, my data is also the result of DeepCAD denoising. Could this be the source of the problem? Is it possible to modify the code to accommodate denoised data? The signal-to-noise ratio of my original images is so low that conventional filtering methods are ineffective, so I consider DeepCAD denoising a necessary step.

@XuelongMi (Collaborator)

It is possible, but right now I cannot locate what happens in the code. If it is convenient, could you send me example data (a small crop of the data is enough, as long as the issue is reproduced)?

@YiqunWang-FDU

I modified the code as you said and re-ran it. The program is still running; this is the current output, and it has not changed for about 10 minutes. Can you locate the problem? If you need data, how can I send it to you? May I know your email?

```
Reading data
Reading done.
Done ...
Detecting ...
Done
Detecting ...
Grow 1
Grow 2
Grow 3
Grow 4
Grow 5
Grow 6
Grow 7
Grow 8
Grow 9
Grow 10
Grow 11
Grow 12
Grow 13
Grow 14
Grow 15
Grow 16
Grow 17
Grow 18
Grow 19
Grow 20
Grow 21
Grow 22
Grow 23
Grow 24
Grow 25
Grow 26
Grow 27
Grow 28
Grow 29
Grow 30
Grow 31
Grow 32
Grow 33
Grow 34
Grow 35
Grow 36
Grow 37
Grow 38
Grow 39
Grow 40
Cleaning super voxels by size ...
Cleaning super voxels by z score...
Extending super voxels ...
1000
2000
3000
4000
5000
1000
2000
3000
4000
Done
Detecting ...
Detecting super events ...
1000
2000
3000
4000
1000
2000
3000
1000
2000
1000
2000
1000
1000
1000
Detecting events ...
SE 1
Max 12000 - Tgt 7 - Now 48 - Thr 2.000000
Node 48, SNR 6.020600e+00 dB Ratio 1.00
sp2graph finished
buildGTWGraph finished
Alignment finished
SE 2
Max 11538 - Tgt 6 - Now 39 - Thr 2.000000
Node 39, SNR 6.020600e+00 dB Ratio 1.00
sp2graph finished
buildGTWGraph finished
Alignment finished
SE 3
SE 4
Max 7500 - Tgt 8 - Now 56 - Thr 2.000000
Node 56, SNR 6.020600e+00 dB Ratio 1.00
sp2graph finished
buildGTWGraph finished
Alignment finished
SE 5
Max 27273 - Tgt 2 - Now 11 - Thr 3.000000
Node 11, SNR 9.542425e+00 dB Ratio 1.00
sp2graph finished
buildGTWGraph finished
Alignment finished
SE 6
Max 8571 - Tgt 6 - Now 39 - Thr 2.000000
Node 39, SNR 6.020600e+00 dB Ratio 1.00
sp2graph finished
buildGTWGraph finished
Alignment finished
SE 7
Max 12500 - Tgt 6 - Now 46 - Thr 2.000000
Node 46, SNR 6.020600e+00 dB Ratio 1.00
sp2graph finished
buildGTWGraph finished
Alignment finished
SE 8
1000
2000
3000
4000
5000
6000
7000
8000
9000
10000
11000
Max 5556 - Tgt 1634 - Now 11533 - Thr 1.000000
1000
2000
3000
4000
5000
6000
7000
8000
9000
10000
11000
Max 5556 - Tgt 1634 - Now 11533 - Thr 1.500000
1000
2000
3000
4000
5000
6000
7000
8000
9000
10000
11000
Max 5556 - Tgt 1634 - Now 11533 - Thr 1.500000
1000
2000
3000
4000
5000
6000
7000
8000
9000
10000
11000
Max 5556 - Tgt 1634 - Now 11533 - Thr 1.500000
1000
2000
3000
4000
5000
6000
7000
8000
9000
10000
11000
Max 5556 - Tgt 1634 - Now 11533 - Thr 1.500000
Node 11533, SNR 3.521825e+00 dB Ratio 1.00
sp2graph finished
buildGTWGraph finished

```

@XuelongMi (Collaborator)

Example data would be helpful. I promise this data will be used only for fixing this issue. My email is mixl18@vt.edu; you can send it through Google Drive.

@XuelongMi (Collaborator)

By the way, I have a general idea of where the problem is: lines 122 and 123, `[~, labels1] = aoIBFS.graphCutMex(ss,ee); path0 = gtw.label2path4Aosokin( labels1, ee, ss, gInfo );`. But line 122 uses an existing C++ package, so it should work; maybe the input has some issue.
I think sending the example data will help me solve the issue.

@xujiameng (Author)


"I strongly discourage using videos processed by deepcad for importing into AQUA. Although from a human perspective, the information contained in the video increases after preprocessing, in our tests, any preprocessing method will affect the video information. In the tests, we used six preprocessing methods: DeepCad, NoRMCorre, DeepCad+NoRMCorre, median filtering, Image Stabilizer (by ImageJ), and median filtering+Image Stabilizer. We evaluated the information changes before and after preprocessing using two indicators: peak signal-to-noise ratio (PSNR) and structural similarity (SSMI).

PSNR (peak signal-to-noise ratio) is a metric of image quality: the ratio, expressed in decibels (dB), of the peak signal energy to the mean squared error between the original and the processed image. The higher the PSNR, the better the image quality; above 30 dB usually indicates no obvious distortion, and 40 dB can be considered visually lossless.

SSIM (structural similarity) is another image-quality metric, designed around the human visual system's (HVS) sensitivity to structural information. SSIM assumes that image quality depends mainly on the similarity of luminance, contrast, and structure. It usually ranges between 0 and 1; the closer the value to 1, the more similar the two images, and an SSIM above 0.75 generally indicates high quality/similarity.
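
For reference, a minimal sketch of how such a frame-wise comparison can be computed in MATLAB (assuming the Image Processing Toolbox; `raw` and `proc` are hypothetical same-sized grayscale frames scaled to [0,1], not names from our pipeline):

```matlab
% Compare a preprocessed frame against the corresponding raw frame.
p = psnr(proc, raw);   % peak signal-to-noise ratio, in dB
s = ssim(proc, raw);   % structural similarity index
fprintf('PSNR = %.2f dB, SSIM = %.4f\n', p, s);
```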

Our results show that both computationally heavy preprocessing methods (such as DeepCAD+NoRMCorre) and computationally light ones (such as median filtering+Image Stabilizer) change the valid information contained in the original video.
[PSNR/SSIM comparison figures]

To take an imperfect analogy: if we filter a normal electrocardiogram signal, the area under its curve and other derived parameters will usually change. In astrocytes we do not know whether this change in information is what we want (even though, to a human eye, the video becomes clearer and the subcellular structure more accurate). In the paper "Accurate quantification of astrocyte and neurotransmitter fluorescence dynamics for single-cell and population-level physiology", no preprocessing other than Gaussian filtering was applied to the video, and you can see similar preprocessing choices in other related papers.

I hope this helps you.

@xujiameng (Author)

@XuelongMi Also, when using AQuA to analyze calcium signals in astrocytes, some of the calculated parameters sometimes contain NaN or Inf (such as Landmark event_away_from_landmark_landmark, Propagation offset_one_direction_ratio_Anterior, etc.). How should I deal with these abnormal values: mean imputation, or excluding the calcium signals that contain them?

@XuelongMi (Collaborator)

Please let me check it tomorrow.

@YiqunWang-FDU

Thank you for your reply. If I understand correctly, your point is that complex preprocessing may introduce unknown changes to the image, and that these changes are not necessarily reliable; your evidence is that the PSNR and SSIM of heavily processed images are lower.
You also mention that, by human observation, the proportion of effective information in the heavily processed image increases, yet its PSNR and SSIM are lower. These two observations conflict because PSNR- and SSIM-based image-quality assessment is inherently flawed; see the paper "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric".
[screenshot]
Judging image similarity by PSNR and SSIM is biased toward blurred images, which is why Gaussian filtering has a lower index.
So my point is: we all know that preprocessing changes the image, and by some metrics those changes can look detrimental. But our ultimate goal is to analyze the image; if the analysis result is better after processing, why give up eating for fear of choking?
On the other hand, my data were acquired under relatively harsh conditions, and subsequent analysis is impossible without complex preprocessing such as DeepCAD, so for me this step was unavoidable. I think adjusting AQuA to fit DeepCAD-processed data may be the better choice. And yesterday, with XuelongMi's help, I successfully used AQuA to extract events from DeepCAD-processed images. Do you agree with my view?
Finally, thank you for your reply and for a great discussion!

@XuelongMi (Collaborator)

@xujiameng Hi Jiameng, for "Propagation offset_one_direction_ratio_Anterior": it is calculated as "propagation offset in one direction" / sum("propagation offsets in all directions"). So if the propagation offsets in all directions are 0, the ratio will be NaN; you can simply set such ratios to 0.
For "Landmark-event_away_from_landmark", I tested several datasets and cannot reproduce the issue. Could you give me some examples? Since I did not see it in my test cases, I suppose the case is rare; if the data is private and such cases in your data are very few, maybe you can ignore those signals.

@xujiameng (Author)


Thank you for your reply. In the analysis of 40 astrocytes, AQuA detected a total of 6.5k calcium events. The missing rate of the "onset/offset_one_direction-ratio" parameters was around 10%, of the "event_toward/away_landmark" parameters around 0.005%, and of the "event_towardlandmark_beforereaching" / "event_away_from_landmark_after_reaching" parameters around 0.006%. Since the former accounts for a much higher proportion, fixed-value imputation better preserves the original data distribution; the latter cases are rare, so it is better to ignore those samples.

Thank you very much for your suggestion.

@xujiameng (Author)


Hello Yiqun. When analyzing AQuA's sensitivity to hyperparameter settings, we found that the two sets of images, before and after preprocessing, exhibited different characteristics.
In the following two figures, we set minSize and thrARScl as the independent variables and nseed as the dependent variable. We used MATLAB's cftool toolbox to fit the computed data, selecting the Biharmonic (v4) model, which generates a biharmonic surface that agrees with the original data at the data points; its advantage is that it maintains the smoothness and shape of the surface, avoiding oscillation and overfitting.
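
For reference, a minimal sketch of that kind of surface fit (the sample data are hypothetical; 'biharmonicinterp' is the Curve Fitting Toolbox name for the Biharmonic (v4) model in cftool):

```matlab
% Fit a biharmonic interpolating surface nseed = f(minSize, thrARScl).
% minSizeV, thrV, nseedV are hypothetical measurement vectors.
minSizeV = [4; 8; 8; 16; 16; 32];
thrV     = [2; 2; 3;  2;  3;  3];
nseedV   = [290; 260; 210; 180; 150; 120];
sf = fit([minSizeV, thrV], nseedV, 'biharmonicinterp');
plot(sf, [minSizeV, thrV], nseedV);   % surface with the measured points
```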
The following figure shows AQuA's parameter sensitivity on the original video; black dots are the measured data points.
[surface-fit figure: original video]

The figure shows good characteristics: the influence of each parameter on AQuA is relatively independent and exhibits a diminishing marginal effect (which we have confirmed in more video analyses). Note, however, that the parameter range we used for testing is not the range suitable in practice; for example, in applications it is recommended to keep thrARScl within 2-3. Quantifying this image and observing the partial derivative (+1) of thrARScl, it reaches its maximum within the range 12 to 15.
[partial-derivative figure]

The following figure shows AQuA's parameter sensitivity on the preprocessed video (DeepCAD+NoRMCorre). When we compute the partial derivative, we do not see a similar pattern.
[surface-fit figure: preprocessed video]

I do not deny DeepCAD's enhancement of images; I believe DeepCAD can indeed improve image quality and is a very useful and efficient tool. I hope some of my work can provide more basis for your judgment.
