
train on VIPL_HR based on MTTS-CAN #8

Closed
huq02 opened this issue Apr 29, 2022 · 44 comments

@huq02

huq02 commented Apr 29, 2022

Hi,
I want to train on VIPL_HR (HR, SpO2) based on MTTS-CAN. I have generated the training file (VIPL_HR_Train.h5df). During training, an IndexError appears.
(screenshot: IndexError traceback during training)

Can you help me to solve the problem?

@SpicyYeol
Collaborator

Hi,
The error seems to occur in MTTSDataset.py.
Our MTTSDataset.py uses the length of hr_label, and I think your appearance_img dataset's size doesn't match hr_label:
(total length of img dataset, batch, channel, height, width) vs. (total length of hr, batch, hr_value)
// total length of img dataset != total length of hr

I think you need to check the data preprocessing sequence.
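As a quick sanity check before training, you can compare the two lengths yourself. A minimal NumPy sketch (the array names and shapes here are illustrative, not the repository's actual HDF5 keys):

```python
import numpy as np

def check_alignment(appearance, hr_label):
    """Raise if the image dataset and hr_label cover different numbers of samples.

    This is the mismatch that surfaces as an IndexError inside MTTSDataset.py,
    which indexes the image dataset by the length of hr_label.
    """
    if len(appearance) != len(hr_label):
        raise ValueError(
            f"length mismatch: {len(appearance)} image samples vs {len(hr_label)} labels")
    return len(appearance)

# Aligned datasets pass the check; regenerate the .h5df file if this raises.
frames = np.zeros((10, 3, 36, 36), dtype=np.float32)  # (N, channel, height, width)
labels = np.zeros((10,), dtype=np.float32)            # (N,) HR values
print(check_alignment(frames, labels))  # 10
```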

@huq02
Author

huq02 commented Apr 29, 2022 via email

@SpicyYeol
Collaborator

If you get a perfect BVP signal, you can derive HR and RR information, but not SpO2.

  1. About SpO2 information

The BVP signal consists of diffuse-reflection information from all hemoglobin, but SpO2 is derived from oxyhemoglobin and deoxyhemoglobin information separately.

In the MTTS-CAN case, they use a multi-task approach, which can find features common to the two labels. So if you try to train with the SpO2 and BVP signals together, your backbone network learns oxy- and deoxyhemoglobin information.

  2. How to get HR and RR

I recommend using a band-pass filter. Generally, HR lies in the 0.8~4 Hz band (48 to 240 bpm) and RR in the 0.1~0.5 Hz band (6 to 30 breaths per minute).

The second method is peak detection. Every BVP signal has a pattern like the PQRST pattern; if you can find the peaks of the BVP signal, you can estimate HR.

I'll introduce two Python packages.
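The band-pass route can be sketched with SciPy on a synthetic signal (illustrative, not from the repository; the 0.8~4 Hz band matches the HR range above, and the same function with a 0.1~0.5 Hz band gives RR):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low, high, order=2):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def dominant_bpm(signal, fs):
    """Dominant frequency of the signal, converted to beats (or breaths) per minute."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return freqs[np.argmax(spectrum)] * 60

fs = 30.0                      # typical camera frame rate
t = np.arange(0, 20, 1 / fs)   # 20 s window
rng = np.random.default_rng(0)
bvp = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(len(t))  # 1.5 Hz pulse
print(round(dominant_bpm(bandpass(bvp, fs, 0.8, 4.0), fs)))  # 90
```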

@huq02
Author

huq02 commented Apr 29, 2022 via email

@SpicyYeol
Collaborator

If you have any other questions about the rPPG area, give me a mail.
have a good day!.

@huq02
Author

huq02 commented May 5, 2022 via email

@SpicyYeol
Collaborator

SpicyYeol commented May 5, 2022

Hi,
You must find the peaks in the BVP signal. I recommend the Elgendi algorithm.
If you can find the peaks (onsets), you can use the IBI (peak-to-peak interval) to estimate heart rate.

Implementations of the Elgendi algorithm can be found in NeuroKit and BioSPPy.

  • Elgendi M, Norton I, Brearley M, Abbott D, Schuurmans D (2013) Systolic Peak Detection in
    Acceleration Photoplethysmograms Measured from Emergency Responders in Tropical Conditions.
    PLoS ONE 8(10): e76585. doi:10.1371/journal.pone.0076585.
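The IBI arithmetic can be sketched with a generic peak picker; here SciPy's `find_peaks` stands in for the Elgendi detector (which is more robust on noisy PPG), but the peak-to-peak HR step afterwards is the same:

```python
import numpy as np
from scipy.signal import find_peaks

def hr_from_ibi(bvp, fs):
    """Estimate heart rate from the mean inter-beat interval (peak-to-peak time)."""
    peaks, _ = find_peaks(bvp, distance=fs * 0.25)  # peaks >= 0.25 s apart (<= 240 bpm)
    ibi = np.diff(peaks) / fs                       # seconds between consecutive beats
    return 60.0 / ibi.mean()                        # beats per minute

fs = 100.0                           # e.g. a 100 Hz contact PPG reference
t = np.arange(0, 10, 1 / fs)
bvp = np.sin(2 * np.pi * 1.2 * t)    # clean 1.2 Hz pulse -> 72 bpm
print(round(hr_from_ibi(bvp, fs)))   # 72
```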

@huq02
Author

huq02 commented May 5, 2022 via email

@SpicyYeol
Collaborator

SpicyYeol commented May 5, 2022 via email

@huq02
Author

huq02 commented May 9, 2022 via email

@SpicyYeol
Collaborator

@huq02
Author

huq02 commented May 17, 2022 via email

@SpicyYeol
Collaborator

You can get information in main/utils/seq_preprocess.py.

Can I get your preprocessing sequence for the VIPL dataset?

@huq02
Author

huq02 commented May 18, 2022 via email

@SpicyYeol
Collaborator

  1. I know about seq_preprocess.py. What is the "path"? The original label file, or does it need to be converted to a label .mat file?
  • Do you mean the dataset.mat file?
  2. You need the VIPL-HR dataset? How can I copy it to you?
  • No, I already have the VIPL-HR dataset, but it doesn't train well, so I want to know how to preprocess it.

@huq02
Author

huq02 commented May 18, 2022 via email

@huq02
Author

huq02 commented May 18, 2022 via email

@SpicyYeol
Collaborator

In the PP-NET paper, they use the "MIMIC" dataset.
You can find the "MIMIC" dataset at https://physionet.org/.

There is no .mat file preprocessed from the UBFC dataset. Also, UBFC and UBFC-Phys need certification from the data owner.

You can get a pretrained model (duplicate of #1).

My question is: how do you preprocess the VIPL-HR dataset? Which method did you use?
I think there is a timing issue between the video and the label, and also a label shape issue.
When I searched other researchers' repos, they use preprocessed labels for training, so I wonder how to preprocess the labels.

@huq02
Author

huq02 commented May 18, 2022 via email

@SpicyYeol
Collaborator

It's OK :)

@huq02
Author

huq02 commented May 18, 2022 via email

@SpicyYeol
Collaborator

You can get the datasets from the following website:

UBFC1 / https://sites.google.com/view/ybenezeth/ubfcrppg
UBFC2 / https://sites.google.com/view/ybenezeth/ubfcrppg

Then preprocess those datasets to make the hdf5 files. You can't download the preprocessed train/test hdf5 files, but you can make them from the downloaded data.

  • How to calculate respiratory rate directly from BVP?
  1. Make the estimated BVP signal longer than the fps (more than one second of samples).
  2. FFT
  3. Band-pass filter [0.18–0.5 Hz]
  4. Select the highest-magnitude frequency, then multiply by 60.
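The four steps above can be sketched in NumPy (synthetic signal; illustrative, not the repository's code):

```python
import numpy as np

def respiratory_rate(bvp, fs, low=0.18, high=0.5):
    """RR from BVP following the steps above: FFT, band mask, peak frequency * 60."""
    spectrum = np.abs(np.fft.rfft(bvp))            # 2. FFT magnitude
    freqs = np.fft.rfftfreq(len(bvp), d=1 / fs)
    band = (freqs >= low) & (freqs <= high)        # 3. keep only 0.18-0.5 Hz
    peak = freqs[band][np.argmax(spectrum[band])]  # 4. strongest in-band frequency
    return peak * 60                               # breaths per minute

fs = 30.0
t = np.arange(0, 60, 1 / fs)   # 1. a window much longer than one second of frames
bvp = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 0.3 * t)  # pulse + breathing
print(round(respiratory_rate(bvp, fs)))  # 18
```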

@huq02
Author

huq02 commented May 18, 2022 via email

@SpicyYeol
Collaborator

SpicyYeol commented May 18, 2022

Thanks!
I have got UBFC1 and UBFC2.
I want to know: have you trained MTTS-CAN on UBFC to get HR and RR?

  • Yes, I have, but it's been a while so I can't remember the result. As I recall, it may train well using just UBFC.

I don't quite understand your advice about getting RR. Can you provide the code for RR? I have the total_rPPG; what do I do next to get RR?

  • I have code, but it uses Java.
  • I mean, for the easy way to get RR, a transform is required. The FFT's input signal must be longer than the video's fps.

```java
public float[][] FFT_trans(int gaussian_w, int BUFFER_SIZE, float[][] f_pixel_buff, Noise noise, float[][] fft_buffer, boolean[] HR_filter, boolean[] RR_filter) {

    this.gaussian_w = gaussian_w;
    this.BUFFER_SIZE = BUFFER_SIZE;

    for (int i = 0; i < gaussian_w * gaussian_w; i++) {
        // rearrange the per-frame pixel buffer into one time series per pixel for the FFT
        float[] pixel_reconstruct = new float[BUFFER_SIZE];
        for (int j = 0; j < BUFFER_SIZE; j++) {
            pixel_reconstruct[j] = f_pixel_buff[j][i];
        }

        noise.fft(pixel_reconstruct, fft_buffer[i]); // FFT

        // indices 0 and 1 hold the DC component; the rest span the normal Nyquist range
        fft_buffer[i][0] = fft_buffer[i][1] = 0; // zero the DC component

        for (int j = 0; j < BUFFER_SIZE / 2; j++) {
            // reuse the imaginary slots as temporary storage: real values filtered for RR
            fft_buffer[i][j * 2 + 1] = fft_buffer[i][j * 2 + 2] * (RR_filter[j] ? 1.0f : 0.0f);
            // real values filtered for HR
            fft_buffer[i][j * 2 + 2] = fft_buffer[i][j * 2 + 2] * (HR_filter[j] ? 1.0f : 0.0f);
        }
    }
    return fft_buffer;
}

public int HR_index(float[][] fft_buffer) {

    // half the buffer, since the imaginary field is not used
    float[] avg_freq = new float[BUFFER_SIZE / 2];

    // average each frequency bin over every pixel of the accumulated image
    for (int i = 0; i < gaussian_w * gaussian_w; i++) {
        for (int j = 0; j < BUFFER_SIZE / 2; j++) {
            avg_freq[j] += fft_buffer[i][j * 2 + 2];
        }
    }

    int index = 0;
    float max = 0.0f;
    for (int i = 0; i < BUFFER_SIZE / 2; i++) {
        avg_freq[i] /= (BUFFER_SIZE / 2);
        if (avg_freq[i] > max) {
            max = avg_freq[i];
            index = i;
        }
    }
    return index;
}

public int RR_index(float[][] fft_buffer) {

    // half the buffer, since the imaginary field is not used
    float[] avg_freq = new float[BUFFER_SIZE / 2];

    // average each frequency bin over every pixel of the accumulated image
    for (int i = 0; i < gaussian_w * gaussian_w; i++) {
        for (int j = 0; j < BUFFER_SIZE / 2; j++) {
            avg_freq[j] += fft_buffer[i][j * 2 + 1]; // imaginary slot used as RR storage
        }
    }

    int index = 0;
    float max = 0.0f;
    for (int i = 0; i < BUFFER_SIZE / 2; i++) {
        avg_freq[i] /= (BUFFER_SIZE / 2);
        if (avg_freq[i] > max) {
            max = avg_freq[i];
            index = i;
        }
    }
    return index;
}
```

@huq02
Author

huq02 commented May 18, 2022 via email

@huq02
Author

huq02 commented May 19, 2022 via email

@huq02
Author

huq02 commented May 26, 2022 via email

@SpicyYeol
Collaborator

Hi,
I have got HR, HRV, and RR. I have some questions:
(1) When I train on the VIPL_HR dataset, why doesn't the val_loss converge? The error is relatively large. Is there a problem with the preprocessing?

  • I think there is. VIPL_HR needs some preprocessing skill, and I'm still figuring it out. When I looked at PhysFormer's training set, there are more preprocessing steps, but they are not publicly available.

(2) Do you know how to get UBFC2's label (groundtruth.txt)?

@huq02
Author

huq02 commented May 26, 2022 via email

@SpicyYeol
Collaborator

@huq02
Author

huq02 commented May 26, 2022 via email

@huq02
Author

huq02 commented May 27, 2022 via email

@SpicyYeol
Collaborator

@huq02
Author

huq02 commented May 30, 2022 via email

@SpicyYeol
Collaborator

Yes, I have the same situation in my experiments, so there are many timing issues.

I guess that is one of the reasons to use a long signal.

I'm experimenting with various timing issues right now, so I'll tell you when I solve it.

@huq02
Author

huq02 commented Jun 1, 2022 via email

@huq02
Author

huq02 commented Jun 15, 2022 via email

@SpicyYeol
Collaborator

No, it must be shared with the permission of the data creator.
From my experience, use V4V rather than PURE.

@SpicyYeol
Collaborator

Which model did you successfully train with?

@huq02
Author

huq02 commented Oct 11, 2022 via email

@huq02
Author

huq02 commented Oct 11, 2022 via email

@huq02
Author

huq02 commented Oct 11, 2022 via email

@nizhezhiwei

Have you solved the VIPL_HR preprocessing problem? I found an issue I'd like to discuss: I don't use a val dataset. The train_loss (p1_p10) can converge, MAE: 7.6 bpm (p1_v1_source3.hdf5), but the train_loss (p1_p46) is NaN. I don't know why; have you run into a similar problem?

You can get information in main/utils/seq_preprocess.py

Hello:
I'm also studying rPPG. I have implemented PhysNet, PhysFormer, and RTRPPG, and trained them on the UBFC dataset with MSE loss and negative Pearson loss. However, my best results are RMSE 4.3, MAE 3.3, PC 0.96; do you have any better results?
Besides, I don't know how to preprocess the VIPL dataset. Can I get your preprocessing sequence for the VIPL dataset?
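For reference, a negative Pearson loss of the kind mentioned can be sketched as follows (an illustrative NumPy version; PhysNet-style implementations apply it per clip on PyTorch tensors):

```python
import numpy as np

def neg_pearson_loss(pred, label):
    """1 - Pearson correlation between predicted and ground-truth rPPG waveforms.

    Perfectly correlated waveforms give a loss of 0; anti-correlated give 2.
    Being scale- and offset-invariant, it only penalizes waveform shape.
    """
    pred = pred - pred.mean()
    label = label - label.mean()
    r = (pred * label).sum() / np.sqrt((pred ** 2).sum() * (label ** 2).sum())
    return 1.0 - r

wave = np.sin(np.linspace(0, 20, 300))
print(neg_pearson_loss(wave, 2 * wave + 1))  # ≈ 0: scaling and shifting don't matter
```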

@nizhezhiwei

Hi, why is the HR value so high? I don't know the reason; did you run into this problem? Is sample_rate the sampling frequency (Hz): 60, 100, or 120?

  • (quoting an earlier reply) Try to use bvp.bvp in BioSPPy; this function uses the Elgendi algorithm: bvp.bvp(signal, sampling_rate). https://biosppy.readthedocs.io/en/stable/biosppy.signals.html#biosppy.signals.bvp.bvp
