
Necessary steps for running MobilityNet analysis scripts #624

Open
xubowenhaoren opened this issue Feb 25, 2021 · 33 comments

@xubowenhaoren
Contributor

Hello,

In issue #616, we are working on an app with a custom IMU sensor. We wish to use our custom app in the data collection instead of the standard evaluation app. We've already set up our servers and identified the routes for the pre-defined trips.

Then, before the actual data collection, what are the other necessary steps to run MobilityNet analysis scripts?

@shankari
Contributor

shankari commented Feb 26, 2021

Did you check the docs?
https://github.com/MobilityNet/mobilitynet-analysis-scripts/#contributing-additonal-data-and-experiments

Also, you run the analysis scripts after you get the data. You first define the spec, then go out and do the trips, and then run the analysis scripts.

Does your custom app have the ionic plugin?

@xubowenhaoren
Contributor Author

Thank you for pointing out the doc.

Does your custom app have the ionic plugin?

Which plugin are you referring to?

I have several questions regarding the doc:

  1. The purpose of data collection in our project is to 1) collect raw data with the IMU sensor for future use and 2) run a basic analysis of the motion mode prediction accuracy. Given that, can I have just one experiment phone running in high-accuracy mode and no other control phones?
  2. How do I get the osm_id for a particular place and the route_waypoints for a particular route? Link: https://github.com/MobilityNet/mobilitynet-analysis-scripts/blob/master/spec_creation/evaluation.spec.sample#L123
  3. For the spec upload, what URL should I use since I'm using my custom branch build?

@shankari
Contributor

shankari commented Mar 2, 2021

Which plugin are you referring to?

    "cordova-plugin-ionic": "5.4.7",

The purpose of data collection in our project is to 1) collect raw data with the IMU sensor for future use and 2) run a basic analysis of the motion mode prediction accuracy. Given that, can I have just one experiment phone running in high-accuracy mode and no other control phones?

So you are only concerned about accuracy, not power?

How do I get the osm_id for a particular place and the route_waypoints for a particular route? Link: https://github.com/MobilityNet/mobilitynet-analysis-scripts/blob/master/spec_creation/evaluation.spec.sample#L123

From OSM (https://openstreetmap.org/) by using the "Map Data" option under the Map Layers. I've actually been using http://geojson.io/ to create polygons, and http://geojson.io/ together with https://open-polyline-decoder.60devs.com/ to create polylines. Let me update the sample spec.
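
For concreteness, here is a hedged sketch of turning an encoded polyline into waypoints using the `polyline` package from PyPI; the dict field names and the spec fragment are hypothetical and should be checked against the sample spec linked above:

```python
# pip install polyline
import polyline

# An encoded polyline copied out of a routing tool; this particular string
# is the well-known documentation example, not a real evaluation route.
encoded = "_p~iF~ps|U_ulLnnqC_mqNvxq`@"

# polyline.decode() returns a list of (lat, lon) tuples.
route_waypoints = [[lat, lon] for (lat, lon) in polyline.decode(encoded)]

# Hypothetical spec fragment, loosely modeled on evaluation.spec.sample;
# verify the exact field names and coordinate order against the sample.
leg = {
    "id": "example_leg",
    "end_loc": {"osm_id": 123456789},  # from the "Map Data" layer on OSM
    "route_waypoints": route_waypoints,
}
print(leg["route_waypoints"][:2])  # [[38.5, -120.2], [40.7, -120.95]]
```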

For the spec upload, what URL should I use since I'm using my custom branch build?

Do you have a custom server branch in addition to a custom phone app?

@xubowenhaoren
Contributor Author

I confirm that I do have "cordova-plugin-ionic": "5.4.7" in my project.

So you are only concerned about accuracy, not power?

That's correct.

Do you have a custom server branch in addition to a custom phone app?

I am using the Docker-based gis-based-mode-detection branch. Would this branch work? If yes, what URL should I use? Or can I skip this step without affecting the data analysis?

@shankari
Contributor

shankari commented Mar 3, 2021

@xubowenhaoren I have updated the spec docs and the supporting notebooks.
MobilityNet/mobilitynet.github.io@c01e583#diff-84e7ae9a65d52029b264d13d8f14b027ec5018289f40a4e1a2b02e1afeaa75d5

I'm a bit confused about this:

I am using the docker based gis-based-mode-detection branch.

Can you clarify the current architecture in greater detail? Is the data from the IMU sent to the e-mission server, stored on the phone, or sent to the AWARE server? If it is sent to the e-mission server, do you have a new data type for it on the server?

Would this branch work? If yes, what URL should I use then? Or can I skip this step without affecting the data analysis?

Do you want to send the data to your own server, or directly to a public server? There used to be a public server for MobilityNet. I had to take it down when I left Berkeley, but I can set it up again on the DFKI server.

@xubowenhaoren
Contributor Author

Is the data from the IMU sent to the e-mission server, stored on the phone, or sent to the AWARE server? If it is sent to the e-mission server, do you have a new data type for it on the server?

Sorry for the confusion! The IMU integration only includes local data collection due to the scope of the project. While the IMU data might be used to improve the motion detection for wheelchairs, currently the IMU data will not be used in the MobilityNet data analysis.

Do you want to send the data to your own server, or directly to a public server?

We have no problems with sharing the data. But does it take a lot of time/resources to set up the MobilityNet server? We are actually approaching the end of this semester-long project so I need to perform the data analysis in a speedy fashion.

@shankari
Contributor

shankari commented Mar 3, 2021

No, the MobilityNet server is just an e-mission server from the emevalzephyr branch. Again, I can set one up for you, or you can run it yourself and then export the data at the end of the project.

@xubowenhaoren
Contributor Author

I see. I will first try to set it up myself. This should simplify the data import/export process.

@shankari
Contributor

shankari commented Mar 5, 2021

@xubowenhaoren I went ahead and set this up at http://exv-91200.sb.dfki.de:6638

After you make your spec, you can upload it to the server using $ python3 upload_validated_spec.py from the mobilitynet analysis scripts.

Then, if you click on the link at https://e-mission.eecs.berkeley.edu/#/client_setup?new_client=emevalzephyr
it should customize your app with the calibration UI. LMK if it doesn't work - I may have to send you an ionic config file.

@shankari
Contributor

shankari commented Mar 5, 2021

you can try it with the sample spec first. Upload it with your own email (e.g. your_school_email@your_college.edu). Use ucb-sdb-android-1 to log in and bwx074@cs.washington.edu for the evaluation_author_email
https://github.com/MobilityNet/mobilitynet.github.io/blob/master/em-eval-procedure/collecting_new_data.md#install-the-evaluation-apps-on-the-test-phones-and-configure-them

You should be able to select the sample spec and play with it. Tell me when you are done and I can delete it so it doesn't confuse the data :)

@xubowenhaoren
Contributor Author

xubowenhaoren commented Mar 5, 2021

Thank you for setting it up for me! I actually also set up a docker container with the emevalzephyr branch on my mac locally. Before I try the client setup though, I do want to ask a few clarification questions:

  1. I currently have some pre-defined trip data collected & analyzed using a server based on the gis-based-mode-detection branch. How can I use this data to run the data analysis on the new emevalzephyr/MobilityNet server?
  2. While looking at the segmented trips with the gis-based-mode-detection server, I noticed that some trips have incorrect motion modes. Given that we know the trip ground truth, how can I train a new model based on the existing seed and get a new seed with my ground truth?
  3. Should I move question 2 to a new issue?

@shankari
Contributor

shankari commented Mar 5, 2021

@xubowenhaoren the emevalzephyr client is designed to be used while you are collecting the data, so we can get high quality temporal ground truth in addition to spatial ground truth. The analysis scripts can then pull the ground truth ranges and arrange them into a tree (phone, settings, run, trip, section), each node of which has the locations for that segment associated with it. You can then create a new section for each of the ground truth section ranges and use it to train.

You might want to look at https://github.com/MobilityNet/mobilitynet-analysis-scripts/blob/master/Data_exploration_template.ipynb to see the tree and the explorations possible.
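
As a rough illustration of that tree, here is a hedged sketch based on the template notebook; the module, class, and key names are assumptions lifted from Data_exploration_template.ipynb and should be verified against it:

```python
# Hypothetical sketch of walking the (phone, settings, run, trip, section)
# tree; all names below are assumptions based on the template notebook.
import emeval.input.spec_details as eisd
import emeval.input.phone_view as eipv

DATASTORE_URL = "http://exv-91200.sb.dfki.de:6638"
AUTHOR_EMAIL = "your_school_email@your_college.edu"
SPEC_ID = "sample_spec_id"  # hypothetical; use your uploaded spec's id

sd = eisd.SpecDetails(DATASTORE_URL, AUTHOR_EMAIL, SPEC_ID)
pv = eipv.PhoneView(sd)

for phone_os, phones in pv.map().items():                   # phone OS level
    for phone_label, phone_detail in phones.items():        # phone level
        for r in phone_detail["evaluation_ranges"]:         # settings/run
            for tr in r["evaluation_trip_ranges"]:          # trip level
                for sr in tr["evaluation_section_ranges"]:  # section level
                    print(phone_os, phone_label,
                          tr["trip_id"], list(sr.keys()))
```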

@xubowenhaoren
Contributor Author

Thank you for the reply.

The emevalzephyr client is designed to be used while you are collecting the data, so we can get high quality temporal ground truth in addition to spatial ground truth.

Does this mean I can connect the emevalzephyr client to my gis-based-mode-detection server and salvage the existing data? Or must I connect the emevalzephyr client to the emevalzephyr/MobilityNet server and do the pre-defined trips again?

@shankari
Contributor

shankari commented Mar 5, 2021

@xubowenhaoren the client is designed for data collection, not data retrieval or analysis. It is basically a different UI skin on apps that include the ionic plugin.

You can try to salvage the existing data by manually creating the temporal ground truth (assuming you have it). The temporal ground truth transitions required are defined here:
https://github.com/e-mission/e-mission-server/blob/emevalzephyr/emission/core/wrapper/evaltransition.py

If you:

  • copy the data over to the emevalzephyr server,
  • create a spec with the spatial ground truth, and
  • manually create the temporal ground truth with those flags

you should be able to run the analysis scripts.

If that is too complicated, you can also re-do the predefined trips.

Note that the analysis scripts don't actually create a new seed - they just allow you to retrieve the ground truth along with the sensed entries so that you can experiment with various featurizations and ML algorithms.
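
If you do attempt the salvage route, a purely hypothetical sketch of backfilling one temporal ground truth entry could look like the following; the key name, transition value, and payload shape are all assumptions, so check evaltransition.py and the server's usercache handler for the real definitions:

```python
# Purely hypothetical sketch of backfilling a temporal ground-truth
# transition; key, transition value, and payload shape are assumptions.
import time
import requests

entry = {
    "metadata": {
        "type": "message",
        "key": "manual/evaluation_transition",  # assumed key name
        "write_ts": time.time(),
    },
    "data": {
        "transition": "START_EVALUATION_PERIOD",  # assumed flag value
        "trip_id": "commute_to_campus_0",          # hypothetical trip id
        "ts": 1614800000,                          # when the trip started
    },
}

# Assumes the standard e-mission usercache sync endpoint and payload.
requests.post(
    "http://exv-91200.sb.dfki.de:6638/usercache/put",
    json={"user": "ucb-sdb-android-1", "phone_to_server": [entry]},
)
```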

@xubowenhaoren changed the title from "Necessary steps for running MobilityNet analysis scripts with a custom E-mission app" to "Necessary steps for running MobilityNet analysis scripts" on Mar 5, 2021
@xubowenhaoren
Contributor Author

I see. Then I think it's easier to re-do the predefined trips with both the emevalzephyr client and server :)

More on generating a new model seed - do you have existing resources for this?

@shankari
Contributor

shankari commented Mar 5, 2021

I was just typing this out at the same time as you!

If you want to create a new seed with your existing trips, but not evaluate the section segmentation and/or trajectory matching, it is a bit tricky.
https://github.com/e-mission/e-mission-server/blob/f4ed2a7cca31410e01bc2ffde26822fe28e1b78e/emission/analysis/classification/inference/mode/seed/pipeline.py#L37
is the current code to create and save the model.

Unfortunately, that only works on the old moves format, since that is the only format in which I had a large amount of labeled data. We'd have to create a new class, similar to ModeInferencePipelineMovesFormat, that works with the new e-mission format.

I can do that, but not immediately. Or if you want to do it, I can review it...

@xubowenhaoren
Contributor Author

I see. I will contact my project admin regarding the next steps.

@shankari
Contributor

shankari commented Mar 5, 2021

Before doing that, you might want to see whether the segmentation is accurate and it is only the mode inference that is incorrect, or whether both have errors. If the segmentation has errors, you should really use the MobilityNet/emeval procedure.

Note also that you would not have to write the code from scratch; the existing pipeline (https://github.com/e-mission/e-mission-server/blob/master/emission/analysis/classification/inference/mode/pipeline.py) already has the code to extract the exact same features from the e-mission format.

(that's how we can train on the moves format but predict on the e-mission format)

You would just have to train instead of predicting after the featurization.
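
To make that concrete, here is a minimal sketch of the training step, assuming you already have a feature matrix from the existing featurization and ground truth labels from the spec; the scikit-learn classifier is illustrative, not necessarily what the seed pipeline itself uses:

```python
# Minimal sketch: fit a model on pipeline features instead of predicting.
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_seed(feature_matrix: np.ndarray, ground_truth_modes: np.ndarray,
               out_path: str = "seed_model.joblib") -> RandomForestClassifier:
    """Fit a classifier on features extracted by the e-mission pipeline.

    feature_matrix: one row per section, columns = pipeline features
    ground_truth_modes: one label per section, from the MobilityNet spec
    """
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(feature_matrix, ground_truth_modes)
    joblib.dump(model, out_path)  # persist the fitted model as the new seed
    return model
```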

@xubowenhaoren
Contributor Author

before doing that, you might want to see whether the segmentation is accurate and it is only the mode inference that is incorrect, or whether both have errors.

Right. I did notice that we had cases where multiple trips were segmented into one trip. I suppose I should really use the MobilityNet client + server combination.

Note also that you would not have to write code from scratch.

Unfortunately, I only have roughly 1 week left before the mandatory completion of this quarter-long project. Thus I was really looking for a simplified round of data collection & analysis that could generate useful results for future studies. Given the time constraint, it is most likely that I will not be able to finish the model training work.

@shankari
Contributor

shankari commented Mar 5, 2021

Well, if you think you are blocked by the script to generate the seed, I can try to work on it over the weekend.

However, the script will generate the seed based on the current featurization, which is largely based on location data.
https://github.com/e-mission/e-mission-server/blob/master/emission/analysis/classification/inference/mode/pipeline.py#L158

We used the set of features that Zheng et al. used on the GeoLife dataset, where they had no accelerometer data and a 2 sec interval between location points. You have accelerometer data and a 30 sec interval between location points.

I assume you want to use the IMU data as well to generate features (that's why you collected it). That will require you to experiment with the data and try out different features and different ML algorithms.
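
For example, here is a minimal sketch of accelerometer features one might experiment with; the column names are hypothetical placeholders for however the IMU data ends up being stored:

```python
# Minimal sketch of per-section accelerometer features; column names
# (x, y, z) are hypothetical placeholders for the stored IMU data.
import numpy as np
import pandas as pd

def accel_features(df: pd.DataFrame) -> dict:
    """Summarize one section's accelerometer trace."""
    mag = np.sqrt(df["x"]**2 + df["y"]**2 + df["z"]**2)
    return {
        "accel_mean": mag.mean(),
        "accel_std": mag.std(),
        "accel_p95": mag.quantile(0.95),
        # fraction of samples above 1.5 g, a crude "high activity" signal
        "accel_frac_high": (mag > 1.5 * 9.81).mean(),
    }
```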

So maybe you should focus on collecting high quality ground truth data this semester and work on the ML next semester.

Let's see what your project admin says.

@xubowenhaoren
Contributor Author

xubowenhaoren commented Mar 7, 2021

you can try it with the sample spec first. Upload it with your own email (e.g. your_school_email@your_college.edu). Use ucb-sdb-android-1 to log in and your_school_email@your_college.edu for the evaluation_author_email
https://github.com/MobilityNet/mobilitynet.github.io/blob/master/em-eval-procedure/collecting_new_data.md#install-the-evaluation-apps-on-the-test-phones-and-configure-them

You should be able to select the sample spec and play with it. Tell me when you are done and I can delete it so it doesn't confuse the data :)

Thanks for setting up the server! However, after I scanned the QR code and logged in with ucb-sdb-android-1, I was prompted with the same "cleartext traffic not permitted" error. Thus I didn't get a chance to enter your_school_email@your_college.edu for the evaluation_author_email. I know that I can fix this by running npx cordova plugin add cordova-plugin-cleartext in my own e-mission project. But I cannot do so for your pre-built emTripLog APK. What should I do then?

[screenshot: "cleartext traffic not permitted" error dialog]

@shankari
Contributor

shankari commented Mar 8, 2021

@xubowenhaoren the emTripLog app does not support cleartext. But I assumed that you would want to run the mobilitynet skin with your app so you could have the ground truth for the IMU data as well. Since your app is also forked from e-mission and has the ionic plugin included, it should launch your app when you scan the QR code.

If it doesn't, I might need to give you some ionic config information. LMK

@xubowenhaoren
Contributor Author

Thank you for pointing that out; I misunderstood earlier. I switched to my own build and clicked https://e-mission.eecs.berkeley.edu/#/client_setup?new_client=emevalzephyr. My own build successfully downloaded and used the new UI. However, I noticed that I cannot find any sample specs to try out. Screenshots and logcat below.

[screenshots: the emevalzephyr UI running in the edu.berkeley.eecs.emission build]

2021-03-07 18:26:33.523 29081-29407/edu.berkeley.eecs.emission D/SERVER: Handling local request: http://localhost/json/connectionConfig.json
2021-03-07 18:26:33.544 29081-29081/edu.berkeley.eecs.emission I/chromium: [INFO:CONSOLE(104)] "About to return message {"key_list":["config/evaluation_spec"],"start_time":0,"end_time":1615170393,"user":"MY_ACTUAL_SCHOOL_EMAIL"}", source: http://localhost/js/eval.js (104)
2021-03-07 18:26:33.544 29081-29081/edu.berkeley.eecs.emission I/chromium: [INFO:CONSOLE(105)] "getRawEntries: about to get pushGetJSON for the timestamp", source: http://localhost/js/eval.js (105)

@shankari
Contributor

shankari commented Mar 8, 2021

Did you upload any spec? You can't retrieve a spec registered with your school email unless you upload it first.

@xubowenhaoren
Contributor Author

you can try it with the sample spec first. Upload it with your own email (e.g. your_school_email@your_college.edu). Use ucb-sdb-android-1 to log in and [your_school_email@your_college.edu] for the evaluation_author_email
https://github.com/MobilityNet/mobilitynet.github.io/blob/master/em-eval-procedure/collecting_new_data.md#install-the-evaluation-apps-on-the-test-phones-and-configure-them

You should be able to select the sample spec and play with it. Tell me when you are done and I can delete it so it doesn't confuse the data :)

Hmm, I misread your reply then. However, considering that the sample spec would be very different from my evaluation spec, should I still upload the sample spec now?

@shankari
Contributor

shankari commented Mar 8, 2021

Hmm, I misread your reply then. However, considering that the sample spec would be very different from my evaluation spec, should I still upload the sample spec now?

There can be multiple specs for one user, since you may want to test in more than one location.

@xubowenhaoren
Contributor Author

I see. I will try the sample spec upload and experience the UI.

@xubowenhaoren
Contributor Author

I was able to upload the sample spec by running

python upload_validated_spec.py http://exv-91200.sb.dfki.de:6638 [MY_ACTUAL_SCHOOL_EMAIL] evaluation.spec.filled.json

I can see selections for sensing settings and calibrations. When I click on the calibration list, I confirm that I can view the same set of lists as in the sample doc. However, I noticed that the map doesn't refocus on the specific trip. I also noticed that the map doesn't show any markers, even when I manually zoomed to that area.

My questions:

  • Is this the expected behavior or is there something wrong?
  • How do I delete the sample spec, since you mentioned that I need to do that?

[screenshot: calibration map showing no markers]

@shankari
Contributor

shankari commented Mar 9, 2021

How do I delete the sample spec, since you mentioned that I need to do that?

I said I would delete the sample spec:

"Tell me when you are done and I can delete it so it doesn't confuse the data :)"

Let me see if I can reproduce your problem with the spec, and then I can delete it.

@shankari
Contributor

shankari commented Mar 9, 2021

Let me see if I can reproduce your problem with the spec, and then I can delete it.

Actually, I'm pretty sure I know what the problem is, even without investigating further. We changed the format of the filled spec as part of the reroute changes MobilityNet/mobilitynet-analysis-scripts#48, fixing MobilityNet/mobilitynet.github.io#11.

I just need to change the code to support the new format, by implementing MobilityNet/mobilitynet-analysis-scripts@bb91f3c in the JS code as well.

@shankari
Contributor

While making this change, I discovered that the sample spec was only valid until 2019-06-22. Bumping up the validity to 10 years into the future so people can use it without thinking too much. I will also upload the spec so I can test against it...

@shankari
Contributor

Updated sample spec MobilityNet/mobilitynet-analysis-scripts#53

@shankari
Contributor

This problem

I can see selections for sensing settings and calibrations. When I click on the calibration list, I confirm that I can view the same set of lists as in the sample doc. However, I noticed that the map doesn't refocus on the specific trip. I also noticed that the map doesn't show any markers, even I manually zoomed to that area.

from #624 (comment)

is fixed in e-mission/e-mission-phone#747
