Necessary steps for running MobilityNet analysis scripts #624
Did you check the docs? Also, you run the analysis scripts after you get the data: you first define the spec, then go out and do the trips, and then run the analysis scripts. Does your custom app have the ionic plugin?
Thank you for pointing out the doc.
Which plugin are you referring to? I have several questions regarding the doc:
So you are only concerned about accuracy, not power?
From OSM (https://openstreetmap.org/) by using the "Map Data" option under the Map Layers. I've actually been using http://geojson.io/ to create polygons and http://geojson.io and https://open-polyline-decoder.60devs.com/ to create polylines. Let me update the sample spec.
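As background for the polyline tooling mentioned above: route polylines of this kind are commonly stored in Google's encoded polyline format, which tools like the decoder linked above unpack into coordinates. A minimal decoder sketch (assuming the standard precision of 5, i.e. coordinates scaled by 1e5; this is an illustration, not code from the MobilityNet repos):

```python
def decode_polyline(encoded, precision=5):
    """Decode a Google-encoded polyline string into a list of (lat, lon) pairs."""
    coords, index, lat, lon = [], 0, 0, 0
    factor = 10 ** precision
    while index < len(encoded):
        for is_lon in (False, True):
            shift, result = 0, 0
            while True:
                b = ord(encoded[index]) - 63  # chars are offset by 63
                index += 1
                result |= (b & 0x1F) << shift  # low 5 bits carry payload
                shift += 5
                if b < 0x20:  # high bit clear marks the last chunk
                    break
            # zig-zag decode: low bit is the sign
            delta = ~(result >> 1) if result & 1 else result >> 1
            if is_lon:
                lon += delta
            else:
                lat += delta
        coords.append((lat / factor, lon / factor))
    return coords
```

The canonical example string `"_p~iF~ps|U_ulLnnqC_mqNvxq`@"` decodes to three points starting at (38.5, -120.2).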
Do you have a custom server branch in addition to a custom phone app?
I confirm that I do have …
That's correct.
I am using the docker based …
@xubowenhaoren I have updated the spec docs and the supporting notebooks. I'm a bit confused about this:
Can you clarify the current architecture in greater detail? Is the data from the IMU sent to the e-mission server, stored on the phone, or sent to the AWARE server? If it is sent to the e-mission server, do you have a new data type for it on the server?
Do you want to send the data to your own server, or directly to a public server? There used to be a public server for MobilityNet. I had to take it down when I left Berkeley, but I can set it up again on the DFKI server.
Sorry for the confusion! The IMU integration only includes local data collection due to the scope of the project. While the IMU data might be used to improve the motion detection for wheelchairs, currently the IMU data will not be used in the MobilityNet data analysis.
We have no problems with sharing the data. But does it take a lot of time/resources to set up the MobilityNet server? We are actually approaching the end of this semester-long project, so I need to perform the data analysis in a speedy fashion.
No, the MobilityNet server is just an e-mission server from the …
I see. I will try to set it up myself first. This should simplify the data importing/exporting process.
@xubowenhaoren I went ahead and set this up at http://exv-91200.sb.dfki.de:6638. After you make your spec, you can upload it to the server using … Then, if you click on the link at https://e-mission.eecs.berkeley.edu/#/client_setup?new_client=emevalzephyr
you can try it with the sample spec first. Upload it with your own email (e.g. your_school_email@your_college.edu). Use … You should be able to select the sample spec and play with it. Tell me when you are done and I can delete it so it doesn't confuse the data :)
Thank you for setting it up for me! I actually also set up a docker container with the …
@xubowenhaoren the … You might want to look at https://github.com/MobilityNet/mobilitynet-analysis-scripts/blob/master/Data_exploration_template.ipynb to see the tree and the explorations possible.
Thank you for the reply.
Does this mean I can connect the …
@xubowenhaoren the client is designed for data collection, not data retrieval or analysis. It is basically a different UI skin on apps that include the ionic plugin. You can try to salvage the existing data by manually creating the temporal ground truth (assuming you have it). The temporal ground truth transitions required are defined here: … If you: …
you should be able to run the analysis scripts. If that is too complicated, you can also re-do the predefined trips. Note that the analysis scripts don't actually create a new seed - they just allow you to retrieve the ground truth along with sensed entries so that you can experiment with various featurizations and ML algorithms.
I see. Then I think it's easier to re-do the predefined trips with both the … More on generating a new model seed - do you have existing resources for this?
I was just typing this out at the same time as you! If you want to create a new seed with your existing trips, but not evaluate the section segmentation and/or trajectory matching, it is a bit tricky. Unfortunately, that only works on the old moves format, since that is the only format in which I had a large amount of labeled data. We'd have to create a new class, similar to … I can do that, but not immediately. Or if you want to do it, I can review it...
I see. I will contact my project admin regarding the next steps. |
Before doing that, you might want to see whether the segmentation is accurate and it is only the mode inference that is incorrect, or whether both have errors. If the segmentation has errors, you should really use the MobilityNet/emeval procedure. Note also that you would not have to write code from scratch: the existing mode inference pipeline (https://github.com/e-mission/e-mission-server/blob/master/emission/analysis/classification/inference/mode/pipeline.py) already has the code to extract the exact same features from the e-mission format (that's how we can train on moves format but predict on emission format). You would just have to train instead of predicting after the featurization.
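As a rough illustration of "train instead of predict after the featurization": once featurization has produced a vector per section and you have ground-truth modes, fitting any classifier yields a new seed. The sketch below uses a toy nearest-centroid model as a stand-in; the names `train_mode_seed` and `predict_mode` are hypothetical, and the actual e-mission seed format and model class are not shown here:

```python
from collections import defaultdict

def train_mode_seed(feature_rows, labels):
    """Fit a toy nearest-centroid model mapping per-section feature vectors
    to ground-truth modes (a stand-in for a real classifier seed)."""
    sums, counts = {}, defaultdict(int)
    for row, mode in zip(feature_rows, labels):
        acc = sums.setdefault(mode, [0.0] * len(row))
        for i, v in enumerate(row):
            acc[i] += v
        counts[mode] += 1
    # one centroid per mode
    return {m: [v / counts[m] for v in s] for m, s in sums.items()}

def predict_mode(seed, row):
    """Predict the mode whose centroid is closest to the feature row."""
    sqdist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(seed, key=lambda m: sqdist(seed[m], row))
```

The same train/predict split applies if the toy model is swapped for a real one: training consumes (features, ground truth) pairs from the collected trips, prediction consumes features alone.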
Right. I did notice that we had cases where multiple trips were segmented into one trip. I suppose I should really use the MobilityNet client + server combination.
Unfortunately, I only have roughly 1 week left before the mandatory completion of this quarter-long project. Thus I was really looking for a simplified round of data collection & analysis that could generate useful results for future studies. Given the time constraint, it is most likely that I will not be able to finish the model training work.
Well, if you think you are blocked by the script to generate the seed, I can try to work on it over the weekend. However, the script will generate the seed based on the current featurization, which is largely based on location data. We used the set of features that Zheng et al. used on the GeoLife dataset, where they had no accelerometer data and a 2 sec interval between location points. You have accelerometer data and a 30 sec interval between location points. I assume you want to use the IMU data as well to generate features (that's why you collected it). That will require you to experiment with the data and try out different features and different ML algorithms. So maybe you should focus on collecting high quality ground truth data this semester and work on the ML next semester. Let's see what your project admin says.
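For context, the location-only features discussed above (the GeoLife set is built largely from distances, speeds, and accelerations between consecutive points) can be sketched as follows. The function name `section_features` and the exact feature list are illustrative, not the server's actual featurization:

```python
import math

def haversine_m(p1, p2):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6371000.0  # mean Earth radius in meters
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def section_features(points):
    """points: list of (lat, lon, ts) tuples for one section.
    Returns location-only features of the GeoLife flavor:
    total distance plus speed and acceleration statistics."""
    dists = [haversine_m(p[:2], q[:2]) for p, q in zip(points, points[1:])]
    dts = [q[2] - p[2] for p, q in zip(points, points[1:])]
    speeds = [d / dt for d, dt in zip(dists, dts) if dt > 0]
    accels = [(s2 - s1) / dt for s1, s2, dt in zip(speeds, speeds[1:], dts[1:]) if dt > 0]
    return {
        "distance_m": sum(dists),
        "mean_speed": sum(speeds) / len(speeds),
        "max_speed": max(speeds),
        "max_accel": max(accels, default=0.0),
    }
```

With a 30 sec sampling interval these point-to-point speeds are much coarser than at 2 sec, which is exactly why features from the IMU stream could add value here.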
Thanks for setting up the server! However, after I scanned the QR code and logged in with …
@xubowenhaoren the emTripLog app does not support cleartext. But I assumed that you would want to run the MobilityNet skin with your app so you could have the ground truth for the IMU data as well. Since your app is also forked from e-mission and has the ionic plugin included, it should launch your app when you scan the QR code. If it doesn't, I might need to give you some ionic config information. LMK
Thank you for pointing that out; I misunderstood earlier. I switched to my own build and clicked https://e-mission.eecs.berkeley.edu/#/client_setup?new_client=emevalzephyr. My own build was able to successfully download and use the new UI. However, I noticed that I cannot find any sample specs to try out. Screenshots and logcat below.
Did you upload any spec? You can't retrieve a spec registered with your school email unless you upload it first.
Hmm, I misread your reply then. However, considering that the sample spec would be very different from my evaluation spec, should I still upload the sample spec now?
There can be multiple specs for one user, since you may want to test in more than one location.
I see. I will try uploading the sample spec and explore the UI.
I was able to upload the sample spec by running …
I can see selections for sensing settings and calibrations. When I click on the calibration list, I confirm that I can view the same set of lists as in the sample doc. However, I noticed that the map doesn't refocus on the specific trip. I also noticed that the map doesn't show any markers, even when I manually zoomed to that area. My questions: …
I said I would delete the sample spec: "Tell me when you are done and I can delete it so it doesn't confuse the data :)" Let me see if I can reproduce your problem with the spec, and then I can delete it.
Actually, I'm pretty sure I know what the problem is, even without investigating further. We changed the format of the filled spec as part of the reroute changes (MobilityNet/mobilitynet-analysis-scripts#48, fixing MobilityNet/mobilitynet.github.io#11). I just need to change the code to support the new format, by implementing MobilityNet/mobilitynet-analysis-scripts@bb91f3c in the JS code as well.
While making this change, I discovered that the sample spec was only valid until …
Updated sample spec: MobilityNet/mobilitynet-analysis-scripts#53
This problem from #624 (comment) is fixed in e-mission/e-mission-phone#747
Hello,
In issue #616, we are working on an app with a custom IMU sensor. We wish to use our custom app in the data collection instead of the standard evaluation app. We've already set up our servers and identified the routes for the pre-defined trips.
Then, before the actual data collection, what are the other necessary steps to run MobilityNet analysis scripts?