Time logging of questions #257

Closed
mberg opened this Issue Nov 21, 2016 · 59 comments

mberg commented Nov 21, 2016

As a user, I would like to be able to see how long an enumerator spent on each question in the form.

Tracking swiping, moving back and forth, etc. would also be valuable, but the time spent per question would provide the most upfront value.

Ideally, this would be part of a metadata file which can be included with the submissions.

ChrisCorey commented Jan 3, 2017

I'm surprised this hasn't been addressed. There are at least 4 reasons to add this feature:

  1. As an indicator of data quality. When an item, or a section of items, is answered too quickly, it can indicate respondent indifference to the substance of the items and/or falsification by an interviewer.
  2. In pretesting, timing is needed for budgetary reasons. In large surveys, decisions may need to be made about items to retain or exclude in order to stay within budget estimates for interviewer labor hours.
  3. Our IRB has required timing as a proxy for respondent burden. Excessive time spent on a bank of sensitive items has been taken as an indication that respondents are having difficulty with the subject matter.
  4. In attitude research, time spent on an item can lead to polarization of item responses for some types of respondents.

I have support for undertaking this from our Manager of Emerging Technology and Engineering, who says "we ought to do this." That is different from saying we have money for doing it. We do have a very senior developer reviewing the code to scope the task. I would be interested to hear whether others have looked at this and have general comments about the best way to approach it.

Member

lognaturel commented Jan 11, 2017

@ChrisCorey I think there is significant interest in this feature but I don't believe anyone has looked at the technical approach recently. Could you please ask your senior dev to jumpstart the technical conversation here or on the dev Slack at http://slack.opendatakit.org/? That will help bring it to the front of everyone's minds.

chrislrobert commented Jan 11, 2017

ChrisCorey commented Jan 13, 2017

Member

lognaturel commented Jan 17, 2017

@ChrisCorey Keep us posted. Hopefully someone will have the cycles to take this on soon.

@chrislrobert That's very good to know. Once someone is ready to take this on it would be good to see if it's possible to do some kind of coordination.

Contributor

nap2000 commented Jan 21, 2017

@mberg This sounds like a feature that is worth adding.

@chrislrobert It would make sense to me to leverage the work done by SurveyCTO, and presumably one of the objectives would be to make the ODK Collect implementation compatible with the CTO solution where possible. However, do you have any specific thoughts on how the XForms specification could be used to implement this? No need to list all 990,000.

Anyone have any thoughts on how we could use the XForms spec?

Member

lognaturel commented Jan 23, 2017

@MartijnR, would appreciate your thoughts on how to approach this in an XFormsy way.

MartijnR commented Jan 23, 2017

XForm 'Actions' would be the XFormsy way to do this, I think. If one of the existing XForms events does not meet the requirements, we could create (a) custom event(s). It depends on exactly what it should measure. It may require two actions (start and end, just like the already supported metadata for overall duration). At first glance that makes the most sense to me.

In ODK we don't have Actions, but use a (fairly equivalent) custom feature: "preload items" (note: CommCare did replace preload items with Actions). So if there is no desire to implement XForm Actions yet, a quick way of doing this would be to create another preload item (or two).

I think either of these options could probably be adopted (if we flesh them out further) in The Spec.
[edit]One thing to figure out, though, is how to link them with a particular page or question.[/edit]
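For reference, the existing preload items for overall duration in the ODK XForms vocabulary look like the first two binds below; a per-question variant would need something new. In this sketch, only the timestamp binds are real spec; the third bind (its preload name and params) is purely hypothetical:

```xml
<!-- Existing preload items for overall duration (start/end timestamps): -->
<bind nodeset="/data/meta/timeStart" type="dateTime"
      jr:preload="timestamp" jr:preloadParams="start"/>
<bind nodeset="/data/meta/timeEnd" type="dateTime"
      jr:preload="timestamp" jr:preloadParams="end"/>

<!-- Hypothetical per-question analogue; name and params are illustrative only: -->
<bind nodeset="/data/meta/questionTimes" type="binary"
      jr:preload="timing" jr:preloadParams="per-question"/>
```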

chrislrobert commented Jan 23, 2017

Contributor

nap2000 commented Jan 24, 2017

@MartijnR Do you think you would implement support for timing in Enketo? And related to that @chrislrobert in your solution how did you calculate timings for a page of questions (field-list / table-list)?

chrislrobert commented Jan 24, 2017

MartijnR commented Jan 24, 2017

@MartijnR Do you think you would implement support for timing in Enketo?

Only if we can figure out how to do this in a solid manner, and if a sponsor pushes this. (It would be for 'pages mode' only.) It has not entered our roadmap so far.

Chris brings up many good points. I figured "page-flip-start" and "page-flip-end" events would be the core of this feature, but indeed it's much more complex than that if you look at editing draft records and users flipping back to a previous page.

A lot of this complexity seems specific to the finer details of how the data collection client has implemented the UI around the forms (the stuff that is not described in the spec). I'm starting to wonder if for that reason this is maybe better done outside of XForms. Or we could use a hybrid option with some interoperability potential where we simply agree on adding a new meta element (e.g. orx:meta/orx:timing) to the spec with a binary datatype (i.e. an attachment), optionally with a predefined format, and leave the actual implementation of populating that file to the client (i.e. "magic"). It sounds like this would then be almost what SurveyCTO is doing. The presence of this meta element could signal to the client to enable the audit feature (in conjunction with a setting on the app perhaps) or show a 'feature not supported' warning.

For the hybrid option with an agreed format, we wouldn't have the ideal interoperability as timing data from submissions by both Enketo and ODK Collect for the same survey cannot be reliably combined but at least a server would be able to process timing data submitted by both.

[edited]
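A concrete sketch of this hybrid meta-element idea might look like the following; the orx:timing element name, and treating it as a binary node that points at a client-produced attachment, are exactly the parts proposed above and not yet agreed (only orx:instanceID is existing spec):

```xml
<orx:meta>
  <orx:instanceID/>
  <!-- hypothetical: holds the filename of a client-produced timing attachment -->
  <orx:timing/>
</orx:meta>

<bind nodeset="/data/orx:meta/orx:timing" type="binary"/>
```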

Contributor

nap2000 commented Feb 4, 2017

I will be happy to put together a solution for this feature request.

@MartijnR I like your idea of implementing a hybrid approach, with the timing component being implemented in Collect or another client. However, I'm still keeping an open mind on the solution.

The following link, proposal doc, is to a document where I have consolidated some of the ideas already posted here plus some more thoughts. Again, no solution is intended as yet. Please add, critique, or rule out options, assumptions, etc. directly in the document or, if you post directly to this issue, I will update the document on your behalf.

Member

yanokwa commented Feb 6, 2017

@nap2000 Could you please add commenting privileges for anyone who has the link to that document?

Contributor

nap2000 commented Feb 7, 2017

I have updated the link to allow commenting (hopefully).

Member

yanokwa commented Feb 7, 2017

@joeflack4 Your fork of Collect has timing in a sidecar file right? Any regrets on that approach?

joeflack4 commented Feb 7, 2017

@yanokwa I would have to double check, but yes, I believe the logs are stored as space/tab delimited lines in a plain text file.

This was before I started here. No regrets yet, though plain text approaches tend to get iterated on eventually. I believe right now we're parsing them in Python, after an earlier attempt with R. This is something that James can elaborate more on.

Member

yanokwa commented Feb 7, 2017

It'd be good to understand what kinds of things you track and, of those, which are actually useful. Also, do you submit those files to a server, or are they pulled off the SD card?

joeflack4 commented Feb 7, 2017

Definitely. I hope we can allocate some of our resources to furthering ODK, as it looks like we will be sticking with the platform for a while. I know very little about Collect's codebase or even our innovations. We collect quite a lot of this log data, so I'm assuming they go to the server. However, I do not believe we've modified Aggregate. I'll confer this week and get back with some details.

Member

yanokwa commented Feb 7, 2017

@ChrisCorey I want to make sure we aren't forgetting you! Be sure to review Neil's proposal doc so we capture your use case!

joeflack4 commented Feb 7, 2017

@yanokwa I went ahead and looked into how our logs are submitted on our JHU fork. They are submitted as flat text file attachments to ODK aggregate. James and I have our hands full this week, but we can speak further on this topic and others soon.

Contributor

nap2000 commented Feb 8, 2017

@chrislrobert Do you have any comments or suggestions on the proposal for adding collection of timing information to the ODK Collect codebase?

Member

lognaturel commented Feb 8, 2017

The one high-level thing I'd really like to hear from @mberg, @ChrisCorey and anyone else who has a need for this feature is whether you believe that total time spent per question is sufficient information. It sounds like it's been good enough for SurveyCTO's users and I believe it would be enough to satisfy @ChrisCorey's stated needs.

An alternate option would be to log a specific set of events such as "swipe left", "swipe right", "value change". This would allow for somewhat richer analysis and a deeper understanding of what actually happened. For example, if an enumerator spent a total of 10 minutes in a question and entered it twice, is it because s/he quickly skipped it the first time and then went back to it and then spent a long time in it, or...?

@benb111's dissertation "Algorithmic Approaches to Detecting Interviewer Fabrication in Surveys" is really interesting and perhaps relevant to this conversation. He added some event logging to ODK (in a way that we unfortunately can't use) and found that he could use these to detect falsified data. His algorithms for doing so are implemented here. Section 5.4 describes the events he logged and 5.5 the aggregate values he computed from those events. In Section 6.3, he says:

Given that user-trace metadata can help, one could ask how much detail needs to be recorded in the traces to make the most effective predictions. One possibility is that it is sufficient to record just the time spent on each question. If this were true, it would mean simpler implementations and smaller log files. Thus, it is important to justify the increased complexity that is required to record more detailed user trace logs, with entries for events like edits. I argue here that this level of detail really does help.

Of course, a lot of what he did is not practical to generalize but I think it still provides some interesting insights into what could eventually be done with this kind of logged data. The big disadvantage to event logging is that it would need to be processed to provide any real value whereas total time data might be of some use on its own (someone could skim total time spent across many instances and spot outliers).

I want to make sure we've at least considered this as an option.

cc @aflaxman since he reviewed this work.

Member

lognaturel commented Feb 8, 2017

Um. Plot twist. Ben's logging code was actually added in here... sigh.

Member

lognaturel commented Feb 8, 2017

To be clear, the logging code I reference above is undocumented and does not include any way to get the data off the phone (you need to manually copy a db file off the phones). We still have to design and implement a solution; I just didn't realize that code was in trunk until I searched for it.

If you want to try it out, I found a message from @mitchellsundt with brief instructions here.

Logging is enabled if the file "/sdcard/odk/log/enabled" exists.
The logging database will be "/sdcard/odk/log/activityLog.db"

chrislrobert commented Feb 8, 2017

Contributor

nap2000 commented Feb 8, 2017

Thanks @chrislrobert, it's good hearing about your experience. It seems that most of the value from the timing data has come since the server started doing some presentation, and that few people used the raw data files? Can you provide an example CSV file that you expect to be submitted? Also, how is this CSV file packed into the submission web request?

joeflack4 commented Feb 8, 2017

@yanokwa Also, to answer your question as to what our logs look like / what kind of information it logs, here is a short example snippet.

1478095110545	oP	managing_authority[1]	
1478095121658	LH	available[1]	
1478095121678	oR	available[1]	
1478095123225	LP	available[1]	yes
1478095123282	EP	consent_start[1]	
1478095124368	LP	consent_start[1]	
1478095124381	EP	consent[1]	
1478095125902	LP	consent[1]	
1478095125915	EP	begin_interview[1]	
1478095127273	LP	begin_interview[1]	yes
1478095127352	EP	participant_signature[1]/sign[1]	
1478095127352	EP	participant_signature[1]/checkbox[1]	
1478095128700	LP	participant_signature[1]/sign[1]	
1478095128700	LP	participant_signature[1]/checkbox[1]	1
1478095128761	EP	witness_auto[1]	
1478095130065	LP	witness_auto[1]	1
1478095130093	EP	facility_name[1]	
1478095133027	LP	facility_name[1]	Sinoko-Dispensary
1478095133058	EP	MFL_number[1]	
1478095135002	LP	MFL_number[1]	6
1478095135025	EP	position[1]	
1478095136097	EH	position[1]	
1478095136121	oP	position[1]	
1478095164755	LH	facility_type[1]	nursing_maternity
1478095164795	oR	facility_type[1]	nursing_maternity
1478095166611	LP	facility_type[1]	pharmacy
1478095166643	EP	managing_authority[1]	
1478095167676	EH	managing_authority[1]	
1478095167700	oP	managing_authority[1]	
1478095182617	LH	managing_authority[1]	
1478095182636	oR	managing_authority[1]	
1478095184908	SF	managing_authority[1]	
1478095185610	oP	managing_authority[1]	
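Assuming the first column is a millisecond epoch timestamp and that EP/LP mark entering and leaving a prompt (my reading of the event codes; the thread doesn't define them, and the other codes like oP/LH are ignored here), per-question totals could be derived from such a log with a short script:

```python
from collections import defaultdict

def question_durations(log_text):
    """Sum milliseconds between each EP (enter prompt) and the matching
    LP (leave prompt) event, per field. Event-code meanings are my guess
    from the sample log above, not a documented format."""
    open_since = {}            # field -> timestamp of the last unmatched EP
    totals = defaultdict(int)  # field -> total ms spent on that prompt
    for line in log_text.strip().splitlines():
        parts = line.split("\t")
        if len(parts) < 3:
            continue
        ts, event, field = int(parts[0]), parts[1], parts[2]
        if event == "EP":
            open_since[field] = ts
        elif event == "LP" and field in open_since:
            totals[field] += ts - open_since.pop(field)
    return dict(totals)

sample = (
    "1478095124381\tEP\tconsent[1]\t\n"
    "1478095125902\tLP\tconsent[1]\t\n"
    "1478095125915\tEP\tbegin_interview[1]\t\n"
    "1478095127273\tLP\tbegin_interview[1]\tyes\n"
)
print(question_durations(sample))
# -> {'consent[1]': 1521, 'begin_interview[1]': 1358}
```

Re-entering a field simply adds to its running total, which is one way the "total time per question" view could be produced from an event log.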
chrislrobert commented Feb 8, 2017

Contributor

nap2000 commented Feb 9, 2017

OK, adding a type: I guess it's a preload that adds an attachment. I think this is a good approach. I can't see the attachment, though. Is it there?

Member

lognaturel commented Feb 9, 2017

@nap2000, the attachment is linked from the top of @chrislrobert's post. It looks like

Field name,Total duration (seconds),First appeared (seconds into survey)
has_caseid[1]/contactinfo,7,3
has_caseid[1]/exercise_1,471,10

Sounds like the complexity of the log file doesn't really matter because in practice even the most minimal format requires processing to be useful and actionable. In other words, "users won't be able to use the data directly" could be an argument against logging more detailed info or increasing the complexity of the log file. But it's hard to make sense of even a simple log file and there will need to be a processing layer for this feature to be useful to a broad range of users no matter what the log file looks like.
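The "skim total time spent and spot outliers" processing layer described above is cheap to sketch against that CSV format (column names taken from the sample; the 300-second threshold is arbitrary):

```python
import csv
import io

CSV = """Field name,Total duration (seconds),First appeared (seconds into survey)
has_caseid[1]/contactinfo,7,3
has_caseid[1]/exercise_1,471,10
"""

def flag_slow_fields(csv_text, threshold_seconds=300):
    """Return the fields whose total duration exceeds a threshold --
    the simplest kind of outlier skim over the per-question totals."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Field name"] for row in reader
            if int(row["Total duration (seconds)"]) > threshold_seconds]

print(flag_slow_fields(CSV))
# -> ['has_caseid[1]/exercise_1']
```

In practice the same loop could just as easily flag suspiciously fast fields, which is the data-quality case ChrisCorey raises.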

Contributor

nap2000 commented Feb 12, 2017

Giving some thought to implementation.

There is an existing Logger, in org.odk.collect.android.database.ActivityLogger.

This was initially contributed as “Ben’s logging implementation” for logging user interactions and was originally named Logger. Was renamed by @mitchellsundt to ActivityLogger and extended to log additional application events such as “createDeleteInstancesDialog”.

It would seem to make sense to use the ActivityLogger for the user timing information, if only to keep all logging in a single class. Some changes could be:

  • Create a new logTimerEvent() method that writes the event to the timer log file as well as the activity log. Alternatively, the existing log parameters could be parsed to decide whether the event should also be written to the timer log.
  • Manage the life cycle of the timer log files associated with each instance.
  • Add additional calls to log timer events throughout the code as needed.
chrislrobert commented Feb 12, 2017

Contributor

nap2000 commented Feb 12, 2017

Thanks Chris, very good to get your experience on this. Did you measure how much the time changed due to network variation? A couple of seconds here and there might be acceptable.

Otherwise we could adopt your approach, possibly storing the current state in the instance database as an alternative to the instance file.

In order to get timestamps that are accurate relative to each other within a form editing session, perhaps we could record the timestamp on form open, along with elapsedRealtime(), and then at each event add the delta in elapsedRealtime() to that timestamp. If we did this, maybe we don't need to store the state of the timer, and we would wear any wall clock time variation if the user was to stop and then restart the editing session after shutting down their phone.
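The scheme described here (one wall-clock reading at form open, every event timestamped from a monotonic delta) can be sketched outside Android. On the device the monotonic source would be SystemClock.elapsedRealtime(); Python's time.monotonic() stands in below, and the class name is illustrative:

```python
import time

class SessionClock:
    """Sketch of the proposed approach: capture the wall clock once at
    form open together with a monotonic reading, then derive every
    event timestamp from the monotonic delta. Mid-session wall-clock
    changes therefore cannot reorder events within the session, at the
    cost of wearing any wall-clock drift across device restarts."""

    def __init__(self):
        self.wall_base = time.time()        # wall clock at form open
        self.mono_base = time.monotonic()   # monotonic reading at form open

    def event_timestamp(self):
        # wall clock at open + monotonic time elapsed since open
        return self.wall_base + (time.monotonic() - self.mono_base)

clock = SessionClock()
first = clock.event_timestamp()
second = clock.event_timestamp()
# second >= first holds even if the device clock is set backwards meanwhile
```

The trade-off matches the discussion: timestamps within one session are mutually consistent, but no timer state needs to be persisted between sessions.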

chrislrobert commented Feb 12, 2017

Contributor

nap2000 commented Feb 13, 2017

Yes, I understand. We have this problem now when showing the time to complete a survey. It just takes one enumerator to get to the end of the survey and not save it as finalised to make the averages meaningless. The server can always eliminate outliers, but it would be good to address this at the source.

I'm still keen to record real times. For example, if settings are reset and then the enumerator continues the next day, I'd like the system to attempt to record that; it seems your approach will not record that?

Member

yanokwa commented Feb 13, 2017

Given all this wackiness, I'm leaning more and more towards the logging solution. At least a human being (or clever script) can look at the raw data and try to make sense of it.

Agreed that a timestamp plus elapsedRealtime is a pretty good place to start. Another option we could add is an occasional GPS time.

Member

lognaturel commented Feb 20, 2017

Starting with the ActivityLogger class sounds reasonable but I don't think we should feel any attachment to it if it's not meeting our needs or not well designed (I haven't looked at it in detail yet). It would be totally reasonable to design a parallel system and then take out ActivityLogger and related functionality. I know that it is used somewhat because it results in a couple of NullPointerExceptions that we see in the Google Play dev console occasionally but I can't imagine anyone wanting to use it once this feature is in place and documented.

I think it's worth spending some extra time to design the system well and to put some testing in place. @nap2000, consider running things by #collect-code in Slack as you're building if you want another brain or two.

Contributor

nap2000 commented Feb 20, 2017

OK, thanks Hélène.

That is the approach I will take then: start with ActivityLogger and, if it seems right, create a TimerLogger class instead.

What do you mean by testing? A test plan or is this automated testing?

Member

lognaturel commented Feb 20, 2017

#392 had been on my mind so I think the ideal would be some automated testing as appropriate of things like turning logging on and off, making sure the file has the intended structure, the file is submitted and things like that. This should hopefully encourage a more modular design that will be easier to change and more robust.

This is new for this project so we're all learning together and it will take more time but if you're up for some experimenting with this feature, I think there will be lots of benefits.

Contributor

nap2000 commented Feb 20, 2017

OK, I will add a test plan to the document and look to include some early unit tests.

ChrisCorey commented Feb 20, 2017

Contributor

nap2000 commented Mar 6, 2017

Additional design sections have been added to the google doc that summarises the work on this feature request. https://docs.google.com/document/d/1LqVlVpePjA7Q1snjhA_ZQoDuzFqbtQcIygwcEzO_LJs/edit?usp=sharing

Feel free to provide comments on the approach in the document or here on github.

lognaturel added this to the April Release milestone Mar 23, 2017

nap2000 referenced this issue Mar 23, 2017

Merged

Timing Log #257 #760

Contributor

nap2000 commented Mar 23, 2017

It's probably time to close off the design stage and get feedback on what should be included in the final build.

The design is in https://docs.google.com/document/d/1LqVlVpePjA7Q1snjhA_ZQoDuzFqbtQcIygwcEzO_LJs/edit?usp=sharing. Final comments please!

I have created a pull request containing a prototype for the implementation of this issue. #760 Please also review and provide feedback. The pull request is only a prototype so feel free to suggest a completely different approach to the code if you think it justified.

The prototype does work. You can enable logging in the general preferences after which a log file will be created when you open or re-open a form. The log will be called timing.csv and will be sent to the server when you submit the finalised form.

jkpr commented Mar 27, 2017

Sorry to jump in late, @nap2000 @lognaturel. We at PMA2020 have been using something very similar now for slightly more than a year, and we have found it very useful for tracking enumerator times. We make a log that has timestamped events. @joeflack4 commented on it earlier in the thread, but there wasn't much followup. It seems you are pretty far down your own development path, so I will link to a few of our files and explain how they work. Maybe something will be useful for you.

We make use of Handler, HandlerThread, and Looper to create a background thread that handles file writing. This is tied to the FormEntryActivity lifecycle, see onCreate and onDestroy.

The bulk of the timer logging code is in the UseLog class. It creates a file, writes to a buffer, and flushes to file periodically. We track a few events; they are listed here in the UseLogContact class. They should be self-explanatory for the most part. In the FormEntryActivity we generate Messages which consist of a code for the event, timestamp, current node, and current value stored in the view (examples here, here, and here). Then we post to the Handler and the Handler writes to file in the background.

An important decision, in my opinion, was to write the file as a tab-separated file. Since we include the value of the view/xpath node in the log, I thought commas would be more common than tabs (consider a text entry question). Therefore, the TSV format was more appropriate than CSV for us.

I noticed you are tracking a few things that we are not, such as language change. That would be interesting for us to add. Also, it seems like you are trying to track GPS as the log progresses. I know our team would be very interested in that. We will keep an eye on your development to see if there is any inspiration we can gain.
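The buffer-and-flush design described above can be sketched in a few lines. The class name, file name, and flush threshold here are illustrative, and the background HandlerThread that does the actual writing in PMA2020's implementation is omitted; the point is only the one-TSV-row-per-event buffering:

```python
import time

class UseLogSketch:
    """Minimal sketch of a buffered use log: one tab-separated row per
    event (code, timestamp, xpath node, value), held in memory and
    flushed to disk once the buffer reaches flush_every entries.
    (Illustrative; not the actual PMA2020 UseLog class.)"""

    def __init__(self, path, flush_every=10):
        self.path = path
        self.flush_every = flush_every
        self.buffer = []

    def log(self, event_code, node, value=""):
        self.buffer.append(
            "%s\t%.3f\t%s\t%s" % (event_code, time.time(), node, value))
        if len(self.buffer) >= self.flush_every:
            self.flush()

    def flush(self):
        if self.buffer:
            with open(self.path, "a", encoding="utf-8") as f:
                f.write("\n".join(self.buffer) + "\n")
            self.buffer.clear()

log = UseLogSketch("use_log.tsv", flush_every=2)
log.log("EP", "/data/name")           # buffered only
log.log("LP", "/data/name", "Alice")  # second event triggers a flush
```

Batching writes this way keeps file I/O off the critical path of each question swipe, which is the same motivation as the Handler/Looper thread in the Android code.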

Contributor

nap2000 commented Mar 27, 2017

Hi @jkpr. I had a look at your code which I'm not going to think much about at the moment. Someone else might want to suggest if we should incorporate that into the solution.

I've also had another look at your output file which I should have looked at more thoroughly before.

  1. What do people think about showing the duration inside a question on a single line versus having two events, EP and LP? The draft code is generating csv files that look like this:

[timing screenshot]

Hence showing the start and end time of when the user is in a prompt on the same line. However, it may make sense to create two separate events as you have done.

  2. I'm not convinced of the value of including the response entered by the user, as the server can readily combine the data for analysis if required.

  3. If there is a possibility of commas in the data we could add quotation marks around the entries. Does anyone else prefer tab delimited?

  4. It looks like your "on pause" and "on resume" events are generated by the Android activity life cycle? The "resume" event I am recording happens when the user saves a survey and then starts editing the saved survey. Recording the activity pause and resume looks like a good idea; I will add that.
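On the quoting question raised above: standard CSV quoting (RFC 4180 style, which Python's csv module implements) already handles embedded commas, so tab delimiting is not strictly required to keep free-text values intact. A small sketch with an illustrative row:

```python
import csv
import io

# A free-text value containing a comma survives a CSV round-trip as
# long as the writer quotes it, which RFC 4180-style writers such as
# Python's csv module do automatically.
buf = io.StringIO()
csv.writer(buf).writerow(["LP", "/data/notes", "yes, definitely"])
line = buf.getvalue().strip()
# line == 'LP,/data/notes,"yes, definitely"'

fields = next(csv.reader(io.StringIO(line)))
# fields == ['LP', '/data/notes', 'yes, definitely']
```

The practical caveat is that consumers must use a real CSV parser rather than splitting on commas, which is part of why some teams prefer TSV.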

Member

lognaturel commented Mar 27, 2017

@nap2000 This is all looking very awesome to me! And thanks @jkpr for sharing your implementation.

@nap2000, in 1, am I understanding correctly that EP is something like enter prompt and LP is leave prompt? I don't really have a strong feeling either way. I thought it would be slightly simpler to log entry and exit events separately but you seem to have managed a good implementation with both in the same row. It seems having them on the same row is marginally more human friendly.

  2. I think we should not include the response. Including it makes the timing file potentially more sensitive and, as you say, it doesn't add any new information.

  3. I prefer commas but don't feel strongly.

  4. 👍

Member

lognaturel commented Mar 28, 2017

@mberg @ChrisCorey @MartijnR @yanokwa Any last comments to make on the general approach or the prototype at #760 before @nap2000 builds The Real Deal?

ChrisCorey commented Mar 29, 2017

lognaturel modified the milestones: May release, April Release Mar 31, 2017

Contributor

nap2000 commented Apr 2, 2017

I have received the following requirements from a customer for audio logging:

  • They need a single audio file covering the whole duration.
  • No need for controls; just start recording when the data collector starts asking the questions and stop when they finish.
  • No. We need to use it just to make sure that the data collectors are collecting real data; it's a kind of assurance of data quality, "and to avoid data collectors controlling the recording". (This was in response to a suggestion that the data collector could start and stop the recording manually.)
  • No need for analysis; it's just to archive the recordings and listen to some of them randomly.
Member

lognaturel commented Jul 6, 2017

To give a little update here, #760 was merged recently and has the bulk of the functionality 🎉. We still want to do a little bit more verification since correctness is key here. #1234 is one bug currently filed.

opendatakit/javarosa#76 will need to be addressed before the file can be sent to the server and opendatakit/javarosa#60 and XLSForm/pyxform#128 are needed for generating forms with audits.

We also need to make a decision on #1204 (setting).

Member

lognaturel commented Jul 11, 2017

@nap2000 Quick question for you -- did you double check that Aggregate can indeed receive and show csv files? We know there are currently issues with the namespaces but it occurs to me that I have not yet verified that a csv file can be properly received and displayed by Aggregate without namespace problems. Have you? If not, can you please take a look and report back. It would be fantastic to be able to announce this feature soon but it needs to be working end to end.

Contributor

nap2000 commented Jul 11, 2017

@lognaturel No, I'm not a user of Aggregate. Collect will include the audit.csv file in the files it sends to the server when you submit the results; what Aggregate does with it I cannot tell. I haven't put much thought into how Smap Server will process the CSV file yet, but I will add some functions to view and download the data soon.

dorey commented Jul 26, 2017

Would there be support for making the format of the output configurable? E.g. if JSON or even HTML can better represent the audit logs as they get more structured in the future.

I could try making a simple HTML template that formats an audit with these values in a more human-readable way if there's interest. My ulterior motive, however, would be to demonstrate some limitations of CSV.

Member

lognaturel commented Jul 26, 2017

I think that ideally the format would be something that is agreed upon across the ecosystem and evolved as needed over time (e.g. opendatakit/xforms-spec#102). It seems the format could certainly change if needed but if there's a compelling reason to change it now while the feature is still not officially released we should consider it.

Hopefully servers will be able to provide most of the analysis (e.g. @pld's implementation of outlier detection applied to time per question). That said, the raw sidecar files may still be useful for spot verification or alternate analysis. I imagine that this would require some kind of manipulation so HTML doesn't seem as useful to me (not that HTML can't be parsed but it seems nice to be able to do basic calculations in Excel, for example). I'm not seeing a strong case for JSON either but @dorey perhaps you could elaborate.
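As a sketch of the kind of server-side screen mentioned above (the function, the Tukey-fence rule, and the k value are illustrative, not @pld's actual implementation), per-question durations could be flagged like this:

```python
import statistics

def flag_outliers(durations, k=1.5):
    """Flag durations outside the Tukey fences (Q1 - k*IQR, Q3 + k*IQR).
    Illustrative only; a real server would tune both the rule and k."""
    q1, _, q3 = statistics.quantiles(durations, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [d for d in durations if d < low or d > high]

# Seconds spent on one question across eight submissions; one is
# implausibly long and gets flagged for review.
seconds = [7, 9, 8, 10, 11, 9, 8, 471]
flag_outliers(seconds)  # -> [471]
```

The same numbers are easy to reproduce in Excel from the raw sidecar file, which is the argument for keeping the format spreadsheet-friendly.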

dorey commented Jul 27, 2017

After thinking about it, the reasons that I am averse to using CSV are not really applicable here because this feature is generating (rather than parsing) the files and the inputs are comprised of a short, finite list of pre-sanitized strings and numbers.

I will move my comments to a different issue.

jknightco pushed a commit to PembrokeStudio/collect that referenced this issue Aug 2, 2017

Member

opendatakit-bot commented Sep 15, 2017

ERROR: This issue is in progress, but has no assignee.

Member

opendatakit-bot commented Sep 26, 2017

ERROR: This issue is in progress, but has no assignee.

lognaturel closed this Sep 26, 2017
