
systems_thinking_coding #78

Closed
lzim opened this issue Aug 7, 2018 · 66 comments
Labels: feature (development of a new feature), mixed_methods (#mixed_methods_workflow, formerly qualitative)

Comments

@lzim
Owner

lzim commented Aug 7, 2018

Hello Qualitative Workgroup!

Starting an issue for us to track our work on the systems thinking coding of Team Meetings.

The Systems Thinking Codebook is here: https://github.com/lzim/teampsd/tree/master/qual_workgroup/codebooks

The coded team meetings are here: https://github.com/lzim/teampsd/tree/master/qual_workgroup/coded_meetings

Thanks!

Lindsey

@lzim
Owner Author

lzim commented Sep 11, 2018

Hi Everyone on the Qualitative Workgroup!

@swapmush @staceypark @dlounsbu 😄

@lzim
Owner Author

lzim commented Sep 12, 2018

Thanks Kathryn,

We want to try to set this up for our two hour systems thinking coding meeting next Tuesday from 3-5PM.
So, if you have any updates at all (progress, questions, barriers), please discuss them with the group on GitHub issue #78.

This is far better than our emails.

Thanks!

Lindsey

@teampsdkathryn
Collaborator

Hello Team,

I am wondering if setting this up on my computer in Outpatient mental health could be an issue for sharing with others since I am in a different building. Any thoughts?

@teampsdkathryn
Collaborator

Which working directory do you want this coding project to be in?

@teampsdkathryn
Collaborator

Hello Team,

I have made a first pass at completing this week's assignment.
Stacey and I met for about a half hour to figure out which directory to put this in and whether I should use my computer in OMH.
We determined that we did not have write privileges to most drives, which leaves us with the default option that appears after opening RQDA.
We also determined that I will use my computer in OMH. At this time, I am not aware of a way to link our computers together so we can code the same documents.
Hence, I will need to articulate the procedure and everyone will need to duplicate it on their own computer.

Overall, we had only a few issues:

  1. First each person will need to install RQDA and maybe GTK+
  2. I was not able to attach individual levels to memos, just one memo for each code. That is an issue to be resolved in the future.
  3. I was not able to directly download files from Github to RQDA so I created a file within MyDocuments where I could then import the files.
  4. I viewed this assignment as a pilot, so I only imported the 12 files from team1 so we can troubleshoot as a team.

Here is the procedure I came up with for our next meeting:
  1. Open RStudio and leave it running.
  2. Install RQDA by checking its box in the Packages list in RStudio.
  3. You may find that you need to install GTK+; a prompt will pop up if it is not already on your computer. The VA will allow you to install it, so do so. It takes about 5 minutes.
  4. The RQDA window should open automatically; if not, type RQDA().
  5. Click "New Project".
  6. Name the project: systems_thinking_coding.
  7. Choose your file path. I used the default MyDocuments path because I was not authorized to write to any other location.
  8. Click "Open Project".
  9. Click "Import Files". I was not able to import directly from GitHub, so I went to the shared research drive and copied 12 team1 files into a separate folder in MyDocuments.
  10. Back under "Import Files", navigate to your folder in MyDocuments and bring the files into RQDA: click the "Files" button, click "Import", and the file should appear. I did this one by one for each of the 12 files, but I am sure there is a trick for importing more than one file at a time; I just do not know it.
  11. Once your files are imported, add the codes one by one: Complexity, Feedback, Behavior, Time.
  12. We also need memos for the five levels (0, 1, 2, 3, 4) of each code. I could not figure this out, so as a placeholder I added a memo reading "Levels 0,1,2,3,4" to each code. It appears you can only add one memo per code, but there may be a way around this.
  13. Go to the Cases tab, click "Add", and type: team1.
  14. Click the Attributes tab, click "Add", and do this for: module type, session number, discipline, time spent, attendance, team size.
  15. Type RQDA() if you need to get back into RQDA.

This is as far as I went. I welcome your feedback.
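A console-based sketch of the same setup, in case anyone prefers scripting it over clicking through the GUI. This is a sketch only: the function names are my understanding of the RQDA package and should be verified against its manual, and the paths are placeholders.

```r
# Sketch only: assumes the RQDA package; verify function names in its manual.
install.packages("RQDA")   # same effect as the Packages checkbox in RStudio
library(RQDA)              # may prompt to install GTK+ the first time
RQDA()                     # opens the RQDA window

# Open (or create) the project in a writable location such as MyDocuments
openProject("~/Documents/systems_thinking_coding.rqda")

# Possible batch-import "trick": write.FileList() imports many files at once,
# instead of clicking "Import" twelve times.
paths <- list.files("~/Documents/team1_notes", pattern = "\\.txt$",
                    full.names = TRUE)
notes <- sapply(paths, function(p) paste(readLines(p), collapse = "\n"))
names(notes) <- basename(paths)
write.FileList(notes)

# Codes, cases and attributes can then be added in the GUI as described above.
```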

@swapmush @staceypark @dlounsbu

@lzim
Owner Author

lzim commented Sep 18, 2018

I began to set-up the project on Friday.

Go to the Team PSD folder, navigate to the “qual_workgroup” folder and then to “r_qual_scripts” folder.

First, you can see screenshots of the tasks I completed organized in order by RQDA tab.
You will see that I worked on things for tabs 1-6.

  1. Tab 1 – project – I set up the systems thinking project (this is the .rqda file that is in the “systems_thinking” folder – see below)
  2. Tab 2 – files – I only uploaded one file so far, but it read fine and I was able to do a sample code with it. So we just need to upload the rest of the .txt team meeting files before tomorrow’s 3PM meeting if we can. There is help online about how to batch import and work with these files.
  3. Tab 3 – codes – I set up the four codes for systems thinking: complex, feedback, behavior, time.
  4. Tab 4 – code categories – I set up each of the four codes to have four levels.
  5. Tab 5 - Attributes – the sessions of MTL (1 through 12 – see the fidelity checklist on GitHub to review the 12-session plan).
  6. Tab 5 – Cases – these are our teams, deidentified using the numbering system
  7. Tab 6 – Notes – I added the Team PSD members’ initials for whose notes they are.

Note: I also began a .Rmd file that will be our instruction file for using RQDA; it’s called “rqda_script.Rmd”. I haven’t had time to fully edit and clean it up. We can do this as we go tomorrow.

There were two additional dimensions that I wasn’t able to research yet. Perhaps you can help us prepare for tomorrow, by learning more about those?

Second, you can find the .rqda project file that I set-up following those tab tasks according to our prior work in the systems thinking folder.
It is named, “systems_thinking.rqda”

Third, I found that I was only able to set up and run this .rqda project file from the two locations I have write access to: “MyDocuments” and my “U-Drive.” I expect that this will be true for you as well. For tomorrow, we should each set it up on our U-Drives so that we can get going. Stacey, Swap and Kathryn, please take my work and add the .txt files before our 3PM meeting.

Please copy the systems_thinking.rqda project file I created to your U-Drive, then put the de-identified team .txt files in the same working directory, and import them into your local copy of the systems_thinking.rqda project before the meeting tomorrow.

We will start our meeting in person, tomorrow, and then divvy up coding tasks after working through some examples.

@teampsdkathryn @swapmush @staceypark @dlounsbu

@teampsdkathryn
Collaborator

Dear Team,

Time permitting, those of us who could use more background on R basics can take a self-paced online course for free or minimal cost. Please see the information below.

Site: www.edx.org

Course title: Data Science: R Basics

@lzim @dlounsbu @staceypark @swapmush @teampsdkathryn

@lzim
Owner Author

lzim commented Sep 22, 2018

Hi Kathryn and Qual Workgroup!

Thanks for sharing this resource. I recommend DataCamp, which was created by the same folks who brought us RStudio. I would start with the resources available in their first 9 free courses. They are the best match for our team!

We have worked through a planned curriculum in the past with Team PSD mentees from our NCPTSD training programs: http://lindseyzimmerman.com/r-datacamp/ DataCamp worked with us to make our free "class." However, the timing structure of a DataCamp course is a bit of a mismatch with our research and quality improvement externship, internship, fellowship and residency programs, which all have different timing/learner cycles.

Even if we started a new course, however, everyone should start with those first 9 free courses, so please check them out. I highly recommend them 👍

Lindsey

@lzim @dlounsbu @staceypark @swapmush @teampsdkathryn

@teampsdkathryn
Collaborator

Team PSD Coding Philosophy Draft: Kathryn Azevedo, Ph.D., October 11, 2018

Protection of the identity of the participants is the foundation of our PSD coding philosophy.

Narrative Data: Traditional Ethnographic Research

Coding allows us to analyze the data to draw out major themes in our qualitative textual narrative data. This is often referred to as ethnographic analysis.

Narrative analysis often involves “telling,” “transcribing,” and “analyzing”
(Riessman, 1993). In practice, the first step of “telling” involves interviewing the
participants. The survey instrument, an organized series of questions, usually asks for specific, discrete pieces of information, while questions left toward the end are usually more open-ended, giving the participants a chance to open up and elaborate on their experiences.
To ensure a more systematic examination of the qualitative data generated from interviews
and field notes, we could choose to analyze our data by blending
Riessman’s narrative analysis techniques with the Spradley ethnographic
method (Azevedo et al., 2005).

James Spradley, a prominent ethnographer, encouraged researchers to
perform “domain analysis” as a way to guide the emergence of cultural
themes. Cultural themes are defined as “any principle recurrent in a number
of domains, tacit or explicit, and serving as a relationship among the
subsystems of cultural meaning” (Spradley, 1980).

Domains consist of “cover terms” and a semantic relationship. The domains and their respective cover terms or related themes can be described as “folk terms” or “emic terms” or
“native viewpoints” generated from the patients interviewed (Spradley, 1980).

Spradley’s methodology is useful because it helps conceptualize the multiple meanings
and experiences presented by our participant population into unifying
cultural themes. Narrative data are analyzed to uncover cultural themes
that give meaning to participants’ lived experiences.

In clinical research, the uncovering of these cultural themes yields an organized volume of knowledge, feelings, and interactions with the health care system (Azevedo, 2005). Ultimately, the information produced from ethnographic research can be used to better serve Veterans.

This approach, however, is very anthropological. The fields of psychology, nursing, economics, and political science have also developed narrative/textual analysis techniques that add to the anthropological literature. We could explore these avenues as well.

Health services research uses ethnography, but this style often distills the participants’ voices down to a few sentences, and only a few quotes are used. Traditional ethnography lets the quotes tell the story.

Interrater Reliability
When coding, our team should strive for strong interrater reliability.
Coders should meet regularly to ensure a unified interpretation of the codes.
Agreement between coders can be monitored through percent agreement and kappa testing.
Kappa testing is the gold standard, but it is well known to be time consuming (Cohen, 1960; Landis & Koch, 1977; Hruschka et al., 2004); still, top health services research journals expect it. Hruschka et al. state:

“Achievement of perfect agreement is difficult and often impractical given finite
resource and time constraints. Several different taxonomies have been
offered for interpreting kappa values that offer different criteria, although the
criteria for identifying “excellent” or “almost perfect” agreement tend to be
similar. Landis and Koch (1977) proposed the following convention: 0.81–
1.00 = almost perfect; 0.61–0.80 = substantial; 0.41–0.60 = moderate; 0.21–
0.40 = fair; 0.00–0.20 = slight; and < 0.00 = poor. Adapting Landis and
Koch’s work, Cicchetti (1994) proposed the following: 0.75–1.00 = excellent;
0.60–0.74 = good; 0.40–0.59 = fair; and < 0.40 = poor. Fleiss (1981)
proposed similar criteria. Cicchetti’s criteria consider reliability in terms of
clinical applications rather than research; hence, the upper levels are somewhat
more stringent. Miles and Huberman (1994) do not specify a particular
intercoder measure, but they do suggest that intercoder reliability should
approach 0.90, although the size and range of the coding scheme may not
permit this. In the studies presented below, we used stringent cutoffs at
kappa 0.80 or 0.90, roughly between Cicchetti’s and Miles and
Huberman’s criteria (Hruschka, D. J., Schwartz, D., St. John, D. C., Picone-Decaro, E., Jenkins, R. A., & Carey, J. W., 2004).”

If we decide to go this route, a kappa cutoff of .80 is reasonable and .90 is ideal, but this means a lot of coding consensus meetings.

Kappa testing involves downloading a few specific programs from the internet, learning them, and then developing an agreement exercise procedure. Usually one person is designated to perform this task, and ideally this person is not one of the coders. In practice, however, it usually is one of the coders, because few studies have a dedicated, independent statistician on the team.

Percent agreement is similar but does not require sophisticated statistical analysis; anyone with a basic background in descriptive statistics can learn this task. Still, it takes preparation: one needs to develop an agreement exercise and test the team. Ideally, team members should strive for 85 percent agreement, but this varies by field and by where one is aiming to publish.

We could also try a blended approach: after a training period, we perform kappa testing until we achieve the designated IRR level, and from then on we use percent agreement.
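If we go this route, the irr package in R handles both statistics, so separate programs may not be needed. A hedged sketch with made-up toy ratings (the coder vectors below are hypothetical, not real data):

```r
# Sketch: percent agreement and Cohen's kappa via the irr package.
# install.packages("irr")    # once per machine
library(irr)

# Hypothetical dichotomous codes (e.g., Feedback present = 1 / absent = 0)
coder1 <- c(1, 0, 1, 1, 0, 1, 0, 0, 1, 1)
coder2 <- c(1, 0, 1, 0, 0, 1, 0, 1, 1, 1)
ratings <- cbind(coder1, coder2)   # one row per coded text segment

agree(ratings)    # simple percent agreement
kappa2(ratings)   # Cohen's kappa, which corrects for chance agreement
```

With ten segments and two disagreements, percent agreement is 80% while kappa comes out lower, which is exactly the chance correction described above.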

References

Azevedo, K., Nguyen, A., Rowhani-Rahbar, A., Rose, A., Sirinian, E., Thotakura, A., & Payne, C. (2005). Pain impacts sexual functioning among interstitial cystitis patients. Sexuality and Disability, 23(4), 189-208.

Hruschka, D. J., Schwartz, D., St. John, D. C., Picone-Decaro, E., Jenkins, R. A., & Carey, J. W. (2004). Reliability in coding open-ended data: Lessons learned from HIV behavioral research. Field Methods, 16(3), 307-331.

Spradley, J. P. (1980). Participant Observation. Holt, Rinehart and Winston.

Garro, L. Y. (1982). The ethnography of health care decisions. Social Science & Medicine, 16, 1451-1452.

Riessman, C. K. (1993). Narrative Analysis. Qualitative Research Methods Series, Vol. 30. Sage Publications.

@lzim @dlounsbu @staceypark @swapmush @teampsdkathryn

@teampsdkathryn
Collaborator

Good Morning, Resending to David and Kathryn!


@dlounsbu @teampsdkathryn

@lzim
Owner Author

lzim commented Oct 15, 2018

@teampsdkathryn Thanks for your hard work on this on behalf of the team!
Note that we are not 1) interviewing, 2) doing narrative coding, 3) ethnography, or 4) thematic analysis.

Team - @staceypark @dlounsbu @swapmush and @ericasimon

We have already clarified our qualitative philosophy to the extent that we will code using theory-based constructs that have been defined in prior research. See our systems thinking codebook.

Systems Thinking Codebook references are available in the TeamPSD Zotero Library in qualitative_workgroup -> systems_thinking
  • Maani & Maharaj (2004) – Complexity
  • Sweeney & Sterman (2007), Appendix B – System Behavior
  • Sweeney & Sterman (2007), Table 4 – Feedback
  • Sweeney & Sterman (2007), Table 6 – Change over Time

Now to fully operationalize our qualitative methods, we must establish procedures for determining the:

A) validity of our systems_thinking codebook_definitions

B) reliability of our coding_methodology

for the following:

1) constructs

  • Complexity
  • Feedback
  • [System] Behavior
  • [Over] Time

2) and their degree

  • Level 0 (absent) to Level 4 (very high level systems thinking)

METHODOLOGY

We have done a lot of work on our codebook_definitions (A above) to date, but much more work and refinement of the definitions is required to establish their validity.

  • Codebook validity is required first, before we move on to coding reliability.

Only once we establish the validity of our codebook, do we move on to issues of reliability, such as kappa coefficients.

Therefore, as part of our phased coding_methodology (B above) we outline our coding procedures.

CODEBOOK VALIDITY - Introduction Section Drafted in this Phase

  1. Reviewing the codebook together to finalize decision rules for systems_thinking codes (part 1 of training).
  • First, we will make decision rules for a) finalizing the code_definitions and/or b) revising/refining the codes later; these need to be documented well and described in manuscripts.
  2. Reviewing the codebook together to finalize decision rules for systems_thinking codes (part 2 of training).
  • Second, we will make decisions for applying the code_definitions.
  • Applying the code_definitions will require clarifying rule-in/rule-out criteria, if/then criteria, mutual exclusion definitions, etc., that provide clarity and can be consistently applied by all coders.

CODING RELIABILITY - Introduction and Methods Section Drafted this Phase
Interrater reliability
3. First, we will decide how to estimate interrater reliability for our purposes. There are many a) definitional, b) procedural, and c) analytic decisions that are incorporated into determining which reliability measure is appropriate.
a) definitional – these decisions include the rules for agreement and disagreement (e.g., coded by one coder but not the other; coded by both but differently at the word level, sentence level, etc.)
b) procedural – these decisions include the rules for who will code which components of the corpus (e.g., how many coders will code, how much coding overlap of the text corpus there will be (0-100%), whether and how coding will be blinded, etc.)
c) analytic – these decisions include whether we will use simple interrater agreement or an estimate that accounts for agreement due to chance (e.g., Cohen's kappa), and which R packages can be used with our RQDA coding package to calculate this estimate

  4. Second, we will justify our decisions regarding the level of reliability needed to a) establish an individual coder as a reliable coder, and b) end coder training.
  • This decision will follow from decisions 3a, 3b and 3c.

Separating our Training data and Coding [Analysis] data
5. First, we will set aside a coder training dataset and a separate dataset that we will code to reliability.

  • We will determine what sub-sample comprises our training dataset specifying our procedures.
  • Given the size of our corpus, I anticipate that it is reasonable/defensible to randomly select 20% of the total corpus for training.
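The random draw itself can be scripted so it is reproducible in our open-science supplementary materials. A sketch, assuming the de-identified notes live in one folder; the path and seed are placeholders:

```r
# Sketch: reproducible 20% training / 80% analysis split of the corpus.
set.seed(78)                               # fixed seed so the split can be rerun
notes    <- list.files("deidentified_notes", pattern = "\\.txt$")
n_train  <- ceiling(0.20 * length(notes))
training <- sample(notes, size = n_train)  # ~20% for coder training
analysis <- setdiff(notes, training)       # remaining ~80% coded to reliability
```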

Begin Coder Training
6. Training will require coding our text corpus together with our finalized codebook_definitions and discussing and resolving any discrepancies in our coding.

Coding Analyses - Introduction, Methods and Findings Drafted in this Phase
7. Coding - we will follow all the coding procedures decided/justified a priori (meaning before we begin any actual coding analyses) in steps 1-6 until the text corpus has been coded.
8. Analyses - we will review and describe our coding_findings regarding the extent to which teams enlisted systems_thinking during their participation.
a) First, we will estimate the reliability of our coding during coding analyses.
Second, we will answer our key analytic questions:
b) describe observations of each construct (C.F.B.T.) [RQDA code] – do they differ, and if so, how? Do teams converge such that one code is highly prominent or absent?
c) describe levels of each construct observed across teams [RQDA code levels 0-4]
d) describe high/low variation between teams [RQDA cases] in constructs [RQDA codes]
e) describe the codes in relation to MTL 12-session plan content [RQDA attributes]
f) estimate within-team improvement in systems thinking [RQDA cases by RQDA C.F.B.T. code levels 0-4]
9. Third, we will identify the optimal ways to display and describe our findings in our manuscript.

  • This will include a) selecting example codes, b) producing tables, c) visualizations, d) preparing our open-science supplementary materials (.Rmd code files for full transparent replication and reproducibility, etc.).
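Many of the descriptive analyses in steps 8b-8f can start from RQDA's coding table. A sketch, assuming RQDA's getCodingTable() function and its codename/filename columns (worth verifying against the RQDA manual):

```r
# Sketch: pulling the codings out of RQDA for description.
library(RQDA)
openProject("systems_thinking.rqda")
codings <- getCodingTable()                # one row per coded text segment

table(codings$codename)                    # frequency of each construct (C.F.B.T.)
table(codings$filename, codings$codename)  # constructs by team meeting note
```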

Discussion and Dissemination of Findings - Discussion Section Drafted and Manuscript Submitted in this Phase
10. We will document our scientific background and rationale for the procedures in each of steps 1-9 as we complete each step. Therefore, we will draft our discussion during this phase.

  • Finally, submit the manuscript!

@teampsdkathryn
Collaborator

teampsdkathryn commented Oct 15, 2018 via email

@lzim
Owner Author

lzim commented Oct 15, 2018

@teampsdkathryn perhaps you're right; there are two Sweeney and Sterman (2007) articles that we used, I believe.

I can check ASAP

@lzim
Owner Author

lzim commented Oct 15, 2018

Hi Qual Workgroup! @dlounsbu @swapmush @staceypark and @teampsdkathryn

  • It looks like the systems_thinking and rqda articles are no longer in Zotero, though they were there a couple of hours ago?

  • Anyone able to put them back? 😃

@teampsdkathryn
Collaborator

teampsdkathryn commented Oct 15, 2018 via email

@teampsdkathryn
Collaborator

teampsdkathryn commented Oct 15, 2018 via email

@teampsdkathryn
Collaborator

Have we solidified our definitions of our constructs?

Complexity, Feedback, Behavior, Time

@teampsdkathryn @staceypark @dlounsbu @swapmush @ericasimon @lzim

@dlounsbu
Collaborator

Yes, I believe we have. They are C, F, B and T, PLUS Level of systems thinking (1-4). Correct?
@teampsdkathryn @staceypark @swapmush @ericasimon @lzim

@teampsdkathryn
Collaborator

teampsdkathryn commented Oct 17, 2018 via email

@lzim
Owner Author

lzim commented Oct 17, 2018

Hi @teampsdkathryn and Qual Workgroup @swapmush @staceypark @ericasimon

Yes @dlounsbu They are C, F, B and T, PLUS Level of systems thinking (1-4).
In addition, there is an example sentence for each MTL module that shows the level.

Please refer to the updated systems_thinking_codebook_2018-10_16 here: https://github.com/lzim/teampsd/tree/master/qual_workgroup/qual_codebooks/systems_thinking

Today we made 5 coding decisions:
5 Coding Decision Rules

  1. Coding the four Systems Thinking Codes (Complexity, Feedback, Behavior, Time) WILL be dichotomous (0 = absent; 1 = present).

  2. When one of the four Systems Thinking Codes (C.F.B.T.) is present, assign it a level of 1, 2, 3 or 4.

systems_thinking_codes

  3. Coding the four Systems Thinking Codes (Complexity, Feedback, Behavior, Time) will NOT be mutually exclusive (i.e., the four codes can overlap and be present in the same text; see the Examples tab of the codebook).

examples_headers

  4. We need to code both the facilitators' and team members' text in the meeting notes. Determine whether this will be 2 sets of codes or something we set up in the Settings tab.

  5. We will use 20% of the meeting notes as our training sample and 80% as our analysis sample. The training dataset will be balanced for a) note taker, b) team, c) time (early months vs. later months).

Thanks!

@lzim
Owner Author

lzim commented Oct 24, 2018

Hi Everyone 😸

Following up on the Action Items from Lucid. The most up-to-date files are available in Lucid > Records > Documents. Or, in the Team PSD Qualitative folder.

Consistent with the action item assigned via Lucid, everyone should make sure they are working with these two updated files:
• “rqda_script.Rmd” which provides the instructions and code for launching your work session.
• “systems_thinking.rqda” which has the 24 team meeting notes for our training dataset already uploaded.

Please discuss your coding questions on our Systems Thinking Issue GitHub here: #78

Everyone should try to spend one to two hours coding and go as far down the list of 24 team meeting notes as you can.

Going in order starting with team 1 will be best because it will give diversity of exposure to:

  • Different note takers
  • A single team over time, giving exposure to the development of greater systems thinking over time.

Thanks 👍
@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn

@teampsdkathryn
Collaborator

  1. Once we upload all the appropriate documents to MyDocuments, I assume we then open RStudio. Is this correct? From there, do we open RQDA with the library() function, or can we simply type RQDA()?

  2. Once in RStudio, how do we correctly open the .Rmd document to view the instructions?

  3. How do we correctly open the systems_thinking.rqda document to import the 24 team meeting notes into RQDA?

  4. Once we have the files in RQDA, do we have a particular color scheme we want to follow? Or perhaps that is in the instructions?

Thank you for considering these basic questions.

@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

@dlounsbu
Collaborator

Hi @teampsdkathryn ,

In general, we each need to be able to successfully load the RQDA package on our own laptops. RQDA is a package accessed via RStudio, so RStudio needs to be successfully installed, too.

Once in RQDA, we should be able to launch our Systems Thinking coding project. Within the project, we should be able to associate all the source files (i.e., the team notes to be coded). And then we need to start coding the text, which would require that we have our codebook set up in RQDA, too.
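A minimal sketch of that launch sequence in the RStudio console, assuming the RQDA package and its GTK dependencies are already installed and that systems_thinking.rqda sits in the working directory:

```r
library(RQDA)  # load the package; install.packages("RQDA") first if needed
RQDA()         # launches the RQDA GUI window

# Either open the project via the GUI (Project > Open Project),
# or directly from the console:
openProject("systems_thinking.rqda")
```

From there, the source files and codes are managed in the Files and Codes tabs of the RQDA window.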

Based on our last meeting, I think we have defined our codes. If I recall our decisions correctly, we are coding team members' notes for references that illustrate 'thinking' that is COMPLEX, FEEDBACK, BEHAVIORal, and/or TIME-bound. We are also coding for LEVEL OF SYSTEMS THINKING (1 to 4).

We need to make sure we have agreed-upon definitions for all of these codes, and I am not sure where they are in GitHub. Maybe @staceypark knows??

@dlounsbu

@lzim
Owner Author

lzim commented Nov 14, 2018

Hopefully you can keep up with the Lucid meeting @dlounsbu!

We tried to keep good documentation of all the decisions made.

@teampsdkathryn
Collaborator

I am at the American Anthropology Association Conference this week.

Below is a revised definition of behavior based on our qualitative meeting. With regard to differentiating levels of behavior, here is my draft suggestion:

BEHAVIOR:
Definition: Systems Thinking Behavior
Systems thinking behavior describes a trend over time: how health system variables change can be described as flows, not just a simple snapshot in time. System dynamics treats systems as endogenous; systems cause their own behavior. Systems thinking makes the dynamics of behavior more transparent. We are testing causal relationships and examining operational thinking (the mental map) by looking at the physics of the relationships. We think there is evidence of systems thinking in stakeholder decisions. Behavior codes increasingly link the observed system behavior to the underlying structure. What is behavior? A path.

Level 1: Demonstrates simple interconnections of the relationship between appointments and patients.
Level 2: Demonstrates simple numerical awareness of system behavior and characteristics between appointment availability and number of patients.
Level 3: Demonstrates the understanding of the behavior of systems by articulating how the proportion of available appointments is impacted by patient demand.
Level 4: Demonstration of sophisticated, reflective, integrative thinking where stakeholder can describe the relationships between appointments, patients served, provider availability, and can offer novel suggestions on how to improve health service delivery.

Please review and revise for further clarity and specificity.

Happy Thanksgiving:)

Best Regards, Kathryn

@lzim @staceypark @swapmush @teampsdkathryn @ericasimon

@lzim
Owner Author

lzim commented Nov 16, 2018

Thanks @teampsdkathryn!
Have a great meeting 😄

@teampsdkathryn
Collaborator

Welcome back from break! I thought it would be useful to re-post our notes from the last meeting so we have all the coding instructions in one place that can be accessed within VA.

Meeting Record
Tuesday, November 13, 2018
We will not meet Tues 11/20 or Tues 11/27 for qualitative.
Our next meetings will be Tues Dec 4th (Lindsey out), 11th, and 18th.

Change time codes to reflect:
0=no reference to time
1=non-specific time
2=specific time (behavior expected; specific value; increase/decrease)
3=fuller awareness of time (short/long term expected; better before worse/worse before better)
4=accurate time (system behavior as a function of the feedback; contingent on time)

Coding at the sentence/word/phrase level.

Coding both team and facilitator. (Note: for testing data set, we will focus on code category and level and will wait on separating out team from facilitator).
Coder agreement includes an agreement that it is not present as well as that it is present.
Walking through the diagram should pull for complexity and feedback.
Walking through the results dashboard should pull for behavior and time.
In the QHFD process, you should see feedback and behavior in the hypotheses and findings.
Complexity should describe either a) the relationship between two or more variables (e.g., patient start rate, patient ending rate: same unit, different variables) or b) two or more units. (e.g., appointments and patients)
Action Items
| Action Item | Assigned | Due Date | Completed |
| --- | --- | --- | --- |
| Differentiate levels of behavior in the codebook | David L., Erica S., Kathryn A., Lindsey Z., Stacey P., Swap M. | Nov 16 | |
1.0 Issues for Immediate Resolution
Key Workgroup Dependencies: qual_workgroup, quant_workgroup, hq_workgroup

  1. Discuss qualitative coding of team 1 in the training corpus:
    • complexity
    • behavior
    • feedback
    • time
    • Relevant GitHub Issue:
      o systems_thinking_coding #78

Best Regards, Kathryn

@lzim @staceypark @swapmush @teampsdkathryn @ericasimon

@teampsdkathryn
Collaborator

Hello Team,

I have revised the codebook today to reflect the decisions reached at our last qualitative meeting. It is available on the server and below. Please let me know if there are questions or concerns and/or the need to make further revisions. Happy coding!

Systems Thinking Codebook November 26, 2018
Coding Decision Rules:

  1. Coding for Systems Thinking Codes (Complexity, Feedback, Behavior, Time) WILL be dichotomous (0=absent, 1= present)
  2. Coding for Systems Thinking Codes (Complexity, Feedback, Behavior, Time) will NOT be mutually exclusive (four codes can overlap and be present in the same text)
  3. When the Systems Thinking Codes (Complexity, Feedback, Behavior, Time) are present, assign it/them a level using the examples tab.
  4. We need to code the facilitators' and team members' text in the meeting notes. Determine whether this will be 2 sets of codes or something we set up in the Settings tab.
  5. We will use 20% of the meeting notes as our training sample and 80% as our analysis sample. The training dataset will be balanced for a) note taker, b) team, c) time (early months versus later months).
    Interrater Reliability:
  6. Coder agreement includes an agreement that it is not present as well as that it is present.
    Coding Guidelines:
  7. Coding at the sentence/word/phrase level.
  8. Coding both team and facilitator. (Note: for testing data set, we will focus on code category and level and will wait on separating out team from facilitator).
  9. Walking through the diagram should pull for complexity and feedback.
  10. Walking through the results dashboard should pull for behavior and time.
  11. In the QHFD (Questions, Hypothesis, Findings, Decisions) process, you should see feedback and behavior in the hypotheses and findings.


Proposed Definitions:
COMPLEXITY:
Definition: Complexity should describe either a) the relationship between two or more variables (e.g., patient start rate, patient ending rate: same unit, different variables) or b) two or more units (e.g., appointments and patients). Forest thinking.
Level 1: Basic one-to-one relationships, largely intuitive
Level 2: Complex one-to-one relationships
Level 3: Three-way relationships
Level 4: Big picture

BEHAVIOR:
Definition: Systems thinking behavior describes a trend over time; how health system variables change can be described as flows. System dynamics treats systems as endogenous: systems cause their own behavior. Systems thinking makes the dynamics of behavior more transparent. We are testing causal relationships and examining operational thinking (the mental map) by looking at the physics of the relationships. We think there is evidence of systems thinking in stakeholder decisions. Behavior codes increasingly link the observed system behavior to the underlying structure. We are looking for a movie, not a snapshot in time.
Level 1: Demonstrates simple interconnections of the relationship between appointments and patients.

Level 2: Demonstrates simple numerical awareness of system behavior and characteristics between appointment availability and number of patients.

Level 3: Demonstrates the understanding of the behavior of systems by articulating how the proportion of available appointments is impacted by patient demand.

Level 4: Demonstration of sophisticated, reflective, integrative thinking where stakeholder can describe the relationships between appointments, patients served, provider availability, and can offer novel suggestions on how to improve health service delivery.

FEEDBACK: (Sweeney and Sterman, 2007, Table 4)
Definition: Thinking in loops. Stakeholders have made some sort of circle in their thinking. Feedback becomes increasingly complete as stakeholders close their feedback loops through intermediate variables. Causes are in the feedback loop. Most feedback loops are balancing loops because of limited resources and units of time. Most causation errors occur when we pay attention only to the inflows and not to the outflows. Two types of feedback: 1. Reinforcing, 2. Balancing.
Level 1: Open loop: non-closed loop
Level 2: Closed loop: return to the variable you started with.
Level 3: Behavior of closed loop over time.
Level 4: Multiple closed loops

TIME: (Sweeney and Sterman, 2007, Table 6)
Definition: Reference to change over time. Time codes capture an increasingly sophisticated understanding of change over time (e.g., worse before better).
Level 0: No reference to time
Level 1: Non-specific time
Level 2: Specific time (behavior expected; specific value; increase/decrease)
Level 3: Fuller awareness of time (short/long term expected; better before worse/worse before better)
Level 4: Accurate time (system behavior as a function of the feedback; contingent on time)

Best Regards, Kathryn

@lzim @staceypark @swapmush @teampsdkathryn @ericasimon

@staceypark
Contributor

@teampsdkathryn @dlounsbu @ericasimon
We would typically meet at 3pm Pacific/6pm Eastern tomorrow for our 2-hour Qual meeting. We could, however, move it earlier to 1pm Pacific/4pm Eastern. Let us (stacey.park2@va.gov and erica.simon@va.gov) know if this would work for folks.

fyi @lzim and @swapmush are both out right now.

@dlounsbu
Collaborator

dlounsbu commented Dec 4, 2018

I would be happy to meet at 4pm EST instead of 6pm EST.

@teampsdkathryn
Collaborator

teampsdkathryn commented Dec 4, 2018 via email

@teampsdkathryn
Collaborator

Hello Everyone, I have called into the meeting!!

Best Regards, Kathryn

Thank you and have a great week coding! Best Regards, Kathryn

@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

@dlounsbu
Collaborator

dlounsbu commented Dec 4, 2018

I am just trying to connect now. Are you in Lucid?

@teampsdkathryn
Collaborator

teampsdkathryn commented Dec 4, 2018 via email

@teampsdkathryn
Collaborator

I'm still getting the music:)

@ericasimon @staceypark @dlounsbu @teampsdkathryn

@dlounsbu
Collaborator

dlounsbu commented Dec 4, 2018 via email

@dlounsbu
Collaborator

dlounsbu commented Dec 5, 2018

Great work yesterday. My big concern, now, is how will we get this done in a timely way?

@dlounsbu
Collaborator

@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

I have two Qual Coding meetings in my calendar for Tuesday, from 4pm-5pm EST and again from 6:30pm to 8:30pm. I can only attend the 4pm meeting. I have a dinner meeting with my Chair that conflicts with anything after 5pm EST.
Best, @dlounsbu

@ericasimon ericasimon removed their assignment Mar 7, 2019
@lzim
Owner Author

lzim commented May 22, 2019

Good Morning Team PSD 😸

Attention to two interdependent workgroups:

@TomRust and I simplified our (C.F.B.T.) definitions of Systems Thinking even further, and I added it to the

MTL focuses on improving systems thinking among frontline teams making care decisions.

Systems Thinking Definition
• Complex: Forest, not trees. Relationships among two or more variables (wait times, improvement rate), or two or more settings (primary care, general mental health).
• Feedback: Loop, not line. Not simple cause and effect; the end of the story often influences the beginning, and is strengthened (reinforcing) or stabilized (balancing) around the loop.
• System Behavior: Movie, not snapshot. Trends over time. Systems cause their own behavior through feedback.
• Time: Short and long term. A better understanding of change over time (e.g., worse before better, better before worse).

@swapmush
Contributor

swapmush commented Sep 3, 2019

I am having trouble with the updated macOS Mojave and getting the updated version of R (3.6.1) to work. This may be a problem unique to my computer; when I find a fix, I will post it in case others run into the same issue.

@lzim
Owner Author

lzim commented Sep 4, 2019

@swapmush

Did you ask @saveth about this? She is an R and Mac user.

@staceypark staceypark added the mixed_methods #mixed_methods_workflow fomerly qualitative label Feb 12, 2020
@staceypark
Contributor

@lzim @jessfroe @dlounsbu @teampsdkathryn @swapmush
It looks like this got lost as we changed workflow tracking systems.
This issue was used for a while to track our systems thinking work. Where (if at all) does it make sense to place this issue?

I've placed it into the feature_tracker for now. But if it makes sense to close the issue, we can do that as well.

@jessfroe
Contributor

@staceypark as far as I know, systems_thinking_coding was completed a while ago and we have since moved on to fidelity_coding. It looks like some additional housekeeping tasks were being tracked here (see @swapmush 's comment about R) but I believe those have been resolved as well. I personally think this issue can be closed but I'll defer to @lzim @dlounsbu @teampsdkathryn @swapmush before we close.

@staceypark
Contributor

I resolved @swapmush issue with him already. I am going to go ahead and close this. Anyone can feel free to reopen if necessary.

> @staceypark as far as I know, systems_thinking_coding was completed a while ago and we have since moved on to fidelity_coding. It looks like some additional housekeeping tasks were being tracked here (see @swapmush 's comment about R) but I believe those have been resolved as well. I personally think this issue can be closed but I'll defer to @lzim @dlounsbu @teampsdkathryn @swapmush before we close.

@mnallajerla mnallajerla added the feature development of a new feature label Sep 22, 2021