
Timeseries filtering

gregcaporaso edited this page Dec 21, 2012 · 16 revisions

@jrrideout added qiime.filter.sample_ids_from_category_state_coverage (now merged - some additional features are coming soon). This lets us explore the effect of different timeseries filtering strategies. We pass it a mapping file and can specify the minimum number of timepoints an individual must have provided to be included, and/or specific timepoints that must be present for a given individual. Here's how to use it if others want to explore the data.
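For readers without a QIIME install, the core idea of the function can be sketched in plain Python. This toy reimplementation (the function name, the dict-based row format, and the `SampleID` key are illustrative assumptions, not QIIME's actual code or parser) keeps samples from subjects who provided enough timepoints, optionally requiring specific timepoints:

```python
from collections import defaultdict

def sample_ids_by_coverage(rows, state_col, subject_col,
                           min_num_states, required_states=None):
    """Toy sketch: keep samples from subjects with enough timepoints.

    rows: list of dicts, one per sample (like parsed mapping-file lines).
    Returns (kept_sample_ids, number_of_subjects_kept).
    """
    by_subject = defaultdict(list)
    for row in rows:
        by_subject[row[subject_col]].append(row)
    required = set(required_states or [])
    kept_ids, kept_subjects = set(), 0
    for subject, samples in by_subject.items():
        states = set(s[state_col] for s in samples)
        # Subject must cover enough distinct timepoints, and all required ones.
        if len(states) >= min_num_states and required <= states:
            kept_ids.update(s['SampleID'] for s in samples)
            kept_subjects += 1
    return kept_ids, kept_subjects

# Tiny fake mapping table: subject A has 3 timepoints, subject B has 1.
rows = [
    {'SampleID': 'A.0', 'PersonalID': 'A', 'WeeksSinceStart': 0},
    {'SampleID': 'A.1', 'PersonalID': 'A', 'WeeksSinceStart': 1},
    {'SampleID': 'A.2', 'PersonalID': 'A', 'WeeksSinceStart': 2},
    {'SampleID': 'B.0', 'PersonalID': 'B', 'WeeksSinceStart': 0},
]
ids, n = sample_ids_by_coverage(rows, 'WeeksSinceStart', 'PersonalID', 2)
print(n)  # only subject A has >= 2 timepoints, so 1
```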

# Define a valid-states string to filter mapping file to remove samples with ambiguous timepoints (list compiled by Gilbert) or samples with fewer than 10k sequences
In [ ]: valid_states = 'SequenceCountAbove10000:*,!na;AmbiguousTimepoint:No;Control:no'
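The valid-states string packs several per-column filters into one value: `;` separates columns, `,` separates accepted values, `*` matches any value, and a leading `!` excludes a value. A rough standalone parser for this syntax (my reading of the semantics, not QIIME's implementation) looks like:

```python
def parse_valid_states(valid_states):
    """Parse 'Col:val1,val2,!bad;Col2:val' into {col: (accept, reject)}."""
    spec = {}
    for clause in valid_states.split(';'):
        col, _, values = clause.partition(':')
        accept, reject = set(), set()
        for v in values.split(','):
            if v.startswith('!'):
                reject.add(v[1:])
            else:
                accept.add(v)
        spec[col] = (accept, reject)
    return spec

def row_passes(row, spec):
    """A row passes if every column hits an accepted value and no rejected one."""
    for col, (accept, reject) in spec.items():
        value = row[col]
        if value in reject:
            return False
        if '*' not in accept and value not in accept:
            return False
    return True

spec = parse_valid_states(
    'SequenceCountAbove10000:*,!na;AmbiguousTimepoint:No;Control:no')
print(row_passes({'SequenceCountAbove10000': 'Yes',
                  'AmbiguousTimepoint': 'No', 'Control': 'no'}, spec))  # True
print(row_passes({'SequenceCountAbove10000': 'na',
                  'AmbiguousTimepoint': 'No', 'Control': 'no'}, spec))  # False
```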

# Get an idea of what will be filtered
In [ ]: !print_metadata_stats.py -i SMP-map.tsv -c SequenceCountAbove10000 | egrep -c na
1
In [ ]: !print_metadata_stats.py -i SMP-map.tsv -c AmbiguousTimepoint
No	3673
Yes	62
In [ ]: !print_metadata_stats.py -i SMP-map.tsv -c Control
no	3540
yes	195

# Filter the mapping file and write it to a new file
In [ ]: from qiime.filter import filter_mapping_file_by_metadata_states
In [ ]: open('SMP-map.filt.tsv','w').write(filter_mapping_file_by_metadata_states(open('./SMP-map.tsv','U'),valid_states))

# Confirm that filtering worked as expected
In [ ]: !print_metadata_stats.py -i SMP-map.filt.tsv -c SequenceCountAbove10000 | egrep -c na
0
In [ ]: !print_metadata_stats.py -i SMP-map.filt.tsv -c AmbiguousTimepoint
No	3245
In [ ]: !print_metadata_stats.py -i SMP-map.filt.tsv -c Control
no	3245
# do some setup
In [0]: from qiime.filter import sample_ids_from_category_state_coverage as s
In [1]: f = list(open("./SMP-map.filt.tsv",'U'))
In [2]: cc = "WeeksSinceStart"
In [3]: sc = "PersonalID"


# call help to learn how to use this function
In [4]: help(s)

# Now let's explore the data:
# How many individuals donated at least 1 timepoint?
In [29]: s(f,cc,sc,1)[1]
Out[29]: 121

# How many individuals donated at least 5 timepoints?
In [30]: s(f,cc,sc,5)[1]
Out[30]: 88

# What about 6 and up?
In [31]: s(f,cc,sc,6)[1]
Out[31]: 88
In [32]: s(f,cc,sc,7)[1]
Out[32]: 86
In [33]: s(f,cc,sc,8)[1]
Out[33]: 77
In [34]: s(f,cc,sc,9)[1]
Out[34]: 68
In [35]: s(f,cc,sc,10)[1]
Out[35]: 51
In [36]: s(f,cc,sc,11)[1]
Out[36]: 17
In [37]: s(f,cc,sc,12)[1]
Out[37]: 10
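The threshold sweep above can be automated instead of typed out call by call. A sketch, with a stand-in per-subject timepoint tally in place of the QIIME call (the mapping file itself isn't distributed with this page, so the counts here are made up):

```python
from collections import Counter

# Hypothetical stand-in data: number of timepoints donated per subject.
timepoints_per_subject = Counter({'A': 12, 'B': 10, 'C': 10, 'D': 5})

def retained(min_timepoints):
    """How many subjects donated at least `min_timepoints` samples?"""
    return sum(1 for n in timepoints_per_subject.values()
               if n >= min_timepoints)

# Sweep the minimum-timepoint threshold from 1 to 12, as in the session above.
for threshold in range(1, 13):
    print(threshold, retained(threshold))
```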

# We can also specify specific timepoints that we care about.
# How many individuals donated samples at timepoints 0 and 10?
In [38]: s(f,cc,sc,1,[0,10])[1]
Out[38]: 50
# And some other specific timepoints
In [39]: s(f,cc,sc,1,[0,9])[1]
Out[39]: 49
In [40]: s(f,cc,sc,1,[0,8])[1]
Out[40]: 76
In [41]: s(f,cc,sc,1,[0,7])[1]
Out[41]: 53

# We can also check how many individuals we retain if we require a certain number of 
# timepoints in some range. For example, how many individuals provided at least 6 samples
# between timepoints 0 and 8 (inclusive)?

In [42]: s(f,cc,sc,6,considered_states=[0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5,6,6.5,7,7.5,8])[1]
Out[42]: 86

# ... or at least 7 of those timepoints
In [43]: s(f,cc,sc,7,considered_states=[0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5,6,6.5,7,7.5,8])[1]
Out[43]: 79
# ... or at least 8 of those timepoints
In [44]: s(f,cc,sc,8,considered_states=[0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5,6,6.5,7,7.5,8])[1]
Out[44]: 61

# Alternatively, if we require 7 samples between timepoints 0 and 9 we retain almost the same 
#  number of individuals as with 6 samples between timepoints 0 and 8, so we'll go with that.
In [45]: s(f,cc,sc,7,considered_states=[0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5,6,6.5,7,7.5,8,8.5,9])[1]
Out[45]: 85
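The half-week `considered_states` lists above can be generated rather than typed by hand; a small helper (`half_week_states` is a name I made up for this sketch) avoids transcription errors when extending the range:

```python
def half_week_states(last_week):
    """Timepoints 0, 0.5, 1, ... up to last_week, inclusive."""
    return [x / 2 for x in range(int(last_week * 2) + 1)]

# 0 through 8 in half-week steps: 17 values ending at 8.0.
print(len(half_week_states(8)), half_week_states(8)[-1])  # 17 8.0
```

Note that this produces floats throughout (`0.0, 0.5, ...`) where the hand-typed lists mix ints and floats; whether that matters depends on how the QIIME function compares states to the mapping-file values.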

The commands below illustrate how the GutTimeseries, TongueTimeseries, PalmTimeseries, ForeheadTimeseries, and AnyTimeseries columns were created in the mapping file.

# Filter the mapping file to create body-site-specific mapping files.
In [ ]: from qiime.filter import filter_mapping_file_by_metadata_states
In [ ]: open('./SMP-map_forehead.filt.tsv','w').write(filter_mapping_file_by_metadata_states(open('./SMP-map.filt.tsv','U'),"BodySite:forehead"))
In [ ]: open('./SMP-map_gut.filt.tsv','w').write(filter_mapping_file_by_metadata_states(open('./SMP-map.filt.tsv','U'),"BodySite:gut"))
In [ ]: open('./SMP-map_palm.filt.tsv','w').write(filter_mapping_file_by_metadata_states(open('./SMP-map.filt.tsv','U'),"BodySite:palm"))
In [ ]: open('./SMP-map_tongue.filt.tsv','w').write(filter_mapping_file_by_metadata_states(open('./SMP-map.filt.tsv','U'),"BodySite:tongue"))


# Determine the number of individuals retained on a per-body-site basis
In [ ]: def f(min_num_states,considered_states=None):
    sites = ['gut','tongue','forehead','palm']
    for site in sites:
        print site, s(open('./SMP-map_%s.filt.tsv' % site,'U'),cc,sc,min_num_states,considered_states=considered_states)[1]
   ....:         

In [ ]: f(7,considered_states=[0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5,6,6.5,7,7.5,8,8.5,9])
gut 75
tongue 80
forehead 80
palm 61

In [ ]: considered_states=[0,0.5,1,1.5,2,2.5,3,3.5,4,4.5,5,5.5,6,6.5,7,7.5,8,8.5,9]

# Generate the list of sample IDs (sids) with enough timepoints on a per-body-site basis
In [ ]: gut_sids = s(open('./SMP-map_gut.filt.tsv','U'),cc,sc,7,considered_states=considered_states)[0]
In [ ]: tongue_sids = s(open('./SMP-map_tongue.filt.tsv','U'),cc,sc,7,considered_states=considered_states)[0]
In [ ]: palm_sids = s(open('./SMP-map_palm.filt.tsv','U'),cc,sc,7,considered_states=considered_states)[0]
In [ ]: forehead_sids = s(open('./SMP-map_forehead.filt.tsv','U'),cc,sc,7,considered_states=considered_states)[0]

# Parse the mapping file to a dict
In [ ]: from qiime.parse import parse_mapping_file_to_dict
In [ ]: map = parse_mapping_file_to_dict(open('./SMP-map.tsv','U'))[0]

# Create a new mapping file for just this timeseries data which will be 
# merged with the mapping file in Google Docs. Note that we are only keeping the
# timepoints between WeeksSinceStart 0 and 9 (inclusive) 
In [ ]: g = open('ts_map.txt','w')

In [ ]: g.write('#SampleID\tGutTimeseries\tTongueTimeseries\tPalmTimeseries\tForeheadTimeseries\tAnyTimeseries\n')

In [ ]: for k, v in map.items():
    if v['WeeksSinceStart'] == 'na' or float(v['WeeksSinceStart']) > 9:
        g.write('%s\t%s\n' % (k,'\t'.join(["No"] * 5)))
    else:
        fields = []
        if k in gut_sids:
            fields.append('Yes')
        else:
            fields.append('No')
        if k in tongue_sids:
            fields.append('Yes')
        else:
            fields.append('No')
        if k in palm_sids:
            fields.append('Yes')
        else:
            fields.append('No')
        if k in forehead_sids:
            fields.append('Yes')
        else:
            fields.append('No')
        if 'Yes' in fields:
            fields.append('Yes')
        else:
            fields.append('No')
        g.write('%s\t%s\n' % (k,'\t'.join(fields)))
   .....:         

In [101]: g.close()
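The four near-identical if/else blocks in the loop above can be collapsed by iterating over the per-site ID sets. A sketch, with toy ID sets standing in for the real `gut_sids` etc. (the `site_sids` dict and `timeseries_fields` helper are mine, not part of the session):

```python
# Toy stand-ins for the per-body-site sample ID sets computed above.
site_sids = {
    'Gut': {'A.0'},
    'Tongue': {'A.0', 'B.0'},
    'Palm': set(),
    'Forehead': {'B.0'},
}

def timeseries_fields(sample_id):
    """Yes/No flag per body site, plus an any-site flag at the end."""
    fields = ['Yes' if sample_id in sids else 'No'
              for sids in site_sids.values()]
    # AnyTimeseries: Yes if the sample made it into any site's timeseries.
    fields.append('Yes' if 'Yes' in fields else 'No')
    return fields

print(timeseries_fields('A.0'))  # ['Yes', 'Yes', 'No', 'No', 'Yes']
```

This relies on dicts preserving insertion order (guaranteed in Python 3.7+) so the columns come out in a fixed Gut/Tongue/Palm/Forehead order.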

Finally, the new mapping file is merged with the input file, and the result is uploaded to Google Docs.

merge_mapping_files.py -m SMP-map.tsv,ts_map.txt -o SMP-map_w_ts.tsv