
PacBio CCS Amplicon SOP v2 (qiime2 2022.2)


This standard operating procedure (SOP) is based on QIIME 2 and is meant for users who want to quickly run their PacBio CCS amplicon data through the Microbiome Helper virtual box image, as well as for internal use.

If you use this workflow, make sure to keep track of the commands you use locally, as this page will be updated over time (see the page's revision history for earlier versions).

Requirements

This workflow assumes that you have QIIME2 installed in a conda environment.

This workflow also assumes that the input is raw Circular Consensus Sequencing (CCS) PacBio data in demultiplexed FASTQ format located within a folder called raw_data. The filenames can be almost anything you wish (contrary to most QIIME2 importing) since you are going to use a "manifest file" to list each file.

1. First steps

1.1 Format metadata file

You can format the metadata file (which is in a format compatible with QIIME 1 mapping files) in a spreadsheet and validate it using the Google Sheets add-on described on the QIIME 2 website. The only required column is the sample ID column, which should be first. All other columns should correspond to sample metadata. Below, we assume that your metadata filename has been assigned to a bash variable called "$METADATA".

This can be done like so (for example if your file is called metadata.txt and is found in the folder /home/user):

METADATA="/home/user/metadata.txt"
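For reference, the first few lines of a metadata file might look something like the following (hypothetical sample IDs and columns; fields are tab-separated):

sample-id	treatment	site
sample1	control	gut
sample2	antibiotic	gut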

1.2 Set number of cores

Several commands throughout this workflow can run on multiple cores in parallel. The number of cores to use in these cases is stored in the NCORES variable defined below. We set this variable to 1, but you can change it to however many cores you would like to use.

NCORES=1

1.3 Inspect read quality

Visualize sequence quality across raw reads. This is important as a sanity check that your reads are of good quality. QIIME 2 comes with a plugin for visualizing read quality, which we will use at a later step. However, when dealing with raw reads the easiest method to use is a combination of FASTQC and MultiQC. Note that these tools are not packaged with QIIME 2 so you will need to install them separately.

This is an important step for identifying outlier samples in terms of quality, read length, read depth, and other metrics.

You can run FASTQC with this command (after creating the output directory).

mkdir fastqc_out
fastqc -t $NCORES raw_data/*.fastq.gz -o fastqc_out

If you receive the error Value "FASTQ" invalid for option threads (number expected) (where "FASTQ" is an input filename) then make sure you have defined the NCORES variable correctly and re-run the command.
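A quick way to confirm that the variable is set to a number (and not accidentally left empty) is to print it before re-running FASTQC:

echo $NCORES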

FASTQC generates a report for each individual file. To aggregate the summary files into a single report we can run MultiQC with these commands (including entering and leaving the FASTQC output directory):

cd fastqc_out
multiqc .
cd ..

The full report is found within multiqc_report.html in the FASTQC output directory. You can view this report in a web-browser on your local computer. The most important reason to visualize this report is to ensure that your samples are of high-quality (based largely on whether the per-base quality is >30 across most of the reads) and that there are no outlier samples.
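If you ran these tools on a remote server, one way to view the report is to first copy it to your local machine, for example (hypothetical username, hostname, and path):

scp user@remote.server:/path/to/fastqc_out/multiqc_report.html .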

1.4 Activate QIIME 2 conda environment

You should run the rest of the workflow in a conda environment, which makes sure the correct versions of the Python packages required by QIIME 2 are used. You can activate this conda environment with this command (you may need to swap in source for conda if you get an error):

conda activate qiime2-2022.2

1.5 Import FASTQs as QIIME 2 artifact

The raw reads will be imported into the QIIME 2 "artifact" file format (with the extension QZA). The slight difference here compared to standard Illumina file importing is that you need to use a "manifest" file; consult the QIIME 2 documentation about preparing it, but essentially it is just a tab-delimited text file containing the sample names and the absolute path to each file in the raw_data/ folder.
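A minimal manifest file (saved here as PacBioCCSmanifest.tsv to match the command below, with hypothetical sample names and paths) might look like this, with the two columns separated by tabs:

sample-id	absolute-filepath
sampleA	/home/user/raw_data/sampleA.fastq.gz
sampleB	/home/user/raw_data/sampleB.fastq.gz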

mkdir reads_qza

qiime tools import \
    --type 'SampleData[SequencesWithQuality]' \
    --input-path PacBioCCSmanifest.tsv \
    --output-path reads_qza/raw_reads.qza \
    --input-format SingleEndFastqManifestPhred33V2

1.6 Summarize raw FASTQs

You can run the demux summarize command after importing the reads to get a report of the number of reads per sample and quality distribution across the reads. This generates a more basic output compared to FASTQC/MultiQC, but is sufficient for this step.

qiime demux summarize \
   --i-data reads_qza/raw_reads.qza \
   --o-visualization reads_qza/raw_reads_summary.qzv

Note that we gave the output file above the extension .qzv since this is a special type of artifact file: a visualization. You can look at the visualization by uploading the file to the QIIME 2 view website and clicking on the Interactive Quality Plot tab at the top of the page.
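If you are working on a machine with a web browser available, you can also open the visualization locally instead of uploading it (this is optional and equivalent to using the website):

qiime tools view reads_qza/raw_reads_summary.qzv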

2. Denoising the reads into amplicon sequence variants

At this stage, the two main pipelines you can use are based on either Deblur or DADA2. We recommend running DADA2, which now supports PacBio reads; Deblur may not work correctly since it does not currently have a PacBio mode.

2.1 Running DADA2

Run the DADA2 workflow to remove primers, correct reads, and infer amplicon sequence variants (ASVs). Note that the previous version of the PacBio CCS Amplicon SOP included two steps ("Resolve orientation problems" and "Trim primers with cutadapt") prior to running DADA2; those steps are now included as part of DADA2's "denoise-ccs" mode described below. Also, note that we are not doing any initial quality filtering (unlike our Illumina SOP) or truncation in the command below, because CCS reads are already of very high quality when produced (HiFi reads are 99% consensus accuracy). You will probably also want to increase the number of threads used below to the maximum your system has available. If your PacBio data is not CCS in nature, or you have used a lower consensus threshold (ie: <99%), then we would suggest you add a quality-filtering step. The primers below correspond to the full-length 16S (Bacteria-specific) primer set.

mkdir dada2_output

qiime dada2 denoise-ccs \
   --i-demultiplexed-seqs reads_qza/raw_reads.qza \
   --p-min-len 1200 --p-max-len 1800 \
   --p-front AGRGTTYGATYMTGGCTCAG --p-adapter RGYTACCTTGTTACGACTT \
   --p-n-threads $NCORES \
   --o-table dada2_output/table.qza \
   --o-representative-sequences dada2_output/representative_sequences.qza \
   --o-denoising-stats dada2_output/stats.qza \
   --verbose

Note we are using linked anchored adapters here to ensure only reads having the primers at the extremities are retained, and a size range of 1200-1800 nt (as set by --p-min-len/--p-max-len above) to allow some indels (there should be relatively few in actuality) while preventing amplicon dimers (we've observed ~1% in PacBio CCS data) from passing through, since the size of the 16S amplicon here is ~1500 nt.

If using our full-length 16S Archaea-specific primers (currently in testing), use the following command:

qiime dada2 denoise-ccs \
   --i-demultiplexed-seqs reads_qza/raw_reads.qza \
   --p-min-len 1200 --p-max-len 1800 \
   --p-front TCCGGTTGATCCYGCCGG --p-adapter CRGTGWGTRCAAGGRGCA \
   --p-n-threads $NCORES \
   --o-table dada2_output/table.qza \
   --o-representative-sequences dada2_output/representative_sequences.qza \
   --o-denoising-stats dada2_output/stats.qza \
   --verbose

If using our full-length 18S primers, use the following command (note the much wider size range around the normal ~1800 nt length, because some eukaryote species have much larger indels in their rRNA than bacteria do):

qiime dada2 denoise-ccs \
   --i-demultiplexed-seqs reads_qza/raw_reads.qza \
   --p-min-len 1000 --p-max-len 3000 \
   --p-front CTGGTTGATYCTGCCAGT --p-adapter TGATCCTTCTGCAGGTTCACCTAC \
   --p-n-threads $NCORES \
   --o-table dada2_output/table.qza \
   --o-representative-sequences dada2_output/representative_sequences.qza \
   --o-denoising-stats dada2_output/stats.qza \
   --verbose

If using our full-length fungal ITS primers, use the following command (again, the size range is adjusted to retain natural length variation while hopefully removing larger dimers; the average amplicon size here is ~600 nt):

qiime dada2 denoise-ccs \
   --i-demultiplexed-seqs reads_qza/raw_reads.qza \
   --p-min-len 300 --p-max-len 900 \
   --p-front TAGAGGAAGTAAAAGTCGTAA --p-adapter TCCTCCGCTTWTTGWTWTGC \
   --p-n-threads $NCORES \
   --o-table dada2_output/table.qza \
   --o-representative-sequences dada2_output/representative_sequences.qza \
   --o-denoising-stats dada2_output/stats.qza \
   --verbose

If you see substantial losses of read numbers after this step (ie: your file sizes are now much smaller than the original "raw" CCS files), then make absolutely certain you are using the correct primer sequences (in the correct linked orientations) and that you have not filtered out most of your reads due to the size-range restrictions. Such losses would indicate a mismatch between your fragment and the chosen parameters, which is especially important to check if you are using custom primers and adjusting the above values.
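One way to check per-sample read retention is to tabulate the denoising stats artifact produced above and inspect it in the QIIME 2 viewer (the output filename here is just a suggestion):

qiime metadata tabulate \
   --m-input-file dada2_output/stats.qza \
   --o-visualization dada2_output/stats.qzv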

2.2 Summarizing DADA2 output

Once a denoising pipeline has been run you can summarize the output table with the below command, which will create a visualization artifact for you to view.

qiime feature-table summarize \
   --i-table dada2_output/table.qza \
   --o-visualization dada2_output/dada2_table_summary.qzv

We will use this visualization later to determine the cut-offs for filtering the table below, but for now you should mainly take a look at it to ensure that sufficient reads have been retained after running DADA2. This denoising tool filters out reads that either match known noise or do not match the expected amplicon region with sufficient similarity. If your samples have very low depth after running DADA2 (compared to the input read depth), this could be a red flag that you ran the tool incorrectly, that you have a lot of noise in your data, or that DADA2 is inappropriate for your dataset.

3. Assign taxonomy to ASVs

You can assign taxonomy to your ASVs using a Naive-Bayes approach implemented in the scikit-learn Python library, together with the SILVA or UNITE reference databases.

3.1 Build or acquire taxonomic classifier

This approach requires that a classifier be trained in advance on a reference database. We recommend using a widely used classifier to help ensure there are no unexpected issues with the Naive-Bayes model. We previously maintained primer-specific classifiers, which can theoretically provide more accurate classifications, but we no longer do this due to concerns about issues with the trained models that are difficult to catch if only a couple of people are running them. No matter which approach you use, it's a good idea to run a few sanity checks on the output to make sure it worked correctly for your data (see below).

Remember to use the full-length versions of the taxonomic reference files when identifying ASVs. The full-length 16S/18S classifier can be downloaded from the QIIME 2 website (silva-138-99-nb-classifier.qza for the latest classifier).
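For example, the classifier could be downloaded along these lines (this URL follows the pattern of the QIIME 2 data resources page for this release, but check the QIIME 2 website for the current link):

wget https://data.qiime2.org/2022.2/common/silva-138-99-nb-classifier.qza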

Custom classifiers for the ITS region that we have generated from the UNITE database are available as well (see downloads and commands used to create these files):

  • Full ITS - fungi only (classifier_sh_refs_qiime_ver9_99_s_27.10.2022_ITS.qza)
  • Full ITS - all eukaryotes (classifier_sh_refs_qiime_ver9_99_s_all_27.10.2022_ITS.qza)

3.2 Run taxonomic classification

You can run the taxonomic classification with this command, which is one of the longest-running and most memory-intensive commands in the SOP. If you receive an error related to insufficient memory (and you cannot increase your available memory), you can set the --p-reads-per-batch option lower than the default (which is dynamic, depending on sample depth and the number of threads) and also try running the command with fewer jobs (e.g. set --p-n-jobs 1).

qiime feature-classifier classify-sklearn \
   --i-reads dada2_output/representative_sequences.qza \
   --i-classifier /home/shared/taxa_classifiers/qiime2-2022.2_classifiers/silva-138-99-nb-classifier.qza \
   --p-read-orientation same \
   --p-n-jobs $NCORES \
   --output-dir taxa

As with all QZA files, you can export the output file to take a look at the classifications and confidence scores:

qiime tools export \
   --input-path taxa/classification.qza --output-path taxa

3.3 Assess subset of taxonomic assignments with BLAST

The performance of the taxonomic classification is difficult to assess without a gold-standard reference, but nonetheless one basic sanity check is to compare the taxonomic assignments with the top BLASTn hits for certain ASVs.

It is simple to do this with QIIME 2 by running:

qiime feature-table tabulate-seqs --i-data dada2_output/representative_sequences.qza \
                                  --o-visualization dada2_output/representative_sequences.qzv

The file dada2_output/representative_sequences.qzv is a QIIME 2 visualization file that you can open in the QIIME 2 viewer. The format makes it easy to BLAST certain ASVs against the NCBI nt database. By comparing these BLAST hits with the taxonomic assignment of ASVs generated above you can reassure yourself that the taxonomic assignments overall worked correctly. It's a good idea to select ~5 ASVs to BLAST for this validation, which should be from taxonomically different groups, such as different phyla, according to the taxonomic classifier.

4. Filtering resultant table

Filtering the denoised table is an important step of microbiome data analysis. You can see more details on this process in the QIIME 2 filtering tutorial.

4.1 Filter out rare ASVs

Based on the summary visualization created in step 2.2 above, you can choose a cut-off for how frequent a variant needs to be (and optionally how many samples need to contain the variant) for it to be retained. For Illumina sequencing, we recommend removing all ASVs that have a frequency of less than 0.1% of the mean sample depth. This cut-off excludes ASVs that are likely due to MiSeq bleed-through between runs (reported by Illumina to be 0.1% of reads). To calculate this cut-off, identify the mean sample depth in the visualization created in step 2.2, multiply it by 0.001, and round to the nearest integer. There is no bleed-through phenomenon in PacBio sequencing and depth is much shallower than Illumina, so you may have to adjust the level at which you filter out rare sequences (although some filtering to remove noise/singletons is probably still recommended).
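As a worked example with a hypothetical mean depth: if the mean sample depth in the summary were 8000 reads, the cut-off would be 8000 x 0.001 = 8, so you would set X to 8 below. You can do this arithmetic on the command line if you prefer:

python -c "print(round(8000 * 0.001))"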

Once you've determined how you would like to filter your table you can do so with this command (X is a placeholder for your choice):

qiime feature-table filter-features \
   --i-table dada2_output/table.qza \
   --p-min-frequency X \
   --p-min-samples 1 \
   --o-filtered-table dada2_output/dada2_table_filt.qza

4.2 Filter out contaminant and unclassified ASVs

Now that we have assigned taxonomy to our ASVs we can use that information to remove ASVs which are likely contaminants or noise based on their taxonomic labels. Two common contaminants in 16S sequencing data are mitochondrial and chloroplast 16S sequences, which can be removed by excluding any ASV containing those terms in its taxonomic label. It can also sometimes be useful to exclude any ASV that is unclassified at the phylum level, since these sequences could be noise (e.g. possible chimeric sequences). Note that if your data has not been classified against the default database, you may need to change p__ to a string that enables phylum-level assignments to be identified, or simply omit that line.

In general though, it can be very informative if a significant proportion of your reads come back as unclassified ASVs, as this can indicate upstream analysis problems or suggest that you are studying a poorly characterized environment where you have a good chance of identifying many novel phyla. Our recommendation is therefore not to filter out the unclassified sequences by default:

qiime taxa filter-table \
   --i-table dada2_output/dada2_table_filt.qza \
   --i-taxonomy taxa/classification.qza \
   --p-exclude mitochondria,chloroplast \
   --o-filtered-table dada2_output/dada2_table_filt_contam.qza

If you do want to exclude all the unclassified ASVs, then run the following instead (with the above caveat that the p__ line might need to be adapted to the taxonomic database used):

qiime taxa filter-table \
   --i-table dada2_output/dada2_table_filt.qza \
   --i-taxonomy taxa/classification.qza \
   --p-include p__ \
   --p-exclude mitochondria,chloroplast \
   --o-filtered-table dada2_output/dada2_table_filt_contam.qza

4.3 Exclude low-depth samples

Often certain samples will have quite low depth after these filtering steps; these can be excluded from downstream analyses since they will largely add noise. There is no single cut-off that works best for all datasets, but researchers often use minimum cut-offs in the range of 1000 to 4000 reads. You can also use a much lower cut-off if you want to retain all samples except those that failed entirely (e.g. depth < 50 reads). Ideally you would choose this cut-off after visualizing rarefaction curves, to determine at what read depth the richness of your samples plateaus, and choose a cut-off as close to this plateau as possible while retaining sufficient sample size for your analyses.

To perform this rarefaction curve analysis you would first need to summarize the filtered table we produced in the last step:

qiime feature-table summarize \
   --i-table dada2_output/dada2_table_filt_contam.qza \
   --o-visualization dada2_output/dada2_table_filt_contam_summary.qzv

From this summary you can determine the maximum depth across your samples. You can then generate the rarefaction curves with this command (where X is a placeholder for that maximum depth).

qiime diversity alpha-rarefaction \
   --i-table dada2_output/dada2_table_filt_contam.qza \
   --p-max-depth X \
   --p-steps 20 \
   --p-metrics 'observed_features' \
   --o-visualization rarefaction_curves_test.qzv

Take a look at these curves to help decide on a minimum depth cut-off for retaining samples. Once you decide on a hard cut-off you can exclude samples below this cut-off with this command (where SET_CUTOFF is a placeholder for the minimum depth you select):

qiime feature-table filter-samples \
   --i-table dada2_output/dada2_table_filt_contam.qza \
   --p-min-frequency SET_CUTOFF \
   --o-filtered-table dada2_output/dada2_table_final.qza

Alternatively, if you do not wish to exclude any samples then you can simply make a copy of the QZA file with the final table filename (i.e. cp dada2_output/dada2_table_filt_contam.qza dada2_output/dada2_table_final.qza), since this is the filename used for the remaining commands below.

4.4 Subset and summarize filtered table

Once we have our final filtered table we need to subset the QZA of ASV sequences to the same set of ASVs. You can exclude the removed ASVs from the sequence file with this command:

qiime feature-table filter-seqs \
   --i-data dada2_output/representative_sequences.qza \
   --i-table dada2_output/dada2_table_final.qza \
   --o-filtered-data dada2_output/rep_seqs_final.qza

Finally, you can make a new summary of the final filtered abundance table:

qiime feature-table summarize \
   --i-table dada2_output/dada2_table_final.qza \
   --o-visualization dada2_output/dada2_table_final_summary.qzv

5A. Build tree with SEPP QIIME 2 plugin (16S data)

SEPP is one method for placing short sequences into a reference phylogenetic tree. This is a useful way of determining a phylogenetic tree for your ASVs. For 16S data you can do this with q2-fragment-insertion using the below command:

qiime fragment-insertion sepp \
   --i-representative-sequences dada2_output/rep_seqs_final.qza \
   --i-reference-database /home/shared/rRNA_db/16S/sepp-refs-gg-13-8.qza \
   --o-tree asvs-tree.qza \
   --o-placements insertion-placements.qza \
   --p-threads $NCORES

Note that if you do not already have this file locally you will need to download sepp-refs-gg-13-8.qza as specified in the fragment-insertion instructions. You can specify custom reference files to place other amplicons, but the easiest approach for 18S and ITS data is to instead create a de novo tree as outlined below.
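If needed, this reference file can be downloaded along these lines (the URL follows the QIIME 2 data resources pattern; check the fragment-insertion documentation for the current link):

wget https://data.qiime2.org/2022.2/common/sepp-refs-gg-13-8.qza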

5B. QIIME 2 de novo tree creation (18S and ITS data)

Given the lack of a pre-calculated reference tree for direct placement of 18S and ITS data (unlike the 16S tree above), we instead create the tree de novo, as described below.

Making multiple-sequence alignment

We'll need to make a multiple-sequence alignment of the ASVs before running FastTree. First, we'll make a folder for the output files.

mkdir tree_out

We'll use MAFFT to make a de novo multiple-sequence alignment of the ASVs.

qiime alignment mafft --i-sequences dada2_output/rep_seqs_final.qza \
                      --p-n-threads $NCORES \
                      --o-alignment tree_out/rep_seqs_final_aligned.qza

Filtering multiple-sequence alignment

Variable positions in the alignment need to be masked before FastTree is run, which can be done with this command:

qiime alignment mask --i-alignment tree_out/rep_seqs_final_aligned.qza \
                     --o-masked-alignment tree_out/rep_seqs_final_aligned_masked.qza

Running FastTree

Finally FastTree can be run on this masked multiple-sequence alignment:

qiime phylogeny fasttree --i-alignment tree_out/rep_seqs_final_aligned_masked.qza \
                         --p-n-threads $NCORES \
                         --o-tree tree_out/rep_seqs_final_aligned_masked_tree.qza

Add root to tree

FastTree returns an unrooted tree. One basic way to add a root to a tree is to place it at the midpoint of the largest tip-to-tip distance in the tree, which is done with this command:

qiime phylogeny midpoint-root --i-tree tree_out/rep_seqs_final_aligned_masked_tree.qza \
                              --o-rooted-tree tree_out/rep_seqs_final_aligned_masked_tree_rooted.qza

Re-name file

To keep this output filename consistent with the SOP you can simply make a copy of this output tree.

cp tree_out/rep_seqs_final_aligned_masked_tree_rooted.qza asvs-tree.qza

6. Generate rarefaction curves

A key quality control step is to plot rarefaction curves for all of your samples to determine if you performed sufficient sequencing. The command below will generate these plots (X is a placeholder for the maximum depth in your dataset, which you can determine by running the summarize command above). Depending on whether you decided to exclude any samples, you may not want to re-create rarefaction curves. Note, however, that the command below will output rarefaction curves for a range of alpha-diversity metrics (including phylogenetic metrics), whereas the curves above were based on richness (referred to as observed_features) only. Also note that the $METADATA bash variable was defined in the first step of this SOP and simply points to the metadata table.

qiime diversity alpha-rarefaction \
   --i-table dada2_output/dada2_table_final.qza \
   --p-max-depth X \
   --p-steps 20 \
   --i-phylogeny asvs-tree.qza \
   --m-metadata-file $METADATA \
   --o-visualization rarefaction_curves.qzv

For some reason, when the metadata file is provided, the QIIME 2 default in the above curves (which you can see in the visualization) is not to give you the option of viewing each sample's rarefaction curve individually (even though this is the default later on in the stacked barplots!), only the curves grouped by each metadata category. As it can be quite important for data QC to see whether you have inconsistent samples, we rerun the above command, but this time omitting the metadata file (use the same X for the maximum depth as above).

qiime diversity alpha-rarefaction \
   --i-table dada2_output/dada2_table_final.qza \
   --p-max-depth X \
   --p-steps 20 \
   --i-phylogeny asvs-tree.qza \
   --o-visualization rarefaction_curves_eachsample.qzv

7. Generate stacked barchart of taxa relative abundances

A more useful output is the interactive stacked bar-charts of the taxonomic abundances across samples, which can be output with this command:

qiime taxa barplot \
   --i-table dada2_output/dada2_table_final.qza \
   --i-taxonomy taxa/classification.qza \
   --m-metadata-file $METADATA \
   --o-visualization taxa/taxa_barplot.qzv

The QIIME 2 default in the above barplots is to plot each sample individually. In the visualizer you can label samples according to their metadata categories, but the plots are not summed together by metadata group (as we could do in QIIME 1 using the summarize_taxa_through_plots.py script). To get grouped barplots, you first need to run a new command that groups the samples by a metadata category; this creates a new feature table, which then becomes the input for the same taxa barplot command as above (both commands need to be rerun for each metadata category of interest). Note that CATEGORY below is a placeholder for the text label of your category of interest from the metadata file. You will also need to create a new metadata file (called CATEGORY_METADATA below) whose sample IDs match the grouped values, in order to make a barplot of this grouped data.

qiime feature-table group \
   --i-table dada2_output/dada2_table_final.qza \
   --p-axis sample \
   --p-mode sum \
   --m-metadata-file $METADATA \
   --m-metadata-column CATEGORY \
   --o-grouped-table dada2_output/dada2_table_final_CATEGORY.qza

qiime taxa barplot \
   --i-table dada2_output/dada2_table_final_CATEGORY.qza \
   --i-taxonomy taxa/classification.qza \
   --m-metadata-file CATEGORY_METADATA \
   --o-visualization taxa/taxa_barplot_CATEGORY.qzv
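Because qiime feature-table group renames the samples to the values of the chosen category, the new metadata file (CATEGORY_METADATA above) just needs an ID column listing those values. As an illustration, for a hypothetical category called body_site with values gut and skin, it could look like this (tab-separated):

sample-id	body_site
gut	gut
skin	skin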

8. Calculating diversity metrics and generating ordination plots

Common alpha- and beta-diversity metrics can be calculated with a single command in QIIME 2. In addition, ordination plots (such as PCoA plots for weighted UniFrac distances) will be generated automatically. This command will also rarefy all samples to the same sequencing depth before calculating these metrics (X is a placeholder for the lowest reasonable sample depth; samples with depth below this cut-off will be excluded).

qiime diversity core-metrics-phylogenetic \
   --i-table dada2_output/dada2_table_final.qza \
   --i-phylogeny asvs-tree.qza \
   --p-sampling-depth X \
   --m-metadata-file $METADATA \
   --p-n-jobs-or-threads $NCORES \
   --output-dir diversity

You can then produce boxplots comparing the different categories in your metadata file. For example, to create boxplots comparing the Shannon alpha-diversity metric you can use this command:

qiime diversity alpha-group-significance \
   --i-alpha-diversity diversity/shannon_vector.qza \
   --m-metadata-file $METADATA \
   --o-visualization diversity/shannon_compare_groups.qzv

Note that you can also export (see below) this or any other diversity metric file (ending in .qza) and analyze them with a different program.
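For example, the Shannon diversity values could be exported to a plain-text table like this (the output folder name is just a suggestion):

qiime tools export \
   --input-path diversity/shannon_vector.qza \
   --output-path diversity/shannon_exported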

9. Identifying differentially abundant features with ANCOM

ANCOM is one method to test for differences in the relative abundance of features between sample groupings. It is a compositional approach that makes no assumptions about feature distributions. However, it requires that all features have non-zero abundances so a pseudocount first needs to be added (1 is a typical pseudocount choice):

qiime composition add-pseudocount \
   --i-table dada2_output/dada2_table_final.qza \
   --p-pseudocount 1 \
   --o-composition-table dada2_output/dada2_table_final_pseudocount.qza

Then ANCOM can be run with this command; note that CATEGORY is a placeholder for the text label of your category of interest from the metadata file:

qiime composition ancom \
   --i-table dada2_output/dada2_table_final_pseudocount.qza \
   --m-metadata-file $METADATA \
   --m-metadata-column CATEGORY \
   --output-dir ancom_output

10. Exporting the final abundance, profile and sequence files

Lastly, to get the BIOM file (with associated taxonomy) and FASTA file (one per ASV) for your dataset to plug into other programs you can use the commands below.

To export the FASTA:

qiime tools export \
   --input-path dada2_output/rep_seqs_final.qza \
   --output-path dada2_output_exported

To export the BIOM table (with taxonomy added as metadata), first edit the header of the taxonomy file exported in step 3.2 so that it can be parsed by biom add-metadata:

sed -i -e '1 s/Feature/#Feature/' -e '1 s/Taxon/taxonomy/' taxa/taxonomy.tsv

qiime tools export \
   --input-path dada2_output/dada2_table_final.qza \
   --output-path dada2_output_exported

biom add-metadata \
   -i dada2_output_exported/feature-table.biom \
   -o dada2_output_exported/feature-table_w_tax.biom \
   --observation-metadata-fp taxa/taxonomy.tsv \
   --sc-separated taxonomy

biom convert \
   -i dada2_output_exported/feature-table_w_tax.biom \
   -o dada2_output_exported/feature-table_w_tax.txt \
   --to-tsv \
   --header-key taxonomy

Other resources

There are many other possible QIIME 2 analyses that we recommend you look into, and you may find the other resources on this wiki useful as well.
