See our improved statistical classifier for immune repertoires: Dynamic Kernel Matching.

Statistical classifiers for diagnosing disease from immune repertoires



The full set of antibodies and immune receptors in an individual contains traces of past and current immune responses. These traces can serve as biomarkers for diseases mediated by the adaptive immune system (e.g. infectious disease, organ rejection, autoimmune disease, and cancer). Only a handful of the immune receptors sequenced from a patient are expected to contain these traces. Here we present the source code for a method that detects these traces.

First, the CDR3 is parsed from every antibody sequence in a patient (see VDJ Server). Each CDR3 is then cut into fixed-length subsequences that we call snippets; these are nothing more than the k-mers of the CDR3. The amino acid residues of each snippet are then described by their biochemical properties in a position-dependent manner using Atchley factors.
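The snippet and encoding steps can be sketched as follows. This is a minimal illustration, not the repository's actual code; the function names are ours, and the Atchley-factor entries below are placeholder values shown only to fix the data layout (the real five-factor table for all 20 amino acids comes from Atchley et al.).

```python
def snippets(cdr3, k=4):
    """Cut a CDR3 amino-acid sequence into fixed-length snippets (k-mers)."""
    return [cdr3[i:i + k] for i in range(len(cdr3) - k + 1)]

# Hypothetical lookup table: each amino acid maps to 5 biochemical factors.
# The numbers here are placeholders; load the published Atchley-factor table
# for real use.
ATCHLEY = {
    "C": (-1.34, 0.47, -0.86, -1.02, -0.26),
    "A": (-0.59, -1.30, -0.73, 1.57, -0.15),
    "S": (-0.23, 1.40, -4.76, 0.67, -2.65),
    "R": (1.54, -0.06, 1.50, 0.44, 2.90),
}

def encode(snippet):
    """Encode a snippet position-dependently: concatenate the factors of each
    residue, so the residue at position i occupies feature slots 5*i..5*i+4."""
    return [factor for aa in snippet for factor in ATCHLEY[aa]]
```

A 4-residue snippet therefore becomes a 20-dimensional feature vector, and each position in the snippet keeps its own block of features.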

The main idea is to score every snippet's biochemical features with a detector function and to aggregate the scores into a single value that represents a diagnosis. Because only a handful of snippets are expected to score highly in patients with a disease, we aggregate the scores by taking the maximum. The maximum score is then used to predict the probability that a patient has a positive diagnosis (a high score suggests a positive diagnosis; no high scores suggest a negative diagnosis). The parameters of the detector function are fitted by maximizing the log-likelihood (equivalently, minimizing the cross-entropy error) of the observed diagnoses.
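The score-aggregate-predict pipeline above can be written compactly. This is a sketch under the assumption of a linear detector (the publication describes the actual detector); the function names are ours.

```python
import numpy as np

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def detector(snippet_features, w, b):
    """Score each encoded snippet (one row per snippet) with a linear
    detector; w and b are the fitted parameters."""
    return snippet_features @ w + b

def patient_probability(snippet_features, w, b):
    """Aggregate by the maximum snippet score, then map the maximum to the
    probability of a positive diagnosis."""
    return sigmoid(np.max(detector(snippet_features, w, b)))

def cross_entropy(p, y):
    """Negative log-likelihood of diagnosis y in {0, 1} given probability p;
    minimizing this over patients fits the detector parameters."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))
```

Because the max picks out a single snippet, a patient is classified as positive as soon as any one snippet scores highly, matching the intuition that only a handful of receptors carry the disease trace.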

The model is fitted to the training data using gradient-based optimization. First, initial values are randomly drawn for each parameter. Then 2,500 steps of gradient-based optimization are used to find a locally optimal fit to the data. We find that this fitting procedure must be repeated hundreds of thousands of times to find a good fit to the training data, so, using TensorFlow, it is run many times in parallel on a GPU. We call each parallel run a "replica", and the replica with the best fit to the training data is then scored on unseen, held-out data.
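The replica strategy — many random restarts fitted in parallel, keep the best — can be sketched in numpy by vectorizing over a replica axis. This toy version fits a per-example logistic detector (omitting the max-pooling step for brevity) and is our illustration, not the repository's TensorFlow code.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_replicas(X, y, n_replicas=8, steps=200, lr=0.1):
    """Fit many randomly initialized replicas of a logistic detector at once
    (vectorized over the replica axis) and return the best-fitting one."""
    n, d = X.shape
    W = rng.normal(size=(n_replicas, d))   # random initial values per replica
    b = rng.normal(size=n_replicas)
    for _ in range(steps):
        z = X @ W.T + b                    # (n, n_replicas) scores
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - y[:, None]                 # gradient of the cross-entropy
        W -= lr * (g.T @ X) / n            # one gradient step for every replica
        b -= lr * g.mean(axis=0)
    # Score every replica on the training data and keep the best fit.
    p = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    loss = -(y[:, None] * np.log(p) + (1 - y[:, None]) * np.log(1 - p)).mean(axis=0)
    best = int(np.argmin(loss))
    return W[best], b[best], float(loss[best])
```

On a GPU, the same vectorization lets TensorFlow run all replicas in lock-step, so hundreds of thousands of restarts cost little more than one large matrix multiply per optimization step.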

For a complete description of this approach, see our publication in BMC Bioinformatics:



  • Download: zip
  • Git: git clone

Primary Files

  • (Data used to develop the approach cannot be made available at this time)
  • (Overwrite with this file to see how the model performs on synthetic data)