This single-cell PyTorch dataloader / Lightning DataModule is designed to be used with scPRINT and lamindb.
It allows you to:

- load thousands of datasets containing millions of cells in a few seconds
- preprocess the data per dataset and download it locally (normalization, filtering, etc.)
- create more complex single-cell datasets
- extend it to your needs
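To illustrate the kind of per-dataset preprocessing meant here (normalization, filtering), below is a minimal numpy sketch. The steps and thresholds are assumptions for illustration only, not scDataLoader's actual pipeline:

```python
import numpy as np

def basic_preprocess(counts, min_genes_per_cell=200, target_sum=1e4):
    """Toy sketch: filter low-quality cells, then library-size normalize.

    NOTE: hypothetical steps for illustration; not scDataLoader's code.
    counts: (n_cells, n_genes) raw count matrix.
    """
    # drop cells expressing too few genes
    genes_per_cell = (counts > 0).sum(axis=1)
    kept = counts[genes_per_cell >= min_genes_per_cell]
    # scale each remaining cell to the same total count
    totals = kept.sum(axis=1, keepdims=True)
    return kept / totals * target_sum

counts = np.array([[1, 0, 1], [5, 0, 0], [2, 2, 4]], dtype=float)
normalized = basic_preprocess(counts, min_genes_per_cell=2, target_sum=10)
```

In the real package these operations run per dataset before the results are cached locally, so the expensive work happens once.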
It is built on top of lamindb and the `.mapped()` function by Sergey (https://github.com/Koncopd).
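Conceptually, `.mapped()` exposes a collection of on-disk datasets as one virtual, lazily indexed dataset. Here is a hypothetical pure-Python sketch of just the index-mapping logic (not lamindb's actual implementation):

```python
import bisect

class VirtualConcatDataset:
    """Toy sketch: map a global cell index to (dataset, local row)
    without concatenating anything in memory. Illustrative only."""

    def __init__(self, lengths):
        # cumulative end offsets, e.g. lengths [100, 150, 50] -> [100, 250, 300]
        self.offsets = []
        total = 0
        for n in lengths:
            total += n
            self.offsets.append(total)

    def __len__(self):
        return self.offsets[-1] if self.offsets else 0

    def __getitem__(self, i):
        # find which dataset the global index falls into
        ds = bisect.bisect_right(self.offsets, i)
        local = i - (self.offsets[ds - 1] if ds else 0)
        return ds, local

vd = VirtualConcatDataset([100, 150, 50])
```

Because only offsets are kept in memory, indexing stays O(log n_datasets) regardless of how many millions of cells the collection holds.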
The package has been designed together with the scPRINT paper and model.
I created this dataloader for my PhD project, where I use it to load and preprocess thousands of datasets containing millions of cells in a few seconds. I believe that anyone applying AI to single-cell RNA-seq and other sequencing datasets will want such a tool, which did not exist before.
```bash
pip install scdataloader
```

or, for a development install:

```bash
git clone https://github.com/jkobject/scDataLoader.git
pip install -e scDataLoader
```
```python
# initialize a local lamin database first:
# !lamin init --storage ~/scdataloader --schema bionty
import lamindb as ln

from scdataloader import utils
from scdataloader.preprocess import (
    LaminPreprocessor,
    additional_postprocess,
    additional_preprocess,
)

# preprocess datasets
DESCRIPTION = "preprocessed by scDataLoader"

cx_dataset = (
    ln.Collection.using(instance="laminlabs/cellxgene")
    .filter(name="cellxgene-census", version="2023-12-15")
    .one()
)
cx_dataset, len(cx_dataset.artifacts.all())

do_preprocess = LaminPreprocessor(
    additional_postprocess=additional_postprocess,
    additional_preprocess=additional_preprocess,
    skip_validate=True,
    subset_hvg=0,
)

preprocessed_dataset = do_preprocess(
    cx_dataset, name=DESCRIPTION, description=DESCRIPTION, start_at=6, version="2"
)
```
```python
# create dataloaders
from scdataloader import DataModule
import tqdm

datamodule = DataModule(
    collection_name="preprocessed dataset",
    organisms=["NCBITaxon:9606"],  # organism that we will work on
    how="most expr",  # the collator keeps only the most expressed genes
    max_len=1000,  # only the 1000 most expressed
    batch_size=64,
    num_workers=1,
    validation_split=0.1,
    test_split=0,
)

for batch in tqdm.tqdm(datamodule.train_dataloader()):
    print(batch)
    break

# with PyTorch Lightning:
# Trainer(model, datamodule)
```
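To illustrate what the `how="most expr"` / `max_len` collation does conceptually, here is a hedged numpy sketch (an illustrative re-implementation, not the library's actual collator):

```python
import numpy as np

def collate_most_expressed(expr, max_len=1000):
    """Toy sketch: for each cell, keep only the `max_len` most expressed genes.

    NOTE: illustrative only; not scDataLoader's actual collator.
    expr: (n_cells, n_genes) expression matrix.
    Returns (values, gene_indices), each of shape (n_cells, max_len).
    """
    # argsort descending by expression per cell, take the top max_len genes
    idx = np.argsort(-expr, axis=1)[:, :max_len]
    vals = np.take_along_axis(expr, idx, axis=1)
    return vals, idx

expr = np.array([[0, 5, 2, 9], [1, 0, 3, 2]])
vals, idx = collate_most_expressed(expr, max_len=2)
```

Keeping only the top-expressed genes gives every batch a fixed width (`max_len`), which is what lets cells from datasets with different gene panels be stacked into one tensor.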
See the notebooks in the docs for more usage examples.
You can use the command line to preprocess a large database of datasets, as shown here for cellxgene; this allows parallelization and easier usage.

```bash
scdataloader --instance "laminlabs/cellxgene" --name "cellxgene-census" --version "2023-12-15" --description "preprocessed for scprint" --new_name "scprint main" --start_at 10 >> scdataloader.out
```
Please refer to the scPRINT documentation and the PyTorch Lightning documentation for more information on command-line usage.
Read the CONTRIBUTING.md file.
This project is licensed under the MIT License - see the LICENSE file for details.
Awesome single cell dataloader created by @jkobject