several errors in trial runs #6

Closed
shobhitagrawal1 opened this issue Jul 11, 2023 · 4 comments
Labels: bug (Something isn't working)

Comments

@shobhitagrawal1

Hi, really interesting method, congrats.
While trying to run the demo I am running into several problems.
The parameters commented out below throw errors at subsequent steps -- for example, decoder_width is an unrecognised parameter for the model, etc.
After commenting out the parameters that were causing problems, I get:

Support for training_epoch_end has been removed in v2.0.0. biolordTrainingPlan implements this method. You can use the on_train_epoch_end hook instead. To access outputs, save them in-memory as instance attributes. You can find migration examples in Lightning-AI/pytorch-lightning#16520

Would be grateful for any assistance :)

module_params = {
    # "decoder_width": 1024,
    # "decoder_depth": 4,
    "attribute_nn_width": 512,
    "attribute_nn_depth": 2,
    "n_latent_attribute_categorical": 4,
    "loss_ae": "gauss",
    "reconstruction_penalty": 1e2,
    "unknown_attribute_penalty": 1e1,
    "unknown_attribute_noise_param": 1e-1,
    "attribute_dropout_rate": 0.1,
    "use_batch_norm": False,
    "use_layer_norm": False,
    "seed": 42,
}

model = biolord.Biolord(
    adata=adata,
    n_latent=32,
    model_name="spatio_temporal_infected",
    module_params=module_params,
    train_classifiers=False,
    split_key="split_random",
)

trainer_params = {
    "n_epochs_warmup": 0,
    "latent_lr": 1e-4,
    "latent_wd": 1e-4,
    "decoder_lr": 1e-4,
    "decoder_wd": 1e-4,
    "attribute_nn_lr": 1e-2,
    "attribute_nn_wd": 4e-8,
    "step_size_lr": 45,
    "cosine_scheduler": True,
    "scheduler_final_lr": 1e-5,
}

model.train(
    max_epochs=500,
    batch_size=512,
    plan_kwargs=trainer_params,
    early_stopping=True,
    early_stopping_patience=20,
    check_val_every_n_epoch=10,
    num_workers=1,
    enable_checkpointing=False,
)
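
For reference, the deprecation in the error message is Lightning's removal of training_epoch_end in v2.0: step outputs now have to be cached on the module and read back in the on_train_epoch_end hook. Below is a minimal, generic sketch of that migration pattern; it is not biolord's biolordTrainingPlan, just an illustration of what the error message describes.

import torch
import lightning.pytorch as pl


class ExamplePlan(pl.LightningModule):
    """Toy module illustrating the Lightning >= 2.0 epoch-end pattern."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 1)
        self.training_step_outputs = []  # in-memory cache of per-step losses

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.training_step_outputs.append(loss.detach())
        return loss

    def on_train_epoch_end(self):
        # Replaces the removed training_epoch_end(self, outputs) hook:
        # outputs are no longer passed in, so read the cached values instead.
        epoch_mean = torch.stack(self.training_step_outputs).mean()
        self.log("train_loss_epoch", epoch_mean)
        self.training_step_outputs.clear()

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)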

Version information

anndata 0.9.1
biolord 0.0.1
matplotlib 3.7.2
numpy 1.24.4
pandas 2.0.3
scanpy 1.9.3
scipy 1.11.1
scvi 1.0.2
seaborn 0.12.2
session_info 1.0.0

PIL 9.4.0
absl NA
aiohttp 3.8.4
aiosignal 1.3.1
anyio NA
async_timeout 4.0.2
attr 23.1.0
backoff 2.2.1
brotli NA
bs4 4.12.2
certifi 2023.05.07
cffi 1.15.1
charset_normalizer 2.0.4
chex 0.1.7
click 8.1.4
contextlib2 NA
croniter NA
cycler 0.10.0
cython_runtime NA
dateutil 2.8.2
deepdiff 6.3.1
docrep 0.3.2
etils 1.3.0
fastapi 0.100.0
flax 0.7.0
frozenlist 1.3.3
fsspec 2023.6.0
gmpy2 2.1.2
h5py 3.9.0
idna 3.4
importlib_metadata NA
importlib_resources NA
jax 0.4.13
jaxlib 0.4.13
joblib 1.3.1
kiwisolver 1.4.4
lightning 2.0.5
lightning_cloud NA
lightning_fabric 2.0.5
lightning_utilities 0.9.0
llvmlite 0.40.1
ml_collections NA
ml_dtypes 0.2.0
mpl_toolkits NA
mpmath 1.2.1
msgpack 1.0.5
mudata 0.2.3
multidict 6.0.4
multipart 0.0.6
multipledispatch 0.6.0
natsort 8.4.0
numba 0.57.1
numpyro 0.12.1
nvfuser NA
opt_einsum v3.3.0
optax 0.1.5
ordered_set 4.1.0
packaging 23.1
patsy 0.5.3
pkg_resources NA
psutil 5.9.5
pydantic 1.10.11
pygments 2.15.1
pyparsing 3.0.9
pyro 1.8.5
pytorch_lightning 2.0.5
pytz 2023.3
requests 2.29.0
rich NA
setuptools 67.8.0
six 1.16.0
sklearn 1.3.0
sniffio 1.3.0
socks 1.7.1
soupsieve 2.4.1
sparse 0.14.0
starlette 0.27.0
statsmodels 0.14.0
sympy 1.11.1
threadpoolctl 3.1.0
toolz 0.12.0
torch 2.0.1
torchaudio 2.0.2
torchmetrics 1.0.0
torchvision 0.15.2
tqdm 4.65.0
tree 0.1.8
typing_extensions NA
urllib3 1.26.16
uvicorn 0.22.0
websocket 1.6.1
websockets 11.0.3
xarray 2023.6.0
yaml 6.0
yarl 1.9.2
zipp NA
zoneinfo NA

Python 3.9.17 (main, Jul 5 2023, 20:41:20) [GCC 11.2.0]
Linux-4.18.0-305.12.1.el8_4.x86_64-x86_64-with-glibc2.31

shobhitagrawal1 added the bug label on Jul 11, 2023
@zoepiran
Contributor

Hi,

I am sorry for the inconvenience - if you git clone the repo, the framework runs without errors.
We will soon make a new release, which will also include updated tutorials!

Please excuse the inconvenience,
Zoe

@shobhitagrawal1
Author

Thanks for replying, I will try your recommendation. Looking forward to your new tutorials!

@shobhitagrawal1
Author

Hi Zoe, everything ran fine, except that the parameter "loss_ae" was not found, so I have commented it out for now. It does not seem to be mentioned in _module.py or the init - I did not delve deeper into whether it defaults to MSE or Gaussian... thanks!

@zoepiran
Contributor

zoepiran commented Jul 12, 2023

I am sorry; to match current scvi we changed "loss_ae" -> "gene_likelihood" (if your input is normalized pass "normal", and if raw counts use "nb").
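
For anyone hitting the same thing, the module_params from the original snippet would then presumably become something like the sketch below, based purely on the rename described above (all other keys kept exactly as in the original post):

module_params = {
    "attribute_nn_width": 512,
    "attribute_nn_depth": 2,
    "n_latent_attribute_categorical": 4,
    # renamed from "loss_ae"; use "normal" for normalized input, "nb" for raw counts
    "gene_likelihood": "normal",
    "reconstruction_penalty": 1e2,
    "unknown_attribute_penalty": 1e1,
    "unknown_attribute_noise_param": 1e-1,
    "attribute_dropout_rate": 0.1,
    "use_batch_norm": False,
    "use_layer_norm": False,
    "seed": 42,
}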
