TypeError running prediction task -> unexpected keyword argument #235

Closed

biyanisuraj opened this issue Aug 3, 2023 · 1 comment

@biyanisuraj commented:
While running the prediction task, I receive a TypeError:

export EXPERIMENT_NAME="llm"
export RUN_ID=$(python madewithml/predict.py get-best-run-id --experiment-name $EXPERIMENT_NAME --metric val_loss --mode ASC)
python madewithml/predict.py predict \
    --run-id $RUN_ID \
    --title "Transfer learning with transformers" \
    --description "Using transformers for transfer learning on text classification tasks."

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/surajb/private/Made-With-ML/madewithml/predict.py:133 in predict                          │
│                                                                                                  │
│   130 │                                                                                          │
│   131 │   # Predict                                                                              │
│   132 │   sample_df = pd.DataFrame([{"title": title, "description": description, "tag": "other   │
│ ❱ 133 │   results = predict_with_proba(df=sample_df, predictor=predictor, index_to_class=prepr   │
│   134 │   logger.info(json.dumps(results, cls=NumpyEncoder, indent=2))                           │
│   135 │   return results                                                                         │
│   136                                                                                            │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ best_checkpoint = TorchCheckpoint(local_path=/private/tmp/mlflow/852956352849222177/3dab467… │ │
│ │     description = 'Using transformers for transfer learning on text classification tasks.'   │ │
│ │       predictor = TorchPredictor(model=FinetunedLLM(                                         │ │
│ │                     (llm): BertModel(                                                        │ │
│ │                   │   (embeddings): BertEmbeddings(                                          │ │
│ │                   │     (word_embeddings): Embedding(31090, 768, padding_idx=0)              │ │
│ │                   │     (position_embeddings): Embedding(512, 768)                           │ │
│ │                   │     (token_type_embeddings): Embedding(2, 768)                           │ │
│ │                   │     (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)   │ │
│ │                   │     (dropout): Dropout(p=0.1, inplace=False)                             │ │
│ │                   │   )                                                                      │ │
│ │                   │   (encoder): BertEncoder(                                                │ │
│ │                   │     (layer): ModuleList(                                                 │ │
│ │                   │   │   (0-11): 12 x BertLayer(                                            │ │
│ │                   │   │     (attention): BertAttention(                                      │ │
│ │                   │   │   │   (self): BertSelfAttention(                                     │ │
│ │                   │   │   │     (query): Linear(in_features=768, out_features=768,           │ │
│ │                   bias=True)                                                                 │ │
│ │                   │   │   │     (key): Linear(in_features=768, out_features=768, bias=True)  │ │
│ │                   │   │   │     (value): Linear(in_features=768, out_features=768,           │ │
│ │                   bias=True)                                                                 │ │
│ │                   │   │   │     (dropout): Dropout(p=0.1, inplace=False)                     │ │
│ │                   │   │   │   )                                                              │ │
│ │                   │   │   │   (output): BertSelfOutput(                                      │ │
│ │                   │   │   │     (dense): Linear(in_features=768, out_features=768,           │ │
│ │                   bias=True)                                                                 │ │
│ │                   │   │   │     (LayerNorm): LayerNorm((768,), eps=1e-12,                    │ │
│ │                   elementwise_affine=True)                                                   │ │
│ │                   │   │   │     (dropout): Dropout(p=0.1, inplace=False)                     │ │
│ │                   │   │   │   )                                                              │ │
│ │                   │   │     )                                                                │ │
│ │                   │   │     (intermediate): BertIntermediate(                                │ │
│ │                   │   │   │   (dense): Linear(in_features=768, out_features=3072, bias=True) │ │
│ │                   │   │   │   (intermediate_act_fn): GELUActivation()                        │ │
│ │                   │   │     )                                                                │ │
│ │                   │   │     (output): BertOutput(                                            │ │
│ │                   │   │   │   (dense): Linear(in_features=3072, out_features=768, bias=True) │ │
│ │                   │   │   │   (LayerNorm): LayerNorm((768,), eps=1e-12,                      │ │
│ │                   elementwise_affine=True)                                                   │ │
│ │                   │   │   │   (dropout): Dropout(p=0.1, inplace=False)                       │ │
│ │                   │   │     )                                                                │ │
│ │                   │   │   )                                                                  │ │
│ │                   │     )                                                                    │ │
│ │                   │   )                                                                      │ │
│ │                   │   (pooler): BertPooler(                                                  │ │
│ │                   │     (dense): Linear(in_features=768, out_features=768, bias=True)        │ │
│ │                   │     (activation): Tanh()                                                 │ │
│ │                   │   )                                                                      │ │
│ │                     )                                                                        │ │
│ │                     (dropout): Dropout(p=0.5, inplace=False)                                 │ │
│ │                     (fc1): Linear(in_features=768, out_features=6, bias=True)                │ │
│ │                   ), preprocessor=<madewithml.data.CustomPreprocessor object at              │ │
│ │                   0x17fbc5b10>, use_gpu=False)                                               │ │
│ │    preprocessor = <madewithml.data.CustomPreprocessor object at 0x17fbc5b10>                 │ │
│ │          run_id = '3dab46713f524aa0a9a3df3227e0ca1f'                                         │ │
│ │       sample_df = │   │   │   │   │   │   │   │    title                                     │ │
│ │                   description    tag                                                         │ │
│ │                   0  Transfer learning with transformers  Using transformers for transfer    │ │
│ │                   learning on te...  other                                                   │ │
│ │           title = 'Transfer learning with transformers'                                      │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: predict_with_proba() got an unexpected keyword argument 'index_to_class'

I was able to run the predict task successfully by removing the last keyword argument from this call:

results = predict_with_proba(df=sample_df, predictor=predictor, index_to_class=preprocessor.index_to_class)

so that it matches the function definition:

def predict_with_proba(
    df: pd.DataFrame,
    predictor: ray.train.torch.torch_predictor.TorchPredictor,
) -> List:  # pragma: no cover, tested with inference workload
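
For reference, the alternative fix would be to add index_to_class to the signature instead of dropping it at the call site. Below is a minimal sketch of that approach, not the repository's actual fix: the predictor argument is a generic stand-in, and its predict_proba method is an assumption used for illustration rather than Ray's TorchPredictor API.

# Minimal sketch, assuming a predictor object with a predict_proba(df) method
# that returns an (n_rows, n_classes) probability array.
from typing import Dict, List

import numpy as np
import pandas as pd


def predict_with_proba(
    df: pd.DataFrame,
    predictor,                       # stand-in for the TorchPredictor above
    index_to_class: Dict[int, str],  # hypothetical extra parameter accepted here
) -> List[Dict]:
    """Return the most likely tag and its probability for each input row."""
    y_prob = predictor.predict_proba(df)  # assumed (n_rows, n_classes) array
    results = []
    for prob in y_prob:
        i = int(np.argmax(prob))
        results.append({"prediction": index_to_class[i], "probability": float(prob[i])})
    return results


# With this signature, the original call site would work unchanged:
# results = predict_with_proba(df=sample_df, predictor=predictor,
#                              index_to_class=preprocessor.index_to_class)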

@GokuMohandas (Owner) commented:

Hi @biyanisuraj , great catch! I've updated the code to fix this mistake. Thanks for submitting this issue.
