
# Troubleshoot[[troubleshoot]]

Sometimes errors occur, but we are here to help! This guide covers some of the most common issues we've seen and how you can resolve them. However, this guide isn't meant to be a comprehensive collection of every 🤗 Transformers issue. For more help with troubleshooting your issue, try:

1. Ask for help on the forums. You can post your question to a specific category, such as Beginners or 🤗 Transformers. Make sure you write a descriptive forum post with some reproducible code to maximize the likelihood that your problem gets solved!
2. Create an Issue on the 🤗 Transformers repository if it is a bug related to the library. Try to include as much information describing the bug as possible to help us better figure out what's wrong and how we can fix it.
3. Check the Migration guide if you use an older version of 🤗 Transformers, since some important changes have been introduced between versions.

For more details about troubleshooting and getting help, take a look at Chapter 8 of the Hugging Face course.

## Firewalled environments[[firewalled-environments]]

Some GPU instances on cloud and intranet setups are firewalled to external connections, resulting in a connection error. When your script attempts to download model weights or data, the download will hang and then time out with the following message:

```
ValueError: Connection error, and we cannot find the requested files in the cached path.
Please try again or make sure your Internet connection is on.
```

In this case, you should try to run 🤗 Transformers in offline mode to avoid the connection error.
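A minimal sketch of how offline mode can be enabled, assuming the checkpoint was already downloaded into the local cache. The environment variables and the `local_files_only` flag are the usual offline switches; set the variables before 🤗 Transformers is imported:

```py
>>> import os

>>> # Disable outgoing network calls from 🤗 Transformers and the Hub client
>>> os.environ["TRANSFORMERS_OFFLINE"] = "1"
>>> os.environ["HF_HUB_OFFLINE"] = "1"

>>> from transformers import AutoModel

>>> # local_files_only fails fast (instead of timing out) if the files aren't cached yet
>>> model = AutoModel.from_pretrained("google-bert/bert-base-uncased", local_files_only=True)
```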

## CUDA out of memory[[cuda-out-of-memory]]

Training large models with millions of parameters can be challenging without the appropriate hardware. A common error you may encounter when the GPU runs out of memory is:

```
CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch)
```

Here are some potential solutions you can try to reduce memory use:

* Reduce the `per_device_train_batch_size` value in [`TrainingArguments`].
* Try using `gradient_accumulation_steps` in [`TrainingArguments`] to effectively increase the overall batch size; the sketch below shows the two options together.

Refer to the Performance guide for more details about memory-saving techniques.
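As a rough sketch of how the two options above combine: lowering the per-device batch size while raising `gradient_accumulation_steps` keeps the effective batch size constant but reduces peak GPU memory (the exact values and the output directory below are only illustrative):

```py
>>> from transformers import TrainingArguments

>>> # Effective batch size per device is still 2 * 8 = 16, but only 2 examples
>>> # are held in GPU memory during each forward/backward pass
>>> training_args = TrainingArguments(
...     output_dir="my_model_output",  # hypothetical output directory
...     per_device_train_batch_size=2,
...     gradient_accumulation_steps=8,
... )
```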

## Unable to load a saved TensorFlow model[[unable-to-load-a-saved-tensorflow-model]]

TensorFlow's `model.save` method saves the entire model (architecture, weights, training configuration) in a single file. However, when you load the model file again, you may run into an error because 🤗 Transformers may not load all of the TensorFlow-related objects in that file. To avoid issues with saving and loading TensorFlow models, we recommend that you:

* Save the model weights with the `h5` file extension using `model.save_weights`, and then reload the model with [`~TFPreTrainedModel.from_pretrained`]:
```py
>>> from transformers import TFPreTrainedModel
>>> from tensorflow import keras

>>> model.save_weights("some_folder/tf_model.h5")
>>> model = TFPreTrainedModel.from_pretrained("some_folder")
```

* Save the model with [`~TFPreTrainedModel.save_pretrained`] and load it again with [`~TFPreTrainedModel.from_pretrained`]:
```py
>>> from transformers import TFPreTrainedModel

>>> model.save_pretrained("path_to/model")
>>> model = TFPreTrainedModel.from_pretrained("path_to/model")
```

## ImportError[[importerror]]

Another common error you may encounter, especially if it is a newly released model, is ImportError:

```
ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location)
```

For these error types, check that you have the latest version of 🤗 Transformers installed so you can access the most recent models:

```bash
pip install transformers --upgrade
```
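If the import still fails after upgrading, it can be worth confirming that your script actually runs against the upgraded installation (a stale virtual environment is a common culprit). A quick check:

```py
>>> import transformers

>>> # A class that only exists in newer releases cannot be imported from an old version
>>> print(transformers.__version__)
```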

## CUDA error: device-side assert triggered[[cuda-error-deviceside-assert-triggered]]

Sometimes you may run into a generic CUDA error about an error in the device code.

```
RuntimeError: CUDA error: device-side assert triggered
```

๋” ์ž์„ธํ•œ ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€๋ฅผ ์–ป์œผ๋ ค๋ฉด ์šฐ์„  ์ฝ”๋“œ๋ฅผ CPU์—์„œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ฝ”๋“œ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€ํ•˜์—ฌ CPU๋กœ ์ „ํ™˜ํ•˜์„ธ์š”:

```py
>>> import os

>>> os.environ["CUDA_VISIBLE_DEVICES"] = ""
```

Another option is to get a better traceback from the GPU. Add the following environment variable at the beginning of your code so the traceback points to the source of the error:

```py
>>> import os

>>> os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```

## Incorrect output when padding tokens aren't masked[[incorrect-output-when-padding-tokens-arent-masked]]

In some cases, the output `hidden_state` may be incorrect if the `input_ids` include padding tokens. To demonstrate, load a model and tokenizer. You can access a model's `pad_token_id` to see its value. The `pad_token_id` may be `None` for some models, but you can always manually set it.

```py
>>> from transformers import AutoModelForSequenceClassification
>>> import torch

>>> model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-base-uncased")
>>> model.config.pad_token_id
0
```
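For a model whose `pad_token_id` is `None` (GPT-2, for example), one common workaround, shown here only as a sketch, is to reuse the end-of-sequence token as the padding token:

```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> gpt2_model = AutoModelForSequenceClassification.from_pretrained("openai-community/gpt2")

>>> # GPT-2 ships without a padding token, so borrow the EOS token for padding
>>> tokenizer.pad_token = tokenizer.eos_token
>>> gpt2_model.config.pad_token_id = tokenizer.eos_token_id
```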

The following example shows the output without masking the padding tokens:

```py
>>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]])
>>> output = model(input_ids)
>>> print(output.logits)
tensor([[ 0.0082, -0.2307],
        [ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>)
```

Here is the actual output of the second sequence:

```py
>>> input_ids = torch.tensor([[7592]])
>>> output = model(input_ids)
>>> print(output.logits)
tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
```

Most of the time, you should provide an `attention_mask` to your model to ignore the padding tokens and avoid this silent error. Now the output of the second sequence matches its actual output:

By default, the tokenizer creates an `attention_mask` for you based on your specific tokenizer's defaults.

```py
>>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]])
>>> output = model(input_ids, attention_mask=attention_mask)
>>> print(output.logits)
tensor([[ 0.0082, -0.2307],
        [-0.1008, -0.4061]], grad_fn=<AddmmBackward0>)
```
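In practice you rarely need to build the mask by hand: if you let the tokenizer do the padding, it returns a matching `attention_mask` that you can pass straight to the model. A small sketch with placeholder sentences:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

>>> # padding=True pads the shorter sequence and marks the padded positions with 0
>>> inputs = tokenizer(["hello world", "hello"], padding=True, return_tensors="pt")
>>> print(inputs["attention_mask"])
tensor([[1, 1, 1, 1],
        [1, 1, 1, 0]])
>>> output = model(**inputs)
```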

🤗 Transformers doesn't automatically create an `attention_mask` to mask a padding token if one is provided because:

* Some models don't have a padding token.
* For some use cases, users want the model to attend to the padding tokens.

## ValueError: Unrecognized configuration class XYZ for this kind of AutoModel[[valueerror-unrecognized-configuration-class-xyz-for-this-kind-of-automodel]]

Generally, we recommend using the [`AutoModel`] class to load pretrained instances of models. This class can automatically infer and load the correct architecture from a given checkpoint based on its configuration. If you see this ValueError when loading a model from a checkpoint, it means the Auto class couldn't find a mapping from the configuration in the given checkpoint to the kind of model you are trying to load. Most commonly, this happens when a checkpoint doesn't support a given task. For instance, the following example raises an error because there is no GPT2 model for question answering:

```py
>>> from transformers import AutoProcessor, AutoModelForQuestionAnswering

>>> processor = AutoProcessor.from_pretrained("openai-community/gpt2-medium")
>>> model = AutoModelForQuestionAnswering.from_pretrained("openai-community/gpt2-medium")
ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, ...
```
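One way out of this particular error, sketched below, is to pick an auto class for a task the checkpoint does support (GPT-2 is a causal language model, so [`AutoModelForCausalLM`] can load it), or to switch to a checkpoint that was actually trained for question answering:

```py
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> # GPT-2 has a mapping for causal language modeling, so this loads without error
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-medium")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-medium")
```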