
Issue serving a model trained with the provided training code #28

Closed
lincior opened this issue Mar 25, 2024 · 14 comments

lincior commented Mar 25, 2024

I'm trying to run inference on a custom model trained with the provided code, but there seems to be a problem building the model:

self.model = self._build_model(self.cfg).to(self.device)

def _build_model(self, cfg):
    from openlrm.models import model_dict
    hf_model_cls = wrap_model_hub(model_dict[self.EXP_TYPE])
    model = hf_model_cls.from_pretrained(cfg.model_name)
    return model

(venv) root@bc700a1d6a6c:/workspace/OpenLRM# python -m openlrm.launch infer.lrm --infer=configs/infer-b.yaml model_name=exps/checkpoints/lrm-objaverse/overfitting-test/001000 image_input=test.png export_video=true export_mesh=true
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
config.json not found in /workspace/OpenLRM/exps/checkpoints/lrm-objaverse/overfitting-test/001000
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/OpenLRM/openlrm/launch.py", line 36, in <module>
    main()
  File "/workspace/OpenLRM/openlrm/launch.py", line 31, in main
    with RunnerClass() as runner:
  File "/workspace/OpenLRM/openlrm/runners/infer/lrm.py", line 121, in __init__
    self.model = self._build_model(self.cfg).to(self.device)
  File "/workspace/OpenLRM/openlrm/runners/infer/lrm.py", line 126, in _build_model
    model = hf_model_cls.from_pretrained(cfg.model_name)
  File "/workspace/OpenLRM/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/workspace/OpenLRM/venv/lib/python3.10/site-packages/huggingface_hub/hub_mixin.py", line 277, in from_pretrained
    instance = cls._from_pretrained(
  File "/workspace/OpenLRM/venv/lib/python3.10/site-packages/huggingface_hub/hub_mixin.py", line 485, in _from_pretrained
    model = cls(**model_kwargs)
TypeError: wrap_model_hub.<locals>.HfModel.__init__() missing 1 required positional argument: 'config'

and the folder that is passed as the model_name argument looks like this:

exps
|-- checkpoints
|   `-- lrm-objaverse
|       `-- overfitting-test
|           `-- 001000
|               |-- custom_checkpoint_0.pkl
|               |-- model.safetensors
|               |-- optimizer.bin
|               `-- random_states_0.pkl

which contains a file named model.safetensors, as required by huggingface_hub when initialising from a path.

From some tests, it seems that hf_model_cls.from_pretrained needs the "model" section of configs/train-sample.yaml passed as a dictionary:

model:
    camera_embed_dim: 1024
    rendering_samples_per_ray: 96
    transformer_dim: 512
    transformer_layers: 12
    transformer_heads: 8
    triplane_low_res: 32
    triplane_high_res: 64
    triplane_dim: 32
    encoder_type: dinov2
    encoder_model_name: dinov2_vits14_reg
    encoder_feat_dim: 384
    encoder_freeze: false
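
Concretely, here is a rough sketch of the workaround I tried (my own code, not the repo's; I load the training config with OmegaConf, which OpenLRM already uses, and the config path is hard-coded just for illustration):

def _build_model(self, cfg):
    from omegaconf import OmegaConf
    from openlrm.models import model_dict

    hf_model_cls = wrap_model_hub(model_dict[self.EXP_TYPE])
    # Workaround sketch: pass the "model" section of the training config
    # through from_pretrained; per the traceback, huggingface_hub forwards
    # extra kwargs to cls(**model_kwargs), so this reaches the missing
    # `config` argument of HfModel.__init__.
    train_cfg = OmegaConf.load("configs/train-sample.yaml")
    model = hf_model_cls.from_pretrained(
        cfg.model_name,
        config=OmegaConf.to_container(train_cfg.model),
    )
    return model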

But even after passing this as a dictionary, the code breaks a bit further on:

(venv) root@bc700a1d6a6c:/workspace/OpenLRM# python -m openlrm.launch infer.lrm --infer=configs/infer-b.yaml model_name=exps/checkpoints/lrm-objaverse/overfitting-test/001000 image_input=test.png export_video=true export_mesh=true
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
config.json not found in /workspace/OpenLRM/exps/checkpoints/lrm-objaverse/overfitting-test/001000
[2024-03-25 15:34:32,383] openlrm.models.modeling_lrm: [INFO] Using DINOv2 as the encoder
/workspace/OpenLRM/openlrm/models/encoders/dinov2/layers/swiglu_ffn.py:43: UserWarning: xFormers is available (SwiGLU)
  warnings.warn("xFormers is available (SwiGLU)")
/workspace/OpenLRM/openlrm/models/encoders/dinov2/layers/attention.py:27: UserWarning: xFormers is available (Attention)
  warnings.warn("xFormers is available (Attention)")
/workspace/OpenLRM/openlrm/models/encoders/dinov2/layers/block.py:39: UserWarning: xFormers is available (Block)
  warnings.warn("xFormers is available (Block)")
Loading weights from local directory
  0%|                                                                                                                                                                                                                              | 0/1 [00:00<?, ?it/s]/workspace/OpenLRM/openlrm/datasets/cam_utils.py:153: UserWarning: Using torch.cross without specifying the dim arg is deprecated.
Please either pass the dim explicitly or simply use torch.linalg.cross.
The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at ../aten/src/ATen/native/Cross.cpp:63.)
  x_axis = torch.cross(up_world, z_axis)
  0%|                                                                                                                                                                                                                              | 0/1 [00:14<?, ?it/s]
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/OpenLRM/openlrm/launch.py", line 36, in <module>
    main()
  File "/workspace/OpenLRM/openlrm/launch.py", line 32, in main
    runner.run()
  File "/workspace/OpenLRM/openlrm/runners/infer/base_inferrer.py", line 62, in run
    self.infer()
  File "/workspace/OpenLRM/openlrm/runners/infer/lrm.py", line 298, in infer
    self.infer_single(
  File "/workspace/OpenLRM/openlrm/runners/infer/lrm.py", line 258, in infer_single
    mesh = self.infer_mesh(planes, mesh_size=mesh_size, mesh_thres=mesh_thres, dump_mesh_path=dump_mesh_path)
  File "/workspace/OpenLRM/openlrm/runners/infer/lrm.py", line 221, in infer_mesh
    vtx_colors = self.model.synthesizer.forward_points(planes, vtx_tensor)['rgb'].squeeze(0).cpu().numpy()  # (0, 1)
  File "/workspace/OpenLRM/openlrm/models/rendering/synthesizer.py", line 206, in forward_points
    for k in outs[0].keys()
IndexError: list index out of range

Could anyone help here?

SamBahrami commented Mar 26, 2024

I'm dealing with a similar issue right now. It seems like the infer code is pretty much hard-coded to only work with models hosted on Hugging Face.

I am trying to figure out how to run inference with a locally trained model as well, and I'll keep you updated if I get anywhere with it. I think it's possible to load our custom models with the load_model_ function in base_trainer.py, but that seems a bit hacky; hopefully the authors can help us out :)

ZexinHe (Collaborator) commented Mar 26, 2024

Hi,

Please refer to this issue: #24 (comment).
For historical reasons, you may need to convert the trained checkpoint to an HF-compatible model before running inference.
Details are updated here:
https://github.com/3DTopia/OpenLRM?tab=readme-ov-file#inference-on-trained-models

Please take a look and see if there's still any problem.

Best

lincior closed this as completed Mar 28, 2024
@hayoung-jeremy

Hi @ZexinHe, @SamBahrami, I'm wondering what <YOUR_EXACT_TRAINING_CONFIG> refers to.
I've generated checkpoints under OpenLRM/exps/checkpoints/lrm-objaverse/000100,
but I don't know exactly how to run the convert command. What should I put in <YOUR_EXACT_TRAINING_CONFIG> below?
Does it refer to configs/train-sample.yaml?

python scripts/convert_hf.py --config <YOUR_EXACT_TRAINING_CONFIG> convert.global_step=null

Thank you in advance.

@SamBahrami

> Hi @ZexinHe, @SamBahrami, I'm wondering what <YOUR_EXACT_TRAINING_CONFIG> refers to. [...]

For me, the line I used was:

python scripts/convert_hf.py --config ./configs/train-sample.yaml

where train-sample.yaml is the training config I used to train the model. It automatically finds the latest corresponding model in exps/, I think!

hayoung-jeremy commented Apr 18, 2024

Awesome @SamBahrami, thank you so much for your quick and kind response!
I'll try it :)

@hayoung-jeremy

Thanks to you @SamBahrami, I've successfully converted the checkpoints to a Hugging Face-compatible model!
I'm now trying to run inference on my trained model; could you please check that I'm doing it right?
Below is the inference command I'm going to try:

# Example usage
EXPORT_VIDEO=true
EXPORT_MESH=true
INFER_CONFIG="./configs/infer-b.yaml"
MODEL_NAME="./exps/releases/lrm-objaverse/small-dummyrun/step_000100"
IMAGE_INPUT="./assets/sample_input/owl.png"

python -m openlrm.launch infer.lrm --infer $INFER_CONFIG model_name=$MODEL_NAME image_input=$IMAGE_INPUT export_video=$EXPORT_VIDEO export_mesh=$EXPORT_MESH

@hayoung-jeremy

Oh, when I tried it, it gave me a list index error:

root@b5f5ee77bf34:~/OpenLRM# # Example usage
EXPORT_VIDEO=true
EXPORT_MESH=true
INFER_CONFIG="./configs/infer-b.yaml"
MODEL_NAME="./exps/releases/lrm-objaverse/small-dummyrun/step_000100"
IMAGE_INPUT="./assets/sample_input/owl.png"

python -m openlrm.launch infer.lrm --infer $INFER_CONFIG model_name=$MODEL_NAME image_input=$IMAGE_INPUT export_video=$EXPORT_VIDEO export_mesh=$EXPORT_MESH
[2024-04-18 02:29:51,344] openlrm.models.modeling_lrm: [INFO] Using DINOv2 as the encoder
/root/OpenLRM/openlrm/models/encoders/dinov2/layers/swiglu_ffn.py:43: UserWarning: xFormers is available (SwiGLU)
  warnings.warn("xFormers is available (SwiGLU)")
/root/OpenLRM/openlrm/models/encoders/dinov2/layers/attention.py:27: UserWarning: xFormers is available (Attention)
  warnings.warn("xFormers is available (Attention)")
/root/OpenLRM/openlrm/models/encoders/dinov2/layers/block.py:39: UserWarning: xFormers is available (Block)
  warnings.warn("xFormers is available (Block)")
Loading weights from local directory
  0%|                                                                                                                                                                 | 0/1 [00:00<?, ?it/s]/root/OpenLRM/openlrm/datasets/cam_utils.py:153: UserWarning: Using torch.cross without specifying the dim arg is deprecated.
Please either pass the dim explicitly or simply use torch.linalg.cross.
The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at ../aten/src/ATen/native/Cross.cpp:63.)
  x_axis = torch.cross(up_world, z_axis)
  0%|                                                                                                                                                                 | 0/1 [00:14<?, ?it/s]Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/root/OpenLRM/openlrm/launch.py", line 36, in <module>
    main()
  File "/root/OpenLRM/openlrm/launch.py", line 32, in main
    runner.run()
  File "/root/OpenLRM/openlrm/runners/infer/base_inferrer.py", line 62, in run
    self.infer()
  File "/root/OpenLRM/openlrm/runners/infer/lrm.py", line 284, in infer
    self.infer_single(
  File "/root/OpenLRM/openlrm/runners/infer/lrm.py", line 244, in infer_single
    mesh = self.infer_mesh(planes, mesh_size=mesh_size, mesh_thres=mesh_thres, dump_mesh_path=dump_mesh_path)
  File "/root/OpenLRM/openlrm/runners/infer/lrm.py", line 207, in infer_mesh
    vtx_colors = self.model.synthesizer.forward_points(planes, vtx_tensor)['rgb'].squeeze(0).cpu().numpy()  # (0, 1)
  File "/root/OpenLRM/openlrm/models/rendering/synthesizer.py", line 206, in forward_points
    for k in outs[0].keys()
IndexError: list index out of range

@SamBahrami

Not sure, I did it the same way. Try ./configs/infer-s.yaml, that's what I used.

Perhaps try using a pretrained model for inference and see if that works, before trying your own:

python -m openlrm.launch infer.lrm --infer "./configs/infer-s.yaml" model_name="zxhezexin/openlrm-obj-small-1.1" image_input="./assets/sample_input/owl.png" export_video=true export_mesh=true

@hayoung-jeremy

Sure, thanks!

@hayoung-jeremy

@SamBahrami, could it be due to a lack of training data?
I rendered and trained with 100 GLB files; is that too few?
I'm new to AI and don't know what an appropriate number is. 😢

@SamBahrami

No, it should not give you an error even if you don't have enough training data.

@hayoung-jeremy

Thank you for checking, @SamBahrami.
The inference command I've run is as follows:

EXPORT_VIDEO=true
EXPORT_MESH=true
INFER_CONFIG="./configs/infer-l.yaml"
MODEL_NAME="./exps/releases/lrm-objaverse/small-dummyrun/step_000100"
IMAGE_INPUT="./assets/sample_input/test.png"

python -m openlrm.launch infer.lrm --infer $INFER_CONFIG model_name=$MODEL_NAME image_input=$IMAGE_INPUT export_video=$EXPORT_VIDEO export_mesh=$EXPORT_MESH

Also, I modified the infer-l.yaml config to fit the sigma values of my model as follows:

source_size: 448
source_cam_dist: 2.0
render_size: 384
render_views: 160
render_fps: 40
frame_size: 2
mesh_size: 384
mesh_thres: 0.28 # ONLY MODIFIED THIS VALUE

This is because the output sigma values are all within the range 0.2 to 0.3.
I added a print call to see the sigma values in the infer_mesh function of openlrm/runners/infer/lrm.py:

    def infer_mesh(self, planes: torch.Tensor, mesh_size: int, mesh_thres: float, dump_mesh_path: str):
        grid_out = self.model.synthesizer.forward_grid(
            planes=planes,
            grid_size=mesh_size,
        )
        print("Sigma values:", grid_out['sigma']) # ADDED THIS LINE
        
        vtx, faces = mcubes.marching_cubes(grid_out['sigma'].squeeze(0).squeeze(-1).cpu().numpy(), mesh_thres)
        vtx = vtx / (mesh_size - 1) * 2 - 1

        vtx_tensor = torch.tensor(vtx, dtype=torch.float32, device=self.device).unsqueeze(0)
        vtx_colors = self.model.synthesizer.forward_points(planes, vtx_tensor)['rgb'].squeeze(0).cpu().numpy()  # (0, 1)
        vtx_colors = (vtx_colors * 255).astype(np.uint8)
        
        mesh = trimesh.Trimesh(vertices=vtx, faces=faces, vertex_colors=vtx_colors)

        # dump
        os.makedirs(os.path.dirname(dump_mesh_path), exist_ok=True)
        mesh.export(dump_mesh_path)
...
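
While I was in there, I also noticed that if mcubes.marching_cubes finds no surface at the given mesh_thres, vtx comes back empty, forward_points then loops over an empty tensor, and its outs list stays empty, which would explain the IndexError at outs[0]. A guard like this (my own sketch, not part of the repo) would make that failure explicit:

    vtx, faces = mcubes.marching_cubes(grid_out['sigma'].squeeze(0).squeeze(-1).cpu().numpy(), mesh_thres)
    # Hypothetical guard (my addition): fail loudly when marching cubes
    # extracts no vertices, instead of crashing later inside forward_points.
    if len(vtx) == 0:
        sigma = grid_out['sigma']
        raise ValueError(
            f"No surface found at mesh_thres={mesh_thres}; "
            f"sigma range is [{sigma.min().item():.4f}, {sigma.max().item():.4f}]."
        )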

And here is the output log of the sigma values:

root@b5f5ee77bf34:~/OpenLRM# EXPORT_VIDEO=true
EXPORT_MESH=true
INFER_CONFIG="./configs/infer-l.yaml"
MODEL_NAME="./exps/releases/lrm-objaverse/small-dummyrun/step_000100"
IMAGE_INPUT="./assets/sample_input/test.png"

python -m openlrm.launch infer.lrm --infer $INFER_CONFIG model_name=$MODEL_NAME image_input=$IMAGE_INPUT export_video=$EXPORT_VIDEO export_mesh=$EXPORT_MESH
[2024-04-18 04:49:36,099] openlrm.models.modeling_lrm: [INFO] Using DINOv2 as the encoder
/root/OpenLRM/openlrm/models/encoders/dinov2/layers/swiglu_ffn.py:43: UserWarning: xFormers is available (SwiGLU)
  warnings.warn("xFormers is available (SwiGLU)")
/root/OpenLRM/openlrm/models/encoders/dinov2/layers/attention.py:27: UserWarning: xFormers is available (Attention)
  warnings.warn("xFormers is available (Attention)")
/root/OpenLRM/openlrm/models/encoders/dinov2/layers/block.py:39: UserWarning: xFormers is available (Block)
  warnings.warn("xFormers is available (Block)")
Loading weights from local directory
  0%|                                                                                                                                                                 | 0/1 [00:00<?, ?it/s]/root/OpenLRM/openlrm/datasets/cam_utils.py:153: UserWarning: Using torch.cross without specifying the dim arg is deprecated.
Please either pass the dim explicitly or simply use torch.linalg.cross.
The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at ../aten/src/ATen/native/Cross.cpp:63.)
  x_axis = torch.cross(up_world, z_axis)
Sigma values: tensor([[[[[0.2982],
           [0.2950],
           [0.2925],
           ...,
           [0.2924],
           [0.2949],
           [0.2985]],

          ...,

          [[0.2976],
           [0.2939],
           [0.2908],
           ...,
           [0.2962],
           [0.2981],
           [0.3001]]]]], device='cuda:0')
  0%|                                                                                                                                                                 | 0/1 [00:20<?, ?it/s]Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/root/OpenLRM/openlrm/launch.py", line 36, in <module>
    main()
  File "/root/OpenLRM/openlrm/launch.py", line 32, in main
    runner.run()
  File "/root/OpenLRM/openlrm/runners/infer/base_inferrer.py", line 62, in run
    self.infer()
  File "/root/OpenLRM/openlrm/runners/infer/lrm.py", line 285, in infer
    self.infer_single(
  File "/root/OpenLRM/openlrm/runners/infer/lrm.py", line 245, in infer_single
    mesh = self.infer_mesh(planes, mesh_size=mesh_size, mesh_thres=mesh_thres, dump_mesh_path=dump_mesh_path)
  File "/root/OpenLRM/openlrm/runners/infer/lrm.py", line 208, in infer_mesh
    vtx_colors = self.model.synthesizer.forward_points(planes, vtx_tensor)['rgb'].squeeze(0).cpu().numpy()  # (0, 1)
  File "/root/OpenLRM/openlrm/models/rendering/synthesizer.py", line 206, in forward_points
    for k in outs[0].keys()
IndexError: list index out of range

Could you please point me in the right direction? Thanks in advance for your help!

@hayoung-jeremy

Hi @SamBahrami, thanks to you I've generated the checkpoint model and tried inference on my trained model.
But the quality of the resulting inference .ply is not very good.
Could you please check my issue when you have time?
Thanks again, you helped me a lot; I appreciate it :)

JINNMnm commented Apr 21, 2024

> Hi @SamBahrami, thanks to you I've generated the checkpoint model and tried inference on my trained model. But the quality of the resulting inference .ply is not very good. [...]

Hi @SamBahrami, thank you for your comment! I appreciate your help. I'm currently hitting the same "list index out of range" error and I'm not sure how to resolve it. Could you please assist me with this? Thank you in advance!
