Test Cases on Customized Backend #101

Closed
administrator2992 opened this issue Jan 12, 2023 · 1 comment

Comments

@administrator2992
Contributor

administrator2992 commented Jan 12, 2023

Hello, I have issues when running the test cases on a customized backend. I copied the tflitecpu backend as the basis for my customized backend, but since I want to run it on my laptop (in a Linux environment), I edited the shell commands and profiler commands as needed.

MyProfiler:

# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import os
from nn_meter.builder.backends import BaseProfiler

class TFLiteProfiler(BaseProfiler):
    use_gpu = None

    def __init__(self, dst_kernel_path, benchmark_model_path, graph_path='', dst_graph_path='', num_threads=1, num_runs=50, warm_ups=10):
        """
        @params:
        graph_path: graph file path on the host server
        dst_graph_path: graph file path on the device
        dst_kernel_path: destination kernel output file path on the device
        benchmark_model_path: path to the benchmark_model binary
        """
        self._graph_path = graph_path
        self._dst_graph_path = dst_graph_path
        self._dst_kernel_path = dst_kernel_path
        self._benchmark_model_path = benchmark_model_path
        self._num_threads = num_threads
        self._num_runs = num_runs
        self._warm_ups = warm_ups

    def profile(self, graph_path, preserve = False, clean = True, close_xnnpack = False, **kwargs):
        """
        @params:
        preserve: tflite file exists in remote dir. No need to push it again.
        clean: remove tflite file after running.
        """
        model_name = os.path.basename(graph_path)
        remote_graph_path = os.path.join(self._dst_graph_path, model_name)
        kernel_cmd = f'--kernel_path={self._dst_kernel_path}' if self._dst_kernel_path else ''
        close_xnnpack_cmd = f'--use_xnnpack=false' if close_xnnpack else ''

        try:
            res = os.system(f' {self._benchmark_model_path} {kernel_cmd} {close_xnnpack_cmd}' \
                               f' --num_threads={self._num_threads}' \
                               f' --num_runs={self._num_runs}' \
                               f' --warmup_runs={self._warm_ups}' \
                               f' --graph={remote_graph_path}' \
                               f' --enable_op_profiling=true' \
                               f' --use_gpu={"true" if self.use_gpu else "false"}')
        finally:
            if clean:
                os.system(f"rm {remote_graph_path}")

        return res

MyBackend:

# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import os
import shutil
import logging
from nn_meter.builder.backends import BaseBackend
from nn_meter.utils.path import get_filename_without_ext
logging = logging.getLogger("nn-Meter")

class TFLiteBackend(BaseBackend):
    parser_class = None
    profiler_class = None

    def update_configs(self):
        """update the config parameters for TFLite platform
        """
        super().update_configs()
        self.profiler_kwargs.update({
            'dst_graph_path': self.configs['REMOTE_MODEL_DIR'],
            'benchmark_model_path': self.configs['BENCHMARK_MODEL_PATH'],
            'dst_kernel_path': self.configs['KERNEL_PATH']
        })

    def convert_model(self, model_path, save_path, input_shape=None):
        """convert the Keras model instance to ``.tflite`` and return model path
        """
        import tensorflow as tf
        model_name = get_filename_without_ext(model_path)
        model = tf.keras.models.load_model(model_path)
        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        tflite_model = converter.convert()
        converted_model = os.path.join(save_path, model_name + '.tflite')
        with open(converted_model, 'wb') as f:
            f.write(tflite_model)
        shutil.rmtree(model_path)
        return converted_model
    
    def test_connection(self):
        """check the status of backend interface connection
        """
        ...
        logging.keyinfo("hello TFLitex64 backend !")

My backend registration config:

builtin_name: TFLitex64
package_location: /home/nn-meter/backends/tflitex64
class_module: cpu
class_name: TFLiteCPUBackend
defaultConfigFile: /home/nn-meter/backends/tflitex64/backend_tflitex64_config.yaml
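
For reference, once the backend is registered it should be reachable by its builtin_name. A minimal sketch, assuming nn-Meter's builder_config.init and connect_backend interfaces from the builder docs (the workspace path here is hypothetical):

from nn_meter.builder import builder_config
from nn_meter.builder.backends import connect_backend

# hypothetical workspace folder; it must contain the backend's config files
builder_config.init('/home/nn-meter/workspace')

# 'TFLitex64' is the builtin_name from the registration config above
backend = connect_backend(backend_name='TFLitex64')
backend.test_connection()  # should log: hello TFLitex64 backend !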

My backend default config:

REMOTE_MODEL_DIR: /home/nn-meter/models
BENCHMARK_MODEL_PATH: /tensorflow_src/bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model
KERNEL_PATH:

And I used cpu.py from this repo.

My custom backend can be registered on my laptop, but I have an issue when I try to test the fusion rules: I get a profile_error.log with the error 'int' object has no attribute 'splitlines'. So I tried to debug the content above line 27 of cpu.py, because line 27 calls splitlines, and the value it receives is 0 (an integer). I think that is the cause of the 'int' object has no attribute 'splitlines' error, but I don't know what is wrong in my configuration or code.
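
For context on the symptom above: os.system returns the subprocess's exit status as an int (0 on success), not the text the command printed, which is exactly why calling splitlines on the result fails. A minimal demonstration:

import os

# os.system runs the command and returns its exit status as an int;
# the command's stdout goes to the terminal, not to `res`
res = os.system('echo hello')  # prints "hello"
print(type(res), res)          # <class 'int'> 0
# res.splitlines()             # would raise AttributeError: 'int' object has no attribute 'splitlines'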

I also get the warning WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually. when I test the fusion rules. Which configuration does this refer to, and where should I save the configuration file?
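
For context, that warning only means the Keras model was saved without its training configuration (optimizer and loss); latency profiling and TFLite conversion do not need a compiled model. A minimal sketch, assuming model_path as in convert_model above, that avoids the warning:

import tensorflow as tf

# compile=False skips restoring the training configuration, so the
# "No training configuration found" warning is not emitted; inference
# and TFLite conversion behave the same either way
model = tf.keras.models.load_model(model_path, compile=False)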

And I have one more question: does REMOTE_MODEL_DIR have to contain a model when running profile_model?

Thanks

@administrator2992
Contributor Author

I found the solution for my case.

The issue was in my profiler when running the benchmark: I changed from os.system to subprocess.check_output, because os.system cannot return the shell output (it only returns the exit status). This is the code:

import subprocess  # added at the top of the profiler module

res = subprocess.check_output(f' {self._benchmark_model_path} {kernel_cmd} {close_xnnpack_cmd}' \
                               f' --num_threads={self._num_threads}' \
                               f' --num_runs={self._num_runs}' \
                               f' --warmup_runs={self._warm_ups}' \
                               f' --graph={remote_graph_path}' \
                               f' --enable_op_profiling=true' \
                               f' --use_gpu={"true" if self.use_gpu else "false"}', shell=True)

The return value must be decoded to a string as well:

return res.decode('utf-8')
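
Note that subprocess.check_output raises subprocess.CalledProcessError when benchmark_model exits with a nonzero status, so failures still propagate through the existing try/finally. A sketch of catching it for a clearer message (run_benchmark is a hypothetical helper and the RuntimeError wrapping is an addition, not part of the original fix):

import subprocess

def run_benchmark(cmd: str) -> str:
    # cmd is the benchmark_model command string assembled as above
    try:
        raw = subprocess.check_output(cmd, shell=True)
    except subprocess.CalledProcessError as e:
        # e.returncode is the benchmark_model exit status; e.output holds
        # whatever the process wrote to stdout before failing
        raise RuntimeError(f'benchmark_model failed with code {e.returncode}') from e
    return raw.decode('utf-8')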
