
Run in Sipeed Maix Bit #45

Closed
RicardoCostaCMD opened this issue Apr 18, 2021 · 6 comments

Comments

@RicardoCostaCMD

I'm training a detector for 2 classes. Training went fine, but when I run the model on a Sipeed Maix Bit through MaixPy, I always get an error: either the 2006 memory error or others, with kmodel v3/v4, with or without a memory card. What is the correct script to run detection with aXeleRate, and which firmware version should I use?

@RicardoCostaCMD
Author

I realized that there are several parameters to adjust, such as the outputs and the firmware. It may be worth noting (at least in my case, because every model I trained hit problems, since the kmodel output is always larger than 3 MB) that:

Step 1 - Use the minimum firmware together with the latest IDE with kmodel v4 support in MaixPy, or skip the IDE and work directly in the terminal (I couldn't see the output in the terminal, but that may be down to the script; I still need to look into it).
Step 2 - The v4 script in your example scripts works.
Step 3 - But it only works when loading from the SD card (all the models I trained came out larger than 3 MB).
Step 4 - Between tests, always remember to press the reset button.
Step 5 - When training, it is easier to use Roboflow.com to resize and store the images.
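For reference, the steps above boil down to loading the kmodel from the SD card and running the YOLOv2 decoder on each frame. Here is a minimal sketch of such a MaixPy loop; the file path `/sd/model.kmodel`, the thresholds, and the anchor values are assumptions (the anchors must match your aXeleRate training config), and the device-only modules are stubbed so the sketch degrades gracefully off-device:

```python
# Minimal MaixPy YOLOv2 detection loop for a Sipeed Maix Bit (sketch).
# ASSUMPTIONS: kmodel v4 copied to the SD card as /sd/model.kmodel,
# and anchors matching your aXeleRate config (values below are the
# common YOLOv2 defaults, used here purely as a placeholder).
try:
    import sensor, lcd       # camera/display modules, MaixPy firmware only
    import KPU as kpu        # K210 neural accelerator module
    ON_DEVICE = True
except ImportError:
    ON_DEVICE = False        # running off-device: skip the hardware loop

ANCHORS = (0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
           5.47434, 7.88282, 3.52778, 9.77052, 9.16828)

def run_detection():
    lcd.init()
    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)          # 320x240 frames
    sensor.run(1)
    task = kpu.load("/sd/model.kmodel")        # load from SD card, not flash
    # args: task, probability threshold, NMS threshold, anchor count, anchors
    kpu.init_yolo2(task, 0.5, 0.3, 5, ANCHORS)
    while True:
        img = sensor.snapshot()
        objects = kpu.run_yolo2(task, img)
        if objects:
            for obj in objects:
                img.draw_rectangle(obj.rect())
        lcd.display(img)

if ON_DEVICE:
    run_detection()
```

Between tests, pressing the reset button (step 4) matters because `kpu.load` allocates KPU memory that is only reliably reclaimed on reset.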

Great job! I used transfer learning too, and I'm delivering a great project to my college in Brazil.

@AIWintermuteAI
Owner

AIWintermuteAI commented Apr 19, 2021

Use the minimum firmware together with the latest IDE with kmodel v4 support in MaixPy, or skip the IDE and work directly in the terminal (I couldn't see the output in the terminal, but that may be down to the script; I still need to look into it).

Yes. I normally build the firmware myself, with kmodel v4 support, the IDE, and optionally ulab/video. It is about 1.6 MB in size.

The v4 script in your example scripts works.

Yes. aXeleRate uses nncase v0.2.0-beta4, which outputs kmodel v4.

But it only works when loading from the SD card (all the models I trained came out larger than 3 MB).

If you plan to use the MicroPython firmware, you should use the MobileNet2_5, MobileNet5_0, MobileNet7_5, or Tiny YOLO backends. You probably used MobileNet1_0, which is too large to fit in memory if MicroPython is used. MobileNet1_0 can be used if you're writing C code for the K210.
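To see why the backend choice matters, here is a rough back-of-the-envelope sketch. The parameter counts are the ballpark MobileNetV1 figures from the original paper, and the one-byte-per-weight estimate for a quantized kmodel plus the 3 MB budget are both assumptions, not measured values:

```python
# Ballpark MobileNetV1 parameter counts (millions) per width multiplier,
# taken from the MobileNet paper; treat these as rough assumptions.
PARAMS_M = {
    "MobileNet2_5": 0.5,   # alpha = 0.25
    "MobileNet5_0": 1.3,   # alpha = 0.50
    "MobileNet7_5": 2.6,   # alpha = 0.75
    "MobileNet1_0": 4.2,   # alpha = 1.00
}

BUDGET_MB = 3.0  # assumed space left for a model alongside MicroPython


def est_kmodel_mb(params_millions):
    # ~1 byte per weight after int8 quantization, plus ~10% overhead (assumption)
    return params_millions * 1.1


for name, params in PARAMS_M.items():
    size = est_kmodel_mb(params)
    verdict = "fits" if size <= BUDGET_MB else "too large"
    print("%s: ~%.1f MB -> %s" % (name, size, verdict))
```

Under these assumptions only MobileNet1_0 blows past the budget, which matches the "always greater than 3 MB" symptom above.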

Between tests always remember to press the reset button

Yes.

When training, it is easier to use Roboflow.com to resize and store the images.

Not sure why. aXeleRate automatically resizes and even augments the images.
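For anyone curious what the resize step amounts to, here is a minimal nearest-neighbour sketch in plain NumPy. This is only an illustration of the idea, not aXeleRate's actual implementation:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an HxWxC image array."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source col for each output col
    return img[rows[:, None], cols]

# Example: shrink a fake 240x320 RGB frame to a 224x224 network input.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
net_input = resize_nearest(frame, 224, 224)
print(net_input.shape)
```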

@AIWintermuteAI
Owner

AIWintermuteAI commented Apr 19, 2021

Anyway, it should work with the latest version of the MicroPython firmware with kmodel v4 support.
The memory needed to run the model depends on the backend chosen.

Does that answer your question?

@RicardoCostaCMD
Author

Perfectly.

I will change the backend. Thanks!

@arthurkafer

My teammates and I worked on two projects using MobileNet1_0 for two models: one was classification and the other object detection. These were tight fits, and we had to use the maixpy_v0.6.2_41_g02d12688e_minimum_with_kmodel_v4_support custom firmware to make it work.

The object detection one was just a basic test, probably 150+ images, but for the classification one we used almost 2,000 images, and it worked fine.

Salve irmão ("greetings, brother")

@RicardoCostaCMD
Author


HAHAHA, I loved the "Salve irmão"!

I also ran this through the IDE and it worked well here. I used a pre-trained network to improve performance with 430 images; maybe I should grow the image set, or maybe not, since I'm about to deliver this work.

3 participants