Classifier process errored - CPU Architecture #65

Closed
LotusAxt opened this issue Aug 24, 2021 · 46 comments
@LotusAxt

LotusAxt commented Aug 24, 2021

I installed v1.5.8 but the process still fails without any useful error message. :(

{
  "reqId": "b6Wn01s9daui30J6ezBX",
  "level": 2,
  "time": "2021-08-24T07:45:23+00:00",
  "remoteAddr": "",
  "user": "--",
  "app": "recognize",
  "method": "",
  "url": "--",
  "message": "Classifier process errored",
  "userAgent": "--",
  "version": "22.1.0.1",
  "id": "6124ac5f5bb41"
}
{
  "reqId": "b6Wn01s9daui30J6ezBX",
  "level": 2,
  "time": "2021-08-24T07:45:23+00:00",
  "remoteAddr": "",
  "user": "--",
  "app": "recognize",
  "method": "",
  "url": "--",
  "message": "Classifier process error",
  "userAgent": "--",
  "version": "22.1.0.1",
  "id": "6124ac5f5bb0a"
}
@marcelklehr
Member

marcelklehr commented Aug 24, 2021

What's your Nextcloud version and system architecture? Out of curiosity: Did you install via the web UI, via occ or by manually extracting the tarball?

@LotusAxt
Author

My Nextcloud runs in a Debian Buster Docker container on a Synology DS918+, so it should be x64.
I tried both installations: first manually, and when that didn't work I uninstalled the app and did a reinstall from the web UI.

@marcelklehr
Member

ok.

(The fact that you didn't run into #52 when installing from the web UI is interesting...)

@marcelklehr
Member

There should be a warning-level log message before these two log messages, saying something like Classifier process output: ...

@LotusAxt
Author

Yep, also not very verbose :-/

{
  "reqId": "b6Wn01s9daui30J6ezBX",
  "level": 2,
  "time": "2021-08-24T07:45:23+00:00",
  "remoteAddr": "",
  "user": "--",
  "app": "recognize",
  "method": "",
  "url": "--",
  "message": "Classifier process output: ",
  "userAgent": "--",
  "version": "22.1.0.1"
}

@LotusAxt
Author

Speaking of verbose: Is there a verbose logging parameter for the occ command?

@marcelklehr
Member

marcelklehr commented Aug 24, 2021

Is there a verbose logging parameter for the occ command?

Not currently. I agree that it could spit out more information again. The output was reduced by a refactor, but I'll take another look.

"Classifier process output: "

That would mean that the process fails silently :/ Can you try executing the classifier manually?

$ node recognize/src/classifier_imagenet.js path/to/some/image-file.jpg

(Update: Forgot the node binary...)

Thank you for sponsoring me, btw ❤️

@marcelklehr
Member

marcelklehr commented Aug 24, 2021

Mmh, seems correct. Can you try sudo -u http bin/node-v14.17.4-linux-x64 --version?

And for good measure: lscpu

@LotusAxt
Author

LotusAxt commented Aug 24, 2021

Sorry, I deleted my former post as I realized I executed the command on the host machine and not within the Docker container. 😅

Within the container it's:

#: sudo docker exec -i -u www-data nextcloud /var/www/html/apps/recognize/bin/node-v14.17.4-linux-x64 /var/www/html/apps/recognize/src/classifier_imagenet.js /var/www/html/data/Admin/files/Photos/Frog.jpg
#: sudo docker exec -i -u www-data nextcloud /var/www/html/apps/recognize/bin/node-v14.17.4-linux-x64 --version
v14.17.4
#:
#: sudo docker exec -i -u www-data nextcloud lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       39 bits physical, 48 bits virtual
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               92
Model name:          Intel(R) Celeron(R) CPU J3455 @ 1.50GHz
Stepping:            9
CPU MHz:             1501.000
CPU max MHz:         1501.0000
CPU min MHz:         800.0000
BogoMIPS:            2995.24
Virtualization:      VT-x
L1d cache:           24K
L1i cache:           32K
L2 cache:            1024K
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch intel_pt ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust smep erms mpx rdseed smap clflushopt sha_ni xsaveopt xsavec xgetbv1 dtherm ida arat pln pts md_clear arch_capabilities
#:

For completeness the result on the host was:

#: sudo -u http bin/node-v14.17.4-linux-x64 /volume1/Nextcloud/web/apps/recognize/src/classifier_imagenet.js /volume1/Nextcloud/data/Admin/files/Photos/Frog.jpg
Illegal instruction
#: sudo -u http bin/node-v14.17.4-linux-x64 --version
v14.17.4

@marcelklehr
Member

It seems we have run into tensorflow/tfjs#2631

@marcelklehr
Member

Two paths forward from here:

  • I will probably add an option in the settings to force running the models in pure-js mode (slooow)
  • More relevant for you: Install node.js and npm and run rm -rf node_modules && npm install in apps/recognize to have npm build libtensorflow according to your hardware specs.
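For reference, a rough sketch of that second option, assuming recognize lives under /var/www/html/apps/recognize and that node.js and npm are already installed; adjust the paths and the web server user to your setup:

cd /var/www/html/apps/recognize        # path to the recognize app (example)
sudo -u www-data rm -rf node_modules   # drop the bundled prebuilt dependencies
sudo -u www-data npm install           # reinstall, letting npm fetch/build tfjs-node for this machine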

@LotusAxt
Author

So, I installed node.js via nvm and ran the rm -rf node_modules && npm install. The good news: the two error messages in the Nextcloud log are gone. The bad news: it still doesn't work. :(
If I run occ recognize:classify the output is:

Classifying photos of user Admin
Failed to classify images
Classifier process error

{
  "reqId": "KBbgSJd7nbEibjgv1vJ5",
  "level": 2,
  "time": "2021-08-24T15:43:10+00:00",
  "remoteAddr": "",
  "user": "--",
  "app": "recognize",
  "method": "",
  "url": "--",
  "message": "Classifier process output: ",
  "userAgent": "--",
  "version": "22.1.0.1",
  "id": "6125138e97f97"
}

If I run your test command node recognize/src/classifier_imagenet.js path/to/some/image-file.jpg:

#$ /var/www/html/apps/recognize/bin/node-v14.17.4-linux-x64 /var/www/html/apps/recognize/src/classifier_imagenet.js /var/www/html/data/Admin/files/Photos/Frog.jpg
Illegal instruction (core dumped)
#$

I already tried npm rebuild @tensorflow/tfjs-node --build-addon-from-source but that also didn't help.

Any other ideas?

@marcelklehr
Member

I think that's as deep as we're gonna go on this one. I'll publish a new release soon where you can select pure-js operation. That should definitely work, even though it's slower.

@LotusAxt
Author

Okay, thank you!

@spicemint

Now that I have been able to install (Raspbian, NC 21.0.4) following this, I get an error when starting, and nothing in the logs:

sudo -u www-data /usr/bin/php /var/www/nextcloud/occ recognize:classify
Classifying photos of user admin
Failed to classify images
Classifier process error

@marcelklehr
Member

@spicemint Yep, a Raspberry Pi is ARM, which does not work yet. I'm on it.

marcelklehr changed the title from "Classifier process errored" to "Classifier process errored - CPU Architecture" on Aug 25, 2021
@HelloKS

HelloKS commented Aug 29, 2021

I'm using a J5040 (Goldmont Plus architecture), and the same thing happens.
The source of the problem is the prebuilt libraries (libtensorflow.so, libtensorflow_framework.so) in the tfjs-node library, which require specific CPU instructions that some CPUs don't have.

I could try the plain JavaScript option, but I didn't want to because it would be painfully slow anyway. So here's what I did:

  • Follow Optional: Build optimal TensorFlow from source.
  • During ./configure, there will be a Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified step. Enter -march=goldmont.
  • Replace the deps folder in the recognize folder with what you built (a rough sketch of the whole procedure follows below).

Voila!
(screenshot attached)
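A rough sketch of those three steps; the bazel target, the tarball location and the deps path are assumptions based on the tfjs-node "build from source" docs, so adjust them (and -march) to your setup:

git clone https://github.com/tensorflow/tensorflow.git && cd tensorflow
./configure                            # enter e.g. -march=goldmont at the optimization-flags prompt
bazel build --config=opt //tensorflow/tools/lib_package:libtensorflow
# unpack the freshly built libtensorflow over tfjs-node's bundled deps inside recognize
tar -xzf bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz \
    -C /var/www/html/apps/recognize/node_modules/@tensorflow/tfjs-node/deps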

@marcelklehr
Member

This may also be a nice resource for downloading pre-built binaries for some architectures: https://github.com/kaufman-lab/build_tensorflow/releases (whl files are simply zip archives)
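Since a .whl is just a zip archive, getting at the shared libraries is roughly the following (a sketch; the exact paths inside the wheel vary by build):

unzip tensorflow-*.whl -d tf_whl       # unpack the wheel like any zip archive
find tf_whl -name 'libtensorflow*'     # locate the .so files to copy into tfjs-node's deps/lib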

marcelklehr added a commit that referenced this issue Aug 29, 2021
hopefully use most optimal tensorflow variant

see #65
@arch-user-france1

ok.

(The fact that you didn't run into #52 when installing from the web UI is interesting...)

Yes, some versions worked. I also installed the last working version via Nextcloud's installer, and somehow it worked. I then deleted it because of the bug and couldn't reinstall it, so I installed it without the UI.

@arch-user-france1

I'm using a J5040 (Goldmont Plus architecture), and the same thing happens.
The source of the problem is the prebuilt libraries (libtensorflow.so, libtensorflow_framework.so) in the tfjs-node library, which require specific CPU instructions that some CPUs don't have.

I could try the plain JavaScript option, but I didn't want to because it would be painfully slow anyway. So here's what I did:

* Follow [Optional: Build optimal TensorFlow from source](https://github.com/tensorflow/tfjs/tree/master/tfjs-node#optional-build-optimal-tensorflow-from-source).

* During `./configure`, there will be a `Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified` step. Enter `-march=goldmont`.

* Replace the deps folder in the recognize folder with what you built.

Voila!
(screenshot attached)

Does this improve performance? (I just installed it from the package, didn't build it myself, and I don't have to use JavaScript mode.)

@marcelklehr
Member

I'm currently setting up a repo for building various flavors of libtensorflow.
It would be useful to know which kinds people need.

If you'd like your machine to be covered, run the following on your machine and post the output here:

gcc -march=native -Q --help=target | grep march

(cc @jakobroehrl)

@Emporea

Emporea commented Oct 29, 2021

Hey. I am using the Alpine Nextcloud Docker image.
Is it sufficient to do apk add libc6-compat to get it running, besides the tensorflow problem?

Output of gcc:
skylake-avx512

Because currently, when I run recognize:classify, I get:

Classifying photos of user emporea
Failed to classify images
Classifier process error

@jakobroehrl

root@server:# gcc -march=native -Q --help=target | grep march
-march= nehalem
Known valid arguments for -march= option:
root@server:#

@jakobroehrl

This may also be a nice resource for downloading pre-built binaries for some architectures: https://github.com/kaufman-lab/build_tensorflow/releases (whl files are simply zip archives)

How do I use or install them? Thanks

@marcelklehr
Member

I've forked that repository and set up a better pipeline to streamline this: https://github.com/marcelklehr/build_tensorflow/actions

@marcelklehr marcelklehr moved this from Backlog to In progress in Recognize Jan 7, 2022
@YouveGotMeowxy

YouveGotMeowxy commented Mar 30, 2022

I'm getting a log full of these too. Using NC version 23.0.3 and the latest Recognize installed via the Apps page within NC itself.

(screenshot attached)

If I manually run: occ recognize:classify-images

It scrolls through many images found, finally ending with this:

  8073 => '/data/__groupfolders/1/Pets/Tucker/20210321_094827.jpg',
)
Running array (
  0 => '',
  1 => '/config/www/nextcloud/apps/recognize/src/classifier_imagenet.js',
  2 => '-',
)
sh: taskset: not found
Classifier process output: sh: exec: line 1: : Permission denied

Classifier process output: sh: exec: line 1: : Permission denied

Failed to classify images
Classifier process error

I'm using the Nextcloud Docker container, on Win 10 x64, WSL (Ubuntu containers).

@marcelklehr
Member

@YouveGotMeowxy As the title says, most likely your CPU architecture is not supported by the standard tensorflow build. That means you have two options:

  1. Run in JS mode (can be enabled in the admin settings for recognize)
  2. Use a custom build of tensorflow for your architecture; you'll have to compile it from source (This repo may help, but I can't help with the details atm)

@YouveGotMeowxy

YouveGotMeowxy commented Mar 30, 2022

@YouveGotMeowxy As the title says, most likely, your CPU architecture is not supported by the standard tensorflow build.

I'm just running it within an Ubuntu container on WSL (AMD Ryzen processor). Standard Tensorflow doesn't support that?

Also, are there any drawbacks to using JS mode? And if I use JS mode, will manually running recognize via occ still work?

@YouveGotMeowxy

This comment was marked as off-topic.

@marcelklehr
Member

I'm just running it within an Ubuntu container on WSL (AMD Ryzen processor). Standard Tensorflow doesn't support that?

I have no idea whether tensorflow supports your CPU. lscpu will tell you which instructions your CPU supports. Tensorflow needs AVX and possibly AVX512 or something like that; I don't remember off the top of my head. A different reason for the Permission denied error could be that it's actually about permissions. Recognize makes sure that all permissions are set on the node.js binary, but you can try whether executing recognize/bin/node --version works. If it does work, your CPU is to blame; if it does not, the installation failed.
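A quick way to check for AVX (assuming a Linux shell where lscpu is available):

lscpu | grep -qw avx && echo "AVX supported" || echo "no AVX"   # look for the avx flag in the CPU feature list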

Also, are there any drawbacks to using the js mode? And if I use JS mode, will the OCC manual recognize still work?

JS mode is slower, but other than that should work fine.

@kbftech

kbftech commented May 17, 2022

I installed the latest node (18.2.0) and npm 8.9.0, and rebuilt tensorflow from source.
I'm running a VM (Linux KVM) with CPU model passthrough (which supports AVX):

model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
[...]
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 cx16 pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx hypervisor lahf_lm cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid tsc_adjust xsaveopt arat umip md_clear arch_capabilities

Running the manual command seems to work fine:

root@nextcloud:/var/www/html/nextcloud/apps/recognize# sudo -u www-data node /var/www/html/nextcloud/apps/recognize/src/classifier_imagenet.js  /var/www/html/nextcloud/data/WhateverPath/IMG.jpg
{
  className: 'banana',
  probability: 0.9136211276054382,
  rule: { threshold: 0.1, categories: [ 'fruit', 'food' ] }
}
{
  className: 'orange',
  probability: 0.0013075546594336629,
  rule: { label: 'fruit', threshold: 0.1, categories: [ 'food' ] }
}

But running the recognition doesn't (after listing 500 images and getting ready to process them):

Running array (
  0 => '',
  1 => '/var/www/html/nextcloud/apps/recognize/src/classifier_imagenet.js',
  2 => '-',
)
Classifier process output: sh: 1: exec: : Permission denied
Classifier process output: sh: 1: exec: : Permission denied
Failed to classify images
Classifier process error

Not sure what's wrong, as afaik all permissions should be fine.

@marcelklehr
Member

0 => '',

@kbftech You'll need to set the path to a node.js executable in the settings. The setting seems to have been lost.

@kbftech

kbftech commented May 17, 2022

0 => '',

@kbftech You'll need to set the path to a node.js executable in the settings. The setting seems to have been lost.

Seems to work! Thanks :D

Side note:
If you want to, try this: https://nextcloud.kfoster.tech/s/cdroSrwBejb2YDL/preview
If you register for Brave rewards, users like me can tip you. 1 BAT is worth approximately 1 USD, afaik.

EDIT: It's been scanning for 5 hours straight; about 5000 pictures analysed so far.

@kbftech

kbftech commented May 17, 2022

@YouveGotMeowxy
Not sure if you fixed your issue, but you seem to have had the same one as me (the node path is empty). Check @marcelklehr's reply:

0 => '',

@kbftech You'll need to set the path to a node.js executable in the settings. The setting seems to have been lost.

@YouveGotMeowxy

@kbftech

TY for that. I added node v14.19.0, placed its path in the recognize settings (/usr/bin/node) and then ran occ recognize:classify-images, and it at least got me past that original error. Now I see all this though, lol:

Running array (
  0 => '/usr/bin/node',
  1 => '/config/www/nextcloud/apps/recognize/src/classifier_imagenet.js',
  2 => '-',
)
Classifier process output: Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-node/lib/napi-v8/../../deps/lib/libtensorflow.so.2)
    at Object.Module._extensions..node (internal/modules/cjs/loader.js:1144:18)
    at Module.load (internal/modules/cjs/loader.js:950:32)
    at Function.Module._load (internal/modules/cjs/loader.js:790:12)
    at Module.require (internal/modules/cjs/loader.js:974:19)
    at require (internal/modules/cjs/helpers.js:101:18)
    at Object. (/config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-node/dist/index.js:60:16)
    at Module._compile (internal/modules/cjs/loader.js:1085:14)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
    at Module.load (internal/modules/cjs/loader.js:950:32)
    at Function.Module._load (internal/modules/cjs/loader.js:790:12)
  { code: 'ERR_DLOPEN_FAILED' }
Classifier process output: Trying js-only mode
Classifier process output: Error: Backend name 'tensorflow' not found in registry
    at Engine. (/config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:4238:35)
    at step (/config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:123:27)
    at Object.next (/config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:72:53)
    at /config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:65:71
    at new Promise ()
    at __awaiter (/config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:51:12)
    at Engine.setBackend (/config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:4232:16)
    at Object.setBackend (/config/www/nextcloud/apps/recognize/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:10577:19)
    at Object. (/config/www/nextcloud/apps/recognize/src/classifier_imagenet.js:175:4)
    at Module._compile (internal/modules/cjs/loader.js:1085:14)
(the same ERR_DLOPEN_FAILED error, "Trying js-only mode" line and "Backend name 'tensorflow' not found" error repeat several more times)

However, when I check the "Enable WASM mode" box and run it again, after a while, it shows this (does this look right?):

(screenshot attached)

@kbftech

kbftech commented May 18, 2022

@YouveGotMeowxy Wild guess: Your node_modules have not been generated correctly? Try this "again":
rm -rf node_modules && npm install

FYI, I updated node and npm to latest before re-installing dependencies. I had a few errors saying my node version wasn't 14.whatever but it works fine nonetheless.

@YouveGotMeowxy

YouveGotMeowxy commented May 18, 2022

@kbftech I appreciate all the help. 🌝

Maybe I'm installing it wrong? I'm running it in a container that's running Alpine, so I just do an apk add nodejs and that's all.

I was under the impression that would install and set everything up on its own?

@marcelklehr
Member

@YouveGotMeowxy It seems that you have run into a bug with v2.0.0: #207. For now, enabling WASM mode should do the trick, even though it's slower. I will release a new version shortly that will work with native mode again.

@marcelklehr
Member

node.js should not be 500mb, afaik

@YouveGotMeowxy

node.js should not be 500mb, afaik

OK, you're right, I misread the startup log about the size. It's not 500 mb. Sorry about that. :)

@YouveGotMeowxy

@marcelklehr Just for future reference, is [object Object] the only result shown when it's successful?

@marcelklehr
Member

I'm not sure occ has an interactive shell, so I'm confused about what program you are running. The occ command used with recognize is located in the root of your nextcloud installation and can be run using

php occ recognize:classify-images
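For example, run from the Nextcloud root as the web server user (the path and user are examples; adjust to your setup):

cd /var/www/nextcloud                          # your Nextcloud installation directory
sudo -u www-data php occ recognize:classify-images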

@YouveGotMeowxy

YouveGotMeowxy commented May 18, 2022

I'm not sure occ has an interactive shell, so I'm confused about what program you are running. The occ command used with recognize is located in the root of your nextcloud installation and can be run using

I'm using the OCC app that you can install from the Apps page within NC. It lets you run OCC right from within NC.

@marcelklehr
Member

Then [object Object] means there's a bug in the OCC app.

@YouveGotMeowxy

@marcelklehr ok.

Side note: is it normal for the 'Checking CPU' waiting indicator on the settings page to never stop "waiting"? I let it run all night and woke up to see it still waiting.

@YouveGotMeowxy

YouveGotMeowxy commented May 19, 2022

Appears to be working with the latest update, and my own node.js added in the path field.

Update:
Also just tried without using my own node.js, and it seems to be working. :)

Recognize automation moved this from In progress to Done Oct 21, 2022