
Android support #12

Closed
Young-Flash opened this issue Jul 18, 2023 · 56 comments
Labels
enhancement New feature or request

Comments

@Young-Flash

Are there any plans to support Android devices in the future?

@mazzzystar
Owner

Hi, I'm not good at Android dev, and my energy is also devoted to other projects, so I'd be glad if someone could help : )

@mazzzystar mazzzystar added the enhancement New feature or request label Jul 19, 2023
@sunlin-xiaonai

I will give it a try; right now I'm working on some model-related things.

@greyovo

greyovo commented Jul 31, 2023

I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile.

Running results on a Xiaomi 12S (Snapdragon 8+ Gen 1 SoC):

  • Inference speed: around 10 seconds for 100 images and 250 seconds for 2,000 images (much slower than on iOS). All photos are 400×400px JPEGs.

  • The exported ImageEncoder and TextEncoder are also very large (335 MB and 242 MB, respectively).

Things I have tried so far:

  1. Quantizing the model, but it failed because I am not familiar with PyTorch's quantization operations.
  2. Converting the model to run via NNAPI, but no luck.

I observed the device status in profile mode and found that CPU usage is only about 45%. I am not sure whether the NPU was used during inference.

I'm still evaluating whether it's worth continuing; any advice would be appreciated 😄
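A minimal sketch of what this export path can look like, assuming OpenAI's clip package (the wrapper classes and file names are illustrative, not the exact demo code):

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from torch.utils.mobile_optimizer import optimize_for_mobile

model, _ = clip.load("ViT-B/32", device="cpu")
model.eval()

class ImageEncoder(torch.nn.Module):
    """Wraps CLIP so that only encode_image is traced."""
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, image):
        return self.m.encode_image(image)

class TextEncoder(torch.nn.Module):
    """Wraps CLIP so that only encode_text is traced."""
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, tokens):
        return self.m.encode_text(tokens)

# Trace each encoder separately with dummy inputs of the right shape.
image_encoder = torch.jit.trace(ImageEncoder(model), torch.randn(1, 3, 224, 224))
text_encoder = torch.jit.trace(TextEncoder(model), clip.tokenize(["a photo"]))

# Save in the lite-interpreter format that PyTorch Mobile loads on Android.
optimize_for_mobile(image_encoder)._save_for_lite_interpreter("image_encoder.ptl")
optimize_for_mobile(text_encoder)._save_for_lite_interpreter("text_encoder.ptl")
```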

Refs:

@x97425

x97425 commented Jul 31, 2023

I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile. […]

I've sent you an email; please check it when you have time. Respect!

@williamlee1982

I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile. […]

Looking forward to your work; I really want this on Android.

@Young-Flash
Author

I split the model into ImageEncoder and TextEncoder, like the author did, and successfully ran the demo on Android using PyTorch Mobile. […]

Maybe you could briefly introduce the required technology stack and technical route so that others can participate 😄

@mazzzystar
Owner

Inference speed: around 10 seconds for 100 images and 250 seconds for 2,000 images (much slower than on iOS). All photos are 400×400px JPEGs.

@greyovo for the ViT-B-32 CLIP model, the required image size is 224x224, so doing some preprocessing might make the indexing faster
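For context, CLIP's preprocess() is essentially the following torchvision pipeline (the normalization constants are CLIP's published ones); pre-resizing photos toward 224px is the step being suggested here:

```python
from PIL import Image
import torchvision.transforms as T

# The same steps CLIP's preprocess() applies: resize, center-crop, normalize.
preprocess = T.Compose([
    T.Resize(224, interpolation=T.InterpolationMode.BICUBIC),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=(0.48145466, 0.4578275, 0.40821073),
                std=(0.26862954, 0.26130258, 0.27577711)),
])

image_tensor = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
# image_tensor has shape (1, 3, 224, 224), ready for the image encoder.
```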

@greyovo

greyovo commented Aug 3, 2023

@greyovo for the ViT-B-32 CLIP model, the required image size is 224x224, so doing some preprocessing might make the indexing faster

@mazzzystar Thanks for the advice. I actually did preprocessing (i.e., resizing to 224px, center-cropping, normalizing, etc., like CLIP's preprocess() function does) before encoding images. Since I tried with 3000×2000px images and got the same result, I don't think that's the main problem :(

@greyovo

greyovo commented Aug 3, 2023

Maybe you could briefly introduce the required technology stack and technical route so that others can participate 😄

@Young-Flash In fact, most of the work relies on the Colab script provided by the author @mazzzystar; I just exported the two encoders as PyTorch JIT models instead of Core ML.

Possible improvements in my opinion:

  1. Converting the model for NNAPI execution
  2. Pruning or quantizing the model, which requires a deep understanding of the model structure and may require retraining
  3. Distilling knowledge from the model, which requires familiarity with deep learning techniques and also needs retraining
  4. Looking for other multimodal models similar to CLIP, but I searched around and couldn't find anything more efficient and smaller than ViT-B/32 :(

Perhaps the easiest way is to convert the model for NNAPI in order to speed up the encoders. I tried following PyTorch's official tutorial but failed; it seems to require a PC with an ARM64 processor. I'm not sure if I missed something.
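For reference, the conversion step from that (prototype) tutorial looks roughly like this; the file names are illustrative, and this is the step that reportedly fails on non-ARM64 machines:

```python
import torch
from torch.backends._nnapi.prepare import convert_model_to_nnapi

# Load a traced float model (a regular TorchScript file, not the .ptl one).
model = torch.jit.load("image_encoder.pt").eval()

# NNAPI wants a channels-last input with the nnapi_nhwc flag set.
example = torch.zeros(1, 3, 224, 224).contiguous(memory_format=torch.channels_last)
example.nnapi_nhwc = True

nnapi_model = convert_model_to_nnapi(model, example)
nnapi_model._save_for_lite_interpreter("image_encoder_nnapi.ptl")
```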

@mazzzystar
Owner

Pruning or quantizing the model, which requires a deep understanding of the model structure and may require retraining

I think you may be able to do quantization when exporting the PyTorch model. pytorch/pytorch#76726
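A sketch of the lowest-effort variant of that suggestion, post-training dynamic quantization applied before tracing (no guarantee it traces cleanly for CLIP; that is exactly where the failures reported above occurred):

```python
import torch
import clip

model, _ = clip.load("ViT-B/32", device="cpu")

class TextEncoder(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, tokens):
        return self.m.encode_text(tokens)

# Dynamic quantization: Linear weights stored as int8 and dequantized on the
# fly; needs no calibration data.
quantized = torch.quantization.quantize_dynamic(
    TextEncoder(model).eval(), {torch.nn.Linear}, dtype=torch.qint8
)
traced = torch.jit.trace(quantized, clip.tokenize(["a photo"]))
traced.save("text_encoder_int8.pt")
```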

@greyovo

greyovo commented Aug 3, 2023

I think you may be able to do quantization when exporting the PyTorch model. pytorch/pytorch#76726

@mazzzystar Yes, I also tried quantization but ran into several problems I couldn't solve, so the quantization failed, not to mention the NNAPI conversion (which needs a quantized model). I may share the Jupyter notebook I used later to see if anyone can help.

@greyovo

greyovo commented Aug 3, 2023

An interesting thing is that I found some efforts on distilling the CLIP model:

At least they prove that knowledge distillation may be a feasible direction, though it would take notable effort.

@mazzzystar
Owner

@greyovo I'm not sure distillation is a good idea compared to quantization. Even if you don't quantize the model, using FP16 would significantly speed it up.

I might try to export a lightweight Android version if I have time in the future : ) Is ONNX okay?
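If the encoders end up in ONNX, one low-effort way to get an FP16 variant is onnxconverter-common (file names illustrative; whether FP16 actually runs faster on a given mobile CPU still needs measuring):

```python
import onnx
from onnxconverter_common import float16  # pip install onnxconverter-common

model = onnx.load("clip-image-encoder.onnx")
model_fp16 = float16.convert_float_to_float16(model)  # roughly halves the size
onnx.save(model_fp16, "clip-image-encoder-fp16.onnx")
```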

@greyovo

greyovo commented Aug 3, 2023

I might try to export a lightweight Android version if I have time in the future : ) Is ONNX okay?

@mazzzystar You are right! There's another option - ONNX. It seems they have complete docs and demos. So yes, it's worth a try! Thanks :)

@stakancheck

@mazzzystar Hi, I didn't quite understand where things stand with the Android app. I'd be happy to take up this project in my spare time and rewrite part of the logic in Kotlin (KMP), but I'd need the help of an AI specialist. Are there any developments in this direction?

@greyovo

greyovo commented Aug 4, 2023

@stakancheck The original ViT-B/32 model is too large for Android devices (see the discussion above), and hence the speed of encoding images into embeddings is much slower than on iOS. So we are working on the model to see whether I or @mazzzystar can export a lightweight version that speeds up execution and reduces the size.

By the way, are you familiar with Kotlin or Jetpack Compose? I'm a beginner in Android development (I used Flutter before), but I would love to help build the app :)

@Young-Flash
Author

I just exported the two encoders as PyTorch JIT models instead of Core ML.

@greyovo could you please share your code (including how to run it on your Xiaomi 12S)?

BTW, how about using NCNN to compile and deploy on Android?

@greyovo

greyovo commented Aug 10, 2023

I have made some progress on quantization with ONNX. Please check my repo CLIP-android-demo for details :)
@Young-Flash @mazzzystar
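The dynamic-quantization path onnxruntime offers for this is roughly the following (file names illustrative; see the CLIP-android-demo repo for the actual code):

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Post-training dynamic quantization: weights become 8-bit,
# no calibration set needed.
quantize_dynamic(
    model_input="clip-image-encoder.onnx",
    model_output="clip-image-encoder-quant-int8.onnx",
    weight_type=QuantType.QUInt8,
)
```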

@greyovo

greyovo commented Aug 10, 2023

BTW, how about using NCNN to compile and deploy on Android?

Deploying with NCNN requires solid C++ and JNI development skills, which I am not familiar with... Sorry.

@Young-Flash
Author

Deploying with NCNN requires solid C++ and JNI development skills, which I am not familiar with... Sorry.

Haha, it doesn't matter at all, it was just a suggestion. You've done a great job!

@mazzzystar
Owner

@greyovo Thanks for your great work, would love to see an Android app : )

@Young-Flash
Author

I verified the ONNX quantized model (code is here); the result on my local machine is as follows:

model                                                                  | result
CLIP                                                                   | [[6.1091479e-02 9.3267566e-01 5.3717378e-03 8.6108845e-04]]
clip-image-encoder.onnx & clip-text-encoder.onnx                       | [[6.1091259e-02 9.3267584e-01 5.3716768e-03 8.6109847e-04]]
clip-image-encoder-quant-int8.onnx & clip-text-encoder-quant-int8.onnx | [[4.703762e-02 9.391219e-01 9.90335e-03 3.93698e-03]]

I think it is good to go.
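A sketch of the kind of check behind this table: encode one image and several candidate captions with both ONNX models, then compare the softmaxed similarities against the original CLIP output (the random inputs below are placeholders for CLIP's real preprocess/tokenize outputs):

```python
import numpy as np
import onnxruntime as ort

img_sess = ort.InferenceSession("clip-image-encoder-quant-int8.onnx")
txt_sess = ort.InferenceSession("clip-text-encoder-quant-int8.onnx")

# Placeholders: in the real check these come from CLIP's preprocess/tokenize.
image_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
token_ids = np.zeros((4, 77), dtype=np.int64)

img_emb = img_sess.run(None, {img_sess.get_inputs()[0].name: image_tensor})[0]
txt_emb = txt_sess.run(None, {txt_sess.get_inputs()[0].name: token_ids})[0]

# Normalize, scale by CLIP's logit scale (~100), softmax over the captions.
img_emb /= np.linalg.norm(img_emb, axis=-1, keepdims=True)
txt_emb /= np.linalg.norm(txt_emb, axis=-1, keepdims=True)
logits = 100.0 * img_emb @ txt_emb.T
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(probs)  # compare against the CLIP row in the table above
```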

@Young-Flash
Author

CLIP doesn't support Chinese well (see here). I tested the same image with Chinese input (["老虎", "猫", "狗", "熊"]) and English input (["a tiger", "a cat", "a dog", "a bear"]); the logits are [[0.09097634 0.18403262 0.24364232 0.4813488 ]] and [[0.04703762 0.9391219 0.00990335 0.00393698]] respectively, so the Chinese result isn't ideal.

@mazzzystar How do you deal with Chinese text input in Queryable?

I tried Chinese-CLIP and an ONNX quantized model today and got ideal results (code is here); the results are as follows:

model                                                                        | input   | result
Chinese-CLIP                                                                 | Chinese | [[1.9532440e-03 9.9525285e-01 2.2442457e-03 5.4962368e-04]]
Chinese-CLIP                                                                 | English | [[2.5376787e-03 9.9683857e-01 4.3544930e-04 1.8830669e-04]]
clip-cn-image-encoder.onnx & clip-cn-text-encoder.onnx                       | Chinese | [[1.9535627e-03 9.9525201e-01 2.2446462e-03 5.4973643e-04]]
clip-cn-image-encoder.onnx & clip-cn-text-encoder.onnx                       | English | [[2.5380836e-03 9.9683797e-01 4.3553708e-04 1.8835040e-04]]
clip-cn-image-encoder-quant-int8.onnx & clip-cn-text-encoder-quant-int8.onnx | Chinese | [[0.00884504 0.98652565 0.00179121 0.00283814]]
clip-cn-image-encoder-quant-int8.onnx & clip-cn-text-encoder-quant-int8.onnx | English | [[0.02240802 0.97132427 0.00435637 0.00191139]]
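For anyone repeating this: Chinese-CLIP ships a cn_clip package whose models expose encode_image/encode_text like OpenAI CLIP, so the same wrapper-and-export trick applies. A hedged sketch (model name and file name illustrative):

```python
import torch
from cn_clip.clip import load_from_name  # pip install cn_clip

model, preprocess = load_from_name("ViT-B-16", device="cpu")
model.eval()

class ImageEncoder(torch.nn.Module):
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, image):
        return self.m.encode_image(image)

# Export the image encoder; the text encoder (BERT) is exported the same way.
torch.onnx.export(
    ImageEncoder(model),
    torch.randn(1, 3, 224, 224),
    "clip-cn-image-encoder.onnx",
    input_names=["image"],
    output_names=["embedding"],
    opset_version=14,
)
```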

@mazzzystar
Owner

@Young-Flash
Queryable does not support Chinese. I trained a Chinese text encoder myself; the method is similar to Chinese-CLIP, you can refer to this article. However, the training data is not open-sourced under MIT, so I could not provide the Chinese version of the model weights, but you can convert the open-source Chinese text encoder above as needed.

@Young-Flash
Author

Young-Flash commented Aug 16, 2023

Queryable does not support Chinese. I trained a Chinese text encoder myself; the method is similar to Chinese-CLIP […]

I see. I found a demo that uses Chinese to query; I thought it was translating Chinese into English, but I didn't find the relevant code here, so I was puzzled.

Chinese-CLIP is a pre-trained model with an MIT license. The above clip-cn-image-encoder-quant-int8.onnx and clip-cn-text-encoder-quant-int8.onnx take 84.93 MB and 97.89 MB, while @greyovo's clip-image-encoder-quant-int8.onnx and clip-text-encoder-quant-int8.onnx take 91.2 MB and 61.3 MB. I think Chinese-CLIP after quantization is acceptable, so maybe we could use it to replace CLIP. What do you think?

@mazzzystar
Owner

@Young-Flash That's exactly what I mean. Note that Chinese-CLIP's architecture (BERT) is a little different from ViT-B-32, so you may need to adjust the Jupyter notebook accordingly.

@Young-Flash
Author

Yeah, I have made it here.

@williamlee1982

Guys, any update on the Android version? I really want it.

@Young-Flash
Author

I am blocked by a weird onnxruntime issue: the text encoder running on Android gets the same inference result as Python, while the ViT image encoder doesn't.

@greyovo

greyovo commented Sep 8, 2023

@Young-Flash Same here. 😢 But I found that converting the ONNX format to ORT and using the *.with_runtime_opt.ort version may narrow the result gap a bit. See here and here... Though differences are still observed, the query results are acceptable. (I am using CLIP, not Chinese-CLIP.)

And I also observed that the quantized model exhibits this problem while the original model does not.

By the way, I have done the basic indexing and querying feature, but I am working on UI issues. I might replace the model with Chinese-CLIP in the near future.
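For anyone reproducing the ORT route: the converter is a one-liner, and the resulting *.with_runtime_opt.ort file loads like any other model (file name illustrative):

```python
# First, outside Python:
#   python -m onnxruntime.tools.convert_onnx_models_to_ort clip-image-encoder.onnx
import onnxruntime as ort

sess = ort.InferenceSession("clip-image-encoder.with_runtime_opt.ort")
print([i.name for i in sess.get_inputs()])
```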

@Young-Flash
Author

@Young-Flash Same here. 😢 But I found that converting the ONNX format to ORT and using the *.with_runtime_opt.ort version may narrow the result gap a bit. […]

@greyovo I tried ORT too but gave up when I found the inference results differed. After that I tried the ResNet50 in Chinese-CLIP instead of the ViT and got the same result as Python inference. Maybe the problem lies in an operator the ViT model uses?

Do you plan to share your code? I think Chinese-CLIP is worth a try, since it supports both Chinese and English. Feel free to let me know if there's anything I can help with.

@greyovo

greyovo commented Sep 10, 2023

I tried ORT too but gave up when I found the inference results differed. After that I tried the ResNet50 in Chinese-CLIP instead of the ViT and got the same result as Python inference. Maybe the problem lies in an operator the ViT model uses?

Agreed.

Do you plan to share your code? I think Chinese-CLIP is worth a try, since it supports both Chinese and English.

I will try with Chinese-CLIP. I need to apply for a Software Copyright Certificate (aka 软件著作权) to get it on the app market, and then I'll make it open source.

Feel free to let me know if there's anything I can help with.

Thanks in advance :) @Young-Flash

@zhangjh

zhangjh commented Sep 11, 2023

I've already developed an Android app named smartSearch. You guys can try it:
https://play.google.com/store/apps/details?id=me.zhangjh.smart.search

@zhangjh

zhangjh commented Sep 11, 2023

I got here from my previous GitHub issue, microsoft/onnxruntime#16472, which linked to this repo.

@williamlee1982

I've already developed an Android app named smartSearch. You guys can try it: https://play.google.com/store/apps/details?id=me.zhangjh.smart.search.en

Tried it and it didn't work; it crashed a few seconds after it started building the index.

@zhangjh

zhangjh commented Sep 11, 2023

Tried it and it didn't work; it crashed a few seconds after it started building the index.

Could you share some device info? Which brand and which version?
Most likely this is insufficient phone memory causing an OOM issue.

@williamlee1982

Could you share some device info? Which brand and which version? Most likely this is insufficient phone memory causing an OOM issue.

OnePlus 11, ColorOS 13, with 16 GB of memory; it should be fine to run.

@Young-Flash
Author

I got here from my previous GitHub issue, microsoft/onnxruntime#16472, which linked to this repo.

Hi, have you solved that issue? And do you use OpenAI CLIP or Chinese-CLIP?

@zhangjh

zhangjh commented Sep 11, 2023

I got here from my previous GitHub issue, microsoft/onnxruntime#16472, which linked to this repo.

Hi, have you solved that issue? And do you use OpenAI CLIP or Chinese-CLIP?

Yeah, I've been using Chinese-CLIP.
I've forgotten how I solved the issue since it was a long time ago. Maybe I just moved past it because I found it worked well in the Android environment.

@greyovo

greyovo commented Oct 8, 2023

Great news! The Android app (PicQuery) is now on Google Play for free, supporting both English and Chinese: https://play.google.com/store/apps/details?id=me.grey.picquery

The source code will be public soon, as I need to clean a few things up :)

@Young-Flash @williamlee1982 @mazzzystar @stakancheck

@mazzzystar
Owner

@greyovo Great! I'll update the Android code and app details in the README after your work is complete. :)

@Young-Flash
Author

@greyovo Nice UI, thanks for your great work.
But it doesn't seem to work on my device (Honor 9X Pro): it indexes pics, but I can't get a single pic back after a query.

@nodis

nodis commented Oct 8, 2023

Great news! The Android app (PicQuery) is now on Google Play for free, supporting both English and Chinese: https://play.google.com/store/apps/details?id=me.grey.picquery […]

@greyovo

When indexing an album, the app crashes every time after scanning roughly 800-900 images.
Environment:
Phone: Xiaomi 13 Ultra
Android version: Android 13
MIUI version: MIUI 14 V14.0.23.9.18.DEV (developer build)

java.io.FileNotFoundException: Failed to create image decoder with message 'invalid input'Input contained an error.
at android.database.DatabaseUtils.readExceptionWithFileNotFoundExceptionFromParcel(DatabaseUtils.java:151)
at android.content.ContentProviderProxy.openTypedAssetFile(ContentProviderNative.java:780)
at android.content.ContentResolver.openTypedAssetFileDescriptor(ContentResolver.java:2029)
at android.content.ContentResolver.openTypedAssetFile(ContentResolver.java:1934)
at android.content.ContentResolver.lambda$loadThumbnail$0(ContentResolver.java:4159)
at android.content.ContentResolver$$ExternalSyntheticLambda1.call(Unknown Source:10)
at android.graphics.ImageDecoder$CallableSource.createImageDecoder(ImageDecoder.java:550)
at android.graphics.ImageDecoder.decodeBitmapImpl(ImageDecoder.java:1870)
at android.graphics.ImageDecoder.decodeBitmap(ImageDecoder.java:1863)
at android.content.ContentResolver.loadThumbnail(ContentResolver.java:4158)
at android.content.ContentResolver.loadThumbnail(ContentResolver.java:4142)
at me.grey.picquery.domain.ImageSearcher.encodePhotoList(ImageSearcher.kt:116)
at me.grey.picquery.domain.ImageSearcher$encodePhotoList$1.invokeSuspend(Unknown Source:14)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:280)
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source:1)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source:1)
at me.grey.picquery.domain.AlbumManager$encodeAlbums$2.invokeSuspend(AlbumManager.kt:103)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:584)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:793)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:697)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:684)
Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [androidx.compose.ui.platform.MotionDurationScaleImpl@e39da66, androidx.compose.runtime.BroadcastFrameClock@1141a7, StandaloneCoroutine{Cancelling}@7840654, AndroidUiDispatcher@61c79fd]

@zhangjh

zhangjh commented Oct 8, 2023

You can try my smartSearch, though it's paid.
https://play.google.com/store/apps/details?id=me.zhangjh.smart.search
That PicQuery above is really pushing the competition; I don't know what's to be gained by putting so much development effort into a free app. Isn't that undervaluing your own work? 😂

@Young-Flash
Author

@greyovo just go ahead, I am willing to help if needed.


@zhangjh

This comment was marked as off-topic.

@mazzzystar

This comment was marked as off-topic.

@Baiyssy

Baiyssy commented Oct 9, 2023

Great news! The Android app (PicQuery) is now on Google Play for free, supporting both English and Chinese: https://play.google.com/store/apps/details?id=me.grey.picquery […]

Amazing! For a first version, the level of completeness is already impressively high!
It runs almost perfectly on my Xiaomi 9 (Android 11), having indexed about 12,000 images.
A few small issues and suggestions:

  1. The app crashes when indexing certain folders.
  2. Occasionally, after a search, tapping the X at the right end of the search box leaves the focus outside the box, and you have to tap the search box again before you can type.
  3. Search returns at most 30 images; could it return more results?
  4. It would be nice if the image-viewing screen had a share menu; right now you can only view.
  5. Could search results optionally be sorted by relevance or by time?
  6. Could time and location filters be added?
  7. Could related keywords be suggested? For example, when searching "sunrise", if the app finds many photos of sunrises over the sea, it could suggest "sunrise over the sea" as a related keyword. Though this might not be supported in principle.

Thanks!

@LXY1226

LXY1226 commented Oct 9, 2023

Great job all around!
But I've also run into the crash issue. Looking forward to the open-source release; I'll go fix it right away. Also, ONNX Runtime's NNAPI support looks like it could be used directly.

@greyovo

greyovo commented Oct 9, 2023

@Baiyssy @nodis Thanks for the feedback! I've noted all these bugs, and most of the features mentioned will be added too; it just needs a bit of time, as I've been busy lately.

@LXY1226 Thanks for the support!

@Baiyssy

Baiyssy commented Oct 10, 2023

@Baiyssy @nodis Thanks for the feedback! I've noted all these bugs […]

It seems newly added images don't get indexed?

@greyovo

greyovo commented Oct 12, 2023

It seems newly added images don't get indexed?

@Baiyssy Yes, I forgot to consider this. The current version can't automatically update the index, and there's no way to rebuild it either... I'll fix this in a later release.

@greyovo

greyovo commented Oct 13, 2023

PicQuery is open-source now, see https://github.com/greyovo/PicQuery :)

@mazzzystar
Owner

@greyovo Great! I've added your Android repository link in the README.

@Young-Flash
Author

You rock!!! @greyovo

It's time to close this issue; further discussion can happen in PicQuery. Thanks everyone 😄
