[BUG] Photo upload fails #830

Closed
sunny5055 opened this issue Apr 10, 2022 · 33 comments
Labels: bug (Something isn't working)

@sunny5055

Describe the bug
Picture upload fails on a new resume.

Product Flavor

  • [Yes] Self Hosted

To Reproduce

  1. Create a new resume
  2. Upload a photo

Expected behavior
The photo should be uploaded and displayed.

Desktop (please complete the following information):

  • OS: Docker (latest version)
  • Browser: Firefox
  • Version: 100

Additional context
The request sent to /api/resume/3/photo gives the response:
500 Internal Server Error

@sunny5055 sunny5055 added the bug Something isn't working label Apr 10, 2022
@AmruthPillai
Owner

You might need to update your ENVs to have proper S3 credentials. Have they been added?

@melbadry97

Could you provide a sample of S3 credentials? I've added my gateway and bucket name but it's still not working; I've confirmed access/read/write on the S3 bucket through the terminal.

@sunny5055
Author

> You might need to update your ENVs to have proper S3 credentials. Have they been added?

I have gone through the documentation but didn't notice anything about this. Maybe it was recently added.
Is there a way to store locally instead of S3?

@YuriMB

YuriMB commented Apr 12, 2022

> Could you provide a sample of S3 credentials? I've added my gateway and bucket name but it's still not working; I've confirmed access/read/write on the S3 bucket through the terminal.

Right, so after a bit of tinkering, the following configuration seems to be a good example:

      STORAGE_BUCKET=MY_BUCKET_NAME
      STORAGE_REGION=eu-central-1
      STORAGE_ENDPOINT=https://s3.eu-central-1.amazonaws.com/
      STORAGE_URL_PREFIX=https://MY_BUCKET_NAME.s3.eu-central-1.amazonaws.com/
      STORAGE_ACCESS_KEY=IAMUSERACCESSKEY
      STORAGE_SECRET_KEY=IAMUSERSECRETKEY

Note that the region needs to match the region of the bucket.

In my case, I've created an IAM user and used the credentials provided during generation as the access key and secret.
I've configured the bucket to allow public access. Does this help?
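
For reference, here's a minimal sketch of a bucket policy that allows public read of uploaded objects (the bucket name is the same placeholder as in the config above; swap in your own):

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::MY_BUCKET_NAME/*"
          }
        ]
      }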

I'm still with sunny5055, though: I'm not very happy with being dependent on a cloud provider. I would rather see a choice between locally hosted storage and a storage bucket on some other provider.

@dvd741-a

@modem7 - I don't understand how you were able to use local storage.
It's hardcoded to use the S3 client.

I would prefer local storage anyway.

@modem7
Contributor

modem7 commented Apr 13, 2022

> @modem7 - I don't understand how you were able to use local storage. It's hardcoded to use the S3 client.
>
> I would prefer local storage anyway.

I think you might be right.

I used to be able to use local storage, but it seems in the latest version, whilst it's able to use what I had before, it doesn't allow me to upload new images now.

I would say that if reactive-resume is going "self-hosted", every part should be self-hosted, with zero reliance on 3rd party storage or APIs.

I certainly won't be using S3, and if that's the only solution, that'll be me out unfortunately, especially as I'm only hosting it for myself.

@Pheggas

Pheggas commented Apr 16, 2022

It literally doesn't make any sense to have S3 cloud storage as part of the requirements. I've been trying to deploy this for about a month already and still no luck. First it was because of the outdated YAML parser that Portainer uses, and now it's this weird requirement of S3 cloud storage.

How much longer will it take to deploy an app like this?

BTW: I searched the whole documentation of Reactive Resume and didn't find any storage-related var.

@kgotso
Contributor

kgotso commented Apr 16, 2022

Had issues deploying the self-hosted Docker version following the instructions in the tutorial. Ran into issues with the server not starting up correctly, and it was due to the S3 parameters that are not mentioned anywhere at all. I see no point in labeling this as self-hostable if it is still dependent on external parties for crucial features. Photo uploads do not work at all in the standalone version without the S3 parameters. The default should be to use the file system available to the server, which can easily be mapped by the installer.

@AmruthPillai
Owner

> It literally doesn't make any sense to have S3 cloud storage as part of the requirements. I've been trying to deploy this for about a month already and still no luck. First it was because of the outdated YAML parser that Portainer uses, and now it's this weird requirement of S3 cloud storage.

Please understand that a growing app like this can have its issues with fast-and-loose development practices. I am trying my best to keep it simple, but also working, as much as I can.

The YAML anchors issue was resolved later, as I removed them and reverted to adding ENV_VARS directly in Docker's environment array. The reason I had to add S3 as a requirement was that when users (even on self-hosted) were uploading their images, the previous logic stored these files locally. But because of the way I have CI/CD set up, a new instance is spun up and the old one is discarded. This means all old files on the filesystem also get deleted. So I had to move files to a non-ephemeral FS, hence DigitalOcean Spaces (which is S3-compatible).
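
For anyone on DigitalOcean Spaces: the same STORAGE_* variables from the example earlier in this thread should apply, just pointed at the Spaces endpoint. A sketch with a placeholder Space name and the fra1 region (adjust both to your own):

      STORAGE_BUCKET=MY_SPACE_NAME
      STORAGE_REGION=fra1
      STORAGE_ENDPOINT=https://fra1.digitaloceanspaces.com
      STORAGE_URL_PREFIX=https://MY_SPACE_NAME.fra1.digitaloceanspaces.com/
      STORAGE_ACCESS_KEY=SPACES_ACCESS_KEY
      STORAGE_SECRET_KEY=SPACES_SECRET_KEY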

I do hope to make S3 an optional requirement, and once I figure that out, I will do what is required to make it simpler.

@dvd741-a

dvd741-a commented Apr 30, 2022

Can't you just store the files in a mapped volume/directory? That way they would be stored independently of the instance.

@modem7
Contributor

modem7 commented May 29, 2022

@AmruthPillai Local mount storage would not get overwritten.

The image should point to an internal directory for images, which we can overwrite with a bind/volume mount.

That's typically how Docker works: the images themselves are ephemeral, but local storage is not.
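
As a rough docker-compose sketch of that idea (the in-container upload path /app/uploads is an assumption, not taken from the image; check where the app actually writes):

      services:
        server:
          image: amruthpillai/reactive-resume:latest
          volumes:
            # Bind mount: uploads land on the host and survive container re-creation
            - ./uploads:/app/uploads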

@gymnae

gymnae commented May 31, 2022

While I managed to use a free-tier Scaleway bucket, I also fail to see why S3-compatible storage is required, when a locally mounted host folder or named Docker volumes would allow permanent storage, very much compatible with a CI pipeline.

@martadinata666
Contributor

Basically, S3 was added in an attempt to solve some weird problem (#818), and it would be good if the S3 support worked with self-hosted S3 like MinIO.

@modem7
Contributor

modem7 commented May 31, 2022

> Basically, S3 was added in an attempt to solve some weird problem (#818), and it would be good if the S3 support worked with self-hosted S3 like MinIO.

Sure, but given how heavy Reactive Resume already is with three containers, adding a fourth is not the direction this should head in, especially given that Docker volumes and bind mounts exist for this exact reason.

@martadinata666
Contributor

Well, if reactive-resume stored images/assets correctly, we could already use volume/bind mounts right now, but it seems it's not as simple as it should be. Wherever this heads, we just hope it will work as a self-hosted solution.

@dvd741-a

dvd741-a commented Jun 6, 2022

#906

Once this gets approved and merged, add the environment variable:
STORAGE_S3_ENABLED=false

@martadinata666
Contributor

Currently testing your patch; it works locally right now, but it needs further testing since, as in issue #818, the images magically go missing after a few days.

@dvd741-a

dvd741-a commented Jun 7, 2022

> Currently testing your patch; it works locally right now, but it needs further testing since, as in issue #818, the images magically go missing after a few days.

If we mount the path where the images are saved (see documentation: https://docs.docker.com/storage/bind-mounts/) to a folder on the Docker host, the images will be persisted (and stay available). As I can't build the Docker image, I created this pull request.

Does the patch work for you? Then I guess you could try the folder mounting. (Not sure how to build a Docker image myself; it kept throwing several errors.)

If the patch works, we could still figure out why they disappear afterwards. (Don't forget ENV STORAGE_S3_ENABLED=false; the default is an Amazon S3 bucket, and this env explicitly disables S3.)
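
Putting both together, a compose-style sketch (again, the in-container path /app/uploads is an assumption, not confirmed from the code):

      services:
        server:
          environment:
            # Explicitly disable S3 once the PR is merged; the default is S3
            - STORAGE_S3_ENABLED=false
          volumes:
            # Persist uploads on the host so new containers keep them
            - ./uploads:/app/uploads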

@martadinata666
Contributor

> If we mount the path where the images are saved (see documentation: https://docs.docker.com/storage/bind-mounts/) to a folder on the Docker host, the images will be persisted (and stay available). As I can't build the Docker image, I created this pull request.
>
> Does the patch work for you? Then I guess you could try the folder mounting. (Not sure how to build a Docker image myself; it kept throwing several errors.)
>
> If the patch works, we could still figure out why they disappear afterwards. (Don't forget ENV STORAGE_S3_ENABLED=false; the default is an Amazon S3 bucket, and this env explicitly disables S3.)

The patch works correctly, with one caveat: I needed to remove aws-sdk and docusaurus from package.json, as they are already in the per-workspace package.json.

Now I'm waiting to see if some black magic makes the images suddenly disappear.

@dvd741-a

dvd741-a commented Jun 7, 2022

If you run a new Docker image version, they will, since the images are stored "within" the container.

With mounting, the images are stored "outside" the container, on the host.

@martadinata666
Contributor

martadinata666 commented Jun 7, 2022

> If you run a new Docker image version, they will, since the images are stored "within" the container.
>
> With mounting, the images are stored "outside" the container, on the host.

On the version before S3 was implemented, even when we bound the assets to the outside via either a volume mount or a bind mount, they would just disappear after a few days. So either something is overwriting them or some routine clean-up is running. Can't really be sure; let's see after a few days.

Edit: working pretty well, let's hope your PR gets into master.

@dvd741-a

@AmruthPillai can you have a look?

@dvd741-a

> On the version before S3 was implemented, even when we bound the assets to the outside via either a volume mount or a bind mount, they would just disappear after a few days. So either something is overwriting them or some routine clean-up is running. Can't really be sure; let's see after a few days.
>
> Edit: working pretty well, let's hope your PR gets into master.

Do you know how to convert the source code into working Docker container(s)?

  • I wasn't able to figure this out

@martadinata666
Contributor

martadinata666 commented Jun 11, 2022

> Do you know how to convert the source code into working Docker container(s)?
>
> • I wasn't able to figure this out

Is the official one not working for you now? I must admit that I don't use the official Dockerfile, so I can't really tell if it will work.
Mine is https://github.com/martadinata666/dockerized/blob/abf8805d23b8cdab69cfb167e1f57b37dd29e0e3/reactive-resume/Dockerfile.v3; that may give you the gist. I already build locally with NODE_ENV=development, so the Dockerfile just fetches the deps, packs the app, and runs it.
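
If you just want a generic starting point, the standard build-and-run flow from the repo root is below (the tag and port are illustrative, and the official Dockerfile may need extra build args):

      docker build -t reactive-resume:local .
      docker run -d -p 3000:3000 reactive-resume:local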

@dvd741-a

dvd741-a commented Jun 20, 2022

Made it to master and release 3.4.6. Should be resolved with:
STORAGE_S3_ENABLED=false

@martadinata666
Contributor

martadinata666 commented Aug 29, 2022

Can someone reconfirm that 3.6.4 breaks local storage for pictures? Thanks.

@AmruthPillai
Owner

@martadinata666 Trying to recreate the issue locally and debugging now, will fix the issue asap :)

@AmruthPillai
Owner

@martadinata666 Should be fixed in the next release: https://github.com/AmruthPillai/Reactive-Resume/releases/tag/v3.6.5

Now, you don't need any other flags. If you omit the STORAGE_BUCKET env, it will automatically store images on local storage.

@martadinata666
Contributor

> @martadinata666 Should be fixed in the next release: https://github.com/AmruthPillai/Reactive-Resume/releases/tag/v3.6.5
>
> Now, you don't need any other flags. If you omit the STORAGE_BUCKET env, it will automatically store images on local storage.

I see; just tried it, and it works correctly. Thanks for the fast response and fix. 👍🏼

@rodrigogonegit

rodrigogonegit commented Nov 26, 2023

Not setting STORAGE_BUCKET does not work. It throws:

        throw result.error;
        ^

ZodError: [
  {
    "code": "invalid_type",
    "expected": "string",
    "received": "undefined",
    "path": [
      "STORAGE_BUCKET"
    ],
    "message": "Required"
  }
]

How exactly should I configure it to use local storage?
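
For context, that is the shape Zod produces when a required string is missing from an env schema; a minimal TypeScript sketch of the pattern (not the project's actual schema):

      import { z } from "zod";

      // Hypothetical subset of the env schema: STORAGE_BUCKET is a required string
      const schema = z.object({
        STORAGE_BUCKET: z.string(),
      });

      // safeParse reports failures instead of throwing; the caller then rethrows
      const result = schema.safeParse(process.env);
      if (!result.success) throw result.error; // yields a ZodError like the one above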

@AmruthPillai
Owner

@rodrigogonegit What you were referring to is the older version. In the new version, you have to make use of an S3-like storage service, which is why MinIO is part of the docker-compose example.
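
For reference, the MinIO piece of a compose file typically looks something like this (the service name, credentials, and port here are illustrative, not necessarily the project's exact sample values):

      services:
        minio:
          image: minio/minio:latest
          command: server /data
          environment:
            MINIO_ROOT_USER: minioadmin       # change these in production
            MINIO_ROOT_PASSWORD: minioadmin
          ports:
            - "9000:9000"
          volumes:
            - minio_data:/data

      volumes:
        minio_data: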

@rodrigogonegit

@AmruthPillai using the default config provided in the examples does not allow me to upload a picture. Trying to access the URL of the picture leads to:

      <Error>
        <Code>AccessDenied</Code>
        <Message>Access Denied.</Message>
        <Key>pictures/clpe1l9pe0000fsyo2fo0tf1p.jpg</Key>
        <BucketName>clpe1l9pe0000fsyo2fo0tf1p</BucketName>
        <Resource>/clpe1l9pe0000fsyo2fo0tf1p/pictures/clpe1l9pe0000fsyo2fo0tf1p.jpg</Resource>
        <RequestId>179B4E1A2748749F</RequestId>
        <HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId>
      </Error>

@AmruthPillai
Owner

> @AmruthPillai using the default config provided in the examples does not allow me to upload a picture. Trying to access the URL of the picture leads to:
>
> AccessDenied: Access Denied. (key: pictures/clpe1l9pe0000fsyo2fo0tf1p.jpg)

Can you check if the bucket policy was applied correctly? Are you using a user account other than minioadmin?
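
One way to check is with the MinIO client, assuming the default minioadmin credentials and a placeholder bucket name:

      # Point mc at the local MinIO instance
      mc alias set local http://localhost:9000 minioadmin minioadmin

      # Inspect the current anonymous-access policy on the bucket
      mc anonymous get local/MY_BUCKET_NAME

      # Allow anonymous (public) downloads from the bucket
      mc anonymous set download local/MY_BUCKET_NAME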
