This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

Totally unable to get started; consider posting quickstart example? #30

Closed
aseemk opened this issue Feb 20, 2013 · 7 comments

Comments


aseemk commented Feb 20, 2013

Hey there,

I've been really excited to try FakeS3 ever since I heard about it. I love the idea of being able to develop code against S3 even when I'm offline.

I finally got around to it, but unfortunately I'm having a ton of trouble figuring out how to use it for even something as basic as accessing an image from my web browser. I'm a Ruby noob, so I'm also not having any luck debugging.

I'm on Mac OS X 10.7.5 (Lion), running the system default Ruby 1.8.7. I installed fakes3 via sudo gem install fakes3 (just gem install fakes3 gave me a permissions error), which installed FakeS3 0.1.5.

I tried starting it with -p 4567 -r ~/Dropbox, to see if I could browse my Dropbox files via FakeS3. Going to http://localhost:4567/ indeed showed the top-level directories in my Dropbox, but trying to browse to any sub-directory, or any individual file within any of those sub-directories, e.g. http://localhost:4567/Pictures/Misc/pixel.gif, always resulted in a 404.

I saw that FakeS3 was indeed treating each of those top-level dirs as buckets, and since you've documented that hostname-style requests are recommended, I added s3.amazonaws.com and dropbox.s3.amazonaws.com to my /etc/hosts and ran with -r ~/. I'm now able to view my home directory's top-level dirs as buckets at http://s3.amazonaws.com:4567/. Unfortunately, I see the same buckets at http://dropbox.s3.amazonaws.com:4567/. And still, I can't access any file, e.g. http://dropbox.s3.amazonaws.com:4567/Pictures/Misc/pixel.gif or http://s3.amazonaws.com:4567/Dropbox/Pictures/Misc/pixel.gif.
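
For reference, the /etc/hosts entries described above would presumably both point at the local machine, along these lines (the loopback address is an assumption; the exact IP isn't stated above):

127.0.0.1   s3.amazonaws.com
127.0.0.1   dropbox.s3.amazonaws.com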

I've done a bunch more experimenting, and still can't manage to just view a simple image in my browser. What am I doing wrong?

My sincere apologies if I'm missing something obvious, and thanks for your help. It might be helpful to have a simple "quickstart" example in the readme. If you'd like, I'd be happy to submit one after I figure this out.

Thanks!


aseemk commented Apr 5, 2013

Re-inquiring about this. I'd still love to use it!


reiz commented Apr 2, 2014

@aseemk Not sure fakes3 is the best solution for browsing your pictures in the browser. I'm using it for local development and to run my tests offline, and for that it works perfectly.


jubos (repo owner) commented Apr 3, 2014

@aseemk, I agree with @reiz. Unfortunately, you can't point FakeS3 at an arbitrary directory and have it serve the directory as if it were served from S3. You have to give it a dedicated directory and fill that directory by making S3 put calls.

If you want to serve files from the current directory over HTTP, I have found this one-liner works well:
python -m SimpleHTTPServer
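
(On Python 3, the equivalent one-liner is python3 -m http.server.)

As for the FakeS3 workflow itself, here is a minimal quickstart sketch in Ruby. It assumes FakeS3 is running against a dedicated root directory (e.g. fakes3 -r /tmp/fakes3_root -p 4567) and uses the v1-era aws-sdk gem; the bucket name my-bucket and the key hello.txt are placeholders, not anything defined in this thread.

# Minimal sketch, assuming FakeS3 is already running locally, e.g.:
#   fakes3 -r /tmp/fakes3_root -p 4567
require 'aws-sdk'  # v1-era gem (AWS::S3)

s3 = AWS::S3.new(
  :access_key_id       => 'fake',   # FakeS3 doesn't check credentials
  :secret_access_key   => 'fake',
  :s3_endpoint         => 'localhost',
  :s3_port             => 4567,
  :s3_force_path_style => true,     # avoids the /etc/hosts dance
  :use_ssl             => false
)

bucket = s3.buckets.create('my-bucket')               # create a bucket
bucket.objects['hello.txt'].write('Hello, FakeS3!')   # PUT an object
puts bucket.objects['hello.txt'].read                 # GET it back

After the put, the object should also be retrievable in a browser at http://localhost:4567/my-bucket/hello.txt, assuming FakeS3 serves path-style GETs the way the bucket listings above suggest.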

jubos closed this as completed Apr 3, 2014

aseemk commented Apr 4, 2014

No worries guys, this explains it perfectly:

> Unfortunately, you can't point FakeS3 at an arbitrary directory and have it serve the directory as if it were served from S3. You have to give it a dedicated directory and fill that directory by making S3 put calls.

I was under the mistaken impression that FakeS3 worked purely via the file system. Thanks for clarifying!

It'd still be great if I could experience the functionality in some way without writing a bunch of code first, but I understand that might be difficult or impractical.


aseemk commented Apr 4, 2014

(By "some way", I mean e.g. like the AWS S3 console lets me get a sense of S3.)

SeanHayes commented:

> Unfortunately, you can't point FakeS3 at an arbitrary directory and have it serve the directory as if it were served from S3. You have to give it a dedicated directory and fill that directory by making S3 put calls.

Jesus, that ought to be at the top of the README. This project has been a colossal waste of time for me.


gaul commented Feb 25, 2016

@SeanHayes S3Proxy can serve existing directories and files, although only newly created objects will have all content metadata, e.g., ETag.
