Calling Model#save can raise AWS::S3::Errors::RequestTimeout #751

Closed
indirect opened this Issue Feb 22, 2012 · 120 comments

@indirect

Hi guys, I've been noticing that after switching to aws-sdk, calling #save on a model with a paperclip attachment will now sometimes (I assume only those times when S3 times out) raise an AWS::S3::Errors::RequestTimeout error. It seems completely crazy to me to have to add a rescue AWS::S3::Errors::RequestTimeout clause after every single time that I ever call save anywhere in my app. Shouldn't paperclip just catch the error and either retry or report that the save failed? Thanks.

505aaron commented Feb 24, 2012

Try setting the :whiny option to false.

indirect commented Feb 25, 2012

The source says this about the :whiny option:

    # * +whiny+: Will raise an error if Paperclip cannot post_process an uploaded file due
    #   to a command line error. This will override the global setting for this attachment.
    #   Defaults to true. This option used to be called :whiny_thumbnails, but this is
    #   deprecated.

Unfortunately, :whiny doesn't apply to my situation at all. First, this isn't happening during post-processing; it's happening when Paperclip tries to write the file to S3. Second, the problem I am reporting is not that Paperclip is writing errors into the logs, it is that Paperclip doesn't catch exceptions raised by the aws-s3 library that it uses. As a result, I am seeing exceptions in my application any time S3 has a hiccup. Not very cool. :(

Contributor

sikachu commented Mar 2, 2012

Yeah, that totally sucks. I think we could do better on this one.

But then, how do you think we should handle this? I'd want to retry once or twice, but on the third failure should it add the error to the model?
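The retry-then-report approach sikachu proposes could be sketched like this. This is only a sketch: the `RequestTimeout` class and `FlakyModel` here are stand-ins (the real exception is `AWS::S3::Errors::RequestTimeout`, and the real object is an ActiveRecord model), so the example runs without AWS at all.

```ruby
# Stand-in for AWS::S3::Errors::RequestTimeout so this runs standalone.
class RequestTimeout < StandardError; end

# Stand-in for a model whose #save hits S3: fails `failures` times, then succeeds.
class FlakyModel
  def initialize(failures)
    @failures = failures
  end

  def save
    if @failures > 0
      @failures -= 1
      raise RequestTimeout, "socket idle too long"
    end
    true
  end
end

# Retry the save up to `attempts` times; on final failure, report false
# instead of raising (a real version might add to record.errors instead).
def save_with_retry(record, attempts: 3)
  tries = 0
  begin
    record.save
  rescue RequestTimeout
    tries += 1
    retry if tries < attempts
    false
  end
end

puts save_with_retry(FlakyModel.new(2))  # true  -- succeeded on the 3rd try
puts save_with_retry(FlakyModel.new(5))  # false -- gave up after 3 attempts
```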

indirect commented Mar 3, 2012

Yeah, that seems like exactly the right answer to me.

nengine commented Mar 16, 2012

One thing is for sure: it happens on Ruby 1.9 but works fine on 1.8. According to this link https://forums.aws.amazon.com/thread.jspa?threadID=74945, setting :encoding => "BINARY" would solve the problem, but I don't know where I should set this option.

indirect commented Mar 16, 2012

There is not a chance that changing the encoding will stop Amazon S3 from having timeouts. :P

nengine commented Mar 16, 2012

I'm running JRuby 1.6.7, and it worked in Ruby 1.8 mode but not in 1.9 mode. So it looks to me like something to do with 1.9.

charles-luv commented Mar 23, 2012

@indirect This is a slightly parallel issue, but I was having timeouts every single time until I explicitly marked the file with a binary encoding. I'm guessing this is a problem with the AWS gem, not Paperclip. I'm sure there will sometimes be a legitimate timeout error when S3 is down, and that error should still be handled.

@tcpipmen I was uploading a file from a form, and used file.tempfile.binmode. Not sure if that's the right thing to do, but I no longer time out every single time. Try that out.

jasperkennis commented Apr 1, 2012

@charles-luv Where do you set file.tempfile.binmode? I'd like to try that too. The issue seems highly related to issue #721 indeed; I posted some details about the trouble I was experiencing there.

charles-luv commented Apr 2, 2012

@jasperkennis with a file object, try:

filename = "destination_filename"
s3_obj = s3.buckets["bucket_name"].objects[filename]
s3_obj.write(file.tempfile.binmode)

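A note on why passing the result of binmode directly to write can work: IO#binmode switches the stream to binary mode and returns the IO itself, so the call hands the (now binary) file straight to S3Object#write. A minimal demonstration with a plain File (File::NULL stands in for a real upload; a Rails Tempfile behaves similarly since it delegates to File):

```ruby
# IO#binmode returns its receiver, so the binmode'd IO can be passed inline.
f = File.open(File::NULL)
puts f.binmode.equal?(f)  # true -- binmode returns the same IO object
puts f.binmode?           # true -- the stream is now in binary mode
f.close
```
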
keilmillerjr commented Apr 3, 2012

I am having the same issue, I believe.

New Rails 3.2.3 app
aws-sdk (1.3.9)
paperclip (3.0.1)
paperclip-env_aware (0.0.3)

I set my credentials in paperclip_env.yml and did not ignore the yml file. Git defaults, and ignored /public/system (for dev only). Pushed to the Heroku Cedar stack. Timed out with a 500 error upon trying to upload a file with my app online. Checked the logs.

Logs, summarized:

Started PUT
Command :: identify
Command :: convert
[paperclip] saving
[paperclip] Saving attachments.
Error R12 (Exit timeout) -> Process failed to exit within 10 seconds of SIGTERM
Stopping process with SIGKILL
Process exited with status 137
Completed 500 Internal Server Error in 23052ms
AWS::S3::Errors::RequestTimeout (Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.):

keilmillerjr commented Apr 4, 2012

Downgrading to gem "paperclip", "~> 2.7" (2.7.0) immediately solved the issue. There is definitely a bug somewhere. Let me know when it is resolved! :)

Contributor

sikachu commented Apr 4, 2012

Oh my, this is going to be hard to debug :/

indirect commented Apr 4, 2012

Sorry, next time I will try to only have problems that are easy to debug. :trollface:

:)

Contributor

sikachu commented Apr 4, 2012

HAHAHAHA ;)

keilmillerjr commented Apr 4, 2012

Debug == roll version back... :p

jasperkennis commented Apr 4, 2012

Not sure if a debug is needed. It seems like 2.x and 3.x just follow a different strategy, which can be fine, but it should be documented very clearly.

Contributor

sikachu commented Apr 4, 2012

It's not a different strategy, though. We switched to aws-sdk after 2.7, and it seems like the problem arrived with that change. Unless people have a problem with 3.0.1 but not 3.0.0, in which case we have a problem on our side.

Anyhow, too much chit-chat on this. If anybody can provide me a backtrace from when that error happens, it would be great.

swrobel commented Apr 4, 2012

I'm currently using 2.7 with aws-sdk and it's fine. See backtraces here: #721

Contributor

brumm commented Apr 4, 2012

I think I may have found the issue. Consider this:

1.9.2p180 :003 > object = b.objects['test.png']
 => <AWS::S3::S3Object:mybucket/test.png> 

# specifying a path via :file
1.9.2p180 :004 > object.write(:file => "~/Desktop/test.png", :acl => :public_read)
 => <AWS::S3::S3Object:mybucket/test.png> 

# specifying a pure path
1.9.2p180 :005 > object.write("~/Desktop/test.png", :acl => :public_read)
 => <AWS::S3::S3Object:mybucket/test.png> 

# File.new with :encoding option set to "BINARY"
1.9.2p180 :006 > object.write(File.new("~/Desktop/test.png", :encoding => "BINARY"), :acl => :public_read)
 => <AWS::S3::S3Object:mybucket/test.png> 

# File.new without :encoding set, file.external_encoding is "UTF-8"
1.9.2p180 :007 > object.write(File.new("~/Desktop/test.png"), :acl => :public_read)
AWS::S3::Errors::RequestTimeout: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>RequestTimeout</Code><Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message><RequestId>12D953A74BCC9FCD</RequestId><HostId>sy6gBHDp2tUtWPHZTf/p3thSX8J2rdjKe0wfVy8SYfPcU/2yIANq46ckhUyDvLiZ</HostId></Error>
  from ~/.rvm/gems/ruby-1.9.2-p180-patched@jobmensa/gems/aws-sdk-1.3.4/lib/aws/core/client.rb:261:in `return_or_raise'
  from ~/.rvm/gems/ruby-1.9.2-p180-patched@jobmensa/gems/aws-sdk-1.3.4/lib/aws/core/client.rb:321:in `client_request'
  from (eval):3:in `put_object'
  from ~/.rvm/gems/ruby-1.9.2-p180-patched@jobmensa/gems/aws-sdk-1.3.4/lib/aws/s3/s3_object.rb:315:in `write'
  from (irb):6
  from ~/.rvm/gems/ruby-1.9.2-p180-patched@jobmensa/gems/railties-3.2.3/lib/rails/commands/console.rb:47:in `start'
  from ~/.rvm/gems/ruby-1.9.2-p180-patched@jobmensa/gems/railties-3.2.3/lib/rails/commands/console.rb:8:in `start'
  from ~/.rvm/gems/ruby-1.9.2-p180-patched@jobmensa/gems/railties-3.2.3/lib/rails/commands.rb:41:in `<top (required)>'
  from script/rails:6:in `require'
  from script/rails:6:in `<main>'

I think there may be a problem in how Paperclip is setting encodings for the FileAdapter stuff, but I haven't dug any deeper so far. Any ideas?
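One plausible reading of this session (an assumption, not something confirmed in this thread) is that on Ruby 1.9 a file opened without a binary encoding yields UTF-8 tagged data, and for multibyte data a character count disagrees with the byte count. Any request header or signature derived from the wrong one makes S3 wait for bytes that never arrive, which surfaces as RequestTimeout. A tiny demonstration of the mismatch:

```ruby
# Under UTF-8, character length and byte length disagree for multibyte data;
# once the string is force-tagged as binary, they agree again.
s = "caf\u00E9"                                   # "cafe" with an accented e, read as UTF-8
puts s.length                                      # 4 characters
puts s.bytesize                                    # 5 bytes

binary = s.dup.force_encoding(Encoding::BINARY)
puts binary.length                                 # 5 -- length == bytesize in binary
```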

Contributor

brumm commented Apr 5, 2012

Submitted a pull request to fix this issue: #811

Contributor

sikachu commented Apr 5, 2012

I've merged #811 into master. Can you guys give it a try and see if it fixes your problem?

calicoder commented Apr 9, 2012

@sikachu, it resolved the issue for me. Pre-commit, it worked fine in my local environment but broke in Heroku land. Now it works in both places. Cheers!

Contributor

sikachu commented Apr 9, 2012

W00t! Going to release 3.0.2 later today. Thanks for the confirmation.

keilmillerjr commented Apr 13, 2012

Thanks for the update, guys! 3.0.2 did indeed resolve the issue. :)

@sikachu sikachu closed this Apr 25, 2012

alexdowad commented Jun 11, 2012

The same problem is occurring for me with version 2.7. Why not just upgrade to 3.0.2? See spree/spree#1653 and you'll understand.

Is it possible to backport this fix to the 2.7 branch?

Contributor

sikachu commented Jun 11, 2012

Yep, I can release another 2.7.x.

alexdowad commented Jun 11, 2012

THANKS! PAPERCLIP ROCKS!!!

...Can you post here when the new version is out?

@sikachu sikachu reopened this Jun 15, 2012

bastilian commented Jul 19, 2012

Is there a status on backporting it to 2.7?

Contributor

sikachu commented Jul 19, 2012

@bastilian Current ticket status: confused. >_<

See, the regression was introduced in 3.0.1, where the adapters weren't marking the file as binary as they should, which resulted in a wrongly calculated signature. I've looked into the 2.7 code and it seems like we already mark those files as binary, so I'm not sure what else we need to fix.

Let me have another thorough look at it to see if I can find the culprit.

jonhyman commented Aug 23, 2012

I'm seeing this in 3.1.4. Has anyone else seen issues with this?

AWS::S3::Errors::RequestTimeout
Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.

jonhyman commented Aug 23, 2012

@sikachu this issue is back due to a recent version of aws-sdk. I downgraded to aws-sdk 1.5.8 (which I was previously on) and it fixes my problem. It looks like aws-sdk 1.6.3 and 1.6.4 added updates to S3; it's probably due to a change in 1.6.3 if I had to guess: http://aws.amazon.com/releasenotes/5728376747252106.

Contributor

sikachu commented Aug 23, 2012

Oh, I guess our monkey patch broke ... Thanks for investigating that, I'll have a look soon.

It seems like they haven't tested their library against Ruby 1.9 yet, which is kind of a shame.

masterkain commented Sep 28, 2012

I'm also getting Excon's timeout exceptions during save when using Fog.

jonhyman commented Nov 7, 2012

I just upgraded from aws-sdk 1.5.8 to 1.7.0 to try it again and got a RequestTimeout again. Leaving my aws-sdk gem at 1.5.8.

jonhyman commented Nov 7, 2012

And I'm using Paperclip master, btw. 567086e is still causing RequestTimeout issues.

jasonfb commented Nov 27, 2012

I am seeing this same error too with Ruby 1.9.3, paperclip 3.3.1, aws-sdk 1.7.1.

Indeed, as others said, downgrading aws-sdk to 1.5.8 fixes the problem for me.

jonhyman commented Jul 30, 2013

I just updated to aws-sdk 1.14.1 and paperclip 3.5.0 to test whether it was fixed. Attachments that don't have pre-processing are uploading fine, but attachments that do have pre-processing are timing out with

Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed

It could be that since the pre-processing runs advpng and optipng for compression, it's taking too long. I also posted this in aws/aws-sdk-ruby#241, but for me something is still broken; going to downgrade.

trevorrowe commented Jul 31, 2013

@jonhyman I would be very interested in working with you to try to replicate the issue and fix the timeout errors for the pre-processed images. I have a few ideas that could resolve the issue, but I am unable to test them myself (I cannot replicate the issue). Would you be willing to email me at trevrowe at amazon dot com and set aside some time to troubleshoot this?

@jonhyman

Sure.

cbrunsdon commented Sep 4, 2013

@trevorrowe @jonhyman Did you two ever have success identifying or fixing the issues? I'm running into the exact same problem.

jasonfb commented Sep 4, 2013

We upgraded to aws-sdk 1.15.0 and paperclip 3.4.2 and things are looking solid (no timeouts). If we encounter the timeouts, I'll post here again.

jasonfb commented Sep 4, 2013

Also, the processing itself was hitting Heroku's 30-second timeout limit (which is a separate problem from the request timeout this thread is about); we used https://github.com/jrgifford/delayed_paperclip/ to do the processing in the background.

jonhyman commented Sep 4, 2013

@cbrunsdon, @trevorrowe and I paired on it a few weeks ago and were able to reproduce the issue. Trevor got enough information in the repro to do further investigation and fix. This was still an issue with aws-sdk 1.15.0 when we tested it.

cbrunsdon commented Sep 4, 2013

@trevorrowe For the record, I believe the issue we ran into is a timeout when attempting to upload a zero-length file.

trevorrowe commented Sep 5, 2013

Sorry for the silence. I've been out of the office for all but three days out of the last three weeks. I want to give special thanks to @jonhyman for helping me reproduce the issue. I will try to devote some time to this today.

cbauer10 commented Sep 13, 2013

aws-sdk 1.18.0 and paperclip 3.5.1 still cause this issue. Any update for us?

@scaryguy scaryguy referenced this issue in aws/aws-sdk-ruby Sep 17, 2013

Closed

Net::OpenTimeout - execution expired #362

scaryguy commented Sep 17, 2013

Hi there...

I'm on Ubuntu 12.04 installed with VMware Player.

I have Ruby 2.0 and Rails 4 installed. I'm using the latest master branch.

Paperclip 3.5.1 and aws-sdk 1.11.1.

I'm experiencing this "execution timeout" issue in my development environment.

I can save images to AWS in production, but I can't do it in development.

Any suggestions?

UPDATE: I've tried aws-sdk 1.18.0 and still the same issue...

scaryguy commented Oct 4, 2013

Hi peeps...

Can't decide how to feel. I'm a little bit angry about losing almost one month because of this issue. But on the other hand, I'm happy because the issue is resolved. Do you know how?

_It was ALL a DNS issue..._

After changing my virtual machine's and physical OS's DNS settings to use Google DNS, everything was FIXED!!!!!!

I DON'T know who is responsible for this situation. Is it my ISP? Who do I have to be angry with??

Here are the standard Google DNS settings:

8.8.8.8
8.8.4.4

jonhyman commented Nov 18, 2013

@trevorrowe Were you ever able to fix this? I'm going to attempt to update again starting tomorrow when you drop the nokogiri < 1.6.0 requirement, but I want to know if it's even worth attempting yet.

trevorrowe commented Nov 19, 2013

I have not taken a look at this in a while. I went digging through my email, and I was unable to find the sample attachment processor you were using that was reproducing the bug. If you can (re)send me that processor, I'm hoping I can clear this bug out. I have a good amount of time this week I could devote to the problem.

@jonhyman

jonhyman Nov 19, 2013

Resent it to you.

@kobekoto

kobekoto Nov 19, 2013

Are you on the latest version of the gem?

Trevor is the man... I was able to resolve the same issue with his help by running:

  1. gem search aws-sdk via the command line -> tells you the latest version of the aws-sdk gem
  2. gem install aws-sdk
  3. I ran bundle update afterwards, but I was still running version 1.11.1 (in my case), so I pinned version '1.26.0' via gem 'aws-sdk', '1.26.0' in my Gemfile.
  4. Ran bundle once again...

BOOM!

Hope that helps..
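For reference, the pin from step 3 is just a Gemfile entry (1.26.0 happened to be the newest release at the time; substitute whatever gem search reports):

```ruby
# Gemfile -- pin aws-sdk to an exact version so `bundle update`
# cannot resolve back to an older release (version is an example)
gem 'aws-sdk', '1.26.0'
```

then run bundle again to apply it.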


@masterkain

masterkain Nov 19, 2013

bundle outdated --strict
bundle update aws-sdk

@trevorrowe

trevorrowe Nov 19, 2013

Just to update, I am currently (as in right now) working on this. Intermittent request timeouts caused by latent network issues should be fixed. The newer versions of the SDK retry these by default now.

I am still tracking down the other issues. Thanks to @jonhyman I now have a case I can reproduce locally. It appears that Paperclip::FileAdapter objects report an improper value when responding to #size. There are multiple scenarios I have been able to create that cause this behavior.

The reason older versions of the Ruby SDK are not affected is that they don't ask the object for its size. Instead, the SDK simply reads the object into memory and gets the byte size from the actual data, which is always correct.

Why does the #size matter so much?

It is sent along to S3 as the Content-Length header. S3 receives the full file and then waits for the "rest" based on the discrepancy between the actual file size and the value reported in Content-Length.

In one of my examples, the FileAdapter indicates the file is 976 bytes, when the file on disk is actually 757 bytes. The 757 bytes are read from disk and then S3 sits waiting for the additional bytes until it times out. The discrepancy appears to creep in during post processing.

I'll update this issue once the dust settles.
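For anyone wanting to see the failure mode without a real processor: the stale #size can be reproduced by replacing a Tempfile's path out from under its open handle, the way tools like optipng do (write to a scratch file, then rename over the original). A sketch with made-up sizes:

```ruby
require "tempfile"

tmp = Tempfile.new(["demo", ".png"])
tmp.write("x" * 976)   # pretend this is the original 976-byte image
tmp.flush

# Simulate a processor that writes its result to a scratch file and
# renames it over the original path -- tmp's file descriptor still
# refers to the old inode afterwards.
scratch = "#{tmp.path}.out"
File.write(scratch, "y" * 757)
File.rename(scratch, tmp.path)

puts tmp.size             # 976 -- stale: fstat on the old inode
puts File.size(tmp.path)  # 757 -- stats the path, always current
```

If the stale 976 ends up in the Content-Length header, S3 receives 757 bytes and waits for the rest until it times out.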

@trevorrowe

trevorrowe Nov 19, 2013

As mentioned above, I was able to reproduce the issue by attempting to upload an attachment that is processed. In the example code I was given, the issue can be tracked down to how a particular processor modified the file. See the following made-up example that reproduces the buggy behavior:

require 'tempfile'
require 'paperclip'

tmpfile = Tempfile.new(['tempfile', '.png'])

`cp test.png #{tmpfile.path}`

# both report the correct size of 976 bytes
puts tmpfile.size, File.size(tmpfile.path)
# 976
# 976

# perform some processing of the png
Paperclip.run("optipng", "-zc1-9 -zm1-9 -zs0-2 -f0-5 #{tmpfile.path}")

# oops! tmpfile.size returns 976, should be 757, ** File.size is correct **
puts tmpfile.size, File.size(tmpfile.path)
# 976
# 757

Paperclip takes the result from the processor(s) and then wraps them in an IO adapter. In this case, the FileAdapter is chosen. This adapter creates a tempfile and would normally correct the size issue, except it does the following on line 15:

# second oops! 
@size = File.size(@target)

Calling File.size with a File/Tempfile object will use that object's #size attribute. If this were changed to File.size(@target.path), the correct size would be found and there would be no issue.

I cannot be certain, but I suspect other users experiencing this issue are running into similar problems. One possible fix is to update aws-sdk to use File.size(object.path) instead of object.size on any object that responds to #path. Essentially, the SDK is being bitten by trusting the #size attribute. Secondly, a one-line patch to Paperclip::FileAdapter might correct a number of processors that return bad Tempfiles.

At this point, I'm inclined to make/submit both changes.

Thoughts?

trevorrowe added a commit to aws/aws-sdk-ruby that referenced this issue Nov 19, 2013

Now using File.size to determine file size for File/Tempfile objects.
It is possible for a File/Tempfile to report an incorrect value
from `#size`.  This change causes the SDK to prefer the response
from `File.size(file.path)` over `file.size`.

See thoughtbot/paperclip#751
@trevorrowe

trevorrowe Nov 22, 2013

Version 1.28.0 of the aws-sdk has been released. I would appreciate users that have locked to an older version to give this a try. This release contains the fixes I detailed above.

@jonhyman

jonhyman Nov 22, 2013

Works for me! Thanks, @trevorrowe!

@brettv

brettv Nov 27, 2013

@trevorrowe, it appears that aws-sdk 1.28.0 has resolved the issue for me! (fingers crossed) Thanks for all the hard work tracking down the #size issue.

@unbalancedparentheses

unbalancedparentheses Dec 16, 2013

With aws-sdk 1.10 I had this issue. With aws-sdk 1.30.0 everything works fine.

@jasonivers

jasonivers Jan 6, 2014

I'm running into this issue again with aws-sdk 1.31.0, paperclip 3.5.2. My file encoding is set as binary.

The issue occurs when doing any processing on a file that alters the file size from the original (as mentioned above)... in my case I have some TIFF files coming across with headers/profiles (from Photoshop) that make RMagick barf, so I was stripping them in the validation process (where I have to do various operations with RMagick to ensure image files meet minimum standards).

I have moved this process to the code that is creating the file (reading a remote file, writing a local file), and I no longer get this error... however, it seems to me that processing a file that has already been read should not cause an S3 timeout, since the file pushed to S3 should be the one Paperclip is already referencing, not the original.

@joshcutler

joshcutler Jan 18, 2014

I'm seeing the same thing as jasonivers

@jasonivers

jasonivers Jan 23, 2014

I looked at the code from the most recent referenced commit and fixed this issue (for me) by moving the path case to the bottom of the case statement in lib/aws/s3/data_options.rb (in the AWS-SDK gem). I don't use AWS-SDK other than with Paperclip, so I have no idea what effect that will have on any other uses of the gem, but it did fix my issue (and seemed to make it faster to write, as well).

@inspire22

inspire22 Apr 10, 2014

I'm also seeing the same thing as jasonivers. aws-sdk 1.35 and paperclip 3.5.2

Did you fork the aws-sdk gem, or report the issue to them yet?

@thinkclay

thinkclay May 5, 2014

I'm also getting silent failures when ImageMagick is used to add a default background color to transparent PNG images. It works fine in development on my Mac, but fails silently on Ubuntu in production. I've confirmed that each component works separately and that ImageMagick is working in prod. I'm assuming this is due to an inconsistent Content-Length header, as mentioned above, so I'm going to tweak some settings in production and see if I can confirm that it is entirely related to modifying the background transparency.

Here's my config:

  has_mongoid_attached_file :avatar,
      :path           => ':attachment/:id/:style.:extension',
      :default_url    => '/assets/p2bi/avatar.png',
      :storage        => :s3,
      :bucket         => :p2bi,
      :s3_host_alias  => 'p2bi.s3-website-us-east-1.amazonaws.com',
      :s3_protocol    => 'https',
      :styles => {
        :small    => ['100x100#',   :jpg],
        :medium   => ['250x250',    :jpg],
        :large    => ['500x500>',   :jpg]
      },
      :convert_options => { :all => '-background white -flatten +matte' }
    validates_attachment_content_type :avatar, :content_type => /\Aimage\/.*\Z/

And the only thing I see in my production log from the post:

Parameters: {"utf8"=>"✓", "authenticity_token"=>"qvzQ2tQ9j8+qblf9/uK9SDUmluL6B/n56oQAGVEDTqg=", "lender"=>{"avatar"=>#, @original_filename="brain-1000.png", @content_type="image/png", @headers="Content-Disposition: form-data; name=\"lender[avatar]\"; filename=\"brain-1000.png\"\r\nContent-Type: image/png\r\n">}, "commit"=>"Update", "id"=>"516f6af9673faae398000001"}
I, [2014-05-05T15:15:56.997854 #20467]  INFO -- : [AWS S3 200 0.163364 0 retries] head_object(:bucket_name=>"p2bi",:key=>"avatars/516f6af9673faae398000001/original.jpg")  
I, [2014-05-05T15:15:57.012470 #20467]  INFO -- : [AWS S3 200 0.012739 0 retries] head_object(:bucket_name=>"p2bi",:key=>"avatars/516f6af9673faae398000001/small.jpg")  
I, [2014-05-05T15:15:57.025596 #20467]  INFO -- : [AWS S3 200 0.011673 0 retries] head_object(:bucket_name=>"p2bi",:key=>"avatars/516f6af9673faae398000001/medium.jpg")  
I, [2014-05-05T15:15:57.039060 #20467]  INFO -- : [AWS S3 200 0.012135 0 retries] head_object(:bucket_name=>"p2bi",:key=>"avatars/516f6af9673faae398000001/large.jpg")  

@thinkclay

thinkclay May 5, 2014

Removing :convert_options => { :all => '-background white -flatten +matte' } by itself doesn't solve anything, so I'm apt to believe ImageMagick isn't to blame. Also worth noting: I've tried various file types; all work in development and none change the result in production.

treydempsey added a commit to FoodCare/aws-sdk-ruby that referenced this issue May 20, 2014

@maclover7

maclover7 Mar 9, 2015

Collaborator

Hi everybody! Is this still an issue for you in Paperclip? I know this issue is from approximately 3 years ago. If it is still an issue, can you please provide the code that's causing the error? Thanks!

@brettv

brettv Mar 9, 2015

It is no longer an issue for me, thanks!

@jasonfb

jasonfb Mar 9, 2015

I remember this issue from so long ago. I never quite identified it, but I haven't seen it in a while. (We are on AWS-SDK 1.27.0.)

Some of the people here may be reporting this issue when the file is read by another process before Paperclip processes the styles. In those cases, the fix is to reset the read head of the file using the rewind method on Ruby's File I/O object (http://ruby-doc.org/core-2.0.0/IO.html#method-i-rewind)

But I do think that not all cases of the symptom involve that, only some of them.
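A minimal sketch of that read-head failure mode and the rewind fix (the tempfile and its contents here are invented for illustration):

```ruby
require "tempfile"

upload = Tempfile.new("avatar")
upload.write("image bytes")
upload.rewind

digest_input = upload.read  # an earlier step (e.g. checksumming) consumes the stream
puts upload.read.bytesize   # 0 -- the read head is now at EOF, so a later
                            # consumer would see an empty body
upload.rewind               # reset the read head before handing the file on
puts upload.read            # "image bytes" again
```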

@maclover7

maclover7 Mar 9, 2015

Collaborator

@jferris @jyurek Please close this issue; the problem appears to be solved.

@scratchoo

scratchoo Jul 19, 2015

@maclover7 I actually still have this issue.
I am using

gem 'paperclip', '~> 4.3.0'
gem 'aws-sdk', '< 2.0'

How can I solve it?

@murilosardinha

murilosardinha Aug 4, 2015

I have the same error. It seems to need another downgrade. Any help?

gem 'paperclip', '4.1.1'
gem 'aws-sdk',   '1.38.0'

@scratchoo

scratchoo Aug 5, 2015

@murilosardinha I found that the problem occurs when the difference between the request time and the current time is too large, so verify that your machine's time (especially the date) is correct.

PS: for everyone who uses Amazon S3, be sure to back up your content somewhere other than Amazon (they could close your account at any time without any reason, and if you try to contact them they never respond).
