imgflo-server 0.5
====================
Released: N/A

Continued scalability improvements.
Since June 2015, the main deployment has been using [guv](http://www.jonnor.com/2015/11/guv-automatic-scaling/)
for automatic scaling of Heroku workers.

API
---

* New `POST /graph/` endpoint triggers processing without waiting for the response.
Otherwise identical to the synchronous `GET` interface (see the sketch after this list).
* Supports a `noop` graph, which does not process images, only caches them.
Useful for proxying, for instance to avoid HTTP/HTTPS mixed-content warnings.
* Output image size now has a (configurable) limit; HTTP 422 is returned when it is exceeded.
* Failure to download input images now gives an informative HTTP 504 instead of a generic HTTP 500 internal error.
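
A minimal sketch of the two interfaces side by side, using `fetch`. The host, graph name, and URL shape here are hypothetical; consult the deployment for the real request format.

```typescript
// Hypothetical endpoint; the real URL shape depends on the deployment.
const url = "https://imgflo-server.example.net/graph/gradientmap?input=" +
  encodeURIComponent("https://example.com/photo.jpg");

// Synchronous: GET waits for processing and returns the image bytes.
const image = await fetch(url);

// Asynchronous: POST only triggers processing and caching; assuming the
// result is cached under the same URL, it can be retrieved later via GET.
await fetch(url, { method: "POST" });
```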


Scaling improvements
--------------------

* Redis cache frontend for Amazon S3 makes cache checks much faster (since 0.4.10); see the sketch after this list.
* Dedicated worker(s) for urgent (GET) jobs, so asynchronous (POST) jobs cannot block them.
* Pub-sub connection from workers back to web, allows multiple web frontends (since …).
* Processing is separated by runtime type (imgflo, noflo), as they have different performance/scaling characteristics.
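
A minimal sketch of a Redis-fronted cache check, assuming node-redis and the AWS SDK; every name here is illustrative, not imgflo-server's actual code.

```typescript
import { createClient } from "redis";
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const redis = createClient();
await redis.connect(); // node-redis v4 needs an explicit connect
const s3 = new S3Client({});
const BUCKET = "imgflo-cache"; // hypothetical bucket name

async function isCached(cacheKey: string): Promise<boolean> {
  // Fast path: a hit recorded in Redis avoids a round-trip to S3.
  if (await redis.exists(`cache:${cacheKey}`)) return true;
  // Slow path: ask S3 directly via HEAD, then remember the answer.
  try {
    await s3.send(new HeadObjectCommand({ Bucket: BUCKET, Key: cacheKey }));
    await redis.set(`cache:${cacheKey}`, "1");
    return true;
  } catch {
    return false;
  }
}
```

The win comes from a local Redis lookup taking well under a millisecond, while a HEAD request to S3 costs tens of milliseconds per cache check.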

Bugfixes
------

* S3: Content-Type header is now correctly set for processed images.
* Fixed an image corruption issue under high concurrency, caused by different requests overwriting the same files (see the sketch after this list).
* Fixed incorrect parsing of inputs without Content-Type and query parameters.
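
A sketch of the general pattern behind the concurrency fix: give each request its own temporary file, then rename it into place. Names are illustrative; this is not the actual imgflo-server code.

```typescript
import { randomUUID } from "node:crypto";
import { rename, writeFile } from "node:fs/promises";

async function writeResult(finalPath: string, data: Buffer): Promise<void> {
  // A per-request unique name prevents concurrent requests from
  // clobbering each other's partially written output.
  const tmpPath = `${finalPath}.${randomUUID()}.tmp`;
  await writeFile(tmpPath, data);
  // rename() is atomic within a filesystem, so readers never see a
  // half-written file at finalPath.
  await rename(tmpPath, finalPath);
}
```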

Other
------

* New Relic can optionally be used for metrics reporting.
* Since 0.4.16, using the cedar-14 stack on Heroku instead of the old 'cedar' stack.


imgflo-server 0.4.0
====================
