
If the callback fails, transient content is wiped #7

Merged
markjaquith merged 1 commit into master from issue-extend-expiration-on-update-failure on Dec 24, 2012

Conversation

markjaquith
Owner

Let's say my callback is:

    public static function my_callback( $query ) {

        // Check the raw response for errors before extracting the body;
        // wp_remote_retrieve_body() never returns a WP_Error itself.
        $response = wp_remote_get( $query, array( 'timeout' => 30 ) );

        if ( is_wp_error( $response ) )
            return false;

        return wp_remote_retrieve_body( $response );
    }

If the call to the remote website fails to get new content, I want to retain the existing content rather than blank it out. Is it possible to set up the callback so that if it returns false, the existing content is kept and the fetch is retried after the cache limit?

Currently the only way around this is to read the transient inside the callback on failure and return the existing transient content.
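A rough sketch of that workaround (here the last good body is mirrored into a separate plain WordPress transient with a made-up key, rather than reading the TLC transient itself from inside its own callback, which isn't shown in this thread):

    public static function my_callback( $query ) {
        // Hypothetical backup key: mirrors the last good body into a plain transient.
        $backup_key = 'my_callback_backup_' . md5( $query );

        $response = wp_remote_get( $query, array( 'timeout' => 30 ) );

        if ( is_wp_error( $response ) ) {
            // Fetch failed: fall back to the last good copy (or false if none exists).
            return get_transient( $backup_key );
        }

        $body = wp_remote_retrieve_body( $response );

        // Keep a long-lived backup copy around for future failures (one day).
        set_transient( $backup_key, $body, 86400 );

        return $body;
    }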

@mintindeed
Contributor

This can be done pretty simply.

In TLC_Transient::fetch_and_cache(), change:

    try {
        $data = call_user_func_array( $this->callback, $this->params );
        $this->set( $data );
    } catch( Exception $e ) {

To:

    try {
        $data = call_user_func_array( $this->callback, $this->params );
        if ( $data ) {
            $this->set( $data );
        }
    } catch( Exception $e ) {

The problem is, you will keep trying the same operation repeatedly, putting additional load on your server and probably banging your head against the wall of theirs. For example, if you've reached Twitter's API rate limit, continually retrying every couple of seconds isn't going to help you.

I think the callback being responsible for returning data from the transient if the call fails is really the best approach here.

@l3rady
Author

l3rady commented Jun 14, 2012

@mintindeed Yes, I see what you're saying:

In the event of no data from the remote server, every page load will ask the remote server for content again, which you don't want.

What I was suggesting is that if no data is returned (boolean false), the existing data is kept in the transient but the transient timeout is reset to the cache length.

For example, say I have a cache of 30 seconds. On the second refresh the callback returns false (let's say we hit Twitter's rate limit). The cached content is kept and the transient is set to retry in another 30 seconds.

So the check for false, I feel, needs to happen in TLC_Transient::set(), not in TLC_Transient::fetch_and_cache().
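Roughly, the idea in code, sketched at the fetch_and_cache() level only because that is the excerpt quoted above (the $this->get() call is an assumption about the class's internals and ignores the locking and soft-expiration details):

    try {
        $data = call_user_func_array( $this->callback, $this->params );
        if ( false !== $data ) {
            $this->set( $data );
        } else {
            // Assumed behaviour: pull whatever was cached before and re-set it,
            // pushing the expiration out another full cache cycle before retrying.
            $previous = $this->get();
            if ( false !== $previous ) {
                $this->set( $previous );
            }
        }
    } catch( Exception $e ) {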

* Set it like ->extend_on_fail( 10 )
* If an update fails, say because a web service is down, throw an exception in your callback
* If an exception is caught and extend_on_fail is set, the existing data (if it exists) will be re-set for extend_on_fail seconds
* This is useful for throttling failed calls to web services

fixes #7
@markjaquith
Owner

My approach was to add an ->extend_on_fail( integer ) method. I thought that you might not want to extend it for another whole TTL cycle. You might just want a mini-extension (say, to wait until the external web service sorts out its issues).

Pretty basic, but should handle most HTTP-fetch failures. This acts as a nice throttle, and solves the issue @mintindeed mentioned about continuously retrying.
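For example, a hedged usage sketch (the tlc_transient() helper and the updates_with()/expires_in()/get() chain are assumed from the library's usual fluent style and aren't shown in this thread; the My_Plugin class, transient key, and feed URL are made up):

    $body = tlc_transient( 'example-feed' )
        ->updates_with( array( 'My_Plugin', 'my_callback' ), array( 'http://example.com/feed' ) )
        ->expires_in( 300 )      // normal TTL: five minutes
        ->extend_on_fail( 30 )   // on failure, keep the old data and wait 30 seconds before retrying
        ->get();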

Thoughts?

@mintindeed
Contributor

I dig it.

markjaquith added a commit that referenced this pull request on Dec 24, 2012:

Introduce extend_on_fail() for better graceful failure.
markjaquith merged commit ba41e29 into master on Dec 24, 2012
markjaquith deleted the issue-extend-expiration-on-update-failure branch on December 24, 2012 at 20:12