PHP Warning: Error while sending QUERY packet after 1 job #269

Closed
azhard4int opened this issue Sep 2, 2015 · 4 comments
@azhard4int

I am getting this error and have been looking for a workaround for a while:

PHP Warning: Error while sending QUERY packet.

Even though I close all MySQL connections once a job is done, only the first job completes successfully; every subsequent job fails with the warning above.

@danhunsaker
Contributor

You need to close and reopen the connection when the job starts. Forking reuses everything from the parent process, including the MySQL connection.
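A minimal sketch of that advice, assuming mysqli and a hypothetical Db connection holder (the class and credentials are illustrative, not part of php-resque):

```php
<?php
// Sketch only: Db is a hypothetical singleton; host/credentials are placeholders.
class Db
{
    private static $conn = null;

    public static function reconnect(): mysqli
    {
        // Discard any handle inherited from the parent worker process,
        // then open a fresh connection in the forked child.
        if (self::$conn !== null) {
            @self::$conn->close();
        }
        self::$conn = new mysqli('localhost', 'user', 'pass', 'mydb');
        return self::$conn;
    }
}

class MyJob
{
    public function perform()
    {
        // The fork must not reuse the parent's MySQL socket.
        $mysql = Db::reconnect();
        $mysql->query('SELECT 1');
    }
}
```

The key point is that `reconnect()` runs inside `perform()`, i.e. inside the child process, so the parent's socket is never reused.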

@azhard4int
Author

@danhunsaker, thanks mate.

We were able to resolve the issue.

@jeetkr

jeetkr commented Apr 25, 2017

@danhunsaker I didn't follow, could you please elaborate?
I am facing the same issue. Here is my snippet:
public function perform()
{
    $mysql = Mysql::getInstance();
    EH::setErrorHandler();
    $user_id = $this->args['user_id'];

    // add karma score
    $mysql->query("UPDATE user SET karma = karma + 1 WHERE id={$user_id}");
    // ...
}

I am reopening the connection at the start, but I still get "Error while sending QUERY packet". It isn't a packet-size issue either, since the same query works fine when run in the main PHP process rather than under Resque.

@danhunsaker
Contributor

Reopening the connection needs to be done inside the perform() method, or if you have multiple jobs, you can add a beforePerform hook and reconnect there. Every job runs in a fork - a completely new process with an exact copy of the old one's memory contents (though it's copy-on-write, so it's still really fast) - so every job needs to reconnect immediately before trying to use the DB, every time.
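The hook approach can be sketched like this, using php-resque's `Resque_Event::listen` with a hypothetical `Db::reconnect()` helper (the helper name is an assumption; the event name `beforePerform` is php-resque's):

```php
<?php
// Sketch only: register a beforePerform listener so every job reconnects
// inside the forked child, before its perform() runs.
Resque_Event::listen('beforePerform', function (Resque_Job $job) {
    // The child inherited the parent's MySQL handle via fork();
    // drop it and open a fresh connection for this job.
    Db::reconnect();
});
```

With this in place, individual job classes don't need their own reconnect logic; the listener runs once per job, in the child, immediately before `perform()`.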
