[MERGE] cron race condition leading to unneeded executions, courtesy of acsone
When multiple cron workers get their list of jobs to process at the same time, some jobs might be executed multiple times. We fix this by keeping the listing filter when taking the lock.

bzr revid: al@openerp.com-20140228161524-y8nyq5uw9yq9rcc3
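The pattern the fix relies on can be sketched in plain Python: every worker first lists the due jobs, then re-checks the very same listing conditions while holding a lock before claiming a job. The in-memory `jobs` table, the `worker` function, and the worker count are hypothetical stand-ins for illustration, not Odoo's actual scheduler code.

```python
import threading

# Hypothetical in-memory stand-in for the ir_cron table: each row tracks
# how many calls remain (numbercall) and whether the job is active.
jobs = {1: {"numbercall": 1, "active": True}}
table_lock = threading.Lock()  # plays the role of the row lock
executions = []

def worker():
    # Step 1: every worker lists the due jobs before any of them runs.
    due = [job_id for job_id, row in jobs.items()
           if row["active"] and row["numbercall"] != 0]
    for job_id in due:
        # Step 2: re-check the listing conditions while holding the lock,
        # mirroring the repeated WHERE clause in the commit's SQL.
        with table_lock:
            row = jobs[job_id]
            if not (row["active"] and row["numbercall"] != 0):
                continue  # another worker already ran it: skip
            row["numbercall"] -= 1
            executions.append(job_id)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(executions))  # prints 1: the job runs once despite 8 workers
```

Without step 2 (i.e. locking on `id` alone, as the removed `WHERE id=%s` did), each of the eight workers would claim and run the job, which is exactly the duplicate-execution race the commit describes.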
commit ed7ee4df52
```diff
@@ -225,12 +225,21 @@ class ir_cron(osv.osv):
                 lock_cr = db.cursor()
                 try:
                     # Try to grab an exclusive lock on the job row from within the task transaction
+                    # Restrict to the same conditions as for the search since the job may have already
+                    # been run by an other thread when cron is running in multi thread
                     lock_cr.execute("""SELECT *
                                        FROM ir_cron
-                                       WHERE id=%s
+                                       WHERE numbercall != 0
+                                       AND active
+                                       AND nextcall <= (now() at time zone 'UTC')
+                                       AND id=%s
                                        FOR UPDATE NOWAIT""",
                                    (job['id'],), log_exceptions=False)

+                    locked_job = lock_cr.fetchone()
+                    if not locked_job:
+                        # job was already executed by another parallel process/thread, skipping it.
+                        continue
                     # Got the lock on the job row, run its code
                     _logger.debug('Starting job `%s`.', job['name'])
                     job_cr = db.cursor()
```
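The `FOR UPDATE NOWAIT` in the patched query fails immediately, rather than blocking, when another worker already holds the row lock. That fail-fast behaviour can be simulated with SQLite's zero lock timeout; this is a minimal sketch of the semantics, not Postgres or Odoo code, and the table here is a toy stand-in.

```python
import os
import sqlite3
import tempfile

# Sketch of NOWAIT-style locking using SQLite: a zero timeout makes the
# lock attempt fail fast instead of blocking, so a worker can skip the job.
path = os.path.join(tempfile.mkdtemp(), "jobs.db")
holder = sqlite3.connect(path, isolation_level=None)  # autocommit mode
holder.execute("CREATE TABLE ir_cron (id INTEGER, active INTEGER)")
holder.execute("INSERT INTO ir_cron VALUES (1, 1)")

# Worker A takes a write lock (coarser than Postgres's per-row lock).
holder.execute("BEGIN IMMEDIATE")

# Worker B tries to grab the lock without waiting, as NOWAIT would.
contender = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    contender.execute("BEGIN IMMEDIATE")
    got_lock = True
except sqlite3.OperationalError:  # "database is locked"
    got_lock = False  # another worker holds the lock: skip this job

print(got_lock)  # prints False
```

In the real code path, the failed lock surfaces as a database error on `lock_cr.execute(...)` (hence `log_exceptions=False`), and the worker moves on to the next job instead of waiting for the row to be released.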