/*
 * Asterisk -- An open source telephony toolkit.
 *
 * Copyright (C) 1999 - 2010, Digium, Inc.
 *
 * Mark Spencer <markster@digium.com>
 * Russell Bryant <russell@digium.com>
 *
 * See http://www.asterisk.org for more information about
 * the Asterisk project. Please do not directly contact
 * any of the maintainers of this project for assistance;
 * the project provides a web site, mailing lists and IRC
 * channels for your use.
 *
 * This program is free software, distributed under the terms of
 * the GNU General Public License Version 2. See the LICENSE file
 * at the top of the source tree.
 */

/*! \file
 *
 * \brief Scheduler Routines (from cheops-NG)
 *
 * \author Mark Spencer <markster@digium.com>
 */

/*** MODULEINFO
	<support_level>core</support_level>
 ***/

#include "asterisk.h"

ASTERISK_REGISTER_FILE()

#ifdef DEBUG_SCHEDULER
#define DEBUG(a) do { \
	if (option_debug) \
		DEBUG_M(a) \
	} while (0)
#else
#define DEBUG(a)
#endif

#include <sys/time.h>

#include "asterisk/sched.h"
#include "asterisk/channel.h"
#include "asterisk/lock.h"
#include "asterisk/utils.h"
#include "asterisk/heap.h"
#include "asterisk/threadstorage.h"

/*!
 * \brief Max num of schedule structs
 *
 * \note The max number of schedule structs to keep around
 * for use. Undefine to disable schedule structure
 * caching. (Only disable this on very low memory
 * machines)
 */
#define SCHED_MAX_CACHE 128

AST_THREADSTORAGE(last_del_id);

/*!
 * \brief Scheduler ID holder
 *
 * These form a queue on a scheduler context. When a new
 * scheduled item is created, a sched_id is popped off the
 * queue and its id is assigned to the new scheduled item.
 * When the scheduled task is complete, the sched_id on that
 * task is then pushed to the back of the queue to be re-used
 * on some future scheduled item.
 */
struct sched_id {
	/*! Immutable ID number that is copied onto the scheduled task */
	int id;
	AST_LIST_ENTRY(sched_id) list;
};

struct sched {
	AST_LIST_ENTRY(sched) list;
	/*! The ID that has been popped off the scheduler context's queue */
	struct sched_id *sched_id;
	struct timeval when;	/*!< Absolute time event should take place */
	int resched;		/*!< When to reschedule */
	int variable;		/*!< Use return value from callback to reschedule */
	const void *data;	/*!< Data */
	ast_sched_cb callback;	/*!< Callback */
	ssize_t __heap_index;
	/*!
	 * Used to synchronize between thread running a task and thread
	 * attempting to delete a task
	 */
	ast_cond_t cond;
	/*! Indication that a running task was deleted. */
	unsigned int deleted:1;
};

struct sched_thread {
	pthread_t thread;
	ast_cond_t cond;
	unsigned int stop:1;
};

struct ast_sched_context {
	ast_mutex_t lock;
	unsigned int eventcnt;	/*!< Number of events processed */
	unsigned int highwater;	/*!< highest count so far */
	struct ast_heap *sched_heap;
	struct sched_thread *sched_thread;
	/*! The scheduled task that is currently executing */
	struct sched *currently_executing;

#ifdef SCHED_MAX_CACHE
	AST_LIST_HEAD_NOLOCK(, sched) schedc;	/*!< Cache of unused schedule structures and how many */
	unsigned int schedccnt;
#endif

	/*! Queue of scheduler task IDs to assign */
	AST_LIST_HEAD_NOLOCK(, sched_id) id_queue;
	/*! The number of IDs in the id_queue */
	int id_queue_size;
};
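
/*
 * A minimal usage sketch of this API (added commentary, not from the
 * original file; "my_task" is a hypothetical callback and error
 * handling is omitted):
 *
 * \code
 * static int my_task(const void *data)
 * {
 *	return 0;
 * }
 *
 * struct ast_sched_context *con = ast_sched_context_create();
 * ast_sched_start_thread(con);
 * int id = ast_sched_add(con, 1000, my_task, NULL);
 * ...
 * AST_SCHED_DEL(con, id);
 * ast_sched_context_destroy(con);
 * \endcode
 *
 * With ast_sched_add(), a callback that returns 0 is not rescheduled,
 * while a non-zero return reschedules it at the original interval (see
 * ast_sched_runq()).
 */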

static void *sched_run(void *data)
{
	struct ast_sched_context *con = data;

	while (!con->sched_thread->stop) {
		int ms;
		struct timespec ts = {
			.tv_sec = 0,
		};

		ast_mutex_lock(&con->lock);

		if (con->sched_thread->stop) {
			ast_mutex_unlock(&con->lock);
			return NULL;
		}

		ms = ast_sched_wait(con);

		if (ms == -1) {
			ast_cond_wait(&con->sched_thread->cond, &con->lock);
		} else {
			struct timeval tv;
			tv = ast_tvadd(ast_tvnow(), ast_samp2tv(ms, 1000));
			ts.tv_sec = tv.tv_sec;
			ts.tv_nsec = tv.tv_usec * 1000;
			ast_cond_timedwait(&con->sched_thread->cond, &con->lock, &ts);
		}

		ast_mutex_unlock(&con->lock);

		if (con->sched_thread->stop) {
			return NULL;
		}

		ast_sched_runq(con);
	}

	return NULL;
}

static void sched_thread_destroy(struct ast_sched_context *con)
{
	if (!con->sched_thread) {
		return;
	}

	if (con->sched_thread->thread != AST_PTHREADT_NULL) {
		ast_mutex_lock(&con->lock);
		con->sched_thread->stop = 1;
		ast_cond_signal(&con->sched_thread->cond);
		ast_mutex_unlock(&con->lock);
		pthread_join(con->sched_thread->thread, NULL);
		con->sched_thread->thread = AST_PTHREADT_NULL;
	}

	ast_cond_destroy(&con->sched_thread->cond);

	ast_free(con->sched_thread);

	con->sched_thread = NULL;
}

int ast_sched_start_thread(struct ast_sched_context *con)
{
	struct sched_thread *st;

	if (con->sched_thread) {
		ast_log(LOG_ERROR, "Thread already started on this scheduler context\n");
		return -1;
	}

	if (!(st = ast_calloc(1, sizeof(*st)))) {
		return -1;
	}

	ast_cond_init(&st->cond, NULL);

	st->thread = AST_PTHREADT_NULL;

	con->sched_thread = st;

	if (ast_pthread_create_background(&st->thread, NULL, sched_run, con)) {
		ast_log(LOG_ERROR, "Failed to create scheduler thread\n");
		sched_thread_destroy(con);
		return -1;
	}

	return 0;
}
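
/*
 * Commentary (not in the original): ast_heap keeps the element that
 * compares greatest on top, so reversing the operands below orders the
 * heap with the earliest 'when' as the entry returned by
 * ast_heap_peek(..., 1) and ast_heap_pop().
 */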

static int sched_time_cmp(void *a, void *b)
{
	return ast_tvcmp(((struct sched *) b)->when, ((struct sched *) a)->when);
}

struct ast_sched_context *ast_sched_context_create(void)
{
	struct ast_sched_context *tmp;

	if (!(tmp = ast_calloc(1, sizeof(*tmp)))) {
		return NULL;
	}

	ast_mutex_init(&tmp->lock);
	tmp->eventcnt = 1;

	AST_LIST_HEAD_INIT_NOLOCK(&tmp->id_queue);

	if (!(tmp->sched_heap = ast_heap_create(8, sched_time_cmp,
			offsetof(struct sched, __heap_index)))) {
		ast_sched_context_destroy(tmp);
		return NULL;
	}

	return tmp;
}

static void sched_free(struct sched *task)
{
	/* task->sched_id will be NULL most of the time, but when the
	 * scheduler context shuts down, it will free all scheduled
	 * tasks, and in that case, the task->sched_id will be non-NULL
	 */
	ast_free(task->sched_id);
	ast_cond_destroy(&task->cond);
	ast_free(task);
}

void ast_sched_context_destroy(struct ast_sched_context *con)
{
	struct sched *s;
	struct sched_id *sid;

	sched_thread_destroy(con);
	con->sched_thread = NULL;

	ast_mutex_lock(&con->lock);

#ifdef SCHED_MAX_CACHE
	while ((s = AST_LIST_REMOVE_HEAD(&con->schedc, list))) {
		sched_free(s);
	}
#endif

	if (con->sched_heap) {
		while ((s = ast_heap_pop(con->sched_heap))) {
			sched_free(s);
		}
		ast_heap_destroy(con->sched_heap);
		con->sched_heap = NULL;
	}

	while ((sid = AST_LIST_REMOVE_HEAD(&con->id_queue, list))) {
		ast_free(sid);
	}

	ast_mutex_unlock(&con->lock);
	ast_mutex_destroy(&con->lock);

	ast_free(con);
}

#define ID_QUEUE_INCREMENT 16

/*!
 * \brief Add new scheduler IDs to the queue.
 *
 * \retval The number of IDs added to the queue
 */
static int add_ids(struct ast_sched_context *con)
{
	int new_size;
	int original_size;
	int i;

	original_size = con->id_queue_size;
	/* So we don't go overboard with the mallocs here, we'll just up
	 * the size of the list by a fixed amount each time instead of
	 * multiplying the size by any particular factor
	 */
	new_size = original_size + ID_QUEUE_INCREMENT;
	if (new_size < 0) {
		/* Overflow. Cap it at INT_MAX. */
		new_size = INT_MAX;
	}
	for (i = original_size; i < new_size; ++i) {
		struct sched_id *new_id;

		new_id = ast_calloc(1, sizeof(*new_id));
		if (!new_id) {
			break;
		}
		new_id->id = i;
		AST_LIST_INSERT_TAIL(&con->id_queue, new_id, list);
		++con->id_queue_size;
	}

	return con->id_queue_size - original_size;
}

static int set_sched_id(struct ast_sched_context *con, struct sched *new_sched)
{
	if (AST_LIST_EMPTY(&con->id_queue) && (add_ids(con) == 0)) {
		return -1;
	}

	new_sched->sched_id = AST_LIST_REMOVE_HEAD(&con->id_queue, list);
	return 0;
}
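
/*
 * Commentary (not in the original): because released IDs go to the back
 * of the queue in sched_release(), a context that never has more than N
 * tasks outstanding at once never hands out an ID of N or higher.  For
 * example, repeatedly adding and deleting a single task cycles through
 * IDs 0..15 from the initial ID_QUEUE_INCREMENT batch instead of
 * counting upward forever, which is what prevents the INT_MAX overflow
 * the queue was introduced to fix.
 */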

static void sched_release(struct ast_sched_context *con, struct sched *tmp)
{
	if (tmp->sched_id) {
		AST_LIST_INSERT_TAIL(&con->id_queue, tmp->sched_id, list);
		tmp->sched_id = NULL;
	}

	/*
	 * Add to the cache, or just free() if we
	 * already have too many cache entries
	 */
#ifdef SCHED_MAX_CACHE
	if (con->schedccnt < SCHED_MAX_CACHE) {
		AST_LIST_INSERT_HEAD(&con->schedc, tmp, list);
		con->schedccnt++;
	} else
#endif
		sched_free(tmp);
}

static struct sched *sched_alloc(struct ast_sched_context *con)
{
	struct sched *tmp;

	/*
	 * We keep a small cache of schedule entries
	 * to minimize the number of necessary malloc()'s
	 */
#ifdef SCHED_MAX_CACHE
	if ((tmp = AST_LIST_REMOVE_HEAD(&con->schedc, list))) {
		con->schedccnt--;
	} else
#endif
	{
		tmp = ast_calloc(1, sizeof(*tmp));
		if (!tmp) {
			return NULL;
		}
		ast_cond_init(&tmp->cond, NULL);
	}

	if (set_sched_id(con, tmp)) {
		sched_release(con, tmp);
		return NULL;
	}

	return tmp;
}

void ast_sched_clean_by_callback(struct ast_sched_context *con, ast_sched_cb match, ast_sched_cb cleanup_cb)
{
	int i = 1;
	struct sched *current;

	ast_mutex_lock(&con->lock);
	while ((current = ast_heap_peek(con->sched_heap, i))) {
		if (current->callback != match) {
			i++;
			continue;
		}

		ast_heap_remove(con->sched_heap, current);

		cleanup_cb(current->data);
		sched_release(con, current);
	}
	ast_mutex_unlock(&con->lock);
}

/*! \brief
 * Return the number of milliseconds
 * until the next scheduled event
 */
int ast_sched_wait(struct ast_sched_context *con)
{
	int ms;
	struct sched *s;

	DEBUG(ast_debug(1, "ast_sched_wait()\n"));

	ast_mutex_lock(&con->lock);
	if ((s = ast_heap_peek(con->sched_heap, 1))) {
		ms = ast_tvdiff_ms(s->when, ast_tvnow());
		if (ms < 0) {
			ms = 0;
		}
	} else {
		ms = -1;
	}
	ast_mutex_unlock(&con->lock);

	return ms;
}
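
/*
 * Commentary (not in the original): callers that do not use
 * ast_sched_start_thread() typically drive the scheduler from their own
 * I/O loop, along these lines (a sketch only; the -1 returned for an
 * empty scheduler conveniently means "block indefinitely" to the
 * underlying poll()):
 *
 * \code
 * struct io_context *ioc = io_context_create();
 * struct ast_sched_context *con = ast_sched_context_create();
 *
 * for (;;) {
 *	int ms = ast_sched_wait(con);
 *
 *	ast_io_wait(ioc, ms);
 *	ast_sched_runq(con);
 * }
 * \endcode
 */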

/*! \brief
 * Take a sched structure and put it in the
 * queue, such that the soonest event is
 * first in the list.
 */
static void schedule(struct ast_sched_context *con, struct sched *s)
{
	ast_heap_push(con->sched_heap, s);

	if (ast_heap_size(con->sched_heap) > con->highwater) {
		con->highwater = ast_heap_size(con->sched_heap);
	}
}

/*! \brief
 * Given the last event *t and the offset in milliseconds 'when',
 * compute the next value.
 */
static int sched_settime(struct timeval *t, int when)
{
	struct timeval now = ast_tvnow();

	if (when < 0) {
		/*
		 * A negative when value is likely a bug as it
		 * represents a VERY large timeout time.
		 */
		ast_log(LOG_WARNING,
			"Bug likely: Negative time interval %d (interpreted as %u ms) requested!\n",
			when, (unsigned int) when);
		ast_assert(0);
	}

	/*ast_debug(1, "TV -> %lu,%lu\n", t->tv_sec, t->tv_usec);*/
	if (ast_tvzero(*t)) {	/* not supplied, default to now */
		*t = now;
	}
	*t = ast_tvadd(*t, ast_samp2tv(when, 1000));
	if (ast_tvcmp(*t, now) < 0) {
		*t = now;
	}
	return 0;
}
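
/*
 * Commentary (not in the original): for a repeating task, 'when' is
 * added to the task's previous firing time rather than to "now", so a
 * 1000 ms task whose last firing was scheduled at t=10.000s is next
 * scheduled for t=11.000s even if the callback ran late; only if that
 * target is already in the past is it clamped to now.
 */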

int ast_sched_replace_variable(int old_id, struct ast_sched_context *con, int when, ast_sched_cb callback, const void *data, int variable)
{
	/* 0 means the schedule item is new; do not delete */
	if (old_id > 0) {
		AST_SCHED_DEL(con, old_id);
	}
	return ast_sched_add_variable(con, when, callback, data, variable);
}

/*! \brief
 * Schedule callback(data) to happen when ms into the future
 */
int ast_sched_add_variable(struct ast_sched_context *con, int when, ast_sched_cb callback, const void *data, int variable)
{
	struct sched *tmp;
	int res = -1;

	DEBUG(ast_debug(1, "ast_sched_add()\n"));

	ast_mutex_lock(&con->lock);
	if ((tmp = sched_alloc(con))) {
		con->eventcnt++;
		tmp->callback = callback;
		tmp->data = data;
		tmp->resched = when;
		tmp->variable = variable;
		tmp->when = ast_tv(0, 0);
		tmp->deleted = 0;
		if (sched_settime(&tmp->when, when)) {
			sched_release(con, tmp);
		} else {
			schedule(con, tmp);
			res = tmp->sched_id->id;
		}
	}
#ifdef DUMP_SCHEDULER
	/* Dump contents of the context while we have the lock so nothing gets screwed up by accident. */
	if (option_debug)
		ast_sched_dump(con);
#endif
	if (con->sched_thread) {
		ast_cond_signal(&con->sched_thread->cond);
	}
	ast_mutex_unlock(&con->lock);

	return res;
}
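
/*
 * Commentary (not in the original): with a non-zero 'variable', the
 * callback's return value becomes the next interval in milliseconds,
 * which lets a task implement its own backoff.  A hypothetical sketch:
 *
 * \code
 * static int backoff_task(const void *data)
 * {
 *	static int interval = 100;
 *
 *	interval *= 2;
 *	return interval < 8000 ? interval : 0;
 * }
 *
 * ast_sched_add_variable(con, 100, backoff_task, NULL, 1);
 * \endcode
 *
 * The task fires after 100 ms, then 200 ms later, then 400 ms later,
 * and so on, removing itself once the interval would reach 8 seconds.
 */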

int ast_sched_replace(int old_id, struct ast_sched_context *con, int when, ast_sched_cb callback, const void *data)
{
	if (old_id > -1) {
		AST_SCHED_DEL(con, old_id);
	}
	return ast_sched_add(con, when, callback, data);
}

int ast_sched_add(struct ast_sched_context *con, int when, ast_sched_cb callback, const void *data)
{
	return ast_sched_add_variable(con, when, callback, data, 0);
}

static struct sched *sched_find(struct ast_sched_context *con, int id)
{
	int x;
	size_t heap_size;

	heap_size = ast_heap_size(con->sched_heap);
	for (x = 1; x <= heap_size; x++) {
		struct sched *cur = ast_heap_peek(con->sched_heap, x);

		if (cur->sched_id->id == id) {
			return cur;
		}
	}

	return NULL;
}

const void *ast_sched_find_data(struct ast_sched_context *con, int id)
{
	struct sched *s;
	const void *data = NULL;

	ast_mutex_lock(&con->lock);

	s = sched_find(con, id);
	if (s) {
		data = s->data;
	}

	ast_mutex_unlock(&con->lock);

	return data;
}

/*! \brief
 * Delete the schedule entry with number
 * "id".  It's nearly impossible that there
 * would be two or more in the list with that
 * id.
 */
#ifndef AST_DEVMODE
int ast_sched_del(struct ast_sched_context *con, int id)
#else
int _ast_sched_del(struct ast_sched_context *con, int id, const char *file, int line, const char *function)
#endif
{
	struct sched *s = NULL;
	int *last_id = ast_threadstorage_get(&last_del_id, sizeof(int));

	DEBUG(ast_debug(1, "ast_sched_del(%d)\n", id));

	if (id < 0) {
		return 0;
	}

	ast_mutex_lock(&con->lock);

	s = sched_find(con, id);
	if (s) {
		if (!ast_heap_remove(con->sched_heap, s)) {
			ast_log(LOG_WARNING, "sched entry %d not in the sched heap?\n", s->sched_id->id);
		}
		sched_release(con, s);
	} else if (con->currently_executing && (id == con->currently_executing->sched_id->id)) {
		s = con->currently_executing;
		s->deleted = 1;
		/* Wait for the executing task to complete so that the caller of
		 * ast_sched_del() does not free memory out from under the task.
		 */
		while (con->currently_executing && (id == con->currently_executing->sched_id->id)) {
			ast_cond_wait(&s->cond, &con->lock);
		}
		/* Do not sched_release() here because ast_sched_runq() will do it */
	}

#ifdef DUMP_SCHEDULER
	/* Dump contents of the context while we have the lock so nothing gets screwed up by accident. */
	if (option_debug)
		ast_sched_dump(con);
#endif
	if (con->sched_thread) {
		ast_cond_signal(&con->sched_thread->cond);
	}
	ast_mutex_unlock(&con->lock);

	if (!s && *last_id != id) {
		ast_debug(1, "Attempted to delete nonexistent schedule entry %d!\n", id);
		/* Removing a nonexistent schedule entry shouldn't trigger an assert (it was enabled
		 * in DEV_MODE), because in many places entries are deleted without having a valid id. */
		*last_id = id;
		return -1;
	} else if (!s) {
		return -1;
	}

	return 0;
}

void ast_sched_report(struct ast_sched_context *con, struct ast_str **buf, struct ast_cb_names *cbnames)
{
	int i, x;
	struct sched *cur;
	int countlist[cbnames->numassocs + 1];
	size_t heap_size;

	memset(countlist, 0, sizeof(countlist));
	ast_str_set(buf, 0, " Highwater = %u\n schedcnt = %zu\n", con->highwater, ast_heap_size(con->sched_heap));

	ast_mutex_lock(&con->lock);

	heap_size = ast_heap_size(con->sched_heap);
	for (x = 1; x <= heap_size; x++) {
		cur = ast_heap_peek(con->sched_heap, x);
		/* match the callback to the cblist */
		for (i = 0; i < cbnames->numassocs; i++) {
			if (cur->callback == cbnames->cblist[i]) {
				break;
			}
		}
		if (i < cbnames->numassocs) {
			countlist[i]++;
		} else {
			countlist[cbnames->numassocs]++;
		}
	}

	ast_mutex_unlock(&con->lock);

	for (i = 0; i < cbnames->numassocs; i++) {
		ast_str_append(buf, 0, " %s : %d\n", cbnames->list[i], countlist[i]);
	}

	ast_str_append(buf, 0, " <unknown> : %d\n", countlist[cbnames->numassocs]);
}

/*! \brief Dump the contents of the scheduler to LOG_DEBUG */
void ast_sched_dump(struct ast_sched_context *con)
{
	struct sched *q;
	struct timeval when = ast_tvnow();
	int x;
	size_t heap_size;
#ifdef SCHED_MAX_CACHE
	ast_debug(1, "Asterisk Schedule Dump (%zu in Q, %u Total, %u Cache, %u high-water)\n", ast_heap_size(con->sched_heap), con->eventcnt - 1, con->schedccnt, con->highwater);
#else
	ast_debug(1, "Asterisk Schedule Dump (%zu in Q, %u Total, %u high-water)\n", ast_heap_size(con->sched_heap), con->eventcnt - 1, con->highwater);
#endif

	ast_debug(1, "=============================================================\n");
	ast_debug(1, "|ID    Callback          Data              Time  (sec:ms)   |\n");
	ast_debug(1, "+-----+-----------------+-----------------+-----------------+\n");
	ast_mutex_lock(&con->lock);
	heap_size = ast_heap_size(con->sched_heap);
	for (x = 1; x <= heap_size; x++) {
		struct timeval delta;
		q = ast_heap_peek(con->sched_heap, x);
		delta = ast_tvsub(q->when, when);
		ast_debug(1, "|%.4d | %-15p | %-15p | %.6ld : %.6ld |\n",
			q->sched_id->id,
			q->callback,
			q->data,
			(long)delta.tv_sec,
			(long int)delta.tv_usec);
	}
	ast_mutex_unlock(&con->lock);
	ast_debug(1, "=============================================================\n");
}

/*! \brief
 * Launch all events which need to be run at this time.
 */
int ast_sched_runq(struct ast_sched_context *con)
{
	struct sched *current;
	struct timeval when;
	int numevents;
	int res;

	DEBUG(ast_debug(1, "ast_sched_runq()\n"));

	ast_mutex_lock(&con->lock);

	when = ast_tvadd(ast_tvnow(), ast_tv(0, 1000));
	for (numevents = 0; (current = ast_heap_peek(con->sched_heap, 1)); numevents++) {
		/* schedule all events which are going to expire within 1ms.
		 * We only care about millisecond accuracy anyway, so this will
		 * help us get more than one event at one time if they are very
		 * close together.
		 */
		if (ast_tvcmp(current->when, when) != -1) {
			break;
		}

		current = ast_heap_pop(con->sched_heap);

		/*
		 * At this point, the schedule queue is still intact.  We
		 * have removed the first event and the rest is still there,
		 * so it's permissible for the callback to add new events, but
		 * trying to delete itself won't work because it isn't in
		 * the schedule queue.  If that's what it wants to do, it
		 * should return 0.
		 */

		con->currently_executing = current;
		ast_mutex_unlock(&con->lock);
		res = current->callback(current->data);
		ast_mutex_lock(&con->lock);
		con->currently_executing = NULL;
		ast_cond_signal(&current->cond);

		if (res && !current->deleted) {
			/*
			 * If they return non-zero, we should schedule them to be
			 * run again.
			 */
			if (sched_settime(&current->when, current->variable ? res : current->resched)) {
				sched_release(con, current);
			} else {
				schedule(con, current);
			}
		} else {
			/* No longer needed, so release it */
			sched_release(con, current);
		}
	}

	ast_mutex_unlock(&con->lock);

	return numevents;
}

long ast_sched_when(struct ast_sched_context *con, int id)
{
	struct sched *s;
	long secs = -1;
	DEBUG(ast_debug(1, "ast_sched_when()\n"));

	ast_mutex_lock(&con->lock);

	s = sched_find(con, id);
	if (s) {
		struct timeval now = ast_tvnow();
		secs = s->when.tv_sec - now.tv_sec;
	}

	ast_mutex_unlock(&con->lock);

	return secs;
}