The lazy property `pure_function_fields` was not invalidated upon every model
setup, and could therefore contain stale field instances. As every model setup
re-creates its field instances, the property has to be recomputed.
A cross-registry cache was introduced by e2ea691ce.
The initial idea was praiseworthy but sub-optimal for servers with many
registries: when many registries were loaded, the cache grew huge, and
clearing some of its entries took a lot of CPU time, increasing the chances
of a timeout. Moreover, the cache was not cleaned when a registry was
evicted from the registry LRU (an operation that would itself consume time).
The registry size is now assumed to be around 10MB, and the ormcache size is
proportional to the maximum number of registries. Statistics about ormcache
usage are written to the log when the SIGUSR1 signal is received.
The ormcache is now shared among registries. The cached methods use keys of
the form (DBNAME, MODELNAME, METHOD, args...). This lets registries with a
high load use more of the cache than other registries.
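The shared-cache idea can be sketched as a single LRU whose keys carry the database name, so entries of every registry compete for the same space and one database can be flushed independently. This is only an illustrative sketch, not Odoo's actual ormcache implementation; all names are hypothetical.

```python
from collections import OrderedDict

class SharedOrmCache:
    """Sketch of an LRU cache shared among registries, keyed by
    (dbname, model, method, args). Not the real ormcache code."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.data = OrderedDict()

    def get(self, dbname, model, method, args):
        key = (dbname, model, method, args)
        value = self.data.pop(key)        # raises KeyError on a miss
        self.data[key] = value            # move entry to the MRU position
        return value

    def set(self, dbname, model, method, args, value):
        self.data[(dbname, model, method, args)] = value
        while len(self.data) > self.maxsize:
            self.data.popitem(last=False)  # evict the LRU entry

    def clear_db(self, dbname):
        """Drop all entries of one database, e.g. on registry invalidation."""
        for key in [k for k in self.data if k[0] == dbname]:
            del self.data[key]
```

Because eviction is global, a heavily used database naturally keeps more entries alive than an idle one.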
The model setup sometimes misses entries in _inherit_fields and _all_columns.
This is because those dictionaries are computed from parent models which are
not guaranteed to be completely set up: sometimes a parent field is only
partially set up, and columns are missing (they are generated from fields after
their setup).
To avoid this bug, the setup has been split in three phases:
(1) determine all inherited and custom fields on models;
(2) setup fields, except for recomputation triggers, and generate columns;
(3) add recomputation triggers and complete the setup of the model.
Making these three phases explicit brings good invariants:
- when setting up a field, all models know all their fields;
- when adding recomputation triggers, you know that fields have been set up.
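The three phases can be sketched as three sequential passes over the models. The class and function names below are hypothetical, not Odoo's actual API, and models are assumed to be listed parents-first:

```python
class Model:
    def __init__(self, name, own_fields, parents=()):
        self.name, self.own_fields, self.parents = name, own_fields, parents
        self.fields, self.columns, self.triggers = {}, {}, []

def setup_models(models):
    # Phase 1: determine all fields (inherited + own) on every model.
    for model in models:
        for parent in model.parents:
            model.fields.update(parent.fields)
        model.fields.update(model.own_fields)
    # Phase 2: set up fields and generate columns; by now every model
    # knows all of its fields (first invariant above).
    for model in models:
        model.columns = {name: 'column(%s)' % name for name in model.fields}
    # Phase 3: add recomputation triggers; all fields have been set up
    # (second invariant above).
    for model in models:
        model.triggers = sorted(model.fields)
```

Separating the passes is what guarantees that phase 2 never sees a parent whose fields are still incomplete.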
Loading views for custom models from module data files was not possible because
custom models and fields were introduced into the registry after all modules
were loaded. As a consequence, the view architecture did not pass the checks.
This patch takes a different approach: custom models and fields are loaded
early on in the registry, so that views can be validated. The trick is to take
special care of relational custom fields: we skip them if their comodel does
not appear in the registry. This makes it possible to install and upgrade
modules that create/modify custom models, fields and views for them.
Changing the decimal precision of float fields is a rare
operation, while cache clearing occurs fairly frequently.
Signaling a full registry change when the decimal precision
is changed (instead of a mere cache change) is therefore
a better trade-off, and more semantically correct as well.
This way we avoid the decimal precision refresh for each
invalidation.
Registry invalidation implies cache invalidation.
There was an issue in _setup_fields(): the method invokes _inherits_reload(),
which recomputes inherited fields and invokes itself recursively on child
models. This may be problematic if the child models have already been set
up.
This optimization avoids recursive calls of method _inherits_reload(). In
_setup_fields(), first all parent models are set up, then their fields are
inspected to determine inherited fields, and their setup is done. This scheme
guarantees that inherited fields are computed once per model.
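The parents-first, once-per-model scheme can be sketched as follows (all names are illustrative; this is not the real _setup_fields() code). A memo of already set-up models replaces the recursive reload of children:

```python
# Toy model hierarchy: each model lists its parents and its own fields.
MODELS = {
    'base':   {'parents': [], 'own': {'name'}},
    'child':  {'parents': ['base'], 'own': {'age'}},
    'gchild': {'parents': ['child'], 'own': {'city'}},
}

def setup_model(name, fields_by_model):
    if name in fields_by_model:           # already set up: do nothing
        return fields_by_model[name]
    fields = set(MODELS[name]['own'])
    for parent in MODELS[name]['parents']:
        # Parents are set up first, so their fields are complete here,
        # and inherited fields are computed exactly once per model.
        fields |= setup_model(parent, fields_by_model)
    fields_by_model[name] = fields
    return fields
```

The memo dictionary is what prevents a model from being processed again when it is reached both directly and through a child.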
When working with a large number of databases, the memory allocated to
registries wasn't limited, resulting in wasted memory (especially in the
longpolling worker, which is not recycled).
The size of the LRU now depends on the soft memory limit configured for
workers.
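Combined with the earlier assumption that a registry weighs roughly 10MB, the LRU capacity can be derived from the worker soft limit along these lines (a sketch; both the constant and the function name are illustrative, not the actual configuration code):

```python
# Assumed average registry footprint, per the ~10MB estimate above.
REGISTRY_SIZE = 10 * 1024 * 1024

def registry_lru_size(memory_soft_limit):
    """Number of registries the LRU may hold under the given soft limit."""
    # Keep at least one registry, otherwise no request could be served.
    return max(1, memory_soft_limit // REGISTRY_SIZE)
```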
When a decimal_precision record is created/modified, the float fields of the
models in the registry must be reset. This was done on old-API columns only.
It is now handled by the new-API fields.
A squashed merge is required as the conversion of the apiculture branch from
bzr to git was not correctly done. The git history contains irrelevant blobs
and commits. This branch brings a lot of changes and fixes, too many to list
exhaustively.
- New ORM API: objects are now used instead of ids
- Environments that encapsulate cr, uid and context while maintaining backward compatibility
- The field `compute` attribute is a new object-oriented way to define function fields
- Shared browse record cache
- New onchange protocol
- Optional copy flag on fields
- Documentation update
- Dead code cleanup
- Lots of fixes
- TestCursor subclasses Cursor, and simulates commit and rollback with savepoints
- the registry manages a test mode, in which it only uses the test cursor
- a reentrant lock forces the serialization of parallel requests
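The savepoint trick can be illustrated with a toy cursor (this sketch uses sqlite3 for self-containment; the real TestCursor wraps a PostgreSQL cursor, and the savepoint name is illustrative):

```python
import sqlite3

class TestCursor:
    """Sketch: commit() and rollback() are simulated with savepoints, so
    the enclosing test transaction can discard everything at the end."""

    def __init__(self, cnx):
        self.cnx = cnx
        self.cnx.execute("SAVEPOINT test_cursor")

    def execute(self, query, params=()):
        return self.cnx.execute(query, params)

    def commit(self):
        # Release the savepoint and open a fresh one: the changes persist
        # inside the enclosing transaction, but never beyond it.
        self.cnx.execute("RELEASE SAVEPOINT test_cursor")
        self.cnx.execute("SAVEPOINT test_cursor")

    def rollback(self):
        # Undo everything since the last simulated commit; the savepoint
        # itself stays established and can be reused.
        self.cnx.execute("ROLLBACK TO SAVEPOINT test_cursor")
```

When the outer transaction is rolled back at the end of the test, even the "committed" changes disappear, which is exactly what the test mode needs.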
bzr revid: rco@openerp.com-20140408151736-j0guy68i2wjexy3d
remove deprecated zipfile support
add preload_registry option when server is running
allow registries to be used while still under construction in test mode
add a rollback test case for http tests
add a phantomjs helper
bzr revid: al@openerp.com-20140209004005-p5pwym4sqc23vw5b
For fresh databases, the signaling sequences in the
database stay at 1 until the installation of the
first module. If several workers are hit for this
database before the first module is installed,
the database registry will be loaded with a signaling
sequence of 1. Since 1 was also the default value, any
signal received by workers in this state was ignored,
because they thought the registry had not been
loaded before.
Using None to indicate a not-yet-read sequence value
is more accurate and avoids this error.
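The None sentinel idea can be sketched as follows (names are illustrative, not the actual registry code): the sequence last seen by the worker defaults to None, meaning "never read from the database", so a legitimate database value of 1 can no longer be mistaken for that state.

```python
class Registry:
    def __init__(self):
        self.cache_sequence = None   # None = sequence not checked yet

def check_signaling(registry, db_sequence):
    """Return True when the caches must be invalidated."""
    changed = (registry.cache_sequence is not None
               and registry.cache_sequence != db_sequence)
    registry.cache_sequence = db_sequence
    return changed
```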
bzr revid: dle@openerp.com-20140116151157-3zlyrg48xqn2lkd0
If no registry exists and several threads call RegistryManager.get() at the
same time, several registries will be created one after the other, and only
the last one will be kept in cls.registries (courtesy of Guewen Baconnier,
Camptocamp).
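The fix amounts to guarding creation with a lock, so concurrent callers share a single instance. This is a simplified sketch, not the real RegistryManager; an RLock is used so a nested get() during creation cannot deadlock:

```python
import threading

class RegistryManager:
    """Sketch: registry creation happens under a lock, so concurrent
    callers of get() all obtain the same instance."""
    registries = {}
    lock = threading.RLock()

    @classmethod
    def get(cls, db_name):
        with cls.lock:
            if db_name not in cls.registries:
                # Stand-in for the expensive registry load.
                cls.registries[db_name] = object()
            return cls.registries[db_name]
```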
Invert the behaviour of commit 3685: at that time, creating a new registry triggered the schedule_cron_jobs method, which took another lock and called get() on the registry, causing a deadlock with the cron. This is no longer the case, as we no longer call that method at the end of registry creation, so it does not trigger a lock.
bzr revid: mat@openerp.com-20131114144401-k00podawlem7cjd1
This should have no effect, because when the first `if`
is entered, a new registry is instantiated with
the default value for base_cache_signaling_sequence
set to 1, preventing entry into the next `if` even
without `elif`.
bzr revid: odo@openerp.com-20131011123914-7zuvd9mch21yxgj8
When the sequence value is 1 it means that either:
- the registry was just instantiated, so there is no
reason to reload it immediately; the real checks will
start at the next request
- the db was just created with new sequences set to 1,
so there has been no change to reload
In both cases there is no good reason to reload the
registry, and it is actually a performance killer,
especially for cron workers that keep iterating on
the list of databases.
bzr revid: odo@openerp.com-20131011100313-4bud8e9xq2afp9z7
auto=True reports are looked up in the database for each rendering, so they no
longer have to be in the report registry.
auto=False reports will still register themselves, but at this point
they can be updated to use the new `parser` XML attribute.
bzr revid: vmt@openerp.com-20130322153955-s6nyux2pyez6c01w
The pooljobs and scheduled_cron_jobs stuff was only used to
delay the processing of cron jobs until after the registry
was fully loaded. However this is already the case because
RegistryManager.new() only sets the flag at the end of the
init step.
The flag was named `registry.cron` but simply meant that the
registry was fully loaded and ready, so it is simpler to
rename it to `registry.ready`.
In multiprocess mode this flag is entirely irrelevant,
so there is no need to selectively set it to True or
False. `registry.ready` is simpler.
bzr revid: odo@openerp.com-20121221133751-h4x670vblfr3d09e
The setting and clearing of the tracker were not done
consistently, causing log messages that appeared
to come from one database while actually coming
from another one, or from none at all.
The tracker is now set at the earliest points
of request handling as possible:
- in web, when creating WebRequests (dbname, uid)
- at RPC dispatching in server (uid)
- at cron job acquisition in CronWorker (dbname)
- at Registry acquisition in RegistryManager (dbname)
The tracker is cleared at the very entrance of
the request in the WSGI `application`, ensuring
that no logging is produced with an obsolete
db name. (It cannot be cleared at the end of
the request handling because the werkzeug
wrapper outputs more logging afterwards)
bzr revid: odo@openerp.com-20130301182510-1fqo9o8di0jw95b5
Some important points to consider:
- signaling should be done after any schema alteration (including module [un]installation)
or service registration (e.g. reports)
- the changes need to be committed to the database *before* signaling, otherwise an
obvious race condition occurs during reload by other workers
- any call to restart_pool() must be considered a possible candidate for
signaling, and the 2 above conditions must be checked
The number of explicit calls was reduced by forcing the signaling at the end of
Registry.new() in case `update_module` was passed as True. In that situation
we always want to signal the changes - so all the redundant signaling calls
can be centralized. We can also assume that the relevant changes have already
been committed at that point, otherwise the registry update would not
have worked in the first place.
This means that there is no need for explicit signaling anymore every time
`restart_pool` is called with `update_module=True`.
Some missing cr.commit() and explicit signaling calls were added or
moved to the right place. As a reminder: signaling must be done
*after* committing the changes, and usually *after* reloading the
registry on the current worker.
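The commit-before-signal rule can be made concrete with a small sketch (the helper name is illustrative; `base_registry_signaling` is the signaling sequence referred to above, and the toy cursor only records the order of operations):

```python
class FakeCursor:
    """Toy cursor that records the order of operations."""
    def __init__(self):
        self.log = []
    def commit(self):
        self.log.append('commit')
    def execute(self, query):
        self.log.append('signal')

def signal_registry_change(cr):
    # The commit must happen *before* the signal; otherwise other workers
    # may reload while the schema changes are not yet visible to them.
    cr.commit()
    cr.execute("SELECT nextval('base_registry_signaling')")
```

Reversing the two calls reintroduces exactly the race condition described above.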
bzr revid: odo@openerp.com-20130301143203-e2csf5pkllwhmwqs
The setting and clearing of the tracker were not done
consistently, causing log messages that appeared
to come from one database while actually coming
from another one, or from none at all.
The tracker is now set at the earliest points
of request handling where we can:
- in web client, when creating WebRequests (dbname, uid)
- at RPC dispatching in server (uid)
- at cron job acquisition in CronWorker (dbname)
- at Registry acquisition in RegistryManager (dbname)
The tracker is cleared at the very entrance of
the request in the WSGI `application`, ensuring
that no logging is produced with an obsolete
db name. (It cannot be cleared at the end of
the request handling because the werkzeug
wrapper outputs more logging afterwards)
bzr revid: odo@openerp.com-20130301120744-jfitcmze2jldecod
The case where no constraint existed at all was not working,
and after fixing it, the ORM started to re-create the same
constraints multiple times for some modules. This was for
models that are split within a module (such as res.users in
base, which is made of several small classes): all the
partial models were going through _auto_init instead
of only the final one - doing useless extra work.
Unfortunately establishing the FK happens before the
XML data files are loaded, so a number of FK and
NOT NULL constraints failed to apply due to missing
tables/records. base.sql had to be extended a bit
to cover these cases in a minimalist fashion.
bzr revid: odo@openerp.com-20121217214645-av9n003yzterhujw
Note: if we replace sequence signaling for cache invalidation with PostgreSQL
LISTEN/NOTIFY in the future, we will use the same mechanism for more accurate
cron timing.
bzr revid: al@openerp.com-20121209170447-zs0k3jazokylwvar