In commit 04ba0e99, we introduced an optimization for reading inherited fields
in a single query. There is an issue when you have more than one level of
`_inherits`. The query looks like:
    SELECT ...
    FROM table0, table1 AS alias1, table2 AS alias2
    WHERE table0.link0 = alias1.id AND table1.link1 = alias2.id AND ...
                                       ^^^^^^
                                       (should be alias1)
This fixes the issue, and adds a test to reproduce it. The fix is based on
@emiprotechnologies's own proposal, but is cleaner and does not break APIs.
The `set` method of the one2many class returns a list of the fields that
require recomputation, the computation of function fields being delayed
for performance reasons.
However, when a `set` method was called from within another `set` method
(nested `set` calls), the fields to recompute returned by the inner call
were never recomputed: the resulting list was simply lost.
e.g.:
```
create/write
└── set
    └── create/write  (recs.env.recompute set to False)
        └── set
            └── create  (recs.env.recompute set to False)
```
To overcome this problem, the list of old-API-style compute fields to
recompute is stored in the environment, and this list is cleared each
time store_set_value has finished recomputing all old-API-style compute
fields.
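A minimal, hypothetical sketch of the mechanism (class and function names are illustrative, not the actual Odoo API):

```python
# Hypothetical sketch: instead of returning fields-to-recompute from nested
# set() calls (where the inner result list was lost), accumulate them on a
# shared environment object, and clear the list once recomputation is done.
class Environment:
    def __init__(self):
        self.todo = []          # old-api function fields pending recomputation

    def add_todo(self, fields):
        self.todo.extend(fields)

def outer_set(env):
    env.add_todo(['total'])
    inner_set(env)              # nested call: its fields are no longer lost

def inner_set(env):
    env.add_todo(['margin'])

def store_set_value(env):
    recomputed = list(env.todo)
    env.todo.clear()            # cleared after recomputing everything
    return recomputed

env = Environment()
outer_set(env)
recomputed = store_set_value(env)
```

Because the accumulator lives on the environment rather than in return values, nesting depth no longer matters.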
opw-629650
opw-632624
closes #6053
Consider a new field that uses the same compute method as another existing
field. When the field is introduced in database, its value must be computed on
existing records. In such a case, the existing field should not be written, as
its value is not supposed to have changed. Not writing on the existing field
can avoid useless recomputations in cascade, which is the reason we introduce
this patch.
Accessing `field.digits` can crash if no environment is available at that
point. This happens in function `get_pg_type()`, which is called from method
`_auto_init()`. An environment is simply created in the method's scope to be
available for `field.digits`.
As done in write and already in the next version (see 0fd773a), accessing a
deleted record (through read or an access-rights check) should always raise a
MissingError instead of the generic except_orm.
This allows code that ignores deleted records (e.g. the 'recompute' method)
to safely process the remaining records.
Fixes#6105
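The behavior this enables can be sketched as follows (MissingError here is a stand-in class, not the ORM's own; record storage is simplified to a dict):

```python
# Illustrative sketch: a specific MissingError lets callers skip deleted
# records instead of aborting on a generic ORM exception.
class MissingError(Exception):
    pass

def read_record(records, rec_id):
    if rec_id not in records:
        raise MissingError("record %s does not exist" % rec_id)
    return records[rec_id]

def recompute(records, ids):
    # Code like 'recompute' catches the specific error and safely
    # processes the remaining records.
    values = []
    for rec_id in ids:
        try:
            values.append(read_record(records, rec_id))
        except MissingError:
            continue
    return values

records = {1: 'a', 3: 'c'}          # record 2 was deleted
result = recompute(records, [1, 2, 3])
```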
A cross-registry cache was introduced by e2ea691ce.
The initial idea was praiseworthy but sub-optimal for servers with a lot of
registries.
When many registries were loaded, the cache size was huge, and clearing some
entries took a lot of CPU time, increasing the chances of a timeout.
Also, the cache was not cleaned when a registry was removed from the registry
LRU (an operation that would also consume time).
The following case has shown the issue: extend the model `res.company` by
adding at least two fields F and G, where F has a default value defined as:
lambda self: self.env.user.company_id.name
If the column F is created before G in the database, the existing records will
be filled with the default value of F. When the default value is computed, the
field `name` from a `res.company` is read, and other fields are prefetched,
including G. This operation fails, because G does not exist in database yet!
The optimization consists in using tuples for attributes `inverse_fields`,
`computed_fields` and `_triggers`, and letting them share their value when it
is empty, which is common. This saves around 1.8 MB per registry.
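A sketch of the idea (class and attribute defaults are illustrative): since tuples are immutable, every empty attribute can safely point at the same object, costing one object instead of one per field.

```python
# Illustrative: share one empty tuple among all empty attribute values.
EMPTY = ()

class Field:
    def __init__(self, inverse_fields=(), computed_fields=(), triggers=()):
        # 'or EMPTY' makes every empty value share the same singleton tuple
        self.inverse_fields = tuple(inverse_fields) or EMPTY
        self.computed_fields = tuple(computed_fields) or EMPTY
        self._triggers = tuple(triggers) or EMPTY

f = Field()
g = Field(computed_fields=['total'])
```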
The computed value of parameter digits is no longer stored into fields and
columns; instead the value is recomputed every time it is needed. Note that
performance is not an issue, since the method `get_precision` of model
'decimal.precision' is cached by the ORM. This simplifies the management of
digits on fields and saves about 300 KB per registry.
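A hedged sketch of the pattern: recompute digits on access instead of storing them, with a cached precision lookup (here `functools.lru_cache`, standing in for the ORM cache on `decimal.precision.get_precision`; names and values illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_precision(application):
    # stand-in for the cached 'decimal.precision' database lookup
    return {'Account': (16, 2), 'Product Price': (16, 4)}.get(application, (16, 2))

class Float:
    def __init__(self, application):
        self.application = application

    @property
    def digits(self):
        # no stored value: recomputed on every access, cheap thanks to the cache
        return get_precision(self.application)

price = Float('Product Price')
```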
Sometimes, the expected mro of the model is not the same as the one built with
a binary class hierarchy. So we reorder the base classes in order to match the
equivalent binary class hierarchy. This also fixes the cases where duplicates
appear in base classes.
Instead of composing classes in a binary tree hierarchy, we make one class that
inherits from all its base classes. This avoids keeping intermediate data
structures for columns, inherits, constraints, etc. This saves about 600Kb per
registry.
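An illustrative sketch of the approach (class names hypothetical): build the model class in one step from all its bases via `type()`, rather than chaining binary subclasses; Python's C3 linearization then yields the expected MRO directly.

```python
# One flat class from all bases, instead of a binary tree of subclasses.
class BaseModel: pass
class MixinA(BaseModel): pass
class MixinB(BaseModel): pass

Model = type('Model', (MixinA, MixinB), {'_name': 'example.model'})
```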
The mapping model._all_columns takes quite some memory (around 2 MB per
registry) because of the numerous dictionaries (one per model) and the
inefficient memory storage of column_info. Since it is deprecated and almost
no longer used, it can be computed on demand.
The ormcache is now shared among registries. The cached methods use keys like
(DBNAME, MODELNAME, METHOD, args...). This allows registries with high load to
use more cache than other registries.
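A hedged sketch of the keying scheme (function names and storage are illustrative; the real ormcache also has eviction, which is omitted here):

```python
# One cache dict shared by all databases, keyed by (dbname, model, method,
# args): a busy database naturally occupies more entries than a quiet one.
_shared_cache = {}

def cache_set(dbname, model, method, args, value):
    _shared_cache[(dbname, model, method) + tuple(args)] = value

def cache_get(dbname, model, method, args):
    return _shared_cache.get((dbname, model, method) + tuple(args))

cache_set('db1', 'res.partner', 'name_get', (42,), 'Partner 42')
```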
When opening a record from a many2one,
the context is not propagated to fields_view_get.
This is a problem if you set "form_view_ref" in the context for example.
opw-629628
This should improve the performance of method read() on models with inherited
fields, like product.product. The inherited fields that are stored as columns
in parent tables (except for translated fields) are read in the same query as
the fields of the model. Those fields will be directly stored in cache under
the main model, so that no copying will take place in cache for accessing them
(this is the default implementation of inherited fields).
This makes the query construction more robust, as it handles joins for
conditions and ORDER BY clauses. It also makes it easier to read() from
several tables (like inherited fields).
Custom fields can point to custom models that have not been initialized
yet (`_setup_base` not called). Ensure every model in the registry
has a `_fields` attribute.
Use a `frozendict` as a defensive check, to ensure it won't be modified
before `_setup_base` is called.
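A minimal `frozendict` sketch (illustrative, not the project's own implementation): any mutation raises, so accidental writes to `_fields` before `_setup_base` has run are caught immediately.

```python
# Read-only dict: every mutating method raises TypeError.
class frozendict(dict):
    def _readonly(self, *args, **kwargs):
        raise TypeError("frozendict is read-only")
    __setitem__ = __delitem__ = _readonly
    clear = pop = popitem = setdefault = update = _readonly

fields = frozendict()
try:
    fields['name'] = 'char'     # would silently corrupt a plain dict
    mutated = True
except TypeError:
    mutated = False
```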
This was due to secondary fields loaded from database in 'onchange' mode. In
that mode, the secondary fields were marked 'dirty', and therefore returned by
the method `onchange`. The fix consists in loading those secondary fields in
cache before processing the onchanges.
This incidentally fixes a test on method `onchange`: in a one2many field, some
dirty fields were unexpectedly returned in the result. This was due to those
fields being loaded while processing onchanges.
When overriding a field defined as a function field, the field must either
create a corresponding column that is not a fields.function (if stored), or
have no corresponding column (if not stored).
Idea: look up for the model's fields in method `_setup_base()` instead of
method `__init__()`. This does not make a significant difference when
installing or upgrading modules, but when simply loading a registry, the
(expensive) field lookup is done once per model instead of once per class.
In a workflow context (for instance, the invoice workflow),
the context is not passed.
Therefore, relying on the 'recompute' key in the context
to avoid recomputing fields does not work with workflows.
This leads to huge performance issues,
as fields are recomputed recursively (instead of sequentially)
when several records are involved.
For instance, when reconciling several invoices with one payment
(100 invoices with 1 payment, for instance),
records of each invoice are uselessly recomputed in each workflow call
(for each "confirm_paid" method run for each invoice).
With a significant number of invoices (100, for instance),
this even leads to a "Maximum recursion depth reached" error.
closes #4905
This fixes a bug introduced by commit f650522bbf
(related fields should not be copied by default). Inherited fields are a
particular case, and given the implementation of copy(), they must be copied if
their original field is copied.
The test on copy() in test_orm has been modified to show the bug.
This helps fix old-api onchange methods with a record id as a parameter.
Browsing this record id may be problematic, since it reads the record in an
environment with an empty context. This is really problematic when the record
is a new record, because such a record only exists in a given environment.
The onchange() on new records processes fields in non-predictable order. This
is problematic when onchange methods are designed to be applied after each
other. The expected order is presumed to be the one of the fields in the view.
In order to implement this behavior, the JS client invokes method onchange()
with the list of fields (in view order) instead of False. The server then uses
that order for evaluating the onchange methods.
This fixes #4897.
If a selection field is created with an empty list of choices (e.g. to be
filled by submodules), initialise the field as a varchar column (the most
common case).
Check that the list is non-empty to avoid crashing while checking the type of
the first key.
Fixes #3810
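A hedged sketch of the fallback (function name illustrative): an empty selection list yields a varchar column; otherwise the type of the first key decides between integer and varchar.

```python
def pg_type_for_selection(selection):
    # Empty list (e.g. choices added later by submodules): default to varchar.
    if not selection:
        return 'varchar'
    # Non-empty: inspect the first key without risking an IndexError.
    first_key = selection[0][0]
    return 'int4' if isinstance(first_key, int) else 'varchar'
```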
When changing the type of a column (if the size differs, for example),
a 'selection' field should be treated like a 'char' field (since they
are internally the same column type).
This fixes some migration issues where 'char' fields were correctly
changed but 'selection' fields were not.
Use case:
* create a 6.0 db with the 'stock' module installed
* the 'state' field of the 'stock.move' model is of type 'character varying(16)'
* migrate it to 8.0
* the 'state' field is still 'character varying(16)' but should normally be
  'character varying'
These revs. introduced an API change in the _name_search method.
Indeed, the 'operator' parameter used to have 'ilike' as its default value.
This cannot be changed, as every module overriding this method
did so using the signature with operator='ilike'.
For instance, the _name_search method of addons/base/ir/ir_model.py
expects 'ilike' as the operator.
Since that was no longer the case,
it led to a crash when performing a name_search call on the model ir.model,
for example when adding a new custom field to a model from the web client.
opw-626161
The model setup sometimes misses entries in _inherit_fields and _all_columns.
This is because those dictionaries are computed from parent models which are
not guaranteed to be completely set up: sometimes a parent field is only
partially set up, and columns are missing (they are generated from fields after
their setup).
To avoid this bug, the setup has been split in three phases:
(1) determine all inherited and custom fields on models;
(2) setup fields, except for recomputation triggers, and generate columns;
(3) add recomputation triggers and complete the setup of the model.
Making these three phases explicit brings good invariants:
- when setting up a field, all models know all their fields;
- when adding recomputation triggers, you know that fields have been set up.
The field setup on models is improved: only fields are determined when building
the model's class; the final _columns is computed from the fields once they are
set up.
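The three phases can be sketched as follows (method names follow the pattern described above; the model class and registry are simplified stand-ins):

```python
# Each phase runs to completion on every model before the next phase starts,
# which gives the invariants: all fields known before field setup, all fields
# set up before triggers are added.
class Model:
    def __init__(self):
        self.phases = []
    def _setup_base(self):      self.phases.append('base')      # (1)
    def _setup_fields(self):    self.phases.append('fields')    # (2)
    def _setup_complete(self):  self.phases.append('complete')  # (3)

registry = {'res.partner': Model(), 'res.company': Model()}
for model in registry.values(): model._setup_base()
for model in registry.values(): model._setup_fields()
for model in registry.values(): model._setup_complete()
```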
For models using a datetime field as name (hr.attendance, for instance), the
user timezone was not applied in the display name.
Therefore, in the breadcrumb, the datetime differed from the one in the form
if the user was in a timezone other than UTC.
This rev. is related to 27d8cb843b, but is for the 8.0 api.
We are aware that we introduce a tiny API change (a method signature change),
which we normally prohibit; but considering the really low level of the
method, the fact that it is probably not overridden by any other module, and
the fact that there is no cleaner way to correct this, we are making an
exception.
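An illustrative, pure-stdlib sketch of applying the user timezone when rendering a datetime display name (function name and fixed-offset timezone are assumptions for the example):

```python
from datetime import datetime, timezone, timedelta

def display_name(dt_utc, tz_offset_hours):
    """Render a naive UTC datetime in the user's timezone."""
    user_tz = timezone(timedelta(hours=tz_offset_hours))
    aware = dt_utc.replace(tzinfo=timezone.utc)   # datetimes are stored in UTC
    return aware.astimezone(user_tz).strftime('%Y-%m-%d %H:%M:%S')

# A user at UTC+2 sees 14:00 for a record stored as 12:00 UTC.
name = display_name(datetime(2015, 1, 1, 12, 0, 0), 2)
```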
Columns defined in the new api as integer, computed and non-stored should not
be aggregated in read_group.
Fall back on False if the column is None.
Fixes #3972, opw 619536