In a workflow call (for instance, in the invoice workflow),
the context is not passed.
Therefore, relying on the 'recompute' key being in the context
to avoid recomputing the fields does not work with workflows.
It leads to severe performance issues,
as fields are recomputed recursively (instead of sequentially)
when several records are involved.
For instance, when reconciling several invoices with one payment
(say, 100 invoices with 1 payment),
the records of each invoice are needlessly recomputed in every workflow call
(once for each "confirm_paid" method run on each invoice).
With a significant number of invoices (100, for instance),
it even leads to a "maximum recursion depth exceeded" error.
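A hedged illustration of the fragile pattern (Odoo 8 new-API style; the
method and its body are hypothetical, only with_context(), write() and
recompute() are real model methods)::

    from openerp import models, api

    class AccountInvoice(models.Model):
        _inherit = 'account.invoice'

        @api.multi
        def confirm_paid_batch(self):
            # Batch the writes with recomputation disabled, then recompute
            # once at the end. A workflow call does not propagate the
            # context, so the 'recompute' key is silently dropped and every
            # write triggers a full recursive recomputation instead.
            records = self.with_context(recompute=False)
            records.write({'state': 'paid'})
            self.recompute()
            return True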
closes #4905
This reverts commit 8cd2cc8910.
It turned out that forcing the recomputation of fields as the admin user
breaks some existing behavior. Instead, we make the recomputation as the
admin user explicit by adding compute_sudo=True to the field's definition.
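A minimal sketch of such a definition (model and field names are invented
for illustration)::

    from openerp import models, fields, api

    class ExampleModel(models.Model):
        _name = 'example.model'

        partner_id = fields.Many2one('res.partner')
        # compute_sudo=True: this field's recomputation explicitly runs as
        # the admin user, instead of being forced for all computed fields.
        partner_credit = fields.Float(compute='_compute_partner_credit',
                                      compute_sudo=True)

        @api.depends('partner_id')
        def _compute_partner_credit(self):
            for record in self:
                record.partner_credit = record.partner_id.credit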
This fixes an issue where the same function is used in several model classes:
once the function is wrapped for the first class, it is erroneously considered
a wrapper for the second class, and is therefore not wrapped in the other
classes.
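A minimal sketch of the pitfall (plain Python, hypothetical names; not the
actual Odoo wrapping code)::

    def wrap(func):
        if getattr(func, '_wrapped', False):
            return func  # wrongly skipped: the mark leaked from another class
        def wrapper(self, *args, **kwargs):
            return func(self, *args, **kwargs)
        # Bug: marking the shared function object itself means the next
        # class reusing it is detected as "already wrapped".
        func._wrapped = True
        wrapper._wrapped = True
        return wrapper

    def shared(self):
        return 42

    class A(object):
        shared = wrap(shared)  # wrapped as expected

    class B(object):
        shared = wrap(shared)  # returned as-is: never wrapped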
When an onchange returns a change in a *2many field line (a '1' tuple, i.e.
an update command), return only the altered field rather than all the fields
of the *2many line.
Otherwise, the web client regards all the fields of the *2many as dirty and
tries to write on all of them, even when the value is unchanged.
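A hedged sketch of the difference (field names are hypothetical; the
(1, id, values) tuple is the standard x2many update command)::

    # Before: the update command echoes every field of the line, so the
    # client marks them all dirty and writes them all back.
    {'value': {'line_ids': [(1, 42, {'qty': 3.0,
                                     'price_unit': 10.0,
                                     'name': 'Widget'})]}}

    # After: only the field actually altered by the onchange is returned.
    {'value': {'line_ids': [(1, 42, {'qty': 3.0})]}}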
opw-615062
The method _prefetch_field() was accidentally prefetching fields marked for
recomputation, which skipped the actual recomputation, since a value was put
in the cache. Sometimes the field's value happened to be fixed up by an extra
recomputation of the field; here we remove that extra recomputation and fix
the cache corruption itself.
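A toy model of the corruption (plain Python, not the actual ORM cache): once
prefetch puts a value in the cache, the "compute if missing" step becomes a
no-op and the pending recomputation is silently lost::

    cache = {}
    todo = {'total'}                    # fields queued for recomputation

    def prefetch(field, db_value):
        # The fix: never prefetch a field queued for recomputation.
        if field not in todo:
            cache[field] = db_value

    def get(field, compute):
        if field not in cache:          # computes only on a cache miss
            cache[field] = compute()
            todo.discard(field)
        return cache[field]

    prefetch('total', 100)              # stale value is no longer cached
    assert get('total', lambda: 125) == 125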
This fixes a bug that is usually triggered in module account_followup, but
does not occur deterministically. Some recomputations of computed fields were
apparently missing: Environment objects containing recomputation todos, kept
alive only by a WeakSet, were removed by the Python garbage collector before
the recomputation took place. We fix the bug by moving the recomputation
todos into a non-weakref'ed object.
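The failure mode is easy to reproduce with the standard library alone (toy
class, not the actual Environment)::

    import gc
    import weakref

    class Todos(object):
        pass  # stand-in for an environment holding recomputation todos

    todos = weakref.WeakSet()
    todos.add(Todos())        # no strong reference is kept anywhere
    gc.collect()
    print(len(todos))         # 0: collected before being processed

    strong = set()            # the fix: a plain set holds strong references
    strong.add(Todos())
    print(len(strong))        # 1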
Fixes #966
* As a preallocation optimization, ``list()`` calls ``__len__`` on its
parameter if it is available
* Before Python 2.7.4, WeakSet had a bug[0] where ``len()`` was unsafe: it
was done by iteration, and weakrefs could be removed from the underlying set
during the iteration
As a result, the safety feature of listifying a WeakSet, to ensure we hold
strong refs on all items during iteration, may blow up.
Wrapping the weakset in ``iter()`` makes ``__len__()`` invisible and ensures
we stay within the IterationGuard[1].
Which, now that I think about it, means we *should* be able to safely iterate
weaksets in the first place and may not have needed to listify them...
[0] http://bugs.python.org/issue14159
[1] http://hg.python.org/cpython/file/b6acfbe2bdbe/Lib/_weakrefset.py#l58
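A small reproduction of the workaround (standard library only)::

    import weakref

    class Env(object):
        pass

    keep = [Env() for _ in range(3)]   # strong refs keep the items alive
    envs = weakref.WeakSet(keep)

    # list(envs) would call len(envs) to preallocate; before Python 2.7.4
    # that len() iterated the underlying set while weakrefs could still be
    # removed. iter() hides __len__, so list() falls back to plain iteration
    # inside the WeakSet's IterationGuard, and the resulting list still
    # holds strong refs on every item.
    snapshot = list(iter(envs))
    assert len(snapshot) == 3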
A squashed merge is required because the conversion of the apiculture branch
from bzr to git was not done correctly. The git history contains irrelevant
blobs and commits. This branch brings a lot of changes and fixes, too many to
list exhaustively.
- New ORM API: records are now used instead of ids
- Environments encapsulate cr, uid, context while maintaining backward compatibility
- The field ``compute`` attribute is a new object-oriented way to define function fields (see the sketch after this list)
- Shared browse record cache
- New onchange protocol
- Optional copy flag on fields
- Documentation update
- Dead code cleanup
- Lots of fixes
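A hedged sketch of the new-API style these changes enable (model and field
names invented for illustration)::

    from openerp import models, fields, api

    class LibraryBook(models.Model):
        _name = 'library.book'

        name = fields.Char()
        price = fields.Float()
        # 'compute' replaces the old fields.function declaration
        price_tax_incl = fields.Float(compute='_compute_price_tax_incl')

        @api.depends('price')
        def _compute_price_tax_incl(self):
            for book in self:       # iterate records, not (cr, uid, ids)
                book.price_tax_incl = book.price * 1.21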