In the previous implementation of the new API fields,
both fields.Selection and fields.Reference performed
early validation of their `value` as soon as it entered
the cache, whether it was read, written, or computed.
This is a source of trouble and performance problems,
and it is unnecessary: we should consider that the database
always contains valid values. If that is not the case, the
database was modified externally, and that is an exception
that should be handled externally as well.
Revalidating selection/reference values can be expensive
when the domain of values is dynamic: it may require extra
database queries, extra access rights checks, etc.
This patch adds a `validate` parameter to `convert_to_cache`,
making it possible to turn off re-validation on demand. The ORM
turns off validation whenever the value being converted
is supposed to be already valid, such as when reading it
from the database.
The parameter is currently ignored by all other fields,
and defaults to True so validation is performed in all other
cases.
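A minimal standalone sketch of the idea; the names here
(`get_selection`, the hand-rolled conversion function) are
illustrative, not the actual ORM code::

    def convert_to_cache(value, get_selection, validate=True):
        # get_selection() may be expensive: dynamic domain, extra
        # database queries, access rights checks...
        if not value:
            return False
        if validate and value not in get_selection():
            raise ValueError("Wrong value: %r" % (value,))
        return value

    # value written by user code: validate against the selection
    print(convert_to_cache('draft', lambda: ['draft', 'done']))
    # value read from the database: trusted, skip the expensive lookup
    print(convert_to_cache('draft', lambda: ['draft', 'done'], validate=False))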
Commit 16d6744 turns out not to be so great: because it filters out the
todos for prefetched fields (rather than just those for the field being
asked for), there are situations where it ends up not fetching the records
it was originally asked for and breaks a bunch of stuff, e.g. unreconciling
lines in bank statements.
Force the ids explicitly asked for back into the fetched set (see the
sketch below), so that the prefetch is at most a noop; rco will have to
take an actual look at it.
A todo would only filter out records selected by the same field's caching;
it should filter on the whole prefetching selection, or another field
could/would just add them back to the set of records to fetch (and lead to
Bad Things).
Note: this probably deserves a test somehow, but I'm not quite sure how the
todos thing works so...
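As illustration only (the real prefetch bookkeeping lives in the ORM's
todo handling), the forced re-inclusion amounts to something like::

    def records_to_fetch(requested_ids, prefetch_ids, filtered_out):
        # whatever the todo filtering removes from the prefetch selection...
        fetched = set(prefetch_ids) - set(filtered_out)
        # ...the explicitly requested ids are forced back in, so the
        # filtering is at most a noop for the original request
        return fetched | set(requested_ids)

    assert 42 in records_to_fetch([42], [42, 43, 44], filtered_out=[42])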
If expansion of the recordset is done during _prefetch_field and one of the
prefetched records (not the base record(s), but one of those selected by
BaseModel._in_cache_without) can't be read by the current user (due to an
access rule, field-level read restrictions, or whatever), the whole read
fails, even if the record which was specifically asked for could be read on
its own.
By only expanding the read set in _read_from_database, the cache is correctly
set, but read() and _prefetch_field() check only the records explicitly asked
for against AccessDenied; prefetched records will only be checked if they are
ever accessed.
Fixes #1013
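A self-contained toy model of the resulting split; AccessDenied, the
module-level DB/ACCESSIBLE tables and both functions are illustrative
stand-ins, not the actual ORM code::

    class AccessDenied(Exception):
        pass

    DB = {1: {'name': 'ok'}, 2: {'name': 'secret'}}
    ACCESSIBLE = {1}

    def read_from_database(ids, cache):
        # fetch the requested ids *and* a larger prefetch set into the
        # cache; inaccessible records are poisoned, not rejected outright
        for rid in DB:
            cache[rid] = DB[rid] if rid in ACCESSIBLE else AccessDenied()

    def read(ids, cache):
        read_from_database(ids, cache)
        out = []
        for rid in ids:           # check only the explicitly asked records
            value = cache[rid]
            if isinstance(value, AccessDenied):
                raise value
            out.append(value)
        return out

    print(read([1], {}))   # succeeds even though record 2 is poisoned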
This should fix an issue discovered by tde when reading all fields on a record
on which you don't have access rights:
- _read_from_database() fetches the result and stores it in the cache
- read() retrieves values from the cache, starting with field 'create_date'...
- ... which is not in the cache, so that field is prefetched and read, which
goes into an infinite loop
The problem is that _read_from_database() finds out that you don't have access
to the record, and stores a FailedValue in the cache for all fields... except
magic fields. Fix the problem by storing the FailedValue on all fields but 'id'.
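A hedged sketch of the fix with toy names (the real FailedValue wraps an
exception that is re-raised when the cached value is accessed)::

    class FailedValue(object):
        def __init__(self, exception):
            self.exception = exception

    def poison_record(cache, field_names, exception):
        for name in field_names:
            if name != 'id':    # 'id' stays readable: no prefetch loop
                cache[name] = FailedValue(exception)

    cache = {'id': 7}
    poison_record(cache, ['id', 'name', 'create_date'], Exception("denied"))
    assert cache['id'] == 7
    assert isinstance(cache['create_date'], FailedValue)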
* Either further operations don't really care (e.g. ``str.join`` takes any
  iterable)
* Or they build their own sequence (``browse`` calls ``tuple()`` on iterable
  params)
Fixes #966
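Both cases in one trivial, standalone illustration (``browse()`` here is a
stand-in mirroring the real method's ``tuple()`` call)::

    def browse(ids):
        return tuple(ids)                   # build our own sequence

    gen = (i * i for i in range(3))         # a generator, not a list
    print(", ".join(str(i) for i in gen))   # 0, 1, 4
    print(browse(x for x in (1, 2, 3)))     # (1, 2, 3)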
* As a preallocation optimization, ``list()`` calls ``__len__`` on its
parameter if it's available
* Before Python 2.7.4, WeakSet had a bug[0] where ``len()`` is unsafe: it is
  done by iteration, and weakrefs may be removed from the underlying set
  during the iteration
As a result, the safety feature of listifying a WeakSet to ensure we have
strong refs on all items during iteration may blow up.
Wrapping the weakset in an ``iter()`` makes ``__len__()`` invisible and ensures
we're within the IterationGuard[1].
Which now that I think about it means we *should* be able to safely iterate
weaksets in the first place and may not have needed to listify them...
[0] http://bugs.python.org/issue14159
[1] http://hg.python.org/cpython/file/b6acfbe2bdbe/Lib/_weakrefset.py#l58
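A minimal standalone sketch of the wrap::

    import weakref

    class Obj(object):
        pass

    objs = [Obj() for _ in range(5)]   # strong refs keep the demo stable
    ws = weakref.WeakSet(objs)

    # list(ws) would call ws.__len__() to preallocate, which was unsafe
    # before Python 2.7.4; iter(ws) exposes no __len__, so list() simply
    # iterates, within WeakSet's own IterationGuard
    snapshot = list(iter(ws))
    assert len(snapshot) == 5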
Likely caused by a type incoherence, e.g. providing an id as a string when the
table uses integer ids. Postgres performs an implicit conversion from string
to integer[0]. This wasn't much of an issue in the old API, where whatever
cache was there would simply not be used, but because the new API's cache is
part of its behavior, the mismatch has a semantic impact and can lead to
infinite recursion.
[0] more precisely from a quoted value, which is untyped
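A contrived demonstration of the mismatch (not actual ORM code): the SQL
layer coerces '42' to 42, but a Python-side cache keyed on the raw id sees
two distinct keys::

    cache = {}
    cache[(42, 'name')] = 'Agrolait'   # row fetched once, keyed on int id

    print(cache.get((42, 'name')))     # 'Agrolait'
    print(cache.get(('42', 'name')))   # None: miss -> refetch -> store
                                       # under 42 again -> miss forever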
A squashed merge is required as the conversion of the apiculture branch from
bzr to git was not correctly done. The git history contains irrelevant blobs
and commits. This branch brings a lot of changes and fixes, too many to list
exhaustively.
- New ORM API: objects are now used instead of ids
- Environments to encapsulate cr, uid and context while maintaining backward compatibility
- Field `compute` attribute: a new, object-oriented way to define function fields
- Shared browse record cache
- New onchange protocol
- Optional copy flag on fields
- Documentation update
- Dead code cleanup
- Lots of fixes