The current code applied the negative operator on an expression using
recursion, which in extreme cases is not Python's best friend.
e.g: on an instance with a lot of warehouses, some simple action could lead
to a domain with many elements, which could easily go over Python's
maximum recursion limit.
This commit fixes this by replacing recursion with iteration.
We keep a stack of negation flags and loop over each token of the domain
as follows:
- when we iterate on a leaf, it consumes the top negation flag,
- after a '!' operator, the top negation flag is inverted,
- after an '&' or '|' operator, the top negation flag is duplicated on
the top of the stack.
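A minimal sketch of the iterative scheme described above, assuming a normalized domain where every operator is explicit (names and tables are illustrative, not Odoo's actual code):

```python
# Operator negation table for leaves; '!' is kept for non-negatable operators.
TERM_NEGATION = {'=': '!=', '!=': '=', '<': '>=', '>=': '<', '>': '<=', '<=': '>'}
DOMAIN_NEGATION = {'&': '|', '|': '&'}

def distribute_not(domain):
    result = []
    stack = [False]  # stack of negation flags, one per pending operand
    for token in domain:
        negate = stack.pop()
        if token == '!':
            # a '!' inverts the flag for its single operand
            stack.append(not negate)
        elif token in DOMAIN_NEGATION:
            # '&'/'|' is swapped if negated, and the flag is duplicated
            # for both operands
            result.append(DOMAIN_NEGATION[token] if negate else token)
            stack.append(negate)
            stack.append(negate)
        else:
            # a leaf consumes the top flag
            left, operator, right = token
            if negate and operator in TERM_NEGATION:
                result.append((left, TERM_NEGATION[operator], right))
            elif negate:
                result.append('!')
                result.append(token)
            else:
                result.append(token)
    return result
```

For example, negating an '&' yields a '|' with negated leaves, as in De Morgan's law.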
closes#9433
opw-653802
In the `product.template` model,
when searching products within a category that doesn't exist,
all products were returned, while none should be returned.
For instance, in Sales > Products,
searching in the search input with internal category
"A category that doesn't exist"
returned all products.
opw-649548
When performing on model A a read_group with groupby on a many2one field F1
pointing to a model B which is ordered by an inherited, non-stored field F2,
the group containing all the records from A with F1 not set was not returned.
Example:
model A= "hr.applicant"
model B= "res.users" (_order = "name,login")
inherited model= "res.partner"
field F1= "user_id" (to "res.users")
field F2= "name" (inherited from "res.partner")
In this example, the query generated by the function "read_group" was:
SELECT min(hr_applicant.id) AS id, count(hr_applicant.id) AS user_id_count , "hr_applicant"."user_id" as "user_id"
FROM "hr_applicant" LEFT JOIN "res_users" as "hr_applicant__user_id" ON ("hr_applicant"."user_id" = "hr_applicant__user_id"."id"),"res_partner" as "hr_applicant__user_id__partner_id"
WHERE ("hr_applicant"."active" = true) AND ("hr_applicant__user_id"."partner_id" = "hr_applicant__user_id__partner_id"."id")
GROUP BY "hr_applicant"."user_id","hr_applicant__user_id__partner_id"."name","hr_applicant__user_id"."login"
ORDER BY "hr_applicant__user_id__partner_id"."name" ,"hr_applicant__user_id"."login"
which always returned "hr.applicant" groups of records with a "user_id" set, due to the inner join made on "res_partner".
This inner join on "res_partner" comes from the function "add_join", called by "_inherits_join_add"
in _generate_order_by_inner.
Introduced by dac52e344c
opw:651949
In some complex use cases of a workflow instance with several workitems,
processing a given workitem could itself somewhat recursively process
one of the following workitems.
This situation was not taken into account, so in that given
case, the already processed workitems would be processed again.
opw-647580
This reverts commit bd9cbdfc41.
The above revision solved the SQL constraints not being
translated when raised. They were not translated because
the context, containing the lang, was not located as expected
in the `kwargs` dict.
While it solved this issue, it had the side-effect of raising
`current transaction is aborted,
commands ignored until end of transaction block` errors more
often when using the web client.
This can be explained by the double check, when the first
check raised this error
- which can happen, e.g. when the cursor is closed;
there is a retry mechanism in such cases -
and by the fact the transaction was not rolled back.
This issue could have been solved as well by rolling back
the transaction, but this is regarded as not-so-clean.
Therefore, to solve this issue, while still having
the SQL constraints translated, we apply the
second patch proposed in bd9cbdfc41
commit message, which is not-so-clean as well, but
which is a proper solution.
opw-651393
Avoid "patching" the registry, as this introduces inconsistencies (some field
attributes are lost). Instead, proceed as follows:
- update the definition of custom fields in database;
- clear the corresponding cache on the registry (this was missing);
- setup the models in registry (this reloads the custom models and fields);
- update the database schema of the models based on the registry.
This makes the update of custom fields simpler and more robust.
If po and pot files are stored in a directory other than 'i18n'
(e.g. 'i18n_extra'), a po file could be linked to a wrong pot file.
Use the same folder as the po file to look for the pot.
Closes#4323
If multiple warnings were returned by a cascading onchange
call, only the last warning was displayed.
This revision concatenates the warnings in such a case.
opw-649275
The "Synchronise translation" wizard almost doubled the number of translated
terms in database.
This is due to the loss of the module reference during synchronisation
(`module_name` is empty when updating all modules).
Only overwrite the module when it is set (defaults to None).
Fixes#6149
Second part of the patch: avoid inserting translations without any module for
locale xml ids.
If an error happens in an overload of setUp, the already-open cursor
is likely not to get properly released before the next test,
deadlocking the db, because tearDown only runs if setUp has
successfully completed.
Cleanups were added specifically to run every time, after tearDown has
(potentially) been executed.
closes#8327
Note: cleanups run in LIFO order (as they should).
In 5efac22043 and
1cf5723835 the computation
of the decimal precision of float fields was modified to
reuse a random cursor from the request environments.
Environment.envs is a WeakSet, subject to unpredictable
garbage collections.
This could cause hard-to-diagnose problems by executing
a SELECT query in a cursor that is supposed to be
otherwise idle and untouched.
As an example, this could cause a transaction start in a
cursor that was just committed/rolled back,
and was waiting for another operation to complete.
One such case happened semi-randomly during a module
installation triggered from the web client, where the request
cursor is committed and waiting for the registry
initialization done with a different cursor,
in the middle of _button_immediate_function().
After the registry initialization, the request cursor is
used again to determine the next action (ir.actions.todo),
and this could cause a TransactionRollbackError if the
request cursor was used during the initialization.
Using a new cursor every time is much safer, simpler,
and will not cause any significant performance hit,
because it will usually grab an available connection
from the connection pool, rather than create a new one.
Fixes opw-649151
* alter docstring of @api.one to mark it as deprecated for 9.0,
recommend using @api.multi instead
- deprecation notes were not correctly styled, add styling
matching "warning" alerts
* move @api.one down the doc page to deemphasize it
* fix "backend" tutorial to remove all instances of ``@api.one``
closes#8527
When linking a record into a `one2many` relation with command `(4, rid)`, a
query checks whether the record is already linked to the current record id:
SELECT 1 FROM {inv_table} WHERE id={rid} AND {inv_field}={id}
where `inv_field` is the name of the inverse field, and `inv_table` is the
table where this field is stored.
The query is wrong if the inverse field is inherited, because the `rid` does
not belong to the table `inv_table`.
The test has been replaced by a plain ORM access:
rec = obj.browse(cr, SUPERUSER_ID, rid)
if int(rec[inv_field]) != id:
...
This fixes#4685.
When exporting the translations, the terms stored in individual files outside of
addons folders (e.g. openerp/models.py) need to be scanned as well and added to
the base translations.
Some folders like osv and report were enough to cover these files, but with the new
api moving the ORM to the openerp folder, a non-recursive scan needs to be added.
Fixes#7482
The recomputation should not be necessary, as we normally don't change the
record referred by an ir_model_data record. This speeds up the creation of
ir_model_data records by 33%, which should be noticeable during module
installations.
PostgreSQL has a limit of 64 characters for table and column names
as well as for alias names.
When generating an alias name, e.g. for group by and order
by clauses, if the alias is longer than 64 characters,
use hashing to force the alias length to be within this
64 chars limit.
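A sketch of the idea (hypothetical helper, not the exact Odoo implementation): keep a readable prefix and append a short hash of the full name, so long aliases remain unique within the limit.

```python
import hashlib

ALIAS_LIMIT = 64  # PostgreSQL identifier length limit mentioned above

def limited_alias(alias):
    """Return `alias` unchanged if short enough, otherwise a truncated
    prefix plus a short hash of the full name."""
    if len(alias) <= ALIAS_LIMIT:
        return alias
    digest = hashlib.md5(alias.encode('utf-8')).hexdigest()[:8]
    # keep a recognizable prefix, spend 9 chars on '_' + hash
    return alias[:ALIAS_LIMIT - 9] + '_' + digest
```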
Fixes#8094 Closes#8142
Consider the following setting:
- on model A, field F is computed, stored, and depends on field G
- on model A, field one2many G to model B, with inverse field H
- on model B, field many2one H is inherited (_inherits) from model C
- on model C, field many2one H is stored
When adding records of model B, the field F must be recomputed. In order to
determine which records to recompute, one searches model A with a domain like
[(G, 'in', ids)]. In expression.py, this is resolved with an SQL query like
select H from B where id in {ids}
This query fails, since the field H is not stored in model B. This happens in
general if H is not stored (it may be any computed field). In that case, one
should instead browse records from B, and read field H through the ORM.
A test case has been added: it introduces a many2one field in a parent model,
and a one2many field using the inherited many2one on a child model. The test
checks whether one can search on the one2many field.
The risk was introduced by b7f1b9c.
If _store_set_values() calls another _create() or _write(),
the recomputation mechanism enters an infinite recursion,
trying to reevaluate on each call exactly the same fields
for the same records as the previous call.
This revision replaces the loop of _store_set_values()
by 2 nested loops:
- the outer one preserves the entire consumption
of the recompute_old queue
(tested thanks to revision a922d39),
- the inner one allows clearing the queue
before each recomputation bundle, thereby fixing the recursion.
Closes#7558
Escape the currency symbol to prevent javascript errors,
for instance when reconciling a bank statement
(with the special reconciliation widget)
Closes#8216
Previously, when replacing time-related patterns in a prefix or suffix of
a sequence, the timezone used for the time values would always be the
server's timezone.
With this fix, the context timezone is used if available (UTC is used
otherwise).
closes#8159
opw-646487
When the Bundle mechanism caches bundle files into the
ir.attachment table, it can sometimes cache an empty
resource file (for example, if a less/sass compiled file
results in an empty CSS file) for the bundle URL.
The appropriate behavior for _serve_attachment()
when the browser loads that bundle URL is to return
an empty file (which is the correct content), instead of
redirecting again to the same URL, triggering a loop.
In addition, this commit removes the special case for
returning 204 No Content. This HTTP status code is not
really meant for GET requests - returning an empty file
with a 304 or 200 code is more appropriate and allows
for normal browser caching.
When ordering results on a many2one field, results are ordered by the
order of the target model. The code was wrongly assuming that this
`_order` attribute only contains `_classic_read` fields (that can be
directly read from the table in database). Now correctly generate the
"ORDER BY" clause using the current table alias.
`res.users` can now be sorted by name.
When saving a template in version 8.0, html would be saved as it should
be displayed once on the site. In particular, if some text should be
escaped once sent to the browser, it will be saved as such.
But when rendering, a text node content is unescaped two times:
* for translation, which seems wrong since we already use .text of a node,
which already escaped it; doing it one more time is bad,
* when rendering the template, since the html template is stored in xml.
This commit removes the superfluous unescaping for translation, and adds
escaping when saving the changed template content.
closes#7967
opw-646889
otherwise when a method such as fields_get is overridden
using the new API, self.env.context is an empty dictionary,
and the translations cannot be performed correctly
Closes#7911
Authentication modules are supposed to override res_users.check_credentials()
in order to plug in their own mechanism, without actually modifying the
behavior of res_users.check(), res_users.authenticate() or
res_users._login().
auth_openid was incorrectly overriding check() instead of
check_credentials(), and unnecessarily accessing private
attributes of res_users. Fixing the implementation of auth_openid
to follow the API means we can completely make those attributes
private.
Partial revert of commit 39d17c2580
of PR #4617, that broke translations by introducing a different
set of criteria for filtering translations in `_set_ids()` and
`_get_ids()`, adding the `module` column in `_set_ids`.
This meant `_get_ids` would see translations that `_set_ids`
would never update: the translation interface did not appear to
be working anymore as soon as a translation entry existed with
no `module` value.
This is typically the case for manual entries and for entries bootstrapped
by `translate_fields()`, as they do not belong to any module.
The situation described in PR #4617 probably requires an extra
fix in the translation import system instead.
This regex is used for a quick sanity check of
the order_spec in `search(order=<order_spec>)`.
Because it was built on the repetition of a
group ending with a series of optional patterns,
it could cause expensive backtracking when the
order spec did not actually match the regex
(the regex engine was trying all possible ways
to split the groups).
Forcing the repeating group to either end
with a comma or the end of the string prevents
prohibitive backtracking, while being even
more restrictive with regard to the syntax of
the order spec.
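A simplified, hypothetical version of such a pattern (not Odoo's exact regex) showing the fix: every repeated group must end with either a comma or the end of the string.

```python
import re

# Each repetition ends with ',' or end-of-string, so the engine cannot
# try every possible way of splitting the groups on a non-matching input;
# the trailing lookbehind rejects a dangling comma.
ORDER_RE = re.compile(
    r'^(\s*[a-z0-9_]+(\s+(asc|desc))?\s*(,|$))+(?<!,)$',
    re.IGNORECASE,
)
```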
Closes#7755
Complements commit af9393d505
in light of commit 62b0d99cfe,
to really have the correct effect.
When the prefetching failed due to the presence of
extra records in the cache (for which the access is denied),
the `read` operation was indeed retried. However the
result was not stored in the cache because the cache
already held a FailedValue (automatically added when the
prefetch failed).
xmlrpc 1.0 does not support None/null, so commit f3e4d0a will break xmlrpc api.
Therefore, we export an empty string if the field is empty, instead of False or
None. For integers and floats, zero is exported.
opw-643966
Deprecate the phantom_jsfile method, keeping only phantom_js; phantom_js takes a
code argument to run client side only.
Removing phantom_jsfile will allow switching from phantom to any other engine,
such as running Google Chrome or Firefox directly. The only use of
phantom_jsfile was in an example.
New installation was detected using the `installed_version` attribute
(`latest_version` in the `ir_module_module` table), but this field wasn't
reset at module uninstallation (now fixed by cb29f9e), preventing
execution of hooks.
Same logic was applied for migration scripts at 8ff7230.
Fixes#7708
The new-api record prefetching algorithm attempts
to load data for all known records from the requested
model (i.e. all IDs present in the environment cache),
regardless of how indirectly/remotely they were
referenced. An indirect parent record may therefore
be prefetched along with its directly browsed children,
possibly crossing company boundaries involuntarily.
This patch implements a fallback mechanism when
the prefetching failed due to what looks like an
ACL restriction.
The implementation of `_read_from_database` handles
ACL directly and sets an `AccessError` as cache value
for restricted records.
If a model (like `mail.message`) overrides `read` to
implement its own ACL checks and raises an `AccessError`
before calling `super()` (which would then call
`_read_from_database`), the cache is never filled,
leading to an unexpected exception.
If this commit message looks familiar to you, that's
simply because this is the new-api counterpart of
b7865502e4
When updating a translation, the previous one is deleted and a new one is
recreated (with no module and state). When the source module is updated, the
previous term is inserted again into the list of terms.
Instead of dropping and recreating terms during update, simply update the
existing term and create one only if there was no previous translation.
Fixes#4617
Comments in .po(t) files for translations of type "code" (e.g. field labels)
specify the path to the file containing the translation. This path should be
OS-independent to get the same result whatever platform the instance is
running on.
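A minimal sketch of the normalization (hypothetical helper name): always emit forward slashes in the exported comments, whatever the platform separator is.

```python
import os

def export_path(path):
    # .po comment paths should not depend on the platform separator
    return path.replace(os.sep, '/')
```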
Closes#7561
Currencies must be unique per company.
Therefore, without this revision adding `(Copy)`
to the currency name on duplication, it is
simply not possible to duplicate a currency.
Closes#2443
If we export False or an empty string, the Excel export will consider the field
similarly to a boolean, and an empty value will be converted into "=False()" in
Excel. To prevent this, we return "None" in the following cases:
- String
- Date
- Datetime
- Selection
- Reference
- Many2one
- RelationalMulti
Introduced by 6243d18
opw-643966
Some models (e.g. calendar.event) use "virtual" ids which are not represented
as integers. It was not possible to use the sorted method on those models, as
calling int() fails.
This commit fixes the method, making it agnostic to the type of the
'id' member variable.
Fixes#7454
According to the USPS, separating the city name
and the zip code with a comma is acceptable,
but the preferred format omits the comma.
http://pe.usps.gov/cpim/ftp/pubs/Pub28/pub28.pdf
opw-644000
In version 8.0, postgresql's pg_size_pretty function is used
(http://www.postgresql.org/docs/9.4/static/functions-admin.html) when
getting the size of a binary field when reading, if `bin_size`
or `bin_size_[col_name]` is set in the context.
So in 8.0 the size of a binary field gets units bytes, kB, MB, GB and TB,
which was not taken into account. e.g: '5.3 GB'.
This fix uses the length of the string to differentiate the two cases.
e.g: '10000 bytes' (the longest raw size) will be returned directly, but
for anything longer the human-readable size of the content length will be
returned.
There is a corner case if a file is shorter than 12 bytes, but
this is a small enough scenario with small implications that it is
deemed acceptable.
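For reference, a human-readable size formatter in the spirit of the helper used here can be sketched as follows (illustrative, not the exact Odoo tools.human_size):

```python
def human_size(size):
    """Format a byte count with the largest fitting unit."""
    units = ('bytes', 'Kb', 'Mb', 'Gb', 'Tb')
    size = float(size)
    i = 0
    while size >= 1024 and i < len(units) - 1:
        size /= 1024.0
        i += 1
    return '%.2f %s' % (size, units[i])
```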
closes#7485
opw-644085
Previously, if the ID column was displayed it was not sortable.
The particular case added in 247c1972 is no longer needed with the new api:
id is in the list of existing fields (see xmo's comment in #7461).
closes#7487, closes#7461, fixes#7459
opw-644009
Pretty much completely rewritten theme, with a custom HTML translator and a
few parts of the old theme extracted to their own extensions.
Banner images were thought not to be that huge after all, and not worth the
hassle of them living in a different repository.
co-authored with @stefanorigano
The `search` method of models has an additional keyword parameter, `count`.
Its absence from the docs could lead people to inherit `search`
without defining it, which would result in a `TypeError` when called with
`count=`.
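The failure mode can be reproduced outside Odoo with plain Python (class and method names are illustrative):

```python
class BaseModel(object):
    def search(self, args, offset=0, limit=None, order=None, count=False):
        return 'count' if count else 'records'

class NaiveOverride(BaseModel):
    # drops the undocumented `count` keyword from the signature
    def search(self, args, offset=0, limit=None, order=None):
        return super(NaiveOverride, self).search(args, offset, limit, order)

BaseModel().search([], count=True)  # works

try:
    NaiveOverride().search([], count=True)
except TypeError as exc:
    print('fails as described: %s' % exc)
```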
Closes#7451
The field display_name is present in account_report_company but not in base
on the res.partner (has been added in v8 in base).
Create a hook method to keep using the slow CASE in base and switch to the
faster display_name when installing account_report_company.
Revert 83282f2d for a cleaner sanitizing earlier in the generation of the error
message.
If the import is failing, the error message contains the value that is
problematic. Escape this value in case it contains '%'
In commit 44f2c8d54 we unified the return value of the function to int,
but it seems the returned size could be None which is not a valid input
of the int() built-in function.
Using the `start` CLI command with the `--path` or `-p` option errors with:
odoo.py: error: no such option: --path
This is because the `--path` option is passed on to the server start routine,
and it's invalid there.
A workaround is to remove those command options from the arguments passed
to the main() server start.
Closes#5896
During tests, some creation of user records would unnecessarily trigger
password reset or set a password, both of which would trigger password
hashing which takes some time (for good reasons).
Fix by:
* passing no_reset_password in YAML tests and some Python tests still
missing it (a number of Python tests already used it)
* removing passwords from YAML records as they're never necessary, the
test user records are not expected to ever log in
View validation accounts for a fair share of the cost of module installation;
most of that cost is spent checking view validity, and a significant fraction
of *that* is spent reading the validated view from the DB.
Turns out _check_xml requested *all* view fields even though it needed
none of them save for the arch. Specifying fields to read_combined
rather than fetching all fields seems to result in a ~30% speedup of
_check_xml (under line_profiler). Most of the rest is spent fetching
sub-views (in get_inheriting_view_arch), I don't know that it can be
improved without hand-crafting a fairly complex SQL request.
Some tests (e.g. mail) have expensive and significant DB setup for a
number of small and cheap tests. Using a TransactionCase, the DB setup
far dominates the tests themselves, by up to 10x (mail unit tests take
~130s on my machine, the tests themselves take ~15s).
The SavepointCase introduced here is a hybrid of SingleTransactionCase
and TransactionCase: it uses a single transaction for all tests in a
class, but each test case is isolated by a rollbacked savepoint. This
allows a common DB setup (via setUpClass) while keeping independent
tests.
TransactionCase should remain the primary test case superclass, but
SavepointCase can be a fair optimisation when setup costs far dominate.
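The idea can be sketched with plain unittest and sqlite3 savepoints (Odoo uses PostgreSQL, but the savepoint semantics are the same; this is an illustration, not the actual implementation):

```python
import sqlite3
import unittest

class SavepointCaseSketch(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # expensive shared setup runs once for the whole class
        cls.cr = sqlite3.connect(':memory:')
        cls.cr.isolation_level = None  # manage transactions manually
        cls.cr.execute('CREATE TABLE demo (name TEXT)')
        cls.cr.execute("INSERT INTO demo VALUES ('shared-setup')")

    @classmethod
    def tearDownClass(cls):
        cls.cr.close()

    def setUp(self):
        self.cr.execute('SAVEPOINT test_case')

    def tearDown(self):
        # every change made by the test is undone, keeping tests independent
        self.cr.execute('ROLLBACK TO SAVEPOINT test_case')

    def count(self):
        return self.cr.execute('SELECT COUNT(*) FROM demo').fetchone()[0]

    def test_insert(self):
        self.cr.execute("INSERT INTO demo VALUES ('only-here')")
        self.assertEqual(self.count(), 2)

    def test_isolation(self):
        # rows inserted by other tests were rolled back
        self.assertEqual(self.count(), 1)
```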
To distinguish two ir.attachment 'File Content' values, we rely on the file
size (presumably for efficiency).
This is done by setting bin_size in the context. Doing this, the
returned db_datas field will be the file size of the base64-encoded
content (if no filestore).
But this size is returned as a string, so "get_nice_size" (in
openerp/osv/fields.py), which gets this size, calculates the length of the
string. e.g: if the size is '2142', get_nice_size will return 4.
This fix solves this by converting the string to an integer, thus
unifying it with the filestore case (where we use os.path.getsize, which
returns an integer).
Also, the widget presenting the issue (FieldBinaryFile) has been changed
so it is always updated, even if the size is the same (as was already
done by 3632949 for the FieldBinaryImage widget).
closes#7223
opw-643071
Required after e17844c946 which
enabled graceful shutdown of workers, including the cron worker.
The latter may typically be busy on long-running tasks that
will not be aborted by a simple graceful shutdown request.
A typical shutdown sequence initiated by a daemon manager such
as start-stop-daemon will involve multiple SIGTERM signals to
ensure the process really stops in a timely manner.
This was not possible after e17844c946
if any cron worker was busy.
It was removed at 2df9060d97 and broke
tests in community modules that relied on it.
Tests using it should switch to the new get_db_name() method.
Closes#7054
In 7.0, a field overriding an inherited model would overwrite all the
previously set field attributes. In v8 the new API allows us to keep
previous attributes, and only overwrite the attributes we wish.
Following commit 7b1ef708 (October 2014), an old-api field override
could conserve previous attribute values in some cases (when the new value
is falsy: None, "", 0, False, (), {}, [], ...) when it should not. It
was partly an optimization to diminish the size of the orm registry
dictionaries to save memory (this patch has been tested with all odoo
modules and added +/- 3M).
This commit (proposed by rco) overwrites falsy values, thus
re-introducing the v7 behavior for old-api field overriding.
closes#7035
opw-639712
Since 31d817e, we rotate the session at login/logout.
Unfortunately, `openerpframework.js` does not support a session id change
at authentication and keeps the old one.
In order to keep compatibility with existing js clients (including 7.0
ones), we do not rotate the session at authentication.
Fixes#6948 Closes#6949
Returning request.not_found crashes with an internal error, because
get_response takes an environment as parameter.
Werkzeug Documentation:
Keep in mind that you have to pass an environment to get_response() because
some errors fetch additional information from the WSGI environment.
Remove languages that were not translated from the list of installable languages.
Installing languages with no translations (en_CA) has no effect, and having a too
long list is misleading. Sorry, Klingon.
Also add languages that were translated but not installable (fr_CA, en_AU)
Thanks to the parent commit, `request.env` doesn't raise `AttributeError`
anymore for requests without a session bound to a database.
This exception was bubbling up to the `digits` property (and `__getattr__`)
This reverts commit 49cb46fb78.
This reinstate commit eeedd2d9f5.
The `rotate` flag introduced by 31d817e849
was initialized at the very end of the session init, after
the reset of the `modified` flag.
This had the side-effect of marking the session as modified
for every request, saving the session to disk every time
even without any change.
Closes#6795
This reverts commit eeedd2d9f5.
This revision introduces an issue more serious than the ones
it fixes. It is no longer possible to receive an email
aimed at a sale.order thread with catchall.
To reproduce the issue:
- Create a new sale order
- Send a message in the thread to the customer
- Reply to the mail received in the customer mailbox
- Traceback, AttributeError: digits
opw-640370
When setting a custom filter with the domain
[(0, '=', 1)]
the domain was rejected, because (0, '=', 1) wasn't
considered a valid leaf, while it is.
This is because the Javascript converts this domain
using a list instead of a tuple:
[(0, '=', 1)] -> [[0, '=', 1]]
And therefore, comparing the "list" leaf
to the TRUE/FALSE leaf tuple failed.
Coercing "element" to a tuple solves the issue.
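A minimal sketch of the check (the leaf constants follow expression.py, the helper itself is illustrative):

```python
TRUE_LEAF = (1, '=', 1)
FALSE_LEAF = (0, '=', 1)

def is_boolean_leaf(element):
    # JSON-decoded domains arrive as lists; coerce to tuple before comparing
    return isinstance(element, (list, tuple)) and \
        tuple(element) in (TRUE_LEAF, FALSE_LEAF)
```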
opw-640306
When a record is created with a property field of type integer or float equal to 0,
no record is created in ir_property for this field. Then a search for records with
this type of field equal to 0 must match all the records that have no corresponding
property in the "ir.property" table for this field.
ps: in 7.0, the function search_multi doesn't exist and the bug linked to cost price
doesn't happen because it's a float field.
opw:639746
When receiving goods with average price set as costing method,
for a move from a company other than the SUPERUSER_ID company,
the average price updated was the one from the SUPERUSER company
instead of the one of the move.
opw-634167
In commit 04ba0e99, we introduced an optimization for reading inherited fields
in a single query. There is an issue when you have more than one level of
`_inherits`. The query looks like:
SELECT ...
FROM table0, table1 AS alias1, table2 AS alias2
WHERE table0.link0 = alias1.id AND table1.link1 = alias2.id AND ...
^^^^^^
should be alias1
This fixes the issue, and adds a test to reproduce it. The fix is based on
@emiprotechnologies's own proposal, but is cleaner and does not break APIs.
This fixes an issue in property `field.digits` that cannot find a valid cursor
to the database. Forcing the instantiation of an environment makes the cursor
retrievable.
When importing a CSV file with an "id" column containing
external IDs (XML IDs), the system automatically creates
or updates the corresponding ir.model.data entries.
This would fail for regular users who do not have
create/write access on this internal model.
The server parameter `log-db` gives the possibility
to store the logs of a server in a specific database,
in the ir.logging model.
Unfortunately, it wasn't possible to store these logs within
the database from which the logs came in a multi-database
environment (e.g. the SAAS).
This revision gives the possibility to store the logs
within the database where the logs come from,
using --log-db=%d (inspired from the --db-filter arg).
We also added the possibility to change the level of the logs to store in
the database, with the --log-db-level argument, which defaults
to `warning`.
The `set` method of the one2many class returns a list
of the fields that require recomputation,
the computing of the function fields being delayed
for performance reasons.
Nevertheless, if the `set` method was called
through another `set` method - in other words,
nested `set` calls - the fields to recompute returned
by the second, nested call to set were never recomputed;
the result list was simply lost.
e.g.:
```
create/write
│set
└─── create/write with recs.env.recompute set to False
│set
└─── create
with recs.env.recompute set to False
```
To overcome this problem, the list of old-api style
compute fields to recompute is stored
within the environment, and this list is cleared
each time store_set_values has done its job of
recomputing all old-api style compute fields.
opw-629650
opw-632624
closes#6053
Consider a new field that uses the same compute method as another existing
field. When the field is introduced in database, its value must be computed on
existing records. In such a case, the existing field should not be written, as
its value is not supposed to have changed. Not writing on the existing field
can avoid useless recomputations in cascade, which is the reason we introduce
this patch.
The lazy property `pure_function_fields` was not invalidated upon every setup
of models, and hence could contain old instances of fields. As every model
setup re-creates instances of fields, the property has to be recomputed.
Accessing `field.digits` can crash if no environment is available at that
point. This happens in function `get_pg_type()`, which is called from method
`_auto_init()`. An environment is simply created in the method's scope to be
available for `field.digits`.
The computation of property `digits` was creating a new cursor to call the
function that determines digits. This technique is fragile because the current
cursor may have pending updates that the new cursor will not see.
The issue was discovered by Cécile Tonglet (cto). She observed an infinite
loop during a database migration, and a traceback inside the loop showed the
presence of the `digits` property. This change fixes the infinite loop issue.
As done in write and already in the next version (see 0fd773a), accessing a deleted
record (through read or check access rights) should always raise a MissingError
instead of the generic except_orm.
This will allow code ignoring deleted records (e.g. the 'recompute' method) to safely
process the other records.
Fixes#6105
When searching whether a many2one property field is not set, there may be fewer
results, since only the ones with a reference set to NULL are returned.
We should also get those not in the table.
This commit changes this case: instead of returning ['id', 'in', {matching non-set ids}],
['id', 'not in', {matching set ids}] is returned.
e.g: if (1, 3, 8) are set, (5, 9) are not set. ['id', 'not in', (1, 3, 8)] would
be returned instead of ['id', 'in', (5, 9)] which might not select all non-set
property fields.
closes#6044
opw-631057
The ir.ui.view.graph_get() method depended on the natural
semi-random order of Python dict keys in the _columns dict.
When the number and/or names of the _columns happened to
yield the o2m field of the "incoming transitions" *before*
the "outgoing transitions" of the "Node model"
(e.g. workflow activity), it would swap the incoming and
outgoing transitions fields around, causing a crash later
in the `tools.graph.process` method (currently an infinite
loop in the `tree_order()` method of tools.graph.py).
Closes#3614
Fixes https://bugs.launchpad.net/openobject-server/+bug/1316948
opw-633765
The defaults of ir.values depend on the user company, but there was no
cache invalidation when the company of a user changes.
This fix adds the clearing of the defaults cache in this case.
Closes#6339
opw-629979
When reading over IN_MAX (currently 1000) records, the select is split into
subsets of IN_MAX records. However, the list of ids was split using the set()
method, which loses the order that may have been passed. The subsets are ordered
using the _order attribute, but the subsets were not ordered among themselves.
This is an issue when browsing an o2m field with more than 1000 lines, as it
will return sorted blocks but the order of the blocks is the order of the
contained ids (e.g. split(2, [5, 4, 3, 2, 1]) -> [[2,1], [4,3], [5]]).
Remove the set() to make sure the order of the given ids is preserved.
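The expected behaviour of the split can be sketched as follows (hypothetical helper; the point is that slicing, unlike set(), preserves the caller's ordering):

```python
def split_every(n, ids):
    """Split `ids` into chunks of at most n elements, keeping their order."""
    ids = list(ids)
    return [ids[i:i + n] for i in range(0, len(ids), n)]
```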
opw 616070, linked to #439
When starting the Odoo server with the --syslog parameter,
the logs are supposed to be pushed to the syslog.
This was no longer the case since e9d047e611.
Indeed, if no address is specified, ('localhost', 514) is used,
which does not always work. We therefore have no choice but to define
an address, which is '/var/run/log' for Mac OS X,
and '/dev/log' for any other Linux system.
opw-633074
`function` fields are fully copied via `copy.copy()`.
`copy.copy()` *does not* call `__init__` after object creation; it then
restores the state via `__setstate__()`, by updating `__dict__`, or via
`setattr()` when the object uses `__slots__`.
As `__init__` is not called, the newly created object does not have any
`_args` attribute. This leads to a recursive call of `__getattr__` when
`copy.copy` checks the existence of the `__setstate__` attribute.
We break this loop by explicitly checking the attribute
name accessed (we cannot check the presence of `_args` in `__dict__`
because we use `__slots__`).
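A standalone reproduction of the loop (illustrative classes; `__new__` mimics `copy.copy()` creating the instance without calling `__init__`):

```python
class BrokenField(object):
    __slots__ = ['_args']
    def __init__(self):
        self._args = {}
    def __getattr__(self, name):
        # if the _args slot was never set, reading self._args raises
        # AttributeError, which re-enters __getattr__ forever
        return self._args[name]

class FixedField(object):
    __slots__ = ['_args']
    def __init__(self):
        self._args = {}
    def __getattr__(self, name):
        # break the loop: refuse special/private names instead of touching _args
        if name.startswith('_'):
            raise AttributeError(name)
        return self._args[name]

fixed = FixedField.__new__(FixedField)    # no __init__, like copy.copy
hasattr(fixed, '__setstate__')            # False, as copy.copy expects
# on BrokenField, the same probe would raise RecursionError
```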
See http://bugs.python.org/issue5370 Fixes#6037
opw:633109
context_timestamp should always return a timezone-aware timestamp, even when
no timezone is set on the user.
7f88681 fixed the bug in old api fields (openerp/osv/fields.py) but it was not
applied to new api fields (openerp/fields.py). opw 616612