When importing a language (translation files), we want to push a big batch of
translation records into the database. Each record needs update-or-insert
logic (against existing translations) and sometimes resolution through
ir.model.data. Doing this loop in Python was slow (two read()s per record,
plus one for ir.model.data, then one insert or update) and it churned the
ORM cache (filled and cleared at each iteration).
Instead, follow the old-school database recipe for mass record insertion
(sketched below):
- create a temporary table without indexes or constraints
- quickly populate that temporary table with all records of the batch
  (through a dedicated "cursor" object)
- process the table, doing lookups with set-based SQL queries (SQL is at its
  best when it processes whole sets of data at once, not row-by-row loops)
- insert all records from the temporary table into ir.translation
- let all constraints of ir.translation fire (implicitly) at the end of that
  single query.
This improves translation import performance by at least ~3x.
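
A minimal sketch of the recipe above, using psycopg2; the temporary table
name, the helper function, and the ir_translation columns used here are
simplifying assumptions, not the exact schema or import code:

    import psycopg2

    def bulk_import_translations(conn, records):
        """records: iterable of (lang, name, src, value) tuples."""
        cr = conn.cursor()
        # 1. temporary table without indexes or constraints
        cr.execute("""
            CREATE TEMP TABLE tmp_translation_import
                (lang varchar, name varchar, src text, value text)
            ON COMMIT DROP""")
        # 2. populate it quickly with the whole batch
        cr.executemany(
            "INSERT INTO tmp_translation_import (lang, name, src, value)"
            " VALUES (%s, %s, %s, %s)", records)
        # 3. set-based UPDATE for translations that already exist
        cr.execute("""
            UPDATE ir_translation AS it
               SET value = tmp.value
              FROM tmp_translation_import AS tmp
             WHERE it.lang = tmp.lang
               AND it.name = tmp.name
               AND it.src = tmp.src""")
        # 4. one INSERT for the remaining rows; the target table's
        #    constraints are checked implicitly within this single statement
        cr.execute("""
            INSERT INTO ir_translation (lang, name, src, value)
            SELECT tmp.lang, tmp.name, tmp.src, tmp.value
              FROM tmp_translation_import AS tmp
             WHERE NOT EXISTS (
                   SELECT 1 FROM ir_translation AS it
                    WHERE it.lang = tmp.lang
                      AND it.name = tmp.name
                      AND it.src = tmp.src)""")
        conn.commit()

    # usage (connection parameters are placeholders):
    # conn = psycopg2.connect(dbname="openerp")
    # bulk_import_translations(conn,
    #     [("fr_FR", "ir.ui.menu,name", "Sales", "Ventes")])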
bzr revid: xrg@linux.gr-20110608162059-rfy1vvwp8w66ry0i
Selection fields validate themselves in the ORM through
_check_selection_field_value(), which /retrieves/ the .selection list and
checks whether the value to be written is among the allowed ones. This,
however, is very expensive for models with variable selection lists (e.g.
ones that require an SQL query to build the list). For ir.translation, that
query would be repeated for every single write() to ir.translation. The
'lang' field is actually a reference to res.lang.code (not the .id, sadly),
so let SQL enforce the constraint for us.
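
A hedged sketch of what "let SQL enforce the constraint" can look like: a
foreign key from ir_translation.lang to res_lang.code, installed with plain
DDL. The constraint name and the initialization hook are illustrative
assumptions, not the exact code of this commit:

    def add_lang_constraint(cr):
        # drop first so the statement can be re-run safely on upgrade
        cr.execute("""
            ALTER TABLE ir_translation
                DROP CONSTRAINT IF EXISTS ir_translation_lang_fkey""")
        # 'lang' stores the language *code*, so the foreign key targets
        # res_lang.code (which must be UNIQUE for this to work), not the id
        cr.execute("""
            ALTER TABLE ir_translation
                ADD CONSTRAINT ir_translation_lang_fkey
                FOREIGN KEY (lang) REFERENCES res_lang (code)""")

With the constraint in the database, write() no longer needs the ORM to
rebuild and scan the selection list; an invalid code is rejected by
PostgreSQL itself.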
bzr revid: xrg@linux.gr-20110608093254-ua66p5co6dc203zs
- Some logging code moved from netsvc.py to loglevels.py
- Changed imports to use the new openerp module
- config and netsvc initialization calls moved to openerp-server.py
- Moved openerp-server.py outside the old bin directory
- Some imports in tools moved inside methods to break mutual dependencies
  (see the short illustration below)
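
The last point is the standard deferred-import trick for breaking an import
cycle; a hedged illustration (the function and its body are hypothetical,
only the module names come from the commit):

    # openerp/tools/misc.py -- hypothetical excerpt
    def current_server_environment():
        # A top-level "import netsvc" here would create a circular import,
        # because netsvc itself imports tools at load time.  Importing
        # inside the function defers the import until the first call, when
        # both modules are already fully initialized.
        import netsvc
        return netsvc.__name__  # placeholder for a real netsvc call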
bzr revid: vmt@openerp.com-20110207125723-ooee7d7ng5elmkso