At language import time (translation files), we want to push a big batch of
translation records into the database. These need update-or-insert logic
(against existing records), and sometimes resolution through ir.model.data.
Doing this loop in Python was slow (two read()s per record, plus one for
ir.model.data, plus one insert or update) and churned the ORM cache
(filled and cleared at each iteration).
Instead, follow the old-school database recipe for mass record insertion:
- create a temporary table without indexes or constraints
- quickly populate that temp table with all records of the batch
  (through a dedicated "cursor" object)
- process the table, doing lookups with set-based SQL queries (SQL
  excels at processing whole batches of data in one pass)
- insert all records from the temp table into ir.model.data
- let all constraints of ir.model.data fire (implicitly) once, at the
  end of that single query.
This improves the performance of translation imports by at least ~3x.
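The recipe above can be sketched as follows. This is a hypothetical illustration only: it uses Python's stdlib sqlite3 so it is self-contained, whereas the real code targets PostgreSQL, and the `translations` table, its columns, and the sample batch are all invented for the example.

```python
import sqlite3

# Invented schema standing in for the real translation tables.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE translations"
            " (id INTEGER PRIMARY KEY, src TEXT UNIQUE, value TEXT)")
con.execute("INSERT INTO translations (src, value) VALUES ('Hello', 'Bonjour')")

# Step 1: temporary table without indexes or constraints.
con.execute("CREATE TEMP TABLE tmp_trans (src TEXT, value TEXT)")

# Step 2: quickly populate it with the whole batch.
batch = [("Hello", "Salut"), ("Goodbye", "Au revoir")]
con.executemany("INSERT INTO tmp_trans (src, value) VALUES (?, ?)", batch)

# Step 3: resolve records that already exist with one set-based UPDATE,
# instead of a read()/write() pair per record in a Python loop.
con.execute("""
    UPDATE translations
    SET value = (SELECT t.value FROM tmp_trans t
                 WHERE t.src = translations.src)
    WHERE src IN (SELECT src FROM tmp_trans)
""")

# Step 4: insert the remaining new records in a single query; the UNIQUE
# constraint on src is checked here, once, not at every Python iteration.
con.execute("""
    INSERT INTO translations (src, value)
    SELECT src, value FROM tmp_trans t
    WHERE NOT EXISTS (SELECT 1 FROM translations x WHERE x.src = t.src)
""")

rows = dict(con.execute("SELECT src, value FROM translations"))
# rows == {'Hello': 'Salut', 'Goodbye': 'Au revoir'}
```

The point of the pattern is that steps 3 and 4 each touch the whole batch in one statement, so the constraint checks and index maintenance on the real table run once per batch rather than once per record.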
bzr revid: xrg@linux.gr-20110608162059-rfy1vvwp8w66ry0i
- openerp.pooler no longer provides get_db_only, which is now provided by sql_db
- openerp.sql_db no longer relies on netsvc, which is good as it was
creating a circular import. The downside is that db_close callers now also
have to clean up the Agent themselves.
bzr revid: vmt@openerp.com-20110420141407-au0oanwjc0t15vy5
- Some logging code moved from netsvc.py to loglevels.py
- Changed imports to use the new openerp module
- config and netsvc initialization calls moved to openerp-server.py
- Moved openerp-server.py outside the old bin directory
- Some imports in tools moved inside the methods to break mutual dependencies
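The last point, moving an import inside a method, can be sketched with two toy modules. The module names `tools_mod` and `netsvc_mod` are invented stand-ins, written to temporary files so the example is self-contained:

```python
import os
import sys
import tempfile
import textwrap

tmp = tempfile.mkdtemp()

# tools_mod: a top-level 'import netsvc_mod' here would make the two
# modules mutually dependent; deferring it into the function means the
# import is resolved at call time, when netsvc_mod is fully loaded.
with open(os.path.join(tmp, "tools_mod.py"), "w") as f:
    f.write(textwrap.dedent("""
        def log(msg):
            import netsvc_mod  # deferred import breaks the cycle
            return netsvc_mod.dispatch(msg)
    """))

# netsvc_mod: imports tools_mod at the top level, which is now safe
# because tools_mod no longer imports back at module load time.
with open(os.path.join(tmp, "netsvc_mod.py"), "w") as f:
    f.write(textwrap.dedent("""
        import tools_mod

        def dispatch(msg):
            return "dispatched: " + msg
    """))

sys.path.insert(0, tmp)
import tools_mod  # loads cleanly; no cycle at import time

result = tools_mod.log("ping")
# result == 'dispatched: ping'
```

If both modules imported each other at the top level, one of them would observe the other in a partially initialized state during startup; the in-method import trades that for a cheap dictionary lookup in sys.modules on each call.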
bzr revid: vmt@openerp.com-20110207125723-ooee7d7ng5elmkso