the web client currently does not send full record data when an o2m is used as a context value; it only sends the ids (at least when the o2m records have not been locally modified)
bzr revid: xmo@openerp.com-20111007131052-4qqo027b2mp16nd6
* Extracted creation of the VARCHAR pg_type into a separate function; a missing
  size (or size 0) now creates an unlimited VARCHAR field (effectively limited by
  postgres to 1GB per value)
* Extracted fields to pg_types mapping outside of get_pg_type
* Made fields.function recursively forward to get_pg_type (via a type overload)
  instead of reimplementing half of get_pg_type itself
* Simplified some get_pg_type cases
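As a rough illustration of the extracted helper described above (a sketch, not the exact merged code; the function name and error handling are assumptions):

```python
def pg_varchar(size=0):
    """Return the VARCHAR column declaration for the given size.

    A missing size (or size 0) yields an unlimited VARCHAR, which
    postgres effectively caps at ~1GB per value.
    """
    if size:
        if not isinstance(size, int) or size < 0:
            raise ValueError("VARCHAR size must be a positive integer")
        return 'VARCHAR(%d)' % size
    return 'VARCHAR'
```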
Note: if this is merged, it might be nice to convert fields.selection to use an
API similar to fields.function: default to VARCHAR storage and, if there is a
type attribute override, use that type instead. Currently, fields.selection is
handled the following way:
* If the selection is a list and the first element of its first item is an
  integer, use type int4
* If the field has a size=-1 attribute, use type int4
* Otherwise use type varchar (with the size specified on the field, if any)
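The current decision logic above can be sketched like this (an illustrative standalone function, not the actual ORM code; the signature is an assumption for clarity):

```python
def selection_pg_type(selection, size=None):
    """Sketch of how the current fields.selection storage type is chosen.

    `selection` is the list of (value, label) pairs (or something else,
    e.g. a callable); `size` is the optional size attribute of the field.
    """
    # list selection whose first value is an integer -> integer column
    if isinstance(selection, list) and selection \
            and isinstance(selection[0][0], int):
        return 'int4'
    # explicit size=-1 also forces an integer column
    if size == -1:
        return 'int4'
    # everything else is stored as varchar, sized if a size was given
    return 'VARCHAR(%d)' % size if size else 'VARCHAR'
```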
One change from the previous version: when the first element of the first
selection item was a str or unicode, the old code looked for the longest string
in the selection and used its length as the field's size. This meant silent
loss of data if new, longer items were added to the selection without
recreating the whole db (or at least manually altering the relevant columns).
It also used the field's size or *16* as a minimum default, for some reason,
and if no size was specified on the selection (or size=0) it simply hardcoded
the size to 16.
bzr revid: vmt@openerp.com-20111006081336-uka6srvdmvs0s4lm
Python has an iteration fallback protocol: when iterating over an
object which does not define __iter__, if the object defines
__getitem__, Python treats it as a sequence and invokes __getitem__
starting from index `0` until IndexError is raised.
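A minimal demonstration of that fallback (the class is purely illustrative):

```python
# A class with __getitem__ but no __iter__: Python's fallback protocol
# iterates it by calling __getitem__ with 0, 1, 2, ... until IndexError.
class Legacy:
    def __getitem__(self, index):
        if index >= 3:
            raise IndexError(index)
        return index * 10

print(list(Legacy()))  # [0, 10, 20]
```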
This can be a problem in openerp in methods which expect a list of
ids but are given a single id (not a singleton list): in that
case self.browse() returns a single browse_record (instead of a
list) and the method tries to iterate over it by calling
browse_record.__getitem__.
Problem is that browse_record.__getitem__ is pretty deep and does
little validation, so the error appears 3 frames below where the
actual issue is, with a completely cryptic message of "KeyError: 0",
which makes the actual issue harder to track.
By raising an error immediately in browse_record.__iter__, this kind
of issue is much easier to handle: the stack points precisely to the
frame in error, with a clearer message.
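A simplified sketch of the idea (not the actual OpenERP class; the structure and error message here are assumptions):

```python
# A browse_record-like object whose __getitem__ performs field lookups.
# Defining __iter__ to raise immediately turns the cryptic "KeyError: 0"
# several frames deep into an explicit error at the exact call site.
class BrowseRecord:
    def __init__(self, record_id):
        self._data = {'id': record_id}

    def __getitem__(self, name):
        # stands in for the deep, loosely-validated lookup of the real class
        return self._data[name]

    def __iter__(self):
        raise NotImplementedError(
            'Iteration is not allowed on a single browse_record')
```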
bzr revid: xmo@openerp.com-20111005112444-jcp9fw6pa36ahpsd
- _original_module is now available on model/browse_records
- context usage in res.partner.*
- proper name_search() + default values for res.currency
- active_model in wkf exec context
- safe_eval allows try/except/finally
- yaml_import: !ref {id: xml_id} works
- ir_mail_server: support for alternative body/subtype
- default value for web.base.url config parameter
- consistency rename: Model.*get_xml_id* -> *get_external_id*
bzr revid: odo@openerp.com-20111005100954-c8mbd4kz6kkqaj84
The 'module' field of ir.model.data is required, so we
need to set it when auto-generating ir.model.data
entries. This acts as the namespace of the record.
Because we don't want exported records to look like they
belong to an existing module (and risk being garbage
collected at the next module update), we put these
auto-generated names in a reserved '__export__' module
namespace.
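For illustration, a hypothetical helper building such an external id (the exact naming scheme below is an assumption, not the committed code):

```python
def export_xml_id(model, record_id):
    """Build an external id for a record that has no ir.model.data entry,
    placing it in the reserved '__export__' namespace so it is never
    mistaken for a record belonging to a real module (and garbage
    collected at the next module update)."""
    return '__export__.%s_%d' % (model.replace('.', '_'), record_id)
```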
bzr revid: odo@openerp.com-20111004205140-duaww77ng4qmktj2
ORM Models already have a _module attribute that contains the
name of the module that declared the class; however,
sometimes we also need the name of the module that
declared this model for the first time.
This is stored in _original_module: the
name of the module to which the first parent with
the same _name belongs.
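The lookup can be sketched as follows (an illustrative toy, not the real ORM; class names, the base class, and the helper function are assumptions):

```python
# Derive _original_module by walking the MRO from the oldest ancestor
# down and taking _module from the first class declaring the same _name.
class Model:
    _name = None
    _module = None

def original_module(cls):
    for parent in reversed(cls.__mro__):
        if getattr(parent, '_name', None) == cls._name:
            return parent._module
    return cls._module

class ResPartner(Model):
    _name = 'res.partner'
    _module = 'base'          # model first declared in 'base'

class ResPartnerExtended(ResPartner):
    _name = 'res.partner'
    _module = 'crm'           # same model re-declared in 'crm'
```

Here `original_module(ResPartnerExtended)` yields `'base'`, the module that declared `res.partner` first.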
bzr revid: odo@openerp.com-20111004204705-8z9o70n1ynpvng3i