#
# Packaging process
#
# Executive summary: This class iterates over the functions listed in PACKAGEFUNCS,
# taking D and splitting it up into the packages listed in PACKAGES, placing the
# resulting output in PKGDEST.
#
# There are the following default steps but PACKAGEFUNCS can be extended:
#
# a) package_get_auto_pr - get PRAUTO from remote PR service
#
# b) perform_packagecopy - Copy D into PKGD
#
# c) package_do_split_locales - Split out the locale files; updates FILES and PACKAGES
#
# d) split_and_strip_files - split the files into runtime and debug and strip them.
# Debug files include the split debug info and associated sources that end up in -dbg packages
#
# e) fixup_perms - Fix up permissions in the package before we split it.
#
# f) populate_packages - Split the files in PKGD into separate packages in PKGDEST/<pkgname>
# Also triggers the binary stripping code to put files in -dbg packages.
#
# g) package_do_filedeps - Collect per-file run-time dependency metadata
# The data is stored in FILER{PROVIDES,DEPENDS}_file_pkg variables with
# a list of affected files in FILER{PROVIDES,DEPENDS}FLIST_pkg
#
# h) package_do_shlibs - Look at the shared libraries generated and automatically add any
# dependencies found. Also stores the package name so anyone else using this library
# knows which package to depend on.
#
# i) package_do_pkgconfig - Keep track of which packages need and provide which .pc files
#
# j) read_shlibdeps - Reads the stored shlibs information into the metadata
#
# k) package_depchains - Adds automatic dependencies to -dbg and -dev packages
#
# l) emit_pkgdata - saves the packaging data into PKGDATA_DIR for use in later
# packaging steps
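The steps above are driven by a simple dispatch loop over PACKAGEFUNCS. A simplified, self-contained sketch of that pattern follows (MiniData stands in for the BitBake datastore, and the step names/bodies are placeholders, not the real packaging functions; the real class dispatches each name with bb.build.exec_func() instead of a plain dict lookup):

```python
# Sketch of the do_package driver loop over PACKAGEFUNCS (simplified).
class MiniData:
    """Minimal stand-in for the BitBake datastore 'd'."""
    def __init__(self):
        self._vars = {}
    def setVar(self, k, v):
        self._vars[k] = v
    def getVar(self, k):
        return self._vars.get(k)

def run_packagefuncs(d, funcs_by_name):
    # Mirrors: for f in (d.getVar('PACKAGEFUNCS') or '').split():
    #              bb.build.exec_func(f, d)
    ran = []
    for f in (d.getVar('PACKAGEFUNCS') or '').split():
        funcs_by_name[f](d)
        ran.append(f)
    return ran

d = MiniData()
d.setVar('PACKAGEFUNCS', 'perform_packagecopy populate_packages emit_pkgdata')
steps = {
    'perform_packagecopy': lambda d: d.setVar('COPIED', '1'),
    'populate_packages': lambda d: d.setVar('SPLIT', '1'),
    'emit_pkgdata': lambda d: d.setVar('PKGDATA', '1'),
}
order = run_packagefuncs(d, steps)
```

Because PACKAGEFUNCS is an ordinary space-separated variable, recipes and classes can append their own steps and they run in list order.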
inherit packagedata
inherit chrpath
# Need the package_qa_handle_error() in insane.bbclass
inherit insane
PKGD = "${WORKDIR}/package"
PKGDEST = "${WORKDIR}/packages-split"
LOCALE_SECTION ?= ''
ALL_MULTILIB_PACKAGE_ARCHS = "${@all_multilib_tune_values(d, 'PACKAGE_ARCHS')}"
# rpm is used for the per-file dependency identification
PACKAGE_DEPENDS += "rpm-native"
# If your postinstall can execute at rootfs creation time rather than on
# target but depends on a native/cross tool in order to execute, you need to
# list that tool in PACKAGE_WRITE_DEPS. Target package dependencies belong
# in the package dependencies as normal, this is just for native/cross support
# tools at rootfs build time.
PACKAGE_WRITE_DEPS ??= ""
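For illustration, a hypothetical recipe fragment (the names `update-foo-cache` and `foo-native` are invented for this example) whose postinst runs a native cache tool at rootfs creation time would declare:

```
pkg_postinst_${PN} () {
    update-foo-cache ${D}${datadir}/foo
}
PACKAGE_WRITE_DEPS += "foo-native"
```

Without the PACKAGE_WRITE_DEPS entry the postinst may be deferred to first boot on target, where the native tool is unavailable.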
def legitimize_package_name(s):
"""
Make sure package names are legitimate strings
"""
import re
def fixutf(m):
cp = m.group(1)
if cp:
return ('\\u%s' % cp).encode('latin-1').decode('unicode_escape')
# Handle unicode codepoints encoded as <U0123>, as in glibc locale files.
s = re.sub('<U([0-9A-Fa-f]{1,4})>', fixutf, s)
# Remaining package name validity fixes
return s.lower().replace('_', '-').replace('@', '+').replace(',', '+').replace('/', '-')
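A self-contained sketch of the same transformation rules, re-implemented here so it can run outside BitBake (behaviour is assumed to match the function above for the common 4-hex-digit <Uxxxx> codepoints used in glibc locale files):

```python
import re

def legitimize_package_name_sketch(s):
    # Decode glibc-style <Uxxxx> codepoints (4 hex digits assumed),
    # then apply the same character substitutions as above.
    def fixutf(m):
        return ('\\u%s' % m.group(1)).encode('latin-1').decode('unicode_escape')
    s = re.sub('<U([0-9A-Fa-f]{4})>', fixutf, s)
    return (s.lower().replace('_', '-').replace('@', '+')
            .replace(',', '+').replace('/', '-'))

# A glibc locale-style name with a codepoint and characters that are
# not legal in package names:
print(legitimize_package_name_sketch('caf<U00E9>_UTF-8'))
print(legitimize_package_name_sketch('Foo_Bar@1,2'))
```

The substitutions keep package names within the lowercase, `-`/`+` character set that the package backends accept.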
def do_split_packages(d, root, file_regex, output_pattern, description, postinst=None, recursive=False, hook=None, extra_depends=None, aux_files_pattern=None, postrm=None, allow_dirs=False, prepend=False, match_path=False, aux_files_pattern_verbatim=None, allow_links=False, summary=None):
"""
Used in .bb files to split up dynamically generated subpackages of a
given package, usually plugins or modules.
Arguments:
root -- the path in which to search
file_regex -- regular expression to match searched files. Use
parentheses () to mark the part of this expression
that should be used to derive the module name (to be
substituted where %s is used in other function
arguments as noted below)
output_pattern -- pattern to use for the package names. Must include %s.
description -- description to set for each package. Must include %s.
postinst -- postinstall script to use for all packages (as a
string)
recursive -- True to perform a recursive search - default False
hook -- a hook function to be called for every match. The
function will be called with the following arguments
(in the order listed):
f: full path to the file/directory match
pkg: the package name
file_regex: as above
output_pattern: as above
modulename: the module name derived using file_regex
extra_depends -- extra runtime dependencies (RDEPENDS) to be set for
all packages. The default value of None causes a
dependency on the main package (${PN}) - if you do
not want this, pass '' for this parameter.
aux_files_pattern -- extra item(s) to be added to FILES for each
package. Can be a single string item or a list of
strings for multiple items. Must include %s.
postrm -- postrm script to use for all packages (as a string)
allow_dirs -- True allow directories to be matched - default False
prepend -- if True, prepend created packages to PACKAGES instead
of the default False which appends them
match_path -- match file_regex on the whole relative path to the
root rather than just the file name
aux_files_pattern_verbatim -- extra item(s) to be added to FILES for
each package, using the actual derived module name
rather than converting it to something legal for a
package name. Can be a single string item or a list
of strings for multiple items. Must include %s.
allow_links -- True to allow symlinks to be matched - default False
summary -- Summary to set for each package. Must include %s;
defaults to description if not set.
"""
dvar = d.getVar('PKGD')
root = d.expand(root)
output_pattern = d.expand(output_pattern)
extra_depends = d.expand(extra_depends)
# If the root directory doesn't exist, don't error out later but silently do
# no splitting.
if not os.path.exists(dvar + root):
return []
ml = d.getVar("MLPREFIX")
if ml:
if not output_pattern.startswith(ml):
output_pattern = ml + output_pattern
newdeps = []
for dep in (extra_depends or "").split():
if dep.startswith(ml):
newdeps.append(dep)
else:
newdeps.append(ml + dep)
if newdeps:
extra_depends = " ".join(newdeps)
packages = d.getVar('PACKAGES').split()
split_packages = set()
if postinst:
postinst = '#!/bin/sh\n' + postinst + '\n'
if postrm:
postrm = '#!/bin/sh\n' + postrm + '\n'
if not recursive:
objs = os.listdir(dvar + root)
else:
objs = []
for walkroot, dirs, files in os.walk(dvar + root):
for file in files:
relpath = os.path.join(walkroot, file).replace(dvar + root + '/', '', 1)
if relpath:
objs.append(relpath)
    if extra_depends is None:
extra_depends = d.getVar("PN")
if not summary:
summary = description
for o in sorted(objs):
import re, stat
if match_path:
m = re.match(file_regex, o)
else:
m = re.match(file_regex, os.path.basename(o))
if not m:
continue
f = os.path.join(dvar + root, o)
mode = os.lstat(f).st_mode
if not (stat.S_ISREG(mode) or (allow_links and stat.S_ISLNK(mode)) or (allow_dirs and stat.S_ISDIR(mode))):
continue
on = legitimize_package_name(m.group(1))
pkg = output_pattern % on
split_packages.add(pkg)
        if pkg not in packages:
if prepend:
packages = [pkg] + packages
else:
packages.append(pkg)
oldfiles = d.getVar('FILES_' + pkg)
newfile = os.path.join(root, o)
# These names will be passed through glob() so if the filename actually
# contains * or ? (rare, but possible) we need to handle that specially
newfile = newfile.replace('*', '[*]')
newfile = newfile.replace('?', '[?]')
if not oldfiles:
the_files = [newfile]
if aux_files_pattern:
if type(aux_files_pattern) is list:
for fp in aux_files_pattern:
the_files.append(fp % on)
else:
the_files.append(aux_files_pattern % on)
if aux_files_pattern_verbatim:
if type(aux_files_pattern_verbatim) is list:
for fp in aux_files_pattern_verbatim:
the_files.append(fp % m.group(1))
else:
the_files.append(aux_files_pattern_verbatim % m.group(1))
d.setVar('FILES_' + pkg, " ".join(the_files))
else:
d.setVar('FILES_' + pkg, oldfiles + " " + newfile)
if extra_depends != '':
d.appendVar('RDEPENDS_' + pkg, ' ' + extra_depends)
if not d.getVar('DESCRIPTION_' + pkg):
d.setVar('DESCRIPTION_' + pkg, description % on)
if not d.getVar('SUMMARY_' + pkg):
d.setVar('SUMMARY_' + pkg, summary % on)
if postinst:
d.setVar('pkg_postinst_' + pkg, postinst)
if postrm:
d.setVar('pkg_postrm_' + pkg, postrm)
if callable(hook):
hook(f, pkg, file_regex, output_pattern, m.group(1))
d.setVar('PACKAGES', ' '.join(packages))
return list(split_packages)
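For reference, a recipe-side sketch of calling this function (the recipe name, plugin directory, and regex are invented for illustration, following the docstring above):

```
python populate_packages_prepend () {
    plugindir = d.expand('${libdir}/myapp/plugins')
    do_split_packages(d, plugindir, '^lib(.*)\.so$',
                      output_pattern='myapp-plugin-%s',
                      description='MyApp plugin for %s',
                      extra_depends='')
}
```

Each matching `libNAME.so` would become a `myapp-plugin-name` package; passing `extra_depends=''` suppresses the default RDEPENDS on ${PN}.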
PACKAGE_DEPENDS += "file-native"
python () {
if d.getVar('PACKAGES') != '':
deps = ""
for dep in (d.getVar('PACKAGE_DEPENDS') or "").split():
deps += " %s:do_populate_sysroot" % dep
d.appendVarFlag('do_package', 'depends', deps)
# shlibs requires any DEPENDS to have already packaged for the *.list files
d.appendVarFlag('do_package', 'deptask', " do_packagedata")
}
# Get a list of files from file vars by searching files under current working directory
# The list contains symlinks, directories and normal files.
def files_from_filevars(filevars):
    import os, glob
cpath = oe.cachedpath.CachedPath()
files = []
for f in filevars:
if os.path.isabs(f):
f = '.' + f
if not f.startswith("./"):
f = './' + f
globbed = glob.glob(f)
if globbed:
            if [f] != globbed:
files += globbed
continue
files.append(f)
symlink_paths = []
for ind, f in enumerate(files):
# Handle directory symlinks. Truncate path to the lowest level symlink
parent = ''
for dirname in f.split('/')[:-1]:
parent = os.path.join(parent, dirname)
if dirname == '.':
continue
if cpath.islink(parent):
bb.warn("FILES contains file '%s' which resides under a "
"directory symlink. Please fix the recipe and use the "
"real path for the file." % f[1:])
symlink_paths.append(f)
files[ind] = parent
f = parent
break
if not cpath.islink(f):
if cpath.isdir(f):
newfiles = [ os.path.join(f,x) for x in os.listdir(f) ]
if newfiles:
files += newfiles
return files, symlink_paths
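The normalisation-and-glob step at the top of this function can be sketched standalone (a simplified re-implementation for illustration: absolute paths become ./-relative, globs expand against the current directory, literal names pass through; the CachedPath/symlink handling of the real function is deliberately omitted):

```python
import glob
import os

def files_from_filevars_sketch(filevars):
    # Mirror the path normalisation and glob expansion of
    # files_from_filevars(), without the symlink-truncation pass.
    files = []
    for f in filevars:
        if os.path.isabs(f):
            f = '.' + f
        if not f.startswith('./'):
            f = './' + f
        globbed = glob.glob(f)
        # Keep the literal name when the glob is a no-op (no wildcard
        # or no match), so missing files still appear in the list.
        if globbed and [f] != globbed:
            files += globbed
            continue
        files.append(f)
    return files
```

Run from within the package root, `['/etc/*.conf']` expands to the matching `./etc/...` entries, while a non-existent literal path is returned unchanged.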
# Called in package_<rpm,ipk,deb>.bbclass to get the correct list of configuration files
def get_conffiles(pkg, d):
pkgdest = d.getVar('PKGDEST')
root = os.path.join(pkgdest, pkg)
cwd = os.getcwd()
os.chdir(root)
    conffiles = d.getVar('CONFFILES_%s' % pkg)
    if conffiles is None:
        conffiles = d.getVar('CONFFILES')
    if conffiles is None:
        conffiles = ""
    conffiles = conffiles.split()
conf_orig_list = files_from_filevars(conffiles)[0]
# Remove links and directories from conf_orig_list to get conf_list which only contains normal files
conf_list = []
for f in conf_orig_list:
if os.path.isdir(f):
continue
if os.path.islink(f):
continue
if not os.path.exists(f):
continue
conf_list.append(f)
# Remove the leading './'
for i in range(0, len(conf_list)):
conf_list[i] = conf_list[i][1:]
os.chdir(cwd)
return conf_list
def checkbuildpath(file, d):
tmpdir = d.getVar('TMPDIR')
with open(file) as f:
file_content = f.read()
if tmpdir in file_content:
return True
return False
def splitdebuginfo(file, debugfile, debugsrcdir, sourcefile, d):
# Function to split a single file into two components, one is the stripped
# target system binary, the other contains any debugging information. The
# two files are linked to reference each other.
#
# sourcefile is also generated containing a list of debugsources
import stat
dvar = d.getVar('PKGD')
objcopy = d.getVar("OBJCOPY")
debugedit = d.expand("${STAGING_LIBDIR_NATIVE}/rpm/debugedit")
# We ignore kernel modules, we don't generate debug info files.
if file.find("/lib/modules/") != -1 and file.endswith(".ko"):
return 1
newmode = None
    if not os.access(file, os.W_OK) or not os.access(file, os.R_OK):
origmode = os.stat(file)[stat.ST_MODE]
newmode = origmode | stat.S_IWRITE | stat.S_IREAD
os.chmod(file, newmode)
# We need to extract the debug src information here...
if debugsrcdir:
cmd = "'%s' -i -l '%s' '%s'" % (debugedit, sourcefile, file)
(retval, output) = oe.utils.getstatusoutput(cmd)
if retval:
bb.fatal("debugedit failed with exit code %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
bb.utils.mkdirhier(os.path.dirname(debugfile))
cmd = "'%s' --only-keep-debug '%s' '%s'" % (objcopy, file, debugfile)
(retval, output) = oe.utils.getstatusoutput(cmd)
if retval:
bb.fatal("objcopy failed with exit code %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
# Set the debuglink to have the view of the file path on the target
cmd = "'%s' --add-gnu-debuglink='%s' '%s'" % (objcopy, debugfile, file)
(retval, output) = oe.utils.getstatusoutput(cmd)
if retval:
bb.fatal("objcopy failed with exit code %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
if newmode:
os.chmod(file, origmode)
return 0
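The external commands invoked above implement the usual debug-split sequence. As a plain command illustration (the binary name `foo` is hypothetical; option spellings are taken from the cmd strings above):

```
debugedit -i -l debugsources.list foo       # collect/rewrite debug source paths
objcopy --only-keep-debug foo foo.debug     # extract debug info into foo.debug
objcopy --add-gnu-debuglink=foo.debug foo   # point the binary at its debug file
```

After the later strip pass removes the debug sections from `foo`, debuggers find them again via the `.gnu_debuglink` section.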
def copydebugsources(debugsrcdir, d):
    # The debug src information written out to sourcefile is further processed
# and copied to the destination here.
import stat
sourcefile = d.expand("${WORKDIR}/debugsources.list")
if debugsrcdir and os.path.isfile(sourcefile):
dvar = d.getVar('PKGD')
strip = d.getVar("STRIP")
objcopy = d.getVar("OBJCOPY")
debugedit = d.expand("${STAGING_LIBDIR_NATIVE}/rpm/bin/debugedit")
workdir = d.getVar("WORKDIR")
workparentdir = os.path.dirname(os.path.dirname(workdir))
workbasedir = os.path.basename(os.path.dirname(workdir)) + "/" + os.path.basename(workdir)
# If build path exists in sourcefile, it means toolchain did not use
# -fdebug-prefix-map to compile
if checkbuildpath(sourcefile, d):
localsrc_prefix = workparentdir + "/"
else:
localsrc_prefix = "/usr/src/debug/"
nosuchdir = []
basepath = dvar
for p in debugsrcdir.split("/"):
basepath = basepath + "/" + p
if not cpath.exists(basepath):
nosuchdir.append(basepath)
bb.utils.mkdirhier(basepath)
cpath.updatecache(basepath)
Switch to Recipe Specific Sysroots
This patch is comparatively large and invasive. It does only do one thing, switching the
system to build using recipe specific sysroots and where changes could be isolated from it,
that has been done.
With the current single sysroot approach, its possible for software to find things which
aren't in their dependencies. This leads to a determinism problem and is a growing issue in
several of the market segments where OE makes sense. The way to solve this problem for OE is
to have seperate sysroots for each recipe and these will only contain the dependencies for
that recipe.
Its worth noting that this is not task specific sysroots and that OE's dependencies do vary
enormously by task. This did result in some implementation challenges. There is nothing stopping
the implementation of task specific sysroots at some later point based on this work but
that as deemed a bridge too far right now.
Implementation details:
* Rather than installing the sysroot artefacts into a combined sysroots, they are now placed in
TMPDIR/sysroot-components/PACKAGE_ARCH/PN.
* WORKDIR/recipe-sysroot and WORKDIR/recipe-sysroot-native are built by hardlinking in files
from the sysroot-component trees. These new directories are known as RECIPE_SYSROOT and
RECIPE_SYSROOT_NATIVE.
* This construction is primarily done by a new do_prepare_recipe_sysroot task which runs
before do_configure and consists of a call to the extend_recipe_sysroot function.
* Other tasks need things in the sysroot before/after this, e.g. do_patch needs quilt-native
and do_package_write_deb needs dpkg-native. The code therefore inspects the dependencies
for each task and adds extend_recipe_sysroot as a prefunc if it has populate_sysroot
dependencies.
* We have to do a search/replace 'fixme' operation on the files installed into the sysroot to
change hardcoded paths into the correct ones. We create a fixmepath file in the component
directory which lists the files which need this operation.
* Some files have "postinstall" commands which need to run against them, e.g. gdk-pixbuf each
time a new loader is added. These are handled by adding files in bindir with the name
prefixed by "postinst-" and are run in each sysroot as its created if they're present.
This did mean most sstate postinstalls have to be rewritten but there shouldn't be many of them.
* Since a recipe can have multiple tasks and these tasks can run against each other at the same
time we have to have a lock when we perform write operations against the sysroot. We also have
to maintain manifests of what we install against a task checksum of the dependency. If the
checksum changes, we remove its files and then add the new ones.
* The autotools logic for filtering the view of m4 files is no longer needed (and was the model
for the way extend_recipe_sysroot works).
* For autotools, we used to build a combined m4 macros directory which had both the native and
target m4 files. We can no longer do this so we use the target sysroot as the default and add
the native sysroot as an extra backup include path. If we don't do this, we'd have to build
target pkg-config before we could built anything using pkg-config for example (ditto gettext).
Such dependencies would be painful so we haven't required that.
* PKDDATA_DIR was moved out the sysroot and works as before using sstate to build a hybrid copy
for each machine. The paths therefore changed, the behaviour did not.
* The ccache class had to be reworked to function with rss.
* The TCBOOTSTRAP sysroot for compiler bootstrap is no longer needed but the -initial data
does have to be filtered out from the main recipe sysroots. Putting "-initial" in a normal
recipe name therefore remains a bad idea.
* The logic in insane needed tweaks to deal with the new path layout, as did the debug source
file extraction code in package.bbclass.
* The logic in sstate.bbclass had to be rewritten since it previously only performed search and
replace on extracted sstate and we now need this to happen even if the compiled path was
"correct". This in theory could cause a mild performance issue but since the sysroot data
was the main data that needed this and we'd have to do it there regardless with rss, I've opted
just to change the way the class for everything. The built output used to build the sstate output
is now retained and installed rather than deleted.
* The search and replace logic used in sstate objects also seemed weak/incorrect and didn't hold
up against testing. This has been rewritten too. There are some assumptions made about paths, we
save the 'proper' search and replace operations to fixmepath.cmd but then ignore this. What is
here works but is a little hardcoded and an area for future improvement.
* In order to work with eSDK we need a way to build something that looks like the old style sysroot.
"bitbake build-sysroots" will construct such a sysroot based on everything in the components
directory that matches the current MACHINE. It will allow transition of external tools and can
built target or native variants or both. It also supports a clean task. I'd suggest not relying on
this for anything other than transitional purposes though. To see XXX in that sysroot, you'd have
to have built that in a previous bitbake invocation.
* pseudo is run out of its components directory. This is fine as its statically linked.
* The hacks for wayland to see allarch dependencies in the multilib case are no longer needed
and can be dropped.
* wic needed more extensive changes to work with rss and the fixes are in a separate commit series
* Various oe-selftest tweaks were needed since tests did assume the location to binaries and the
combined sysroot in several cases.
* Most missing dependencies this work found have been sent out as separate patches as they were found
but a few tweaks are still included here.
* A late addition is that extend_recipe_sysroot became multilib aware and able to populate multilib
sysroots. I had hoped not to have to add that complexity but the meta-environment recipe forced my
hand. That implementation can probably be neater but this is on the list of things to clean up later
at this point.
In summary, the impact people will likely see after this change:
* Recipes may fail with missing dependencies, particularly native tools like gettext-native,
glib-2.0-native and libxml2.0-native. Some hosts have these installed, which will mask these errors.
* Any recipe/class using SSTATEPOSTINSTFUNCS will need that code rewriting into a postinst
* There was a separate patch series dealing with rootfs postinst native dependency issues. Any postinst
which expects native tools at rootfs time will need to mark that dependency with PACKAGE_WRITE_DEPS.
There could well be other issues. This has been tested repeatedly against our autobuilders and oe-selftest
and issues found have been fixed. We believe at least OE-Core is in good shape but that doesn't mean
we've found all the issues.
Also, the logging is a bit chatty at the moment. It does help if something goes wrong, and it goes to the
task logfiles, not the console, so I've intentionally left it like that for now. We can turn it down
easily enough in due course.
(From OE-Core rev: 809746f56df4b91af014bf6a3f28997d6698ac78)
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2016-12-07 13:54:35 +00:00
# Ignore files from the recipe sysroots (target and native)
processdebugsrc = "LC_ALL=C ; sort -z -u '%s' | egrep -v -z '((<internal>|<built-in>)$|/.*recipe-sysroot.*/)' | "
2012-08-20 16:52:21 +00:00
# We need to ignore files that are not actually ours
# We do this by only paying attention to items from this package
2013-07-22 14:52:15 +00:00
processdebugsrc += "fgrep -zw '%s' | "
2016-03-21 08:46:20 +00:00
# Remove prefix in the source paths
processdebugsrc += "sed 's#%s##g' | "
2012-10-22 10:39:33 +00:00
processdebugsrc += "(cd '%s' ; cpio -pd0mlL --no-preserve-owner '%s%s' 2>/dev/null)"
2012-08-20 16:52:21 +00:00
2016-03-21 08:46:20 +00:00
cmd = processdebugsrc % (sourcefile, workbasedir, localsrc_prefix, workparentdir, dvar, debugsrcdir)
2013-07-02 12:19:10 +00:00
(retval, output) = oe.utils.getstatusoutput(cmd)
2013-03-25 16:52:07 +00:00
# Can "fail" if internal headers/transient sources are attempted
#if retval:
# bb.fatal("debug source copy failed with exit code %s (cmd was %s)" % (retval, cmd))
2013-08-20 13:01:49 +00:00
# cpio seems to have a bug with -lL together and symbolic links are just copied, not dereferenced.
# Work around this by manually finding and copying any symbolic links that made it through.
cmd = "find %s%s -type l -print0 -delete | sed s#%s%s/##g | (cd '%s' ; cpio -pd0mL --no-preserve-owner '%s%s' 2>/dev/null)" % (dvar, debugsrcdir, dvar, debugsrcdir, workparentdir, dvar, debugsrcdir)
(retval, output) = oe.utils.getstatusoutput(cmd)
if retval:
bb.fatal("debugsrc symlink fixup failed with exit code %s (cmd was %s)" % (retval, cmd))
2012-08-20 16:52:21 +00:00
# The copy by cpio may have resulted in some empty directories! Remove these
2013-03-25 16:52:07 +00:00
cmd = "find %s%s -empty -type d -delete" % (dvar, debugsrcdir)
2013-07-02 12:19:10 +00:00
(retval, output) = oe.utils.getstatusoutput(cmd)
2013-03-25 16:52:07 +00:00
if retval:
2013-07-02 12:19:10 +00:00
bb.fatal("empty directory removal failed with exit code %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
2012-08-20 16:52:21 +00:00
# Also remove debugsrcdir if it's empty
for p in nosuchdir[::-1]:
if os.path.exists(p) and not os.listdir(p):
os.rmdir(p)
2012-02-23 12:26:12 +00:00
2006-10-20 14:31:06 +00:00
#
# Package data handling routines
#
2013-04-12 16:45:27 +00:00
def get_package_mapping (pkg, basepkg, d):
2012-07-11 17:33:43 +00:00
import oe.packagedata
2010-10-10 04:24:38 +00:00
2012-07-11 17:33:43 +00:00
data = oe.packagedata.read_subpkgdata(pkg, d)
key = "PKG_%s" % pkg
2006-10-20 14:31:06 +00:00
2012-07-11 17:33:43 +00:00
if key in data:
2013-04-12 16:45:27 +00:00
# Have to avoid undoing the write_extra_pkgs(global_variants...)
if bb.data.inherits_class('allarch', d) and data[key] == basepkg:
return pkg
2012-07-11 17:33:43 +00:00
return data[key]
2006-10-20 14:31:06 +00:00
2012-07-11 17:33:43 +00:00
return pkg
2006-10-20 14:31:06 +00:00
2012-11-16 18:29:25 +00:00
def get_package_additional_metadata (pkg_type, d):
base_key = "PACKAGE_ADD_METADATA"
for key in ("%s_%s" % (base_key, pkg_type.upper()), base_key):
2015-06-18 14:14:16 +00:00
if d.getVar(key, False) is None:
2012-11-16 18:29:25 +00:00
continue
d.setVarFlag(key, "type", "list")
2016-12-14 21:13:06 +00:00
if d.getVarFlag(key, "separator") is None:
2012-11-16 18:29:25 +00:00
d.setVarFlag(key, "separator", "\\n")
metadata_fields = [field.strip() for field in oe.data.typed_value(key, d)]
return "\n".join(metadata_fields).strip()
2013-04-12 16:45:27 +00:00
def runtime_mapping_rename (varname, pkg, d):
2016-12-14 21:13:04 +00:00
#bb.note("%s before: %s" % (varname, d.getVar(varname)))
2006-10-20 14:31:06 +00:00
2012-10-02 10:37:07 +00:00
new_depends = {}
2016-12-14 21:13:04 +00:00
deps = bb.utils.explode_dep_versions2(d.getVar(varname) or "")
2012-07-11 17:33:43 +00:00
for depend in deps:
2013-04-12 16:45:27 +00:00
new_depend = get_package_mapping(depend, pkg, d)
2012-10-02 10:37:07 +00:00
new_depends[new_depend] = deps[depend]
2006-10-20 14:31:06 +00:00
2012-10-02 10:37:07 +00:00
d.setVar(varname, bb.utils.join_deps(new_depends, commasep=False))
2006-10-20 14:31:06 +00:00
2016-12-14 21:13:04 +00:00
#bb.note("%s after: %s" % (varname, d.getVar(varname)))
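
The rename logic above, minus the datastore plumbing, amounts to the following sketch. It assumes dependencies in the dict-of-version-constraints shape that `bb.utils.explode_dep_versions2` returns; `rename_deps` is a hypothetical helper, not part of the class:

```python
def rename_deps(deps, mapping):
    # Rewrite each dependency name through the package mapping while
    # keeping any version constraints attached to it.
    return {mapping.get(name, name): vers for name, vers in deps.items()}
```

For example, a `PKG_libfoo1 = "libfoo"` mapping would turn `{"libfoo1": [">= 1.0"]}` into `{"libfoo": [">= 1.0"]}` before the result is re-joined into the runtime variable.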
2006-10-20 14:31:06 +00:00
#
# Package functions suitable for inclusion in PACKAGEFUNCS
#
2011-05-18 13:15:01 +00:00
python package_get_auto_pr() {
2014-11-05 18:44:24 +00:00
import oe.prservice
import re
# Support per recipe PRSERV_HOST
2016-12-14 21:13:04 +00:00
pn = d.getVar('PN')
host = d.getVar("PRSERV_HOST_" + pn)
2012-07-11 17:33:43 +00:00
if host is not None:
d.setVar("PRSERV_HOST", host)
2013-01-23 14:54:28 +00:00
2016-12-14 21:13:04 +00:00
pkgv = d.getVar("PKGV")
2015-01-10 13:46:42 +00:00
2014-11-05 18:44:24 +00:00
# PR Server not active, handle AUTOINC
2016-12-14 21:13:04 +00:00
if not d.getVar('PRSERV_HOST'):
2013-01-19 16:23:01 +00:00
if 'AUTOINC' in pkgv:
d.setVar("PKGV", pkgv.replace("AUTOINC", "0"))
2014-11-05 18:44:24 +00:00
return
auto_pr = None
2016-12-14 21:13:04 +00:00
pv = d.getVar("PV")
version = d.getVar("PRAUTOINX")
pkgarch = d.getVar("PACKAGE_ARCH")
checksum = d.getVar("BB_TASKHASH")
2014-11-05 18:44:24 +00:00
2016-12-14 21:13:04 +00:00
if d.getVar('PRSERV_LOCKDOWN'):
auto_pr = d.getVar('PRAUTO_' + version + '_' + pkgarch) or d.getVar('PRAUTO_' + version) or None
2014-11-05 18:44:24 +00:00
if auto_pr is None:
bb.fatal("Can NOT get PRAUTO from lockdown exported file")
d.setVar('PRAUTO',str(auto_pr))
return
try:
2016-12-14 21:13:04 +00:00
conn = d.getVar("__PRSERV_CONN")
2014-11-05 18:44:24 +00:00
if conn is None:
conn = oe.prservice.prserv_make_conn(d)
if conn is not None:
2015-01-10 13:46:42 +00:00
if "AUTOINC" in pkgv:
2014-11-05 18:44:24 +00:00
srcpv = bb.fetch2.get_srcrev(d)
base_ver = "AUTOINC-%s" % version[:version.find(srcpv)]
value = conn.getPR(base_ver, pkgarch, srcpv)
2015-01-10 13:46:42 +00:00
d.setVar("PKGV", pkgv.replace("AUTOINC", str(value)))
2014-11-05 18:44:24 +00:00
auto_pr = conn.getPR(version, pkgarch, checksum)
except Exception as e:
bb.fatal("Can NOT get PRAUTO, exception %s" % str(e))
if auto_pr is None:
bb.fatal("Can NOT get PRAUTO from remote PR service")
d.setVar('PRAUTO',str(auto_pr))
2011-05-18 13:15:01 +00:00
}
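
As a rough illustration of the AUTOINC handling above (a hypothetical standalone helper, not the class code): with no PR service configured the placeholder in PKGV collapses to "0", otherwise it takes the value returned by the server:

```python
def resolve_autoinc(pkgv, value=None):
    # value is the revision count returned by the PR service,
    # or None when no PRSERV_HOST is configured.
    return pkgv.replace("AUTOINC", str(value) if value is not None else "0")
```

So a PKGV like `1.0+gitAUTOINC+deadbeef` becomes `1.0+git0+deadbeef` without a server, or `1.0+git7+deadbeef` if the server hands back 7.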
2012-07-27 10:50:37 +00:00
LOCALEBASEPN ??= "${PN}"
2006-10-20 14:31:06 +00:00
python package_do_split_locales() {
2016-12-14 21:13:04 +00:00
if (d.getVar('PACKAGE_NO_LOCALE') == '1'):
2012-07-11 17:33:43 +00:00
bb.debug(1, "package requested not splitting locales")
return
2016-12-14 21:13:04 +00:00
packages = (d.getVar('PACKAGES') or "").split()
2012-07-11 17:33:43 +00:00
2016-12-14 21:13:04 +00:00
datadir = d.getVar('datadir')
2012-07-11 17:33:43 +00:00
if not datadir:
bb.note("datadir not defined")
return
2016-12-14 21:13:04 +00:00
dvar = d.getVar('PKGD')
pn = d.getVar('LOCALEBASEPN')
2012-07-11 17:33:43 +00:00
if pn + '-locale' in packages:
packages.remove(pn + '-locale')
localedir = os.path.join(dvar + datadir, 'locale')
2013-03-14 17:26:20 +00:00
if not cpath.isdir(localedir):
2012-07-11 17:33:43 +00:00
bb.debug(1, "No locale files in this package")
return
locales = os.listdir(localedir)
2016-12-14 21:13:04 +00:00
summary = d.getVar('SUMMARY') or pn
description = d.getVar('DESCRIPTION') or ""
locale_section = d.getVar('LOCALE_SECTION')
mlprefix = d.getVar('MLPREFIX') or ""
2012-07-11 17:33:43 +00:00
for l in sorted(locales):
ln = legitimize_package_name(l)
pkg = pn + '-locale-' + ln
packages.append(pkg)
d.setVar('FILES_' + pkg, os.path.join(datadir, 'locale', l))
2013-01-18 12:41:49 +00:00
d.setVar('RRECOMMENDS_' + pkg, '%svirtual-locale-%s' % (mlprefix, ln))
2012-07-11 17:33:43 +00:00
d.setVar('RPROVIDES_' + pkg, '%s-locale %s%s-translation' % (pn, mlprefix, ln))
d.setVar('SUMMARY_' + pkg, '%s - %s translations' % (summary, l))
d.setVar('DESCRIPTION_' + pkg, '%s This package contains language translation files for the %s locale.' % (description, l))
if locale_section:
d.setVar('SECTION_' + pkg, locale_section)
d.setVar('PACKAGES', ' '.join(packages))
# Disabled by RP 18/06/07
# Wildcards aren't supported in debian
# They break with ipkg since glibc-locale* will mean that
# glibc-localedata-translit* won't install as a dependency
# for some other package which breaks meta-toolchain
# Probably breaks since virtual-locale- isn't provided anywhere
2016-12-14 21:13:04 +00:00
#rdep = (d.getVar('RDEPENDS_%s' % pn) or "").split()
2012-07-11 17:33:43 +00:00
#rdep.append('%s-locale*' % pn)
#d.setVar('RDEPENDS_%s' % pn, ' '.join(rdep))
2006-10-20 14:31:06 +00:00
}
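
For a locale such as "en_GB" the loop above produces per-locale packages roughly as follows. This is a simplified sketch: the real name sanitization is done by `legitimize_package_name`, which this stand-in only approximates, and the datadir path is assumed to be `/usr/share`:

```python
def locale_package(pn, locale, mlprefix=""):
    # Rough stand-in for legitimize_package_name: lowercase and
    # replace characters that are invalid in package names.
    ln = locale.lower().replace("_", "-").replace("@", "+")
    pkg = "%s-locale-%s" % (pn, ln)
    return {
        "name": pkg,
        "FILES": "/usr/share/locale/%s" % locale,
        "RPROVIDES": "%s-locale %s%s-translation" % (pn, mlprefix, ln),
    }
```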
2009-10-29 23:55:43 +00:00
python perform_packagecopy () {
2016-12-14 21:13:04 +00:00
dest = d.getVar('D')
dvar = d.getVar('PKGD')
2009-10-29 23:55:43 +00:00
2012-07-11 17:33:43 +00:00
# Start package population by taking a copy of the installed
# files to operate on
# Preserve sparse files and hard links
2013-10-11 22:01:54 +00:00
cmd = 'tar -cf - -C %s -p . | tar -xf - -C %s' % (dest, dvar)
2013-07-02 12:19:10 +00:00
(retval, output) = oe.utils.getstatusoutput(cmd)
2013-03-25 16:52:07 +00:00
if retval:
2013-07-02 12:19:10 +00:00
bb.fatal("file copy failed with exit code %s (cmd was %s)%s" % (retval, cmd, ":\n%s" % output if output else ""))
2012-07-31 08:49:38 +00:00
# replace RPATHs for the nativesdk binaries, to make them relocatable
2012-08-17 10:38:10 +00:00
if bb.data.inherits_class('nativesdk', d) or bb.data.inherits_class('cross-canadian', d):
2012-07-31 08:49:38 +00:00
rpath_replace (dvar, d)
2009-10-29 23:55:43 +00:00
}
2013-02-03 17:21:40 +00:00
perform_packagecopy[cleandirs] = "${PKGD}"
perform_packagecopy[dirs] = "${PKGD}"
2009-10-29 23:55:43 +00:00
classes/package.bbclass: Add fixup_perms
Add a new function that is responsible for fixing directory and file
permissions, owners and groups during the packaging process. This will fix
various issues where two packages may create the same directory and end up
with different permissions, owner and/or group.
The issue being resolved is that if two packages conflict in their ownership
of a directory, the first installed into the rootfs sets the permissions.
This leads to at least potentially non-deterministic filesystems, at worst
security defects.
The user can specify their own settings via the configuration files
specified in FILESYSTEM_PERMS_TABLES. If this is not defined, it will
fall back to loading files/fs-perms.txt from BBPATH. The format of this
file is documented within the file.
By default all of the system directories, specified in bitbake.conf, will
be fixed to be 0755, root, root.
The fs-perms.txt contains a few default entries to correct documentation,
locale, headers and debug sources. It was discovered these are often
incorrect due to being directly copied from the build user environment.
The entries needed to match the base-files package have also been added.
Also tweak a couple of warnings to provide more diagnostic information.
(From OE-Core rev: 8c720efa053f81dc8d2bb604cdbdb25de9a6efab)
Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
2011-06-20 15:57:49 +00:00
# We generate a master list of directories to process; we start by
# seeding this list with reasonable defaults, then load from
# the fs-perms.txt files
python fixup_perms () {
2012-07-18 13:08:48 +00:00
import pwd, grp
2012-07-11 17:33:43 +00:00
# init using a string with the same format as a line as documented in
# the fs-perms.txt file
# <path> <mode> <uid> <gid> <walk> <fmode> <fuid> <fgid>
# <path> link <link target>
#
# __str__ can be used to print out an entry in the input format
#
# if fs_perms_entry.path is None:
# an error occurred
# if fs_perms_entry.link, you can retrieve:
# fs_perms_entry.path = path
# fs_perms_entry.link = target of link
# if not fs_perms_entry.link, you can retrieve:
# fs_perms_entry.path = path
# fs_perms_entry.mode = expected dir mode or None
# fs_perms_entry.uid = expected uid or -1
# fs_perms_entry.gid = expected gid or -1
# fs_perms_entry.walk = 'true' or something else
# fs_perms_entry.fmode = expected file mode or None
# fs_perms_entry.fuid = expected file uid or -1
# fs_perms_entry.fgid = expected file gid or -1
class fs_perms_entry():
def __init__(self, line):
lsplit = line.split()
if len(lsplit) == 3 and lsplit[1].lower() == "link":
self._setlink(lsplit[0], lsplit[2])
elif len(lsplit) == 8:
self._setdir(lsplit[0], lsplit[1], lsplit[2], lsplit[3], lsplit[4], lsplit[5], lsplit[6], lsplit[7])
else:
2013-05-11 22:46:10 +00:00
msg = "Fixup Perms: invalid config line %s" % line
package_qa_handle_error("perm-config", msg, d)
2012-07-11 17:33:43 +00:00
self.path = None
self.link = None
def _setdir(self, path, mode, uid, gid, walk, fmode, fuid, fgid):
self.path = os.path.normpath(path)
self.link = None
self.mode = self._procmode(mode)
self.uid = self._procuid(uid)
self.gid = self._procgid(gid)
self.walk = walk.lower()
self.fmode = self._procmode(fmode)
self.fuid = self._procuid(fuid)
self.fgid = self._procgid(fgid)
def _setlink(self, path, link):
self.path = os.path.normpath(path)
self.link = link
def _procmode(self, mode):
if not mode or mode == "-":
return None
else:
return int(mode,8)
# Note uid/gid -1 has special significance in os.lchown
def _procuid(self, uid):
if uid is None or uid == "-":
return -1
elif uid.isdigit():
return int(uid)
else:
return pwd.getpwnam(uid).pw_uid
def _procgid(self, gid):
if gid is None or gid == "-":
return -1
elif gid.isdigit():
return int(gid)
else:
return grp.getgrnam(gid).gr_gid
# Use for debugging the entries
def __str__(self):
if self.link:
return "%s link %s" % (self.path, self.link)
else:
mode = "-"
if self.mode:
mode = "0%o" % self.mode
fmode = "-"
if self.fmode:
fmode = "0%o" % self.fmode
uid = self._mapugid(self.uid)
gid = self._mapugid(self.gid)
fuid = self._mapugid(self.fuid)
fgid = self._mapugid(self.fgid)
return "%s %s %s %s %s %s %s %s" % (self.path, mode, uid, gid, self.walk, fmode, fuid, fgid)
def _mapugid(self, id):
if id is None or id == -1:
return "-"
else:
return "%d" % id
# Fix the permission, owner and group of path
def fix_perms(path, mode, uid, gid, dir):
if mode and not os.path.islink(path):
#bb.note("Fixup Perms: chmod 0%o %s" % (mode, dir))
os.chmod(path, mode)
# -1 is a special value that means don't change the uid/gid
# if they are BOTH -1, don't bother to lchown
if not (uid == -1 and gid == -1):
#bb.note("Fixup Perms: lchown %d:%d %s" % (uid, gid, dir))
os.lchown(path, uid, gid)
# Return a list of configuration files based on either the default
# files/fs-perms.txt or the contents of FILESYSTEM_PERMS_TABLES
# paths are resolved via BBPATH
def get_fs_perms_list(d):
str = ""
2016-12-14 21:13:04 +00:00
bbpath = d.getVar('BBPATH')
fs_perms_tables = d.getVar('FILESYSTEM_PERMS_TABLES')
2012-07-11 17:33:43 +00:00
if not fs_perms_tables:
fs_perms_tables = 'files/fs-perms.txt'
for conf_file in fs_perms_tables.split():
2013-02-03 17:25:30 +00:00
str += " %s" % bb.utils.which(bbpath, conf_file)
2012-07-11 17:33:43 +00:00
return str
2016-12-14 21:13:04 +00:00
dvar = d.getVar('PKGD')
2012-07-11 17:33:43 +00:00
fs_perms_table = {}
2016-04-12 15:22:21 +00:00
fs_link_table = {}
2012-07-11 17:33:43 +00:00
# By default all of the standard directories specified in
# bitbake.conf will get 0755 root:root.
target_path_vars = [ 'base_prefix',
'prefix',
'exec_prefix',
'base_bindir',
'base_sbindir',
'base_libdir',
'datadir',
'sysconfdir',
'servicedir',
'sharedstatedir',
'localstatedir',
'infodir',
'mandir',
'docdir',
'bindir',
'sbindir',
'libexecdir',
'libdir',
'includedir',
'oldincludedir' ]
for path in target_path_vars:
2016-12-14 21:13:04 +00:00
dir = d.getVar(path) or ""
2012-07-11 17:33:43 +00:00
if dir == "":
continue
2017-03-17 15:53:09 +00:00
fs_perms_table[dir] = fs_perms_entry(d.expand("%s 0755 root root false - - -" % (dir)))
2012-07-11 17:33:43 +00:00
# Now we actually load from the configuration files
for conf in get_fs_perms_list(d).split():
if os.path.exists(conf):
f = open(conf)
for line in f:
if line.startswith('#'):
continue
lsplit = line.split()
if len(lsplit) == 0:
continue
if len(lsplit) != 8 and not (len(lsplit) == 3 and lsplit[1].lower() == "link"):
2013-05-11 22:46:10 +00:00
msg = "Fixup perms: %s invalid line: %s" % (conf, line)
package_qa_handle_error("perm-line", msg, d)
2012-07-11 17:33:43 +00:00
continue
entry = fs_perms_entry(d.expand(line))
if entry and entry.path:
2016-04-12 15:22:21 +00:00
if entry.link:
2016-04-13 22:17:56 +00:00
fs_link_table[entry.path] = entry
if entry.path in fs_perms_table:
fs_perms_table.pop(entry.path)
2016-04-12 15:22:21 +00:00
else:
fs_perms_table[entry.path] = entry
2016-04-13 22:17:56 +00:00
if entry.path in fs_link_table:
fs_link_table.pop(entry.path)
2012-07-11 17:33:43 +00:00
f.close()
# Debug -- list out in-memory table
#for dir in fs_perms_table:
# bb.note("Fixup Perms: %s: %s" % (dir, str(fs_perms_table[dir])))
2016-04-12 15:22:21 +00:00
#for link in fs_link_table:
# bb.note("Fixup Perms: %s: %s" % (link, str(fs_link_table[link])))
2012-07-11 17:33:43 +00:00
# We process links first, so we can go back and fixup directory ownership
# for any newly created directories
2016-04-12 15:22:21 +00:00
# Process in sorted order so /run gets created before /run/lock, etc.
2016-04-13 22:17:56 +00:00
for entry in sorted(fs_link_table.values(), key=lambda x: x.link):
link = entry.link
dir = entry.path
2012-07-11 17:33:43 +00:00
origin = dvar + dir
2013-03-14 17:26:20 +00:00
if not (cpath.exists(origin) and cpath.isdir(origin) and not cpath.islink(origin)):
2012-07-11 17:33:43 +00:00
continue
if link[0] == "/":
target = dvar + link
ptarget = link
else:
target = os.path.join(os.path.dirname(origin), link)
ptarget = os.path.join(os.path.dirname(dir), link)
if os.path.exists(target):
2013-05-11 22:46:10 +00:00
msg = "Fixup Perms: Unable to correct directory link, target already exists: %s -> %s" % (dir, ptarget)
package_qa_handle_error("perm-link", msg, d)
2012-07-11 17:33:43 +00:00
continue
# Create path to move directory to, move it, and then setup the symlink
2013-02-03 17:09:26 +00:00
bb.utils.mkdirhier(os.path.dirname(target))
2012-07-11 17:33:43 +00:00
#bb.note("Fixup Perms: Rename %s -> %s" % (dir, ptarget))
os.rename(origin, target)
#bb.note("Fixup Perms: Link %s -> %s" % (dir, link))
os.symlink(link, origin)
for dir in fs_perms_table:
origin = dvar + dir
2013-03-14 17:26:20 +00:00
if not (cpath.exists(origin) and cpath.isdir(origin)):
2012-07-11 17:33:43 +00:00
continue
fix_perms(origin, fs_perms_table[dir].mode, fs_perms_table[dir].uid, fs_perms_table[dir].gid, dir)
if fs_perms_table[dir].walk == 'true':
for root, dirs, files in os.walk(origin):
for dr in dirs:
each_dir = os.path.join(root, dr)
fix_perms(each_dir, fs_perms_table[dir].mode, fs_perms_table[dir].uid, fs_perms_table[dir].gid, dir)
for f in files:
each_file = os.path.join(root, f)
fix_perms(each_file, fs_perms_table[dir].fmode, fs_perms_table[dir].fuid, fs_perms_table[dir].fgid, dir)
2011-06-20 15:57:49 +00:00
}
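
A minimal parser for the two fs-perms.txt line shapes handled above, `<path> <mode> <uid> <gid> <walk> <fmode> <fuid> <fgid>` and `<path> link <target>`. This sketch resolves numeric fields only and leaves the pwd/grp name lookups out:

```python
def parse_fs_perms_line(line):
    parts = line.split()
    if len(parts) == 3 and parts[1].lower() == "link":
        return {"path": parts[0], "link": parts[2]}
    if len(parts) == 8:
        path, mode, uid, gid, walk, fmode, fuid, fgid = parts
        return {
            "path": path,
            # "-" means leave alone: mode None, uid/gid -1 for os.lchown
            "mode": None if mode == "-" else int(mode, 8),
            "uid": -1 if uid == "-" else (int(uid) if uid.isdigit() else uid),
            "walk": walk.lower() == "true",
            "fmode": None if fmode == "-" else int(fmode, 8),
        }
    return None  # invalid line
```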
2011-02-08 22:07:47 +00:00
python split_and_strip_files () {
2013-05-09 14:55:04 +00:00
import stat, errno
2012-07-11 17:33:43 +00:00
2016-12-14 21:13:04 +00:00
dvar = d.getVar('PKGD')
pn = d.getVar('PN')
2012-07-11 17:33:43 +00:00
2016-06-13 21:09:54 +00:00
oldcwd = os.getcwd()
os.chdir(dvar)
2012-07-11 17:33:43 +00:00
# We default to '.debug' style
2016-12-14 21:13:04 +00:00
if d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-file-directory':
2012-07-11 17:33:43 +00:00
# Single debug-file-directory style debug info
debugappend = ".debug"
debugdir = ""
debuglibdir = "/usr/lib/debug"
debugsrcdir = "/usr/src/debug"
2016-12-14 21:13:04 +00:00
elif d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-without-src':
2013-03-12 11:24:32 +00:00
# Original OE-core, a.k.a. ".debug", style debug info, but without sources in /usr/src/debug
debugappend = ""
debugdir = "/.debug"
debuglibdir = ""
debugsrcdir = ""
2012-07-11 17:33:43 +00:00
else:
# Original OE-core, a.k.a. ".debug", style debug info
debugappend = ""
debugdir = "/.debug"
debuglibdir = ""
debugsrcdir = "/usr/src/debug"
2013-05-14 05:50:33 +00:00
sourcefile = d.expand("${WORKDIR}/debugsources.list")
bb.utils.remove(sourcefile)
2012-07-11 17:33:43 +00:00
# Return type (bits):
# 0 - not elf
# 1 - ELF
# 2 - stripped
# 4 - executable
# 8 - shared library
2013-01-29 13:45:17 +00:00
# 16 - kernel module
2012-07-11 17:33:43 +00:00
def isELF(path):
type = 0
2013-11-01 03:51:51 +00:00
ret, result = oe.utils.getstatusoutput("file \"%s\"" % path.replace("\"", "\\\""))
2012-07-11 17:33:43 +00:00
if ret:
2013-05-11 22:46:10 +00:00
msg = "split_and_strip_files: 'file %s' failed" % path
package_qa_handle_error("split-strip", msg, d)
2012-07-11 17:33:43 +00:00
return type
# Not stripped
if "ELF" in result:
type |= 1
if "not stripped" not in result:
type |= 2
if "executable" in result:
type |= 4
if "shared" in result:
type |= 8
return type
#
# First let's figure out all of the files we may have to process ... do this only once!
#
2013-02-03 17:11:58 +00:00
elffiles = {}
symlinks = {}
2013-01-29 13:45:17 +00:00
kernmods = []
2015-04-22 19:57:18 +00:00
inodes = {}
2016-12-14 21:13:04 +00:00
libdir = os.path.abspath(dvar + os.sep + d.getVar("libdir"))
baselibdir = os.path.abspath(dvar + os.sep + d.getVar("base_libdir"))
if (d.getVar('INHIBIT_PACKAGE_STRIP') != '1' or \
d.getVar('INHIBIT_PACKAGE_DEBUG_SPLIT') != '1'):
2013-03-14 17:26:20 +00:00
for root, dirs, files in cpath.walk(dvar):
2012-07-11 17:33:43 +00:00
for f in files:
file = os.path.join(root, f)
2013-01-29 13:45:17 +00:00
if file.endswith(".ko") and file.find("/lib/modules/") != -1:
kernmods.append(file)
continue
2013-02-03 17:11:58 +00:00
# Skip debug files
if debugappend and file.endswith(debugappend):
continue
if debugdir and debugdir in os.path.dirname(file[len(dvar):]):
continue
try:
2013-03-14 17:26:20 +00:00
ltarget = cpath.realpath(file, dvar, False)
s = cpath.lstat(ltarget)
2013-05-07 12:55:55 +00:00
except OSError as e:
(err, strerror) = e.args
2013-02-03 17:11:58 +00:00
if err != errno.ENOENT:
raise
# Skip broken symlinks
continue
2013-03-14 17:26:20 +00:00
if not s:
continue
2013-02-03 17:11:58 +00:00
# Check it's an executable
2013-03-05 13:10:22 +00:00
if (s[stat.ST_MODE] & stat.S_IXUSR) or (s[stat.ST_MODE] & stat.S_IXGRP) or (s[stat.ST_MODE] & stat.S_IXOTH) \
2016-03-22 09:53:34 +00:00
or ((file.startswith(libdir) or file.startswith(baselibdir)) and (".so" in f or ".node" in f)):
2013-02-03 17:11:58 +00:00
# If it's a symlink, and points to an ELF file, we capture the readlink target
2013-03-14 17:26:20 +00:00
if cpath.islink(file):
2013-02-03 17:11:58 +00:00
target = os.readlink(file)
if isELF(ltarget):
#bb.note("Sym: %s (%d)" % (ltarget, isELF(ltarget)))
symlinks[file] = target
2012-07-11 17:33:43 +00:00
continue
2015-04-22 19:57:18 +00:00
2013-02-03 17:11:58 +00:00
# It's a file (or hardlink), not a link
# ...but is it ELF, and is it already stripped?
elf_file = isELF(file)
if elf_file & 1:
if elf_file & 2:
2016-12-14 21:13:04 +00:00
if 'already-stripped' in (d.getVar('INSANE_SKIP_' + pn) or "").split():
2013-09-06 21:15:18 +00:00
bb.note("Skipping file %s from %s for already-stripped QA test" % (file[len(dvar):], pn))
else:
msg = "File '%s' from %s was already stripped, this will prevent future debugging!" % (file[len(dvar):], pn)
package_qa_handle_error("already-stripped", msg, d)
2012-07-11 17:33:43 +00:00
continue
2015-04-22 19:57:18 +00:00
# At this point we have an unstripped ELF file. We need to:
# a) Make sure any file we strip is not hardlinked to anything else outside this tree
# b) Only strip any hardlinked file once (no races)
# c) Track any hardlinks between files so that we can reconstruct matching debug file hardlinks
# Use a reference of device ID and inode number to identify files
file_reference = "%d_%d" % (s.st_dev, s.st_ino)
if file_reference in inodes:
os.unlink(file)
os.link(inodes[file_reference][0], file)
inodes[file_reference].append(file)
else:
inodes[file_reference] = [file]
# break hardlink
bb.utils.copyfile(file, file)
elffiles[file] = elf_file
# Modified the file so clear the cache
cpath.updatecache(file)
2012-07-11 17:33:43 +00:00
#
# First let's process debug splitting
#
2016-12-14 21:13:04 +00:00
if (d.getVar('INHIBIT_PACKAGE_DEBUG_SPLIT') != '1'):
2013-02-03 17:11:58 +00:00
for file in elffiles:
2012-07-11 17:33:43 +00:00
src = file[len(dvar):]
dest = debuglibdir + os.path.dirname(src) + debugdir + "/" + os.path.basename(src) + debugappend
fpath = dvar + dest
2013-02-03 17:11:58 +00:00
# Split the file...
bb.utils.mkdirhier(os.path.dirname(fpath))
#bb.note("Split %s -> %s" % (file, fpath))
# Only store off the hard link reference if we successfully split!
2013-05-14 05:50:33 +00:00
splitdebuginfo(file, fpath, debugsrcdir, sourcefile, d)
2012-07-11 17:33:43 +00:00
2013-02-03 17:11:58 +00:00
# Hardlink our debug symbols to the other hardlink copies
2015-04-22 19:57:18 +00:00
for ref in inodes:
if len(inodes[ref]) == 1:
continue
for file in inodes[ref][1:]:
2012-07-11 17:33:43 +00:00
src = file[len(dvar):]
dest = debuglibdir + os.path.dirname(src) + debugdir + "/" + os.path.basename(src) + debugappend
fpath = dvar + dest
2015-04-22 19:57:18 +00:00
target = inodes[ref][0][len(dvar):]
2013-02-03 17:11:58 +00:00
ftarget = dvar + debuglibdir + os.path.dirname(target) + debugdir + "/" + os.path.basename(target) + debugappend
bb.utils.mkdirhier(os.path.dirname(fpath))
#bb.note("Link %s -> %s" % (fpath, ftarget))
os.link(ftarget, fpath)
# Create symlinks for all cases we were able to split symbols
for file in symlinks:
src = file[len(dvar):]
dest = debuglibdir + os.path.dirname(src) + debugdir + "/" + os.path.basename(src) + debugappend
fpath = dvar + dest
# Skip it if the target doesn't exist
try:
s = os.stat(fpath)
2013-05-07 12:55:55 +00:00
except OSError as e:
(err, strerror) = e.args
2013-02-03 17:11:58 +00:00
if err != errno.ENOENT:
raise
continue
ltarget = symlinks[file]
lpath = os.path.dirname(ltarget)
lbase = os.path.basename(ltarget)
ftarget = ""
if lpath and lpath != ".":
ftarget += lpath + debugdir + "/"
ftarget += lbase + debugappend
if lpath.startswith(".."):
ftarget = os.path.join("..", ftarget)
bb.utils.mkdirhier(os.path.dirname(fpath))
#bb.note("Symlink %s -> %s" % (fpath, ftarget))
os.symlink(ftarget, fpath)
2012-07-11 17:33:43 +00:00
# Process the debugsrcdir if requested...
# This copies and places the referenced sources for later debugging...
2013-01-29 13:43:15 +00:00
copydebugsources(debugsrcdir, d)
2012-07-11 17:33:43 +00:00
#
# End of debug splitting
#
#
# Now let's go back over things and strip them
#
2016-12-14 21:13:04 +00:00
if (d.getVar('INHIBIT_PACKAGE_STRIP') != '1'):
strip = d.getVar("STRIP")
2013-02-01 15:03:41 +00:00
sfiles = []
2013-02-03 17:11:58 +00:00
for file in elffiles:
elf_file = int(elffiles[file])
#bb.note("Strip %s" % file)
sfiles.append((file, elf_file, strip))
2013-01-29 13:45:17 +00:00
for f in kernmods:
2013-02-01 15:03:41 +00:00
sfiles.append((f, 16, strip))
2014-08-21 20:47:50 +00:00
oe.utils.multiprocess_exec(sfiles, oe.package.runstrip)
2013-02-01 15:03:41 +00:00
2012-07-11 17:33:43 +00:00
#
# End of strip
#
2016-06-13 21:09:54 +00:00
os.chdir(oldcwd)
2011-02-08 22:07:47 +00:00
}
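
The bitmask returned by isELF() above can be decoded like so (a small helper added for illustration, not part of the class):

```python
def describe_elf(flags):
    # Bit values match the isELF() return type documented above:
    # 1 ELF, 2 stripped, 4 executable, 8 shared library, 16 kernel module.
    return [name for bit, name in ((1, "ELF"), (2, "stripped"),
                                   (4, "executable"), (8, "shared library"),
                                   (16, "kernel module"))
            if flags & bit]
```

An unstripped executable therefore reports 5 (ELF | executable), which is the case the debug-split loop acts on.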
python populate_packages () {
2013-07-02 12:19:10 +00:00
import glob, re
2012-07-11 17:33:43 +00:00
2016-12-14 21:13:04 +00:00
workdir = d.getVar('WORKDIR')
outdir = d.getVar('DEPLOY_DIR')
dvar = d.getVar('PKGD')
packages = d.getVar('PACKAGES')
pn = d.getVar('PN')
2012-07-11 17:33:43 +00:00
2013-02-03 17:09:26 +00:00
bb.utils.mkdirhier(outdir)
2012-07-11 17:33:43 +00:00
os.chdir(dvar)
2015-12-15 15:38:54 +00:00
2016-12-14 21:13:04 +00:00
autodebug = not (d.getVar("NOAUTOPACKAGEDEBUG") or False)
2012-07-11 17:33:43 +00:00
2015-04-29 13:23:14 +00:00
# Sanity check PACKAGES for duplicates
2012-07-11 17:33:43 +00:00
# Sanity should be moved to sanity.bbclass once we have the infrastructure
package_list = []
for pkg in packages.split():
2013-04-18 23:51:51 +00:00
if pkg in package_list:
2013-05-11 22:46:10 +00:00
msg = "%s is listed in PACKAGES multiple times, this leads to packaging errors." % pkg
package_qa_handle_error("packages-list", msg, d)
2015-12-15 15:38:54 +00:00
elif autodebug and pkg.endswith("-dbg"):
package_list.insert(0, pkg)
2012-07-11 17:33:43 +00:00
else:
2013-02-03 17:25:30 +00:00
package_list.append(pkg)
2012-07-11 17:33:43 +00:00
d.setVar('PACKAGES', ' '.join(package_list))
2016-12-14 21:13:04 +00:00
pkgdest = d.getVar('PKGDEST')
2012-07-11 17:33:43 +00:00
seen = []
2013-09-25 12:36:46 +00:00
# os.mkdir masks the permissions with umask so we have to unset it first
oldumask = os.umask(0)
2015-12-15 15:38:54 +00:00
debug = []
for root, dirs, files in cpath.walk(dvar):
dir = root[len(dvar):]
if not dir:
dir = os.sep
for f in (files + dirs):
path = "." + os.path.join(dir, f)
if "/.debug/" in path or path.endswith("/.debug"):
debug.append(path)
2012-07-11 17:33:43 +00:00
for pkg in package_list:
root = os.path.join(pkgdest, pkg)
2013-02-03 17:09:26 +00:00
bb.utils.mkdirhier(root)
2012-07-11 17:33:43 +00:00
2016-12-14 21:13:04 +00:00
filesvar = d.getVar('FILES_%s' % pkg) or ""
if "//" in filesvar:
msg = "FILES variable for package %s contains '//' which is invalid. Attempting to fix this but you should correct the metadata.\n" % pkg
package_qa_handle_error("files-invalid", msg, d)
            filesvar = filesvar.replace("//", "/")
origfiles = filesvar.split()
files, symlink_paths = files_from_filevars(origfiles)
if autodebug and pkg.endswith("-dbg"):
files.extend(debug)
for file in files:
if (not cpath.islink(file)) and (not cpath.exists(file)):
continue
if file in seen:
continue
seen.append(file)
def mkdir(src, dest, p):
src = os.path.join(src, p)
dest = os.path.join(dest, p)
fstat = cpath.stat(src)
os.mkdir(dest, fstat.st_mode)
os.chown(dest, fstat.st_uid, fstat.st_gid)
if p not in seen:
seen.append(p)
cpath.updatecache(dest)
def mkdir_recurse(src, dest, paths):
if cpath.exists(dest + '/' + paths):
return
while paths.startswith("./"):
paths = paths[2:]
p = "."
for c in paths.split("/"):
p = os.path.join(p, c)
if not cpath.exists(os.path.join(dest, p)):
mkdir(src, dest, p)
if cpath.isdir(file) and not cpath.islink(file):
mkdir_recurse(dvar, root, file)
continue
mkdir_recurse(dvar, root, os.path.dirname(file))
fpath = os.path.join(root,file)
if not cpath.islink(file):
os.link(file, fpath)
continue
ret = bb.utils.copyfile(file, fpath)
if ret is False or ret == 0:
bb.fatal("File population failed")
# Check if symlink paths exist
for file in symlink_paths:
if not os.path.exists(os.path.join(root,file)):
bb.fatal("File '%s' cannot be packaged into '%s' because its "
"parent directory structure does not exist. One of "
"its parent directories is a symlink whose target "
"directory is not included in the package." %
(file, pkg))
os.umask(oldumask)
os.chdir(workdir)
# Handle LICENSE_EXCLUSION
package_list = []
for pkg in packages.split():
if d.getVar('LICENSE_EXCLUSION-' + pkg):
msg = "%s has an incompatible license. Excluding from packaging." % pkg
package_qa_handle_error("incompatible-license", msg, d)
else:
package_list.append(pkg)
d.setVar('PACKAGES', ' '.join(package_list))
unshipped = []
for root, dirs, files in cpath.walk(dvar):
dir = root[len(dvar):]
if not dir:
dir = os.sep
for f in (files + dirs):
path = os.path.join(dir, f)
if ('.' + path) not in seen:
unshipped.append(path)
if unshipped != []:
msg = pn + ": Files/directories were installed but not shipped in any package:"
if "installed-vs-shipped" in (d.getVar('INSANE_SKIP_' + pn) or "").split():
bb.note("Package %s skipping QA tests: installed-vs-shipped" % pn)
else:
for f in unshipped:
msg = msg + "\n " + f
msg = msg + "\nPlease set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.\n"
msg = msg + "%s: %d installed and not shipped files." % (pn, len(unshipped))
package_qa_handle_error("installed-vs-shipped", msg, d)
}
populate_packages[dirs] = "${D}"
python package_fixsymlinks () {
import errno
pkgdest = d.getVar('PKGDEST')
packages = d.getVar("PACKAGES", False).split()
dangling_links = {}
pkg_files = {}
for pkg in packages:
dangling_links[pkg] = []
pkg_files[pkg] = []
inst_root = os.path.join(pkgdest, pkg)
for path in pkgfiles[pkg]:
rpath = path[len(inst_root):]
pkg_files[pkg].append(rpath)
rtarget = cpath.realpath(path, inst_root, True, assume_dir = True)
if not cpath.lexists(rtarget):
dangling_links[pkg].append(os.path.normpath(rtarget[len(inst_root):]))
newrdepends = {}
for pkg in dangling_links:
2012-07-11 17:33:43 +00:00
for l in dangling_links[pkg]:
found = False
bb.debug(1, "%s contains dangling link %s" % (pkg, l))
for p in packages:
if l in pkg_files[p]:
found = True
bb.debug(1, "target found in %s" % p)
if p == pkg:
break
if pkg not in newrdepends:
newrdepends[pkg] = []
newrdepends[pkg].append(p)
break
if found == False:
bb.note("%s contains dangling symlink to %s" % (pkg, l))
for pkg in newrdepends:
rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS_' + pkg) or "")
for p in newrdepends[pkg]:
if p not in rdepends:
rdepends[p] = []
d.setVar('RDEPENDS_' + pkg, bb.utils.join_deps(rdepends, commasep=False))
}
python package_package_name_hook() {
"""
A package_name_hook function can be used to rewrite the package names by
changing PKG. For an example, see debian.bbclass.
"""
pass
}
EXPORT_FUNCTIONS package_name_hook
PKGDESTWORK = "${WORKDIR}/pkgdata"
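#
# emit_pkgdata below writes package metadata under PKGDESTWORK. As a rough,
# illustrative sketch (hypothetical recipe "foo" producing package "foo"),
# the resulting layout is:
#   pkgdata/foo                      - PACKAGES list for the recipe
#   pkgdata/runtime/foo              - per-package variables (PKGV, RDEPENDS, FILES_INFO, ...)
#   pkgdata/runtime-reverse/foo      - symlink keyed by the final (renamed) package name
#   pkgdata/runtime-rprovides/<p>/   - symlinks keyed by each RPROVIDES entry <p>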
python emit_pkgdata() {
from glob import glob
import json
def write_if_exists(f, pkg, var):
def encode(str):
import codecs
c = codecs.getencoder("unicode_escape")
return c(str)[0].decode("latin1")
val = d.getVar('%s_%s' % (var, pkg))
if val:
f.write('%s_%s: %s\n' % (var, pkg, encode(val)))
return val
val = d.getVar('%s' % (var))
if val:
f.write('%s: %s\n' % (var, encode(val)))
return val
def write_extra_pkgs(variants, pn, packages, pkgdatadir):
for variant in variants:
with open("%s/%s-%s" % (pkgdatadir, variant, pn), 'w') as fd:
fd.write("PACKAGES: %s\n" % ' '.join(
map(lambda pkg: '%s-%s' % (variant, pkg), packages.split())))
def write_extra_runtime_pkgs(variants, packages, pkgdatadir):
for variant in variants:
for pkg in packages.split():
ml_pkg = "%s-%s" % (variant, pkg)
subdata_file = "%s/runtime/%s" % (pkgdatadir, ml_pkg)
with open(subdata_file, 'w') as fd:
fd.write("PKG_%s: %s" % (ml_pkg, pkg))
packages = d.getVar('PACKAGES')
pkgdest = d.getVar('PKGDEST')
pkgdatadir = d.getVar('PKGDESTWORK')
# Take shared lock since we're only reading, not writing
lf = bb.utils.lockfile(d.expand("${PACKAGELOCK}"), True)
data_file = pkgdatadir + d.expand("/${PN}" )
f = open(data_file, 'w')
f.write("PACKAGES: %s\n" % packages)
f.close()
pn = d.getVar('PN')
global_variants = (d.getVar('MULTILIB_GLOBAL_VARIANTS') or "").split()
variants = (d.getVar('MULTILIB_VARIANTS') or "").split()
if bb.data.inherits_class('kernel', d) or bb.data.inherits_class('module-base', d):
write_extra_pkgs(variants, pn, packages, pkgdatadir)
if (bb.data.inherits_class('allarch', d) and not bb.data.inherits_class('packagegroup', d)):
write_extra_pkgs(global_variants, pn, packages, pkgdatadir)
workdir = d.getVar('WORKDIR')
for pkg in packages.split():
pkgval = d.getVar('PKG_%s' % pkg)
if pkgval is None:
pkgval = pkg
d.setVar('PKG_%s' % pkg, pkg)
pkgdestpkg = os.path.join(pkgdest, pkg)
files = {}
total_size = 0
seen = set()
for f in pkgfiles[pkg]:
relpth = os.path.relpath(f, pkgdestpkg)
fstat = os.lstat(f)
files[os.sep + relpth] = fstat.st_size
if fstat.st_ino not in seen:
seen.add(fstat.st_ino)
total_size += fstat.st_size
d.setVar('FILES_INFO', json.dumps(files))
subdata_file = pkgdatadir + "/runtime/%s" % pkg
sf = open(subdata_file, 'w')
write_if_exists(sf, pkg, 'PN')
write_if_exists(sf, pkg, 'PE')
write_if_exists(sf, pkg, 'PV')
write_if_exists(sf, pkg, 'PR')
write_if_exists(sf, pkg, 'PKGE')
write_if_exists(sf, pkg, 'PKGV')
write_if_exists(sf, pkg, 'PKGR')
write_if_exists(sf, pkg, 'LICENSE')
write_if_exists(sf, pkg, 'DESCRIPTION')
write_if_exists(sf, pkg, 'SUMMARY')
write_if_exists(sf, pkg, 'RDEPENDS')
rprov = write_if_exists(sf, pkg, 'RPROVIDES')
write_if_exists(sf, pkg, 'RRECOMMENDS')
write_if_exists(sf, pkg, 'RSUGGESTS')
write_if_exists(sf, pkg, 'RREPLACES')
write_if_exists(sf, pkg, 'RCONFLICTS')
write_if_exists(sf, pkg, 'SECTION')
write_if_exists(sf, pkg, 'PKG')
write_if_exists(sf, pkg, 'ALLOW_EMPTY')
write_if_exists(sf, pkg, 'FILES')
write_if_exists(sf, pkg, 'pkg_postinst')
write_if_exists(sf, pkg, 'pkg_postrm')
write_if_exists(sf, pkg, 'pkg_preinst')
write_if_exists(sf, pkg, 'pkg_prerm')
write_if_exists(sf, pkg, 'FILERPROVIDESFLIST')
write_if_exists(sf, pkg, 'FILES_INFO')
for dfile in (d.getVar('FILERPROVIDESFLIST_' + pkg) or "").split():
write_if_exists(sf, pkg, 'FILERPROVIDES_' + dfile)
write_if_exists(sf, pkg, 'FILERDEPENDSFLIST')
for dfile in (d.getVar('FILERDEPENDSFLIST_' + pkg) or "").split():
write_if_exists(sf, pkg, 'FILERDEPENDS_' + dfile)
sf.write('%s_%s: %d\n' % ('PKGSIZE', pkg, total_size))
sf.close()
# Symlinks needed for rprovides lookup
if rprov:
for p in rprov.strip().split():
subdata_sym = pkgdatadir + "/runtime-rprovides/%s/%s" % (p, pkg)
bb.utils.mkdirhier(os.path.dirname(subdata_sym))
oe.path.symlink("../../runtime/%s" % pkg, subdata_sym, True)
allow_empty = d.getVar('ALLOW_EMPTY_%s' % pkg)
if not allow_empty:
allow_empty = d.getVar('ALLOW_EMPTY')
root = "%s/%s" % (pkgdest, pkg)
os.chdir(root)
g = glob('*')
if g or allow_empty == "1":
# Symlinks needed for reverse lookups (from the final package name)
subdata_sym = pkgdatadir + "/runtime-reverse/%s" % pkgval
oe.path.symlink("../runtime/%s" % pkg, subdata_sym, True)
packagedfile = pkgdatadir + '/runtime/%s.packaged' % pkg
open(packagedfile, 'w').close()
if bb.data.inherits_class('kernel', d) or bb.data.inherits_class('module-base', d):
write_extra_runtime_pkgs(variants, packages, pkgdatadir)
if bb.data.inherits_class('allarch', d) and not bb.data.inherits_class('packagegroup', d):
write_extra_runtime_pkgs(global_variants, packages, pkgdatadir)
bb.utils.unlockfile(lf)
}
emit_pkgdata[dirs] = "${PKGDESTWORK}/runtime ${PKGDESTWORK}/runtime-reverse ${PKGDESTWORK}/runtime-rprovides"
ldconfig_postinst_fragment() {
if [ x"$D" = "x" ]; then
if [ -x /sbin/ldconfig ]; then /sbin/ldconfig ; fi
fi
}
RPMDEPS = "${STAGING_LIBDIR_NATIVE}/rpm/rpmdeps --rcfile ${STAGING_LIBDIR_NATIVE}/rpm/rpmrc --macros ${STAGING_LIBDIR_NATIVE}/rpm/macros --define '_rpmconfigdir ${STAGING_LIBDIR_NATIVE}/rpm/'"
# Collect perfile run-time dependency metadata
# Output:
# FILERPROVIDESFLIST_pkg - list of all files w/ provides
# FILERPROVIDES_filepath_pkg - per file provides
#
# FILERDEPENDSFLIST_pkg - list of all files w/ deps
# FILERDEPENDS_filepath_pkg - per file dep
python package_do_filedeps() {
if d.getVar('SKIP_FILEDEPS') == '1':
return
pkgdest = d.getVar('PKGDEST')
packages = d.getVar('PACKAGES')
rpmdeps = d.getVar('RPMDEPS')
magic = d.expand("${STAGING_DIR_NATIVE}${datadir_native}/misc/magic.mgc")
def chunks(files, n):
return [files[i:i+n] for i in range(0, len(files), n)]
pkglist = []
for pkg in packages.split():
if d.getVar('SKIP_FILEDEPS_' + pkg) == '1':
continue
if pkg.endswith('-dbg') or pkg.endswith('-doc') or pkg.find('-locale-') != -1 or pkg.find('-localedata-') != -1 or pkg.find('-gconv-') != -1 or pkg.find('-charmap-') != -1 or pkg.startswith('kernel-module-'):
continue
for files in chunks(pkgfiles[pkg], 100):
pkglist.append((pkg, files, rpmdeps, pkgdest, magic))
processed = oe.utils.multiprocess_exec( pkglist, oe.package.filedeprunner)
provides_files = {}
requires_files = {}
for result in processed:
(pkg, provides, requires) = result
if pkg not in provides_files:
provides_files[pkg] = []
if pkg not in requires_files:
requires_files[pkg] = []
for file in provides:
provides_files[pkg].append(file)
key = "FILERPROVIDES_" + file + "_" + pkg
d.setVar(key, " ".join(provides[file]))
for file in requires:
requires_files[pkg].append(file)
key = "FILERDEPENDS_" + file + "_" + pkg
d.setVar(key, " ".join(requires[file]))
for pkg in requires_files:
d.setVar("FILERDEPENDSFLIST_" + pkg, " ".join(requires_files[pkg]))
for pkg in provides_files:
d.setVar("FILERPROVIDESFLIST_" + pkg, " ".join(provides_files[pkg]))
}
SHLIBSDIRS = "${PKGDATA_DIR}/${MLPREFIX}shlibs2"
SHLIBSWORKDIR = "${PKGDESTWORK}/${MLPREFIX}shlibs2"
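# package_do_shlibs below records each package's provided libraries in a
# SHLIBSWORKDIR/<pkg>.list file, one "soname:path:version" entry per line,
# e.g. (illustrative): libfoo.so.1:/usr/lib:1.2.3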
python package_do_shlibs() {
import re, pipes
import subprocess as sub
exclude_shlibs = d.getVar('EXCLUDE_FROM_SHLIBS', False)
if exclude_shlibs:
bb.note("not generating shlibs")
return
lib_re = re.compile("^.*\.so")
libdir_re = re.compile(".*/%s$" % d.getVar('baselib'))
packages = d.getVar('PACKAGES')
targetos = d.getVar('TARGET_OS')
workdir = d.getVar('WORKDIR')
ver = d.getVar('PKGV')
if not ver:
msg = "PKGV not defined"
package_qa_handle_error("pkgv-undefined", msg, d)
return
pkgdest = d.getVar('PKGDEST')
shlibswork_dir = d.getVar('SHLIBSWORKDIR')
    # Take an exclusive lock (lockfile defaults to exclusive) since the shlibs
    # provider data is updated below
lf = bb.utils.lockfile(d.expand("${PACKAGELOCK}"))
def linux_so(file, needed, sonames, renames, pkgver):
needs_ldconfig = False
ldir = os.path.dirname(file).replace(pkgdest + "/" + pkg, '')
cmd = d.getVar('OBJDUMP') + " -p " + pipes.quote(file) + " 2>/dev/null"
fd = os.popen(cmd)
lines = fd.readlines()
fd.close()
rpath = []
for l in lines:
m = re.match("\s+RPATH\s+([^\s]*)", l)
if m:
rpaths = m.group(1).replace("$ORIGIN", ldir).split(":")
rpath = list(map(os.path.normpath, rpaths))
for l in lines:
m = re.match("\s+NEEDED\s+([^\s]*)", l)
if m:
dep = m.group(1)
if dep not in needed[pkg]:
needed[pkg].append((dep, file, rpath))
m = re.match("\s+SONAME\s+([^\s]*)", l)
if m:
this_soname = m.group(1)
prov = (this_soname, ldir, pkgver)
if not prov in sonames:
2012-07-11 17:33:43 +00:00
# if library is private (only used by package) then do not build shlib for it
if not private_libs or this_soname not in private_libs:
sonames.append(prov)
if libdir_re.match(os.path.dirname(file)):
needs_ldconfig = True
if snap_symlinks and (os.path.basename(file) != this_soname):
renames.append((file, os.path.join(os.path.dirname(file), this_soname)))
return needs_ldconfig
def darwin_so(file, needed, sonames, renames, pkgver):
if not os.path.exists(file):
return
ldir = os.path.dirname(file).replace(pkgdest + "/" + pkg, '')
def get_combinations(base):
#
# Given a base library name, find all combinations of this split by "." and "-"
#
combos = []
options = base.split(".")
for i in range(1, len(options) + 1):
combos.append(".".join(options[0:i]))
options = base.split("-")
for i in range(1, len(options) + 1):
combos.append("-".join(options[0:i]))
return combos
if (file.endswith('.dylib') or file.endswith('.so')) and not pkg.endswith('-dev') and not pkg.endswith('-dbg'):
# Drop suffix
name = os.path.basename(file).rsplit(".",1)[0]
# Find all combinations
combos = get_combinations(name)
for combo in combos:
if not combo in sonames:
prov = (combo, ldir, pkgver)
sonames.append(prov)
if file.endswith('.dylib') or file.endswith('.so'):
rpath = []
p = sub.Popen([d.expand("${HOST_PREFIX}otool"), '-l', file],stdout=sub.PIPE,stderr=sub.PIPE)
out, err = p.communicate()
# If returned successfully, process stdout for results
if p.returncode == 0:
                for l in out.decode("utf-8").split("\n"):
l = l.strip()
if l.startswith('path '):
rpath.append(l.split()[1])
p = sub.Popen([d.expand("${HOST_PREFIX}otool"), '-L', file],stdout=sub.PIPE,stderr=sub.PIPE)
out, err = p.communicate()
# If returned successfully, process stdout for results
2014-08-04 15:22:49 +00:00
if p.returncode == 0:
                for l in out.decode("utf-8").split("\n"):
l = l.strip()
if not l or l.endswith(":"):
continue
if "is not an object file" in l:
continue
name = os.path.basename(l.split()[0]).rsplit(".", 1)[0]
if name and name not in needed[pkg]:
needed[pkg].append((name, file, []))
def mingw_dll(file, needed, sonames, renames, pkgver):
if not os.path.exists(file):
return
if file.endswith(".dll"):
# assume all dlls are shared objects provided by the package
sonames.append((os.path.basename(file), os.path.dirname(file).replace(pkgdest + "/" + pkg, ''), pkgver))
if (file.endswith(".dll") or file.endswith(".exe")):
# use objdump to search for "DLL Name: .*\.dll"
p = sub.Popen([d.expand("${HOST_PREFIX}objdump"), "-p", file], stdout = sub.PIPE, stderr= sub.PIPE)
out, err = p.communicate()
# process the output, grabbing all .dll names
if p.returncode == 0:
for m in re.finditer("DLL Name: (.*?\.dll)$", out.decode(), re.MULTILINE | re.IGNORECASE):
dllname = m.group(1)
if dllname:
needed[pkg].append((dllname, file, []))
if d.getVar('PACKAGE_SNAP_LIB_SYMLINKS') == "1":
snap_symlinks = True
else:
snap_symlinks = False
use_ldconfig = bb.utils.contains('DISTRO_FEATURES', 'ldconfig', True, False, d)
needed = {}
shlib_provider = oe.package.read_shlib_providers(d)
for pkg in packages.split():
private_libs = d.getVar('PRIVATE_LIBS_' + pkg) or d.getVar('PRIVATE_LIBS') or ""
private_libs = private_libs.split()
needs_ldconfig = False
bb.debug(2, "calculating shlib provides for %s" % pkg)
pkgver = d.getVar('PKGV_' + pkg)
if not pkgver:
pkgver = d.getVar('PV_' + pkg)
if not pkgver:
pkgver = ver
needed[pkg] = []
sonames = list()
renames = list()
for file in pkgfiles[pkg]:
soname = None
if cpath.islink(file):
continue
if targetos == "darwin" or targetos == "darwin8":
darwin_so(file, needed, sonames, renames, pkgver)
elif targetos.startswith("mingw"):
mingw_dll(file, needed, sonames, renames, pkgver)
elif os.access(file, os.X_OK) or lib_re.match(file):
ldconfig = linux_so(file, needed, sonames, renames, pkgver)
needs_ldconfig = needs_ldconfig or ldconfig
for (old, new) in renames:
bb.note("Renaming %s to %s" % (old, new))
os.rename(old, new)
pkgfiles[pkg].remove(old)
shlibs_file = os.path.join(shlibswork_dir, pkg + ".list")
if len(sonames):
fd = open(shlibs_file, 'w')
for s in sonames:
if s[0] in shlib_provider and s[1] in shlib_provider[s[0]]:
(old_pkg, old_pkgver) = shlib_provider[s[0]][s[1]]
if old_pkg != pkg:
bb.warn('%s-%s was registered as shlib provider for %s, changing it to %s-%s because it was built later' % (old_pkg, old_pkgver, s[0], pkg, pkgver))
bb.debug(1, 'registering %s-%s as shlib provider for %s' % (pkg, pkgver, s[0]))
fd.write(s[0] + ':' + s[1] + ':' + s[2] + '\n')
if s[0] not in shlib_provider:
shlib_provider[s[0]] = {}
shlib_provider[s[0]][s[1]] = (pkg, pkgver)
fd.close()
if needs_ldconfig and use_ldconfig:
bb.debug(1, 'adding ldconfig call to postinst for %s' % pkg)
postinst = d.getVar('pkg_postinst_%s' % pkg)
if not postinst:
postinst = '#!/bin/sh\n'
postinst += d.getVar('ldconfig_postinst_fragment')
d.setVar('pkg_postinst_%s' % pkg, postinst)
bb.debug(1, 'LIBNAMES: pkg %s sonames %s' % (pkg, sonames))
bb.utils.unlockfile(lf)
assumed_libs = d.getVar('ASSUME_SHLIBS')
if assumed_libs:
libdir = d.getVar("libdir")
for e in assumed_libs.split():
l, dep_pkg = e.split(":")
lib_ver = None
dep_pkg = dep_pkg.rsplit("_", 1)
if len(dep_pkg) == 2:
lib_ver = dep_pkg[1]
dep_pkg = dep_pkg[0]
if l not in shlib_provider:
shlib_provider[l] = {}
shlib_provider[l][libdir] = (dep_pkg, lib_ver)
libsearchpath = [d.getVar('libdir'), d.getVar('base_libdir')]
for pkg in packages.split():
bb.debug(2, "calculating shlib requirements for %s" % pkg)
deps = list()
for n in needed[pkg]:
            # If n is a private library, don't try to find a provider for it;
            # this could cause problems if some abc.bb provides a private
            # /opt/abc/lib/libfoo.so.1 and also contains /usr/bin/abc depending
            # on the system library libfoo.so.1, but skipping it is still a
            # better alternative than providing our own version and then adding
            # a runtime dependency on the same system library.
if private_libs and n[0] in private_libs:
bb.debug(2, '%s: Dependency %s covered by PRIVATE_LIBS' % (pkg, n[0]))
continue
if n[0] in shlib_provider.keys():
shlib_provider_path = []
for k in shlib_provider[n[0]].keys():
shlib_provider_path.append(k)
match = None
for p in n[2] + shlib_provider_path + libsearchpath:
if p in shlib_provider[n[0]]:
match = p
break
if match:
(dep_pkg, ver_needed) = shlib_provider[n[0]][match]
bb.debug(2, '%s: Dependency %s requires package %s (used by files: %s)' % (pkg, n[0], dep_pkg, n[1]))
if dep_pkg == pkg:
continue
if ver_needed:
dep = "%s (>= %s)" % (dep_pkg, ver_needed)
else:
dep = dep_pkg
if not dep in deps:
deps.append(dep)
continue
bb.note("Couldn't find shared library provider for %s, used by files: %s" % (n[0], n[1]))
deps_file = os.path.join(pkgdest, pkg + ".shlibdeps")
if os.path.exists(deps_file):
os.remove(deps_file)
if len(deps):
fd = open(deps_file, 'w')
for dep in deps:
fd.write(dep + '\n')
fd.close()
}
python package_do_pkgconfig () {
import re
packages = d.getVar('PACKAGES')
workdir = d.getVar('WORKDIR')
pkgdest = d.getVar('PKGDEST')
shlibs_dirs = d.getVar('SHLIBSDIRS').split()
shlibswork_dir = d.getVar('SHLIBSWORKDIR')
pc_re = re.compile('(.*)\.pc$')
var_re = re.compile('(.*)=(.*)')
field_re = re.compile('(.*): (.*)')
pkgconfig_provided = {}
pkgconfig_needed = {}
for pkg in packages.split():
pkgconfig_provided[pkg] = []
pkgconfig_needed[pkg] = []
for file in pkgfiles[pkg]:
m = pc_re.match(file)
if m:
pd = bb.data.init()
name = m.group(1)
pkgconfig_provided[pkg].append(name)
if not os.access(file, os.R_OK):
continue
f = open(file, 'r')
lines = f.readlines()
f.close()
for l in lines:
m = var_re.match(l)
if m:
name = m.group(1)
val = m.group(2)
pd.setVar(name, pd.expand(val))
continue
m = field_re.match(l)
if m:
hdr = m.group(1)
exp = pd.expand(m.group(2))
if hdr == 'Requires':
pkgconfig_needed[pkg] += exp.replace(',', ' ').split()
    # Take an exclusive lock (lockfile defaults to exclusive) since the
    # pkgconfig lists are written below
lf = bb.utils.lockfile(d.expand("${PACKAGELOCK}"))
for pkg in packages.split():
pkgs_file = os.path.join(shlibswork_dir, pkg + ".pclist")
if pkgconfig_provided[pkg] != []:
f = open(pkgs_file, 'w')
for p in pkgconfig_provided[pkg]:
f.write('%s\n' % p)
f.close()
# Go from least to most specific since the last one found wins
for dir in reversed(shlibs_dirs):
if not os.path.exists(dir):
continue
for file in os.listdir(dir):
m = re.match('^(.*)\.pclist$', file)
if m:
pkg = m.group(1)
fd = open(os.path.join(dir, file))
lines = fd.readlines()
fd.close()
pkgconfig_provided[pkg] = []
for l in lines:
pkgconfig_provided[pkg].append(l.rstrip())
for pkg in packages.split():
deps = []
for n in pkgconfig_needed[pkg]:
found = False
for k in pkgconfig_provided.keys():
if n in pkgconfig_provided[k]:
if k != pkg and not (k in deps):
deps.append(k)
found = True
if found == False:
bb.note("couldn't find pkgconfig module '%s' in any package" % n)
deps_file = os.path.join(pkgdest, pkg + ".pcdeps")
if len(deps):
fd = open(deps_file, 'w')
for dep in deps:
fd.write(dep + '\n')
fd.close()
bb.utils.unlockfile(lf)
}
def read_libdep_files(d):
pkglibdeps = {}
packages = d.getVar('PACKAGES').split()
for pkg in packages:
pkglibdeps[pkg] = {}
for extension in ".shlibdeps", ".pcdeps", ".clilibdeps":
depsfile = d.expand("${PKGDEST}/" + pkg + extension)
if os.access(depsfile, os.R_OK):
fd = open(depsfile)
lines = fd.readlines()
fd.close()
for l in lines:
                    l = l.rstrip()
deps = bb.utils.explode_dep_versions2(l)
for dep in deps:
if not dep in pkglibdeps[pkg]:
pkglibdeps[pkg][dep] = deps[dep]
return pkglibdeps
python read_shlibdeps () {
pkglibdeps = read_libdep_files(d)
packages = d.getVar('PACKAGES').split()
for pkg in packages:
rdepends = bb.utils.explode_dep_versions2(d.getVar('RDEPENDS_' + pkg) or "")
for dep in pkglibdeps[pkg]:
# Add the dep if it's not already there, or if no comparison is set
if dep not in rdepends:
rdepends[dep] = []
for v in pkglibdeps[pkg][dep]:
if v not in rdepends[dep]:
rdepends[dep].append(v)
d.setVar('RDEPENDS_' + pkg, bb.utils.join_deps(rdepends, commasep=False))
}
python package_depchains() {
"""
For a given set of prefix and postfix modifiers, make those packages
RRECOMMENDS on the corresponding packages for its RDEPENDS.
Example: If package A depends upon package B, and A's .bb emits an
A-dev package, this would make A-dev Recommends: B-dev.
If only one of a given suffix is specified, it will take the RRECOMMENDS
based on the RDEPENDS of *all* other packages. If more than one of a given
    suffix is specified, it will only use the RDEPENDS of the single parent
package.
"""
packages = d.getVar('PACKAGES')
postfixes = (d.getVar('DEPCHAIN_POST') or '').split()
prefixes = (d.getVar('DEPCHAIN_PRE') or '').split()
def pkg_adddeprrecs(pkg, base, suffix, getname, depends, d):
#bb.note('depends for %s is %s' % (base, depends))
rreclist = bb.utils.explode_dep_versions2(d.getVar('RRECOMMENDS_' + pkg) or "")
for depend in depends:
if depend.find('-native') != -1 or depend.find('-cross') != -1 or depend.startswith('virtual/'):
#bb.note("Skipping %s" % depend)
continue
if depend.endswith('-dev'):
depend = depend[:-4]
if depend.endswith('-dbg'):
depend = depend[:-4]
pkgname = getname(depend, suffix)
#bb.note("Adding %s for %s" % (pkgname, depend))
if pkgname not in rreclist and pkgname != pkg:
rreclist[pkgname] = []
#bb.note('setting: RRECOMMENDS_%s=%s' % (pkg, ' '.join(rreclist)))
d.setVar('RRECOMMENDS_%s' % pkg, bb.utils.join_deps(rreclist, commasep=False))
def pkg_addrrecs(pkg, base, suffix, getname, rdepends, d):
#bb.note('rdepends for %s is %s' % (base, rdepends))
rreclist = bb.utils.explode_dep_versions2(d.getVar('RRECOMMENDS_' + pkg) or "")
for depend in rdepends:
if depend.find('virtual-locale-') != -1:
#bb.note("Skipping %s" % depend)
continue
if depend.endswith('-dev'):
depend = depend[:-4]
if depend.endswith('-dbg'):
depend = depend[:-4]
pkgname = getname(depend, suffix)
#bb.note("Adding %s for %s" % (pkgname, depend))
if pkgname not in rreclist and pkgname != pkg:
rreclist[pkgname] = []
#bb.note('setting: RRECOMMENDS_%s=%s' % (pkg, ' '.join(rreclist)))
d.setVar('RRECOMMENDS_%s' % pkg, bb.utils.join_deps(rreclist, commasep=False))
def add_dep(list, dep):
if dep not in list:
list.append(dep)
depends = []
for dep in bb.utils.explode_deps(d.getVar('DEPENDS') or ""):
add_dep(depends, dep)
rdepends = []
for pkg in packages.split():
for dep in bb.utils.explode_deps(d.getVar('RDEPENDS_' + pkg) or ""):
add_dep(rdepends, dep)
#bb.note('rdepends is %s' % rdepends)
def post_getname(name, suffix):
return '%s%s' % (name, suffix)
def pre_getname(name, suffix):
return '%s%s' % (suffix, name)
pkgs = {}
for pkg in packages.split():
for postfix in postfixes:
if pkg.endswith(postfix):
if not postfix in pkgs:
pkgs[postfix] = {}
pkgs[postfix][pkg] = (pkg[:-len(postfix)], post_getname)
for prefix in prefixes:
if pkg.startswith(prefix):
if not prefix in pkgs:
pkgs[prefix] = {}
# Strip the prefix from the front of the name to get the base package
pkgs[prefix][pkg] = (pkg[len(prefix):], pre_getname)
if "-dbg" in pkgs:
pkglibdeps = read_libdep_files(d)
pkglibdeplist = []
for pkg in pkglibdeps:
for k in pkglibdeps[pkg]:
add_dep(pkglibdeplist, k)
dbgdefaultdeps = ((d.getVar('DEPCHAIN_DBGDEFAULTDEPS') == '1') or (bb.data.inherits_class('packagegroup', d)))
for suffix in pkgs:
for pkg in pkgs[suffix]:
if d.getVarFlag('RRECOMMENDS_' + pkg, 'nodeprrecs'):
continue
(base, func) = pkgs[suffix][pkg]
if suffix == "-dev":
pkg_adddeprrecs(pkg, base, suffix, func, depends, d)
elif suffix == "-dbg":
if not dbgdefaultdeps:
pkg_addrrecs(pkg, base, suffix, func, pkglibdeplist, d)
continue
if len(pkgs[suffix]) == 1:
pkg_addrrecs(pkg, base, suffix, func, rdepends, d)
else:
rdeps = []
for dep in bb.utils.explode_deps(d.getVar('RDEPENDS_' + base) or ""):
add_dep(rdeps, dep)
pkg_addrrecs(pkg, base, suffix, func, rdeps, d)
}
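# The grouping and name-derivation logic of package_depchains() can be
# sketched in plain Python.  The names companion_name and group_by_postfix
# are illustrative only; the real code uses post_getname/pre_getname and a
# local pkgs dict inside the function above.

```python
def companion_name(name, suffix):
    # mirrors post_getname(): append the modifier to the base name
    return name + suffix

def group_by_postfix(packages, postfixes):
    # mirrors the grouping loop: bucket packages by the DEPCHAIN modifier
    # they carry, remembering the base name and the name-builder function
    pkgs = {}
    for pkg in packages:
        for postfix in postfixes:
            if pkg.endswith(postfix):
                pkgs.setdefault(postfix, {})[pkg] = (pkg[:-len(postfix)], companion_name)
    return pkgs

groups = group_by_postfix(["foo", "foo-dev", "foo-dbg"], ["-dev", "-dbg"])
base, getname = groups["-dev"]["foo-dev"]
# base == "foo"; if foo RDEPENDS on "bar", foo-dev gains an RRECOMMENDS on
# getname("bar", "-dev") == "bar-dev"
```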
# Since bitbake can't determine which variables are accessed during package
# iteration, we need to list them here:
PACKAGEVARS = "FILES RDEPENDS RRECOMMENDS SUMMARY DESCRIPTION RSUGGESTS RPROVIDES RCONFLICTS PKG ALLOW_EMPTY pkg_postinst pkg_postrm INITSCRIPT_NAME INITSCRIPT_PARAMS DEBIAN_NOAUTONAME ALTERNATIVE PKGE PKGV PKGR USERADD_PARAM GROUPADD_PARAM CONFFILES SYSTEMD_SERVICE LICENSE SECTION pkg_preinst pkg_prerm RREPLACES GROUPMEMS_PARAM SYSTEMD_AUTO_ENABLE SKIP_FILEDEPS PRIVATE_LIBS"
def gen_packagevar(d):
ret = []
pkgs = (d.getVar("PACKAGES") or "").split()
vars = (d.getVar("PACKAGEVARS") or "").split()
for p in pkgs:
for v in vars:
ret.append(v + "_" + p)
# Ensure that changes to INCOMPATIBLE_LICENSE cause do_package to re-run
# for affected recipes.
ret.append('LICENSE_EXCLUSION-%s' % p)
return " ".join(ret)
PACKAGE_PREPROCESS_FUNCS ?= ""
# Functions for setting up PKGD
PACKAGEBUILDPKGD ?= " \
perform_packagecopy \
${PACKAGE_PREPROCESS_FUNCS} \
split_and_strip_files \
fixup_perms \
"
# Functions which split PKGD up into separate packages
PACKAGESPLITFUNCS ?= " \
package_do_split_locales \
populate_packages"
# Functions which process metadata based on split packages
PACKAGEFUNCS += " \
package_fixsymlinks \
package_name_hook \
package_do_filedeps \
package_do_shlibs \
package_do_pkgconfig \
read_shlibdeps \
package_depchains \
emit_pkgdata"
python do_package () {
# Change the following version to cause sstate to invalidate the package
# cache. This is useful if an item this class depends on changes in a
# way that the output of this class changes. rpmdeps is a good example
# as any change to rpmdeps requires this to be rerun.
# PACKAGE_BBCLASS_VERSION = "1"
# Init cachedpath
global cpath
cpath = oe.cachedpath.CachedPath()
###########################################################################
# Sanity test the setup
###########################################################################
packages = (d.getVar('PACKAGES') or "").split()
if len(packages) < 1:
bb.debug(1, "No packages to build, skipping do_package")
return
workdir = d.getVar('WORKDIR')
outdir = d.getVar('DEPLOY_DIR')
dest = d.getVar('D')
dvar = d.getVar('PKGD')
pn = d.getVar('PN')
if not workdir or not outdir or not dest or not dvar or not pn:
msg = "WORKDIR, DEPLOY_DIR, D, PN and PKGD all must be defined, unable to package"
package_qa_handle_error("var-undefined", msg, d)
return
bb.build.exec_func("package_get_auto_pr", d)
###########################################################################
# Optimisations
###########################################################################
# Continually expanding complex expressions is inefficient, particularly
# when we write to the datastore and invalidate the expansion cache. This
# code pre-expands some frequently used variables
def expandVar(x, d):
d.setVar(x, d.getVar(x))
for x in 'PN', 'PV', 'BPN', 'TARGET_SYS', 'EXTENDPRAUTO':
expandVar(x, d)
###########################################################################
# Setup PKGD (from D)
###########################################################################
for f in (d.getVar('PACKAGEBUILDPKGD') or '').split():
bb.build.exec_func(f, d)
###########################################################################
# Split up PKGD into PKGDEST
###########################################################################
cpath = oe.cachedpath.CachedPath()
for f in (d.getVar('PACKAGESPLITFUNCS') or '').split():
bb.build.exec_func(f, d)
###########################################################################
# Process PKGDEST
###########################################################################
# Build global list of files in each split package
global pkgfiles
pkgfiles = {}
packages = d.getVar('PACKAGES').split()
pkgdest = d.getVar('PKGDEST')
for pkg in packages:
pkgfiles[pkg] = []
for walkroot, dirs, files in cpath.walk(pkgdest + "/" + pkg):
for file in files:
pkgfiles[pkg].append(walkroot + os.sep + file)
for f in (d.getVar('PACKAGEFUNCS') or '').split():
bb.build.exec_func(f, d)
qa_sane = d.getVar("QA_SANE")
if not qa_sane:
bb.fatal("Fatal QA errors found, failing task.")
}
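# Each phase of do_package is driven the same way: a space-separated list of
# function names is split and each is executed in order.  The sketch below
# mimics that dispatch with a plain dict in place of the datastore and
# bb.build.exec_func(); run_phase and registry are illustrative names only.

```python
def run_phase(funclist, registry):
    # mirrors: for f in (d.getVar('PACKAGEFUNCS') or '').split():
    #              bb.build.exec_func(f, d)
    for name in (funclist or "").split():
        registry[name]()

calls = []
registry = {
    "perform_packagecopy": lambda: calls.append("copy"),
    "fixup_perms": lambda: calls.append("perms"),
}
run_phase("perform_packagecopy fixup_perms", registry)
# calls == ["copy", "perms"]
```

# Because the lists are plain variables, recipes and other classes can
# prepend or append their own steps without modifying this class.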
do_package[dirs] = "${SHLIBSWORKDIR} ${PKGDESTWORK} ${D}"
do_package[vardeps] += "${PACKAGEBUILDPKGD} ${PACKAGESPLITFUNCS} ${PACKAGEFUNCS} ${@gen_packagevar(d)}"
addtask package after do_install
PACKAGELOCK = "${STAGING_DIR}/package-output.lock"
SSTATETASKS += "do_package"
do_package[cleandirs] = "${PKGDEST} ${PKGDESTWORK}"
do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST} ${PKGDESTWORK}"
do_package[sstate-lockfile-shared] = "${PACKAGELOCK}"
do_package_setscene[dirs] = "${STAGING_DIR}"
python do_package_setscene () {
sstate_setscene(d)
}
addtask do_package_setscene
do_packagedata () {
:
}
addtask packagedata before do_build after do_package
SSTATETASKS += "do_packagedata"
do_packagedata[sstate-inputdirs] = "${PKGDESTWORK}"
do_packagedata[sstate-outputdirs] = "${PKGDATA_DIR}"
do_packagedata[sstate-lockfile-shared] = "${PACKAGELOCK}"
do_packagedata[stamp-extra-info] = "${MACHINE}"
python do_packagedata_setscene () {
sstate_setscene(d)
}
addtask do_packagedata_setscene
#
# Helper functions for the package writing classes
#
def mapping_rename_hook(d):
"""
Rewrite variables to account for package renaming in things
like debian.bbclass or manual PKG variable name changes
"""
pkg = d.getVar("PKG")
runtime_mapping_rename("RDEPENDS", pkg, d)
runtime_mapping_rename("RRECOMMENDS", pkg, d)
runtime_mapping_rename("RSUGGESTS", pkg, d)