Technical Details This chapter provides technical details for various parts of the Yocto Project. Currently, topics include Yocto Project components, cross-toolchain generation, shared state (sstate) cache, x32, Wayland support, and Licenses.
Yocto Project Components The BitBake task executor together with various types of configuration files form the OpenEmbedded Core. This section overviews these components by describing their use and how they interact. BitBake handles the parsing and execution of the data files. The data itself is of various types: Recipes: Provides details about particular pieces of software. Class Data: Abstracts common build information (e.g. how to build a Linux kernel). Configuration Data: Defines machine-specific settings, policy decisions, and so forth. Configuration data acts as the glue to bind everything together. BitBake knows how to combine multiple data sources together and refers to each data source as a layer. For information on layers, see the "Understanding and Creating Layers" section of the Yocto Project Development Manual. Following are some brief details on these core components. For additional information on how these components interact during a build, see the "A Closer Look at the Yocto Project Development Environment" Chapter.
BitBake BitBake is the tool at the heart of the OpenEmbedded build system and is responsible for parsing the Metadata, generating a list of tasks from it, and then executing those tasks. This section briefly introduces BitBake. If you want more information on BitBake, see the BitBake User Manual. To see a list of the options BitBake supports, use either of the following commands: $ bitbake -h $ bitbake --help The most common usage for BitBake is bitbake packagename, where packagename is the name of the package you want to build (referred to as the "target" in this manual). The target often equates to the first part of a recipe's filename (e.g. "foo" for a recipe named foo_1.3.0-r0.bb). So, to process the matchbox-desktop_1.2.3.bb recipe file, you might type the following: $ bitbake matchbox-desktop Several different versions of matchbox-desktop might exist. BitBake chooses the one selected by the distribution configuration. You can get more details about how BitBake chooses between different target versions and providers in the "Preferences" section of the BitBake User Manual. BitBake also tries to execute any dependent tasks first. So for example, before building matchbox-desktop, BitBake would build a cross compiler and glibc if they had not already been built. A useful BitBake option to consider is the -k or --continue option. This option instructs BitBake to try and continue processing the job as long as possible even after encountering an error. When an error occurs, the target that failed and those that depend on it cannot be remade. However, when you use this option other dependencies can still be processed.
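For example, the following command (the image target is just an illustration) keeps building as much as possible after an error, which can be useful for surfacing several failures in a single run:
$ bitbake -k core-image-minimal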
Metadata (Recipes) Files that have the .bb suffix are "recipes" files. In general, a recipe contains information about a single piece of software. This information includes the location from which to download the unaltered source, any source patches to be applied to that source (if needed), which special configuration options to apply, how to compile the source files, and how to package the compiled output. The term "package" is sometimes used to refer to recipes. However, since the word "package" is used for the packaged output from the OpenEmbedded build system (i.e. .ipk or .deb files), this document avoids using the term "package" when referring to recipes.
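The following minimal, hypothetical recipe sketch illustrates the kinds of information a recipe carries; the project name, URL, patch, and checksum values are placeholders rather than a real recipe:
SUMMARY = "Example application from a hypothetical upstream project"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx"
# Where to fetch the unaltered source and which local patch to apply
SRC_URI = "http://example.com/foo-${PV}.tar.gz \
           file://fix-build.patch"
# Let the autotools class handle configuration, compilation, and installation
inherit autotools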
Classes Class files (.bbclass) contain information that is useful to share between Metadata files. An example is the autotools class, which contains common settings for any application that Autotools uses. The "Classes" chapter provides details about classes and how to use them.
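As an illustration only (this is not an actual OE-Core class), a class file simply gathers settings and functions that any recipe can pull in with an inherit statement:
# example-notes.bbclass (hypothetical)
EXAMPLE_NOTES_DIR ?= "${datadir}/example-notes"
do_install_append () {
    # Create a shared documentation directory in every recipe that inherits this class
    install -d ${D}${EXAMPLE_NOTES_DIR}
}
A recipe would then use it simply by adding "inherit example-notes".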
Configuration The configuration files (.conf) define various configuration variables that govern the OpenEmbedded build process. These files fall into several areas that define machine configuration options, distribution configuration options, compiler tuning options, general common configuration options, and user configuration options in local.conf, which is found in the Build Directory.
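For example, a user's local.conf commonly sets values such as the following; the specific values shown here are only illustrative of the kinds of options involved:
MACHINE ?= "qemux86-64"
DISTRO ?= "poky"
PACKAGE_CLASSES ?= "package_ipk"
BB_NUMBER_THREADS ?= "4"
PARALLEL_MAKE ?= "-j 4"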
Cross-Development Toolchain Generation The Yocto Project does most of the work for you when it comes to creating cross-development toolchains. This section provides some technical background on how cross-development toolchains are created and used. For more information on toolchains, you can also see the Yocto Project Software Development Kit (SDK) Developer's Guide. In the Yocto Project development environment, cross-development toolchains are used to build the image and applications that run on the target hardware. With just a few commands, the OpenEmbedded build system creates these necessary toolchains for you. The following figure shows a high-level build environment regarding toolchain construction and use. Most of the work occurs on the Build Host. This is the machine used to build images and generally work within the Yocto Project environment. When you run BitBake to create an image, the OpenEmbedded build system uses the host gcc compiler to bootstrap a cross-compiler named gcc-cross. The gcc-cross compiler is what BitBake uses to compile source files when creating the target image. You can think of gcc-cross simply as an automatically generated cross-compiler that is used internally within BitBake only. The extensible SDK does not use gcc-cross-canadian since this SDK ships a copy of the OpenEmbedded build system and the sysroot within it contains gcc-cross. The chain of events that occurs when gcc-cross is bootstrapped is as follows: gcc -> binutils-cross -> gcc-cross-initial -> linux-libc-headers -> glibc-initial -> glibc -> gcc-cross -> gcc-runtime gcc: The build host's GNU Compiler Collection (GCC). binutils-cross: The bare minimum binary utilities needed in order to run the gcc-cross-initial phase of the bootstrap operation. gcc-cross-initial: An early stage of the bootstrap process for creating the cross-compiler. This stage builds enough of gcc-cross, the C library, and other pieces needed to finish building the final cross-compiler in later stages. This tool is a "native" package (i.e. it is designed to run on the build host). linux-libc-headers: Headers needed for the cross-compiler. glibc-initial: An initial version of the Embedded GLIBC needed to bootstrap glibc. gcc-cross: The final stage of the bootstrap process for the cross-compiler. This stage results in the actual cross-compiler that BitBake uses when it builds an image for a targeted device. If you are replacing this cross-compiler toolchain with a custom version, you must replace gcc-cross. This tool is also a "native" package (i.e. it is designed to run on the build host). gcc-runtime: Runtime libraries resulting from the toolchain bootstrapping process. This tool produces a binary that consists of the runtime libraries needed for the targeted device. You can use the OpenEmbedded build system to build an installer for the relocatable SDK used to develop applications. When you run the installer, it installs the toolchain, which contains the development tools (e.g. gcc-cross-canadian, binutils-cross-canadian, and other nativesdk-* tools native to the SDK, i.e. native to SDK_ARCH) that you need to cross-compile and test your software. The figure shows the commands you use to easily build out this toolchain. This cross-development toolchain is built to execute on the SDKMACHINE, which might or might not be the same machine as the Build Host.
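For reference, and assuming a configured build environment, either of the following commands is a common way to produce such a toolchain installer (the image name is only an example):
$ bitbake core-image-minimal -c populate_sdk
$ bitbake meta-toolchain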
If your target architecture is supported by the Yocto Project, you can take advantage of pre-built images that ship with the Yocto Project and already contain cross-development toolchain installers. Here is the bootstrap process for the relocatable toolchain: gcc -> binutils-crosssdk -> gcc-crosssdk-initial -> linux-libc-headers -> glibc-initial -> nativesdk-glibc -> gcc-crosssdk -> gcc-cross-canadian gcc: The build host's GNU Compiler Collection (GCC). binutils-crosssdk: The bare minimum binary utilities needed in order to run the gcc-crosssdk-initial phase of the bootstrap operation. gcc-crosssdk-initial: An early stage of the bootstrap process for creating the cross-compiler. This stage builds enough of gcc-crosssdk and supporting pieces so that the final stage of the bootstrap process can produce the finished cross-compiler. This tool is a "native" binary that runs on the build host. linux-libc-headers: Headers needed for the cross-compiler. glibc-initial: An initial version of the Embedded GLIBC needed to bootstrap nativesdk-glibc. nativesdk-glibc: The Embedded GLIBC needed to bootstrap the gcc-crosssdk. gcc-crosssdk: The final stage of the bootstrap process for the relocatable cross-compiler. The gcc-crosssdk is a transitory compiler and never leaves the build host. Its purpose is to help in the bootstrap process to create the eventual, relocatable gcc-cross-canadian compiler. This tool is also a "native" package (i.e. it is designed to run on the build host). gcc-cross-canadian: The final relocatable cross-compiler. When run on the SDKMACHINE, this tool produces executable code that runs on the target device. Only one cross-canadian compiler is produced per architecture because it can be targeted at different processor optimizations using configurations passed to the compiler through the compile commands. This circumvents the need for multiple compilers and thus reduces the size of the toolchains. For information on advantages gained when building a cross-development toolchain installer, see the "Building an SDK Installer" section in the Yocto Project Software Development Kit (SDK) Developer's Guide.
Shared State Cache By design, the OpenEmbedded build system builds everything from scratch unless BitBake can determine that parts do not need to be rebuilt. Fundamentally, building from scratch is attractive as it means all parts are built fresh and there is no possibility of stale data causing problems. When developers hit problems, they typically default back to building from scratch so they know the state of things from the start. Building an image from scratch is both an advantage and a disadvantage to the process. As mentioned in the previous paragraph, building from scratch ensures that everything is current and starts from a known state. However, building from scratch also takes much longer as it generally means rebuilding things that do not necessarily need to be rebuilt. The Yocto Project implements shared state code that supports incremental builds. The implementation of the shared state code answers the following questions that were fundamental roadblocks within the OpenEmbedded incremental build support system: What pieces of the system have changed and what pieces have not changed? How are changed pieces of software removed and replaced? How are pre-built components that do not need to be rebuilt from scratch used when they are available? For the first question, the build system detects changes in the "inputs" to a given task by creating a checksum (or signature) of the task's inputs. If the checksum changes, the system assumes the inputs have changed and the task needs to be rerun. For the second question, the shared state (sstate) code tracks which tasks add which output to the build process. This means the output from a given task can be removed, upgraded or otherwise manipulated. The third question is partly addressed by the solution for the second question assuming the build system can fetch the sstate objects from remote locations and install them if they are deemed to be valid. The OpenEmbedded build system does not maintain PR information as part of the shared state packages. Consequently, considerations exist that affect maintaining shared state feeds. For information on how the OpenEmbedded build system works with packages and can track incrementing PR information, see the "Automatically Incrementing a Binary Package Revision Number" section. The rest of this section goes into detail about the overall incremental build architecture, the checksums (signatures), shared state, and some tips and tricks.
Overall Architecture When determining what parts of the system need to be built, BitBake works on a per-task basis rather than a per-recipe basis. You might wonder why using a per-task basis is preferred over a per-recipe basis. To help explain, consider having the IPK packaging backend enabled and then switching to DEB. In this case, the do_install and do_package task outputs are still valid. However, with a per-recipe approach, the build would not include the .deb files. Consequently, you would have to invalidate the whole build and rerun it. Rerunning everything is not the best solution. Also, in this case, the core must be "taught" much about specific tasks. This methodology does not scale well and does not allow users to easily add new tasks in layers or as external recipes without touching the packaged-staging core.
Checksums (Signatures) The shared state code uses a checksum, which is a unique signature of a task's inputs, to determine if a task needs to be run again. Because it is a change in a task's inputs that triggers a rerun, the process needs to detect all the inputs to a given task. For shell tasks, this turns out to be fairly easy because the build process generates a "run" shell script for each task and it is possible to create a checksum that gives you a good idea of when the task's data changes. To complicate the problem, there are things that should not be included in the checksum. First, there is the actual specific build path of a given task - the WORKDIR. It does not matter if the work directory changes because it should not affect the output for target packages. Also, the build process has the objective of making native or cross packages relocatable. Both native and cross packages run on the build host. However, cross packages generate output for the target architecture. The checksum therefore needs to exclude WORKDIR. The simplistic approach for excluding the work directory is to set WORKDIR to some fixed value and create the checksum for the "run" script. Another problem results from the "run" scripts containing functions that might or might not get called. The incremental build solution contains code that figures out dependencies between shell functions. This code is used to prune the "run" scripts down to the minimum set, thereby alleviating this problem and making the "run" scripts much more readable as a bonus. So far we have solutions for shell scripts. What about Python tasks? The same approach applies even though these tasks are more difficult. The process needs to figure out what variables a Python function accesses and what functions it calls. Again, the incremental build solution contains code that first figures out the variable and function dependencies, and then creates a checksum for the data used as the input to the task. Like the WORKDIR case, situations exist where dependencies should be ignored. For these cases, you can instruct the build process to ignore a dependency by using a line like the following: PACKAGE_ARCHS[vardepsexclude] = "MACHINE" This example ensures that the PACKAGE_ARCHS variable does not depend on the value of MACHINE, even if it does reference it. Equally, there are cases where we need to add dependencies BitBake is not able to find. You can accomplish this by using a line like the following: PACKAGE_ARCHS[vardeps] = "MACHINE" This example explicitly adds the MACHINE variable as a dependency for PACKAGE_ARCHS. Consider a case with in-line Python, for example, where BitBake is not able to figure out dependencies. When running in debug mode (i.e. using -DDD), BitBake produces output when it discovers something for which it cannot figure out dependencies. The Yocto Project team has currently not managed to cover those dependencies in detail and is aware of the need to fix this situation. Thus far, this section has limited discussion to the direct inputs into a task. Information based on direct inputs is referred to as the "basehash" in the code. However, there is still the question of a task's indirect inputs - the things that were already built and present in the Build Directory. The checksum (or signature) for a particular task needs to add the hashes of all the tasks on which the particular task depends. Choosing which dependencies to add is a policy decision. 
However, the effect is to generate a master checksum that combines the basehash and the hashes of the task's dependencies. At the code level, there are a variety of ways in which both the basehash and the dependent task hashes can be influenced. Within the BitBake configuration file, we can give BitBake some extra information to help it construct the basehash. The following statement effectively results in a list of global variable dependency excludes - variables never included in any checksum: BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \ SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL TERM \ USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \ PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \ CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_DISABLE LICENSE_PATH SDKPKGSUFFIX" The previous example excludes WORKDIR since that variable is actually constructed as a path within TMPDIR, which is on the whitelist. The rules for deciding which hashes of dependent tasks to include through dependency chains are more complex and are generally accomplished with a Python function. The code in meta/lib/oe/sstatesig.py shows two examples of this and also illustrates how you can insert your own policy into the system if so desired. This file defines the two basic signature generators OE-Core uses: "OEBasic" and "OEBasicHash". By default, there is a dummy "noop" signature handler enabled in BitBake. This means that behavior is unchanged from previous versions. OE-Core uses the "OEBasicHash" signature handler by default through this setting in the bitbake.conf file: BB_SIGNATURE_HANDLER ?= "OEBasicHash" The "OEBasicHash" BB_SIGNATURE_HANDLER is the same as the "OEBasic" version but adds the task hash to the stamp files. As a result, any Metadata change that alters the task hash automatically causes the task to be run again. This removes the need to bump PR values, and changes to Metadata automatically ripple across the build. It is also worth noting that the end result of these signature generators is to make some dependency and hash information available to the build. This information includes: BB_BASEHASH_task-taskname: The base hashes for each task in the recipe. BB_BASEHASH_filename:taskname: The base hashes for each dependent task. BBHASHDEPS_filename:taskname: The task dependencies for each task. BB_TASKHASH: The hash of the currently running task.
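If you maintain site-specific global variables that should never influence any task checksum, you can extend this whitelist from your own configuration file; the variable name below is purely hypothetical:
BB_HASHBASE_WHITELIST_append = " MY_SITE_MIRROR_ROOT"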
Shared State Checksums and dependencies, as discussed in the previous section, solve half the problem of supporting a shared state. The other part of the problem is being able to use checksum information during the build and being able to reuse or rebuild specific components. The sstate class is a relatively generic implementation of how to "capture" a snapshot of a given task. The idea is that the build process does not care about the source of a task's output. Output could be freshly built or it could be downloaded and unpacked from somewhere - the build process does not need to worry about its origin. There are two types of output. One type is just about creating a directory in WORKDIR. A good example is the output of either do_install or do_package. The other type of output occurs when a set of data is merged into a shared directory tree such as the sysroot. The Yocto Project team has tried to keep the details of the implementation hidden in the sstate class. From a user's perspective, adding shared state wrapping to a task is as simple as this do_deploy example taken from the deploy class: DEPLOYDIR = "${WORKDIR}/deploy-${PN}" SSTATETASKS += "do_deploy" do_deploy[sstate-inputdirs] = "${DEPLOYDIR}" do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}" python do_deploy_setscene () { sstate_setscene(d) } addtask do_deploy_setscene do_deploy[dirs] = "${DEPLOYDIR} ${B}" The following list explains the previous example: Adding "do_deploy" to SSTATETASKS adds some required sstate-related processing, which is implemented in the sstate class, to before and after the do_deploy task. The do_deploy[sstate-inputdirs] = "${DEPLOYDIR}" line declares that do_deploy places its output in ${DEPLOYDIR} when run normally (i.e. when not using the sstate cache). This output becomes the input to the shared state cache. The do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}" line causes the contents of the shared state cache to be copied to ${DEPLOY_DIR_IMAGE}. If do_deploy is not already in the shared state cache or if its input checksum (signature) has changed from when the output was cached, the task will be run to populate the shared state cache, after which the contents of the shared state cache are copied to ${DEPLOY_DIR_IMAGE}. If do_deploy is in the shared state cache and its signature indicates that the cached output is still valid (i.e. if no relevant task inputs have changed), then the contents of the shared state cache will be copied directly to ${DEPLOY_DIR_IMAGE} by the do_deploy_setscene task instead, skipping the do_deploy task. The following task definition is glue logic needed to make the previous settings effective: python do_deploy_setscene () { sstate_setscene(d) } addtask do_deploy_setscene sstate_setscene() takes the flags above as input and accelerates the do_deploy task through the shared state cache if possible. If the task was accelerated, sstate_setscene() returns True. Otherwise, it returns False, and the normal do_deploy task runs. For more information, see the "setscene" section in the BitBake User Manual. The do_deploy[dirs] = "${DEPLOYDIR} ${B}" line creates ${DEPLOYDIR} and ${B} before the do_deploy task runs, and also sets the current working directory of do_deploy to ${B}. For more information, see the "Variable Flags" section in the BitBake User Manual. In cases where sstate-inputdirs and sstate-outputdirs would be the same, you can use sstate-plaindirs.
For example, to preserve the ${PKGD} and ${PKGDEST} output from the do_package task, use the following: do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST}" sstate-inputdirs and sstate-outputdirs can also be used with multiple directories. For example, the following declares PKGDESTWORK and SHLIBSWORKDIR as shared state input directories, which populate the shared state cache, and PKGDATA_DIR and SHLIBSDIR as the corresponding shared state output directories: do_package[sstate-inputdirs] = "${PKGDESTWORK} ${SHLIBSWORKDIR}" do_package[sstate-outputdirs] = "${PKGDATA_DIR} ${SHLIBSDIR}" These methods also include the ability to take a lockfile when manipulating shared state directory structures, for cases where file additions or removals are sensitive: do_package[sstate-lockfile] = "${PACKAGELOCK}" Behind the scenes, the shared state code works by looking in SSTATE_DIR and SSTATE_MIRRORS for shared state files. Here is an example: SSTATE_MIRRORS ?= "\ file://.* http://someserver.tld/share/sstate/PATH;downloadfilename=PATH \n \ file://.* file:///some/local/dir/sstate/PATH" The shared state directory (SSTATE_DIR) is organized into two-character subdirectories, where the subdirectory names are based on the first two characters of the hash. If the shared state directory structure for a mirror has the same structure as SSTATE_DIR, you must specify "PATH" as part of the URI to enable the build system to map to the appropriate subdirectory. The shared state package validity can be detected just by looking at the filename since the filename contains the task checksum (or signature) as described earlier in this section. If a valid shared state package is found, the build process downloads it and uses it to accelerate the task. The build process uses the *_setscene tasks for the task acceleration phase. BitBake goes through this phase before the main execution code and tries to accelerate any tasks for which it can find shared state packages. If a shared state package for a task is available, the shared state package is used. This means the task and any tasks on which it is dependent are not executed. As a real-world example, the aim is that, when building an IPK-based image, only the do_package_write_ipk tasks would have their shared state packages fetched and extracted. Since the sysroot is not used, it would never get extracted. This is another reason why a task-based approach is preferred over a recipe-based approach, which would have to install the output from every task.
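For example, to keep the shared state cache described by SSTATE_DIR in a location outside the Build Directory so that it can be reused across builds, you might add a line like the following to local.conf (the path shown is only illustrative):
SSTATE_DIR ?= "/srv/yocto/sstate-cache"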
Tips and Tricks The code in the build system that supports incremental builds is not simple code. This section presents some tips and tricks that help you work around issues related to shared state code.
Debugging Seeing what metadata went into creating the input signature of a shared state (sstate) task can be a useful debugging aid. This information is available in signature information (siginfo) files in SSTATE_DIR. For information on how to view and interpret information in siginfo files, see the "Viewing Task Variable Dependencies" section.
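For a quick look, the bitbake-dumpsig script prints the contents of a single siginfo file, and bitbake-diffsigs compares two of them to show which variables or dependencies changed between signatures. The file paths below are placeholders:
$ bitbake-dumpsig path/to/task.siginfo
$ bitbake-diffsigs previous.siginfo current.siginfo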
Invalidating Shared State The OpenEmbedded build system uses checksums and the shared state cache to avoid unnecessarily rebuilding tasks. Collectively, this scheme is known as "shared state code." As with all schemes, this one has some drawbacks. It is possible that you could make implicit changes to your code that the checksum calculations do not take into account. These implicit changes affect a task's output but do not trigger the shared state code into rebuilding a recipe. Consider an example in which a tool changes its output. Assume that the output of rpmdeps changes. The result of the change should be that all the package and package_write_rpm shared state cache items become invalid. However, because the change to the output is external to the code and therefore implicit, the associated shared state cache items do not become invalidated. In this case, the build process uses the cached items rather than running the task again. Obviously, these types of implicit changes can cause problems. To avoid these problems during the build, you need to understand the effects of any changes you make. Realize that changes you make directly to a function are automatically factored into the checksum calculation. Thus, these explicit changes invalidate the associated area of shared state cache. However, you need to be aware of any implicit changes that are not obvious changes to the code and could affect the output of a given task. When you identify an implicit change, you can easily take steps to invalidate the cache and force the tasks to run. The steps you can take are as simple as changing a function's comments in the source code. For example, to invalidate package shared state files, change the comment statements of do_package or the comments of one of the functions it calls. Even though the change is purely cosmetic, it causes the checksum to be recalculated and forces the OpenEmbedded build system to run the task again. For an example of a commit that makes a cosmetic change to invalidate shared state, see this commit.
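As an alternative to the cosmetic-change approach described above, you can also explicitly discard a recipe's shared state objects and force its tasks to run again; the recipe name here is just an example:
$ bitbake matchbox-desktop -c cleansstate
$ bitbake matchbox-desktop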
Automatically Added Runtime Dependencies The OpenEmbedded build system automatically adds common types of runtime dependencies between packages, which means that you do not need to explicitly declare these dependencies using RDEPENDS. Three automatic mechanisms exist (shlibdeps, pcdeps, and depchains) that handle shared libraries, package configuration (pkg-config) modules, and -dev and -dbg packages, respectively. For other types of runtime dependencies, you must manually declare the dependencies. shlibdeps: During the do_package task of each recipe, all shared libraries installed by the recipe are located. For each shared library, the package that contains the shared library is registered as providing the shared library. More specifically, the package is registered as providing the soname of the library. The resulting shared-library-to-package mapping is saved globally in PKGDATA_DIR by the do_packagedata task. Simultaneously, all executables and shared libraries installed by the recipe are inspected to see what shared libraries they link against. For each shared library dependency that is found, PKGDATA_DIR is queried to see if some package (likely from a different recipe) contains the shared library. If such a package is found, a runtime dependency is added from the package that depends on the shared library to the package that contains the library. The automatically added runtime dependency also includes a version restriction. This version restriction specifies that at least the current version of the package that provides the shared library must be used, as if "package (>= version)" had been added to RDEPENDS. This forces an upgrade of the package containing the shared library when installing the package that depends on the library, if needed. If you want to avoid a package being registered as providing a particular shared library (e.g. because the library is for internal use only), then add the library to PRIVATE_LIBS inside the package's recipe (see the example following this section). pcdeps: During the do_package task of each recipe, all pkg-config modules (*.pc files) installed by the recipe are located. For each module, the package that contains the module is registered as providing the module. The resulting module-to-package mapping is saved globally in PKGDATA_DIR by the do_packagedata task. Simultaneously, all pkg-config modules installed by the recipe are inspected to see what other pkg-config modules they depend on. A module is seen as depending on another module if it contains a "Requires:" line that specifies the other module. For each module dependency, PKGDATA_DIR is queried to see if some package contains the module. If such a package is found, a runtime dependency is added from the package that depends on the module to the package that contains the module. The pcdeps mechanism most often infers dependencies between -dev packages. depchains: If a package foo depends on a package bar, then foo-dev and foo-dbg are also made to depend on bar-dev and bar-dbg, respectively. Taking the -dev packages as an example, the bar-dev package might provide headers and shared library symlinks needed by foo-dev, which shows the need for a dependency between the packages. The dependencies added by depchains are in the form of RRECOMMENDS. By default, foo-dev also has an RDEPENDS-style dependency on foo, because the default value of RDEPENDS_${PN}-dev (set in bitbake.conf) includes "${PN}".
To ensure that the dependency chain is never broken, -dev and -dbg packages are always generated by default, even if the packages turn out to be empty. See the ALLOW_EMPTY variable for more information. The do_package task depends on the do_packagedata task of each recipe in DEPENDS through use of a [deptask] declaration, which guarantees that the required shared-library/module-to-package mapping information will be available when needed as long as DEPENDS has been correctly set.
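To illustrate the PRIVATE_LIBS mechanism mentioned in the shlibdeps description above, a recipe that ships a library solely for its own internal use might contain a line like the following (the library name is hypothetical):
PRIVATE_LIBS = "libfoo-internal.so.1"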
Fakeroot and Pseudo Some tasks are easier to implement when allowed to perform certain operations that are normally reserved for the root user. For example, the do_install task benefits from being able to set the UID and GID of installed files to arbitrary values. One approach to allowing tasks to perform root-only operations would be to require BitBake to run as root. However, this method is cumbersome and has security issues. The approach that is actually used is to run tasks that benefit from root privileges in a "fake" root environment. Within this environment, the task and its child processes believe that they are running as the root user, and see an internally consistent view of the filesystem. As long as generating the final output (e.g. a package or an image) does not require root privileges, the fact that some earlier steps ran in a fake root environment does not cause problems. The capability to run tasks in a fake root environment is known as "fakeroot", which is derived from the BitBake keyword/variable flag that requests a fake root environment for a task. In current versions of the OpenEmbedded build system, the program that implements fakeroot is known as Pseudo. Pseudo overrides system calls through the LD_PRELOAD mechanism to give the illusion of running as root. To keep track of "fake" file ownership and permissions resulting from operations that require root permissions, an sqlite3 database is used. This database is stored in ${WORKDIR}/pseudo/files.db for individual recipes. Storing the database in a file as opposed to in memory gives persistence between tasks, and even between builds. Caution If you add your own task that manipulates the same files or directories as a fakeroot task, then that task should also run under fakeroot. Otherwise, the task will not be able to run root-only operations, and will not see the fake file ownership and permissions set by the other task. You should also add a dependency on virtual/fakeroot-native:do_populate_sysroot, giving the following: fakeroot do_mytask () { ... } do_mytask[depends] += "virtual/fakeroot-native:do_populate_sysroot" For more information, see the FAKEROOT* variables in the BitBake User Manual. You can also reference this Pseudo article.
x32 x32 is a processor-specific Application Binary Interface (psABI) for x86_64. An ABI defines the calling conventions between functions in a processing environment. The interface determines what registers are used and what the sizes are for various C data types. Some processing environments prefer using 32-bit applications even when running on Intel 64-bit platforms. Consider the i386 psABI, which is a very old 32-bit ABI for Intel 64-bit platforms. The i386 psABI does not provide efficient use and access of the Intel 64-bit processor resources, leaving the system underutilized. Now consider the x86_64 psABI. This ABI is newer and uses 64 bits for data sizes and program pointers. The extra bits increase the footprint of the programs and libraries, and also increase the memory and file system size requirements. Executing under the x32 psABI enables user programs to utilize CPU and system resources more efficiently while keeping the memory footprint of the applications low. Extra bits are used for registers but not for addressing mechanisms.
Support This Yocto Project release supports the final specifications of the x32 psABI. Support for the x32 psABI exists as follows: You can create packages and images in x32 psABI format on x86_64 architecture targets. You can successfully build many recipes with the x32 toolchain. You can create and boot core-image-minimal and core-image-sato images.
Completing x32 Future plans for the x32 psABI in the Yocto Project include the following: Enhance and fix the few remaining recipes so they work with and support x32 toolchains. Enhance RPM Package Manager (RPM) support for x32 binaries. Support larger images.
Using x32 Right Now Follow these steps to use the x32 psABI: Enable the x32 psABI tuning file for x86_64 machines by editing your conf/local.conf file like this: MACHINE = "qemux86-64" DEFAULTTUNE = "x86-64-x32" baselib = "${@d.getVar('BASE_LIB_tune-' + (d.getVar('DEFAULTTUNE', True) \ or 'INVALID'), True) or 'lib'}" #MACHINE = "genericx86" #DEFAULTTUNE = "core2-64-x32" As usual, use BitBake to build an image that supports the x32 psABI. Here is an example: $ bitbake core-image-sato As usual, run your image using QEMU: $ runqemu qemux86-64 core-image-sato
Wayland Wayland is a computer display server protocol that provides a method for compositing window managers to communicate directly with applications and video hardware and expects them to communicate with input hardware using other libraries. Using Wayland with supporting targets can result in better control over graphics frame rendering than an application might otherwise achieve. The Yocto Project provides the Wayland protocol libraries and the reference Weston compositor as part of its release. This section describes what you need to do to implement Wayland and use the compositor when building an image for a supporting target.
Support The Wayland protocol libraries and the reference Weston compositor ship as integrated packages in the meta layer of the Source Directory. Specifically, you can find the recipes that build both Wayland and Weston at meta/recipes-graphics/wayland. You can build both the Wayland and Weston packages for use only with targets that accept the Mesa 3D and Direct Rendering Infrastructure, which is also known as Mesa DRI. This implies that you cannot build and use the packages if your target uses, for example, the Intel Embedded Media and Graphics Driver (Intel EMGD) that overrides Mesa DRI. Due to lack of EGL support, Weston 1.0.3 will not run directly on the emulated QEMU hardware. However, this version of Weston will run under X emulation without issues.
Enabling Wayland in an Image To enable Wayland, you need to enable it to be built and enable it to be included in the image.
Building To cause Mesa to build the wayland-egl platform and Weston to build Wayland with Kernel Mode Setting (KMS) support, include the "wayland" flag in the DISTRO_FEATURES statement in your local.conf file: DISTRO_FEATURES_append = " wayland" If X11 has been enabled elsewhere, Weston will build Wayland with X11 support.
Installing To install the Wayland feature into an image, you must include the following CORE_IMAGE_EXTRA_INSTALL statement in your local.conf file: CORE_IMAGE_EXTRA_INSTALL += "wayland weston"
Running Weston To run Weston inside X11, enabling it as described earlier and building a Sato image is sufficient. If you are running your image under Sato, a Weston Launcher appears in the "Utility" category. Alternatively, you can run Weston through the command-line interpreter (CLI), which is better suited for development work. To run Weston under the CLI, you need to do the following after your image is built: Run these commands to export XDG_RUNTIME_DIR: mkdir -p /tmp/$USER-weston chmod 0700 /tmp/$USER-weston export XDG_RUNTIME_DIR=/tmp/$USER-weston Launch Weston in the shell: weston
Licenses This section describes the mechanism by which the OpenEmbedded build system tracks changes to licensing text. The section also describes how to enable commercially licensed recipes, which by default are disabled. For information that can help you maintain compliance with various open source licensing during the lifecycle of the product, see the "Maintaining Open Source License Compliance During Your Project's Lifecycle" section in the Yocto Project Development Manual.
Tracking License Changes The license of an upstream project might change in the future. In order to prevent these changes from going unnoticed, the LIC_FILES_CHKSUM variable tracks changes to the license text. The checksums are validated at the end of the configure step, and if the checksums do not match, the build will fail.
Specifying the LIC_FILES_CHKSUM Variable The LIC_FILES_CHKSUM variable contains checksums of the license text in the source code for the recipe. Following is an example of how to specify LIC_FILES_CHKSUM: LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \ file://licfile1.txt;beginline=5;endline=29;md5=yyyy \ file://licfile2.txt;endline=50;md5=zzzz \ ..." Notes When using "beginline" and "endline", realize that line numbering begins with one and not zero. Also, the included lines are inclusive (i.e. lines five through and including 29 in the previous example for licfile1.txt). When a license check fails, the selected license text is included as part of the QA message. Using this output, you can determine the exact start and finish for the needed license text. The build system uses the S variable as the default directory when searching for files listed in LIC_FILES_CHKSUM. The previous example employs the default directory. Consider this next example: LIC_FILES_CHKSUM = "file://src/ls.c;beginline=5;endline=16;\ md5=bb14ed3c4cda583abc85401304b5cd4e" LIC_FILES_CHKSUM = "file://${WORKDIR}/license.html;md5=5c94767cedb5d6987c902ac850ded2c6" The first line locates a file in ${S}/src/ls.c and isolates lines five through 16 as license text. The second line refers to a file in WORKDIR. Note that the LIC_FILES_CHKSUM variable is mandatory for all recipes, unless the LICENSE variable is set to "CLOSED".
Explanation of Syntax As mentioned in the previous section, the LIC_FILES_CHKSUM variable lists all the important files that contain the license text for the source code. It is possible to specify a checksum for an entire file, or a specific section of a file (specified by beginning and ending line numbers with the "beginline" and "endline" parameters, respectively). The latter is useful for source files with a license notice header, README documents, and so forth. If you do not use the "beginline" parameter, then it is assumed that the text begins on the first line of the file. Similarly, if you do not use the "endline" parameter, it is assumed that the license text ends with the last line of the file. The "md5" parameter stores the md5 checksum of the license text. If the license text changes in any way as compared to this parameter, then a mismatch occurs. This mismatch triggers a build failure and notifies the developer. Notification allows the developer to review and address the license text changes. Also note that if a mismatch occurs during the build, the correct md5 checksum is placed in the build log and can be easily copied to the recipe. There is no limit to how many files you can specify using the LIC_FILES_CHKSUM variable. Generally, however, every project requires a few specifications for license tracking. Many projects have a "COPYING" file that stores the license information for all the source code files. This practice allows you to just track the "COPYING" file as long as it is kept up to date. If you specify an empty or invalid "md5" parameter, BitBake returns an md5 mismatch error and displays the correct "md5" parameter value during the build. The correct parameter is also captured in the build log. If the whole file contains only license text, you do not need to use the "beginline" and "endline" parameters.
Enabling Commercially Licensed Recipes By default, the OpenEmbedded build system disables components that have commercial or other special licensing requirements. Such requirements are defined on a recipe-by-recipe basis through the LICENSE_FLAGS variable definition in the affected recipe. For instance, the poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly recipe contains the following statement: LICENSE_FLAGS = "commercial" Here is a slightly more complicated example that contains both an explicit recipe name and version (after variable expansion): LICENSE_FLAGS = "license_${PN}_${PV}" In order for a component restricted by a LICENSE_FLAGS definition to be enabled and included in an image, it needs to have a matching entry in the global LICENSE_FLAGS_WHITELIST variable, which is a variable typically defined in your local.conf file. For example, to enable the poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly package, you could add either the string "commercial_gst-plugins-ugly" or the more general string "commercial" to LICENSE_FLAGS_WHITELIST. See the "License Flag Matching" section for a full explanation of how LICENSE_FLAGS matching works. Here is the example: LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly" Likewise, to additionally enable the package built from the recipe containing LICENSE_FLAGS = "license_${PN}_${PV}", and assuming that the actual recipe name was emgd_1.10.bb, the following string would enable that package as well as the original gst-plugins-ugly package: LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly license_emgd_1.10" As a convenience, you do not need to specify the complete license string in the whitelist for every package. You can use an abbreviated form, which consists of just the first portion or portions of the license string before the initial underscore character or characters. A partial string will match any license that contains the given string as the first portion of its license. For example, the following whitelist string will also match both of the packages previously mentioned as well as any other packages that have licenses starting with "commercial" or "license". LICENSE_FLAGS_WHITELIST = "commercial license"
License Flag Matching License flag matching allows you to control what recipes the OpenEmbedded build system includes in the build. Fundamentally, the build system attempts to match LICENSE_FLAGS strings found in recipes against LICENSE_FLAGS_WHITELIST strings found in the whitelist. A match causes the build system to include a recipe in the build, while failure to find a match causes the build system to exclude a recipe. In general, license flag matching is simple. However, understanding some concepts will help you correctly and effectively use matching. Before a flag defined by a particular recipe is tested against the contents of the whitelist, the expanded string _${PN} is appended to the flag. This expansion makes each LICENSE_FLAGS value recipe-specific. After expansion, the string is then matched against the whitelist. Thus, specifying LICENSE_FLAGS = "commercial" in recipe "foo", for example, results in the string "commercial_foo". And, to create a match, that string must appear in the whitelist. Judicious use of the LICENSE_FLAGS strings and the contents of the LICENSE_FLAGS_WHITELIST variable allows you a lot of flexibility for including or excluding recipes based on licensing. For example, you can broaden the matching capabilities by using license flags string subsets in the whitelist. When using a string subset, be sure to use the part of the expanded string that precedes the appended underscore character (e.g. usethispart_1.3, usethispart_1.4, and so forth). For example, simply specifying the string "commercial" in the whitelist matches any expanded LICENSE_FLAGS definition that starts with the string "commercial" such as "commercial_foo" and "commercial_bar", which are the strings the build system automatically generates for hypothetical recipes named "foo" and "bar" assuming those recipes simply specify the following: LICENSE_FLAGS = "commercial" Thus, you can choose to exhaustively enumerate each license flag in the whitelist and allow only specific recipes into the image, or you can use a string subset that causes a broader range of matches to allow a range of recipes into the image. This scheme works even if the LICENSE_FLAGS string already has _${PN} appended. For example, the build system turns the license flag "commercial_1.2_foo" into "commercial_1.2_foo_foo" and would match both the general "commercial" and the specific "commercial_1.2_foo" strings found in the whitelist, as expected. Here are some other scenarios: You can specify a versioned string in the recipe such as "commercial_foo_1.2" in a "foo" recipe. The build system expands this string to "commercial_foo_1.2_foo". Combine this license flag with a whitelist that has the string "commercial" and you match the flag along with any other flag that starts with the string "commercial". Under the same circumstances, you can use "commercial_foo" in the whitelist and the build system not only matches "commercial_foo_1.2" but also matches any license flag with the string "commercial_foo", regardless of the version. You can be very specific and use both the package and version parts in the whitelist (e.g. "commercial_foo_1.2") to specifically match a versioned recipe.