Trusted Services Documentation
The Trusted Services project provides a framework for developing and deploying device root-of-trust services for A-profile devices. Alternative secure processing environments are supported to accommodate the diverse range of isolation technologies available to system integrators.
- Introduction
- About the Project
- Quick Start Guides
- Services
- Secure Processing Environments
- Deployments
- Developer Documents
- Platform Certification
Introduction
The term ‘trusted service’ is used as a general name for a class of application that runs in an isolated processing environment. Other applications rely on trusted services to perform security related operations in a way that avoids exposing secret data beyond the isolation boundary of the environment. The word ‘trusted’ does not imply anything inherently trustworthy about a service application but rather that other applications put trust in the service. Meeting those trust obligations relies on a range of hardware and firmware implemented security measures.
The Arm Application-profile (A-profile) architecture, in combination with standard firmware, provides a range of isolated processing environments that offer hardware-backed protection against various classes of attack. Because of their strong security properties, these environments are suitable for running applications that have access to valuable assets such as keys or sensitive user data. The goal of the Trusted Services project is to provide a framework in which security related services may be developed, tested and easily deployed to run in any of the supported environments. A core set of trusted services are implemented to provide basic device security functions such as cryptography and secure storage.
Example isolated processing environments are:
Secure partitions - secure world isolated environments managed by a secure partition manager
Trusted applications - application environments managed by a TEE
VM backed container - container runtime that uses a hypervisor to provide hardware backed container isolation
The default reference system, used for test and development, uses the Secure Partition Manager configuration of OP-TEE to manage a set of secure partitions running at S-EL0. The secure partitions host service providers that implement PSA root-of-trust services. Services may be accessed using client-side C bindings that expose PSA Functional APIs. UEFI SMM services are provided by the SMM Gateway.
For more background on the type of problems solved by trusted services and how the project aims to make solutions more accessible, see:
Solving Common Security Problems
The following are examples of how trusted services can solve common device security problems.
Protecting IoT device identity
During the provisioning process, an IoT device is assigned a secure identity that consists of a public/private key pair and a CA signed certificate that includes the public key. The device is also provisioned with the public key corresponding to the cloud service that it will operate with. The provisioned material is used during the authentication process whenever the device connects to the cloud. To prevent the possibility of device cloning or unauthorized transfer to a different cloud service, all provisioned material must be held in secure storage and access to the private key must be prevented. To achieve this, the certificate verification and nonce signing required during the TLS handshake are delegated to the Crypto trusted service, which performs the operations without exposing the private key.
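As an illustration, the following minimal sketch shows how a TLS stack might delegate the handshake signature through the PSA Crypto API. The key identifier DEVICE_ID_KEY is a hypothetical handle assigned at provisioning time; the private key material itself never leaves the Crypto service.

#include <psa/crypto.h>

/* Hypothetical handle for the provisioned device identity key. */
#define DEVICE_ID_KEY ((psa_key_id_t)0x00000001)

psa_status_t sign_handshake_hash(const uint8_t *hash, size_t hash_len,
                                 uint8_t *sig, size_t sig_size,
                                 size_t *sig_len)
{
    psa_status_t status = psa_crypto_init();
    if (status != PSA_SUCCESS)
        return status;

    /* The signing operation runs inside the secure processing
     * environment; only the resulting signature crosses the
     * isolation boundary. */
    return psa_sign_hash(DEVICE_ID_KEY, PSA_ALG_ECDSA(PSA_ALG_SHA_256),
                         hash, hash_len, sig, sig_size, sig_len);
}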
Protecting Software Updates
To ensure that software updates applied to a device originate from a legitimate source, update packages are signed. A signed package will include a signature block that includes a hash of the package contents within the signed data. During the update process, a device will verify the signature using a provisioned public key that corresponds to the signing key used by the update originator. By holding the public key in secure storage and performing the signature verification using the Crypto service, unauthorized modification of the update source is prevented.
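A hedged sketch of the verification step using the PSA Crypto API follows; UPDATE_AUTH_KEY is a hypothetical handle for the provisioned public key and the choice of RSA-PSS is illustrative only.

#include <psa/crypto.h>
#include <stdbool.h>

/* Hypothetical handle for the update originator's public key. */
#define UPDATE_AUTH_KEY ((psa_key_id_t)0x00000002)

/* Returns true if the signature block matches the package hash.
 * The verification key is held in secure storage, so it cannot be
 * replaced by an attacker in the normal world. */
bool update_signature_is_valid(const uint8_t *hash, size_t hash_len,
                               const uint8_t *sig, size_t sig_len)
{
    if (psa_crypto_init() != PSA_SUCCESS)
        return false;

    return psa_verify_hash(UPDATE_AUTH_KEY,
                           PSA_ALG_RSA_PSS(PSA_ALG_SHA_256),
                           hash, hash_len, sig, sig_len) == PSA_SUCCESS;
}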
Secure Logging
A managed IoT device will often be configured by an installation engineer who has physical access to the device. To allow a cloud operator to audit configuration changes, it is necessary to keep a log of configuration steps performed by the installation engineer. To avoid the possibility of fraudulent modification of the audit log, a device signs log data using a device unique key-pair. The public key corresponding to the signing private key may be retrieved by the cloud operator to allow the log to be verified. To protect the signing key, the Crypto service is used for signing log records.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Project Goals
The trusted services project aims to make it easy to write new trusted services that can be deployed in different secure processing environments without having to rewrite lots of code. The aim is to make component reuse as painless as possible by keeping software components free of unnecessary dependencies on anything environment or platform specific.
The project structure has been designed to help meet the following goals:
Support multiple deployments - allow for different deployments where common service code can be built to run in different environments.
Support multiple processing environments - allow support for new processing environments to be easily added.
Support multiple hardware platforms - provide a portability model for different hardware.
Avoid the need for duplicated code - by encouraging code sharing, code duplication can be minimized.
Avoid cross-talk between builds - allow images for different deployments to be built independently without any nasty cross dependencies.
Support and promote automated testing - writing and running test cases that operate on individual components, component assemblies or complete service deployments should be easy and accessible.
Support component sharing with deployment specific configuration - where necessary, a deployment specific build configuration may need to be applied to a shared component.
Control which versions of external components are used - where external components such as libraries are used, it should be possible to peg to a specific version.
Enhancing Security through Reuse and Testing
Reuse of common framework and service components across multiple deployments will help to shake out bugs that may present security vulnerabilities. Repeated reuse of a piece of software in different contexts and by different people can help harden the code through progressive improvements and bug fixes. Reuse of a common framework also creates opportunities for standard solutions to security problems such as service discovery, client identification, authentication and access control.
The confidence to reuse components needs to be underpinned by testing. A project structure that makes it easy to add tests, run tests and live with an increasing suite of test cases is fundamentally important in meeting security goals. Although trusted service code will be deployed in execution environments where test and debug can be somewhat awkward, a large amount of the code can be tested effectively in a native PC environment. Where code can be tested on a PC, it should be. It should be easy for anyone to build and run tests to give confidence that a component passes all tests before changes are made and that code changes haven’t broken anything.
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
The Trusted Services project includes components that may be integrated into platform firmware to enable A-profile platforms to meet PSA Certified security requirements. For more information, see: Platform Certification.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
About the Project
Change Log & Release Notes
This document contains a summary of the new features, changes, fixes and known issues in each release of Trusted Services.
Version 1.0.0-Beta
The first tagged release of the project.
Feature Highlights
The project supports the following services:
Secure Storage
Crypto
Initial Attestation
SMM Variable
Services may be accessed using client components that implement the PSA Certified v1.0 APIs. The project includes deployments that integrate PSA API certification tests with API clients to facilitate end-to-end PSA certification testing.
Known limitations
Crypto key store partitioning by client is not yet supported.
Discovery support is currently only integrated into the Crypto service provider. For services that do not support this feature yet, communication parameters (e.g. maximum buffer size) and the supported feature set need to be hard-coded into the service provider and service client.
Supported Trusted Environments
In the default configuration, each service is deployed to a dedicated FF-A Secure Partition and executes in isolation. Service implementations are platform, trusted environment and service deployment agnostic. With appropriate enablement work, services can be made to work in any combination of these.
The reference integration uses the SPMC implemented in OP-TEE OS to manage TS SPs. This release supports OP-TEE v3.19.
Supported Integration Systems
The reference solution uses the OP-TEE integration methodology. This relies on the Google repo tool for high-level dependency management and a set of makefiles to capture the build configuration information. For details, please refer to the OP-TEE git repo documentation.
The project is officially enabled in Yocto meta-arm.
Supported Target Platforms
The only reference platform supported by this release is the AEM FVP, built using the OP-TEE integration method.
Known limitations:
Non-volatile backend secure storage is not currently provided.
Test Report
Please find the Test Report covering this release in the tf.org wiki.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Coding Style & Guidelines
The following sections contain TS coding guidelines for different types of file. They are continually evolving and should not be considered “set in stone”. Feel free to question them and provide feedback.
To help configure text editors, the project comes with “EditorConfig” file(s) (see ../../.editorconfig).
Common Rules
The following rules are common for all types of text file, except where noted otherwise:
Files shall be UTF-8 encoded.
Use Unix style line endings (LF character).
The primary language of the project is English. All comments and documentation must be in this language.
Trailing whitespace is not welcome; please trim it.
C Rules
C source code rules are based on the Linux Coding Style (see: LCS). The following deviations apply:
TS follows the ISO/IEC 9899:1999 standard with ACLE version Q3 2020 extensions.
Line length shall not exceed 100 characters.
Use snake_case for function, variable and file names.
Each file shall be “self-contained”, i.e. it shall include the header files declaring its external dependencies. No file shall depend on headers included by other files.
Include ordering: include project specific headers first, then system includes. Order the files alphabetically within these two groups (see the example after this list).
All variables must be initialized.
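For instance, a source file following the include ordering rule might start as shown below (the header names are illustrative only):

/* Project specific headers first, in alphabetical order. */
#include "rpc/common/endpoint/rpc_interface.h"
#include "service/common/service_provider.h"

/* System includes second, in alphabetical order. */
#include <stddef.h>
#include <stdint.h>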
C source files should include a copyright and license comment block at the head of each file. Here is an example:
/*
* Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
Boring stuff is not for smart people: the project uses the Uncrustify code beautifier to ease formatting the source (see ../../.uncrustify.cfg).
CMake Rules
CMake files (e.g. CMakeLists.txt and .cmake) should conform to the following rules:
CMake file names use CamelCase style.
Indent with tabs; otherwise use spaces. Use a tab size of 4 characters.
Use LF as line end in CMake files.
Remove trailing whitespace.
Maximum line length is 128 characters.
When complicated functionality is needed, prefer CMake scripting over other languages.
Prefix local variables with _.
Use functions to prevent global name-space pollution.
Use snake_case for function and variable names.
Use the include_guard() CMake function when creating new modules to prevent multiple inclusion.
Use self-contained modules, i.e. include the direct dependencies of the module.
Use the Sphinx CMake domain for in-line documentation of CMake scripts. For details please refer to the CMake Documentation.
Each file should include a copyright and license comment block at its head. Here is an example:
#-------------------------------------------------------------------------------
# Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause
#
#-------------------------------------------------------------------------------
Restructured Text Rules
Please refer to Writing Documentation.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Contributing
Reporting Security Issues
Please follow the directions of the Trusted Firmware Security Center.
Getting Started
Make sure you have a GitHub account and you are logged in on developer.trustedfirmware.org.
Send an email to the TS Mailing List about your work. This gives everyone visibility of whether others are working on something similar.
Clone the TS repository on your own machine.
Making Changes
Make commits of logical units. See these general Git guidelines for contributing to a project.
Follow the Coding Style & Guidelines.
Keep the commits on topic. If you need to fix another bug or make another enhancement, please create a separate change.
Avoid long commit series. If you do have a long series, consider whether some commits should be squashed together or addressed in a separate topic.
Make sure your commit messages are in the proper format. Please keep to the 50/72 rule (for details see Tim Pope's blog entry).
Where appropriate, please update the documentation.
Consider which documents or other in-source documentation needs updating.
Ensure that each changed file has the correct copyright and license information. Files that entirely consist of contributions to this project should have a copyright notice and BSD-3-Clause SPDX license identifier in the form shown in License. Example copyright and license comment blocks are shown in Coding Style & Guidelines. Files that contain changes to imported Third Party IP files should retain their original copyright and license notices. For significant contributions you may add your own copyright notice in the following format:
Portions copyright (c) [XXXX-]YYYY, <OWNER>. All rights reserved.
where XXXX is the year of first contribution (if different to YYYY) and YYYY is the year of most recent contribution. <OWNER> is your name or your company name.
For any change, ensure that YYYY is updated if a contribution is made in a year more recent than the previous YYYY.
If you are submitting new files that you intend to be the technical sub-maintainer for (for example, a new platform port), then also update the Maintainers file.
For topics with multiple commits, you should make all documentation changes (and nothing else) in the last commit of the series. Otherwise, include the documentation changes within the single commit.
Please test your changes.
Submitting Changes
Ensure that each commit in the series has at least one Signed-off-by: line, using your real name and email address. The names in the Signed-off-by: and Author: lines must match. If anyone else contributes to the commit, they must also add their own Signed-off-by: line. By adding this line the contributor certifies the contribution is made under the terms of the Developer Certificate of Origin. More details may be found in the Gerrit Signed-off-by Lines guidelines.
Ensure that each commit also has a unique Change-Id: line. If you have cloned the repository with the “Clone with commit-msg hook” clone method, this should already be the case. More details may be found in the Gerrit Change-Ids documentation.
Submit your changes for review at https://review.trustedfirmware.org, targeting the integration branch. The changes will then undergo further review and testing by the Maintainers. Any review comments will be made directly on your patch. This may require you to do some rework. Refer to the Gerrit Uploading Changes documentation for more details.
When the changes are accepted, the Maintainers will integrate them.
Typically, the Maintainers will merge the changes into the integration branch. If the changes are not based on a sufficiently recent commit, or if they cannot be automatically rebased, the Maintainers may rebase them on the main branch or ask you to do so. After final integration testing, the changes will make their way into the main branch. If a problem is found during integration, the merge commit will be removed from the integration branch and the Maintainers will ask you to create a new patch set to resolve the problem.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Glossary
This glossary provides definitions for terms and abbreviations used in the Trusted Services documentation.
You can find additional definitions in the Arm Glossary.
- ACLE
Arm C Language Extensions.
- C identifier like string
A name which uses only alphanumeric characters and underscores, and whose first character is not a digit.
- FF-A
Arm Firmware Framework for A-profile.
- LCS
Linux Coding Style.
- Logical SP
A Secure Partition which executes a software image in isolation, but without physical address space isolation.
- Physical SP
A Secure Partition which executes a software image in an isolated physical address space.
- PSA
Platform Security Architecture.
- Secure Enclave
An isolated hardware subsystem focusing on security related operations. The subsystem may include hardware peripherals and one or more processing elements. As an example see the Arm SSE-700 subsystem.
- Secure Partition
A Secure Partition is a compartment in which a software image executes, isolated from other images. The isolation is logical or physical, depending on whether physical address range isolation is involved. See Physical SP and Logical SP.
An SP may host one or more services.
- Secure Partition Manager
A component responsible for creating and managing the physical isolation boundary of an SP in the SWd. It is built from two sub-components: the Secure Partition Manager Dispatcher and the Secure Partition Manager Core.
- Secure Partition Manager Core
A component responsible for SP initialization and isolation at boot time, and for inter-partition isolation and inter-partition communication at run time.
- Secure Partition Manager Dispatcher
The SPM component responsible for SPMC initialization at boot time, and for forwarding FF-A calls at run time between SPs, and between SPs and the SPMC.
- Secure Processing Environment
An isolated environment for executing software images, backed by a specific set of hardware and Arm architecture features. The aim of isolation is to protect sensitive workloads and their assets.
- SP
See Secure Partition.
- SPE
See Secure Processing Environment.
- SPM
See Secure Partition Manager.
- TEE
Trusted Execution Environment. An SPE implemented using TrustZone.
- TF-A
Trusted Firmware-A
- TrustZone
Hardware assisted isolation technology built into Arm CPUs. See TrustZone for Cortex-A.
- TS
Trusted Services
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
License
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of Arm nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Note: Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: BSD-3-Clause
This enables machine processing of license information based on the SPDX License Identifiers, which are available here: http://spdx.org/licenses/
Maintainers
TS is a trustedfirmware.org maintained project. All contributions are ultimately merged by the maintainers listed below. Technical ownership of some parts of the code-base is delegated to the code owners listed below. An acknowledgment from these code owners may be required before the maintainers merge a contribution.
More details may be found in the Project Maintenance Process document.
This section follows the format of the Linux Maintainers file. For details on the meaning of the tags below, please refer to the Linux Maintainers file.
Main maintainers
- M: Dan Handley <dan.handley@arm.com>
- M: Miklós Bálint <miklos.balint@arm.com>
- M: György Szing <gyorgy.szing@arm.com>
Code owners
- M: Julian Hall <julian.hall@arm.com>
- M: Bálint Dobszay <balint.dobszai@arm.com>
- M: Imre Kis <imre.kis@arm.com>
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Versioning policy
This document captures information about the version identifier used by the project. It explains the meaning of each part, where the version information is captured, and how it is managed.
Format of version IDs
The version identifier identifies the feature set supported by a specific release and captures compatibility information relative to other releases.
This project uses “Semantic Versioning”, for details please refer to Semantic Versioning.
The version number is constructed from three numbers and an optional pre-release identifier. The MAJOR number is changed when incompatible API changes are introduced, the MINOR version when new functionality is added in a backward compatible manner, and the PATCH version when backward compatible bug fixes are added. The pre-release identifier is appended after the numbers, separated by a -, and can be the string alpha or beta.
Each release gets a unique version ID. When a release is made, the version number is incremented in accordance with the compatibility rules mentioned above.
Version ID hierarchy
The project hosts multiple components which can be used separately and thus need compatibility information expressed independently. Such components get a dedicated version ID. Examples are libsp and libts.
Components are never released standalone, but only as part of a TS release. In that sense, a set of independent component version IDs is assigned to each TS release ID.
Storage and format
The version number of each release is stored in two locations:
In a tag of the version control system, in the form “vX.Y.Z”, where X, Y and Z are the major, minor and patch version numbers.
In a file called version.txt. This file uses ASCII encoding and contains the version number as “X.Y.Z”, where X, Y and Z are the major, minor and patch version numbers.
Note
The version ID is independent of the version identifiers of the versioning system used to store the TS source (i.e. git).
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Version Control
Version control is about tracking and managing changes to text files including source-code, scripts, configuration files and documentation.
The project uses the Git version control system and Gerrit as the code review and access restriction enforcement solution. Although git is a distributed version control system, the project uses a “centralized” approach. The main repositories can be found at https://trustedfirmware.org, where two web UIs are available:
a cgit instance for browsing and cloning the repositories
and a Gerrit instance for contribution and code review purposes.
Currently the project has a single repository hosting the source-code: https://git.trustedfirmware.org/TS/trusted-services.git/
Branching Strategy
The branching strategy is built around a “quality” based flow of changes.
Each change starts by targeting an “integration” branch, either as a standalone change or as a topic. Changes landing on integration are expected to be “independent” (building properly and working without depending on other changes). Validation efforts for a change may have a limited focus based on its expected effects. This allows the validation costs paid during review to be balanced.
All changes landing on the integration branch go through full validation. When a change passes all quality checks, it can be merged to the “main” branch. All changes on the main branch are expected to fulfill all quality requirements and to pass full validation.
The default name of the “integration” branch is integration and the default name of the “main” branch is main.
For special purposes (e.g. long term support, hosting a special version, etc.) other branches acting as “integration” and “main” can be defined.
Sandbox branches
For prototyping purposes the project allows the use of “sandbox” branches. Changes on these branches are free to lower quality expectations as needed. Sandbox branches are to be created under the sandbox/<username>/ namespace (e.g. sandbox/gyoszi01/static-init-prototype).
Topic branches
For large changes, or changes expected to have a longer development time, “topic” branches can be used. Topic branches are to be created under the topics/<username>/<some name> namespace. If multiple developers are co-developing a feature, <username> is expected to be the lead developer.
Review vs quality
As discussed above, all commits on the “integration” branch must build properly and work independently of other changes. This may result in large commits, which would make code reviews difficult. To help the code review, large changes should be split into small steps, each implementing a single logical step needed for the full change. To resolve the conflict between the quality expectation (requiring large commits) and review (requiring small commits), topic branches shall be used. Large changes are to be split into small steps and target a topic branch first. This way reviewers can check small changes, and only the tip of the topic branch needs to pass build and runtime tests.
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Quality Assurance
This section covers the quality definition of the project and the efforts the project is making to ensure that the quality level of the products meets expectations.
The primary products of this project are the Deployments building Secure Partition Images and Libraries. There are secondary products like:
build scripts
test automation scripts
documentation
various processes
etc…
Quality Assurance of secondary products happens on a “best effort” basis. The project will try to keep these healthy, but the quality definition of these may aim lower or may even be lacking.
Verification Strategy
This page describes verification from a high level concept perspective.
In this context, source code has a wider scope and may mean any text content produced by humans and processed by other humans or tools. Examples: C/C++ source code, reST documents, build scripts, etc.
Clean Code
Clean code aims to counter the issues discussed in the following sub-chapters.
Code Readability
Expressing ideas in a machine readable format is complicated, and each developer may have a different taste or different preferences on how source code should be formatted. Some people may find a specific kind of formatting easier to understand than others. If the source code does not follow a consistent look and feel, human processing of the text may become error prone. This undermines the effectiveness of co-operation and code review, and may lead to incorrect code. The project defines coding style rules to counteract these problems. For details, please refer to Coding Style & Guidelines.
Undefined and Implementation Defined Behavior
The “standard” defining how a specific type of source code is processed may leave some behavior to be defined by the tool, or may allow the tool to behave in an undefined way. Coding constructs relying on such behavior are to be avoided, or used only in a well defined way. This adds robustness and helps avoid errors caused by using different versions of the same tool, or different implementations.
The project defines coding guidelines to counteract these problems. For details, please refer to Coding Style & Guidelines.
Security
Security is a complex topic affecting all steps of the development process. Incorrect code may lead to security issues and thus “Clean Code” has a vital role in implementing secure software.
Runtime Testing
Runtime testing focuses on verifying the behavior of one or more build products built from source code. This can be done at multiple levels and in multiple execution environments.
Unit Test
Unit tests aim to verify that the internal operation of a module matches the developer's expectations. They help cover all code execution paths and give confidence of correct operation when code needs to be refactored. Unit tests also serve as a kind of documentation capturing the expected usage of the code.
Unit testing always happens on the “host PC”.
Component Test
Component tests aim to verify that the API (and ABI) of a component matches expectations. Components are tested in isolation, where the exported APIs are exercised by test code and the APIs the component depends on are implemented by test doubles.
System Test
System tests verify the correct operation of a set of modules configured to fulfill the requirements of a use-case. For TS this usually means testing an end-to-end setup on a specific target platform.
Balancing Costs vs Quality
Executing build products on target platforms may have high costs in terms of time, complexity and availability, but in turn it gives the highest confidence in verification results, i.e. the best quality. In the development phase it may be desirable to trade some of this confidence for lower costs. For this purpose the project defines two main test set-up types based on the chosen balance between cost and quality.
In this environment, tests are executed on a target platform. Emulators (e.g. QEMU, FVP) are, in this respect, treated like targets implemented in silicon.
In this environment, test executables are compiled to execute as a “standard” user-space application running on the machine “hosting” the development activity. In most cases these machines are based on a different architecture than the ones the project is targeting (e.g. x86-64 vs aarch64). This means the environment relies on the assumption that the code is portable and behaves architecture and compiler independently. This puts limitations on the features which can be tested and lowers the confidence level of the test output. In turn, executing tests in this environment is simple and gives very good scalability options.
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Verification methodology
This page discusses verification tools and techniques used by the project.
Static Checks
This verification step checks quality by examining the source code. The project currently uses two tools which are discussed in the chapters below.
Checkpatch
Checkpatch is a tool developed and maintained by the Linux Kernel community. It can look for errors related to:
C and C++ coding style
spelling mistakes
git commit message formatting
Please find the configuration of this tool in the TS repository.
Cppcheck tool
CppCheck is a C/C++ static analysis tool. It can detect code depending on implementation defined behavior and dangerous coding constructs, and thus it helps enforce the coding guidelines.
Please find the configuration of this tool in the TS repository.
Build verification
The Build test runner captures reference build configurations for all TS build products and can be used to verify these.
Runtime verification
During the runtime verification step, various test and demo executables are executed on the host PC and/or on target platforms.
Tests target three different environment types:
arm-linux: test executables to be run from Linux user-space on the target.
pc-linux: executables to run on the host PC. These tests have a lower verification level, as the binary is likely not running on an Arm target. Portability issues in the source may hide errors or trigger false alarms. In turn, this type of test is cheap.
sp and opteesp: test executables targeting these environments run in the SWd and serve as:
test payloads to help exercise trusted services
test payloads to help platform porting
Each of these test applications manifests as a “deployment” in trusted services. For more details please see the Deployments section.
Compliance testing
The project hosts deployments that help with compliance testing. For more information please refer to Platform Certification.
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Verification Plan
This document describes when and by whom verification steps are to be executed. Since this is an open-source project maintained by an open community, each contributor is expected to participate.
Verification during development
When changing existing code, or adding new code, the developer is expected to:
run static checks to guard “clean code”.
execute runtime tests on the host machine to ensure features not changed are behaving as before. Verification efforts targeting regression may be limited based on the expected effects of the change.
extend unit and component tests to cover changes
Verification during code review
The code review covers all aspects of a change, including design and implementation. This includes running static checks and runtime tests. The reviewers are expected to check if tests are extended as needed.
Verification efforts of a review may be limited to lower costs, based on the expected effects of the change.
Guarding “main”
All commits of the integration branch shall be verified using the full verification set-up. This verification shall aim for achieving the highest quality level and shall not make compromises. A change becomes ready to get merged to “main” after passing the tests.
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Releases
A release is a well documented and identifiable “quality snapshot” of the products the project is developing. It helps adopters by providing reference points and by making the differences between them understandable.
Due to the Version Control policy implemented, each commit on the “main” branch has the same source code and runtime quality level as a release. A release, in addition, adds extra documentation of the changes in the form of the Change Log & Release Notes.
Cadence
There is no fixed release cadence defined yet.
Release procedure
The release procedure is captured as a schedule of tasks relative to the “Day of Release” (DR). [The original “Time”/“Task” schedule table is not reproduced here.]
Errors discovered during testing will break the release process. Fixes need to be made and merged as usual, and release testing restarted, applying a new _rc<x> tag, where <x> is a monotonic counter.
If fixing the encountered errors takes too long, the release is either aborted and postponed, or the defects are captured in the release notes under the “known issues” section.
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Quick Start Guides
The following quick start guides provide step-by-step instructions for performing common tasks when working with the Trusted Services project.
Build and run PC based tests
Many components within the Trusted Services project may be built and tested within a native PC environment. PC based testing is an important part of the development flow and provides a straightforward way to check for regressions and debug problems. PC based tests range from small unit tests up to end-to-end service tests. All test cases in the Trusted Services project are written for the CppUTest test framework.
The test executables most often used for PC based testing of Trusted Services components are:
component-test - a PC executable that runs many component level tests.
ts-service-test - contains a set of service-level end-to-end tests. For the PC build, service providers are included in the libts library.
psa-api-test - PSA functional API conformance tests (from external project).
Before you start
Before attempting to run any builds, ensure that all necessary tools are installed. See: Software Requirements
Build and run component-test
From the root directory of the checked-out TS project, enter the following:
cmake -B build-ct -S deployments/component-test/linux-pc
make -C build-ct install
build-ct/install/linux-pc/bin/component-test -v
Build and run ts-service-test
From the root directory of the checked-out TS project, enter the following:
cmake -B build-ts -S deployments/ts-service-test/linux-pc
make -C build-ts install
LD_PRELOAD=build-ts/install/linux-pc/lib/libts.so build-ts/install/linux-pc/bin/ts-service-test -v
Build and run psa-api-test
Tests for each API are built as separate executables. Tests are available for the following APIs:
crypto
initial_attestation
internal_trusted_storage
protected_storage
To build and run tests for the Crypto API, enter the following (use the same flow for other APIs):
cmake -B build-pa deployments/psa-api-test/crypto/linux-pc
make -C build-pa install
LD_PRELOAD=build-pa/install/linux-pc/lib/libts.so build-pa/install/linux-pc/bin/psa-crypto-api-test
More information
For more information about deployments and building, see: Build Instructions
PSA functional API conformance tests git location: https://github.com/ARM-software/psa-arch-tests.git
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Build and run tests on OP-TEE reference integration for FVP
The Linux based build maintained by the OP-TEE project is used as the reference integration for testing trusted service deployments on a simulated hardware platform. Service providers deployed within secure partitions are tested using test executables that run as user-space programs under Linux. Test cases interact with trusted service providers using standard service access protocols, carried by FF-A based messages.
The test executables most often used for service level testing on hardware platforms are:
ts-service-test - contains a set of service-level end-to-end tests. Discovers and communicates with service providers using libts.
psa-api-test - PSA functional API conformance tests (from external project). Also uses libts.
This method uses the makefiles from the op-tee/build repository.
Before you start
Before attempting to run tests on the FVP simulation, the OP-TEE reference integration needs to be built and run. Read the following guides to understand how to do this:
OP-TEE build and run instructions, see: Deploying trusted services in S-EL0 Secure Partitions under OP-TEE
Instructions for loading and running user-space programs on FVP: Running user-space programs on FVP
Build the Linux application binaries
From the root directory of the workspace, enter the following to build the test applications:
make -C build ffa-test-all
Run ts-service-test
From the root directory of the workspace, enter:
FVP_PATH=../Base_RevC_AEMvA_pkg/models/Linux64_GCC-9.3 make -C build run-only
Once it boots to the login prompt, log in as root and from the FVP terminal, enter:
cd /mnt/host
cp -vat /usr out/ts-install/arm-linux/lib out/ts-install/arm-linux/bin
out/linux-arm-ffa-tee/load_module.sh
out/linux-arm-ffa-user/load_module.sh
ts-service-test -v
Use the same flow for other user-space programs. Check the output of the executed cp command to see the executables copied under /usr/bin.
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Developer Documents
Architecture Overview
The Trusted Services project provides a framework for developing applications that can be built and deployed in different secure processing environments and on different hardware platforms. The structure and conventions adopted are designed to maximize opportunities for component reuse. The project adopts a portability model based on the ports and adapters architectural pattern, which promotes loose coupling between an application and its environment. The model allows applications to be deployed in a diverse range of environments from full featured trusted OSs, such as OP-TEE, to bare metal secure partitions.
For a more in-depth description of how the ports and adapters pattern is applied, see: Service Deployment Model
Service Model
Trusted services conform to a client/server model where service specific operations are invoked using an RPC mechanism. The realization of the RPC layer and any underlying messaging layer may vary between deployments, but the service layer should be identical for every deployment of a particular service. The following diagram illustrates the common layered model and where standardization of layer interfaces and protocols is aimed for.
The layered service model is reflected in the project source tree where software components are organized by layer and role. Because components that perform the same role are inter-changeable, there is much flexibility to meet the needs of different deployments. For example:
An instance of the secure storage service could be accessed by different types of client, each presenting different upper edge APIs to suit the needs of different applications. Some different secure storage clients could be:
A filesystem driver that presents a filesystem mount for user-space access to stored objects.
A client that presents the PSA Protected Storage API.
Different types of secure storage provider are possible, each accessed using a common protocol. Some different secure storage providers could be:
A secure storage provider that uses an external RPMB serial flash device for storage.
A secure storage provider that encrypts objects before passing them to a normal world agent to access file-backed storage.
Different RPC layers may be used to access services deployed in different secure processing environments.
Service Deployments
The ability to deploy trusted services over a range of secure processing environments allows a consistent view of services to be presented to clients, independent of the back-end implementation. For a particular service deployment, a concrete set of build-time and run-time dependencies and configurations must be defined. Representing each deployment in the project structure allows multiple deployments to be supported, each reusing a subset of shared components. The following diagram illustrates the dependencies and configurations that must be defined for a fully specified deployment.
Currently supported deployments are listed here: Deployments
Service Access Protocols
As mentioned in the section on layering, trusted services are accessed by clients via an RPC layer. Independent of the mechanics of the RPC layer, a service access protocol is defined by:
A supported set of operations, each qualified by an opcode.
A set of request and response message parameter definitions, one for each operation.
The main documentation page for service access protocols is here: Service Access Protocols.
The trusted service framework can accommodate the use of arbitrary serializations for message parameters. So far, message protocols using Google Protocol Buffers and packed C structures have been defined.
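To give a feel for the packed C approach, the following is an illustrative (not actual) request parameter structure; real definitions live under the protocols/service directories:

#include <stdint.h>

/* Illustrative packed-c request parameters for a hash signing
 * operation.  The structure layout is fixed and compiler
 * independent, so client and provider interpret it identically. */
struct __attribute__((__packed__)) ts_example_sign_hash_in {
    uint32_t key_id;  /* handle of the signing key */
    uint32_t alg;     /* PSA algorithm identifier */
    /* Variable length hash data follows the fixed size structure. */
};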
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Project Structure
This page describes the directory and repository structure for the trusted services project.
Top-Level Project Organization
The project is organized under the following top-level directories:
project
|-- docs
|-- deployments
|-- environments
|-- platforms
|-- components
|-- external
|-- protocols
|-- tools
Top-level directories are used to organize project files as follows:
docs
The home for project documentation source.
deployments
A deployment represents the build instance of a service (or in fact any unit of functionality) for a particular environment. For each deployment, there is a single deployable output, usually a binary executable. The deployment is concerned with configuring and building a particular set of components to run in a particular environment. For each supported deployment, there is a leaf sub-directory that lives under a parent. The parent directory identifies what’s being deployed while the leaf sub-directory identifies where it is being deployed. The following example illustrates how the ‘what’ and ‘where’ are combined to form fully defined deployments:
deployment-name = <descriptive-name>/<environment>
deployments
|-- protected-storage/opteesp
|-- crypto/opteesp
|-- ts-demo/arm-linux
|-- component-test/linux-pc
|-- libts/linux-pc
The trusted services project uses CMake to configure and generate build files. A CMakeLists.txt file exists for each deployment to define the set of components, any deployment specific configuration and anything environment specific. Each deployment leaf directory also holds a source file that defines the main entry point to allow a particular set of components to be initialized before entering the application that implements the core functionality of software being deployed.
The directory structure for deployments supports inheritance from the deployment parent to promote reuse of common definitions and initialization code. For example, deployments of the secure-storage service for different environments are likely to have similarities in terms of the set of components used and in subsystem initialization code. To avoid duplication between deployments, common cmake and source files may be located under the deployment parent. This is illustrated in the following:
deployments
|-- secure-storage
|-- common.cmake <-- Common cmake file
|-- service_init.c <-- Common initialization code
|-- opteesp
|-- CMakeLists.txt <-- Includes ../common.cmake to inherit common definitions
|-- opteesp_service_init.c
environments
An environment represents the execution context in which a built image runs. There are different environments represented in the project structure, one for each supported isolated execution context. Files related to a particular environment live under a sub-directory whose name describes the environment. For example:
opteesp - An S-EL0 secure partition hosted by OP-TEE.
arm-linux - Linux user-space, cross compiled for Arm.
linux-pc - Native PC POSIX environment.
Files related to an environment will tend to live outside of the project tree and will need to be imported in some way. How this is handled will depend on the environment. An environment will generally provide the following:
Environment specific libraries that have been externally built.
Public header files for libraries.
An install method that takes a deployment image and installs it in the environment.
Compiler configuration
A deployment will include an environment specific build file (see above) that defines the list of environment specific components used for a deployment into a particular environment.
platforms
For some deployments, an environment may not provide access to all hardware backed services needed by an application. Files under the platforms directory are concerned with configuring and building platform specific code that extends the capabilities of an environment. Details of how this works are described in the: Service Deployment Model
components
Source code lives under the components directory, organized as reusable groups of source files. A component is the unit of reuse for code that may be combined with other components to realize the functionality needed for a deployment. Creating a new deployment should be just a case of selecting the right set of components to provide the required functionality for the target environment. Some components may depend on other components and others may only make sense in a particular environment.
The components sub-tree has an organization that reflects the layered model where service components are kept separate from RPC components and so on. There is also a separation between client components and service provider components. The following illustrates this:
components
|-- service
| |-- common
| | |-- test
| |-- secure-storage
| | |-- frontend
| | |-- backend
| | |-- factory
| | |-- test
| |-- crypto
| | |-- client
| | |- component.cmake
| | |-- provider
|-- rpc
| |-- common
| |-- ffarpc
| | |-- caller
| | |-- endpoint
Each leaf directory under the components parent includes a cmake file called component.cmake. This is used to define all files that make up the component and any special defines that are needed to build it. A deployment CMakeLists.txt just needs to reference the required set of components. No details of the component internals are reflected in the deployment CMakeLists.txt file.
Test components
Test code is treated in exactly the same way as any other source code and is organized into components to achieve the same reuse goals. To create a deployment intended for testing, you select an appropriate set of components where some happen to be test components. By convention, test components live in sub-directories called test. Test directories are located at the point in the components sub-tree that reflects the scope of tests. In the above example, two test sub-directories are illustrated. The locations of the test component directories imply the following about the scope of the tests:
components
|-- service
| |-- common
| | |-- test <-- Tests for the common service component
| |-- secure-storage
| | |-- frontend
| | |-- backend
| | |-- factory
| | |-- test <-- Service level tests for the secure-storage service
If it is necessary to componentize tests further, sub-directories under the test directory may be used, say for different classes of test, e.g.:
components
|-- service
|-- common
|-- test
|-- unit
|-- fuzz
external
Code that originates from other open source projects that needs to be built as part of trusted service deployments is represented by directories beneath the external top-level directory. External components are generally fetched from the source repo during the CMake build process. During the build for a particular deployment, a deployment specific configuration may be applied to an external component. A CMake file under each external component directory is responsible for fetching and building the external component:
external
|-- CppUTest
| |-- CppUTest.cmake
| |-- cpputest-cmake-fix.patch
|-- mbed-crypto
|-- nanopb
protocols
The protocols directory holds protocol definition files to allow clients to use trusted services. Ideally, the service access protocol should be formally defined using an interface description language (IDL) that provides a programming language neutral definition of the service interface. The protocols directory structure accommodates protocol definitions using different definition methods. Where a service access protocol has been defined using an IDL with language compilation support, code may be generated from the interface description to allow RPC request and response parameters to be serialized and deserialized in a compatible way between service clients and providers. The protocols sub-tree is organized as follows:
protocols
|-- service
| |-- common
| |-- crypto
| | |-- packed-c <-- C structure based definitions
| | |-- protobuf <-- Protocol Buffers definitions
| |-- secure-storage
| |-- packed-c
tools
The project directory structure includes a tools directory for holding general purpose tool components to support activities such as build and test.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Service Deployment Model
A goal of the Trusted Services project is to provide a toolbox of reusable service components that can be deployed across a wide range of platforms. The project structure promotes reuse by grouping related source files into subdirectories that represent reusable components. Components may be configured and combined in different ways to meet the needs of platform integrators who aim to create firmware with the right features and tradeoffs for their products.
Within the TS project structure, build files that combine and configure components to create deployable firmware images reside under the deployments top-level directory. Beneath the deployments parent are sub-directories concerned with building and deploying different applications. Applications can generally be classified as one of the following:
Service providers
Test suites
Libraries
Development support applications
This page is mainly concerned with describing the conventions used to enable service providers to be deployed in different environments, on different platforms and with different capabilities. The conventions aim to minimize build definition duplication between alternative deployments while offering sufficient flexibility to customize capabilities and support different platforms. The service deployment model borrows from a pattern used for deploying cloud services, where there is a similar requirement for deployment flexibility.
Ports and Adapters Architecture
An application is decoupled from any particular environment via a set of interfaces that reflect the needs of the application. This model conforms to the ports and adapters architectural pattern that aims to avoid tight coupling between application components and any particular environment. This pattern, also known as the hexagonal architecture, is often illustrated as a hexagonal cell with the application on the inside and the platform on the outside.
The following diagram illustrates how ports and adapters is applied in the trusted services project to provide a model for service provider deployment.
This deployment model has the following characteristics:
The application is decoupled from the environment by a set of virtual interfaces (ports) that reflect the needs of the application.
Ports are realized by a set of adapters. An adapter may:
Use a service/device provided by the platform or environment.
Communicate with another service provider.
Provide a self-contained implementation.
The set of adapters that the application depends on represents the infrastructure that is needed to support the application.
Different infrastructure realizations may be needed for different deployments of the same service provider.
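As a sketch of how this looks in C (using illustrative names rather than actual project interfaces), a port can be expressed as a structure of function pointers that adapters populate for a concrete backend:

#include <stddef.h>
#include <stdint.h>

/* A 'port': the storage interface that the application depends on.
 * The application is written purely against this interface. */
struct block_store {
    int (*read)(void *context, uint32_t lba, uint8_t *buf, size_t len);
    int (*write)(void *context, uint32_t lba, const uint8_t *buf, size_t len);
    void *context;
};

/* 'Adapters' realize the port for concrete backends.  A deployment
 * selects the adapter that matches its infrastructure, e.g. a
 * RAM-backed store for testing or a flash driver for a product. */
struct block_store *ram_block_store_init(void);
struct block_store *flash_block_store_init(void);

/* The same application code runs with either adapter. */
int run_storage_provider(struct block_store *store);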
Service Deployment Structure
By convention, the directory structure for service provider deployments reflects the layers in the ports and adapters architecture. The following dependency diagram illustrates the set of relationships that exist for a fully defined deployment:
To avoid undesirable build definition duplication when adding new deployments of an application, the directory structure used to organize files related to different deployments should reflect the above model. The following table lists reusable build components that may be used across different deployment definitions:
Build Component | Defines | Reuse Scope
---|---|---
Application | The set of components that form the core application to be deployed. | All deployments of the application.
Infra | The set of adapters that realize the ports that the application depends on. | Any deployment that uses the same infrastructure to support the application. The scope depends on how specific the infrastructure is. An infrastructure definition may allow for some level of configurability to enable a deployment to impose a particular build configuration. Where an infrastructure includes adapters that use a well supported driver model (such as UEFI), the scope for reuse is large.
Env | The set of environment specific components that are common across all deployments of an application for a particular environment. | All deployments of the application into a specific environment. There is scope to improve reuse of environment specific components across multiple deployments.
Config | Build configuration variables, combined with a particular application, infra and env. | Depends on how specific the config is.
Deployment Directory Structure
Using the block-storage deployment as an example, the deployment directory structure reflects the service deployment model as follows:
deployments
|- block-storage
|- block-storage.cmake - Common application build definition
|- env - Environment specific build definitions
|- infra - Alternative infrastructures
|- config - Configurations for block-storage deployments
Configuration Definitions
To build a particular configuration of the block-storage service provider (in this case, one that uses flash storage on the N1SDP platform), use:
cd deployments/block-storage/config/n1sdp-flash
cmake -B build
cd build
make
The CMakeLists.txt file for the n1sdp-flash deployment of the block-storage service provider does the following:
Sets TS_PLATFORM to the n1sdp platform name
Sets any build configuration parameter overrides
Includes ${DEPLOYMENT_ROOT}/env/opteesp.cmake
Includes ${DEPLOYMENT_ROOT}/infra/edk2-flash.cmake
Includes ${DEPLOYMENT_ROOT}/block-storage.cmake
Each alternative deployment of the block-storage service provider is represented by a subdirectory under ${DEPLOYMENT_ROOT}/config. The number of directories under config is likely to grow to accommodate platform variability and different tradeoffs for how the infrastructure for an application will be realized.
To support test and to provide a starting point for new config definitions, a default config should exist for each supported environment.
Infrastructure Definitions
An infrastructure defines a set of adapter components that realize the ports that the application depends on. For block-storage deployments, some possible infrastructures are:
Infra Name | Description
---|---
ref-ram | Provides volatile storage using the reference partition configuration. Intended for test.
edk2-flash | Provides persistent storage using a flash driver that conforms to the EDK2 driver model.
tfa-flash | Provides persistent storage using a flash driver that conforms to the TF-A driver model.
rpmb | Provides persistent storage using an RPMB partition, accessed via a normal world (Nwd) agent.
Platform Support
The Trusted Services project is not intended to be a home for platform specific code such as device drivers. Ideally, device drivers and other platform specific code should be reused from external upstream repos such as edk2-platforms. The ports and adapters pattern allows alternative driver models to be accommodated so different upstream projects with different driver models may be used without the need to modify driver code. Where driver reuse from an external project is not possible, the platform directory structure can accommodate driver components that reside within the TS project.
The ability to accommodate third-party device drivers that conform to different driver models is important for enabling TS components to be used across different segments. The EDK2 project, for example, can provide a rich source of drivers that conform to the UEFI model. UEFI is not, however, adopted in all product segments.
All files related to supporting different platforms reside beneath the platform top-level directory.
Platform Providers
Within the TS project, a platform provider is responsible for adding and maintaining the glue that enables platform specific code to be used from a particular source. The platform code will either be:
Fetched from an upstream repo (preferred)
Added to the TS project.
Each platform provider is represented by a subdirectory beneath platform/providers. For Arm provided platforms, the structure will look something like this:
platform
|-- providers
|--arm
|-- corstone1000
|-- fvp
|-- fvp_base_aemva
|-- fvp_base_revc-2xaemv8a
|-- platform.cmake
Under each platform leaf directory is a file called platform.cmake. This file implements the common configuration and build interface that will be used during the deployment build process. How this interface is realized is entirely down to the platform provider. An implementation will do things like setting configuration variables for SoC, board and driver selection. Any additional files needed to support platform configuration and build may be included within the platform provider’s sub-tree.
For product developers who want to define and maintain their own private platforms, it should be possible to override the default platform/providers directory path to allow an alternative sub-tree to be used. A product developer is free to organize a private sub-tree in any way that suits their needs.
Although the TS project structure doesn’t mandate it, platform specific firmware is likely to live outside of the TS project. The ability to reuse existing drivers and driver frameworks is important for promoting adoption across hardware from different vendors. Board and silicon vendors may reuse existing CI and project infrastructure for platform components that they maintain.
Platform support that depends on EDK2 platform components is represented by the edk2 platform provider. Files related to the EDK2 platform provider are organized as follows:
platform
|- providers
|- edk2
|- edk2-platforms.cmake - Fetches the upstream edk2-platforms repo
|- platform - Directory for platform definitions, organized by contributor
|- arm
|- n1sdp
|- platform.cmake
Some special platforms are provided by the TS project itself. These are represented beneath the ts provider. Current TS platforms are:
TS Platform | Purpose
---|---
ts/vanilla | A platform that never provides any drivers. The ts/vanilla platform should be used when an environment provides its own device framework and no additional drivers need to be provided by the platform. An attempt to build a deployment with platform dependencies on the vanilla platform will result in a build-time error. The vanilla platform is selected by default at build-time if no explicit platform has been specified.
ts/mock | A platform that provides a complete set of drivers that may be selected when building any deployment. The platform uses mock drivers that don’t offer functionality suitable for production builds. The mock platform is useful for CI build testing of deployments with platform dependencies. You should always expect a deployment with platform dependencies to build when TS_PLATFORM=ts/mock.
Driver Models
Alternative driver models are represented by subdirectories beneath platform/driver_model. Driver code imported from an external project, such as edk2-platforms, will also depend on interface and other header files related to the driver model. For drivers reused from edk2-platforms, the driver interface header files will define interface structures defined by the UEFI specification. The following example illustrates two driver models, one for UEFI drivers from the EDK2 project and another for bare-metal drivers that conform to TS defined interfaces:
platform
|- driver_model
|- edk2
|- baremetal
Header files under the driver_model/edk2 directory will either explicitly provide definitions for the EDK2 driver model or include definitions from an external component. To maintain compatibility with driver code imported from edk2-platforms, sub-directories beneath platform/driver_model/edk2 should conform to the EDK2 directory structure and naming conventions. The following illustrates how UEFI driver model files are organized:
platform
|- driver_model
|- edk2
|- interface
|- Protocol
| |- BlockIo.h
| |- DiskIo.h
| |- FirmwareVolumeBlock.h
|
|- Library
| |- IoLib.h
| |- DebugLib.h
Drivers
The platform/drivers directory provides a home for CMake files that enable driver code to be built as part of the deployment build process. Source files will either have been fetched from an upstream repo or will live under the platform/drivers parent.
Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Service Access Protocols
A trusted service is accessed by calling service-specific methods via an RPC mechanism. The set of callable methods forms the public interface exposed by a service. This section is concerned with interface conventions and protocols used for serializing method parameters and return values. It is anticipated that there will be a need to support different parameter serialization schemes to suit different needs. The project accommodates this with the following:
The protocols directory structure allows for different protocol definitions for the same service.
Message serialization code is decoupled from service provider code using an abstract ‘serializer’ interface. Alternative concrete serializers may provide implementations of the interface.
RPC Session
Before a client can call trusted service methods, an RPC session must be established where an association is made between an RPC Caller and a call endpoint that corresponds to the required service provider instance. To establish the session, the client must provide:
An identifier for the service provider instance.
Any client credentials that allow RPC layer access control to be applied if needed.
Once the RPC session is established, the client may call service methods via an abstract RPC Caller interface that takes the following parameters:
The opcode that identifies the method to call.
A buffer for the serialized method parameters.
A buffer for the serialized return values.
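As an illustration, an abstract RPC Caller interface of this shape could be expressed in C as follows. The names and signatures below are a hypothetical sketch, not the actual definitions from the TS source tree:
#include <stddef.h>
#include <stdint.h>
-
typedef uint32_t rpc_status_t;
-
/* Abstract RPC caller: binds a call transport to a service endpoint. */
struct rpc_caller {
    void *context;
-
    /* Invoke the method identified by opcode. Serialized request
     * parameters are read from req_buf; serialized return values are
     * written to resp_buf. op_status receives the service specific
     * operation status. The return value is the RPC layer status. */
    rpc_status_t (*invoke)(void *context, uint32_t opcode,
                           const uint8_t *req_buf, size_t req_len,
                           uint8_t *resp_buf, size_t resp_size,
                           size_t *resp_len, int *op_status);
};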
A deployment independent interface for locating services and establishing RPC sessions is described here: Service Locator
Status Codes
On returning from a request to invoke a service method, two status codes are returned as follows:
RPC status - A generic status code that corresponds to the RPC call transaction. RPC status codes are standardized across all services.
Operation status - a service specific status code.
Separation of status codes by layer allows service specific status codes to be accommodated while keeping RPC status codes common.
A client should only check the returned operation status if the returned RPC status value is RPC_CALL_ACCEPTED. All other RPC status values indicate that an error occurred in delivering the RPC request. An RPC status of RPC_CALL_ACCEPTED does not indicate that the service operation was successful. It merely indicates that the request was delivered, a suitable handler was identified and the request parameters were understood.
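Continuing the hypothetical sketch above, a client would check the two status codes in layers; only when the RPC status is RPC_CALL_ACCEPTED is the operation status meaningful. The RPC_CALL_ACCEPTED value below is assumed for illustration only:
#define RPC_CALL_ACCEPTED 0  /* assumed value, for illustration only */
-
int call_service_method(struct rpc_caller *caller, uint32_t opcode,
                        const uint8_t *req_buf, size_t req_len,
                        uint8_t *resp_buf, size_t resp_size)
{
    int op_status = 0;
    size_t resp_len = 0;
-
    rpc_status_t rpc_status = caller->invoke(caller->context, opcode,
                                             req_buf, req_len,
                                             resp_buf, resp_size,
                                             &resp_len, &op_status);
-
    if (rpc_status != RPC_CALL_ACCEPTED)
        return -1;  /* Delivery failed; op_status is meaningless here. */
-
    /* The request was delivered; op_status now reflects the outcome of
     * the service operation itself. */
    return op_status;
}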
Service Access Protocol Definition Conventions
A service access protocol defines the following:
Opcodes used for identifying service methods.
Request parameters for each method.
Response parameters for method return values.
Operation status code.
For details of how public interface definition files for trusted services are organized, see: Project Structure
It is possible that for certain deployments, it will be necessary to customize which parameter encoding scheme is used. Many schemes are possible, such as Protocol Buffers, CBOR, JSON, TLV, TPM commands or packed C structures. To make scheme customization straightforward, serialize/deserialize operations should be encapsulated behind a common interface to decouple service provider code from any particular serialization scheme. A section below describes a pattern for achieving this.
Service Namespace
Definitions that form a service access protocol should live within a namespace that is unique for the particular service. Using a namespace for service definitions avoids possible clashes between similarly named definitions that belong to different services. How the namespace is implemented depends on how the access protocol is defined. For example, the Protocol Buffers definitions for the crypto service all live within the ts_crypto package. The recommended convention for forming a trusted service namespace is as follows:
ts_<service_name>
e.g.
ts_crypto
ts_secure_storage
Language Independent Protocol Definitions
By defining service access protocols using an interface description language (IDL) with good support for different programming languages, it should be straightforward to access trusted services from clients written in a range of languages. On Arm Cortex-A deployments, it is common for user applications to be implemented using a range of languages such as Go, Python or Java. Rather than relying on a binding to a C client library, native client code may be generated from the formal protocol definition files. Initial protocol definitions use Google Protocol Buffers as the IDL. The project structure allows for use of alternative definition schemes and serializations.
Opcode Definition
Opcodes are integer values that identify methods implemented by a service endpoint. Opcodes only need to be unique within the scope of a particular service. The mapping of opcode to method is an important part of a service interface definition and should be readily available to clients written in a variety of programming languages. For a Protocol Buffers based definition, opcodes are defined in a file called:
opcodes.proto
For example, for the Crypto trusted service, the Protocol Buffers opcode definitions are in:
protocols/service/crypto/protobuf/opcodes.proto
Alternative definitions for light-weight C clients using the packed-c scheme are in:
protocols/service/crypto/packed-c/opcodes.h
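For illustration, a packed-c style opcode header might contain definitions of the following form. The opcode values shown are invented for the example and are not the project's actual assignments:
#ifndef TS_EXAMPLE_OPCODES_H
#define TS_EXAMPLE_OPCODES_H
-
/* Opcodes need only be unique within the scope of one service. */
#define TS_EXAMPLE_OPCODE_BASE   (0x0100)
#define TS_EXAMPLE_OPCODE_OPEN   (TS_EXAMPLE_OPCODE_BASE + 1)
#define TS_EXAMPLE_OPCODE_READ   (TS_EXAMPLE_OPCODE_BASE + 2)
#define TS_EXAMPLE_OPCODE_CLOSE  (TS_EXAMPLE_OPCODE_BASE + 3)
-
#endif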
Parameter Definition
The convention used for serializing method parameters and return values may be specific to a particular service. The definition file will include message definitions for both request and response parameters. Common objects that are used for multiple methods should be defined in separate files. When using Protobufs, the following naming convention for method parameter files should be used:
<method_name>.proto
For example, the Crypto export_public_key method is defined in a file called:
protocols/service/crypto/protobuf/export_public_key.proto
RPC Status Codes
Generic RPC status code definitions using different definition schemes are defined here:
protocols/rpc/common/protobuf/status.proto
protocols/rpc/common/packed-c/status.h
Service Status Codes
Service specific status code definitions using different definition schemes are defined here (using crypto service as an example):
protocols/service/crypto/protobuf/status.proto
protocols/service/crypto/packed-c/status.h
Status code definitions may also be shared between services. For example, services that conform to PSA API conventions will use standardized PSA status codes, defined here:
protocols/service/psa/protobuf/status.proto
protocols/service/psa/packed-c/status.h
Use of Protocol Buffers
When Protocol Buffers is used for protocol definition and parameter serialization, the following conventions have been adopted.
.proto File Style Guide
The style of the .proto files should follow Google’s Protocol Buffers Style Guide.
Protocol Buffer Library for Trusted Services
Protocol Buffers standardizes how service interfaces are defined and the on-wire encoding for messages. Because of this, service clients and service providers are free to use any conformant implementation. However, for trusted services that may be deployed across a range of environments, some of which may be resource constrained, a lightweight library should be used for C/C++ code that implements or uses trusted services. For this purpose, Nanopb (https://github.com/nanopb/nanopb) should be used.
Serialization Protocol Flexibility
Many different serialization protocols exist for encoding and decoding message parameters. Hard-wiring a particular protocol into a trusted service provider implementation isn’t desirable for the following reasons:
Depending on the complexity of serialization operations, mixing serialization logic with protocol-independent code makes trusted service provider code bigger and more difficult to maintain.
Different protocols may be needed for different deployments. It should be possible to make a build-time or even a run-time selection of which protocol to use.
The number of supported serialization protocols is likely to grow. Adding a new protocol shouldn’t require extensive code changes and definitely shouldn’t break support for existing protocols.
These problems can be avoided by implementing protocol specific operations behind a common interface. Serialize/deserialize operations will have the following pattern:
int serialize_for_method(msg_buffer *buf, in args...);
int deserialize_for_method(const msg_buffer *buf, out args...);
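One way to realize this pattern in C is a struct of function pointers, with one concrete instance per supported encoding; the provider selects a serializer using the encoding ID carried with each call. The sketch below is illustrative and uses a made-up 'export_key' method; it is not the project's actual serializer interface:
#include <stddef.h>
#include <stdint.h>
-
struct msg_buffer {
    uint8_t *data;
    size_t size;      /* buffer capacity */
    size_t data_len;  /* bytes used */
};
-
/* Serializer interface for a hypothetical 'export_key' method. */
struct example_serializer {
    int (*deserialize_export_key_req)(const struct msg_buffer *buf,
                                      uint32_t *key_id);
    int (*serialize_export_key_resp)(struct msg_buffer *buf,
                                     const uint8_t *key_data,
                                     size_t key_len);
};
-
/* A provider might hold one serializer per registered encoding and
 * select the right one per call based on the caller's encoding ID:
 *
 *     const struct example_serializer *s = serializers[encoding_id];
 */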
To extend a service provider to support a new serialization encoding, the following steps are required:
Define a new encoding identifier string if a suitable one doesn’t exist. Currently used identifiers are protobuf and packed-c. The identifier will be used as a directory name so it needs to be filename-friendly. Some likely candidate identifiers could be cbor and json.
Add a new RPC encoding ID to protocols/rpc/common/packed-c/encoding.h. This is used by a caller to identify the encoding used for RPC parameters. This is analogous to the content-type header parameter used in HTTP.
Under the protocols parent directory, add a new access protocol definition for the service that needs extending. This will be a representation of existing service access protocols but using a definition notation compatible with the new encoding.
Add a new serializer implementation under the service provider’s serializer directory e.g. for the crypto service - components/service/crypto/provider/serializer.
Add registration of the new serializer to any deployment initialization code where the new encoding is needed.
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Service Locator
The service locator model provides clients of trusted services with a common interface for locating service instances and establishing RPC sessions with service endpoints. By using the service locator, application code is decoupled from the details of where services are deployed. Use of the service locator is entirely optional for client applications. Different deployments of libts provide implementations of the service locator API that are suitable for different environments. The trusted services project uses libts to decouple test code from the services under test. This enables tests to be reused for testing on different platforms with different distributions of services. The same flexibility may be exploited when writing applications that use trusted services.
Service Locator Model
The following class diagram illustrates the service locator model:
The model takes inspiration from microservices architectures where there is a similar need to decouple clients from service location. In the model, classes have the following roles:
Class service_locator
The service_locator is responsible for locating service provider instances and returning a service_context object to allow a client to establish RPC sessions with the located service endpoint. A service instance is requested by a client using a service name. The service name uniquely identifies a service instance, independent of where the service provider is located. The service_locator is a singleton and forms the common interface for locating trusted services.
Class service_context
A service_context object represents a located service and enables a service client to establish RPC sessions with the service. A concrete service_context will provide open and close methods that manage RPC session setup and teardown.
Class rpc_caller
An rpc_caller provides methods for making remote calls associated with a service endpoint. An rpc_caller object represents an instance of an RPC session.
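Putting the three classes together, a typical client flow is: query the service locator with a service name, open an RPC session on the returned service context, make calls through the rpc_caller, then close. The following C sketch is hypothetical; function names and signatures are illustrative rather than the actual libts API:
struct service_context;
struct rpc_caller;
-
/* Illustrative locator functions; not the actual libts signatures. */
struct service_context *locate_service(const char *sn, int *status);
struct rpc_caller *service_context_open(struct service_context *ctx);
void service_context_close(struct service_context *ctx,
                           struct rpc_caller *caller);
void service_context_relinquish(struct service_context *ctx);
-
void example_client(void)
{
    int status = 0;
-
    /* Resolve a location independent service name to a context. */
    struct service_context *ctx =
        locate_service("sn:trustedfirmware.org:crypto.1.0:0", &status);
    if (!ctx)
        return;
-
    /* Establish an RPC session with the located endpoint. */
    struct rpc_caller *caller = service_context_open(ctx);
    if (caller) {
        /* ... invoke service methods via the caller ... */
        service_context_close(ctx, caller);
    }
-
    service_context_relinquish(ctx);
}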
Locating Service Instances
The location of service instances is likely to vary between deployments. Many factors influence where a service instance is deployed and the method needed to locate it. e.g.:
The type of processing environment in which a service instance is deployed, e.g. a service could be deployed in a secure partition, as a TA or in a secure enclave.
Whether a service instance is co-located with other service instances in the same processing environment or whether a separate environment instance is used per service instance.
For Linux user-space clients, the kernel driver model used for messaging influences how a service is located and the type of messaging interface used for RPC requests.
Because of the wide variability in service deployment options, the Trusted Services framework supports the following:
Location independent service names - a naming convention for identifying service instances, wherever they are located. By using a location independent service name, a client is decoupled from the actual location of a service instance (similar to a DNS name). A concrete service locator is responsible for resolving the location independent service name.
Service location strategies - to accommodate the likely variability, an extensible framework for alternative service location strategies is provided.
Service Names
Location Independent Service Names
Because of the potential variability in where service instances are deployed, a naming convention that allows a service instance to be identified, independent of its location, is useful. By using a location independent service name, coupling between a client application and any particular service deployment can be avoided. Use of the Service Locator API and location independent service names allows client applications to be portable across different platforms.
The service instance naming convention uses a URN type string to uniquely identify a particular instance of a class of service. To provide extensibility, a naming authority is included in the name. This allows anyone with a domain name to define their own unique service names. Core service names are defined under the trustedfirmware.org authority. The general structure of a service name is as follows:
urn:sn:<authority>:<service>.<version>:<instance>
The 'urn' prefix should be dropped when service names are used in context.
The version field is optional.
The naming convention includes a version number, separated from the <service> field by a ‘.’ character. Beyond the ‘.’, any version numbering scheme may be used. This will potentially be useful for delegating version compatibility decisions to a service locator. It is preferable for a client to specify a service name that includes a version number as this will potentially allow a service locator to:
Locate a compatible service instance. For example, a service provider may expose multiple RPC call endpoints to handle different protocol versions. A service locator may resolve the name to the compatible RPC endpoint, based on the version string requested by the client.
Fail gracefully if no compatible version is found.
Some example service names:
sn:trustedfirmware.org:crypto.1.0.4:0
sn:trustedfirmware.org:secure-storage.1.3.11:1
sn:trustedfirmware.org:tpm.2.0:0
Location Specific Service Names
To enable a client to specify location specific service names, it should also be possible to use names that express a location specific identifier such as a partition UUID. While the use of location specific service names creates a coupling between the client and details of the service deployment, their use may be important in the following cases:
Where there is no well-known mapping between a location independent service name and a location specific identifier.
Where the client needs to be specific e.g. for tests that target a specific service deployment.
Location specific service names use the same structure as location independent service names but with a technology specific authority field. The following is an example of a service name that identifies a service instance that is deployed in a secure partition:
sn:ffa:d9df52d5-16a2-4bb2-9aa4-d26d3b84e8c0:0
The instance field qualifies a particular SP instance from the discovered set.
Service Location Strategies
The method used by the service locator to resolve a service name to a service instance will depend on the environment in which a client is running and where service instances are located. Services will need to be located by any client of a trusted service. There are typically two classes of trusted service client:
A user-space application.
Another trusted service, running in a secure processing environment.
Different methods for locating service instances in different environments are illustrated in the following examples:
Locating a Service from Linux User-space
Depending on the kernel driver model used, the example methods for locating service instances from Linux user-space are:
Service instances are represented by device nodes e.g. /dev/tpm0. The service locator will simply map the <service> portion of the service name to the base device name and the <instance> to the device node instance.
A service instance is hosted by a TEE as a TA. The TEE will provide a discovery mechanism that will allow a TA type and instance to be identified. The service locator will need to map the service name to the TEE specific naming scheme.
A special messaging device provides a method for discovery, e.g. an FF-A driver supports partition discovery.
A device is used for remote messaging to a separate microcontroller. There is a well-known protocol for endpoint discovery using the messaging layer.
Locating a Service from another Trusted Service
Where a trusted service uses another trusted service, it is likely that both service instances will be running in the same security domain e.g. both running in secure partitions within the secure world. Where a single service instance is deployed per secure partition, the client service will use the following strategy to locate the service provider:
The service name is mapped to the well known UUID for the class of SP that hosts the service provider.
FF-A partition discovery is used to find all SPs that match the requested UUID.
The service instance portion of the service name is used to match the partition ID when selecting the target SP from the list of discovered SPs.
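The strategy described in the steps above could be sketched in C roughly as follows. The discovery helpers shown here are invented for illustration; a real deployment would use FF-A partition discovery ABIs:
#include <stddef.h>
#include <stdint.h>
-
struct sp_info {
    uint8_t uuid[16];
    uint16_t partition_id;
};
-
/* Invented helper: fills 'results' with SPs matching uuid and returns
 * the number found. */
size_t ffa_discover(const uint8_t uuid[16], struct sp_info *results,
                    size_t max_results);
-
/* Invented helper: maps a service class to its well known SP UUID. */
int service_name_to_uuid(const char *sn, uint8_t uuid[16]);
-
int locate_sp(const char *sn, unsigned int instance,
              uint16_t *partition_id)
{
    uint8_t uuid[16];
    struct sp_info found[8];
-
    /* Step 1: map the service name to the well known SP UUID. */
    if (service_name_to_uuid(sn, uuid) != 0)
        return -1;
-
    /* Step 2: discover all SPs that match the requested UUID. */
    size_t num = ffa_discover(uuid, found, 8);
    if (instance >= num)
        return -1;  /* No matching SP for the requested instance. */
-
    /* Step 3: the instance portion of the service name selects the
     * target SP from the discovered set. */
    *partition_id = found[instance].partition_id;
    return 0;
}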
Extending the Service Locator Model
To accommodate the need to support alternative location strategies, the Service Locator model can be extended to use a set of concrete strategy objects to implement different methods of locating a service instance. The set of strategies used will be different for different client environments. The following class diagram illustrates how the model can be extended.
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Software Requirements
As of today, the only available normal-world interface for Trusted Services is through Linux. Building an end-to-end firmware stack requires compiling the Linux kernel and Linux user-space applications. This restricts the possible host environments to Linux distributions. While some TS components can be built under Windows, this scenario is not mandated by this documentation.
The preferred host environment is Ubuntu 18.04.
The following tools are required:
CMake, version 3.18.4. (See the CMake download page.)
GNU Make v4.1 or higher.
Git v2.17 or newer.
Python 3.6 and the modules listed in <project>/requirements.txt.
GCC supporting the deployment:
opteesp environment: a host to aarch64 cross-compiler is needed. Please use the compilers specified by the OP-TEE documentation.
arm-linux environment: a host to aarch64 Linux cross-compiler is needed. Please use version 9.2-2019.12 of the “aarch64-none-linux-gnu” compiler, available from Arm Developer. (Note: the compiler that is part of the OP-TEE build environment works too.)
For the linux-pc environment, use the native version of GCC available in your Linux distribution.
The AEM FVP binaries if targeting the FVP platform.
To build the documentation, please refer to Documentation Build Instructions.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Build Instructions
All trusted service builds use CMake to create native build files for building and installing service binaries and other build products. Details about the tools needed for building are specified here: Software Requirements.
All top-level build files are located beneath the ‘deployments’ parent directory under a sub-directory for each deployment. For more information about the project directory structure, see: Project Structure.
Build Flow
All deployment builds follow a common flow that results in the creation of executable binaries or libraries and the installation of files into an output directory. Deploying the contents of the output directory into the target environment is handled in an environment specific way and is not part of the common build flow. The build flow conforms to the conventional CMake process, where building takes place in the following two stages:
Native build files, such as makefiles, are generated from CMake configuration files.
Native build tools, such as make, are used to build and install items, ready for deployment.
The following activity diagram illustrates the common deployment build flow. The green activity states lie outside of the common build flow. Environment specific instructions are provided for deploying into different environments:
Selecting the build type
The build type selects code optimization and debug information related compiler settings. The build system follows the standard CMake methodology and uses the CMAKE_BUILD_TYPE variable.
The build system uses the following build types:
Build type | Purpose | Description
---|---|---
Debug | For debugging purposes. | Optimization is off, debugging information generation is enabled.
MinSizeRel | Size optimized release build. | Optimization is configured to prefer small code size, debugging information is not generated.
MinSizWithDebInfo | For debugging size optimized release build. | Optimization is configured to prefer small code size, debugging information generation is enabled.
Release | Speed optimized release build. | Optimization is configured to prefer execution speed, debugging information is not generated.
RelWithDebugInfo | For debugging speed optimized release build. | Optimization is configured to prefer execution speed, debugging information generation is enabled.
Build type of external components can be configured with command line parameters. Parameter names follow the pattern -D<upper case component name>_BUILD_TYPE=<value>, e.g. -DNANOPB_BUILD_TYPE=Release. Supported values are component specific; please refer to the appropriate CMake file under <TS_ROOT>/external/<name>.
Building and Installing
When building from a clean environment where no generated build files exist, it is necessary to run the CMake command, specifying the source directory, the build directory and optionally, the install directory where build output is installed.
To illustrate the steps involved, we will build the ‘component-test’ executable to run in the ‘linux-pc’ environment. The built executable is a standalone program that uses the CppUTest framework to run a set of component level tests on components from within the project. For this example, it is assumed that we are building under Linux and ‘make’ is used as the native build tool.
The described steps may be used for any of the deployments under the top-level deployments directory.
Starting from the project root directory, change directory to the relevant deployment directory:
cd deployments/component-test/linux-pc
Build file generation is performed using the CMake command. If no CMAKE_INSTALL_PREFIX path is specified, build output will be installed in the default location (build/install). To generate build files that install to the default location, use:
cmake -S . -B build
To generate build files that install to an alternative location, use:
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=<install_dir>
Having successfully generated build files, the native build tool may be run to build and install files using:
cd build
make install
In the above example, all build output is written to a sub-directory called ‘build’. You are free to choose any location for build output.
Dependencies on external components and in-tree built objects, such as libraries, are handled automatically by the build system during the generating phase. External components are fetched from the relevant source repository and built as part of the build context for the deployment binary being built. This allows deployment specific configuration and compiler options to be applied to the external component without impacting other builds. Dependencies on in-tree built libraries are handled in a similar manner.
For information on running tests, see: Running Tests.
For more information on deployments, see: Deployments.
Installed build output files
On successfully completing the building phase of the build flow, a set of build output files is installed to the directory specified by CMAKE_INSTALL_PREFIX. The set of installed files will depend on the type of build and the environment in which the files will be deployed. The following table summarizes which files are installed for different types of build during the installing phase of the build flow:
Deployment type | Environment | Files installed
---|---|---
Binary executable | linux-pc, arm-linux | bin/ - program binary
Shared library | linux-pc, arm-linux | include/ - public header files; lib/ - shared library; lib/cmake/ - cmake target import file
SP image | opteesp | bin/ - stripped elf file for SP; lib/make - OP-TEE helper makefile
SP collection | opteesp | bin/ - set of stripped elf files; lib/make/ - set of OP-TEE helper makefiles
Deploying installed files
Having built and installed build output files to a known directory, further steps may be needed to deploy the files into the target processing environment. The nature of these steps will be different for different environments.
To avoid overly complicating the common Trusted Services build system, details of how installed files are deployed into the target execution environment are handled separately and may rely on environment specific tools.
Some example deployment methods are:
A filesystem share exists between a build machine and the target machine. Files installed into the shared directory are directly accessible by the target.
Installed files are incorporated into a third-party build process e.g. OP-TEE.
The following guides provide instructions on deploying services and running programs on FVP:
Batch Building
To support batch building of a set of deployments, a tool called b-test is included. For more information, see the b-test page.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Running Tests
Running component tests
On successfully completing the steps above, a binary executable called ‘component-test’ will have been created. Because this deployment targets the linux-pc environment, the executable may be run as a native application. The application uses the stock CppUTest command line test runner.
To run component tests, use:
./component-test -v
Typical verbose output:
TEST(PackedCprotocolChecks, checkTsStatusCodes) - 0 ms
TEST(InternalTrustedStorageTests, storeNewItem) - 0 ms
TEST(E2EcryptoOpTests, generateRandomNumbers) - 2 ms
TEST(E2EcryptoOpTests, asymEncryptDecrypt) - 4 ms
TEST(E2EcryptoOpTests, signAndVerifyHash) - 40 ms
TEST(E2EcryptoOpTests, exportAndImportKeyPair) - 18 ms
TEST(E2EcryptoOpTests, exportPublicKey) - 7 ms
TEST(E2EcryptoOpTests, generatePersistentKeys) - 39 ms
TEST(E2EcryptoOpTests, generateVolatileKeys) - 20 ms
TEST(CryptoFaultTests, randomNumbersWithBrokenStorage) - 0 ms
TEST(CryptoFaultTests, persistentKeysWithBrokenStorage) - 9 ms
TEST(CryptoFaultTests, volatileKeyWithBrokenStorage) - 8 ms
TEST(PocCryptoOpTests, checkOpSequence) - 13 ms
TEST(CryptoMsgTests, SignHashOutMsgTest) - 0 ms
TEST(CryptoMsgTests, SignHashInMsgTest) - 0 ms
TEST(CryptoMsgTests, ExportPublicKeyOutMsgTest) - 1 ms
TEST(CryptoMsgTests, ExportPublicKeyInMsgTest) - 0 ms
TEST(CryptoMsgTests, GenerateKeyInMsgTest) - 0 ms
TEST(ServiceFrameworkTests, serviceWithOps) - 0 ms
TEST(ServiceFrameworkTests, serviceWithNoOps) - 0 ms
TEST(TsDemoTests, runTsDemo) - 71 ms
OK (21 tests, 21 ran, 159 checks, 0 ignored, 0 filtered out, 233 ms)
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Documentation Build Instructions
To create a rendered copy of this documentation locally you can use the Sphinx tool to build and package the plain-text documents into HTML-formatted pages.
If you are building the documentation for the first time then you will need to check that you have the required software packages, as described in the Prerequisites section that follows.
Prerequisites
For building a local copy of the TS documentation you will need, at minimum:
GNU Make
Python 3 (3.5 or later)
PlantUML (1.2017.15 or later)
You must also install the Python modules that are specified in the requirements.txt file in the root of the docs directory. These modules can be installed using pip3 (the Python Package Installer). Passing this requirements file as an argument to pip3 automatically installs the specific module versions required.
Example environment
An example set of installation commands for Linux is given below, with the following assumptions:
OS and version: Ubuntu 18.04 LTS
virtualenv is used to separate the python dependencies
pip is used for python dependency management
bash is used as the shell.
sudo apt install make python3 python3-pip virtualenv python3-virtualenv plantuml
virtualenv -p python3 ~/sphinx-venv
. ~/sphinx-venv/bin/activate
pip3 install -r requirements.txt
deactivate
Note
More advanced usage instructions for pip are beyond the scope of this document but you can refer to the pip homepage for detailed guides.
Note
For more information on Virtualenv, please refer to the Virtualenv documentation.
Building rendered documentation
From the docs directory of the project, run the following commands.
. ~/sphinx-venv/bin/activate
make clean
make
deactivate
Output from the build process will be placed in:
<TS_ROOT>/docs/_build/html/
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Writing Documentation
TS is documented using Sphinx, which in turn uses Docutils and reStructuredText (reST hereafter).
The source files for the documents are in the docs directory of the TS repository.
The preferred output format is HTML, and other formats may or may not work.
Section Headings
In order to avoid problems when documents include each other, it is important to follow a consistent section heading style. Use at most five heading levels, following this style:
First-Level Title
=================
Second-Level Title
------------------
Third-Level Title
'''''''''''''''''
Fourth-Level Title
""""""""""""""""""
Fifth-Level Title
~~~~~~~~~~~~~~~~~
Inline documentation
To get all information integrated into a single document, the project uses Sphinx extensions that allow capturing inline documentation into this manual.
CMake
The project uses the “moderncmakedomain” Sphinx extension. This allows adding inline documentation to CMake files. For details please refer to the documentation of the plugin.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Build test runner
This directory captures build test case definitions and a tool to execute the tests based on that data. The tool combines the power of shell scripting with the power of structured data (YAML). The bridge between the two technologies is provided by the Jinja2 template engine and yasha.
Dependencies
Jinja2 and yasha are Python tools, so Python 3 is needed to run the tests. Please install the following tools into your build environment:
python3
pip3
After this, please install the further pip packages listed in requirements.txt:
pip3 install -r requirements.txt
Note
This document lists the dependencies of this tool only. To be able to successfully run the build tests, further tools are needed. Please refer to the Trusted Services documentation for details.
Files
Design
The project needs a convenient way to define and execute “build tests”. These tests aim to ensure that all build configurations are in good working condition. Testing is done by building all supported build configurations. In order to make the testing robust and easy to use, a “data driven” approach is the best fit. With this, test cases are described by pure data, and this data is processed by a tool which is responsible for test execution.
For command execution, the bash shell is a good candidate. It provides portability between OSs, is widely adopted and well tested. Unfortunately, shells are not good at handling structured data. To address this shortcoming, templating or “code generation” is used: the shell script that executes the commands is generated based on a template file and the test data.
Since Python is already a dependency of Trusted Services, we selected the Jinja2 template engine, and to decrease maintenance cost, we use it through yasha.
Usage
There are two “entry points” to the tests. If the intention is to run all tests, issue make.
Makefile
The makefile provides a high level “API”. It allows executing the script generation process and running the tests, and it ensures all components are fresh before execution.
Issue make help to get a list of supported commands.
run.sh
run.sh is the test runner. It is responsible for executing the needed builds in a proper way and thus validating the build definitions. Execute run.sh help to get further details.
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Services
Attestation Service
Overview
The Attestation service is responsible for reporting on the security state of a device. Because information is signed, a remote party may verify that the information is intact and authentic. The Attestation service can be used as part of an infrastructure for remote security monitoring. The Attestation service provider performs the following functions:
Collates information about device hardware and firmware. This information must be obtained in a secure way to provide a suitably trustworthy snapshot of a device’s security state.
Prepares and signs a report that includes the information as a set of claims about the device.
Like other trusted services, the Attestation service provider runs within a secure processing environment such as a secure partition or secondary MCU. Service operations are invoked by clients using a service access protocol that defines the serialization of requests and responses carried by the underlying RPC layer. Client-side adapters are available that support service access using the following C APIs:
PSA Initial Attestation API - used during normal device operation to obtain a fresh attestation token.
Attestation Provisioning API - used during manufacture for key provisioning operations.
Project Directories
Components within the Trusted Services project related to the Attestation service are located under the following directories:
Directory | Contains
---|---
components/service/attestation | Service specific code and API header files.
protocols/service/attestation | Service access protocol definitions.
deployments/attestation | Build files and deployment specific code for building the attestation service provider to run in different environments.
deployments/platform-inspect | A user-space application that retrieves information about platform firmware and hardware and produces a pretty printed output.
Attestation report
A fresh attestation report may be requested at any time to obtain the current view of a device’s security state. The report is encoded as a CBOR token, signed using the CBOR Object Signing and Encryption protocol (COSE). For more information about the report contents and encoding, see: https://www.psacertified.org/blog/what-is-an-entity-attestation-token/. The following text shows the typical content of an attestation report. This report was retrieved and decoded using the platform-inspect command line application:
attestation_report:
challenge: 32 2d 69 64 ba df b2 f3 28 e8 27 88 50 68 c2 94 7c 4d a9 71 ce 14 e9 f4 88 26 45 9d 2c f5 3c 1b
client_id: 0
boot_seed: 6c eb 03 90 46 e2 09 27 f2 1c 7c a2 2c 1a a6 a2 bd 41 5e 3c aa be 4a b1 fd 35 52 95 b9 74 32 42
security_lifecycle: 3000
instance_id: 01 cb e9 65 fc 88 90 69 36 4b b1 0c ef 04 ae 97 aa d7 7c f9 74 41 4d f5 41 0c d3 9d e3 df 97 de c5
sw_components:
type: BL_2
digest: a8 4f b4 7b 54 d9 4b ab 49 73 63 f7 9b fc 66 cb 85 12 ab 18 6f 24 74 01 5d cf 33 f3 80 9e 9b 20
type: BL_31
digest: 2f d3 43 6c 6f ef 9b 11 c2 16 dd 1f 8b df 9b a5 24 14 a5 c1 97 0c 3a 6c 78 bf ef 64 0f c1 23 e1
type: HW_CONFIG
digest: f3 de 4e 17 a1 a5 a7 fe d9 d9 f4 16 3c 49 36 7e ae f7 2f 2a a8 87 e6 b6 22 89 cd 27 dc 1c 80 25
type: SOC_FW_CONFIG
digest: 4e e4 8e 5a e6 50 ed e0 b5 a3 54 8a 1f d6 0e 8a ea 0e 71 75 0e a4 3f 82 76 ce af cd 7c b0 91 e0
type: BL_32
digest: 62 22 4f 0f b0 5d b4 77 1b 3f a5 2e ab 76 1e 61 17 b8 c6 6e ac 8c c8 4d 2e b0 7d 70 08 60 4b 41
type: BL32_EXTRA1_IMAGE
digest: 39 d2 b8 5d 93 5d f6 d8 f8 ed 0c 1a 3a e3 c8 90 72 19 f4 88 5c 79 15 05 7b f0 76 db c1 4c 5d 77
type: BL_33
digest: b5 d6 08 61 dd fa 6d da a3 f7 a5 de d6 8f 6f 39 25 b1 57 fa 3e db 46 42 58 24 8e 81 1c 45 5d 38
type: NT_FW_CONFIG
digest: 25 10 60 5d d4 bc 9d 82 7a 16 9f 8a cc 47 95 a6 fd ca a0 c1 2b c9 99 8f 51 20 ff c6 ed 74 68 5a
Design Description
Components related to the Attestation service are partitioned as follows:
The partitioning into components reflects the following problem areas:
Component | Problem Area
---|---
claims | Collecting diverse information about a device and presenting it in a uniform way. Provides an extensible framework that allows new sources of information to be added while avoiding coupling to other components.
client | Client side adapters for calling service operations.
key_mngr | Manages provisioning related operations and access to the key (IAK) used for report signing.
reporter | Combines the set of claims that forms the content of an attestation report, encoding it and signing it using the IAK.
provider | The service provider that handles incoming requests.
protocol | The service access protocol definition that describes supported operations and the serialization of input and output parameters.
Claims Model
The set of available claims about a device and the method for obtaining them is likely to vary between different platforms. The following are examples of likely variations:
The method for collecting boot measurements will depend on the boot loader and on SoC architecture. Some likely variations are:
Passed forward using a TPM event log or via a proprietary format.
Boot measurements are stored in TPM PCR type registers that need to be read to obtain claims about loaded components.
The set of information passed forward by the boot loader may vary between platforms. Information such as the boot seed or device lifecycle state may be owned by the boot loader on some platforms but not on others.
Platform vendors may wish to include custom claims within the attestation report that reflect vendor specific views of security state.
To accommodate these variations, a flexible claims model is implemented with the following characteristics:
Any claim is represented by a common structure with members to identify:
The category of claim - e.g. this is a claim about device hardware, firmware, the verification service.
The subject of the claim - a claim specific identifier
A variant id to identify the data type for a claim - e.g. integer, byte string, text string or a collection.
Arbitrarily complex claim structures may be presented in a normalized way using combinations of claim variants.
Claims are collected by a set of ‘claim sources’. Each concrete claim source implements the platform specific method for collecting information and representing it in a standard way. The set of claim sources used may vary for different deployments.
Claim sources are registered with the claims_register. This is a singleton that provides methods for querying for different sets of claims e.g. all device claims or all firmware measurements. By collating claims by category, tight coupling between the reporter and the set of available claims is avoided.
The following class diagram illustrates the implemented claims model:
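For illustration, a normalized claim structure with the characteristics listed above might be declared as in the following C sketch. Names and layout are illustrative, not the actual definitions under components/service/attestation:
#include <stddef.h>
#include <stdint.h>
-
enum claim_category {
    CLAIM_CATEGORY_DEVICE,
    CLAIM_CATEGORY_VERIFICATION_SERVICE,
    CLAIM_CATEGORY_BOOT_MEASUREMENT,
};
-
enum claim_variant_id {
    CLAIM_VARIANT_INTEGER,
    CLAIM_VARIANT_BYTE_STRING,
    CLAIM_VARIANT_TEXT_STRING,
    CLAIM_VARIANT_COLLECTION,
};
-
struct claim {
    enum claim_category category;      /* What the claim is about */
    uint32_t subject_id;               /* Claim specific identifier */
    enum claim_variant_id variant_id;  /* Selects the value variant */
-
    union {
        int64_t integer;
        struct { const uint8_t *bytes; size_t len; } byte_string;
        struct { const char *string; size_t len; } text_string;
        /* A collection variant would hold an iterator over child
         * claims, enabling arbitrarily complex structures. */
    } variant;
};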
Claim Sources
It is envisaged that the number of concrete claim sources will grow to cope with differences between platforms and the need to include custom claims in attestation reports. The following table lists some existing claim sources:
Claim Source | Description
---|---
event_log | A claim source that sources a claim_collection variant. An iterator may be created that allows claims within a TCG event log to be iterated over and accessed.
boot_seed_generator | Where a boot seed is not available from another source, a boot_seed_generator may be used in a deployment. On the first call to get_claim(), a random boot seed is generated and returned as a byte_string claim variant. On subsequent calls, the same boot seed value is returned.
instance_id | A claim source that returns a device instance ID, derived from the IAK public key.
null_lifecycle | Used when there is no hardware backed support for the device lifecycle state variable. This claim source just returns a lifecycle state of ‘unknown’.
Reporter
The contents of the attestation report created by the reporter are determined by the set of claim sources registered with the claims_register. To generate a PSA compliant attestation report, the reporter queries for the following categories of claim:
Device
Verification service
Boot measurements
Having collated all claims, the report is serialized as a CBOR object using the qcbor open source library. The CBOR object is then signed using the t_cose library to produce the final attestation token.
Provisioning Flows
The Attestation service uses the IAK (an ECDSA key pair) for signing attestation reports. An external verification service needs a way of establishing trust in the IAK used by a device to sign a report. This trust relationship is formed when a device is provisioned during manufacture. During provisioning, the following steps must be performed in a secure manufacturing environment:
A unique IAK is generated and stored as a persistent key in the device’s secure key store.
The IAK public key is obtained and stored in a central database of trusted devices. The hash of the IAK public key (the device’s instance ID) is used as the database key for accessing the stored key.
To verify the authenticity of an attestation report, an external verifier must query the database using the instance ID claim contained within the report. The signature on the report is viewed as authentic if the following are true:
A key record exists for the given instance ID within the database.
The signature is verified successfully using the corresponding public key.
The attestation access protocol includes operations to support provisioning. These operations may be invoked using a simple client C API (see attest_provision.h) or by using the access protocol directly for non-C clients. The following two alternative provisioning flows are supported:
Self-generated IAK
When a device powers up before provisioning has been performed, no IAK will exist in the device’s key store. As long as no attestation related service operations are performed, the device will remain in this state. To trigger the self generation of an IAK, factory provisioning software should call the export_iak_public_key operation. If no IAK exists, one will be generated using the device’s TRNG. A benefit of this flow is that the IAK private key value is never externally exposed. To support test deployments where no persistent storage is used, the self-generated IAK flow may optionally generate a volatile key instead of a persistent key.
Imported IAK
To support external generation of the IAK, a one-time key import operation is also supported. When a device is in the pre-provisioned state where no IAK exists, import_iak may be called by factory provisioning software. Importantly, import_iak may only be called once. An attempt to call it again will be rejected.
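As an illustration of the self-generated IAK flow, factory provisioning software might do something like the following. The function name binding and signature are hypothetical; see attest_provision.h for the real client API:
#include <stddef.h>
#include <stdint.h>
-
/* Hypothetical binding of the export_iak_public_key operation; the
 * real signature lives in attest_provision.h. */
int export_iak_public_key(uint8_t *buf, size_t buf_size, size_t *key_len);
-
int provision_device(void)
{
    uint8_t iak_pub[128];
    size_t iak_pub_len = 0;
-
    /* If no IAK exists yet, this first call triggers self-generation
     * using the device's TRNG; the private key never leaves the
     * device. */
    int status = export_iak_public_key(iak_pub, sizeof(iak_pub),
                                       &iak_pub_len);
    if (status != 0)
        return status;
-
    /* Factory tooling would now store the public key in the central
     * database of trusted devices, keyed by instance ID (the hash of
     * this public key). */
    return 0;
}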
Testing the Attestation Service
The following CppUTest based test suites are available for attestation service testing. All component and service level tests may be run on a real target device or as part of a natively built PC binary.
Component-Level Test Suites
Test suites included in deployments of component-test:
Test Suite | Coverage | File Location
---|---|---
TcgEventLogTests | Tests decoding and iterator access to a TCG event log. | service/attestation/claims/sources/event_log/test
AttestationReporterTests | Checks the contents and signing of a generated attestation report. | service/attestation/test/component
Service-Level Test Suites
Test suites included in deployments of ts-service-test. Test cases act as conventional service clients:
Test Suite | Coverage | File Location
---|---|---
AttestationServiceTests | Different attestation token request scenarios. | service/attestation/test/service
AttestationProvisioningTests | Tests provisioning flows and checks defence against misuse of provisioning operations. | service/attestation/test/service
Environment Tests
When deployed within a secure partition, the attestation SP relies on access to externally provided information such as the TPM event log. Tests have been added to the env_test SP deployment to check that features that the attestation SP relies on are working as expected. Tests included in the env_test SP deployment may be invoked from Linux user-space using the ts-remote-test/arm-linux deployment.
Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Crypto Service
Overview
The Crypto service provides a rich set of cryptographic operations with the backing of a private key store. Clients identify keys using opaque key handles, enabling cryptographic operations to be performed without exposing key values beyond the boundary of the service’s secure processing environment. This pattern underpins the security guarantees offered by the Crypto service.
The set of supported operations is aligned to the PSA Crypto API. C API functions are invoked by clients using the Crypto service access protocol. All types and values defined by the PSA Crypto C API are projected by the Crypto access protocol. The one-to-one mapping between the C API and Crypto access protocol allows developers to use PSA Crypto documentation and examples to understand details of the protocol.
Supported operations fall into the following categories:
Key lifetime management
Message signing and signature verification
Asymmetric encryption/decryption
Random number generation
Service Provider Implementation
The default crypto service provider uses the Mbed Crypto library to implement backend operations. The following diagram illustrates the component dependencies in the crypto service provider implementation (note that there are many more handlers than illustrated):
The packages illustrated reflect the partitioning of the code into separate directories. Functionality is partitioned as follows:
Crypto Provider
Implements the set of handlers that map incoming RPC call requests to PSA Crypto API function calls. A separate handler function exists for each operation supported by the service.
Crypto Serializer
Incoming call request parameters are de-serialized and response parameters serialized by a serializer. The trusted services framework allows for the use of alternative serializers to support different parameter encoding schemes.
Mbed Crypto
All cryptographic operations are handled by an instance of the Mbed Crypto library. The library is built with a specific configuration that creates dependencies on the following:
PSA ITS API for persistent key storage
External entropy source
Secure Storage
Persistent storage of keys is handled by an instance of the Secure Storage service. The service is accessed via a client that presents the PSA ITS API at its upper edge. This is needed for compatibility with Mbed Crypto. As long as it meets security requirements, any Secure Storage service provider may be used. An RPC session between the Crypto and Secure Storage service providers is established during initialization and is maintained for the lifetime of the Crypto service provider.
Entropy Source
Certain cryptographic operations, such as key generation, require use of a cryptographically secure random number generator. To allow a hardware TRNG to be used, the Mbed Crypto library is configured to use an externally provided entropy source. Any deployment of the service provider must include an implementation of the following function:
int mbedtls_hardware_poll(void *data, unsigned char *output, size_t len, size_t *olen)
For production deployments, an implementation of this function should be provided that obtains the requested bytes of entropy from a suitable source. To allow the Crypto service to be used where no hardware backed implementation is available, a software only implementation is provided.
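As a minimal sketch, a platform implementation might poll a memory mapped TRNG. The register address and layout below are hypothetical platform assumptions; a real implementation reads whatever entropy source the platform provides:
#include <stddef.h>
#include <stdint.h>

/* Hypothetical TRNG output register - platform specific assumption */
#define TRNG_OUTPUT_REG ((volatile uint32_t *)0x50001000)

int mbedtls_hardware_poll(void *data, unsigned char *output, size_t len, size_t *olen)
{
    (void)data; /* unused context pointer */

    for (size_t i = 0; i < len; i++)
        output[i] = (unsigned char)(*TRNG_OUTPUT_REG);

    *olen = len;
    return 0; /* report success to the Mbed TLS entropy collector */
}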
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Secure Storage Service
Overview
The Secure Storage service provides a generic persistent object store for valuable assets such as cryptographic keys. The confidentiality and integrity of stored data is typically achieved using keys that are bound to the device. The backend object store can be implemented in different ways, depending on available hardware such as:
On-SoC secure world peripherals such as NV counters.
A hardware unique key stored in OTP.
Internal flash (on-die or in package).
On-SoC crypto island with persistent storage.
RPMB partition in an external eMMC chip.
The secure storage service provider architecture offers flexibility to use alternative backend storage implementations to suit the available hardware.
Service Access Protocol
A client accesses any instance of the Secure Storage service using a common secure storage access protocol. Although multiple secure storage service instances may exist on a device, they are all accessed using the same access protocol. By standardizing on a common protocol, client applications maintain compatibility with any secure storage provider instance.
The protocol definition lives here:
protocols/service/secure_storage
PSA Storage Classes
Backend storage implementations that rely on external components, such as a flash chip, will require security measures that are not necessarily needed when on-chip or in-package storage is used. The PSA Storage API specification introduces the storage classes Protected and Internal Trusted to distinguish between externally and internally provided storage. These storage class designations are used for naming secure storage service instances. For example, the secure storage deployment that uses an RPMB backend is referred to as Protected Storage. The two storage classes have the following characteristics. Both classes of storage are required to support the notion of data ownership and to implement access control based on policy set by the owner.
Internal Trusted Storage
Internal trusted storage uses isolated or shielded locations for storage. Example storage backends could be on-die or in-package flash memory that is inherently secure. Alternatively, storage may be delegated to an on-die secure enclave that offers equivalent security properties. An external storage device may also be used, as long as there is a cryptographic binding between the owning secure partition and the stored data that prevents unauthorized access to the storage device.
To provide a persistent store for fundamental objects such as the device ID and trust anchor certificates, access control based on the secure lifecycle state should be possible, supporting access policies such as read/write during manufacture but read-only in all other lifecycle states.
Protected Storage
Protected storage uses an external memory device for persistent storage. To meet PSA security goals, the following protection measures should exist:
Privacy and integrity protection to prevent data access and modification by an unauthorized agent.
Replay protection to prevent the current set of stored data being replaced by an old set.
Common implementation options for a protected store are:
RPMB partition in an eMMC device. Access to the device is brokered by a normal-world agent such as tee-supplicant.
Dedicated serial flash device with secure-world only access.
Normal-world filesystem for backend storage. Data is encrypted and integrity protected in the secure-world.
PSA Storage C API
For client application developers who wish to use the PSA Storage API to access secure storage, two storage frontends are available: one that implements the Protected Storage API and another that implements the Internal Trusted Storage API.
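For example, a client using the Internal Trusted Storage frontend calls the standard PSA ITS C API. A minimal sketch, assuming the PSA storage headers from the client bindings are available:
#include <psa/internal_trusted_storage.h>
#include <stdint.h>
#include <stddef.h>

psa_status_t store_and_reload(void)
{
    const psa_storage_uid_t uid = 0x100;  /* example object identifier */
    const uint8_t secret[] = { 0xde, 0xad, 0xbe, 0xef };
    uint8_t buf[sizeof(secret)];
    size_t read_len;
    psa_status_t status;

    /* Create (or replace) the object in internal trusted storage */
    status = psa_its_set(uid, sizeof(secret), secret, PSA_STORAGE_FLAG_NONE);
    if (status != PSA_SUCCESS) return status;

    /* Read the object back, starting from offset 0 */
    return psa_its_get(uid, 0, sizeof(buf), buf, &read_len);
}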
Storage Frontend and Backend Separation
For flexibility, secure storage components are separated between frontend and backend. All storage backends implement a common public interface and may be used with any storage frontend. A storage frontend presents an interface that suits a particular type of consumer. The following class diagram illustrates how a storage frontend is decoupled from any concrete storage backend through the use of an abstract storage backend interface.
Some example storage frontends:
Secure storage service provider - provides access using the secure storage access protocol.
ITS frontend - provides secure storage access via PSA Internal Trusted Storage C API
PS frontend - provides secure storage access via PSA Protected Storage C API
Some example storage backends:
RPMB storage backend
Secure enclave storage backend
Normal-world filesystem backend
Secure storage service client
Components related to storage frontends and backends live under the following TS project directories:
components/service/secure_storage/frontend
components/service/secure_storage/backend
Storage Frontend and Backend Responsibilities
A storage frontend is responsible for presenting an interface that is suitable for a particular type of consumer. For example, the Mbed TLS library depends on the PSA Internal Trusted Storage C API for accessing persistent storage. The ITS frontend provides an implementation of this API at its upper edge. Where appropriate, a storage frontend will be responsible for sanitizing input parameters.
A storage backend is responsible for:
Realizing the common storage backend interface.
Implementing per object access control based on the provided client ID. The client ID associated with the creator of an object is treated as the object owner.
Providing persistent storage with appropriate security and robustness properties.
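The following is a minimal sketch of how such an abstract backend interface can be expressed in C using function pointers; the names and signatures are illustrative assumptions rather than the project's actual storage_backend definition:
/* Illustrative abstract storage backend interface (names are assumptions) */
#include <stdint.h>
#include <stddef.h>

struct storage_backend {
    void *context;   /* backend specific state */

    /* client_id identifies the caller; the creator becomes the object owner */
    int (*set)(void *context, uint32_t client_id, uint64_t uid,
               const void *data, size_t data_len);
    int (*get)(void *context, uint32_t client_id, uint64_t uid,
               void *data, size_t data_size, size_t *data_len);
    int (*remove)(void *context, uint32_t client_id, uint64_t uid);
};
A storage frontend holds only a pointer to this interface, so any backend that realizes it may be swapped in when the frontend is constructed.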
Storage Factory
To decouple generic code from environment and platform specific code, a storage factory interface is defined that provides a common interface for constructing storage backends. A concrete storage factory may use environment specific methods and configuration to construct a suitable storage backend. This allows new storage backends to be added without impacting service provider implementations. The factory method uses PSA storage classifications to allow a service provider to specify the security characteristics of the backend. How those security characteristics are realized will depend on the secure processing environment and platform.
A concrete storage factory may exploit any of the following to influence how the storage backend is constructed:
Environment and platform specific factory component used in deployment
Runtime configuration e.g. from Device Tree
The PSA storage classification specified by the SP initialization code.
Concrete storage factory components live under the following TS project directory:
components/service/secure_storage/factory
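As a sketch of the factory idea (the enum and function names below are illustrative assumptions, not the project's actual factory interface):
/* Illustrative storage factory sketch - names are assumptions */
enum storage_security_class {
    STORAGE_CLASS_INTERNAL_TRUSTED,
    STORAGE_CLASS_PROTECTED
};

struct storage_backend; /* abstract backend interface, as sketched above */

/* A concrete, platform specific implementation of this function constructs
 * a backend that meets the requested PSA storage class. */
struct storage_backend *storage_factory_create(enum storage_security_class sc);
SP initialization code might then request, for example, a protected-class backend and pass the returned pointer to the service provider, keeping the provider code free of platform specifics.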
Storage Frontend/Backend Combinations
The following storage frontend/backend combinations are used in different deployments.
Persistent Key Store for Crypto Service Provider
The Crypto service provider uses the Mbed Crypto portion of Mbed TLS to implement crypto operations. Persistent keys are stored via the PSA Internal Trusted Storage C API. In the opteesp deployment of the Crypto service provider, a storage client backend is used that accesses a secure store provided by a separate secure partition. The following deployment diagram illustrates the storage frontend/backend combination used:
Proxy for OP-TEE Provided Storage
When service providers are deployed in secure partitions running under OP-TEE, access to OP-TEE provided secure storage is possible via an S-EL1 SP that hosts a secure storage provider instance. The following deployment diagram illustrates how secure storage access is brokered by an S-EL0 proxy:
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
UEFI SMM Services
The Trusted Services project provides support for UEFI System Management Mode (SMM) services via the SMM Gateway secure partition. The SMM Gateway adopts the API Gateway design pattern, popular in microservices architecture. The pattern decouples clients from backend service providers using an API gateway that presents a domain specific interface to clients while delegating operations to a set of backend microservices. An API gateway will typically use multiple backend services and may perform protocol translation while presenting a single service entry point for clients. The SMM Gateway works in a similar manner - clients access SMM services using standard SMM protocol messages, carried by an RPC mechanism. Service requests are forwarded by the SMM Gateway to backend service providers for operations such as secure persistent storage and signature verification.
SMM Gateway is intended to be used on non-EDK2 platforms as an alternative to the EDK2 StandaloneMM (StMM) component. The current SMM Gateway version only supports the SMM Variable service. Additional SMM service providers may be added to SMM Gateway if required. By deliberately limiting functionality and exploiting backend services, the SMM Gateway SP can be significantly lighter-weight than StMM. This option is intended to be used on more resource constrained devices that tend to use u-boot. There is of course the possibility that other SMM services will need to be supported in the future. In such cases, a judgement should be made as to whether StMM should be used rather than extending the SP.
SMM Variable Service
Overview
UEFI Variable support is provided by the smm_variable service provider component. This service provider is structured in the same way as other service providers within the TS project. Features of this component are:
Source file location:
components/service/smm_variable
Public interface definitions:
protocols/service/smm_variable
Can be used with any RPC layer - not tied to MM Communicate RPC.
Volatile and non-volatile storage is accessed via instances of the common storage_backend interface.
The smm-gateway/opteesp deployment integrates the smm_variable service provider with the following:
An MM Communicate based RPC endpoint.
A mock_store instance for volatile variables.
A secure_storage_client for non-volatile variables.
During SP initialization, the smm-gateway uses pre-configured information to discover a backend secure storage SP for NV storage.
The following diagram illustrates how the smm_variable service provider is integrated into the smm-gateway.
Because the smm_variable service provider is independent of any particular environment, alternative deployments are possible, e.g.:
smm_variable service provider running within a GP TA with storage off-loaded to the GP TEE Internal API.
smm_variable service provider running within a secure enclave with its own internal flash storage.
Supported Functions
The smm_variable service provider supports the following functions:
SMM Variable Function | Purpose | Backend service interaction
---|---|---
SMM_VARIABLE_FUNCTION_GET_VARIABLE | Get variable data identified by GUID/name. | Query index and get object from appropriate storage backend.
SMM_VARIABLE_FUNCTION_GET_NEXT_VARIABLE_NAME | Called multiple times to enumerate stored variables. | Find variable in index and return next.
SMM_VARIABLE_FUNCTION_SET_VARIABLE | Adds a new variable or updates an existing one. | Sets object in storage backend and, if necessary, updates index and syncs to storage.
SMM_VARIABLE_FUNCTION_QUERY_VARIABLE_INFO | Returns information about the variable store. | Iterates over stored variables to determine space used.
SMM_VARIABLE_FUNCTION_EXIT_BOOT_SERVICE | Called by OS when boot phase is complete. | Updates view of runtime state held by smm_variable service provider. State variable used when implementing state dependent access control.
SMM_VARIABLE_FUNCTION_VAR_CHECK_VARIABLE_PROPERTY_SET | Set constraints that are checked on the SetVariable operation. Allows a platform to set check policy. | Variable index holds a variable check constraints object for each variable. This is updated by this function.
SMM_VARIABLE_FUNCTION_VAR_CHECK_VARIABLE_PROPERTY_GET | Get the variable check constraints. | Reads the variable check constraints object.
SMM_VARIABLE_FUNCTION_GET_PAYLOAD_SIZE | Returns the maximum variable data size, excluding any auth header. | Considers size constraints imposed by backend stores and RPC response payload constraints.
Supported Variable Attributes
The following variable attributes are supported:
SMM Variable Attribute | Support | Comment
---|---|---
EFI_VARIABLE_NON_VOLATILE | yes | Determines which storage backend is used.
EFI_VARIABLE_BOOTSERVICE_ACCESS | yes | Boot service access controlled by smm_variable service provider.
EFI_VARIABLE_RUNTIME_ACCESS | yes | Runtime access controlled by smm_variable service provider.
EFI_VARIABLE_HARDWARE_ERROR_RECORD | no |
EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS | no |
EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS | not yet | Will be needed for secure boot support.
EFI_VARIABLE_APPEND_WRITE | yes | Implemented by overwriting entire variable data.
SMM Variable Tests
The following test components exist for the SMM Variable service:
Test Component | Description | Included in deployments
---|---|---
| Component tests for the variable_index and variable_store backend components. Can be run in a native PC environment. |
| End-to-end service level tests that call service operations from the perspective of a client. Can be run in a native PC environment or on the Arm target platform. | deployments/ts-service-test/linux-pc, deployments/uefi-test/arm-linux
SMM Gateway Build Configuration
The smm-gateway SP image may be built using the default configuration parameters defined within relevant source files. In practice, it is likely that at least some configuration values will need to be overridden. The following table lists build-time configuration parameters that may be overridden by global C pre-processor defines.
Config define | Usage | File | Default value
---|---|---|---
SMM_GATEWAY_MAX_UEFI_VARIABLES | Maximum number of variables | | 40
SMM_GATEWAY_NV_STORE_SN | The service ID for the backend NV variable store | | Protected Storage SP
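For example, a build might override a default by passing a global define on the compiler command line. The invocation below is illustrative only, since the exact option names depend on the deployment's build scripts:
cmake -S deployments/smm-gateway/opteesp -B build -DCMAKE_C_FLAGS="-DSMM_GATEWAY_MAX_UEFI_VARIABLES=80"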
MM Communicate RPC Layer
To maintain compatibility with existing SMM service clients, an MM Communicate based RPC layer has been developed that uses the same ‘carveout’ buffer scheme as StMM. When SMM Gateway is used instead of StMM, existing SMM variable clients should interoperate seamlessly. The MM Communicate RPC components implement the standard TS RPC interfaces and can be used as a general purpose RPC for calls from normal world to secure world. The following MM Communicate RPC components have been added:
components/rpc/mm_communicate/endpoint/sp - an RPC endpoint that handles FF-A direct calls with MM Communicate and SMM messages carried in a shared ‘carveout’ buffer. Call requests are demultiplexed to the appropriate service interface based on the service GUID carried in the MM Communicate header. Suitable for use in SP deployments.
components/rpc/mm_communicate/caller/linux - an RPC caller that calls service operations associated with the destination service interface from Linux user-space. Uses the MM Communicate protocol, sent over FF-A using the Debug FFA kernel driver. Service level tests that run against the SMM Gateway use this RPC caller for invoking SMM service operations.
The following register mapping is assumed for FFA based direct calls to an SP that handles the MM Communicate RPC protocol:
Registers | FF-A layer | MM_COMMUNICATE Request | MM_COMMUNICATE Response
---|---|---|---
W0 | Function ID | FFA_MSG_SEND_DIRECT_REQ (0x8400006F/0xC400006F) | FFA_MSG_SEND_DIRECT_RESP (0x84000070/0xC4000070)
W1 | Source/Destination ID | Source/Destination ID | Source/Destination ID
W2/X2 | Reserved | 0x00000000 | 0x00000000
W3/X3 | Parameter[0] | Address of the MM communication buffer | ARM_SVC_ID_SP_EVENT_COMPLETE (0x84000061/0xC4000061)
W4/X4 | Parameter[1] | Size of the MM communication buffer | SUCCESS/[error code]
W5/X5 | Parameter[2] | 0x00000000 | 0x00000000
W6/X6 | Parameter[3] | 0x00000000 | 0x00000000
W7/X7 | Parameter[4] | 0x00000000 | 0x00000000
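To illustrate the mapping, a caller might populate the request registers as follows. The ffa_call() helper and argument struct are assumptions for illustration, not an actual kernel or libsp API:
/* Illustrative sketch of issuing an MM_COMMUNICATE request over an FF-A
 * direct message, following the register mapping above. */
#include <stdint.h>

struct ffa_direct_msg_args { uint64_t x[8]; };

extern void ffa_call(struct ffa_direct_msg_args *args); /* assumed helper */

void mm_communicate(uint16_t src_id, uint16_t dest_id,
                    uint64_t buf_addr, uint64_t buf_size)
{
    struct ffa_direct_msg_args args = { 0 };

    args.x[0] = 0xC400006F;                         /* FFA_MSG_SEND_DIRECT_REQ (64-bit) */
    args.x[1] = ((uint32_t)src_id << 16) | dest_id; /* source [31:16], destination [15:0] */
    args.x[3] = buf_addr;                           /* address of MM communication buffer */
    args.x[4] = buf_size;                           /* size of MM communication buffer */

    ffa_call(&args);                                /* response is returned in args */
}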
Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Environments
Depending on Arm architecture and SoC capabilities, there may be different options for implementing hardware-backed isolation for protecting security sensitive workloads and their assets. The Trusted Services project decouples service related components from any particular environment, enabling services to be deployed in different environments. This section provides information about supported secure processing environments.
Secure Partitions
Secure Partitions are defined by the FF-A standard
Secure partitions are isolated processing environments managed by a Secure Partition Manager (SPM). An SPM performs the role of hypervisor for the Arm Secure State and is responsible for managing SP initialization, memory management and messaging. The Arm Firmware Framework for A-Profile (FF-A) specification (FF-A Specification) defines the set of firmware features that enable the use of secure partitions for protecting sensitive workloads.
The Armv8.4 architecture introduces the virtualization extension in the Secure state. For silicon based on Armv8.4 (or above) that implements the Secure-EL2 extension, the Hafnium Project provides a reference SPMC implementation. For pre-Armv8.4 silicon, the OP-TEE Project provides an alternative reference SPMC implementation.
Within the Trusted Services, the environments realized by the two reference SPM implementations are named as follows:
hfsp - for service deployment under Hafnium
opteesp - for service deployment under OP-TEE
Firmware Framework for Armv8-A
The FF-A specification defines a software architecture that isolates Secure world firmware images from each other. The specification defines interfaces that standardize communication between various images. This includes communication between images in the Secure world and Normal world.
The Trusted Services project includes service providers that may be deployed within FF-A S-EL0 secure partitions. This includes service providers that form the security foundations needed for meeting PSA Certified requirements. Other secure partitions are available such as the SMM Gateway to provide Secure world backing for UEFI services.
The component libsp captures helpful abstractions to enable easy development of FF-A compliant S-EL0 SPs. S-EL0 SPs are SPMC agnostic and can be used with an SPMC running in any higher secure exception level (S-EL1 - S-EL3).
Copyright (c) 2021-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
S-EL0 Secure Partitions under OP-TEE
Running user-space programs on FVP
This page explains how to load and run user space programs on a Linux image running in FVP simulation. The loaded programs may use any trusted services that are available as part of the image firmware.
To prepare and run an image that includes trusted services running in S-EL0 secure partitions under OP-TEE see: Deploying trusted services in S-EL0 Secure Partitions under OP-TEE
The example assumes that the FVP model has been installed in the following directory relative to the OP-TEE build directory:
../Base_RevC_AEMvA_pkg/models/Linux64_GCC-9.3
Running service level tests
Most test and demo applications are integrated into the OP-TEE build flow and can be built using the makefiles in the op-tee/build repository. To build all such binaries, build the ffa-test-all target. For available targets please refer to fvp-psa-sp.mk. As an example, to build the ts-service-test application, execute the following command from the root of the workspace:
make -C build ffa-ts-service-test
The executable includes service level test cases that exercise trusted services via their standard interfaces. Test cases use libts for locating services and establishing RPC sessions. ts-service-test provides a useful reference for understanding how libts may be used for accessing trusted services.
Build output will be copied to out/ts-install.
To build the applications without using the op-tee/build files, refer to the instructions here: Build Instructions
To start the FVP, from the root directory of the workspace, enter:
FVP_PATH=../Base_RevC_AEMvA_pkg/models/Linux64_GCC-9.3 make -C build run-only
Once it boots to the login prompt, log in as root and from the FVP terminal, enter:
# Enter the mount target for the shared directory
cd /mnt/host
# Install the shared library and executables
cp -vat /usr out/ts-install/arm-linux/lib out/ts-install/arm-linux/bin
# Load the kernel modules
out/linux-arm-ffa-tee/load_module.sh
out/linux-arm-ffa-user/load_module.sh
# Run the test application
ts-service-test -v
Use the same flow for other user-space programs. Check the output of the cp command to see the executables copied under /usr/bin.
If all is well, you should see something like:
TEST(PsServiceTests, createAndSetExtended) - 0 ms
TEST(PsServiceTests, createAndSet) - 0 ms
TEST(PsServiceTests, storeNewItem) - 0 ms
TEST(ItsServiceTests, storeNewItem) - 0 ms
TEST(AttestationProvisioningTests, provisionedIak) - 1 ms
TEST(AttestationProvisioningTests, selfGeneratedIak) - 1 ms
TEST(AttestationServiceTests, repeatedOperation) - 75 ms
TEST(AttestationServiceTests, invalidChallengeLen) - 0 ms
TEST(AttestationServiceTests, checkTokenSize) - 2 ms
TEST(CryptoKeyDerivationServicePackedcTests, deriveAbort) - 0 ms
TEST(CryptoKeyDerivationServicePackedcTests, hkdfDeriveBytes) - 0 ms
TEST(CryptoKeyDerivationServicePackedcTests, hkdfDeriveKey) - 0 ms
TEST(CryptoMacServicePackedcTests, macAbort) - 0 ms
TEST(CryptoMacServicePackedcTests, signAndVerify) - 1 ms
TEST(CryptoCipherServicePackedcTests, cipherAbort) - 0 ms
TEST(CryptoCipherServicePackedcTests, encryptDecryptRoundtrip) - 0 ms
TEST(CryptoHashServicePackedcTests, hashAbort) - 0 ms
TEST(CryptoHashServicePackedcTests, hashAndVerify) - 0 ms
TEST(CryptoHashServicePackedcTests, calculateHash) - 0 ms
TEST(CryptoServicePackedcTests, generateRandomNumbers) - 0 ms
TEST(CryptoServicePackedcTests, asymEncryptDecryptWithSalt) - 14 ms
TEST(CryptoServicePackedcTests, asymEncryptDecrypt) - 1 ms
TEST(CryptoServicePackedcTests, signAndVerifyEat) - 4 ms
TEST(CryptoServicePackedcTests, signAndVerifyMessage) - 4 ms
TEST(CryptoServicePackedcTests, signAndVerifyHash) - 4 ms
TEST(CryptoServicePackedcTests, exportAndImportKeyPair) - 1 ms
TEST(CryptoServicePackedcTests, exportPublicKey) - 1 ms
TEST(CryptoServicePackedcTests, purgeKey) - 0 ms
TEST(CryptoServicePackedcTests, copyKey) - 1 ms
TEST(CryptoServicePackedcTests, generatePersistentKeys) - 1 ms
TEST(CryptoServicePackedcTests, generateVolatileKeys) - 0 ms
TEST(CryptoServiceProtobufTests, generateRandomNumbers) - 1 ms
TEST(CryptoServiceProtobufTests, asymEncryptDecryptWithSalt) - 15 ms
TEST(CryptoServiceProtobufTests, asymEncryptDecrypt) - 1 ms
TEST(CryptoServiceProtobufTests, signAndVerifyMessage) - 4 ms
TEST(CryptoServiceProtobufTests, signAndVerifyHash) - 4 ms
TEST(CryptoServiceProtobufTests, exportAndImportKeyPair) - 1 ms
TEST(CryptoServiceProtobufTests, exportPublicKey) - 0 ms
TEST(CryptoServiceProtobufTests, generatePersistentKeys) - 1 ms
TEST(CryptoServiceProtobufTests, generateVolatileKeys) - 1 ms
TEST(CryptoServiceLimitTests, volatileRsaKeyPairLimit) - 99 ms
TEST(CryptoServiceLimitTests, volatileEccKeyPairLimit) - 22 ms
TEST(DiscoveryServiceTests, checkServiceInfo) - 0 ms
TEST(SmmVariableAttackTests, getCheckPropertyWithMaxSizeName) - 0 ms
TEST(SmmVariableAttackTests, getCheckPropertyWithOversizeName) - 0 ms
TEST(SmmVariableAttackTests, setCheckPropertyWithMaxSizeName) - 0 ms
TEST(SmmVariableAttackTests, setCheckPropertyWithOversizeName) - 0 ms
TEST(SmmVariableAttackTests, enumerateWithSizeMaxNameSize) - 0 ms
TEST(SmmVariableAttackTests, enumerateWithOversizeName) - 0 ms
TEST(SmmVariableAttackTests, setAndGetWithSizeMaxNameSize) - 0 ms
TEST(SmmVariableAttackTests, setAndGetWithOversizeName) - 0 ms
TEST(SmmVariableAttackTests, setWithSizeMaxNameSize) - 0 ms
TEST(SmmVariableAttackTests, setWithOversizeName) - 0 ms
TEST(SmmVariableAttackTests, setWithSizeMaxDataSize) - 0 ms
TEST(SmmVariableAttackTests, setWithOversizeData) - 0 ms
TEST(SmmVariableServiceTests, checkMaxVariablePayload) - 0 ms
TEST(SmmVariableServiceTests, setSizeConstraint) - 0 ms
TEST(SmmVariableServiceTests, enumerateStoreContents) - 0 ms
TEST(SmmVariableServiceTests, getVarSizeNv) - 0 ms
TEST(SmmVariableServiceTests, getVarSize) - 0 ms
TEST(SmmVariableServiceTests, setAndGetNv) - 1 ms
TEST(SmmVariableServiceTests, setAndGet) - 0 ms
TEST(TestRunnerServiceTests, runSpecificTest) - 0 ms
TEST(TestRunnerServiceTests, runConfigTests) - 0 ms
TEST(TestRunnerServiceTests, listPlatformTests) - 0 ms
TEST(TestRunnerServiceTests, runAllTests) - 0 ms
TEST(TestRunnerServiceTests, listAllTests) - 0 ms
OK (67 tests, 67 ran, 977 checks, 0 ignored, 0 filtered out, 261 ms)
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Deploying trusted services in S-EL0 Secure Partitions under OP-TEE
Trusted services built for the opteesp environment may be deployed to run within S-EL0 secure partitions, managed by OP-TEE. The current implementation of the OP-TEE SPMC supports booting SPs embedded into the OP-TEE OS binary (similar to early-TAs) or from the FIP.
Tool prerequisites and general build instruction for OP-TEE are described here: https://optee.readthedocs.io/en/latest/building/gits/build.html
Download page for Arm Fixed Virtual Platforms (FVP): https://developer.arm.com/tools-and-software/simulation-models/fixed-virtual-platforms
Embedding SP images into the OP-TEE OS image
The set of SP images to include in the built OP-TEE OS image is specified to the OP-TEE OS build by the SP_PATHS make variable. The SP_PATHS variable should be assigned a string containing a space separated list of file paths for each SP image file to include. SP images that need to be deployed from the Trusted Services project will be located in the install directory specified when the SP images were built, i.e.:
<CMAKE_INSTALL_PREFIX>/opteesp/bin
The following example illustrates a setting of the SP_PATHS variable to deploy the Secure Storage SP and Crypto SP:
SP_PATHS="ts-install-dir/opteesp/bin/dc1eef48-b17a-4ccf-ac8b-dfcff7711b14.stripped.elf \
ts-install-dir/opteesp/bin/d9df52d5-16a2-4bb2-9aa4-d26d3b84e8c0.stripped.elf"
Reference OP-TEE build with PSA RoT Services
To provide an example integration of OP-TEE with a set of trusted services, a makefile called fvp-psa-sp.mk is included in the OP-TEE build repository that builds OP-TEE OS with a set of SP images. SP images are built using the standard trusted services build flow and are automatically embedded into the optee_os build using the mechanism described above.
A bootable Linux image is created that is intended to run on the Arm AEM FVP virtual platform. The built image includes user space programs that may be used to test and demonstrate the deployed trusted services.
To help set up the workspace, a manifest file called fvp-ts.xml is included in the OP-TEE manifests repository. This may be used with the repo tool to manage the set of git repositories.
Having created a new directory for the workspace, the required set of git repositories can be cloned and fetched using:
repo init -u https://github.com/OP-TEE/manifest.git -b master -m fvp-ts.xml
repo sync
To build the bootable image that includes OP-TEE and the set of secure partition images that hold the PSA RoT services, use the following (from the root directory of the workspace):
make -C build
This will take many tens of minutes to complete.
The fvp makefile includes a run and run-only target which can be used to start the FVP model and boot the built image. The example assumes that the FVP model has been installed in the following directory relative to the OP-TEE build directory:
../Base_RevC_AEMvA_pkg/models/Linux64_GCC-9.3
To boot the built image on FVP without building, use:
FVP_PATH=../Base_RevC_AEMvA_pkg/models/Linux64_GCC-9.3 make run-only
For information on running user space programs on FVP, see:
Running user-space programs on FVP
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
S-EL1 Secure Partitions under Hafnium
Note: The Arm Total Compute solution is the current reference for running SPs that meet PSA Certified requirements under Hafnium. The ‘hfsp_shim’ environment is used for deploying service providers under Hafnium. Files related to this environment are still in flux and have not yet been upstreamed to TS. See Total Compute
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
libsp
libsp is intended to be used from S-EL0 secure partitions. It contains all the necessary features for communicating with other components using the FF-A interfaces.
The implementation is split into multiple layers for easy maintainability and usage. The structure and a short description of the components of the library are given below.
For detailed information about the FF-A interfaces please check the FF-A specification
The API reference documentation is included in the code.
The components of the following diagram are illustrated as classes but, as the library is written in C, they are not real classes. Their purpose is to describe the interfaces and dependencies of the library’s components.
SP layer
The SP layer provides a convenient high level interface for accessing FF-A features. The layer has multiple components organized around the groups of FF-A interfaces.
SP RXTX
The FF-A calls may utilize a pair of buffers, called RXTX buffers, for passing data to the SPM that won’t fit into registers. These buffers should be set up during the initialization phase of the SP; subsequent FF-A calls can then use them in discovery or memory management calls.
The SP RXTX component provides a high level interface for registering these buffers. It also enables other components of the SP layer to use these buffers during FF-A calls without the need to manually pass the buffers to each function.
SP memory management
The FF-A memory management interfaces involve multiple steps for setting up a memory transaction. This component gives the user a set of functions for performing these transactions in a simple way. The supported memory transactions are listed below:
Donate
Lend
Share
Retrieve
Relinquish
Reclaim
FF-A layer
The FF-A layer provides functions and types for accessing FF-A features through a C API. This low level API gives full control of the FF-A call parameters.
FF-A API
The FF-A API provides wrappers for the FF-A interfaces. These interfaces are fitted into the following groups:
Setup and discovery interfaces
CPU cycle management interfaces
Messaging interfaces
Memory management interfaces
The functions of this unit give raw control of all the parameters of the FF-A calls, besides providing basic validity checks for the parameters.
A couple of FF-A interfaces can receive a response indicating an interrupt that is meant to be handled by the SP. All these functions call a common interrupt handler function which is declared in the FF-A API component. This interrupt handler should be implemented by the upper layers (in fact, it is implemented by the SP layer of libsp).
FF-A memory descriptors
FF-A defines memory descriptors to provide the SPM with information about the referenced memory areas. These are used by memory transactions like sharing, lending, etc. This information covers details such as instruction and data access rights, the endpoints of the transaction, the address and size of the area, and so on. Building and parsing memory transaction structures can be quite complex, and this is the complexity that this component addresses.
First of all, it provides a type for describing buffers in which the transaction descriptors are built. Using this type provides safety against buffer overflows during the transaction build and parse processes.
The transaction data consists of three main sections.
A transaction descriptor should be added where the memory region attributes are described.
Multiple memory access descriptors should be added which specify access for each receiver of the memory area.
Addresses and sizes of the memory regions can be added.
At this point the transaction data is ready to be passed to the SPM by invoking the suitable FF-A call.
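The following sketch illustrates the three-step build flow; all memdesc_* names and the constants are hypothetical, intended only to mirror the process described above, and are not libsp's actual API:
/* Hypothetical illustration of the three-step transaction build flow. */
#include <stdint.h>
#include <stddef.h>

#define MEM_ATTR_NORMAL_WB 0x2u   /* assumed attribute encoding */
#define MEM_ACCESS_RW      0x3u   /* assumed access rights encoding */

struct memdesc_buffer { uint8_t *base; size_t size; size_t used; };

/* assumed builder functions */
void memdesc_init(struct memdesc_buffer *buf, uint8_t *mem, size_t size);
void memdesc_add_transaction(struct memdesc_buffer *buf, uint32_t attributes);
void memdesc_add_access(struct memdesc_buffer *buf, uint16_t receiver, uint32_t rights);
void memdesc_add_region(struct memdesc_buffer *buf, const void *addr, size_t len);
int  memdesc_share(struct memdesc_buffer *buf); /* invokes the FF-A share call */

int share_region_with(uint16_t receiver_id, void *region, size_t length)
{
    static uint8_t desc_mem[256];
    struct memdesc_buffer buf;

    memdesc_init(&buf, desc_mem, sizeof(desc_mem));       /* bounded descriptor buffer */
    memdesc_add_transaction(&buf, MEM_ATTR_NORMAL_WB);    /* 1: memory region attributes */
    memdesc_add_access(&buf, receiver_id, MEM_ACCESS_RW); /* 2: per-receiver access */
    memdesc_add_region(&buf, region, length);             /* 3: address and size */

    return memdesc_share(&buf);                           /* hand over to the SPM */
}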
FF-A internal API
The lowest layer implemented in libsp is responsible for wrapping FF-A calls into the SVC conduit. In practice this means an escalation of the exception level and invoking the SVC handler of the SPM with the suitable parameters passed in registers.
Copyright (c) 2020-2021, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Security Model
Generic Threat Model
Threat modeling is a process to identify security requirements, pinpoint security threats and potential vulnerabilities, quantify threat and vulnerability criticality and prioritize remediation methods.
The next sections present the output of this process for a generic, use-case and service independent assessment.
Target evaluation
In this threat model, the target of evaluation is the set of S-EL0 SPs, part of the PSA RoT, hosting a “generalized” trusted service.
This evaluation is based on the following assumptions:
The implementation is based on the FF-A standard.
Trusted boot is enabled. This means an attacker can’t boot arbitrary images that are not approved by platform providers.
Each trusted service is running in an S-EL0 secure partition. Access to memory and hardware is controlled by the SPM based on the FF-A manifest or FF-A framework messages.
Components running at higher privilege levels than S-EL0 (i.e. the SPM) are inherently trusted.
Data flow diagram
The data flow diagram visualizes the connection between components and where the data flow crosses security boundaries.
Data flow | Description | In scope
---|---|---
DF1 | Trusted Service interacts with NWd client directly. | Yes
DF2 | Trusted Service interacts with NWd client through SPM. | Yes
DF3 | Trusted Services interact through SPM. | Yes
DF4 | Trusted Service logs debug information. | Yes
DF5 | Trusted Services interact directly. | Yes
DF6, DF7 | Trusted Service interacts with shared hardware. | Yes
DF8 | Trusted Service interacts with dedicated peripheral interface. | Yes
DF9, DF10 | Trusted Service interacts with shared, external hardware. | Yes
DF11 | Trusted Service interacts with dedicated, external hardware. | Yes
DF12 | NWd interacts with more privileged software. | No
DF13 | FF-A manifest and other data is handed over to a Trusted Service. | No
Trust boundaries
Trust boundary | Description
---|---
TB1 | Trust boundary between TEE and normal world.
TB2 | Trust boundary between higher privilege level SW and Trusted Services.
TB3, TB4 | Trust boundary between trusted services.
TB5 | Trust boundary to physically accessible external hardware.
Assets
The data flow diagram above identifies the following generalized assets.
Asset | Description
---|---
| Availability of a trusted service to clients.
| Code or code flow of a trusted service.
| Data that an attacker must not be able to tamper with. These include the device identity key, Initial Attestation Key, Protected Storage Key, UEFI variables, TPM event log, etc.
| Hardware that an attacker must not be able to tamper with. Examples are the control interface of the storage medium, true random number generator, crypto accelerator.
Attackers and threat agents
This section identifies the generalized stakeholders interacting with secure services.
Attacker/Threat agent | Description | In scope
---|---|---
| Client executing in the normal world. | Yes
| Client running in SWd. | Yes
| Components running at higher privilege level than the trusted service. | No
| Physical attacker using debug signals to access resources. | Yes
| Physical attacker having access to the external device communication bus and to the external flash communication bus using common hardware. | Yes
| Attackers who are able to use specialist hardware for attacks that require irreversible changes to the target system (e.g. “rewiring” a chip using a Focused Ion Beam (FIB) workstation). | No
Threat Priority
Threat priority calculation is based on Common Vulnerability Scoring System (CVSS) Version 3.1. The threat priority is represented by the Severity Rating Scale calculated from the CVSS score of each threat. The CVSS score is calculated using the Vulnerability Calculator.
For each threat, the priority and a link to a CVSS calculator capturing the calculator settings are listed.
Threat Types
In this threat model we categorize threats using the STRIDE threat analysis technique. In this technique a threat is categorized as one or more of these types: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service or Elevation of privilege.
ID | 1
Description | Information leak via debug logs. During development it is common practice to aid understanding of code execution by emitting debug logs.
Data flow | DF4
Asset(s) |
Threat Agent/Attacker |
Threat type |
Impact | Sensitive information may get to unauthorized people. Information can potentially help compromising the target or other systems.
Scoring/CVSS | Medium, 4.6 CVSS:3.1/AV:P/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
Mitigation | Log messages are put into “verbosity categories”. Release builds limit printed log messages to the “error” category.
Mitigation in place | yes
ID | 2
Description | An attacker can tamper with sensitive data and execute arbitrary code through a hardware-assisted debug interface.
Data flow | N/A
Asset(s) |
Threat Agent/Attacker |
Threat type |
Impact | Sensitive information may get to unauthorized people. Information can potentially help compromising the target or other systems. An attacker may modify sensitive data and alter device behavior and thus compromise the target or other systems.
Scoring/CVSS | Medium, 6.8 CVSS:3.1/AV:P/AC:H/PR:H/UI:R/S:C/C:H/I:H/A:H
Mitigation | Hardware platform specific means to disable or limit access to debug functionality.
Mitigation in place | yes
ID | 3
Description | An attacker can perform a denial-of-service attack by using a broken service call that causes the service to enter an unknown state. Secure and non-secure clients access a trusted service through FF-A calls. Malicious code can attempt to place the service into an inconsistent state by calling unimplemented calls or by passing invalid arguments.
Data flow | DF1, DF2, DF3, DF5
Asset(s) |
Threat Agent/Attacker |
Threat type |
Impact | The service or the whole system may temporarily or permanently enter an unusable state.
Scoring/CVSS | Medium, 6.8 CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:H
Mitigation | The service must validate all inputs before usage. Input validation shall be checked during code review and by testing.
Mitigation in place | yes
ID | 4
Description | Memory corruption due to memory overflows and lack of boundary checking when accessing resources. Allows an attacker to execute arbitrary code or modify memory content to change program flow.
Data flow | DF1, DF2, DF3, DF5
Asset(s) |
Threat Agent/Attacker |
Threat type |
Impact | The service or the whole system may temporarily or permanently enter an unusable state. Malicious code might be executed in the context of the compromised service. Leakage of sensitive data.
Scoring/CVSS |
Mitigation | The service must validate boundaries and sanity check incoming data. Validation shall be checked during code reviews and by testing.
Mitigation in place | yes
ID | 5
Description | External devices connected to the system store sensitive data. An attacker could eavesdrop on external signals.
Data flow | DF9, DF10, DF11
Asset(s) |
Threat agent/Attacker |
Threat type |
Impact | An attacker may get access to sensitive data, could tamper with sensitive data, or could attack the service using the external device by injecting malicious data, which could lead to malfunction or execution of malicious code.
Scoring/CVSS | Medium, 5.9 CVSS:3.1/AV:P/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:H
Mitigation | When designing the use case, storage services must be assessed to understand which protection types they can implement (integrity, authenticity, confidentiality, rollback protection). Sensitive data must be categorized and mapped to the storage service which can provide the needed protection. For example, integrity can be safeguarded by using checksums, authenticity by using digital signatures, confidentiality by using encryption, and rollback protection by using nonce values.
Mitigation in place | yes
ID | 6
Description | The state of external devices connected to the system might be modified by an attacker. This includes modifying signals, replacing the device, or modifying device content.
Data flow | DF9, DF10, DF11
Asset(s) |
Threat agent/Attacker |
Threat type |
Impact | An attacker could tamper with sensitive data, or could attack the system by injecting malicious data, which could lead to malfunction, execution of malicious code, or use of an old state with known vulnerabilities.
Scoring/CVSS |
Mitigation | When designing the use case, storage services must be assessed to understand which protection types they can implement (integrity, authenticity, confidentiality, rollback protection). Sensitive data must be categorized and mapped to the storage service which can provide the needed protection. For example, integrity can be safeguarded by using checksums, authenticity by using digital signatures, confidentiality by using encryption, and rollback protection by using hardware backed nonce values.
Mitigation in place | yes
ID | 7
Description | Invalid or conflicting access to shared hardware.
Data flow | DF6, DF7, DF9, DF10
Asset(s) |
Threat Agent/Attacker |
Threat type |
Impact | A trusted service relying on shared hardware might be compromised or misbehave if other stakeholders affect the shared hardware in unexpected ways.
Scoring/CVSS |
Mitigation | Access to peripherals must be limited to the smallest possible set of services. Ideally each peripheral should be dedicated to a single trusted service and sharing of peripherals should be avoided if possible. If sharing cannot be avoided, a strict handover process shall be implemented to allow proper context switches, where hardware state can be controlled.
Mitigation in place | yes
ID | 8
Description | Unauthenticated access to hardware. A trusted service relying on hardware might be compromised or misbehave if the hardware state is maliciously altered.
Data flow | DF6, DF7, DF9, DF10
Asset(s) |
Threat Agent/Attacker |
Threat type |
Impact | An attacker may get access to sensitive data or might make a trusted service or the system enter an unusable state by tampering with hardware peripherals.
Scoring/CVSS | Medium, 6.4 CVSS:3.1/AV:L/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H
Mitigation | Access to peripherals must be limited to the smallest possible set of services. Ideally each peripheral should be dedicated to a single trusted service, and sharing of peripherals should be avoided if possible. If sharing cannot be avoided, a strict handover process shall be implemented to allow proper context switches, where register values can be controlled.
Mitigation in place | yes
ID | 9
Description | Unauthenticated access to sensitive data.
Data flow | DF1, DF2, DF3, DF5
Asset(s) |
Threat Agent/Attacker |
Threat type |
Impact | A trusted service may manage data of multiple clients. Different clients shall not be able to access each other’s data unless in response to an explicit request.
Scoring/CVSS | Medium, 6.8 CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N
Mitigation | Trusted services must implement access control based on identification data provided by higher privileged components (i.e. the FF-A endpoint ID).
Mitigation in place | yes
Attachments
Source file of the data flow diagram. Please use yEd for editing: ./generic-data-flow.graphml
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
The security model of Trusted Services builds on the Platform Security Model v1.1 beta. For a concept level overview, please refer to that document.
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Deployments
In the context of the Trusted Services project, a deployment represents a build of an assembly of components that is intended to run within a specific environment. Some deployments may be built for different platforms using platform specific components if needed. The concept of a deployment is general purpose and can be applied to building a wide range of targets such as secure partition images, user-space tools, shared libraries and test executables.
Supported deployments are described on the following pages:
Secure Partition Images
Secure partition (SP) deployments are concerned with building SP images that can be loaded and run under a secure partition manager such as Hafnium or OP-TEE. SP images will usually include service provider components that expose a service interface that may be reached using FF-A messages. A set of SP images will be loaded and verified by device firmware to provide the required services.
The following deployments that create SP images are currently supported:
crypto
An instance of the crypto service provider is built into an SP image to perform cryptographic operations on behalf of clients running in different partitions. Backend crypto operations are implemented by the crypto library component of MbedTLS. This deployment provides the cryptographic facilities needed for PSA certification. For more information, see: Crypto Service.
Supported Environments |
External Dependencies |
attestation
An instance of the attestation service provider is built into an SP image to support remote attestation use-cases. The service provider obtains a trusted view of the boot state of device firmware from the TPM event log collected by the boot loader. This deployment provides the initial attestation facility needed for PSA certification. For more information, see: Attestation Service.
Supported Environments |
External Dependencies |
internal-trusted-storage & protected-storage
Two secure storage SP deployments are provided to allow different classes of storage to coexist on a device. Both deployments build an instance of the secure storage service provider with a storage backend. To allow different security trade-offs to be made and to support different hardware, a system integrator may configure which storage backend to use. Secure storage is a requirement for PSA certification. For more information, see: Secure Storage Service.
Supported Environments |
External Dependencies |
se-proxy
The se-proxy SP provides access to services hosted by a secure enclave (hence ‘se’). A secure enclave consists of a separate MCU, connected to the host via a secure communications channel. To protect access to the communication channel, the se-proxy SP is assigned exclusive access to the communication peripheral via device or memory regions defined in the SP manifest. The deployment integrates multiple service providers into the SP image. After performing access control, service requests are forwarded to the secure enclave.
The se-proxy deployment includes proxies for the following services:
Crypto
Attestation
Internal Trusted Storage
Protected Storage
Supported Environments |
External Dependencies |
smm-gateway
An instance of the smm-variable service provider is built into the smm-gateway SP image to provide secure world backing for UEFI SMM services. The smm-gateway SP provides a lightweight alternative to StMM. For more information, see: UEFI SMM Services.
Supported Environments |
External Dependencies |
env-test
An instance of the test runner service provider is built into an SP image to allow test cases to be run from within the SP isolated environment. The SP image also includes environment and platform specific test cases to allow access to FF-A services and platform hardware to be tested. The test runner service provider is intended to be used in conjunction with a client that coordinates which tests to run and collects test results.
Supported Environments |
External Dependencies |
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Test Executables
The Trusted Services project maintains a number of deployments concerned with test. Although there may be some coverage overlap between different deployments, in general, the built test executables corresponding to different deployments serve different purposes. Most test executables may be run either on target hardware or a development PC as a native application. For more information, see: Running Tests.
The following test deployments are currently supported:
component-test
The component-test deployment combines a large set of tests and components into a monolithic image that may be run as a userspace application. The CppUtest test framework is used for running tests and capturing results. The component-test executable may be built and run very quickly to obtain a first pass check for build failures or regressions.
Supported Environments |
Used for |
ts-service-test
The ts-service-test deployment combines test suites that exercise service providers via their standard service interfaces where test cases perform the role of service client. Service discovery and RPC messaging is handled by the libts shared library. On real targets, the libts library uses a dynamic discovery mechanism to locate and communicate with real service deployments. For native PC builds, service providers are embedded into the libts library itself, allowing service level testing within a native PC environment.
Supported Environments |
Used for |
uefi-test
The uefi-test deployment includes service level tests for UEFI SMM services.
Supported Environments |
Used for |
psa-api-test
Used for PSA API conformance testing using test suites from: PSA Arch Test project. Tests are integrated with service clients to enable end-to-end testing against deployed service providers. Separate executables are built for each API under test. As with ts-service-test and uefi-test, service discovery and messaging is handled by libts, allowing API tests to be run on real targets or within a native PC environment.
Supported Environments |
Used for |
ts-remote-test
The ts-remote-test deployment builds a userspace application that allows a remote test runner to be discovered and controlled. It implements a subset of the CppUtest command line interface but, instead of running tests directly, it communicates with the remote test runner to run tests and collect results. It can be used, for example, to control the running of tests included in the env-test deployment.
Supported Environments |
Used for |
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Libraries
Some deployments build common functionality into libraries that may be used by other deployments or external applications. The following library deployments are currently supported:
libts
Userspace applications that depend on trusted services may use libts for handling service discovery and RPC messaging. A major benefit to application developers is that libts entirely decouples client applications from details of where a service provider is deployed and how to communicate with it. All TS test executables and tools that interact with service providers use libts.
To facilitate test and development within a native PC environment, the libts deployment for the linux-pc environment integrates a set of service providers into the library itself. From a client application’s perspective, this looks exactly the same as when running on a target platform with service providers deployed in secure processing environments. For more information, see: Service Locator.
Supported Environments |
Used by |
libsp
libsp provides a functional interface for using FF-A messaging and memory management facilities. libsp is used in SP deployments. For more information, see: libsp.
Supported Environments |
Used by |
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Tools & Demo Applications
The following deployments are concerned with building tools and demo applications.
platform-inspect
The platform-inspect tool may be run from a Linux terminal to inspect and report information about platform firmware. Functionality is currently limited to retrieving a firmware attestation report and printing its contents.
Supported Environments |
Used for |
ts-demo
ts-demo is a simple application that uses the Crypto service to perform some typical sign, verify and encrypt operations. It is intended to be used as an example of how trusted services can be used by userspace applications.
Supported Environments |
Used for |
Copyright (c) 2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause
Platform Certification
Various certification programmes exist to help platform vendors produce hardware and firmware that meets defined requirements for security and feature compatibility. By conforming to a set of testable criteria, platform vendors can make assurances to customers about the capabilities and security of their products.
The Trusted Services project is an upstream source for service related components that can be integrated into platform firmware. Many of these components are important building blocks for meeting certification requirements. Reuse of components by downstream platform integration projects will help drive quality and security improvements, especially in areas covered by relevant certification programmes.
Currently, the following certification programmes have been adopted by downstream platform integration projects:
PSA Certified
PSA Certified provides a framework for securing connected devices. Certification demonstrates that security best practices have been implemented, based on an independent security assessment. For more information, see: PSA Certified.
PSA Certified defines ten security goals that form the foundation for device security. The certification process involves an assessment that these security goals have been met. The Trusted Services project includes service provider components and reference integrations that a system integrator may use as the basis for creating a platform that meets these goals.
PSA Goals
The following table lists the ten security goals and how the Trusted Services project helps to achieve them:
| PSA Certified Goal | Trusted Services Contribution |
|---|---|
| Unique Identification | A unique device identity, assigned during manufacture, may be stored securely using the Secure Storage trusted service with a suitable platform provided backend. |
| Security Lifecycle | The Attestation trusted service provides an extensible framework for adding claims to a signed attestation report. The security lifecycle state claim is planned to be added in a future release. |
| Attestation | A remote third-party may obtain a trusted view of the security state of a device by obtaining a signed attestation token from the Attestation service. |
| Secure Boot | Secure boot relies on a hardware trust anchor such as a public key hash programmed into an OTP eFuse array. For firmware that uses TF-A, all firmware components are verified during the early boot phase. |
| Secure Update | Involves cooperation of a trusted service with other firmware components such as the boot loader. |
| Anti-Rollback | The Secure Storage service provider can be used with arbitrary storage backends, allowing platform specific storage to be used. Where the necessary hardware is available, roll-back protected storage can be provided with a suitable backend. |
| Isolation | The trusted services architectural model assumes that service isolation is implemented using a hardware backed secure processing environment. A secure partition managed by a Secure Partition Manager is one method for realizing isolation. |
| Interaction | The FF-A specification defines messaging and memory management primitives that enable secure interaction between partitions. Importantly, the secure partition manager provides a trusted view of the identity of a message sender, allowing access to be controlled. |
| Secure Storage | The Secure Storage service provider uses a pre-configured storage backend to provide an object store with suitable security properties. Two deployments of the secure storage provider (Internal Trusted Storage and Protected Storage) are included with platform specific storage backends. |
| Cryptographic Service | The Crypto service provider implements a rich set of cryptographic operations using a protected key store. Key usage is controlled based on the least privileges principle, where usage flags constrain permitted operations. |
Conformance Test Support
To support API level conformance testing, the PSA Arch Test project provides a rich set of test suites that allow service implementations to be tested. To facilitate running the PSA functional API tests, the psa-api-test deployment (see: Test Executables) integrates these test suites with service clients. It can be used to run tests on a platform and to collect test results, providing visibility to an external assessor.
SystemReady
Arm SystemReady is a compliance certification programme that aims to promote a standardized view of a platform and its firmware (see: Arm SystemReady). SystemReady may be applied across different classes of device, represented by different SystemReady bands, from resource constrained IoT devices through to servers. By standardizing the platform and its firmware, generic operating systems can be expected to ‘just work’ on any compliant device.
SystemReady leverages existing open standards such as UEFI. The Trusted Services project includes service level components that enable UEFI SMM services to be realized, backed by PSA root-of-trust services. As an alternative to EDK2 StMM, the smm-gateway deployment presents UEFI compliant SMM service endpoints, backed by the generic Secure Storage and Crypto services.
The UEFI features supported by smm-gateway are designed to meet SystemReady requirements for the IR band (embedded IoT).
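As an illustration of how smm-gateway backed variable services surface to userspace, the sketch below reads a UEFI variable through the standard Linux efivarfs interface. It assumes a kernel with efivarfs mounted at its usual location; BootOrder and the EFI global variable GUID are used as a familiar example, and nothing here is smm-gateway specific.

```c
/* Read a UEFI variable via efivarfs. In the efivarfs format, the
 * first 4 bytes of a variable file hold the variable attributes,
 * followed by the variable data. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* efivarfs entries are named <VariableName>-<VendorGuid> */
	FILE *f = fopen("/sys/firmware/efi/efivars/"
			"BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c", "rb");
	if (!f) {
		perror("fopen");
		return 1;
	}

	uint32_t attributes = 0;	/* first 4 bytes: attribute flags */
	uint8_t data[256];
	size_t n = 0;

	if (fread(&attributes, sizeof(attributes), 1, f) == 1)
		n = fread(data, 1, sizeof(data), f);

	printf("attributes=0x%08x, %zu data bytes\n", (unsigned)attributes, n);
	fclose(f);
	return 0;
}
```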
Target Platforms
Target platforms are either emulated or physical hardware implementations. This chapter covers platform related information.
Platforms can be categorized by level of support as:
Reference platforms - “easily” accessible platforms used for testing; quality gate-keeping of the main branch includes tests executed on these platforms.
Active - platforms in this category are updated and tested by their owners for each release.
Obsolete - platforms which are not tested for the current release are put into this category.
Deprecated - platforms not tested for more than one release are treated as obsolete and will be removed in the next release.
The quality of each platform, known issues, feature limitations, extra features, etc. are described in the sub-chapters below.
Reference platforms
AEM FVP
Arm Fixed Virtual Platforms (FVPs) are hardware emulators “running at speeds comparable to the real hardware”. FVP packages are released in various configurations. This platform supports the Armv8-A Base RevC AEM FVP.
Please see the following chapters on using the AEM FVP:
Copyright (c) 2020-2022, Arm Limited and Contributors. All rights reserved.
SPDX-License-Identifier: BSD-3-Clause