
The Teaclave Secure Computing Ecosystem - Projects Powered by Teaclave

Mingshen Sun


Apache Teaclave (incubating) is an open source universal secure computing platform, making computation on privacy-sensitive data safe and simple. With hardware-based security capabilities (e.g., Intel SGX), Teaclave enables sensitive data to be circulated and processed under security control, even in off-site and offshore environments, without being compromised or misused. By providing multi-party interfaces, Teaclave helps to "bridge the data islands" within and among organizations and companies.

Highlights

In a nutshell, Teaclave has these four highlights:

  • Secure and Attestable: Teaclave adopts multiple security technologies to enable secure computing. In particular, Teaclave uses Intel SGX to serve the most security-sensitive tasks with hardware-based isolation, memory encryption, and attestation. Also, Teaclave is written in Rust to prevent memory-safety issues.
  • Function-as-a-Service: Teaclave is a function-as-a-service platform that supports tasks like privacy-preserving machine learning, private set intersection, and crypto computation. In addition, developers can also write and execute Python functions. Teaclave supports both general secure computing tasks and flexible multi-party secure computation.
  • Ease of Use: Teaclave builds its components in containers, so it supports deployment both locally and within cloud infrastructures. Teaclave also provides convenient endpoint APIs, client SDKs, and command-line tools.
  • Flexibility: Components in Teaclave are designed to be modular, and some, like remote attestation, can be easily embedded in other projects. In addition, the Teaclave SGX SDK can also be used separately to write standalone SGX enclaves for other purposes.

Secure Computing and Intel SGX

Teaclave Ecosystem

Teaclave provides an open ecosystem in the Apache Way to various parties such as project developers, platform users, service providers, and first/third-party library users. In a broader view, the Teaclave community aims to support commercial products, academic research projects, teaching platforms, and other open source projects powered by Teaclave.

Because porting, maintaining, and auditing third-party libraries is important in trusted computing environments, we maintain hundreds of Rust libraries for TEEs, e.g., crypto, scientific computation, and machine learning libraries. These libraries around the Teaclave ecosystem drive the development of other projects powered by Teaclave. Furthermore, we are also incubating important TEE components like MesaPy for SGX and the Rust OP-TEE TrustZone SDK (https://github.com/sccommunity/rust-optee-trustzone-sdk).

The Teaclave open-source ecosystem is also supporting many commercial products. For example, Baidu, as a platform service provider, has announced the MesaTEE secure computing platform, providing a secure, efficient, and extensible secure computing solution that helps companies and organizations with privacy-preserving data exchange and computation.

In addition, the Teaclave ecosystem actively sponsors academic research in trusted execution environments and system security, on topics such as the security of TEEs, the design of memory-safe SGX SDKs, and the security of supply chains. Many of these studies have been published at top-tier conferences.

Projects powered by Teaclave

Because of the openness and flexibility of Teaclave, many projects are built on the Teaclave platform and the Teaclave SGX SDK. Here, we summarize several popular open source or commercial products powered by Teaclave. We hope more organizations and companies will come and join the ecosystem, pushing forward the boundary of data privacy and secure computation.

  • Advanca: A privacy-preserving general-purpose compute/storage infrastructure for Dapps.
  • Anonify: A blockchain-agnostic execution environment with privacy and auditability based on TEE.
  • Enigma Core: The Enigma Core library, containing the trusted and untrusted application components written in Rust.
  • Crypto.com Chain: Alpha version prototype of Crypto.com Chain.
  • Occlum: Occlum is a memory-safe, multi-process library OS for Intel SGX. This project plans to join the Confidential Computing Consortium in the near future.
  • Phala Network: A TEE-Blockchain hybrid architecture implementing Confidential Contract on Polkadot.
  • SafeTrace: Privacy preserving voluntary COVID-19 self-reporting platform for contact tracing.
  • Secret Network: A blockchain-based, open-source protocol that lets anyone perform computations on encrypted data, bringing privacy to smart contracts and public blockchains.
  • substraTEE: Trusted Off-Chain Compute Framework for substrate blockchains.
  • Veracruz: A framework for defining and deploying collaborative, privacy-preserving computations amongst a group of mutually mistrusting individuals. This project plans to join the Confidential Computing Consortium in the near future.

Projects powered by Teaclave

In summary, all of the above projects utilize capabilities of TEEs, such as hardware isolation, memory encryption, and remote attestation, to protect data computation or to serve as a trusted party. Based on the Teaclave SGX SDK, some projects like Occlum and Veracruz aim to provide a more flexible TEE framework. Others like Secret Network and substraTEE want to implement a trusted node for their business logic. Also, some projects reuse common components such as the remote attestation libraries in their code base. Overall, the Teaclave ecosystem has empowered many successful projects. We maintain a list of projects powered by Teaclave on the community homepage.

Finally, everyone is very welcome to join the Teaclave community and contribute to the secure computing ecosystem. We hope to see more users and contributors in the Teaclave community. By the way, if there are any other amazing projects I didn't mention that you plan to work on, please share them with me; I'm more than happy to learn more about them.

The Teaclave Secure Computing Open-Source Ecosystem - An Overview of Open Source Projects Powered by Teaclave

Mingshen Sun


Apache Teaclave (incubating) (https://teaclave.apache.org/) is the world's first universal secure computing platform, empowering computation on privacy-sensitive data and making secure computing simpler. Built on hardware security capabilities (Intel SGX), Teaclave ensures that sensitive data can be circulated and processed securely and controllably outside the trusted domain and in offshore scenarios, without worrying about data leakage or misuse. Teaclave also supports multi-party joint computation, breaking down data silos within enterprises and organizations. Teaclave entered the Apache Incubator in August 2019, and in October 2020 the Teaclave community voted to approve and release version 0.1.0.

Four Highlights of Teaclave

In a nutshell, Teaclave has four highlights:

  • Security: Teaclave adopts multiple security technologies to protect computation on private data. We use Intel SGX to protect privacy-preserving computing tasks with hardware isolation, memory encryption, remote attestation, and other security technologies. In addition, to avoid the risks introduced by memory-safety vulnerabilities, Teaclave is written in the memory-safe programming language Rust.
  • Functionality: The Teaclave platform provides convenient function-as-a-service interfaces that greatly lower the barrier to entry. The platform also implements many built-in functions, such as machine learning algorithms, multi-party private set intersection, and encryption/decryption. More importantly, developers can freely write functions in Python that operate on private data and submit them to the Teaclave platform for execution. Unlike traditional FaaS frameworks, Teaclave provides flexible interfaces for multi-party computation, greatly simplifying the workflow of secure multi-party computation.
  • Ease of Use: Teaclave is easy to deploy, supporting container-based deployment both locally and in the cloud. The project also provides comprehensive APIs, client SDKs, and command-line tools to help developers use the platform's features.
  • Flexibility: Teaclave is designed as a flexible framework of components and services; other projects can use individual Teaclave components with only minor modifications. For example, a project can use the Teaclave remote attestation library as a third-party dependency, or deploy Teaclave's database service standalone. Developers can also use the Teaclave SGX SDK (formerly Rust SGX SDK) to build brand-new SGX applications.

Secure Computing and Intel SGX

The Teaclave Open-Source Ecosystem

Teaclave provides an open ecosystem that includes not only developers of the Teaclave project itself, but also users and service providers of the secure computing platform, as well as users of Teaclave's core libraries and commonly used third-party libraries in secure-computing-related projects. The Teaclave ecosystem also serves commercial products, academic research projects, and trusted computing teaching platforms, and the community supports third-party open source projects powered by Teaclave.

Porting, maintaining, and auditing third-party libraries is especially important in trusted execution environments. We provide hundreds of third-party Rust libraries for the Teaclave ecosystem, such as cryptography, scientific computation, and machine learning libraries, which greatly enrich the ecosystem and drive the development of other open source projects based on Teaclave. Besides maintaining third-party libraries, we have also incubated several important TEE components, such as MesaPy, a Python interpreter for SGX (https://github.com/mesalock-linux/mesapy/tree/sgx), and the Rust OP-TEE TrustZone SDK for TrustZone-based TEEs (https://github.com/sccommunity/rust-optee-trustzone-sdk).

The Teaclave open-source ecosystem also supports many commercial products. For example, building on Teaclave's open source code, Baidu, as a platform service provider, has built the MesaTEE secure computing platform, which provides a highly secure, high-performance, and scalable chip-level data computing solution. It helps organizations address data security and privacy issues in data collaboration and break down data silos. Through on-premises or cloud services, it enables industries such as finance, government, and the Internet to complete joint computation in scenarios like joint modeling, joint marketing, and joint risk control, achieving a "data usable but not visible" security experience.

The Teaclave ecosystem also supports academic research on trusted computing and system security, covering topics such as the security of trusted execution environments, the design of memory-safe SGX SDKs, and the security of the SGX third-party library ecosystem. Several of these studies have been published at top-tier international conferences.

Open Source Projects Powered by Teaclave

Because of Teaclave's openness and flexibility, many projects are built on the Teaclave platform or the Teaclave SGX SDK. Below are several open source projects and commercial products powered by Teaclave. We hope more companies and projects will join the secure computing ecosystem and jointly push forward data privacy and secure computation.

  • Advanca: A privacy-preserving general-purpose compute/storage infrastructure for decentralized applications (Dapps). https://github.com/advanca/advanca
  • Anonify: Uses trusted execution environment (TEE) technology on blockchains to provide users with tamper-resistant data sharing infrastructure, preserving the data privacy of each participant in the distributed ledger while flexibly executing business logic. https://github.com/LayerXcom/anonify

Projects powered by Teaclave

All of the above projects use TEE features such as hardware isolation, memory encryption, and remote attestation to protect the privacy of data computation in their businesses. Some projects, such as Occlum and Veracruz, build more convenient TEE frameworks on top of the SGX SDK. Others, such as Secret Network and substraTEE, use the SGX SDK to write TEE applications that protect data privacy in their business logic or provide trusted nodes. Still others port and reuse common modules from the Teaclave platform, such as the remote attestation library. A fuller list of open source projects powered by Teaclave is available on the official website (https://teaclave.apache.org/community/).

Finally, the Teaclave project warmly welcomes everyone to join the community and help advance privacy-preserving secure computing.

Making Secure Computing Simpler - Apache Teaclave (incubating) 0.1.0 Officially Released

Mingshen Sun


Apache Teaclave (incubating), the world's first open source universal secure computing platform, officially released version 0.1.0 after a community vote on October 21, 2020. Version 0.1.0 is an important milestone for the Teaclave community: it is the first official release since Teaclave entered the Apache Incubator, which means Teaclave is on track on its path toward becoming an Apache top-level project.

Teaclave is a secure computing platform that empowers computation on privacy-sensitive data. Built on hardware security capabilities, Teaclave ensures that sensitive data can be circulated and processed securely and controllably outside the trusted domain and in offshore scenarios, without worrying about data leakage or misuse. Teaclave also supports multi-party joint computation, breaking down data silos within enterprises and organizations.

In a nutshell, Teaclave has the following four highlights:

  • Security: Teaclave adopts multiple security technologies to protect computation on private data. We use Intel SGX to protect privacy-preserving computing tasks with hardware isolation, memory encryption, remote attestation, and other security technologies. In addition, to avoid the risks introduced by memory-safety vulnerabilities, Teaclave is written in the memory-safe programming language Rust.
  • Functionality: The Teaclave platform provides convenient function-as-a-service interfaces that greatly lower the barrier to entry. The platform also implements many built-in functions, such as machine learning algorithms, multi-party private set intersection, and encryption/decryption. More importantly, developers can freely write functions in Python that operate on private data and submit them to the Teaclave platform for execution. Unlike traditional FaaS frameworks, Teaclave provides flexible interfaces for multi-party computation, greatly simplifying the workflow of secure multi-party computation.
  • Ease of Use: Teaclave is easy to deploy, supporting container-based deployment both locally and in the cloud. The project also provides comprehensive APIs, client SDKs, and command-line tools to help developers use the platform's features.
  • Flexibility: Teaclave is designed as a flexible framework of components and services; other projects can use individual Teaclave components with only minor modifications. For example, a project can use the Teaclave remote attestation library as a third-party dependency, or deploy Teaclave's database service standalone.

Teaclave Architecture

After Teaclave entered the Apache Incubator, we started a plan to refactor the project and to reorganize the entire codebase and its interactions; the 0.1.0 release is the first step in the project's growth. In short, we refactored the code and improved features in the following areas: more complete function and data registration and execution, a redesigned function development workflow, protobuf-compatible RPC interface definitions, distributed execution services, customized executors, better documentation, a redesigned error-handling mechanism, an improved test framework and test cases, more built-in functions, more flexible remote attestation (DCAP), a redesigned RPC mechanism, and more. More detailed changes can be found in our release notes. This release changed 876 source files, added 78,237 lines of code, deleted 54,846 lines of code, and resolved 102 issues from the community on GitHub.

Teaclave 0.1.0 can be downloaded from the download page on the official website (remember to verify the PGP signature and hash).

The new release comes with comprehensive and detailed documentation. If this is your first encounter with Teaclave, we provide an easy-to-follow getting-started guide that walks you through building and running a simple function in Teaclave. If you want to dive deeper into Teaclave's internal design and implementation, there are more design documents, API documentation, and codebase documentation waiting for you.

Thanks to everyone who has been following, using, asking questions about, and contributing code to the project; Teaclave could not have developed without the support and help of the community. We are very happy to announce that four companies or organizations already use Teaclave in their products, and more than nine open source projects use the Teaclave platform or the Teaclave SGX SDK. All of this provides continuous momentum for the project's development.

Teaclave is developed in the open following The Apache Way: whether it is code, documentation, design, or roadmap planning, everything is discussed in the community's channels. If you want to follow Teaclave or join our discussions, you can reply to us or start new topics on GitHub issues and the mailing lists, or follow our Twitter account @ApacheTeaclave for the latest updates. In addition, the official website collects Teaclave-related talks, papers, and articles, including the slides and video of this year's ApacheCon talk.

We would like to thank all of our contributors, our Apache mentors for their guidance, and the PPMC members and committers who help grow and promote the community, discuss the roadmap, and organize community events. We also thank everyone who is learning about and trying out Teaclave, as well as the companies that give back to the Teaclave community. You have all made tremendous contributions to the growth of the Teaclave open source community.

Finally, 0.1.0 is only the beginning for Teaclave. We will keep working with the Teaclave community to advance privacy-preserving secure computing.

Teaclave: making secure computing simpler.

Announcing Apache Teaclave (incubating) 0.1.0

Mingshen Sun


On behalf of the Teaclave community, I am happy to announce the first Apache Incubator release of Teaclave, 0.1.0. Teaclave is a universal secure computing platform, making computation on privacy-sensitive data safe and simple.

The 0.1.0 release marks an important milestone for the Teaclave community. This is the first public release of the Teaclave platform. Moreover, this is the first official release since we entered Apache Incubator on August 19, 2019.

Project highlights

Since this is our first public release, I still want to emphasize the highlights of the Teaclave project.

Apache Teaclave (incubating) has the following highlights:

  • Security: Teaclave adopts multiple security technologies to enable secure computing. In particular, Teaclave uses Intel SGX to serve the most security-sensitive tasks with hardware-based isolation, memory encryption, and attestation. Also, Teaclave is written in Rust to prevent memory-safety issues.
  • Functionality: Teaclave is provided as a function-as-a-service platform. With many built-in functions, it supports tasks like machine learning, private set intersection, crypto computation, etc. In addition, developers can also deploy and execute Python scripts in Teaclave. More importantly, unlike traditional FaaS, Teaclave supports both general secure computing tasks and flexible single- and multi-party computation.
  • Usability: Teaclave builds its components in containers, so it supports deployment both locally and within cloud infrastructures. Teaclave also provides convenient endpoint APIs, client SDKs, and command-line tools.
  • Modularity: Components in Teaclave are designed to be modular, and some components, like remote attestation, can be easily embedded in other projects. In addition, the Teaclave SGX SDK can also be used separately to write standalone SGX enclaves for other purposes.

Teaclave Architecture

Changes since entering Apache Incubator

Since we entered the Apache Incubator, we have started an ambitious task of refactoring and improving the project from many perspectives. These include:

  • Supporting function and data registration.
    • input data of a function can be fetched from external storage
    • reuse registered function in new tasks
  • Redesigning function development workflow.
    • developers only need to write Python scripts, with native libraries written in Rust
  • Providing persistent database implementation.
    • port rusty-leveldb to SGX enclave using protected FS to secure file I/O
    • add a dedicated database service called "storage service" for data persistence
  • Supporting protobuf compatible RPC protocol.
    • refactor existing service protocol with .proto files
    • redesign the RPC protocol with remote attestation
  • Supporting distributed workers.
    • support docker deployment
    • decouple services into different containers
  • Supporting customized workers.
    • service providers can deploy other kinds of customized workers besides the built-in worker
  • Providing better documentation.
  • Providing better error handling.
    • errors can be propagated through RPC
    • better error triage
  • Testsuite cleanup/refactoring.
  • Implementing more built-in functions: we have nine built-in functions, including echo, face detection, gbdt predict/train, logistic regression predict/train, online decrypt, ordered set intersect, private join and compute, and RSA sign.
  • Supporting more flexible attestation mechanisms (DCAP): providing a reference implementation of DCAP attestation service and supporting DCAP in our attestation library and build system as well.
  • RPC enhancement.
    • multi-language client
  • Many other changes.

These changes include 876 files changed, 78,237 insertions, and 54,846 deletions. We have also closed 102 issues on GitHub, answering questions from the community and implementing feature requests.

Download

Teaclave 0.1.0 can be downloaded at the download page. Note that it is essential to verify the integrity of the downloaded file using the PGP signature (.asc file) or a hash (.md5 or .sha* file).

Documentation

If this is your first time trying Teaclave, we provide a simple but clear tutorial to guide you through getting started with Teaclave by invoking your first function.

Basically, you can build the Teaclave platform using docker with these commands:

$ cd incubator-teaclave
$ docker run --rm -v $(pwd):/teaclave -w /teaclave \
    -it teaclave/teaclave-build-ubuntu-1804-sgx-2.9.1:latest \
    bash -c ". /root/.cargo/env && \
      . /opt/sgxsdk/environment && \
      mkdir -p build && cd build && \
      cmake -DTEST_MODE=ON .. && \
      make"

Launch all services with docker-compose:

$ (cd docker && docker-compose -f docker-compose-ubuntu-1804.yml up --build)

And invoke function with a Python client:

$ cd examples/python
$ PYTHONPATH=../../sdk/python python3 builtin_echo.py 'Hello, Teaclave!'
[+] registering user
[+] login
[+] registering function
[+] creating task
[+] approving task
[+] invoking task
[+] getting result
[+] done
[+] function return: b'Hello, Teaclave!'

If you want to understand the internals of Teaclave, we provide several documents about the design of Teaclave. Also, we extensively document our codebase in each subdirectory. Finally, API references are automatically generated and uploaded to our homepage.

Community

Along our road to the first public release, we have also seen the growth of the community, with people starting to follow the project and use it in their products. We have seen at least four organizations using Teaclave in their products, and more than nine open source projects building on top of the Teaclave platform or the Teaclave SGX SDK.

We have built several channels to communicate with the community, including GitHub issues, mailing lists, and a Twitter account (@ApacheTeaclave).

We also attended ApacheCon this year to introduce Teaclave to the Apache community. If you missed the talk, you can find the slides and other materials of talks, papers, and related articles on this page.

Thanks

I'd like to thank all contributors for building this community. Thanks to our project mentors, who help us understand the Apache way of open source. Furthermore, thanks to all PPMC members and committers for governing and expanding the community, organizing meetups, discussing the roadmap, designing components, writing documents, contributing code, and advertising the project. Moreover, thanks to all external contributors who send pull requests to fix bugs and implement features. In addition, I'd like to thank the people who try Teaclave themselves and file bugs or feature requests; this also helps us understand the project and figure out where we can improve. Finally, I'd like to thank the companies who use Teaclave and contribute back to the community.

Finally, 0.1.0 is just the start of Teaclave. We will continue working hard with the community to implement new features, write documents, and expand the community.

Happy Teaclaving. Cheers!

Mitigation of Intel-SA-00219 in Teaclave SGX SDK

Yu Ding

Background

Intel issued Intel SA-00219 on Nov 12, 2019, with CVE number CVE-2019-0117. Intel also published guidance to instruct developers and researchers. Then Intel released Intel SGX SDK v2.7.1, including new memory allocation primitives and corresponding patches to the PSW enclaves.

This article helps people understand Intel-SA-00219 and how the Teaclave SGX SDK handles it.

The problem statement and my thoughts

The only statement I found is on the Intel-SA-00219 page:

Organize the code/data within enclave memory to avoid putting sensitive materials in DWORD0 and DWORD1 of cache line. The effectiveness of this mitigation is dependent on the ability for the software to avoid the affected memory region. To assist the enclave application providers to modify their code, Intel is releasing SGX SDK update (Windows version 2.5.101.3, Linux version 2.7.101.3) with new memory allocation APIs to avoid the affected memory region. More details about the APIs can be found here.

Intel does not directly describe the vulnerability here. But it's clear that a 64-byte cache line could contain 8 bytes of sensitive data, such as keys protected by Intel SGX. So the following memory layout can be problematic in SGX:

 --------------------------------------------------------------------------------------
| attacker accessible data A | private key (inaccessible) | attacker accessible data B |
--------------------------------------------------------------------------------------

This is equivalent to a vulnerable data structure like:

struct foo {
    uint64_t A;
    uint64_t secret;
    uint64_t B;
};

where foo.A and foo.B are accessible by design, while foo.secret is not.

If an attacker can somehow access either A or B, they will probably get the first or last 8 bytes of the "inaccessible" secret in the same cache line. Then something bad may happen.

So, the most straightforward mitigation is to insert additional "guard bytes" before and after the sensitive data:

 ----------------------------------------------------------------------------------------------
| attacker data A | 8-byte guard | private key (inaccessible) | 8-byte guard | attacker data B |
----------------------------------------------------------------------------------------------

It results in a modified structure like

struct foo {
    uint64_t A;
    (private) uint64_t _guard0;
    uint64_t secret;
    (private) uint64_t _guard1;
    uint64_t B;
};

Further investigation of Intel's code reveals that _guard1 is not required. So it can be:

     -------------------------------------------------------------------------------
| attacker data A | 8-byte guard | private key (inaccessible) | attacker data B |
-------------------------------------------------------------------------------
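
To make the layout concrete, here is a minimal Rust sketch that mirrors the diagram above (illustrative only; GuardedFoo is not a type from any SDK). Note that the guard field alone is not the whole story: the object must also be placed at a suitable offset within its cache line, which is what Intel's allocation primitives described below take care of.

#[repr(C)]
struct GuardedFoo {
    a: u64,       // attacker-accessible by design
    _guard0: u64, // 8-byte guard: nothing sensitive lives here
    secret: u64,  // sensitive material
    b: u64,       // attacker-accessible by design
}

fn main() {
    // With #[repr(C)] the fields keep their declared order, so the guard
    // always sits between `a` and `secret`.
    println!("size of GuardedFoo = {}", std::mem::size_of::<GuardedFoo>());        // 32
    println!("offset of secret   = {}", std::mem::offset_of!(GuardedFoo, secret)); // 16
}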

Intel's new allocator primitive

Intel's guidance provides:

  • a C++ template custom_alignment_aligned
  • a C function sgx_get_aligned_ptr and the type of one of its parameters, struct align_req_t
  • a dynamic memory allocator function sgx_aligned_malloc

After spending hours on Intel's code, I realized that these primitives are helping developers allocate a larger object which:

  • contains all fields of the original object,
  • adds "guard bytes" before and after each specified secret field, and
  • aligns each specified secret field on demand.

Intel's patches on PSW enclaves

The easiest example to understand is from psw/ae/pse_op/session_mgr.cpp:

@@ -417,7 +461,12 @@ pse_op_error_t pse_exchange_report(uint64_t tick,
{
pse_op_error_t status = OP_SUCCESS;
sgx_dh_session_t sgx_dh_session;
- sgx_key_128bit_t aek;
+ //
+ // securely align aek
+ //
+ //sgx_key_128bit_t aek;
+ sgx::custom_alignment_aligned<sgx_key_128bit_t, sizeof(sgx_key_128bit_t), 0, sizeof(sgx_key_128bit_t)> oaek;
+ sgx_key_128bit_t& aek = oaek.v;
sgx_dh_session_enclave_identity_t initiator_identity;
cse_sec_prop_t * pcse_sec = NULL;
secu_info_t* psec_info = NULL;

The template generates a larger struct oaek. The size of sgx_key_128bit_t is 16 bytes, sizeof(oaek) equals 32, and the offset of oaek.v is 8.

And in the same file, another fix is:

--- a/psw/ae/pse/pse_op/session_mgr.cpp
+++ b/psw/ae/pse/pse_op/session_mgr.cpp
@@ -29,21 +29,65 @@
*
*/

-
+#include <sgx_secure_align.h>
#include "utility.h"
#include "session_mgr.h"
#include "pse_op_t.h"
#include "sgx_dh.h"

// ISV enclave <-> pse-op sessions
-static pse_session_t g_session[SESSION_CONNECTION];
+//
+// securely align all ISV enclave - pse sessions' secrets
+//
+static sgx::custom_alignment_aligned<pse_session_t, 16, __builtin_offsetof(pse_session_t, active.AEK), 16> og_session[SESSION_CONNECTION];
+//
+// following allows existing references to g_session[index]
+// to not have to change
+//
+class CSessions
+{
+public:
+ pse_session_t& operator[](int index) {
+ return og_session[index].v;
+ }
+};
+static CSessions g_session;
static uint32_t g_session_count = 0;

It seems that the original global g_session array is vulnerable to INTEL-SA-00219. So Intel created a new structure CSessions, overloaded the [] operator, and used the custom_alignment_aligned template to create the array of guarded sessions.

We can see some more complex samples in the same file, such as:

 // ephemeral session global variables
static uint8_t g_nonce_r_pse[EPH_SESSION_NONCE_SIZE] = {0}; // nonce R(PSE) for ephemeral session establishment
static uint8_t g_nonce_r_cse[EPH_SESSION_NONCE_SIZE] = {0}; // nonce R(CSE) for ephemeral session establishment
-static pairing_data_t g_pairing_data; // unsealed pairing data
-eph_session_t g_eph_session; // ephemeral session information
+
+//
+// securely align pairing data
+// Id_pse and Id_cse aren't secrets
+// I don't think pairingNonce is a secret and even if it is, we can't align
+// all of [mk, sk, pairingID, pairingNonce]
+//
+//static pairing_data_t g_pairing_data; // unsealed pairing data
+static sgx::custom_alignment<pairing_data_t,
+ //__builtin_offsetof(pairing_data_t, secret_data.Id_pse), sizeof(((pairing_data_t*)0)->secret_data.Id_pse),
+ //__builtin_offsetof(pairing_data_t, secret_data.Id_cse), sizeof(((pairing_data_t*)0)->secret_data.Id_cse),
+ __builtin_offsetof(pairing_data_t, secret_data.mk), sizeof(((pairing_data_t*)0)->secret_data.mk),
+ __builtin_offsetof(pairing_data_t, secret_data.sk), sizeof(((pairing_data_t*)0)->secret_data.sk),
+ __builtin_offsetof(pairing_data_t, secret_data.pairingID), sizeof(((pairing_data_t*)0)->secret_data.pairingID)
+ //__builtin_offsetof(pairing_data_t, secret_data.pairingNonce), sizeof(((pairing_data_t*)0)->secret_data.pairingNonce)
+ > opairing_data;
+pairing_data_t& g_pairing_data = opairing_data.v;
+//
+// securely align pse - cse/psda ephemeral session secrets
+//
+//eph_session_t g_eph_session; // ephemeral session information
+sgx::custom_alignment<eph_session_t,
+ __builtin_offsetof(eph_session_t, TSK), sizeof(((eph_session_t*)0)->TSK),
+ __builtin_offsetof(eph_session_t, TMK), sizeof(((eph_session_t*)0)->TMK)
+> oeph_session;
+//
+// this reference trick requires change to declaration
+// in other files, but still cleaner than changing
+// all references
+//
+eph_session_t& g_eph_session = oeph_session.v;

/**
* @brief Check the status of the ephemeral session

To understand it, let me expand struct pairing_data_t here:

/* Pairing blob unsealed and usable inside of enclave */
typedef struct _pairing_data_t
{
    se_plaintext_pairing_data_t plaintext;  // not involved
    struct se_secret_pairing_data_t {
        SHA256_HASH Id_pse;
        SHA256_HASH Id_cse;
        SIGMA_MAC_KEY mk;
        SIGMA_SECRET_KEY sk;
        SIGMA_SECRET_KEY pairingID;  // old_sk used for repairing check
        Nonce128_t pairingNonce;
        EcDsaPrivKey VerifierPrivateKey;
    } secret_data;
} pairing_data_t;

The patch seems to protect mk, sk, and pairingID, and all the other fields are commented out. What's more, this patch uses an undocumented template sgx::custom_alignment defined as:

template <class T, std::size_t... OLs>
using custom_alignment = custom_alignment_aligned<T, alignof(T), OLs...>;

Experiments on the undocumented template

To test how the undocumented template works, I wrote the following code:

struct foo {
    uint64_t secret1[5]; // offset = 0
};

typedef sgx::custom_alignment<foo, __builtin_offsetof(foo, secret1), sizeof(((foo*)0)->secret1)> AFOO;

printf("=== Size of foo = %u ===\n", sizeof(foo));                                // 40
printf("=== Size of AFOO = %u ===\n", sizeof(AFOO));                              // 64
printf("=== offset of AFOO.v = %u ===\n", __builtin_offsetof(AFOO, v));           // 8
printf("=== offset of secret1 = %u ===\n", __builtin_offsetof(AFOO, v.secret1));  // 8

So we can see that the structure of AFOO is:

struct AFOO {
    uint64_t _padding_head[1];   // offset = 0, len = 8
    struct {
        uint64_t secret1[5];     // offset = 8, len = 40
    } v;
    uint64_t _padding_tail[2];   // offset = 48, len = 16
};

It seems the undocumented C++ template pads AFOO up to the next alignment boundary and adds an 8-byte padding head to it. If we add a second secret to foo like:

struct foo {
    uint64_t secret1[5]; // offset = 0
    uint64_t secret2[1]; // offset = 40
};

typedef sgx::custom_alignment<foo,
    __builtin_offsetof(foo, secret1), sizeof(((foo*)0)->secret1),
    __builtin_offsetof(foo, secret2), sizeof(((foo*)0)->secret2)
> AFOO;

printf("=== Size of foo = %u ===\n", sizeof(foo));                                       // 48
printf("=== Size of AFOO = %u ===\n", sizeof(AFOO));                                     // 64
printf("=== offset of AFOO.v = %u ===\n", __builtin_offsetof(AFOO, v));                  // 8
printf("=== offset of AFOO.v.secret1 = %u ===\n", __builtin_offsetof(AFOO, v.secret1));  // 8
printf("=== offset of AFOO.v.secret2 = %u ===\n", __builtin_offsetof(AFOO, v.secret2));  // 48

we can see that the structure of AFOO is:

struct AFOO {
    uint64_t _padding_head[1];   // offset = 0, len = 8
    struct {
        uint64_t secret1[5];     // offset = 8, len = 40
        uint64_t secret2[1];     // offset = 48, len = 8
    } v;
    uint64_t _padding_tail[1];   // offset = 56, len = 8
};

If we increase secret2 to 16 bytes, it still works, and _padding_tail will have zero length. So does this mean that only the extra padding head is required for the mitigation? However, it will not compile if we make secret2 24 bytes, like:

struct foo {
    uint64_t secret1[5]; // offset = 0
    uint64_t secret2[3]; // offset = 40
};

typedef sgx::custom_alignment<foo,
    __builtin_offsetof(foo, secret1), sizeof(((foo*)0)->secret1),
    __builtin_offsetof(foo, secret2), sizeof(((foo*)0)->secret2)
> AFOO;

GCC would terminate on:

make[1]: Entering directory '/root/linux-sgx/SampleCode/Cxx11SGXDemo'
In file included from Enclave/TrustedLibrary/Libcxx.cpp:47:0:
/opt/sgxsdk/include/sgx_secure_align.h: In instantiation of 'struct sgx::__custom_alignment_internal::custom_alignment<ecall_lambdas_demo()::foo, 8ul, -1>':
Enclave/TrustedLibrary/Libcxx.cpp:125:53: required from here
/opt/sgxsdk/include/sgx_secure_align.h:123:13: error: static assertion failed: No viable offset
static_assert(LZ > 0, "No viable offset");
^
/opt/sgxsdk/include/sgx_secure_align.h:125:48: error: size of array is negative
char __no_secret_allowed_in_here[LZ];
^
Makefile:255: recipe for target 'Enclave/TrustedLibrary/Libcxx.o' failed

Nothing changes if we switch to the original template sgx::custom_alignment_aligned. So I guess the template does not support structures that:

  • contain secrets that are laid out consecutively, and
  • where the consecutive secrets are larger than a certain size (not sure yet).

If we separate secret1 and secret2 by inserting something in between, the template works:

struct foo {
    uint64_t secret1[5]; // offset = 0
    char dumb;           // offset = 40
    uint64_t secret2[3]; // offset = 48
};

typedef sgx::custom_alignment<foo,
    __builtin_offsetof(foo, secret1), sizeof(((foo*)0)->secret1),
    __builtin_offsetof(foo, secret2), sizeof(((foo*)0)->secret2)
> AFOO;

printf("=== Size of foo = %u ===\n", sizeof(foo));                                       // 72
printf("=== Size of AFOO = %u ===\n", sizeof(AFOO));                                     // 128
printf("=== offset of AFOO.v = %u ===\n", __builtin_offsetof(AFOO, v));                  // 24
printf("=== offset of AFOO.v.secret1 = %u ===\n", __builtin_offsetof(AFOO, v.secret1));  // 24
printf("=== offset of AFOO.v.secret2 = %u ===\n", __builtin_offsetof(AFOO, v.secret2));  // 72

Changes/Actions required

From Intel's usage, we can learn that:

**Don't construct a sensitive data structure directly. Always allocate an aligned structure and fill it up later.**

It means:

  • if you allocate something sensitive (e.g. keys in sgx_key_128bit_t) on stack/heap, you probably need to allocate another guarded structure first, and get a mutable reference to its inner data.
  • if you want to use sgx_key_128bit_t as the type of a return value, you can choose between (1) returning a guarded structure, or (2) taking an additional argument that is a caller-allocated, mutable reference to sgx_key_128bit_t and filling it. Both shapes are sketched below.
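
As a rough Rust sketch of these two shapes (assumptions: the aligned wrapper sgx_align_key_128bit_t from sgx_types exposes its inner key as a public key field and implements Default, mirroring the align_mac.mac access in the sgx_tcrypto sample later in this post; derive_key_somehow is a hypothetical placeholder for whatever actually produces the key material):

use sgx_types::{sgx_align_key_128bit_t, sgx_key_128bit_t};

// Hypothetical placeholder for the routine that actually derives the key.
fn derive_key_somehow(out: &mut sgx_key_128bit_t) {
    *out = [0u8; 16];
}

// Option 1: return the guarded wrapper, so the key stays inside its padded,
// aligned allocation on the caller's side as well.
fn derive_key_aligned() -> sgx_align_key_128bit_t {
    let mut aligned = sgx_align_key_128bit_t::default();
    derive_key_somehow(&mut aligned.key);
    aligned
}

// Option 2: the caller allocates the guarded wrapper and passes a mutable
// reference to the inner key for us to fill in place.
fn derive_key_into(out: &mut sgx_key_128bit_t) {
    derive_key_somehow(out);
}

Either way, the sensitive value is never constructed in an unguarded location first and copied into place later.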

Primitives provided by the Rust SGX SDK

  • We provided AlignBox as a replacement for Box.

    • Box is somewhat tricky because it always "initializes on the stack first and copies to the heap later". copyless provides a novel primitive to solve this, but it is not always effective. To this end, we created AlignBox, which guarantees "on-heap initialization" without copying any bits. Usage:

      let heap_align_obj = AlignBox::<struct_align_t>::heap_init_with_req(|mut t| {
          t.key1 = [0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff];
          t.pad1 = [0x00; 16];
          t.key2 = [0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff];
          t.pad2 = [0x00; 16];
          t.key3 = [0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff];
          t.pad3 = [0x00; 16];
          t.key4 = [0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff];
      }, 16, &str_slice);
      assert!(heap_align_obj.is_some());
  • We provided an aligned key type for each built-in key type. The layouts are calculated by Intel's template.

    • sgx_align_key_128bit_t
    • sgx_align_mac_128bit_t
    • sgx_align_key_256bit_t
    • sgx_align_mac_256bit_t
    • sgx_align_ec256_dh_shared_t
    • sgx_align_ec256_private_t

We modified sgx_tcrypto, sgx_tse, and sgx_tdh to use the above primitives, following the required changes described above. One sample is from sgx_tcrypto:

+    let mut align_mac = sgx_align_mac_128bit_t::default();
+    let ret = unsafe {
+        sgx_rijndael128_cmac_msg(key as * const sgx_cmac_128bit_key_t,
+                                 src.as_ptr() as * const u8,
+                                 size as u32,
+                                 &mut align_mac.mac as * mut sgx_cmac_128bit_tag_t)
+    };

We allocate an aligned structure first, and then fill it up using Intel's crypto primitive later.
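
Putting the pieces together, the pattern could look roughly like the following sketch of a safe wrapper (simplified: the aligned type and the extern binding of sgx_rijndael128_cmac_msg are assumed to come from sgx_types, as in the diff above, and the real sgx_tcrypto wrapper performs more input validation):

use sgx_types::*;

// Sketch of a CMAC wrapper that follows the allocate-aligned-then-fill rule.
fn cmac_128bit(key: &sgx_cmac_128bit_key_t,
               src: &[u8]) -> Result<sgx_cmac_128bit_tag_t, sgx_status_t> {
    // Allocate the guarded, aligned wrapper first ...
    let mut align_mac = sgx_align_mac_128bit_t::default();
    // ... then let Intel's primitive fill the inner tag in place.
    let ret = unsafe {
        sgx_rijndael128_cmac_msg(key as *const sgx_cmac_128bit_key_t,
                                 src.as_ptr() as *const u8,
                                 src.len() as u32,
                                 &mut align_mac.mac as *mut sgx_cmac_128bit_tag_t)
    };
    match ret {
        sgx_status_t::SGX_SUCCESS => Ok(align_mac.mac),
        status => Err(status),
    }
}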

Apache Incubation Proposal

The Teaclave PPMC
MesaTEE Apache Incubation Proposal

= Abstract =

MesaTEE is a framework for universal secure computing.

= Proposal =

MesaTEE is the next-gen solution to enable general computing service for
security-critical scenarios. It will allow even the most sensitive data to
be securely processed to enable offshore businesses without leakage.

The solution combines the advanced Hybrid Memory Safety (HMS) model and the
power of the Trusted Computing technologies (e.g., TPM) as well as the
Confidential Computing technologies (e.g., Intel SGX).

* Code base:
* https://github.com/mesalock-linux/mesatee
* https://github.com/baidu/rust-sgx-sdk
* Website: https://mesatee.org
* Documentation: https://mesatee.org/doc/mesatee_sdk/

= Background =

The emerging technologies of big data analytics, machine learning, cloud/edge
computing, and blockchain are significantly boosting our productivity, but at
the same time they are bringing new confidentiality and integrity concerns. On
public cloud and blockchain, sensitive data like health and financial records
may be consumed at runtime by untrusted computing processes running on
compromised platforms; during in-house data exchange, confidential information
may cross different clearance boundaries and possibly fall into the wrong hands;
also not to mention the privacy issue arises in offshore data supply chains.

Although the consequences of data breaches have been extensively elaborated, we
should also note that proprietary computing algorithms themselves, such as AI
models, also need to be well protected. Once a model is leaked, attackers can
steal the intellectual property, or launch whitebox attacks and easily exploit
the weaknesses of the model.

Facing all these risky scenarios, we are in desperate need of a trusted and
secure mechanism, enabling us to protect both private data and proprietary
computing models during a migratable execution in potentially unsafe
environments, yet preserving functionalities, performance, compatibility, and
flexibility. MesaTEE is targeting to be, as we call it, the full "Universal
Secure Computing" stack, so it can help users resolve these runtime security
risks.

MesaTEE aims to promote the development of a universal secure computing
ecosystem through open source and openness, to provide basic trust protection
for the productivity revolution brought by big data and AI, to solve data
exchange and multi-party computation between departments and companies, to
enable privacy-critical services such as finance and medical care using
blockchain/cloud services, and to safeguard businesses that are closely related
to life and safety such as autonomous driving. MesaTEE has been working closely
with mainstream cloud computing/blockchain/chip vendors and
universities/research institutions to promote hardware TEEs, software memory
safety, and versatile computing services to create an internationally protected
and flexible secure computing framework. MesaTEE's open-source release will
greatly accelerate the development of the next generation of big data business
applications, and it is also of great importance for promoting AI in all
business areas.

= Rationale =

MesaTEE stack redefines future AI and big data analytics by providing a trusted
and secure offshore computing environment. The confidentiality and privacy of
data and models can be well protected with MesaTEE, even if data and model
originate from different parties with no mutual trust. Moreover, the computing
platform itself is not necessarily trusted either. The Trusted Computing Base
(TCB) can thus be largely reduced to MesaTEE framework alone. A detailed
description of target use-cases can be found at
https://github.com/mesalock-linux/mesatee/blob/master/docs/case_study.md.

We believe that the Apache way of open source community building empowers
MesaTEE to attract a diverse set of contributors who can bring new ideas into
the project.

= Initial Goals =

* Move the existing codebase, website, documentation, and mailing lists to an
Apache-hosted infrastructure.
* Integrate with the Apache development process.
* Ensure all dependencies are compliant with Apache License version 2.0.
* Incrementally develop and release per Apache guidelines.

= Current Status =

The MesaTEE project (and its sub-project Rust SGX SDK) has been designed and
developed at Baidu since 2017, and was open sourced under the Apache License,
Version 2.0 in 2019. The source code is currently hosted at github.com
(https://github.com/mesalock-linux/mesatee and
https://github.com/baidu/rust-sgx-sdk), which will seed the Apache git
repository.

== Meritocracy ==

We are fully committed to open, transparent, and meritocratic interactions with
our community. In fact, one of the primary motivations for us to enter the
incubation process is to be able to rely on Apache best practices that can
ensure meritocracy. This will eventually help incorporate the best ideas back
into the project and enable contributors to continue investing their time in the
project. We already have some guidelines to help external contributors:

* https://github.com/mesalock-linux/mesatee/blob/master/docs/rust_guideline.md
* https://github.com/mesalock-linux/mesatee/blob/master/docs/how_to_add_your_function.md
* https://github.com/mesalock-linux/mesatee/blob/master/CODE_OF_CONDUCT.md

== Community ==

The MesaTEE community is fairly young. Since our sub-project (Rust SGX SDK) was
open sourced in 2017, we received many contributions from various companies and
individual researchers (https://github.com/baidu/rust-sgx-sdk/pulls). Our
primary goal during the incubation would be to grow the community and groom our
existing active contributors for committers.

== Core Developers ==

Current core developers work at Baidu. We are confident that incubation will
help us grow a diverse community in an open and collaborative way.

== Alignment ==

MesaTEE is designed as a framework for universal secure computing. This is
complementary to Apache's existing projects, providing a trusted and secure
computing framework.

Our sincere hope is that being a part of the Apache foundation would enable us
to drive the future of the project in alignment with the other Apache projects
for the benefit of thousands of organizations that already leverage these
projects.

= Known Risks =

== Orphaned Products ==

The risk of abandonment of MesaTEE is low. MesaTEE has been incubated at Baidu
for over two years. Baidu is committed to the further development of the project
and will keep investing resources towards the Apache processes and community
building, during the incubation period.

== Inexperience with Open Source ==

Even though the initial committers are new to the Apache world, some have
considerable open source experience - Yu Ding, Yiming Jing, Mingshen Sun. We
have been successfully managing the current open source community, answering
questions, and taking feedback already. Moreover, we hope to obtain guidance and
mentorship from current ASF members to help us succeed in the incubation.

== Length of Incubation ==

We expect the project to be in incubation for 2 years or less.

== Homogenous Developers ==

Currently, the lead developers for MesaTEE are from Baidu. However, we have an
active set of early contributors/collaborators from Alibaba and other companies,
which we hope will increase the diversity going forward. Once again, a primary
motivation for the incubation is to facilitate this in the Apache way.

== Reliance on Salaried Developers ==

Both the current committers and early contributors have several years of core
expertise around designing trusted computing systems. Current committers are
very passionate about the project and have already invested hundreds of hours
towards helping and building the community. Thus, even with employer changes, we
expect they will be able to actively engage in the project either because they
will be working in similar areas even with newer employers or out of belief in
the project.

== Relationships with Other Apache Products ==

To the best of our knowledge, there are no directly competing projects with
MesaTEE that offer the full feature set - memory safety, secure computing,
multi-party computation, etc. However, some projects share similar goals, e.g.,
OpenWhisk, which provides a serverless cloud platform. We are committed to open
collaboration with such Apache projects and to incorporating changes into
MesaTEE or contributing patches to other projects, with the goal of making it
easier for the community at large to adopt these open source technologies.

== Excessive Fascination with the Apache Brand ==

The Apache brand is well respected. We are very honored to have the opportunity
to join the ASF, with the understanding that its brand policies shall be
respected. We hope Apache can help us build the ecosystem around MesaTEE and
attract more developers.

= Documentation =

* Detailed documentation: https://github.com/mesalock-linux/mesatee
* MesaTEE SDK API documentation: https://mesatee.org/doc/mesatee_sdk/

= Initial Source =

The codebase is currently hosted on Github:

* https://github.com/mesalock-linux/mesatee
* https://github.com/baidu/rust-sgx-sdk

During incubation, the codebase will be migrated to an Apache infrastructure.
The source code of MesaTEE is under the Apache License, Version 2.0, while the
Rust SGX SDK is under the BSD 3-Clause License.

= Source and Intellectual Property Submission Plan =

We will work with the committers to get ICLAs signed. We will provide a Software
Grant Agreement from an authorized signer per
https://www.apache.org/licenses/software-grant-template.pdf

= External Dependencies =

MesaTEE directly depends on these third-party Rust crates:

* adler32, 1.0.3, BSD-3-Clause
* aho-corasick, 0.7.4, Unlicense/MIT
* array_tool, 1.0.3, MIT
* assert_matches, 1.3.0, MIT/Apache-2.0
* autocfg, 0.1.4, Apache-2.0/MIT
* base64, 0.10.1, MIT/Apache-2.0
* bincode, 1.1.4, MIT
* bit-vec, 0.6.1, MIT/Apache-2.0
* bitflags, 1.1.0, MIT/Apache-2.0
* byteorder, 1.3.2, MIT/Unlicense
* bytes, 0.5.0, MIT
* cc, 1.0.37, MIT/Apache-2.0
* cfg-if, 0.1.9, MIT/Apache-2.0
* chrono, 0.4.7, MIT/Apache-2.0
* color_quant, 1.0.1, MIT
* crc32fast, 1.2.0, MIT
* ctor, 0.1.9, Apache-2.0
* deflate, 0.7.20, MIT/Apache-2.0
* either, 1.5.2, MIT/Apache-2.0
* env_logger, 0.6.2, MIT/Apache-2.0
* erased-serde, 0.3.9, MIT
* fnv, 1.0.6, Apache-2.0
* getrandom, 0.1.6, MIT
* ghost, 0.1.0, MIT/Apache-2.0
* gif, 0.10.2, MIT/Apache-2.0
* gzip-header, 0.3.0, MIT/Apache-2.0
* half, 1.3.0, MIT/Apache-2.0
* hashbrown, 0.3.1, Apache-2.0/MIT
* heapsize, 0.4.2, MIT/Apache-2.0
* hex, 0.3.2, MIT
* http, 0.1.17, MIT/Apache-2.0
* httparse, 1.3.4, MIT/Apache-2.0
* humantime, 1.2.0, MIT/Apache-2.0
* image, 0.21.0, MIT
* inflate, 0.4.5, MIT
* inventory, 0.1.3, MIT
* inventory-impl, 0.1.3, MIT
* iovec, 0.2.0, MIT/Apache-2.0
* itertools, 0.8.0, MIT/Apache-2.0
* itoa, 0.4.4, MIT
* jpeg-decoder, 0.1.15, MIT
* lazy_static, 1.3.0, MIT/Apache-2.0
* libc, 0.2.59, MIT
* linked-hash-map, 0.5.2, MIT/Apache-2.0
* log, 0.4.7, MIT
* lzw, 0.10.0, MIT/Apache-2.0
* matrixmultiply, 0.2.2, MIT/Apache-2.0
* md5, 0.6.1, Apache-2.0/MIT
* memchr, 2.2.1, Unlicense/MIT
* memory_units, 0.3.0, MPL-2.0
* net2, 0.2.33, MIT/Apache-2.0
* num, 0.2.0, MIT/Apache-2.0
* num-bigint, 0.2.2, MIT/Apache-2.0
* num-complex, 0.2.3, MIT/Apache-2.0
* num-integer, 0.1.41, MIT/Apache-2.0
* num-iter, 0.1.39, MIT/Apache-2.0
* num-rational, 0.2.2, MIT/Apache-2.0
* num-traits, 0.2.8, MIT/Apache-2.0
* parity-wasm, 0.31.3, MIT/Apache-2.0
* png, 0.14.1, MIT/Apache-2.0
* proc-macro2, 0.4.30, MIT/Apache-2.0
* profiler_builtins, 0.1.0, profiler_builtins
* quick-error, 1.2.2, MIT/Apache-2.0
* quote, 0.3.15, MIT
* quote, 0.6.13, MIT
* rand, 0.6.5, MIT/Apache-2.0
* rand_core, 0.4.0, MIT/Apache-2.0
* rand_hc, 0.1.0, MIT/Apache-2.0
* rand_pcg, 0.1.2, MIT/Apache-2.0
* rawpointer, 0.1.0, MIT/Apache-2.0
* regex, 1.1.9, MIT/Apache-2.0
* regex-syntax, 0.6.8, MIT/Apache-2.0
* ring, 0.14.6, ISC-style
* rulinalg, 0.4.2, MIT
* rustls, 0.15.2, Apache-2.0/ISC/MIT
* rusty-machine, 0.5.4, MIT
* ryu, 1.0.0, Apache-2.0
* sct, 0.5.0, Apache-2.0/ISC/MIT
* serde, 1.0.94, MIT
* serde_cbor, 0.10.0, MIT/Apache-2.0
* serde_derive, 1.0.94, MIT
* serde_json, 1.0.40, MIT
* sha1, 0.6.0, BSD-3-Clause
* sha2, 0.8.0, sha2
* spin, 0.5.0, MIT
* syn, 0.11.11, MIT
* syn, 0.15.39, MIT
* synom, 0.11.3, MIT/Apache-2.0
* termcolor, 1.0.5, Unlicense
* thread_local, 0.3.6, Apache-2.0/MIT
* tiff, 0.3.1, MIT
* toml, 0.5.1, MIT/Apache-2.0
* typetag, 0.1.3, MIT
* typetag-impl, 0.1.3, MIT
* ucd-util, 0.1.3, MIT/Apache-2.0
* unicode-xid, 0.0.4, MIT/Apache-2.0
* unicode-xid, 0.1.0, MIT/Apache-2.0
* utf8-ranges, 1.0.3, Unlicense/MIT
* uuid, 0.7.4, Apache-2.0
* wabt, 0.6.0, Apache-2.0
* wasmi, 0.5.0, MIT/Apache-2.0
* wasmi-validation, 0.1.0, MIT/Apache-2.0
* webpki, 0.19.1, ISC-style
* webpki-roots, 0.16.0, MPL-2.0
* winapi, 0.3.7, MIT/Apache-2.0
* winapi-i686-pc-windows-gnu, 0.4.0, MIT/Apache-2.0
* winapi-util, 0.1.2, Unlicense/MIT
* winapi-x86_64-pc-windows-gnu, 0.4.0, MIT/Apache-2.0
* wincolor, 1.0.1, Unlicense/MIT
* yasna, 0.3.1, MIT/Apache-2.0

Note that this is not an exhaustive dependency list; only the direct
dependencies of MesaTEE's trusted libs are included.

== Cryptography ==

MesaTEE uses the following cryptographic libraries:

* ring (https://github.com/briansmith/ring): a Rust crypto library based on
BoringSSL
* rustls: a Rust TLS library
* sgx_tcrypto in Intel SGX SDK (https://software.intel.com/en-us/sgx/sdk)

= Required Resources =

== Mailing lists ==

* private@mesatee.incubator.apache.org (with moderated subscriptions)
* dev@mesatee.incubator.apache.org
* commits@mesatee.incubator.apache.org
* user@mesatee.incubator.apache.org

== Git Repositories ==

Upon entering incubation, we want to transfer the existing repos from
https://github.com/mesalock-linux/mesatee and
https://github.com/baidu/rust-sgx-sdk to Apache organization in GitHub like:

* https://github.com/apache/incubator-mesatee
* https://github.com/apache/incubator-mesatee-rust-sgx-sdk

== Issue Tracking ==

MesaTEE currently uses GitHub to track issues. We would like to continue doing
so.

== Continuous Integration Service ==

MesaTEE currently uses a self-hosted continuous integration (CI) service which
helps developers automatically test commits. The CI service involves several
nodes that support Intel SGX. We would like to continue doing so.

= Initial Committers =

The list is sorted alphabetically:

* Mingshen Sun <mssun at mesatee.org>
* Pei Wang <wangpei at mesatee.org>
* Rundong Zhou <rundongzhou at mesatee.org>
* Tao Wei <lenx at mesatee.org>
* Tongxin Li <litongxin at mesatee.org>
* Yiming Jing <jingyiming at mesatee.org>
* Yu Ding <d at mesatee.org>
* Yulong Zhang <ylzhang at mesatee.org>
* Zhaofeng Chen <zf at mesatee.org>

= Sponsors =

== Champion ==

* Zhijie Shen <zjshen@apache.org>

== Nominated Mentors ==

* Jianyong Dai <daijy@apache.org>
* Luciano Resende <lresende@apache.org>
* Matt Sicker
* Furkan Kamaci

== Sponsoring Entity ==

The Incubator PMC