Merge pull request #1316 from jedevc/cache-docs

docs: add cache storage backend docs
CrazyMax 2022-09-22 14:07:06 +02:00 committed by GitHub
commit 6a46ea04ab
7 changed files with 634 additions and 0 deletions

58
docs/guides/cache/azblob.md vendored Normal file

@ -0,0 +1,58 @@
# Azure Blob Storage cache storage
> **Warning**
>
> This cache backend is unreleased. You can use it today, by using the
> `moby/buildkit:master` image in your Buildx driver.
The `azblob` cache store uploads your resulting build cache to
[Azure's blob storage service](https://azure.microsoft.com/en-us/services/storage/blobs/).
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```
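Because this backend is unreleased, you may also need to point the new driver at a development BuildKit image, for example (a sketch using the `--driver-opt image` option of the `docker-container` driver):
```console
$ docker buildx create --use --driver=docker-container \
    --driver-opt image=moby/buildkit:master
```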
## Synopsis
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=azblob,name=<cache-image>[,parameters...] \
--cache-from type=azblob,name=<cache-image>[,parameters...]
```
Common parameters:
- `name`: the name of the cache image.
- `account_url`: the base address of the blob storage account, for example:
`https://myaccount.blob.core.windows.net`. See
[authentication](#authentication).
- `secret_access_key`: specifies the
[Azure Blob Storage account key](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage),
see [authentication](#authentication).
Parameters for `--cache-to`:
- `mode`: specify cache layers to export (default: `min`), see
[cache mode](./index.md#cache-mode)
## Authentication
The `secret_access_key`, if left unspecified, is read from environment variables
on the BuildKit server following the scheme for the
[Azure Go SDK](https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication).
The environment variables are read from the server, not the Buildx client.
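For example, a hedged sketch that passes the account URL and key explicitly as cache parameters rather than relying on the server environment (`myaccount` and `<key>` are placeholder values):
```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=azblob,name=<cache-image>,account_url=https://myaccount.blob.core.windows.net,secret_access_key=<key>,mode=max \
  --cache-from type=azblob,name=<cache-image>,account_url=https://myaccount.blob.core.windows.net
```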
## Further reading
For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).
For more information on the `azblob` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#azure-blob-storage-cache-experimental).

110
docs/guides/cache/gha.md vendored Normal file

@ -0,0 +1,110 @@
# GitHub Actions cache storage
> **Warning**
>
> The GitHub Actions cache is a beta feature. You can use it today, in current
> releases of Buildx and BuildKit. However, the interface and behavior are
> unstable and may change in future releases.
The GitHub Actions cache utilizes the
[GitHub-provided Actions cache](https://github.com/actions/cache) available
from within your CI execution environment. This is the recommended cache to use
inside your GitHub Actions pipelines, as long as your use case falls within the
[size and usage limits set by GitHub](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy).
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```
## Synopsis
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=gha[,parameters...] \
--cache-from type=gha[,parameters...]
```
Common parameters:
- `url`: cache server URL (default `$ACTIONS_CACHE_URL`), see
[authentication](#authentication)
- `token`: access token (default `$ACTIONS_RUNTIME_TOKEN`), see
[authentication](#authentication)
- `scope`: cache scope (defaults to the name of the current Git branch).
Parameters for `--cache-to`:
- `mode`: specify cache layers to export (default: `min`), see
[cache mode](./index.md#cache-mode)
## Authentication
If the `url` or `token` parameters are left unspecified, the `gha` cache backend
will fall back to using environment variables. If you invoke the `docker buildx`
command manually from an inline step, then the variables must be manually
exposed (using
[`crazy-max/ghaction-github-runtime`](https://github.com/crazy-max/ghaction-github-runtime),
for example).
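For example, once the runtime variables are exposed, you could forward them explicitly (a sketch; the `url` and `token` values simply pass through the environment variables described above):
```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=gha,url=$ACTIONS_CACHE_URL,token=$ACTIONS_RUNTIME_TOKEN,mode=max \
  --cache-from type=gha,url=$ACTIONS_CACHE_URL,token=$ACTIONS_RUNTIME_TOKEN
```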
## Scope
By default, cache is scoped per Git branch. This ensures a separate cache
environment for the main branch and each feature branch. If you build multiple
images on the same branch, each build will overwrite the cache of the previous,
leaving only the final cache.
To preserve the cache for multiple builds on the same branch, you can manually
specify a cache scope name using the `scope` parameter. In the following
example, the scope is set to a combination of the branch name and the image
name, so that each image on each branch gets its own cache:
```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image \
  --cache-from type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image
$ docker buildx build . --push -t <registry>/<image2> \
  --cache-to type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image2 \
  --cache-from type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image2
```
GitHub's
[cache access restrictions](https://docs.github.com/en/actions/advanced-guides/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache)
still apply. Only the cache for the current branch, the base branch, and the
default branch is accessible by a workflow.
### Using `docker/build-push-action`
When using the
[`docker/build-push-action`](https://github.com/docker/build-push-action), the
`url` and `token` parameters are automatically populated, so you don't need to
specify them manually or add any extra workarounds.
For example:
```yaml
- name: Build and push
uses: docker/build-push-action@v3
with:
context: .
push: true
tags: "<registry>/<image>:latest"
cache-from: type=gha
cache-to: type=gha,mode=max
```
<!-- FIXME: cross-link to ci docs once docs.docker.com has them -->
## Further reading
For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).
For more information on the `gha` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#github-actions-cache-experimental).

182
docs/guides/cache/index.md vendored Normal file

@ -0,0 +1,182 @@
# Cache storage backends
To ensure fast builds, BuildKit automatically caches the build result in its own
internal cache. Additionally, BuildKit supports exporting the build cache to an
external location, making it possible to import it in future builds.
An external cache becomes almost essential in CI/CD build environments. Such
environments usually have little-to-no persistence between runs, but it's still
important to keep the runtime of image builds as low as possible.
> **Warning**
>
> If you use secrets or credentials inside your build process, make sure you
> handle them using the dedicated
> [--secret](../../reference/buildx_build.md#secret) functionality instead of
> using manually `COPY`d files or build `ARG`s. Manually managed secrets like
> this, combined with exported cache, could lead to an information leak.
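For example, a hedged sketch of mounting a credentials file as a build secret instead of copying it into the build context (`aws` is a placeholder secret ID, consumed in the Dockerfile with `RUN --mount=type=secret,id=aws`):
```console
$ docker buildx build . \
  --secret id=aws,src=$HOME/.aws/credentials \
  -t <registry>/<image> --push
```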
## Backends
Buildx supports the following cache storage backends:
- [Inline cache](./inline.md) that embeds the build cache into the image.
The inline cache gets pushed to the same location as the main output result.
Note that this only works for the `image` exporter.
- [Registry cache](./registry.md) that embeds the build cache into a separate
image, and pushes to a dedicated location separate from the main output.
- [Local directory cache](./local.md) that writes the build cache to a local
directory on the filesystem.
- [GitHub Actions cache](./gha.md) that uploads the build cache to
[GitHub](https://docs.github.com/en/rest/actions/cache) (beta).
- [Amazon S3 cache](./s3.md) that uploads the build cache to an
[AWS S3 bucket](https://aws.amazon.com/s3/) (unreleased).
- [Azure Blob Storage cache](./azblob.md) that uploads the build cache to
[Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
(unreleased).
## Command syntax
To use any of the cache backends, you first need to specify it on build with the
[`--cache-to`](../../reference/buildx_build.md#cache-to) option to export the
cache to your storage backend of choice. Then, use the
[`--cache-from`](../../reference/buildx_build.md#cache-from) option to import
the cache from the storage backend into the current build. Unlike the local
BuildKit cache (which is always enabled), all of the cache storage backends must
be explicitly exported to, and explicitly imported from. All cache exporters
except the `inline` cache require that you
[select an alternative Buildx driver](../drivers/index.md).
The following `buildx` example uses the `registry` backend to both export and
import cache:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
--cache-from type=registry,ref=<registry>/<cache-image>[,parameters...]
```
> **Warning**
>
> As a general rule, each cache writes to some location. No location can be
> written to twice, without overwriting the previously cached data. If you want
> to maintain multiple scoped caches (for example, a cache per Git branch), then
> ensure that you use different locations for exported cache.
## Multiple caches
BuildKit currently only supports
[a single cache exporter](https://github.com/moby/buildkit/pull/3024). But you
can import from as many remote caches as you like. For example, a common pattern
is to use the cache of both the current branch and the main branch. The
following example shows importing cache from multiple locations using the
registry cache backend:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>:<branch> \
--cache-from type=registry,ref=<registry>/<cache-image>:<branch> \
--cache-from type=registry,ref=<registry>/<cache-image>:main
```
## Configuration options
<!-- FIXME: link to image exporter guide when it's written -->
This section describes some of the configuration options available when
generating cache exports. The options described here are common for at least two
or more backend types. Additionally, the different backend types support
specific parameters as well. See the detailed page about each backend type for
more information about which configuration parameters apply.
The common parameters described here are:
- Cache mode
- Cache compression
- OCI media type
### Cache mode
When generating a cache output, the `--cache-to` argument accepts a `mode`
option for defining which layers to include in the exported cache.
Mode can be set to either of two options: `mode=min` or `mode=max`. For example,
to build the cache with `mode=max` with the registry backend:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,mode=max \
--cache-from type=registry,ref=<registry>/<cache-image>
```
This option is only set when exporting a cache, using `--cache-to`. When
importing a cache (`--cache-from`) the relevant parameters are automatically
detected.
In `min` cache mode (the default), only layers that are exported into the
resulting image are cached, while in `max` cache mode, all layers are cached,
even those of intermediate steps.
While `min` cache is typically smaller (which speeds up import/export times, and
reduces storage costs), `max` cache is more likely to get more cache hits.
Depending on the complexity and location of your build, you should experiment
with both modes to find the one that works best for you.
### Cache compression
Since the `registry` cache image is a separate export artifact from the main build
result, you can specify separate compression parameters for it. These parameters
are similar to the options provided by the `image` exporter. While the default
values provide a good out-of-the-box experience, you may wish to tweak the
parameters to optimize for storage vs compute costs.
To select the compression algorithm, you can use the
`compression=<uncompressed|gzip|estargz|zstd>` option. For example, to build the
cache with `compression=zstd`:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd \
--cache-from type=registry,ref=<registry>/<cache-image>
```
Use the `compression-level=<value>` option alongside the `compression` parameter
to choose a compression level for the algorithms which support it:
- 0-9 for `gzip` and `estargz`
- 0-22 for `zstd`
As a general rule, the higher the number, the smaller the resulting file will
be, and the longer the compression will take to run.
Use the `force-compression=true` option to force re-compressing layers imported
from a previous cache, if the requested compression algorithm is different from
the previous compression algorithm.
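For example, to export a `zstd`-compressed cache at the highest level and re-compress any layers imported with a different algorithm (a sketch combining the parameters above):
```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd,compression-level=22,force-compression=true \
  --cache-from type=registry,ref=<registry>/<cache-image>
```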
> **Note**
>
> The `gzip` and `estargz` compression methods use the
> [`compress/gzip` package](https://pkg.go.dev/compress/gzip), while `zstd` uses
> the
> [`github.com/klauspost/compress/zstd` package](https://github.com/klauspost/compress/tree/master/zstd).
### OCI media types
Like the `image` exporter, the `registry` cache exporter supports creating
images with Docker media types or with OCI media types. To export OCI media type
cache, use the `oci-mediatypes` property:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,oci-mediatypes=true \
--cache-from type=registry,ref=<registry>/<cache-image>
```
This property is only meaningful with the `--cache-to` flag. When fetching
cache, BuildKit will auto-detect the correct media types to use.

47
docs/guides/cache/inline.md vendored Normal file

@ -0,0 +1,47 @@
# Inline cache storage
The `inline` cache storage backend is the simplest way to get an external cache
and is easy to get started using if you're already building and pushing an
image. However, it doesn't scale to multi-stage builds as well as the other
backends do, and it doesn't offer separation between your output artifacts and
your cache output. This means that if you're using a particularly complex build
flow, or not exporting your images directly to a registry, then you may want to
consider the [registry](./registry.md) cache.
## Synopsis
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=inline \
  --cache-from type=registry,ref=<registry>/<image>
```
To export cache using `inline` storage, pass `type=inline` to the `--cache-to`
option:
```console
$ docker buildx build . --push -t <registry>/<image> --cache-to type=inline
```
Alternatively, you can also export inline cache by setting the build argument
`BUILDKIT_INLINE_CACHE=1`, instead of using the `--cache-to` flag:
```console
$ docker buildx build . --push -t <registry>/<image> --build-arg BUILDKIT_INLINE_CACHE=1
```
To import the resulting cache on a future build, pass `type=registry` to
`--cache-from` which lets you extract the cache from inside a Docker image in
the specified registry:
```console
$ docker buildx build . --push -t <registry>/<image> --cache-from type=registry,ref=<registry>/<image>
```
## Further reading
For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).
For more information on the `inline` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#inline-push-image-and-cache-together).

105
docs/guides/cache/local.md vendored Normal file

@ -0,0 +1,105 @@
# Local cache storage
The `local` cache store is a simple cache option that stores your cache as files
in a directory on your filesystem, using an
[OCI image layout](https://github.com/opencontainers/image-spec/blob/main/image-layout.md)
for the underlying directory structure. Local cache is a good choice if you're
just testing, or if you want the flexibility to self-manage a shared storage
solution.
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```
## Synopsis
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=local,dest=path/to/local/dir[,parameters...] \
  --cache-from type=local,src=path/to/local/dir
```
Parameters for `--cache-to`:
- `dest`: absolute or relative path to the local directory where you want to
export the cache to.
- `mode`: specify cache layers to export (default: `min`), see
[cache mode](./index.md#cache-mode)
- `oci-mediatypes`: whether to use OCI media types in exported manifests
(default `true`, since BuildKit `v0.8`), see
[OCI media types](./index.md#oci-media-types)
- `compression`: compression type for layers newly created and cached (default:
`gzip`), see [cache compression](./index.md#cache-compression)
- `compression-level`: compression level for `gzip`, `estargz` (0-9) and `zstd`
(0-22)
- `force-compression`: forcibly apply `compression` option to all layers
Parameters for `--cache-from`:
- `src`: absolute or relative path to the local directory where you want to
import cache from.
- `digest`: specify explicit digest of the manifest list to import, see
[cache versioning](#cache-versioning)
If the `src` cache doesn't exist, then the cache import step will fail, but
the build will continue.
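For example, a sketch of a fuller invocation that exports a `max`-mode, `zstd`-compressed cache using the parameters described above:
```console
$ docker buildx build . --push -t <registry>/<image> \
  --cache-to type=local,dest=path/to/local/dir,mode=max,compression=zstd \
  --cache-from type=local,src=path/to/local/dir
```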
## Cache versioning
<!-- FIXME: update once https://github.com/moby/buildkit/pull/3111 is released -->
This section describes how versioning works for caches on a local filesystem,
and how you can use the `digest` parameter to use older versions of cache.
If you inspect the cache directory manually, you can see the resulting OCI image
layout:
```console
$ ls cache
blobs index.json ingest
$ cat cache/index.json | jq
{
"schemaVersion": 2,
"manifests": [
{
"mediaType": "application/vnd.oci.image.index.v1+json",
"digest": "sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707",
"size": 1560,
"annotations": {
"org.opencontainers.image.ref.name": "latest"
}
}
]
}
```
Like other cache types, the local cache gets overwritten on export: each export
replaces the contents of the `index.json` file. However, previous caches remain
available in the `blobs` directory. These old caches are addressable by digest,
and kept indefinitely. Therefore, the size of the local cache will continue to
grow (see [`moby/buildkit#1896`](https://github.com/moby/buildkit/issues/1896)
for more information).
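To keep track of older cache versions, you could record the digest from `index.json` after each export, before the next export overwrites it (a sketch, assuming `jq` is installed):
```console
$ jq -r '.manifests[0].digest' cache/index.json
sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707
```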
When importing cache using `--cache-from`, you can specify the `digest`
parameter to force loading an older version of the cache, for example:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=local,dest=path/to/local/dir \
  --cache-from type=local,src=path/to/local/dir,digest=sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707
```
## Further reading
For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).
For more information on the `local` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#local-directory-1).

71
docs/guides/cache/registry.md vendored Normal file

@ -0,0 +1,71 @@
# Registry cache storage
The `registry` cache storage can be thought of as an extension to the `inline`
cache. Unlike the `inline` cache, the `registry` cache is entirely separate from
the image, which allows for more flexible usage - `registry`-backed cache can do
everything that the inline cache can do, and more:
- Allows for separating the cache and resulting image artifacts so that you can
distribute your final image without the cache inside.
- It can efficiently cache multi-stage builds in `max` mode, instead of only the
final stage.
- It works with other exporters for more flexibility, instead of only the
`image` exporter.
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```
## Synopsis
Unlike the simpler `inline` cache, the `registry` cache supports several
configuration parameters:
```console
$ docker buildx build . --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
--cache-from type=registry,ref=<registry>/<cache-image>
```
Common parameters:
- `ref`: full address and name of the cache image that you want to import or
export.
Parameters for `--cache-to`:
- `mode`: specify cache layers to export (default: `min`), see
[cache mode](./index.md#cache-mode)
- `oci-mediatypes`: whether to use OCI media types in exported manifests
(default `true`, since BuildKit `v0.8`), see
[OCI media types](./index.md#oci-media-types)
- `compression`: compression type for layers newly created and cached (default:
`gzip`), see [cache compression](./index.md#cache-compression)
- `compression-level`: compression level for `gzip`, `estargz` (0-9) and `zstd`
(0-22)
- `force-compression`: forcibly apply `compression` option to all layers
You can choose any valid value for `ref`, as long as it's not the same as the
target location that you push your image to. You might choose different tags
(e.g. `foo/bar:latest` and `foo/bar:build-cache`), separate image names (e.g.
`foo/bar` and `foo/bar-cache`), or even different repositories (e.g.
`docker.io/foo/bar` and `ghcr.io/foo/bar`). It's up to you to decide the
strategy that you want to use for separating your image from your cache images.
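For example, a sketch of the same-repository strategy, caching under a dedicated `build-cache` tag:
```console
$ docker buildx build . --push -t docker.io/foo/bar:latest \
  --cache-to type=registry,ref=docker.io/foo/bar:build-cache,mode=max \
  --cache-from type=registry,ref=docker.io/foo/bar:build-cache
```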
If the `--cache-from` target doesn't exist, then the cache import step will
fail, but the build will continue.
## Further reading
For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).
For more information on the `registry` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#registry-push-image-and-cache-separately).

61
docs/guides/cache/s3.md vendored Normal file

@ -0,0 +1,61 @@
# Amazon S3 cache storage
> **Warning**
>
> This cache backend is unreleased. You can use it today, by using the
> `moby/buildkit:master` image in your Buildx driver.
The `s3` cache storage uploads your resulting build cache to the
[Amazon S3 object storage service](https://aws.amazon.com/s3/), into a specified
bucket.
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](../drivers/index.md). To create a new driver (which can act as a simple
> drop-in replacement):
>
> ```console
> docker buildx create --use --driver=docker-container
> ```
## Synopsis
```console
$ docker buildx build . --push -t <user>/<image> \
--cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
--cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image>
```
Common parameters:
- `region`: geographic location of the S3 bucket
- `bucket`: name of the S3 bucket used for caching
- `name`: name of the cache image
- `access_key_id`: access key ID, see [authentication](#authentication)
- `secret_access_key`: secret access key, see [authentication](#authentication)
- `session_token`: session token, see [authentication](#authentication)
Parameters for `--cache-to`:
- `mode`: specify cache layers to export (default: `min`), see
[cache mode](./index.md#cache-mode)
## Authentication
`access_key_id`, `secret_access_key`, and `session_token`, if left unspecified,
are read from environment variables on the BuildKit server following the scheme
for the
[AWS Go SDK](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html).
The environment variables are read from the server, not the Buildx client.
<!-- FIXME: update once https://github.com/docker/buildx/pull/1294 is released -->
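For example, a hedged sketch that passes the credentials explicitly as cache parameters instead of relying on the server environment (the region, bucket, and environment variable names are placeholders):
```console
$ docker buildx build . --push -t <user>/<image> \
  --cache-to type=s3,region=eu-west-1,bucket=my-cache-bucket,name=<cache-image>,access_key_id=$AWS_ACCESS_KEY_ID,secret_access_key=$AWS_SECRET_ACCESS_KEY,mode=max \
  --cache-from type=s3,region=eu-west-1,bucket=my-cache-bucket,name=<cache-image>
```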
## Further reading
For an introduction to caching see
[Optimizing builds with cache management](https://docs.docker.com/build/building/cache).
For more information on the `s3` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#s3-cache-experimental).