docs: moved manual pages to docs repo, added link

Signed-off-by: David Karlsson <david.karlsson@docker.com>
David Karlsson 2022-12-02 09:45:15 +01:00
parent e91d5326fe
commit 6c5168e1ec
22 changed files with 22 additions and 3131 deletions

@@ -1,74 +1,3 @@
# Defining additional build contexts and linking targets
In addition to the main `context` key that defines the build context, each target
can also define additional named contexts with a map defined with the key `contexts`.
These values map to the `--build-context` flag in the [build command](https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context).
Inside the Dockerfile these contexts can be used with the `FROM` instruction or the `--from` flag.
The value can be a local source directory, a container image (with the `docker-image://` prefix),
a Git URL, an HTTP URL, or the name of another target in the Bake file (with the `target:` prefix).
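For example, a named context can point at a Git repository. A minimal sketch (the repository URL and the context name `src` are illustrative):
```hcl
# docker-bake.hcl
target "app" {
  contexts = {
    # any Git URL is accepted here, just like with --build-context
    src = "https://github.com/docker/buildx.git"
  }
}
```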
## Pinning alpine image
```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN echo "Hello world"
```
```hcl
# docker-bake.hcl
target "app" {
contexts = {
alpine = "docker-image://alpine:3.13"
}
}
```
## Using a secondary source directory
```dockerfile
# syntax=docker/dockerfile:1
FROM scratch AS src
FROM golang
COPY --from=src . .
```
```hcl
# docker-bake.hcl
target "app" {
contexts = {
src = "../path/to/source"
}
}
```
## Using a result of one target as a base image in another target
To use the result of one target as a build context of another, specify the target
name with the `target:` prefix.
```dockerfile
# syntax=docker/dockerfile:1
FROM baseapp
RUN echo "Hello world"
```
```hcl
# docker-bake.hcl
target "base" {
dockerfile = "baseapp.Dockerfile"
}
target "app" {
contexts = {
baseapp = "target:base"
}
}
```
Please note that in most cases you should just use a single multi-stage
Dockerfile with multiple targets for similar behavior. This pattern is only recommended
when you have multiple Dockerfiles that can't easily be merged into one.
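For comparison, the single-Dockerfile approach points each bake target at a stage of the same multi-stage Dockerfile through the `target` field. A minimal sketch (the stage names are illustrative and assume the Dockerfile defines matching stages):
```hcl
# docker-bake.hcl
target "base" {
  dockerfile = "Dockerfile"
  target     = "base" # a stage named `base` in the multi-stage Dockerfile
}
target "app" {
  dockerfile = "Dockerfile"
  target     = "app"  # a later stage that builds FROM base
}
```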
Moved to [docs.docker.com](https://docs.docker.com/build/customize/bake/build-contexts)

@@ -1,270 +1,3 @@
# Building from Compose file
## Specification
Bake uses the [compose-spec](https://docs.docker.com/compose/compose-file/) to
parse a compose file and translate each service to a [target](file-definition.md#target).
```yaml
# docker-compose.yml
services:
webapp-dev:
build: &build-dev
dockerfile: Dockerfile.webapp
tags:
- docker.io/username/webapp:latest
cache_from:
- docker.io/username/webapp:cache
cache_to:
- docker.io/username/webapp:cache
webapp-release:
build:
<<: *build-dev
x-bake:
platforms:
- linux/amd64
- linux/arm64
db:
image: docker.io/username/db
build:
dockerfile: Dockerfile.db
```
```console
$ docker buildx bake --print
```
```json
{
"group": {
"default": {
"targets": [
"db",
"webapp-dev",
"webapp-release"
]
}
},
"target": {
"db": {
"context": ".",
"dockerfile": "Dockerfile.db",
"tags": [
"docker.io/username/db"
]
},
"webapp-dev": {
"context": ".",
"dockerfile": "Dockerfile.webapp",
"tags": [
"docker.io/username/webapp:latest"
],
"cache-from": [
"docker.io/username/webapp:cache"
],
"cache-to": [
"docker.io/username/webapp:cache"
]
},
"webapp-release": {
"context": ".",
"dockerfile": "Dockerfile.webapp",
"tags": [
"docker.io/username/webapp:latest"
],
"cache-from": [
"docker.io/username/webapp:cache"
],
"cache-to": [
"docker.io/username/webapp:cache"
],
"platforms": [
"linux/amd64",
"linux/arm64"
]
}
}
}
```
Compared to the [HCL format](file-definition.md#hcl-definition), the compose
format has some limitations:
* Specifying variables or global scope attributes is not yet supported
* The `inherits` service field is not supported, but you can use [YAML anchors](https://docs.docker.com/compose/compose-file/#fragments) to reference other services, as in the example above
## `.env` file
You can declare default environment variables in an environment file named
`.env`. This file is loaded from the current working directory,
where the command is executed, and applied to compose definitions passed
with `-f`.
```yaml
# docker-compose.yml
services:
webapp:
image: docker.io/username/webapp:${TAG:-v1.0.0}
build:
dockerfile: Dockerfile
```
```
# .env
TAG=v1.1.0
```
```console
$ docker buildx bake --print
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"docker.io/username/webapp:v1.1.0"
]
}
}
}
```
> **Note**
>
> System environment variables take precedence over environment variables
> in the `.env` file.
## Extension field with `x-bake`
Where some fields are not (yet) available in the compose specification, you
can use the [special extension](https://docs.docker.com/compose/compose-file/#extension)
field `x-bake` in your compose file to evaluate extra fields:
```yaml
# docker-compose.yml
services:
addon:
image: ct-addon:bar
build:
context: .
dockerfile: ./Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
tags:
- ct-addon:foo
- ct-addon:alp
platforms:
- linux/amd64
- linux/arm64
cache-from:
- user/app:cache
- type=local,src=path/to/cache
cache-to:
- type=local,dest=path/to/cache
pull: true
aws:
image: ct-fake-aws:bar
build:
dockerfile: ./aws.Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
secret:
- id=mysecret,src=./secret
- id=mysecret2,src=./secret2
platforms: linux/arm64
output: type=docker
no-cache: true
```
```console
$ docker buildx bake --print
```
```json
{
"group": {
"default": {
"targets": [
"aws",
"addon"
]
}
},
"target": {
"addon": {
"context": ".",
"dockerfile": "./Dockerfile",
"args": {
"CT_ECR": "foo",
"CT_TAG": "bar"
},
"tags": [
"ct-addon:foo",
"ct-addon:alp"
],
"cache-from": [
"user/app:cache",
"type=local,src=path/to/cache"
],
"cache-to": [
"type=local,dest=path/to/cache"
],
"platforms": [
"linux/amd64",
"linux/arm64"
],
"pull": true
},
"aws": {
"context": ".",
"dockerfile": "./aws.Dockerfile",
"args": {
"CT_ECR": "foo",
"CT_TAG": "bar"
},
"tags": [
"ct-fake-aws:bar"
],
"secret": [
"id=mysecret,src=./secret",
"id=mysecret2,src=./secret2"
],
"platforms": [
"linux/arm64"
],
"output": [
"type=docker"
],
"no-cache": true
}
}
}
```
Complete list of valid fields for `x-bake`:
* `cache-from`
* `cache-to`
* `contexts`
* `no-cache`
* `no-cache-filter`
* `output`
* `platforms`
* `pull`
* `secret`
* `ssh`
* `tags`
Moved to [docs.docker.com](https://docs.docker.com/build/customize/bake/compose-file)

@@ -1,216 +1,3 @@
# Configuring builds
Bake supports loading a build definition from files, but sometimes you need even
more flexibility to configure this definition.
For this use case, you can define variables inside the bake files that can be
set by the user with environment variables or by [attribute definitions](#global-scope-attributes)
in other bake files. If you wish to change a specific value for a single
invocation, you can use the `--set` flag [from the command line](#from-command-line).
## Global scope attributes
You can define global scope attributes in HCL/JSON and use them for code reuse
and for setting values for variables. This means you can create a "data-only" HCL file
with the values you want to set or override, and use it in the list of regular
definition files.
```hcl
# docker-bake.hcl
variable "FOO" {
default = "abc"
}
target "app" {
args = {
v1 = "pre-${FOO}"
}
}
```
You can use this file directly:
```console
$ docker buildx bake --print app
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre-abc"
}
}
}
}
```
Or create an override configuration file:
```hcl
# env.hcl
WHOAMI="myuser"
FOO="def-${WHOAMI}"
```
And invoke bake together with both of the files:
```console
$ docker buildx bake -f docker-bake.hcl -f env.hcl --print app
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre-def-myuser"
}
}
}
}
```
## From command line
You can also override target configurations from the command line with the
[`--set` flag](https://docs.docker.com/engine/reference/commandline/buildx_bake/#set):
```hcl
# docker-bake.hcl
target "app" {
args = {
mybuildarg = "foo"
}
}
```
```console
$ docker buildx bake --set app.args.mybuildarg=bar --set app.platform=linux/arm64 app --print
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"mybuildarg": "bar"
},
"platforms": [
"linux/arm64"
]
}
}
}
```
Pattern matching syntax defined in [https://golang.org/pkg/path/#Match](https://golang.org/pkg/path/#Match)
is also supported:
```console
$ docker buildx bake --set foo*.args.mybuildarg=value # overrides build arg for all targets starting with "foo"
$ docker buildx bake --set *.platform=linux/arm64 # overrides platform for all targets
$ docker buildx bake --set foo*.no-cache # bypass caching only for targets starting with "foo"
```
Complete list of overridable fields:
* `args`
* `cache-from`
* `cache-to`
* `context`
* `dockerfile`
* `labels`
* `no-cache`
* `output`
* `platform`
* `pull`
* `secrets`
* `ssh`
* `tags`
* `target`
## Using variables in variables across files
When multiple files are specified, one file can use variables defined in
another file.
```hcl
# docker-bake1.hcl
variable "FOO" {
default = upper("${BASE}def")
}
variable "BAR" {
default = "-${FOO}-"
}
target "app" {
args = {
v1 = "pre-${BAR}"
}
}
```
```hcl
# docker-bake2.hcl
variable "BASE" {
default = "abc"
}
target "app" {
args = {
v2 = "${FOO}-post"
}
}
```
```console
$ docker buildx bake -f docker-bake1.hcl -f docker-bake2.hcl --print app
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "pre--ABCDEF-",
"v2": "ABCDEF-post"
}
}
}
}
```
Moved to [docs.docker.com](https://docs.docker.com/build/customize/bake/configuring-build)

@@ -1,440 +1,3 @@
# Bake file definition
`buildx bake` supports HCL, JSON and Compose file formats for defining build
[groups](#group) and [targets](#target), as well as [variables](#variable) and
[functions](#functions). It looks for build definition files in the current
directory in the following order:
* `docker-compose.yml`
* `docker-compose.yaml`
* `docker-bake.json`
* `docker-bake.override.json`
* `docker-bake.hcl`
* `docker-bake.override.hcl`
## Specification
Inside a bake file you can declare group, target and variable blocks to define
project-specific reusable build flows.
### Target
A target reflects a single docker build invocation with the same options that
you would specify for `docker build`:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
```
```console
$ docker buildx bake webapp-dev
```
> **Note**
>
> In the case of compose files, each service corresponds to a target.
> If a compose service name contains a dot, it will be replaced with an underscore.
Complete list of valid target fields available for [HCL](#hcl-definition) and
[JSON](#json-definition) definitions:
| Name | Type | Description |
|---------------------|--------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| `inherits` | List | [Inherit build options](#merging-and-inheritance) from other targets |
| `args` | Map | Set build-time variables (same as [`--build-arg` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `cache-from` | List | External cache sources (same as [`--cache-from` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `cache-to` | List | Cache export destinations (same as [`--cache-to` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `context` | String | Set of files located in the specified path or URL |
| `contexts` | Map | Additional build contexts (same as [`--build-context` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `dockerfile` | String | Name of the Dockerfile (same as [`--file` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `dockerfile-inline` | String | Inline Dockerfile content |
| `labels` | Map | Set metadata for an image (same as [`--label` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `no-cache` | Bool | Do not use cache when building the image (same as [`--no-cache` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `no-cache-filter` | List | Do not cache specified stages (same as [`--no-cache-filter` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `output` | List | Output destination (same as [`--output` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `platforms` | List | Set target platforms for build (same as [`--platform` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `pull` | Bool | Always attempt to pull all referenced images (same as [`--pull` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `secret` | List | Secret to expose to the build (same as [`--secret` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `ssh` | List | SSH agent socket or keys to expose to the build (same as [`--ssh` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `tags` | List | Name and optionally a tag in the format `name:tag` (same as [`--tag` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
| `target` | String | Set the target build stage to build (same as [`--target` flag](https://docs.docker.com/engine/reference/commandline/buildx_build/)) |
### Group
A group is a grouping of targets:
```hcl
# docker-bake.hcl
group "build" {
targets = ["db", "webapp-dev"]
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
target "db" {
dockerfile = "Dockerfile.db"
tags = ["docker.io/username/db"]
}
```
```console
$ docker buildx bake build
```
### Variable
Similar to how Terraform provides a way to [define variables](https://www.terraform.io/docs/configuration/variables.html#declaring-an-input-variable),
the HCL file format also supports variable block definitions. These can be used
to define variables with values provided by the current environment, or a
default value when unset:
```hcl
# docker-bake.hcl
variable "TAG" {
default = "latest"
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:${TAG}"]
}
```
```console
$ docker buildx bake webapp-dev # will use the default value "latest"
$ TAG=dev docker buildx bake webapp-dev # will use the TAG environment variable value
```
> **Tip**
>
> See also the [Configuring builds](configuring-build.md) page for advanced usage.
### Functions
A [set of generally useful functions](https://github.com/docker/buildx/blob/master/bake/hclparser/stdlib.go)
provided by [go-cty](https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib)
are available for use in HCL files:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
args = {
buildno = "${add(123, 1)}"
}
}
```
In addition, [user defined functions](https://github.com/hashicorp/hcl/tree/main/ext/userfunc)
are also supported:
```hcl
# docker-bake.hcl
function "increment" {
params = [number]
result = number + 1
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
args = {
buildno = "${increment(123)}"
}
}
```
> **Note**
>
> See the [User defined HCL functions](hcl-funcs.md) page for more details.
## Built-in variables
* `BAKE_CMD_CONTEXT` can be used to access the main `context` for the bake command
from a bake file that has been [imported remotely](file-definition.md#remote-definition).
* `BAKE_LOCAL_PLATFORM` returns the current platform's default platform
specification (e.g. `linux/amd64`), as in the sketch below.
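A minimal sketch using `BAKE_LOCAL_PLATFORM` to pin a target to the platform of the host running the build:
```hcl
# docker-bake.hcl
target "app" {
  # resolves to the local default platform, e.g. linux/amd64
  platforms = [BAKE_LOCAL_PLATFORM]
}
```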
## Merging and inheritance
Multiple files can include the same target and final build options will be
determined by merging them together:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
```
```hcl
# docker-bake2.hcl
target "webapp-dev" {
tags = ["docker.io/username/webapp:dev"]
}
```
```console
$ docker buildx bake -f docker-bake.hcl -f docker-bake2.hcl webapp-dev
```
A group can specify its list of targets with the `targets` option. A target can
inherit build options by setting the `inherits` option to the list of targets or
groups to inherit from:
```hcl
# docker-bake.hcl
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:${TAG}"]
}
target "webapp-release" {
inherits = ["webapp-dev"]
platforms = ["linux/amd64", "linux/arm64"]
}
```
## `default` target/group
When you invoke `bake`, you specify the targets/groups you want to build. If no
arguments are specified, the group/target named `default` will be built:
```hcl
# docker-bake.hcl
target "default" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:latest"]
}
```
```console
$ docker buildx bake
```
## Definitions
### HCL definition
The HCL definition file is recommended, as its experience is more aligned with the
buildx UX. It also allows better code reuse, different target groups, and extended features.
```hcl
# docker-bake.hcl
variable "TAG" {
default = "latest"
}
group "default" {
targets = ["db", "webapp-dev"]
}
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp:${TAG}"]
}
target "webapp-release" {
inherits = ["webapp-dev"]
platforms = ["linux/amd64", "linux/arm64"]
}
target "db" {
dockerfile = "Dockerfile.db"
tags = ["docker.io/username/db"]
}
```
### JSON definition
```json
{
"variable": {
"TAG": {
"default": "latest"
}
},
"group": {
"default": {
"targets": [
"db",
"webapp-dev"
]
}
},
"target": {
"webapp-dev": {
"dockerfile": "Dockerfile.webapp",
"tags": [
"docker.io/username/webapp:${TAG}"
]
},
"webapp-release": {
"inherits": [
"webapp-dev"
],
"platforms": [
"linux/amd64",
"linux/arm64"
]
},
"db": {
"dockerfile": "Dockerfile.db",
"tags": [
"docker.io/username/db"
]
}
}
}
```
### Compose file
```yaml
# docker-compose.yml
services:
webapp:
image: docker.io/username/webapp:latest
build:
dockerfile: Dockerfile.webapp
db:
image: docker.io/username/db
build:
dockerfile: Dockerfile.db
```
> **Note**
>
> See the [Building from Compose file](compose-file.md) page for more details.
## Remote definition
You can also build bake files directly from a remote Git repository or HTTPS URL:
```console
$ docker buildx bake "https://github.com/docker/cli.git#v20.10.11" --print
#1 [internal] load git source https://github.com/docker/cli.git#v20.10.11
#1 0.745 e8f1871b077b64bcb4a13334b7146492773769f7 refs/tags/v20.10.11
#1 2.022 From https://github.com/docker/cli
#1 2.022 * [new tag] v20.10.11 -> v20.10.11
#1 DONE 2.9s
```
```json
{
"group": {
"default": {
"targets": [
"binary"
]
}
},
"target": {
"binary": {
"context": "https://github.com/docker/cli.git#v20.10.11",
"dockerfile": "Dockerfile",
"args": {
"BASE_VARIANT": "alpine",
"GO_STRIP": "",
"VERSION": ""
},
"target": "binary",
"platforms": [
"local"
],
"output": [
"build"
]
}
}
}
```
As you can see, the context is fixed to `https://github.com/docker/cli.git`, even if
[no context is actually defined](https://github.com/docker/cli/blob/2776a6d694f988c0c1df61cad4bfac0f54e481c8/docker-bake.hcl#L17-L26)
in the definition.
If you want to access the main context for the bake command from a bake file
that has been imported remotely, you can use the [`BAKE_CMD_CONTEXT` built-in var](#built-in-variables).
```console
$ curl -s https://raw.githubusercontent.com/tonistiigi/buildx/remote-test/docker-bake.hcl
```
```hcl
target "default" {
context = BAKE_CMD_CONTEXT
dockerfile-inline = <<EOT
FROM alpine
WORKDIR /src
COPY . .
RUN ls -l && stop
EOT
}
```
```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" --print
```
```json
{
"target": {
"default": {
"context": ".",
"dockerfile": "Dockerfile",
"dockerfile-inline": "FROM alpine\nWORKDIR /src\nCOPY . .\nRUN ls -l \u0026\u0026 stop\n"
}
}
}
```
```console
$ touch foo bar
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test"
```
```text
...
> [4/4] RUN ls -l && stop:
#8 0.101 total 0
#8 0.102 -rw-r--r-- 1 root root 0 Jul 27 18:47 bar
#8 0.102 -rw-r--r-- 1 root root 0 Jul 27 18:47 foo
#8 0.102 /bin/sh: stop: not found
```
```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" "https://github.com/docker/cli.git#v20.10.11" --print
#1 [internal] load git source https://github.com/tonistiigi/buildx.git#remote-test
#1 0.429 577303add004dd7efeb13434d69ea030d35f7888 refs/heads/remote-test
#1 CACHED
```
```json
{
"target": {
"default": {
"context": "https://github.com/docker/cli.git#v20.10.11",
"dockerfile": "Dockerfile",
"dockerfile-inline": "FROM alpine\nWORKDIR /src\nCOPY . .\nRUN ls -l \u0026\u0026 stop\n"
}
}
}
```
```console
$ docker buildx bake "https://github.com/tonistiigi/buildx.git#remote-test" "https://github.com/docker/cli.git#v20.10.11"
```
```text
...
> [4/4] RUN ls -l && stop:
#8 0.136 drwxrwxrwx 5 root root 4096 Jul 27 18:31 kubernetes
#8 0.136 drwxrwxrwx 3 root root 4096 Jul 27 18:31 man
#8 0.136 drwxrwxrwx 2 root root 4096 Jul 27 18:31 opts
#8 0.136 -rw-rw-rw- 1 root root 1893 Jul 27 18:31 poule.yml
#8 0.136 drwxrwxrwx 7 root root 4096 Jul 27 18:31 scripts
#8 0.136 drwxrwxrwx 3 root root 4096 Jul 27 18:31 service
#8 0.136 drwxrwxrwx 2 root root 4096 Jul 27 18:31 templates
#8 0.136 drwxrwxrwx 10 root root 4096 Jul 27 18:31 vendor
#8 0.136 -rwxrwxrwx 1 root root 9620 Jul 27 18:31 vendor.conf
#8 0.136 /bin/sh: stop: not found
```
Moved to [docs.docker.com](https://docs.docker.com/build/customize/bake/file-definition)

@@ -1,327 +1,3 @@
# User defined HCL functions
## Using interpolation to tag an image with the git sha
As shown in the [File definition](file-definition.md#variable) page, `bake`
supports variable blocks which are assigned to matching environment variables
or default values:
```hcl
# docker-bake.hcl
variable "TAG" {
default = "latest"
}
group "default" {
targets = ["webapp"]
}
target "webapp" {
tags = ["docker.io/username/webapp:${TAG}"]
}
```
Alternatively, in JSON format:
```json
{
"variable": {
"TAG": {
"default": "latest"
}
},
"group": {
"default": {
"targets": ["webapp"]
}
},
"target": {
"webapp": {
"tags": ["docker.io/username/webapp:${TAG}"]
}
}
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"docker.io/username/webapp:latest"
]
}
}
}
```
```console
$ TAG=$(git rev-parse --short HEAD) docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"docker.io/username/webapp:985e9e9"
]
}
}
}
```
## Using the `add` function
You can use [`go-cty` stdlib functions](https://github.com/zclconf/go-cty/tree/main/cty/function/stdlib).
Here we are using the `add` function.
```hcl
# docker-bake.hcl
variable "TAG" {
default = "latest"
}
group "default" {
targets = ["webapp"]
}
target "webapp" {
args = {
buildno = "${add(123, 1)}"
}
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"buildno": "124"
}
}
}
}
```
## Defining an `increment` function
It also supports [user defined functions](https://github.com/hashicorp/hcl/tree/main/ext/userfunc).
The following example defines a simple `increment` function.
```hcl
# docker-bake.hcl
function "increment" {
params = [number]
result = number + 1
}
group "default" {
targets = ["webapp"]
}
target "webapp" {
args = {
buildno = "${increment(123)}"
}
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"buildno": "124"
}
}
}
}
```
## Only adding tags if a variable is not empty, using `notequal`
Here we use the conditional `notequal` function, which exists for
symmetry with the `equal` function.
```hcl
# docker-bake.hcl
variable "TAG" {default="" }
group "default" {
targets = [
"webapp",
]
}
target "webapp" {
context="."
dockerfile="Dockerfile"
tags = [
"my-image:latest",
notequal("",TAG) ? "my-image:${TAG}": "",
]
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"my-image:latest"
]
}
}
}
```
## Using variables in functions
Functions can reference variables, just like target blocks can. Stdlib
functions can also be called from within functions, but user-defined functions can't at the moment.
```hcl
# docker-bake.hcl
variable "REPO" {
default = "user/repo"
}
function "tag" {
params = [tag]
result = ["${REPO}:${tag}"]
}
target "webapp" {
tags = tag("v1")
}
```
```console
$ docker buildx bake --print webapp
```
```json
{
"group": {
"default": {
"targets": [
"webapp"
]
}
},
"target": {
"webapp": {
"context": ".",
"dockerfile": "Dockerfile",
"tags": [
"user/repo:v1"
]
}
}
}
```
## Using typed variables
Non-string variables are also accepted. Values passed via environment
variables are first parsed into the appropriate type.
```hcl
# docker-bake.hcl
variable "FOO" {
default = 3
}
variable "IS_FOO" {
default = true
}
target "app" {
args = {
v1 = FOO > 5 ? "higher" : "lower"
v2 = IS_FOO ? "yes" : "no"
}
}
```
```console
$ docker buildx bake --print app
```
```json
{
"group": {
"default": {
"targets": [
"app"
]
}
},
"target": {
"app": {
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"v1": "lower",
"v2": "yes"
}
}
}
}
```
Moved to [docs.docker.com](https://docs.docker.com/build/customize/bake/hcl-funcs)

@@ -1,36 +1,3 @@
# High-level build options with Bake
> This command is experimental.
>
> The design of bake is in early stages, and we are looking for [feedback from users](https://github.com/docker/buildx/issues).
{: .experimental }
Buildx also aims to provide support for high-level build concepts that go beyond
invoking a single build command. We want to support building all the images in
your application together and let users define project-specific reusable
build flows that can then be easily invoked by anyone.
[BuildKit](https://github.com/moby/buildkit) efficiently handles multiple
concurrent build requests and de-duplicates work. Build commands can be
combined with general-purpose command runners (for example, `make`). However,
these tools generally invoke builds in sequence and therefore can't leverage
the full potential of BuildKit parallelization, or combine BuildKit's output
for the user. For this use case, we have added a command called
[`docker buildx bake`](https://docs.docker.com/engine/reference/commandline/buildx_bake/).
The `bake` command supports building images from HCL, JSON and Compose files.
This is similar to [`docker compose build`](https://docs.docker.com/compose/reference/build/),
but it allows all the services to be built concurrently as part of a single
request. If multiple files are specified, they are all read and their
configurations are combined.
We recommend using HCL files, as the experience is more aligned with the buildx UX;
HCL also allows better code reuse, different target groups, and extended features.
## Next steps
* [File definition](file-definition.md)
* [Configuring builds](configuring-build.md)
* [User defined HCL functions](hcl-funcs.md)
* [Defining additional build contexts and linking targets](build-contexts.md)
* [Building from Compose file](compose-file.md)
Moved to [docs.docker.com](https://docs.docker.com/build/customize/bake)

@@ -1,56 +1,3 @@
# Azure Blob Storage cache storage
> **Warning**
>
> This cache backend is unreleased. You can use it today, by using the
> `moby/buildkit:master` image in your Buildx driver.
The `azblob` cache store uploads your resulting build cache to
[Azure's blob storage service](https://azure.microsoft.com/en-us/services/storage/blobs/).
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](https://docs.docker.com/build/building/drivers/). To create a new
> driver (which can act as a simple drop-in replacement):
>
> ```console
> $ docker buildx create --use --driver=docker-container
> ```
## Synopsis
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=azblob,name=<cache-image>[,parameters...] \
--cache-from type=azblob,name=<cache-image>[,parameters...] .
```
The following table describes the available CSV parameters that you can pass to
`--cache-to` and `--cache-from`.
| Name | Option | Type | Default | Description |
| ------------------- | ----------------------- | ----------- | ------- | -------------------------------------------------- |
| `name` | `cache-to`,`cache-from` | String | | Required. The name of the cache image. |
| `account_url` | `cache-to`,`cache-from` | String | | Base URL of the storage account. |
| `secret_access_key` | `cache-to`,`cache-from` | String | | Blob storage account key, see [authentication][1]. |
| `mode` | `cache-to` | `min`,`max` | `min` | Cache layers to export, see [cache mode][2]. |
[1]: #authentication
[2]: index.md#cache-mode
## Authentication
The `secret_access_key`, if left unspecified, is read from environment variables
on the BuildKit server following the scheme for the
[Azure Go SDK](https://docs.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication).
The environment variables are read from the server, not the Buildx client.
## Further reading
For an introduction to caching see
[Optimizing builds with cache](https://docs.docker.com/build/building/cache).
For more information on the `azblob` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#azure-blob-storage-cache-experimental).
Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/azblob)

View File

@ -1,111 +1,3 @@
# GitHub Actions cache storage
> **Warning**
>
> The GitHub Actions cache is a beta feature. You can use it today, in current
> releases of Buildx and BuildKit. However, the interface and behavior are
> unstable and may change in future releases.
The GitHub Actions cache utilizes the
[GitHub-provided Actions cache](https://github.com/actions/cache) available
from within your CI execution environment. This is the recommended cache to use
inside your GitHub Actions pipelines, as long as your use case falls within the
[size and usage limits set by GitHub](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy).
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](https://docs.docker.com/build/building/drivers/). To create a new
> driver (which can act as a simple drop-in replacement):
>
> ```console
> $ docker buildx create --use --driver=docker-container
> ```
## Synopsis
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=gha[,parameters...] \
--cache-from type=gha[,parameters...] .
```
The following table describes the available CSV parameters that you can pass to
`--cache-to` and `--cache-from`.
| Name | Option | Type | Default | Description |
| ------- | ----------------------- | ----------- | ------------------------------- | -------------------------------------------- |
| `url` | `cache-to`,`cache-from` | String | `$ACTIONS_CACHE_URL` | Cache server URL, see [authentication][1]. |
| `token` | `cache-to`,`cache-from` | String | `$ACTIONS_RUNTIME_TOKEN` | Access token, see [authentication][1]. |
| `scope` | `cache-to`,`cache-from` | String | Name of the current Git branch. | Cache scope, see [scope][2] |
| `mode` | `cache-to` | `min`,`max` | `min` | Cache layers to export, see [cache mode][3]. |
[1]: #authentication
[2]: #scope
[3]: index.md#cache-mode
## Authentication
If the `url` or `token` parameters are left unspecified, the `gha` cache backend
will fall back to using environment variables. If you invoke the `docker buildx`
command manually from an inline step, then the variables must be manually
exposed (using
[`crazy-max/ghaction-github-runtime`](https://github.com/crazy-max/ghaction-github-runtime),
for example).
## Scope
By default, cache is scoped per Git branch. This ensures a separate cache
environment for the main branch and each feature branch. If you build multiple
images on the same branch, each build will overwrite the cache of the previous,
leaving only the final cache.
To preserve the cache for multiple builds on the same branch, you can manually
specify a cache scope name using the `scope` parameter. In the following
example, the scope is set to a combination of the branch name and the image
name, to ensure each image gets its own cache:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image \
--cache-from type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image .
$ docker buildx build --push -t <registry>/<image2> \
--cache-to type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image2 \
--cache-from type=gha,url=...,token=...,scope=$GITHUB_REF_NAME-image2 .
```
GitHub's
[cache access restrictions](https://docs.github.com/en/actions/advanced-guides/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache)
still apply. Only the cache for the current branch, the base branch and the
default branch is accessible by a workflow.
### Using `docker/build-push-action`
When using the
[`docker/build-push-action`](https://github.com/docker/build-push-action), the
`url` and `token` parameters are automatically populated. There's no need to manually
specify them, or to include any additional workarounds.
For example:
```yaml
- name: Build and push
uses: docker/build-push-action@v3
with:
context: .
push: true
tags: "<registry>/<image>:latest"
cache-from: type=gha
cache-to: type=gha,mode=max
```
<!-- FIXME: cross-link to ci docs once docs.docker.com has them -->
## Further reading
For an introduction to caching see
[Optimizing builds with cache](https://docs.docker.com/build/building/cache).
For more information on the `gha` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#github-actions-cache-experimental).
Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/gha)

@@ -1,158 +1,3 @@
# Cache storage backends
To ensure fast builds, BuildKit automatically caches the build result in its own
internal cache. Additionally, BuildKit also supports exporting build cache to an
external location, making it possible to import in future builds.
An external cache becomes almost essential in CI/CD build environments. Such
environments usually have little-to-no persistence between runs, but it's still
important to keep the runtime of image builds as low as possible.
> **Warning**
>
> If you use secrets or credentials inside your build process, ensure you
> manipulate them using the dedicated
> [`--secret` option](https://docs.docker.com/engine/reference/commandline/buildx_build/#secret).
> Manually managing secrets using `COPY` or `ARG` could result in leaked
> credentials.
## Backends
Buildx supports the following cache storage backends:
- `inline`: embeds the build cache into the image.
The inline cache gets pushed to the same location as the main output result.
Note that this only works for the `image` exporter.
- `registry`: embeds the build cache into a separate image, and pushes to a
dedicated location separate from the main output.
- `local`: writes the build cache to a local directory on the filesystem.
- `gha`: uploads the build cache to
[GitHub Actions cache](https://docs.github.com/en/rest/actions/cache) (beta).
- `s3`: uploads the build cache to an
[AWS S3 bucket](https://aws.amazon.com/s3/) (unreleased).
- `azblob`: uploads the build cache to
[Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
(unreleased).
## Command syntax
To use any of the cache backends, you first need to specify it on build with the
[`--cache-to` option](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to)
to export the cache to your storage backend of choice. Then, use the
[`--cache-from` option](https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-from)
to import the cache from the storage backend into the current build. Unlike the
local BuildKit cache (which is always enabled), all of the cache storage
backends must be explicitly exported to, and explicitly imported from. All cache
exporters except for the `inline` cache require that you
[select an alternative Buildx driver](../../drivers/index.md).
Example `buildx` command using the `registry` backend, using import and export
cache:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
--cache-from type=registry,ref=<registry>/<cache-image>[,parameters...] .
```
> **Warning**
>
> As a general rule, each cache writes to some location. No location can be
> written to twice, without overwriting the previously cached data. If you want
> to maintain multiple scoped caches (for example, a cache per Git branch), then
> ensure that you use different locations for exported cache.
## Multiple caches
BuildKit currently only supports
[a single cache exporter](https://github.com/moby/buildkit/pull/3024). But you
can import from as many remote caches as you like. For example, a common pattern
is to use the cache of both the current branch and the main branch. The
following example shows importing cache from multiple locations using the
registry cache backend:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>:<branch> \
--cache-from type=registry,ref=<registry>/<cache-image>:<branch> \
--cache-from type=registry,ref=<registry>/<cache-image>:main .
```
## Configuration options
This section describes some of the configuration options available when
generating cache exports. The options described here are common for at least two
or more backend types. Additionally, the different backend types support
specific parameters as well. See the detailed page about each backend type for
more information about which configuration parameters apply.
The common parameters described here are:
- [Cache mode](#cache-mode)
- [Cache compression](#cache-compression)
- [OCI media type](#oci-media-types)
### Cache mode
When generating a cache output, the `--cache-to` argument accepts a `mode`
option for defining which layers to include in the exported cache. This is
supported by all cache backends except for the `inline` cache.
Mode can be set to either of two options: `mode=min` or `mode=max`. For example,
to build the cache with `mode=max` with the registry backend:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,mode=max \
--cache-from type=registry,ref=<registry>/<cache-image> .
```
This option is only set when exporting a cache, using `--cache-to`. When
importing a cache (`--cache-from`) the relevant parameters are automatically
detected.
In `min` cache mode (the default), only layers that are exported into the
resulting image are cached, while in `max` cache mode, all layers are cached,
even those of intermediate steps.
While `min` cache is typically smaller (which speeds up import/export times, and
reduces storage costs), `max` cache is more likely to get more cache hits.
Depending on the complexity and location of your build, you should experiment
with both parameters to find the results that work best for you.
### Cache compression
The cache compression options are the same as the
[exporter compression options](../../exporters/index.md#compression). This is
supported by the `local` and `registry` cache backends.
For example, to compress the `registry` cache with `zstd` compression:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,compression=zstd \
--cache-from type=registry,ref=<registry>/<cache-image> .
```
### OCI media types
The cache OCI options are the same as the
[exporter OCI options](../../exporters/index.md#oci-media-types). These are
supported by the `local` and `registry` cache backends.
For example, to export OCI media type cache, use the `oci-mediatypes` property:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>,oci-mediatypes=true \
--cache-from type=registry,ref=<registry>/<cache-image> .
```
This property is only meaningful with the `--cache-to` flag. When fetching
cache, BuildKit will auto-detect the correct media types to use.
Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends)

@@ -1,52 +1,3 @@
# Inline cache storage
The `inline` cache storage backend is the simplest way to get an external cache
and is easy to get started using if you're already building and pushing an
image. However, it doesn't scale to multi-stage builds as well as the other
backends do. It also doesn't offer separation between your output artifacts
and your cache output. This means that if you're using a particularly complex
build flow, or not exporting your images directly to a registry, then you may
want to consider the [registry](./registry.md) cache.
## Synopsis
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=inline \
--cache-from type=registry,ref=<registry>/<image> .
```
No additional parameters are supported for the `inline` cache.
To export cache using `inline` storage, pass `type=inline` to the `--cache-to`
option:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=inline .
```
Alternatively, you can also export inline cache by setting the build argument
`BUILDKIT_INLINE_CACHE=1`, instead of using the `--cache-to` flag:
```console
$ docker buildx build --push -t <registry>/<image> \
--build-arg BUILDKIT_INLINE_CACHE=1 .
```
To import the resulting cache on a future build, pass `type=registry` to
`--cache-from` which lets you extract the cache from inside a Docker image in
the specified registry:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-from type=registry,ref=<registry>/<image> .
```
## Further reading
For an introduction to caching see
[Optimizing builds with cache](https://docs.docker.com/build/building/cache).
For more information on the `inline` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#inline-push-image-and-cache-together).
Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/inline)

@@ -1,102 +1,3 @@
# Local cache storage
The `local` cache store is a simple cache option that stores your cache as files
in a directory on your filesystem, using an
[OCI image layout](https://github.com/opencontainers/image-spec/blob/main/image-layout.md)
for the underlying directory structure. Local cache is a good choice if you're
just testing, or if you want the flexibility to self-manage a shared storage
solution.
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](https://docs.docker.com/build/building/drivers/). To create a new
> driver (which can act as a simple drop-in replacement):
>
> ```console
> $ docker buildx create --use --driver=docker-container
> ```
## Synopsis
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=local,dest=path/to/local/dir[,parameters...] \
--cache-from type=local,src=path/to/local/dir .
```
The following table describes the available CSV parameters that you can pass to
`--cache-to` and `--cache-from`.
| Name | Option | Type | Default | Description |
| ------------------- | ------------ | ----------------------- | ------- | -------------------------------------------------------------------- |
| `src` | `cache-from` | String | | Path of the local directory where cache gets imported from. |
| `digest` | `cache-from` | String | | Digest of manifest to import, see [cache versioning][4]. |
| `dest` | `cache-to` | String | | Path of the local directory where cache gets exported to. |
| `mode` | `cache-to` | `min`,`max` | `min` | Cache layers to export, see [cache mode][1]. |
| `oci-mediatypes` | `cache-to` | `true`,`false` | `true` | Use OCI media types in exported manifests, see [OCI media types][2]. |
| `compression` | `cache-to` | `gzip`,`estargz`,`zstd` | `gzip` | Compression type, see [cache compression][3]. |
| `compression-level` | `cache-to` | `0..22` | | Compression level, see [cache compression][3]. |
| `force-compression` | `cache-to` | `true`,`false` | `false` | Forcibly apply compression, see [cache compression][3]. |
[1]: index.md#cache-mode
[2]: index.md#oci-media-types
[3]: index.md#cache-compression
[4]: #cache-versioning
If the `src` cache doesn't exist, then the cache import step will fail, but the
build will continue.
## Cache versioning
<!-- FIXME: update once https://github.com/moby/buildkit/pull/3111 is released -->
This section describes how versioning works for caches on a local filesystem,
and how you can use the `digest` parameter to use older versions of cache.
If you inspect the cache directory manually, you can see the resulting OCI image
layout:
```console
$ ls cache
blobs index.json ingest
$ cat cache/index.json | jq
{
"schemaVersion": 2,
"manifests": [
{
"mediaType": "application/vnd.oci.image.index.v1+json",
"digest": "sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707",
"size": 1560,
"annotations": {
"org.opencontainers.image.ref.name": "latest"
}
}
]
}
```
Like other cache types, local cache gets replaced on export, by replacing the
contents of the `index.json` file. However, previous caches will still be
available in the `blobs` directory. These old caches are addressable by digest,
and kept indefinitely. Therefore, the size of the local cache will continue to
grow (see [`moby/buildkit#1896`](https://github.com/moby/buildkit/issues/1896)
for more information).
When importing cache using `--cache-from`, you can specify the `digest` parameter
to force loading an older version of the cache, for example:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=local,dest=path/to/local/dir \
--cache-from type=local,src=path/to/local/dir,digest=sha256:6982c70595cb91769f61cd1e064cf5f41d5357387bab6b18c0164c5f98c1f707 .
```
## Further reading
For an introduction to caching see
[Optimizing builds with cache](https://docs.docker.com/build/building/cache).
For more information on the `local` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#local-directory-1).
Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/local)

@@ -1,70 +1,3 @@
# Registry cache storage
The `registry` cache storage can be thought of as an extension to the `inline`
cache. Unlike the `inline` cache, the `registry` cache is entirely separate from
the image, which allows for more flexible usage - `registry`-backed cache can do
everything that the inline cache can do, and more:
- Allows for separating the cache and resulting image artifacts so that you can
distribute your final image without the cache inside.
- It can efficiently cache multi-stage builds in `max` mode, instead of only the
final stage.
- It works with other exporters for more flexibility, instead of only the
`image` exporter.
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](https://docs.docker.com/build/building/drivers/). To create a new
> driver (which can act as a simple drop-in replacement):
>
> ```console
> $ docker buildx create --use --driver=docker-container
> ```
## Synopsis
Unlike the simpler `inline` cache, the `registry` cache supports several
configuration parameters:
```console
$ docker buildx build --push -t <registry>/<image> \
--cache-to type=registry,ref=<registry>/<cache-image>[,parameters...] \
--cache-from type=registry,ref=<registry>/<cache-image> .
```
The following table describes the available CSV parameters that you can pass to
`--cache-to` and `--cache-from`.
| Name | Option | Type | Default | Description |
| ------------------- | ----------------------- | ----------------------- | ------- | -------------------------------------------------------------------- |
| `ref` | `cache-to`,`cache-from` | String | | Full name of the cache image to import. |
| `mode` | `cache-to` | `min`,`max` | `min` | Cache layers to export, see [cache mode][1]. |
| `oci-mediatypes` | `cache-to` | `true`,`false` | `true` | Use OCI media types in exported manifests, see [OCI media types][2]. |
| `compression` | `cache-to` | `gzip`,`estargz`,`zstd` | `gzip` | Compression type, see [cache compression][3]. |
| `compression-level` | `cache-to` | `0..22` | | Compression level, see [cache compression][3]. |
| `force-compression` | `cache-to` | `true`,`false` | `false` | Forcibly apply compression, see [cache compression][3]. |
[1]: index.md#cache-mode
[2]: index.md#oci-media-types
[3]: index.md#cache-compression
You can choose any valid value for `ref`, as long as it's not the same as the
target location that you push your image to. You might choose different tags
(e.g. `foo/bar:latest` and `foo/bar:build-cache`), separate image names (e.g.
`foo/bar` and `foo/bar-cache`), or even different repositories (e.g.
`docker.io/foo/bar` and `ghcr.io/foo/bar`). It's up to you to decide the
strategy that you want to use for separating your image from your cache images.
If the `--cache-from` target doesn't exist, then the cache import step will
fail, but the build will continue.
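Bake targets accept the same CSV values through their `cache-from` and `cache-to` fields (see the Bake file definition), so the registry cache can also be configured per target. A minimal sketch (the image and cache-image names are illustrative):
```hcl
# docker-bake.hcl
target "app" {
  tags       = ["docker.io/foo/bar:latest"]
  # same CSV syntax as the --cache-to / --cache-from flags
  cache-to   = ["type=registry,ref=docker.io/foo/bar:build-cache,mode=max"]
  cache-from = ["type=registry,ref=docker.io/foo/bar:build-cache"]
}
```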
## Further reading
For an introduction to caching see
[Optimizing builds with cache](https://docs.docker.com/build/building/cache).
For more information on the `registry` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#registry-push-image-and-cache-separately).
Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/registry)

@@ -1,63 +1,3 @@
# Amazon S3 cache storage
> **Warning**
>
> This cache backend is unreleased. You can use it today, by using the
> `moby/buildkit:master` image in your Buildx driver.
The `s3` cache storage uploads your resulting build cache to
[Amazon S3 file storage service](https://aws.amazon.com/s3/), into a specified
bucket.
> **Note**
>
> This cache storage backend requires using a different driver than the default
> `docker` driver - see more information on selecting a driver
> [here](https://docs.docker.com/build/building/drivers/). To create a new
> driver (which can act as a simple drop-in replacement):
>
> ```console
> $ docker buildx create --use --driver=docker-container
> ```
## Synopsis
```console
$ docker buildx build --push -t <user>/<image> \
--cache-to type=s3,region=<region>,bucket=<bucket>,name=<cache-image>[,parameters...] \
--cache-from type=s3,region=<region>,bucket=<bucket>,name=<cache-image> .
```
The following table describes the available CSV parameters that you can pass to
`--cache-to` and `--cache-from`.
| Name | Option | Type | Default | Description |
| ------------------- | ----------------------- | ----------- | ------- | -------------------------------------------- |
| `region` | `cache-to`,`cache-from` | String | | Geographic location. |
| `bucket` | `cache-to`,`cache-from` | String | | Name of the S3 bucket used for caching |
| `name` | `cache-to`,`cache-from` | String | | Name of the cache image |
| `access_key_id` | `cache-to`,`cache-from` | String | | See [authentication][1] |
| `secret_access_key` | `cache-to`,`cache-from` | String | | See [authentication][1] |
| `session_token` | `cache-to`,`cache-from` | String | | See [authentication][1] |
| `mode` | `cache-to` | `min`,`max` | `min` | Cache layers to export, see [cache mode][2]. |
[1]: #authentication
[2]: index.md#cache-mode
## Authentication
`access_key_id`, `secret_access_key`, and `session_token`, if left unspecified,
are read from environment variables on the BuildKit server following the scheme
for the
[AWS Go SDK](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html).
The environment variables are read from the server, not the Buildx client.
<!-- FIXME: update once https://github.com/docker/buildx/pull/1294 is released -->
## Further reading
For an introduction to caching see
[Optimizing builds with cache](https://docs.docker.com/build/building/cache).
For more information on the `s3` cache backend, see the
[BuildKit README](https://github.com/moby/buildkit#s3-cache-experimental).
Moved to [docs.docker.com](https://docs.docker.com/build/building/cache/backends/s3)

@@ -1,141 +1,3 @@
# Docker container driver
The buildx Docker container driver allows creation of a managed and customizable
BuildKit environment in a dedicated Docker container.
Using the Docker container driver has a couple of advantages over the default
Docker driver. For example:
- Specify custom BuildKit versions to use.
- Build multi-arch images, see [QEMU](#qemu).
- Advanced options for
[cache import and export](../cache/backends/index.md).
## Synopsis
Run the following command to create a new builder, named `container`, that uses
the Docker container driver:
```console
$ docker buildx create \
--name container \
--driver=docker-container \
--driver-opt=[key=value,...]
container
```
The following table describes the available driver-specific options that you can
pass to `--driver-opt`:
| Parameter | Type | Default | Description |
| --------------- | ------ | ---------------- | ------------------------------------------------------------------------------------------ |
| `image` | String | | Sets the image to use for running BuildKit. |
| `network` | String | | Sets the network mode for running the BuildKit container. |
| `cgroup-parent` | String | `/docker/buildx` | Sets the cgroup parent of the BuildKit container if Docker is using the `cgroupfs` driver. |
| `env.<key>` | String | | Sets the environment variable `key` to the specified `value` in the BuildKit container. |
## Usage
When you run a build, Buildx pulls the specified `image` (by default,
[`moby/buildkit`](https://hub.docker.com/r/moby/buildkit)). When the container
has started, Buildx submits the build to the containerized build
server.
```console
$ docker buildx build -t <image> --builder=container .
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 1.9s done
#1 creating container buildx_buildkit_container0
#1 creating container buildx_buildkit_container0 0.5s done
#1 DONE 2.4s
...
```
## Loading to local image store
Unlike when using the default `docker` driver, images built with the
`docker-container` driver must be explicitly loaded into the local image store.
Use the `--load` flag:
```console
$ docker buildx build --load -t <image> --builder=container .
...
=> exporting to oci image format 7.7s
=> => exporting layers 4.9s
=> => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3 0.0s
=> => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f 0.0s
=> => sending tarball 2.8s
=> importing to docker
```
The image becomes available in the image store when the build finishes:
```console
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<image> latest adf3eec768a1 2 minutes ago 197MB
```
## Cache persistence
The `docker-container` driver supports cache persistence, as it stores all of
the BuildKit state and related cache in a dedicated Docker volume.
To persist the `docker-container` driver's cache, even after recreating the
driver using `docker buildx rm` and `docker buildx create`, you can remove the
builder using the `--keep-state` flag:
For example, to create a builder named `container` and then remove it while
persisting state:
```console
# setup a builder
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
container * docker-container
container0 desktop-linux running v0.10.5 linux/amd64
$ docker volume ls
DRIVER VOLUME NAME
local buildx_buildkit_container0_state
# remove the builder while persisting state
$ docker buildx rm --keep-state container
$ docker volume ls
DRIVER VOLUME NAME
local buildx_buildkit_container0_state
# the newly created driver with the same name will have all the state of the previous one!
$ docker buildx create --name=container --driver=docker-container --use --bootstrap
container
```
## QEMU
The `docker-container` driver supports using [QEMU](https://www.qemu.org/) (user
mode) to build non-native platforms. Use the `--platform` flag to specify the
platforms you want to build for.
For example, to build a Linux image for `amd64` and `arm64`:
```console
$ docker buildx build \
--builder=container \
--platform=linux/amd64,linux/arm64 \
-t <registry>/<image> \
--push .
```
> **Warning**
>
> QEMU performs full-system emulation of non-native platforms, which is much
> slower than native builds. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.
## Further reading
For more information on the Docker container driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers/docker-container)

View File

@ -1,30 +1,3 @@
# Docker driver
The Buildx Docker driver is the default driver. It uses the BuildKit server
components built directly into the Docker engine. The Docker driver requires no
configuration.
Unlike the other drivers, builders using the Docker driver can't be manually
created. They're only created automatically from the Docker context.
Images built with the Docker driver are automatically loaded to the local image
store.
## Synopsis
```console
# The Docker driver is used by buildx by default
docker buildx build .
```
It's not possible to configure which BuildKit version to use, or to pass any
additional BuildKit parameters to a builder using the Docker driver. The
BuildKit version and parameters are preset by the Docker engine internally.
If you need additional configuration and flexibility, consider using the
[Docker container driver](./docker-container.md).
## Further reading
For more information on the Docker driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers/docker)

View File

@ -1,76 +1,3 @@
# Buildx drivers overview
Buildx drivers are configurations for how and where the BuildKit backend runs.
Driver settings are customizable and allow fine-grained control of the builder.
Buildx supports the following drivers:
- `docker`: uses the BuildKit library bundled into the Docker daemon.
- `docker-container`: creates a dedicated BuildKit container using Docker.
- `kubernetes`: creates BuildKit pods in a Kubernetes cluster.
- `remote`: connects directly to a manually managed BuildKit daemon.
Different drivers support different use cases. The default `docker` driver
prioritizes simplicity and ease of use. It has limited support for advanced
features like caching and output formats, and isn't configurable. Other drivers
provide more flexibility and are better at handling advanced scenarios.
The following table outlines some of the differences between drivers.
| Feature | `docker` | `docker-container` | `kubernetes` | `remote` |
| :--------------------------- | :---------: | :----------------: | :----------: | :----------------: |
| **Automatically load image** | ✅ | | | |
| **Cache export** | Inline only | ✅ | ✅ | ✅ |
| **Tarball output** | | ✅ | ✅ | ✅ |
| **Multi-arch images** | | ✅ | ✅ | ✅ |
| **BuildKit configuration** | | ✅ | ✅ | Managed externally |
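For example, while the default `docker` driver only supports inline cache
export, a `docker-container` builder (assumed here to be named `container`)
can export cache to a registry:

```console
# Inline cache with the default docker driver
$ docker buildx build --cache-to=type=inline -t <registry>/<image> .

# Registry cache with a docker-container builder
$ docker buildx build \
  --builder=container \
  --cache-to=type=registry,ref=<registry>/<cache-image> \
  -t <registry>/<image> .
```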
## List available builders
Use `docker buildx ls` to see builder instances available on your system, and
the drivers they're using.
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
```
Depending on your setup, you may find multiple builders in your list that use
the Docker driver. For example, on a system that runs both a manually installed
version of dockerd and Docker Desktop, you might see the following output from
`docker buildx ls`:
```console
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
desktop-linux * docker
desktop-linux desktop-linux running 20.10.17 linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
This is because the Docker driver builders are automatically pulled from the
available
[Docker Contexts](https://docs.docker.com/engine/context/working-with-contexts/).
When you add new contexts using `docker context create`, these will appear in
your list of buildx builders.
## Create a new builder
Use the
[`docker buildx create`](https://docs.docker.com/engine/reference/commandline/buildx_create/)
command to create a builder, and specify the driver using the `--driver` option.
```console
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
```
## What's next
Read about each of the Buildx drivers to learn about how they work and how to
use them:
- [Docker driver](./docker.md)
- [Docker container driver](./docker-container.md)
- [Kubernetes driver](./kubernetes.md)
- [Remote driver](./remote.md)
Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers)

View File

@ -1,323 +1,3 @@
# Kubernetes driver
The Buildx Kubernetes driver lets you connect your local development or CI
environments to your Kubernetes cluster, giving you access to more powerful
and varied compute resources.
## Synopsis
Run the following command to create a new builder, named `kube`, that uses
the Kubernetes driver:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=[key=value,...]
```
The following table describes the available driver-specific options that you can
pass to `--driver-opt`:
| Parameter | Type | Default | Description |
| ----------------- | ----------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `image` | String | | Sets the image to use for running BuildKit. |
| `namespace` | String | Namespace in current Kubernetes context | Sets the Kubernetes namespace. |
| `replicas` | Integer | 1 | Sets the number of Pod replicas to create. See [scaling BuildKit][1] |
| `requests.cpu` | CPU units | | Sets the request CPU value specified in units of Kubernetes CPU. For example `requests.cpu=100m` or `requests.cpu=2` |
| `requests.memory` | Memory size | | Sets the request memory value specified in bytes or with a valid suffix. For example `requests.memory=500Mi` or `requests.memory=4G` |
| `limits.cpu`      | CPU units         |                                         | Sets the limit CPU value specified in units of Kubernetes CPU. For example `limits.cpu=100m` or `limits.cpu=2`                         |
| `limits.memory`   | Memory size       |                                         | Sets the limit memory value specified in bytes or with a valid suffix. For example `limits.memory=500Mi` or `limits.memory=4G`         |
| `nodeselector` | CSV string | | Sets the pod's `nodeSelector` label(s). See [node assignment][2]. |
| `tolerations` | CSV string | | Configures the pod's taint toleration. See [node assignment][2]. |
| `rootless` | `true`,`false` | `false` | Run the container as a non-root user. See [rootless mode][3]. |
| `loadbalance` | `sticky`,`random` | `sticky` | Load-balancing strategy. If set to `sticky`, the pod is chosen using the hash of the context path. |
| `qemu.install`    | `true`,`false`    |                                         | Install QEMU emulation for multi-platform support. See [QEMU][4].                                                                      |
| `qemu.image` | String | `tonistiigi/binfmt:latest` | Sets the QEMU emulation image. See [QEMU][4]. |
[1]: #scaling-buildkit
[2]: #node-assignment
[3]: #rootless-mode
[4]: #qemu
## Scaling BuildKit
One of the main advantages of the Kubernetes driver is that you can scale the
number of builder replicas up and down to handle increased build load. Scaling
is configurable using the following driver options:
- `replicas=N`
This scales the number of BuildKit pods to the desired size. By default, it
only creates a single pod. Increasing the number of replicas lets you take
advantage of multiple nodes in your cluster.
- `requests.cpu`, `requests.memory`, `limits.cpu`, `limits.memory`
These options allow requesting and limiting the resources available to each
BuildKit pod according to the official Kubernetes documentation
[here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
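For instance, the following sketch creates a builder that both requests and
caps resources for each BuildKit pod (the values are illustrative):

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,requests.cpu=1,requests.memory=2Gi,limits.cpu=2,limits.memory=4Gi
```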
For example, to create 4 replica BuildKit pods:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,replicas=4
```
Listing the pods, you get this:
```console
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 4/4 4 4 8s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-48ld2 1/1 Running 0 8s
kube0-6977cdcb75-rkc6b 1/1 Running 0 8s
kube0-6977cdcb75-vb4ks 1/1 Running 0 8s
kube0-6977cdcb75-z4fzs 1/1 Running 0 8s
```
Additionally, you can use the `loadbalance=(sticky|random)` option to control
the load-balancing behavior when there are multiple replicas. `random` selects
a random node from the pool, providing an even workload distribution across
replicas. `sticky` (the default) routes repeated invocations of the same build
to the same node each time, making better use of the local cache.
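For example, to create a builder that distributes builds randomly across four
replicas (an illustrative configuration):

```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,loadbalance=random
```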
For more information on scalability, see the options for
[buildx create](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver-opt).
## Node assignment
The Kubernetes driver allows you to control the scheduling of BuildKit pods
using the `nodeSelector` and `tolerations` driver options.
The value of the `nodeSelector` parameter is a comma-separated string of
key-value pairs, where the key is the node label and the value is the label
text. For example: `"nodeselector=kubernetes.io/arch=arm64"`
The `tolerations` parameter is a semicolon-separated list of taints. It accepts
the same values as the Kubernetes manifest. Each `tolerations` entry specifies a
taint key and the value, operator, or effect. For example:
`"tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"`
Due to quoting rules for shell commands, you must wrap the `nodeselector` and
`tolerations` parameters in single quotes. You can even wrap all of
`--driver-opt` in single quotes, for example:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
'--driver-opt="nodeselector=label1=value1,label2=value2","tolerations=key=key1,value=value1"'
```
## Multi-platform builds
The Buildx Kubernetes driver has support for creating
[multi-platform images](https://docs.docker.com/build/building/multi-platform/),
either using QEMU or by leveraging the native architecture of nodes.
### QEMU
Like the `docker-container` driver, the Kubernetes driver also supports using
[QEMU](https://www.qemu.org/) (user mode) to build images for non-native
platforms. Include the `--platform` flag and specify which platforms you want to
output to.
For example, to build a Linux image for `amd64` and `arm64`:
```console
$ docker buildx build \
--builder=kube \
--platform=linux/amd64,linux/arm64 \
-t <user>/<image> \
--push .
```
> **Warning**
>
> QEMU performs full-system emulation of non-native platforms, which is much
> slower than native builds. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.
Using a custom BuildKit image or invoking non-native binaries in builds may
require that you explicitly turn on QEMU using the `qemu.install` option when
creating the builder:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,qemu.install=true
```
### Native
If you have access to cluster nodes of different architectures, the Kubernetes
driver can take advantage of these for native builds. To do this, use the
`--append` flag of `docker buildx create`.
First, create your builder with explicit support for a single architecture, for
example `amd64`:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/amd64 \
--node=builder-amd64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
```
This creates a Buildx builder named `kube`, containing a single builder node
`builder-amd64`. Note that the Buildx concept of a node isn't the same as the
Kubernetes concept of a node. A Buildx node in this case could connect multiple
Kubernetes nodes of the same architecture together.
With the `kube` builder created, you can now introduce another architecture into
the mix using `--append`. For example, to add `arm64`:
```console
$ docker buildx create \
--append \
--bootstrap \
--name=kube \
--driver=kubernetes \
--platform=linux/arm64 \
--node=builder-arm64 \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
```
If you list builders now, you should be able to see both nodes present:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig= running linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig= running linux/arm64*
```
You should now be able to build multi-arch images with `amd64` and `arm64`
combined, by specifying those platforms together in your buildx command:
```console
$ docker buildx build --builder=kube --platform=linux/amd64,linux/arm64 -t <user>/<image> --push .
```
You can repeat the `buildx create --append` command for as many different
architectures as you want to support.
## Rootless mode
The Kubernetes driver supports rootless mode. For more information on how
rootless mode works, and its requirements, see
[here](https://github.com/moby/buildkit/blob/master/docs/rootless.md).
To turn it on in your cluster, you can use the `rootless=true` driver option:
```console
$ docker buildx create \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,rootless=true
```
This creates your pods without `securityContext.privileged`.
Rootless mode requires Kubernetes version 1.19 or later. Using Ubuntu as the
host kernel is recommended.
## Example: Creating a Buildx builder in Kubernetes
This guide shows you how to:
- Create a namespace for your Buildx resources
- Create a Kubernetes builder
- List the available builders
- Build an image using your Kubernetes builders
Prerequisites:
- You have an existing Kubernetes cluster. If you don't already have one, you
can follow along by installing [minikube](https://minikube.sigs.k8s.io/docs/).
- The cluster you want to connect to is accessible via the `kubectl` command,
with the `KUBECONFIG` environment variable
[set appropriately](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)
if necessary.
1. Create a `buildkit` namespace.
Creating a separate namespace helps keep your Buildx resources separate from
other resources in the cluster.
```console
$ kubectl create namespace buildkit
namespace/buildkit created
```
2. Create a new Buildx builder with the Kubernetes driver:
```console
# Remember to specify the namespace in driver options
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit
```
3. List available Buildx builders using `docker buildx ls`
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
kube0-6977cdcb75-k9h9m running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
```
4. Inspect the running pods created by the Buildx driver with `kubectl`.
```console
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 1/1 1 1 32s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-k9h9m 1/1 Running 0 32s
```
The buildx driver creates the necessary resources on your cluster in the
specified namespace (in this case, `buildkit`), while keeping your driver
configuration stored locally.
5. Use your new builder by including the `--builder` flag when running buildx
commands. For example:
```console
# Replace <registry> with your Docker username
# and <image> with the name of the image you want to build
docker buildx build \
--builder=kube \
-t <registry>/<image> \
--push .
```
That's it! You've now built an image using a Buildx builder running in a
Kubernetes pod!
## Further reading
For more information on the Kubernetes driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers/kubernetes)

View File

@ -1,192 +1,3 @@
# Remote driver
The Buildx remote driver enables more complex custom build workloads by
letting you connect to externally managed BuildKit instances. This is useful
for scenarios that require manual management of the BuildKit daemon, or where a
BuildKit daemon is exposed from another source.
## Synopsis
```console
$ docker buildx create \
--name remote \
--driver remote \
tcp://localhost:1234
```
The following table describes the available driver-specific options that you can
pass to `--driver-opt`:
| Parameter | Type | Default | Description |
| ------------ | ------ | ------------------ | ---------------------------------------------------------- |
| `key` | String | | Sets the TLS client key. |
| `cert` | String | | Sets the TLS client certificate to present to `buildkitd`. |
| `cacert` | String | | Sets the TLS certificate authority used for validation. |
| `servername` | String | Endpoint hostname. | Sets the TLS server name used in requests. |
## Example: Remote BuildKit over Unix sockets
This guide shows you how to create a setup with a BuildKit daemon listening on a
Unix socket, and have Buildx connect through it.
1. Ensure that [BuildKit](https://github.com/moby/buildkit) is installed.
For example, you can launch an instance of buildkitd with:
```console
$ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
```
Alternatively,
[see here](https://github.com/moby/buildkit/blob/master/docs/rootless.md) for
running buildkitd in rootless mode or
[here](https://github.com/moby/buildkit/tree/master/examples/systemd) for
examples of running it as a systemd service.
2. Check that you have a Unix socket that you can connect to.
```console
$ ls -lh /home/user/buildkitd.sock
srw-rw---- 1 root user 0 May 5 11:04 /home/user/buildkitd.sock
```
3. Connect Buildx to it using the remote driver:
```console
$ docker buildx create \
--name remote-unix \
--driver remote \
unix://$HOME/buildkitd.sock
```
4. List available builders with `docker buildx ls`. You should then see
`remote-unix` among them:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
remote-unix remote
remote-unix0 unix:///home/.../buildkitd.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
```
You can switch to this new builder as the default using
`docker buildx use remote-unix`, or specify it per build using `--builder`:
```console
$ docker buildx build --builder=remote-unix -t test --load .
```
Remember that you need to use the `--load` flag if you want to load the build
result into the Docker daemon.
## Example: Remote BuildKit in Docker container
This guide shows you how to create a setup similar to the `docker-container`
driver, by manually booting a BuildKit Docker container and connecting to it
using the Buildx remote driver. This procedure manually creates a container
and accesses it via its exposed port. (You'd probably be better off just using
the `docker-container` driver, which connects to BuildKit through the Docker
daemon, but this is for illustration purposes.)
1. Generate certificates for BuildKit.
You can use the
[create-certs.sh](https://github.com/moby/buildkit/blob/v0.10.3/examples/kubernetes/create-certs.sh)
script as a starting point. Note that while it's possible to expose BuildKit
over TCP without using TLS, it's not recommended. Doing so allows arbitrary
access to BuildKit without credentials.
2. With certificates generated in `.certs/`, start up the container:
```console
$ docker run -d --rm \
--name=remote-buildkitd \
--privileged \
-p 1234:1234 \
-v $PWD/.certs:/etc/buildkit/certs \
moby/buildkit:latest \
--addr tcp://0.0.0.0:1234 \
--tlscacert /etc/buildkit/certs/ca.pem \
--tlscert /etc/buildkit/certs/daemon-cert.pem \
--tlskey /etc/buildkit/certs/daemon-key.pem
```
This command starts a BuildKit container and exposes the daemon's port 1234
to localhost.
3. Connect to this running container using Buildx:
```console
$ docker buildx create \
--name remote-container \
--driver remote \
--driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem,servername=... \
tcp://localhost:1234
```
Alternatively, use the `docker-container://` URL scheme to connect to the
BuildKit container without specifying a port:
```console
$ docker buildx create \
--name remote-container \
--driver remote \
docker-container://remote-container
```
## Example: Remote BuildKit in Kubernetes
This guide shows you how to create a setup similar to the `kubernetes`
driver by manually creating a BuildKit `Deployment`. While the `kubernetes`
driver will do this under the hood, it might sometimes be desirable to scale
BuildKit manually. Additionally, when executing builds from inside Kubernetes
pods, the Buildx builder will need to be recreated from within each pod or
copied between them.
1. Create a Kubernetes deployment of `buildkitd`, as per the instructions
[here](https://github.com/moby/buildkit/tree/master/examples/kubernetes).
Following the guide, create certificates for the BuildKit daemon and client
using
[create-certs.sh](https://github.com/moby/buildkit/blob/v0.10.3/examples/kubernetes/create-certs.sh),
and create a deployment of BuildKit pods with a service that connects to
them.
2. Assuming that the service is called `buildkitd`, create a remote builder in
Buildx, ensuring that the listed certificate files are present:
```console
$ docker buildx create \
--name remote-kubernetes \
--driver remote \
--driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem \
tcp://buildkitd.default.svc:1234
```
Note that this will only work internally, within the cluster, since the BuildKit
setup guide only creates a ClusterIP service. To configure the builder to be
accessible remotely, you can use an appropriately configured ingress, which is
outside the scope of this guide.
To access the service remotely, use the port forwarding mechanism of `kubectl`:
```console
$ kubectl port-forward svc/buildkitd 1234:1234
```
Then you can point the remote driver at `tcp://localhost:1234`.
Alternatively, you can use the `kube-pod://` URL scheme to connect directly to a
BuildKit pod through the Kubernetes API. Note that this method only connects to
a single pod in the deployment:
```console
$ kubectl get pods --selector=app=buildkitd -o json | jq -r '.items[].metadata.name'
buildkitd-XXXXXXXXXX-xxxxx
$ docker buildx create \
--name remote-container \
--driver remote \
kube-pod://buildkitd-XXXXXXXXXX-xxxxx
```
Moved to [docs.docker.com](https://docs.docker.com/build/building/drivers/remote)

View File

@ -1,62 +1,3 @@
# Image and registry exporters
The `image` exporter outputs the build result into a container image format. The
`registry` exporter is identical, but it automatically pushes the result by
setting `push=true`.
## Synopsis
Build a container image using the `image` and `registry` exporters:
```console
$ docker buildx build --output type=image[,parameters] .
```
```console
$ docker buildx build --output type=registry[,parameters] .
```
The following table describes the available parameters that you can pass to
`--output` for `type=image`:
| Parameter | Type | Default | Description |
| ---------------------- | -------------------------------------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `name` | String | | Specify image name(s) |
| `push` | `true`,`false` | `false` | Push after creating the image. |
| `push-by-digest` | `true`,`false` | `false` | Push image without name. |
| `registry.insecure` | `true`,`false` | `false` | Allow pushing to insecure registry. |
| `dangling-name-prefix` | `<value>` | | Name image with `prefix@<digest>`, used for anonymous images |
| `name-canonical` | `true`,`false` | | Add additional canonical name `name@<digest>` |
| `compression` | `uncompressed`,`gzip`,`estargz`,`zstd` | `gzip` | Compression type, see [compression][1] |
| `compression-level` | `0..22` | | Compression level, see [compression][1] |
| `force-compression` | `true`,`false` | `false` | Forcefully apply compression, see [compression][1] |
| `oci-mediatypes` | `true`,`false` | `false` | Use OCI media types in exporter manifests, see [OCI Media types][2] |
| `buildinfo` | `true`,`false` | `true` | Attach inline [build info][3] |
| `buildinfo-attrs` | `true`,`false` | `false` | Attach inline [build info attributes][3] |
| `unpack` | `true`,`false` | `false` | Unpack image after creation (for use with containerd) |
| `store`                | `true`,`false`                         | `true`  | Store the resulting images in the worker's (for example, containerd) image store, and ensure that the image has all blobs in the content store. Ignored if the worker doesn't have an image store (when using OCI workers, for example). |
| `annotation.<key>`     | String                                 |         | Attach an annotation with the respective `key` and `value` to the built image, see [annotations][4]                                                                                                                                    |
[1]: index.md#compression
[2]: index.md#oci-media-types
[3]: index.md#build-info
[4]: #annotations
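For example, to push an image by digest only, without a tag, you can combine
the `push` and `push-by-digest` parameters (the registry and image names are
placeholders):

```console
$ docker buildx build \
  --output type=image,name=<registry>/<image>,push=true,push-by-digest=true .
```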
## Annotations
These exporters support adding OCI annotations using the `annotation.*` dot
notation parameter. The following example sets the `org.opencontainers.image.title`
annotation for a build:
```console
$ docker buildx build \
--output "type=<type>,name=<registry>/<image>,annotation.org.opencontainers.image.title=<title>" .
```
For more information about annotations, see
[BuildKit documentation](https://github.com/moby/buildkit/blob/master/docs/annotations.md).
## Further reading
For more information on the `image` or `registry` exporters, see the
[BuildKit README](https://github.com/moby/buildkit/blob/master/README.md#imageregistry).
Moved to [docs.docker.com](https://docs.docker.com/build/building/exporters/image-registry)

View File

@ -1,284 +1,3 @@
# Exporters overview
Exporters save your build results to a specified output type. You specify the
exporter to use with the
[`--output` CLI option](https://docs.docker.com/engine/reference/commandline/buildx_build/#output).
Buildx supports the following exporters:
- `image`: exports the build result to a container image.
- `registry`: exports the build result into a container image, and pushes it to
the specified registry.
- `local`: exports the build root filesystem into a local directory.
- `tar`: packs the build root filesystem into a local tarball.
- `oci`: exports the build result to the local filesystem in the
[OCI image layout](https://github.com/opencontainers/image-spec/blob/v1.0.1/image-layout.md)
format.
- `docker`: exports the build result to the local filesystem in the
[Docker image](https://github.com/docker/docker/blob/v20.10.2/image/spec/v1.2.md)
format.
- `cacheonly`: doesn't export a build output, but runs the build and creates a
cache.
## Using exporters
To specify an exporter, use the following command syntax:
```console
$ docker buildx build --tag <registry>/<image> \
--output type=<TYPE> .
```
For most common use cases, you don't need to specify which exporter to use
explicitly. You only need to specify an exporter if you intend to customize
the output, or if you want to save it to disk. The `--load`
and `--push` options allow Buildx to infer the exporter settings to use.
For example, if you use the `--push` option in combination with `--tag`, Buildx
automatically uses the `image` exporter, and configures the exporter to push the
results to the specified registry.
To get the full flexibility out of the various exporters BuildKit has to offer,
use the `--output` flag to configure exporter options.
## Use cases
Each exporter type is designed for different use cases. The following sections
describe some common scenarios, and how you can use exporters to generate the
output that you need.
### Load to image store
Buildx is often used to build container images that can be loaded to an image
store. That's where the `docker` exporter comes in. The following example shows
how to build an image using the `docker` exporter, and have that image loaded to
the local image store, using the `--output` option:
```console
$ docker buildx build \
--output type=docker,name=<registry>/<image> .
```
The Buildx CLI automatically uses the `docker` exporter and loads the image to
the image store if you supply the `--tag` and `--load` options:
```console
$ docker buildx build --tag <registry>/<image> --load .
```
Images built using the `docker` driver are automatically loaded to the local
image store.
Images loaded to the image store are available for `docker run` immediately
after the build finishes, and you'll see them in the list of images when you run
the `docker images` command.
### Push to registry
To push a built image to a container registry, you can use the `registry` or
`image` exporters.
When you pass the `--push` option to the Buildx CLI, you instruct BuildKit to
push the built image to the specified registry:
```console
$ docker buildx build --tag <registry>/<image> --push .
```
Under the hood, this uses the `image` exporter, and sets the `push` parameter.
It's the same as using the following long-form command using the `--output`
option:
```console
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true .
```
You can also use the `registry` exporter, which does the same thing:
```console
$ docker buildx build \
--output type=registry,name=<registry>/<image> .
```
### Export image layout to file
You can use either the `oci` or `docker` exporters to save the build results to
an image layout on your local filesystem. Both of these exporters generate a tar
archive file containing the corresponding image layout. The `dest` parameter
defines the target output path for the tarball.
```console
$ docker buildx build --output type=oci,dest=./image.tar .
[+] Building 0.8s (7/7) FINISHED
...
=> exporting to oci image format 0.0s
=> exporting layers 0.0s
=> exporting manifest sha256:c1ef01a0a0ef94a7064d5cbce408075730410060e253ff8525d1e5f7e27bc900 0.0s
=> exporting config sha256:eadab326c1866dd247efb52cb715ba742bd0f05b6a205439f107cf91b3abc853 0.0s
=> sending tarball 0.0s
$ mkdir -p out && tar -C out -xf ./image.tar
$ tree out
out
├── blobs
│   └── sha256
│   ├── 9b18e9b68314027565b90ff6189d65942c0f7986da80df008b8431276885218e
│   ├── c78795f3c329dbbbfb14d0d32288dea25c3cd12f31bd0213be694332a70c7f13
│   ├── d1cf38078fa218d15715e2afcf71588ee482352d697532cf316626164699a0e2
│   ├── e84fa1df52d2abdfac52165755d5d1c7621d74eda8e12881f6b0d38a36e01775
│   └── fe9e23793a27fe30374308988283d40047628c73f91f577432a0d05ab0160de7
├── index.json
├── manifest.json
└── oci-layout
```
### Export filesystem
If you don't want to build an image from your build results, but instead export
the filesystem that was built, you can use the `local` and `tar` exporters.
The `local` exporter unpacks the filesystem into a directory structure in the
specified location. The `tar` exporter creates a tarball archive file.
```console
$ docker buildx build --output type=tar,dest=<path/to/output> .
```
The `local` exporter is useful in
[multi-stage builds](https://docs.docker.com/build/building/multi-stage/) since
it allows you to export only a minimal number of build artifacts. For example,
self-contained binaries.
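For example, assuming a multi-stage Dockerfile with a stage named `binaries`
(a hypothetical stage name), you can export just that stage's filesystem:

```console
$ docker buildx build --target=binaries --output type=local,dest=./bin .
```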
### Cache-only export
The `cacheonly` exporter can be used if you just want to run a build, without
exporting any output. This can be useful if, for example, you want to run a test
build. Or, if you want to run the build first, and create exports using
subsequent commands. The `cacheonly` exporter creates a build cache, so any
successive builds are instant.
```console
$ docker buildx build --output type=cacheonly .
```
If you don't specify an exporter, and you don't provide short-hand options like
`--load` that automatically select the appropriate exporter, Buildx defaults to
using the `cacheonly` exporter. The exception is when you build using the
`docker` driver, in which case the `docker` exporter is used.
Buildx logs a warning message when using `cacheonly` as a default:
```console
$ docker buildx build .
WARNING: No output specified with docker-container driver.
Build result will only remain in the build cache.
To push result image into registry use --push or
to load image into docker use --load
```
## Multiple exporters
You can only specify a single exporter for any given build (see
[this pull request](https://github.com/moby/buildkit/pull/2760) for details).
But you can perform multiple builds one after another to export the same content
twice. BuildKit caches the build, so unless any of the layers change, all
successive builds following the first are instant.
The following example shows how to run the same build twice, first using the
`image` exporter, followed by the `local` exporter.
```console
$ docker buildx build --output type=image,name=<registry>/<image> .
$ docker buildx build --output type=local,dest=<path/to/output> .
```
## Configuration options
This section describes some of the configuration options available for
exporters.
The options described here are common to two or more exporter types.
Additionally, each exporter type supports specific parameters as well.
See the detailed page about each exporter for more information about which
configuration parameters apply.
The common parameters described here are:
- [Compression](#compression)
- [OCI media type](#oci-media-types)
### Compression
When you export a compressed output, you can configure the exact compression
algorithm and level to use. While the default values provide a good
out-of-the-box experience, you may wish to tweak the parameters to optimize for
storage vs compute costs. Changing the compression parameters can reduce storage
space required, and improve image download times, but will increase build times.
To select the compression algorithm, you can use the `compression` option. For
example, to build an `image` with `compression=zstd`:
```console
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true,compression=zstd .
```
Use the `compression-level=<value>` option alongside the `compression` parameter
to choose a compression level for the algorithms which support it:
- 0-9 for `gzip` and `estargz`
- 0-22 for `zstd`
As a general rule, the higher the number, the smaller the resulting file will
be, and the longer the compression will take to run.
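For example, the following sketch combines `compression` and
`compression-level` to trade longer build times for smaller `zstd`-compressed
layers:

```console
$ docker buildx build \
  --output type=image,name=<registry>/<image>,push=true,compression=zstd,compression-level=19 .
```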
Use the `force-compression=true` option to force re-compressing layers imported
from a previous image, if the requested compression algorithm is different from
the previous compression algorithm.
> **Note**
>
> The `gzip` and `estargz` compression methods use the
> [`compress/gzip` package](https://pkg.go.dev/compress/gzip), while `zstd` uses
> the
> [`github.com/klauspost/compress/zstd` package](https://github.com/klauspost/compress/tree/master/zstd).
### OCI media types
Exporters that output container images support creating images with either
Docker media types (the default) or OCI media types. This is supported by
the `image`, `registry`, `oci` and `docker` exporters.
To export images with OCI media types set, use the `oci-mediatypes` property.
For example, with the `image` exporter:
```console
$ docker buildx build \
--output type=image,name=<registry>/<image>,push=true,oci-mediatypes=true .
```
### Build info
Exporters that output container images allow embedding information about the
build, including information on the original build request and sources used
during the build. This is supported by the `image`, `registry`, `oci` and
`docker` exporters.
This build info is attached to the image configuration:
```json
{
"moby.buildkit.buildinfo.v0": "<base64>"
}
```
By default, build dependencies are attached to the image configuration. You can
turn off this behavior by setting `buildinfo=false`.
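For example, to push an image without the embedded build info:

```console
$ docker buildx build \
  --output type=image,name=<registry>/<image>,push=true,buildinfo=false .
```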
## What's next
Read about each of the exporters to learn about how they work and how to use
them:
- [Image and registry exporters](image-registry.md)
- [OCI and Docker exporters](oci-docker.md)
- [Local and tar exporters](local-tar.md)
Moved to [docs.docker.com](https://docs.docker.com/build/building/exporters)

View File

@ -1,31 +1,3 @@
# Local and tar exporters
The `local` and `tar` exporters output the root filesystem of the build result
into a local directory. They're useful for producing artifacts that aren't
container images.
- `local` exports files and directories.
- `tar` exports the same, but bundles the export into a tarball.
## Synopsis
Export the root filesystem of the build result using the `local` or `tar`
exporters:
```console
$ docker buildx build --output type=local[,parameters] .
```
```console
$ docker buildx build --output type=tar[,parameters] .
```
The following table describes the available parameters:
| Parameter | Type | Default | Description |
| --------- | ------ | ------- | --------------------- |
| `dest` | String | | Path to copy files to |
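For example, to unpack the build result into a local `./out` directory, or to
bundle it into an `./out.tar` tarball (both paths are illustrative):

```console
$ docker buildx build --output type=local,dest=./out .
$ docker buildx build --output type=tar,dest=./out.tar .
```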
## Further reading
For more information on the `local` or `tar` exporters, see the
[BuildKit README](https://github.com/moby/buildkit/blob/master/README.md#local-directory).
Moved to [docs.docker.com](https://docs.docker.com/build/building/exporters/local-tar)

View File

@ -1,61 +1,3 @@
# OCI and Docker exporters
The `oci` exporter outputs the build result into an
[OCI image layout](https://github.com/opencontainers/image-spec/blob/main/image-layout.md)
tarball. The `docker` exporter behaves the same way, except it exports a Docker
image layout instead.
The [`docker` driver](../drivers/docker.md) doesn't support these exporters. You
must use `docker-container` or some other driver if you want to generate these
outputs.
## Synopsis
Build a container image using the `oci` and `docker` exporters:
```console
$ docker buildx build --output type=oci[,parameters] .
```
```console
$ docker buildx build --output type=docker[,parameters] .
```
The following table describes the available parameters:
| Parameter | Type | Default | Description |
| ------------------- | -------------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| `name` | String | | Specify image name(s) |
| `dest`              | String                                 |         | Path to write the output to                                                                                                             |
| `tar` | `true`,`false` | `true` | Bundle the output into a tarball layout |
| `compression` | `uncompressed`,`gzip`,`estargz`,`zstd` | `gzip` | Compression type, see [compression][1] |
| `compression-level` | `0..22` | | Compression level, see [compression][1] |
| `force-compression` | `true`,`false` | `false` | Forcefully apply compression, see [compression][1] |
| `oci-mediatypes` | `true`,`false` | | Use OCI media types in exporter manifests. Defaults to `true` for `type=oci`, and `false` for `type=docker`. See [OCI Media types][2] |
| `buildinfo` | `true`,`false` | `true` | Attach inline [build info][3] |
| `buildinfo-attrs` | `true`,`false` | `false` | Attach inline [build info attributes][3] |
| `annotation.<key>`  | String                                 |         | Attach an annotation with the respective `key` and `value` to the built image, see [annotations][4]                                     |
[1]: index.md#compression
[2]: index.md#oci-media-types
[3]: index.md#build-info
[4]: #annotations
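For example, setting `tar=false` for the `oci` exporter writes an unpacked OCI
image layout directory instead of a tarball (the destination path is
illustrative):

```console
$ docker buildx build --output type=oci,dest=./layout,tar=false .
```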
## Annotations
These exporters support adding OCI annotations using the `annotation.*` dot
notation parameter. The following example sets the `org.opencontainers.image.title`
annotation for a build:
```console
$ docker buildx build \
--output "type=<type>,name=<registry>/<image>,annotation.org.opencontainers.image.title=<title>" .
```
For more information about annotations, see
[BuildKit documentation](https://github.com/moby/buildkit/blob/master/docs/annotations.md).
## Further reading
For more information on the `oci` or `docker` exporters, see the
[BuildKit README](https://github.com/moby/buildkit/blob/master/README.md#docker-tarball).
Moved to [docs.docker.com](https://docs.docker.com/build/building/exporters/oci-docker)