updated prose and structure for driver docs

Signed-off-by: David Karlsson <david.karlsson@docker.com>
David Karlsson 2022-09-29 11:13:20 +02:00
parent e98c252490
commit d030fcc076
5 changed files with 483 additions and 328 deletions

# Docker container driver
The buildx Docker container driver allows creation of a managed and customizable
BuildKit environment in a dedicated Docker container.
Using the Docker container driver has a couple of advantages over the default
Docker driver. For example:
- Specify custom BuildKit versions.
- Build multi-arch images. See [QEMU](#qemu).
- Use advanced options for
[cache import and export](https://docs.docker.com/build/building/cache/).
## Synopsis
Run the following command to create a new builder, named `container`, that uses
the Docker container driver:
```console
$ docker buildx create --name container --driver docker-container
container
```
The following table describes the available driver-specific options that you can
pass to `--driver-opt`:
| Parameter | Value | Default | Description |
| --------------- | ------ | ---------------- | ------------------------------------------------------------------------------------------ |
| `image` | string | | Sets the image to use for running BuildKit. |
| `network` | string | | Sets the network mode for running the BuildKit container. |
| `cgroup-parent` | string | `/docker/buildx` | Sets the cgroup parent of the BuildKit container if Docker is using the `cgroupfs` driver. |
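For example, you could create the builder above with a pinned BuildKit image and
host networking instead; the image tag here is only illustrative:
```console
$ docker buildx create \
  --name container \
  --driver docker-container \
  --driver-opt image=moby/buildkit:master,network=host
```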
## Usage
When you run a build, Buildx pulls the specified `moby/buildkit` image from
[Docker Hub](https://hub.docker.com/u/moby/buildkit). When the container has
started, Buildx submits the build to the containerized build server.
```console
$ docker buildx build . -t <image> --builder=container
WARNING: No output specified with docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
...
```
## Loading to local image store
Unlike when using the default `docker` driver, images built with the
`docker-container` driver must be explicitly loaded into the local image store.
Use the `--load` flag:
```console
$ docker buildx build . --load -t <image> --builder=container
...
=> exporting to oci image format 7.7s
=> => exporting layers 4.9s
=> => exporting manifest sha256:4e4ca161fa338be2c303445411900ebbc5fc086153a0b846ac12996960b479d3 0.0s
=> => exporting config sha256:adf3eec768a14b6e183a1010cb96d91155a82fd722a1091440c88f3747f1f53f 0.0s
=> => sending tarball 2.8s
=> importing to docker
```
The image becomes available in the image store when the build finishes:
```console
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<image> latest adf3eec768a1 2 minutes ago 197MB
```
### QEMU
The `docker-container` driver supports using [QEMU](https://www.qemu.org/) (user
mode) to build non-native platforms. Use the `--platform` flag to specify which
architectures you want to build for.
For example, to build a Linux image for `amd64` and `arm64`:
```console
$ docker buildx build . \
--builder=container \
--platform=linux/amd64,linux/arm64 \
-t <registry>/<image> \
--push
```
> **Warning**
>
> QEMU performs full-system emulation of non-native platforms, which is much
> slower than native builds. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.
## Further reading
For more information on the Docker container driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).
If you want to explore builders running on a remote server, see the
[Kubernetes driver](./kubernetes.md) and the [Remote driver](./remote.md).

# Docker driver
The Buildx Docker driver is the default driver. It uses the BuildKit server
components built directly into the Docker engine. The Docker driver requires no
configuration.
Unlike the other drivers, builders using the Docker driver can't be manually
created. They're only created automatically from the Docker context.
Images built with the Docker driver are automatically loaded to the local image
store.
## Synopsis
```console
# The Docker driver is used by buildx by default
docker buildx build .
```
It's not possible to configure which BuildKit version to use, or to pass any
additional BuildKit parameters to a builder using the Docker driver. The
BuildKit version and parameters are preset by the Docker engine internally.
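If you want to check which BuildKit version your Docker engine provides, you can
look at the `BUILDKIT` column in the output of `docker buildx ls`. The version
shown here is only an example:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
```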
If you need additional configuration and flexibility, consider using the
[Docker container driver](./docker-container.md).
## Further reading
For more information on the Docker driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

# Buildx drivers overview
Buildx drivers are configurations for how and where the BuildKit backend runs.
Driver settings are customizable and allow fine-grained control of the builder.
Buildx supports the following drivers:
- `docker`: uses the BuildKit library bundled into the Docker daemon.
- `docker-container`: creates a dedicated BuildKit container using Docker.
- `kubernetes`: creates BuildKit pods in a Kubernetes cluster.
- `remote`: connects directly to a manually managed BuildKit daemon.
Different drivers support different use cases. The default `docker` driver
prioritizes simplicity and ease of use. It has limited support for advanced
features like caching and output formats, and isn't configurable. Other drivers
provide more flexibility and are better at handling advanced scenarios. The
`kubernetes` and `remote` drivers specifically aim to enable remote builders.
The following table outlines some of the differences between drivers.
| Feature | `docker` | `docker-container` | `kubernetes` | `remote` |
| :--------------------------- | :---------: | :----------------: | :----------: | :----------------: |
| **Automatically load image** | Yes | No | No | No |
| **Cache export** | Inline only | Yes | Yes | Yes |
| **Remote builders** | No | No | Yes | Yes |
| **Tarball output** | No | Yes | Yes | Yes |
| **Multi-arch images** | No | Yes | Yes | Yes |
| **BuildKit configuration** | No | Yes | Yes | Managed externally |
## List available drivers
Use `docker buildx ls` to see builder instances available on your system, and
the drivers they're using.
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
```
Depending on your setup, you may find multiple builders in your list that use
the Docker driver. For example, on a system that runs both a manually installed
version of dockerd, as well as Docker Desktop, you might see the following
output from `docker buildx ls`:
```console
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default docker
default default running 20.10.17 linux/amd64, linux/386
desktop-linux * docker
desktop-linux desktop-linux running 20.10.17 linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
This is because the Docker driver builders are automatically pulled from the
available
[Docker Contexts](https://docs.docker.com/engine/context/working-with-contexts/).
When you add new contexts using `docker context create`, these will appear in
your list of buildx builders.
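For example, creating a context that points to a remote Docker engine over SSH
(the host details here are placeholders) results in a matching Docker driver
builder showing up in the list:
```console
$ docker context create remote-host --docker "host=ssh://user@remote-host"
$ docker buildx ls
```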
## Create a new builder
Use the
[`docker buildx create`](https://docs.docker.com/engine/reference/commandline/buildx_create/)
command to create a builder, and specify the driver using the `--driver` option.
```console
$ docker buildx create --name=<builder-name> --driver=<driver> --driver-opt=<driver-options>
```
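For example, a minimal sketch that creates a builder using the `docker-container`
driver and then runs a build with it; the builder name is arbitrary:
```console
$ docker buildx create --name=mybuilder --driver=docker-container
$ docker buildx build --builder=mybuilder .
```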
## What's next
Read about each of the Buildx drivers to learn about how they work and how to
use them:
- [Docker driver](./docker.md)
- [Docker container driver](./docker-container.md)
- [Kubernetes driver](./kubernetes.md)
- [Remote driver](./remote.md)

# Kubernetes driver
The Buildx Kubernetes driver allows connecting your local development or CI
environments to your Kubernetes cluster to allow access to more powerful and
varied compute resources.
## Synopsis
Run the following command to create a new builder, named `kube`, that uses the
Kubernetes driver:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=[key=value,...]
```
The following table describes the available driver-specific options that you can
pass to `--driver-opt`:
| Parameter | Value | Default | Description |
| ----------------- | ---------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `image` | String | | Sets the image to use for running BuildKit. |
| `namespace` | String | Namespace in current Kubernetes context | Sets the Kubernetes namespace. |
| `replicas` | Integer | 1 | Sets the number of Pod replicas to create. See [scaling BuildKit][1] |
| `requests.cpu` | CPU units | | Sets the request CPU value specified in units of Kubernetes CPU. For example `requests.cpu=100m` or `requests.cpu=2` |
| `requests.memory` | Memory size | | Sets the request memory value specified in bytes or with a valid suffix. For example `requests.memory=500Mi` or `requests.memory=4G` |
| `limits.cpu` | CPU units | | Sets the limit CPU value specified in units of Kubernetes CPU. For example `limits.cpu=100m` or `limits.cpu=2` |
| `limits.memory` | Memory size | | Sets the limit memory value specified in bytes or with a valid suffix. For example `limits.memory=500Mi` or `limits.memory=4G` |
| `nodeselector` | CSV string | | Sets the pod's `nodeSelector` label(s). See [node assignment][2]. |
| `tolerations` | CSV string | | Configures the pod's taint toleration. See [node assignment][2]. |
| `rootless` | `true\|false` | `false` | Run the container as a non-root user. See [rootless mode][3]. |
| `loadbalance` | `sticky\|random` | `sticky` | Load-balancing strategy. If set to `sticky`, the pod is chosen using the hash of the context path. |
| `qemu.install` | `true\|false` | | Install QEMU emulation for multi-platform support. See [QEMU][4]. |
| `qemu.image` | String | `tonistiigi/binfmt:latest` | Sets the QEMU emulation image. See [QEMU][4]. |
[1]: #scaling-buildkit
[2]: #node-assignment
[3]: #rootless-mode
[4]: #qemu
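For example, assuming a `buildkit` namespace already exists (see the guide below),
a builder that sets CPU and memory requests and limits could be created like
this; the resource values are purely illustrative:
```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,requests.cpu=1,requests.memory=2G,limits.cpu=2,limits.memory=4G
```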
## Scaling BuildKit
One of the main advantages of the Kubernetes driver is that you can scale the
number of builder replicas up and down to handle increased build load. Scaling
is configurable using the following driver options:
- `replicas=N`
This scales the number of BuildKit pods to the desired size. By default, it
only creates a single pod. Increasing the number of replicas lets you take
advantage of multiple nodes in your cluster.
- `requests.cpu`, `requests.memory`, `limits.cpu`, `limits.memory`
These options allow requesting and limiting the resources available to each
BuildKit pod according to the official Kubernetes documentation
[here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
For example, to create 4 replica BuildKit pods:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,replicas=4
```
Listing the pods, you get this:
```console
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-rkc6b 1/1 Running 0 8s
kube0-6977cdcb75-vb4ks 1/1 Running 0 8s
kube0-6977cdcb75-z4fzs 1/1 Running 0 8s
```
Additionally, you can use the `loadbalance=(sticky|random)` option to control
the load-balancing behavior when there are multiple replicas. `random` selects
random nodes from the node pool, providing an even workload distribution across
replicas. `sticky` (the default) attempts to connect the same build performed
multiple times to the same node each time, ensuring better use of local cache.
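For example, a sketch of a builder with four replicas that distributes builds
randomly across them, again assuming a pre-existing `buildkit` namespace:
```console
$ docker buildx create \
  --bootstrap \
  --name=kube \
  --driver=kubernetes \
  --driver-opt=namespace=buildkit,replicas=4,loadbalance=random
```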
For more information on scalability, see the options for
[buildx create](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver-opt).
## Node assignment
The Kubernetes driver allows you to control the scheduling of BuildKit pods
using the `nodeSelector` and `tolerations` driver options.
The value of the `nodeSelector` parameter is a comma-separated string of
key-value pairs, where the key is the node label and the value is the label
text. For example: `"nodeselector=kubernetes.io/arch=arm64"`
The `tolerations` parameter is a semicolon-separated list of taints. It accepts
the same values as the Kubernetes manifest. Each `tolerations` entry specifies a
taint key and the value, operator, or effect. For example:
`"tolerations=key=foo,value=bar;key=foo2,operator=exists;key=foo3,effect=NoSchedule"`
The syntax for these parameters is slightly different compared to other driver
options. You must wrap both `nodeSelector` and `tolerations` in double quotes.
For example:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt="nodeselector=label=value","tolerations=key=key1,value=value1"
```
## Multi-platform builds
The Buildx Kubernetes driver has support for creating
[multi-platform images](https://docs.docker.com/build/building/multi-platform/),
either using QEMU or by leveraging the native architecture of nodes.
### QEMU
Like the `docker-container` driver, the Kubernetes driver also supports using
[QEMU](https://www.qemu.org/) (user mode) to build images for non-native
platforms. Include the `--platform` flag and specify which platforms you want to
output to.
For example, to build a Linux image for `amd64` and `arm64`:
```console
$ docker buildx build . \
--builder=kube \
--platform=linux/amd64,linux/arm64 \
-t <registry>/<image> \
--push
```
> **Warning**
>
> QEMU performs full-system emulation of non-native platforms, which is much
> slower than native builds. Compute-heavy tasks like compilation and
> compression/decompression will likely take a large performance hit.
Using a custom BuildKit image or invoking non-native binaries in builds may
require that you explicitly turn on QEMU using the `qemu.install` option when
creating the builder:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,qemu.install=true
```
### Native
If you have access to cluster nodes of different architectures, the Kubernetes
driver can take advantage of these for native builds. To do this, use the
`--append` flag of `docker buildx create`.
First, create your builder with explicit support for a single architecture, for
example `amd64`:
```console
$ docker buildx create \
--bootstrap \
--name=kube \
--node=builder-amd64 \
--platform=linux/amd64 \
--driver=kubernetes \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=amd64"
```
This creates a Buildx builder named `kube`, containing a single builder node
`builder-amd64`. Note that the Buildx concept of a node isn't the same as the
Kubernetes concept of a node. A Buildx node in this case could connect multiple
Kubernetes nodes of the same architecture together.
With the `kube` builder created, you can now introduce another architecture into
the mix using `--append`. For example, to add `arm64`:
```console
$ docker buildx create \
--append \
--bootstrap \
--name=kube \
--node=builder-arm64 \
--platform=linux/arm64 \
--driver=kubernetes \
--driver-opt=namespace=buildkit,nodeselector="kubernetes.io/arch=arm64"
```
If you list builders now, you should be able to see both nodes present:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
builder-amd64 kubernetes:///kube?deployment=builder-amd64&kubeconfig= running linux/amd64*, linux/amd64/v2, linux/amd64/v3, linux/386
builder-arm64 kubernetes:///kube?deployment=builder-arm64&kubeconfig= running linux/arm64*
```
You can now build multi-platform images with the `--platform` flag, listing the
architectures that you want to support.
## Rootless mode
The Kubernetes driver supports rootless mode. For more information on how
rootless mode works, and its requirements, see
[here](https://github.com/moby/buildkit/blob/master/docs/rootless.md).
To turn it on in your cluster, you can use the `rootless=true` driver option:
```console
$ docker buildx create \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit,rootless=true
```
This will create your pods without `securityContext.privileged`.
Requires Kubernetes version 1.19 or later. Using Ubuntu as the host kernel is
recommended.
## Guide: Creating a Buildx builder in Kubernetes
This guide shows you how to:
- Create a namespace for your Buildx resources
- Create a Kubernetes builder
- List the available builders
- Build an image using your Kubernetes builder
Prerequisites:
- You have an existing Kubernetes cluster. If you don't already have one, you
can follow along by installing [minikube](https://minikube.sigs.k8s.io/docs/).
- The cluster you want to connect to is accessible via the `kubectl` command,
with the `KUBECONFIG` environment variable
[set appropriately](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable)
if necessary.
1. Create a `buildkit` namespace.
Creating a separate namespace helps keep your Buildx resources separate from
other resources in the cluster.
```console
$ kubectl create namespace buildkit
namespace/buildkit created
```
2. Create a new Buildx builder with the Kubernetes driver:
```console
# Remember to specify the namespace in driver options
$ docker buildx create \
--bootstrap \
--name=kube \
--driver=kubernetes \
--driver-opt=namespace=buildkit
```
3. List available Buildx builders using `docker buildx ls`
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
kube kubernetes
kube0-6977cdcb75-k9h9m running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
```
4. Inspect the running pods created by the Buildx driver with `kubectl`.
```console
$ kubectl -n buildkit get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
kube0 1/1 1 1 32s
$ kubectl -n buildkit get pods
NAME READY STATUS RESTARTS AGE
kube0-6977cdcb75-k9h9m 1/1 Running 0 32s
```
The buildx driver creates the necessary resources on your cluster in the
specified namespace (in this case, `buildkit`), while keeping your driver
configuration locally.
5. Use your new builder by including the `--builder` flag when running buildx
commands. For example:
```console
# Replace <registry> with your Docker username
# and <image> with the name of the image you want to build
docker buildx build . \
--builder=kube \
-t <registry>/<image> \
--push
```
That's it! You've now built an image from a Kubernetes pod, using Buildx!
## Further reading
For more information on the Kubernetes driver, see the
[buildx reference](https://docs.docker.com/engine/reference/commandline/buildx_create/#driver).

# Remote driver
The Buildx remote driver enables more complex custom build workloads, allowing
you to connect to externally managed BuildKit instances. This is useful for
scenarios that require manual management of the BuildKit daemon, or where a
BuildKit daemon is exposed from another source.
## Synopsis
```console
$ docker buildx create \
--name remote \
--driver remote \
tcp://localhost:1234
```
The following table describes the available driver-specific options that you can
pass to `--driver-opt`:
| Parameter | Value | Default | Description |
| ------------ | ------ | ------------------ | ---------------------------------------------------------- |
| `key` | String | | Sets the TLS client key. |
| `cert` | String | | Sets the TLS client certificate to present to `buildkitd`. |
| `cacert` | String | | Sets the TLS certificate authority used for validation. |
| `servername` | String | Endpoint hostname. | Sets the TLS server name used in requests. |
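For example, a hypothetical builder that connects to a TLS-secured buildkitd
endpoint might be created like this; the hostname and certificate paths are
placeholders:
```console
$ docker buildx create \
  --name remote-tls \
  --driver remote \
  --driver-opt cacert=/etc/certs/ca.pem,cert=/etc/certs/client-cert.pem,key=/etc/certs/client-key.pem,servername=buildkitd.example.com \
  tcp://buildkitd.example.com:1234
```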
## Guide: Remote BuildKit over Unix sockets
This guide shows you how to create a setup with a BuildKit daemon listening on a
Unix socket, and have Buildx connect through it.
1. Ensure that [BuildKit](https://github.com/moby/buildkit) is installed.
For example, you can launch an instance of buildkitd with:
```console
$ sudo ./buildkitd --group $(id -gn) --addr unix://$HOME/buildkitd.sock
```
Alternatively,
[see here](https://github.com/moby/buildkit/blob/master/docs/rootless.md) for
running buildkitd in rootless mode or
[here](https://github.com/moby/buildkit/tree/master/examples/systemd) for
examples of running it as a systemd service.
2. Check that you have a Unix socket that you can connect to.
```console
$ ls -lh /home/user/buildkitd.sock
srw-rw---- 1 root user 0 May 5 11:04 /home/user/buildkitd.sock
```
3. Connect Buildx to it using the remote driver:
```console
$ docker buildx create \
--name remote-unix \
--driver remote \
unix://$HOME/buildkitd.sock
```
4. List available builders with `docker buildx ls`. You should then see
`remote-unix` among them:
```console
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
remote-unix remote
remote-unix0 unix:///home/.../buildkitd.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default * docker
default default running linux/amd64, linux/386
```
You can switch to this new builder as the default using
`docker buildx use remote-unix`, or specify it per build using `--builder`:
```console
$ docker buildx build . --builder=remote-unix -t test --load
```
Remember that you need to use the `--load` flag if you want to load the build
result into the Docker daemon.
## Guide: Remote BuildKit in Docker container
This guide will show you how to create a setup similar to the `docker-container`
driver, by manually booting a BuildKit Docker container and connecting to it
using the Buildx remote driver. This procedure manually creates a container and
accesses it via its exposed port. (You'd probably be better off just using the
`docker-container` driver that connects to BuildKit through the Docker daemon,
but this is for illustration purposes.)
1. Generate certificates for BuildKit.
You can use the
[create-certs.sh](https://github.com/moby/buildkit/blob/v0.10.3/examples/kubernetes/create-certs.sh)
script as a starting point. Note that while it's possible to expose BuildKit
over TCP without using TLS, it's not recommended. Doing so allows arbitrary
access to BuildKit without credentials.
2. With certificates generated in `.certs/`, start up the container:
```console
$ docker run -d --rm \
--name=remote-buildkitd \
--privileged \
-p 1234:1234 \
-v $PWD/.certs:/etc/buildkit/certs \
moby/buildkit:latest \
--addr tcp://0.0.0.0:1234 \
--tlscacert /etc/buildkit/certs/ca.pem \
--tlscert /etc/buildkit/certs/daemon-cert.pem \
--tlskey /etc/buildkit/certs/daemon-key.pem
```
This command starts a BuildKit container and exposes the daemon's port 1234
to localhost.
3. Connect to this running container using Buildx:
```console
$ docker buildx create \
--name remote-container \
--driver remote \
--driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem,servername=... \
tcp://localhost:1234
```
Alternatively, use the `docker-container://` URL scheme to connect to the
BuildKit container without specifying a port:
```console
$ docker buildx create \
--name remote-container \
--driver remote \
docker-container://remote-container
```
## Guide: Remote BuildKit in Kubernetes
This guide will show you how to create a setup similar to the `kubernetes`
driver by manually creating a BuildKit `Deployment`. While the `kubernetes`
driver will do this under the hood, it might sometimes be desirable to scale
BuildKit manually. Additionally, when executing builds from inside Kubernetes
pods, the Buildx builder will need to be recreated from within each pod or
copied between them.
1. Create a Kubernetes deployment of `buildkitd`, as per the instructions
[here](https://github.com/moby/buildkit/tree/master/examples/kubernetes).
Following the guide, create certificates for the BuildKit daemon and client
using
[create-certs.sh](https://github.com/moby/buildkit/blob/v0.10.3/examples/kubernetes/create-certs.sh),
and create a deployment of BuildKit pods with a service that connects to
them.
2. Assuming that the service is called `buildkitd`, create a remote builder in
Buildx, ensuring that the listed certificate files are present:
```console
$ docker buildx create \
--name remote-kubernetes \
--driver remote \
--driver-opt cacert=.certs/ca.pem,cert=.certs/client-cert.pem,key=.certs/client-key.pem \
tcp://buildkitd.default.svc:1234
```
Note that this will only work internally, within the cluster, since the BuildKit
setup guide only creates a ClusterIP service. To configure the builder to be
accessible remotely, you can use an appropriately configured ingress, which is
outside the scope of this guide.
To access the service remotely, use the port forwarding mechanism of `kubectl`:
```console
$ kubectl port-forward svc/buildkitd 1234:1234
```
Then you can point the remote driver at `tcp://localhost:1234`.
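For example, a minimal builder pointing at the forwarded port could look like the
following; depending on how buildkitd is configured, you may also need the TLS
options from the table above:
```console
$ docker buildx create \
  --name remote-kubernetes-local \
  --driver remote \
  tcp://localhost:1234
```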
Alternatively, you can use the `kube-pod://` URL scheme to connect directly to a
BuildKit pod through the Kubernetes API. Note that this method only connects to
a single pod in the deployment:
```console
$ kubectl get pods --selector=app=buildkitd -o json | jq -r '.items[].metadata.name