
Extend Docker Via Plugin

One of the things I love about Docker, and also one of the things that enabled its success, is that the batteries are included.

What do I mean? Basically, to get started with Docker, you can just install it and use it. Nothing more is needed; complex things like network, process, and filesystem isolation all work out of the box.

But after some time, you'll probably start to feel like doing more: custom networking, custom IP address reservation, distributing files, and so on. These needs kick in when you start using Docker in production or when you're preparing for that next step.

Fortunately, the batteries aren't just included with Docker; they're also swappable. How? With Docker plugins!

What Are Docker Plugins?

From the documentation:

Docker plugins are out-of-process extensions which add capabilities to the Docker Engine.

This means that plugins do not run within the Docker daemon process and are not even child processes of the Docker daemon. You start your plugin wherever you want (on another host, if you need to) in whichever way you want. You just inform the Docker daemon that there's a new plugin available via Plugin Discovery (we'll explore this topic in a bit).

Another advantage of the out-of-process philosophy is that you don?t even need to rebuild the Docker daemon to add a plugin.

You can create plugins with the following capabilities:

Authorization (authz)

This capability allows your plugins to control authentication and authorization for the Docker daemon and its remote API. Authorization plugins are useful when you need authentication or more granular control over who can do what against the daemon.

VolumeDriver

The VolumeDriver capability gives plugins control over the volume life cycle. A plugin registers itself as a VolumeDriver, and when the host requests a volume with a specific name for that driver, the plugin provides a Mountpoint for that volume on the host machine.

VolumeDriver plugins can be used for things like distributed filesystems and stateful volumes.

NetworkDriver

NetworkDriver plugins extend the Engine by acting as remote drivers for libnetwork. This means you can act on various aspects of networking, from the network itself (VLANs, bridges) through its connected endpoints (veth pairs and similar) to sandboxes (network namespaces, FreeBSD jails, and so on).

IpamDriver

IPAM stands for IP Address Management. IPAM is a libnetwork feature in charge of controlling the assignment of IP addresses for network and endpoint interfaces. IpamDriver plugins are very useful when you want to apply custom rules for a container's IP address reservation.

What Did We Do Before Plugins?

Before Docker 1.7, when the plugin mechanism wasn't available, the only way to take control of the daemon was to wrap the Docker Remote API. A lot of vendors did this: they wrapped the Docker Remote API and exposed their own API, acting like a real Docker daemon while doing their specific things.

The problem with this approach is that you end up in composition hell. For instance, if you had to run two plugins, which one was the first to be loaded?

As I said, plugins run outside of the main Docker daemon process. This means that the Docker daemon needs a way to talk to them. To solve this communication problem, each plugin has to implement an HTTP server that can be discovered by the Docker daemon. This server exposes a set of RPCs issued as HTTP POSTs with JSON payloads. The set of RPC calls that the server needs to expose is defined by the protocol it implements (authz, volume, network, ipam).


Plugin Discovery Mechanism

Okay, but what do you mean by "an HTTP server which can be discovered by the Docker daemon"?

Docker has a few ways to discover a plugin's HTTP server. It will always first check for Unix sockets in the /run/docker/plugins folder. For example, your plugin named myplugin would write the socket file in this location: /run/docker/plugins/myplugin.sock

After looking for sockets, it will check for specification files under the /etc/docker/plugins or /usr/lib/docker/plugins folders.

There are two types of specification files that can be used:

  • *.json
  • *.spec

JSON specification files (*.json)

This kind of specification file is just a *.json file with some information in it:

  • Name: the name of the plugin, used for discovery
  • Addr: the address at which the server can actually be reached
  • TLSConfig: optional; you need to specify this configuration only if you want to connect to the HTTP server over TLS

A JSON specification file for myplugin looks like this:

{
  "Name": "myplugin",
  "Addr": "https://fntlnz.wtf/myplugin",
  "TLSConfig": {
    "InsecureSkipVerify": false,
    "CAFile": "/usr/shared/docker/certs/example-ca.pem",
    "CertFile": "/usr/shared/docker/certs/example-cert.pem",
    "KeyFile": "/usr/shared/docker/certs/example-key.pem",
  }
}

Plain text files (*.spec)

You can use plain text files with the *.spec extension. These files can specify a TCP socket or a Unix socket:

tcp://127.0.0.50:8080

unix:///path/to/myplugin.sock

Activation Mechanism

The lowest common denominator among all the protocols is the plugin's activation mechanism. This mechanism lets Docker know which protocols are supported by each plugin. When necessary, the daemon will make a request against the plugin's /Plugin.Activate RPC, which must respond with a list of its available protocols:

{
    "Implements": ["NetworkDriver"]
}

Available protocols are:

  • authz
  • NetworkDriver
  • VolumeDriver
  • IpamDriver
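To make the handshake concrete, here is a minimal sketch of what a hand-rolled plugin server could look like, without any helper library. Treat it as an illustration under a few assumptions: the myplugin.sock socket name is just an example, and the vnd.docker.plugins.v1+json content type is the one documented for the plugin API.

package main

import (
    "encoding/json"
    "net"
    "net/http"
)

func main() {
    mux := http.NewServeMux()

    // The daemon POSTs to /Plugin.Activate during the handshake and expects
    // the list of implemented protocols back as JSON.
    mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
        json.NewEncoder(w).Encode(map[string][]string{
            "Implements": {"VolumeDriver"},
        })
    })

    // Listen on a Unix socket under /run/docker/plugins so the daemon can discover us.
    l, err := net.Listen("unix", "/run/docker/plugins/myplugin.sock")
    if err != nil {
        panic(err)
    }
    http.Serve(l, mux)
}

Every protocol-specific RPC (such as the VolumeDriver.* calls we'll cover next) would be registered on the same mux in the same way.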

Each protocol provides its own set of RPC calls in addition to the activation call. For this post, I decided to dig deeper into the VolumeDriver plugin protocol. We'll enumerate the VolumeDriver.* RPCs, and we'll write a practical "Hello World" volume driver plugin.

Error Handling

Plugins must provide meaningful error messages to the Docker daemon so it can pass them back to the client. Error handling is done via the response error form:

{
    "Err": string
}

This form should be used along with the HTTP error status codes 400 and 500.
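Sticking with the hypothetical hand-rolled server from the handshake sketch above, an endpoint could report a failure like this (the package name and the writeError helper are made up for illustration):

package myplugin

import (
    "encoding/json"
    "net/http"
)

// writeError sends the {"Err": ...} form back to the daemon together with
// an HTTP 500 status (400 would be used for a malformed request).
func writeError(w http.ResponseWriter, msg string) {
    w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
    w.WriteHeader(http.StatusInternalServerError)
    json.NewEncoder(w).Encode(map[string]string{"Err": msg})
}

The volume helper library we'll use later takes care of this through the Err field of volume.Response.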

VolumeDriver Protocol

The VolumeDriver protocol is both simple and powerful. The first thing to know is that during the handshake (/Plugin.Activate), plugins must register themselves as VolumeDriver.

{
    "Implements": ["VolumeDriver"]
}

Any VolumeDriver plugin is expected to provide writable paths on the host filesystem.

The experience while using a VolumeDriver plugin is very close to the standard one. You can just create a volume using your volume driver by specifying it with the -d flag:

docker volume create -d=myplugin --name myvolume

Or you can start a container while creating a volume using the normal -v flag along with the --volume-driver flag to specify the name of your volume driver plugin.

docker run -v myvolume:/my/path/on/container --volume-driver=myplugin alpine sh

Writing a "Hello World" VolumeDriver plugin

Let's write a simple plugin that uses the local filesystem starting from the /tmp/exampledriver folder to create volumes. In simple terms, when the client requests a volume named myvolume, the plugin will map that volume to the mountpoint /tmp/exampledriver/myvolume and mount that folder.

The VolumeDriver protocol is composed of seven RPC calls (plus the activation one):

  • /VolumeDriver.Create
  • /VolumeDriver.Remove
  • /VolumeDriver.Mount
  • /VolumeDriver.Path
  • /VolumeDriver.Unmount
  • /VolumeDriver.Get
  • /VolumeDriver.List

For each one of these RPC actions, we need to implement the corresponding POST endpoint that must return the right JSON payload. You can read the full specification here.

Fortunately, a lot of work has already been done by the docker/go-plugins-helpers project, which contains a set of packages for implementing Docker plugins in Go.

Since we're going to implement a VolumeDriver plugin, we need to create a struct that implements the volume.Driver interface of the volume package. The volume.Driver interface is defined as follows:

type Driver interface {
    Create(Request) Response
    List(Request) Response
    Get(Request) Response
    Remove(Request) Response
    Path(Request) Response
    Mount(Request) Response
    Unmount(Request) Response
}

As you can see, this interface's functions map one-to-one to the VolumeDriver RPC calls. So we can start by creating our driver's struct:

type ExampleDriver struct {
    volumes    map[string]string
    m          *sync.Mutex
    mountPoint string
}

Nothing fancy there. We created a struct with a few properties:

  • volumes: We're going to use this property to keep key/value pairs of "volume name" => "mountpoint"
  • m: That's just a mutex used to serialize operations that must not be done concurrently
  • mountPoint: the base mountpoint for our plugin
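Later on, our main function will call a NewExampleDriver constructor. Its body isn't shown in this post, so here is a sketch consistent with the description above (the /tmp/exampledriver base path matches the folder we picked earlier):

// NewExampleDriver initializes the driver with an empty volumes map,
// a shared mutex, and the base mountpoint.
func NewExampleDriver() ExampleDriver {
    return ExampleDriver{
        volumes:    make(map[string]string),
        m:          &sync.Mutex{},
        mountPoint: "/tmp/exampledriver",
    }
}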

To make our struct satisfy the volume.Driver interface, we implement all of the interface's functions.

Create

func (d ExampleDriver) Create(r volume.Request) volume.Response {
    logrus.Infof("Create volume: %s", r.Name)
    d.m.Lock()
    defer d.m.Unlock()

    if _, ok := d.volumes[r.Name]; ok {
        return volume.Response{}
    }

    volumePath := filepath.Join(d.mountPoint, r.Name)

    _, err := os.Lstat(volumePath)
    if err != nil {
        logrus.Errorf("Error %s %v", volumePath, err.Error())
        return volume.Response{Err: fmt.Sprintf("Error: %s: %s", volumePath, err.Error())}
    }

    d.volumes[r.Name] = volumePath

    return volume.Response{}
}

This function is called each time a client wants to create a volume. What's going on here is really simple. After logging the fact that the command has been called, we lock the mutex so we're sure nobody else is performing actions on the volumes map at the same time. The mutex is automatically released (via defer) when execution leaves the function.

Then we check whether the volume is already present. If so, we just return an empty response, which means the volume is available. If the volume is not yet available, we build its mountpoint path, check that the directory exists (via os.Lstat), and add it to the volumes map. We return an empty response on success, or a response with an error if the directory does not exist.

The plugin does not automatically handle directory creation (though it easily could); the user has to create the directory manually.
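If you wanted the plugin to create the directory itself, a variation of Create could swap the os.Lstat check for os.MkdirAll. This is just a sketch (it is not what the example repository does), it assumes the same imports as the rest of the driver, and the 0755 permission bits are an arbitrary choice:

// Hedged variant of Create that provisions the backing directory itself.
func (d ExampleDriver) Create(r volume.Request) volume.Response {
    d.m.Lock()
    defer d.m.Unlock()

    volumePath := filepath.Join(d.mountPoint, r.Name)

    // Create the backing directory (and any missing parents) instead of
    // requiring the user to create it by hand.
    if err := os.MkdirAll(volumePath, 0755); err != nil {
        return volume.Response{Err: fmt.Sprintf("could not create %s: %s", volumePath, err.Error())}
    }

    d.volumes[r.Name] = volumePath
    return volume.Response{}
}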

List

func (d ExampleDriver) List(r volume.Request) volume.Response {
    logrus.Info("Volumes list ", r)

    volumes := []*volume.Volume{}

    for name, path := range d.volumes {
        volumes = append(volumes, &volume.Volume{
            Name:       name,
            Mountpoint: path,
        })
    }

    return volume.Response{Volumes: volumes}
}

A volume plugin must provide a list of the volumes registered with the plugin itself. This function does exactly that: it cycles through all the volumes and puts them in a list that is returned as the response.

Get

func (d ExampleDriver) Get(r volume.Request) volume.Response {
    logrus.Info("Get volume ", r)
    if path, ok := d.volumes[r.Name]; ok {
        return volume.Response{
            Volume: &volume.Volume{
                Name:       r.Name,
                Mountpoint: path,
            },
        }
    }
    return volume.Response{
        Err: fmt.Sprintf("volume named %s not found", r.Name),
    }
}

This function returns some information about the volume. We just look up the volume name in the volumes map and return its name and mountpoint in the response.

Remove

func (d ExampleDriver) Remove(r volume.Request) volume.Response {
    logrus.Info("Remove volume ", r)

    d.m.Lock()
    defer d.m.Unlock()

    if _, ok := d.volumes[r.Name]; ok {
        delete(d.volumes, r.Name)
    }

    return volume.Response{}
}

This is called when the client asks the Docker daemon to remove a volume. The first thing we do here is lock the mutex, since we are operating on the volumes map; then we delete the volume from it.
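Note that this implementation only forgets about the volume: the data stays on disk. A variant that also deletes the backing directory could look like the sketch below; whether deleting data here is appropriate depends entirely on your use case, so treat it as an illustration rather than a recommendation.

// Hedged variant of Remove that also deletes the data backing the volume.
func (d ExampleDriver) Remove(r volume.Request) volume.Response {
    d.m.Lock()
    defer d.m.Unlock()

    if path, ok := d.volumes[r.Name]; ok {
        // Remove the directory (and its contents) from the host as well.
        if err := os.RemoveAll(path); err != nil {
            return volume.Response{Err: err.Error()}
        }
        delete(d.volumes, r.Name)
    }

    return volume.Response{}
}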

Path

func (d ExampleDriver) Path(r volume.Request) volume.Response {
    logrus.Info("Get volume path", r)

    if path, ok := d.volumes[r.Name]; ok {
        return volume.Response{
            Mountpoint: path,
        }
    }
    return volume.Response{}
}

There are a few circumstances in which Docker needs to know the Mountpoint of a given volume. That's what this function does: it takes a volume name and gives back the Mountpoint for that volume.

Mount

func (d ExampleDriver) Mount(r volume.Request) volume.Response {
    logrus.Info("Mount volume ", r)

    if path, ok := d.volumes[r.Name]; ok {
        return volume.Response{
            Mountpoint: path,
        }
    }

    return volume.Response{}
}

This is called once per container start. Here, we just look in the volumes map for the requested volume name and return the Mountpoint so that Docker can use it.

In this example, the implementation is the same as the Path function. In a real plugin, the Mount function may want to do a few more things, like allocating resources or mounting a remote filesystem.
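As an example of allocating resources, here is a sketch of a Mount that backs the volume with a tmpfs mounted on demand. It assumes Linux, root privileges, the standard syscall package, and an arbitrary size of 64m; the matching Unmount would call syscall.Unmount(path, 0).

// Hedged sketch of a Mount that allocates a resource on demand:
// a tmpfs is mounted at the volume path (Linux only, requires root).
func (d ExampleDriver) Mount(r volume.Request) volume.Response {
    path, ok := d.volumes[r.Name]
    if !ok {
        return volume.Response{Err: fmt.Sprintf("volume named %s not found", r.Name)}
    }

    if err := syscall.Mount("tmpfs", path, "tmpfs", 0, "size=64m"); err != nil {
        return volume.Response{Err: err.Error()}
    }

    return volume.Response{Mountpoint: path}
}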

Unmount

func (d ExampleDriver) Unmount(r volume.Request) volume.Response {
    logrus.Info("Unmount ", r)
    return volume.Response{}
}

This function is called once per container stop, when Docker is no longer using the volume. Here we don't do anything. A production-ready plugin may want to de-provision resources at this point.

Server

Now that our driver is ready, we can create the server that will listen on our Unix socket for the Docker daemon. The empty for loop is there so that the main function blocks, since the server runs in a separate goroutine.

func main() {
    driver := NewExampleDriver()
    handler := volume.NewHandler(driver)
    if err := handler.ServeUnix("root", "driver-example"); err != nil {
        log.Fatalf("Error %v", err)
    }

    for {

    }
}

A possible improvement here would be to handle termination signals to avoid an abnormal interruption.
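Here is a sketch of that improvement, assuming the helpers are imported from github.com/docker/go-plugins-helpers/volume: run the server in its own goroutine and block on a signal channel instead of spinning in an empty loop.

package main

import (
    "log"
    "os"
    "os/signal"
    "syscall"

    "github.com/docker/go-plugins-helpers/volume"
)

func main() {
    driver := NewExampleDriver()
    handler := volume.NewHandler(driver)

    // Serve the Unix socket in a goroutine so main can wait for signals.
    go func() {
        if err := handler.ServeUnix("root", "driver-example"); err != nil {
            log.Fatalf("Error %v", err)
        }
    }()

    // Block until SIGINT or SIGTERM arrives; any cleanup would go here.
    sig := make(chan os.Signal, 1)
    signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
    <-sig
}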

At this point, we haven't implemented the /Plugin.Activate RPC call. The go-plugins-helpers library does this for us when we register the volume handler.

Since I showed you only the most important pieces of code and omitted the parts that hold it all together, you may want to clone the repository with the full source code:

Clone

git clone https://github.com/fntlnz/docker-volume-plugin-example.git

Then, you have to build the plugin in order to use it.

Build

$ cd docker-volume-plugin-example
$ go build .

Run

At this point, we need to start the plugin server, so the Docker daemon can discover it.

# ./docker-volume-plugin-example

You can check that the plugin has created the Unix socket by issuing:

# ls -la /run/docker/plugins

Which should output something like:

total 0
drwxr-xr-x. 2 root root  60 Apr 25 12:49 .
drwx------. 6 root root 120 Apr 25 02:13 ..
srw-rw----. 1 root root   0 Apr 25 12:49 driver-example.sock

It is recommended that you start your plugins before starting the Docker daemon and stop them after stopping the Docker daemon. I usually follow this advice in production, but in my local testing environment, I usually test plugins inside containers, so I have no choice other than starting them after Docker.

Using your plugin

Now that the plugin is up and running, we can try using it by starting a container and specifying the volume driver. Before starting the container, we need to create the myvolumename directory under the /tmp/exampledriver mountpoint.

A real production-ready plugin should handle mountpoint creation automatically.

$ mkdir /tmp/exampledriver/myvolumename
# docker run -it -v myvolumename:/data --volume-driver=driver-example alpine sh

You can check if the volume has been created by issuing a docker volume ls, which should output something similar to this:

DRIVER              VOLUME NAME
local               dcb04fb12e6d914d4b34b7dbfff6c72a98590033e20cb36b481c37cc97aaf162
local               f3b65b1354484f217caa593dc0f93c1a7ea048721f876729f048639bcfea3375
driver-example      myvolumename

Now each file that you put in the /data folder in the container will be written to the host's /tmp/exampledriver/myvolumename folder.

Available Plugins

You can find an exhaustive list of plugins here. My favorites are:

  • Flocker: This plugin allows your volumes to "follow" your containers, enabling you to run stateful containers for things that need consistent state, like databases.
  • Netshare plugin: I use this to mount NFS folders inside containers. It also supports EFS and CIFS.
  • Weave Network Plugin: This enables you to see containers just as though they were plugged into the same network switch, regardless of where they are running.

Now you know that the plugin API is available and that you can benefit from it by writing your own plugins. Yay!

But there are a few more things that you can do now. For example, I showed you how to write your plugin in Go with the official plugin helpers. But you might not be a Go programmer: you may be a Rust programmer, a Java programmer, or even a JavaScript programmer. If so, you may want to consider writing plugin helpers for your language!

Reference: Extend Docker Via Plugin from our WCG partner Lorenzo Fontana at the Codeship Blog.

Lorenzo Fontana

Lorenzo Fontana is a software engineer at Facile.it.