• Damn I Love Docker: Local DNS With CoreDNS

    Synopsis

    Here we are going to discuss how you can set up a local DNS server using docker and CoreDNS.

    Introduction

    If you are not already familiar with how DNS works, it is basically an address book. In this address book are stored IP addresses and their corresponding Domain Names (i.e. google.com, youtube.com, amazon.com, etc.). When you type google.com in your browser, your browser will first attempt to resolve this Domain Name by querying whatever Name Servers your computer has set in its local configuration (Note: this is most often set automatically by the router on the wired/wireless network when you first join).

    The result of this query is the resolution of the Domain Name to its corresponding address (e.g. while google.com has multiple IP addresses, one of the returned addresses is 108.177.122.113). This can be seen in the following example using the well-known tool dig:

    $ dig google.com
    
    ; <<>> DiG 9.10.6 <<>> google.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10808
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;google.com.                    IN      A
    
    ;; ANSWER SECTION:
    google.com.             76      IN      A       108.177.122.113
    google.com.             76      IN      A       108.177.122.139
    google.com.             76      IN      A       108.177.122.138
    google.com.             76      IN      A       108.177.122.102
    google.com.             76      IN      A       108.177.122.101
    google.com.             76      IN      A       108.177.122.100
    
    ;; Query time: 7 msec
    ;; SERVER: 192.168.2.1#53(192.168.2.1)
    ;; WHEN: Fri Jan 03 10:09:45 EST 2020
    ;; MSG SIZE  rcvd: 135
    
    

    We can see from the above example that there are several IP addresses associated with google.com (including the 108.177.122.113 address we noted earlier). Now your browser can actually send the desired HTTP request to Google’s servers at 108.177.122.113, which will in turn respond with the HTML/CSS/JavaScript that comprise its search engine interface. All DNS really did is allow you to “look up” the address for Google’s server. In a very simplified way, this is how DNS works.
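
    As an aside, if you only want the resolved addresses and not the full report, dig’s +short option trims the output down to just the answers:

    $ dig +short google.com
    108.177.122.113
    108.177.122.139
    108.177.122.138
    108.177.122.102
    108.177.122.101
    108.177.122.100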

    But this example only applies to externally located hosts (e.g. the servers running websites like google.com, youtube.com, and amazon.com). What if you wanted to create Domain Names and the corresponding DNS records for hosts/services running on your local network? This is where you would want a locally running DNS server, with entries composed of Domain Names pointing to local IP addresses on your local network.

    Getting Started

    CoreDNS Config

    To start with, we need to create the config file for CoreDNS. This file is called a Corefile, and an example can be seen below:

    $ mkdir -p local-dns/root
    $ cd local-dns
    $ cat <<EOF > root/Corefile
    .:53 {
        forward . 8.8.8.8 9.9.9.9
        log
        errors
    }
    
    example.com:53 {
        file /root/db.example
        log
        errors
    }
    EOF
    

    First we start by using mkdir to create the directories to store our config files. Then we use the cat command and a heredoc to write the Corefile. The first section of this Corefile (i.e. the .:53) corresponds to all DNS queries that do not match any of the other domains listed in the config file (in this instance, the example.com domain listed underneath). What we are saying with this first section is “any query that does not directly match any other domain listed in this Corefile, forward that query to the name servers at 8.8.8.8 and 9.9.9.9”. These are Google’s and Quad9’s public DNS servers respectively. This works out well because not all of our DNS queries are going to correspond to local domains; we still need to send queries for external domains out on the WAN (i.e. the internet).

    If you have not already guessed, the last section specifically matches any DNS queries for the domain example.com. Here you are telling the CoreDNS server to reference the file located at /root/db.example for every DNS query that specifies example.com. You can use whatever domain names you want here, and as many as you want, but you will need a separate db.* file for each domain you reference in this Corefile:

    foobarbaz.org:53 {
        file /root/db.foobarbaz
        log
        errors
    }
    
    helloworld.com:53 {
        file /root/db.helloworld
        log
        errors
    }
    

    In the above example we created rules to add to the Corefile for the domains foobarbaz.org and helloworld.com. Notice we have uniquely named db.* files, db.foobarbaz and db.helloworld respectively.

    DNS Zone File

    Back to our original example, let us now dive into the db.example file we referenced in the Corefile above, and look at how it can be set up to work:

    $ cat <<'EOF' > root/db.example
    $ORIGIN example.com.  ; designates the start of this zone file in the namespace
    $TTL 1h               ; default expiration time of all resource records without their own TTL value
    @                 IN  SOA     ns.example.com. rtiger.example.com. (
                                      2020010510     ; Serial
                                      1d             ; Refresh
                                      2h             ; Retry
                                      4w             ; Expire
                                      1h)            ; Minimum TTL
    @                 IN  A       192.168.1.20       ; Local IPv4 address for example.com.
    @                 IN  NS      ns.example.com.    ; Name server for example.com.
    ns                IN  CNAME   @                  ; Alias for name server (points to example.com.)
    webblog           IN  CNAME   @                  ; Alias for webblog.example.com
    netprint          IN  CNAME   @                  ; Alias for netprint.example.com
    EOF

    We will leave an extensive explanation of the zone file to outside references, and will instead attempt a brief explanation of the above values. (Note the quoted heredoc delimiter 'EOF' above: it prevents the shell from expanding $ORIGIN and $TTL before they reach the file.)

    As you can see, at the top of the file is the $ORIGIN keyword, which merely sets the global origin domain name (in this case example.com). The $ORIGIN value can then be referenced using the @ symbol throughout the zone file; you can see the @ symbol standing in for the example.com domain name in various records. The $TTL value simply sets the default expiration time for any resource record that does not specify its own TTL.
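
    For instance, with the $ORIGIN above in effect, the following two records are equivalent ways of writing the same thing:

    @                 IN  A       192.168.1.20       ; shorthand using @
    example.com.      IN  A       192.168.1.20       ; fully written out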

    Next we see several resource records (also shortened to RR) starting directly below the $TTL value:

    $ORIGIN example.com.  ; designates the start of this zone file in the namespace
    $TTL 1h               ; default expiration time of all resource records without their own TTL value
    
    ; =============================== Resource Records ==============================
    
    @                 IN  SOA     ns.example.com. rtiger.example.com. (
                                      2020010510     ; Serial
                                      1d             ; Refresh
                                      2h             ; Retry
                                      4w             ; Expire
                                      1h)            ; Minimum TTL
    @                 IN  A       192.168.1.20       ; Local IPv4 address for example.com.
    @                 IN  NS      ns.example.com.    ; Name server for example.com.
    ns                IN  CNAME   @                  ; Alias for ns.example.com
    webblog           IN  CNAME   @                  ; Alias for webblog.example.com
    netprint          IN  CNAME   @                  ; Alias for netprint.example.com
    

    What is important to understand here is that we have an address record with the label A that connects the domain name example.com to the IP address 192.168.1.20. This is the actual address for example.com. The other resource records, labeled CNAME, are what are known as canonical name records. Here we use some shorthand to alias the subdomains ns, webblog, and netprint (which correspond to ns.example.com, webblog.example.com, and netprint.example.com respectively) to example.com (again using the @ symbol as a reference). Because all of these subdomains point to the same IP address as example.com, we can simply point them at the example.com domain. This makes management much easier: should we need to change the IP address associated with example.com, we will not have multiple entries to change.

    Finally we have the NS and SOA resource records. NS stands for name server, and the record allows us to declare a name server (in this case the ns.example.com domain, which is actually an alias, or CNAME, for example.com). The SOA record, short for Start of Authority, is the only record specifically required by the zone file. This record has a few different features compared to the previously described A, CNAME, and NS records. It requires a name server address (in this case the ns.example.com domain name) that hosts the A record for the $ORIGIN domain (in this case example.com). In this specific example the name server is the same server hosting this zone file, which is why the name server in this example is merely an alias to example.com. It also requires an administrator contact (in this case rtiger.example.com, where the first label stands in for the user portion of the email address rtiger@example.com) and various time-related data (seen above with comments to the right).
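
    Later, once the server is up (see the next section), you can inspect the SOA record directly by asking dig for that record type explicitly:

    $ dig @192.168.1.20 example.com SOA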

    Docker Deploy

    Finally we are ready to deploy our local DNS server. We can deploy it simply as shown below:

    $ docker run -d \
                 --name coredns \
                 -v ~/local-dns/root/:/root \
                 -p 53:53/udp \
                 coredns/coredns -conf /root/Corefile
    

    To clarify, we are mounting the directory we created earlier, ~/local-dns/root, so CoreDNS can find the config files; be aware that if your path is different you will need to change this value to the location of your Corefile/db.* files. Now we need to test it to make sure it is working.
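
    Before testing queries, you can quickly check that the container started cleanly by tailing its logs (the log plugin we enabled in the Corefile will also print each incoming query here):

    $ docker logs coredns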

    Using Local DNS Server

    Testing

    To begin using our server, we first need to test whether our DNS requests will work. This will involve our good friend, once again, dig. Let us start by querying the server using its local IP address and the example.com domain:

    $ dig @192.168.1.20 example.com
    
    ; <<>> DiG 9.10.6 <<>> @192.168.1.20 example.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50252
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
    ;; WARNING: recursion requested but not available
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;example.com.                   IN      A
    
    ;; ANSWER SECTION:
    example.com.            3600    IN      A       192.168.1.20
    
    ;; AUTHORITY SECTION:
    example.com.            3600    IN      NS      ns.example.com.
    
    ;; Query time: 41 msec
    ;; SERVER: 192.168.1.20#53(192.168.1.20)
    ;; WHEN: Sun Jan 05 18:52:10 EST 2020
    ;; MSG SIZE  rcvd: 106
    

    We can see in the ANSWER SECTION of the dig report that the A record for example.com points to 192.168.1.20 just like we defined in the zone file. We can also see that the AUTHORITY SECTION reports an NS record with ns.example.com as the name server, just like we defined in the zone file. What about our other records for webblog.example.com and netprint.example.com:

    $ dig @192.168.1.20 webblog.example.com netprint.example.com
    ; <<>> DiG 9.10.6 <<>> @192.168.1.20 webblog.example.com netprint.example.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37312
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 1
    ;; WARNING: recursion requested but not available
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;webblog.example.com.           IN      A
    
    ;; ANSWER SECTION:
    webblog.example.com.    3600    IN      CNAME   example.com.
    example.com.            3600    IN      A       192.168.1.20
    
    ;; AUTHORITY SECTION:
    example.com.            3600    IN      NS      ns.example.com.
    
    ;; Query time: 552 msec
    ;; SERVER: 192.168.1.20#53(192.168.1.20)
    ;; WHEN: Sun Jan 05 18:56:02 EST 2020
    ;; MSG SIZE  rcvd: 158
    
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50787
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 1
    ;; WARNING: recursion requested but not available
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;netprint.example.com.          IN      A
    
    ;; ANSWER SECTION:
    netprint.example.com.   3600    IN      CNAME   example.com.
    example.com.            3600    IN      A       192.168.1.20
    
    ;; AUTHORITY SECTION:
    example.com.            3600    IN      NS      ns.example.com.
    
    ;; Query time: 10 msec
    ;; SERVER: 192.168.1.20#53(192.168.1.20)
    ;; WHEN: Sun Jan 05 18:56:02 EST 2020
    ;; MSG SIZE  rcvd: 160
    
    

    We see two reports generated, both containing CNAME records for webblog.example.com and netprint.example.com respectively. Our local DNS server is working!!!

    macOS DNS Resolver

    In this section we cover how to set up your macOS computer to use the local DNS server we have just successfully deployed and tested. We will be using a macOS-specific command line tool called networksetup. With this tool we will set the name servers used for a specific interface (in this case Wi-Fi):

    $ networksetup -setdnsservers Wi-Fi 192.168.1.20
    

    This will immediately point all your DNS traffic to the CoreDNS server we deployed on host 192.168.1.20.
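
    You can confirm the change took effect using the -getdnsservers subcommand (it should now list our server):

    $ networksetup -getdnsservers Wi-Fi
    192.168.1.20

    To reset the name servers used to the default: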

    $ networksetup -setdnsservers Wi-Fi empty
    

    Now we are back to the original DNS servers being used. Confirm this with the following:

    $ networksetup -getdnsservers Wi-Fi
    There aren't any DNS Servers set on Wi-Fi.
    

    The above response is the default response for macOS (i.e. the state before we changed it to use our local DNS server). Note: These commands were performed in macOS Mojave.

  • Damn I Love Docker: NGINX Reverse Proxy

    Synopsis

    In this tutorial we cover how to set up a reverse proxy using NGINX running in a docker container. The emphasis will be primarily on the use of a dockerized NGINX server running as a reverse proxy to allow for hosting multiple websites/web apps behind a single IP address.

    Introduction

    Running multiple websites/web applications from the same physical IP address (i.e. from your home network) is a common issue many face when deploying their websites/web apps for the first time. Basically, you need something that will sit in front of your websites/web application servers and route traffic to them based on the Host field of the request.

    This is sometimes referred to as an edge router, and it is exactly the kind of task NGINX is suited for (Note: see Traefik for a more advanced reverse proxy purpose-built for edge routing). While tools like Traefik have been developed more recently and more specifically for this problem, NGINX is a minimal viable option and possibly simpler to set up for your initial reverse proxy needs.

    Getting Started

    Below we cover the different scenarios for setting up a reverse proxy on a docker host.

    Docker Single Host

    To clarify, we will be showing how to set up a reverse proxy using docker on a single host NOT in swarm mode. The setup for a single-host docker environment will not distribute appropriately on a docker swarm because we are using volumes.

    A good example of how to set this up can be found in the github.com/RagingTiger/request-router repository. Here is the gist of the info shared:

    Configure

    Begin by creating the directories to store your NGINX config files (these can be anywhere you like that is accessible from the file system):

    $ mkdir -p ~/reverse-proxy/config ~/reverse-proxy/conf.d
    

    These two directories, config and conf.d, are where we will store the config file for the NGINX server itself and the domain routing configs for the backend services that sit behind the reverse proxy.

    Next add the config file:

    $ cd ~/reverse-proxy/
    $ cat <<'EOF' > config/nginx.conf
    user  nginx;
    worker_processes  auto;
    
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    
    events {
        worker_connections  1024;
    }
    
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
        server_names_hash_bucket_size 512;
    
        include /etc/nginx/conf.d/*.conf;
    }
    EOF
    

    Here we use the cat command and a heredoc to create the nginx.conf file in the reverse-proxy/config/ directory. Note the quoted heredoc delimiter ('EOF'), which prevents the shell from expanding the $ variables in the log_format directive before they reach the file. This completes the configuration of the NGINX server. Now we will add some example proxy config files to the conf.d directory.

    The proxy config example given in the github repository simply shows how you can write a configuration for sending traffic with a specific host name or domain name to a specific internal address:

    $ cat <<EOF > conf.d/example.conf
    server {
        listen       80;
        server_name  example.com;
    
        location / {
            proxy_pass http://192.168.1.20:8080;
        }
    }
    EOF
    

    There are a few points worth mentioning about this config file. The listen directive simply tells NGINX that for any traffic it receives on port 80, it should check whether the host name or domain name matches the server_name given in this configuration block. In this example, any request that this NGINX server receives on port 80 (the default HTTP port) is parsed to see if the host/domain name matches the server_name example.com. If a match is found, the NGINX server will route that request to the URL and port given by proxy_pass (seen above in the location / sub-block of the server block). Hence traffic for example.com will be sent to the local address http://192.168.1.20 on port 8080.

    Deploy: Simple

    The simplest setup is to deploy a container built with a stable version of NGINX (nginx:1.15.8-alpine at the time of this publication):

    # run docker container in daemon mode (i.e. -d option)
    $ docker run -d \
                 --name=reverse-proxy \
                 -v ~/reverse-proxy/config/nginx.conf:/etc/nginx/nginx.conf \
                 -v ~/reverse-proxy/conf.d/:/etc/nginx/conf.d/ \
                 -p 80:80 \
                 nginx:1.15.8-alpine
    

    Now simply add a proxy config file to the reverse-proxy/conf.d/ directory with a unique name (e.g. andrewsblog.conf or webdevsite.conf) as follows:

    $ cat <<EOF > conf.d/webblog.conf
    server {
        listen       80;
        server_name  thelifeofsarah.com;
    
        location / {
            proxy_pass http://192.168.1.25:5000;
        }
    }
    EOF
    $ docker restart reverse-proxy
    

    Notice that after creating the webblog.conf config file we restart the NGINX docker container using docker restart .... This simply restarts NGINX so that it will read the new configuration file. Now you should be able to send traffic to the reverse proxy, and if it gets requests with thelifeofsarah.com in the Host header it will route them to http://192.168.1.25:5000.
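
    As an aside, if you would rather not restart the whole container, NGINX can also reload its configuration in place:

    $ docker exec reverse-proxy nginx -s reload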

    Deploy: Advanced

    This version of the deploy follows the same steps above, except it specifically addresses situations where you want to run your reverse proxy on the same physical docker host as your websites/web applications. This allows us to do some work with docker network commands.

    But why work with docker network commands? Well there are a few benefits worth pointing out:

    • DNS using container names (e.g. --name webapp)
    • Network isolation

    Creating Docker Network

    Let us start by creating a docker network named proxy:

    $ docker network create proxy
    

    This will create a network called proxy using the default bridge driver. See the docker network documentation for more information.
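
    You can inspect the newly created network (its driver and, later, its attached containers) at any time:

    $ docker network inspect proxy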

    Connecting Web Apps

    Now we can add our website container to the network using a docker image called foobar/blog (NOTE: this is not a real image):

    $ docker run -d \
                 --name=blog \
                 --network proxy \
                 foobar/blog
    

    Imagine this foobar/blog is the image for your website/web app. If you look at the options given, we passed the proxy network to the docker run command with the --network option. This will connect the blog container to the proxy network. (NOTE: by default the web server in this example runs on port 5000; keep that in mind for the reverse proxy config.)

    Pay attention to the name we gave the container, blog, as this name can now be used to send traffic directly to the blog container by name from any other container on the proxy network. This will come in handy in the next section.
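
    To see this container-name DNS in action, you can attach a throwaway container to the same network and ping the blog container by name (the small busybox image is used here purely for illustration):

    $ docker run --rm --network proxy busybox ping -c 1 blog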

    Connecting Reverse Proxy

    Remember the configuration file we wrote in the earlier Deploy: Simple section? We are going to rewrite it here utilizing the built-in Docker DNS feature.

    $ cd ~/reverse-proxy/
    $ cat <<EOF > conf.d/webblog.conf
    server {
        listen       80;
        server_name  thelifeofsarah.com;
    
        location / {
            proxy_pass http://blog:5000; ## <--- Now using container name as IP/DN
        }
    }
    EOF

    So basically docker allows you to use the name of a container (which you can specify with the --name option) to send traffic to that container. In this case, we need to configure NGINX to send traffic to our blog container, and we can use the name of that container (i.e. blog) as the local domain name for that container on the proxy network (along with port 5000, which the web server in this example binds to by default).

    Next we are going to start our reverse proxy passing this proxy network as a command line argument using the --network option:

    $ docker run -d \
                 --name=reverse-proxy \
                 -v ~/reverse-proxy/config/nginx.conf:/etc/nginx/nginx.conf \
                 -v ~/reverse-proxy/conf.d/:/etc/nginx/conf.d/ \
                 -p 80:80 \
                 --network proxy \
                 nginx:1.15.8-alpine
    

    Note: If you already have a container from the previous section running with the name reverse-proxy, simply stop the container and remove it like so:

    $ docker stop reverse-proxy && docker rm reverse-proxy
    

    Now everything should be networked together. No one can access your blog directly; they can only send traffic to the reverse-proxy container listening on port 80. The reverse proxy then forwards that traffic over the internal network (named proxy) that is shared by both the blog and reverse-proxy containers.

    Testing: Simple

    With everything up and running you might ask yourself “Well how do I know if this whole setup is working?” … to which we offer the following answer:

    $ curl -H "Host: example.com" SERVER_IP_ADDRESS
    

    curl (also known as cURL) is a well-known command line tool and is exactly the right tool for this job. Continuing with the thelifeofsarah.com example for the webblog above, here is how we can test that domain:

    $ curl -H "Host: thelifeofsarah.com" 192.168.1.25
    

    To clarify, we are substituting SERVER_IP_ADDRESS with 192.168.1.25, which is the physical address of the host machine (i.e. where our reverse proxy is running). We are also setting the Host header (using the -H flag in curl) to Host: thelifeofsarah.com. So what does this all do?

    When we execute the command, curl will send a GET request to the server at 192.168.1.25 (by default on port 80) with the amended Host header set to thelifeofsarah.com. Our reverse proxy will receive the request on port 80 (again, by default) and check the Host header, where it will see that it has a configuration for the server_name thelifeofsarah.com, and will immediately pass the request (i.e. proxy it) to the address in its configuration (e.g. http://192.168.1.25:5000 as shown in the Deploy: Simple section, or http://blog:5000 in the Deploy: Advanced section).
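
    If the response is not what you expect, adding curl’s -v flag prints the full request/response exchange, including the Host header being sent, which makes debugging the proxy configuration much easier:

    $ curl -v -H "Host: thelifeofsarah.com" 192.168.1.25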

    Testing: Advanced

    While curl is certainly an efficient and simple way to test your reverse proxy infrastructure (including the websites running behind the reverse proxy), sometimes it would be nice to be able to browse the websites using the domain names you have assigned them in the reverse proxy. But do you really have to register them with a domain name registrar to test? No. This is where running your own local DNS server comes in handy.

    We have written a complete guide for running your own local DNS server in the blog post Damn I Love Docker: Local DNS With CoreDNS. Following this guide will allow you to set up your own local DNS server (running in Docker, of course), and to create DNS records for the domain names you listed in the config files for the NGINX reverse proxy (i.e. in the server_name section). When your browser resolves these domain names, your request will be routed to the host running your reverse proxy, which will in turn check the request’s Host field and route the request to the correct website.

  • Damn I Love Docker: Docker Swarm Initial Setup

    Introduction

    A brief walkthrough on setting up your first docker swarm.

    Getting Started

    It may go without saying that you need to have docker already installed before proceeding, but assuming docker has been installed on all hosts, creating a swarm is trivial. First choose the initial manager node (this is simply the node that you want to manage the other nodes in the cluster). If you are working with a single-node cluster (i.e. one host machine) then that will be your manager.

    Login to the initial manager node and execute the following:

    $ docker swarm init
    
    Swarm initialized: current node (y4vkoat0deioflw53cb2ie0ij) is now a manager.
    
    To add a worker to this swarm, run the following command:
    
        docker swarm join --token SWMTKN-1-1ce9a31d1c3b4b1ce9a31d1c3b4bdf52a1aea30359c712-5cggaqt83z5b26w17lceb7if9 192.168.100.20:2377
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    
    

    Basically, the output of the command is telling you that it has created the swarm and set the current node as manager. It also tells you to run the following command on the worker nodes:

    $ docker swarm join --token SWMTKN-1-1ce9a31d1c3b4b1ce9a31d1c3b4bdf52a1aea30359c712-5cggaqt83z5b26w17lceb7if9 192.168.100.20:2377
    

    Every node (i.e. host machine) on your network that you run the above command on will join the swarm you just created and print out the following:

    This node joined a swarm as a worker.
    

    This lets you know that you have successfully added a new worker node to the cluster. (NOTE: if you have a single node cluster this is not necessary as the managing node is the only node in the cluster).

    The last bit of information you are given in the above docker swarm init command tells you how to add more managers to the cluster:

    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    

    With this your initial cluster is complete.
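
    You can verify the membership and status of the cluster at any time by running the following on a manager node:

    $ docker node ls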

  • Damn I Love Docker: Docker Swarm Metrics

    Introduction

    Here we discuss one way to monitor the nodes in a docker swarm cluster using Prometheus, Grafana, and node-exporter.

    Metrics Stack

    NOTE: This guide assumes you already have a docker swarm set up.

    Node-Exporter Service

    The following command will create a docker service on your docker swarm running the node-exporter application from the Prometheus project. Note that we bind-mount the host’s root filesystem read-only at /host, which is what the --path.rootfs=/host flag expects (this mount is our addition here; adjust it if your environment handles host access differently):

    docker service create --mode global \
                          --name=metrics_node-exporter \
                          --network=host \
                          --mount type=bind,source=/,target=/host,readonly \
                          prom/node-exporter \
                          --path.rootfs=/host
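
    Once the service is running, every node in the swarm should be serving metrics on port 9100 (the node-exporter default, which the Prometheus targets below assume). A quick check against one of the nodes:

    $ curl -s http://192.168.100.30:9100/metrics | head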
    

    Prometheus/Grafana Stack

    The following command will deploy a docker stack composed of two docker services: Prometheus and Grafana. Before we look at how to create the docker stack, we need to look at the config file for Prometheus:

    $ cat prometheus_grafana/prometheus.yml
    global:
      scrape_interval: 1m
      scrape_timeout: 10s
      evaluation_interval: 1m
    
    scrape_configs:
      - job_name: node-exporter
        static_configs:
          - targets:
            - 192.168.100.30:9100
            - 192.168.100.31:9100
            - 192.168.100.32:9100
    

    The scrape_configs section points Prometheus at the node-exporter instances listening on each swarm node (port 9100 is node-exporter’s default). Next we will write a stack file, which has the same syntax as a docker-compose file:

    $ cat prometheus_grafana/metrics-stack.yml
    
    version: "3.7"
    
    volumes:
      prometheus-data:
      grafana-data:
    
    configs:
      prometheus-config:
        file: ./prometheus.yml
    
    services:
      prometheus:
        deploy:
          placement:
            constraints:
              - node.role == manager
        hostname: prometheus
        configs:
          - source: prometheus-config
            target: /prometheus.yml
        volumes:
          - prometheus-data:/prometheus
        ports:
          - 9090:9090
        image: prom/prometheus
        command: [
          --config.file, /prometheus.yml,
          --storage.tsdb.path, /prometheus
        ]
    
      grafana:
        deploy:
          placement:
            constraints:
              - node.role == manager
        hostname: grafana
        volumes:
          - grafana-data:/var/lib/grafana
        ports:
          - 3000:3000
        image: grafana/grafana
    
    

    Finally we can deploy the stack as follows:

    $ docker stack deploy -c prometheus_grafana/metrics-stack.yml metrics
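
    Once deployed, you can check on the stack’s services (and their published ports, e.g. Prometheus on 9090 and Grafana on 3000) with:

    $ docker stack services metrics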
    

  • Damn I Love Docker: Miscellaneous Docker Tips

    Introduction

    While there have been many posts written on this blog discussing the various applications of Docker, there is a need to discuss some of the “finer” aspects of using docker. In this post we will introduce and discuss some of the “lesser known” options, commands, and features related to Docker.

    Sync Container and Host Time

    This is the best way I have found to sync the time between host and container (for logging purposes):

    -v /etc/localtime:/etc/localtime:ro
    

    Simply mount your /etc/localtime into the container as a read-only volume. This will ensure that the host machine running the Docker Engine and the containers run by the engine all have the same time.
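
    A quick way to see the effect is to compare the host’s time with a container’s time (the small busybox image is used here for illustration); both commands should print the same local time:

    $ date
    $ docker run --rm -v /etc/localtime:/etc/localtime:ro busybox date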

    Container Restart Policy

    After enough time working with docker, experimenting with different options and commands, and even running important applications in docker, you will eventually want to know how to “restart” a container automatically. This is all handled with the --restart option.

    For example, I run a dockerized OpenVPN server on my home network, and I want to make sure that it is ALWAYS RUNNING. This makes sense, because I may be in situations where I need to access the VPN, but due to a power failure at some point, the server has been restarted. Without the --restart option set, the container running my VPN is not automatically restarted … This would be a serious pain in the &$$, if you catch my drift. But with the --restart option, this can be alleviated like so:

    docker create --restart=always \
                  --name=ovpn \
                  -v /etc/localtime:/etc/localtime:ro \
                  -v $OVPN_DATA:/etc/openvpn \
                  -p 1194:1194/udp \
                  --cap-add=NET_ADMIN \
                  tigerj/rpi-ovpn
    

    To clarify, the above command comes from a previous tutorial on dockerized OpenVPN, and if you look towards the beginning of the command you will see the --restart=always option. As you might imagine, this ensures that your container will (per the documentation) “Always restart the container if it stops.” Other restart policies (e.g. on-failure, unless-stopped) are covered in the docker documentation.
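
    You can confirm the policy on an existing container with docker inspect and a Go template:

    $ docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' ovpn
    always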

    envsubst

    The envsubst command is a tool that parses a file, searches for shell-style variables, replaces them with their corresponding environment variable values, and then prints the original file with the values substituted.

    Below is an example NGINX reverse proxy configuration:

    $ cat reverse_proxy.tmpl
    server {
      listen ${LISTEN_PORT};
      listen [::]:${LISTEN_PORT};
    
      server_name ${SERVNAME};
    
      location / {
          proxy_pass ${DOMAIN}:${DOMAIN_PORT};
      }
    }
    

    If we set and export the shell variables as follows (envsubst only sees exported environment variables):

    $ export LISTEN_PORT=80 SERVNAME=yourdomain.com DOMAIN=localhost DOMAIN_PORT=5000
    

    And then run envsubst:

    $ envsubst < reverse_proxy.tmpl > reverse_proxy.conf
    

    Then we will have a config file with our variable values substituted in:

    $ cat reverse_proxy.conf
    server {
      listen 80;
      listen [::]:80;
    
      server_name yourdomain.com;
    
      location / {
          proxy_pass localhost:5000;
      }
    }
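
    One caveat worth knowing: envsubst substitutes every variable it finds, which is a problem for NGINX configs that reference NGINX’s own runtime variables (e.g. $host). To guard against this, you can pass envsubst a shell-format string listing exactly which variables to substitute, leaving everything else untouched:

    $ envsubst '$LISTEN_PORT $SERVNAME $DOMAIN $DOMAIN_PORT' < reverse_proxy.tmpl > reverse_proxy.conf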
    
