
      How To Use Traefik v2 as a Reverse Proxy for Docker Containers on Ubuntu 20.04


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy. This is because you only want to expose ports 80 and 443 to the rest of the world.

      Traefik is a Docker-aware reverse proxy that includes a monitoring dashboard. Traefik v1 has been widely used for a while, and you can follow this earlier tutorial to install and configure Traefik v1. But in this tutorial, you’ll install and configure Traefik v2, which includes quite a few differences.

      The biggest difference between Traefik v1 and v2 is that frontends and backends were removed and their combined functionality spread out across routers, middlewares, and services. Previously a backend did the job of making modifications to requests and getting that request to whatever was supposed to handle it. Traefik v2 provides more separation of concerns by introducing middlewares that can modify requests before sending them to a service. Middlewares make it easier to specify a single modification step that might be used by a lot of different routes so that they can be reused (such as HTTP Basic Auth, which you’ll see later). A router can also use many different middlewares.

      In this tutorial you’ll configure Traefik v2 to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You’ll configure Traefik to serve everything over HTTPS using Let’s Encrypt.

      Prerequisites

      To complete this tutorial, you will need the following:

      Step 1 — Configuring and Running Traefik

      The Traefik project has an official Docker image, so you will use that to run Traefik in a Docker container.

      But before you get your Traefik container up and running, you need to create a configuration file and set up an encrypted password so you can access the monitoring dashboard.

      You’ll use the htpasswd utility to create this encrypted password. First, install the utility, which is included in the apache2-utils package:

      • sudo apt-get install apache2-utils

      Then generate the password with htpasswd. Substitute secure_password with the password you’d like to use for the Traefik admin user:

      • htpasswd -nb admin secure_password

      The output from the program will look like this:

      Output

      admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/

      You’ll use this output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy the entire output line so you can paste it later.

      To configure the Traefik server, you’ll create two new configuration files called traefik.toml and traefik_dynamic.toml using the TOML format. TOML is a configuration language similar to INI files, but standardized. These files let you configure the Traefik server and the various integrations, or providers, that you want to use. In this tutorial, you will use three of Traefik’s available providers: api, docker, and acme. The last of these, acme, supports TLS certificates using Let’s Encrypt.

      Create and open traefik.toml using nano or your preferred text editor:

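      If you’re following along with nano, that would be:

      • nano traefik.toml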
      First, you want to specify the ports that Traefik should listen on using the entryPoints section of your config file. You want two entry points because you want to listen on both port 80 and port 443. Let’s call these web (port 80) and websecure (port 443).

      Add the following configurations:

      traefik.toml

      [entryPoints]
        [entryPoints.web]
          address = ":80"
          [entryPoints.web.http.redirections.entryPoint]
            to = "websecure"
            scheme = "https"
      
        [entryPoints.websecure]
          address = ":443"
      

      Note that you are also automatically redirecting traffic to be handled over TLS.

      Next, configure the Traefik api, which gives you access to both the API and your dashboard interface. The heading of [api] is all that you need because the dashboard is then enabled by default, but you’ll be explicit for the time being.

      Add the following code:

      traefik.toml

      ...
      [api]
        dashboard = true
      

      To finish securing your web requests you want to use Let’s Encrypt to generate valid TLS certificates. Traefik v2 supports Let’s Encrypt out of the box and you can configure it by creating a certificates resolver of the type acme.

      Let’s configure your certificates resolver now using the name lets-encrypt:

      traefik.toml

      ...
      [certificatesResolvers.lets-encrypt.acme]
        email = "your_email@your_domain"
        storage = "acme.json"
        [certificatesResolvers.lets-encrypt.acme.tlsChallenge]
      

      This section is called acme because ACME is the name of the protocol used to communicate with Let’s Encrypt to manage certificates. The Let’s Encrypt service requires registration with a valid email address, so to have Traefik generate certificates for your hosts, set the email key to your email address. You then specify that you will store the information that you will receive from Let’s Encrypt in a JSON file called acme.json.

      The acme.tlsChallenge section lets you specify how Let’s Encrypt will verify that you control the domains it issues certificates for. You’re configuring Traefik to complete the TLS challenge over port 443.

      Finally, you need to configure Traefik to work with Docker.

      Add the following configurations:

      traefik.toml

      ...
      [providers.docker]
        watch = true
        network = "web"
      

      The docker provider enables Traefik to act as a proxy in front of Docker containers. You’ve configured the provider to watch for new containers on the web network, which you’ll create soon.

      Our final configuration uses the file provider. With Traefik v2, static and dynamic configurations can’t be mixed and matched. To get around this, you will use traefik.toml to define your static configurations and then keep your dynamic configurations in another file, which you will call traefik_dynamic.toml. Here you are using the file provider to tell Traefik that it should read in dynamic configurations from a different file.

      Add the following file provider:

      traefik.toml

      ...
      [providers.file]
        filename = "traefik_dynamic.toml"

      Your completed traefik.toml will look like this:

      traefik.toml

      [entryPoints]
        [entryPoints.web]
          address = ":80"
          [entryPoints.web.http.redirections.entryPoint]
            to = "websecure"
            scheme = "https"
      
        [entryPoints.websecure]
          address = ":443"
      
      [api]
        dashboard = true
      
      [certificatesResolvers.lets-encrypt.acme]
        email = "your_email@your_domain"
        storage = "acme.json"
        [certificatesResolvers.lets-encrypt.acme.tlsChallenge]
      
      [providers.docker]
        watch = true
        network = "web"
      
      [providers.file]
        filename = "traefik_dynamic.toml"
      

      Save and close the file.

      Now let’s create traefik_dynamic.toml.

      The dynamic configuration values that you need to keep in their own file are the middlewares and the routers. To put your dashboard behind a password you need to customize the API’s router and configure a middleware to handle HTTP basic authentication. Let’s start by setting up the middleware.

      The middleware is configured on a per-protocol basis and since you’re working with HTTP you’ll specify it as a section chained off of http.middlewares. Next comes the name of your middleware so that you can reference it later, followed by the type of middleware that it is, which will be basicAuth in this case. Let’s call your middleware simpleAuth.

      Create and open a new file called traefik_dynamic.toml:

      • nano traefik_dynamic.toml

      Add the following code. This is where you’ll paste the output from the htpasswd command:

      traefik_dynamic.toml

      [http.middlewares.simpleAuth.basicAuth]
        users = [
          "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
        ]
      

      To configure the router for the api you’ll once again be chaining off of the protocol name, but instead of using http.middlewares, you’ll use http.routers followed by the name of the router. In this case, the api provides its own named router that you can configure by using the [http.routers.api] section. You’ll configure the domain that you plan on using with your dashboard also by setting the rule key using a host match, the entrypoint to use websecure, and the middlewares to include simpleAuth.

      Add the following configurations:

      traefik_dynamic.toml

      ...
      [http.routers.api]
        rule = "Host(`monitor.your_domain`)"
        entrypoints = ["websecure"]
        middlewares = ["simpleAuth"]
        service = "api@internal"
        [http.routers.api.tls]
          certResolver = "lets-encrypt"
      

      The web entry point handles port 80, while the websecure entry point uses port 443 for TLS/SSL. You automatically redirect all of the traffic on port 80 to the websecure entry point to force secure connections for all requests.

      Notice the last three lines here configure a service, enable tls, and configure certResolver to "lets-encrypt". Services are the final step to determining where a request is finally handled. The api@internal service is a built-in service that sits behind the API that you expose. Just like routers and middlewares, services can be configured in this file, but you won’t need to do that to achieve your desired result.

      Your completed traefik_dynamic.toml file will look like this:

      traefik_dynamic.toml

      [http.middlewares.simpleAuth.basicAuth]
        users = [
          "admin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/"
        ]
      
      [http.routers.api]
        rule = "Host(`monitor.your_domain`)"
        entrypoints = ["websecure"]
        middlewares = ["simpleAuth"]
        service = "api@internal"
        [http.routers.api.tls]
          certResolver = "lets-encrypt"
      

      Save the file and exit the editor.

      With these configurations in place, you will now start Traefik.

      Step 2 – Running the Traefik Container

      In this step you will create a Docker network for the proxy to share with containers. You will then access the Traefik dashboard. The Docker network is necessary so that you can use it with applications that are run using Docker Compose.

      Create a new Docker network called web:

      • docker network create web

      When the Traefik container starts, you will add it to this network. Then you can add additional containers to this network later for Traefik to proxy to.

      Next, create an empty file that will hold your Let’s Encrypt information. You’ll share this into the container so Traefik can use it:

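      An empty file created with touch works here:

      • touch acme.json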
      Traefik will only be able to use this file if the root user inside the container has exclusive read and write access to it. To do this, lock down the permissions on acme.json so that only the owner of the file has read and write permission.

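      A mode of 600 gives exactly that:

      • chmod 600 acme.json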
      Once the file gets passed to Docker, the owner will automatically change to the root user inside the container.

      Finally, create the Traefik container with this command:

      • docker run -d \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v $PWD/traefik.toml:/traefik.toml \
        -v $PWD/traefik_dynamic.toml:/traefik_dynamic.toml \
        -v $PWD/acme.json:/acme.json \
        -p 80:80 \
        -p 443:443 \
        --network web \
        --name traefik \
        traefik:v2.2

      This command is a little long. Let’s break it down.

      You use the -d flag to run the container in the background as a daemon. You then share your docker.sock file into the container so that the Traefik process can listen for changes to containers. You also share the traefik.toml and traefik_dynamic.toml configuration files into the container, as well as acme.json.

      Next, you map ports :80 and :443 of your Docker host to the same ports in the Traefik container so Traefik receives all HTTP and HTTPS traffic to the server.

      You set the network of the container to web, and you name the container traefik.

      Finally, you use the traefik:v2.2 image for this container so that you can guarantee that you’re not running a completely different version than this tutorial is written for.

      A Docker image’s ENTRYPOINT is a command that always runs when a container is created from the image. In this case, the command is the traefik binary within the container. You can pass additional arguments to that command when you launch the container, but you’ve configured all of your settings in the traefik.toml file.

      With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the routers, services, and middlewares that Traefik has registered. You can try to access the monitoring dashboard by pointing your browser to https://monitor.your_domain/dashboard/ (the trailing / is required).

      You will be prompted for your username and password, which are admin and the password you configured in Step 1.

      Once logged in, you’ll see the Traefik interface:

      Empty Traefik dashboard

      You will notice that there are already some routers and services registered, but those are the ones that come with Traefik and the router configuration that you wrote for the API.

      You now have your Traefik proxy running, and you’ve configured it to work with Docker and monitor other containers. In the next step you will start some containers for Traefik to proxy.

      Step 3 — Registering Containers with Traefik

      With the Traefik container running, you’re ready to run applications behind it. Let’s launch the following containers behind Traefik:

      1. A blog using the official WordPress image.
      2. A database management server using the official Adminer image.

      You’ll manage both of these applications with Docker Compose using a docker-compose.yml file.

      Create and open the docker-compose.yml file in your editor:

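      For example, with nano:

      • nano docker-compose.yml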
      Add the following lines to the file to specify the version and the networks you’ll use:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      

      You use Docker Compose version 3 because it’s the newest major version of the Compose file format.

      For Traefik to recognize your applications, they must be part of the same network, and since you created the network manually, you pull it in by specifying the network name of web and setting external to true. Then you define another network so that you can connect your exposed containers to a database container that you won’t expose through Traefik. You’ll call this network internal.

      Next, you’ll define each of your services, one at a time. Let’s start with the blog container, which you’ll base on the official WordPress image. Add this configuration to the bottom of the file:

      docker-compose.yml

      ...
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.http.routers.blog.rule=Host(`blog.your_domain`)
            - traefik.http.routers.blog.tls=true
            - traefik.http.routers.blog.tls.certresolver=lets-encrypt
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      The environment key lets you specify environment variables that will be set inside of the container. By not setting a value for WORDPRESS_DB_PASSWORD, you’re telling Docker Compose to get the value from your shell and pass it through when you create the container. You will define this environment variable in your shell before starting the containers. This way you don’t hard-code passwords into the configuration file.

      The labels section is where you specify configuration values for Traefik. Docker labels don’t do anything by themselves, but Traefik reads these so it knows how to treat containers. Here’s what each of these labels does:

      • traefik.http.routers.blog.rule=Host(`blog.your_domain`) creates a new router for your container and then specifies the routing rule used to determine if a request matches this container.
      • traefik.http.routers.blog.tls=true specifies that this router should use TLS.
      • traefik.http.routers.blog.tls.certresolver=lets-encrypt specifies that the certificates resolver that you created earlier called lets-encrypt should be used to get a certificate for this route.
      • traefik.port specifies the exposed port that Traefik should use to route traffic to this container.

      With this configuration, all traffic sent to your Docker host on port 80 or 443 with the domain of blog.your_domain will be routed to the blog container.

      You assign this container to two different networks so that Traefik can find it via the web network and it can communicate with the database container through the internal network.

      Lastly, the depends_on key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, you must run your mysql container before starting your blog container.

      Next, configure the MySQL service:

      docker-compose.yml

      services:
      ...
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
      

      You’re using the official MySQL 5.7 image for this container. You’ll notice that you’re once again using an environment item without a value. The MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD variables will need to be set to the same value to make sure that your WordPress container can communicate with the MySQL database. You don’t want to expose the mysql container to Traefik or the outside world, so you’re only assigning this container to the internal network. Since Traefik has access to the Docker socket, the process will still expose a router for the mysql container by default, so you’ll add the label traefik.enable=false to specify that Traefik should not expose this container.

      Finally, define the Adminer container:

      docker-compose.yml

      services:
      ...
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`)
            - traefik.http.routers.adminer.tls=true
            - traefik.http.routers.adminer.tls.certresolver=lets-encrypt
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      This container is based on the official Adminer image. The network and depends_on configuration for this container exactly match what you’re using for the blog container.

      The line traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`) tells Traefik to examine the host requested. If it matches the pattern of db-admin.your_domain, Traefik will route the traffic to the adminer container over port 8080.

      Your completed docker-compose.yml file will look like this:

      docker-compose.yml

      version: "3"
      
      networks:
        web:
          external: true
        internal:
          external: false
      
      services:
        blog:
          image: wordpress:4.9.8-apache
          environment:
            WORDPRESS_DB_PASSWORD:
          labels:
            - traefik.http.routers.blog.rule=Host(`blog.your_domain`)
            - traefik.http.routers.blog.tls=true
            - traefik.http.routers.blog.tls.certresolver=lets-encrypt
            - traefik.port=80
          networks:
            - internal
            - web
          depends_on:
            - mysql
      
        mysql:
          image: mysql:5.7
          environment:
            MYSQL_ROOT_PASSWORD:
          networks:
            - internal
          labels:
            - traefik.enable=false
      
        adminer:
          image: adminer:4.6.3-standalone
          labels:
            - traefik.http.routers.adminer.rule=Host(`db-admin.your_domain`)
            - traefik.http.routers.adminer.tls=true
            - traefik.http.routers.adminer.tls.certresolver=lets-encrypt
            - traefik.port=8080
          networks:
            - internal
            - web
          depends_on:
            - mysql
      

      Save the file and exit the text editor.

      Next, set values in your shell for the WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD variables:

      • export WORDPRESS_DB_PASSWORD=secure_database_password
      • export MYSQL_ROOT_PASSWORD=secure_database_password

      Substitute secure_database_password with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD and MYSQL_ROOT_PASSWORD.

      With these variables set, run the containers using docker-compose:

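      To keep them running in the background, detached mode is the usual choice:

      • docker-compose up -d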
      Now watch the Traefik admin dashboard while it populates.

      Populated Traefik dashboard

      If you explore the Routers section you will find routers for adminer and blog configured with TLS:

      HTTP Routers w/ TLS

      Navigate to blog.your_domain, substituting your_domain with your domain. You’ll be redirected to a TLS connection and you can now complete the WordPress setup:

      WordPress setup screen

      Now access Adminer by visiting db-admin.your_domain in your browser, again substituting your_domain with your domain. The mysql container isn’t exposed to the outside world, but the adminer container has access to it through the internal Docker network that they share using the mysql container name as a hostname.

      On the Adminer login screen, enter root for Username, enter mysql for Server, and enter the value you set for MYSQL_ROOT_PASSWORD for the Password. Leave Database empty. Now press Login.

      Once logged in, you’ll see the Adminer user interface.

      Adminer connected to the MySQL database

      Both sites are now working, and you can use the dashboard at monitor.your_domain to keep an eye on your applications.

      Conclusion

      In this tutorial, you configured Traefik v2 to proxy requests to other applications in Docker containers.

      Traefik’s declarative configuration at the application container level makes it easy to configure more services, and there’s no need to restart the traefik container when you add new applications to proxy traffic to since Traefik notices the changes immediately through the Docker socket file it’s monitoring.

      To learn more about what you can do with Traefik v2, head over to the official Traefik documentation.




      How To Install and Use Docker on Ubuntu 20.04


      Introduction

      Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They’re similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

      For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

      In this tutorial, you’ll install and use Docker Community Edition (CE) on Ubuntu 20.04. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

      Prerequisites

      To follow this tutorial, you will need the following:

      • One Ubuntu 20.04 server set up by following the Ubuntu 20.04 initial server setup guide, including a sudo non-root user and a firewall.
      • An account on Docker Hub if you wish to create your own images and push them to Docker Hub, as shown in Steps 7 and 8.

      Step 1 — Installing Docker

      The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

      First, update your existing list of packages:

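      On Ubuntu, that’s:

      • sudo apt update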
      Next, install a few prerequisite packages which let apt use packages over HTTPS:

      • sudo apt install apt-transport-https ca-certificates curl software-properties-common

      Then add the GPG key for the official Docker repository to your system:

      • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

      Add the Docker repository to APT sources:

      • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"

      Next, update the package database with the Docker packages from the newly added repo:

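      Run the package-list update once more:

      • sudo apt update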
      Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:

      • apt-cache policy docker-ce

      You’ll see output like this, although the version number for Docker may be different:

      Output of apt-cache policy docker-ce

      docker-ce:
        Installed: (none)
        Candidate: 5:19.03.9~3-0~ubuntu-focal
        Version table:
           5:19.03.9~3-0~ubuntu-focal 500
              500 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages
      

      Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 20.04 (focal).

      Finally, install Docker:

      • sudo apt install docker-ce

      Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

      • sudo systemctl status docker

      The output should be similar to the following, showing that the service is active and running:

      Output

      ● docker.service - Docker Application Container Engine
           Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
           Active: active (running) since Tue 2020-05-19 17:00:41 UTC; 17s ago
      TriggeredBy: ● docker.socket
             Docs: https://docs.docker.com
         Main PID: 24321 (dockerd)
            Tasks: 8
           Memory: 46.4M
           CGroup: /system.slice/docker.service
                   └─24321 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

      Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial.

      Step 2 — Executing the Docker Command Without Sudo (Optional)

      By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker’s installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you’ll get output like this:

      Output

      docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.

      If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

      • sudo usermod -aG docker ${USER}

      To apply the new group membership, log out of the server and back in, or type the following:

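      For example, to re-log the current user without leaving the session:

      • su - ${USER}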
      You will be prompted to enter your user’s password to continue.

      Confirm that your user is now added to the docker group by typing:

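      For example:

      • id -nG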
      Output

      sammy sudo docker

      If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

      • sudo usermod -aG docker username

      From this point on, this article assumes that you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

      Let’s explore the docker command next.

      Step 3 — Using the Docker Command

      Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

      • docker [option] [command] [arguments]

      To view all available subcommands, type:

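      Running the client with no arguments lists them:

      • docker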
      As of Docker 19, the complete list of available subcommands includes:

      Output

      attach      Attach local standard input, output, and error streams to a running container
      build       Build an image from a Dockerfile
      commit      Create a new image from a container's changes
      cp          Copy files/folders between a container and the local filesystem
      create      Create a new container
      diff        Inspect changes to files or directories on a container's filesystem
      events      Get real time events from the server
      exec        Run a command in a running container
      export      Export a container's filesystem as a tar archive
      history     Show the history of an image
      images      List images
      import      Import the contents from a tarball to create a filesystem image
      info        Display system-wide information
      inspect     Return low-level information on Docker objects
      kill        Kill one or more running containers
      load        Load an image from a tar archive or STDIN
      login       Log in to a Docker registry
      logout      Log out from a Docker registry
      logs        Fetch the logs of a container
      pause       Pause all processes within one or more containers
      port        List port mappings or a specific mapping for the container
      ps          List containers
      pull        Pull an image or a repository from a registry
      push        Push an image or a repository to a registry
      rename      Rename a container
      restart     Restart one or more containers
      rm          Remove one or more containers
      rmi         Remove one or more images
      run         Run a command in a new container
      save        Save one or more images to a tar archive (streamed to STDOUT by default)
      search      Search the Docker Hub for images
      start       Start one or more stopped containers
      stats       Display a live stream of container(s) resource usage statistics
      stop        Stop one or more running containers
      tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
      top         Display the running processes of a container
      unpause     Unpause all processes within one or more containers
      update      Update configuration of one or more containers
      version     Show the Docker version information
      wait        Block until one or more containers stop, then print their exit codes

      To view the options available to a specific command, type:

      • docker docker-subcommand --help

      To view system-wide information about Docker, use:

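      That’s the info subcommand:

      • docker info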
      Let’s explore some of these commands. We’ll start by working with images.

      Step 4 — Working with Docker Images

      Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need will have images hosted there.

      To check whether you can access and download images from Docker Hub, type:

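      The hello-world image is the usual smoke test:

      • docker run hello-world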
      The output will indicate that Docker is working correctly:

      Output

      Unable to find image 'hello-world:latest' locally
      latest: Pulling from library/hello-world
      0e03bdcc26d7: Pull complete
      Digest: sha256:6a65f928fb91fcfbc963f7aa6d57c8eeb426ad9a20c7ee045538ef34847f44f1
      Status: Downloaded newer image for hello-world:latest

      Hello from Docker!
      This message shows that your installation appears to be working correctly.
      ...

      Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and the application within the container executed, displaying the message.

      You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

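      Using the search subcommand:

      • docker search ubuntu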
      The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:

      Output

      NAME                             DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
      ubuntu                           Ubuntu is a Debian-based Linux operating sys…   10908     [OK]
      dorowu/ubuntu-desktop-lxde-vnc   Docker image to provide HTML5 VNC interface …   428                  [OK]
      rastasheep/ubuntu-sshd           Dockerized SSH service, built on top of offi…   244                  [OK]
      consol/ubuntu-xfce-vnc           Ubuntu container with "headless" VNC session…   218                  [OK]
      ubuntu-upstart                   Upstart is an event-based replacement for th…   108       [OK]
      ansible/ubuntu14.04-ansible      Ubuntu 14.04 LTS with ...

      In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you’ve identified the image that you would like to use, you can download it to your computer using the pull subcommand.

      Execute the following command to download the official ubuntu image to your computer:

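      With the pull subcommand, that’s:

      • docker pull ubuntu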
      You’ll see the following output:

      Output

      Using default tag: latest
      latest: Pulling from library/ubuntu
      d51af753c3d3: Pull complete
      fc878cd0a91c: Pull complete
      6154df8ff988: Pull complete
      fee5db0ff82f: Pull complete
      Digest: sha256:747d2dbbaaee995098c9792d99bd333c6783ce56150d1b11e333bbceed5c54d7
      Status: Downloaded newer image for ubuntu:latest
      docker.io/library/ubuntu:latest

      After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

      To see the images that have been downloaded to your computer, type:

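      The images subcommand lists them:

      • docker images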
      The output will look similar to the following:

      Output

      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      ubuntu              latest              1d622ef86b13        3 weeks ago         73.9MB
      hello-world         latest              bf756fb1ae65        4 months ago        13.3kB

      As you’ll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

      Let’s look at how to run containers in more detail.

      Step 5 — Running a Docker Container

      The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

      As an example, let’s run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

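      Combined with the run subcommand, that’s:

      • docker run -it ubuntu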
      Your command prompt should change to reflect the fact that you’re now working inside the container and should take this form:

      Output

      root@d9b100f2f636:/#

      Note the container id in the command prompt. In this example, it is d9b100f2f636. You’ll need that container ID later to identify the container when you want to remove it.

      Now you can run any command inside the container. For example, let’s update the package database inside the container. You don’t need to prefix any command with sudo, because you’re operating inside the container as the root user:

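      Inside the container, run:

      • apt update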
      Then install any application in it. Let’s install Node.js:

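      From the Ubuntu repositories, that’s:

      • apt install nodejs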
      This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

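      Printing the version is a quick check:

      • node -v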
      You’ll see the version number displayed in your terminal:

      Output

      v10.19.0

      Any changes you make inside the container only apply to that container.

      To exit the container, type exit at the prompt.

      Next, let’s look at managing the containers on our system.

      Step 6 — Managing Docker Containers

      After using Docker for a while, you’ll have many active (running) and inactive containers on your computer. To view the active ones, use:

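      That’s the ps subcommand:

      • docker ps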
      You will see output similar to the following:

      Output

      CONTAINER ID IMAGE COMMAND CREATED

      In this tutorial, you started two containers: one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

      To view all containers — active and inactive, run docker ps with the -a switch:

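      That is:

      • docker ps -a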
      You’ll see output similar to this:

      Output

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
      1c08a7a0d0e4        ubuntu              "/bin/bash"         2 minutes ago       Exited (0) 8 seconds ago                        quizzical_mcnulty
      a707221a5f6c        hello-world         "/hello"            6 minutes ago       Exited (0) 6 minutes ago                        youthful_curie
      
      

      To view the latest container you created, pass it the -l switch:

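      Again with docker ps:

      • docker ps -l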
      Output

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
      1c08a7a0d0e4        ubuntu              "/bin/bash"         2 minutes ago       Exited (0) 40 seconds ago                       quizzical_mcnulty

      To start a stopped container, use docker start, followed by the container ID or the container’s name. Let’s start the Ubuntu-based container with the ID of 1c08a7a0d0e4:

      • docker start 1c08a7a0d0e4

      The container will start, and you can use docker ps to see its status:

      Output

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
      1c08a7a0d0e4        ubuntu              "/bin/bash"         3 minutes ago       Up 5 seconds                            quizzical_mcnulty

      To stop a running container, use docker stop, followed by the container ID or name. This time, we’ll use the name that Docker assigned the container, which is quizzical_mcnulty:

      • docker stop quizzical_mcnulty

      Once you’ve decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image and remove it.

      You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it’s stopped. See the docker run help command for more information on these options and others.

      Containers can be turned into images which you can use to build new containers. Let’s look at how that works.

      Step 7 — Committing Changes in a Container to a Docker Image

      When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

      This section shows you how to save the state of a container as a new Docker image.

      After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. But you might want to reuse this Node.js container as the basis for new images later.

      Then commit the changes to a new Docker image instance using the following command.

      • docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

      The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

      For example, for the user sammy, with the container ID of d9b100f2f636, the command would be:

      • docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs

      When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you’ll learn how to push an image to a Docker registry like Docker Hub so others can access it.

      Listing the Docker images again will show the new image, as well as the old one that it was derived from:

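      Run the same listing command as before:

      • docker images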
      You will see output like this:

      Output

      REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
      sammy/ubuntu-nodejs      latest              7c1f35226ca6        7 seconds ago       179MB
      ...

      In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that NodeJS was installed. So next time you need to run a container using Ubuntu with NodeJS pre-installed, you can just use the new image.

      You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that’s outside the scope of this tutorial.

      Now let’s share the new image with others so they can create containers from it.

      Step 8 — Pushing Docker Images to a Docker Repository

      The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

      This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.

      To push your image, first log into Docker Hub:

      • docker login -u docker-registry-username

      You’ll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.

      Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:

      • docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs

      Then you may push your own image using:

      • docker push docker-registry-username/docker-image-name

      To push the ubuntu-nodejs image to the sammy repository, the command would be:

      • docker push sammy/ubuntu-nodejs

      The process may take some time to complete as it uploads the image, but when completed, the output will look like this:

      Output

      The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
      e3fbbfb44187: Pushed
      5f70bf18a086: Pushed
      a3b5c80a4eba: Pushed
      7f18b442972b: Pushed
      3ce512daaf78: Pushed
      7aae4540b42d: Pushed
      ...

      After pushing an image to a registry, it will be listed on your account’s dashboard, as shown in the image below.

      New Docker image listing on Docker Hub

      If a push attempt results in an error of this sort, then you likely did not log in:

      Output

      The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
      e3fbbfb44187: Preparing
      5f70bf18a086: Preparing
      a3b5c80a4eba: Preparing
      7f18b442972b: Preparing
      3ce512daaf78: Preparing
      7aae4540b42d: Waiting
      unauthorized: authentication required

      Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

      You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.

      Conclusion

      In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.




      How To Automate Jenkins Setup with Docker and Jenkins Configuration as Code


      The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Jenkins is one of the most popular open-source automation servers, often used to orchestrate continuous integration (CI) and/or continuous deployment (CD) workflows.

      Configuring Jenkins is typically done manually through a web-based setup wizard; this can be a slow, error-prone, and non-scalable process. You can see the steps involved by following Step 4 — Setting Up Jenkins of the How To Install Jenkins on Ubuntu 18.04 guide. Furthermore, configurations cannot be tracked in a version control system (VCS) like Git, nor be under the scrutiny of any code review process.

      In this tutorial, you will automate the installation and configuration of Jenkins using Docker and the Jenkins Configuration as Code (JCasC) method.

      Jenkins uses a pluggable architecture to provide most of its functionality. JCasC makes use of the Configuration as Code plugin, which allows you to define the desired state of your Jenkins configuration as one or more YAML file(s), eliminating the need for the setup wizard. On initialization, the Configuration as Code plugin would configure Jenkins according to the configuration file(s), greatly reducing the configuration time and eliminating human errors.

      Docker is the de facto standard for creating and running containers, a virtualization technology that allows you to run isolated, self-contained applications consistently across different operating systems (OSes) and hardware architectures. You will run your Jenkins instance using Docker to take advantage of this consistency and cross-platform capability.

      This tutorial starts by guiding you through setting up JCasC. You will then incrementally add to the JCasC configuration file to set up users, configuration authentication and authorization, and finally to secure your Jenkins instance. After you’ve completed this tutorial, you’ll have created a custom Docker image that is set up to use the Configuration as Code plugin on startup to automatically configure and secure your Jenkins instance.

      Prerequisites

      To complete this tutorial, you will need:

      • Access to a server with at least 2GB of RAM and Docker installed. This can be your local development machine, a Droplet, or any kind of server. Follow Step 1 — Installing Docker from one of the tutorials in the How to Install and Use Docker collection to set up Docker.

      Note: This tutorial is tested on Ubuntu 18.04; however, because Docker images are self-contained, the steps outlined here would work for any OS with Docker installed.

      Step 1 — Disabling the Setup Wizard

      Using JCasC eliminates the need to show the setup wizard; therefore, in this first step, you’ll create a modified version of the official jenkins/jenkins image that has the setup wizard disabled. You will do this by creating a Dockerfile and building a custom Jenkins image from it.

      The jenkins/jenkins image allows you to enable or disable the setup wizard by passing in a system property named jenkins.install.runSetupWizard via the JAVA_OPTS environment variable. Users of the image can pass in the JAVA_OPTS environment variable at runtime using the --env flag to docker run. However, this approach would put the onus of disabling the setup wizard on the user of the image. Instead, you should disable the setup wizard at build time, so that the setup wizard is disabled by default.

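      For comparison, that runtime approach would look something like the following, though you won’t use it in this tutorial:

      • docker run --env JAVA_OPTS=-Djenkins.install.runSetupWizard=false -p 8080:8080 jenkins/jenkins:latest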
      You can achieve this by creating a Dockerfile and using the ENV instruction to set the JAVA_OPTS environment variable.

      First, create a new directory inside your server to store the files you will be creating in this tutorial:

      • mkdir -p $HOME/playground/jcasc

      Then, navigate inside that directory:

      • cd $HOME/playground/jcasc

      Next, using your editor, create a new file named Dockerfile:

      • nano $HOME/playground/jcasc/Dockerfile

      Then, copy the following content into the Dockerfile:

      ~/playground/jcasc/Dockerfile

      FROM jenkins/jenkins:latest
      ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
      

      Here, you’re using the FROM instruction to specify jenkins/jenkins:latest as the base image, and the ENV instruction to set the JAVA_OPTS environment variable.

      Save the file and exit the editor by pressing CTRL+X followed by Y.

      With these modifications in place, build a new custom Docker image and assign it a unique tag (we’ll use jcasc here):

      • docker build -t jenkins:jcasc .

      You will see output similar to the following:

      Output

      Sending build context to Docker daemon  2.048kB
      Step 1/2 : FROM jenkins/jenkins:latest
       ---> 1f4b0aaa986e
      Step 2/2 : ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
       ---> 7566b15547af
      Successfully built 7566b15547af
      Successfully tagged jenkins:jcasc

      Once built, run your custom image by running docker run:

      • docker run --name jenkins --rm -p 8080:8080 jenkins:jcasc

      You used the --name jenkins option to give your container an easy-to-remember name; otherwise a random hexadecimal ID would be used instead (e.g. f1d701324553). You also specified the --rm flag so the container will automatically be removed after you’ve stopped the container process. Lastly, you’ve configured your server host’s port 8080 to proxy to the container’s port 8080 using the -p flag; 8080 is the default port where the Jenkins web UI is served from.

      Jenkins will take a short period of time to initiate. When Jenkins is ready, you will see the following line in the output:

      Output

      ... hudson.WebAppMain$3#run: Jenkins is fully up and running

      Now, open up your browser to server_ip:8080. You’re immediately shown the dashboard without the setup wizard.

      The Jenkins dashboard

      You have just confirmed that the setup wizard has been disabled. To clean up, stop the container by pressing CTRL+C. If you’ve specified the --rm flag earlier, the stopped container would automatically be removed.

      In this step, you’ve created a custom Jenkins image that has the setup wizard disabled. However, the top right of the web interface now shows a red notification icon indicating there are issues with the setup. Click on the icon to see the details.

      The Jenkins dashboard showing issues

      The first warning informs you that you have not configured the Jenkins URL. The second tells you that you haven’t configured any authentication and authorization schemes, and that anonymous users have full permissions to perform all actions on your Jenkins instance. Previously, the setup wizard guided you through addressing these issues. Now that you’ve disabled it, you need to replicate the same functions using JCasC. The rest of this tutorial will involve modifying your Dockerfile and JCasC configuration until no more issues remain (that is, until the red notification icon disappears).

      In the next step, you will begin that process by pre-installing a selection of Jenkins plugins, including the Configuration as Code plugin, into your custom Jenkins image.

      Step 2 — Installing Jenkins Plugins

      To use JCasC, you need to install the Configuration as Code plugin. Currently, no plugins are installed. You can confirm this by navigating to http://server_ip:8080/pluginManager/installed.

      Jenkins dashboard showing no plugins are installed

      In this step, you’re going to modify your Dockerfile to pre-install a selection of plugins, including the Configuration as Code plugin.

      To automate the plugin installation process, you can make use of an installation script that comes with the jenkins/jenkins Docker image. You can find it inside the container at /usr/local/bin/install-plugins.sh. To use it, you would need to:

      • Create a text file containing a list of plugins to install
      • Copy it into the Docker image
      • Run the install-plugins.sh script to install the plugins

      First, using your editor, create a new file named plugins.txt:

      • nano $HOME/playground/jcasc/plugins.txt

      Then, add in the following newline-separated list of plugin names and versions (using the format <id>:<version>):

      ~/playground/jcasc/plugins.txt

      ant:latest
      antisamy-markup-formatter:latest
      build-timeout:latest
      cloudbees-folder:latest
      configuration-as-code:latest
      credentials-binding:latest
      email-ext:latest
      git:latest
      github-branch-source:latest
      gradle:latest
      ldap:latest
      mailer:latest
      matrix-auth:latest
      pam-auth:latest
      pipeline-github-lib:latest
      pipeline-stage-view:latest
      ssh-slaves:latest
      timestamper:latest
      workflow-aggregator:latest
      ws-cleanup:latest
      

      Save the file and exit your editor.

      The list contains the Configuration as Code plugin, as well as all the plugins suggested by the setup wizard (correct as of Jenkins v2.251). For example, you have the Git plugin, which allows Jenkins to work with Git repositories; you also have the Pipeline plugin, which is actually a suite of plugins that allows you to define Jenkins jobs as code.

      Note: The most up-to-date list of suggested plugins can be inferred from the source code. You can also find a list of the most popular community-contributed plugins at plugins.jenkins.io. Feel free to include any other plugins you want into the list.

      Next, open up the Dockerfile file:

      • nano $HOME/playground/jcasc/Dockerfile

      In it, add a COPY instruction to copy the plugins.txt file into the /usr/share/jenkins/ref/ directory inside the image; this is where Jenkins normally looks for plugins. Then, include an additional RUN instruction to run the install-plugins.sh script:

      ~/playground/jcasc/Dockerfile

      FROM jenkins/jenkins:latest
      ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
      COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
      RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
      

      Save the file and exit the editor. Then, build a new image using the revised Dockerfile:

      • docker build -t jenkins:jcasc .

      This step involves downloading and installing many plugins into the image, and may take some time to run depending on your internet connection. Once the plugins have finished installing, run the new Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 jenkins:jcasc

      After the Jenkins is fully up and running message appears on stdout, navigate to server_ip:8080/pluginManager/installed to see a list of installed plugins. You will see a solid checkbox next to all the plugins you’ve specified inside plugins.txt, as well as a faded checkbox next to plugins that were installed as dependencies of those plugins.

      A list of installed plugins
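      If you'd rather verify from the command line, one option (while the container is still running and named jenkins, as in the command above) is to list the plugin files that Jenkins copied into its home directory; each installed plugin appears as a .jpi file, so you should see an entry such as configuration-as-code.jpi in the output:

      • docker exec jenkins ls /var/jenkins_home/plugins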

      Once you’ve confirmed that the Configuration As Code plugin is installed, terminate the container process by pressing CTRL+C.

      In this step, you’ve installed all the suggested Jenkins plugins and the Configuration as Code plugin. You’re now ready to use JCasC to tackle the issues listed in the notification box. In the next step, you will fix the first issue, which warns you that the Jenkins root URL is empty.

      Step 3 — Specifying the Jenkins URL

      The Jenkins URL is a URL for the Jenkins instance that is routable from the devices that need to access it. For example, if you're deploying Jenkins as a node inside a private network, the Jenkins URL may be a private IP address, or a DNS name that is resolvable using a private DNS server. For this tutorial, it is sufficient to use the server’s IP address (or 127.0.0.1 if you’re running Jenkins locally) to form the Jenkins URL.

      You can set the Jenkins URL on the web interface by navigating to server_ip:8080/configure and entering the value in the Jenkins URL field under the Jenkins Location heading. Here’s how to achieve the same using the Configuration as Code plugin:

      1. Define the desired configuration of your Jenkins instance inside a declarative configuration file (which we’ll call casc.yaml).
      2. Copy the configuration file into the Docker image (just as you did for your plugins.txt file).
      3. Set the CASC_JENKINS_CONFIG environment variable to the path of the configuration file to instruct the Configuration as Code plugin to read it.

      First, create a new file named casc.yaml:

      • nano $HOME/playground/jcasc/casc.yaml

      Then, add in the following lines:

      ~/playground/jcasc/casc.yaml

      unclassified:
        location:
          url: http://server_ip:8080/
      

      unclassified.location.url is the path for setting the Jenkins URL. It is just one of many properties that can be set with JCasC. Valid properties are determined by the plugins that are installed. For example, the jenkins.authorizationStrategy.globalMatrix.permissions property would only be valid if the Matrix Authorization Strategy plugin is installed. To see which properties are available, navigate to server_ip:8080/configuration-as-code/reference, and you’ll find a page of documentation that is customized to your particular Jenkins installation.
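      As a quick illustration of what else the same file can hold, here is a hedged sketch using two properties that come from Jenkins core (verify them against your own reference page before relying on them); they set a banner message on the dashboard and the number of executors on the controller:

      jenkins:
        systemMessage: "Jenkins configured automatically by JCasC"
        numExecutors: 2
      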

      Save the casc.yaml file, exit your editor, and open the Dockerfile file:

      • nano $HOME/playground/jcasc/Dockerfile

      Add a COPY instruction to the end of your Dockerfile that copies the casc.yaml file into the image at /var/jenkins_home/casc.yaml. You’ve chosen /var/jenkins_home/ because that’s the default directory where Jenkins stores all of its data:

      ~/playground/jcasc/Dockerfile

      FROM jenkins/jenkins:latest
      ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
      COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
      RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
      COPY casc.yaml /var/jenkins_home/casc.yaml
      

      Then, add a further ENV instruction that sets the CASC_JENKINS_CONFIG environment variable:

      ~/playground/jcasc/Dockerfile

      FROM jenkins/jenkins:latest
      ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
      ENV CASC_JENKINS_CONFIG /var/jenkins_home/casc.yaml
      COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
      RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
      COPY casc.yaml /var/jenkins_home/casc.yaml
      

      Note: You’ve put the ENV instruction near the top because it’s something you are unlikely to change. By placing it before the COPY and RUN instructions, you keep the rarely changing layers early in the image, so that updates to casc.yaml or plugins.txt invalidate as few cached layers as possible.
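      During development, rebuilding the image for every tweak to casc.yaml can feel slow. One optional shortcut, sketched here rather than required by the tutorial, is to bind-mount your working copy of the file over the baked-in one once you have built the image, so that a simple container restart picks up your latest changes:

      • docker run --name jenkins --rm -p 8080:8080 -v $HOME/playground/jcasc/casc.yaml:/var/jenkins_home/casc.yaml jenkins:jcasc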

      Save the file and exit the editor. Next, build the image:

      • docker build -t jenkins:jcasc .

      And run the updated Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 jenkins:jcasc

      As soon as the Jenkins is fully up and running log line appears, navigate to server_ip:8080 to view the dashboard. This time, you will notice that the notification count has dropped by one, and the warning about the Jenkins URL has disappeared.

      Jenkins Dashboard showing the notification counter has a count of 1

      Now, navigate to server_ip:8080/configure and scroll down to the Jenkins URL field. Confirm that the Jenkins URL has been set to the same value specified in the casc.yaml file.

      Lastly, stop the container process by pressing CTRL+C.

      In this step, you used the Configuration as Code plugin to set the Jenkins URL. In the next step, you will tackle the second issue from the notifications list (the Jenkins is currently unsecured message).

      Step 4 — Creating a User

      So far, your setup has not implemented any authentication and authorization mechanisms. In this step, you will set up a basic, password-based authentication scheme and create a new user named admin.

      Start by opening your casc.yaml file:

      • nano $HOME/playground/jcasc/casc.yaml

      Then, add in the highlighted snippet:

      ~/playground/jcasc/casc.yaml

      jenkins:
        securityRealm:
          local:
            allowsSignup: false
            users:
             - id: ${JENKINS_ADMIN_ID}
               password: ${JENKINS_ADMIN_PASSWORD}
      unclassified:
        ...
      

      In the context of Jenkins, a security realm is simply an authentication mechanism; the local security realm uses basic authentication, where users must specify their ID/username and password. Other security realms exist and are provided by plugins. For instance, the LDAP plugin allows you to use an existing LDAP directory service as the authentication mechanism. The GitHub Authentication plugin allows you to use your GitHub credentials to authenticate via OAuth.

      Note that you’ve also specified allowsSignup: false, which prevents anonymous users from creating an account through the web interface.

      Finally, instead of hard-coding the user ID and password, you are using variables whose values can be filled in at runtime. This is important because one of the benefits of using JCasC is that the casc.yaml file can be committed into source control; if you were to store user passwords in plaintext inside the configuration file, you would have effectively compromised the credentials. Instead, variables are defined using the ${VARIABLE_NAME} syntax, and their values can be filled in using an environment variable of the same name, or a file of the same name placed inside the /run/secrets/ directory within the container image.
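      To illustrate the second option, here is a hedged sketch of supplying the password as a file under /run/secrets/ once you have built the updated image in the next command; the ./secrets/JENKINS_ADMIN_PASSWORD host path is purely hypothetical, and the file should contain only the password itself:

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin -v $PWD/secrets/JENKINS_ADMIN_PASSWORD:/run/secrets/JENKINS_ADMIN_PASSWORD:ro jenkins:jcasc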

      Next, build a new image to incorporate the changes made to the casc.yaml file:

      • docker build -t jenkins:jcasc .

      Then, run the updated Jenkins image whilst passing in the JENKINS_ADMIN_ID and JENKINS_ADMIN_PASSWORD environment variables via the --env option (replace password in the following command with a password of your choice):

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin --env JENKINS_ADMIN_PASSWORD=password jenkins:jcasc

      You can now go to server_ip:8080/login and log in using the specified credentials.

      Jenkins Login Screen with the user ID and password fields populated

      Once you’ve logged in successfully, you will be redirected to the dashboard.

      Jenkins Dashboard for authenticated user, showing the user ID and a 'log out' link near the top right corner of the page

      Finish this step by pressing CTRL+C to stop the container.

      In this step, you used JCasC to create a new user named admin. You’ve also learned how to keep sensitive data, like passwords, out of files tracked by VCSs. However, so far you’ve only configured user authentication; you haven’t implemented any authorization mechanisms. In the next step, you will use JCasC to grant your admin user administrative privileges.

      Step 5 — Setting Up Authorization

      After setting up the security realm, you must now configure the authorization strategy. In this step, you will use the Matrix Authorization Strategy plugin to configure permissions for your admin user.

      By default, the Jenkins core installation provides us with three authorization strategies:

      • unsecured: every user, including anonymous users, has full permissions to do everything
      • legacy: emulates legacy Jenkins (prior to v1.164), where any user with the role admin is given full permissions, whilst all other users, including anonymous users, are given read access.

      Note: A role in Jenkins can be a user (for example, daniel) or a group (for example, developers)

      • loggedInUsersCanDoAnything: anonymous users are given either no access or read-only access. Authenticated users have full permissions to do everything. By allowing actions only for authenticated users, you are able to have an audit trail of which users performed which actions.

      Note: You can explore other authorization strategies and their related plugins in the documentation; these include plugins that handle both authentication and authorization.
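      For reference, if you did want one of the built-in strategies, the corresponding JCasC block would be small. The following sketch shows the loggedInUsersCanDoAnything strategy; confirm the exact property names on your instance’s /configuration-as-code/reference page before using it:

      jenkins:
        authorizationStrategy:
          loggedInUsersCanDoAnything:
            allowAnonymousRead: false
      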

      All of these authorization strategies are very crude and do not afford granular control over how permissions are set for different users. Instead, you can use the Matrix Authorization Strategy plugin that was already included in your plugins.txt list. This plugin affords you a more granular authorization strategy, and allows you to set user permissions globally, as well as per project/job.

      The Matrix Authorization Strategy plugin allows you to use the jenkins.authorizationStrategy.globalMatrix.permissions JCasC property to set global permissions. To use it, open your casc.yaml file:

      • nano $HOME/playground/jcasc/casc.yaml

      And add in the highlighted snippet:

      ~/playground/jcasc/casc.yaml

      ...
             - id: ${JENKINS_ADMIN_ID}
               password: ${JENKINS_ADMIN_PASSWORD}
        authorizationStrategy:
          globalMatrix:
            permissions:
              - "Overall/Administer:admin"
              - "Overall/Read:authenticated"
      unclassified:
      ...
      

      The globalMatrix property sets global permissions (as opposed to per-project permissions). The permissions property is a list of strings with the format <permission-group>/<permission-name>:<role>. Here, you are granting the Overall/Administer permissions to the admin user. You’re also granting Overall/Read permissions to authenticated, which is a special role that represents all authenticated users. There’s another special role called anonymous, which groups all non-authenticated users together. But since permissions are denied by default, if you don’t want to give anonymous users any permissions, you don’t need to explicitly include an entry for it.
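      The same format extends naturally to finer-grained grants. The sketch below is illustrative only: the developers group is hypothetical, but Job/Read and Job/Build are standard permission names that could be granted in the same list:

            permissions:
              - "Overall/Administer:admin"
              - "Overall/Read:authenticated"
              - "Job/Read:authenticated"
              - "Job/Build:developers"
      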

      Save the casc.yaml file, exit your editor, and build a new image:

      • docker build -t jenkins:jcasc .

      Then, run the updated Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin --env JENKINS_ADMIN_PASSWORD=password jenkins:jcasc

      Wait for the Jenkins is fully up and running log line, and then navigate to server_ip:8080. You will be redirected to the login page. Fill in your credentials and you will be redirected to the main dashboard.

      In this step, you have set up global permissions for your admin user. However, resolving the authorization issue uncovered additional issues that are now shown in the notification menu.

      Jenkins Dashboard showing the notifications menu with two issues

      Therefore, in the next step, you will continue to modify your Docker image, to resolve each issue one by one until none remains.

      Before you continue, stop the container by pressing CTRL+C.

      Step 6 — Setting Up Build Authorization

      The first issue in the notifications list relates to build authorization. By default, all jobs are run as the system user, which has a lot of system privileges. Therefore, a Jenkins user can perform privilege escalation simply by defining and running a malicious job or pipeline; this is insecure.

      Instead, jobs should be run as the same Jenkins user that configured or triggered them. To achieve this, you need to install an additional plugin called the Authorize Project plugin.

      Open plugins.txt:

      • nano $HOME/playground/jcasc/plugins.txt

      And add the highlighted line:

      ~/playground/jcasc/plugins.txt

      ant:latest
      antisamy-markup-formatter:latest
      authorize-project:latest
      build-timeout:latest
      ...
      

      The plugin provides a new build authorization strategy, which you would need to specify in your JCasC configuration. Exit out of the plugins.txt file and open the casc.yaml file:

      • nano $HOME/playground/jcasc/casc.yaml

      Add the highlighted block to your casc.yaml file:

      ~/playground/jcasc/casc.yaml

      ...
              - "Overall/Administer:admin"
              - "Overall/Read:authenticated"
      security:
        queueItemAuthenticator:
          authenticators:
          - global:
              strategy: triggeringUsersAuthorizationStrategy
      unclassified:
      ...
      

      Save the file and exit the editor. Then, build a new image using the modified plugins.txt and casc.yaml files:

      • docker build -t jenkins:jcasc .

      Then, run the updated Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin --env JENKINS_ADMIN_PASSWORD=password jenkins:jcasc

      Wait for the Jenkins is fully up and running log line, then navigate to server_ip:8080/login, fill in your credentials, and arrive at the main dashboard. Open the notification menu, and you will see the issue related to build authentication no longer appears.

      Jenkins dashboard's notification menu showing a single issue related to agent to master security subsystem being turned off

      Stop the container by running CTRL+C before continuing.

      In this step, you have configured Jenkins to run builds using the user that triggered the build, instead of the system user. This eliminates one of the issues in the notifications list. In the next step, you will tackle the next issue related to the Agent to Controller Security Subsystem.

      Step 7 — Enabling Agent to Controller Access Control

      In this tutorial, you have deployed only a single instance of Jenkins, which runs all builds. However, Jenkins supports distributed builds using an agent/controller configuration. The controller is responsible for providing the web UI, exposing an API for clients to send requests to, and co-ordinating builds. The agents are the instances that execute the jobs.

      The benefit of this configuration is that it is more scalable and fault-tolerant. If one of the servers running Jenkins goes down, other instances can take up the extra load.

      However, there may be instances where the agents cannot be trusted by the controller. For example, the OPS team may manage the Jenkins controller, whilst an external contractor manages their own custom-configured Jenkins agent. Without the Agent to Controller Security Subsystem, the agent is able to instruct the controller to execute any actions it requests, which may be undesirable. By enabling Agent to Controller Access Control, you can control which commands and files the agents have access to.

      To enable Agent to Controller Access Control, open the casc.yaml file:

      • nano $HOME/playground/jcasc/casc.yaml

      Then, add the following highlighted lines:

      ~/playground/jcasc/casc.yaml

      ...
              - "Overall/Administer:admin"
              - "Overall/Read:authenticated"
        remotingSecurity:
          enabled: true
      security:
        queueItemAuthenticator:
      ...
      

      Save the file and build a new image:

      • docker build -t jenkins:jcasc .

      Run the updated Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin --env JENKINS_ADMIN_PASSWORD=password jenkins:jcasc

      Navigate to server_ip:8080/login and authenticate as before. When you land on the main dashboard, the notifications menu will not show any more issues.

      Jenkins dashboard showing no issues

      Conclusion

      You’ve now successfully configured a simple Jenkins server using JCasC. Just as the Pipeline plugin enables developers to define their jobs inside a Jenkinsfile, the Configuration as Code plugin enables administrators to define the Jenkins configuration inside a YAML file. Both of these plugins bring Jenkins into closer alignment with the Everything as Code (EaC) paradigm.

      However, getting the JCasC syntax correct can be difficult, and the documentation can be hard to decipher. If you’re stuck and need help, you may find it in the Gitter chat for the plugin.

      Although you have configured the basic settings of Jenkins using JCasC, the new instance does not contain any projects or jobs. To take this even further, explore the Job DSL plugin, which allows us to define projects and jobs as code. What’s more, you can include the Job DSL code inside your JCasC configuration file, and have the projects and jobs created as part of the configuration process.
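      As a taste of what that could look like, here is a hedged sketch: it assumes you have added job-dsl:latest to plugins.txt, and it uses the jobs root element that the Job DSL integration contributes to JCasC. The job name and description are hypothetical:

      jobs:
        - script: >
            job('example-freestyle-job') {
              description('A placeholder job defined entirely as code')
            }
      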


