
      How To Use Ansible with Terraform for Configuration Management


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Ansible is a configuration management tool that executes playbooks, which are lists of customizable actions written in YAML on specified target servers. It can perform all bootstrapping operations, like installing and updating software, creating and removing users, and configuring system services. As such, it is suitable for bringing up servers you deploy using Terraform, which are created blank by default.

      Ansible and Terraform are not competing solutions, because they resolve different phases of infrastructure and software deployment. Terraform allows you to define and create the infrastructure of your system, encompassing the hardware that your applications will run on. Conversely, Ansible configures and deploys software by executing its playbooks on the provided server instances. Running Ansible on the resources Terraform provisioned directly after their creation allows you to make the resources usable for your use case much faster. It also enables easier maintenance and troubleshooting, because all deployed servers will have the same actions applied to them.

      In this tutorial, you’ll deploy Droplets using Terraform, and then immediately after their creation, you’ll bootstrap the Droplets using Ansible. You’ll invoke Ansible directly from Terraform when a resource deploys. You’ll also avoid introducing race conditions using Terraform’s remote-exec and local-exec provisioners in your configuration, which will ensure that the Droplet deployment is fully complete before further setup commences.

      Prerequisites

      Note: This tutorial has specifically been tested with Terraform 0.13.

      Step 1 — Defining Droplets

      In this step, you’ll define the Droplets on which you’ll later run an Ansible playbook, which will set up the Apache web server.

      Assuming you are in the terraform-ansible directory, which you created as part of the prerequisites, you’ll define a Droplet resource, create three copies of it by specifying count, and output their IP addresses. You’ll store the definitions in a file named droplets.tf. Create and open it for editing by running:

      • nano droplets.tf

      Add the following lines:

      ~/terraform-ansible/droplets.tf

      resource "digitalocean_droplet" "web" {
        count  = 3
        image  = "ubuntu-18-04-x64"
        name   = "web-${count.index}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      
        ssh_keys = [
            data.digitalocean_ssh_key.terraform.id
        ]
      }
      
      output "droplet_ip_addresses" {
        value = {
          for droplet in digitalocean_droplet.web:
          droplet.name => droplet.ipv4_address
        }
      }
      

      Here you define a Droplet resource running Ubuntu 18.04 with 1GB of RAM and one vCPU in the fra1 region. Terraform will pull the SSH key you defined in the prerequisites from your account and add it to the provisioned Droplets via the ssh_keys list. Because the count parameter is set to 3, Terraform will deploy three copies of the Droplet. The output block that follows shows the IP addresses of the three Droplets: the for expression traverses the list of Droplets and, for each instance, maps its name to its IPv4 address in the resulting map.

      Save and close the file when you’re done.

      You have now defined the Droplets that Terraform will deploy. In the next step, you’ll write an Ansible playbook that will execute on each of the three deployed Droplets and will deploy the Apache web server. You’ll later go back to the Terraform code and add in the integration with Ansible.

      Step 2 — Writing an Ansible Playbook

      You’ll now create an Ansible playbook that performs the initial server setup tasks, such as creating a new user and upgrading the installed packages. You’ll instruct Ansible on what to do by writing tasks, which are units of action that are executed on target hosts. Tasks can use built-in functions, or specify custom commands to be run. Besides the tasks for the initial setup, you’ll also install the Apache web server and enable its mod_rewrite module.

      Before writing the playbook, ensure that the public and private SSH keys that correspond to the key in your DigitalOcean account are available and accessible on the machine from which you’re running Terraform and Ansible. A typical location for storing them on Linux is ~/.ssh, though you can store them elsewhere.

      Note: On Linux, you’ll need to ensure that the private key file has the appropriate permissions. You can set them by running:

      • chmod 600 your_private_key_location

      You already have a variable for the private key defined, so you’ll only need to add in one for the public key location.

      Open provider.tf for editing by running:

      • nano provider.tf

      Add the pub_key variable definition so that your file matches the following:

      ~/terraform-ansible/provider.tf

      terraform {
        required_providers {
          digitalocean = {
            source = "digitalocean/digitalocean"
            version = "1.22.2"
          }
        }
      }
      
      variable "do_token" {}
      variable "pvt_key" {}
      variable "pub_key" {}
      
      provider "digitalocean" {
        token = var.do_token
      }
      
      data "digitalocean_ssh_key" "terraform" {
        name = "terraform"
      }
      

      When you’re done, save and close the file.

      With the pub_key variable now defined, you’ll start writing the Ansible playbook. You’ll store it in a file called apache-install.yml. Create and open it for editing:

      • nano apache-install.yml

      You’ll be building the playbook gradually. First, you’ll need to define on which hosts the playbook will run, its name, and if the tasks should be run as root. Add the following lines:

      ~/terraform-ansible/apache-install.yml

      - become: yes
        hosts: all
        name: apache-install
      

      By setting become to yes, you instruct Ansible to run commands as the superuser, and by specifying all for hosts, you allow Ansible to run the tasks on any given server—even the ones passed in through the command line, as Terraform does.

      The first task that you’ll add will create a new, non-root user. Append the following task definition to your playbook:

      ~/terraform-ansible/apache-install.yml

      . . .
        tasks:
          - name: Add the user 'sammy' and add it to 'sudo'
            user:
              name: sammy
              group: sudo
      

      You first define a list of tasks and then add a task to it. It will create a user named sammy and grant them superuser access using sudo by adding them to the appropriate group.

      The next task will add your public SSH key to the user, so you’ll be able to connect to the server as that user later on:

      ~/terraform-ansible/apache-install.yml

      . . .
          - name: Add SSH key to 'sammy'
            authorized_key:
              user: sammy
              state: present
              key: "{{ lookup('file', pub_key) }}"
      

      This task will ensure that the public SSH key, which is looked up from a local file, is present on the target. You’ll supply the value for the pub_key variable from Terraform in the next step.

      Once you have set up the user tasks, the next step is to update the software on the Droplet using apt:

      ~/terraform-ansible/apache-install.yml

      . . .
          - name: Update all packages
            apt:
              upgrade: dist
              update_cache: yes
              cache_valid_time: 3600
      

      Once these tasks run, the target Droplet will have the newest versions of the available packages and a non-root user. You can now order the installation of Apache and the mod_rewrite module by appending the following tasks:

      ~/terraform-ansible/apache-install.yml

      . . .
          - name: Install apache2
            apt: name=apache2 update_cache=yes state=latest
      
          - name: Enable mod_rewrite
            apache2_module: name=rewrite state=present
            notify:
              - Restart apache2
      
        handlers:
          - name: Restart apache2
            service: name=apache2 state=restarted
      

      The first task runs the apt package manager to install Apache. The second ensures that the mod_rewrite module is present. After the module is enabled, Apache needs to be restarted, which you can’t request from the task itself. To resolve that, you notify a handler that issues the restart.

      At this point, your playbook will be as follows:

      ~/terraform-ansible/apache-install.yml

      - become: yes
        hosts: all
        name: apache-install
        tasks:
          - name: Add the user 'sammy' and add it to 'sudo'
            user:
              name: sammy
              group: sudo
          - name: Add SSH key to 'sammy'
            authorized_key:
              user: sammy
              state: present
              key: "{{ lookup('file', pub_key) }}"
          - name: Update all packages
            apt:
              upgrade: dist
              update_cache: yes
              cache_valid_time: 3600
          - name: Install apache2
            apt: name=apache2 update_cache=yes state=latest
          - name: Enable mod_rewrite
            apache2_module: name=rewrite state=present
            notify:
              - Restart apache2
      
        handlers:
          - name: Restart apache2
            service: name=apache2 state=restarted
      

      This is all you need to define on the Ansible side, so save and close the playbook. You’ll now modify the Droplet deployment code to execute this playbook when the Droplets have finished provisioning.

      Step 3 — Running Ansible on Deployed Droplets

      Now that you have defined the actions Ansible will take on the target servers, you’ll modify the Terraform configuration to run it upon Droplet creation.

      Terraform offers two provisioners that execute commands: local-exec and remote-exec, which run commands locally or remotely (on the target), respectively. remote-exec requires connection data, such as type and access keys, while local-exec does everything on the machine Terraform is executing on, and so does not require connection information. It’s important to note that local-exec runs immediately after the resource you have defined it for has finished provisioning; therefore, it does not wait for the resource to actually boot up. It runs after the cloud platform acknowledges its presence in the system.

      You’ll now add provisioner definitions to your Droplet to run Ansible after deployment. Open droplets.tf for editing:

      • nano droplets.tf

      Add the highlighted lines:

      ~/terraform-ansible/droplets.tf

      resource "digitalocean_droplet" "web" {
        count  = 3
        image  = "ubuntu-18-04-x64"
        name   = "web-${count.index}"
        region = "fra1"
        size   = "s-1vcpu-1gb"
      
        ssh_keys = [
            data.digitalocean_ssh_key.terraform.id
        ]
      
        provisioner "remote-exec" {
          inline = ["sudo apt update", "sudo apt install python3 -y", "echo Done!"]
      
          connection {
            host        = self.ipv4_address
            type        = "ssh"
            user        = "root"
            private_key = file(var.pvt_key)
          }
        }
      
        provisioner "local-exec" {
          command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u root -i '${self.ipv4_address},' --private-key ${var.pvt_key} -e 'pub_key=${var.pub_key}' apache-install.yml"
        }
      }
      
      output "droplet_ip_addresses" {
        value = {
          for droplet in digitalocean_droplet.web:
          droplet.name => droplet.ipv4_address
        }
      }
      

      Like Terraform, Ansible runs locally and connects to the target servers via SSH. To run it, you define a local-exec provisioner in the Droplet definition that runs the ansible-playbook command. This passes in the username (root), the IP of the current Droplet (retrieved with ${self.ipv4_address}), the SSH public and private key locations, and the playbook file to run (apache-install.yml). Setting the ANSIBLE_HOST_KEY_CHECKING environment variable to False skips host key verification, which would otherwise block the first connection to a server that has not been connected to before.
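
      If you later want to re-run the playbook against an already-provisioned Droplet without going through Terraform, you can run the same command by hand. Here is a sketch using the same placeholders as the rest of this tutorial; the trailing comma after the IP address tells Ansible to treat the value as an inline inventory list rather than an inventory file:

      • ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u root -i 'droplet_ip_address,' --private-key private_key_location -e 'pub_key=public_key_location' apache-install.yml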

      As noted, the local-exec provisioner runs without waiting for the Droplet to become available, so the execution of the playbook could otherwise precede the actual availability of the Droplet. To remedy this, you define the remote-exec provisioner to contain commands to execute on the target server. For remote-exec to execute, the target server must be available; since remote-exec runs before local-exec, the server will be fully initialized by the time Ansible is invoked. python3 comes preinstalled on Ubuntu 18.04, so you can comment out or remove the command as necessary.

      When you’re done making changes, save and close the file.

      Then, deploy the Droplets by running the following command. Remember to replace private_key_location and public_key_location with the locations of your private and public keys respectively:

      • terraform apply -var "do_token=${DO_PAT}" -var "pvt_key=private_key_location" -var "pub_key=public_key_location"

      The output will be long. Your Droplets will be provisioned, and then a connection will be established with each. Next, the remote-exec provisioner will execute and install python3:

      Output

      ...
      digitalocean_droplet.web[1] (remote-exec): Connecting to remote host via SSH...
      digitalocean_droplet.web[1] (remote-exec): Host: ...
      digitalocean_droplet.web[1] (remote-exec): User: root
      digitalocean_droplet.web[1] (remote-exec): Password: false
      digitalocean_droplet.web[1] (remote-exec): Private key: true
      digitalocean_droplet.web[1] (remote-exec): Certificate: false
      digitalocean_droplet.web[1] (remote-exec): SSH Agent: false
      digitalocean_droplet.web[1] (remote-exec): Checking Host Key: false
      digitalocean_droplet.web[1] (remote-exec): Connected!
      ...

      After that, Terraform will run the local-exec provisioner for each of the Droplets, which executes Ansible. The following output shows this for one of the Droplets:

      Output

      ...
      digitalocean_droplet.web[2] (local-exec): Executing: ["/bin/sh" "-c" "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u root -i 'ip_address,' --private-key private_key_location -e 'pub_key=public_key_location' apache-install.yml"]

      digitalocean_droplet.web[2] (local-exec): PLAY [apache-install] **********************************************************

      digitalocean_droplet.web[2] (local-exec): TASK [Gathering Facts] *********************************************************
      digitalocean_droplet.web[2] (local-exec): ok: [ip_address]

      digitalocean_droplet.web[2] (local-exec): TASK [Add the user 'sammy' and add it to 'sudo'] *******************************
      digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

      digitalocean_droplet.web[2] (local-exec): TASK [Add SSH key to 'sammy'] **************************************************
      digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

      digitalocean_droplet.web[2] (local-exec): TASK [Update all packages] *****************************************************
      digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

      digitalocean_droplet.web[2] (local-exec): TASK [Install apache2] *********************************************************
      digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

      digitalocean_droplet.web[2] (local-exec): TASK [Enable mod_rewrite] ******************************************************
      digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

      digitalocean_droplet.web[2] (local-exec): RUNNING HANDLER [Restart apache2] **********************************************
      digitalocean_droplet.web[2] (local-exec): changed: [ip_address]

      digitalocean_droplet.web[2] (local-exec): PLAY RECAP *********************************************************************
      digitalocean_droplet.web[2] (local-exec): [ip_address] : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
      ...

      At the end of the output, you’ll receive a list of the three Droplets and their IP addresses:

      Output

      droplet_ip_addresses = {
        "web-0" = "..."
        "web-1" = "..."
        "web-2" = "..."
      }

      You can now navigate to one of the IP addresses in your browser. You will reach the default Apache welcome page, signifying the successful installation of the web server.

      Apache Welcome Page

      This means that Terraform provisioned your servers and your Ansible playbook executed on them successfully.

      To check that the SSH key was correctly added to sammy on the provisioned Droplets, connect to one of them with the following command:

      • ssh -i private_key_location sammy@droplet_ip_address

      Remember to put in the private key location and the IP address of one of the provisioned Droplets, which you can find in your Terraform output.
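
      If the apply output has already scrolled off your screen, you can print the IP address map again at any time. Reading an output from the Terraform state does not require the -var flags:

      • terraform output droplet_ip_addresses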

      The output will be similar to the following:

      Output

      Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 4.15.0-121-generic x86_64)

       * Documentation:  https://help.ubuntu.com
       * Management:     https://landscape.canonical.com
       * Support:        https://ubuntu.com/advantage

        System information as of ...

        System load:  0.0                Processes:            88
        Usage of /:   6.4% of 24.06GB    Users logged in:      0
        Memory usage: 20%                IP address for eth0:  ip_address
        Swap usage:   0%                 IP address for eth1:  ip_address

      0 packages can be updated.
      0 updates are security updates.

      New release '20.04.1 LTS' available.
      Run 'do-release-upgrade' to upgrade to it.

      *** System restart required ***
      Last login: ...
      ...

      You’ve successfully connected to the target and obtained shell access for the sammy user, which confirms that the SSH key was correctly configured for that user.

      You can destroy the deployed Droplets by running the following command, entering yes when prompted:

      • terraform destroy -var "do_token=${DO_PAT}" -var "pvt_key=private_key_location" -var "pub_key=public_key_location"

      In this step, you have added in Ansible playbook execution as a local-exec provisioner to your Droplet definition. To ensure that the server is available for connections, you’ve included the remote-exec provisioner, which can serve to install the python3 prerequisite, after which Ansible will run.

      Conclusion

      Terraform and Ansible together form a flexible workflow for spinning up servers with the needed software and hardware configurations. Running Ansible directly as part of the Terraform deployment process allows you to have the servers up and bootstrapped with dependencies for your development work and applications much faster.

      For more on using Terraform, check out our How To Manage Infrastructure with Terraform series. You can also find further Ansible content resources on our Ansible topic page.




      How To Automate Jenkins Job Configuration Using Job DSL


      The author selected the Internet Archive to receive a donation as part of the Write for DOnations program.

      Introduction

      Jenkins is a popular automation server, often used to orchestrate continuous integration (CI) and continuous deployment (CD) workflows. However, the process of setting up Jenkins itself has traditionally been a manual, siloed process for the system administrator. The process typically involves installing dependencies, running the Jenkins server, configuring the server, defining pipelines, and configuring jobs.

      Then came the Everything as Code (EaC) paradigm, which allowed administrators to define these manual tasks as declarative code that can be version-controlled and automated. In previous tutorials, we covered how to define Jenkins pipelines as code using Jenkinsfiles, as well as how to install dependencies and define configuration of a Jenkins server as code using Docker and JCasC. But using only Docker, JCasC, and pipelines to set up your Jenkins instance would only get you so far—these servers would not come pre-loaded with any jobs, so someone would still have to configure them manually. The Job DSL plugin provides a solution, and allows you to configure Jenkins jobs as code.

      In this tutorial, you’ll use Job DSL to configure two demo jobs: one that prints a 'Hello World' message in the console, and one that runs a pipeline from a Git repository. If you follow the tutorial to the end, you will have a minimal Job DSL script that you can build on for your own use cases.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Installing the Job DSL Plugin

      The Job DSL plugin provides the Job DSL features you’ll use in this tutorial for your demo jobs. In this step, you will install the Job DSL plugin.

      First, navigate to your_jenkins_url/pluginManager/available. In the search box, type in Job DSL. Next, in the resulting plugins list, check the box next to Job DSL and click Install without restart.

      Plugin Manager page showing Job DSL checked

      Note: If searching for Job DSL returns no results, it either means the Job DSL plugin is already installed, or that your Jenkins server’s plugin list is not updated.

      You can check if the Job DSL plugin is already installed by navigating to your_jenkins_url/pluginManager/installed and searching for Job DSL.

      You can update your Jenkins server’s plugin list by navigating to your_jenkins_url/pluginManager/available and clicking on the Check Now button at the bottom of the (empty) plugins list.

      After initiating the installation process, you’ll be redirected to a page that shows the progress of the installation. Wait until you see Success next to both Job DSL and Loading plugin extensions before continuing to the next step.

      You’ve installed the Job DSL plugin. You are now ready to use Job DSL to configure jobs as code. In the next step, you will define a demo job inside a Job DSL script. You’ll then incorporate the script into a seed job, which, when executed, will create the jobs defined.

      Step 2 — Creating a Seed Job

      The seed job is a normal Jenkins job that runs the Job DSL script; in turn, the script contains instructions that create additional jobs. In short, the seed job is a job that creates more jobs. In this step, you will construct a Job DSL script and incorporate it into a seed job. The Job DSL script that you’ll define will create a single freestyle job that prints a 'Hello World!' message in the job’s console output.

      A Job DSL script consists of API methods provided by the Job DSL plugin; you can use these API methods to configure different aspects of a job, such as its type (freestyle versus pipeline jobs), build triggers, build parameters, post-build actions, and so on. You can find all supported methods on the API reference site.

      Jenkins Job DSL API Reference web page

      By default, the site shows the API methods for job configuration settings that are available as part of the core Jenkins installation, as well as settings that are enabled by 184 supported plugins (accurate as of v1.77). To get a clearer picture of what API methods the Job DSL plugin provides for only the core Jenkins installation, click on the funnel icon next to the search box, and then check and uncheck the Filter by Plugin checkbox to deselect all the plugins.

      Jenkins Job DSL API reference web page showing only the core APIs

      The list of API methods is now significantly reduced. The ones that remain would work even if the Jenkins installation had no plugins installed apart from the Job DSL plugin.

      For the ‘Hello World’ freestyle job, you need the job API method (freeStyleJob is an alias of job and would also work). Let’s navigate to the documentation for the job method.

      job API method reference

      Click the ellipsis icon (…) in job(String name) { … } to show the methods and blocks that are available within the job block.

      Expanded view of the job API method reference

      Let’s go over some of the most commonly used methods and blocks within the job block; a short sketch combining several of them follows the list:

      • parameters: defines parameters for users to fill in when they create a new build of the job.
      • properties: static values that are to be used within the job.
      • scm: configuration for how to retrieve the source code from a source-control management provider like GitHub.
      • steps: definitions for each step of the build.
      • triggers: apart from manually creating a build, specifies in what situations the job should be run (for example, periodically like a cron job, or after some events like a push to a GitHub repository).
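
      To see how several of these blocks fit together, here is a sketch of a freestyle job that checks out a public GitHub repository, runs on a nightly schedule, and executes a single shell step. The job name, repository, and cron expression are arbitrary examples (and the scm block assumes the Git plugin is installed); it is not a job this tutorial asks you to create:

      // Freestyle job named 'example-nightly' (arbitrary example name)
      job('example-nightly') {
          // Check out a public GitHub repository (requires the Git plugin)
          scm {
              git {
                  remote {
                      github('jenkinsci/pipeline-examples')
                  }
              }
          }
          // Run every night around 02:00; 'H' lets Jenkins spread the load
          triggers {
              cron('H 2 * * *')
          }
          // A single shell build step
          steps {
              shell('echo Running the nightly build')
          }
      }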

      You can further expand child blocks to see what methods and blocks are available within them. Click on the ellipsis icon (…) in steps { … } to uncover the shell(String command) method, which you can use to run a shell script.

      Reference for the Job DSL steps block

      Putting the pieces together, you can write a Job DSL script like the following to create a freestyle job that, when run, will print 'Hello World!' in the output console.

      job('demo') {
          steps {
              shell('echo Hello World!')
          }
      }
      

      To run the Job DSL script, we must first incorporate it into a seed job.

      To create the seed job, go to your_jenkins_url, log in if necessary, and click the New Item link on the left of the dashboard. On the screen that follows, type in seed, select Freestyle project, and click OK.

      Part of the New Item screen where you give the item the name of 'seed' and with the 'Freestyle project' option selected

      In the screen that follows, scroll down to the Build section and click on the Add build step dropdown. Next select Process Job DSLs.

      Screen showing the Add build step dropdown expanded and the Process Job DSLs option selected

      Then, click on the radio button next to Use the provided DSL script, and paste the Job DSL script you wrote into the DSL Script text area.

      Job DSL script added to the Process Job DSLs build step

      Click Save to create the job. This will take you to the seed job page.

      Seed job page

      Then, navigate to your_jenkins_url and confirm that the seed job is there.

      Jenkins jobs list showing the seed job

      You’ve successfully created a seed job that incorporates your Job DSL script. In the next step, you will run the seed job so that new jobs are created based on your Job DSL script.

      Step 3 — Running the Seed Job

      In this step, you will run the seed job and confirm that the jobs defined within the Job DSL script are indeed created.

      First, click back into the seed job page and click on the Build Now button on the left to run the seed job.

      Refresh the page and you’ll see a new section that says Generated Items; it lists the demo job that you’ve specified in your Job DSL script.

      Seed job page showing a list of generated items from running the seed job

      Navigate to your_jenkins_url and you will find the demo job that you specified in the Job DSL script.

      Jenkins jobs list showing the demo and seed jobs

      Click the demo link to go to the demo job page. You’ll see Seed job: seed, indicating that this job was created by the seed job. Now, click the Build Now link to run the demo job once.

      Demo job page showing a section on seed job

      This creates an entry inside the Build History box. Hover over the date of the entry to reveal a little arrow; click on it to reveal the dropdown. From the dropdown, choose Console Output.

      Screen showing the Console Output option selected in the dropdown for Build #1 inside the Build History box

      This will bring you to the logs and console output from this build. In it, you will find the line + echo Hello World! followed by Hello World!, which corresponds to the shell('echo Hello World!') step in your Job DSL script.

      Console output of build #1 showing the echo Hello World! command and output

      You’ve run the demo job and confirmed that the echo step specified in the Job DSL script was executed. In the next and final step, you will modify and re-apply the Job DSL script to include an additional pipeline job.

      Step 4 — Defining Pipeline Jobs

      In line with the Everything as Code paradigm, more and more developers are choosing to define their builds as pipeline jobs—those that use a pipeline script (typically named Jenkinsfile)—instead of freestyle jobs. The demo job you’ve defined so far is a small demonstration. In this step, you will define a more realistic job that pulls down a Git repository from GitHub and runs a pipeline defined in one of its pipeline scripts.
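
      If you have not seen a pipeline script before, a minimal declarative Jenkinsfile looks roughly like the following. This sketch is only for orientation; it is not the script you will run later in this step:

      // Minimal declarative pipeline: one stage with a single step
      pipeline {
          agent any
          stages {
              stage('Build') {
                  steps {
                      echo 'Hello from a pipeline job'
                  }
              }
          }
      }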

      For Jenkins to pull a Git repository and build using pipeline scripts, you’ll need to install additional plugins. So, before you make any changes to the Job DSL script, first make sure that the required plugins are installed.

      Navigate to your_jenkins_url/pluginManager/installed and check the plugins lists for the presence of the Git, Pipeline: Job, and Pipeline: Groovy plugins. If any of them are not installed, go to your_jenkins_url/pluginManager/available and search for and select the plugins, then click Install without restart.

      Now that the required plugins are installed, let’s shift our focus to modifying your Job DSL script to include an additional pipeline job.

      We will be defining a pipeline job that pulls the code from the public jenkinsci/pipeline-examples Git repository and runs the environmentInStage.groovy declarative pipeline script found in it.

      Once again, navigate to the Jenkins Job DSL API Reference, click the funnel icon to bring up the Filter by Plugin menu, then deselect all the plugins except Git, Pipeline: Job, and Pipeline: Groovy.

      The Jenkins Job DSL API Reference page with all plugins deselected except for Pipeline: Job, and (not shown) Git and Pipeline: Groovy

      Click on pipelineJob on the left-hand side menu and expand the pipelineJob(String name) { … } block, then, in order, the definition { … }, cpsScm { … }, and scm { … } blocks.

      Expanded view of the pipelineJob API method block

      There are comments above each API method that explain their roles. For our use case, you’d want to define your pipeline job using a pipeline script found inside a GitHub repository. So you’d need to modify your Job DSL script as follows:

      job('demo') {
          steps {
              shell('echo Hello World!')
          }
      }
      
      pipelineJob('github-demo') {
          definition {
              cpsScm {
                  scm {
                      git {
                          remote {
                              github('jenkinsci/pipeline-examples')
                          }
                      }
                  }
                  scriptPath('declarative-examples/simple-examples/environmentInStage.groovy')
              }
          }
      }
      

      To make the change, go to your_jenkins_url/job/seed/configure and find the DSL Script text area, and replace the contents with your new Job DSL script. Then press Save. In the next screen, click on Build Now to re-run the seed job.

      Then, go to the Console Output page of the new build and you’ll find Added items: GeneratedJob{name="github-demo"}, which means you’ve successfully added the new pipeline job, whilst the existing job remains unchanged.

      Console output for the modified seed job, showing that the github-demo job has been added

      You can confirm this by going to your_jenkins_url; you will find the github-demo job appear in the list of jobs.

      Job list showing the github-demo job

      Finally, confirm that your job is working as intended by navigating to your_jenkins_url/job/github-demo/ and clicking Build Now. After the build has finished, navigate to your_jenkins_url/job/github-demo/1/console and you will find the Console Output page showing that Jenkins has successfully cloned the repository and executed the pipeline script.

      Conclusion

      In this tutorial, you’ve used the Job DSL plugin to configure jobs on Jenkins servers in a consistent and repeatable way.

      But Job DSL is not the only tool in the Jenkins ecosystem that follows the Everything as Code (EaC) paradigm. You can also deploy Jenkins as Docker containers and set it up using Jenkins Configuration as Code (JCasC). Together, Docker, JCasC, Job DSL, and pipelines allow developers and administrators to deploy and configure Jenkins completely automatically, without any manual involvement.




      How To Automate Jenkins Setup with Docker and Jenkins Configuration as Code


      The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Jenkins is one of the most popular open-source automation servers, often used to orchestrate continuous integration (CI) and/or continuous deployment (CD) workflows.

      Configuring Jenkins is typically done manually through a web-based setup wizard; this can be a slow, error-prone, and non-scalable process. You can see the steps involved by following Step 4 — Setting Up Jenkins of the How To Install Jenkins on Ubuntu 18.04 guide. Furthermore, configurations cannot be tracked in a version control system (VCS) like Git, nor be under the scrutiny of any code review process.

      In this tutorial, you will automate the installation and configuration of Jenkins using Docker and the Jenkins Configuration as Code (JCasC) method.

      Jenkins uses a pluggable architecture to provide most of its functionality. JCasC makes use of the Configuration as Code plugin, which allows you to define the desired state of your Jenkins configuration as one or more YAML files, eliminating the need for the setup wizard. On initialization, the Configuration as Code plugin configures Jenkins according to the configuration file(s), greatly reducing the configuration time and eliminating human errors.

      Docker is the de facto standard for creating and running containers, a virtualization technology that allows you to run isolated, self-contained applications consistently across different operating systems (OSes) and hardware architectures. You will run your Jenkins instance using Docker to take advantage of this consistency and cross-platform capability.

      This tutorial starts by guiding you through setting up JCasC. You will then incrementally add to the JCasC configuration file to set up users, configure authentication and authorization, and finally secure your Jenkins instance. After you’ve completed this tutorial, you’ll have created a custom Docker image that is set up to use the Configuration as Code plugin on startup to automatically configure and secure your Jenkins instance.

      Prerequisites

      To complete this tutorial, you will need:

      • Access to a server with at least 2GB of RAM and Docker installed. This can be your local development machine, a Droplet, or any kind of server. Follow Step 1 — Installing Docker from one of the tutorials in the How to Install and Use Docker collection to set up Docker.

      Note: This tutorial is tested on Ubuntu 18.04; however, because Docker images are self-contained, the steps outlined here will work on any OS with Docker installed.

      Step 1 — Disabling the Setup Wizard

      Using JCasC eliminates the need to show the setup wizard; therefore, in this first step, you’ll create a modified version of the official jenkins/jenkins image that has the setup wizard disabled. You will do this by creating a Dockerfile and building a custom Jenkins image from it.

      The jenkins/jenkins image allows you to enable or disable the setup wizard by passing in a system property named jenkins.install.runSetupWizard via the JAVA_OPTS environment variable. Users of the image can pass in the JAVA_OPTS environment variable at runtime using the --env flag to docker run. However, this approach would put the onus of disabling the setup wizard on the user of the image. Instead, you should disable the setup wizard at build time, so that the setup wizard is disabled by default.
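
      For comparison, that runtime approach would look something like the following, with every user of the stock image having to remember the flag themselves. It is shown only to illustrate the difference; you will bake the setting into the image instead:

      • docker run -p 8080:8080 --env JAVA_OPTS=-Djenkins.install.runSetupWizard=false jenkins/jenkins:latest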

      You can achieve this by creating a Dockerfile and using the ENV instruction to set the JAVA_OPTS environment variable.

      First, create a new directory inside your server to store the files you will be creating in this tutorial:

      • mkdir -p $HOME/playground/jcasc

      Then, navigate inside that directory:

      • cd $HOME/playground/jcasc

      Next, using your editor, create a new file named Dockerfile:

      • nano $HOME/playground/jcasc/Dockerfile

      Then, copy the following content into the Dockerfile:

      ~/playground/jcasc/Dockerfile

      FROM jenkins/jenkins:latest
      ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
      

      Here, you’re using the FROM instruction to specify jenkins/jenkins:latest as the base image, and the ENV instruction to set the JAVA_OPTS environment variable.

      Save the file and exit the editor by pressing CTRL+X followed by Y.

      With these modifications in place, build a new custom Docker image and assign it a unique tag (we’ll use jcasc here):

      • docker build -t jenkins:jcasc .

      You will see output similar to the following:

      Output

      Sending build context to Docker daemon  2.048kB
      Step 1/2 : FROM jenkins/jenkins:latest
       ---> 1f4b0aaa986e
      Step 2/2 : ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
       ---> 7566b15547af
      Successfully built 7566b15547af
      Successfully tagged jenkins:jcasc

      Once built, run your custom image by running docker run:

      • docker run --name jenkins --rm -p 8080:8080 jenkins:jcasc

      You used the --name jenkins option to give your container an easy-to-remember name; otherwise a random hexadecimal ID would be used instead (e.g. f1d701324553). You also specified the --rm flag so the container will automatically be removed after you’ve stopped the container process. Lastly, you’ve configured your server host’s port 8080 to proxy to the container’s port 8080 using the -p flag; 8080 is the default port where the Jenkins web UI is served from.
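
      If you want to confirm from a second terminal that the container is running and check the port mapping, you can list it by name:

      • docker ps --filter name=jenkins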

      Jenkins will take a short period of time to initiate. When Jenkins is ready, you will see the following line in the output:

      Output

      ... hudson.WebAppMain$3#run: Jenkins is fully up and running

      Now, open up your browser to server_ip:8080. You’re immediately shown the dashboard without the setup wizard.

      The Jenkins dashboard

      You have just confirmed that the setup wizard has been disabled. To clean up, stop the container by pressing CTRL+C. If you’ve specified the --rm flag earlier, the stopped container would automatically be removed.

      In this step, you’ve created a custom Jenkins image that has the setup wizard disabled. However, the top right of the web interface now shows a red notification icon indicating there are issues with the setup. Click on the icon to see the details.

      The Jenkins dashboard showing issues

      The first warning informs you that you have not configured the Jenkins URL. The second tells you that you haven’t configured any authentication and authorization schemes, and that anonymous users have full permissions to perform all actions on your Jenkins instance. Previously, the setup wizard guided you through addressing these issues. Now that you’ve disabled it, you need to replicate the same functions using JCasC. The rest of this tutorial will involve modifying your Dockerfile and JCasC configuration until no more issues remain (that is, until the red notification icon disappears).

      In the next step, you will begin that process by pre-installing a selection of Jenkins plugins, including the Configuration as Code plugin, into your custom Jenkins image.

      Step 2 — Installing Jenkins Plugins

      To use JCasC, you need to install the Configuration as Code plugin. Currently, no plugins are installed. You can confirm this by navigating to http://server_ip:8080/pluginManager/installed.

      Jenkins dashboard showing no plugins are installed

      In this step, you’re going to modify your Dockerfile to pre-install a selection of plugins, including the Configuration as Code plugin.

      To automate the plugin installation process, you can make use of an installation script that comes with the jenkins/jenkins Docker image. You can find it inside the container at /usr/local/bin/install-plugins.sh. To use it, you would need to:

      • Create a text file containing a list of plugins to install
      • Copy it into the Docker image
      • Run the install-plugins.sh script to install the plugins

      First, using your editor, create a new file named plugins.txt:

      • nano $HOME/playground/jcasc/plugins.txt

      Then, add in the following newline-separated list of plugin names and versions (using the format <id>:<version>):

      ~/playground/jcasc/plugins.txt

      ant:latest
      antisamy-markup-formatter:latest
      build-timeout:latest
      cloudbees-folder:latest
      configuration-as-code:latest
      credentials-binding:latest
      email-ext:latest
      git:latest
      github-branch-source:latest
      gradle:latest
      ldap:latest
      mailer:latest
      matrix-auth:latest
      pam-auth:latest
      pipeline-github-lib:latest
      pipeline-stage-view:latest
      ssh-slaves:latest
      timestamper:latest
      workflow-aggregator:latest
      ws-cleanup:latest
      

      Save the file and exit your editor.

      The list contains the Configuration as Code plugin, as well as all the plugins suggested by the setup wizard (correct as of Jenkins v2.251). For example, you have the Git plugin, which allows Jenkins to work with Git repositories; you also have the Pipeline plugin, which is actually a suite of plugins that allows you to define Jenkins jobs as code.

      Note: The most up-to-date list of suggested plugins can be inferred from the source code. You can also find a list of the most popular community-contributed plugins at plugins.jenkins.io. Feel free to include any other plugins you want into the list.

      Next, open up the Dockerfile file:

      • nano $HOME/playground/jcasc/Dockerfile

      In it, add a COPY instruction to copy the plugins.txt file into the /usr/share/jenkins/ref/ directory inside the image; this is where Jenkins normally looks for plugins. Then, include an additional RUN instruction to run the install-plugins.sh script:

      ~/playground/jcasc/Dockerfile

      FROM jenkins/jenkins:latest
      ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
      COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
      RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
      

      Save the file and exit the editor. Then, build a new image using the revised Dockerfile:

      • docker build -t jenkins:jcasc .

      This step involves downloading and installing many plugins into the image, and may take some time to run depending on your internet connection. Once the plugins have finished installing, run the new Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 jenkins:jcasc

      After the Jenkins is fully up and running message appears on stdout, navigate to server_ip:8080/pluginManager/installed to see a list of installed plugins. You will see a solid checkbox next to each of the plugins you specified inside plugins.txt, as well as a faded checkbox next to plugins that are dependencies of those plugins.

      A list of installed plugins

      Once you’ve confirmed that the Configuration As Code plugin is installed, terminate the container process by pressing CTRL+C.

      In this step, you’ve installed all the suggested Jenkins plugins and the Configuration as Code plugin. You’re now ready to use JCasC to tackle the issues listed in the notification box. In the next step, you will fix the first issue, which warns you that the Jenkins root URL is empty.

      Step 3 — Specifying the Jenkins URL

      The Jenkins URL is a URL for the Jenkins instance that is routable from the devices that need to access it. For example, if you’re deploying Jenkins as a node inside a private network, the Jenkins URL may be a private IP address, or a DNS name that is resolvable using a private DNS server. For this tutorial, it is sufficient to use the server’s IP address (or 127.0.0.1 for local hosts) to form the Jenkins URL.

      You can set the Jenkins URL on the web interface by navigating to server_ip:8080/configure and entering the value in the Jenkins URL field under the Jenkins Location heading. Here’s how to achieve the same using the Configuration as Code plugin:

      1. Define the desired configuration of your Jenkins instance inside a declarative configuration file (which we’ll call casc.yaml).
      2. Copy the configuration file into the Docker image (just as you did for your plugins.txt file).
      3. Set the CASC_JENKINS_CONFIG environment variable to the path of the configuration file to instruct the Configuration as Code plugin to read it.

      First, create a new file named casc.yaml:

      • nano $HOME/playground/jcasc/casc.yaml

      Then, add in the following lines:

      ~/playground/jcasc/casc.yaml

      unclassified:
        location:
          url: http://server_ip:8080/
      

      unclassified.location.url is the path for setting the Jenkins URL. It is just one of a myriad of properties that can be set with JCasC. Valid properties are determined by the plugins that are installed. For example, the jenkins.authorizationStrategy.globalMatrix.permissions property would only be valid if the Matrix Authorization Strategy plugin is installed. To see what properties are available, navigate to server_ip:8080/configuration-as-code/reference, and you’ll find a page of documentation that is customized to your particular Jenkins installation.

      Save the casc.yaml file, exit your editor, and open the Dockerfile file:

      • nano $HOME/playground/jcasc/Dockerfile

      Add a COPY instruction to the end of your Dockerfile that copies the casc.yaml file into the image at /var/jenkins_home/casc.yaml. You’ve chosen /var/jenkins_home/ because that’s the default directory where Jenkins stores all of its data:

      ~/playground/jcasc/Dockerfile

      FROM jenkins/jenkins:latest
      ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
      COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
      RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
      COPY casc.yaml /var/jenkins_home/casc.yaml
      

      Then, add a further ENV instruction that sets the CASC_JENKINS_CONFIG environment variable:

      ~/playground/jcasc/Dockerfile

      FROM jenkins/jenkins:latest
      ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
      ENV CASC_JENKINS_CONFIG /var/jenkins_home/casc.yaml
      COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
      RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
      COPY casc.yaml /var/jenkins_home/casc.yaml
      

      Note: You’ve put the ENV instruction near the top because it’s something that you are unlikely to change. By placing it before the COPY and RUN instructions, you avoid invalidating its cached layer whenever you update casc.yaml or plugins.txt.

      Save the file and exit the editor. Next, build the image:

      • docker build -t jenkins:jcasc .

      And run the updated Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 jenkins:jcasc

      As soon as the Jenkins is fully up and running log line appears, navigate to server_ip:8080 to view the dashboard. This time, you will notice that the notification count has been reduced by one, and the warning about the Jenkins URL has disappeared.

      Jenkins Dashboard showing the notification counter has a count of 1

      Now, navigate to server_ip:8080/configure and scroll down to the Jenkins URL field. Confirm that the Jenkins URL has been set to the same value specified in the casc.yaml file.

      Lastly, stop the container process by pressing CTRL+C.

      In this step, you used the Configuration as Code plugin to set the Jenkins URL. In the next step, you will tackle the second issue from the notifications list (the Jenkins is currently unsecured message).

      Step 4 — Creating a User

      So far, your setup has not implemented any authentication and authorization mechanisms. In this step, you will set up a basic, password-based authentication scheme and create a new user named admin.

      Start by opening your casc.yaml file:

      • nano $HOME/playground/jcasc/casc.yaml

      Then, add in the highlighted snippet:

      ~/playground/jcasc/casc.yaml

      jenkins:
        securityRealm:
          local:
            allowsSignup: false
            users:
             - id: ${JENKINS_ADMIN_ID}
               password: ${JENKINS_ADMIN_PASSWORD}
      unclassified:
        ...
      

      In the context of Jenkins, a security realm is simply an authentication mechanism; the local security realm means to use basic authentication where users must specify their ID/username and password. Other security realms exist and are provided by plugins. For instance, the LDAP plugin allows you to use an existing LDAP directory service as the authentication mechanism. The GitHub Authentication plugin allows you to use your GitHub credentials to authenticate via OAuth.

      Note that you’ve also specified allowsSignup: false, which prevents anonymous users from creating an account through the web interface.

      Finally, instead of hard-coding the user ID and password, you are using variables whose values can be filled in at runtime. This is important because one of the benefits of using JCasC is that the casc.yaml file can be committed into source control; if you were to store user passwords in plaintext inside the configuration file, you would have effectively compromised the credentials. Instead, variables are defined using the ${VARIABLE_NAME} syntax, and their values can be filled in using an environment variable of the same name, or a file of the same name placed inside the /run/secrets/ directory within the container.
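
      As a sketch of the file-based option, once you have rebuilt the image below you could keep the password out of your shell history by mounting a local file into the container’s /run/secrets/ directory; the file name must match the variable name, and the secrets/JENKINS_ADMIN_PASSWORD path here is an arbitrary example:

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin -v "$PWD/secrets/JENKINS_ADMIN_PASSWORD:/run/secrets/JENKINS_ADMIN_PASSWORD:ro" jenkins:jcasc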

      Next, build a new image to incorporate the changes made to the casc.yaml file:

      • docker build -t jenkins:jcasc .

      Then, run the updated Jenkins image whilst passing in the JENKINS_ADMIN_ID and JENKINS_ADMIN_PASSWORD environment variables via the --env option (replace password with a password of your choice):

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin --env JENKINS_ADMIN_PASSWORD=password jenkins:jcasc

      You can now go to server_ip:8080/login and log in using the specified credentials.

      Jenkins Login Screen with the user ID and password fields populated

      Once you’ve logged in successfully, you will be redirected to the dashboard.

      Jenkins Dashboard for authenticated user, showing the user ID and a 'log out' link near the top right corner of the page

      Finish this step by pressing CTRL+C to stop the container.

      In this step, you used JCasC to create a new user named admin. You’ve also learned how to keep sensitive data, like passwords, out of files tracked by VCSs. However, so far you’ve only configured user authentication; you haven’t implemented any authorization mechanisms. In the next step, you will use JCasC to grant your admin user with administrative privileges.

      Step 5 — Setting Up Authorization

      After setting up the security realm, you must now configure the authorization strategy. In this step, you will use the Matrix Authorization Strategy plugin to configure permissions for your admin user.

      By default, the Jenkins core installation provides us with three authorization strategies:

      • unsecured: every user, including anonymous users, has full permissions to do everything
      • legacy: emulates legacy Jenkins (prior to v1.164), where any user with the role admin is given full permissions, whilst other users, including anonymous users, are given read access.

      Note: A role in Jenkins can be a user (for example, daniel) or a group (for example, developers)

      • loggedInUsersCanDoAnything: anonymous users are given either no access or read-only access. Authenticated users have full permissions to do everything. By allowing actions only for authenticated users, you are able to have an audit trail of which users performed which actions.

      Note: You can explore other authorization strategies and their related plugins in the documentation; these include plugins that handle both authentication and authorization.

      All of these authorization strategies are crude and do not afford granular control over how permissions are set for different users. Instead, you can use the Matrix Authorization Strategy plugin that was already included in your plugins.txt list. This plugin affords you a more granular authorization strategy, and allows you to set user permissions globally, as well as per project/job.

      The Matrix Authorization Strategy plugin allows you to use the jenkins.authorizationStrategy.globalMatrix.permissions JCasC property to set global permissions. To use it, open your casc.yaml file:

      • nano $HOME/playground/jcasc/casc.yaml

      And add in the highlighted snippet:

      ~/playground/jcasc/casc.yaml

      ...
             - id: ${JENKINS_ADMIN_ID}
               password: ${JENKINS_ADMIN_PASSWORD}
        authorizationStrategy:
          globalMatrix:
            permissions:
              - "Overall/Administer:admin"
              - "Overall/Read:authenticated"
      unclassified:
      ...
      

      The globalMatrix property sets global permissions (as opposed to per-project permissions). The permissions property is a list of strings with the format <permission-group>/<permission-name>:<role>. Here, you are granting the Overall/Administer permissions to the admin user. You’re also granting Overall/Read permissions to authenticated, which is a special role that represents all authenticated users. There’s another special role called anonymous, which groups all non-authenticated users together. But since permissions are denied by default, if you don’t want to give anonymous users any permissions, you don’t need to explicitly include an entry for it.

      Save the casc.yaml file, exit your editor, and build a new image:

      • docker build -t jenkins:jcasc .

      Then, run the updated Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin --env JENKINS_ADMIN_PASSWORD=password jenkins:jcasc

      Wait for the Jenkins is fully up and running log line, and then navigate to server_ip:8080. You will be redirected to the login page. Fill in your credentials and you will be redirected to the main dashboard.

      In this step, you have set up global permissions for your admin user. However, resolving the authorization issue uncovered additional issues that are now shown in the notification menu.

      Jenkins Dashboard showing the notifications menu with two issues

      Therefore, in the next step, you will continue to modify your Docker image, to resolve each issue one by one until none remains.

      Before you continue, stop the container by pressing CTRL+C.

      Step 6 — Setting Up Build Authorization

      The first issue in the notifications list relates to build authentication. By default, all jobs are run as the system user, which has a lot of system privileges. Therefore, a Jenkins user can perform privilege escalation simply by defining and running a malicious job or pipeline; this is insecure.

      Instead, jobs should be run as the same Jenkins user that configured or triggered them. To achieve this, you need to install an additional plugin called the Authorize Project plugin.

      Open plugins.txt:

      • nano $HOME/playground/jcasc/plugins.txt

      And add the highlighted line:

      ~/playground/jcasc/plugins.txt

      ant:latest
      antisamy-markup-formatter:latest
      authorize-project:latest
      build-timeout:latest
      ...
      

      The plugin provides a new build authorization strategy, which you will need to specify in your JCasC configuration. Save and exit the plugins.txt file, then open the casc.yaml file:

      • nano $HOME/playground/jcasc/casc.yaml

      Add the highlighted block to your casc.yaml file:

      ~/playground/jcasc/casc.yaml

      ...
              - "Overall/Administer:admin"
              - "Overall/Read:authenticated"
      security:
        queueItemAuthenticator:
          authenticators:
          - global:
              strategy: triggeringUsersAuthorizationStrategy
      unclassified:
      ...
      

      Save the file and exit the editor. Then, build a new image using the modified plugins.txt and casc.yaml files:

      • docker build -t jenkins:jcasc .

      Then, run the updated Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin --env JENKINS_ADMIN_PASSWORD=password jenkins:jcasc

      Wait for the Jenkins is fully up and running log line, then navigate to server_ip:8080/login, fill in your credentials, and arrive at the main dashboard. Open the notification menu, and you will see the issue related to build authentication no longer appears.

      Jenkins dashboard's notification menu showing a single issue related to agent to master security subsystem being turned off

      Stop the container by pressing CTRL+C before continuing.

      In this step, you have configured Jenkins to run builds using the user that triggered the build, instead of the system user. This eliminates one of the issues in the notifications list. In the next step, you will tackle the next issue related to the Agent to Controller Security Subsystem.

      Step 7 — Enabling Agent to Controller Access Control

      In this tutorial, you have deployed only a single instance of Jenkins, which runs all builds. However, Jenkins supports distributed builds using an agent/controller configuration. The controller is responsible for providing the web UI, exposing an API for clients to send requests to, and co-ordinating builds. The agents are the instances that execute the jobs.

      The benefit of this configuration is that it is more scalable and fault-tolerant. If one of the servers running Jenkins goes down, other instances can take up the extra load.

      However, there may be instances where the agents cannot be trusted by the controller. For example, the OPS team may manage the Jenkins controller, whilst an external contractor manages their own custom-configured Jenkins agent. Without the Agent to Controller Security Subsystem, the agent is able to instruct the controller to execute any actions it requests, which may be undesirable. By enabling Agent to Controller Access Control, you can control which commands and files the agents have access to.

      To enable Agent to Controller Access Control, open the casc.yaml file:

      • nano $HOME/playground/jcasc/casc.yaml

      Then, add the following highlighted lines:

      ~/playground/jcasc/casc.yaml

      ...
              - "Overall/Administer:admin"
              - "Overall/Read:authenticated"
        remotingSecurity:
          enabled: true
      security:
        queueItemAuthenticator:
      ...
      

      Save the file and build a new image:

      • docker build -t jenkins:jcasc .

      Run the updated Jenkins image:

      • docker run --name jenkins --rm -p 8080:8080 --env JENKINS_ADMIN_ID=admin --env JENKINS_ADMIN_PASSWORD=password jenkins:jcasc

      Navigate to server_ip:8080/login and authenticate as before. When you land on the main dashboard, the notifications menu will not show any more issues.

      Jenkins dashboard showing no issues

      Conclusion

      You’ve now successfully configured a simple Jenkins server using JCasC. Just as the Pipeline plugin enables developers to define their jobs inside a Jenkinsfile, the Configuration as Code plugin enables administrators to define the Jenkins configuration inside a YAML file. Both of these plugins bring Jenkins into closer alignment with the Everything as Code (EaC) paradigm.

      However, getting the JCasC syntax correct can be difficult, and the documentation can be hard to decipher. If you’re stuck and need help, you may find it in the Gitter chat for the plugin.

      Although you have configured the basic settings of Jenkins using JCasC, the new instance does not contain any projects or jobs. To take this even further, explore the Job DSL plugin, which allows you to define projects and jobs as code. What’s more, you can include the Job DSL code inside your JCasC configuration file, and have the projects and jobs created as part of the configuration process.


