
      How To Automate Jenkins Job Configuration Using Job DSL


      The author selected the Internet Archive to receive a donation as part of the Write for DOnations program.

      Introduction

      Jenkins is a popular automation server, often used to orchestrate continuous integration (CI) and continuous deployment (CD) workflows. However, the process of setting up Jenkins itself has traditionally been a manual, siloed process for the system administrator. The process typically involves installing dependencies, running the Jenkins server, configuring the server, defining pipelines, and configuring jobs.

      Then came the Everything as Code (EaC) paradigm, which allowed administrators to define these manual tasks as declarative code that can be version-controlled and automated. In previous tutorials, we covered how to define Jenkins pipelines as code using Jenkinsfiles, as well as how to install dependencies and define configuration of a Jenkins server as code using Docker and JCasC. But using only Docker, JCasC, and pipelines to set up your Jenkins instance would only get you so far—these servers would not come pre-loaded with any jobs, so someone would still have to configure them manually. The Job DSL plugin provides a solution, and allows you to configure Jenkins jobs as code.

      In this tutorial, you’ll use Job DSL to configure two demo jobs: one that prints a 'Hello World' message in the console, and one that runs a pipeline from a Git repository. If you follow the tutorial to the end, you will have a minimal Job DSL script that you can build on for your own use cases.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Installing the Job DSL Plugin

      The Job DSL plugin provides the Job DSL features you’ll use in this tutorial for your demo jobs. In this step, you will install the Job DSL plugin.

      First, navigate to your_jenkins_url/pluginManager/available. In the search box, type in Job DSL. Next, in the resulting plugins list, check the box next to Job DSL and click Install without restart.

      Plugin Manager page showing Job DSL checked

Note: If searching for Job DSL returned no results, it either means the Job DSL plugin is already installed, or that your Jenkins server’s plugin list is not updated.

      You can check if the Job DSL plugin is already installed by navigating to your_jenkins_url/pluginManager/installed and searching for Job DSL.

      You can update your Jenkins server’s plugin list by navigating to your_jenkins_url/pluginManager/available and clicking on the Check Now button at the bottom of the (empty) plugins list.

      After initiating the installation process, you’ll be redirected to a page that shows the progress of the installation. Wait until you see Success next to both Job DSL and Loading plugin extensions before continuing to the next step.

You’ve installed the Job DSL plugin. You are now ready to use Job DSL to configure jobs as code. In the next step, you will define a demo job inside a Job DSL script. You’ll then incorporate the script into a seed job which, when executed, will create the jobs you defined.

      Step 2 — Creating a Seed Job

      The seed job is a normal Jenkins job that runs the Job DSL script; in turn, the script contains instructions that create additional jobs. In short, the seed job is a job that creates more jobs. In this step, you will construct a Job DSL script and incorporate it into a seed job. The Job DSL script that you’ll define will create a single freestyle job that prints a 'Hello World!' message in the job’s console output.

      A Job DSL script consists of API methods provided by the Job DSL plugin; you can use these API methods to configure different aspects of a job, such as its type (freestyle versus pipeline jobs), build triggers, build parameters, post-build actions, and so on. You can find all supported methods on the API reference site.

      Jenkins Job DSL API Reference web page

      By default, the site shows the API methods for job configuration settings that are available as part of the core Jenkins installation, as well as settings that are enabled by 184 supported plugins (accurate as of v1.77). To get a clearer picture of what API methods the Job DSL plugin provides for only the core Jenkins installation, click on the funnel icon next to the search box, and then check and uncheck the Filter by Plugin checkbox to deselect all the plugins.

      Jenkins Job DSL API reference web page showing only the core APIs

The list of API methods is now significantly reduced. The methods that remain will work even if the Jenkins installation has no plugins installed apart from the Job DSL plugin.

      For the ‘Hello World’ freestyle job, you need the job API method (freeStyleJob is an alias of job and would also work). Let’s navigate to the documentation for the job method.

      job API method reference

Click the ellipsis icon (…) in job(String name) { … } to show the methods and blocks that are available within the job block.

      Expanded view of the job API method reference

      Let’s go over some of the most commonly used methods and blocks within the job block:

      • parameters: parameters for users to input when they create a new build of the job.
      • properties: static values that are to be used within the job.
      • scm: configuration for how to retrieve the source code from a source-control management provider like GitHub.
      • steps: definitions for each step of the build.
      • triggers: the situations in which the job should be run, apart from manually created builds (for example, periodically like a cron job, or after events like a push to a GitHub repository).
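As an illustrative sketch (not part of the demo jobs in this tutorial), a job definition that combines several of these blocks might look like the following. The repository URL, cron schedule, and parameter name are all hypothetical placeholders:

```groovy
job('example-with-blocks') {
    // parameters: values a user supplies when triggering a build
    parameters {
        stringParam('TARGET_ENV', 'staging', 'Environment to deploy to')
    }
    // scm: where to fetch the source code from (URL is a placeholder)
    scm {
        git('https://github.com/example-org/example-repo.git', 'master')
    }
    // triggers: run periodically, cron-style (every 15 minutes here)
    triggers {
        cron('H/15 * * * *')
    }
    // steps: what the build actually does
    steps {
        shell('echo Building for $TARGET_ENV')
    }
}
```

Each block maps directly to a section of the job configuration page you would otherwise fill in by hand.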

You can further expand child blocks to see what methods and blocks are available within. Click on the ellipsis icon (…) in steps { … } to uncover the shell(String command) method, which you can use to run a shell script.

      Reference for the Job DSL steps block

Putting the pieces together, you can write a Job DSL script like the following to create a freestyle job that, when run, will print 'Hello World!' in the console output.

      job('demo') {
          steps {
              shell('echo Hello World!')
          }
      }
      

      To run the Job DSL script, we must first incorporate it into a seed job.

To create the seed job, go to your_jenkins_url, log in (if necessary), and click the New Item link on the left of the dashboard. On the screen that follows, type in seed, select Freestyle project, and click OK.

      Part of the New Item screen where you give the item the name of 'seed' and with the 'Freestyle project' option selected

In the screen that follows, scroll down to the Build section and click on the Add build step dropdown. Next, select Process Job DSLs.

      Screen showing the Add build step dropdown expanded and the Process Job DSLs option selected

      Then, click on the radio button next to Use the provided DSL script, and paste the Job DSL script you wrote into the DSL Script text area.

      Job DSL script added to the Process Job DSLs build step

      Click Save to create the job. This will take you to the seed job page.

      Seed job page

      Then, navigate to your_jenkins_url and confirm that the seed job is there.

      Jenkins jobs list showing the seed job

      You’ve successfully created a seed job that incorporates your Job DSL script. In the next step, you will run the seed job so that new jobs are created based on your Job DSL script.

      Step 3 — Running the Seed Job

      In this step, you will run the seed job and confirm that the jobs defined within the Job DSL script are indeed created.

      First, click back into the seed job page and click on the Build Now button on the left to run the seed job.

      Refresh the page and you’ll see a new section that says Generated Items; it lists the demo job that you’ve specified in your Job DSL script.

      Seed job page showing a list of generated items from running the seed job

Navigate to your_jenkins_url and you will find the demo job that you specified in the Job DSL script.

      Jenkins jobs list showing the demo and seed jobs

Click the demo link to go to the demo job page. You’ll see Seed job: seed, indicating that this job was created by the seed job. Now, click the Build Now link to run the demo job once.

      Demo job page showing a section on seed job

      This creates an entry inside the Build History box. Hover over the date of the entry to reveal a little arrow; click on it to reveal the dropdown. From the dropdown, choose Console Output.

      Screen showing the Console Output option selected in the dropdown for Build #1 inside the Build History box

This will bring you to the logs and console output from this build. In it, you will find the line + echo Hello World! followed by Hello World!, which corresponds to the shell('echo Hello World!') step in your Job DSL script.

      Console output of build #1 showing the echo Hello World! command and output
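As an aside, you don’t have to use the UI to trigger builds. Jenkins also exposes a remote build endpoint, so you could trigger the demo job from your terminal. This is a sketch, assuming you have generated an API token for your user under your_jenkins_url/user/your-username/configure; the URL, username, and token are placeholders:

```shell
# Trigger a build of the demo job over Jenkins' remote API (sketch).
# Replace the URL, username, and API token with your own values.
curl -X POST "http://your_jenkins_url/job/demo/build" \
     --user "admin:your-api-token"
```

Authenticating with an API token rather than a password avoids the need to handle a CSRF crumb on recent Jenkins versions.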

You’ve run the demo job and confirmed that the echo step specified in the Job DSL script was executed. In the next and final step, you will modify and re-apply the Job DSL script to include an additional pipeline job.

      Step 4 — Defining Pipeline Jobs

In line with the Everything as Code paradigm, more and more developers are choosing to define their builds as pipeline jobs—those that use a pipeline script (typically named Jenkinsfile)—instead of freestyle jobs. The demo job you’ve defined so far is a small demonstration. In this step, you will define a more realistic job that pulls a Git repository from GitHub and runs a pipeline defined in one of its pipeline scripts.

      For Jenkins to pull a Git repository and build using pipeline scripts, you’ll need to install additional plugins. So, before you make any changes to the Job DSL script, first make sure that the required plugins are installed.

      Navigate to your_jenkins_url/pluginManager/installed and check the plugins lists for the presence of the Git, Pipeline: Job, and Pipeline: Groovy plugins. If any of them are not installed, go to your_jenkins_url/pluginManager/available and search for and select the plugins, then click Install without restart.

      Now that the required plugins are installed, let’s shift our focus to modifying your Job DSL script to include an additional pipeline job.

We will define a pipeline job that pulls the code from the public jenkinsci/pipeline-examples Git repository and runs the environmentInStage.groovy declarative pipeline script found in it.

      Once again, navigate to the Jenkins Job DSL API Reference, click the funnel icon to bring up the Filter by Plugin menu, then deselect all the plugins except Git, Pipeline: Job, and Pipeline: Groovy.

      The Jenkins Job DSL API Reference page with all plugins deselected except for Pipeline: Job, and (not shown) Git and Pipeline: Groovy

      Click on pipelineJob on the left-hand side menu and expand the pipelineJob(String name) { … } block, then, in order, the definition { … }, cpsScm { … }, and scm { … } blocks.

      Expanded view of the pipelineJob API method block

There are comments above each API method that explain its role. For our use case, you want to define your pipeline job using a pipeline script found inside a GitHub repository, so you need to modify your Job DSL script as follows:

      job('demo') {
          steps {
              shell('echo Hello World!')
          }
      }
      
      pipelineJob('github-demo') {
          definition {
              cpsScm {
                  scm {
                      git {
                          remote {
                              github('jenkinsci/pipeline-examples')
                          }
                      }
                  }
                  scriptPath('declarative-examples/simple-examples/environmentInStage.groovy')
              }
          }
      }
      

To make the change, go to your_jenkins_url/job/seed/configure, find the DSL Script text area, and replace the contents with your new Job DSL script. Then press Save. In the next screen, click on Build Now to re-run the seed job.

      Then, go to the Console Output page of the new build and you’ll find Added items: GeneratedJob{name="github-demo"}, which means you’ve successfully added the new pipeline job, whilst the existing job remains unchanged.

      Console output for the modified seed job, showing that the github-demo job has been added

You can confirm this by going to your_jenkins_url; you will find the github-demo job in the list of jobs.

      Job list showing the github-demo job

      Finally, confirm that your job is working as intended by navigating to your_jenkins_url/job/github-demo/ and clicking Build Now. After the build has finished, navigate to your_jenkins_url/job/github-demo/1/console and you will find the Console Output page showing that Jenkins has successfully cloned the repository and executed the pipeline script.

      Conclusion

      In this tutorial, you’ve used the Job DSL plugin to configure jobs on Jenkins servers in a consistent and repeatable way.

      But Job DSL is not the only tool in the Jenkins ecosystem that follows the Everything as Code (EaC) paradigm. You can also deploy Jenkins as Docker containers and set it up using Jenkins Configuration as Code (JCasC). Together, Docker, JCasC, Job DSL, and pipelines allow developers and administrators to deploy and configure Jenkins completely automatically, without any manual involvement.




      How To Execute Ansible Playbooks to Automate Server Setup


      Introduction

      Ansible is a modern configuration management tool that facilitates the task of setting up and maintaining remote servers. With a minimalist design intended to get users up and running quickly, it allows you to control one to hundreds of systems from a central location with either playbooks or ad hoc commands.

      While ad hoc commands allow you to run one-off tasks on servers registered within your inventory file, playbooks are typically used to automate a sequence of tasks for setting up services and deploying applications to remote servers. Playbooks are written in YAML, and can contain one or more plays.

      This short guide demonstrates how to execute Ansible playbooks to automate server setup, using an example playbook that sets up an Nginx server with a single static HTML page.

      Prerequisites

      In order to follow this guide, you’ll need:

      • One Ansible control node. This guide assumes your control node is an Ubuntu 20.04 machine with Ansible installed and configured to connect to your Ansible hosts using SSH keys. Make sure the control node has a regular user with sudo permissions and a firewall enabled, as explained in our Initial Server Setup guide. To set up Ansible, please follow our guide on How to Install and Configure Ansible on Ubuntu 20.04.
      • One or more Ansible hosts. An Ansible host is any machine that your Ansible control node is configured to automate. This guide assumes your Ansible hosts are remote Ubuntu 20.04 servers. Make sure each Ansible host has:
        • The Ansible control node’s SSH public key added to the authorized_keys of a system user. This user can be either root or a regular user with sudo privileges. To set this up, you can follow Step 2 of How to Set Up SSH Keys on Ubuntu 20.04.
      • An inventory file set up on the Ansible control node. Make sure you have a working inventory file containing all your Ansible hosts. To set this up, please refer to the guide on How To Set Up Ansible Inventories.

      Once you have met these prerequisites, run a connection test as outlined in our guide on How To Manage Multiple Servers with Ansible Ad Hoc Commands to make sure you’re able to connect and execute Ansible instructions on your remote nodes. In case you don’t have a playbook already available to you, you can create a testing playbook as described in the next section.

      Creating a Test Playbook

      To try out the examples described in this guide, you’ll need an Ansible playbook. We’ll set up a testing playbook that installs Nginx and sets up an index.html page on the remote server. This file will be copied from the Ansible control node to the remote nodes in your inventory file.

      Create a new file called playbook.yml in the same directory as your inventory file. If you followed our guide on how to create inventory files, this should be a folder called ansible inside your home directory:

      • cd ~/ansible
      • nano playbook.yml

The following playbook has a single play and, by default, runs on all hosts from your inventory file. This is defined by the hosts: all directive at the beginning of the file. The become directive is then used to indicate that the following tasks must be executed by a superuser (root by default).

It defines two tasks: one to install required system packages, and another to copy an index.html file to the remote host and save it in Nginx’s default document root location, /var/www/html. Each task has tags, which can be used to control the playbook’s execution.

      Copy the following content to your playbook.yml file:

      ~/ansible/playbook.yml

      ---
      - hosts: all
        become: true
        tasks:
          - name: Install Packages
            apt: name={{ item }} update_cache=yes state=latest
            loop: [ 'nginx', 'vim' ]
            tags: [ 'setup' ]
      
          - name: Copy index page
            copy:
              src: index.html
              dest: /var/www/html/index.html
              owner: www-data
              group: www-data
              mode: '0644'
            tags: [ 'update', 'sync' ]
      
      

      Save and close the file when you’re done. Then, create a new index.html file in the same directory, and place the following content in it:

      ~/ansible/index.html

      <html>
          <head>
              <title>Testing Ansible Playbooks</title>
          </head>
          <body>
              <h1>Testing Ansible Playbooks</h1>
              <p>This server was set up using an Nginx playbook.</p>
          </body>
      </html>
      

      Don’t forget to save and close the file.

      Executing a Playbook

      To execute the testing playbook on all servers listed within your inventory file, which we’ll refer to as inventory throughout this guide, you may use the following command:

      • ansible-playbook -i inventory playbook.yml

      This will use the current system user as remote SSH user, and the current system user’s SSH key to authenticate to the nodes. In case those aren’t the correct credentials to access the server, you’ll need to include a few other parameters in the command, such as -u to define the remote user or --private-key to define the correct SSH keypair you want to use to connect. If your remote user requires a password for running commands with sudo, you’ll need to provide the -K option so that Ansible prompts you for the sudo password.
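As a sketch, a run that combines all three of these options might look like the following; the username and key path are placeholders you would substitute with your own values:

```shell
# Run the playbook with explicit connection options (sketch).
#   -u            connect as this remote user (placeholder: sammy)
#   --private-key SSH key used to authenticate (placeholder path)
#   -K            prompt for the sudo (become) password
ansible-playbook -i inventory playbook.yml \
  -u sammy \
  --private-key ~/.ssh/id_rsa \
  -K
```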

      More information about connection options is available in our Ansible Cheatsheet guide.

      Listing Playbook Tasks

      In case you’d like to list all tasks contained in a playbook, without executing any of them, you may use the --list-tasks argument:

      • ansible-playbook -i inventory playbook.yml --list-tasks

      Output

playbook: playbook.yml

  play #1 (all): all      TAGS: []
    tasks:
      Install Packages    TAGS: [setup]
      Copy index page     TAGS: [sync, update]

      Tasks often have tags that allow you to have extended control over a playbook’s execution. To list current available tags in a playbook, you can use the --list-tags argument as follows:

      • ansible-playbook -i inventory playbook.yml --list-tags

      Output

playbook: playbook.yml

  play #1 (all): all      TAGS: []
      TASK TAGS: [setup, sync, update]

      Executing Tasks by Tag

      To only execute tasks that are marked with specific tags, you can use the --tags argument, along with the tags that you want to trigger:

      • ansible-playbook -i inventory playbook.yml --tags=setup

      Skipping Tasks by Tag

To skip tasks that are marked with certain tags, you may use the --skip-tags argument, along with the names of tags that you want to exclude from execution:

      • ansible-playbook -i inventory playbook.yml --skip-tags=setup

      Starting Execution at Specific Task

Another way to control the execution flow of a playbook is to start the play at a certain task. This is useful when a playbook run fails partway through and you want to retry from the point of failure.

      • ansible-playbook -i inventory playbook.yml --start-at-task="Copy index page"

      Limiting Targets for Execution

Many playbooks set their target to all by default, and sometimes you want to limit the play to a single group or server. You can use -l (limit) to set the target group or server for that play:

      • ansible-playbook -l dev -i inventory playbook.yml

      Controlling Output Verbosity

If you run into errors while executing Ansible playbooks, you can increase output verbosity to get more information about the problem you’re experiencing. You can do that by including the -v option in the command:

      • ansible-playbook -i inventory playbook.yml -v

      If you need more detail, you can use -vv or -vvv instead. If you’re unable to connect to the remote nodes, use -vvvv to obtain connection debugging information:

      • ansible-playbook -i inventory playbook.yml -vvvv

      Conclusion

      In this guide, you’ve learned how to execute Ansible playbooks to automate server setup. We’ve also seen how to obtain information about playbooks, how to manipulate a playbook’s execution flow using tags, and how to adjust output verbosity in order to obtain detailed debugging information in a play.




      How To Automate WordPress Deployments with DigitalOcean and Buddy


      Introduction

      In this tutorial, you will automate WordPress deployments using Buddy CI/CD, a user-friendly tool offering continuous integration and continuous deployment solutions.

Compared to many other CI/CD tools, Buddy requires less DevOps experience. It allows developers to create delivery pipelines with drag-and-drop actions in a visual GUI. This GUI leverages pre-configured actions (builds, tests, deployments, etc.) in an approach similar to DigitalOcean’s interactive Droplet configuration. This means newcomers and expert developers alike can use Buddy to release more software, all while making fewer errors.

Once you’ve completed this tutorial, you will be able to perform a WordPress deployment with a single command from your local terminal. To demonstrate a realistic workflow, you will build a Sage-based WordPress theme that requires multiple build steps before you can deploy it to the WordPress server.

      Prerequisites

      Note: This tutorial was tested on Node.js version 14.13.0, npm version 6.14.8, and PHP version 7.4.10.

      Step 1 — Installing WordPress with Docker

      In this step you will pull the WordPress image from Docker and start your build.

First, verify that Docker is running with the following command:

      • docker info

      You will receive an output like this:

      Output

Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 ...

Now that you’ve verified that Docker is running, download the latest version of the WordPress image:

      • docker pull wordpress

      Next, create a folder for your project in your workspace:

      • mkdir docker-wordpress-theme

      Navigate inside your new project folder:

      • cd docker-wordpress-theme

Now you need to define your build. Use nano or your preferred text editor to create and open a file called docker-compose.yml:

      • nano docker-compose.yml

      Add the following definitions to the file. These describe the version of Docker Compose and the services to be launched. In this case, you are launching WordPress and MySQL database. Make sure to replace the highlighted fields with your credentials:

      docker-compose.yml

      version: "3.1"
      
      services:
        wordpress:
          image: wordpress
          restart: always
          ports:
            - 8080:80
          environment:
            WORDPRESS_DB_HOST: db
            WORDPRESS_DB_USER: exampleuser
            WORDPRESS_DB_PASSWORD: examplepass
            WORDPRESS_DB_NAME: exampledb
          volumes:
            - wordpress:/var/www/html
            - ./sage:/var/www/html/wp-content/themes/sage/
        db:
          image: mysql:5.7
          restart: always
          environment:
            MYSQL_DATABASE: exampledb
            MYSQL_USER: exampleuser
            MYSQL_PASSWORD: examplepass
            MYSQL_RANDOM_ROOT_PASSWORD: "1"
          volumes:
            - db:/var/lib/mysql
      
      volumes:
        wordpress:
        db:
      

Here you are defining the images that Docker will launch in the service and then setting ports and environment variables.

      Take note that you are mounting a folder called sage that you haven’t created yet. This will be your custom theme, which you will now create.

      Step 2 — Creating a Custom WordPress Theme

In this step you will create a custom WordPress theme. You will then create a CI/CD pipeline so that you can push changes you make locally to your WordPress server with one command.

      Let’s start building our custom theme by installing the Sage framework on our local WordPress installation. This theme uses Node.js and Gulp to perform development and build functions. There won’t be any build dependencies installed on the production server – instead, all production build tasks will be performed on Buddy, the remote Continuous Integration server.

      Make sure you are in your project folder:

      • cd docker-wordpress-theme

      Use Composer to create a new Sage theme:

      • composer create-project roots/sage

      With everything properly configured, the following output will appear:

      Output

Installing roots/sage (9.0.9)
  - Installing roots/sage (9.0.9): Loading from cache
Created project in /home/mike/Projects/buddy/github/sage
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 29 installs, 0 updates, 0 removals
  - Installing composer/installers (v1.6.0): Downloading (100%)
  - Installing symfony/polyfill-mbstring (v1.10.0): Downloading (100%)
  - Installing symfony/contracts (v1.0.2): Downloading (100%)
  - ..........

      The installer will then ask you to select the framework to load:

      Output

- Theme Name > Sage Starter Theme
- Theme URI > https://roots.io/sage/
- Theme Description > Sage is a WordPress starter theme.
- Theme Version > 9.0.9
- Theme Author > Roots
- Theme Author URI > https://roots.io/
- Local development URL of WP site > http://localhost:8080
- Path to theme directory > /wp-content/themes/sage
- Which framework would you like to load? [Bootstrap]:
  [0] None
  [1] Bootstrap
  [2] Bulma
  [3] Foundation
  [4] Tachyons
  [5] Tailwind

Note: Make sure that the local development URL matches the port you mapped in docker-compose.yml (http://localhost:8080).

      Press 1 to select the Bootstrap framework. You will be asked for permission to overwrite a couple of files. Type y to confirm and proceed:

      Output

Are you sure you want to overwrite the following files?
 - scripts/autoload/_bootstrap.js
 - styles/autoload/_bootstrap.scss
 - styles/common/_variables.scss
 - styles/components/_comments.scss
 - styles/components/_forms.scss
 - styles/components/_wp-classes.scss
 - styles/layouts/_header.scss
 (yes/no) [no]:

      You now have the foundations of a custom WordPress theme. In the next step you will build and launch the theme, and then you will version it using Git.

      Step 3 — Building and Launching a Custom WordPress Theme

In this step you will install all your build dependencies, create a production build, and launch WordPress in a local Docker container.

Navigate to the Sage folder:

      • cd sage

      Install the node-sass binary to prevent installation failure (the rest of package.json will be installed, too):

      Run a production build that will compile all Sass/SCSS files and minify CSS and JS:

      With the build generated, exit the theme folder and launch your WordPress instance using Docker Compose:

      • cd ..
      • docker-compose up -d

      Launching WordPress in the Docker environment should only take a few seconds. Now open the URL http://localhost:8080 in a web browser to access your local WordPress site. Since this is the first time you are launching WordPress, you will be prompted to create an Admin account. Create one now.

Once you have created an account and are logged in, head over to the Appearance > Themes page on the dashboard. You will find several pre-installed themes, including the Sage theme you’ve just created. Click the Activate button to set it as the current theme. Your home page will look something like this:

      Sage theme preview

      You have now built and activated a custom theme. In the next step, you will put your project under version control.

      Step 4 — Uploading a WordPress Project to a Remote Repository

      Version control is a cornerstone of the CI/CD workflow. In this step, you will upload your project to a remote Git repository that the Buddy platform can access. Buddy integrates with many popular Git providers, including:

      • GitHub
      • GitLab
      • Bitbucket
      • Privately-hosted Git repositories

Create a remote repository on the platform of your choice. For the purposes of this guide, we’ll use GitHub; refer to GitHub’s documentation if you need help creating a new repository through its UI.

Then, in your terminal, initialize Git in your project’s root directory:

      • git init

      Add the newly created remote repository. Replace the highlighted section with your own repository’s URL:

git remote add origin https://github.com/user-name/your-repo-name.git
      

      Before you push your project, there are some files that you want to exclude from version control.

      Create a file called .gitignore:

      Add the following filenames:

      ./.gitignore

      .cache-loader
      composer.phar
      dist
      node_modules
      vendor
      

      Save and close the file.

      Now you are ready to add your project under version control and commit the files to your repository on GitHub:

      • git add .
      • git commit -m 'my sage project'
      • git push -u origin master

      You have now built a custom WordPress theme using the Sage framework and then pushed the code to a remote repository. Now you will automate the deployment of this theme to your WordPress server using Buddy.

      Step 5 — Automating WordPress Deployment with Buddy

If you haven’t used Buddy before, sign up with your Git provider credentials or email address. There’s a 14-day trial with no limit on resources, and once it’s over, a free plan with 5 projects and 120 executions per month, which is more than enough for our needs.

      Start by synchronizing Buddy with your repository. In the Buddy UI, click Create a new project, select your Git provider, and choose the repository that you created in the first section of this article.

      Next, you will be prompted to create your delivery pipeline. A pipeline is a set of actions that perform tasks on your repository code, like builds, tests, or deployments.

      The key settings to configure are:

      • Branch from which Buddy will deploy your code – in this case, set it to master
      • Pipeline trigger mode – set it to On push to automatically execute the pipeline on every push to the selected branch.

      Once you add the pipeline, you’ll need to create four actions:

      1. A PHP action that will install the required PHP packages
      2. A Node action that will download the dependencies and prepare a build for deployment
      3. A Droplet action that will upload the build code directly to your DO Droplet
      4. An SSH action with a script that will activate your theme

      Based on the contents of your repository, Buddy will automatically suggest the actions to perform. Select PHP from the list.

      Action selection screen

      Clicking the action will open its configuration panel. Enter the following commands in the terminal section:

      # navigate to theme directory
      cd sage
      
      # install php packages
      composer validate
      composer install
      

      Save and run the pipeline to ensure it works:

      Pipeline execution

      Note: Buddy uses isolated containers with preinstalled frameworks for builds. The downloaded dependencies are cached in the container, meaning you don’t have to download them again. Think of it as a local development environment that remains consistent for everybody on the team.

      Next, add the Node.js action. For the theme to display properly, you’ll need to compile and minify its assets: the SCSS/Sass and JavaScript files.

      First, set Environment to node latest.

      Now you must add several commands. These commands will install the necessary dependencies and perform your build.

      Add them to the terminal box just like before:

      # navigate to theme directory
      cd sage
      
      # install packages
      yarn install
      
      # Create production build
      yarn build:production
      

      Once again, save and run the action to ensure it works.

      Next, add the Droplet action right after the Node.js build. If you’ve never used DigitalOcean with Buddy before, a wizard will appear that will guide you through the integration. Once you’ve completed this step, define the authentication details as follows:

      • Set the Source path to sage.

      • Choose Buddy’s SSH key authentication mode as that is the easiest one to set up. Just log in to your Droplet server via SSH and execute the commands displayed in Buddy’s key code snippet.

      After you execute those commands, go back to the browser and click the Remote path browse button – you will be able to navigate your Droplet’s filesystem and access the correct deployment folder. The default path will be /var/www/html/wp-content/themes/sage.

      You will also need to visit the Ignore paths section and provide the following to prevent uploading of Node.js dependencies:

      .cache-loader/
      node_modules/
      

      When done, click the Test action button to verify that everything’s been properly configured.

      Last, you’ll add one more action to activate your theme on the WordPress Droplet with a WP-CLI command. On your pipeline page, add the SSH action and input the following command in the commands section:

      • sudo -u www-data -- wp theme activate sage/resources

      Ensure that the Working directory setting points to your WordPress installation root (/var/www/html in this setup) – otherwise, the command won’t work.

      Since you already added Buddy’s SSH key to the Droplet in the previous step, you don’t need to configure anything else. Alternatively, you can select Private SSH key and upload your DigitalOcean private key to connect to your Droplet, but Buddy’s SSH key is simpler and just as secure.

      Your complete pipeline will now contain 4 actions: PHP > Node > Droplet > SSH. Click the Run Pipeline button to test out all the actions at once. You will receive a green check mark for each stage:

      Pipeline execution screen

      On the first execution, Buddy will deploy all files from the repository to the selected revision. Future deployments will only update files that have changed or were deleted. This feature significantly reduces upload time because you don’t have to deploy everything from scratch on every update.
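      Conceptually, this incremental behavior works like copying only the files whose modification time is newer than the last deployment. A rough, self-contained sketch of the idea (Buddy handles this for you; the paths here are temporary directories, not your real server):

```shell
# Simulate a "repository" and a "server" with two temp directories
src=$(mktemp -d)
dest=$(mktemp -d)
echo v1 > "$src/style.css"
echo v1 > "$src/app.js"

# First deploy: copy everything, then record the deployment time
cp -R "$src/." "$dest"
touch "$dest/.last-deploy"

# Later, a single file changes
sleep 1
echo v2 > "$src/style.css"

# Incremental deploy: copy only files newer than the marker
find "$src" -type f -newer "$dest/.last-deploy" -exec cp {} "$dest" \;
```

      Only style.css is re-copied; the unchanged app.js on the "server" is left untouched.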

      Go to your hosted WordPress dashboard and refresh the Themes page. You will see your Sage theme. Activate it now.

      Your hosted home page will now match your local home page.

      Our pipeline is built and our local and remote machines are synced. Now, let’s test the entire workflow.

      Step 6 — Testing Buddy’s Auto-Deployment Workflow

      In this step you will make a small change to your theme and then deploy those changes to your WordPress server.

      Go back to your local terminal, navigate to the sage directory, and start Sage’s development server:

      • yarn start

      This will start a live proxy development server at localhost:3000. Any changes you make to your theme will be reflected there automatically. The page on localhost:8080 will remain unchanged until you run the production build script.

      Let’s test out our pipeline by making some minor changes to our CSS.

      Open the main.scss file for your Sage theme:

      • nano ./sage/resources/assets/styles/main.scss

      Insert the following code to introduce some subtle green color and an underline to the website’s font:

      ./sage/resources/assets/styles/main.scss

      .brand {
        @extend .display-3;
      
        color: #013d30;
      }
      
      .entry-title {
        @extend .display-4;
      
        a {
          color: #015c48;
          text-decoration: underline;
        }
      }
      
      .page-header {
        display: none;
      }
      

      Save and close the file.

      Commit these changes and upload them to your repo:

      • git add .
      • git commit -m "minor style changes"
      • git push

      Once the code is uploaded to the repository, Buddy will automatically trigger your pipeline and execute all actions one by one:

      Wait for the pipeline to finish and then refresh your WordPress Droplet’s home page to see your updates.

      Updated WP Droplet

      Your pipeline is now pushing changes from your local machine to GitHub to Buddy to your production WordPress server, all triggered by one git command.

      Conclusion

      Buddy is a user-friendly yet powerful CI/CD tool. Buddy even has a video that shows just how quickly you can create pipelines using their interface.

      By automating your development workflow, you can focus on implementing styles and features for your custom theme or plugin without wasting time on manual deployments. The CI/CD workflow can also significantly reduce the risk of manual errors. In addition, automation allows you to further enhance the quality of your code by running unit tests and analysis tools, such as PHP Sniffer, on every change.

      You can take this tutorial even further by setting up an advanced branching strategy and a staging server, where you can perform quality control checks before you deploy new code to the production server. This way you can release better software more often without losing the momentum.


