
      From 0 to 3 Million+ Deployments: Scaling App Platform on Kubernetes



      About the Talk

      Get a better understanding of how your builds run on DigitalOcean App Platform, and how to scale efficiently when your user base expands.

      App Platform, a new DigitalOcean managed-service offering, was released in October 2020 and has already been deployed over 3 million times. This immense growth didn’t come without its challenges around infrastructure, observability, and release velocity.

Kamal and Nick share the techniques and strategies that helped DigitalOcean overcome these problem areas, along with the benefits each solution delivered.

      About the Presenters

      Kamal Nasser
      Kamal Nasser is a Senior Software Engineer at DigitalOcean. When not automating and playing with modern software and technologies, you’ll likely find him penning early 17th century calligraphy.

      Nick Tate
      Nicholas Tate is a Tech Lead on the App Platform team at DigitalOcean. He has worked on the DigitalOcean Managed Kubernetes team and before that, on multi-cloud Kubernetes at Containership. He loves everything and anything related to cloud native infrastructure and enjoys playing guitar.




      How To Automate WordPress Deployments with DigitalOcean and Buddy


      Introduction

      In this tutorial, you will automate WordPress deployments using Buddy CI/CD, a user-friendly tool offering continuous integration and continuous deployment solutions.

Compared to many other CI/CD tools, Buddy requires less DevOps experience. It allows developers to create delivery pipelines with drag-and-drop actions in a visual GUI. This GUI leverages pre-configured actions (builds, tests, deployments, etc.) in an approach similar to DigitalOcean’s interactive Droplet configuration. This means newcomers and expert developers alike can use Buddy to release more software, all while making fewer errors.

      Once you’ve completed this tutorial, you will be able to perform a WordPress deployment with a single command from your local terminal. For better insight, you will build a more advanced Sage-based WordPress theme that requires multiple build steps before you can deploy it to the WordPress server.

      Prerequisites

      Note: This tutorial was tested on Node.js version 14.13.0, npm version 6.14.8, and PHP version 7.4.10.

      Step 1 — Installing WordPress with Docker

In this step you will pull the WordPress image from Docker Hub and define your build.

First, verify that Docker is running with the following command:

• docker info

      You will receive an output like this:

      Output

Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
...

Now that you’ve verified that Docker is running, download the latest version of the WordPress image:

• docker pull wordpress

      Next, create a folder for your project in your workspace:

      • mkdir docker-wordpress-theme

      Navigate inside your new project folder:

      • cd docker-wordpress-theme

Now you need to define your build. Use nano or your preferred text editor to create and open a file called docker-compose.yml:

• nano docker-compose.yml

Add the following definitions to the file. These describe the version of Docker Compose and the services to be launched: in this case, WordPress and a MySQL database. Make sure to replace the example credentials with your own:

      docker-compose.yml

      version: "3.1"
      
      services:
        wordpress:
          image: wordpress
          restart: always
          ports:
            - 8080:80
          environment:
            WORDPRESS_DB_HOST: db
            WORDPRESS_DB_USER: exampleuser
            WORDPRESS_DB_PASSWORD: examplepass
            WORDPRESS_DB_NAME: exampledb
          volumes:
            - wordpress:/var/www/html
            - ./sage:/var/www/html/wp-content/themes/sage/
        db:
          image: mysql:5.7
          restart: always
          environment:
            MYSQL_DATABASE: exampledb
            MYSQL_USER: exampleuser
            MYSQL_PASSWORD: examplepass
            MYSQL_RANDOM_ROOT_PASSWORD: "1"
          volumes:
            - db:/var/lib/mysql
      
      volumes:
        wordpress:
        db:
      

Here you define the images that Docker will launch for each service, then set their ports and environment variables.

      Take note that you are mounting a folder called sage that you haven’t created yet. This will be your custom theme, which you will now create.
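Before moving on, you can optionally ask Docker Compose to validate the file you just wrote; it prints the fully merged configuration, or an error pointing at any YAML mistake:

• docker-compose config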

      Step 2 — Creating a Custom WordPress Theme

In this step you will create a custom WordPress theme. You will then create a CI/CD pipeline so that you can push changes you make locally to your WordPress server with one command.

      Let’s start building our custom theme by installing the Sage framework on our local WordPress installation. This theme uses Node.js and Gulp to perform development and build functions. There won’t be any build dependencies installed on the production server – instead, all production build tasks will be performed on Buddy, the remote Continuous Integration server.

      Make sure you are in your project folder:

      • cd docker-wordpress-theme

      Use Composer to create a new Sage theme:

      • composer create-project roots/sage

      With everything properly configured, the following output will appear:

      Output

Installing roots/sage (9.0.9)
 - Installing roots/sage (9.0.9): Loading from cache
Created project in /home/mike/Projects/buddy/github/sage
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 29 installs, 0 updates, 0 removals
 - Installing composer/installers (v1.6.0): Downloading (100%)
 - Installing symfony/polyfill-mbstring (v1.10.0): Downloading (100%)
 - Installing symfony/contracts (v1.0.2): Downloading (100%)
 - ..........

      The installer will then ask you to select the framework to load:

      Output

- Theme Name > Sage Starter Theme
- Theme URI > https://roots.io/sage/
- Theme Name [Sage Starter Theme]:
- Theme Description > Sage is a WordPress starter theme.
- Theme Version > 9.0.9
- Theme Author > Roots
- Theme Author URI > https://roots.io/
- Local development URL of WP site > http://localhost:8080
- Path to theme directory > /wp-content/themes/sage
- Which framework would you like to load? [Bootstrap]:
  [0] None
  [1] Bootstrap
  [2] Bulma
  [3] Foundation
  [4] Tachyons
  [5] Tailwind

Note: Make sure that the local development URL matches the port you mapped in docker-compose.yml (8080).

      Press 1 to select the Bootstrap framework. You will be asked for permission to overwrite a couple of files. Type y to confirm and proceed:

      Output

Are you sure you want to overwrite the following files?
 - scripts/autoload/_bootstrap.js
 - styles/autoload/_bootstrap.scss
 - styles/common/_variables.scss
 - styles/components/_comments.scss
 - styles/components/_forms.scss
 - styles/components/_wp-classes.scss
 - styles/layouts/_header.scss
 (yes/no) [no]:

      You now have the foundations of a custom WordPress theme. In the next step you will build and launch the theme, and then you will version it using Git.

      Step 3 — Building and Launching a Custom WordPress Theme

In this step you will install all your build dependencies, create a production build, and launch WordPress in a local Docker container.

Navigate to the Sage folder:

• cd sage

Install the node-sass binary to prevent installation failure (the rest of package.json will be installed, too):

• yarn add node-sass

Run a production build that will compile all Sass/SCSS files and minify CSS and JS:

• yarn build:production

      With the build generated, exit the theme folder and launch your WordPress instance using Docker Compose:

      • cd ..
      • docker-compose up -d

      Launching WordPress in the Docker environment should only take a few seconds. Now open the URL http://localhost:8080 in a web browser to access your local WordPress site. Since this is the first time you are launching WordPress, you will be prompted to create an Admin account. Create one now.
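If the page does not load, the containers may still be initializing. You can check their state and follow the WordPress container’s logs with:

• docker-compose ps
• docker-compose logs -f wordpress

Press CTRL+C to stop following the logs.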

Once you have created an account and are logged in, head over to the Appearance > Themes page on the dashboard. You will find several pre-installed themes, including the Sage theme you’ve just created. Click the Activate button to set it as the current theme. Your home page will look something like this:

      Sage theme preview

      You have now built and activated a custom theme. In the next step, you will put your project under version control.

      Step 4 — Uploading a WordPress Project to a Remote Repository

      Version control is a cornerstone of the CI/CD workflow. In this step, you will upload your project to a remote Git repository that the Buddy platform can access. Buddy integrates with many popular Git providers, including:

      • GitHub
      • GitLab
      • Bitbucket
      • Privately-hosted Git repositories

Create a remote repository on the platform of your choice. For the purpose of this guide we’ll use GitHub; see GitHub’s documentation if you need a refresher on creating a new repo through its UI.

Then, in your terminal, initialize Git in your project’s root directory:

• git init

Add the newly created remote repository, replacing the URL with your own repository’s URL:

git remote add origin https://github.com/user-name/your-repo-name.git
      

      Before you push your project, there are some files that you want to exclude from version control.

Create a file called .gitignore:

• nano .gitignore

      Add the following filenames:

      ./.gitignore

      .cache-loader
      composer.phar
      dist
      node_modules
      vendor
      

      Save and close the file.

      Now you are ready to add your project under version control and commit the files to your repository on GitHub:

      • git add .
      • git commit -m 'my sage project'
• git push -u origin master

      You have now built a custom WordPress theme using the Sage framework and then pushed the code to a remote repository. Now you will automate the deployment of this theme to your WordPress server using Buddy.

      Step 5 — Automating WordPress Deployment with Buddy

If you haven’t used Buddy before, sign up with your Git provider credentials or email address. Buddy offers a 14-day trial with no limit on resources; after it ends, the free plan provides 5 projects and 120 executions per month, which is more than enough for our needs.

      Start by synchronizing Buddy with your repository. In the Buddy UI, click Create a new project, select your Git provider, and choose the repository that you created in the first section of this article.

      Next, you will be prompted to create your delivery pipeline. A pipeline is a set of actions that perform tasks on your repository code, like builds, tests, or deployments.

      The key settings to configure are:

      • Branch from which Buddy will deploy your code – in this case, set it to master
      • Pipeline trigger mode – set it to On push to automatically execute the pipeline on every push to the selected branch.

      Once you add the pipeline, you’ll need to create four actions:

1. A PHP action that will install the required PHP packages
2. A Node action that will download the dependencies and prepare a build for deployment
3. A Droplet action that will upload the build code directly to your DigitalOcean Droplet
4. An SSH action with a script that will activate your theme

      Based on the contents of your repository, Buddy will automatically suggest the actions to perform. Select PHP from the list.

      Action selection screen

      Clicking the action will open its configuration panel. Enter the following commands in the terminal section:

      # navigate to theme directory
      cd sage
      
      # install php packages
      composer validate
      composer install
      

      Save and run the pipeline to ensure it works:

      Pipeline execution

      Note: Buddy uses isolated containers with preinstalled frameworks for builds. The downloaded dependencies are cached in the container, meaning you don’t have to download them again. Think of it as a local development environment that remains consistent for everybody on the team.

      Next, add the Node.js action. In order for the theme to display properly, we’ll need to compile and minify assets, i.e. SCSS/SASS and JavaScript files.

      First, set Environment to node latest.

      Now you must add several commands. These commands will install the necessary dependencies and perform your build.

      Add them to the terminal box just like before:

      # navigate to theme directory
      cd sage
      
      # install packages
      yarn install
      
      # Create production build
      yarn build:production
      

      Once again, save and run the action to ensure it works.

      Next, add the Droplet action right after the Node.js build. If you’ve never used DigitalOcean with Buddy before, a wizard will appear that will guide you through the integration. Once you’ve completed this step, define the authentication details as follows:

      • Set the Source path to sage.

      • Choose Buddy’s SSH key authentication mode as that is the easiest one to set up. Just log in to your Droplet server via SSH and execute the commands displayed in Buddy’s key code snippet.

      After you execute those commands, go back to the browser and click the Remote path browse button – you will be able to navigate your Droplet’s filesystem and access the correct deployment folder. The default path will be /var/www/html/wp-content/themes/sage.

      You will also need to visit the Ignore paths section and provide the following to prevent uploading of Node.js dependencies:

      .cache-loader/
      node_modules/
      

      When done, click the Test action button to verify that everything’s been properly configured.

      Last, you’ll add one more action to activate your theme on the WordPress Droplet with a WP-CLI command. On your pipeline page, add the SSH action and input the following command in the commands section:

      • sudo -u www-data -- wp theme activate sage/resources

      Ensure you have set the correct Working directory setting – otherwise, the command won’t work.
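If you would like the pipeline to confirm the result, you can append WP-CLI’s theme listing to the same SSH action; its output marks the active theme with the status active:

• sudo -u www-data -- wp theme list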

Since you already configured Buddy’s SSH key in the previous step, you don’t need to do anything else. Alternatively, you can select the private SSH key option and upload your DigitalOcean private key to connect to your Droplet. Buddy’s SSH key is simpler and just as secure.

      Your complete pipeline will now contain 4 actions: PHP > Node > Droplet > SSH. Click the Run Pipeline button to test out all the actions at once. You will receive a green check mark for each stage:

      Pipeline execution screen

On the first execution, Buddy will deploy all files from the repository at the selected revision. Future deployments will only update files that have changed or were deleted. This significantly reduces upload time because you don’t have to deploy everything from scratch on every update.

      Go to your hosted WordPress dashboard and refresh the Themes page. You will see your Sage theme. Activate it now.

      Your hosted home page will now match your local home page.

      Our pipeline is built and our local and remote machines are synced. Now, let’s test the entire workflow.

      Step 6 — Testing Buddy’s Auto-Deployment Workflow

      In this step you will make a small change to your theme and then deploy those changes to your WordPress server.

Go back to your local terminal and start Sage’s development server with this yarn command:

• yarn start

      This will start a live proxy development server at localhost:3000. Any changes you make to your theme will get automatically reflected in this window. The page on localhost:8080 will remain unchanged until you run the production build script.

      Let’s test out our pipeline by making some minor changes to our CSS.

      Open the main.scss file for your Sage theme:

      • nano ./sage/resources/assets/styles/main.scss

Insert the following code to introduce a subtle green color and an underline to the website’s fonts:

      ./sage/resources/assets/styles/main.scss

      .brand {
        @extend .display-3;
      
        color: #013d30;
      }
      
      .entry-title {
        @extend .display-4;
      
        a {
          color: #015c48;
          text-decoration: underline;
        }
      }
      
      .page-header {
        display: none;
      }
      

      Save and close the file.

      Commit these changes and upload them to your repo:

      • git add .
      • git commit -m "minor style changes"
      • git push

      Once the code is uploaded to the repository, Buddy will automatically trigger your pipeline and execute all actions one by one:

      Wait for the pipeline to finish and then refresh your WordPress Droplet’s home page to see your updates.

      Updated WP Droplet

      Your pipeline is now pushing changes from your local machine to GitHub to Buddy to your production WordPress server, all triggered by one git command.

      Conclusion

Buddy is a user-friendly yet powerful CI/CD tool. Buddy even has a video that shows just how quickly you can create pipelines using its interface.

      By automating your development workflow, you can focus on implementing styles and features for your custom theme or plugin without wasting time on manual deployments. The CI/CD workflow can also significantly reduce the risk of manual errors. In addition, automation allows you to further enhance the quality of your code by running unit tests and analysis tools, such as PHP Sniffer, on every change.
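For example, adding PHP_CodeSniffer to the existing PHP action takes only two extra commands. This is a sketch, assuming the squizlabs/php_codesniffer dev dependency and the Sage theme’s app/ directory; adjust the standard and paths to your project:

# install the sniffer as a dev dependency
composer require --dev squizlabs/php_codesniffer

# fail the pipeline on coding-standard violations
vendor/bin/phpcs --standard=PSR2 app/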

      You can take this tutorial even further by setting up an advanced branching strategy and a staging server, where you can perform quality control checks before you deploy new code to the production server. This way you can release better software more often without losing the momentum.




      How To Automate Your Node.js Production Deployments with Shipit on CentOS 7


      The author selected the Electronic Frontier Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Shipit is a universal automation and deployment tool for Node.js developers. It features a task flow based on the popular Orchestrator package, login and interactive SSH commands through OpenSSH, and an extensible API. Developers can use Shipit to automate build and deployment workflows for a wide range of Node.js applications.

The Shipit workflow allows developers to not only configure tasks, but also to specify the order in which they execute, whether they run synchronously or asynchronously, and in which environment.

      In this tutorial you will install and configure Shipit to deploy a Node.js application from your local development environment to your production environment. You’ll use Shipit to deploy your application and configure the remote server by:

      • transferring your Node.js application’s files from your local environment to the production environment (using rsync, git, and ssh).
      • installing your application’s dependencies (node modules).
      • configuring and managing the Node.js processes running on the remote server with PM2.

      Prerequisites

      Before you begin this tutorial you’ll need the following:

      Note: Windows users will need to install the Windows Subsystem for Linux to execute the commands in this guide.

      Step 1 — Setting Up the Remote Repository

Shipit requires a Git repository to synchronize between the local development machine and the remote server. In this step you’ll create a remote repository on GitHub. While each provider is slightly different, the commands are largely transferable.

      To create a repository, open Github.com in your web browser and log in. You will notice that in the upper-right corner of any page there is a + symbol. Click +, and then click New repository.

      Github-new-repository

      Type a short, memorable name for your repository, for example, hello-world. Note that whatever name you choose here will be replicated as the project folder that you’ll work from on your local machine.

      Github-repository-name

      Optionally, add a description of your repository.

      Github-repository-description

      Set your repository’s visibility to your preference, either public or private.

Make sure the repository is initialized with a .gitignore by selecting Node from the Add .gitignore dropdown list. This step is important to avoid having unnecessary files (like the node_modules folder) added to your repository.

      Github-gitignore-node

      Click the Create repository button.

      The repository now needs to be cloned from Github.com to your local machine.

      Open your terminal and navigate to the location where you want to store all your Node.js project files. Note that this process will create a sub-folder within the current directory. To clone the repository to your local machine, run the following command:

      • git clone https://github.com/your-github-username/your-github-repository-name.git

      You will need to replace your-github-username and your-github-repository-name to reflect your Github username and the previously supplied repository name.

      Note: If you have enabled two-factor authentication (2FA) on Github.com, you must use a personal access token or SSH key instead of your password when accessing Github on the command line. The Github Help page related to 2FA provides further information.

      You’ll see output similar to:

      Output

Cloning into 'your-github-repository-name'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Unpacking objects: 100% (3/3), done.

      Move to the repository by running the following command:

      • cd your-github-repository-name

Inside the repository is a single file and a single folder, both of which Git uses to manage the repository. You can verify this with:

• ls -las

      You’ll see output similar to the following:

      Output

total 8
0 drwxr-xr-x   4 asciant  staff  128 22 Apr 07:16 .
0 drwxr-xr-x   5 asciant  staff  160 22 Apr 07:16 ..
0 drwxr-xr-x  13 asciant  staff  416 22 Apr 07:16 .git
8 -rw-r--r--   1 asciant  staff  914 22 Apr 07:16 .gitignore

Now that you have configured a working Git repository, you’ll create the shipitfile.js file that manages your deployment process.

      Step 2 — Integrating Shipit into a Node.js Project

In this step, you’ll create an example Node.js project and then add the Shipit packages. This tutorial provides an example app: a Node.js web server that accepts HTTP requests and responds with Hello World in plain text. To create the application, open a file called hello.js with your editor:

• nano hello.js

      Add the following example application code to hello.js (updating the APP_PRIVATE_IP_ADDRESS variable to your app server’s private network IP address):

      hello.js

      var http = require('http');
      http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
      }).listen(8080, 'APP_PRIVATE_IP_ADDRESS');
      console.log('Server running at http://APP_PRIVATE_IP_ADDRESS:8080/');
      

Now create your package.json file for your application:

• npm init -y

This command creates a package.json file, which you’ll use to configure your Node.js application. Next, you’ll add dependencies to this file with the npm command line interface.

      Output

Wrote to ~/hello-world/package.json:

{
  "name": "hello-world",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

      Next, install the necessary npm packages with the following command:

      • npm install --save-dev shipit-cli shipit-deploy shipit-shared

      You use the --save-dev flag here as the Shipit packages are only required on your local machine. You’ll see output similar to the following:

      Output

+ shipit-shared@4.4.2
+ shipit-cli@4.2.0
+ shipit-deploy@4.1.4
updated 4 packages and audited 21356 packages in 11.671s
found 62 low severity vulnerabilities
  run `npm audit fix` to fix them, or `npm audit` for details

      This also added the three packages to your package.json file as development dependencies:

      package.json

      . . .
        "devDependencies": {
          "shipit-cli": "^4.2.0",
          "shipit-deploy": "^4.1.4",
          "shipit-shared": "^4.4.2"
        },
      . . .
      

      With your local environment configured, you can now move on to preparing the remote app server for Shipit-based deployments.

      Step 3 — Preparing the Remote App Server

      In this step, you’ll use ssh to connect to your app server and install your remote dependency rsync. Rsync is a utility for efficiently transferring and synchronizing files between local computer drives and across networked computers by comparing the modification times and sizes of files.

      Shipit uses rsync to transfer and synchronize files between your local computer and the remote app server. You won’t be issuing any commands to rsync directly; Shipit will handle it for you.
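Purely for illustration, the kind of command Shipit runs on your behalf looks roughly like this (the paths are examples; Shipit assembles the real flags and release directories itself):

rsync -az --delete ./ deployer@your_app_server_ip:/home/deployer/example.com/releases/<timestamp>/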

Note: How To Set Up a Node.js Application for Production on CentOS 7 left you with two servers, app and web. These commands should be executed on app only.

      Connect to your remote app server via ssh:

      • ssh deployer@your_app_server_ip

Install rsync on your server by running the following command:

• sudo yum install rsync

Confirm the installation with:

• rsync --version

      You’ll see a similar line within the output of this command:

      Output

      rsync version 3.1.2 protocol version 31 . . .

      You can end your ssh session by typing exit.

      With rsync installed and available on the command line, you can move on to deployment tasks and their relationship with events.

      Step 4 — Configuring and Executing Deployment Tasks

      Both events and tasks are key components of Shipit deployments and it is important to understand how they complement the deployment of your application. The events triggered by Shipit represent specific points in the deployment lifecycle. Your tasks will execute in response to these events, based on the sequence of the Shipit lifecycle.

A common example of where this task/event system is useful in a Node.js application is installing the app’s dependencies (node_modules) on the remote server. Later in this step you’ll have Shipit listen for the updated event (which is issued after the application’s files are transferred) and run a task to install the application’s dependencies (npm install) on the remote server.

      To listen to events and execute tasks, Shipit needs a configuration file that holds information about your remote server (the app server) and registers event listeners and the commands to be executed by these tasks. This file lives on your local development computer, inside your Node.js application’s directory.

To get started, create this file, including information about your remote server, the event listeners you want to subscribe to, and some definitions of your tasks. Create shipitfile.js within your application root directory on your local machine by running the following command:

• nano shipitfile.js

      Now that you’ve created a file, it needs to be populated with the initial environment information that Shipit needs. This is primarily the location of your remote Git repository and importantly, your app server’s public IP address and SSH user account.

Add this initial configuration and update the placeholder values (the deployTo path, repositoryUrl, and servers entries) to match your environment:

      shipitfile.js

      module.exports = shipit => {
        require('shipit-deploy')(shipit);
        require('shipit-shared')(shipit);
      
        const appName = 'hello';
      
        shipit.initConfig({
          default: {
            deployTo: '/home/sammy/your-domain',
            repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
            keepReleases: 5,
            shared: {
              overwrite: true,
              dirs: ['node_modules']
            }
          },
          production: {
            servers: 'sammy@YOUR_APP_SERVER_PUBLIC_IP'
          }
        });
      
        const path = require('path');
        const ecosystemFilePath = path.join(
          shipit.config.deployTo,
          'shared',
          'ecosystem.config.js'
        );
      
        // Our listeners and tasks will go here
      
      };
      

      Updating the variables in your shipit.initConfig method provides Shipit with configuration specific to your deployment. These represent the following to Shipit:

• deployTo: the directory on the remote server that Shipit will deploy your application’s code to. Here you use the /home/ folder of a non-root user with sudo privileges (/home/sammy), as it is secure and avoids permission issues. The /your-domain component is a naming convention that distinguishes the folder from others in the user’s home folder.
• repositoryUrl: the URL of the full Git repository. Shipit uses this URL to ensure the project files are in sync prior to deployment.
• keepReleases: the number of releases to keep on the remote server. A release is a date-stamped folder containing your application’s files at the time of that release. These can be useful for rolling back a deployment.
• shared: configuration that corresponds with keepReleases, allowing directories to be shared between releases. In this instance, a single node_modules folder is shared between all releases.
• production: represents a remote server to deploy your application to. In this instance, you have a single server (the app server) that you name production, with the servers: configuration matching your SSH user and public IP address. The name production corresponds with the Shipit deploy command used toward the end of this tutorial (npx shipit <environment> deploy, or in your case npx shipit production deploy).

      Further information on the Shipit Deploy Configuration object can be found in the Shipit Github repository.

      Before continuing to update your shipitfile.js, let’s review the following example code snippet to understand Shipit tasks:

      Example event listener

shipit.on('deploy', () => {
  shipit.start('say-hello');
});

shipit.blTask('say-hello', async () => {
  shipit.local('echo "hello from your local computer"');
});

This example uses the shipit.on method to subscribe to the deploy event. When the Shipit lifecycle emits that event, the callback calls shipit.start, which tells Shipit to run the say-hello task.

      The shipit.on method takes two parameters, the name of the event to listen for and the callback function to execute when the event is received.

      Under the shipit.on method declaration, the task is defined with the shipit.blTask method. This creates a new Shipit task that will block other tasks during its execution (it is a synchronous task). The shipit.blTask method also takes two parameters, the name of the task it is defining and a callback function to execute when the task is triggered by shipit.start.

Within the callback function of this example task (say-hello), the shipit.local method executes a command on the local machine. The local command echoes "hello from your local computer" into the terminal output.

      If you wanted to execute a command on the remote server, you would use the shipit.remote method. The two methods, shipit.local and shipit.remote, provide an API to issue commands either locally, or remotely as part of a deployment.
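As a quick illustration of the difference between the two methods, here is a minimal, hypothetical task (the task name and commands are examples, not part of this tutorial’s deployment) that runs the same command in both places:

Example task

shipit.blTask('report-node-versions', async () => {
  // executes on your local development machine
  await shipit.local('node --version');

  // executes on every server configured for the current environment
  await shipit.remote('node --version');
});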

Now update the shipitfile.js to include event listeners that subscribe to the Shipit lifecycle with shipit.on. Add the event listeners to your shipitfile.js, inserting them after the comment placeholder from the initial configuration, // Our listeners and tasks will go here:

      shipitfile.js

      . . .
        shipit.on('updated', () => {
          shipit.start('npm-install', 'copy-config');
        });
      
        shipit.on('published', () => {
          shipit.start('pm2-server');
        });
      

      These two methods are listening for the updated and the published events that are emitted as part of the Shipit deployment lifecycle. When the event is received, they will each initiate tasks using the shipit.start method, similarly to the example task.

Now that you’ve scheduled the listeners, you’ll add the corresponding tasks. Add the following task to your shipitfile.js, inserting it after your event listeners:

      shipitfile.js

      . . .
shipit.blTask('copy-config', async () => {
  const fs = require('fs');

  const ecosystem = `
module.exports = {
  apps: [
    {
      name: '${appName}',
      script: '${shipit.releasePath}/hello.js',
      watch: true,
      autorestart: true,
      restart_delay: 1000,
      env: {
        NODE_ENV: 'development'
      },
      env_production: {
        NODE_ENV: 'production'
      }
    }
  ]
};`;

  fs.writeFileSync('ecosystem.config.js', ecosystem);
  console.log('File created successfully.');

  await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
      

      You first declare a task called copy-config. This task creates a local file called ecosystem.config.js and then copies that file to your remote app server. PM2 uses this file to manage your Node.js application. It provides the necessary file path information to PM2 to ensure that it is running your latest deployed files. Later in the build process, you’ll create a task that runs PM2 with ecosystem.config.js as configuration.

      If your application needs environment variables (like a database connection string) you can declare them either locally in env: or on the remote server in env_production: in the same manner that you set the NODE_ENV variable in these objects.
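For instance, a hypothetical database connection string (the variable name and value below are placeholders, not part of this tutorial’s app) would slot into the ecosystem template like this:

env_production: {
  NODE_ENV: 'production',
  DATABASE_URL: 'postgres://db_user:db_pass@db_host:5432/app_db'
}

Your application could then read it at runtime through process.env.DATABASE_URL.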

      Add the next task to your shipitfile.js following the copy-config task:

      shipitfile.js

      . . .
shipit.blTask('npm-install', async () => {
  await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
      

      Next, you declare a task called npm-install. This task uses a remote bash terminal (via shipit.remote) to install the app’s dependencies (npm packages).

      Add the last task to your shipitfile.js following the npm-install task:

      shipitfile.js

      . . .
      shipit.blTask('pm2-server', async () => {
        await shipit.remote(`pm2 delete -s ${appName} || :`);
        await shipit.remote(
          `pm2 start ${ecosystemFilePath} --env production --watch true`
        );
      });
      

Finally, you declare a task called pm2-server. This task also uses a remote bash session: it first stops PM2 from managing your previous deployment with the delete command, then starts a new instance of your Node.js server, passing the path to the ecosystem.config.js file. You also tell PM2 to use the environment variables from the env_production block and to watch the application, restarting it if it crashes.
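Note that nothing in this task makes PM2 itself start again after a server reboot. If you want the process list restored on boot, you can run PM2’s standard startup commands once, manually, on the app server:

• pm2 startup
• pm2 save

pm2 startup prints a command that registers PM2 as a system service; run it as instructed, then pm2 save records the currently running apps so they are resurrected on boot.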

      The complete shipitfile.js file:

      shipitfile.js

      module.exports = shipit => {
        require('shipit-deploy')(shipit);
        require('shipit-shared')(shipit);
      
        const appName = 'hello';
      
        shipit.initConfig({
          default: {
            deployTo: '/home/deployer/example.com',
            repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
            keepReleases: 5,
            shared: {
              overwrite: true,
              dirs: ['node_modules']
            }
          },
          production: {
            servers: 'deployer@YOUR_APP_SERVER_PUBLIC_IP'
          }
        });
      
        const path = require('path');
        const ecosystemFilePath = path.join(
          shipit.config.deployTo,
          'shared',
          'ecosystem.config.js'
        );
      
        // Our listeners and tasks will go here
        shipit.on('updated', async () => {
          shipit.start('npm-install', 'copy-config');
        });
      
        shipit.on('published', async () => {
          shipit.start('pm2-server');
        });
      
        shipit.blTask('copy-config', async () => {
          const fs = require('fs');
          const ecosystem = `
      module.exports = {
        apps: [
          {
            name: '${appName}',
            script: '${shipit.releasePath}/hello.js',
            watch: true,
            autorestart: true,
            restart_delay: 1000,
            env: {
              NODE_ENV: 'development'
            },
            env_production: {
              NODE_ENV: 'production'
            }
          }
        ]
      };`;
      
    fs.writeFileSync('ecosystem.config.js', ecosystem);
    console.log('File created successfully.');
      
          await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
        });
      
        shipit.blTask('npm-install', async () => {
    await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
        });
      
        shipit.blTask('pm2-server', async () => {
          await shipit.remote(`pm2 delete -s ${appName} || :`);
          await shipit.remote(
            `pm2 start ${ecosystemFilePath} --env production --watch true`
          );
        });
      };
      

      Save and exit the file when you’re ready.

With your shipitfile.js configuration, event listeners, and associated tasks finalized, you can move on to deploying to the app server.

      Step 5 — Deploying Your Application

      In this step, you will deploy your application remotely and test that the deployment made your application available to the internet.

Because Shipit clones the project files from the remote Git repository, you need to push your local Node.js application files from your local machine to GitHub. Navigate to your Node.js project’s application directory (where your hello.js and shipitfile.js are located) and run the following command:

• git status

      The git status command displays the state of the working directory and the staging area. It lets you see which changes have been staged, which haven’t, and which files aren’t being tracked by Git. Your files are untracked and appear red in the output:

      Output

On branch master
Your branch is up to date with 'origin/master'.

Untracked files:
  (use "git add <file>..." to include in what will be committed)

	hello.js
	package-lock.json
	package.json
	shipitfile.js

nothing added to commit but untracked files present (use "git add" to track)

You can add these files to your repository with the following command:

• git add .

      This command does not produce any output, although if you were to run git status again, the files would appear green with a note that there are changes to be committed.

      You can create a commit running the following command:

      • git commit -m "Our first commit"

      The output of this command provides some Git-specific information about the files.

      Output

[master c64ea03] Our first commit
 4 files changed, 1948 insertions(+)
 create mode 100644 hello.js
 create mode 100644 package-lock.json
 create mode 100644 package.json
 create mode 100644 shipitfile.js

All that is left now is to push your commit to the remote repository for Shipit to clone to your app server during deployment. Run the following command:

• git push

      The output includes information about the synchronization with the remote repository:

      Output

Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 15.27 KiB | 7.64 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To github.com:Asciant/hello-world.git
   e274312..c64ea03  master -> master

      To deploy your application, run the following command:

      • npx shipit production deploy

The output of this command (which is too large to include in its entirety) provides detail on the tasks being executed and the result of each specific function. The following output for the pm2-server task shows the Node.js app has been launched:

      Output

Running 'deploy:init' task...
Finished 'deploy:init' after 432 μs

. . .

Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid  │ status │ restart │ uptime │ cpu │ mem      │ user     │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello    │ 0  │ 1.0.0   │ fork │ 4177 │ online │ 0       │ 0s     │ 0%  │ 4.5 MB   │ deployer │ enabled  │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.27 s

Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.81 s

Running 'deploy:finish' task...
Finished 'deploy:finish' after 222 μs
Finished 'deploy' [ deploy:init, deploy:fetch, deploy:update, deploy:publish, deploy:clean, deploy:finish ]

To view your application as a user would, enter your website URL (your-domain) in your browser. This will serve the Node.js application, via reverse proxy, from the app server where your files were deployed.

      You’ll see a Hello World greeting.

Note: After the first deployment, your Git repository will be tracking a newly created file named ecosystem.config.js. As this file will be rebuilt on each deploy and may contain compiled application secrets, it should be added to the .gitignore file in the application root directory on your local machine prior to your next git commit.

      .gitignore

      . . .
      # ecosystem.config
      ecosystem.config.js
      

You’ve deployed your Node.js application to your app server, and PM2 is now serving your new deployment. With everything up and running, you can move on to monitoring your application processes.

      Step 6 — Monitoring Your Application

      PM2 is a great tool for managing your remote processes, but it also provides features to monitor the performance of these application processes.

      Connect to your remote app server via SSH with this command:

      • ssh deployer@your_app_server_ip

To obtain specific information related to your PM2 managed processes, run the following:

• pm2 list

      You’ll see output similar to:

      Output

┌─────────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬──────┬───────────┬──────────┬──────────┐
│ App name    │ id │ version │ mode │ pid  │ status │ restart │ uptime │ cpu  │ mem       │ user     │ watching │
├─────────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼──────┼───────────┼──────────┼──────────┤
│ hello       │ 0  │ 0.0.1   │ fork │ 3212 │ online │ 0       │ 62m    │ 0.3% │ 45.2 MB   │ deployer │ enabled  │
└─────────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴──────┴───────────┴──────────┴──────────┘

You’ll see a summary of the information PM2 has collected. To see detailed information, you can run:

• pm2 show hello

      The output expands on the summary information provided by the pm2 list command. It also provides information on a number of ancillary commands and provides log file locations:

      Output

Describing process with id 0 - name hello
┌───────────────────┬──────────────────────────────────────────────────────────────┐
│ status            │ online                                                       │
│ name              │ hello                                                        │
│ version           │ 1.0.0                                                        │
│ restarts          │ 0                                                            │
│ uptime            │ 82s                                                          │
│ script path       │ /home/deployer/example.com/releases/20190531213027/hello.js │
│ script args       │ N/A                                                          │
│ error log path    │ /home/deployer/.pm2/logs/hello-error.log                     │
│ out log path      │ /home/deployer/.pm2/logs/hello-out.log                       │
│ pid path          │ /home/deployer/.pm2/pids/hello-0.pid                         │
│ interpreter       │ node                                                         │
│ interpreter args  │ N/A                                                          │
│ script id         │ 0                                                            │
│ exec cwd          │ /home/deployer                                               │
│ exec mode         │ fork_mode                                                    │
│ node.js version   │ 4.2.3                                                        │
│ node env          │ production                                                   │
│ watch & reload    │ ✔                                                            │
│ unstable restarts │ 0                                                            │
│ created at        │ 2019-05-31T21:30:48.334Z                                     │
└───────────────────┴──────────────────────────────────────────────────────────────┘
Revision control metadata
┌──────────────────┬────────────────────────────────────────────────────┐
│ revision control │ git                                                │
│ remote url       │ N/A                                                │
│ repository root  │ /home/deployer/example.com/releases/20190531213027 │
│ last update      │ 2019-05-31T21:30:48.559Z                           │
│ revision         │ 62fba7c8c61c7769022484d0bfa46e756fac8099           │
│ comment          │ Our first commit                                   │
│ branch           │ master                                             │
└──────────────────┴────────────────────────────────────────────────────┘
Divergent env variables from local env
┌───────────────────────────┬───────────────────────────────────────┐
│ XDG_SESSION_ID            │ 15                                    │
│ HOSTNAME                  │ N/A                                   │
│ SELINUX_ROLE_REQUESTED    │                                       │
│ TERM                      │ N/A                                   │
│ HISTSIZE                  │ N/A                                   │
│ SSH_CLIENT                │ 44.222.77.111 58545 22                │
│ SELINUX_USE_CURRENT_RANGE │                                       │
│ SSH_TTY                   │ N/A                                   │
│ LS_COLORS                 │ N/A                                   │
│ MAIL                      │ /var/mail/deployer                    │
│ PATH                      │ /usr/local/bin:/usr/bin               │
│ SELINUX_LEVEL_REQUESTED   │                                       │
│ HISTCONTROL               │ N/A                                   │
│ SSH_CONNECTION            │ 44.222.77.111 58545 209.97.167.252 22 │
└───────────────────────────┴───────────────────────────────────────┘
. . .

PM2 also provides an in-terminal monitoring tool, accessible with:

• pm2 monit

The output of this command is an interactive dashboard where PM2 provides real-time process information, logs, metrics, and metadata. This dashboard may assist in monitoring resources and error logs:

      Output

┌─ Process list ────────────────┐┌─ Global Logs ─────────────────────────────────────────────────────────────┐
│[ 0] hello     Mem:  22 MB     ││                                                                           │
│                               ││                                                                           │
│                               ││                                                                           │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
┌─ Custom metrics (http://bit.l─┐┌─ Metadata ────────────────────────────────────────────────────────────────┐
│ Heap Size              10.73  ││ App Name              hello                                               │
│ Heap Usage             66.14  ││ Version               N/A                                                 │
│ Used Heap Size          7.10  ││ Restarts              0                                                   │
│ Active requests            0  ││ Uptime                55s                                                 │
│ Active handles             4  ││ Script path           /home/asciant/hello.js                              │
│ Event Loop Latency      0.70  ││ Script args           N/A                                                 │
│ Event Loop Latency p95        ││ Interpreter           node                                                │
│                               ││ Interpreter args      N/A                                                 │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
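Beyond the dashboard, PM2 can also stream an application’s recent log output, which is often the quickest way to investigate a crash:

• pm2 logs hello --lines 50

This tails the out and error logs listed by pm2 show, printing the last 50 lines before following new entries.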

      With an understanding of how you can monitor your processes with PM2, you can move on to how Shipit can assist in rolling back to a previous working deployment.

      End your ssh session on your app server by running exit.

      Step 7 — Rolling Back a Bugged Deployment

      Deployments occasionally expose unforeseen bugs, or issues that cause your site to fail. The developers and maintainers of Shipit have anticipated this and have provided the ability for you to roll back to the previous (working) deployment of your application.

      To ensure your PM2 configuration persists, add another event listener to shipitfile.js on the rollback event:

      shipitfile.js

      . . .
        shipit.on('rollback', () => {
          shipit.start('npm-install', 'copy-config');
        });
      

      You add a listener to the rollback event to run your npm-install and copy-config tasks. This is needed because unlike the published event, the updated event is not run by the Shipit lifecycle when rolling back a deployment. Adding this event listener ensures your PM2 process manager points to the most recent deployment, even in the event of a rollback.

      This process is similar to deploying, with a minor change in command. To try rolling back to a previous deployment you can execute the following:

      • npx shipit production rollback

      Like the deploy command, rollback provides details on the roll back process and the tasks being executed:

      Output

Running 'rollback:init' task...
Get current release dirname.
Running "if [ -h /home/deployer/example.com/current ]; then readlink /home/deployer/example.com/current; fi" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com releases/20190531213719
Current release dirname : 20190531213719.
Getting dist releases.
Running "ls -r1 /home/deployer/example.com/releases" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com 20190531213719
@centos-ap-app.asciant.com 20190531213519
@centos-ap-app.asciant.com 20190531213027
Dist releases : ["20190531213719","20190531213519","20190531213027"].
Will rollback to 20190531213519.
Finished 'rollback:init' after 3.96 s

Running 'deploy:publish' task...
Publishing release "/home/deployer/example.com/releases/20190531213519"
Running "cd /home/deployer/example.com && if [ -d current ] && [ ! -L current ]; then echo "ERR: could not make symlink"; else ln -nfs releases/20190531213519 current_tmp && mv -fT current_tmp current; fi" on host "centos-ap-app.asciant.com".
Release published.
Finished 'deploy:publish' after 1.8 s

Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid  │ status │ restart │ uptime │ cpu │ mem      │ user     │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello    │ 0  │ 1.0.0   │ fork │ 4289 │ online │ 0       │ 0s     │ 0%  │ 4.5 MB   │ deployer │ enabled  │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.55 s

Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.82 s

Running 'rollback:finish' task...
Finished 'rollback:finish' after 615 μs
Finished 'rollback' [ rollback:init, deploy:publish, deploy:clean, rollback:finish ]

      You have configured Shipit to keep 5 releases through the keepReleases: 5 configuration in shipitfile.js. Shipit keeps track of these releases internally to ensure it is able to roll back when required. Shipit also provides a handy way to identify the releases by creating a directory named as a timestamp (YYYYMMDDHHmmss - Example: /home/deployer/your-domain/releases/20190420210548).

If you want to customize the rollback process further, you can listen for events specific to the rollback operation and use them to execute complementary tasks. You can refer to the event list provided in the breakdown of the Shipit lifecycle and configure the tasks/listeners within your shipitfile.js.

      The ability to roll back means that you can always serve a functioning version of your application to your users even if a deployment introduces unexpected bugs/issues.

      Conclusion

      In this tutorial, you configured a workflow that allows you to create a highly customizable alternative to Platform as a Service, all from a couple of servers. This workflow allows for customized deployment and configuration, process monitoring with PM2, the potential to scale and add services, or additional servers or environments to the deployment when required.

If you are interested in continuing to develop your Node.js skills, check out the DigitalOcean Node.js content as well as the How To Code in Node.js Series.


