      How To Set Up a Continuous Delivery Pipeline with Flux on DigitalOcean Kubernetes


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      By itself, Kubernetes does not offer continuous integration and deployment features. While smaller projects can often manage without these processes, bigger teams who host and update their deployments extensively find it much easier to set them up, freeing themselves from time-consuming manual tasks so they can focus on developing the software that’s being deployed. One approach to maintaining continuous delivery for Kubernetes is GitOps.

      GitOps views the Git repositories hosting the application and Kubernetes manifests as the central source of truth regarding deployments. It allows for separated deployment environments by using repository branches, gives you the ability to quickly reproduce any config state, current or past, on any cluster, and makes rollbacks trivial thanks to Git versioning. The manifests are secure, synchronized, and easily accessible at all times. Modifications to the manifest or application can be audited, allowed, or denied depending on external factors (usually, the continuous integration system). Automating the process from pushing the code to having it deploy on a cluster can greatly increase productivity and enhance the developer experience while making the deployment always consistent with the central code base.

      Flux is an open-source tool facilitating the GitOps continuous delivery approach for Kubernetes. Flux allows for automated application and configuration deployments to your clusters by monitoring the configured Git repositories and automatically applying the changes as soon as they become available. It can apply Kustomize manifests (which provide an easy way to optionally patch parts of the usual Kubernetes manifests on the fly), as well as watch over Helm chart releases. You can also configure it to notify you via Slack, Discord, Microsoft Teams, or any other service that supports webhooks. Webhooks provide a way of notifying an app or service that an event has happened somewhere else, along with a description of that event.

      In this tutorial, you’ll install Flux and use it to set up continuous delivery for the podinfo app to your DigitalOcean Kubernetes cluster. podinfo is an app that provides details about the environment it’s running in. You’ll host the repositories holding Flux configuration and podinfo on your GitHub account. You’ll set up Flux to watch over the app repository, automatically apply the changes, and notify you on Slack using webhooks. In the end, all changes that you make to the monitored repository will quickly be propagated to your cluster.

      Prerequisites

      To complete this tutorial, you will need:

      • A DigitalOcean Kubernetes cluster with your connection configuration set as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.
      • A Slack workspace you’re a member of. To learn how to create a workspace, visit the official docs.
      • A GitHub account with a Personal Access Token (PAT) created with all privileges. To learn how to create one, visit the official docs.

      • Git initialized and set up on your local machine. To get started with Git, as well as see installation instructions, visit the How To Contribute to Open Source: Getting Started with Git tutorial.

      • The podinfo app repository forked to your GitHub account. For instructions on how to fork a repository to your account, visit the official getting started docs.

      Step 1 — Installing and Bootstrapping Flux

      In this step, you’ll set up Flux on your local machine, install it to your cluster, and set up a dedicated Git repository for storing and versioning its configuration.

      On Linux, you can use the official Bash script to install Flux. If you’re on macOS, you can either use the official script, following the same steps as for Linux, or use Homebrew to install Flux with the following command:

      • brew install fluxcd/tap/flux

      To install Flux using the officially provided script, download it by running the following command:

      • curl https://fluxcd.io/install.sh -so flux-install.sh

      You can inspect the flux-install.sh script to verify that it’s safe before running it, for example by viewing it with less:
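
      • less flux-install.sh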

      To be able to run it, you must mark it as executable:
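
      • chmod +x flux-install.sh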

      Then, execute the script to install Flux:
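
      • ./flux-install.sh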

      You’ll see the following output, detailing what version is being installed:

      Output

      [INFO] Downloading metadata https://api.github.com/repos/fluxcd/flux2/releases/latest
      [INFO] Using 0.13.4 as release
      [INFO] Downloading hash https://github.com/fluxcd/flux2/releases/download/v0.13.4/flux_0.13.4_checksums.txt
      [INFO] Downloading binary https://github.com/fluxcd/flux2/releases/download/v0.13.4/flux_0.13.4_linux_amd64.tar.gz
      [INFO] Verifying binary download
      [INFO] Installing flux to /usr/local/bin/flux

      To enable command autocompletion, run the following command to configure the shell:

      • echo ". <(flux completion bash)" >> ~/.bashrc

      For the changes to take effect, reload ~/.bashrc by running:
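
      • source ~/.bashrc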

      You now have Flux available on your local machine. Before installing it to your cluster, you’ll first need to run the prerequisite checks that verify compatibility:
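
      • flux check --pre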

      Flux will connect to your cluster, which you’ve set up a connection to in the prerequisites. You’ll see an output similar to this:

      Output

      ► checking prerequisites
      ✔ kubectl 1.21.1 >=1.18.0-0
      ✔ Kubernetes 1.20.2 >=1.16.0-0
      ✔ prerequisites checks passed

      Note: If you see an error or a warning, double check the cluster you’re connected to. It’s possible that you may need to perform an upgrade to be able to use Flux. If kubectl is reported missing, repeat the steps from the prerequisites for your platform and check that it’s in your PATH.

      During the bootstrapping process, Flux creates a Git repository at a specified provider and initializes it with a default configuration. Doing so requires your GitHub username and personal access token, which you retrieved in the prerequisites. The repository will be available under your account on GitHub.

      You’ll store your GitHub username and personal access token as environment variables to avoid typing them multiple times. Run the following commands, replacing the highlighted parts with your GitHub credentials:

      • export GITHUB_USER=your_username
      • export GITHUB_TOKEN=your_personal_access_token

      You can now bootstrap Flux and install it to your cluster by running:

      • flux bootstrap github \
          --owner=$GITHUB_USER \
          --repository=flux-config \
          --branch=main \
          --path=./clusters/my-cluster \
          --personal

      In this command, you specify that the repository should be called flux-config, hosted at the github provider, and owned by the user you’ve just defined. The new repository will be personal (not under an organization) and will be made private by default.

      The output you’ll see will be similar to this:

      Output

      ► connecting to github.com
      ► cloning branch "main" from Git repository "https://github.com/GITHUB_USER/flux-config.git"
      ✔ cloned repository
      ► generating component manifests
      ✔ generated component manifests
      ✔ committed sync manifests to "main" ("b750ffae686c2f110364694d2ddae26c7f18c6a2")
      ► pushing component manifests to "https://github.com/GITHUB_USER/flux-config.git"
      ► installing components in "flux-system" namespace
      ✔ installed components
      ✔ reconciled components
      ► determining if source secret "flux-system/flux-system" exists
      ► generating source secret
      ✔ public key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKw943TnUiKLVk4WMLC5YCeC+tIPVvJprQxTfLqcwkHtedMJPanJFifmbQ/M3CAq1IgqyQTydRJSJu6E/4YDOwx1vawStR9XU16rkn+rZbmvRxZ97E0HNb5m54OwmziAWf0EPdsfiIIJYSRkCMihpKJUNoakl+sng6LQsW+WIRlOK39aJRWud+rygQEuEKmD7YHKQ0VSb/L5v50jiPgEZImiREHNfjBU+RkEni3aZuOO3jNy5WdlPkpdqfHe8fdFsjJnvNB0zmfe3eTIB2fbdDzxo2usLbFeAMhGCRYsGnniHsytBHNLmxDM/4I18xlNN9e6WEYpgHEJVb8azKmwSX
      ✔ configured deploy key "flux-system-main-flux-system-./clusters/my-cluster" for "https://github.com/GITHUB_USER/flux-config"
      ► applying source secret "flux-system/flux-system"
      ✔ reconciled source secret
      ► generating sync manifests
      ✔ generated sync manifests
      ✔ committed sync manifests to "main" ("1dc033e24f3288a70ff80c57816e16c52bc62303")
      ► pushing sync manifests to "https://github.com/GITHUB_USER/flux-config.git"
      ► applying sync manifests
      ✔ reconciled sync configuration
      ◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
      ✔ Kustomization reconciled successfully
      ► confirming components are healthy
      ✔ source-controller: deployment ready
      ✔ kustomize-controller: deployment ready
      ✔ helm-controller: deployment ready
      ✔ notification-controller: deployment ready
      ✔ all components are healthy

      Flux noted that it made a new Git repository, committed a basic starting configuration to it, and provisioned necessary controllers in your cluster.

      In this step, you’ve installed Flux on your local machine, created a new Git repository to hold its configuration, and deployed its server-side components to your cluster. The changes defined by the commits in the repository will now get propagated to your cluster automatically. In the next step, you’ll create configuration manifests ordering Flux to automate deployments of the podinfo app you’ve forked whenever a change occurs.

      Step 2 — Configuring the Automated Deployment

      In this section, you will configure Flux to watch over the podinfo repository that you’ve forked and apply the changes to your cluster as soon as they become available.

      In addition to creating the repository and initial configuration, Flux offers commands that help you generate config manifests with your parameters faster than writing them from scratch. Regardless of what they define, the manifests must be available in Flux’s Git repository to be taken into account. To add them to the repository, you’ll first need to clone it to your machine so that you can push changes. Do so by running the following command:

      • git clone https://github.com/$GITHUB_USER/flux-config ~/flux-config

      You may be asked for your username and password. Input your account username and provide your personal access token for the password.

      Then, navigate to it:
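
      • cd ~/flux-config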

      To instruct Flux to monitor the forked podinfo repository, you’ll first need to let it know where it’s located. This is achieved by creating a GitRepository manifest, which details the repository URL, branch, and monitoring interval.

      To create the manifest, run the following command:

      • flux create source git podinfo \
          --url=https://github.com/$GITHUB_USER/podinfo \
          --branch=master \
          --interval=30s \
          --export > ./clusters/my-cluster/podinfo-source.yaml

      Here, you specify that the source will be a Git repository with the given URL and branch. You pass in --export to output the generated manifest and pipe it into podinfo-source.yaml, located under ./clusters/my-cluster/ in the main config repository, where manifests for the current cluster are stored.

      You can show the contents of the generated file by running:

      • cat ./clusters/my-cluster/podinfo-source.yaml

      The output will look similar to this:

      ~/flux-config/clusters/my-cluster/podinfo-source.yaml

      ---
      apiVersion: source.toolkit.fluxcd.io/v1beta1
      kind: GitRepository
      metadata:
        name: podinfo
        namespace: flux-system
      spec:
        interval: 30s
        ref:
          branch: master
        url: https://github.com/GITHUB_USER/podinfo
      

      You can check that the parameters you just passed into Flux are correctly laid out in the generated manifest.

      You’ve now defined a source Git repository that Flux can access, but you still need to tell it what to deploy. Flux supports Kustomize resources, which podinfo exposes under the kustomize directory. Supporting Kustomizations does not limit Flux, because Kustomize manifests can be as simple as including all of the usual manifests unchanged.

      Create a Kustomization manifest, which tells Flux where to look for deployable manifests, by running the following command:

      • flux create kustomization podinfo \
          --source=podinfo \
          --path="./kustomize" \
          --prune=true \
          --validation=client \
          --interval=5m \
          --export > ./clusters/my-cluster/podinfo-kustomization.yaml

      For the --source, you specify the podinfo Git repository you’ve just created. You also set the --path to ./kustomize, which refers to the filesystem structure of the source repository. Then, you save the YAML output into a file called podinfo-kustomization.yaml in the directory for the current cluster.
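
      If you inspect the generated file, its contents will look roughly like the following (a sketch based on the flags above; the exact apiVersion string depends on your Flux version):

      ~/flux-config/clusters/my-cluster/podinfo-kustomization.yaml

      ---
      apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
      kind: Kustomization
      metadata:
        name: podinfo
        namespace: flux-system
      spec:
        interval: 5m0s
        path: ./kustomize
        prune: true
        sourceRef:
          kind: GitRepository
          name: podinfo
        validation: client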

      The Git repository and Kustomization you’ve created are now available, but the cluster-side of Flux can’t yet see them because they’re not in the remote repository on GitHub. To push them, you must first commit them by running:

      • git add . && git commit -m "podinfo added"

      With the changes now committed, push them to the remote repository:
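
      • git push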

      As before, Git may ask you for your credentials. Input your username and your personal access token to continue.

      The new manifests are now live, and cluster-side Flux will soon pick them up. You can watch it sync the cluster’s state with the one presented in the manifests by running:

      • watch flux get kustomizations

      After the refresh interval specified for the Git repository elapses (which you’ve set to 30s in the manifest above), Flux will retrieve its latest commit and update the cluster. Once it does, you’ll see output similar to this:

      Output

      NAME            READY   MESSAGE
      flux-system     True    Applied revision: main/fc07af652d3168be329539b30a4c3943a7d12dd8
      podinfo         True    Applied revision: master/855f7724be13f6146f61a893851522837ad5b634

      You can see that a podinfo Kustomization was applied, along with its branch and commit hash. You can list deployments and services as well to check that podinfo is deployed:

      • kubectl get deployments,services

      You’ll see that they are present, configured according to their respective manifests:

      Output

      NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/podinfo   2/2     2            2           56s

      NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
      service/kubernetes   ClusterIP   10.245.0.1      <none>        443/TCP             34m
      service/podinfo      ClusterIP   10.245.78.189   <none>        9898/TCP,9999/TCP   56s

      Any changes that you manually make to these and other resources that Flux controls will quickly be overwritten with the ones referenced from Git repositories. To make changes, you’d need to modify the central sources, not the actual deployments in a cluster. This applies to deleting resources as well — any resources you manually delete from the cluster will soon be reinstated. To delete them, you’d need to remove their manifests from the monitored repositories and wait for the changes to be propagated.

      Flux’s behavior is intentionally rigid because it operates on what it finds in the remote repositories at the end of each refresh interval. Suspending Kustomization monitoring and, in turn, state reconciliation is useful when you need to manually override the resources in the cluster without being interrupted by Flux.

      You can pause monitoring of a Kustomization indefinitely by running:

      • flux suspend kustomization kustomization_name

      The default behavior can be brought back by running flux resume on a paused Kustomization:

      • flux resume kustomization kustomization_name

      You now have an automated process in place that will deploy podinfo to your cluster every time a change occurs. You’ll now set up Slack notifications, so you’ll know when a new version of podinfo is being deployed.

      Step 3 — Setting up Slack Notifications

      Now that you’ve set up automatic podinfo deployments to your cluster, you’ll connect Flux to a Slack channel, where you’ll be notified of every deployment and its outcome.

      To integrate with Slack, you’ll need to have an incoming webhook on Slack for your workspace. Incoming webhooks are a way of posting messages to the configured Slack channel.

      If you haven’t ever created a webhook, you’ll first need to create an app for your workspace. To do so, first log in to Slack and navigate to the app creation page. Click the green Create New App button and select From scratch. Name it flux-app, select the desired workspace, and click Create New App.

      You’ll be redirected to the settings page for the new app. Click on Incoming Webhooks on the left navigation bar.

      Slack app - Incoming Webhooks

      Enable webhooks for flux-app by flipping the switch button next to the title Activate Incoming Webhooks.

      Slack app - Activate Incoming Webhooks

      A new section further down the page will be revealed. Scroll down and click the Add New Webhook to Workspace button. On the next page, select the channel you want the reports to be sent to and click Allow.

      You’ll be redirected back to the settings page for webhooks, and you’ll see a new webhook listed in the table. Click Copy to copy it to your clipboard and make note of it for later use.

      You’ll store the generated Slack webhook for your app in a Kubernetes Secret in your cluster, so that Flux can access it without explicitly specifying it in its configuration manifests. Storing the webhook as a Secret also lets you easily replace it in the future.

      Create a Secret called slack-url containing the webhook by running the following command, replacing your_slack_webhook with the URL you’ve just copied:

      • kubectl -n flux-system create secret generic slack-url --from-literal=address=your_slack_webhook

      The output will be:

      Output

      secret/slack-url created

      You’ll now create a Provider, which allows Flux to talk to the specified service using webhooks. Providers read the webhook URL from a Secret, which is why you’ve just created one. Run the following Flux command to create a Slack Provider:

      • flux create alert-provider slack \
          --type slack \
          --channel general \
          --secret-ref slack-url \
          --export > ./clusters/my-cluster/slack-alert-provider.yaml

      Aside from Slack, Flux supports communicating with Microsoft Teams, Discord, and other platforms via webhooks. It also supports sending generic JSON to accommodate more software that parses this format.

      A Provider only allows Flux to send messages and does not specify when messages should be sent. For Flux to react to events, you’ll need to create an Alert using the slack Provider by running:

      • flux create alert slack-alert \
          --event-severity info \
          --event-source Kustomization/* \
          --event-source GitRepository/* \
          --provider-ref slack \
          --export > ./clusters/my-cluster/slack-alert.yaml

      This command creates an alert manifest called slack-alert that will react to all Kustomization and Git repository changes and report them to the slack provider. The event severity is set to info, which will allow the alert to be triggered on all events, such as Kubernetes manifests being created or applied, something delaying deployment, or an error occurring. To report only errors, you can specify error instead. The resulting generated YAML is exported to a file called slack-alert.yaml.

      Commit the changes by running:

      • git add . && git commit -m "Added Slack alerts"

      Push the changes to the remote repository by running the following command, inputting your GitHub username and personal access token if needed:
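
      • git push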

      After the configured refresh interval for the Git repository elapses, Flux will retrieve and apply the changes. You can watch the Alert become available by running:

      • watch kubectl -n flux-system get alert

      You’ll soon see that it’s Initialized:

      Output

      NAME          READY   STATUS        AGE
      slack-alert   True    Initialized   7s

      With alerting now set up, any actions that Flux takes will be logged in the Slack channel of the workspace that the webhook is connected to.

      You’ll test this connection by introducing a change to your fork of podinfo. First, clone it to your local machine by running:

      • git clone https://github.com/$GITHUB_USER/podinfo.git ~/podinfo

      Navigate to the cloned repository:
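
      • cd ~/podinfo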

      You’ll modify the name of its Service, which is defined in ~/podinfo/kustomize/service.yaml. Open it for editing:

      • nano ~/podinfo/kustomize/service.yaml

      Modify the Service name, like so:

      ~/podinfo/kustomize/service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: podinfo-1
      spec:
        type: ClusterIP
        selector:
          app: podinfo
        ports:
          - name: http
            port: 9898
            protocol: TCP
            targetPort: http
          - port: 9999
            targetPort: grpc
            protocol: TCP
            name: grpc
      

      Save and close the file, then commit the changes by running:

      • git add . && git commit -m "Service name modified"

      Then, push the changes:
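
      • git push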

      After a few minutes, you’ll see the changes pop up in Slack as they are deployed:

      Slack - Flux reported changes

      Flux fetched the new commit, created a new Service called podinfo-1, configured it, and deleted the old one. This order of actions ensures that the old Service (or any other manifest) stays untouched if provisioning of the new one fails.

      In case the new revision of the watched manifests contains a syntax error, Flux will report an error:

      Slack - Flux reported failed deployment

      You’ve connected Flux to your Slack workspace, and will immediately be notified of all actions and deployments that happen. You’ll now set up Flux to watch over Helm releases.

      Step 4 — (Optional) Automating Helm Release Deployments

      In addition to watching over Kustomizations and Git repositories, Flux can also monitor Helm charts. Flux can monitor charts residing in Git or Helm repositories, as well as in S3 cloud storage. You’ll now set it up to watch over the podinfo chart, which is located in a Helm repository.

      The process of instructing Flux to monitor a Helm chart is similar to what you did in Step 2. You’ll first need to define a source of one of the three types noted earlier that Flux can poll for changes. Then, you’ll specify which chart to actually deploy among the ones it finds by creating a HelmRelease.

      Navigate back to the flux-config repository:
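
      • cd ~/flux-config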

      Run the following command to create a source for the Helm repository that contains podinfo:

      • flux create source helm podinfo \
          --url=https://stefanprodan.github.io/podinfo \
          --interval=10m \
          --export > ./clusters/my-cluster/podinfo-helm-repo.yaml

      Here, you specify the URL of the repository and how often it should be checked. Then, you save the output into a file called podinfo-helm-repo.yaml.

      With the source repository now defined, you can create a HelmRelease, defining which chart to monitor:

      • flux create hr podinfo \
          --interval=10m \
          --source=HelmRepository/podinfo \
          --chart=podinfo \
          --target-namespace=podinfo-helm \
          --export > ./clusters/my-cluster/podinfo-helm-chart.yaml

      As in the previous command, you save the resulting YAML output to a file, here called podinfo-helm-chart.yaml. You also pass in the name of the chart (podinfo), set the --source to the repository you’ve just defined, and specify that the chart will be installed into the podinfo-helm namespace.

      Since the podinfo-helm namespace does not exist, create it by running:

      • kubectl create namespace podinfo-helm

      Then, commit and push the changes:

      • git add . && git commit -m "Added podinfo Helm chart" && git push

      After a few minutes, you’ll see that Flux logged a successful Helm chart upgrade in Slack:

      Slack - Flux logged successful chart install

      You can check the pods contained in the podinfo-helm namespace by running:

      • kubectl get pods -n podinfo-helm

      The output will be similar to this:

      Output

      NAME                                     READY   STATUS    RESTARTS   AGE
      podinfo-chart-podinfo-7c9b7667cb-gshkb   1/1     Running   0          33s

      This means that you have successfully configured Flux to monitor and deploy the podinfo Helm chart. As soon as a new version is released, or a modification is pushed, Flux will retrieve and deploy the newest variant of the Helm chart for you.

      Conclusion

      You’ve now automated Kubernetes manifest deployments using Flux, which allows you to push commits to watched repositories and have them automatically applied to your cluster. You’ve also set up alerting to Slack, so you’ll always know what deployments are happening in real time, and you can look up previous ones and see any errors that might have occurred.

      In addition to GitHub, Flux also supports retrieving and bootstrapping Git repositories hosted at GitLab. You can visit the official docs to learn more.




      How to Set up Curbside Pickup and Delivery Through Your Website


      COVID-19 has had severe ramifications, not only from a health standpoint but economically as well. With most people staying home and practicing social distancing, businesses that rely on in-person transactions are moving online.

      We’ve already outlined how to pivot your business model, update your website, and lead a team remotely during the coronavirus outbreak. But there’s something else small business owners will want to consider as part of their crisis management plan: offering curbside pickup and delivery options for customers.

      By enabling shoppers to place orders online and receive their items with minimal contact, you can do your part to keep your community safe, while also continuing to bring in revenue.

      In this post, we’ll take a look at some alternative shopping options for local businesses. Then we’ll show you how to set up either curbside pickup or a delivery option for your customers using WooCommerce. Let’s go!

      Alternative Shopping Methods During the COVID-19 Pandemic

      In response to the COVID-19 pandemic, the United States Centers for Disease Control and Prevention (CDC) issued several public health guidelines to prevent the spread of the coronavirus. These suggestions include:

      • Stay home as much as possible.
      • Keep six feet between yourself and others.
      • Avoid gathering in groups of 10 or more.

      Unfortunately, these guidelines make it difficult for small local businesses to welcome customers and conduct sales normally. To maintain even a limited revenue stream, most retailers and restaurants have had to develop alternative methods for serving customers.

      Curbside pickup is one of the best and most popular ways of doing that. It enables customers to purchase products online, and then visit your store to receive their items. Rather than having them go into your building, you or one of your employees brings the customer’s things out to their car. Think of it like takeout.

      With local delivery, your customers stay home while you bring their orders to their doors. Both methods go a long way towards minimizing contact between people since shoppers won’t be gathering in your store.

      However, you can take further precautions as well, such as:

      • Wearing gloves when packing customers’ orders.
      • Requiring online payments to avoid contact with customers when exchanging cash.
      • Placing pickup orders directly in customers’ trunks.
      • Leaving delivery orders at customers’ doors, and calling or texting them to let them know their packages have arrived.
      • Providing face masks and hand sanitizer to employees involved in pickup and delivery orders.

      Fortunately, the risk of transferring or contracting COVID-19 via an object is very low. By setting up curbside pickup and delivery and minimizing contact between your employees and customers, you can significantly reduce the health risks for all involved.

      How to Set Up Curbside Pickup and Delivery Through Your Website (In 4 Steps)

      Below, we’ve outlined steps for setting up both curbside and local delivery options for your small business. Note that these instructions assume you already have WooCommerce installed and configured on your store’s website. If that’s not the case, please check out our tutorial on getting started with WooCommerce, and then you’ll be ready to get rolling!


      Step 1: Configure a Local Shipping Zone

      The first thing you’ll need to do is pick a WooCommerce shipping zone for your local area. This will prevent a shopper who is outside your service area from placing an order for pickup or delivery.

      In your WordPress dashboard, navigate to WooCommerce > Settings > Shipping.

      Accessing WooCommerce’s shipping settings.

      Click on Add Shipping Zone.

      Adding a shipping zone in WooCommerce.

      Add a descriptive name for the shipping zone and then select your region.

      Adding a name and region for a new WooCommerce shipping zone.

      Click on Limit to specific ZIP/postcodes to narrow your pickup and delivery range.

      Specifying local ZIP codes for a new WooCommerce shipping zone.

      Remember to save your changes when you’re done.

      Step 2: Enable Local Pickup as a Shipping Option

      Next, while still on your local shipping zone page, select Add shipping method.

      Adding a shipping method to a WooCommerce shipping zone.

      WooCommerce includes a local pickup option out of the box. Select it from the drop-down menu, then click on the Add shipping method button.

      Selecting local pickup as the shipping method.

      That’s all you have to do to enable curbside pickup for your local business.

      However, if you would like to refine this option, you can purchase and install the Local Pickup Plus WooCommerce extension. This optional add-on enables you to specify a pickup location, set hours, offer discounts to customers who select curbside pickup, and more.

      Step 3: Add a Flat Rate Shipping Option

      WooCommerce no longer offers a “local delivery” shipping option. However, you can still configure one without the need for an additional plugin.

      On your local shipping zone page, add a second shipping option and select Flat rate. Then click on the Edit option for that shipping method.

      Selecting the edit option for a flat rate shipping method.

      Change the Method title to “Local Delivery” (or however you want to present this option to your customers at checkout). If you want, you can also add a flat rate delivery fee.

      Renaming the flat rate shipping method to “Local Delivery” and adding a delivery fee.

      Finally, click on Save changes. Your local pickup and delivery options will now both appear on your site’s checkout page, where customers can select their preferred methods.

      Curbside pickup and local delivery options on the checkout page.

      At this point, you’re ready to start offering curbside pickup and delivery to your customers. However, you may want to take a few extra steps to make managing your orders easier.

      Step 4: Install Order Delivery Date for WooCommerce to Manage Requests

      While you can technically set up curbside pickup and delivery for your business using WooCommerce alone, its native features don’t enable you to manage or schedule orders. This could lead to problems if you have multiple customers placing pickup and delivery orders at the same time.

      One way to solve this issue is to enable customers to schedule their pickups and deliveries. Order Delivery Date for WooCommerce can help with this.

      The Order Delivery Date for WooCommerce plugin.

      After you install and activate this plugin, navigate to Order Delivery Date in your WordPress dashboard.

      The Order Delivery Date for WooCommerce settings page.

      Then configure the following settings:

      • Select the checkbox next to Enable Delivery Date capture on the checkout page.
      • Choose which days you’re available for delivery.
      • Set the minimum number of hours you need to prepare an order for delivery.
      • Specify how many days in advance customers can schedule an order.
      • Select the checkbox next to Selection of the delivery date on the checkout page will become mandatory.
      • Set the maximum number of deliveries you can handle per day.
      • Select the checkbox next to Enable default sorting of orders (in descending order) by Delivery Date on WooCommerce > Orders page.

      You may also wish to make additional adjustments in the Appearance and Holidays tabs. Remember to save your changes when you’re done.

      Now, when customers reach your checkout page, they’ll have to choose a delivery date.

      A calendar delivery date selector on the checkout page.

      Once the maximum number of orders for any particular day has been reached, that date will become unavailable in the calendar. This will prevent you from receiving more orders than you can physically manage at one time.

      On your WooCommerce Orders page, you’ll be able to see the customer’s specified delivery/pickup date listed for each order.

      The Delivery Date column in the WooCommerce Orders list.

      Note that you’ll still need to contact customers to inform them what time their orders will be ready (especially for curbside pickup).


      Curbside Takeout or Home Delivery?

      Curbside pickup and delivery options enable your customers to purchase their favorite products from you with minimal contact. This could help your business survive the COVID-19 pandemic and its economic side effects and is a valuable strategy for building customer loyalty.

      Fortunately, you can enable both local pickup and delivery using WooCommerce in just four steps:

      1. Configure a local shipping zone.
      2. Enable local pickup as a shipping option.
      3. Add a flat rate shipping option.
      4. Install Order Delivery Date for WooCommerce to manage requests.

      The foundation of any successful e-commerce site is a reliable hosting plan. At DreamHost, we provide quality shared hosting services for small businesses at affordable prices. Check out our plans today!




      How To Build and Deploy a Node.js Application To DigitalOcean Kubernetes Using Semaphore Continuous Integration and Delivery


      The author selected the Open Internet / Free Speech fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes allows users to create resilient and scalable services with a single command. Like anything that sounds too good to be true, it has a catch: you must first prepare a suitable Docker image and thoroughly test it.

      Continuous Integration (CI) is the practice of testing the application on each update. Doing this manually is tedious and error-prone, but a CI platform runs the tests for you, catches errors early, and locates the point at which the errors were introduced. Release and deployment procedures are often complicated, time-consuming, and require a reliable build environment. With Continuous Delivery (CD) you can build and deploy your application on each update without human intervention.

      To automate the whole process, you’ll use Semaphore, a Continuous Integration and Delivery (CI/CD) platform.

      In this tutorial, you’ll build an address book API service with Node.js. The API exposes a simple RESTful API interface to create, delete, and find people in the database. You’ll use Git to push the code to GitHub. Then you’ll use Semaphore to test the application, build a Docker image, and deploy it to a DigitalOcean Kubernetes cluster. For the database, you’ll create a PostgreSQL cluster using DigitalOcean Managed Databases.

      Prerequisites

      Before reading on, ensure you have the following:

      • A DigitalOcean account and a Personal Access Token. Follow Create a Personal Access Token to set one up for your account.
      • A Docker Hub account.
      • A GitHub account.
      • A Semaphore account; you can sign up with your GitHub account.
      • A new GitHub repository called addressbook for the project. When creating the repository, select the Initialize this repository with a README checkbox and select Node in the Add .gitignore menu. Follow GitHub’s Create a Repo help page for more details.
      • Git installed on your local machine and set up to work with your GitHub account. If you are unfamiliar or need a refresher, consider reading the How to use Git reference guide.
      • curl installed on your local machine.
      • Node.js installed on your local machine. In this tutorial, you’ll use Node.js version 10.16.0.

      Step 1 — Creating the Database and the Kubernetes Cluster

      Start by provisioning the services that will power the application: the DigitalOcean Database Cluster and the DigitalOcean Kubernetes Cluster.

      Log in to your DigitalOcean account and create a project. A project lets you organize all the resources that make up the application. Call the project addressbook.

      Next, create a PostgreSQL cluster. The PostgreSQL database service will hold the application’s data. You can pick the latest version available. It should take a few minutes before the service is ready.

      Once the PostgreSQL service is ready, create a database and a user. Set the database name to addressbook_db and set the username to addressbook_user. Take note of the password that’s generated for your new user. Databases are PostgreSQL’s way of organizing data. Usually, each application has its own database, although there are no hard rules about this. The application will use the username and password to get access to the database so it can save and retrieve its data.

      Finally, create a Kubernetes Cluster. Choose the same region in which the database is running. Name the cluster addressbook-server and set the number of nodes to 3.

      While the nodes are provisioning, you can start building your application.

      Step 2 — Writing the Application

      Let’s build the address book application you’re going to deploy. To start, clone the GitHub repository you created in the prerequisites so you have a local copy of the .gitignore file GitHub created for you, and you’ll be able to commit your application code quickly without having to manually create a repository. Open your browser and go to your new GitHub repository. Click on the Clone or download button and copy the provided URL. Use Git to clone the empty repository to your machine:

      • git clone https://github.com/your_github_username/addressbook

      Enter the project directory:
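
      • cd addressbook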

      With the repository cloned, you can start writing the app. You’ll build two components: a module that interacts with the database, and a module that provides the HTTP service. The database module will know how to save and retrieve persons from the address book database, and the HTTP module will receive requests and respond accordingly.

      While not strictly mandatory, it’s good practice to test your code while you write it, so you’ll also create a testing module. This is the planned layout for the application:

      • database.js: database module. It handles database operations.
      • app.js: the end user module and the main application. It provides an HTTP service for the users to connect to.
      • database.test.js: tests for the database module.

      In addition, you’ll want a package.json file for your project, which describes the project and its required dependencies. You can either create it manually with your editor, or interactively using npm. Run the npm init command to create the file interactively:
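
      • npm init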

      The command will ask for some information to get started. Fill in the values as shown in the example. If you don’t see an answer listed, leave the answer blank, which uses the default value in parentheses:

      npm output

      package name: (addressbook) addressbook
      version: (1.0.0) 1.0.0
      description: Addressbook API and database
      entry point: (index.js) app.js
      test command:
      git repository: URL for your GitHub repository
      keywords:
      author: Sammy the Shark <sammy@example.com>
      license: (ISC)
      About to write to package.json:

      {
        "name": "addressbook",
        "version": "1.0.0",
        "description": "Addressbook API and database",
        "main": "app.js",
        "scripts": {
          "test": "echo \"Error: no test specified\" && exit 1"
        },
        "author": "",
        "license": "ISC"
      }

      Is this OK? (yes) yes

      Now you can start writing the code. The database is at the core of the service you’re developing. It’s essential to have a well-designed database model before writing any other components. Consequently, it makes sense to start with the database code.

      You don’t have to code all the bits of the application; Node.js has a large library of reusable modules. For instance, you don’t have to write any SQL queries if you have the Sequelize ORM module in the project. This module provides an interface that handles databases as JavaScript objects and methods. It can also create tables in your database. Sequelize needs the pg module to work with PostgreSQL.

      Install modules using the npm install command with the --save option, which tells npm to save the module in package.json. Execute this command to install both sequelize and pg:

      • npm install --save sequelize pg

      Create a new JavaScript file to hold the database code:
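
      • nano database.js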

      Import the sequelize module by adding this line to the file:

      database.js

      const Sequelize = require('sequelize');
      
      . . .
      

      Then, below that line, initialize a sequelize object with the database connection parameters, which you’ll retrieve from the system environment. This keeps the credentials out of your code so you don’t accidentally share them when you push your code to GitHub. You can use process.env to access environment variables, and JavaScript’s || operator to set defaults for undefined variables:

      database.js

      . . .
      
      const sequelize = new Sequelize(process.env.DB_SCHEMA || 'postgres',
                                      process.env.DB_USER || 'postgres',
                                      process.env.DB_PASSWORD || '',
                                      {
                                          host: process.env.DB_HOST || 'localhost',
                                          port: process.env.DB_PORT || 5432,
                                          dialect: 'postgres',
                                          dialectOptions: {
                                              ssl: process.env.DB_SSL == "true"
                                          }
                                      });
      
      . . .
      

      Now define the Person model. To keep the example from getting too complex, you’ll only create two fields: firstName and lastName, both storing string values. Add the following code to define the model:

      database.js

      . . .
      
      const Person = sequelize.define('Person', {
          firstName: {
              type: Sequelize.STRING,
              allowNull: false
          },
          lastName: {
              type: Sequelize.STRING,
              allowNull: true
          },
      });
      
      . . .
      

      This defines the two fields, making firstName mandatory with allowNull: false. Sequelize’s model definition documentation shows the available data types and options.

      Finally, export the sequelize object and the Person model so other modules can use them:

      database.js

      . . .
      
      module.exports = {
          sequelize: sequelize,
          Person: Person
      };
      

      It’s handy to have a table-creation script in a separate file that you can call at any time during development. These types of files are called migrations. Create a new file to hold this code:
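
      • nano migrate.js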

      Add these lines to the file to import the database model you defined, and call the sync() function to initialize the database, which creates the table for your model:

      migrate.js

      var db = require('./database.js');
      db.sequelize.sync();
      

      The application is looking for database connection information in system environment variables. Create a file called .env to hold those values, which you will load into the environment during development:
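
      • nano .env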

      Add the following variable declarations to the file. Ensure that you set DB_HOST, DB_PORT, and DB_PASSWORD to those associated with your DigitalOcean PostgreSQL cluster:

      .env

      export DB_SCHEMA=addressbook_db
      export DB_USER=addressbook_user
      export DB_PASSWORD=your_db_user_password
      export DB_HOST=your_db_cluster_host
      export DB_PORT=your_db_cluster_port
      export DB_SSL=true
      export PORT=3000
      

      Save the file.

      Warning: never check environment files into source control. They usually have sensitive information.

      Since you defined a default .gitignore file when you created the repository, this file is already ignored.

      You are ready to initialize the database. Import the environment file and run migrate.js:

      • source ./.env
      • node migrate.js

      This creates the database table:

      Output

      Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));
      Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;

      The output shows two commands. The first one creates the People table as per your definition. The second command checks that the table was indeed created by looking it up in the PostgreSQL catalog.

      It’s good practice to create tests for your code. With tests, you can validate the code’s behavior. You can write a check for each function, method, or any other part of your system and verify that it works the way you’d expect, without having to test things manually.

      The jest testing framework is a great fit for writing tests against Node.js applications. Jest scans the files in the project for test files and executes them one at a time. Install Jest with the --save-dev option, which tells npm that the module is not required to run the program, but that it is a dependency for developing the application:

      • npm install --save-dev jest

      You’ll write tests to verify that you can insert, read, and delete records from your database. These tests will verify that your database connection and permissions are configured properly, and will also provide some tests you can use in your CI/CD pipeline later.

      Create the database.test.js file:
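
      • nano database.test.js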

      Add the following content. Start by importing the database code:

      database.test.js

      const db = require('./database');
      
      . . .
      

      To ensure the database is ready to use, call sync() inside the beforeAll function:

      database.test.js

      . . .
      
      beforeAll(async () => {
          await db.sequelize.sync();
      });
      
      . . .
      

      The first test creates a person record in the database. The sequelize library executes all queries asynchronously, which means it doesn’t wait for the results of the query. To make the test wait for results so you can verify them, you must use the async and await keywords. This test calls the create() method to insert a new row in the database. Use expect to compare the person.id column with 1. The test will fail if you get a different value:

      database.test.js

      . . .
      
      test('create person', async () => {
          expect.assertions(1);
          const person = await db.Person.create({
              id: 1,
              firstName: 'Sammy',
              lastName: 'Davis Jr.',
              email: 'sammy@example.com'
          });
          expect(person.id).toEqual(1);
      });
      
      . . .
      

      In the next test, use the findByPk() method to retrieve the row with id=1. Then, validate the firstName and lastName values. Once again, use async and await:

      database.test.js

      . . .
      
      test('get person', async () => {
          expect.assertions(2);
          const person = await db.Person.findByPk(1);
          expect(person.firstName).toEqual('Sammy');
          expect(person.lastName).toEqual('Davis Jr.');
      });
      
      . . .
      

      Finally, test removing a person from the database. The destroy() method deletes the person with id=1. To ensure that it worked, try retrieving the person a second time and checking that the returned value is null:

      database.test.js

      . . .
      
      test('delete person', async () => {
          expect.assertions(1);
          await db.Person.destroy({
              where: {
                  id: 1
              }
          });
          const person = await db.Person.findByPk(1);
          expect(person).toBeNull();
      });
      
      . . .
      

      Finally, add this code to close the connection to the database with close() once all tests have finished:

      database.test.js

      . . .
      
      afterAll(async () => {
          await db.sequelize.close();
      });
      

      Save the file.

      The jest command runs the test suite for your program, but you can also store commands in package.json. Open this file in your editor:
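
      • nano package.json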

      Locate the scripts keyword and replace the existing test line (which was just a placeholder). The test command is jest:

      . . .
      
        "scripts": {
          "test": "jest"
        },
      
      . . .
      

      Now you can call npm run test to invoke the test suite. This may be a longer command, but if you need to modify the jest command later, external services won’t have to change; they can continue calling npm run test.

      Run the tests:
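
      • npm run test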

      Then, check the results:

      Output

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): INSERT INTO "People" ("id","firstName","lastName","createdAt","updatedAt") VALUES ($1,$2,$3,$4,$5) RETURNING *;

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): DELETE FROM "People" WHERE "id" = 1

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;

      PASS ./database.test.js
        ✓ create person (344ms)
        ✓ get person (173ms)
        ✓ delete person (323ms)

      Test Suites: 1 passed, 1 total
      Tests:       3 passed, 3 total
      Snapshots:   0 total
      Time:        5.315s
      Ran all test suites.

      With the database code tested, you can build the API service to manage the people in the address book.

      To serve HTTP requests, you’ll use the Express web framework. Install Express and save it as a dependency using npm install:

      • npm install --save express

      You’ll also need the body-parser module, which you’ll use to access the HTTP request body. Install this as a dependency as well:

      • npm install --save body-parser

      Create the main application file app.js:
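
      • nano app.js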

      Import the express, body-parser, and database modules. Then create an instance of the express module called app to control and configure the service. You use app.use() to add features such as middleware. Use this to add the body-parser module so the application can read url-encoded strings:

      app.js

      var express = require('express');
      var bodyParser = require('body-parser');
      var db = require('./database');
      var app = express();
      app.use(bodyParser.urlencoded({ extended: true }));
      
      . . .
      

      Next, add routes to the application. Routes are similar to buttons in an app or website; they trigger some action in your application. Routes link unique URLs to actions in the application. Each route will serve a specific path and support a different operation.

      The first route you’ll define handles GET requests for the /person/$ID path, which will display the database record for the person with the specified ID. Express automatically sets the value of the requested $ID in the req.params.id variable.

      The application must reply with the person data encoded as a JSON string. As you did in the database tests, use the findByPk() method to retrieve the person by id and reply to the request with HTTP status 200 (OK) and send the person record as JSON. Add the following code:

      app.js

      . . .
      
      app.get("/person/:id", function(req, res) {
          db.Person.findByPk(req.params.id)
              .then( person => {
                  res.status(200).send(JSON.stringify(person));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      Errors cause the code in catch() to be executed. For instance, if the database is down, the connection will fail, and this branch will run instead. In that case, the code sets the HTTP status to 500 (Internal Server Error) and sends the error message back to the user.

      Add another route to create a person in the database. This route will handle PUT requests and access the person’s data from the req.body. Use the create() method to insert a row in the database:

      app.js

      . . .
      
      app.put("/person", function(req, res) {
          db.Person.create({
              firstName: req.body.firstName,
              lastName: req.body.lastName,
              id: req.body.id
          })
              .then( person => {
                  res.status(200).send(JSON.stringify(person));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      Add another route to handle DELETE requests, which will remove records from the address book. First, use the ID to locate the record and then use the destroy method to remove it:

      app.js

      . . .
      
      app.delete("/person/:id", function(req, res) {
          db.Person.destroy({
              where: {
                  id: req.params.id
              }
          })
              .then( () => {
                  res.status(200).send();
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      And for convenience, add a route that retrieves all people in the database using the /all path:

      app.js

      . . .
      
      app.get("/all", function(req, res) {
          db.Person.findAll()
              .then( persons => {
                  res.status(200).send(JSON.stringify(persons));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      One last route remains. If the request did not match any of the previous routes, send status code 404 (Not Found):

      app.js

      . . .
      
      app.use(function(req, res) {
          res.status(404).send("404 - Not Found");
      });
      
      . . .
      

      Finally, add the listen() method, which starts up the service. If the environment variable PORT is defined, then the service listens on that port; otherwise, it defaults to port 3000:

      app.js

      . . .
      
      var server = app.listen(process.env.PORT || 3000, function() {
          console.log("app is running on port", server.address().port);
      });
      

      As you’ve learned, the package.json file lets you define various commands to run tests, start your apps, and handle other tasks, which often lets you run common commands with much less typing. Add a new command in package.json to start the application. Edit the file:

      • nano package.json

      Add the start command, so it looks like this:

      package.json

      . . .
      
        "scripts": {
          "test": "jest",
          "start": "node app.js"
        },
      
      . . .
      

      Don’t forget to add a comma to the previous line, as the scripts section needs its entries separated by commas.

      Save the file and start the application for the first time. First, load the environment file with source; this imports the variables into the session and makes them available to the application. Then, start the application with npm run start:

      • source ./.env
      • npm run start

      The app starts on port 3000:

      Output

      app is running on port 3000

      Open a browser and navigate to http://localhost:3000/all. You’ll see a page showing [].
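      If you prefer working in the terminal, you can query the same endpoint with curl from another terminal session while the app is running:

      • curl http://localhost:3000/all

      Either way, the result is an empty array, [], because there are no records in the database yet.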

      Switch back to your terminal and press CTRL-C to stop the application.

      Now is an excellent time to add code quality tests. Code quality tools, also known as linters, scan the project for issues in the code. Bad coding practices like leaving unused variables, not ending statements with a semicolon, or missing curly braces can cause bugs that are difficult to find.

      Install the jshint tool, a JavaScript linter, as a development dependency:

      • npm install --save-dev jshint

      Over the years, JavaScript has received numerous updates, features, and syntax changes. The language has been standardized by ECMA International under the name “ECMAScript”. About once a year, ECMA releases a new version of ECMAScript with new features.

      By default, jshint assumes that your code is compatible with ES6 (ECMAScript Version 6), and will throw an error if it finds any keywords not supported in that version. You’ll want to find the version that is compatible with your code. If you look at the feature table for all the recent versions, you’ll find that the async/await keywords were not introduced until ES8. You used both keywords in the database test code, so that sets the minimum compatible version to ES8.
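      As a quick illustration (this snippet is not part of the project), the following function uses both keywords; with esversion set below 8, jshint would flag the async and await keywords as unsupported:

      // Illustrative only: async/await requires ES8 (ECMAScript 2017) support
      async function findPerson(db, id) {
          const person = await db.Person.findByPk(id);
          return person;
      }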

      To tell jshint the version you’re using, create a file called .jshintrc:

      • nano .jshintrc

      In the file, specify esversion. The jshintrc file uses JSON, so create a new JSON object in the file:

      .jshintrc

      { "esversion": 8 }
      

      Save the file and exit the editor.

      Add a command to run jshint. Edit package.json:

      • nano package.json

      Add a lint command to your project in the scripts section of package.json. The command calls the lint tool against all the JavaScript files you created so far:

      package.json

      . . .
      
        "scripts": {
          "test": "jest",
          "start": "node app.js",
          "lint": "jshint app.js database*.js migrate.js"
        },
      
      . . .
      

      Now you can run the linter to find any issues:

      • npm run lint

      There should not be any error messages:

      Output

      > jshint app.js database*.js migrate.js

      If there are any errors, jshint will show the line that has the problem.
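      For example, if a semicolon were missing in app.js, the report would look something like this (illustrative output; the line and column numbers depend on where the problem is):

      Output

      app.js: line 7, col 42, Missing semicolon.

      1 error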

      You’ve completed the project and ensured it works. Add the files to the repository, commit, and push the changes:

      • git add *.js
      • git add package*.json
      • git add .jshintrc
      • git commit -m 'initial commit'
      • git push origin master

      Now you can configure Semaphore to test, build, and deploy the application, starting by configuring Semaphore with your DigitalOcean Personal Access Token and database credentials.

      Step 3 — Creating Secrets in Semaphore

      There is some information that doesn’t belong in a GitHub repository. Passwords and API Tokens are good examples of this. You’ve stored this sensitive data in a separate file and loaded it into your environment. When using Semaphore, you can use Secrets to store sensitive data.

      There are three kinds of secrets in the project:

      • Docker Hub: the username and password of your Docker Hub account.
      • DigitalOcean Personal Access Token: to deploy the application to your Kubernetes cluster.
      • Environment Variables: for database username and password connection parameters.

      To create the first secret, open your browser and log in to the Semaphore website. On the left navigation menu, click Secrets under the CONFIGURATION heading. Click the Create New Secret button.

      For Name of the Secret, enter dockerhub. Then under Environment Variables, create two environment variables:

      • DOCKER_USERNAME: your DockerHub username.
      • DOCKER_PASSWORD: your DockerHub password.

      Docker Hub Secret

      Click Save Changes.

      Create a second secret for your DigitalOcean Personal Access Token. Once again, click on Secrets on the left navigation menu, then on Create New Secret. Call this secret do-access-token and create an environment variable called DO_ACCESS_TOKEN with the value set to your Personal Access Token:

      DigitalOcean Token Secret

      Save the secret.

      For the next secret, instead of setting environment variables directly, you’ll upload the .env file from the project’s root.

      Create a new secret called env-production. Under the Files section, press the Upload file link to locate and upload your .env file, and tell Semaphore to place it at /home/semaphore/env-production.

      Environment Secret

      Note: Because the file is hidden, you may have trouble finding it on your computer. There is usually a menu item or a key combination to view hidden files, such as CTRL+H. If all else fails, you can try copying the file to a non-hidden name, uploading that copy, and then renaming it back once the upload is complete.

      The environment variables are all configured. Now you can begin the Continuous Integration setup.

      Step 4 — Adding your Project to Semaphore

      In this step you will add your project to Semaphore and start the Continuous Integration (CI) pipeline.

      First, link your GitHub repository with Semaphore:

      1. Log in to your Semaphore account.
      2. Click the + icon next to PROJECTS.
      3. Click the Add Repository button next to your repository.

      Add Repository to Semaphore

      Now that Semaphore is connected, it will pick up any changes in the repository automatically.

      You are now ready to create the Continuous Integration pipeline for the application. A pipeline defines the path your code must travel to get built, tested, and deployed. The pipeline is automatically run each time there is a change in the GitHub repository.

      First, you should ensure that Semaphore uses the same version of Node.js you’ve been using during development. You can check which version is running on your machine:

      • node -v

      Output

      v10.16.0

      You can tell Semaphore which version of Node.js to use by creating a file called .nvmrc in your repository. Internally, Semaphore uses node version manager to switch between Node.js versions. Create the .nvmrc file and set the version to 10.16.0:

      • echo '10.16.0' > .nvmrc

      Semaphore pipelines go in the .semaphore directory. Create the directory:

      • mkdir .semaphore

      Create a new pipeline file. The initial pipeline is always called semaphore.yml. In this file, you’ll define all the steps required to build and test the application.

      • nano .semaphore/semaphore.yml

      Note: You are creating a file in the YAML format. You must preserve the leading spaces as shown in the tutorial.

      The first line must set the Semaphore file version; the current stable is v1.0. Also, the pipeline needs a name. Add these lines to your file:

      .semaphore/semaphore.yml

      version: v1.0
      name: Addressbook
      
      . . .
      

      Semaphore automatically provisions virtual machines to run the tasks. There are various machines to choose from. For the integration jobs, use the e1-standard-2 machine (2 CPUs, 4 GB RAM) along with an Ubuntu 18.04 OS image. Add these lines to the file:

      .semaphore/semaphore.yml

      . . .
      
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      Semaphore uses blocks to organize the tasks. Each block can have one or more jobs. All jobs in a block run in parallel, each one in an isolated machine. Semaphore waits for all jobs in a block to pass before starting the next one.

      Start by defining the first block, which installs all the JavaScript dependencies to test and run the application:

      .semaphore/semaphore.yml

      . . .
      
      blocks:
        - name: Install dependencies
          task:
      
      . . .
      

      You can define environment variables that are common for all jobs, like setting NODE_ENV to test, so Node.js knows this is a test environment. Add this code after task:

      .semaphore/semaphore.yml

      . . .
          task:
            env_vars:
              - name: NODE_ENV
                value: test
      
      . . .
      

      Commands in the prologue section are executed before each job in the block. It’s a convenient place to define setup tasks. You can use checkout to clone the GitHub repository. Then, nvm use activates the appropriate Node.js version you specified in .nvmrc. Add the prologue section:

      .semaphore/semaphore.yml

          task:
      . . .
      
            prologue:
              commands:
                - checkout
                - nvm use
      
      . . .
      

      Next, add this code to install the project’s dependencies. To speed up jobs, Semaphore provides the cache tool. You can run cache store to save the node_modules directory in Semaphore’s cache. cache automatically figures out which files and directories should be stored. The second time the job is executed, cache restore restores the directory.

      .semaphore/semaphore.yml

      . . .
      
            jobs:
              - name: npm install and cache
                commands:
                  - cache restore
                  - npm install
                  - cache store 
      
      . . .
      

      Add another block which will run two jobs: one to run the lint check and another to run the application’s test suite.

      .semaphore/semaphore.yml

      . . .
      
        - name: Tests
          task:
            env_vars:
              - name: NODE_ENV
                value: test
            prologue:
              commands:
                - checkout
                - nvm use
                - cache restore 
      
      . . .
      

      The prologue repeats the same commands as in the previous block and restores node_modules from the cache. Since this block will run tests, you set the NODE_ENV environment variable to test.

      Now add the jobs. The first job performs the code quality check with jshint:

      .semaphore/semaphore.yml

      . . .
      
            jobs:
              - name: Static test
                commands:
                  - npm run lint
      
      . . .
      

      The next job executes the unit tests. You’ll need a database to run them, as you don’t want to use your production database. Semaphore’s sem-service can start a local PostgreSQL database in the test environment that is completely isolated. The database is destroyed when the job ends. Start this service and run the tests:

      .semaphore/semaphore.yml

      . . .
      
              - name: Unit test
                commands:
                  - sem-service start postgres
                  - npm run test
      

      Save the .semaphore/semaphore.yml file.
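      If you want to double-check your indentation, the file assembled from the snippets above should look like this:

      .semaphore/semaphore.yml

      version: v1.0
      name: Addressbook
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      blocks:
        - name: Install dependencies
          task:
            env_vars:
              - name: NODE_ENV
                value: test
            prologue:
              commands:
                - checkout
                - nvm use
            jobs:
              - name: npm install and cache
                commands:
                  - cache restore
                  - npm install
                  - cache store
        - name: Tests
          task:
            env_vars:
              - name: NODE_ENV
                value: test
            prologue:
              commands:
                - checkout
                - nvm use
                - cache restore
            jobs:
              - name: Static test
                commands:
                  - npm run lint
              - name: Unit test
                commands:
                  - sem-service start postgres
                  - npm run test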

      Now add and commit the changes to the GitHub repository:

      • git add .nvmrc
      • git add .semaphore/semaphore.yml
      • git commit -m "continuous integration pipeline"
      • git push origin master

      As soon as the code is pushed to GitHub, Semaphore starts the CI pipeline:

      Running Workflow

      You can click on the pipeline to show the blocks and jobs, and their output.

      Integration Pipeline

      Next you will create a new pipeline that builds a Docker image for the application.

      Step 5 — Building Docker Images for the Application

      A Docker image is the basic unit of a Kubernetes deployment. The image should have all the binaries, libraries, and code required to run the application. A Docker container is not a lightweight virtual machine, but it behaves like one. The Docker Hub registry contains hundreds of ready-to-use images, but we’re going to build our own.

      In this step, you’ll add a new pipeline to build a custom Docker image for your app and push it to Docker Hub.

      To build a custom image, create a Dockerfile:

      • nano Dockerfile

      The Dockerfile is a recipe to create the image. You can use the official Node.js distribution as a starting point instead of starting from scratch. Add this to your Dockerfile:

      Dockerfile

      FROM node:10.16.0-alpine
      
      . . .
      

      Then add commands that copy package.json and package-lock.json into the image and install the Node modules:

      Dockerfile

      . . .
      
      COPY package*.json ./
      RUN npm install
      
      . . .
      

      Installing the dependencies first will speed up subsequent builds, as Docker will cache this step.

      Now add this command, which copies all the application files from the project root into the image:

      Dockerfile

      . . .
      
      COPY *.js ./
      
      . . .
      

      Finally, EXPOSE specifies that the container listens for connections on port 3000, where the application is listening, and CMD sets the command that should run when the container starts. Add these lines to your file:

      Dockerfile

      . . .
      
      EXPOSE 3000
      CMD [ "npm", "run", "start" ]
      

      Save the file.
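      For reference, the assembled Dockerfile now reads:

      Dockerfile

      FROM node:10.16.0-alpine
      COPY package*.json ./
      RUN npm install
      COPY *.js ./
      EXPOSE 3000
      CMD [ "npm", "run", "start" ]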

      With the Dockerfile complete, you can create a new pipeline so Semaphore can build the image for you when you push your code to GitHub. Create a new file called docker-build.yml:

      • nano .semaphore/docker-build.yml

      Start the pipeline with the same boilerplate as the CI pipeline, but with the name Docker build:

      .semaphore/docker-build.yml

      version: v1.0
      name: Docker build
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      This pipeline will have only one block and one job. In Step 3, you created a secret named dockerhub with your Docker Hub username and password. Here, you’ll import these values using the secrets keyword. Add this code:

      .semaphore/docker-build.yml

      . . .
      
      blocks:
        - name: Build
          task:
            secrets:
              - name: dockerhub
      
      . . .
      

      Docker images are stored in repositories. We’ll use the official Docker Hub registry, which allows for an unlimited number of public images. Add these lines to check out the code from GitHub and use the docker login command to authenticate with Docker Hub:

      .semaphore/docker-build.yml

          task:
      . . .
      
            prologue:
              commands:
                - checkout
                - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
      
      . . .
      

      Each Docker image is fully identified by the combination of name and tag. The name usually corresponds to the product or software, and the tag corresponds to the particular version of the software, for example, node:10.16.0. When no tag is supplied, Docker defaults to the special latest tag. Hence, it’s considered good practice to use the latest tag to refer to the most current image.

      Add the following code to build the image and push it to Docker Hub:

      .semaphore/docker-build.yml

      . . .
      
            jobs:
            - name: Docker build
              commands:
                - docker pull "${DOCKER_USERNAME}/addressbook:latest" || true
                - docker build --cache-from "${DOCKER_USERNAME}/addressbook:latest" -t "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" .
                - docker push "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID"
      

      When Docker builds the image, it reuses parts of existing images to speed up the process. The first command tries to pull the latest image from Docker Hub so it may be reused. Semaphore stops the pipeline if any of the commands return a status code different than zero. For example, if the repository doesn’t have any latest image, as it won’t on the first try, the pipeline will stop. You can force Semaphore to ignore failed commands by appending || true to the command.

      The second command builds the image. To reference this particular image later, you can tag it with a unique string. Semaphore provides several environment variables for jobs. One of them, $SEMAPHORE_WORKFLOW_ID, is unique and shared among all the pipelines in the workflow. It’s handy for referencing this image later in the deployment.

      The third command pushes the image to Docker Hub.

      The build pipeline is ready, but Semaphore will not start it unless you connect it to the main CI pipeline. You can chain multiple pipelines to create complex, multi-branch workflows using promotions.

      Edit the main pipeline file .semaphore/semaphore.yml:

      • nano .semaphore/semaphore.yml

      Add the following lines at the end of the file:

      .semaphore/semaphore.yml

      . . .
      
      promotions:
        - name: Dockerize
          pipeline_file: docker-build.yml
          auto_promote_on:
            - result: passed
      

      auto_promote_on defines the condition to start the docker build pipeline. In this case, it runs when all jobs defined in the semaphore.yml file have passed.

      To test the new pipeline, you need to add, commit, and push all the modified files to GitHub:

      • git add Dockerfile
      • git add .semaphore/docker-build.yml
      • git add .semaphore/semaphore.yml
      • git commit -m "docker build pipeline"
      • git push origin master

      After the CI pipeline is complete, the Docker build pipeline starts.

      Build Pipeline

      When it finishes, you’ll see your new image in your Docker Hub repository.

      Your build process now tests the code and creates the image. Next, you’ll create the final pipeline to deploy the application to your Kubernetes cluster.

      Step 6 — Setting up Continuous Deployment to Kubernetes

      The building block of a Kubernetes deployment is the pod. A pod is a group of containers that are managed as a single unit. The containers inside a pod start and stop in unison and always run on the same machine, sharing its resources. Each pod has an IP address. In this case, the pods will only have one container.

      Pods are ephemeral; they are created and destroyed frequently. You can’t tell which IP address is going to be assigned to each pod until it’s started. To solve this, you’ll use services, which have fixed public IP addresses so incoming connections can be load-balanced and forwarded to the pods.

      You could manage pods directly, but it’s better to let Kubernetes handle that by using a deployment. In this section, you will create a declarative manifest that describes the final desired state for your cluster. The manifest has two resources:

      • Deployment: starts the pods in the cluster nodes as required and keeps track of their status. Since in this tutorial we’re using a 3-node cluster, we’ll deploy 3 pods.
      • Service: acts as an entry point for our users. Listens to traffic on port 80 (HTTP) and forwards the connection to the pods.

      Create a manifest file called deployment.yml:

      • nano deployment.yml

      Start the manifest with the Deployment resource. Add the following contents to the new file to define the deployment:

      deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: addressbook
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: addressbook
        template:
          metadata:
            labels:
              app: addressbook
          spec:
            containers:
              - name: addressbook
                image: ${DOCKER_USERNAME}/addressbook:${SEMAPHORE_WORKFLOW_ID}
                env:
                  - name: NODE_ENV
                    value: "production"
                  - name: PORT
                    value: "$PORT"
                  - name: DB_SCHEMA
                    value: "$DB_SCHEMA"
                  - name: DB_USER
                    value: "$DB_USER"
                  - name: DB_PASSWORD
                    value: "$DB_PASSWORD"
                  - name: DB_HOST
                    value: "$DB_HOST"
                  - name: DB_PORT
                    value: "$DB_PORT"
                  - name: DB_SSL
                    value: "$DB_SSL"
      
      
      . . .
      

      For each resource in the manifest, you need to set an apiVersion. For deployments, use apiVersion: apps/v1, a stable version. Then, tell Kubernetes that this resource is a Deployment with kind: Deployment. Each definition should have a name defined in metadata.name.

      In the spec section you tell Kubernetes what the desired final state is. This definition requests that Kubernetes should create 3 pods with replicas: 3.

      Labels are key-value pairs used to organize and cross-reference Kubernetes resources. You define labels with metadata.labels, and you can look for matching labels with selector.matchLabels. This is how you connect elements together.

      The key spec.template defines a model that Kubernetes will use to create each pod. Inside spec.template.metadata.labels you set one label for the pods: app: addressbook.

      With spec.selector.matchLabels you make the deployment manage any pods with the label app: addressbook. In this case you are making this deployment responsible for all the pods.

      Finally, you define the image that runs in the pods. In spec.template.spec.containers you set the image name. Kubernetes will pull the image from the registry as needed; in this case, it will pull from Docker Hub. You can also set environment variables for the containers, which is fortunate because you need to supply several values for the database connection.

      To keep the deployment manifest flexible, you’ll be relying on variables. The YAML format, however, doesn’t allow variables, so the file isn’t valid yet. You’ll solve that problem when you define the deployment pipeline for Semaphore.
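      For a quick preview of how that will work: if, for example, DB_USER is set to addressbook_user in the environment (a hypothetical value), running envsubst over the manifest turns value: "$DB_USER" into value: "addressbook_user", producing plain YAML that kubectl can apply:

      • envsubst < deployment.yml > deploy.yml

      You’ll add this substitution step to the deployment pipeline shortly.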

      That’s it for the deployment. But this only defines the pods. You still need a service that will allow traffic to flow to your pods. You can add another Kubernetes resource in the same file as long as you use three hyphens (---) as a separator.

      Add the following code to define a load balancer service that connects to pods with the addressbook label:

      deployment.yml

      . . .
      
      ---
      
      apiVersion: v1
      kind: Service
      metadata:
        name: addressbook-lb
      spec:
        selector:
          app: addressbook
        type: LoadBalancer
        ports:
          - port: 80
            targetPort: 3000
      

      The load balancer will receive connections on port 80 and forward them to the pods’ port 3000 where the application is listening.

      Save the file.

      Now, create a deployment pipeline for Semaphore that will deploy the app using the manifest. Create a new file in the .semaphore directory:

      • nano .semaphore/deploy-k8s.yml

      Begin the pipeline as usual, specifying the version, name, and image:

      .semaphore/deploy-k8s.yml

      version: v1.0
      name: Deploy to Kubernetes
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      This pipeline will have two blocks. The first block deploys the application to the Kubernetes cluster.

      Define the block and import all the secrets:

      .semaphore/deploy-k8s.yml

      . . .
      
      blocks:
        - name: Deploy to Kubernetes
          task:
            secrets:
              - name: dockerhub
              - name: do-access-token
              - name: env-production
      
      . . .
      

      Store your DigitalOcean Kubernetes cluster name in an environment variable so you can reference it later:

      .semaphore/deploy-k8s.yml

      . . .
      
            env_vars:
              - name: CLUSTER_NAME
                value: addressbook-server
      
      . . .
      

      DigitalOcean Kubernetes clusters are managed with a combination of two programs: kubectl and doctl. The former is already included in Semaphore’s image, but the latter isn’t, so you need to install it. You can use the prologue section to do it.

      Add this prologue section:

      .semaphore/deploy-k8s.yml

      . . .
      
            prologue:
              commands:
                - wget https://github.com/digitalocean/doctl/releases/download/v1.20.0/doctl-1.20.0-linux-amd64.tar.gz
                - tar xf doctl-1.20.0-linux-amd64.tar.gz 
                - sudo cp doctl /usr/local/bin
                - doctl auth init --access-token $DO_ACCESS_TOKEN
                - doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
                - checkout
      
      . . .
      

      The first command downloads the official doctl release with wget. The next two commands extract the archive with tar and copy the doctl binary into /usr/local/bin. Once doctl is installed, it can authenticate with the DigitalOcean API and save the Kubernetes config file for your cluster. The final checkout command clones the code, and the prologue is done.

      Next comes the final piece of our pipeline: deploying to the cluster.

      Remember that deployment.yml contains some environment variables, and YAML does not allow them, so the file won’t work in its current state. To get around that, source the environment file to load the variables, then use the envsubst command to expand the variables in place with the actual values. The result, a file called deploy.yml, is entirely valid YAML with the values inserted. With the file in place, you can start the deployment with kubectl apply:

      .semaphore/deploy-k8s.yml

      . . .
      
            jobs:
            - name: Deploy
              commands:
                - source $HOME/env-production
                - envsubst < deployment.yml | tee deploy.yml
                - kubectl apply -f deploy.yml
      
      . . .
      

      The second block adds the latest tag to the image on Docker Hub to denote that this is the most current version deployed. Repeat the Docker login steps, then pull, retag, and push to Docker Hub:

      .semaphore/deploy-k8s.yml

      . . .
      
        - name: Tag latest release
          task:
            secrets:
              - name: dockerhub
            prologue:
              commands:
                - checkout
                - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
            jobs:
            - name: docker tag latest
              commands:
                - docker pull "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" 
                - docker tag "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" "${DOCKER_USERNAME}/addressbook:latest"
                - docker push "${DOCKER_USERNAME}/addressbook:latest"
      

      Save the file.

      This pipeline performs the deployment, but it can only start if the Docker image was successfully generated and pushed to Docker Hub. As a result, you must connect the build and deployment pipelines with a promotion. Edit the Docker build pipeline to add it:

      • nano .semaphore/docker-build.yml

      Add the promotion to the end of the file:

      .semaphore/docker-build.yml

      . . .
      
      promotions:
        - name: Deploy to Kubernetes
          pipeline_file: deploy-k8s.yml
          auto_promote_on:
            - result: passed
      

      You are done setting up the CI/CD workflow.

      All that remains is pushing the modified files and letting Semaphore do the work. Add, commit, and push your repository’s changes:

      • git add .semaphore/deploy-k8s.yml
      • git add .semaphore/docker-build.yml
      • git add deployment.yml
      • git commit -m "kubernetes deploy pipeline"
      • git push origin master

      It’ll take a few minutes for the deployment to complete.

      Deploy Pipeline
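      Once the pipeline finishes, you can optionally verify the rollout from your own machine, assuming kubectl is still configured to talk to your cluster:

      • kubectl get pods
      • kubectl get service addressbook-lb

      You should see three addressbook pods in the Running state, and the addressbook-lb service should show an external IP once DigitalOcean finishes provisioning the load balancer.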

      Let’s test the application next.

      Step 7 — Testing the Application

      At this point, the application is up and running. In this step, you’ll use curl to test the API endpoint.

      You’ll need to know the public IP that DigitalOcean has given to your cluster. Follow these steps to find it:

      1. Log in to your DigitalOcean account.
      2. Select the addressbook project.
      3. Go to Networking.
      4. Click on Load Balancers.
      5. Copy the IP address that is shown.

      Load Balancer IP

      Let’s check the /all route using curl:

      • curl -w "\n" YOUR_CLUSTER_IP/all

      You can use the -w "\n" option to make curl print a newline after the response, which keeps the output readable.

      Since there are no records in the database yet, you get an empty JSON array as the result:

      Output

      []

      Create a new person record by making a PUT request to the /person endpoint:

      • curl -w "\n" -X PUT \
      • -d "firstName=Sammy&lastName=the Shark" YOUR_CLUSTER_IP/person

      The API returns the JSON object for the person:

      Output

      { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "updatedAt": "2019-07-04T23:51:00.548Z", "createdAt": "2019-07-04T23:51:00.548Z" }

      Create a second person:

      • curl -w "\n" -X PUT \
      • -d "firstName=Tommy&lastName=the Octopus" YOUR_CLUSTER_IP/person

      The output indicates that a second person was created:

      Output

      { "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "updatedAt": "2019-07-04T23:52:08.724Z", "createdAt": "2019-07-04T23:52:08.724Z" }

      Now make a GET request to get the person with the id of 2:

      • curl -w "\n" YOUR_CLUSTER_IP/person/2

      The server replies with the data you requested:

      Output

      { "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "createdAt": "2019-07-04T23:52:08.724Z", "updatedAt": "2019-07-04T23:52:08.724Z" }

      To delete the person, send a DELETE request:

      • curl -w "\n" -X DELETE YOUR_CLUSTER_IP/person/2

      No output is returned by this command.

      You should only have one person in your database, the one with the id of 1. Try getting /all again:

      • curl -w "\n" YOUR_CLUSTER_IP/all

      The server replies with an array of persons containing only one record:

      Output

      [ { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "createdAt": "2019-07-04T23:51:00.548Z", "updatedAt": "2019-07-04T23:51:00.548Z" } ]

      At this point, there’s only one person left in the database.

      This completes the tests for all the endpoints in our application and marks the end of the tutorial.

      Conclusion

      In this tutorial, you wrote a complete Node.js application from scratch which used DigitalOcean’s managed PostgreSQL database service. You then used Semaphore’s CI/CD pipelines to fully automate a workflow that tested and built a container image, uploaded it to Docker Hub, and deployed it to DigitalOcean Kubernetes.

      To learn more about Kubernetes, you can read An Introduction to Kubernetes and the rest of DigitalOcean’s Kubernetes tutorials.

      Now that your application is deployed, you may consider adding a domain name, securing your database cluster, or setting up alerts for your database.


