
      How To Deploy Your Own Web Analytics Software with Umami on DigitalOcean’s App Platform


      The author selected The Mozilla Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      After deploying a website, you will want to add analytics scripts to your site to learn about the pages that drive the most traffic and track the number of visitors, goal conversions, and page views. Umami is an open-source web analytics software that runs on PostgreSQL and Next.js API routes.

      Umami allows you to track events, referring pages, session durations, view counts, and unique visitor counts for your pages. On a single Umami instance, you can track an unlimited number of websites and create multiple users so different people can track their websites from a single deployment.

      In this guide, you will clone Umami to your local computer, create PostgreSQL tables, set up connection pooling, and deploy Umami to App Platform.

      Prerequisites

      Before you begin this guide you’ll need the following:

      • A DigitalOcean account and a managed PostgreSQL database cluster, which Umami will use to store its data.
      • A GitHub account, for forking the Umami repository and connecting it to App Platform.
      • Git installed on your local computer.
      • The psql command-line client installed on your local computer.

      Step 1 — Forking and Cloning the Umami Repository

      The Umami repository on GitHub contains the files and scripts needed to run Umami. Forking this repository allows you to deploy Umami to App Platform and to use an SQL script contained in it to set up tables in the PostgreSQL database.

      In this step, you will fork the repository and clone it to your local computer with git.

      To fork the repository, go to the Umami repository on GitHub and click the Fork button at the top right corner of the page. Your copy of the repository will be at https://github.com/your_github_username/umami.

      In your forked repository, click the Code button, copy the HTTPS link, and clone the forked repository to your local computer with the following command:

      • git clone https://github.com/your_github_username/umami.git

      The git clone command creates a copy of a repository on your computer. You will see an output similar to the following after running the command:

      Output

      Cloning into 'umami'...
      remote: Enumerating objects: 6352, done.
      remote: Counting objects: 100% (270/270), done.
      remote: Compressing objects: 100% (159/159), done.
      remote: Total 6352 (delta 131), reused 219 (delta 103), pack-reused 6082
      Receiving objects: 100% (6352/6352), 2.57 MiB | 519.00 KiB/s, done.
      Resolving deltas: 100% (4388/4388), done.
      Checking out files: 100% (355/355), done.

      Move into the repository’s directory:

      • cd umami

      Now that you have forked the Umami repository and cloned it to your local machine, you will set up the umami database on PostgreSQL and create its tables.

      Step 2 — Creating the umami Database, Setting Up Tables, and Starting a Connection Pool

      In this step, you will create and initialize an umami database in your cluster. This database is where Umami will store data from your websites.

      To create the umami database in your cluster, open the Cloud Control Panel in your DigitalOcean account and select Databases from the side menu. Choose the database cluster you created from the list of clusters. Navigate to the Users and Databases tab and scroll down to Databases. Type umami in the textbox and click Save to create the umami database.

      Creating a database in your DigitalOcean managed database

      Now that you have created the umami database, you can build the tables Umami will need to run. To complete this, you will need the connection string to connect to your umami database.

      On the Cloud Control Panel, switch to the Overview tab. Look for the Connection Details section on the right of the page. In the dropdown where Connection Parameters is written, select Connection String. Select the umami database from the dropdown beside where Database/Pool is written. Afterward, click Copy to copy the connection string to the clipboard.

      Locating the connection string of a DigitalOcean managed database

      The SQL script at sql/schema.postgresql.sql creates all the tables that Umami needs and sets up the indices in all these tables. It also sets up an admin account for Umami with the username admin and password umami.

      Warning: The admin user on Umami can create and delete accounts. It is strongly advised to change these default credentials after deployment to prevent unauthorized access to your Umami instance. You can change the default credentials in Step 3.

      Run the following command from the umami directory you entered to create the tables:

      • psql 'your_connection_string' -f sql/schema.postgresql.sql

      psql uses the connection string to connect to your database and the -f flag runs the SQL script at sql/schema.postgresql.sql against the database.

      When you run the command successfully, you will have the following output:

      Output

      psql:sql/schema.postgresql.sql:1: NOTICE: table "event" does not exist, skipping
      DROP TABLE
      psql:sql/schema.postgresql.sql:2: NOTICE: table "pageview" does not exist, skipping
      DROP TABLE
      psql:sql/schema.postgresql.sql:3: NOTICE: table "session" does not exist, skipping
      DROP TABLE
      psql:sql/schema.postgresql.sql:4: NOTICE: table "website" does not exist, skipping
      DROP TABLE
      psql:sql/schema.postgresql.sql:5: NOTICE: table "account" does not exist, skipping
      DROP TABLE
      CREATE TABLE
      CREATE TABLE
      CREATE TABLE
      CREATE TABLE
      CREATE TABLE
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      CREATE INDEX
      INSERT 0 1

      You have successfully created the umami database and the tables in it. You will now create a connection pool for the umami database.

      PostgreSQL was designed around persistent connections. The Next.js API routes that Umami runs on, however, cannot share database connections across requests. To support the short-lived connections made from the API routes and prevent errors, you will create a connection pool for your cluster.

      A connection pool is an application that allows you to reuse database connections across multiple requests by making a number of persistent connections to the database and forwarding client requests through those connections. When more requests are made than connections are available, subsequent requests are queued until there is a free connection.

      To enable connection pooling for your managed database, go to your Cloud Control Panel. Click Databases on the side menu then select the database you created. Go to the Connection Pools tab and click Create a Connection Pool. A modal will open. Set the pool name as umami-pool, select the umami database, and set the pool size to 11. Click Create Pool to create the connection pool.

      You can change the size of the connection pool later to support more traffic. Refer to How to Manage Connection Pools to learn more about when to adjust and how to select a pool size.

      Requests from Umami will not be made directly to the database but to the connection pool. You will therefore need the connection string of the connection pool. This connection string is one of the environment variables that will be needed when deploying the app to the App Platform. To get the connection string, go to the Connection Pools tab and click Connection details. When the modal opens, click the Connection parameters dropdown, select Connection String, and click Copy to copy the connection string.


      Now that you have set up connection pooling on your database, you will deploy Umami to the App Platform.

      Step 3 — Deploying Umami to App Platform

      In this step, you will deploy Umami to App Platform. Umami is a web application written in Next.js, and App Platform will deploy it from your fork of Umami. Visit the App Platform section of the Control Panel and click Launch Your App to begin.

      You will be presented with a list of options for the source of your code. Choose GitHub as the source. If this is your first time deploying to App Platform from a GitHub repository, you will be asked to connect App Platform to your GitHub account.

      Choose the repository where you want to deploy your app. Select your_github_username/umami as the source repository from the dropdown. Leave the branch as master, keep Autodeploy code changes checked, then click Next.

      Selecting a GitHub repository to deploy to App Platform from

      App Platform will automatically detect a Dockerfile in the repository and set the necessary settings. You will now add the environment variables that Umami requires.

      Umami requires two environment variables to work:

      • DATABASE_URL: the connection string for your PostgreSQL database.
      • HASH_SALT: a random string used to generate unique values for the application.

      Click Edit next to Environment Variables to add these environment variables.

      For Umami to work well with your connection pool, you will need to modify the connection pool connection string you got from the Cloud Control Panel by appending &pgbouncer=true to the end. The value of DATABASE_URL should look like:

      postgres://sammy:your_password@host-domain:25061/umami?sslmode=require&pgbouncer=true
      

      Click the + button and set the HASH_SALT environment variable to a random string. Tick the Encrypt checkbox next to HASH_SALT so that its value is stored encrypted.
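
      If you need a quick way to generate a suitable random string, one option (assuming the openssl command-line tool is available on your machine) is:

      • openssl rand -base64 32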

      Setting environment variables while deploying Umami to App Platform

      Click Next to continue setting up the app.

      Pick a name for your Umami instance and select the region where your app will be deployed. The region closest to you is automatically selected to minimize connection latency. Click Next to proceed.

      Select the Basic plan, or the Pro plan should you require a larger size for your project, and click Launch Your App to finalize the deployment.

      The app build will now begin. Once the build completes, the URL where your app will be accessible will display under the app’s name.

      Open the URL to visit your analytics dashboard.

      Umami analytics dashboard

      You can log in with the default credentials:

      • Username: admin
      • Password: umami

      Secure your instance by clicking Settings on the header. Navigate to Profile on the sidebar and click Change password. Enter the previous password (umami) and pick a new password for signing in to the admin account.

      To get the tracking script for a website, log in to your Umami Dashboard. Select Settings on the navigation bar at the top of the screen. Click the Add website button. When the modal opens, select a name for the website and enter the domain where the website is located.

      Adding a website to Umami

      After adding the website, you will find it in the list of websites in the settings. Click the first button under the website to show the tracking script.

      Finding the tracking script for a website on Umami

      When you click the button, a modal will open with the tracking script in a <script> tag. Paste the code snippet shown in the <head> tag of your website’s pages to start getting data from the website. When a visitor visits your web pages, the script automatically sends data to Umami.
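
      For reference, the snippet will look roughly like the following, where the data-website-id value and the script URL are placeholders for the values your own modal displays:

      <script async defer data-website-id="your_website_id" src="https://your_umami_url/umami.js"></script>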

      Conclusion

      You have successfully deployed your own instance of Umami Analytics and can now track page views, session durations, and other metrics from all your websites. You can refer to Umami’s documentation to learn how to track events. If you want to have Umami available from a custom domain, you can refer to How to Manage Domains in App Platform to learn how.




      How To Set Up a Continuous Delivery Pipeline with Flux on DigitalOcean Kubernetes


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      By itself, Kubernetes does not offer continuous integration and deployment features. While smaller projects often do without such processes, bigger teams who host and update their deployments extensively find it much easier to set them up, eliminating time-consuming manual tasks and letting them focus instead on developing the software that’s being deployed. One approach to maintaining continuous delivery for Kubernetes is GitOps.

      GitOps views the Git repositories hosting the application and Kubernetes manifests as the central source of truth regarding deployments. It allows for separated deployment environments by using repository branches, gives you the ability to quickly reproduce any config state, current or past, on any cluster, and makes rollbacks trivial thanks to Git versioning. The manifests are secure, synchronized, and easily accessible at all times. Modifications to the manifest or application can be audited, allowed, or denied depending on external factors (usually, the continuous integration system). Automating the process from pushing the code to having it deploy on a cluster can greatly increase productivity and enhance the developer experience while making the deployment always consistent with the central code base.

      Flux is an open-source tool facilitating the GitOps continuous delivery approach for Kubernetes. Flux allows for automated application and configuration deployments to your clusters by monitoring the configured Git repositories and automatically applying the changes as soon as they become available. It can apply Kustomize manifests (which provide an easy way to optionally patch parts of the usual Kubernetes manifests on the fly), as well as watch over Helm chart releases. You can also configure it to notify you via Slack, Discord, Microsoft Teams, or any other service that supports webhooks. Webhooks provide a way of notifying an app or a service of an event that happened somewhere else, along with a description of that event.

      In this tutorial, you’ll install Flux and use it to set up continuous delivery for the podinfo app to your DigitalOcean Kubernetes cluster. podinfo is an app that provides details about the environment it’s running in. You’ll host the repositories holding Flux configuration and podinfo on your GitHub account. You’ll set up Flux to watch over the app repository, automatically apply the changes, and notify you on Slack using webhooks. In the end, all changes that you make to the monitored repository will quickly be propagated to your cluster.

      Prerequisites

      To complete this tutorial, you will need:

      • A DigitalOcean Kubernetes cluster with your connection configuration configured as the kubectl default. Instructions on how to configure kubectl are shown under the Connect to your Cluster step when you create your cluster. To learn how to create a Kubernetes cluster on DigitalOcean, see Kubernetes Quickstart.
      • A Slack workspace you’re a member of. To learn how to create a workspace, visit the official docs.
      • A GitHub account with a Personal Access Token (PAT) created with all privileges. To learn how to create one, visit the official docs.

      • Git initialized and set up on your local machine. To get started with Git, as well as see installation instructions, visit the How To Contribute to Open Source: Getting Started with Git tutorial.

      • The podinfo app repository forked to your GitHub account. For instructions on how to fork a repository to your account, visit the official getting started docs.

      Step 1 — Installing and Bootstrapping Flux

      In this step, you’ll set up Flux on your local machine, install it to your cluster, and set up a dedicated Git repository for storing and versioning its configuration.

      On Linux, you can use the official Bash script to install Flux. If you’re on MacOS, you can either use the official script, following the same steps as for Linux, or use Homebrew to install Flux with the following command:

      • brew install fluxcd/tap/flux

      To install Flux using the officially provided script, download it by running the following command:

      • curl https://fluxcd.io/install.sh -so flux-install.sh

      You can inspect the flux-install.sh script to verify that it’s safe by running this command:

      • less ./flux-install.sh

      To be able to run it, you must mark it as executable:

      • chmod +x flux-install.sh

      Then, execute the script to install Flux:

      • sudo ./flux-install.sh

      You’ll see the following output, detailing what version is being installed:

      Output

      [INFO] Downloading metadata https://api.github.com/repos/fluxcd/flux2/releases/latest
      [INFO] Using 0.13.4 as release
      [INFO] Downloading hash https://github.com/fluxcd/flux2/releases/download/v0.13.4/flux_0.13.4_checksums.txt
      [INFO] Downloading binary https://github.com/fluxcd/flux2/releases/download/v0.13.4/flux_0.13.4_linux_amd64.tar.gz
      [INFO] Verifying binary download
      [INFO] Installing flux to /usr/local/bin/flux

      To enable command autocompletion, run the following command to configure the shell:

      • echo ". <(flux completion bash)" >> ~/.bashrc

      For the changes to take effect, reload ~/.bashrc by running:

      • . ~/.bashrc

      You now have Flux available on your local machine. Before installing it to your cluster, you’ll first need to run the prerequisite checks that verify compatibility:

      • flux check --pre

      Flux will connect to your cluster, which you’ve set up a connection to in the prerequisites. You’ll see an output similar to this:

      Output

      ► checking prerequisites
      ✔ kubectl 1.21.1 >=1.18.0-0
      ✔ Kubernetes 1.20.2 >=1.16.0-0
      ✔ prerequisites checks passed

      Note: If you see an error or a warning, double check the cluster you’re connected to. It’s possible that you may need to perform an upgrade to be able to use Flux. If kubectl is reported missing, repeat the steps from the prerequisites for your platform and check that it’s in your PATH.

      During the bootstrapping process, Flux creates a Git repository at a specified provider and initializes it with a default configuration. Doing so requires the GitHub username and personal access token you retrieved in the prerequisites. The repository will be available under your account on GitHub.

      You’ll store your GitHub username and personal access token as environment variables to avoid typing them multiple times. Run the following commands, replacing the highlighted parts with your GitHub credentials:

      • export GITHUB_USER=your_username
      • export GITHUB_TOKEN=your_personal_access_token

      You can now bootstrap Flux and install it to your cluster by running:

      • flux bootstrap github
      • --owner=$GITHUB_USER
      • --repository=flux-config
      • --branch=main
      • --path=./clusters/my-cluster
      • --personal

      In this command, you specify that the repository should be called flux-config at provider github, owned by the user you’ve just defined. The new repository will be personal (not under an organization) and will be made private by default.

      The output you’ll see will be similar to this:

      Output

      ► connecting to github.com
      ► cloning branch "main" from Git repository "https://github.com/GITHUB_USER/flux-config.git"
      ✔ cloned repository
      ► generating component manifests
      ✔ generated component manifests
      ✔ committed sync manifests to "main" ("b750ffae686c2f110364694d2ddae26c7f18c6a2")
      ► pushing component manifests to "https://github.com/GITHUB_USER/flux-config.git"
      ► installing components in "flux-system" namespace
      ✔ installed components
      ✔ reconciled components
      ► determining if source secret "flux-system/flux-system" exists
      ► generating source secret
      ✔ public key: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKw943TnUiKLVk4WMLC5YCeC+tIPVvJprQxTfLqcwkHtedMJPanJFifmbQ/M3CAq1IgqyQTydRJSJu6E/4YDOwx1vawStR9XU16rkn+rZbmvRxZ97E0HNb5m54OwmziAWf0EPdsfiIIJYSRkCMihpKJUNoakl+sng6LQsW+WIRlOK39aJRWud+rygQEuEKmD7YHKQ0VSb/L5v50jiPgEZImiREHNfjBU+RkEni3aZuOO3jNy5WdlPkpdqfHe8fdFsjJnvNB0zmfe3eTIB2fbdDzxo2usLbFeAMhGCRYsGnniHsytBHNLmxDM/4I18xlNN9e6WEYpgHEJVb8azKmwSX
      ✔ configured deploy key "flux-system-main-flux-system-./clusters/my-cluster" for "https://github.com/GITHUB_USER/flux-config"
      ► applying source secret "flux-system/flux-system"
      ✔ reconciled source secret
      ► generating sync manifests
      ✔ generated sync manifests
      ✔ committed sync manifests to "main" ("1dc033e24f3288a70ff80c57816e16c52bc62303")
      ► pushing sync manifests to "https://github.com/GITHUB_USER/flux-config.git"
      ► applying sync manifests
      ✔ reconciled sync configuration
      ◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
      ✔ Kustomization reconciled successfully
      ► confirming components are healthy
      ✔ source-controller: deployment ready
      ✔ kustomize-controller: deployment ready
      ✔ helm-controller: deployment ready
      ✔ notification-controller: deployment ready
      ✔ all components are healthy

      The output shows that Flux created a new Git repository, committed a basic starting configuration to it, and provisioned the necessary controllers in your cluster.

      In this step, you’ve installed Flux on your local machine, created a new Git repository to hold its configuration, and deployed its server-side components to your cluster. The changes defined by the commits in the repository will now get propagated to your cluster automatically. In the next step, you’ll create configuration manifests ordering Flux to automate deployments of the podinfo app you’ve forked whenever a change occurs.

      Step 2 — Configuring the Automated Deployment

      In this section, you will configure Flux to watch over the podinfo repository that you’ve forked and apply the changes to your cluster as soon as they become available.

      In addition to creating the repository and initial configuration, Flux offers commands to help you generate config manifests with your parameters faster than writing them from scratch. The manifests, regardless of what they define, must be available in Flux’s Git repository to be taken into consideration. To add them to the repository, you’ll first need to clone it to your machine so that you can push changes. Do so by running the following command:

      • git clone https://github.com/$GITHUB_USER/flux-config ~/flux-config

      You may be asked for your username and password. Input your account username and provide your personal access token for the password.

      Then, navigate to it:

      • cd ~/flux-config

      To instruct Flux to monitor the forked podinfo repository, you’ll first need to let it know where it’s located. This is achieved by creating a GitRepository manifest, which details the repository URL, branch, and monitoring interval.

      To create the manifest, run the following command:

      • flux create source git podinfo
      • --url=https://github.com/$GITHUB_USER/podinfo
      • --branch=master
      • --interval=30s
      • --export > ./clusters/my-cluster/podinfo-source.yaml

      Here, you specify that the source will be a Git repository with the given URL and branch. You pass in --export to output the generated manifest and pipe it into podinfo-source.yaml, located under ./clusters/my-cluster/ in the main config repository, where manifests for the current cluster are stored.

      You can show the contents of the generated file by running:

      • cat ./clusters/my-cluster/podinfo-source.yaml

      The output will look similar to this:

      ~/flux-config/clusters/my-cluster/podinfo-source.yaml

      ---
      apiVersion: source.toolkit.fluxcd.io/v1beta1
      kind: GitRepository
      metadata:
        name: podinfo
        namespace: flux-system
      spec:
        interval: 30s
        ref:
          branch: master
        url: https://github.com/GITHUB_USER/podinfo
      

      You can check that the parameters you just passed into Flux are correctly laid out in the generated manifest.

      You’ve now defined a source Git repository that Flux can access, but you still need to tell it what to deploy. Flux supports Kustomize resources, which podinfo exposes under the kustomize directory. Supporting Kustomizations does not limit Flux, because a Kustomize manifest can be as simple as one that includes all the usual manifests unchanged.

      Create a Kustomization manifest, which tells Flux where to look for deployable manifests, by running the following command:

      • flux create kustomization podinfo
      • --source=podinfo
      • --path="./kustomize"
      • --prune=true
      • --validation=client
      • --interval=5m
      • --export > ./clusters/my-cluster/podinfo-kustomization.yaml

      For the --source, you specify the podinfo Git repository you’ve just created. You also set the --path to ./kustomize, which refers to the filesystem structure of the source repository. Then, you save the YAML output into a file called podinfo-kustomization.yaml in the directory for the current cluster.
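
      The generated manifest will look roughly like this (field names per the kustomize.toolkit.fluxcd.io/v1beta1 API that shipped with Flux at the time of writing; the exact apiVersion may differ in newer releases):

      ~/flux-config/clusters/my-cluster/podinfo-kustomization.yaml

      ---
      apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
      kind: Kustomization
      metadata:
        name: podinfo
        namespace: flux-system
      spec:
        interval: 5m0s
        path: ./kustomize
        prune: true
        sourceRef:
          kind: GitRepository
          name: podinfo
        validation: client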

      The Git repository and Kustomization you’ve created are now available, but the cluster-side of Flux can’t yet see them because they’re not in the remote repository on GitHub. To push them, you must first commit them by running:

      • git add . && git commit -m "podinfo added"

      With the changes now committed, push them to the remote repository:

      • git push

      As before, git may ask you for your credentials. Input your username and your personal access token to continue.

      The new manifests are now live, and cluster-side Flux will soon pick them up. You can watch it sync the cluster’s state with the one presented in the manifests by running:

      • watch flux get kustomizations

      After the refresh interval specified for the Git repository elapses (which you’ve set to 30s in the manifest above), Flux will retrieve its latest commit and update the cluster. Once it does, you’ll see output similar to this:

      Output

      NAME          READY   MESSAGE
      flux-system   True    Applied revision: main/fc07af652d3168be329539b30a4c3943a7d12dd8
      podinfo       True    Applied revision: master/855f7724be13f6146f61a893851522837ad5b634

      You can see that a podinfo Kustomization was applied, along with its branch and commit hash. You can list deployments and services as well to check that podinfo is deployed:

      • kubectl get deployments,services

      You’ll see that they are present, configured according to their respective manifests:

      Output

      NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/podinfo   2/2     2            2           56s

      NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
      service/kubernetes   ClusterIP   10.245.0.1      <none>        443/TCP             34m
      service/podinfo      ClusterIP   10.245.78.189   <none>        9898/TCP,9999/TCP   56s

      Any changes that you manually make to these and other resources that Flux controls will quickly be overwritten with the ones referenced from Git repositories. To make changes, you’d need to modify the central sources, not the actual deployments in a cluster. This applies to deleting resources as well — any resources you manually delete from the cluster will soon be reinstated. To delete them, you’d need to remove their manifests from the monitored repositories and wait for the changes to be propagated.
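
      For example, assuming the podinfo Deployment from above is running, you can observe this behavior by deleting it manually:

      • kubectl delete deployment podinfo

      After the next reconciliation interval elapses, listing the deployments again will show that Flux has recreated it:

      • kubectl get deployments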

      Flux’s behavior is intentionally rigid because it operates on what it finds in the remote repositories at the end of each refresh interval. Suspending Kustomization monitoring and, in turn, state reconciliation is useful when you need to manually override the resources in the cluster without being interrupted by Flux.

      You can pause monitoring of a Kustomization indefinitely by running:

      • flux suspend kustomization kustomization_name

      The default behavior can be brought back by running flux resume on a paused Kustomization:

      • flux resume kustomization kustomization_name

      You now have an automated process in place that will deploy podinfo to your cluster every time a change occurs. You’ll now set up Slack notifications, so you’ll know when a new version of podinfo is being deployed.

      Step 3 — Setting Up Slack Notifications

      Now that you’ve set up automatic podinfo deployments to your cluster, you’ll connect Flux to a Slack channel, where you’ll be notified of every deployment and its outcome.

      To integrate with Slack, you’ll need to have an incoming webhook on Slack for your workspace. Incoming webhooks are a way of posting messages to the configured Slack channel.

      If you haven’t ever created a webhook, you’ll first need to create an app for your workspace. To do so, first log in to Slack and navigate to the app creation page. Press the green Create New App button and select From scratch. Name it flux-app, select the desired workspace, and click Create New App.

      You’ll be redirected to the settings page for the new app. Click on Incoming Webhooks on the left navigation bar.

      Slack app - Incoming Webhooks

      Enable webhooks for flux-app by flipping the switch button next to the title Activate Incoming Webhooks.

      Slack app - Activate Incoming Webhooks

      A new section further down the page will be uncovered. Scroll down and click the Add New Webhook to Workspace button. On the next page, select the channel you want the reports to be sent to and click Allow.

      You’ll be redirected back to the settings page for webhooks, and you’ll see a new webhook listed in the table. Click Copy to copy it to the clipboard and make note of it for later use.

      You’ll store the generated Slack webhook for your app in a Kubernetes Secret in your cluster, so that Flux can access it without explicitly specifying it in its configuration manifests. Storing the webhook as a Secret also lets you easily replace it in the future.

      Create a Secret called slack-url containing the webhook by running the following command, replacing your_slack_webhook with the URL you’ve just copied:

      • kubectl -n flux-system create secret generic slack-url --from-literal=address=your_slack_webhook

      The output will be:

      Output

      secret/slack-url created

      You’ll now create a Provider, which allows Flux to talk to the specified service using webhooks. Providers read the webhook URL from Secrets, which is why you’ve just created one. Run the following Flux command to create a Slack Provider:

      • flux create alert-provider slack
      • --type slack
      • --channel general
      • --secret-ref slack-url
      • --export > ./clusters/my-cluster/slack-alert-provider.yaml
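
      The exported manifest will look roughly like this (field names per the notification.toolkit.fluxcd.io/v1beta1 API that shipped with Flux at the time of writing):

      ~/flux-config/clusters/my-cluster/slack-alert-provider.yaml

      ---
      apiVersion: notification.toolkit.fluxcd.io/v1beta1
      kind: Provider
      metadata:
        name: slack
        namespace: flux-system
      spec:
        channel: general
        secretRef:
          name: slack-url
        type: slack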

      Aside from Slack, Flux supports communicating with Microsoft Teams, Discord, and other platforms via webhooks. It also supports sending generic JSON to accommodate more software that parses this format.

      A Provider only allows Flux to send messages and does not specify when messages should be sent. For Flux to react to events, you’ll need to create an Alert using the slack Provider by running:

      • flux create alert slack-alert
      • --event-severity info
      • --event-source Kustomization/*
      • --event-source GitRepository/*
      • --provider-ref slack
      • --export > ./clusters/my-cluster/slack-alert.yaml

      This command creates an alert manifest called slack-alert that will react to all Kustomization and Git repository changes and report them to the slack provider. The event severity is set to info, which will allow the alert to be triggered on all events, such as Kubernetes manifests being created or applied, something delaying deployment, or an error occurring. To report only errors, you can specify error instead. The generated YAML is exported to a file called slack-alert.yaml.
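
      As with the Provider, the exported Alert manifest will look roughly like this (again assuming the notification.toolkit.fluxcd.io/v1beta1 API current at the time of writing):

      ~/flux-config/clusters/my-cluster/slack-alert.yaml

      ---
      apiVersion: notification.toolkit.fluxcd.io/v1beta1
      kind: Alert
      metadata:
        name: slack-alert
        namespace: flux-system
      spec:
        eventSeverity: info
        eventSources:
        - kind: Kustomization
          name: '*'
        - kind: GitRepository
          name: '*'
        providerRef:
          name: slack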

      Commit the changes by running:

      • git add . && git commit -m "Added Slack alerts"

      Push the changes to the remote repository by running the following command, inputting your GitHub username and personal access token if needed:

      • git push

      After the configured refresh interval for the Git repository elapses, Flux will retrieve and apply the changes. You can watch the Alert become available by running:

      • watch kubectl -n flux-system get alert

      You’ll soon see that it’s Initialized:

      Output

      NAME          READY   STATUS        AGE
      slack-alert   True    Initialized   7s

      With alerting now set up, any actions that Flux takes will be logged in the Slack channel of the workspace that the webhook is connected to.

      You’ll test this connection by introducing a change to your fork of podinfo. First, clone it to your local machine by running:

      • git clone https://github.com/$GITHUB_USER/podinfo.git ~/podinfo

      Navigate to the cloned repository:

      • cd ~/podinfo

      You’ll modify the name of its Service, which is defined in ~/podinfo/kustomize/service.yaml. Open it for editing:

      • nano ~/podinfo/kustomize/service.yaml

      Modify the Service name, like so:

      ~/podinfo/kustomize/service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: podinfo-1
      spec:
        type: ClusterIP
        selector:
          app: podinfo
        ports:
          - name: http
            port: 9898
            protocol: TCP
            targetPort: http
          - port: 9999
            targetPort: grpc
            protocol: TCP
            name: grpc
      

      Save and close the file, then commit the changes by running:

      • git add . && git commit -m "Service name modified"

      Then, push the changes:

      • git push

      After a few minutes, you’ll see the changes pop up in Slack as they are deployed:

      Slack - Flux reported changes

      Flux fetched the new commit, created a new Service called podinfo-1, configured it, and deleted the old one. This order of actions ensures that the old Service (or any other manifest) stays untouched if provisioning of the new one fails.

      In case the new revision of the watched manifests contains a syntax error, Flux will report an error:

      Slack - Flux reported failed deployment

      You’ve connected Flux to your Slack workspace, and will immediately be notified of all actions and deployments that happen. You’ll now set up Flux to watch over Helm releases.

      Step 4 — (Optional) Automating Helm Release Deployments

      In addition to watching over Kustomizations and Git repositories, Flux can also monitor Helm charts. Flux can monitor charts residing in Git or Helm repositories, as well as in S3 cloud storage. You’ll now set it up to watch over the podinfo chart, which is located in a Helm repository.

      The process of instructing Flux to monitor a Helm chart is similar to what you did in Step 2. You’ll first need to define a source that it can poll for changes (one of the three types noted earlier). Then, you’ll specify which chart to actually deploy among the ones it finds by creating a HelmRelease.

      Navigate back to the flux-config repository:

      • cd ~/flux-config

      Run the following command to create a source for the Helm repository that contains podinfo:

      • flux create source helm podinfo
      • --url=https://stefanprodan.github.io/podinfo
      • --interval=10m
      • --export > ./clusters/my-cluster/podinfo-helm-repo.yaml

      Here, you specify the URL of the repository and how often it should be checked. Then, you save the output into a file called podinfo-helm-repo.yaml.

      With the source repository now defined, you can create a HelmRelease, defining which chart to monitor:

      • flux create hr podinfo
      • --interval=10m
      • --source=HelmRepository/podinfo
      • --chart=podinfo
      • --target-namespace=podinfo-helm
      • --export > ./clusters/my-cluster/podinfo-helm-chart.yaml

      As in the previous command, you save the resulting YAML output to a file, here called podinfo-helm-chart.yaml. You also pass in the name of the chart (podinfo), set the --source to the repository you’ve just defined and specify that the namespace the chart will be installed to is podinfo-helm.
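
      The generated file will look roughly like this (field names per the helm.toolkit.fluxcd.io/v2beta1 API that shipped with Flux at the time of writing):

      ~/flux-config/clusters/my-cluster/podinfo-helm-chart.yaml

      ---
      apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      metadata:
        name: podinfo
        namespace: flux-system
      spec:
        chart:
          spec:
            chart: podinfo
            sourceRef:
              kind: HelmRepository
              name: podinfo
        interval: 10m0s
        targetNamespace: podinfo-helm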

      Since the podinfo-helm namespace does not exist, create it by running:

      • kubectl create namespace podinfo-helm

      Then, commit and push the changes:

      • git add . && git commit -m "Added podinfo Helm chart" && git push

      After a few minutes, you’ll see that Flux logged a successful Helm chart upgrade in Slack:

      Slack - Flux logged successful chart install

      You can check the pods contained in the podinfo-helm namespace by running:

      • kubectl get pods -n podinfo-helm

      The output will be similar to this:

      Output

      NAME                                     READY   STATUS    RESTARTS   AGE
      podinfo-chart-podinfo-7c9b7667cb-gshkb   1/1     Running   0          33s

      This means that you have successfully configured Flux to monitor and deploy the podinfo Helm chart. As soon as a new version is released, or a modification is pushed, Flux will retrieve and deploy the newest variant of the Helm chart for you.

      Conclusion

      You’ve now automated Kubernetes manifest deployments using Flux, which allows you to push commits to watched repositories and have them automatically applied to your cluster. You’ve also set up alerting to Slack, so you’ll always know what deployments are happening in real time, and you can look up previous ones and see any errors that might have occurred.

      In addition to GitHub, Flux also supports retrieving and bootstrapping Git repositories hosted at GitLab. You can visit the official docs to learn more.




      How To Use WordPress Content with a Gatsby.js Application


      The author selected the Internet Archive to receive a donation as part of the Write for DOnations program.

      Introduction

      WordPress is a popular CMS (Content Management System). It allows you to edit posts within a visual editor, as opposed to hand-coding pages of your website with raw HTML, and offers additional features, such as collaborative editing and revision history.

      Traditionally, WordPress has functioned as both the backend and frontend of a website. The posts are edited within the Admin editor, and the backend dynamically generates each public page of your site when a visitor hits it by passing it through a PHP theme.

      A new paradigm in WordPress is using it only for the content part of your site (also known as running headless), and using Gatsby to statically generate the frontend. By leveraging both and decoupling content from the user interface (UI), you can keep the content editor and collaborative features of WordPress, but also enjoy the faster load times and React-based UI ecosystem of Gatsby.

      In this tutorial, you will provision WordPress to talk to Gatsby, set up a new Gatsby project based on a starter template, define the connection to WordPress in your Gatsby configuration, and put it all together to statically generate some pages based on posts that live inside your WordPress installation.

      Prerequisites

      Before you start on this guide, here are a few things you will need:

      • An environment with sufficient resources to support building and serving your site. If you are using the same server to both host WordPress and build your Gatsby site, the recommended minimum amount of RAM is 2GB. If you would like to use a DigitalOcean Droplet, check out our How to Create a Droplet from the DigitalOcean Control Panel article.
      • A working WordPress installation that is reachable from where Gatsby is running. If you are brand new to WordPress, you might want to start with a guide on What is WordPress first, but for general setup, there are also guides for multiple environments, such as Docker. This tutorial was tested on a LAMP stack set up by following How To Install WordPress on Ubuntu 20.04 with a LAMP Stack.
      • Node.js, installed locally, for running Gatsby and building your site. The installation procedure varies by operating system, but there are guides for installing Node.js on Ubuntu and installing Node.js on Mac, and you can always find the latest release on the official NodeJS download page.
      • The Gatsby CLI tool, installed locally. For how to install this and to learn some of the Gatsby basics, you can follow Step 1 of the How to Set Up Your First Gatsby Website tutorial.
      • Some familiarity with JavaScript, for working in Gatsby. There is a lot of depth to JavaScript, but a good starting spot is our How to Code in JavaScript series. Additionally, it will help to have some knowledge of HTML, such as understanding HTML elements, and if you want to customize the UI of your posts beyond what is covered in this tutorial, it will also help to know some React and JSX.

      This tutorial was tested on WordPress v5.7.1, Gatsby v3.3.0, and Node.js v14.17.0. Additionally, the WordPress setup was tested on both Windows 10 and Ubuntu 20.04 with Apache v2.4.41 and PHP v7.4.3.

      Step 1 — Installing and Configuring the Required WordPress Plugins

      In this first step, you will give WordPress the ability to talk to Gatsby by installing some plugins and adjusting settings. You will also verify that your local or hosted WordPress instance supports these changes, and record some details about your specific setup that will be needed later.

      Start by logging into the admin dashboard of your WordPress instance by navigating to https://your_domain_name/wp-admin in your browser and inputting your credentials. Then go to the plugins page by clicking Plugins in the left sidebar. Next, navigate to the plugin installation page by clicking Add New at the top of the page, or in the same sidebar. If your WordPress installation uses standard paths, you will also be able to find this page at https://your_domain/wp-admin/plugin-install.php. This will bring you to the plugins page, as shown in the following image:

      Screenshot showing the Add New link selected in the Plugins sidebar in WordPress Admin

      The two required plugins, to be installed in this order, are WPGraphQL and WPGatsby:

      Screenshot of the WordPress plugin listing for WPGraphQL

      Screenshot of the WordPress plugin listing for WPGatsby

      Install and activate both of these plugins by searching for them and then pressing their associated Install Now buttons. Once they are installed, select the Activate button. After both plugins have been installed and activated, you will see some new settings panels within your WordPress admin dashboard. The following image shows these new settings panels.

      Screenshot showing that both required plugins, WPGraphQL and WPGatsby, are installed, activated, and have added settings panels around the admin dashboard

      To verify that the GraphQL connection will be available for connecting to Gatsby, open the Settings panel, under the GraphQL sub-section in the Admin sidebar.

      Screenshot of the WPGraphQL settings page, with the GraphQL endpoint URL highlighted

      Take special note of the GraphQL endpoint. You can find this in the section labeled GraphQL Endpoint, below the text entry box. It is also highlighted in yellow in the screenshot. You will need this later, so to save some time you can copy it into your clipboard and/or paste it into a temporary text document.

      For the best results with WPGraphQL, it is recommended to use a WordPress permalink setting other than plain, especially if this is a new WordPress installation where changing the URL structure will not affect a live website. To navigate to your permalink settings, click on Settings in the left sidebar of your WordPress admin dashboard, then click on Permalinks in that expanded section. From the permalink settings page, change your setting to any option other than plain, and then press Save Changes to update your site.

      Setting your permalinks to something other than plain comes with some specific technical requirements; with the Apache web server, you need to enable the mod_rewrite module and set the AllowOverride directive to all. These will enable WordPress to dynamically route new paths. Step 3 of the WordPress on Ubuntu 20.04 tutorial covers this, with step-by-step instructions. If you ran Let’s Encrypt to provide an SSL certificate for your site, as is instructed in the How To Secure Apache with Let’s Encrypt on Ubuntu 20.04 tutorial, you will have to complete these steps for the new virtual host at /etc/apache2/sites-available/your_domain-le-ssl.conf.
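
      As a sketch of what those changes involve on Ubuntu (assuming the default paths used in the linked tutorials; consult them for the full step-by-step process), you would enable the rewrite module:

      • sudo a2enmod rewrite

      Then set the AllowOverride directive inside the relevant virtual host file, such as /etc/apache2/sites-available/your_domain.conf:

      <Directory /var/www/your_domain/>
          AllowOverride All
      </Directory>

      Finally, restart Apache for the changes to take effect:

      • sudo systemctl restart apache2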

      Now that you have a GraphQL endpoint configured, you will test this connection. You can do so immediately; no Gatsby installation is required yet. You can use the GraphiQL IDE as a visual query builder tool (accessible through the sidebar), or you can query the endpoint directly with your favorite network request tool of choice.

      If you prefer the command line and have cURL installed, you could use the following command to retrieve all post titles:

      • curl --location --request POST 'https://your_domain/graphql'
      • --header 'Content-Type: application/json'
      • --data-raw '{
      • "query": "query { posts { nodes { title } } }"
      • }'

      This command makes a request to your GraphQL endpoint for a JSON response containing your WordPress posts, but only with their titles. With GraphQL, this is also called a query; the tutorial Understanding Queries in GraphQL explains them more in-depth.
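
      Expanded for readability, the query string inside that request is:

      query {
        posts {
          nodes {
            title
          }
        }
      }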

      The JSON response to your query will look something like this:

      Output

      {"data":{"posts":{"nodes":[{"title":"Hello world!"}]}},"extensions":{"debug":[{"type":"DEBUG_LOGS_INACTIVE","message":"GraphQL Debug logging is not active. To see debug logs, GRAPHQL_DEBUG must be enabled."}]}}

      Now that you have successfully installed and configured the required WordPress plugins to communicate with Gatsby, you can move on to setting up your new Gatsby project.

      Step 2 — Setting Up a New Gatsby Project

      In this step, you will set up a new Gatsby project based on a starter template specifically designed for sourcing data from WordPress. It will require using the Gatsby CLI to download and install the starter and its dependencies.

      To speed up your development process and reduce the amount of setup that is required, you will start by using the Gatsby CLI and the Gatsby WordPress Blog Starter template.

      Navigate to the local parent directory that will hold your Gatsby project, then run the following command to have the Gatsby CLI download and pre-install most of what you will need to get started building your site:

      • gatsby new my-wordpress-gatsby-site https://github.com/gatsbyjs/gatsby-starter-wordpress-blog

      You can replace my-wordpress-gatsby-site with whatever you would like the directory name to be for your local Gatsby files.

      It will take a while for Gatsby to download and install all the necessary dependencies and assets. Once it has finished, you will receive a message similar to this one:

      Output

      ...
      Your new Gatsby site has been successfully bootstrapped. Start developing it by running:

        cd my-wordpress-gatsby-site
        gatsby develop

      Normally with a Gatsby site, this is the point at which you would hand-edit the gatsby-config.js file with details about your site, known as metadata. However, in addition to pulling posts from WordPress, this starter also pulls in the metadata for you automatically; there is no need to hand-code the site title or description.

      Having scaffolded a new Gatsby project, you are now ready to modify its configuration to tell it to pull data from WordPress.

      Step 3 — Configuring Gatsby to Use WordPress Data

      Now that you have a working Gatsby project, the next step is for you to configure it to pull in the data from WordPress. You will do this by editing the main Gatsby configuration file and working with the gatsby-source-wordpress plugin.

      Thanks to the starter template you used, the gatsby-source-wordpress plugin will already be installed as a dependency and have a settings entry in the Gatsby config file; you just need to tweak it slightly.

      Move into your local Gatsby directory that was created in the previous step:

      • cd my-wordpress-gatsby-site

      Then, open the file named ./gatsby-config.js in your text editor of choice. This is the main configuration file for all Gatsby projects.

      Within the config file, you will find an existing settings entry for gatsby-source-wordpress within the plugins array. You can now take your specific GraphQL endpoint you copied from the previous step and replace the default demo endpoint, https://wpgatsbydemo.wpengine.com/graphql, with your value, as highlighted in the following code:

      gatsby-config.js

      module.exports = {
          plugins: [
              {
                  resolve: `gatsby-source-wordpress`,
                  options: {
                      url:
                          process.env.WPGRAPHQL_URL ||
                          `https://your_domain/graphql`,
                  },
              },
              ...
          ],
          ...
      }
      

      Save and close this file so that future builds of the site will use the updated value.

      Note: url is the only required setting option, but there are many others available; take a look at the Gatsby GitHub repository for the full list. For example, there are options for enabling debug output, connecting to an HTACCESS password-protected site, and performance-related options, such as schema.requestConcurrency, which is especially important if your WordPress site is running on a server with limited resources.
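
      As a sketch of how such an option fits into your existing configuration (the value 5 here is an arbitrary illustration, not a recommendation; schema.requestConcurrency is described in the plugin’s documentation):

      gatsby-config.js

      {
          resolve: `gatsby-source-wordpress`,
          options: {
              url: `https://your_domain/graphql`,
              schema: {
                  // Limit how many concurrent GraphQL requests Gatsby makes to
                  // WordPress during builds, to avoid overloading a small server.
                  requestConcurrency: 5,
              },
          },
      },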

      Before moving on to customizing how Gatsby uses your WordPress data to build pages, build and preview your site as-is to make sure everything is working correctly. You can do this by running the following command:

      • npm run develop

      Or, if you are using the yarn package manager:

      • yarn develop

      Warning: If you get an error at this step, especially if it is an error about a missing dependency or can't resolve '...' in '.cache', it might be that part of the dependency install process has failed. This is a known issue in Gatsby projects. Try running npm i again (or yarn install if using yarn) to check for and install any missing dependencies. If that fails to fix the issue, completely rebuild your dependencies by deleting the node_modules folder, deleting package-lock.json or yarn.lock, and then running npm i or yarn install.

      This command will run the gatsby develop process, which will pull in data from WordPress, combine it with the starter template’s pre-built UI files, and start a local development server so you can view a preview of the actual generated website in a browser. This command also runs Gatsby in a mode that supports hot-reloading, so that if you make edits to your local files, you will see those changes reflected instantly in your web browser.

      Navigate to localhost:8000 in your browser and you will find your Gatsby site with WordPress content:

      Gatsby site displaying the content pulled from WordPress

      With your Gatsby project now pulling data from WordPress, the next step will be to customize the actual template files so that you can make your Gatsby site look and act just how you want it to.

      Step 4 — Customizing the Starter Template Files

      The Gatsby WordPress starter template provides a lot of default functionality, but in this step you will explore how you can customize the template files to make the project your own, in both form and function. By editing some Gatsby source files, you will bring a new piece of WordPress content—the post excerpt—into your Gatsby site and style it with CSS.

      For most Gatsby sites, your starter template included, there are a few key files and folders to be aware of when it comes to customization:

      • ./gatsby-node.js: This could be considered the center of the Static Site Generation process. It has the code for querying WordPress for all your content, then passing it through template files to generate the static output. If you want to modify what content ends up on your site, this is the main entry-point. In terms of WordPress development, this is similar to working with The Loop and similar concepts.
      • ./src/templates: This contains individual template files, each of which should contain and export a React component responsible for rendering the content passed in. If you want to change how content looks, integrate third-party UI libraries, or build skeletons around content, this is the usual place to do it. In terms of WordPress development, these are similar to Template Files.
      • ./src/components: Typically, each file in this folder is a singular React component that is dedicated to a specific UI task, and is meant to be pulled into template files. Think of these as UI building blocks, but not as templates. If you have a UI element that you want to share across multiple template files, this is a good place to put it and avoid copying and pasting the same code over and over. Some examples of this would be menus, author bio displays, header and footer elements, etc. In terms of WordPress development, these are similar to Template Partials.
      • ./src/css: This contains CSS files that are shared across the site, as opposed to inline-styling, or a popular css-in-js solution, such as styled-components. In this tutorial, and with your specific starter template, this folder provides the majority of styling for your new site. In terms of WordPress development, this is equivalent to style.css, or any number of stylesheets that a theme can inject into a page through WordPress’s enqueue system.

      For an example of how you can edit the existing template files, open ./src/templates/blog-post.js in your text editor.

      In WordPress, there is a special text value for each post called the excerpt, which is a short descriptive summary of the post. By default, this Gatsby template file pulls in the WordPress excerpt, but only uses it for SEO purposes, putting it in the <meta name="description" /> tag. You can modify the blog post template file to include the post excerpt visually, like so, adding the highlighted code to your file:

      /src/templates/blog-post.js

      const BlogPostTemplate = ({ data: { previous, next, post } }) => {
          ...
          return (
              <Layout>
                  ...
                  <h1 itemProp="headline">{parse(post.title)}</h1>

                  <p>{post.date}</p>

                  {/* Checking for and adding the post excerpt if the current post has one */}
                  {post.excerpt && (
                      <div className="post-excerpt">{parse(post.excerpt)}</div>
                  )}

                  {/* if we have a featured image for this post let's display it */}
                  {featuredImage?.fluid && (
                      <Image
                          fluid={featuredImage.fluid}
                          alt={featuredImage.alt}
                          style={{ marginBottom: 50 }}
                      />
                  )}
                  ...
              </Layout>
          )
      }
      

      In this code, you are checking if the post has an excerpt (important since it is not mandatory in WordPress), and if it does, displaying the text content of the excerpt inside a <div> element. The parse() function comes from html-react-parser, and is being used here to make sure that the <p> tag that will hold your excerpt is parsed into HTML rather than plain text, so you can echo out the content directly. An alternative approach would be to use dangerouslySetInnerHTML, with <div className="post-excerpt" dangerouslySetInnerHTML={{__html: post.excerpt}} ></div>.

      Save and close the blog-post.js file.

      Since the excerpt is a summary of the post, it might help the visitors to your site if you visually separate it from the body of the post, highlighting it at the top of the page and making it easy to find. You can do this by editing the main shared CSS file at ./src/css/style.css:

      /src/css/style.css

      .post-list-item header {
        margin-bottom: var(--spacing-4);
      }
      
      /* CSS targeting your new post excerpt element */
      .post-excerpt {
        box-shadow: 0px 1px 9px 1px rgb(0 0 0 / 50%);
        padding: 6px;
        border-radius: 8px;
        margin-bottom: 14px;
      }
      
      .post-excerpt p {
        margin-bottom: 0px;
      }
      

      In your CSS, you have now used box-shadow to add a shadow effect around the excerpt container, contrasting it with the actual body of the post, as well as added padding, rounded edges, and spacing between itself and adjacent elements. Furthermore, you removed the default bottom margin from the text of the excerpt, since spacing is now provided by the container .post-excerpt element.

      Save the style.css file. To test this out, add an excerpt in WordPress to take advantage of this new visual feature. In the sidebar of the WordPress admin view, navigate to the Posts tab, then select the sample Hello world! post. This will take you to the WordPress post editor view. In the newer block-based editor, the excerpt field appears under the Post tab in the right sidebar, near the bottom. In the legacy editor, the location of the excerpt field is customizable, so it might appear in different locations depending on your theme and custom settings.

      Add in an excerpt, then select the Update button at the top of the screen. Then, go to your Gatsby site at localhost:8000, and select the Hello world! blog post. You will find the excerpt you wrote rendered on the page:

      The sample Hello world! post with the new excerpt rendered at the top of the page

      Note: If you are looking for pre-built themes that don’t require any additional coding or configuration, similar to how WordPress themes work, there is a growing number of both official and community themes for using WordPress with Gatsby.

      You have just embedded and styled a post excerpt from WordPress into a custom Gatsby static site. This used data that was already configured for use by your starter template. The next step will explore how to bring new pieces of data via GraphQL and integrate them into your Gatsby site.

      Step 5 — Using WordPress Data in Gatsby with Custom Templates

      In the previous steps, you edited an existing template and used some standard WordPress data (post title and post content) to render your blog posts with Gatsby’s static output. For many sites, this alone might be all that is needed. However, to showcase how decoupling the UI from WordPress gives you greater flexibility, in this step you will explore how you would add support for a special video post type in Gatsby, going beyond that existing blog post template.

      In this scenario, you are adding support for posts that each showcase a single video, sourced from YouTube. You will make it so that you or your content collaborators can copy and paste a YouTube URL into the WordPress post editor and the Gatsby site itself will show the video inside a customized YouTube embed widget.

      For the post template, create a new file under /src/templates, and name it video-post.js. Before building the UI of the page that will be generated, you can write a GraphQL query to retrieve data for it. In Gatsby, this is called a Page Query, and uses the graphql tag.

      Add the following code to the video-post.js file:

      /src/templates/video-post.js

      import React from "react"
      import { graphql } from "gatsby"
      
      export const pageQuery = graphql`
        query VideoPostById(
          # these variables are passed in via createPage.pageContext in gatsby-node.js
          $id: String!
        ) {
          # selecting the current post by id
          post: wpPost(id: { eq: $id }) {
            id
            content
            title
            date(formatString: "MMMM DD, YYYY")
          }
        }
      `
      

      In this snippet, you are using the post ID to query for specific values belonging to that exact post—such as the actual post content, title, and date.
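
      For context, the $id variable is supplied by gatsby-node.js: anything placed in a page's context when calling createPage becomes a GraphQL variable available to that page's query. The following abridged sketch (with a hypothetical path and post id) illustrates the mechanism:

      // gatsby-node.js (abridged sketch, hypothetical values)
      const path = require("path")

      exports.createPages = async ({ actions }) => {
        actions.createPage({
          path: "/example-video-post/",
          component: path.resolve(`./src/templates/video-post.js`),
          context: {
            // Everything in context becomes a GraphQL variable,
            // so this value arrives in the page query as $id
            id: "cG9zdDox",
          },
        })
      }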

      Next, you can add the actual React component that returns JSX, which will be rendered as the webpage. A good place to start is by copying most of the structure from the existing blog-post.js template file and adding the following highlighted lines:

      /src/templates/video-post.js

      import React from "react"
      import { graphql } from "gatsby"
      import parse from "html-react-parser"
      
      import Bio from "../components/bio"
      import Layout from "../components/layout"
      import Seo from "../components/seo"
      
      const VideoPostTemplate = ({ data: { post } }) => {
        return (
          <Layout>
            <Seo title={post.title} description={post.excerpt} />
      
            <article
              className="blog-post"
              itemScope
              itemType="http://schema.org/Article"
            >
              <header>
                <h1 itemProp="headline">{parse(post.title)}</h1>
                <p>{post.date}</p>
              </header>
      
              <footer>
                <Bio />
              </footer>
            </article>
          </Layout>
        )
      }
      
      export default VideoPostTemplate;
      
      export const pageQuery = graphql`
        query VideoPostById(
          # these variables are passed in via createPage.pageContext in gatsby-node.js
          $id: String!
        ) {
          # selecting the current post by id
          post: wpPost(id: { eq: $id }) {
            id
            content
            title
            date(formatString: "MMMM DD, YYYY")
          }
        }
      `
      

      In addition to creating the React component, you also used export default to make sure that the component is the default item exported from the file. This is important because of how the file is imported later on by Gatsby when it compiles the template against data from WordPress.

      Now, you can add some logic to your React component to check if there is a raw YouTube URL embedded in the body of the post:

      /src/templates/video-post.js

      ...
      
      const VideoPostTemplate = ({ data: { post } }) => {
        // RegEx to find YouTube IDs
        const youtubeIdPattern = /watch\?v=([a-z_0-9-]+)|youtu\.be\/([a-z_0-9-]+)|youtube\.com\/embed\/([a-z_0-9-]+)/i;
      
        const matches = youtubeIdPattern.exec(post.content);
        let videoId;
      
        if (matches) {
          // Use first available match
          videoId = matches[1] || matches[2] || matches[3];
        }
      
        return (
          <Layout>
            <Seo title={post.title} description={post.excerpt} />
      
            <article
              className="blog-post"
              itemScope
              itemType="http://schema.org/Article"
            >
              <header>
                <h1 itemProp="headline">{parse(post.title)}</h1>
                <p>{post.date}</p>
              </header>
      
              <footer>
                <Bio />
              </footer>
            </article>
          </Layout>
        )
      }
      ...
      

      In this code, youtubeIdPattern is a regular expression (RegEx), a search pattern that you execute against the body of the post with youtubeIdPattern.exec(post.content) to find any YouTube video IDs it contains. Each of the three alternatives in the pattern covers a common YouTube URL format, and if a match is found, videoId is set to whichever capture group matched.
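
      To see the pattern in action outside of the template, you can run a quick standalone check with Node.js; the content string below is a hypothetical post body:

      // A hypothetical post body containing a pasted YouTube short link
      const content = '<p>Check this out: https://youtu.be/iom_nhYQIYk</p>'

      const youtubeIdPattern = /watch\?v=([a-z_0-9-]+)|youtu\.be\/([a-z_0-9-]+)|youtube\.com\/embed\/([a-z_0-9-]+)/i

      // Only the capture group for the URL format that matched is defined;
      // the other two groups come back as undefined
      const matches = youtubeIdPattern.exec(content)
      const videoId = matches && (matches[1] || matches[2] || matches[3])

      console.log(videoId) // iom_nhYQIYk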

      Finally, you can add the JSX that renders the video based on the videoId you’ve extracted:

      /src/templates/video-post.js

      ...
      
        return (
          <Layout>
            <Seo title={post.title} description={post.excerpt} />
      
            <article
              className="blog-post"
              itemScope
              itemType="http://schema.org/Article"
            >
              <header>
                <h1 itemProp="headline">{parse(post.title)}</h1>
                <p>{post.date}</p>
              </header>
      
              {videoId ? (
                <div className="video-embed">
                  <iframe
                    width="512"
                    height="288"
                    src={`https://www.youtube-nocookie.com/embed/${videoId}?controls=0&autoplay=1`}
                    title={post.title}
                    frameBorder="0"
                    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
                    allowFullScreen
                  ></iframe>
                </div>
              ) : (
                <div className="no-video-found">
                  <p>Sorry, could not find a video in this post!</p>
                </div>
              )}
      
              <hr />
      
              <footer>
                <Bio />
              </footer>
            </article>
          </Layout>
        )
      }
      ...
      

      If a videoId is found, your code returns a customized, privacy-enhanced YouTube embed, served through an iframe, set to autoplay. Otherwise, it returns a message that no video was found. It also adds a horizontal break between the video embed and the footer of the post.

      Now that your component template file is built, you will tell Gatsby to use the new template for posts that are set to the Video type within WordPress, and not use the regular blog post template for them.

      Make sure to save your changes in video-post.js, then open gatsby-node.js in your text editor.

      First, modify the getPosts() function, which the starter uses as its main GraphQL query for posts from the WordPress backend. You'll extend the query to also pull in the postFormats that each post belongs to:

      gatsby-node.js

      ...
      async function getPosts({ graphql, reporter }) {
        const graphqlResult = await graphql(/* GraphQL */ `
          query WpPosts {
            # Query all WordPress blog posts sorted by date
            allWpPost(sort: { fields: [date], order: DESC }) {
              edges {
                previous {
                  id
                }
      
                ...
      
                post: node {
                  id
                  uri
                  postFormats {
                    formats: nodes {
                      name
                    }
                  }
                }
      
                next {
                  id
                }
              }
            }
          }
        `)
      
        ...
      
        return graphqlResult.data.allWpPost.edges
      }
      

      Next, you need to implement the logic that separates the video posts and sends them to their unique template file for rendering. For this, you can hook into the existing createIndividualBlogPostPages() function in the starter.

      You can pull the data from the GraphQL query you modified and use that to determine if the current post is a video post or not:

      gatsby-node.js

      const createIndividualBlogPostPages = async ({ posts, gatsbyUtilities }) =>
        Promise.all(
          posts.map(({ previous, post, next }) => {
            const postFormats = post.postFormats.formats;
            const isVideo = postFormats.length && postFormats[0].name === 'Video';
      ...
                // We also use the next and previous id's to query them and add links!
                previousPostId: previous ? previous.id : null,
                nextPostId: next ? next.id : null,
              },
            })
          })
        )
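
      If it helps to visualize what that check operates on, here is the approximate shape of the data the modified query returns for a single post, with hypothetical field values:

      // Hypothetical post object, shaped like one node from the WpPosts query
      const post = {
        id: "cG9zdDox",
        uri: "/my-video-post/",
        postFormats: {
          formats: [{ name: "Video" }],
        },
      }

      const postFormats = post.postFormats.formats
      // True only when the post has at least one format and it is "Video";
      // a standard post has an empty formats array
      const isVideo = postFormats.length && postFormats[0].name === "Video"

      console.log(Boolean(isVideo)) // true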
      

      Then, change the component property in createPage to use the corresponding template file:

      gatsby-node.js

      const createIndividualBlogPostPages = async ({ posts, gatsbyUtilities }) =>
        Promise.all(
          posts.map(({ previous, post, next }) => {
            const postFormats = post.postFormats.formats;
            const isVideo = postFormats.length && postFormats[0].name === 'Video';
      
            return gatsbyUtilities.actions.createPage({
              // Use the WordPress uri as the Gatsby page path
              // This is a good idea so that internal links and menus work 👍
              path: post.uri,
      
              // Use special video template if post format === Video, else use blog post template
              component: isVideo
                ? path.resolve(`./src/templates/video-post.js`)
                : path.resolve(`./src/templates/blog-post.js`),
      
              ...
            });
          })
        )
      

      To keep things concise, this code uses a ternary operator, which returns one value if a condition is truthy (truth-like) and a different value if it is falsy, all without an if/else statement. The condition here is isVideo from your post-format check: if it is true, the expression returns the path of the new video template; if false, it returns the path of the regular blog post template. The Node.js path.resolve() function turns the relative path (./src/...) into an absolute path (the full filepath), which Gatsby requires in order to load a component file.
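
      As a quick standalone illustration, with a hard-coded isVideo value standing in for the real post-format check, you can see both pieces at work in plain Node.js:

      const path = require("path")

      const isVideo = true // stand-in for the post-format check

      // condition ? valueIfTruthy : valueIfFalsy
      const component = isVideo
        ? path.resolve(`./src/templates/video-post.js`)
        : path.resolve(`./src/templates/blog-post.js`)

      console.log(component)
      // prints an absolute path, for example:
      // /home/sammy/my-gatsby-site/src/templates/video-post.js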

      Save and exit the file.

      Next, style your video embed by editing ./src/css/style.css again:

      /src/css/style.css

      .video-embed {
        /* Shadow effect around box to give it contrast */
        box-shadow: 0px 2px 12px 4px rgb(0 0 0 / 50%);
        /* All four declarations below help center our video and give it space */
        display: block;
        line-height: 0px;
        margin: 20px auto;
        max-width: 512px;
      }
      

      By adding this CSS, you've given the video embed a shadow effect that contrasts it with the rest of the page, centered it, and given it space away from other elements.

      To test the functionality of this new code, you can create a new post in WordPress that matches the criteria required by the template. From your WordPress Admin Dashboard, click on Posts in the left sidebar, then Add New to start building a new post. Give your post a title, and then make sure it meets these two criteria:

      • The Post Format is set to Video. You can find the format dropdown in the right sidebar.
      • The post body contains a plain YouTube URL, pasted as text rather than as an embed block. To test this, you can use this short link to a DigitalOcean promotional video: youtu.be/iom_nhYQIYk.

      Screenshot showing the WordPress post editor with a YouTube URL in the body of the post, and the post format type set to Video

      After filling out the post, select Publish (or Update for an existing post) and confirm the prompt that appears, so that your post goes live and Gatsby can fetch it over the GraphQL connection.

      Navigate to localhost:8000 in your browser and select your test video post. The YouTube video will be rendered in the browser, as shown in the following image:

      Video blog post with rendered DigitalOcean promotional video on page

      Conclusion

      By working through the steps in this tutorial, you now have a statically generated Gatsby site that sources its content from a WordPress backend. In decoupling content from UI, you have opened up new possibilities for speeding up your site, reduced the barriers to cross-discipline content collaboration, and taken advantage of the rich ecosystem that Gatsby and React provide for UI development.

      If you would like to read more Gatsby tutorials, try out the other tutorials in the How To Create Static Web Sites with Gatsby.js series.


