

      How To Develop a Drupal 9 Website on Your Local Machine Using Docker and DDEV


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      DDEV is an open-source tool that uses Docker to build local development environments for many different PHP frameworks. Using the power of containerization, DDEV can greatly simplify how you work on multiple projects that use multiple tech stacks and multiple cloud servers. DDEV includes templates for WordPress, Laravel, Magento, TYPO3, Drupal, and more.

      Drupal 9, the latest major version of the Drupal CMS, was released on June 3, 2020. Known for its ease of use and a massive library of modules and themes, Drupal is a popular PHP framework for building and maintaining websites and applications of all sizes.

      In this tutorial, you will begin developing a Drupal 9 website on your local machine using DDEV. This will allow you to build your website first, and then later, when you are ready, deploy your project to a production server.

      Prerequisites

      To complete this tutorial, you will need:

      Note: It is possible to develop Drupal 9 using DDEV on a remote server, but you will need a solution to access localhost in a web browser. The DDEV command ddev share works with ngrok, which creates a secure tunnel into your server for you and other stakeholders to view your development site. For personal use, you could also install a GUI on your remote server and access your development site through a web browser inside that interface. To do this, you could follow our guide on how to install and configure VNC on Ubuntu 20.04. For an even quicker GUI solution you can follow our guide on how to set up a remote desktop with X2Go on Ubuntu 20.04.

      Step 1 — Installing DDEV

      In this step you will install DDEV on your local machine. Option 1 includes instructions for macOS while Option 2 provides instructions for Linux. This tutorial was tested on DDEV version 1.15.0.

      Option 1 — Installing DDEV on macOS

      DDEV advises that macOS users install their tool using the Homebrew package manager. Use the following brew command to install the newest stable release:

      • brew tap drud/ddev && brew install drud/ddev/ddev

      If you prefer the absolute newest version, you can use brew to install ddev-edge:

      • brew tap drud/ddev-edge && brew install drud/ddev-edge/ddev

      If you already have a version of DDEV installed, or if you ever wish to update your version, shut down DDEV and use brew to update your installation:

      • ddev poweroff
      • brew upgrade ddev

      Once you have installed or updated DDEV, run ddev version to verify your software:

      • ddev version

      You will see an output like this:

      Output

      DDEV-Local version   v1.15.0
      commit               v1.15.0
      db                   drud/ddev-dbserver-mariadb-10.2:v1.15.0
      dba                  phpmyadmin/phpmyadmin:5
      ddev-ssh-agent       drud/ddev-ssh-agent:v1.15.0
      docker               19.03.8
      docker-compose       1.25.5
      os                   darwin
      router               drud/ddev-router:v1.15.0
      web                  drud/ddev-webserver:v1.15.0

      DDEV includes a powerful CLI, or command line interface. Run ddev without any arguments to learn about some common commands:

      • ddev

      You will see the following output:

      Output

      Create and maintain a local web development environment.
      Docs: https://ddev.readthedocs.io
      Support: https://ddev.readthedocs.io/en/stable/#support

      Usage:
        ddev [command]

      Available Commands:
        auth              A collection of authentication commands
        composer          Executes a composer command within the web container
        config            Create or modify a ddev project configuration in the current directory
        debug             A collection of debugging commands
        delete            Remove all project information (including database) for an existing project
        describe          Get a detailed description of a running ddev project.
        exec              Execute a shell command in the container for a service. Uses the web service by default.
        export-db         Dump a database to a file or to stdout
        help              Help about any command
        hostname          Manage your hostfile entries.
        import-db         Import a sql file into the project.
        import-files      Pull the uploaded files directory of an existing project to the default public upload directory of your project.
        list              List projects
        logs              Get the logs from your running services.
        pause             uses 'docker stop' to pause/stop the containers belonging to a project.
        poweroff          Completely stop all projects and containers
        pull              Pull files and database using a configured provider plugin.
        restart           Restart a project or several projects.
        restore-snapshot  Restore a project's database to the provided snapshot version.
        sequelpro         This command is not available since sequel pro.app is not installed
        share             Share project on the internet via ngrok.
        snapshot          Create a database snapshot for one or more projects.
        ssh               Starts a shell session in the container for a service. Uses web service by default.
        start             Start a ddev project.
        stop              Stop and remove the containers of a project. Does not lose or harm anything unless you add --remove-data.
        version           print ddev version and component versions

      Flags:
        -h, --help          help for ddev
        -j, --json-output   If true, user-oriented output will be in JSON format.
        -v, --version       version for ddev

      Use "ddev [command] --help" for more information about a command.

      For more information about using the DDEV CLI, visit the official DDEV documentation.

      With DDEV installed on your local machine, you are now ready to install Drupal 9 and begin developing a website.

      Option 2 — Installing DDEV on Linux

      On a Linux operating system, you can install DDEV using Homebrew for Linux or using the official installation script. On Ubuntu, begin by updating your list of packages in the apt package manager (apt also works on Debian; otherwise, use the equivalent package manager for your Linux distribution):

      • sudo apt update

      Now install some prerequisite packages from Ubuntu’s official repository:

      • sudo apt install build-essential apt-transport-https ca-certificates software-properties-common curl

      These packages will allow you to download the DDEV installation script from their official GitHub repository.

      Now download the script:

      • curl -O https://raw.githubusercontent.com/drud/ddev/master/scripts/install_ddev.sh

      Before running the script, open it in nano or your preferred text editor and inspect its contents:

      • nano install_ddev.sh

      Once you have reviewed the script’s contents and you are satisfied, save and close the file. Now you are ready to run the installation script.

      Use the chmod command to make the script executable:

      • chmod +x install_ddev.sh

      Now run the script:

      • ./install_ddev.sh

      The installation process might prompt you to confirm some settings or to enter your sudo password. Once the installation completes, you will have DDEV available on your Linux operating system.

      Run ddev version to verify your software:

      • ddev version

      You will see an output like this:

      Output

      DDEV-Local version   v1.15.0
      commit               v1.15.0
      db                   drud/ddev-dbserver-mariadb-10.2:v1.15.0
      dba                  phpmyadmin/phpmyadmin:5
      ddev-ssh-agent       drud/ddev-ssh-agent:v1.15.0
      docker               19.03.8
      docker-compose       1.25.5
      os                   linux
      router               drud/ddev-router:v1.15.0
      web                  drud/ddev-webserver:v1.15.0

      DDEV includes a powerful CLI, or command line interface. Run ddev without any arguments to learn about some common commands:

      • ddev

      You will see the following output:

      Output

      Create and maintain a local web development environment.
      Docs: https://ddev.readthedocs.io
      Support: https://ddev.readthedocs.io/en/stable/#support

      Usage:
        ddev [command]

      Available Commands:
        auth              A collection of authentication commands
        composer          Executes a composer command within the web container
        config            Create or modify a ddev project configuration in the current directory
        debug             A collection of debugging commands
        delete            Remove all project information (including database) for an existing project
        describe          Get a detailed description of a running ddev project.
        exec              Execute a shell command in the container for a service. Uses the web service by default.
        export-db         Dump a database to a file or to stdout
        help              Help about any command
        hostname          Manage your hostfile entries.
        import-db         Import a sql file into the project.
        import-files      Pull the uploaded files directory of an existing project to the default public upload directory of your project.
        list              List projects
        logs              Get the logs from your running services.
        pause             uses 'docker stop' to pause/stop the containers belonging to a project.
        poweroff          Completely stop all projects and containers
        pull              Pull files and database using a configured provider plugin.
        restart           Restart a project or several projects.
        restore-snapshot  Restore a project's database to the provided snapshot version.
        sequelpro         This command is not available since sequel pro.app is not installed
        share             Share project on the internet via ngrok.
        snapshot          Create a database snapshot for one or more projects.
        ssh               Starts a shell session in the container for a service. Uses web service by default.
        start             Start a ddev project.
        stop              Stop and remove the containers of a project. Does not lose or harm anything unless you add --remove-data.
        version           print ddev version and component versions

      Flags:
        -h, --help          help for ddev
        -j, --json-output   If true, user-oriented output will be in JSON format.
        -v, --version       version for ddev

      Use "ddev [command] --help" for more information about a command.

      For more information about using the DDEV CLI, you can visit the official DDEV documentation.

      With DDEV installed on your local machine, you are now ready to deploy Drupal 9 and begin developing a website.

      Step 2 — Deploying a New Drupal 9 Site Using DDEV

      With DDEV running, you will now use it to create a Drupal-specific filesystem, install Drupal 9, and then initiate a standard website project.

      First, you will create a project root directory and then move inside it. You will run all remaining commands from this location. This tutorial will use d9test, but you are free to name your directory something else. Note, however, that DDEV doesn’t handle hyphenated names well. It is considered a best practice to avoid directory names like my-project or drupal-site-1.

      Create your project root directory and navigate inside:

      • mkdir d9test
      • cd d9test

      DDEV excels at creating directory trees that match specific CMS platforms. Use the ddev config command to create a directory structure specific to Drupal 9:

      • ddev config --project-type=drupal9 --docroot=web --create-docroot

      You will see an output like this:

      Output

      Creating a new ddev project config in the current directory (/Users/sammy/d9test)
      Once completed, your configuration will be written to /Users/sammy/d9test/.ddev/config.yaml

      Created docroot at /Users/sammy/d9test/web
      You have specified a project type of drupal9 but no project of that type is found in /Users/sammy/d9test/web
      Ensuring write permissions for d9test
      No settings.php file exists, creating one
      Existing settings.php file includes settings.ddev.php
      Configuration complete. You may now run 'ddev start'.

      Because you passed --project-type=drupal9 to your ddev config command, DDEV created several subdirectories and files that represent the default organization for a Drupal website. Your project directory tree will now look like this:

      A Drupal 9 directory tree

      .
      ├── .ddev
      │   ├── .gitignore
      │   ├── config.yaml
      │   ├── db-build
      │   │   └── Dockerfile.example
      │   └── web-build
      │       └── Dockerfile.example
      └── web
          └── sites
              └── default
                  ├── .gitignore
                  ├── settings.ddev.php
                  └── settings.php
      
      6 directories, 7 files
      

      .ddev/ will be the main folder for the ddev configuration. web/ will be the docroot for your new project; it will contain several specific settings files. You now have the initial scaffolding for your new Drupal project.

      Your next step is to initialize your platform, which will build the necessary containers and networking configurations. DDEV binds to ports 80 and 443, so if you are running a web server like Apache on your machine, or anything else that uses those ports, stop those services before continuing.

      Use the ddev start command to initialize your platform:

      • ddev start

      This will build all the Docker-based containers for your project, which include a web container, a database container, and phpmyadmin. When the initialization completes you will see an output like this (your port number might differ):

      Output

      ...
      Successfully started d9test
      Project can be reached at http://d9test.ddev.site http://127.0.0.1:32773

      Note: Remember that DDEV is starting Docker containers behind the scenes here. If you want to view those containers or verify that they are running, you can always use the docker ps command:

      • docker ps

      Alongside any other containers that you are currently running, you will find four new containers, each running a different image: phpmyadmin, ddev-webserver, ddev-router, and ddev-dbserver-mariadb.

      ddev start has successfully built your containers and given you an output with two URLs. While this output says that your project “can be reached at http://d9test.ddev.site and http://127.0.0.1:32773,” visiting these URLs right now will throw an error. Starting with Drupal 8, the Drupal core and the contrib modules function like dependencies. Therefore, you’ll first need to finish installing Drupal using Composer, the package manager for PHP projects, before anything loads in your web browser.

      One of the most useful and elegant features of DDEV is that you can pass Composer commands through the DDEV CLI and into your containerized environment. This means that you can separate your machine’s specific configuration from your development environment. You no longer have to manage the various file path, dependency, and version issues that generally accompany local PHP development. Moreover, you can quickly context-switch between multiple projects using different frameworks and tech stacks with minimal effort.

      Use the ddev composer command to download drupal/recommended-project. This will download Drupal core, its libraries, and other related resources and then create a default project:

      • ddev composer create "drupal/recommended-project"

      Now download one final component called Drush, or Drupal Shell. This tutorial will only use one drush command (and provides a wizard-based alternative), but drush is a powerful CLI for Drupal development that can improve your efficiency.

      Use ddev composer to install drush:

      • ddev composer require "drush/drush"

      You have now built a default Drupal 9 project and installed drush. Now you will view your project in a browser and configure your website’s settings.

      Step 3 — Configuring Your Drupal 9 Project

      Now that you have installed Drupal 9, you can visit your new project in your browser. To do this, you can rerun ddev start and copy one of the two URLs that it outputs, or you can use the following command, which will automatically launch your site in a new browser window:

      • ddev launch

      You will encounter the standard Drupal installation wizard.

      Drupal 9 installer from browser

      Here you have two options. You can use this UI and follow the wizard through installation, or you can return to your terminal and pass a drush command through ddev. The latter option will automate the installation process and set admin as both your username and password.

      Option 1 — Using the Wizard

      Return to the wizard in your browser. Under Choose language select a language from the drop-down menu and click Save and continue. Now select an installation profile. You can choose between Standard, Minimal, and Demo. Make your choice and then click Save and continue. Drupal will automatically verify your requirements, set up a database, and install your site. Your last step is to customize a few configurations. Add a site name and a site email address that ends in your domain. Then choose a username and password. Choose a strong password and keep your credentials somewhere safe. Lastly, add a private email address that you regularly check, fill in the regional settings, and press Save and continue.

      Drupal 9 welcome message with a warning about permissions

      Your new site will load with a welcome message.

      Option 2 — Using the Command Line

      From your project’s root directory, run this ddev exec command to install a default Drupal site using drush:

      • ddev exec drush site:install --account-name=admin --account-pass=admin

      This will create your site just as the wizard would, but with some boilerplate configurations. Your username and password will both be admin.

      Now launch the site to view it in your browser:

      • ddev launch

      You are now ready to begin building your website, but it is considered best practice to check that your permissions are correct for the web/sites/default directory. While you are working locally, this is not a significant concern, but if you transfer these permissions to a production server, they will pose a security risk.

      Step 4 — Checking Your Permissions

      During the wizard installation, or when your welcome page first loads, you might see a warning about the permissions settings on your web/sites/default directory and one file inside that directory: settings.php.

      After the installation script runs, Drupal will try to set the web/sites/default directory permissions to read and execute for all groups: this is a 555 permissions setting. It will also attempt to set permissions for default/settings.php to read-only, or 444. If you encounter this warning, run these two chmod commands from your project’s root directory. Failure to do so poses a security risk:

      • chmod 555 web/sites/default
      • chmod 444 web/sites/default/settings.php

      To verify that you have the correct permissions, run this ls command with the a, l, h, and d switches:

      • ls -alhd web/sites/default web/sites/default/settings.php

      Check that your permissions match the following output:

      Output

      dr-xr-xr-x 8 sammy staff 256 Jul 21 12:56 web/sites/default
      -r--r--r-- 1 sammy staff 249 Jul 21 12:12 web/sites/default/settings.php
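      If octal modes are unfamiliar, the following sketch recreates the same layout in a throwaway /tmp directory (these are scratch paths, not your project) so you can see what 555 and 444 produce. Note that stat -c is the GNU coreutils form; on macOS you would use stat -f '%Lp %N' instead:

```shell
# Recreate the layout in a scratch directory so the modes can be inspected
# without touching a real Drupal project.
mkdir -p /tmp/d9perms/web/sites/default
touch /tmp/d9perms/web/sites/default/settings.php

chmod 555 /tmp/d9perms/web/sites/default               # dr-xr-xr-x: read and traverse, no writes
chmod 444 /tmp/d9perms/web/sites/default/settings.php  # -r--r--r--: read-only for everyone

# Print each path's octal mode to confirm the change took effect.
stat -c '%a %n' /tmp/d9perms/web/sites/default /tmp/d9perms/web/sites/default/settings.php
```

      The printed modes, 555 and 444, correspond to the dr-xr-xr-x and -r--r--r-- strings shown by ls.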

      You are now ready to develop a Drupal 9 website on your local machine.

      Step 5 — Creating Your First Post in Drupal

      To test some of Drupal’s functionality, you will now create a post using the web UI.

      From your site’s initial page, click the Content button on the upper menu’s left-hand edge. Now click the blue add content button. A new page will appear. Click Article, and another page will appear.

      Drupal 9 Create Article Prompt

      Add whatever title and content you like. You can add an image, too, like one of DigitalOcean’s wallpapers. When ready, click the blue save button.

      Your first post will appear on your website.

      Drupal 9 Created Post

      You are now developing a Drupal 9 website on your local machine without ever interacting with a server, thanks to Docker and DDEV. In the following step, you will manage the DDEV container to accommodate your workflow.

      Step 6 — Managing the DDEV Container

      When you have finished developing your project, or when you want to take a break, you can stop your DDEV container without worrying about data loss. DDEV can manage rapid context-switching among many projects; this is one of its most useful features. Your code and data are always preserved in your project directory, even after you stop or delete the DDEV container.

      To free up resources, you can stop DDEV at any time. From your project’s root directory, run the following command:

      • ddev stop

      DDEV is available globally, so you can run ddev commands from anywhere, as long as you specify the DDEV project:

      • ddev stop d9test

      You can also view all your projects at once using ddev list:

      • ddev list

      DDEV includes many other useful commands.

      You can restart DDEV and continue developing locally at any time:

      • ddev start

      Conclusion

      In this tutorial, you used Docker and the power of containerization to develop a Drupal site locally, with the help of DDEV. DDEV also integrates well with numerous IDEs, and it provides built-in PHP debugging for Atom, PHPStorm, and Visual Studio Code (vscode). From here, you can also learn more about creating development environments for Drupal with DDEV or developing other PHP frameworks like WordPress.




      How To Develop Applications on Kubernetes with Okteto


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

      The Okteto CLI is an open-source project that provides a local development experience for applications running on Kubernetes. With it you can write your code on your local IDE and as soon as you save a file, the changes can be pushed to your Kubernetes cluster and your app will immediately update. This whole process happens without the need to build Docker images or apply Kubernetes manifests, which can take considerable time.

      In this tutorial, you’ll use Okteto to improve your productivity when developing a Kubernetes-native application. First, you’ll create a Kubernetes cluster and use it to run a standard “Hello World” application. Then you’ll use Okteto to develop and automatically update your application without having to install anything locally.

      Prerequisites

      Before you begin this tutorial, you’ll need the following:

      Step 1 — Creating the Hello World Application

      The “Hello World” program is a time-honored tradition in web development. In this case, it is a simple web service that responds “Hello World” to every request. Now that you’ve created your Kubernetes cluster, let’s create a “Hello World” app in Golang and the manifests that you’ll use to deploy it on Kubernetes.

      First change to your home directory:

      • cd ~

      Now make a new directory called hello_world and move inside it:

      • mkdir hello_world
      • cd hello_world

      Create and open a new file under the name main.go with your favorite IDE or text editor:

      • nano main.go

      main.go will be a Golang web server that returns the message Hello world!. So, let’s use the following code:

      main.go

      package main
      
      import (
          "fmt"
          "net/http"
      )
      
      func main() {
          fmt.Println("Starting hello-world server...")
          http.HandleFunc("/", helloServer)
          if err := http.ListenAndServe(":8080", nil); err != nil {
              panic(err)
          }
      }
      
      func helloServer(w http.ResponseWriter, r *http.Request) {
          fmt.Fprint(w, "Hello world!")
      }
      

      The code in main.go does the following:

      • The first statement in a Go source file must be the package name. Executable commands must always use package main.
      • The import section indicates which packages the code depends on. In this case it uses fmt for string manipulation, and net/http for the HTTP server.
      • The main function is the entry point to your binary. The http.HandleFunc method is used to configure the server to call the helloServer function when a request to the / path is received. http.ListenAndServe starts an HTTP server that listens on all network interfaces on port 8080.
      • The helloServer function contains the logic of your request handler. In this case, it will write Hello world! as the response to the request.
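      Because the handler is plain Go, you can exercise it without Kubernetes or Docker at all. The sketch below is not part of the tutorial's files; it wraps the same helloServer logic with the standard library's net/http/httptest package (the fetchHello helper is ours, added for illustration) to confirm the response body:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// helloServer mirrors the request handler from main.go.
func helloServer(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello world!")
}

// fetchHello starts a throwaway test server on a free port (so it cannot
// collide with a real copy of the app on :8080), sends it one GET request,
// and returns the response body.
func fetchHello() string {
	srv := httptest.NewServer(http.HandlerFunc(helloServer))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Println(fetchHello()) // prints "Hello world!"
}
```

      This kind of quick local check is useful before going through a full image build and push.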

      You need to create a Docker image and push it to your Docker registry so that Kubernetes can pull it and then run the application.

      Open a new file under the name Dockerfile with your favorite IDE or text editor:

      • nano Dockerfile

      The Dockerfile will contain the commands required to build your application’s Docker container. Let’s use the following code:

      Dockerfile

      FROM golang:alpine as builder
      RUN apk --update --no-cache add bash
      WORKDIR /app
      ADD . .
      RUN go build -o app
      
      FROM alpine as prod
      WORKDIR /app
      COPY --from=builder /app/app /app/app
      EXPOSE 8080
      CMD ["./app"]
      

      The Dockerfile contains two stages, builder and prod:

      • The builder stage contains the Go build tools. It’s responsible for copying the files and building the Go binary.
      • The prod stage is the final image. It will contain only a stripped down OS and the application binary.

      This is a good practice to follow. It makes your production containers smaller and safer since they only contain your application and exactly what is needed to run it.

      Build the container image (replace your_DockerHub_username with your Docker Hub username):

      • docker build -t your_DockerHub_username/hello-world:latest .

      Now push it to Docker Hub:

      • docker push your_DockerHub_username/hello-world:latest

      Next, create a new folder for the Kubernetes manifests:

      • mkdir k8s

      When you use a Kubernetes manifest, you tell Kubernetes how you want your application to run. This time, you’ll create a deployment object. So, create a new file deployment.yaml with your favorite IDE or text editor:

      • nano k8s/deployment.yaml

      The following content describes a Kubernetes deployment object that runs the okteto/hello-world:latest Docker image. Add this content to your new file, but in your case replace okteto listed after the image label with your_DockerHub_username:

      ~/hello_world/k8s/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-world
      spec:
        selector:
          matchLabels:
            app: hello-world
        replicas: 1
        template:
          metadata:
            labels:
              app: hello-world
          spec:
            containers:
            - name: hello-world
              image: your_DockerHub_username/hello-world:latest
              ports:
              - containerPort: 8080
      

      The deployment manifest has three main sections:

      • metadata defines the name for your deployment.
      • replicas defines how many copies of it you want running.
      • template tells Kubernetes what to deploy, and what labels to add. In this case, a single container, with the okteto/hello-world:latest image, listening on port 8080, and with the app: hello-world label. Note that this label is the same used in the selector section.

      You’ll now need a way to access your application. You can expose an application on Kubernetes by creating a service object. Let’s continue using manifests to do that. Create a new file called service.yaml with your favorite IDE or text editor:

      • nano k8s/service.yaml

      The following content describes a service that exposes the hello-world deployment object, which under the hood will use a DigitalOcean Load Balancer:

      k8s/service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: hello-world
      spec:
        type: LoadBalancer
        ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
            name: http
        selector:
          app: hello-world
      

      The service manifest has four main sections:

      • metadata tells Kubernetes how to name your service.
      • type tells Kubernetes how you want to expose your service. In this case, it will expose it externally through a DigitalOcean Load Balancer.
      • The ports label tells Kubernetes which ports you want to expose, and how to map them to your deployment. In this case, you will expose port 80 externally and direct it to port 8080 in your deployment.
      • selector tells Kubernetes how to direct traffic. In this case, any pod with the app: hello-world label will receive traffic.

      You now have everything ready to deploy your “Hello World” application on Kubernetes. We will do this next.

      Step 2 — Deploying Your Hello World Application

      In this step you’ll deploy your “Hello World” application on Kubernetes, and then you’ll validate that it is working correctly.

      Start by deploying your application on Kubernetes:

      • kubectl apply -f k8s

      You’ll see the following output:

      Output

      deployment.apps "hello-world" created
      service "hello-world" created

      After about one minute or so, you will be able to retrieve your application’s IP. Use this kubectl command to check your service:

      • kubectl get service hello-world

      You’ll see an output like this listing your Kubernetes service objects. Note your application’s IP in the EXTERNAL-IP column:

      Output

      NAME          TYPE           CLUSTER-IP        EXTERNAL-IP        PORT(S)                 AGE
      hello-world   LoadBalancer   your_cluster_ip   your_external_ip   80:your_node_port/TCP   37s

      Open your browser and go to your_external_ip listed for your “Hello World” application. Confirm that your application is up and running before continuing with the next step.

      Hello World Okteto

      Until this moment, you’ve followed a fairly traditional pathway for developing applications with Kubernetes. Moving forward, whenever you want to change the code in your application, you’ll have to build and push a new Docker image, and then pull that image from Kubernetes. This process can take quite some time. Okteto was designed to streamline this development inner-loop. Let’s look at the Okteto CLI and see just how it can help.

      Step 3 — Installing the Okteto CLI

      You will now improve your Kubernetes development productivity by installing the Okteto CLI. The Okteto command line interface is an open-source project that lets you synchronize application code changes to an application running on Kubernetes. You can continue using your favorite IDE, debuggers, or compilers without having to commit, build, push, or redeploy containers to test your application, as you did in the previous steps.

      To install the Okteto CLI on a macOS or Linux machine, run the following command:

      • curl https://get.okteto.com -sSfL | sh

      Let’s take a closer look at this command:

      • The curl command is used to transfer data to and from a server.
      • The -s flag suppresses any output.
      • The -S flag shows errors.
      • The -f flag causes the request to fail on HTTP errors.
      • The -L flag makes the request follow redirects.
      • The | operator pipes this output to the sh command, which will download and install the latest okteto binary in your local machine.
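      If piping a remote script straight into sh feels opaque, this harmless stand-in shows the mechanics without any network access or installation; the generated one-line "installer" here is purely illustrative:

```shell
# Generate a tiny script on the fly and stream it into sh on stdin,
# exactly how `curl -sSfL ... | sh` streams the downloaded installer.
# Nothing is downloaded or installed; the script only prints a message.
printf 'echo "pretend-installer: would install okteto here"\n' | sh
```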

      If you are running Windows, you can alternatively download the file through your web browser and manually add it to your $PATH.

      Once the Okteto CLI is installed, you are ready to put your “Hello World” application in development mode.

      Step 4 — Putting Your Hello World Application in Development Mode

      The Okteto CLI is designed to swap the application running on a Kubernetes cluster with the code you have in your machine. To do so, Okteto uses the information provided from an Okteto manifest file. This file declares the Kubernetes deployment object that will swap with your local code.

      Create a new file called okteto.yaml with your favorite IDE or text editor:

      • nano okteto.yaml

      Let’s write a basic manifest where you define the deployment object name, the Docker base image to use, and a shell. We will return to this information later. Use the following sample content file:

      okteto.yaml

      name: hello-world
      image: okteto/golang:1
      workdir: /app
      command: ["bash"]
      

      Prepare to put your application in development mode by running the following command:

      • okteto up

      Output

       ✓  Development environment activated
       ✓  Files synchronized
          Namespace: default
          Name:      hello-world

      Welcome to your development environment. Happy coding!

      default:hello-world /app>

      The okteto up command swaps the “Hello World” application into a development environment, which means:

      • The Hello World application container is updated with the Docker image okteto/golang:1. This image contains the required dev tools to build, test, debug, and run the “Hello World” application.

      • A file synchronization service is created to keep your changes up-to-date between your local filesystem and your application pods.

      • A remote shell starts in your development environment. Now you can build, test, and run your application as if you were on your local machine.

      • Whatever process you run in the remote shell will get the same incoming traffic, the same environment variables, volumes, or secrets as the original “Hello World” application pods. This, in turn, gives you a highly realistic, production-like development environment.

      In the same console, now run the application as you would typically do (without building and pushing a Docker image), like this:

      • default:hello-world /app> go run main.go

      Output

      Starting hello-world server...

      The first time you run the application, Go will download your dependencies and compile your application. Wait for this process to finish and test your application by opening your browser and refreshing the page of your application, just as you did previously.

      Now you are ready to begin developing directly on Kubernetes.

      Step 5 — Developing Directly on Kubernetes

      Let’s start making changes to the “Hello World” application and then see how these changes get reflected in Kubernetes.

      Open the main.go file with your favorite IDE or text editor. For example, open a separate console and run the following command:

      • nano main.go

      Then, change your response message to Hello world from DigitalOcean!:

      main.go

      package main
      
      import (
          "fmt"
          "net/http"
      )
      
      func main() {
          fmt.Println("Starting hello-world server...")
          http.HandleFunc("/", helloServer)
          if err := http.ListenAndServe(":8080", nil); err != nil {
              panic(err)
          }
      }
      
      func helloServer(w http.ResponseWriter, r *http.Request) {
          fmt.Fprint(w, "Hello world from DigitalOcean!")
      }
      

      It is here that your workflow changes. Instead of building images and redeploying containers to update the “Hello World” application, Okteto will synchronize your changes to your development environment on Kubernetes.

      From the console where you executed the okteto up command, cancel the execution of go run main.go by pressing CTRL + C. Now rerun the application:

      • default:hello-world /app> go run main.go

      Output

      Starting hello-world server...

      Go back to the browser and reload the page for your “Hello World” application.


      Your code changes were applied instantly to Kubernetes, and all without requiring any commits, builds, or pushes.

      Conclusion

      Okteto transforms your Kubernetes cluster into a fully-featured development platform with the click of a button. In this tutorial you installed and configured the Okteto CLI to iterate your code changes directly on Kubernetes as fast as you can type code. Now you can head over to the Okteto samples repository to see how to use Okteto with different programming languages and debuggers.

      Also, if you share a Kubernetes cluster with your team, consider giving each member access to a secure Kubernetes namespace, configured to be isolated from other developers working on the same cluster. This great functionality is also provided by the Okteto App in the DigitalOcean Kubernetes Marketplace.




      How To Develop a Node.js TCP Server Application using PM2 and Nginx on Ubuntu 16.04


      The author selected OSMI to receive a donation as part of the Write for DOnations program.

      Introduction

      Node.js is a popular open-source JavaScript runtime environment built on Chrome’s V8 JavaScript engine. Node.js is used for building server-side and networking applications. TCP (Transmission Control Protocol) is a networking protocol that provides reliable, ordered, and error-checked delivery of a stream of data between applications. A TCP server can accept a TCP connection request, and once the connection is established, both sides can exchange data streams.

      In this tutorial, you’ll build a basic Node.js TCP server, along with a client to test the server. You’ll run your server as a background process using a powerful Node.js process manager called PM2. Then you’ll configure Nginx as a reverse proxy for the TCP application and test the client-server connection from your local machine.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Creating a Node.js TCP Application

      We will write a Node.js application using TCP sockets. This sample application will help you understand the net module in Node.js, which enables us to create raw TCP server and client applications.

      To begin, create a directory on your server in which you would like to place your Node.js application. For this tutorial, we will create our application in the ~/tcp-nodejs-app directory:

      • mkdir ~/tcp-nodejs-app

      Then switch to the new directory:

      • cd ~/tcp-nodejs-app

      Create a new file named package.json for your project. This file lists the packages that the application depends on. Creating this file will make the build reproducible as it will be easier to share this list of dependencies with other developers:

      • nano package.json

      You can also generate the package.json using the npm init command, which will prompt you for the details of the application, but we'll still have to manually alter the file to add additional pieces, including a startup command. Therefore, we'll manually create the file in this tutorial.

      Add the following JSON to the file, which specifies the application's name, version, the main file, the command to start the application, and the software license:

      package.json

      {
        "name": "tcp-nodejs-app",
        "version": "1.0.0",
        "main": "server.js",
        "scripts": {
          "start": "node server.js"
        },
        "license": "MIT"
      }
      

      The scripts field lets you define commands for your application. The setting you specified here lets you run the app by running npm start instead of running node server.js.

      The package.json file can also contain a list of runtime and development dependencies, but we won't have any third party dependencies for this application.

      Now that you have the project directory and package.json set up, let's create the server.

      In your application directory, create a server.js file:

      • nano server.js

      Node.js provides a module called net which enables TCP server and client communication. Load the net module with require(), then define variables to hold the port and host for the server:

      server.js

      const net = require('net');
      const port = 7070;
      const host = '127.0.0.1';
      

      We'll use port 7070 for this app, but you can use any available port you'd like. We're using 127.0.0.1 for the host, which ensures that our server only listens on the local network interface. Later we will place Nginx in front of this app as a reverse proxy; Nginx is well-suited to handling multiple connections and scaling horizontally.

      Then add this code to spawn a TCP server using the createServer() function from the net module, and start listening for connections on the port and host you defined by calling the server's listen() function:

      server.js

      ...
      const server = net.createServer();
      server.listen(port, host, () => {
          console.log('TCP Server is running on port ' + port +'.');
      });
      
      

      Save server.js and start the server:

      • npm start

      You'll see this output:

      Output

      TCP Server is running on port 7070.

      The TCP server is running on port 7070. Press CTRL+C to stop the server.

      Now that we know the server is listening, let's write the code to handle client connections.

      When a client connects to the server, the server triggers a connection event, which we'll observe. We'll define an array of connected clients, which we'll call sockets, and add each client instance to this array when the client connects.

      We'll use the data event to process the data stream from the connected clients, using the sockets array to broadcast data to all the connected clients.

      Add this code to the server.js file to implement these features:

      server.js

      
      ...
      
      let sockets = [];
      
      server.on('connection', function(sock) {
          console.log('CONNECTED: ' + sock.remoteAddress + ':' + sock.remotePort);
          sockets.push(sock);
      
          sock.on('data', function(data) {
              console.log('DATA ' + sock.remoteAddress + ': ' + data);
              // Write the data back to all the connected clients; each client will receive it as data from the server
              sockets.forEach(function(sock, index, array) {
                  sock.write(sock.remoteAddress + ':' + sock.remotePort + " said " + data + '\n');
              });
          });
      });
      

      This tells the server to listen to data events sent by connected clients. When the connected clients send any data to the server, we echo it back to all the connected clients by iterating through the sockets array.

      Then add a handler for close events, which will be triggered when a connected client terminates the connection. Whenever a client disconnects, we want to remove the client from the sockets array so we no longer broadcast to it. Add this code at the end of the connection block:

      server.js

      
      let sockets = [];
      server.on('connection', function(sock) {
      
          ...
      
          // Add a 'close' event handler to this instance of socket
          sock.on('close', function(data) {
              let index = sockets.findIndex(function(o) {
                  return o.remoteAddress === sock.remoteAddress && o.remotePort === sock.remotePort;
              })
              if (index !== -1) sockets.splice(index, 1);
              console.log('CLOSED: ' + sock.remoteAddress + ' ' + sock.remotePort);
          });
      });
      
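      As a side note, the handler above matches sockets by comparing remoteAddress and remotePort. Because the sock passed to the close handler is the same object reference that was pushed onto the sockets array on connect, removal by object identity also works and is a bit simpler. Here is a hedged sketch of that alternative; removeSocket is a helper name introduced for this example, not part of the tutorial's code:

```javascript
// Alternative cleanup sketch: drop a closed socket from the tracking array
// by object identity instead of comparing remoteAddress and remotePort.
// `removeSocket` is a helper name introduced for this example.
function removeSocket(sockets, sock) {
    const index = sockets.indexOf(sock); // same reference pushed on connect
    if (index !== -1) sockets.splice(index, 1);
    return sockets;
}

// Inside the connection handler, the close handler would then reduce to:
// sock.on('close', function() { removeSocket(sockets, sock); });
```

      Either approach is fine here, since an address and port pair is unique per live connection; identity comparison simply avoids the field-by-field search.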

      Here is the complete code for server.js:

      server.js

      const net = require('net');
      const port = 7070;
      const host = '127.0.0.1';
      
      const server = net.createServer();
      server.listen(port, host, () => {
          console.log('TCP Server is running on port ' + port + '.');
      });
      
      let sockets = [];
      
      server.on('connection', function(sock) {
          console.log('CONNECTED: ' + sock.remoteAddress + ':' + sock.remotePort);
          sockets.push(sock);
      
          sock.on('data', function(data) {
              console.log('DATA ' + sock.remoteAddress + ': ' + data);
              // Write the data back to all the connected clients; each client will receive it as data from the server
              sockets.forEach(function(sock, index, array) {
                  sock.write(sock.remoteAddress + ':' + sock.remotePort + " said " + data + '\n');
              });
          });
      
          // Add a 'close' event handler to this instance of socket
          sock.on('close', function(data) {
              let index = sockets.findIndex(function(o) {
                  return o.remoteAddress === sock.remoteAddress && o.remotePort === sock.remotePort;
              })
              if (index !== -1) sockets.splice(index, 1);
              console.log('CLOSED: ' + sock.remoteAddress + ' ' + sock.remotePort);
          });
      });
      

      Save the file and then start the server again:

      • npm start

      We have a fully functional TCP Server running on our machine. Next we'll write a client to connect to our server.

      Step 2 — Creating a Node.js TCP Client

      Our Node.js TCP Server is running, so let's create a TCP Client to connect to the server and test the server out.

      The Node.js server you just wrote is still running, blocking your current terminal session. We want to keep it running as we develop the client, so open a new terminal window or tab and log in to your server again from the new tab.

      Once connected, navigate to the tcp-nodejs-app directory:

      • cd ~/tcp-nodejs-app

      In the same directory, create a new file called client.js:

      • nano client.js

      The client will use the same net library used in the server.js file to connect to the TCP server. Add this code to the file to connect to the server using the IP address 127.0.0.1 on port 7070:

      client.js

      const net = require('net');
      const client = new net.Socket();
      const port = 7070;
      const host = '127.0.0.1';
      
      client.connect(port, host, function() {
          console.log('Connected');
          client.write("Hello From Client " + client.address().address);
      });
      

      This code will first try to connect to the TCP server to ensure that the server we created is running. Once the connection is established, the client will send "Hello From Client " + client.address().address to the server using the client.write function. Our server will receive this data and echo it back to the client.

      Once the client receives the data back from the server, we want it to print the server's response. Add this code to catch the data event and print the server's response to the command line:

      client.js

      client.on('data', function(data) {
          console.log('Server Says : ' + data);
      });
      

      Finally, handle disconnections from the server gracefully by adding this code:

      client.js

      client.on('close', function() {
          console.log('Connection closed');
      });
      
      

      Save the client.js file.

      Run the following command to start the client:

      • node client.js

      The connection will establish and the server will receive the data, echoing it back to the client:

      client.js Output

      Connected
      Server Says : 127.0.0.1:34548 said Hello From Client 127.0.0.1

      Switch back to the terminal where the server is running, and you'll see the following output:

      server.js Output

      CONNECTED: 127.0.0.1:34550
      DATA 127.0.0.1: Hello From Client 127.0.0.1

      You have verified that you can establish a TCP connection between your server and client apps.

      Press CTRL+C to stop the server. Then switch to the other terminal session and press CTRL+C to stop the client. You can now disconnect this terminal session from your server and return to your original terminal session.

      In the next step we'll launch the server with PM2 and run it in the background.

      Step 3 — Running the Server with PM2

      You have a working server that accepts client connections, but it runs in the foreground. Let's run the server using PM2 so it runs in the background and can restart gracefully.

      First, install PM2 on your server globally using npm:

      • sudo npm install -g pm2

      Once PM2 is installed, use it to run your server. Instead of running npm start to start the server, you'll use the pm2 command. Start the server:

      • pm2 start server.js

      You'll see output like this:

      Output

      [PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
      [PM2] PM2 Successfully daemonized
      [PM2] Starting /home/sammy/tcp-nodejs-app/server.js in fork_mode (1 instance)
      [PM2] Done.
      ┌────────┬──────┬────────┬───┬─────┬───────────┐
      │ Name   │ mode │ status │ ↺ │ cpu │ memory    │
      ├────────┼──────┼────────┼───┼─────┼───────────┤
      │ server │ fork │ online │ 0 │ 5%  │ 24.8 MB   │
      └────────┴──────┴────────┴───┴─────┴───────────┘
       Use `pm2 show <id|name>` to get more details about an app
      
      

      The server is now running in the background. However, if we reboot the machine, it won't be running anymore, so let's create a systemd service for it.

      Run the following command to generate and install PM2's systemd startup scripts. Be sure to run this with sudo so the systemd files install automatically:

      • sudo pm2 startup systemd

      You'll see this output:

      Output

      [PM2] Init System found: systemd
      Platform systemd
      ...
      [PM2] Writing init configuration in /etc/systemd/system/pm2-root.service
      [PM2] Making script booting at startup...
      [PM2] [-] Executing: systemctl enable pm2-root...
      Created symlink from /etc/systemd/system/multi-user.target.wants/pm2-root.service to /etc/systemd/system/pm2-root.service.
      [PM2] [v] Command successfully executed.
      +---------------------------------------+
      [PM2] Freeze a process list on reboot via: $ pm2 save
      [PM2] Remove init script via: $ pm2 unstartup systemd

      PM2 is now running as a systemd service.

      You can list all the processes PM2 is managing with the pm2 list command:

      • pm2 list

      You'll see your application in the list, with the ID of 0:

      Output

      ┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────┬─────────┬───────┬──────────┐
      │ App name │ id │ mode │ pid  │ status │ restart │ uptime │ cpu │ mem     │ user  │ watching │
      ├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────┼─────────┼───────┼──────────┤
      │ server   │ 0  │ fork │ 9075 │ online │ 0       │ 4m     │ 0%  │ 30.5 MB │ sammy │ disabled │
      └──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────┴─────────┴───────┴──────────┘

      In the preceding output, you'll notice that watching is disabled. This is a feature that reloads the server when you make a change to any of the application files. It's useful in development, but we don't need that feature in production.

      To get more info about any of the running processes, use the pm2 show command, followed by its ID. In this case, the ID is 0:

      • pm2 show 0

      This output shows the uptime, status, log file paths, and other info about the running application:

      Output

      Describing process with id 0 - name server
      ┌───────────────────┬──────────────────────────────────────────┐
      │ status            │ online                                   │
      │ name              │ server                                   │
      │ restarts          │ 0                                        │
      │ uptime            │ 7m                                       │
      │ script path       │ /home/sammy/tcp-nodejs-app/server.js     │
      │ script args       │ N/A                                      │
      │ error log path    │ /home/sammy/.pm2/logs/server-error-0.log │
      │ out log path      │ /home/sammy/.pm2/logs/server-out-0.log   │
      │ pid path          │ /home/sammy/.pm2/pids/server-0.pid       │
      │ interpreter       │ node                                     │
      │ interpreter args  │ N/A                                      │
      │ script id         │ 0                                        │
      │ exec cwd          │ /home/sammy/tcp-nodejs-app               │
      │ exec mode         │ fork_mode                                │
      │ node.js version   │ 8.11.2                                   │
      │ watch & reload    │ ✘                                        │
      │ unstable restarts │ 0                                        │
      │ created at        │ 2018-05-30T19:29:45.765Z                 │
      └───────────────────┴──────────────────────────────────────────┘
      Code metrics value
      ┌─────────────────┬────────┐
      │ Loop delay      │ 1.12ms │
      │ Active requests │ 0      │
      │ Active handles  │ 3      │
      └─────────────────┴────────┘
      Add your own code metrics: http://bit.ly/code-metrics
      Use `pm2 logs server [--lines 1000]` to display logs
      Use `pm2 monit` to monitor CPU and Memory usage server

      If the application status shows an error, you can use the error log path to open and review the error log to debug the error:

      • cat /home/sammy/.pm2/logs/server-error-0.log

      If you make changes to the server code, you'll need to restart the application's process to apply the changes, like this:

      • pm2 restart server

      PM2 is now managing the application. Now we'll use Nginx to proxy requests to the server.

      Step 4 — Set Up Nginx as a Reverse Proxy Server

      Your application is running and listening on 127.0.0.1, which means it will only accept connections from the local machine. We will set up Nginx as a reverse proxy which will handle incoming traffic and direct it to our server.

      To do this, we'll modify the Nginx configuration, using the stream {} block provided by Nginx's stream module to forward TCP connections to our Node.js server.

      We have to edit the main Nginx configuration file as the stream block that configures TCP connection forwarding only works as a top-level block. The default Nginx configuration on Ubuntu loads server blocks within the http block of the file, and the stream block can't be placed within that block.

      Open the file /etc/nginx/nginx.conf in your editor:

      • sudo nano /etc/nginx/nginx.conf

      Add the following lines at the end of your configuration file:

      /etc/nginx/nginx.conf

      
      ...
      
      stream {
          server {
            listen 3000;
            proxy_pass 127.0.0.1:7070;        
            proxy_protocol on;
          }
      }
      

      This listens for TCP connections on port 3000 and proxies the requests to your Node.js server running on port 7070. If your application is set to listen on a different port, update the port in the proxy_pass directive. The proxy_protocol directive tells Nginx to use the PROXY protocol to send client information to backend servers, which can then process that information as needed.

      Save the file and exit the editor.

      Check your Nginx configuration to ensure you didn't introduce any syntax errors:

      • sudo nginx -t

      Next, restart Nginx to enable the TCP and UDP proxy functionality:

      • sudo systemctl restart nginx

      Next, allow TCP connections to our server on that port. Use ufw to allow connections on port 3000:

      • sudo ufw allow 3000

      Assuming that your Node.js application is running, and your application and Nginx configurations are correct, you should now be able to access your application via the Nginx reverse proxy.

      Step 5 — Testing the Client-Server Connection

      Let's test the server out by connecting to the TCP server from our local machine using the client.js script. To do so, you'll need to download the client.js file you developed to your local machine and change the port and IP address in the script.

      First, on your local machine, download the client.js file using scp:

      • scp sammy@your_server_ip:~/tcp-nodejs-app/client.js client.js

      Open the client.js file in your editor:

      • nano client.js

      Change the port to 3000 and change the host to your server's IP address:

      client.js

      // A Client Example to connect to the Node.js TCP Server
      const net = require('net');
      const client = new net.Socket();
      const port = 3000;
      const host = 'your_server_ip';
      ...
      
      

      Save the file, exit the editor, and test things out by running the client:

      • node client.js

      You'll see the same output you saw when you ran it before, indicating that your client machine has connected through Nginx and reached your server:

      client.js Output

      Connected
      Server Says : 127.0.0.1:34584 said PROXY TCP4 your_local_ip_address your_server_ip 52920 3000 Hello From Client your_local_ip_address

      Since Nginx is proxying client connections to your server, your Node.js server won't see the real IP addresses of the clients; it will only see Nginx's IP address. Nginx can't pass the real IP address to the backend directly without system changes that could impact security, but because we enabled the PROXY protocol in Nginx, the Node.js server now receives an additional PROXY message that contains the real IP. If you need that IP address, you can adapt your server to process PROXY requests and parse out the data you need.
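      The echoed output above shows what that PROXY message looks like: a single plain-text line such as PROXY TCP4 client_ip proxy_ip client_port dest_port followed by a carriage return and newline, prepended to the first data the server receives. As a rough sketch of how you might parse it, consider the following; stripProxyHeader is a helper name introduced here for illustration, and it assumes the whole header arrives in the first data event, which is typical but not strictly guaranteed by the protocol:

```javascript
// Sketch: split a PROXY protocol v1 header off an incoming chunk.
// Assumes the full header arrives in the first 'data' event.
function stripProxyHeader(chunk) {
    const text = chunk.toString();
    if (!text.startsWith('PROXY ')) {
        // No header present; treat the whole chunk as payload.
        return { clientIp: null, clientPort: null, payload: text };
    }
    const end = text.indexOf('\r\n');
    // Header fields: PROXY <protocol> <src_ip> <dst_ip> <src_port> <dst_port>
    const parts = text.slice(0, end).split(' ');
    return {
        clientIp: parts[2],
        clientPort: Number(parts[4]),
        payload: text.slice(end + 2)
    };
}
```

      You would call a helper like this from the server's data handler on the first chunk of each connection, keep the extracted clientIp, and broadcast only the remaining payload.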

      You now have your Node.js TCP application running behind an Nginx reverse proxy and can continue to develop your server further.

      Conclusion

      In this tutorial you created a TCP application with Node.js, ran it with PM2, and served it behind Nginx. You also created a client application to connect to it from other machines. You can use this application to handle large chunks of data streams or to build real-time messaging applications.


