      Containerizing a Node.js Application for Development With Docker Compose


      Introduction

      If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

      • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
      • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
      • Environments are portable, allowing you to package and share your code with others.

      This tutorial will show you how to set up a development environment for a Node.js application using Docker. You will create two containers — one for the Node application and another for the MongoDB database — with Docker Compose. Because this application works with Node and MongoDB, our setup will do the following:

      • Synchronize the application code on the host with the code in the container to facilitate changes during development.
      • Ensure that changes to the application code work without a restart.
      • Create a user and password-protected database for the application’s data.
      • Persist this data.

      At the end of this tutorial, you will have a working shark information application running on Docker containers:

      Complete Shark Collection

      Prerequisites

To follow this tutorial, you will need:

• A development machine or server with Docker installed.
• Docker Compose installed on your development machine or server.

      Step 1 — Cloning the Project and Modifying Dependencies

      The first step in building this setup will be cloning the project code and modifying its package.json file, which includes the project’s dependencies. We will add nodemon to the project’s devDependencies, specifying that we will be using it during development. Running the application with nodemon ensures that it will be automatically restarted whenever you make changes to your code.

      First, clone the nodejs-mongo-mongoose repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Integrate MongoDB with Your Node Application, which explains how to integrate a MongoDB database with an existing Node application using Mongoose.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-mongo-mongoose.git node_project

Navigate to the node_project directory:

• cd node_project

Open the project's package.json file using nano or your favorite editor:

• nano package.json

      Beneath the project dependencies and above the closing curly brace, create a new devDependencies object that includes nodemon:

      ~/node_project/package.json

...
  "dependencies": {
    "ejs": "^2.6.1",
    "express": "^4.16.4",
    "mongoose": "^5.4.10"
  },
  "devDependencies": {
    "nodemon": "^1.18.10"
  }
}
      

      Save and close the file when you are finished editing.

      With the project code in place and its dependencies modified, you can move on to refactoring the code for a containerized workflow.

      Step 2 — Configuring Your Application to Work with Containers

      Modifying our application for a containerized workflow means making our code more modular. Containers offer portability between environments, and our code should reflect that by remaining as decoupled from the underlying operating system as possible. To achieve this, we will refactor our code to make greater use of Node's process.env property, which returns an object with information about your user environment at runtime. We can use this object in our code to dynamically assign configuration information at runtime with environment variables.
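As a quick illustration of this pattern, the following minimal sketch shows environment-driven configuration with fallback defaults. The file name and the LOG_LEVEL variable are hypothetical examples, not part of this project:

config.js (hypothetical example)

// A sketch of reading configuration from process.env with defaults.
// LOG_LEVEL is purely illustrative; only PORT and the Mongo variables
// appear later in this tutorial.
const config = {
  port: process.env.PORT || 8080,
  logLevel: process.env.LOG_LEVEL || 'info'
};

module.exports = config;

Running the application as PORT=3000 node app.js, for example, would override the default without touching the code.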

Let's begin with app.js, our main application entrypoint. Open the file:

• nano app.js

Inside, you will see a definition for a port constant, as well as a listen function that uses this constant to specify the port the application will listen on:

~/node_project/app.js

      ...
      const port = 8080;
      ...
      app.listen(port, function () {
        console.log('Example app listening on port 8080!');
      });
      

      Let's redefine the port constant to allow for dynamic assignment at runtime using the process.env object. Make the following changes to the constant definition and listen function:

~/node_project/app.js

      ...
      const port = process.env.PORT || 8080;
      ...
      app.listen(port, function () {
        console.log(`Example app listening on ${port}!`);
      });
      

Our new constant definition assigns port dynamically, using either the value passed in at runtime or 8080 as a default. Similarly, we've rewritten the listen function to use a template literal, which will interpolate the port value when listening for connections. Because we will be mapping our ports elsewhere, these revisions mean we won't have to continually revise this file as our environment changes.

      When you are finished editing, save and close the file.

Next, we will modify our database connection information to remove any configuration credentials. Open the db.js file, which contains this information:

• nano db.js

      Currently, the file does the following things:

      • Imports Mongoose, the Object Document Mapper (ODM) that we're using to create schemas and models for our application data.
      • Sets the database credentials as constants, including the username and password.
      • Connects to the database using the mongoose.connect method.

      For more information about the file, please see Step 3 of How To Integrate MongoDB with Your Node Application.

      Our first step in modifying the file will be redefining the constants that include sensitive information. Currently, these constants look like this:

      ~/node_project/db.js

      ...
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'your_password';
      const MONGO_HOSTNAME = '127.0.0.1';
      const MONGO_PORT = '27017';
      const MONGO_DB = 'sharkinfo';
      ...
      

      Instead of hardcoding this information, you can use the process.env object to capture the runtime values for these constants. Modify the block to look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      ...
      

      Save and close the file when you are finished editing.

      At this point, you have modified db.js to work with your application's environment variables, but you still need a way to pass these variables to your application. Let's create an .env file with values that you can pass to your application at runtime.

Open the file:

• nano .env

      This file will include the information that you removed from db.js: the username and password for your application's database, as well as the port setting and database name. Remember to update the username, password, and database name listed here with your own information:

      ~/node_project/.env

      MONGO_USERNAME=sammy
      MONGO_PASSWORD=your_password
      MONGO_PORT=27017
      MONGO_DB=sharkinfo
      

      Note that we have removed the host setting that originally appeared in db.js. We will now define our host at the level of the Docker Compose file, along with other information about our services and containers.

      Save and close this file when you are finished editing.
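Note that Compose's env_file option, which we will use in Step 4, is what injects these values into the container at runtime. As an optional aside that is not part of this tutorial's setup, if you wanted to run the application directly on your host while debugging, the dotenv package could load the same file:

Hypothetical host-only debugging snippet

// Optional and NOT part of this tutorial's setup: Compose injects the
// .env values for you inside containers. On a bare host you could
// install dotenv (npm install dotenv) and load the file at the top of app.js:
require('dotenv').config();         // reads .env into process.env
console.log(process.env.MONGO_DB);  // e.g. 'sharkinfo'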

Because your .env file contains sensitive information, you will want to ensure that it is included in your project's .dockerignore and .gitignore files so that it is not copied to version control or to your containers.

Open your .dockerignore file:

• nano .dockerignore

      Add the following line to the bottom of the file:

      ~/node_project/.dockerignore

      ...
      .gitignore
      .env
      

      Save and close the file when you are finished editing.

      The .gitignore file in this repository already includes .env, but feel free to check that it is there:

~/node_project/.gitignore

      ...
      .env
      ...
      

      At this point, you have successfully extracted sensitive information from your project code and taken measures to control how and where this information gets copied. Now you can add more robustness to your database connection code to optimize it for a containerized workflow.

      Step 3 — Modifying Database Connection Settings

      Our next step will be to make our database connection method more robust by adding code that handles cases where our application fails to connect to our database. Introducing this level of resilience to your application code is a recommended practice when working with containers using Compose.

Open db.js for editing:

• nano db.js

      You will see the code that we added earlier, along with the url constant for Mongo's connection URI and the Mongoose connect method:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, {useNewUrlParser: true});
      

      Currently, our connect method accepts an option that tells Mongoose to use Mongo's new URL parser. Let's add a few more options to this method to define parameters for reconnection attempts. We can do this by creating an options constant that includes the relevant information, in addition to the new URL parser option. Below your Mongo constants, add the following definition for an options constant:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500, 
        connectTimeoutMS: 10000,
      };
      ...
      

      The reconnectTries option tells Mongoose to continue trying to connect indefinitely, while reconnectInterval defines the period between connection attempts in milliseconds. connectTimeoutMS defines 10 seconds as the period that the Mongo driver will wait before failing the connection attempt.

We can now use the new options constant in the Mongoose connect method to fine-tune our Mongoose connection settings. We will also add a promise to handle potential connection errors.

      Currently, the Mongoose connect method looks like this:

      ~/node_project/db.js

      ...
      mongoose.connect(url, {useNewUrlParser: true});
      

      Delete the existing connect method and replace it with the following code, which includes the options constant and a promise:

      ~/node_project/db.js

      ...
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      In the case of a successful connection, our function logs an appropriate message; otherwise it will catch and log the error, allowing us to troubleshoot.

      The finished file will look like this:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500,
        connectTimeoutMS: 10000,
      };
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      Save and close the file when you have finished editing.
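If you would like ongoing visibility into the connection beyond the initial promise, Mongoose also emits lifecycle events on mongoose.connection. The following optional sketch is not part of the finished db.js above, but it can be useful when observing reconnection behavior in containers:

~/node_project/db.js (optional addition)

// Optional: Mongoose re-emits the driver's connection lifecycle events.
// These handlers only log; they do not change connection behavior.
mongoose.connection.on('connected', function () {
  console.log('Mongoose connected to ' + url);
});

mongoose.connection.on('disconnected', function () {
  console.log('Mongoose disconnected');
});

mongoose.connection.on('error', function (err) {
  console.log('Mongoose connection error: ' + err);
});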

      You have now added resiliency to your application code to handle cases where your application might fail to connect to your database. With this code in place, you can move on to defining your services with Compose.

      Step 4 — Defining Services with Docker Compose

      With your code refactored, you are ready to write the docker-compose.yml file with your service definitions. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

      Before defining our services, however, we will add a tool to our project called wait-for to ensure that our application only attempts to connect to our database once the database startup tasks are complete. This wrapper script uses netcat to poll whether or not a specific host and port are accepting TCP connections. Using it allows you to control your application's attempts to connect to your database by testing whether or not the database is ready to accept connections.

      Though Compose allows you to specify dependencies between services using the depends_on option, this order is based on whether or not the container is running rather than its readiness. Using depends_on won't be optimal for our setup, since we want our application to connect only when the database startup tasks, including adding a user and password to the admin authentication database, are complete. For more information on using wait-for and other tools to control startup order, please see the relevant recommendations in the Compose documentation.
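To make the mechanics concrete, the readiness test that wait-for performs with netcat can also be sketched in Node using the built-in net module. The snippet below is only an illustration of the idea; this tutorial uses the wait-for.sh shell script, not this file:

check-ready.js (illustrative only)

// Attempts a TCP connection to host:port once per second until it
// succeeds or the timeout elapses, mirroring what wait-for.sh does
// with netcat.
const net = require('net');

function waitFor(host, port, timeoutSeconds, done) {
  let attempts = 0;
  let settled = false;
  const finish = function (err) {
    if (!settled) {
      settled = true;
      clearInterval(timer);
      done(err);
    }
  };
  const timer = setInterval(function () {
    const socket = net.createConnection(port, host);
    socket.on('connect', function () {
      socket.end();
      finish(null);                  // the port is accepting connections
    });
    socket.on('error', function () {
      socket.destroy();
      if (++attempts >= timeoutSeconds) {
        finish(new Error('Operation timed out'));
      }
    });
  }, 1000);
}

waitFor('db', 27017, 15, function (err) {
  if (err) {
    console.error(err.message);
    process.exit(1);
  }
  console.log('db:27017 is ready');
});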

Open a file called wait-for.sh:

• nano wait-for.sh

      Paste the following code into the file to create the polling function:

~/node_project/wait-for.sh

      #!/bin/sh
      
      # original script: https://github.com/eficode/wait-for/blob/master/wait-for
      
      TIMEOUT=15
      QUIET=0
      
      echoerr() {
        if [ "$QUIET" -ne 1 ]; then printf "%sn" "$*" 1>&2; fi
      }
      
      usage() {
        exitcode="$1"
        cat << USAGE >&2
      Usage:
        $cmdname host:port [-t timeout] [-- command args]
        -q | --quiet                        Do not output any status messages
        -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
        -- COMMAND ARGS                     Execute command with args after the test finishes
      USAGE
        exit "$exitcode"
      }
      
      wait_for() {
        for i in `seq $TIMEOUT` ; do
          nc -z "$HOST" "$PORT" > /dev/null 2>&1
      
          result=$?
          if [ $result -eq 0 ] ; then
            if [ $# -gt 0 ] ; then
              exec "$@"
            fi
            exit 0
          fi
          sleep 1
        done
        echo "Operation timed out" >&2
        exit 1
      }
      
      while [ $# -gt 0 ]
      do
        case "$1" in
          *:* )
    HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
    PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
          shift 1
          ;;
          -q | --quiet)
          QUIET=1
          shift 1
          ;;
          -t)
          TIMEOUT="$2"
          if [ "$TIMEOUT" = "" ]; then break; fi
          shift 2
          ;;
          --timeout=*)
          TIMEOUT="${1#*=}"
          shift 1
          ;;
          --)
          shift
          break
          ;;
          --help)
          usage 0
          ;;
          *)
          echoerr "Unknown argument: $1"
          usage 1
          ;;
        esac
      done
      
      if [ "$HOST" = "" -o "$PORT" = "" ]; then
        echoerr "Error: you need to provide a host and port to test."
        usage 2
      fi
      
      wait_for "$@"
      

      Save and close the file when you are finished adding the code.

Make the script executable:

• chmod +x wait-for.sh

Next, open the docker-compose.yml file:

• nano docker-compose.yml

      First, define the nodejs application service by adding the following code to the file:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB 
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      

      The nodejs service definition includes the following options:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the image build — in this case, the current project directory.
      • dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.
      • image, container_name: These apply names to the image and container.
      • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.
      • env_file: This tells Compose that we would like to add environment variables from a file called .env, located in our build context.
      • environment: Using this option allows you to add the Mongo connection settings you defined in the .env file. Note that we are not setting NODE_ENV to development, since this is Express's default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages.
        Also note that we have specified the db database container as the host, as discussed in Step 2.
      • ports: This maps port 80 on the host to port 8080 on the container.
      • volumes: We are including two types of mounts here:

        • The first is a bind mount that mounts our application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
  • The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm will create a new node_modules directory on the container that includes the packages required to run the application. The bind mount we just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind will map an empty directory to the container, overriding the new node_modules directory and preventing our application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind.

        Keep the following points in mind when using this approach:

        • Your bind will mount the contents of the node_modules directory on the container to the host and this directory will be owned by root, since the named volume was created by Docker.
        • If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created on the container. The setup that we're building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won't be working with npm on your host. This is in keeping with a twelve-factor approach to application development, which minimizes dependencies between execution environments.
• networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.

      • command: This option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that we set in our application Dockerfile. Here, we are running the application using the wait-for script, which will poll the db service on port 27017 to test whether or not the database service is ready. Once the readiness test succeeds, the script will execute the command we have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes we make to our code are reloaded without our having to restart the application.

      Next, create the db service by adding the following code below the application service definition:

      ~/node_project/docker-compose.yml

      ...
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:  
            - dbdata:/data/db   
          networks:
            - app-network  
      

      Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes to the image, environment, and volumes definitions:

      • image: To create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. We are pinning a particular version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.
• MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: The mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. We have set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using the values from our .env file, which we pass to the db service using the env_file option. Doing this means that our sammy application user will be a root user on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges (see the sketch after this list).

        Note: Keep in mind that these variables will not take effect if you start the container with an existing data directory in place.
      • dbdata:/data/db: The named volume dbdata will persist the data stored in Mongo's default data directory, /data/db. This will ensure that you don't lose data in cases where you stop or remove containers.

      We've also added the db service to the app-network network with the networks option.
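As noted above, in production you would typically create a dedicated application user scoped to a single database instead of reusing the root user. The following is only a sketch of what that could look like in the mongo shell; the sharkapp name and password are hypothetical placeholders:

mongo shell (example only)

// Connect first, for example:
// docker exec -it db mongo -u sammy -p --authenticationDatabase admin
use sharkinfo
db.createUser({
  user: 'sharkapp',                                // hypothetical application user
  pwd: 'a_strong_password',                        // replace with a real secret
  roles: [{ role: 'readWrite', db: 'sharkinfo' }]  // scoped to this database only
});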

      As a final step, add the volume and network definitions to the bottom of the file:

      ~/node_project/docker-compose.yml

      ...
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      The user-defined bridge network app-network enables communication between our containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, our db and nodejs containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.

      Our top-level volumes key defines the volumes dbdata and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that's managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the dbdata volume even if we remove and recreate the db container.

      The finished docker-compose.yml file will look like this:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js 
      
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:     
            - dbdata:/data/db
          networks:
            - app-network  
      
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      Save and close the file when you are finished editing.

      With your service definitions in place, you are ready to start the application.

      Step 5 — Testing the Application

      With your docker-compose.yml file in place, you can create your services with the docker-compose up command. You can also test that your data will persist by stopping and removing your containers with docker-compose down.

First, build the container images and create the services by running docker-compose up with the -d flag, which will then run the nodejs and db containers in the background:

• docker-compose up -d

      You will see output confirming that your services have been created:

      Output

...
Creating db ... done
Creating nodejs ... done

You can also get more detailed information about the startup processes by displaying the log output from the services:

• docker-compose logs

      You will see something like this if everything has started correctly:

      Output

...
nodejs | [nodemon] starting `node app.js`
nodejs | Example app listening on 8080!
nodejs | MongoDB is connected
...
db | 2019-02-22T17:26:27.329+0000 I ACCESS   [conn2] Successfully authenticated as principal sammy on admin

You can also check the status of your containers with docker-compose ps:

• docker-compose ps

      You will see output indicating that your containers are running:

      Output

  Name               Command               State          Ports
----------------------------------------------------------------------
db       docker-entrypoint.sh mongod     Up      27017/tcp
nodejs   ./wait-for.sh db:27017 -- ...   Up      0.0.0.0:80->8080/tcp

      With your services running, you can visit http://your_server_ip in the browser. You will see a landing page that looks like this:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

As a final step, you can test that the data you've just entered will persist if you remove your database container.

Back at your terminal, type the following command to stop and remove your containers and network:

• docker-compose down

      Note that we are not including the --volumes option; hence, our dbdata volume is not removed.

      The following output confirms that your containers and network have been removed:

      Output

Stopping nodejs ... done
Stopping db     ... done
Removing nodejs ... done
Removing db     ... done
Removing network node_project_app-network

Recreate the containers:

• docker-compose up -d

      Now head back to the shark information form:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database without the loss of the data you've already entered:

      Complete Shark Collection

      Your application is now running on Docker containers with data persistence and code synchronization enabled.

      Conclusion

      By following this tutorial, you have created a development setup for your Node application using Docker containers. You've made your project more modular and portable by extracting sensitive information and decoupling your application's state from your application code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

      As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics.

      To learn more about the code used in this tutorial, please see How To Build a Node.js Application with Docker and How To Integrate MongoDB with Your Node Application. For information about deploying a Node application with an Nginx reverse proxy using containers, please see How To Secure a Containerized Node.js Application with Nginx, Let's Encrypt, and Docker Compose.




      How to Install Node.js and Create a Local Development Environment on macOS


      Introduction

      Node.js is an open source JavaScript runtime environment for easily building server-side applications. It’s also the runtime that powers many client-side development tools for modern JavaScript frameworks.

      In this tutorial, you’ll set up a Node.js programming environment on your local macOS machine using Homebrew, and you’ll test your environment out by writing a simple Node.js program.

      Prerequisites

      You will need a macOS computer running High Sierra or higher with administrative access and an internet connection.

      Step 1 — Using the macOS Terminal

      You’ll use the command line to install Node.js and run various commands related to developing Node.js applications. The command line is a non-graphical way to interact with your computer. Instead of clicking buttons with your mouse, you’ll type commands as text and receive text-based feedback. The command line, also known as a shell, lets you automate many tasks you do on your computer daily, and is an essential tool for software developers.

      To access the command line interface, you’ll use the Terminal application provided by macOS. Like any other application, you can find it by going into Finder, navigating to the Applications folder, and then into the Utilities folder. From here, double-click the Terminal application to open it up. Alternatively, you can use Spotlight by holding down the COMMAND key and pressing SPACE to find Terminal by typing it out in the box that appears.

      macOS Terminal

      If you’d like to get comfortable using the command line, take a look at An Introduction to the Linux Terminal. The command line interface on macOS is very similar, and the concepts in that tutorial are directly applicable.

      Now that you have the Terminal running, let’s install some prerequisites we’ll need for Node.js.

Step 2 — Installing Xcode's Command Line Tools

Xcode is an integrated development environment (IDE) that comprises software development tools for macOS. You won't need Xcode to write Node.js programs, but Node.js and some of its components will rely on Xcode's Command Line Tools package.

Execute this command in the Terminal to download and install these components:

• xcode-select --install

      You'll be prompted to start the installation, and then prompted again to accept a software license. Then the tools will download and install automatically.

      We're now ready to install the package manager Homebrew, which will let us install the latest version of Node.js.

      Step 3 — Installing and Setting Up Homebrew

      While the command line interface on macOS has a lot of the functionality you'd find in Linux and other Unix systems, it does not ship with a good package manager. A package manager is a collection of software tools that work to automate software installations, configurations, and upgrades. They keep the software they install in a central location and can maintain all software packages on the system in formats that are commonly used. Homebrew is a free and open-source software package managing system that simplifies the installation of software on macOS. We'll use Homebrew to install the most recent version of Node.js.

      To install Homebrew, type this command into your Terminal window:

      • /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

      The command uses curl to download the Homebrew installation script from Homebrew's Git repository on GitHub.

      Let’s walk through the flags that are associated with the curl command:

• The -f or --fail flag tells curl not to output an HTML error document on server errors.
      • The -s or --silent flag mutes curl so that it does not show the progress meter, and combined with the -S or --show-error flag it will ensure that curl shows an error message if it fails.
      • The -L or --location flag will tell curl to handle redirects. If the server reports that the requested page has moved to a different location, it'll automatically execute the request again using the new location.

      Once curl downloads the script, it's then executed by the Ruby interpreter that ships with macOS, starting the Homebrew installation process.

      The installation script will explain what it will do and will prompt you to confirm that you want to do it. This lets you know exactly what Homebrew is going to do to your system before you let it proceed. It also ensures you have the prerequisites in place before it continues.

      You'll be prompted to enter your password during the process. However, when you type your password, your keystrokes will not display in the Terminal window. This is a security measure and is something you'll see often when prompted for passwords on the command line. Even though you don't see them, your keystrokes are being recorded by the system, so press the RETURN key once you’ve entered your password.

      Press the letter y for “yes” whenever you are prompted to confirm the installation.

Now let's verify that Homebrew is set up correctly. Execute this command:

• brew doctor

      If no updates are required at this time, you'll see this in your Terminal:

      Output

      Your system is ready to brew.

      Otherwise, you may get a warning to run another command such as brew update to ensure that your installation of Homebrew is up to date.

      Now that Homebrew is installed, you can install Node.js.

      Step 4 — Installing Node.js

      With Homebrew installed, you can install a wide range of software and developer tools. We'll use it to install Node.js and its dependencies.

You can use Homebrew to search for everything you can install with the brew search command, but to provide us with a shorter list, let's instead search for packages related to Node.js:

• brew search nodejs

      You'll see a list of packages you can install, like this:

      Output

==> Formulae
node.js
nodejs

      Both of these packages install Node.js on your system. They both exist just in case you can't remember if you need to use nodejs or node.js.

Execute this command to install the nodejs package:

• brew install nodejs

      You'll see output similar to the following in your Terminal. Homebrew will install many dependencies, but will eventually download and install Node.js itself:

      Output

==> Installing dependencies for node: icu4c
==> Installing node dependency: icu4c
==> Installing node
==> Downloading https://homebrew.bintray.com/bottles/node-11.0.0.sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring node-11.0.0.sierra.bottle.tar.gz
...
==> Summary
🍺  /usr/local/Cellar/node/11.0.0: 3,936 files, 50.1MB

      In addition to Node.js itself, Homebrew installs a few related tools, including npm, which makes it easy to install and update Node.js libraries and packages you might use in your own projects.

To check the version of Node.js that you installed, type the following command:

• node -v

      This will output the specific version of Node.js that is currently installed, which will by default be the most up-to-date stable version of Node.js that is available.

      Output

      v11.0.0

Check the version of npm with the following command:

• npm -v

      You'll see the version displayed:

      Output

      6.4.1

      You'll use npm to install additional components, libraries, and frameworks.

      To update your version of Node.js, you can first update Homebrew to get the latest list of packages, and then upgrade Node.js itself:

      • brew update
      • brew upgrade nodejs

      Now that Node.js is installed, let's write a program to ensure everything works.

      Step 5 — Creating a Simple Program

      Let's create a simple "Hello, World" program. This will make sure that our environment is working and gets you comfortable creating and running a Node.js program.

To do this, create a new file called hello.js using nano:

• nano hello.js

      Type the following code into the file:

      hello.js

      let message = "Hello, World!";
      console.log(message);
      

      Exit the editor by pressing CTRL+X. Then press y when prompted to save the file. You'll be returned to your prompt.

Now run the program with the following command:

• node hello.js

      The program executes and displays its output to the screen:

      Output

      Hello, World!

      This simple program proves that you have a working development environment. You can use this environment to continue exploring Node.js and build larger, more interesting projects.
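If you would like a next step beyond this tutorial, Node's built-in http module is enough to serve a first web response. The following is a minimal sketch; the file name and port are only examples:

server.js

// A minimal HTTP server using only Node's built-in http module.
const http = require('http');

const server = http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello, World!\n');
});

// Visit http://127.0.0.1:3000 in your browser once this is running.
server.listen(3000, '127.0.0.1', function () {
  console.log('Server running at http://127.0.0.1:3000/');
});

Run it with node server.js and stop it with CTRL+C.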

      Conclusion

You've successfully installed Node.js and npm and tested your setup by creating and running a simple program. You can now use this environment to develop client-side apps or server-side apps.




      The Importance of Data Backups

Backups are important, even at the filesystem level!

I’ve been a Linux user for around 13 years now and am amazed at how much the overall experience has progressed. Thirteen years ago you were usually running either Slackware 3, Red Hat 5.x, or Mandrake. Being 14, I was one of the “newbies” stuck on Mandrake, because my 56k modem was what is known as a softmodem – a modem that lacks quite a bit of hardware and relies on your computer’s resources to actually function. Back then, making these work in Linux was a complete nightmare, and Mandrake was the only distribution that worked out of the box with softmodems.

Back in those days you didn’t have the package management tools you have today, be it yum, aptitude, portage, or any of the other various package management utilities. You had rpmfind.net to find your RPMs, praying to god you found the right ones for your specific operating system while playing the dependency-tracking game. Slackware was strictly source installs, and the truly Linux-proficient would pride themselves on how small a Slackware install footprint they could get while still having a running desktop.

Growing frustrated at not understanding the build process and being constantly referred to as a newbie who uses “N00bdrake”, I forced myself into the depths of Linux, and after a year or so had a working Slackware box with XFree86 running Enlightenment, with sound and support for my modem. I learned an extensive amount about how Linux works: compiling your own kernel, searching mailing lists to find patches for bugs, applying patches to software, and walking through your hardware to build proper .conf files so daemons would function correctly. It seems that this kind of knowledge is being lost among Linux users these days, as they are no longer forced to drop down to the lowest level of Linux to make their systems function.

A good example of this is a hard drive I dealt with recently: an ext3 filesystem that was showing no data on it. If you used the command df, which shows partition disk usage, the data was shown as taking up space, but you couldn’t see it. A lot of people conferred and figured that the data was completely lost for good, while I sat there saying NOPE, waiting for someone to give the correct answer. Unable to get one, I divulged that the reason this happened is that a special block known as a superblock had become corrupted, and the journal on the filesystem had lost all its information. Issuing a fsck would not fix the issue and would routinely check out as “ok”, since it was using the bad primary superblock. There are actually multiple superblocks on ext2, ext3, and ext4 partitions; these exist specifically as backups, so that should your main one become corrupt you can correct exactly this kind of issue.

Having toasted Linux countless times playing around with things such as software RAID, and having hard-locked machines with poorly configured kernels, I have probably spent more time than I should have reading about how the ext filesystem works. You might have overlooked it when creating a filesystem in Linux, but you will see output similar to this when creating an ext filesystem:

      Superblock backups stored on blocks:

      32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

These are very, very important blocks, integral to maintaining the data on your filesystem. Lost these numbers? You can still recover them with a few steps:

1. First you need to know what block size you used on your filesystem. The default is 1k, so unless you passed a specific block-size option when running mkfs.ext3, your block size is 1024.
2. Now issue the command “mke2fs -n -b block-size /dev/sdc1”. This assumes that sdc1 is the corrupt partition showing no data. Since you’re passing the -n flag, the command will not actually create a new filesystem, but it will print those precious backup superblocks.
3. Now take any of those superblocks and make sure that your partition is unmounted. Issue “fsck -fy -c -b 163840 /dev/sdc1” to hopefully fix your partition. Once it completes, mount the drive, and more than likely all your data will be in the folder lost+found. It may have lost the original folder names, but at least your data is there, and with a little bit of digging you can figure out which folder is which.

Now take a breather, relax, and be happy that your data is not completely gone. I suggest in the future pulling up a source-based distribution like Slackware and trying to set up an entire system without using any package management. See how it goes; prepare to read a lot of documentation, but in the end you will be thankful, as you will learn more about Linux this way than by any other method.