
      How To Launch Child Processes in Node.js


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      When a user executes a single Node.js program, it runs as a single operating system (OS) process that represents the instance of the program running. Within that process, Node.js executes programs on a single thread. As mentioned earlier in this series with the How To Write Asynchronous Code in Node.js tutorial, because only one thread can run on one process, operations that take a long time to execute in JavaScript can block the Node.js thread and delay the execution of other code. A key strategy to work around this problem is to launch a child process, or a process created by another process, when faced with long-running tasks. When a new process is launched, the operating system can employ multiprocessing techniques to ensure that the main Node.js process and the additional child process run concurrently, or at the same time.

      Node.js includes the child_process module, which has functions to create new processes. Aside from dealing with long-running tasks, this module can also interface with the OS and run shell commands. System administrators can use Node.js to run shell commands to structure and maintain their operations as a Node.js module instead of shell scripts.

      In this tutorial, you will create child processes while executing a series of sample Node.js applications. You’ll create processes with the child_process module by retrieving the results of a child process via a buffer or string with the exec() function, and then from a data stream with the spawn() function. You’ll finish by using fork() to create a child process of another Node.js program that you can communicate with as it’s running. To illustrate these concepts, you will write a program to list the contents of a directory, a program to find files, and a web server with multiple endpoints.

      Prerequisites
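
      To complete this tutorial, you will need Node.js installed on your development machine, since every example in this guide is run with the node command. You will also use a terminal to create files and run the example programs.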

      Step 1 — Creating a Child Process with exec()

      Developers commonly create child processes to execute commands on their operating system when they need to manipulate the output of their Node.js programs with a shell, such as using shell piping or redirection. The exec() function in Node.js creates a new shell process and executes a command in that shell. The output of the command is kept in a buffer in memory, which you can access via a callback function passed into exec().

      Let’s begin creating our first child processes in Node.js. First, we need to set up our coding environment to store the scripts we’ll create throughout this tutorial. In the terminal, create a folder called child-processes:
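
      • mkdir child-processes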

      Enter that folder in the terminal with the cd command:
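
      • cd child-processes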

      Create a new file called listFiles.js and open the file in a text editor. In this tutorial we will use nano, a terminal text editor:
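
      • nano listFiles.js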

      We’ll be writing a Node.js module that uses the exec() function to run the ls command. The ls command lists the files and folders in a directory. This program takes the output from the ls command and displays it to the user.

      In the text editor, add the following code:

      ~/child-processes/listFiles.js

      const { exec } = require('child_process');
      
      exec('ls -lh', (error, stdout, stderr) => {
        if (error) {
          console.error(`error: ${error.message}`);
          return;
        }
      
        if (stderr) {
          console.error(`stderr: ${stderr}`);
          return;
        }
      
        console.log(`stdout:\n${stdout}`);
      });
      

      We first import the exec() function from the child_process module using JavaScript destructuring. Once imported, we call exec(). The first argument is the command we would like to run. In this case, it’s ls -lh, which lists all the files and folders in the current directory in long format, with a total file size in human-readable units at the top of the output.

      The second argument is a callback function with three parameters: error, stdout, and stderr. If the command failed to run, error will capture the reason why it failed. This can happen if the shell cannot find the command you’re trying to execute. If the command is executed successfully, any data it writes to the standard output stream is captured in stdout, and any data it writes to the standard error stream is captured in stderr.

      Note: It’s important to keep the difference between error and stderr in mind. If the command itself fails to run, error will capture the error. If the command runs but returns output to the error stream, stderr will capture it. The most resilient Node.js programs will handle all possible outputs for a child process.
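
      To see the distinction in isolation, here is a minimal sketch, separate from listFiles.js, that runs two throwaway commands in a Unix-like shell:

      const { exec } = require('child_process');
      
      // The shell cannot find this command, so the callback's error parameter is set.
      exec('not-a-real-command', (error, stdout, stderr) => {
        console.log(error ? `error: ${error.message}` : 'no error');
      });
      
      // This command exits with code 0 but writes to the error stream, so error is
      // null while stderr contains the text.
      exec('echo "just a warning" >&2', (error, stdout, stderr) => {
        console.log(error);                 // null
        console.error(`stderr: ${stderr}`); // stderr: just a warning
      });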

      In our callback function, we first check if we received an error. If we did, we display the error’s message (a property of the Error object) with console.error() and end the function with return. We then check if the command printed an error message and return if so. If the command successfully executes, we log its output to the console with console.log().

      Let’s run this file to see it in action. First, save and exit nano by pressing CTRL+X.

      Back in your terminal, run your application with the node command:
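
      • node listFiles.js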

      Your terminal will display the following output:

      Output

      stdout:
      total 4.0K
      -rw-rw-r-- 1 sammy sammy 280 Jul 27 16:35 listFiles.js

      This lists the contents of the child-processes directory in long format, along with the size of the contents at the top. Your results will have your own user and group in place of sammy. This shows that the listFiles.js program successfully ran the shell command ls -lh.

      Now let’s look at another way to execute concurrent processes. Node.js’s child_process module can also run executable files with the execFile() function. The key difference between the execFile() and exec() functions is that the first argument of execFile() is now a path to an executable file instead of a command. The output of the executable file is stored in a buffer like exec(), which we access via a callback function with error, stdout, and stderr parameters.

      Note: Scripts in Windows such as .bat and .cmd files cannot be run with execFile() because the function does not create a shell when running the file. On Unix, Linux, and macOS, executable scripts do not always need a shell to run. However, a Windows machine needs a shell to execute scripts. To execute script files on Windows, use exec(), since it creates a new shell. Alternatively, you can use spawn(), which you’ll use in the next step.

      However, note that you can execute .exe files in Windows successfully using execFile(). This limitation only applies to script files that require a shell to execute.

      Let’s begin by adding an executable script for execFile() to run. We’ll write a bash script that downloads the Node.js logo from the Node.js website and Base64 encodes it to convert its data to a string of ASCII characters.

      Create a new shell script file called processNodejsImage.sh:

      • nano processNodejsImage.sh

      Now write a script to download the image and Base64 encode it:

      ~/child-processes/processNodejsImage.sh

      #!/bin/bash
      curl -s https://nodejs.org/static/images/logos/nodejs-new-pantone-black.svg > nodejs-logo.svg
      base64 nodejs-logo.svg
      

      The first statement is a shebang statement. It’s used in Unix, Linux, and macOS when we want to specify a shell to execute our script. The second statement is a curl command. The cURL utility, whose command is curl, is a command-line tool that can transfer data to and from a server. We use cURL to download the Node.js logo from the website, and then we use redirection to save the downloaded data to a new file nodejs-logo.svg. The last statement uses the base64 utility to encode the nodejs-logo.svg file we downloaded with cURL. The script then outputs the encoded string to the console.

      Save and exit before continuing.

      In order for our Node program to run the bash script, we have to make it executable. To do this, run the following:

      • chmod u+x processNodejsImage.sh

      This will give your current user the permission to execute the file.

      With our script in place, we can write a new Node.js module to execute it. This module will use execFile() to run the bash script in a child process, catching any errors and displaying all output to the console.

      In your terminal, make a new JavaScript file called getNodejsImage.js:
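
      • nano getNodejsImage.js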

      Type the following code in the text editor:

      ~/child-processes/getNodejsImage.js

      const { execFile } = require('child_process');
      
      execFile(__dirname + '/processNodejsImage.sh', (error, stdout, stderr) => {
        if (error) {
          console.error(`error: ${error.message}`);
          return;
        }
      
        if (stderr) {
          console.error(`stderr: ${stderr}`);
          return;
        }
      
        console.log(`stdout:\n${stdout}`);
      });
      

      We use JavaScript destructuring to import the execFile() function from the child_process module. We then use that function, passing the file path as the first argument. __dirname contains the directory path of the module in which it is written. Node.js provides the __dirname variable to a module when the module runs. By using __dirname, our script will always find the processNodejsImage.sh file across different operating systems, no matter where we run getNodejsImage.js. Note that for our current project setup, getNodejsImage.js and processNodejsImage.sh must be in the same folder.

      The second argument is a callback with the error, stdout, and stderr parameters. Like with our previous example that used exec(), we check for each possible output of the script file and log them to the console.
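
      As an aside, Node.js’s built-in path module can build the same file path while handling the separator for you; this optional variation is only a sketch and not part of the tutorial’s files:

      const path = require('path');
      const { execFile } = require('child_process');
      
      // Equivalent to __dirname + '/processNodejsImage.sh', with the separator handled by path.join().
      execFile(path.join(__dirname, 'processNodejsImage.sh'), (error, stdout, stderr) => {
        // handle error, stderr, and stdout exactly as before
      });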

      In your text editor, save this file and exit from the editor.

      In your terminal, use node to execute the module:
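
      • node getNodejsImage.js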

      Running this script will produce output like this:

      Output

      stdout: PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDQyLjQgMjcwLjkiPjxkZWZzPjxsaW5lYXJHcmFkaWVudCBpZD0iYiIgeDE9IjE4MC43IiB5MT0iODAuNyIge ...

      Note that we truncated the output in this article because of its large size.

      Before base64 encoding the image, processNodejsImage.sh first downloads it. You can also verify that you downloaded the image by inspecting the current directory.

      Execute listFiles.js to find the updated list of files in our directory:
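
      • node listFiles.js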

      The script will display content similar to the following on the terminal:

      Output

      stdout:
      total 20K
      -rw-rw-r-- 1 sammy sammy  316 Jul 27 17:56 getNodejsImage.js
      -rw-rw-r-- 1 sammy sammy  280 Jul 27 16:35 listFiles.js
      -rw-rw-r-- 1 sammy sammy 5.4K Jul 27 18:01 nodejs-logo.svg
      -rwxrw-r-- 1 sammy sammy  129 Jul 27 17:56 processNodejsImage.sh

      We’ve now successfully executed processNodejsImage.sh as a child process in Node.js using the execFile() function.

      The exec() and execFile() functions can run external commands in a Node.js child process. Node.js also provides another method with similar functionality, spawn(). The difference is that instead of getting the output of the commands all at once, we get it in chunks via a stream. In the next section we’ll use the spawn() function to create a child process.

      Step 2 — Creating a Child Process with spawn()

      The spawn() function runs a command in a process. This function returns data via the stream API. Therefore, to get the output of the child process, we need to listen for stream events.

      Streams in Node.js are instances of event emitters. If you would like to learn more about listening for events and the foundations of interacting with streams, you can read our guide on Using Event Emitters in Node.js.

      It’s often a good idea to choose spawn() over exec() or execFile() when the command you want to run can output a large amount of data. With a buffer, as used by exec() and execFile(), all the processed data is stored in the computer’s memory. For large amounts of data, this can degrade system performance. With a stream, the data is processed and transferred in small chunks. Therefore, you can process a large amount of data without using too much memory at any one time.
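
      For reference, the buffer that exec() and execFile() use has a configurable ceiling through the maxBuffer option, measured in bytes; if a command produces more output than that, Node.js terminates the child process. The following sketch, which assumes a command with a large amount of output, shows the option in use:

      const { exec } = require('child_process');
      
      // Raise the buffer ceiling to roughly 10 MB for a command expected to print a lot of data.
      exec('find /', { maxBuffer: 1024 * 1024 * 10 }, (error, stdout, stderr) => {
        if (error) {
          console.error(`error: ${error.message}`);
          return;
        }
        console.log(stdout);
      });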

      Let’s see how we can use spawn() to make a child process. We will write a new Node.js module that creates a child process to run the find command. We will use the find command to list all the files in the current directory.

      Create a new file called findFiles.js:
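
      • nano findFiles.js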

      In your text editor, begin by calling the spawn() function:

      ~/child-processes/findFiles.js

      const { spawn } = require('child_process');
      
      const child = spawn('find', ['.']);
      

      We first imported the spawn() function from the child_process module. We then called the spawn() function to create a child process that executes the find command. We hold the reference to the process in the child variable, which we will use to listen to its streamed events.

      The first argument in spawn() is the command to run, in this case find. The second argument is an array that contains the arguments for the executed command. In this case, we are telling Node.js to execute the find command with the argument ., thereby making the command find all the files in the current directory. The equivalent command in the terminal is find ..

      With the exec() and execFile() functions, we wrote the arguments along with the command in one string. However, with spawn(), all arguments to commands must be entered in the array. That’s because spawn(), unlike exec() and execFile(), does not create a new shell before running a process. To have commands with their arguments in one string, you need Node.js to create a new shell as well.
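
      If you do want shell features such as piping while still streaming the output, spawn() accepts an options object with a shell flag. The following sketch is an aside and not part of findFiles.js:

      const { spawn } = require('child_process');
      
      // With shell: true, the whole command string is handed to a shell, so pipes work,
      // at the cost of starting that extra shell process.
      const child = spawn('find . | wc -l', { shell: true });
      
      child.stdout.on('data', (data) => {
        console.log(`stdout: ${data}`);
      });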

      Let’s continue our module by adding listeners for the command’s output. Add the following highlighted lines:

      ~/child-processes/findFiles.js

      const { spawn } = require('child_process');
      
      const child = spawn('find', ['.']);
      
      child.stdout.on('data', data => {
        console.log(`stdout:\n${data}`);
      });
      
      child.stderr.on('data', data => {
        console.error(`stderr: ${data}`);
      });
      

      Commands can return data in either the stdout stream or the stderr stream, so you added listeners for both. You can add a listener by calling the on() method of each stream object. The data event from each stream gives us the command’s output to that stream. Whenever we get data on either stream, we log it to the console.

      We then listen to two other events: the error event if the command fails to execute or is interrupted, and the close event for when the command has finished execution, thus closing the stream.

      In the text editor, complete the Node.js module by writing the following highlighted lines:

      ~/child-processes/findFiles.js

      const { spawn } = require('child_process');
      
      const child = spawn('find', ['.']);
      
      child.stdout.on('data', (data) => {
        console.log(`stdout:\n${data}`);
      });
      
      child.stderr.on('data', (data) => {
        console.error(`stderr: ${data}`);
      });
      
      child.on('error', (error) => {
        console.error(`error: ${error.message}`);
      });
      
      child.on('close', (code) => {
        console.log(`child process exited with code ${code}`);
      });
      

      For the error and close events, you set up a listener directly on the child variable. If an error event occurs, Node.js provides an Error object. In this case, you log the error’s message property.

      When listening to the close event, Node.js provides the exit code of the command. An exit code denotes if the command ran successfully or not. When a command runs without errors, it returns the lowest possible value for an exit code: 0. When executed with an error, it returns a non-zero code.

      The module is complete. Save and exit nano with CTRL+X.

      Now, run the code with the node command:
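
      • node findFiles.js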

      Once complete, you will find the following output:

      Output

      stdout:
      .
      ./findFiles.js
      ./listFiles.js
      ./nodejs-logo.svg
      ./processNodejsImage.sh
      ./getNodejsImage.js
      
      child process exited with code 0

      We find a list of all files in our current directory and the exit code of the command, which is 0 as it ran successfully. While our current directory has a small number of files, if we ran this code in our home directory, our program would list every single file in every accessible folder for our user. Because the output can be so large, the spawn() function is the better fit here, as its streams do not require as much memory as a large buffer would.

      So far we’ve used functions to create child processes to execute external commands in our operating system. Node.js also provides a way to create a child process that executes other Node.js programs. Let’s use the fork() function to create a child process for a Node.js module in the next section.

      Step 3 — Creating a Child Process with fork()

      Node.js provides the fork() function, a variation of spawn(), to create a child process that’s also a Node.js process. The main benefit of using fork() to create a Node.js process over spawn() or exec() is that fork() enables communication between the parent and the child process.

      With fork(), in addition to retrieving data from the child process, a parent process can send messages to the running child process. Likewise, the child process can send messages to the parent process.

      Let’s see an example where using fork() to create a new Node.js child process can improve the performance of our application. Node.js programs run on a single process. Therefore, CPU intensive tasks like iterating over large loops or parsing large JSON files stop other JavaScript code from running. For certain applications, this is not a viable option. If a web server is blocked, then it cannot process any new incoming requests until the blocking code has completed its execution.

      Let’s see this in practice by creating a web server with two endpoints. One endpoint will do a slow computation that blocks the Node.js process. The other endpoint will return a JSON object saying hello.

      First, create a new file called httpServer.js, which will have the code for our HTTP server:
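
      • nano httpServer.js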

      We’ll begin by setting up the HTTP server. This involves importing the http module, creating a request listener function, creating a server object, and listening for requests on the server object. If you would like to dive deeper into creating HTTP servers in Node.js or would like a refresher, you can read our guide on How To Create a Web Server in Node.js with the HTTP Module.

      Enter the following code in your text editor to set up an HTTP server:

      ~/child-processes/httpServer.js

      const http = require('http');
      
      const host = 'localhost';
      const port = 8000;
      
      const requestListener = function (req, res) {};
      
      const server = http.createServer(requestListener);
      server.listen(port, host, () => {
        console.log(`Server is running on http://${host}:${port}`);
      });
      

      This code sets up an HTTP server that will run at http://localhost:8000. It uses template literals to dynamically generate that URL.

      Next, we will write an intentionally slow function that counts in a loop 5 billion times. Before the requestListener() function, add the following code:

      ~/child-processes/httpServer.js

      ...
      const port = 8000;
      
      const slowFunction = () => {
        let counter = 0;
        while (counter < 5000000000) {
          counter++;
        }
      
        return counter;
      }
      
      const requestListener = function (req, res) {};
      ...
      

      This uses arrow function syntax to define slowFunction(), whose while loop counts to 5000000000.

      To complete this module, we need to add code to the requestListener() function. Our function will call slowFunction() on one subpath and return a small JSON message for the other. Add the following code to the module:

      ~/child-processes/httpServer.js

      ...
      const requestListener = function (req, res) {
        if (req.url === '/total') {
          let slowResult = slowFunction();
          let message = `{"totalCount":${slowResult}}`;
      
          console.log('Returning /total results');
          res.setHeader('Content-Type', 'application/json');
          res.writeHead(200);
          res.end(message);
        } else if (req.url === '/hello') {
          console.log('Returning /hello results');
          res.setHeader('Content-Type', 'application/json');
          res.writeHead(200);
          res.end(`{"message":"hello"}`);
        }
      };
      ...
      

      If the user reaches the server at the /total subpath, then we run slowFunction(). If we are hit at the /hello subpath, we return this JSON message: {"message":"hello"}.

      Save and exit the file by pressing CTRL+X.

      To test, run this server module with node:
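
      • node httpServer.js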

      When our server starts, the console will display the following:

      Output

      Server is running on http://localhost:8000

      Now, to test the performance of our module, open two additional terminals. In the first terminal, use the curl command to make a request to the /total endpoint, which we expect to be slow:

      • curl http://localhost:8000/total

      In the other terminal, use curl to make a request to the /hello endpoint like this:

      • curl http://localhost:8000/hello

      The first request will return the following JSON:

      Output

      {"totalCount":5000000000}

      Whereas the second request will return this JSON:

      Output

      {"message":"hello"}

      The request to /hello completed only after the request to /total. The slowFunction() blocked all other code from executing while it was still in its loop. You can verify this by looking at the Node.js server output that was logged in your original terminal:

      Output

      Returning /total results
      Returning /hello results

      To process the blocking code while still accepting incoming requests, we can move the blocking code to a child process with fork(). We will move the blocking code into its own module. The Node.js server will then create a child process when someone accesses the /total endpoint and listen for results from this child process.

      Refactor the server by first creating a new module called getCount.js that will contain slowFunction():
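
      • nano getCount.js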

      Now enter the code for slowFunction() once again:

      ~/child-processes/getCount.js

      const slowFunction = () => {
        let counter = 0;
        while (counter < 5000000000) {
          counter++;
        }
      
        return counter;
      }
      

      Since this module will be a child process created with fork(), we can also add code to communicate with the parent process when slowFunction() has completed processing. Add the following block of code that sends a message to the parent process with the JSON to return to the user:

      ~/child-processes/getCount.js

      const slowFunction = () => {
        let counter = 0;
        while (counter < 5000000000) {
          counter++;
        }
      
        return counter;
      }
      
      process.on('message', (message) => {
        if (message == 'START') {
          console.log('Child process received START message');
          let slowResult = slowFunction();
          let message = `{"totalCount":${slowResult}}`;
          process.send(message);
        }
      });
      

      Let’s break down this block of code. The messages between a parent and child process created by fork() are accessible via the Node.js global process object. We add a listener to the process variable to look for message events. Once we receive a message event, we check whether the message is START. Our server code will send the START message when someone accesses the /total endpoint. Upon receiving that message, we run slowFunction() and create a JSON string with the result of the function. We use process.send() to send a message back to the parent process.

      Save and exit getCount.js by entering CTRL+X in nano.

      Now, let’s modify the httpServer.js file so that instead of calling slowFunction(), it creates a child process that executes getCount.js.

      Re-open httpServer.js with nano:
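
      • nano httpServer.js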

      First, import the fork() function from the child_process module:

      ~/child-processes/httpServer.js

      const http = require('http');
      const { fork } = require('child_process');
      ...
      

      Next, we are going to remove the slowFunction() from this module and modify the requestListener() function to create a child process. Change the code in your file so it looks like this:

      ~/child-processes/httpServer.js

      ...
      const port = 8000;
      
      const requestListener = function (req, res) {
        if (req.url === '/total') {
          const child = fork(__dirname + '/getCount');
      
          child.on('message', (message) => {
            console.log('Returning /total results');
            res.setHeader('Content-Type', 'application/json');
            res.writeHead(200);
            res.end(message);
          });
      
          child.send('START');
        } else if (req.url === '/hello') {
          console.log('Returning /hello results');
          res.setHeader('Content-Type', 'application/json');
          res.writeHead(200);
          res.end(`{"message":"hello"}`);
        }
      };
      ...
      

      When someone goes to the /total endpoint, we now create a new child process with fork(). The argument of fork() is the path to the Node.js module. In this case, it is the getCount.js file in our current directory, whose path we build with __dirname. The reference to this child process is stored in the variable child.

      We then add a listener to the child object. This listener captures any messages that the child process gives us. In this case, getCount.js will return a JSON string with the total number counted by the while loop. When we receive that message, we send the JSON to the user.

      We use the send() function of the child variable to give it a message. This program sends the message START, which begins the execution of slowFunction() in the child process.

      Save and exit nano by entering CTRL+X.

      To test the improvement that using fork() made on the HTTP server, begin by executing the httpServer.js file with node:
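
      • node httpServer.js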

      Like before, it will output the following message when it launches:

      Output

      Server is running on http://localhost:8000

      To test the server, we will need an additional two terminals as we did the first time. You can re-use them if they are still open.

      In the first terminal, use the curl command to make a request to the /total endpoint, which takes a while to compute:

      • curl http://localhost:8000/total

      In the other terminal, use curl to make a request to the /hello endpoint, which responds in a short time:

      • curl http://localhost:8000/hello

      The first request will return the following JSON:

      Output

      {"totalCount":5000000000}

      Whereas the second request will return this JSON:

      Output

      {"message":"hello"}

      Unlike the first time we tried this, the second request to /hello runs immediately. You can confirm by reviewing the logs, which will look like this:

      Output

      Child process received START message
      Returning /hello results
      Returning /total results

      These logs show that the request for the /hello endpoint ran after the child process was created but before the child process had finished its task.

      Since we moved the blocking code into a child process using fork(), the server was still able to respond to other requests and execute other JavaScript code. Because of the fork() function’s message passing ability, we can control when a child process begins an activity, and we can return data from a child process to a parent process.

      Conclusion

      In this article, you used various functions to create a child process in Node.js. You first created child processes with exec() to run shell commands from Node.js code. You then ran an executable file with the execFile() function. You looked at the spawn() function, which can also run commands but returns data via a stream and does not start a shell like exec() and execFile(). Finally, you used the fork() function to allow for two-way communication between the parent and child process.

      To learn more about the child_process module, you can read the Node.js documentation. If you’d like to continue learning Node.js, you can return to the How To Code in Node.js series, or browse programming projects and setups on our Node topic page.




      Master Your Website Launch With This 18-Item Checklist


      Unveiling a new website can be an exciting and nerve-racking time. Sometimes it might feel like you’ve forgotten a crucial element, but you won’t always know until someone complains. Additionally, there are a lot of pages and items you’ll need to include, so it can be hard to know where to start.

      Not to worry — we’re here to help! Creating a comprehensive checklist of key pages and technical items to review before you launch your website can be a helpful time-saver. Scanning down a pre-flight checklist is one of the best ways to make sure you don’t overlook anything important.

      In this article, we’ll run through 18 essentials that you need to put on your website launch checklist. Let’s start from the top!


      1. Finalize Your Terms of Service and Privacy Policy

      A privacy policy is required by law if you’re collecting any kind of personal data. This policy spells out exactly how any information (emails, contact information, and more) will be used.

      Terms of Service (TOS) statements are not legally mandatory in most cases, but they can still be valuable. Your TOS states the ground rules for visitors who want to use your site.

      2. Create a “Contact Us” Page

      Contact pages might seem straightforward, but there is room for creativity. Buzzworthy Studio has an excellent example of a bold and effective contact page.

      Buzzworthy Studio’s contact page.

      Your contact page can be a valuable way to reaffirm your brand. Plus, it helps visitors get in touch and find answers to their questions.

      3. Set Your Site to Back Up Regularly

      If your website crashes or is hacked, or if you install a plugin that causes a problem, having your files backed up regularly and automatically is a lifesaver.

      There are many ways to approach this task, but one surefire way to keep things running smoothly is by using a managed host for your site. That way, your provider can take care of restoring backups and automatically archiving them for you.

      4. Configure Your 404 Page

      A 404 page is what will display when an error occurs on your website. Designing fun and on-brand error pages can help you retain visitors who would have otherwise left once they encountered an error.

      Magnt, for example, managed to turn an error into a marketing opportunity.

      Magnt’s 404 Error page.

      This clever error page displays the company’s skills for humor and design. Creating this kind of page prompts visitors to learn more about your business, rather than leaving in frustration.

      5. Establish a Comprehensive Site Map

      Sitemaps play a vital role in how search engines read and index your pages. While a sitemap won’t directly improve your rankings, it can help to ensure that your site is indexed properly.

      If you use WordPress, there are plugins available to help you generate and manage sitemaps. Google also has an established process for submitting your sitemap directly.

      6. Complete Your “About Us” Information

      Keeping your About Us information up-to-date and well-organized is essential. In fact, 94% of first impressions online are design-related. That’s why the Refinery29 About Us page is a great example of concise web design.

      Refinery29’s About Us page. 

      Visitors are likely to check your About Us page or section in order to vet your business. Therefore, you’ll want to make sure it contains all the information they need and looks professional as well.

      7. Set Up Your Permalink Structure

      Permalinks are the permanent URLs to your posts and pages, as well as to your category and tag archives. Strategically creating your permalinks can help with your Search Engine Optimization (SEO).

      It’s also important to note that deciding to change your permalink structure after you’ve created content can result in a lot of work. For that reason, it’s best to decide on a structure you’re happy with upfront.

      8. Choose a Web Host With Fast and Reliable Servers

      Your site’s hosting server is a determining factor for how fast its pages will load. Consequently, it’s a pretty significant decision.

      Here at DreamHost, we offer several options for hosting your website.

      DreamHost’s managed WordPress hosting plans.

      A fast and reliable managed hosting plan means the website owner can focus on their business and website content. Leave the server management to us — we live for this stuff!

      9. Add Meta Titles and Descriptions to Your Content

      Metadata, such as meta descriptions and title tags, can help you tell potential visitors what kind of content to expect when they find your website’s pages in search engines. You can think of this data as a summary that helps people decide if a page is valuable to them.

      A meta description is typically limited to two lines of text, so choosing the right keywords is critical. However, what matters most when you’re launching a website is that each post and page has its own meta title and description.

      10. Optimize Image Sizes

      Optimizing your site’s images not only improves performance but can improve the user experience for those using mobile devices to view your site. As a consequence, it can also benefit your rankings on search engines.

      You can use a tool like Tiny PNG to reduce the size of your image files.

      TinyPNG’s home page.

      There are also many plugin options for optimizing your images, either one at a time or in bulk.

      11. Turn on Caching for Your Website

      Caching is when a web browser stores a static version of your website, and loads that copy for the visitor. This results in a faster loading speed than if the site’s data had to be transferred each time anew. If you want to check your site speed, start with Google’s PageSpeed Insights tool.

      It’s a good idea to check with your web host, to see if it offers built-in caching options. Otherwise, you’ll want to look into plugins or other caching solutions.

      12. Set Alt Text for Your Images

      Setting alt text for all the images on your website benefits both its accessibility and SEO. Alt text can typically be added in the same menu you use to edit your images.

      WordPress’ image edit panel.

      This text will help visitors understand what an image is if it doesn’t load properly. Additionally, it will make it easier for those using screen readers to make sense of your content.

      If you’re looking for more ways to boost your search performance, check out Google’s Search Console. This tool will help you create reports that measure your traffic so you can improve how your site’s pages perform in search engines’ rankings.

      13. Review Responsiveness on Mobile Platforms

      Whether you’re writing a blog post or operating a Shopify store, it’s vital that your site looks good and performs well on devices of all sizes. One easy way to check your website’s mobile responsiveness is with Google’s Mobile-Friendly Test tool.

      Fortunately, most website builders include built-in options for testing mobile responsiveness. Still, there is no doubting the importance of designing with an eye toward mobile use.

      14. Clean Up Your Plugins List

      When you’re launching a website with WordPress, managing your plugins is a must. You may have tried out several different solutions during your development and building process, which can result in unused items in your plugin list.

      There are literally thousands of free and premium plugins at your disposal, many of which we’ve previously recommended:

      A list of plugins.

      Before you launch your site, you’ll want to remove all unused plugins to shore up site security, speed, and functionality. There are a few essentials that we’d recommend you keep, though: Jetpack, Akismet Anti-Spam, and the Yoast plugin for SEO.

      15. Update All Your Website Software

      Keeping every part of your website up to date is vital. Not only does each software update or upgrade help keep your site secure, but newer versions can boost its performance as well.

      Fortunately, software updates can be a “set it and forget it” process. That way, you can automatically keep on top of your plugins and other software from the very beginning.

      16. Double-Check Your Site’s Security Certificate

      Site security is something we can’t stress enough. A Secure Sockets Layer (SSL) certificate tells your visitors that all data exchanged between their browsers and your site will go through a secure connection.

      An example of a secure URL.

      There are several ways to acquire an SSL certificate. You can check with your web host to see if it provides one, or you can purchase a certificate through a third-party service.

      17. Add Analytics Tracking to Your Site

      Once your website is up and running, you’ll need a way to measure how well it performs. That’s why it pays to set up an analytics tracking solution before even launching your site.

      There are many excellent solutions out there, although Google Analytics is a strong choice for beginners. No matter what tool you use, make sure you have an easy way to keep an eye on important numbers, such as your daily visits and page views.

      18. Connect Your Social Media Accounts

      Promoting your site on social media can be vital to reaching your target audience. Providing icons so your visitors can easily find your social media pages is one of the best ways to do that.

      Social media icons.

      Plugins such as Jetpack can also help you automate social sharing. That way, this task will take up as little of your time as possible.

      Your Website Launch Checklist

      Launching your website can involve a lot of work, and many different kinds of tasks. Checklists are one way to help your team stay on track and cover all the bases before revealing your masterpiece to the public.

      To provide a seamless first experience to your website’s visitors, you’ll want to keep in mind a few key items on your website launch checklist. For instance, you can write strong meta descriptions, optimize your images for increased site speed, and take advantage of an SSL certificate.

      Here at DreamHost, we want you to be able to focus on the task at hand, and not worry about whether your website maintenance is taken care of. That’s why we offer complete hosting solutions with reliable support, so you can focus on enjoying your new site!




      How To Create an Image of Your Linux Environment and Launch It On DigitalOcean


      Introduction

      DigitalOcean’s Custom Images feature allows you to bring your custom Linux and Unix-like virtual disk images from an on-premise environment or another cloud platform to DigitalOcean and use them to start DigitalOcean Droplets.

      As described in the Custom Images documentation, several common virtual disk image formats are supported natively by the Custom Images upload tool.

      Although ISO format images aren’t officially supported, you can learn how to create and upload a compatible image using VirtualBox by following How to Create a DigitalOcean Droplet from an Ubuntu ISO Format Image.

      If you don’t already have a compatible image to upload to DigitalOcean, you can create and compress a disk image of your Unix-like or Linux system, provided it has the prerequisite software and drivers installed.

      We’ll begin by ensuring that our image meets the Custom Images requirements. To do this, we’ll configure the system and install some software prerequisites. Then, we’ll create the image using the dd command-line utility and compress it using gzip. Following that, we’ll upload this compressed image file to DigitalOcean Spaces, from which we can import it as a Custom Image. Finally, we’ll boot up a Droplet using the uploaded image.

      Prerequisites

      If possible, you should use one of the DigitalOcean-provided images as a base, or an official distribution-provided cloud image like Ubuntu Cloud. You can then install software and applications on top of this base image to bake a new image, using tools like Packer and VirtualBox. Many cloud providers and virtualization environments also provide tools to export virtual disks to one of the compatible formats listed above, so, if possible, you should use these to simplify the import process. In the cases where you need to manually create a disk image of your system, you can follow the instructions in this guide. Note that these instructions have only been tested with an Ubuntu 18.04 system, and steps may vary depending on your server’s OS and configuration.

      Before you begin with this tutorial, you should have the following available to you:

      • A Linux or Unix-like system that meets all of the requirements listed in the Custom Images product documentation. For example, your boot disk must have:

        • A max size of 100GB
        • An MBR or GPT partition table with a grub bootloader
        • VirtIO drivers installed
      • A non-root user with administrative privileges available to you on the system you’re imaging. To create a new user and grant it administrative privileges on Ubuntu 18.04, follow our Initial Server Setup with Ubuntu 18.04. To learn how to do this on Debian 9, consult Initial Server Setup with Debian 9.

      • An additional storage device used to store the disk image created in this guide, preferably as large as the disk being copied. This can be an attached block storage volume, an external USB drive, an additional physical disk, etc.

      • A DigitalOcean Space and the s3cmd file transfer utility configured for use with your Space. To learn how to create a Space, consult the Spaces Quickstart. To learn how set up s3cmd for use with your Space, consult the s3cmd 2.x Setup Guide.

      Step 1 — Installing Cloud-Init and Enabling SSH

      To begin, we will install the cloud-init initialization package. Cloud-init is a set of scripts that runs at boot to configure certain cloud instance properties like the default locale, hostname, SSH keys, and network devices.

      Steps for installing cloud-init will vary depending on the operating system you have installed. In general, the cloud-init package should be available in your OS’s package manager, so if you’re not using a Debian-based distribution, you should substitute apt in the following steps with your distribution-specific package manager command.

      Installing cloud-init

      In this guide, we’ll use an Ubuntu 18.04 server and so will use apt to download and install the cloud-init package. Note that cloud-init may already be installed on your system (some Linux distributions install cloud-init by default). To check, log in to your server and run the following command:
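
      • cloud-init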

      If you see the following output, cloud-init has already been installed on your server and you can continue on to configuring it for use with DigitalOcean:

      Output

      usage: /usr/bin/cloud-init [-h] [--version] [--file FILES] [--debug] [--force]
                                 {init,modules,single,query,dhclient-hook,features,analyze,devel,collect-logs,clean,status}
                                 ...
      /usr/bin/cloud-init: error: the following arguments are required: subcommand

      If instead you see the following, you need to install cloud-init:

      Output

      cloud-init: command not found

      To install cloud-init, update your package index and then install the package using apt:

      • sudo apt update
      • sudo apt install cloud-init

      Now that we've installed cloud-init, we'll configure it for use with DigitalOcean, ensuring that it uses the ConfigDrive datasource. Cloud-init datasources dictate how cloud-init will search for and update instance configuration and metadata. DigitalOcean Droplets use the ConfigDrive datasource, so we will check that it comes first in the list of datasources that cloud-init searches whenever the Droplet boots.

      Reconfiguring cloud-init

      By default, on Ubuntu 18.04, cloud-init configures itself to use the NoCloud datasource first. This will cause problems when running the image on DigitalOcean, so we need to reconfigure cloud-init to use the ConfigDrive datasource and ensure that cloud-init reruns when the image is launched on DigitalOcean.

      From the command line, navigate to the /etc/cloud/cloud.cfg.d directory:

      • cd /etc/cloud/cloud.cfg.d

      Use the ls command to list the cloud-init config files present in the directory:
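
      • ls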

      Output

      05_logging.cfg 50-curtin-networking.cfg 90_dpkg.cfg curtin-preserve-sources.cfg README

      Depending on your installation, some of these files may not be present. If present, delete the 50-curtin-networking.cfg file, which configures networking interfaces for your Ubuntu server. When the image is launched on DigitalOcean, cloud-init will run and reconfigure these interfaces automatically, so this file is not necessary. If this file is not deleted, the DigitalOcean Droplet created from this Ubuntu image will have its interfaces misconfigured and won't be accessible from the internet:

      • sudo rm 50-curtin-networking.cfg

      Next, we'll run dpkg-reconfigure cloud-init to remove the NoCloud datasource, ensuring that cloud-init searches for and finds the ConfigDrive datasource used on DigitalOcean:

      • sudo dpkg-reconfigure cloud-init

      You should see the following graphical menu:

      Cloud Init dpkg Menu

      The NoCloud datasource is initially highlighted. Press SPACE to unselect it, then hit ENTER.

      Finally, navigate to /etc/netplan:
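
      • cd /etc/netplan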

      Remove the 50-cloud-init.yaml file, which was generated from the cloud-init networking file we removed previously:

      • sudo rm 50-cloud-init.yaml

      The final step is ensuring that we clean up configuration from the initial cloud-init run so that it reruns when the image is launched on DigitalOcean.

      To do this, run cloud-init clean:
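
      • sudo cloud-init clean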

      At this point you've installed and configured cloud-init for use with DigitalOcean. You can now move on to enabling SSH access on the machine you're imaging.

      Enable SSH Access

      Once you've installed and configured cloud-init, the next step is to ensure that you have a non-root admin user and password available to you on your machine, as outlined in the prerequisites. This step is essential to diagnose any errors that may arise after uploading your image and launching your Droplet. If a preexisting network configuration or bad cloud-init configuration renders your Droplet inaccessible over the network, you can use this user in combination with the DigitalOcean Droplet Console to access your system and diagnose any problems that may have surfaced.

      Once you've set up your non-root administrative user, the final step is to ensure that you have an SSH server installed and running. SSH often comes preinstalled on many popular Linux distributions. The process for checking whether a service is running will vary depending on your server's operating system. If you aren't sure of how to do this, consult your OS's documentation on managing services. On Ubuntu, you can verify that SSH is up and running using the following command:
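
      • sudo systemctl status ssh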

      You should see the following output:

      Output

      ● ssh.service - OpenBSD Secure Shell server
         Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2018-10-22 19:59:38 UTC; 8 days 1h ago
           Docs: man:sshd(8)
                 man:sshd_config(5)
        Process: 1092 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
       Main PID: 1115 (sshd)
          Tasks: 1 (limit: 4915)
         Memory: 9.7M
         CGroup: /system.slice/ssh.service
                 └─1115 /usr/sbin/sshd -D

      If SSH isn't up and running, you can install it using apt (on Debian-based distributions):

      • sudo apt install openssh-server

      By default, the SSH server will start on boot unless configured otherwise. This is desirable when running the system in the cloud, as DigitalOcean can automatically copy in your public key and grant you immediate SSH access to your Droplet after creation.

      Once you've created a non-root administrative user, enabled SSH, and installed cloud-init, you're ready to move on to creating an image of your boot disk.

      Step 2 — Creating Disk Image

      In this step, we'll create a RAW format disk image using the dd command-line utility, and compress it using gzip. We'll then upload the image to DigitalOcean Spaces using s3cmd.

      To begin, log in to your server, and inspect the block device arrangement for your system using lsblk:
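
      • lsblk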

      You should see something like the following:

      Output

      NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      loop0    7:0    0 12.7M  1 loop /snap/amazon-ssm-agent/495
      loop1    7:1    0 87.9M  1 loop /snap/core/5328
      vda    252:0    0   25G  0 disk
      └─vda1 252:1    0   25G  0 part /
      vdb    252:16   0  420K  1 disk

      In this case, we notice that our main boot disk is /dev/vda, a 25GB disk, and the primary partition, mounted at /, is /dev/vda1. In most cases the disk containing the partition mounted at / will be the source disk to image. We are going to use dd to create an image of /dev/vda.

      At this point, you should decide where you want to store the disk image. One option is to attach another block storage device, preferably as large as the disk you are going to image. You can then save the image to this attached temporary disk and upload it to DigitalOcean Spaces.

      If you have physical access to the server, you can add an additional drive to the machine or attach another storage device, like an external USB disk.

      Another option, which we'll demonstrate in this guide, is copying the image over SSH to a local machine, from which you can upload it to Spaces.

      No matter which method you choose to follow, ensure that the storage device to which you save the compressed image has enough free space. If the disk you're imaging is mostly empty, you can expect the compressed image file to be significantly smaller than the original disk.

      Warning: Before running the following dd command, ensure that any critical applications have been stopped and your system is as quiet as possible. Copying an actively-used disk may result in some corrupted files, so be sure to halt any data-intensive operations and shut down as many running applications as possible.

      Option 1: Creating Image Locally

      The syntax for the dd command we're going to execute looks as follows:

      • dd if=/dev/vda bs=4M conv=sparse | pv -s 25G | gzip > /mnt/tmp_disk/ubuntu.gz

      In this case, we are selecting /dev/vda as the input disk to image, and setting the input/output block sizes to 4MB (from the default 512 bytes). This generally speeds things up a little bit. In addition, we are using the conv=sparse flag to minimize the output file size by skipping over empty space. To learn more about dd's parameters, consult the dd manpage.

      We then pipe the output to the pv pipe viewer utility so we can visually track the progress of the transfer (this pipe is optional, and requires installing pv using your package manager). If you know the size of the initial disk (in this case it's 25G), you can add the -s 25G to the pv pipe to get an ETA for when the transfer will complete.

      We then pipe it all to gzip, and save it in a file called ubuntu.gz on the temporary block storage volume we've attached to the server. Replace /mnt/tmp_disk with the path to the external storage device you've attached to your server.

      Option 2: Creating Image over SSH

      Instead of provisioning additional storage for your remote machine, you can also execute the copy over SSH if you have enough disk space available on your local machine. Note that depending on the bandwidth available to you, this can be slow and you may incur additional costs for data transfer over the network.

      To copy and compress the disk over SSH, execute the following command on your local machine:

      • ssh remote_user@your_server_ip "sudo dd if=/dev/vda bs=4M conv=sparse | gzip -1 -" | dd of=ubuntu.gz

      In this case, we are SSHing into our remote server, executing the dd command there, and piping the output to gzip. We then transfer the gzip output over the network and save it as ubuntu.gz locally. Ensure you have the dd utility available on your local machine before running this command:
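
      • which dd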

      Output

      /bin/dd

      Create the compressed image file using either of the above methods. This may take several hours, depending on the size of the disk you're imaging and the method you're using to create the image.

      Once you've created the compressed image file, you can move on to uploading it to your DigitalOcean Spaces using s3cmd.

      Step 3 — Uploading Image to Spaces and Custom Images

      As described in the prerequisites, you should have s3cmd installed and configured for use with your DigitalOcean Space on the machine containing your compressed image.

      Locate the compressed image file, and upload it to your Space using s3cmd:

      Note: You should replace your_space_name with your Space’s name and not its URL. For example, if your Space’s URL is https://example-space-name.nyc3.digitaloceanspaces.com, then your Space’s name is example-space-name.

      • s3cmd put /path_to_image/ubuntu.gz s3://your_space_name

      Once the upload completes, navigate to your Space using the DigitalOcean Control Panel, and locate the image in the list of files. We will temporarily make the image publicly accessible so that Custom Images can access it and save a copy.

      At the right-hand side of the image listing, click the More drop down menu, then click into Manage Permissions:

      Spaces Object Configuration

      Then, click the radio button next to Public and hit Update to make the image publicly accessible.

      Warning: Your image will temporarily be publicly accessible to anyone with its Spaces path during this process. If you'd like to avoid making your image temporarily public, you can create your Custom Image using the DigitalOcean API. Be sure to set your image to Private using the above procedure after your image has successfully been transferred to Custom Images.

      Fetch the Spaces URL for your image by hovering over the image name in the Control Panel, and hit Copy URL in the window that pops up.

      Now, navigate to Images in the left hand navigation bar, and then Custom Images.

      From here, upload your image using this URL as detailed in the Custom Images Product Documentation.

      You can then create a Droplet from this image. Note that you need to add an SSH key to the Droplet on creation. To learn how to do this, consult How to Add SSH Keys to Droplets.

      Once your Droplet boots up, if you can SSH into it, you've successfully launched your Custom Image as a DigitalOcean Droplet.

      Debugging

      If you attempt to SSH into your Droplet and are unable to connect, ensure that your image meets the listed requirements and has both cloud-init and SSH installed and properly configured. If you still can't access the Droplet, you can attempt to use the DigitalOcean Droplet Console and the non-root user you created earlier to explore the system and debug your networking, cloud-init and SSH configurations. Another way of debugging your image is to use a virtualization tool like VirtualBox to boot up your disk image inside of a virtual machine, and debug your system's configuration from within the VM.

      Conclusion

      In this guide, you've learned how to create a disk image of an Ubuntu 18.04 system using the dd command line utility and upload it to DigitalOcean as a Custom Image from which you can launch Droplets.

      The steps in this guide may vary depending on your operating system, existing hardware, and kernel configuration but, in general, images created from popular Linux distributions should work using this method. Be sure to carefully follow the steps for installing and configuring cloud-init, and ensure that your system meets all the requirements listed in the prerequisites section above.

      To learn more about Custom Images, consult the official Custom Images product documentation.


