

      How To Set Up a Node.js Application for Production on Debian 10


      Node.js is an open-source JavaScript runtime environment for building server-side and networking applications. The platform runs on Linux, macOS, FreeBSD, and Windows. Though you can run Node.js applications at the command line, this tutorial will focus on running them as a service. This means that the applications will restart on reboot or failure and are safe for use in a production environment.

      In this tutorial, you will set up a production-ready Node.js environment on a single Debian 10 server. This server will run a Node.js application managed by PM2, and provide users with secure access to the application through an Nginx reverse proxy. The Nginx server will offer HTTPS, using a free certificate provided by Let’s Encrypt.


      This guide assumes that you have the following:

      When you’ve completed the prerequisites, you will have a server serving your domain’s default placeholder page at https://your_domain/.

      Step 1 — Installing Node.js

      Let’s begin by installing the latest LTS release of Node.js, using the NodeSource package archives.

      To install the NodeSource PPA and access its contents, you will first need to update your package index and install curl:

      • sudo apt update
      • sudo apt install curl

      Make sure you’re in your home directory, and then use curl to retrieve the installation script for the Node.js 10.x archives:

      • cd ~
      • curl -sL -o

      You can inspect the contents of this script with nano or your preferred text editor:

      When you're done inspecting the script, run it under sudo:

      • sudo bash

      The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script from NodeSource, you can install the Node.js package:

      • sudo apt install nodejs

      To check which version of Node.js you have installed after these initial steps, type:

      • nodejs -v



      Note: When installing from the NodeSource PPA, the Node.js executable is called nodejs, rather than node.

      The nodejs package contains the nodejs binary as well as npm, a package manager for Node modules, so you don't need to install npm separately.

      npm uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm. Execute this command to verify that npm is installed and to create the configuration file:

      • npm -v



      In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:

      • sudo apt install build-essential

      You now have the necessary tools to work with npm packages that require compiling code from source.

      With the Node.js runtime installed, we can move on to writing a Node.js application.

      Step 2 — Creating a Node.js Application

      Let's write a Hello World application that returns "Hello World" to any HTTP requests. This sample application will help you get Node.js set up. You can replace it with your own application — just make sure that you modify your application to listen on the appropriate IP addresses and ports.

      First, let's create a sample application called hello.js:

      • nano hello.js

      Insert the following code into the file:


      const http = require('http');

      const hostname = 'localhost';
      const port = 3000;

      const server = http.createServer((req, res) => {
        res.statusCode = 200;
        res.setHeader('Content-Type', 'text/plain');
        res.end('Hello World!\n');
      });

      server.listen(port, hostname, () => {
        console.log(`Server running at http://${hostname}:${port}/`);
      });

      Save the file and exit the editor.

      This Node.js application listens on the specified address (localhost) and port (3000), and returns "Hello World!" with a 200 HTTP success code. Since we're listening on localhost, remote clients won't be able to connect to our application.

      To test your application, type:

      • nodejs hello.js

      You will see the following output:


      Server running at http://localhost:3000/

      Note: Running a Node.js application in this manner will block additional commands until you kill the application by pressing CTRL+C.

      To test the application, open another terminal session on your server, and connect to localhost with curl:

      • curl http://localhost:3000

      If you see the following output, the application is working properly and listening on the correct address and port:


      Hello World!

      If you do not see the expected output, make sure that your Node.js application is running and configured to listen on the proper address and port.

      Once you're sure it's working, kill the application (if you haven't already) by pressing CTRL+C.

      Step 3 — Installing PM2

      Next let's install PM2, a process manager for Node.js applications. PM2 makes it possible to daemonize applications so that they will run in the background as a service.

      Use npm to install the latest version of PM2 on your server:

      • sudo npm install pm2@latest -g

      The -g option tells npm to install the module globally, so it's available system-wide.

      Let's first use the pm2 start command to run the hello.js application in the background:

      • pm2 start hello.js

      This also adds your application to PM2's process list, which is output every time you start an application:


      [PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
      [PM2] PM2 Successfully daemonized
      [PM2] Starting /home/sammy/hello.js in fork_mode (1 instance)
      [PM2] Done.
      ┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────┬─────────┬───────┬──────────┐
      │ App name │ id │ mode │ pid  │ status │ restart │ uptime │ cpu │ mem     │ user  │ watching │
      ├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────┼─────────┼───────┼──────────┤
      │ hello    │ 0  │ fork │ 1338 │ online │ 0       │ 0s     │ 0%  │ 23.0 MB │ sammy │ disabled │
      └──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────┴─────────┴───────┴──────────┘
      Use `pm2 show <id|name>` to get more details about an app

      As you can see, PM2 automatically assigns an App name based on the filename without the .js extension, along with a PM2 id. PM2 also maintains other information, such as the PID of the process, its current status, and memory usage.
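      As a quick local illustration (nothing PM2-specific runs here), the default App name is just the script's filename with its extension stripped, which you can reproduce with basename:

      ```shell
      # PM2's default app name is the script filename without its .js extension:
      basename hello.js .js
      ```

      This prints hello, matching the App name in the table above.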

      Applications that are running under PM2 will be restarted automatically if the application crashes or is killed, but we can take an additional step to get the application to launch on system startup using the startup subcommand. This subcommand generates and configures a startup script to launch PM2 and its managed processes when the server boots. Type the following:

      • pm2 startup systemd

      You will see output that looks like this, describing the service configuration that PM2 has generated:


      [PM2] Init System found: systemd
      Platform systemd
      Template
      [Unit]
      Description=PM2 process manager
      Documentation=

      [Service]
      Type=forking
      User=root
      LimitNOFILE=infinity
      LimitNPROC=infinity
      LimitCORE=infinity
      Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
      Environment=PM2_HOME=/root/.pm2
      PIDFile=/root/.pm2/
      Restart=on-failure

      ExecStart=/usr/lib/node_modules/pm2/bin/pm2 resurrect
      ExecReload=/usr/lib/node_modules/pm2/bin/pm2 reload all
      ExecStop=/usr/lib/node_modules/pm2/bin/pm2 kill

      [Install]

      Target path
      /etc/systemd/system/pm2-root.service
      Command list
      [ 'systemctl enable pm2-root' ]
      [PM2] Writing init configuration in /etc/systemd/system/pm2-root.service
      [PM2] Making script booting at startup...
      [PM2] [-] Executing: systemctl enable pm2-root...
      Created symlink /etc/systemd/system/ → /etc/systemd/system/pm2-root.service.
      [PM2] [v] Command successfully executed.
      +---------------------------------------+
      [PM2] Freeze a process list on reboot via:
      $ pm2 save

      [PM2] Remove init script via:
      $ pm2 unstartup systemd

      You have now created a systemd unit that runs pm2 on boot. This pm2 instance, in turn, runs hello.js.

      Start the service with systemctl:

      • sudo systemctl start pm2-root.service

      Check the status of the systemd unit:

      • systemctl status pm2-root.service

      You should see output like the following:


      ● pm2-root.service - PM2 process manager
         Loaded: loaded (/etc/systemd/system/pm2-root.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2019-07-12 16:09:54 UTC; 4s ago

      For a detailed overview of systemd, see Systemd Essentials: Working with Services, Units, and the Journal.

      In addition to those we have covered, PM2 provides many subcommands that allow you to manage or look up information about your applications.

      Stop an application with this command (specify the PM2 App name or id):

      • pm2 stop app_name_or_id

      Restart an application:

      • pm2 restart app_name_or_id

      List the applications currently managed by PM2:

      • pm2 list

      Get information about a specific application using its App name:

      • pm2 info app_name

      The PM2 process monitor can be pulled up with the monit subcommand. This displays the application status, CPU, and memory usage:

      • pm2 monit

      Note that running pm2 without any arguments will also display a help page with example usage.

      Now that your Node.js application is running and managed by PM2, let's set up the reverse proxy.

      Step 4 — Setting Up Nginx as a Reverse Proxy Server

      Your application is running and listening on localhost, but you need to set up a way for your users to access it. We will set up the Nginx web server as a reverse proxy for this purpose.

      In the prerequisite tutorial, you set up your Nginx configuration in the /etc/nginx/sites-available/your_domain file. Open this file for editing:

      • sudo nano /etc/nginx/sites-available/your_domain

      Within the server block, you should have an existing location / block. Replace the contents of that block with the following configuration. If your application is set to listen on a different port, update the highlighted portion to the correct port number:


      server {
      ...
          location / {
              proxy_pass http://localhost:3000;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
          }
      ...
      }

      This configures the server to respond to requests at its root. Assuming our server is available at your_domain, accessing https://your_domain/ via a web browser would send the request to hello.js, listening on port 3000 at localhost.

      You can add additional location blocks to the same server block to provide access to other applications on the same server. For example, if you were also running another Node.js application on port 3001, you could add this location block to allow access to it via https://your_domain/app2:

      /etc/nginx/sites-available/your_domain — Optional

      server {
      ...
          location /app2 {
              proxy_pass http://localhost:3001;
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection 'upgrade';
              proxy_set_header Host $host;
              proxy_cache_bypass $http_upgrade;
          }
      ...
      }

      Once you are done adding the location blocks for your applications, save the file and exit your editor.

      Make sure you didn't introduce any syntax errors by typing:

      • sudo nginx -t

      Restart Nginx:

      • sudo systemctl restart nginx

      Assuming that your Node.js application is running and your application and Nginx configurations are correct, you should now be able to access your application via the Nginx reverse proxy. Try it out by accessing your domain in the browser: https://your_domain.


      Congratulations! You now have your Node.js application running behind an Nginx reverse proxy on a Debian 10 server. This reverse proxy setup is flexible enough to provide your users access to other applications or static web content that you want to share.


      How To Optimize Docker Images for Production

      The author selected to receive a donation as part of the Write for DOnations program.


      In a production environment, Docker makes it easy to create, deploy, and run applications inside of containers. Containers let developers gather applications and all their core necessities and dependencies into a single package that you can turn into a Docker image and replicate. Docker images are built from Dockerfiles. The Dockerfile is a file where you define what the image will look like, what base operating system it will have, and which commands will run inside of it.

      Large Docker images can lengthen the time it takes to build and send images between clusters and cloud providers. If, for example, you have a gigabyte-sized image to push every time one of your developers triggers a build, the traffic you generate on your network will add up during the CI/CD process, making your application sluggish and ultimately costing you resources. Because of this, Docker images suited for production should only have the bare necessities installed.
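      As a rough, hypothetical illustration of that cost (the numbers below are made up, not measurements from this tutorial): pushing a 1024 MB image over a 100 Mbit/s link, roughly 12.5 MB/s, takes well over a minute per build:

      ```shell
      # Hypothetical figures: image size in MB divided by link speed in MB/s.
      echo "1024 12.5" | awk '{ printf "%.0f seconds per push\n", $1 / $2 }'
      ```

      Multiply that by every build a busy CI/CD pipeline triggers in a day and the savings from smaller images add up quickly.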

      There are several ways to decrease the size of Docker images to optimize for production. First off, these images don’t usually need build tools to run their applications, and so there’s no need to add them at all. By using a multi-stage build process, you can use intermediate images to compile and build the code, install dependencies, and package everything into the smallest size possible, then copy over the final version of your application to an empty image without build tools. Additionally, you can use an image with a tiny base, like Alpine Linux. Alpine is a suitable Linux distribution for production because it only has the bare necessities that your application needs to run.

      In this tutorial, you’ll optimize Docker images in a few simple steps, making them smaller, faster, and better suited for production. You’ll build images for a sample Go API in several different Docker containers, starting with Ubuntu and language-specific images, then moving on to the Alpine distribution. You will also use multi-stage builds to optimize your images for production. The end goal of this tutorial is to show the size difference between using default Ubuntu images and optimized counterparts, and to show the advantage of multi-stage builds. After reading through this tutorial, you’ll be able to apply these techniques to your own projects and CI/CD pipelines.

      Note: This tutorial uses an API written in Go as an example. This simple API will give you a clear understanding of how you would approach optimizing Go microservices with Docker images. Even though this tutorial uses a Go API, you can apply this process to almost any programming language.


      Before you start you will need:

      Step 1 — Downloading the Sample Go API

      Before optimizing your Docker image, you must first download the sample API that you will build your Docker images from. Using a simple Go API will showcase all the key steps of building and running an application inside a Docker container. This tutorial uses Go because it’s a compiled language like C++ or Java, but unlike them, has a very small footprint.

      On your server, begin by cloning the sample Go API:

      • git clone

      Once you have cloned the project, you will have a directory named mux-go-api on your server. Move into this directory with cd:

      • cd mux-go-api

      This will be the home directory for your project. You will build your Docker images from this directory. Inside, you will find the source code for an API written in Go in the api.go file. Although this API is minimal and has only a few endpoints, it will be appropriate for simulating a production-ready API for the purposes of this tutorial.

      Now that you have downloaded the sample Go API, you are ready to build a base Ubuntu Docker image, against which you can compare the later, optimized Docker images.

      Step 2 — Building a Base Ubuntu Image

      For your first Docker image, it will be useful to see what it looks like when you start out with a base Ubuntu image. This will package your sample API in an environment similar to the software you're already running on your Ubuntu server. Inside the image, you will install the various packages and modules you need to run your application. You will find, however, that this process creates a rather heavy Ubuntu image that will affect build time and the code readability of your Dockerfile.

      Start by writing a Dockerfile that instructs Docker to create an Ubuntu image, install Go, and run the sample API. Make sure to create the Dockerfile in the directory of the cloned repo. If you cloned to the home directory it should be $HOME/mux-go-api.

      Make a new file called Dockerfile.ubuntu. Open it up in nano or your favorite text editor:

      • nano ~/mux-go-api/Dockerfile.ubuntu

      In this Dockerfile, you'll define an Ubuntu image and install Golang. Then you'll proceed to install the needed dependencies and build the binary. Add the following contents to Dockerfile.ubuntu:


      FROM ubuntu:18.04

      RUN apt-get update -y \
        && apt-get install -y git gcc make golang-1.10

      ENV GOROOT /usr/lib/go-1.10
      ENV GOPATH /root/go
      ENV PATH $GOROOT/bin:$GOPATH/bin:$PATH
      ENV APIPATH /root/go/src/api

      WORKDIR $APIPATH
      COPY . .

      RUN \
        go get -d -v \
        && go install -v \
        && go build

      EXPOSE 3000
      CMD ["./api"]

      Starting from the top, the FROM command specifies which base operating system the image will have. Then the RUN command installs the Go language during the creation of the image. ENV sets the specific environment variables the Go compiler needs in order to work properly. WORKDIR specifies the directory where we want to copy over the code, and the COPY command takes the code from the directory where Dockerfile.ubuntu is and copies it over into the image. The final RUN command installs Go dependencies needed for the source code to compile and run the API.

      Note: Using the && operators to string together RUN commands is important in optimizing Dockerfiles, because every RUN command will create a new layer, and every new layer increases the size of the final image.
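      As a sketch of the difference (the package names here are illustrative, not taken from this tutorial's Dockerfile):

      ```dockerfile
      # Three separate RUN instructions create three image layers; files removed
      # by the last step still take up space in the earlier layers:
      RUN apt-get update -y
      RUN apt-get install -y git
      RUN rm -rf /var/lib/apt/lists/*

      # Chaining the same steps with && yields a single, smaller layer:
      RUN apt-get update -y \
          && apt-get install -y git \
          && rm -rf /var/lib/apt/lists/*
      ```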

      Save and exit the file. Now you can run the build command to create a Docker image from the Dockerfile you just made:

      • docker build -f Dockerfile.ubuntu -t ubuntu .

      The build command builds an image from a Dockerfile. The -f flag specifies that you want to build from the Dockerfile.ubuntu file, while -t stands for tag, meaning you're tagging it with the name ubuntu. The final dot represents the current context where Dockerfile.ubuntu is located.

      This will take a while, so feel free to take a break. Once the build is done, you'll have an Ubuntu image ready to run your API. But the final size of the image might not be ideal; anything above a few hundred MB for this API would be considered an overly large image.

      Run the following command to list all Docker images and find the size of your Ubuntu image:

      • docker images

      You'll see output showing the image you just created:


      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      ubuntu              latest              61b2096f6871        33 seconds ago      636MB
      . . .

      As is highlighted in the output, this image has a size of 636MB for a basic Golang API, a number that may vary slightly from machine to machine. Over multiple builds, this large size will significantly affect deployment times and network throughput.

      In this section, you built an Ubuntu image with all the needed Go tools and dependencies to run the API you cloned in Step 1. In the next section, you'll use a pre-built, language-specific Docker image to simplify your Dockerfile and streamline the build process.

      Step 3 — Building a Language-Specific Base Image

      Pre-built images are ordinary base images that users have modified to include situation-specific tools. Users can then push these images to the Docker Hub image repository, allowing other users to use the shared image instead of having to write their own individual Dockerfiles. This is a common process in production situations, and you can find various pre-built images on Docker Hub for almost any use case. In this step, you'll build your sample API using a Go-specific image that already has the compiler and dependencies installed.

      With pre-built base images already containing the tools you need to build and run your app, you can cut down the build time significantly. Because you're starting with a base that has all needed tools pre-installed, you can skip adding these to your Dockerfile, making it look a lot cleaner and ultimately decreasing the build time.

      Go ahead and create another Dockerfile and name it Dockerfile.golang. Open it up in your text editor:

      • nano ~/mux-go-api/Dockerfile.golang

      This file will be significantly more concise than the previous one because it has all the Go-specific dependencies, tools, and compiler pre-installed.

      Now, add the following lines:


      FROM golang:1.10

      WORKDIR /go/src/api
      COPY . .

      RUN \
          go get -d -v \
          && go install -v \
          && go build

      EXPOSE 3000
      CMD ["./api"]

      Starting from the top, you'll find that the FROM statement is now golang:1.10. This means Docker will fetch a pre-built Go image from Docker Hub that has all the needed Go tools already installed.

      Now, once again, build the Docker image with:

      • docker build -f Dockerfile.golang -t golang .

      Check the final size of the image with the following command:

      • docker images

      This will yield output similar to the following:


      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      golang              latest              eaee5f524da2        40 seconds ago      744MB
      . . .

      Even though the Dockerfile itself is more efficient and the build time is shorter, the total image size actually increased. The pre-built Golang image is around 744MB, a significant amount.

      This is the preferred way to build Docker images. It gives you a base image which the community has approved as the standard to use for the specified language, in this case Go. However, to make an image ready for production, you need to cut away parts that the running application does not need.

      Keep in mind that using these heavy images is fine when you are unsure about your needs. Feel free to use them both as throwaway containers as well as the base for building other images. For development or testing purposes, where you don't need to think about sending images through the network, it's perfectly fine to use heavy images. But if you want to optimize deployments, then you need to try your best to make your images as tiny as possible.

      Now that you have tested a language-specific image, you can move on to the next step, in which you will use the lightweight Alpine Linux distribution as a base image to make your Docker image lighter.

      Step 4 — Building Base Alpine Images

      One of the easiest steps to optimize your Docker images is to use smaller base images. Alpine is a lightweight Linux distribution designed for security and resource efficiency. The Alpine Docker image uses musl libc and BusyBox to stay compact, requiring no more than 8MB in a container to run. The tiny size is due to binary packages being thinned out and split, giving you more control over what you install, which keeps the environment as small and efficient as possible.

      The process of creating an Alpine image is similar to how you created the Ubuntu image in Step 2. First, create a new file called Dockerfile.alpine:

      • nano ~/mux-go-api/Dockerfile.alpine

      Now add this snippet:


      FROM alpine:3.8

      RUN apk add --no-cache \
          go git

      ENV GOPATH /go
      ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
      ENV APIPATH $GOPATH/src/api
      RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" "$APIPATH" && chmod -R 777 "$GOPATH"

      WORKDIR $APIPATH
      COPY . .

      RUN \
          go get -d -v \
          && go install -v \
          && go build

      EXPOSE 3000
      CMD ["./api"]

      Here you're adding the apk add command to use Alpine's package manager to install Go and all libraries it requires. As with the Ubuntu image, you need to set the environment variables as well.

      Go ahead and build the image:

      • docker build -f Dockerfile.alpine -t alpine .

      Once again, check the image size:

      • docker images

      You will receive output similar to the following:


      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      alpine              latest              ee35a601158d        30 seconds ago      426MB
      . . .

      The size has gone down to around 426MB.

      The small size of the Alpine base image has reduced the final image size, but there are a few more things you can do to make it even smaller.

      Next, try using a pre-built Alpine image for Go. This will make the Dockerfile shorter, and will also cut down the size of the final image. Because the pre-built Alpine image for Go is built with Go compiled from source, its footprint is significantly smaller.

      Start by creating a new file called Dockerfile.golang-alpine:

      • nano ~/mux-go-api/Dockerfile.golang-alpine

      Add the following contents to the file:


      FROM golang:1.10-alpine3.8
      RUN apk add --no-cache --update git
      WORKDIR /go/src/api
      COPY . .
      RUN go get -d -v \
        && go install -v \
        && go build
      EXPOSE 3000
      CMD ["./api"]

      The only differences between Dockerfile.golang-alpine and Dockerfile.alpine are the FROM command and the first RUN command. Now, the FROM command specifies a golang image with the 1.10-alpine3.8 tag, and RUN only has a command for installing Git. You need Git for the go get command to work in the second RUN command at the bottom of Dockerfile.golang-alpine.

      Build the image with the following command:

      • docker build -f Dockerfile.golang-alpine -t golang-alpine .

      Retrieve your list of images:

      • docker images

      You will receive the following output:


      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      golang-alpine       latest              97103a8b912b        49 seconds ago      288MB

      Now the image size is down to around 288MB.

      Even though you've managed to cut down the size a lot, there's one last thing you can do to get the image ready for production. It's called a multi-stage build. By using multi-stage builds, you can use one image to build the application while using another, lighter image to package the compiled application for production, a process you will run through in the next step.

      Ideally, images that you run in production shouldn't have any build tools installed or dependencies that are redundant for the production application to run. You can remove these from the final Docker image by using multi-stage builds. This works by building the binary, or in other terms, the compiled Go application, in an intermediate container, then copying it over to an empty container that doesn't have any unnecessary dependencies.

      Start by creating another file called Dockerfile.multistage:

      • nano ~/mux-go-api/Dockerfile.multistage

      What you'll add here will be familiar. Start out by adding the exact same code as with Dockerfile.golang-alpine. But this time, also add a second image where you'll copy the binary from the first image.


      FROM golang:1.10-alpine3.8 AS multistage
      RUN apk add --no-cache --update git
      WORKDIR /go/src/api
      COPY . .
      RUN go get -d -v \
        && go install -v \
        && go build
      FROM alpine:3.8
      COPY --from=multistage /go/bin/api /go/bin/
      EXPOSE 3000
      CMD ["/go/bin/api"]

      Save and close the file. Here you have two FROM commands. The first is identical to Dockerfile.golang-alpine, except for having an additional AS multistage in the FROM command. This will give it a name of multistage, which you will then reference in the bottom part of the Dockerfile.multistage file. In the second FROM command, you'll take a base alpine image and COPY over the compiled Go application from the multistage image into it. This process will further cut down the size of the final image, making it ready for production.

      Run the build with the following command:

      • docker build -f Dockerfile.multistage -t prod .

      Check the image size now, after using a multi-stage build:

      • docker images

      You will find two new images instead of only one:


      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      prod                latest              82fc005abc40        38 seconds ago      11.3MB
      <none>              <none>              d7855c8f8280        38 seconds ago      294MB
      . . .

      The <none> image is the multistage image built with the FROM golang:1.10-alpine3.8 AS multistage command. It's only an intermediary used to build and compile the Go application, while the prod image in this context is the final image which only contains the compiled Go application.

      From an initial 744MB, you've now shaved down the image size to around 11.3MB. Keeping track of a tiny image like this and sending it over the network to your production servers will be much easier than with an image of over 700MB, and will save you significant resources in the long run.
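      Expressed as a percentage, using the sizes reported by docker images above, that works out to a reduction of roughly 98.5%:

      ```shell
      # Sizes in MB: the pre-built golang image vs. the final multi-stage image.
      echo "744 11.3" | awk '{ printf "reduction: %.1f%%\n", (1 - $2 / $1) * 100 }'
      ```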


      In this tutorial, you optimized Docker images for production using different base Docker images and an intermediate image to compile and build the code. This way, you have packaged your sample API into the smallest size possible. You can use these techniques to improve build and deployment speed of your Docker applications and any CI/CD pipeline you may have.

      If you are interested in learning more about building applications with Docker, check out our How To Build a Node.js Application with Docker tutorial. For more conceptual information on optimizing containers, see Building Optimized Containers for Kubernetes.


      How to Deploy a Symfony 4 Application to Production with LEMP on Ubuntu 18.04

      The author selected Software in the Public Interest Inc to receive a donation as part of the Write for DOnations program.


      Symfony is an open-source PHP framework with an elegant structure and a reputation for being a suitable framework to kick-start any project, irrespective of its size. As a set of reusable components, its flexibility, architecture, and high performance make it a top choice for building highly complex enterprise applications.

      In this tutorial, you will deploy an existing, standard Symfony 4 application to production with a LEMP stack (Nginx, MySQL, and PHP) on Ubuntu 18.04, which will help you get started configuring the server and the structure of the framework. Nginx is a popular open-source, high-performance HTTP server with additional features including reverse proxy support. It has a good reputation and hosts some of the largest and highest traffic sites on the internet. If you choose to deploy your own Symfony application instead, you might have to implement extra steps depending on the existing structure of your application.


      To complete this tutorial, you will need:

      Step 1 — Creating a User and Database for the Application

      By following the instructions in the Prerequisites, you now have all the basic server dependencies required for the application installation. As every dynamic web application requires a database, you will create a user and properly configure a database for the application in this section.

      To create a MySQL database for our application and a user associated with it, you need to access the MySQL client using the MySQL root account:

      • mysql -u root -p

      Enter the appropriate password, which should be the same password used when running mysql_secure_installation.

      Next, create the application database with:

      • CREATE DATABASE blog;

      You will see the following output in the console:


      Query OK, 1 row affected (0.00 sec)

      You have successfully created your application database. You can now create a MySQL user and grant them access to the newly created database.

      Execute the following command to create a MySQL user and password. You can change the username and password to something more secure if you wish:

      • CREATE USER 'blog-admin'@'localhost' IDENTIFIED BY 'password';

      You will see the following output:


      Query OK, 0 rows affected (0.00 sec)

      Currently, the user blog-admin does not have the necessary permissions on the application database. In fact, even if blog-admin tries to log in with their password, they will not be able to reach the MySQL shell.

      A user needs the right permission before accessing or carrying out a specific action on a database. Use the following command to allow complete access to the blog database for the blog-admin user:

      • GRANT ALL PRIVILEGES ON blog.* TO 'blog-admin'@'localhost';

      You will see the following output:


      Query OK, 0 rows affected (0.00 sec)

      The blog-admin user now has all privileges on all the tables inside the blog database. To reload the grant tables and apply your changes, you need to perform a flush-privileges operation using the FLUSH statement:

      • FLUSH PRIVILEGES;
      You will see the following output:


      Query OK, 0 rows affected (0.00 sec)

      You are done creating a new user and granting privileges. To test that you're on track, exit the MySQL client:

      • quit
      And log in again, using the credentials of the MySQL user you just created, entering the password when prompted:

      • mysql -u blog-admin -p
      Check that the database can be accessed by the user with:

      • SHOW DATABASES;
      You'll see the blog database in the output:


      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | blog               |
      +--------------------+
      2 rows in set (0.00 sec)

      Finally, exit the MySQL client:

      • quit
      You have successfully created a database, a user for the demo application, and granted the newly created user the right privileges to access the database. You are now ready to set up the demo application.

      Step 2 — Setting Up the Demo Application

      To keep this tutorial simple, you will deploy a blog application built with Symfony. This application will allow an authenticated user to create a blog post and store it in the database. In addition, the application user can view all the posts and details associated with an author.

      The source code of the blog application you will deploy in this tutorial is on GitHub. You will use Git to pull the source code of the application from GitHub and save it in a new directory.

      First, create a directory that will serve as the root directory for your application. Run the following command from the console to create a new directory named symfony-blog:

      • sudo mkdir -p /var/www/symfony-blog

      In order to work with the project files using a non-root user account, you’ll need to change the folder owner and group by running:

      • sudo chown sammy:sammy /var/www/symfony-blog

      Replace sammy with your sudo non-root username.

      Now, you can change into the parent directory and clone the application from GitHub:

      • cd /var/www
      • git clone symfony-blog

      You'll see the following output:


      Cloning into 'symfony-blog'...
      remote: Counting objects: 180, done.
      remote: Compressing objects: 100% (122/122), done.
      remote: Total 180 (delta 57), reused 164 (delta 41), pack-reused 0
      Receiving objects: 100% (180/180), 167.01 KiB | 11.13 MiB/s, done.
      Resolving deltas: 100% (57/57), done.

      The demo application is now set. In the next step, you will configure the environment variables and install the required dependencies for the project.

      Step 3 — Configuring your Environment Variables for the Application

      To completely set up the application, you need to install the project dependencies and properly configure the application parameters.

      By default, the Symfony application runs in development mode, which produces very detailed logs for debugging purposes. This is not needed for this tutorial, and it is bad practice in a production environment, as it can slow things down and create very large log files.

      Symfony needs to be aware that you’re running the application in a production environment. You can set this up by either creating a .env file containing variable declarations, or creating environment variables directly. Since you can also use the .env file to configure your database credentials for this application, it makes more sense for you to do this. Change your working directory to the cloned project and create the .env file with:

      • cd symfony-blog
      • sudo nano .env

      Add the following lines to the file to configure the production application environment:

      APP_ENV=prod
      APP_DEBUG=0
      APP_ENV is an environment variable that specifies that the application is in production, while APP_DEBUG is an environment variable that specifies if the application should run in debug mode or not. You have set it to false for now.

      Save the file and exit the editor.
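      As an aside, KEY=VALUE lines like these behave just like plain environment variable assignments. The following shell sketch (using a hypothetical /tmp/demo.env path, not part of the deployment) illustrates how a dotenv-style file maps onto environment variables; Symfony's Dotenv component performs the equivalent parsing in PHP:

      ```shell
      # Write a dotenv-style file (hypothetical path, for illustration only)
      cat > /tmp/demo.env <<'EOF'
      APP_ENV=prod
      APP_DEBUG=0
      EOF

      # `set -a` exports every variable assigned while it is active,
      # so sourcing the file turns each KEY=VALUE line into an env var
      set -a
      . /tmp/demo.env
      set +a

      echo "$APP_ENV $APP_DEBUG"    # prints: prod 0
      ```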

      Next, install a PHP extension that Symfony apps use to handle XML:

      • sudo apt install php7.2-xml

      Next, install the project dependencies by running composer install:

      • cd /var/www/symfony-blog
      • composer install

      You have successfully configured the environment variables and installed the required dependencies for the project. Next, you will set up the database credentials.

      Step 4 — Setting Up Database Credentials

      In order to retrieve data from the application’s database you created earlier, you will need to set up and configure the required database credentials from within the Symfony application.

      Open the .env file again:

      • sudo nano .env
      Add the following content to the file, which will allow you to easily connect and interact properly with the database. You can add it right after the APP_DEBUG=0 line within the .env file:

      DATABASE_URL=mysql://blog-admin:password@localhost:3306/blog
      The Symfony framework uses a third-party library called Doctrine to communicate with databases. Doctrine gives you useful tools to make interactions with databases easy and flexible.

      You can now use Doctrine to update your database with the tables from the cloned GitHub application. Run this command to do that:

      • php bin/console doctrine:schema:update --force

      You'll see the following output:


      Updating database schema...
          4 queries were executed
      [OK] Database schema updated successfully!

      After setting up the required credentials and updating the database schema, you can now easily interact with the database. In order to start the application with some data, you will load a set of dummy data into the database in the next section.

      Step 5 — Populating your Database Using Doctrine-Fixtures

      At the moment, the newly created tables are empty. You will populate them using doctrine-fixtures. Using fixtures is not a prerequisite for Symfony applications; they are only used here to provide dummy data for your application.

      Run the following command to automatically load testing data that contains the details of an author and a sample post into the database table created for the blog:

      • php bin/console doctrine:fixtures:load

      You will get a warning that the database will be purged. You can go ahead and type y:


      Careful, database will be purged. Do you want to continue y/N ? y
        > purging database
        > loading App\DataFixtures\ORMFixtures

      In the next section, you will clear and warm up your cache.

      Step 6 — Clearing and Warming Up your Cache

      To ensure your application loads faster when users make requests, it is good practice to warm the cache during the deployment. Warming up the cache generates pages and stores them for faster responses later rather than building completely new pages. Fortunately, Symfony has a command to clear the cache that also triggers a warm up. Run the following command for that purpose:

      • php bin/console cache:clear

      You will see the following output:


      Clearing the cache for the prod environment with debug false
      [OK] Cache for the "prod" environment (debug=false) was successfully cleared.
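      Conceptually, warming a cache just means populating stored results before the first request needs them, so that request reuses the stored copy instead of paying the generation cost. A tiny shell sketch of the generate-once, reuse-later idea (hypothetical /tmp path, purely illustrative):

      ```shell
      # Sketch of caching: build an expensive result once, then reuse it.
      # "Warming" means doing this step ahead of the first request.
      cache=/tmp/cache-demo.txt

      if [ ! -f "$cache" ]; then
          # pretend this is expensive page generation
          printf 'rendered page\n' > "$cache"
      fi

      # every later request reads the stored copy instead of regenerating
      cat "$cache"    # prints: rendered page
      ```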

      You will conclude the set up in a bit. All that remains is to configure the web server. You will do that in the next section.

      Step 7 — Configuring the Web Server and Running the Application

      By now, you have Nginx installed to serve your pages and MySQL to store and manage your data. You will now configure the web server by creating a new application server block, instead of editing the default one.

      Open a new server block with:

      • sudo nano /etc/nginx/sites-available/blog

      Add the following content to the new server block configuration file. Ensure you replace the your_server_ip within the server block with your server IP address:


      server {
          listen 80;
          listen [::]:80;

          server_name blog your_server_ip;
          root /var/www/symfony-blog/public;
          index index.php;
          client_max_body_size 100m;

          location / {
              try_files $uri $uri/ /index.php$is_args$args;
          }

          location ~ \.php {
              try_files $uri /index.php =404;
              fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param SCRIPT_NAME $fastcgi_script_name;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_index index.php;
              include fastcgi_params;
          }

          location ~ /\.(?:ht|git|svn) {
              deny all;
          }
      }
      First, we specified the listen directives for Nginx, which listen on port 80 by default, and then set the server name to match requests for the server's IP address. Next, we used the root directive to specify the document root for the project. The symfony-blog application is stored in /var/www/symfony-blog, but to comply with best practices, we set the web root to /var/www/symfony-blog/public, since only the /public subdirectory should be exposed to the internet. Finally, we configured the location directives to handle PHP processing and to deny access to hidden files.

      After adding the content, save the file and exit the editor.

      Note: If you created the file in the prerequisite article How To Install Linux, Nginx, MySQL, PHP (LEMP stack) on Ubuntu 18.04, remove it from the sites-enabled directory with sudo rm /etc/nginx/sites-enabled/ so it doesn't conflict with this new file.

      To enable the newly created server block, create a symbolic link from the new configuration file in the /etc/nginx/sites-available directory to /etc/nginx/sites-enabled with the following command:

      • sudo ln -s /etc/nginx/sites-available/blog /etc/nginx/sites-enabled/
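      The sites-available/sites-enabled pattern works because a symlink lets one real file be switched on and off without deleting it. Here is a quick sketch of the mechanism in a throwaway directory (hypothetical /tmp paths, unrelated to your real Nginx configuration):

      ```shell
      # Recreate the pattern in a throwaway location (hypothetical paths)
      mkdir -p /tmp/nginx-demo/sites-available /tmp/nginx-demo/sites-enabled
      echo 'server {}' > /tmp/nginx-demo/sites-available/blog

      # "Enable" the site: the link reads through to the real file
      ln -sf /tmp/nginx-demo/sites-available/blog /tmp/nginx-demo/sites-enabled/blog
      cat /tmp/nginx-demo/sites-enabled/blog    # prints: server {}

      # "Disable" it: removing the link leaves the original untouched
      rm /tmp/nginx-demo/sites-enabled/blog
      test -f /tmp/nginx-demo/sites-available/blog && echo 'original intact'
      ```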

      Check the new configuration file for any syntax errors by running:

      • sudo nginx -t
      This command will print any errors to the console. Once there are no errors, run the following command to reload Nginx:

      • sudo systemctl reload nginx

      You just concluded the last step required to successfully deploy the Symfony 4 application. You configured the web server by creating a server block and properly set the web root in order to make the web application accessible.

      Finally, you can now run and test out the application. Visit http://your_server_ip in your favorite browser:

      The following image is the screenshot of the Symfony blog application that you should see at your server's IP address:

      [Screenshot of the Symfony blog application]

      Conclusion


      Symfony is a feature-rich PHP framework with an architecture that makes web development enjoyable, giving developers powerful tools to build web applications. It is often considered a good choice for enterprise applications due to its flexibility. The steps to deploy a typical Symfony application vary depending on the setup, complexity, and requirements of the application.

      In this tutorial, you manually deployed a Symfony 4 application to production on an Ubuntu 18.04 server running LEMP. You can now apply this knowledge to deploying your own Symfony applications.
