
      How To Build and Deploy a Flask Application Using Docker on Ubuntu 18.04


      The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Docker is an open-source application that allows administrators to create, manage, deploy, and replicate applications using containers. Containers can be thought of as a package that houses dependencies that an application requires to run at an operating system level. This means that each application deployed using Docker lives in an environment of its own and its requirements are handled separately.

      Flask is a web micro-framework that is built on Python. It is called a micro-framework because it does not require specific tools or plug-ins to run. The Flask framework is lightweight and flexible, yet highly structured, making it preferred over other frameworks.

      Deploying a Flask application with Docker will allow you to replicate the application across different servers with minimal reconfiguration.

      In this tutorial, you will create a Flask application and deploy it with Docker. This tutorial will also cover how to update an application after deployment.

      Prerequisites

      To follow this tutorial, you will need the following:

      Step 1 — Setting Up the Flask Application

      To get started, you will create a directory structure that will hold your Flask application. This tutorial will create a directory called TestApp in /var/www, but you can modify the command to name it whatever you’d like.

      • sudo mkdir /var/www/TestApp

      Move into the newly created TestApp directory:

      • cd /var/www/TestApp

      Next, create the base folder structure for the Flask application:

      • sudo mkdir -p app/static app/templates

      The -p flag indicates that mkdir will create a directory and all parent directories that don't exist. In this case, mkdir will create the app parent directory in the process of making the static and templates directories.

      The app directory will contain all files related to the Flask application such as its views and blueprints. Views are the code you write to respond to requests to your application. Blueprints create application components and support common patterns within an application or across multiple applications.

      The static directory is where assets such as images, CSS, and JavaScript files live. The templates directory is where you will put the HTML templates for your project.

      Now that the base folder structure is complete, create the files needed to run the Flask application. First, create an __init__.py file inside the app directory. This file tells the Python interpreter that the app directory is a package and should be treated as such.

      Run the following command to create the file:

      • sudo nano app/__init__.py

      Packages in Python allow you to group modules into logical namespaces or hierarchies. This approach enables the code to be broken down into individual and manageable blocks that perform specific functions.
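As a quick illustration of this (not part of the tutorial's files; the directory and names here are throwaway), the following sketch builds a package on disk and imports from it in the same style the tutorial's __init__.py will use:

```python
import os
import sys
import tempfile

# Build a throwaway package directory: app/ with __init__.py and views.py
base = tempfile.mkdtemp()
pkg_dir = os.path.join(base, "app")
os.makedirs(pkg_dir)

# An empty __init__.py is enough to mark the directory as a package.
open(os.path.join(pkg_dir, "__init__.py"), "w").close()

with open(os.path.join(pkg_dir, "views.py"), "w") as f:
    f.write("GREETING = 'hello world!'\n")

sys.path.insert(0, base)
from app import views  # the same import style used in the tutorial

print(views.GREETING)  # hello world!
```

Because app is a package, its modules such as views live in the app namespace and can be imported individually, which keeps each block of functionality manageable.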

      Next, you will add code to the __init__.py that will create a Flask instance and import the logic from the views.py file, which you will create after saving this file. Add the following code to your new file:

      /var/www/TestApp/app/__init__.py

      from flask import Flask
      app = Flask(__name__)
      from app import views
      

      Once you've added that code, save and close the file.

      With the __init__.py file created, you're ready to create the views.py file in your app directory. This file will contain most of your application logic.

      Create the file:

      • sudo nano app/views.py

      Next, add the code to your views.py file. This code will return the hello world! string to users who visit your web page:

      /var/www/TestApp/app/views.py

      from app import app
      
      @app.route('/')
      def home():
         return "hello world!"
      

      The @app.route line above the function is called a decorator. Decorators modify the function that follows it. In this case, the decorator tells Flask which URL will trigger the home() function. The hello world text returned by the home function will be displayed to the user on the browser.
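Outside of Flask, the decorator mechanism looks like this (a standalone sketch; shout and greet are made-up names for illustration):

```python
# A decorator receives the function defined beneath it and returns a
# (possibly modified) replacement for it.
def shout(func):
    def wrapper():
        return func().upper()
    return wrapper

@shout  # equivalent to writing: greet = shout(greet)
def greet():
    return "hello world!"

print(greet())  # HELLO WORLD!
```

Flask's @app.route works the same way: it takes the function that follows and registers it as the handler for the given URL.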

      With the views.py file in place, you're ready to create the uwsgi.ini file. This file will contain the uWSGI configurations for our application. uWSGI is a deployment option for Nginx that is both a protocol and an application server; the application server can serve uWSGI, FastCGI, and HTTP protocols.

      To create this file, run the following command:

      • sudo nano uwsgi.ini

      Next, add the following content to your file to configure the uWSGI server:

      /var/www/TestApp/uwsgi.ini

      [uwsgi]
      module = main
      callable = app
      master = true
      

      This code defines the module that the Flask application will be served from. In this case, this is the main.py file, referenced here as main. The callable option instructs uWSGI to use the app instance exported by the main application. The master option allows your application to keep running, so there is little downtime even when reloading the entire application.

      Next, create the main.py file, which is the entry point to the application. The entry point instructs uWSGI on how to interact with the application.

      • sudo nano main.py

      Next, copy and paste the following into the file. This imports the Flask instance named app from the application package that was previously created.

      /var/www/TestApp/main.py

      from app import app
      

      Finally, create a requirements.txt file to specify the dependencies that the pip package manager will install to your Docker deployment:

      • sudo nano requirements.txt

      Add the following line to add Flask as a dependency:

      /var/www/TestApp/requirements.txt

      Flask==1.0.2
      

      This specifies the version of Flask to be installed. At the time of writing this tutorial, 1.0.2 is the latest Flask version. You can check for updates at the official website for Flask.

      Save and close the file. You have successfully set up your Flask application and are ready to set up Docker.

      Step 2 — Setting Up Docker

      In this step you will create two files, Dockerfile and start.sh, to create your Docker deployment. The Dockerfile is a text document that contains the commands used to assemble the image. The start.sh file is a shell script that will build an image and create a container from the Dockerfile.

      First, create the Dockerfile:

      • sudo nano Dockerfile

      Next, add your desired configuration to the Dockerfile. These commands specify how the image will be built, and what extra requirements will be included.

      /var/www/TestApp/Dockerfile

      FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
      RUN apk --update add bash nano
      ENV STATIC_URL /static
      ENV STATIC_PATH /var/www/app/static
      COPY ./requirements.txt /var/www/requirements.txt
      RUN pip install -r /var/www/requirements.txt
      

      In this example, the Docker image will be built off an existing image, tiangolo/uwsgi-nginx-flask, which you can find on DockerHub. This particular Docker image is a good choice over others because it supports a wide range of Python versions and OS images.

      The first line specifies the parent image that you'll use to run the application, and the second installs the bash command processor and the nano text editor inside it. ENV STATIC_URL /static is an environment variable specific to this Docker image. It defines the static folder where all assets such as images, CSS files, and JavaScript files are served from.

      The last two lines copy the requirements.txt file into the container and then run pip, which parses requirements.txt and installs the dependencies it lists.

      Save and close the file after adding your configuration.

      With your Dockerfile in place, you're almost ready to write your start.sh script that will build the Docker container. Before writing the start.sh script, first make sure that you have an open port to use in the configuration. To check if a port is free, run the following command:

      • sudo nc localhost 56733 < /dev/null; echo $?

      If the output of the command above is 1, then the port is free and usable. Otherwise, you will need to select a different port to use in your start.sh configuration file.
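If you prefer, the same check can be made from Python using the standard socket module (illustrative only; port_is_free is a made-up helper name). A non-zero result from connect_ex means nothing accepted the connection, so the port is free:

```python
import socket

def port_is_free(port, host="localhost"):
    """Return True if nothing is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        # connect_ex returns 0 on success and an error code on failure,
        # mirroring the exit status logic of the nc command above.
        return sock.connect_ex((host, port)) != 0

print(port_is_free(56733))
```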

      Once you've found an open port to use, create the start.sh script:

      • sudo nano start.sh

      The start.sh script is a shell script that will build an image from the Dockerfile and create a container from the resulting Docker image. Add your configuration to the new file:

      /var/www/TestApp/start.sh

      #!/bin/bash
      app="docker.test"
      docker build -t ${app} .
      docker run -d -p 56733:80 \
        --name=${app} \
        -v $PWD:/app ${app}
      

      The first line is called a shebang. It specifies that the script should be executed by bash. The next line specifies the name you want to give the image and container and saves it in a variable named app. The next line instructs Docker to build an image from your Dockerfile located in the current directory. This will create an image called docker.test in this example.

      The last three lines create a new container named docker.test that is exposed at port 56733. Finally, they mount the present working directory into the /app directory of the container.

      You use the -d flag to start a container in daemon mode, or as a background process. You include the -p flag to bind a port on the server to a particular port on the Docker container. In this case, you are binding port 56733 to port 80 on the Docker container. The -v flag specifies a Docker volume to mount on the container, and in this case, you are mounting the entire project directory to the /app folder on the Docker container.

      Execute the start.sh script to create the Docker image and build a container from the resulting image:

      • sudo bash start.sh

      Once the script finishes running, use the following command to list all running containers:

      • sudo docker ps

      You will receive output that shows the containers:

      Output

      CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                            NAMES
      58b05508f4dd        docker.test         "/entrypoint.sh /sta…"   12 seconds ago      Up 3 seconds        443/tcp, 0.0.0.0:56733->80/tcp   docker.test

      You will find that the docker.test container is running. Now that it is running, visit the IP address at the specified port in your browser: http://ip-address:56733

      You'll see a page similar to the following:

      the home page

      In this step you have successfully deployed your Flask application on Docker. Next, you will use templates to display content to users.

      Step 3 — Serving Template Files

      Templates are files that display static and dynamic content to users who visit your application. In this step, you will create an HTML template to create a home page for the application.

      Start by creating a home.html file in the app/templates directory:

      • sudo nano app/templates/home.html

      Add the code for your template. This code will create an HTML5 page that contains a title and some text.

      /var/www/TestApp/app/templates/home.html

      
      <!doctype html>
      
      <html lang="en-us">   
        <head>
          <meta charset="utf-8">
          <meta http-equiv="x-ua-compatible" content="ie=edge">
          <title>Welcome home</title>
        </head>
      
        <body>
          <h1>Home Page</h1>
          <p>This is the home page of our application.</p>
        </body> 
      </html>
      

      Save and close the file once you've added your template.

      Next, modify the app/views.py file to serve the newly created file:

      • sudo nano app/views.py

      First, add the following line at the beginning of your file to import the render_template method from Flask. This method parses an HTML file to render a web page to the user.

      /var/www/TestApp/app/views.py

      from flask import render_template
      ...
      

      At the end of the file, you will also add a new route to render the template file. This code specifies that users are served the contents of the home.html file whenever they visit the /template route on your application.

      /var/www/TestApp/app/views.py

      ...
      
      @app.route('/template')
      def template():
          return render_template('home.html')
      

      The updated app/views.py file will look like this:

      /var/www/TestApp/app/views.py

      from flask import render_template
      from app import app 
      
      @app.route('/')
      def home():
          return "Hello world!"
      
      @app.route('/template')
      def template():
          return render_template('home.html')
      

      Save and close the file when done.

      In order for these changes to take effect, you will need to stop and restart the Docker container. Run the following command to restart it:

      • sudo docker stop docker.test && sudo docker start docker.test

      Visit your application at http://your-ip-address:56733/template to see the new template being served.

      homepage

      In this step, you've created an HTML template file to serve to visitors of your application. In the next step you will see how the changes you make to your application can take effect without having to restart the Docker container.

      Step 4 — Updating the Application

      Sometimes you will need to make changes to the application, whether it is installing new requirements, updating the Docker container, or HTML and logic changes. In this section, you will configure touch-reload to make these changes without needing to restart the Docker container.

      Python autoreloading watches the entire file system for changes and refreshes the application when it detects a change. Autoreloading is discouraged in production because it can become resource intensive very quickly. In this step, you will use touch-reload to watch for changes to a particular file and reload when the file is updated or replaced.

      To implement this, start by opening your uwsgi.ini file:

      • sudo nano uwsgi.ini

      Next, add a touch-reload line to the end of the file:

      /var/www/TestApp/uwsgi.ini

      [uwsgi]
      module = main
      callable = app
      master = true
      touch-reload = /app/uwsgi.ini
      

      This specifies a file that will be modified to trigger an entire application reload. Once you've made the changes, save and close the file.

      To demonstrate this, make a small change to your application. Start by opening your app/views.py file:

      • sudo nano app/views.py

      Replace the string returned by the home function:

      /var/www/TestApp/app/views.py

      from flask import render_template
      from app import app

      @app.route('/')
      def home():
          return "<b>There has been a change</b>"

      @app.route('/template')
      def template():
          return render_template('home.html')

      Save and close the file after you've made a change.

      Next, if you open your application’s homepage at http://ip-address:56733, you will notice that the changes are not reflected. This is because the condition for reload is a change to the uwsgi.ini file. To reload the application, use touch to activate the condition:

      • sudo touch uwsgi.ini

      Reload the application homepage in your browser again. You will find that the application has incorporated the changes:

      Homepage Updated

      In this step, you set up a touch-reload condition to update your application after making changes.

      Conclusion

      In this tutorial, you created and deployed a Flask application to a Docker container. You also configured touch-reload to refresh your application without needing to restart the container.

      With your new application on Docker, you can now scale with ease. To learn more about using Docker, check out their official documentation.




      Containerizing a Node.js Application for Development With Docker Compose


      Introduction

      If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

      • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
      • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
      • Environments are portable, allowing you to package and share your code with others.

      This tutorial will show you how to set up a development environment for a Node.js application using Docker. You will create two containers — one for the Node application and another for the MongoDB database — with Docker Compose. Because this application works with Node and MongoDB, our setup will do the following:

      • Synchronize the application code on the host with the code in the container to facilitate changes during development.
      • Ensure that changes to the application code work without a restart.
      • Create a user and password-protected database for the application’s data.
      • Persist this data.

      At the end of this tutorial, you will have a working shark information application running on Docker containers:

      Complete Shark Collection

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Cloning the Project and Modifying Dependencies

      The first step in building this setup will be cloning the project code and modifying its package.json file, which includes the project’s dependencies. We will add nodemon to the project’s devDependencies, specifying that we will be using it during development. Running the application with nodemon ensures that it will be automatically restarted whenever you make changes to your code.

      First, clone the nodejs-mongo-mongoose repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Integrate MongoDB with Your Node Application, which explains how to integrate a MongoDB database with an existing Node application using Mongoose.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-mongo-mongoose.git node_project

      Navigate to the node_project directory:

      • cd node_project

      Open the project's package.json file using nano or your favorite editor:

      • nano package.json

      Beneath the project dependencies and above the closing curly brace, create a new devDependencies object that includes nodemon:

      ~/node_project/package.json

      ...
      "dependencies": {
          "ejs": "^2.6.1",
          "express": "^4.16.4",
          "mongoose": "^5.4.10"
        },
        "devDependencies": {
          "nodemon": "^1.18.10"
        }    
      }
      

      Save and close the file when you are finished editing.

      With the project code in place and its dependencies modified, you can move on to refactoring the code for a containerized workflow.

      Step 2 — Configuring Your Application to Work with Containers

      Modifying our application for a containerized workflow means making our code more modular. Containers offer portability between environments, and our code should reflect that by remaining as decoupled from the underlying operating system as possible. To achieve this, we will refactor our code to make greater use of Node's process.env property, which returns an object with information about your user environment at runtime. We can use this object in our code to dynamically assign configuration information at runtime with environment variables.

      Let's begin with app.js, our main application entrypoint. Open the file:

      • nano app.js

      Inside, you will see a definition for a port constant, as well a listen function that uses this constant to specify the port the application will listen on:

      ~/node_project/app.js

      ...
      const port = 8080;
      ...
      app.listen(port, function () {
        console.log('Example app listening on port 8080!');
      });
      

      Let's redefine the port constant to allow for dynamic assignment at runtime using the process.env object. Make the following changes to the constant definition and listen function:

      ~/node_project/app.js

      ...
      const port = process.env.PORT || 8080;
      ...
      app.listen(port, function () {
        console.log(`Example app listening on ${port}!`);
      });
      

      Our new constant definition assigns port dynamically using the value passed in at runtime or 8080. Similarly, we've rewritten the listen function to use a template literal, which will interpolate the port value when listening for connections. Because we will be mapping our ports elsewhere, these revisions will prevent our having to continuously revise this file as our environment changes.

      When you are finished editing, save and close the file.

      Next, we will modify our database connection information to remove any configuration credentials. Open the db.js file, which contains this information:

      • nano db.js

      Currently, the file does the following things:

      • Imports Mongoose, the Object Document Mapper (ODM) that we're using to create schemas and models for our application data.
      • Sets the database credentials as constants, including the username and password.
      • Connects to the database using the mongoose.connect method.

      For more information about the file, please see Step 3 of How To Integrate MongoDB with Your Node Application.

      Our first step in modifying the file will be redefining the constants that include sensitive information. Currently, these constants look like this:

      ~/node_project/db.js

      ...
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'your_password';
      const MONGO_HOSTNAME = '127.0.0.1';
      const MONGO_PORT = '27017';
      const MONGO_DB = 'sharkinfo';
      ...
      

      Instead of hardcoding this information, you can use the process.env object to capture the runtime values for these constants. Modify the block to look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      ...
      

      Save and close the file when you are finished editing.

      At this point, you have modified db.js to work with your application's environment variables, but you still need a way to pass these variables to your application. Let's create an .env file with values that you can pass to your application at runtime.

      Open the file:

      • nano .env

      This file will include the information that you removed from db.js: the username and password for your application's database, as well as the port setting and database name. Remember to update the username, password, and database name listed here with your own information:

      ~/node_project/.env

      MONGO_USERNAME=sammy
      MONGO_PASSWORD=your_password
      MONGO_PORT=27017
      MONGO_DB=sharkinfo
      

      Note that we have removed the host setting that originally appeared in db.js. We will now define our host at the level of the Docker Compose file, along with other information about our services and containers.

      Save and close this file when you are finished editing.

      Because your .env file contains sensitive information, you will want to ensure that it is included in your project's .dockerignore and .gitignore files so that it does not copy to your version control or containers.

      Open your .dockerignore file:

      • nano .dockerignore

      Add the following line to the bottom of the file:

      ~/node_project/.dockerignore

      ...
      .gitignore
      .env
      

      Save and close the file when you are finished editing.

      The .gitignore file in this repository already includes .env, but feel free to check that it is there:

      ~/node_project/.gitignore

      ...
      .env
      ...
      

      At this point, you have successfully extracted sensitive information from your project code and taken measures to control how and where this information gets copied. Now you can add more robustness to your database connection code to optimize it for a containerized workflow.

      Step 3 — Modifying Database Connection Settings

      Our next step will be to make our database connection method more robust by adding code that handles cases where our application fails to connect to our database. Introducing this level of resilience to your application code is a recommended practice when working with containers using Compose.

      Open db.js for editing:

      • nano db.js

      You will see the code that we added earlier, along with the url constant for Mongo's connection URI and the Mongoose connect method:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, {useNewUrlParser: true});
      

      Currently, our connect method accepts an option that tells Mongoose to use Mongo's new URL parser. Let's add a few more options to this method to define parameters for reconnection attempts. We can do this by creating an options constant that includes the relevant information, in addition to the new URL parser option. Below your Mongo constants, add the following definition for an options constant:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500, 
        connectTimeoutMS: 10000,
      };
      ...
      

      The reconnectTries option tells Mongoose to continue trying to connect indefinitely, while reconnectInterval defines the period between connection attempts in milliseconds. connectTimeoutMS defines 10 seconds as the period that the Mongo driver will wait before failing the connection attempt.

      We can now use the new options constant in the Mongoose connect method to fine tune our Mongoose connection settings. We will also add a promise to handle potential connection errors.

      Currently, the Mongoose connect method looks like this:

      ~/node_project/db.js

      ...
      mongoose.connect(url, {useNewUrlParser: true});
      

      Delete the existing connect method and replace it with the following code, which includes the options constant and a promise:

      ~/node_project/db.js

      ...
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      In the case of a successful connection, our function logs an appropriate message; otherwise it will catch and log the error, allowing us to troubleshoot.

      The finished file will look like this:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500,
        connectTimeoutMS: 10000,
      };
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      Save and close the file when you have finished editing.

      You have now added resiliency to your application code to handle cases where your application might fail to connect to your database. With this code in place, you can move on to defining your services with Compose.

      Step 4 — Defining Services with Docker Compose

      With your code refactored, you are ready to write the docker-compose.yml file with your service definitions. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

      Before defining our services, however, we will add a tool to our project called wait-for to ensure that our application only attempts to connect to our database once the database startup tasks are complete. This wrapper script uses netcat to poll whether or not a specific host and port are accepting TCP connections. Using it allows you to control your application's attempts to connect to your database by testing whether or not the database is ready to accept connections.

      Though Compose allows you to specify dependencies between services using the depends_on option, this order is based on whether or not the container is running rather than its readiness. Using depends_on won't be optimal for our setup, since we want our application to connect only when the database startup tasks, including adding a user and password to the admin authentication database, are complete. For more information on using wait-for and other tools to control startup order, please see the relevant recommendations in the Compose documentation.
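The polling idea behind wait-for can be sketched in a few lines of Python (purely illustrative; wait_for_port is a hypothetical helper, not part of the project): keep attempting a TCP connection until the service accepts it or a timeout expires.

```python
import socket
import time

def wait_for_port(host, port, timeout=15):
    """Poll host:port once per second; return True as soon as a TCP
    connection succeeds, or False once the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection succeeds only when something is listening,
            # the same readiness signal nc -z gives the shell script.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(1)
    return False
```

This is what distinguishes readiness polling from depends_on: the check passes only when the database is actually accepting connections, not merely when its container is running.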

      Open a file called wait-for.sh:

      • nano wait-for.sh

      Paste the following code into the file to create the polling function:

      ~/node_project/wait-for.sh

      #!/bin/sh
      
      # original script: https://github.com/eficode/wait-for/blob/master/wait-for
      
      TIMEOUT=15
      QUIET=0
      
      echoerr() {
        if [ "$QUIET" -ne 1 ]; then printf "%s\n" "$*" 1>&2; fi
      }
      
      usage() {
        exitcode="$1"
        cat << USAGE >&2
      Usage:
        $cmdname host:port [-t timeout] [-- command args]
        -q | --quiet                        Do not output any status messages
        -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
        -- COMMAND ARGS                     Execute command with args after the test finishes
      USAGE
        exit "$exitcode"
      }
      
      wait_for() {
        for i in `seq $TIMEOUT` ; do
          nc -z "$HOST" "$PORT" > /dev/null 2>&1
      
          result=$?
          if [ $result -eq 0 ] ; then
            if [ $# -gt 0 ] ; then
              exec "$@"
            fi
            exit 0
          fi
          sleep 1
        done
        echo "Operation timed out" >&2
        exit 1
      }
      
      while [ $# -gt 0 ]
      do
        case "$1" in
          *:* )
          HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
          PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
          shift 1
          ;;
          -q | --quiet)
          QUIET=1
          shift 1
          ;;
          -t)
          TIMEOUT="$2"
          if [ "$TIMEOUT" = "" ]; then break; fi
          shift 2
          ;;
          --timeout=*)
          TIMEOUT="${1#*=}"
          shift 1
          ;;
          --)
          shift
          break
          ;;
          --help)
          usage 0
          ;;
          *)
          echoerr "Unknown argument: $1"
          usage 1
          ;;
        esac
      done
      
      if [ "$HOST" = "" -o "$PORT" = "" ]; then
        echoerr "Error: you need to provide a host and port to test."
        usage 2
      fi
      
      wait_for "$@"
      

      Save and close the file when you are finished adding the code.
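The heart of the script is its polling loop: try the port once per second until a TCP connection succeeds or the timeout elapses. As a minimal sketch of that readiness test (Python used purely for illustration; the tutorial itself uses the shell script above):

```python
import socket
import time

def wait_for(host, port, timeout=15):
    """Poll host:port once per second, like the nc -z loop in wait-for.sh.

    Returns True as soon as a TCP connection succeeds, or False if the
    timeout (in seconds) elapses first.
    """
    for _ in range(timeout):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True  # the service is accepting connections
        except OSError:
            time.sleep(1)  # not ready yet; try again in a second
    return False
```

The shell script behaves the same way, except that on success it execs the command supplied after `--` instead of returning, which is how it hands control over to the application process.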

Make the script executable:

• chmod +x wait-for.sh

Next, open the docker-compose.yml file:

• nano docker-compose.yml

      First, define the nodejs application service by adding the following code to the file:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB 
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      

      The nodejs service definition includes the following options:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the image build — in this case, the current project directory.
      • dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.
      • image, container_name: These apply names to the image and container.
      • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.
      • env_file: This tells Compose that we would like to add environment variables from a file called .env, located in our build context.
      • environment: Using this option allows you to add the Mongo connection settings you defined in the .env file. Note that we are not setting NODE_ENV to development, since this is Express's default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages.
        Also note that we have specified the db database container as the host, as discussed in Step 2.
      • ports: This maps port 80 on the host to port 8080 on the container.
      • volumes: We are including two types of mounts here:

  • The first is a bind mount that mounts our application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be reflected immediately in the container.
  • The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm will create a new node_modules directory on the container that includes the packages required to run the application. The bind mount we just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind will map an empty directory to the container, overriding the new node_modules directory and preventing our application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind.

        Keep the following points in mind when using this approach:

        • Your bind will mount the contents of the node_modules directory on the container to the host and this directory will be owned by root, since the named volume was created by Docker.
        • If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created on the container. The setup that we're building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won't be working with npm on your host. This is in keeping with a twelve-factor approach to application development, which minimizes dependencies between execution environments.
• networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.

      • command: This option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that we set in our application Dockerfile. Here, we are running the application using the wait-for script, which will poll the db service on port 27017 to test whether or not the database service is ready. Once the readiness test succeeds, the script will execute the command we have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes we make to our code are reloaded without our having to restart the application.
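The interplay between the bind mount and the node_modules volume follows from Docker applying the mount with the most specific (longest) target path to any file beneath it. A simplified sketch of that resolution rule (an illustration of the concept only, not Docker's actual implementation; the mount labels are hypothetical):

```python
def resolve_mount(path, mounts):
    """Return the mount serving `path`: the longest matching target wins."""
    matching = [target for target in mounts
                if (path + "/").startswith(target + "/")]
    return mounts[max(matching, key=len)]

# The two mounts defined in the nodejs service, with descriptive labels
mounts = {
    "/home/node/app": "bind mount (host code)",
    "/home/node/app/node_modules": "named volume node_modules",
}
```

Under this rule, application code such as /home/node/app/app.js resolves to the bind mount, while anything under /home/node/app/node_modules resolves to the named volume, which is why the packages installed during the image build remain visible.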

      Next, create the db service by adding the following code below the application service definition:

      ~/node_project/docker-compose.yml

      ...
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:  
            - dbdata:/data/db   
          networks:
            - app-network  
      

      Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes to the image, environment, and volumes definitions:

      • image: To create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. We are pinning a particular version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.
      • MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: The mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. We have set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using the values from our .env file, which we pass to the db service using the env_file option. Doing this means that our sammy application user will be a root user on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

        Note: Keep in mind that these variables will not take effect if you start the container with an existing data directory in place.
      • dbdata:/data/db: The named volume dbdata will persist the data stored in Mongo's default data directory, /data/db. This will ensure that you don't lose data in cases where you stop or remove containers.

      We've also added the db service to the app-network network with the networks option.

      As a final step, add the volume and network definitions to the bottom of the file:

      ~/node_project/docker-compose.yml

      ...
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      The user-defined bridge network app-network enables communication between our containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, our db and nodejs containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.

      Our top-level volumes key defines the volumes dbdata and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that's managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the dbdata volume even if we remove and recreate the db container.

      The finished docker-compose.yml file will look like this:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js 
      
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
    env_file: .env
    environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:     
            - dbdata:/data/db
          networks:
            - app-network  
      
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      Save and close the file when you are finished editing.

      With your service definitions in place, you are ready to start the application.

      Step 5 — Testing the Application

      With your docker-compose.yml file in place, you can create your services with the docker-compose up command. You can also test that your data will persist by stopping and removing your containers with docker-compose down.

First, build the container images and create the services by running docker-compose up with the -d flag, which will run the nodejs and db containers in the background:

• docker-compose up -d

      You will see output confirming that your services have been created:

      Output

...
Creating db ... done
Creating nodejs ... done

You can also get more detailed information about the startup processes by displaying the log output from the services:

• docker-compose logs

      You will see something like this if everything has started correctly:

      Output

...
nodejs | [nodemon] starting `node app.js`
nodejs | Example app listening on 8080!
nodejs | MongoDB is connected
...
db | 2019-02-22T17:26:27.329+0000 I ACCESS [conn2] Successfully authenticated as principal sammy on admin

You can also check the status of your containers with docker-compose ps:

• docker-compose ps

      You will see output indicating that your containers are running:

      Output

 Name              Command               State          Ports
----------------------------------------------------------------------
db       docker-entrypoint.sh mongod     Up      27017/tcp
nodejs   ./wait-for.sh db:27017 -- ...   Up      0.0.0.0:80->8080/tcp

      With your services running, you can visit http://your_server_ip in the browser. You will see a landing page that looks like this:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      As a final step, we can test that the data you've just entered will persist if you remove your database container.

Back at your terminal, type the following command to stop and remove your containers and network:

• docker-compose down

      Note that we are not including the --volumes option; hence, our dbdata volume is not removed.

      The following output confirms that your containers and network have been removed:

      Output

Stopping nodejs ... done
Stopping db     ... done
Removing nodejs ... done
Removing db     ... done
Removing network node_project_app-network

Recreate the containers:

• docker-compose up -d

      Now head back to the shark information form:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database without the loss of the data you've already entered:

      Complete Shark Collection

      Your application is now running on Docker containers with data persistence and code synchronization enabled.

      Conclusion

      By following this tutorial, you have created a development setup for your Node application using Docker containers. You've made your project more modular and portable by extracting sensitive information and decoupling your application's state from your application code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

      As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics.

      To learn more about the code used in this tutorial, please see How To Build a Node.js Application with Docker and How To Integrate MongoDB with Your Node Application. For information about deploying a Node application with an Nginx reverse proxy using containers, please see How To Secure a Containerized Node.js Application with Nginx, Let's Encrypt, and Docker Compose.




      How To Set Up a CakePHP Application with LAMP on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

CakePHP is a popular and feature-rich PHP web framework. It solves many of the common problems in web development, such as interacting with a database, shielding against SQL injections, and generating view code. It adheres to the model-view-controller (MVC) pattern, which decouples various parts of the application, effectively allowing developers to work on different parts of the app in parallel. It also provides built-in security and authentication. Creating a basic database app is a seamless process, which makes CakePHP useful for prototyping. However, you can also use CakePHP to create fully developed web applications for deployment.

      In this tutorial, you will deploy an example CakePHP web application to a production environment. To achieve this, you’ll set up an example database and user, configure Apache, connect your app to the database, and turn off debug mode. You’ll also use CakePHP’s bake command to automatically generate article models.

      Prerequisites

      Before you begin this tutorial, you will need:

• A server running Ubuntu 18.04 with root access and a sudo non-root account; you can set this up by following this initial server setup guide.
      • A LAMP stack installed according to How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 18.04. At the time of this writing, PHP 7.2 is the latest version.
      • Composer (a PHP package manager) installed on your server. For a guide on how to do that, visit How To Install and Use Composer on Ubuntu 18.04. You only need to complete the first two steps from that tutorial.
      • Apache secured with Let’s Encrypt. To complete this prerequisite, you’ll first need to set up virtual hosts following Step 5 of How To Install Apache on Ubuntu 18.04. You can then follow How To Secure Apache with Let’s Encrypt on Ubuntu 18.04 to secure Apache with Let’s Encrypt. When asked, enable mandatory HTTPS redirection.
      • A fully registered domain name. This tutorial will use example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.
        • An A record with example.com pointing to your server’s public IP address.
        • An A record with www.example.com pointing to your server’s public IP address.

      Step 1 — Installing Dependencies

      To prepare for your application, you’ll begin by installing the PHP extensions that CakePHP needs.

Start off by updating the package manager cache:

• sudo apt update

      CakePHP requires the mbstring, intl, and simplexml PHP extensions, which add support for multibyte strings, internationalization, and XML processing. You have installed mbstring as part of the Composer prerequisite tutorial. You can install the remaining libraries with one command:

      • sudo apt install php7.2-intl php7.2-xml -y

      Remember that the version numbers above (7.2) will change with new versions of PHP.

      You installed the required dependencies for CakePHP. You're now ready to configure your MySQL database for production use.

      Step 2 — Setting Up a MySQL Database

      Now, you'll create a MySQL database to store information about your blog's articles. You'll also create a database user that your application will use to access the database. You'll modify the database privileges to achieve this separation of control. As a result, bad actors won't be able to cause issues on the system even with database credentials, which is an important security precaution in a production environment.

Launch your MySQL shell:

• sudo mysql -u root -p

      When asked, enter the password you set up during the initial LAMP installation.

      Next, create a database:

      • CREATE DATABASE cakephp_blog;

      You will see output similar to:

      Output

      Query OK, 1 row affected (0.00 sec)

      Your CakePHP app will use this new database to read and store production data.

Then, instruct MySQL to operate on the new cakephp_blog database:

• USE cakephp_blog;

      You will see output similar to:

      Output

      Database changed

      Now you'll create a table schema for your blog articles in the cakephp_blog database. Run the following command to set this up:

      • CREATE TABLE articles (
      • id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      • title VARCHAR(50),
      • body TEXT,
      • created DATETIME DEFAULT NULL,
      • modified DATETIME DEFAULT NULL
      • );

      You've created a schema with five fields to describe blog articles:

• id: The unique identifier of an article, set up as a primary key.
• title: The title of an article, declared as a text field containing a maximum of 50 characters.
• body: The text of the article, declared as a TEXT field.
• created: The date and time of a record's creation.
• modified: The date and time of a record's modification.

      The output will be similar to:

      Output

      Query OK, 0 rows affected (0.01 sec)

      You have created a table for storing articles in the cakephp_blog database. Next, populate it with example articles by running the following command:

      • INSERT INTO articles (title, body, created)
      • VALUES ('Sample title', 'This is the article body.', NOW());

      You've added an example article with some sample data for the title and body text.

      You will see the following output:

      Output

Query OK, 1 row affected (0.01 sec)
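The schema and sample insert above can be exercised end to end with Python's built-in sqlite3 module (used here purely for illustration; the tutorial's database is MySQL, so `INT UNSIGNED AUTO_INCREMENT` is adapted to SQLite's `INTEGER PRIMARY KEY AUTOINCREMENT`):

```python
import sqlite3

# In-memory database standing in for MySQL
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE articles (
        id INTEGER PRIMARY KEY AUTOINCREMENT,  -- MySQL: INT UNSIGNED AUTO_INCREMENT
        title VARCHAR(50),
        body TEXT,
        created DATETIME DEFAULT NULL,
        modified DATETIME DEFAULT NULL
    )
""")
conn.execute(
    "INSERT INTO articles (title, body, created) "
    "VALUES ('Sample title', 'This is the article body.', datetime('now'))"
)
# The id column is assigned automatically, starting at 1
row = conn.execute("SELECT id, title FROM articles").fetchone()
```

This mirrors what the blog app will do through CakePHP's ORM: the database assigns the id, while title, body, and the timestamps come from the application.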

      In order to connect the CakePHP app to the database, you need to create a new database user and restrict its privileges:

      • GRANT ALL PRIVILEGES ON cakephp_blog.* TO 'cake_user'@'localhost' IDENTIFIED BY 'password';

This command grants the cake_user user all privileges on all tables in the cakephp_blog database.

      Remember to replace password with a strong password of your choice.

To update your database with the changes you've made, reload the privilege tables by running:

• FLUSH PRIVILEGES;

You've just created a new database user, cake_user, and granted it privileges only on the cakephp_blog database, thus tightening security.

      Exit the MySQL terminal by entering exit.

      You've created a new database with a schema, populated it with example data, and created an appropriate database user. In the next step, you will set up the CakePHP app itself.

      Step 3 — Creating the Blog Application

      In this section, you'll use Composer to install an example CakePHP app. It is advantageous to use Composer as it allows you to install CakePHP from your command line and it automatically sets up certain file permissions and configuration files.

      First, navigate to the Apache web server folder:

      • cd /var/www/example.com/html

Apache uses this directory to store files visible to the outside world. The root user owns this directory, so your non-root user, sammy, can't write anything to it. To correct this, you'll change the directory's ownership by running:

• sudo chown -R sammy /var/www/example.com/html

      You'll now create a new CakePHP app via Composer:

      • composer create-project --prefer-dist cakephp/app cake-blog

Here you have invoked composer and instructed it to create a new project with create-project. The --prefer-dist flag tells Composer to prefer a pre-packaged distribution of the code, cakephp/app is the application skeleton used as the template, and cake-blog is the name of the new application.

      Keep in mind that this command may take some time to finish.

      When Composer asks you to set up folder permissions, answer with y.

      In this section, you created a new CakePHP project with Composer. In the next step, you will configure Apache to point to the new app, which will make it viewable in your browser.

      Step 4 — Configuring Apache to Point to Your App

      Now, you'll configure Apache for your new CakePHP application, as well as enable .htaccess overriding, which is a CakePHP requirement. This entails editing Apache configuration files.

For actual routing to take place, you must instruct Apache to use .htaccess files. These are configuration files located in subdirectories of the application (where needed); Apache uses them to alter its global configuration for the requested part of the app. Among other tasks, they contain URL rewriting rules, which you'll be adjusting now.
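For reference, the rewrite rules CakePHP ships in its webroot/.htaccess look along these lines (an approximation; your installed version may differ slightly). They send every request for a non-existent file to CakePHP's front controller, which is why Apache must be allowed to honor them:

```apacheconf
<IfModule mod_rewrite.c>
    RewriteEngine On
    # Serve real files directly; route everything else to index.php
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]
</IfModule>
```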

      Start off by opening the Apache global configuration file (apache2.conf) using your text editor:

      • sudo nano /etc/apache2/apache2.conf

      Find the following block of code:

      /etc/apache2/apache2.conf

      ...
      <Directory /var/www/>
              Options Indexes FollowSymLinks
              AllowOverride None
              Require all granted
      </Directory>
      ...
      

      Change AllowOverride from None to All, like the following:

      /etc/apache2/apache2.conf

      ...
      <Directory /var/www/>
              Options Indexes FollowSymLinks
              AllowOverride All
              Require all granted
      </Directory>
      ...
      

      Save and close the file.

      Next, you will instruct Apache to point to the webroot directory in the CakePHP installation. Apache stores its configuration files on Ubuntu 18.04 in /etc/apache2/sites-available. These files govern how Apache processes web requests.

      During the Let's Encrypt prerequisite tutorial, you enabled HTTPS redirection; therefore only allowing HTTPS traffic. As a result, you'll only edit the example.com-le-ssl.conf file, which configures HTTPS traffic.

      First, open the example.com-le-ssl.conf configuration file:

      • sudo nano /etc/apache2/sites-available/example.com-le-ssl.conf

      You need to change only one line, the one that sets up DocumentRoot and tells Apache from where to serve content to the browser. Find the following line in the file:

      /etc/apache2/sites-available/example.com-le-ssl.conf

      DocumentRoot /var/www/example.com/html
      

      Edit this line to point to the CakePHP installation, by adding the following highlighted content:

      /etc/apache2/sites-available/example.com-le-ssl.conf

      DocumentRoot /var/www/example.com/html/cake-blog/webroot
      

      Save the file and exit the editor.

      Afterwards, restart Apache to reflect the new configuration:

      • sudo systemctl restart apache2

      Now you can visit https://your_domain/ in your browser.

      CakePHP can't connect to the database

      You'll see the default CakePHP success page. You'll notice that there is a block indicating that your application can't connect to the database. In the next step you'll resolve this by connecting your app to the database.

      You've now enabled .htaccess overriding, and pointed Apache to the correct webroot directory.

      Step 5 — Connecting Your App to the Database

      In this section, you will connect your database to your application so that your blog can access the articles. You'll edit CakePHP's default config/app.php file to set up the connection to your database.

      Navigate to the app folder:

      • cd /var/www/example.com/html/cake-blog

Open the config/app.php file by running the following command:

• sudo nano config/app.php

      Find the Datasources block (it looks like the following):

      /var/www/example.com/html/cake-blog/config/app.php

      ...
          'Datasources' => [
              'default' => [
            'className' => 'Cake\Database\Connection',
            'driver' => 'Cake\Database\Driver\Mysql',
                  'persistent' => false,
                  'host' => 'localhost',
                  ...
                  //'port' => 'non_standard_port_number',
                  'username' => 'cake_user',
                  'password' => 'password',
                  'database' => 'cakephp_blog',
      ...
      

For 'username', replace the default my_app with your database user's username (this tutorial uses cake_user), replace secret with your database user's password, and replace the second my_app with the database name (cakephp_blog in this tutorial).

      Save and close the file.

      Refresh the app in your browser and observe the success message under the Database section. If it shows an error, double check your configuration file against the preceding steps.

      CakePHP can connect to the database

      In this step, you've connected the CakePHP app to your MySQL database. In the next step, you'll generate the model, view, and controller files that will make up the user interface for interacting with the articles.

      Step 6 — Creating the Article User Interface

      In this section, you'll create a ready-to-use article interface by running the CakePHP bake command, which generates the article model. In CakePHP, baking generates all required models, views, and controllers in a basic state, ready for further development. Every database app must allow for create, read, update, and delete (CRUD) operations, which makes CakePHP's bake feature useful for automatically generating code for these operations. Within a couple of minutes, you get a full prototype of the app, ready to enter, store, and edit the data.

      Models, views, and controllers pertain to the MVC pattern. Their roles are:

      • Models represent the data structure.
      • Views present the data in a user-friendly way.
      • Controllers act upon user requests and serve as an intermediary between views and models.
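The division of labor can be sketched in a few lines (plain Python for illustration only; in CakePHP each role is a generated class or template, such as ArticlesTable, ArticlesController, and the .ctp view files):

```python
# Model: owns the data and how it is looked up
articles = [{"id": 1, "title": "Sample title", "body": "This is the article body."}]

def find_article(article_id):
    """Model-layer lookup, analogous to a CakePHP table class."""
    return next(a for a in articles if a["id"] == article_id)

# View: presents a record in a user-friendly way
def render(article):
    """View layer, analogous to a .ctp template."""
    return f"<h1>{article['title']}</h1><p>{article['body']}</p>"

# Controller: maps a request onto the model and picks a view
def view_action(article_id):
    """Controller action, analogous to ArticlesController::view()."""
    return render(find_article(article_id))
```

The controller never formats HTML itself and the view never queries data, which is the decoupling that lets developers work on each layer in parallel.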

      CakePHP stores its CLI executable under bin/cake. While it is mostly used for baking, it offers a slew of other commands, such as the ones for clearing various caches.

The bake command will inspect your database and generate models based on the table definitions it finds. Start off by running the following command:

• ./bin/cake bake all

      By passing the all command, you are instructing CakePHP to generate models, controllers, and views all at once.

      Your output will look like this:

      Output

Bake All
---------------------------------------------------------------
Possible model names based on your database:
- articles
Run `cake bake all [name]` to generate skeleton files.

      It has properly detected the articles definition from your database, and is offering to generate files for that model.

      Bake it by running:

      • ./bin/cake bake all articles

      Your output will look like this:

      Output

Bake All
---------------------------------------------------------------
One moment while associations are detected.
Baking table class for Articles...

Creating file /var/www/example.com/html/cake-blog/src/Model/Table/ArticlesTable.php
Wrote `/var/www/example.com/html/cake-blog/src/Model/Table/ArticlesTable.php`
Deleted `/var/www/example.com/html/cake-blog/src/Model/Table/empty`
Baking entity class for Article...

Creating file /var/www/example.com/html/cake-blog/src/Model/Entity/Article.php
Wrote `/var/www/example.com/html/cake-blog/src/Model/Entity/Article.php`
Deleted `/var/www/example.com/html/cake-blog/src/Model/Entity/empty`
Baking test fixture for Articles...

Creating file /var/www/example.com/html/cake-blog/tests/Fixture/ArticlesFixture.php
Wrote `/var/www/example.com/html/cake-blog/tests/Fixture/ArticlesFixture.php`
Deleted `/var/www/example.com/html/cake-blog/tests/Fixture/empty`
Bake is detecting possible fixtures...
Baking test case for App\Model\Table\ArticlesTable ...

Creating file /var/www/example.com/html/cake-blog/tests/TestCase/Model/Table/ArticlesTableTest.php
Wrote `/var/www/example.com/html/cake-blog/tests/TestCase/Model/Table/ArticlesTableTest.php`
Baking controller class for Articles...

Creating file /var/www/example.com/html/cake-blog/src/Controller/ArticlesController.php
Wrote `/var/www/example.com/html/cake-blog/src/Controller/ArticlesController.php`
Bake is detecting possible fixtures...
...
Baking `add` view template file...

Creating file /var/www/example.com/html/cake-blog/src/Template/Articles/add.ctp
Wrote `/var/www/example.com/html/cake-blog/src/Template/Articles/add.ctp`
Baking `edit` view template file...

Creating file /var/www/example.com/html/cake-blog/src/Template/Articles/edit.ctp
Wrote `/var/www/example.com/html/cake-blog/src/Template/Articles/edit.ctp`
Bake All complete.

      In the output, you will see that CakePHP has logged all the steps it took to create a functional boilerplate for the articles database.

      Now, navigate to the following in your browser:

      https://your_domain/articles
      

You'll see a list of the articles currently in the database, which includes one row titled Sample title. The bake command created this interface, which allows you to create, delete, and edit articles. As such, it provides a solid starting point for further development. You can try adding a new article by clicking the New Article link in the sidebar.

      The generated article user interface

      In this section, you generated model, view, and controller files with CakePHP's bake command. You can now create, delete, view, and edit your articles, with all your changes immediately saved to the database.

      In the next step, you will disable the debug mode.

      Step 7 — Disabling Debug Mode in CakePHP

      In this section, you will disable the debug mode in CakePHP. This is crucial because in debug mode the app shows detailed debugging information, which is a security risk. You'll complete this step after you've completed the development of your application.

Open the config/app.php file using your favorite editor:

      • sudo nano config/app.php

Near the start of the file you will find the line that sets the 'debug' mode. By default it is set to true; change it to false as follows:

      config/app.php

      ...
      'debug' => filter_var(env('DEBUG', false), FILTER_VALIDATE_BOOLEAN),
      ...
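Note that this line does not hard-code the value: it reads a DEBUG environment variable first and only falls back to false, so an operator can still re-enable debug mode per environment without editing the file. The following standalone sketch mimics that resolution logic (env_value here is a simplified stand-in for CakePHP's env() helper, which also consults $_SERVER):

```php
<?php
// Simplified stand-in for CakePHP's env() helper; getenv() returns
// false when the variable is unset, in which case the default is used.
function env_value(string $key, $default = null)
{
    $value = getenv($key);
    return $value === false ? $default : $value;
}

// With no DEBUG variable set, the default false wins:
var_dump(filter_var(env_value('DEBUG', false), FILTER_VALIDATE_BOOLEAN));

// Exporting DEBUG=true in the environment re-enables debug mode,
// because FILTER_VALIDATE_BOOLEAN accepts "true", "1", "on", "yes":
putenv('DEBUG=true');
var_dump(filter_var(env_value('DEBUG', false), FILTER_VALIDATE_BOOLEAN));
```

This is why changing the default to false is safe for production while remaining overridable during development.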
      

Once you've turned debug mode off, the home page, located under src/Template/Pages/home.ctp, will show an error.

      The debug mode error

      Note: If you haven't changed the default route or replaced the contents of home.ctp, the home page of your app will now show an error. This is because the default home page serves as a status dashboard during development, but does not work with debug mode disabled.

You've disabled debug mode. Any errors and exceptions that occur from now on, along with their stack traces, won't be shown to the end user, tightening the security of your application.

However, after disabling debug mode, your home.ctp will show an error. If you've completed this step only for the purposes of this tutorial, you can now redirect your home page to the articles listing interface while keeping debug mode disabled. You'll achieve this by editing the contents of home.ctp.

      Open home.ctp for editing:

      • sudo nano src/Template/Pages/home.ctp

      Replace its contents with the following:

      src/Template/Pages/home.ctp

      <meta http-equiv="refresh" content="0; url=./Articles" />
      <p><a href="./Articles">Click here if you are not redirected</a></p>
      

      This HTML redirects to the Articles controller. If the automatic redirection fails, there is also a link for users to follow.
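If you prefer a server-side solution over a meta refresh, another option is to point the application's root route directly at the Articles controller in config/routes.php. The line below is a sketch, assuming the default routes.php layout, where it would replace the existing '/' connection inside the Router::scope('/', ...) block:

```php
// In config/routes.php, inside the existing scope block: serve
// ArticlesController::index() at the site root instead of the Pages route.
$routes->connect('/', ['controller' => 'Articles', 'action' => 'index']);
```

With this route in place, visitors reach the articles listing without any client-side redirect, though the meta-refresh approach above works fine for this tutorial.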

      In this step, you disabled debug mode for security purposes and fixed the home page's error by redirecting the user to the blog post listing interface that the Articles controller provides.

      Conclusion

      You have now successfully set up a CakePHP application on a LAMP stack on Ubuntu 18.04. With CakePHP, you can create a database with as many tables as you like, and it will produce a live web editor for the data.

      The CakePHP cookbook offers detailed documentation regarding every aspect of CakePHP. The next step for your application could include implementing user authentication so that every user can make their own articles.
