
      How to Install Ghost CMS with Docker Compose on Ubuntu 18.04

      Updated by Linode

      Written by Linode


      Ghost is an open source blogging platform that helps you easily create a professional-looking online blog.

      Ghost’s 1.0.0 version was the first major, stable release of the Ghost content management system (CMS). Ghost includes a Markdown editor, refreshed user interface, new default theme design, and more. Ghost has been frequently updated since this major release, and the current version at time of publication is 1.25.5.

      In this guide you’ll deploy Ghost using Docker Compose on Ubuntu 18.04. Ghost is powered by JavaScript and Node.js. Using Docker to deploy Ghost will encapsulate all of Ghost’s Node dependencies and keep the deployment self-contained. The Docker Compose services are also fast to set up and easy to update.

      Before you Begin

      1. Familiarize yourself with Linode’s Getting Started guide and complete the steps for deploying and setting up a Linode running Ubuntu 18.04, including setting the hostname and timezone.

      2. This guide uses sudo wherever possible. Complete the sections of our Securing Your Server guide to create a standard user account, harden SSH access and remove unnecessary network services.


        Replace each instance of the example domain in this guide with your Ghost site’s domain name.

      3. Complete the Add DNS Records steps to register a domain name that will point to your Ghost Linode.

      4. Ensure your system is up to date:

        sudo apt update && sudo apt upgrade
      5. Your Ghost site will serve its content over HTTPS, so you will need to obtain an SSL/TLS certificate. Use Certbot to request and download a free certificate from Let’s Encrypt:

        sudo apt install software-properties-common
        sudo add-apt-repository ppa:certbot/certbot
        sudo apt update
        sudo apt install certbot
        sudo certbot certonly --standalone -d

        These commands will download a certificate to /etc/letsencrypt/live/ on your Linode.

        Why not use Certbot’s Docker container?

        When your certificate is periodically renewed, your web server needs to be reloaded in order to use the new certificate. This is usually accomplished by passing a web server reload command through Certbot’s --deploy-hook option.

        In your deployment, the web server will run in its own container, and the Certbot container would not be able to directly reload it. A workaround for this limitation would be needed to enable this architecture.

      6. Install Docker and Docker Compose before proceeding. If you haven’t used Docker before, review the Introduction to Docker, When and Why to Use Docker, and How to Use Docker Compose guides for some context on how these technologies work.

      Install Docker

      These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

      1. Remove any older installations of Docker that may be on your system:

        sudo apt remove docker docker-engine
      2. Make sure you have the necessary packages to allow the use of Docker’s repository:

        sudo apt install apt-transport-https ca-certificates curl software-properties-common
      3. Add Docker’s GPG key:

        curl -fsSL | sudo apt-key add -
      4. Verify the fingerprint of the GPG key:

        sudo apt-key fingerprint 0EBFCD88

        You should see output similar to the following:

        pub   4096R/0EBFCD88 2017-02-22
              Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
        uid                  Docker Release (CE deb) <>
        sub   4096R/F273FCD8 2017-02-22
      5. Add the stable Docker repository:

        sudo add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"
      6. Update your package index and install Docker CE:

        sudo apt update
        sudo apt install docker-ce
      7. Add your limited Linux user account to the docker group:

        sudo usermod -aG docker exampleuser

        You will need to restart your shell session for this change to take effect.

      8. Check that the installation was successful by running the built-in “Hello World” program:

        docker run hello-world

      Install Docker Compose

      1. Download the latest version of Docker Compose. Check the releases page and replace 1.21.2 in the command below with the version tagged as Latest release:

        sudo curl -L`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
      2. Set file permissions:

        sudo chmod +x /usr/local/bin/docker-compose

      Install Ghost

      The Ghost deployment has three components:

      • The Ghost service itself;
      • A database (MySQL) that will store your blog posts;
      • A web server (NGINX) that will proxy requests on HTTP and HTTPS to your Ghost service.

      These services are listed in a single Docker Compose file.

      Create the Docker Compose file

      1. Create and change to a directory to hold your new Docker Compose services:

        mkdir ghost && cd ghost
      2. Create a file named docker-compose.yml and open it in your text editor. Paste in the contents from the following snippet. Replace the example domain with your own, and insert a new database password where your_database_root_password appears. The values for database__connection__password and MYSQL_ROOT_PASSWORD should be the same:

        version: '3'
        services:
          ghost:
            image: ghost:latest
            restart: always
            depends_on:
              - db
            environment:
              database__client: mysql
              database__connection__host: db
              database__connection__user: root
              database__connection__password: your_database_root_password
              database__connection__database: ghost
            volumes:
              - /opt/ghost_content:/var/lib/ghost/content
          db:
            image: mysql:5.7
            restart: always
            environment:
              MYSQL_ROOT_PASSWORD: your_database_root_password
            volumes:
              - /opt/ghost_mysql:/var/lib/mysql
          nginx:
            build:
              context: ./nginx
              dockerfile: Dockerfile
            restart: always
            depends_on:
              - ghost
            ports:
              - "80:80"
              - "443:443"
            volumes:
              - /etc/letsencrypt/:/etc/letsencrypt/
              - /usr/share/nginx/html:/usr/share/nginx/html
      3. The Docker Compose file creates a few Docker bind mounts:

        • /var/lib/ghost/content and /var/lib/mysql inside your containers are mapped to /opt/ghost_content and /opt/ghost_mysql on the Linode. These locations store your Ghost content.

        • NGINX uses a bind mount for /etc/letsencrypt/ to access your Let’s Encrypt certificates.

        • NGINX also uses a bind mount for /usr/share/nginx/html so that it can access the Let’s Encrypt challenge files that are created when your certificate is renewed.

        Create directories for those bind mounts (except for /etc/letsencrypt/, which was already created when you first generated your certificate):

        sudo mkdir /opt/ghost_content
        sudo mkdir /opt/ghost_mysql
        sudo mkdir -p /usr/share/nginx/html

      Create the NGINX Docker Image

      The Docker Compose file relies on a customized NGINX image. This image will be packaged with the appropriate server block settings.

      1. Create a new nginx directory for this image:

        mkdir nginx
      2. Create a file named Dockerfile in the nginx directory and paste in the following contents:

        FROM nginx:latest
        COPY default.conf /etc/nginx/conf.d
      3. Create a file named default.conf in the nginx directory and paste in the following contents. Replace all instances of the example domain with your own domain:

        server {
          listen 80;
          listen [::]:80;

          # Useful for Let's Encrypt
          location /.well-known/acme-challenge/ { root /usr/share/nginx/html; allow all; }

          location / { return 301 https://$host$request_uri; }
        }

        server {
          listen 443 ssl http2;
          listen [::]:443 ssl http2;

          ssl_protocols TLSv1.2;
          ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
          ssl_prefer_server_ciphers on;
          ssl_session_cache shared:SSL:10m;

          ssl_certificate     /etc/letsencrypt/live/;
          ssl_certificate_key /etc/letsencrypt/live/;

          location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://ghost:2368;
          }
        }

        This configuration will redirect all requests on HTTP to HTTPS (except for Let’s Encrypt challenge requests), and all requests on HTTPS will be proxied to the Ghost service.

      Run and Test Your Site

      From the ghost directory, start the Ghost CMS by running all of the services defined in the docker-compose.yml file:

      docker-compose up -d

      Verify that your blog appears by loading your domain in a web browser. It may take a few minutes for Docker to start your services, so try refreshing if the page does not appear when you first load it.

      If your site doesn’t appear in your browser, review the logs generated by Docker for more information. To see these errors:

      1. Shut down your containers:

        cd ghost
        docker-compose down
      2. Run Docker Compose in an attached state so that you can view the logs generated by each container:

        docker-compose up
      3. To shut down your services and return to the command prompt, press CTRL-C.

      Complete the Setup

      To complete the setup process, navigate to the Ghost configuration page by appending /ghost to the end of your blog’s URL or IP address.

      1. On the welcome screen, click Create your account:

        Ghost Welcome Screen

      2. Enter your email, create a user and password, and enter a blog title:

        Create Your Account Screen

      3. Invite additional members to your team. If you’d prefer to skip this step, click I’ll do this later, take me to my blog! at the bottom of the page:

        Invite Your Team Screen

      4. Navigate to the Ghost admin area to create your first post, change your site’s theme, or configure additional settings:

        Ghost Admin Area

      Usage and Maintenance

      Because the option restart: always was assigned to your services in your docker-compose.yml file, you do not need to manually start your containers if you reboot your Linode. This option tells Docker Compose to automatically start your services when the server boots.

      Update Ghost

      Your docker-compose.yml specifies the latest version of the Ghost image, so it’s easy to update your Ghost version:

      docker-compose down
      docker-compose pull && docker-compose up -d

      Renew your Let’s Encrypt Certificate

      1. Open your Crontab in your editor:

        sudo crontab -e
      2. Add a line which will automatically invoke Certbot at 11PM every day, replacing the example domain with your own:

        0 23 * * *   certbot certonly -n --webroot -w /usr/share/nginx/html -d --deploy-hook='docker exec ghost_nginx_1 nginx -s reload'

        Certbot will only renew your certificate if its expiration date is within 30 days. Running this every night ensures that if something goes wrong at first, the script will have a number of chances to try again before the expiration.
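
        The renewal-window logic that makes a nightly cron job safe can be sketched as follows. This is an illustrative model of Certbot's behavior, not its implementation; the 30-day threshold is Certbot's documented default:

        ```python
        from datetime import date, timedelta

        RENEWAL_WINDOW_DAYS = 30  # Certbot renews only inside this window

        def should_renew(expiry: date, today: date) -> bool:
            """Return True when the certificate expires within the renewal window."""
            return expiry - today <= timedelta(days=RENEWAL_WINDOW_DAYS)

        # A certificate 44 days from expiry is left alone; one 10 days out is renewed.
        print(should_renew(date(2018, 10, 15), date(2018, 9, 1)))  # False
        print(should_renew(date(2018, 9, 11), date(2018, 9, 1)))   # True
        ```

        Because most nightly runs are no-ops, the job costs almost nothing while still giving many retry opportunities before expiration.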

      3. You can test your new job with the --dry-run option:

        sudo bash -c "certbot certonly --dry-run -n --webroot -w /usr/share/nginx/html -d --deploy-hook='docker exec ghost_nginx_1 nginx -s reload'"

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.


      How To Install and Use Composer on Debian 9


      Composer is a popular dependency management tool for PHP, created mainly to facilitate installation and updates for project dependencies. It will check which other packages a specific project depends on and install them for you, using the appropriate versions according to the project requirements.

      In this tutorial, you’ll install and get started with Composer on Debian 9.


      To complete this tutorial, you will need:

      Step 1 — Installing the Dependencies

      Before you download and install Composer, ensure your server has all dependencies installed.

      First, update the package manager cache by running:

      • sudo apt update

      Now, let's install the dependencies. We'll need curl in order to download Composer and php-cli for installing and running it. The php-mbstring package is necessary to provide functions for a library we'll be using. git is used by Composer for downloading project dependencies, and unzip for extracting zipped packages. Everything can be installed with the following command:

      • sudo apt install curl php-cli php-mbstring git unzip

      With the prerequisites installed, we can install Composer itself.

      Step 2 — Downloading and Installing Composer

      Composer provides an installer, written in PHP. We'll download it, verify that it's not corrupted, and then use it to install Composer.

      Make sure you're in your home directory, then retrieve the installer using curl:

      • cd ~
      • curl -sS -o composer-setup.php

      Next, verify that the installer matches the SHA-384 hash for the latest installer found on the Composer Public Keys / Signatures page. Copy the hash from that page and store it as a shell variable:

      • HASH=544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061

      Make sure that you substitute the latest hash for the highlighted value.

      Now execute the following PHP script to verify that the installation script is safe to run:

      • php -r "if (hash_file('SHA384', 'composer-setup.php') === '$HASH') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"

      You'll see the following output.


      Installer verified

      If you see Installer corrupt, you'll need to download the installation script again and double-check that you're using the correct hash. Then run the verification command again. Once you have a verified installer, you can continue.
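
      For illustration, the same integrity check can be expressed in Python. This is a hypothetical equivalent of the PHP one-liner above, not part of the official installer:

      ```python
      import hashlib

      def installer_matches(path: str, expected_sha384: str) -> bool:
          """Compare a file's SHA-384 digest against the published hash."""
          with open(path, "rb") as f:
              digest = hashlib.sha384(f.read()).hexdigest()
          return digest == expected_sha384
      ```

      As in the PHP version, you would discard the downloaded file and retry whenever the comparison fails.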

      To install composer globally, use the following command which will download and install Composer as a system-wide command named composer, under /usr/local/bin:

      • sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer

      You'll see the following output:


      All settings correct for using Composer
      Downloading...

      Composer (version 1.7.2) successfully installed to: /usr/local/bin/composer
      Use it: php /usr/local/bin/composer

      To test your installation, run:

      • composer

      And you'll see this output displaying Composer's version and arguments.


         ______
        / ____/___  ____ ___  ____  ____  ________  _____
       / /   / __ \/ __ `__ \/ __ \/ __ \/ ___/ _ \/ ___/
      / /___/ /_/ / / / / / / /_/ / /_/ (__  )  __/ /
      \____/\____/_/ /_/ /_/ .___/\____/____/\___/_/
                          /_/
      Composer version 1.7.2 2018-08-16 16:57:12

      Usage:
        command [options] [arguments]

      Options:
        -h, --help                     Display this help message
        -q, --quiet                    Do not output any message
        -V, --version                  Display this application version
            --ansi                     Force ANSI output
            --no-ansi                  Disable ANSI output
        -n, --no-interaction           Do not ask any interactive question
            --profile                  Display timing and memory usage information
            --no-plugins               Whether to disable plugins.
        -d, --working-dir=WORKING-DIR  If specified, use the given directory as working directory.
        -v|vv|vvv, --verbose           Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
      . . .

      This verifies that Composer installed successfully on your system and is available system-wide.

      Note: If you prefer to have separate Composer executables for each project you host on this server, you can install it locally, on a per-project basis. Users of NPM will be familiar with this approach. This method is also useful when your system user doesn't have permission to install software system-wide.

      To do this, use the command php composer-setup.php. This will generate a composer.phar file in your current directory, which can be executed with ./composer.phar.

      Now let's look at using Composer to manage dependencies.

      Step 3 — Using Composer in a PHP Project

      PHP projects often depend on external libraries, and managing those dependencies and their versions can be tricky. Composer solves that by tracking your dependencies and making it easy for others to install them.

      In order to use Composer in your project, you'll need a composer.json file. The composer.json file tells Composer which dependencies it needs to download for your project, and which versions of each package are allowed to be installed. This is extremely important to keep your project consistent and avoid installing unstable versions that could potentially cause backwards compatibility issues.

      You don't need to create this file manually - it's easy to run into syntax errors when you do so. Composer auto-generates the composer.json file when you add a dependency to your project using the require command. You can add additional dependencies in the same way, without the need to manually edit this file.

      The process of using Composer to install a package as dependency in a project involves the following steps:

      • Identify what kind of library the application needs.
      • Research a suitable open source library on Packagist, the official package repository for Composer.
      • Choose the package you want to depend on.
      • Run composer require to include the dependency in the composer.json file and install the package.

      Let's try this out with a demo application.

      The goal of this application is to transform a given sentence into a URL-friendly string - a slug. This is commonly used to convert page titles to URL paths (like the final portion of the URL for this tutorial).

      Let's start by creating a directory for our project. We'll call it slugify:

      • cd ~
      • mkdir slugify
      • cd slugify

      Now it's time to search for a package that can help us generate slugs. If you search for the term "slug" on Packagist, you'll get a result similar to this:

      Packagist Search: easy-slug/easy-slug, muffin/slug, ddd/slug, zelenin/slug, webcastle/slug, anomaly/slug-field_type

      You'll see two numbers on the right side of each package in the list. The number on the top represents how many times the package was installed, and the number on the bottom shows how many times a package was starred on GitHub. You can reorder the search results based on these numbers (look for the two icons on the right side of the search bar). Generally speaking, packages with more installations and more stars tend to be more stable, since so many people are using them. It's also important to check the package description for relevance to make sure it's what you need.

      We need a simple string-to-slug converter. From the search results, the package cocur/slugify seems to be a good match, with a reasonable amount of installations and stars. (The package is a bit further down the page than the screenshot shows.)

      Packages on Packagist have a vendor name and a package name. Each package has a unique identifier (a namespace) in the same format GitHub uses for its repositories, in the form vendor/package. The library we want to install uses the namespace cocur/slugify. You need the namespace in order to require the package in your project.

      Now that you know exactly which package you want to install, run composer require to include it as a dependency and also generate the composer.json file for the project:

      • composer require cocur/slugify

      You'll see this output as Composer downloads the dependency:


      Using version ^3.1 for cocur/slugify
      ./composer.json has been created
      Loading composer repositories with package information
      Updating dependencies (including require-dev)
      Package operations: 1 install, 0 updates, 0 removals
        - Installing cocur/slugify (v3.1): Downloading (100%)
      Writing lock file
      Generating autoload files

      As you can see from the output, Composer automatically decided which version of the package to use. If you check your project's directory now, it will contain two new files: composer.json and composer.lock, and a vendor directory:


      total 12
      -rw-r--r-- 1 sammy sammy   59 Sep  7 16:03 composer.json
      -rw-r--r-- 1 sammy sammy 2934 Sep  7 16:03 composer.lock
      drwxr-xr-x 4 sammy sammy 4096 Sep  7 16:03 vendor

      The composer.lock file is used to store information about which versions of each package are installed, and ensure the same versions are used if someone else clones your project and installs its dependencies. The vendor directory is where the project dependencies are located. The vendor folder doesn't need to be committed into version control - you only need to include the composer.json and composer.lock files.

      When installing a project that already contains a composer.json file, run composer install in order to download the project's dependencies.

      Let's take a quick look at version constraints. If you check the contents of your composer.json file, you'll see something like this:


      { "require": { "cocur/slugify": "^3.1" } }

      You might notice the special character ^ before the version number in composer.json. Composer supports several different constraints and formats for defining the required package version, in order to provide flexibility while also keeping your project stable. The caret (^) operator used by the auto-generated composer.json file is the recommended operator for maximum interoperability, following semantic versioning. In this case, it defines 3.1 as the minimum compatible version, and allows updates to any future version below 4.0.

      Generally speaking, you won't need to tamper with version constraints in your composer.json file. However, some situations might require that you manually edit the constraints: for instance, when a major new version of your required library is released and you want to upgrade, or when the library you want to use doesn't follow semantic versioning.

      Here are some examples to give you a better understanding of how Composer version constraints work:

      Constraint Meaning Example Versions Allowed
      ^1.0 >= 1.0 < 2.0 1.0, 1.2.3, 1.9.9
      ^1.1.0 >= 1.1.0 < 2.0 1.1.0, 1.5.6, 1.9.9
      ~1.0 >= 1.0 < 2.0.0 1.0, 1.4.1, 1.9.9
      ~1.0.0 >= 1.0.0 < 1.1 1.0.0, 1.0.4, 1.0.9
      1.2.1 1.2.1 1.2.1
      1.* >= 1.0 < 2.0 1.0.0, 1.4.5, 1.9.9
      1.2.* >= 1.2 < 1.3 1.2.0, 1.2.3, 1.2.9
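
      As a rough illustration of how these operators bound versions, the following sketch checks whether a version falls inside a constraint's range. This is a simplification for intuition only, not Composer's actual resolver (it ignores pre-release tags and stability flags):

      ```python
      def in_range(version: str, low: str, high: str) -> bool:
          """True when low <= version < high, comparing numerically part by part."""
          v = tuple(int(p) for p in version.split("."))
          lo = tuple(int(p) for p in low.split("."))
          hi = tuple(int(p) for p in high.split("."))
          n = max(len(v), len(lo), len(hi))
          pad = lambda t: t + (0,) * (n - len(t))  # pad short tuples with zeros
          return pad(lo) <= pad(v) < pad(hi)

      # ^3.1 allows >= 3.1 and < 4.0:
      print(in_range("3.9.9", "3.1", "4.0"))    # True
      print(in_range("4.0.0", "3.1", "4.0"))    # False
      # ~1.0.0 allows >= 1.0.0 and < 1.1:
      print(in_range("1.0.4", "1.0.0", "1.1"))  # True
      print(in_range("1.1.0", "1.0.0", "1.1"))  # False
      ```

      The numeric, part-by-part comparison mirrors how semantic versions order: the major number dominates, then minor, then patch.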

      For a more in-depth view of Composer version constraints, see the official documentation.

      Next, let's look at how to load dependencies automatically with Composer.

      Step 4 — Including the Autoload Script

      Since PHP itself doesn't automatically load classes, Composer provides an autoload script that you can include in your project to get autoloading for free. This makes it much easier to work with your dependencies.

      The only thing you need to do is include the vendor/autoload.php file in your PHP scripts before any class instantiation. This file is automatically generated by Composer when you add your first dependency.

      Let's try it out in our application. Create the file test.php and open it in your text editor:

      Add the following code which brings in the vendor/autoload.php file, loads the cocur/slugify dependency, and uses it to create a slug:


      <?php
      require __DIR__ . '/vendor/autoload.php';

      use Cocur\Slugify\Slugify;

      $slugify = new Slugify();
      echo $slugify->slugify('Hello World, this is a long sentence and I need to make a slug from it!');

      Save the file and exit your editor.

      Now run the script:

      • php test.php

      This produces the output hello-world-this-is-a-long-sentence-and-i-need-to-make-a-slug-from-it.
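
      To see what slugification does under the hood, here is a minimal, hypothetical re-implementation in Python. The real cocur/slugify library handles transliteration of accented characters and many more edge cases:

      ```python
      import re

      def slugify(text: str) -> str:
          """Lowercase the text and collapse runs of non-alphanumerics into hyphens."""
          slug = re.sub(r"[^a-z0-9]+", "-", text.lower())
          return slug.strip("-")  # drop leading/trailing hyphens left by punctuation

      print(slugify("Hello World, this is a long sentence and I need to make a slug from it!"))
      # hello-world-this-is-a-long-sentence-and-i-need-to-make-a-slug-from-it
      ```

      The output matches the result produced by the PHP script above.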

      Dependencies need updates when new versions come out, so let's look at how to handle that.

      Step 5 — Updating Project Dependencies

      Whenever you want to update your project dependencies to more recent versions, run the update command:

      • composer update

      This will check for newer versions of the libraries you required in your project. If a newer version is found and it's compatible with the version constraint defined in the composer.json file, Composer will replace the previous version installed. The composer.lock file will be updated to reflect these changes.

      You can also update one or more specific libraries by specifying them like this:

      • composer update vendor/package vendor2/package2

      Be sure to check in your composer.json and composer.lock files after you update your dependencies so that others can install these newer versions.


      Composer is a powerful tool every PHP developer should have in their utility belt. In this tutorial you installed Composer on Debian 9 and used it in a simple project. You now know how to install and update dependencies.

      Beyond providing an easy and reliable way for managing project dependencies, it also establishes a new de facto standard for sharing and discovering PHP packages created by the community.


      Network Transfer Quota

      Updated by Linode




      Your network transfer quota represents the total monthly amount of traffic your services can use as part of your Linode plans’ basic pricing. Each Linode plan includes a specified amount of transfer. Transfer amounts are listed for each plan on the Linode pricing page.

      Network Transfer Pool

      Your monthly network transfer quota for your services is for your entire account, not for any individual Linode. The transfer amounts provided by all of your Linodes’ plans are added together, and your account’s monthly quota is equal to the total. This is also referred to as your network transfer pool. Each of your Linodes is able to use bandwidth from this pool.

      If an individual Linode’s traffic exceeds the network transfer amount specified by its plan, but the total transfer used between all of your Linodes is still less than your pool total, then you will not be charged overages.

      Linodes from different data centers all use the same transfer pool.

      Network Transfer Pool Example

      If you have two Linodes:

      • Linode A, which comes with 1TB transfer
      • Linode B, which comes with 2TB transfer

      Your monthly pool total, or your account’s quota, would be 3TB. If Linode A uses 1.5TB of traffic during the month, and Linode B uses 1TB of traffic, then the total used between them is 2.5TB. The 1.5TB used by Linode A is greater than the 1TB of transfer specified by its plan, but the 2.5TB total is less than the account quota, so no overages are billed.
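
      The pool arithmetic above can be sketched as follows (a toy illustration using the example's figures, in TB; actual billing is computed by Linode, not by you):

      ```python
      def pool_overage_tb(plan_quotas: list, usage: list) -> float:
          """Return usage beyond the pooled account quota, in TB (0 if within quota)."""
          pool_total = sum(plan_quotas)  # quotas from every Linode's plan are pooled
          total_used = sum(usage)        # per-Linode overuse alone doesn't matter
          return max(0, total_used - pool_total)

      # Linode A (1 TB plan) used 1.5 TB; Linode B (2 TB plan) used 1 TB.
      print(pool_overage_tb([1, 2], [1.5, 1]))  # 0 -> no overage billed
      ```

      Only the account-wide totals are compared, which is why Linode A exceeding its own plan's transfer doesn't trigger overages here.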

      Which Traffic Applies to the Transfer Quota

      The transfer quota only considers traffic on your Linodes’ public addresses. Traffic over the private network does not count against your monthly quota.

      All inbound traffic to your Linodes is free and will not count against your quota; only traffic that your Linodes emit on their public addresses is counted.

      Transfer Resets, Proration, and Overages

      Your transfer quota is reset at the beginning of each month.

      Why is My Linode’s Network Transfer less than My Plan’s Transfer?

      Your account’s transfer quota is prorated based on your Linodes’ creation and deletion dates.

      A Linode you create mid-month will include a lower transfer amount than what’s listed on the pricing page, depending on how much time remains in the month.

      For example, if you create a Linode half-way through the month, it will come with half of the transfer listed for your Linode’s plan. Your transfer quota is reset at the beginning of the next month, and you will see the full transfer amount at that time.

      If you remove a Linode before the end of the month, then the transfer it contributes to your pool will also be reduced according to the date the Linode was deleted.

      For example, if you create a Linode on the first of the month, then your pool will initially include the full transfer amount for that Linode’s plan. If you remove that Linode half-way through the month, then your pool total will be updated and reduced by half the Linode plan’s transfer.
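
      Proration can be modeled as a simple fraction of the month. This is an illustrative sketch; Linode computes the actual prorated amounts based on the exact creation and deletion dates:

      ```python
      def prorated_transfer(plan_tb: float, days_active: int, days_in_month: int) -> float:
          """Transfer a Linode contributes to the pool, scaled by how long it exists."""
          return plan_tb * days_active / days_in_month

      # A 2 TB plan created half-way through a 30-day month contributes 1 TB.
      print(prorated_transfer(2.0, 15, 30))  # 1.0
      ```

      The same fraction applies in reverse when a Linode is deleted mid-month.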

      How Overages Work

      If you use all available bandwidth in your network transfer pool, you can continue to use your Linodes normally, but you will be charged $0.02 for each additional GB at the end of your billing cycle.
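
      The resulting charge works out as follows (a sketch; $0.02/GB is the rate quoted above):

      ```python
      OVERAGE_RATE_PER_GB = 0.02  # USD per GB beyond the pooled quota

      def overage_charge(pool_quota_gb: float, used_gb: float) -> float:
          """Cost of traffic beyond the pooled quota, billed at the end of the cycle."""
          excess = max(0.0, used_gb - pool_quota_gb)
          return excess * OVERAGE_RATE_PER_GB

      # A 3 TB (3000 GB) pool with 3100 GB used is 100 GB over: $2.00.
      print(overage_charge(3000, 3100))  # 2.0
      ```

      Usage at or below the quota costs nothing extra, so the charge is zero in the common case.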

      How to Mitigate Overages

      If you have gone over your quota, or you think you may before the end of the month, you can consider one of the following options to raise your pool total and avoid overages:

      1. Increase the size of an existing Linode to access more monthly transfer bandwidth.

      2. Purchase one or more additional Linodes with the specific purpose of increasing your pool total. You may want to delete any Linodes created for this purpose at the end of the month if you don’t anticipate needing a higher pool total in the future.

      When taking one of these actions, keep in mind that the quota amount that will be added is prorated according to the current date.

      View Network Pool Usage

      Linode recommends that you monitor your network pool usage throughout the month. You can check your network usage for your current billing cycle via the Linode Manager or the Linode CLI.

      Linode Manager

      1. Log in to the Linode Manager and view your Linode Dashboard.

      2. Under the This Month’s Network Transfer Pool heading, a graphic displays (in GB) the transfer used, the unused pool amount remaining, and your account’s quota for the month.

      Linode CLI

      • To view your network utilization (in GB) for the current month, issue the following command:

        linode-cli account transfer


        You will need to generate a Personal Access Token and install the Linode CLI before being able to use the CLI. See the Linode CLI guide for more information.

      More Information

      Read the Billing and Payments guide for an overview of Linode billing.

