
      Grow Your Business with Scalable Apps Using DigitalOcean App Platform



      About the Talk

      This session will help you determine when, why, and how to scale your application to meet customer demand, using DigitalOcean’s App Platform.

      What You’ll Learn

      • What is scaling and why is it important?
      • How to utilize DigitalOcean App Platform when growing your business
      • Best practices for scaling, and how to deal with increased traffic

      Resources

      Slides

      Presenter

      Mason is currently a Developer Advocate at DigitalOcean who specializes in cloud infrastructure, distributed systems, and Python. Prior to his work at DigitalOcean, he was an SRE helping to build and maintain a highly available hybrid multi-cloud PaaS. He is an avid programmer, speaker, educator, and writer, an organizer of PyTexas, and an active contributor to open source projects. In his spare time, he enjoys reading, camping, kayaking, and exploring new places.




      How To Deploy a Scalable and Secure Django Application with Kubernetes


      Introduction

      In this tutorial you’ll deploy a containerized Django polls application into a Kubernetes cluster.

      Django is a powerful web framework that can help you get your Python application off the ground quickly. It includes several convenient features like an object-relational mapper, user authentication, and a customizable administrative interface for your application. It also includes a caching framework and encourages clean app design through its URL Dispatcher and Template system.

      In How to Build a Django and Gunicorn Application with Docker, the Django Tutorial Polls application was modified according to the Twelve-Factor methodology for building scalable, cloud-native web apps. This containerized setup was scaled and secured with an Nginx reverse-proxy and Let’s Encrypt-signed TLS certificates in How To Scale and Secure a Django Application with Docker, Nginx, and Let’s Encrypt. In this final tutorial in the From Containers to Kubernetes with Django series, the modernized Django polls application will be deployed into a Kubernetes cluster.

      Kubernetes is a powerful open-source container orchestrator that automates the deployment, scaling and management of containerized applications. Kubernetes objects like ConfigMaps and Secrets allow you to centralize and decouple configuration from your containers, while controllers like Deployments automatically restart failed containers and enable quick scaling of container replicas. TLS encryption is enabled with an Ingress object and the ingress-nginx open-source Ingress Controller. The cert-manager Kubernetes add-on renews and issues certificates using the free Let’s Encrypt certificate authority.

      Prerequisites

      To follow this tutorial, you will need:

      • A Kubernetes 1.15+ cluster with role-based access control (RBAC) enabled. This setup will use a DigitalOcean Kubernetes cluster, but you are free to create a cluster using another method.
      • The kubectl command-line tool installed on your local machine and configured to connect to your cluster. You can read more about installing kubectl in the official documentation. If you are using a DigitalOcean Kubernetes cluster, please refer to How to Connect to a DigitalOcean Kubernetes Cluster to learn how to connect to your cluster using kubectl.
      • A registered domain name. This tutorial will use your_domain.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
      • An ingress-nginx Ingress Controller and the cert-manager TLS certificate manager installed into your cluster and configured to issue TLS certificates. To learn how to install and configure an Ingress with cert-manager, please consult How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.
      • A DNS A record with your_domain.com pointing to the Ingress Load Balancer’s public IP address. If you are using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.
      • An S3 object storage bucket such as a DigitalOcean Space to store your Django project’s static files and a set of Access Keys for this Space. To learn how to create a Space, consult the How to Create Spaces product documentation. To learn how to create Access Keys for Spaces, consult Sharing Access to Spaces with Access Keys. With minor changes, you can use any object storage service that the django-storages plugin supports.
      • A PostgreSQL server instance, database, and user for your Django app. With minor changes, you can use any database that Django supports.
      • A Docker Hub account and public repository. For more information on creating these, please see Repositories from the Docker documentation.
      • The Docker engine installed on your local machine. Please see How to Install and Use Docker on Ubuntu 18.04 to learn more.

      Once you have these components set up, you’re ready to begin with this guide.

      Step 1 — Cloning and Configuring the Application

      In this step we’ll clone the application code from GitHub and configure settings like database credentials and object storage keys.

      The application code and Dockerfile can be found in the polls-docker branch of the Django Tutorial Polls App GitHub repository. This repo contains code for the Django documentation’s sample Polls application, which teaches you how to build a polling application from scratch.

      The polls-docker branch contains a Dockerized version of this Polls app. To learn how the Polls app was modified to work effectively in a containerized environment, please see How to Build a Django and Gunicorn Application with Docker.

      Begin by using git to clone the polls-docker branch of the Django Tutorial Polls App GitHub repository to your local machine:

      • git clone --single-branch --branch polls-docker https://github.com/do-community/django-polls.git

      Navigate into the django-polls directory:

      • cd django-polls

      This directory contains the Django application Python code, a Dockerfile that Docker will use to build the container image, as well as an env file that contains a list of environment variables to be passed into the container’s running environment. Inspect the Dockerfile:

      • cat Dockerfile

      django-polls/Dockerfile

      FROM python:3.7.4-alpine3.10

      ADD django-polls/requirements.txt /app/requirements.txt

      RUN set -ex \
          && apk add --no-cache --virtual .build-deps postgresql-dev build-base \
          && python -m venv /env \
          && /env/bin/pip install --upgrade pip \
          && /env/bin/pip install --no-cache-dir -r /app/requirements.txt \
          && runDeps="$(scanelf --needed --nobanner --recursive /env \
              | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
              | sort -u \
              | xargs -r apk info --installed \
              | sort -u)" \
          && apk add --virtual rundeps $runDeps \
          && apk del .build-deps

      ADD django-polls /app
      WORKDIR /app

      ENV VIRTUAL_ENV /env
      ENV PATH /env/bin:$PATH

      EXPOSE 8000

      CMD ["gunicorn", "--bind", ":8000", "--workers", "3", "mysite.wsgi"]

      This Dockerfile uses the official Python 3.7.4 Docker image as a base, and installs Django and Gunicorn’s Python package requirements, as defined in the django-polls/requirements.txt file. It then removes some unnecessary build files, copies the application code into the image, and sets the execution PATH. Finally, it declares that port 8000 will be used to accept incoming container connections, and runs gunicorn with 3 workers, listening on port 8000.

      To learn more about each of the steps in this Dockerfile, please see Step 6 of How to Build a Django and Gunicorn Application with Docker.

      Now, build the image using docker build:

      • docker build -t polls .

      We name the image polls using the -t flag and pass in the current directory as a build context, the set of files to reference when constructing the image.

      After Docker builds and tags the image, list available images using docker images:

      • docker images

      You should see the polls image listed:

      Output

      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      polls               latest              80ec4f33aae1        2 weeks ago         197MB
      python              3.7.4-alpine3.10    f309434dea3a        8 months ago        98.7MB
      

      Before we run the Django container, we need to configure its running environment using the env file present in the current directory. This file will be passed into the docker run command used to run the container, and Docker will inject the configured environment variables into the container’s running environment.

      Open the env file with nano or your favorite editor:

      • nano env

      django-polls/env

      DJANGO_SECRET_KEY=
      DEBUG=True
      DJANGO_ALLOWED_HOSTS=
      DATABASE_ENGINE=postgresql_psycopg2
      DATABASE_NAME=polls
      DATABASE_USERNAME=
      DATABASE_PASSWORD=
      DATABASE_HOST=
      DATABASE_PORT=
      STATIC_ACCESS_KEY_ID=
      STATIC_SECRET_KEY=
      STATIC_BUCKET_NAME=
      STATIC_ENDPOINT_URL=
      DJANGO_LOGLEVEL=info
      

      Fill in missing values for the following keys:

      • DJANGO_SECRET_KEY: Set this to a unique, unpredictable value, as detailed in the Django docs. One method of generating this key is provided in Adjusting the App Settings of the Scalable Django App tutorial; another quick option is shown in the example after this list.
      • DJANGO_ALLOWED_HOSTS: This variable secures the app and prevents HTTP Host header attacks. For testing purposes, set this to *, a wildcard that will match all hosts. In production you should set this to your_domain.com. To learn more about this Django setting, consult Core Settings from the Django docs.
      • DATABASE_USERNAME: Set this to the PostgreSQL database user created in the prerequisite steps.
      • DATABASE_NAME: Set this to polls or the name of the PostgreSQL database created in the prerequisite steps.
      • DATABASE_PASSWORD: Set this to the PostgreSQL user password created in the prerequisite steps.
      • DATABASE_HOST: Set this to your database’s hostname.
      • DATABASE_PORT: Set this to your database’s port.
      • STATIC_ACCESS_KEY_ID: Set this to your Space or object storage’s access key.
      • STATIC_SECRET_KEY: Set this to your Space or object storage’s access key Secret.
      • STATIC_BUCKET_NAME: Set this to your Space name or object storage bucket.
      • STATIC_ENDPOINT_URL: Set this to the appropriate Spaces or object storage endpoint URL, for example https://your_space_name.nyc3.digitaloceanspaces.com if your Space is located in the nyc3 region.
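
      As a convenient alternative, you can generate a suitable random key locally with Python’s standard library secrets module (a minimal sketch, assuming Python 3 is available on your machine), then paste the output into DJANGO_SECRET_KEY:

      • python3 -c "import secrets; print(secrets.token_urlsafe(50))"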

      Once you’ve finished editing, save and close the file.

      In the next step we’ll run the configured container locally and create the database schema. We’ll also upload static assets like stylesheets and images to object storage.

      Step 2 — Creating the Database Schema and Uploading Assets to Object Storage

      With the container built and configured, use docker run to override the CMD set in the Dockerfile and create the database schema using the manage.py makemigrations and manage.py migrate commands:

      • docker run --env-file env polls sh -c "python manage.py makemigrations && python manage.py migrate"

      We run the polls:latest container image, pass in the environment variable file we just modified, and override the Dockerfile command with sh -c "python manage.py makemigrations && python manage.py migrate", which will create the database schema defined by the app code.

      If you’re running this for the first time you should see:

      Output

      No changes detected
      Operations to perform:
        Apply all migrations: admin, auth, contenttypes, polls, sessions
      Running migrations:
        Applying contenttypes.0001_initial... OK
        Applying auth.0001_initial... OK
        Applying admin.0001_initial... OK
        Applying admin.0002_logentry_remove_auto_add... OK
        Applying admin.0003_logentry_add_action_flag_choices... OK
        Applying contenttypes.0002_remove_content_type_name... OK
        Applying auth.0002_alter_permission_name_max_length... OK
        Applying auth.0003_alter_user_email_max_length... OK
        Applying auth.0004_alter_user_username_opts... OK
        Applying auth.0005_alter_user_last_login_null... OK
        Applying auth.0006_require_contenttypes_0002... OK
        Applying auth.0007_alter_validators_add_error_messages... OK
        Applying auth.0008_alter_user_username_max_length... OK
        Applying auth.0009_alter_user_last_name_max_length... OK
        Applying auth.0010_alter_group_name_max_length... OK
        Applying auth.0011_update_proxy_permissions... OK
        Applying polls.0001_initial... OK
        Applying sessions.0001_initial... OK

      This indicates that the database schema has successfully been created.

      If you’re running migrate a subsequent time, Django will perform a no-op unless the database schema has changed.

      Next, we’ll run another instance of the app container and use an interactive shell inside of it to create an administrative user for the Django project.

      • docker run -i -t --env-file env polls sh

      This will provide you with a shell prompt inside of the running container which you can use to create the Django user:

      • python manage.py createsuperuser

      Enter a username, email address, and password for your user; after creating the user, hit CTRL+D to exit and stop the container.

      Finally, we’ll generate the static files for the app and upload them to the DigitalOcean Space using collectstatic. Note that this may take a bit of time to complete.

      • docker run --env-file env polls sh -c "python manage.py collectstatic --noinput"

      After these files are generated and uploaded, you’ll receive the following output.

      Output

      121 static files copied.

      We can now run the app:

      • docker run --env-file env -p 80:8000 polls

      Output

      [2019-10-17 21:23:36 +0000] [1] [INFO] Starting gunicorn 19.9.0
      [2019-10-17 21:23:36 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
      [2019-10-17 21:23:36 +0000] [1] [INFO] Using worker: sync
      [2019-10-17 21:23:36 +0000] [7] [INFO] Booting worker with pid: 7
      [2019-10-17 21:23:36 +0000] [8] [INFO] Booting worker with pid: 8
      [2019-10-17 21:23:36 +0000] [9] [INFO] Booting worker with pid: 9

      Here, we run the default command defined in the Dockerfile, gunicorn --bind :8000 --workers 3 mysite.wsgi, and publish port 80 on your local machine so that it maps to port 8000 of the polls container.

      You should now be able to navigate to the polls app using your web browser by typing http://localhost in the URL bar. Since there is no route defined for the / path, you’ll likely receive a 404 Page Not Found error, which is expected.

      Navigate to http://localhost/polls to see the Polls app interface:

      Polls Apps Interface

      To view the administrative interface, visit http://localhost/admin. You should see the Polls app admin authentication window:

      Polls Admin Auth Page

      Enter the administrative username and password you created with the createsuperuser command.

      After authenticating, you can access the Polls app’s administrative interface:

      Polls Admin Main Interface

      Note that static assets for the admin and polls apps are being delivered directly from object storage. To confirm this, consult Testing Spaces Static File Delivery.
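
      You can also check this from the command line by requesting one of the collected assets directly from your bucket. The exact URL depends on your bucket name, region, and static file path; assuming a Space named your_space_name in the nyc3 region and the django-polls static prefix used by this app, a spot check might look like:

      • curl -I https://your_space_name.nyc3.digitaloceanspaces.com/django-polls/static/polls/style.css

      A 200 OK response confirms that the stylesheet is served from object storage.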

      When you are finished exploring, hit CTRL+C in the terminal window running the Docker container to kill the container.

      With the Django app Docker image tested, static assets uploaded to object storage, and database schema configured and ready for use with your app, you’re ready to upload your Django app image to an image registry like Docker Hub.

      Step 3 — Pushing the Django App Image to Docker Hub

      To roll your app out on Kubernetes, your app image must be uploaded to a registry like Docker Hub. Kubernetes will pull the app image from its repository and then deploy it to your cluster.

      You can use a private Docker registry, like DigitalOcean Container Registry, currently free in Early Access, or a public Docker registry like Docker Hub. Docker Hub also allows you to create private Docker repositories. A public repository allows anyone to see and pull the container images, while a private repository allows you to restrict access to you and your team members.

      In this tutorial we’ll push the Django image to the public Docker Hub repository created in the prerequisites. You can also push your image to a private repository, but pulling images from a private repository is beyond the scope of this article. To learn more about authenticating Kubernetes with Docker Hub and pulling private images, please see Pull an Image from a Private Registry from the Kubernetes docs.

      Begin by logging in to Docker Hub on your local machine:

      • docker login

      Output

      Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
      Username:

      Enter your Docker Hub username and password to login.

      The Django image currently has the polls:latest tag. To push it to your Docker Hub repo, re-tag the image with your Docker Hub username and repo name:

      • docker tag polls:latest your_dockerhub_username/your_dockerhub_repo_name:latest

      Push the image to the repo:

      • docker push sammy/sammy-django:latest

      In this tutorial the Docker Hub username is sammy and the repo name is sammy-django. You should replace these values with your own Docker Hub username and repo name.

      You’ll see some output that updates as image layers are pushed to Docker Hub.

      Now that your image is available to Kubernetes on Docker Hub, you can begin rolling it out in your cluster.

      Step 4 — Setting Up the ConfigMap

      When we ran the Django container locally, we passed the env file into docker run to inject configuration variables into the runtime environment. On Kubernetes, configuration variables can be injected using ConfigMaps and Secrets.

      ConfigMaps should be used to store non-confidential configuration information like app settings, and Secrets should be used for sensitive information like API keys and database credentials. They are both injected into containers in a similar fashion, but Secrets have additional access control and security features like encryption at rest. Secrets also store data in base64, while ConfigMaps store data in plain text.

      To begin, create a directory called yaml in which we’ll store our Kubernetes manifests, and navigate into it:

      • mkdir yaml
      • cd yaml

      Open a file called polls-configmap.yaml in nano or your preferred text editor:

      • nano polls-configmap.yaml

      Paste in the following ConfigMap manifest:

      polls-configmap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: polls-config
      data:
        DJANGO_ALLOWED_HOSTS: "*"
        STATIC_ENDPOINT_URL: "https://your_space_name.space_region.digitaloceanspaces.com"
        STATIC_BUCKET_NAME: "your_space_name"
        DJANGO_LOGLEVEL: "info"
        DEBUG: "True"
        DATABASE_ENGINE: "postgresql_psycopg2"
      

      We’ve extracted the non-sensitive configuration from the env file modified in Step 1 and pasted it into a ConfigMap manifest called polls-config. Be sure to copy in the same values you entered into the env file in the previous step.

      For testing purposes leave DJANGO_ALLOWED_HOSTS as * to disable Host header-based filtering. In a production environment you should set this to your app’s domain.

      When you’re done editing the file, save and close it.

      Create the ConfigMap in your cluster using kubectl apply:

      • kubectl apply -f polls-configmap.yaml

      Output

      configmap/polls-config created

      With the ConfigMap created, we’ll create the Secret used by our app in the next step.

      Step 5 — Setting Up the Secret

      Secret values must be base64-encoded, which means creating Secret objects in your cluster is slightly more involved than creating ConfigMaps. You can repeat the process from the previous step, manually base64-encoding Secret values and pasting them into a manifest file. You can also create them using an environment variable file, kubectl create, and the --from-env-file flag, which we’ll do in this step.
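
      For reference, if you ever take the manual route, you can base64-encode an individual value on the command line (shown here with a placeholder value). Note the -n flag, which prevents a trailing newline from being encoded into the Secret:

      • echo -n 'your_database_password' | base64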

      We’ll once again use the env file from Step 1, removing variables inserted into the ConfigMap. Make a copy of the env file called polls-secrets in the yaml directory:

      • cp ../env ./polls-secrets

      Edit the file in your preferred editor:

      • nano polls-secrets

      polls-secrets

      DJANGO_SECRET_KEY=
      DEBUG=True
      DJANGO_ALLOWED_HOSTS=
      DATABASE_ENGINE=postgresql_psycopg2
      DATABASE_NAME=polls
      DATABASE_USERNAME=
      DATABASE_PASSWORD=
      DATABASE_HOST=
      DATABASE_PORT=
      STATIC_ACCESS_KEY_ID=
      STATIC_SECRET_KEY=
      STATIC_BUCKET_NAME=
      STATIC_ENDPOINT_URL=
      DJANGO_LOGLEVEL=info
      

      Delete all the variables inserted into the ConfigMap manifest. When you’re done, it should look like this:

      polls-secrets

      DJANGO_SECRET_KEY=your_secret_key
      DATABASE_NAME=polls
      DATABASE_USERNAME=your_django_db_user
      DATABASE_PASSWORD=your_django_db_user_password
      DATABASE_HOST=your_db_host
      DATABASE_PORT=your_db_port
      STATIC_ACCESS_KEY_ID=your_space_access_key
      STATIC_SECRET_KEY=your_space_access_key_secret
      

      Be sure to use the same values used in Step 1. When you’re done, save and close the file.

      Create the Secret in your cluster using kubectl create secret:

      • kubectl create secret generic polls-secret --from-env-file=polls-secrets

      Output

      secret/polls-secret created

      Here we create a Secret object called polls-secret and pass in the secrets file we just created.

      You can inspect the Secret using kubectl describe:

      • kubectl describe secret polls-secret

      Output

      Name:         polls-secret
      Namespace:    default
      Labels:       <none>
      Annotations:  <none>

      Type:  Opaque

      Data
      ====
      DATABASE_PASSWORD:     8 bytes
      DATABASE_PORT:         5 bytes
      DATABASE_USERNAME:     5 bytes
      DJANGO_SECRET_KEY:     14 bytes
      STATIC_ACCESS_KEY_ID:  20 bytes
      STATIC_SECRET_KEY:     43 bytes
      DATABASE_HOST:         47 bytes
      DATABASE_NAME:         5 bytes
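
      The describe output only lists byte counts. If you’d like to sanity-check one of the stored values, you can extract and decode a single key with kubectl and base64 (shown here for DATABASE_NAME):

      • kubectl get secret polls-secret -o jsonpath='{.data.DATABASE_NAME}' | base64 --decode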

      At this point you’ve stored your app’s configuration in your Kubernetes cluster using the Secret and ConfigMap object types. We’re now ready to deploy the app into the cluster.

      Step 6 — Rolling Out the Django App Using a Deployment

      In this step you’ll create a Deployment for your Django app. A Kubernetes Deployment is a controller that can be used to manage stateless applications in your cluster. A controller is a control loop that regulates workloads by scaling them up or down. Controllers also restart and clear out failed containers.

      Deployments control one or more Pods, the smallest deployable unit in a Kubernetes cluster. Pods enclose one or more containers. To learn more about the different types of workloads you can launch, please review An Introduction to Kubernetes.

      Begin by opening a file called polls-deployment.yaml in your favorite editor:

      • nano polls-deployment.yaml

      Paste in the following Deployment manifest:

      polls-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: polls-app
        labels:
          app: polls
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: polls
        template:
          metadata:
            labels:
              app: polls
          spec:
            containers:
              - image: your_dockerhub_username/app_repo_name:latest
                name: polls
                envFrom:
                - secretRef:
                    name: polls-secret
                - configMapRef:
                    name: polls-config
                ports:
                  - containerPort: 8000
                    name: gunicorn
      

      Fill in the appropriate container image name, referencing the Django Polls image you pushed to Docker Hub in Step 3.

      Here we define a Kubernetes Deployment called polls-app and label it with the key-value pair app: polls. We specify that we’d like to run two replicas of the Pod defined below the template field.

      Using envFrom with secretRef and configMapRef, we specify that all the data from the polls-secret Secret and polls-config ConfigMap should be injected into the containers as environment variables. The ConfigMap and Secret keys become the environment variable names.

      Finally, we expose containerPort 8000 and name it gunicorn.

      To learn more about configuring Kubernetes Deployments, please consult Deployments from the Kubernetes documentation.

      When you’re done editing the file, save and close it.

      Create the Deployment in your cluster using kubectl apply -f:

      • kubectl apply -f polls-deployment.yaml
      Output

      deployment.apps/polls-app created
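
      If you’d like to block until the rollout completes, for example in a CI script, you can also use kubectl rollout status:

      • kubectl rollout status deployment/polls-app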

      Check that the Deployment rolled out correctly using kubectl get:

      • kubectl get deploy polls-app

      Output

      NAME        READY   UP-TO-DATE   AVAILABLE   AGE
      polls-app   2/2     2            2           6m38s

      If you encounter an error or something isn’t quite working, you can use kubectl describe to inspect the failed Deployment:

      • kubectl describe deploy polls-app

      You can inspect the two Pods using kubectl get pod:

      • kubectl get pod

      Output

      NAME                         READY   STATUS    RESTARTS   AGE
      polls-app-847f8ccbf4-2stf7   1/1     Running   0          6m42s
      polls-app-847f8ccbf4-tqpwm   1/1     Running   0          6m57s
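
      To double-check that the ConfigMap and Secret data were actually injected, you can print a container’s environment with kubectl exec, substituting one of the Pod names from the output above (yours will differ):

      • kubectl exec polls-app-847f8ccbf4-2stf7 -- env | grep DATABASE_NAME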

      Two replicas of your Django app are now up and running in the cluster. To access the app, you need to create a Kubernetes Service, which we’ll do next.

      Step 7 — Allowing External Access using a Service

      In this step, you’ll create a Service for your Django app. A Kubernetes Service is an abstraction that allows you to expose a set of running Pods as a network service. Using a Service you can create a stable endpoint for your app that does not change as Pods die and are recreated.

      There are multiple Service types, including ClusterIP Services, which expose the Service on a cluster-internal IP, NodePort Services, which expose the Service on each Node at a static port called the NodePort, and LoadBalancer Services, which provision a cloud load balancer to direct external traffic to the Pods in your cluster (via NodePorts, which it creates automatically). To learn more about these, please see Service from the Kubernetes docs.

      In our final setup we’ll use a ClusterIP Service that is exposed using an Ingress and the Ingress Controller set up in the prerequisites for this guide. For now, to test that everything is functioning correctly, we’ll create a temporary NodePort Service to access the Django app.

      Begin by creating a file called polls-svc.yaml using your favorite editor:

      • nano polls-svc.yaml

      Paste in the following Service manifest:

      polls-svc.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: polls
        labels:
          app: polls
      spec:
        type: NodePort
        selector:
          app: polls
        ports:
          - port: 8000
            targetPort: 8000
      

      Here we create a NodePort Service called polls and give it the app: polls label. We then select backend Pods with the app: polls label and target port 8000 on those Pods.

      When you’re done editing the file, save and close it.

      Roll out the Service using kubectl apply:

      • kubectl apply -f polls-svc.yaml

      Output

      service/polls created

      Confirm that your Service was created using kubectl get svc:

      • kubectl get svc

      Output

      NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
      polls   NodePort   10.245.197.189   <none>        8000:32654/TCP   59s

      This output shows the Service’s cluster-internal IP and NodePort (32654). To connect to the Service, we need the external IP addresses of our cluster nodes:

      • kubectl get node -o wide

      Output

      NAME                   STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION          CONTAINER-RUNTIME
      pool-7no0qd9e0-364fd   Ready    <none>   27h   v1.18.8   10.118.0.5    203.0.113.1   Debian GNU/Linux 10 (buster)   4.19.0-10-cloud-amd64   docker://18.9.9
      pool-7no0qd9e0-364fi   Ready    <none>   27h   v1.18.8   10.118.0.4    203.0.113.2   Debian GNU/Linux 10 (buster)   4.19.0-10-cloud-amd64   docker://18.9.9
      pool-7no0qd9e0-364fv   Ready    <none>   27h   v1.18.8   10.118.0.3    203.0.113.3   Debian GNU/Linux 10 (buster)   4.19.0-10-cloud-amd64   docker://18.9.9

      In your web browser, visit your Polls app using any Node’s external IP address and the NodePort. Given the output above, the app’s URL would be: http://203.0.113.1:32654/polls.

      You should see the same Polls app interface that you accessed locally in Step 2:

      Polls Apps Interface

      You can repeat the same test using the /admin route: http://203.0.113.1:32654/admin. You should see the same Admin interface as before:

      Polls Admin Auth Page

      At this stage, you’ve rolled out two replicas of the Django Polls app container using a Deployment. You’ve also created a stable network endpoint for these two replicas, and made it externally accessible using a NodePort Service.

      The final step in this tutorial is to secure external traffic to your app using HTTPS. To do this we’ll use the ingress-nginx Ingress Controller installed in the prerequisites, and create an Ingress object to route external traffic to the polls Kubernetes Service.

      Step 8 — Configuring HTTPS Using Nginx Ingress and cert-manager

      Kubernetes Ingresses allow you to flexibly route traffic from outside your Kubernetes cluster to Services inside of your cluster. This is accomplished using Ingress objects, which define rules for routing HTTP and HTTPS traffic to Kubernetes Services, and Ingress Controllers, which implement the rules by load balancing traffic and routing it to the appropriate backend Services.

      In the prerequisites you installed the ingress-nginx Ingress Controller and cert-manager TLS certificate automation add-on. You also set up staging and production ClusterIssuers for your domain using the Let’s Encrypt certificate authority, and created an Ingress to test certificate issuance and TLS encryption to two dummy backend Services. Before continuing with this step, you should delete the echo-ingress Ingress created in the prerequisite tutorial:

      • kubectl delete ingress echo-ingress

      If you’d like you can also delete the dummy Services and Deployments using kubectl delete svc and kubectl delete deploy, but this is not essential to complete this tutorial.

      You should also have created a DNS A record with your_domain.com pointing to the Ingress Load Balancer’s public IP address. If you’re using a DigitalOcean Load Balancer, you can find this IP address in the Load Balancers section of the Control Panel. If you are also using DigitalOcean to manage your domain’s DNS records, consult How to Manage DNS Records to learn how to create A records.

      If you’re using DigitalOcean Kubernetes, also ensure that you’ve implemented the workaround described in Step 5 of How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.

      Once you have an A record pointing to the Ingress Controller Load Balancer, you can create an Ingress for your_domain.com and the polls Service.

      Open a file called polls-ingress.yaml using your favorite editor:

      • nano polls-ingress.yaml

      Paste in the following Ingress manifest:

      polls-ingress.yaml

      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      metadata:
        name: polls-ingress
        annotations:
          kubernetes.io/ingress.class: "nginx"
          cert-manager.io/cluster-issuer: "letsencrypt-staging"
      spec:
        tls:
        - hosts:
          - your_domain.com
          secretName: polls-tls
        rules:
        - host: your_domain.com
          http:
            paths:
            - backend:
                serviceName: polls
                servicePort: 8000
      

      We create an Ingress object called polls-ingress and annotate it to instruct the control plane to use the ingress-nginx Ingress Controller and staging ClusterIssuer. We also enable TLS for your_domain.com and store the certificate and private key in a secret called polls-tls. Finally, we define a rule to route traffic for the your_domain.com host to the polls Service on port 8000.

      When you’re done editing the file, save and close it.

      Create the Ingress in your cluster using kubectl apply:

      • kubectl apply -f polls-ingress.yaml

      Output

      ingress.networking.k8s.io/polls-ingress created

      You can use kubectl describe to track the state of the Ingress you just created:

      • kubectl describe ingress polls-ingress

      Output

      Name:             polls-ingress
      Namespace:        default
      Address:          workaround.your_domain.com
      Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
      TLS:
        polls-tls terminates your_domain.com
      Rules:
        Host             Path  Backends
        ----             ----  --------
        your_domain.com
                               polls:8000 (10.244.0.207:8000,10.244.0.53:8000)
      Annotations:       cert-manager.io/cluster-issuer: letsencrypt-staging
                         kubernetes.io/ingress.class: nginx
      Events:
        Type    Reason             Age   From                      Message
        ----    ------             ----  ----                      -------
        Normal  CREATE             51s   nginx-ingress-controller  Ingress default/polls-ingress
        Normal  CreateCertificate  51s   cert-manager              Successfully created Certificate "polls-tls"
        Normal  UPDATE             25s   nginx-ingress-controller  Ingress default/polls-ingress

      You can also run a describe on the polls-tls Certificate to further confirm its successful creation:

      • kubectl describe certificate polls-tls

      Output

      . . .
      Events:
        Type    Reason     Age    From          Message
        ----    ------     ----   ----          -------
        Normal  Issuing    3m33s  cert-manager  Issuing certificate as Secret does not exist
        Normal  Generated  3m32s  cert-manager  Stored new private key in temporary Secret resource "polls-tls-v9lv9"
        Normal  Requested  3m32s  cert-manager  Created new CertificateRequest resource "polls-tls-drx9c"
        Normal  Issuing    2m58s  cert-manager  The certificate has been successfully issued

      This confirms that the TLS certificate was successfully issued and HTTPS encryption is now active for your_domain.com.

      Given that we used the staging ClusterIssuer, most web browsers won’t trust the fake Let’s Encrypt certificate that it issued, so navigating to your_domain.com will bring you to an error page.
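
      If you’d like to confirm from the command line which authority issued the certificate, you can inspect it with openssl (assuming openssl is installed locally). With the staging ClusterIssuer, the issuer line should reference Fake LE Intermediate X1:

      • echo | openssl s_client -connect your_domain.com:443 -servername your_domain.com 2>/dev/null | openssl x509 -noout -issuer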

      To send a test request, we’ll use wget from the command line:

      • wget -O - http://your_domain.com/polls

      Output

      . . .
      ERROR: cannot verify your_domain.com's certificate, issued by ‘CN=Fake LE Intermediate X1’:
        Unable to locally verify the issuer's authority.
      To connect to your_domain.com insecurely, use `--no-check-certificate'.

      We’ll use the suggested --no-check-certificate flag to bypass certificate validation:

      • wget --no-check-certificate -q -O - http://your_domain.com/polls

      Output

      <link rel="stylesheet" type="text/css" href="https://your_space.nyc3.digitaloceanspaces.com/django-polls/static/polls/style.css">

      <p>No polls are available.</p>

      This output shows the HTML for the /polls interface page, also confirming that the stylesheet is being served from object storage.

      Now that you’ve successfully tested certificate issuance using the staging ClusterIssuer, you can modify the Ingress to use the production ClusterIssuer.

      Open polls-ingress.yaml for editing once again:

      • nano polls-ingress.yaml

      Modify the cluster-issuer annotation:

      polls-ingress.yaml

      apiVersion: networking.k8s.io/v1beta1
      kind: Ingress
      metadata:
        name: polls-ingress
        annotations:
          kubernetes.io/ingress.class: "nginx"
          cert-manager.io/cluster-issuer: "letsencrypt-prod"
      spec:
        tls:
        - hosts:
          - your_domain.com
          secretName: polls-tls
        rules:
        - host: your_domain.com
          http:
            paths:
            - backend:
                serviceName: polls
                servicePort: 8000
      

      When you’re done, save and close the file. Update the Ingress using kubectl apply:

      • kubectl apply -f polls-ingress.yaml

      Output

      ingress.networking.k8s.io/polls-ingress configured

      You can use kubectl describe certificate polls-tls and kubectl describe ingress polls-ingress to track the certificate issuance status:

      • kubectl describe ingress polls-ingress

      Output

      . . .
      Events:
        Type    Reason             Age                From                      Message
        ----    ------             ----               ----                      -------
        Normal  CREATE             23m                nginx-ingress-controller  Ingress default/polls-ingress
        Normal  CreateCertificate  23m                cert-manager              Successfully created Certificate "polls-tls"
        Normal  UPDATE             76s (x2 over 22m)  nginx-ingress-controller  Ingress default/polls-ingress
        Normal  UpdateCertificate  76s                cert-manager              Successfully updated Certificate "polls-tls"

      The above output confirms that the new production certificate was successfully issued and stored in the polls-tls Secret.
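
      As an additional check, the wget request from earlier in this step should now succeed without the --no-check-certificate flag, since command-line clients and browsers trust the production Let’s Encrypt certificate:

      • wget -q -O - https://your_domain.com/polls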

      Navigate to your_domain.com/polls in your web browser to confirm that HTTPS encryption is enabled and everything is working as expected. You should see the Polls app interface:

      Polls Apps Interface

      Verify that HTTPS encryption is active in your web browser. If you’re using Google Chrome, arriving at the above page without any errors confirms that everything is working correctly. In addition, you should see a padlock in the URL bar. Clicking on the padlock will allow you to inspect the Let’s Encrypt certificate details.

      As a final cleanup task, you can optionally switch the polls Service type from NodePort to the internal-only ClusterIP type.

      Modify polls-svc.yaml using your editor:

      • nano polls-svc.yaml

      Change the type from NodePort to ClusterIP:

      polls-svc.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: polls
        labels:
          app: polls
      spec:
        type: ClusterIP
        selector:
          app: polls
        ports:
          - port: 8000
            targetPort: 8000
      

      When you’re done editing the file, save and close it.

      Roll out the changes using kubectl apply:

      • kubectl apply -f polls-svc.yaml --force

      Output

      service/polls configured

      Confirm that your Service was modified using kubectl get svc:

      • kubectl get svc

      Output

      NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
      polls   ClusterIP   10.245.203.186   <none>        8000/TCP   22s

      This output shows that the Service type is now ClusterIP. The only way to access it is via your domain and the Ingress created in this step.

      Conclusion

      In this tutorial you deployed a scalable, HTTPS-secured Django app into a Kubernetes cluster. Static content is served directly from object storage, and the number of running Pods can be quickly scaled up or down using the replicas field in the polls-app Deployment manifest.
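
      For example, instead of editing the manifest and re-running kubectl apply, you could scale to four replicas imperatively with kubectl scale (a quick alternative; the manifest remains the declarative source of truth):

      • kubectl scale deployment polls-app --replicas=4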

      If you’re using a DigitalOcean Space, you can also enable delivery of static assets via a content delivery network and create a custom subdomain for your Space. Please consult Enabling CDN from How to Set Up a Scalable Django App with DigitalOcean Managed Databases and Spaces to learn more.

      To review the rest of the series, please visit our From Containers to Kubernetes with Django series page.




      How to Set Up a Scalable Laravel 6 Application using Managed Databases and Object Storage


      Introduction

      When scaling web applications horizontally, the first difficulties you’ll typically face are dealing with file storage and data persistence. This is mainly because it is hard to keep variable data consistent across multiple application nodes; appropriate strategies must be in place to make sure data created on one node is immediately available to the other nodes in a cluster.

      A practical way of solving the consistency problem is by using managed databases and object storage systems. The former outsources data persistence to a managed database, and the latter provides a remote storage service where you can keep static files and variable content such as images uploaded by users. Each node can then connect to these services at the application level.

      The following diagram demonstrates how such a setup can be used for horizontal scalability in the context of PHP applications:

      Laravel at scale diagram

      In this guide, we will update an existing Laravel 6 application to prepare it for horizontal scalability by connecting it to a managed MySQL database and setting up an S3-compatible object store to save user-generated files. By the end, you will have a travel list application running on an Nginx + PHP-FPM web server:

      Travellist v1.0

      Note: this guide uses DigitalOcean Managed MySQL and Spaces to demonstrate a scalable application setup using managed databases and object storage. The instructions contained here should work in a similar way for other service providers.

      Prerequisites

      To begin this tutorial, you will first need the following prerequisites:

      • Access to an Ubuntu 18.04 server as a non-root user with sudo privileges, and an active firewall installed on your server. To set these up, please refer to our Initial Server Setup Guide for Ubuntu 18.04.
      • Nginx and PHP-FPM installed and configured on your server, as explained in steps 1 and 3 of How to Install LEMP on Ubuntu 18.04. You should skip the step where MySQL is installed.
      • Composer installed on your server, as explained in steps 1 and 2 of How to Install and Use Composer on Ubuntu 18.04.
      • Admin credentials to a managed MySQL 8 database. For this guide, we’ll be using a DigitalOcean Managed MySQL cluster, but the instructions here should work similarly for other managed database services.
      • A set of API keys with read and write permissions to an S3-compatible object storage service. In this guide, we’ll use DigitalOcean Spaces, but you are free to use a provider of your choice.
      • The s3cmd tool installed and configured to connect to your object storage drive. For instructions on how to set this up for DigitalOcean Spaces, please refer to our product documentation.

      Step 1 — Installing the MySQL 8 Client

      The default Ubuntu apt repositories come with the MySQL 5 client, which is not compatible with the MySQL 8 server we’ll be using in this guide. To install the compatible MySQL client, we’ll need to use the MySQL APT Repository provided by Oracle.

      Begin by navigating to the MySQL APT Repository page in your web browser. Find the Download button in the lower-right corner and click through to the next page. This page will prompt you to log in or sign up for an Oracle web account. You can skip that and instead look for the link that says No thanks, just start my download. Copy the link address and go back to your terminal window.

      This link should point to a .deb package that will set up the MySQL APT Repository in your server. After installing it, you’ll be able to use apt to install the latest releases of MySQL. We’ll use curl to download this file into a temporary location.

      Go to your server’s tmp folder:

      • cd /tmp

      Now download the package with curl, using the URL you copied from the MySQL APT Repository page:

      • curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb

      After the download is finished, you can use dpkg to install the package:

      • sudo dpkg -i mysql-apt-config_0.8.13-1_all.deb

      You will be presented with a screen where you can choose which MySQL version you’d like to select as default, as well as which MySQL components you’re interested in:

      MySQL APT Repository Install

      You don’t need to change anything here, because the default options will install the repositories we need. Select “Ok” and the configuration will be finished.

      Next, you’ll need to update your apt cache with:

      • sudo apt update

      Now we can finally install the MySQL 8 client with:

      • sudo apt install mysql-client

      Once that command finishes, check the software version number to ensure that you have the latest release:

      • mysql --version

      You’ll see output like this:

      Output

      mysql Ver 8.0.18 for Linux on x86_64 (MySQL Community Server - GPL)

      In the next step, we’ll use the MySQL client to connect to your managed MySQL server and prepare the database for the application.

      Step 2 — Creating a New MySQL User and Database

      At the time of this writing, the native MySQL PHP library mysqlnd doesn’t support caching_sha2_password, the default authentication method for MySQL 8. We’ll need to create a new user with the mysql_native_password authentication method in order to connect our Laravel application to the MySQL 8 server. We’ll also create a dedicated database for our demo application.

      To get started, log in to your managed MySQL server using an admin account. Replace the highlighted values with your own MySQL user, host, and port:

      • mysql -u MYSQL_USER -p -h MYSQL_HOST -P MYSQL_PORT

      When prompted, provide the admin user’s password. After logging in, you will have access to the MySQL 8 server command line interface.

      First, we’ll create a new database for the application. Run the following command to create a new database named travellist:

      • CREATE DATABASE travellist;

      Next, we’ll create a new user and set a password, using mysql_native_password as default authentication method for this user. You are encouraged to replace the highlighted values with values of your own, and to use a strong password:

      • CREATE USER 'travellist-user'@'%' IDENTIFIED WITH mysql_native_password BY 'MYSQL_PASSWORD';

      Now we need to give this user permission over our application database:

      • GRANT ALL ON travellist.* TO 'travellist-user'@'%';

      You can now exit the MySQL prompt with:

      • EXIT;

      You now have a dedicated database and a compatible user to connect from your Laravel application. In the next step, we’ll get the application code and set up configuration details, so your app can connect to your managed MySQL database.
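
      Before moving on, you can optionally confirm that the new user can reach the database by logging in with it directly, using the same host and port as before:

      • mysql -u travellist-user -p -h MYSQL_HOST -P MYSQL_PORT travellist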

      In this guide, we’ll use Laravel Migrations and database seeds to set up our application tables. If you need to migrate an existing local database to a DigitalOcean Managed MySQL database, please refer to our documentation on How to Import MySQL Databases into DigitalOcean Managed Databases.

      Step 3 — Setting Up the Demo Application

      To get started, we’ll fetch the demo Laravel application from its GitHub repository. Feel free to inspect the contents of the application before running the next commands.

      The demo application is a travel bucket list app that was initially developed in our guide on How to Install and Configure Laravel with LEMP on Ubuntu 18.04. The updated app now contains visual improvements including travel photos that can be uploaded by a visitor, and a world map. It also introduces a database migration script and database seeds to create the application tables and populate them with sample data, using artisan commands.

      To obtain the application code that is compatible with this tutorial, we’ll download the 1.1 release from the project’s repository on GitHub. We’ll save the downloaded zip file as travellist.zip inside our home directory:

      • cd ~
      • curl -L https://github.com/do-community/travellist-laravel-demo/archive/1.1.zip -o travellist.zip

      Now, unzip the contents of the application and rename its directory with:

      • unzip travellist.zip
      • mv travellist-laravel-demo-1.1 travellist

      Navigate to the travellist directory:

      • cd travellist

      Before going ahead, we’ll need to install a few PHP modules that are required by the Laravel framework, namely php-xml, php-mbstring, and php-bcmath. To install these packages, run:

      • sudo apt install unzip php-xml php-mbstring php-bcmath

      To install the application dependencies, run:

      • composer install

      You will see output similar to this:

      Output

      Loading composer repositories with package information
      Installing dependencies (including require-dev) from lock file
      Package operations: 80 installs, 0 updates, 0 removals
        - Installing doctrine/inflector (v1.3.0): Downloading (100%)
        - Installing doctrine/lexer (1.1.0): Downloading (100%)
        - Installing dragonmantank/cron-expression (v2.3.0): Downloading (100%)
        - Installing erusev/parsedown (1.7.3): Downloading (100%)
      ...
      Generating optimized autoload files
      > Illuminate\Foundation\ComposerScripts::postAutoloadDump
      > @php artisan package:discover --ansi
      Discovered Package: beyondcode/laravel-dump-server
      Discovered Package: fideloper/proxy
      Discovered Package: laravel/tinker
      Discovered Package: nesbot/carbon
      Discovered Package: nunomaduro/collision
      Package manifest generated successfully.

      The application dependencies are now installed. Next, we’ll configure the application to connect to the managed MySQL Database.

      Creating the .env configuration file and setting the App Key

      We’ll now create a .env file containing variables that will be used to configure the Laravel application on a per-environment basis. The application includes an example file that we can copy and then modify to reflect our environment settings.

      Copy the .env.example file to a new file named .env:

      • cp .env.example .env

      Now we need to set the application key. This key is used to encrypt session data, and should be set to a unique, 32-character-long string. We can generate this key automatically with the artisan tool:

      • php artisan key:generate

      Let’s edit the environment configuration file to set up the database details. Open the .env file using your command line editor of choice. Here, we will be using nano:

      • nano .env

      Look for the database credentials section. The following variables need your attention:

      • DB_HOST: your managed MySQL server host.
      • DB_PORT: your managed MySQL server port.
      • DB_DATABASE: the name of the application database we created in Step 2.
      • DB_USERNAME: the database user we created in Step 2.
      • DB_PASSWORD: the password for the database user we defined in Step 2.

      Update the highlighted values with your own managed MySQL info and credentials:

      ...
      DB_CONNECTION=mysql
      DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      ...
      

      Save and close the file by typing CTRL+X then Y and ENTER when you’re done editing.

      Now that the application is configured to connect to the MySQL database, we can use Laravel’s command line tool artisan to create the database tables and populate them with sample data.

      Migrating and populating the database

      We’ll now use Laravel Migrations and database seeds to set up the application tables. This will help us determine if our database configuration works as expected.

      To execute the migration script that will create the tables used by the application, run:

      • php artisan migrate

      You will see output similar to this:

      Output

      Migration table created successfully.
      Migrating: 2019_09_19_123737_create_places_table
      Migrated:  2019_09_19_123737_create_places_table (0.26 seconds)
      Migrating: 2019_10_14_124700_create_photos_table
      Migrated:  2019_10_14_124700_create_photos_table (0.42 seconds)

      To populate the database with sample data, run:

      • php artisan db:seed

      You will see output like this:

      Output

      Seeding: PlacesTableSeeder
      Seeded:  PlacesTableSeeder (0.86 seconds)
      Database seeding completed successfully.

      The application tables are now created and populated with sample data.

      To finish the application setup, we also need to create a symbolic link to the public storage folder that will host the travel photos we’re using in the application. You can do that using the artisan tool:

      • php artisan storage:link

      Output

      The [public/storage] directory has been linked.

      This will create a symbolic link inside the public directory pointing to storage/app/public, where we’ll save the travel photos. To check that the link was created and where it points to, you can run:

      • ls -la public/

      You’ll see output like this:

      Output

      total 36
      drwxrwxr-x  5 sammy sammy 4096 Oct 25 14:59 .
      drwxrwxr-x 12 sammy sammy 4096 Oct 25 14:58 ..
      -rw-rw-r--  1 sammy sammy  593 Oct 25 06:29 .htaccess
      drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 css
      -rw-rw-r--  1 sammy sammy    0 Oct 25 06:29 favicon.ico
      drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 img
      -rw-rw-r--  1 sammy sammy 1823 Oct 25 06:29 index.php
      drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 js
      -rw-rw-r--  1 sammy sammy   24 Oct 25 06:29 robots.txt
      lrwxrwxrwx  1 sammy sammy   41 Oct 25 14:59 storage -> /home/sammy/travellist/storage/app/public
      -rw-rw-r--  1 sammy sammy 1194 Oct 25 06:29 web.config

      Running the test server (optional)

      You can use the artisan serve command to quickly verify that everything is set up correctly within the application, before having to configure a full-featured web server like Nginx to serve the application for the long term.

      We’ll use port 8000 to temporarily serve the application for testing. If you have the UFW firewall enabled on your server, you should first allow access to this port with:

      • sudo ufw allow 8000

      Now, to run the built in PHP server that Laravel exposes through the artisan tool, run:

      • php artisan serve --host=0.0.0.0 --port=8000

      This command will block your terminal until interrupted with CTRL+C. It uses the built-in PHP web server to serve the application for test purposes on all network interfaces, using port 8000.

      Now go to your browser and access the application using the server’s domain name or IP address on port 8000:

      http://server_domain_or_IP:8000
      

      You will see the following page:

      Travellist v1.0

      If you see this page, it means the application is successfully pulling data about locations and photos from the configured managed database. The image files are still stored on the local disk, but we’ll change this in a later step of this guide.

      When you are finished testing the application, you can stop the serve command by hitting CTRL+C.

      Don’t forget to close port 8000 again if you are running UFW on your server:

      • sudo ufw deny 8000

      Step 4 — Configuring Nginx to Serve the Application

      Although the built-in PHP web server is very useful for development and testing purposes, it is not intended to be used as a long-term solution to serve PHP applications. Using a full-featured web server like Nginx is the recommended way of doing that.

      To get started, we’ll move the application folder to /var/www, which is the usual location for web applications running on Nginx. First, use the mv command to move the application folder with all its contents to /var/www/travellist:

      • sudo mv ~/travellist /var/www/travellist

      Now we need to give the web server user write access to the storage and bootstrap/cache folders, where Laravel stores application-generated files. We’ll set these permissions using setfacl, a command line utility that allows for more robust and fine-grained permission settings in files and folders.

      To give the web server user read, write, and execute (rwx) permissions over the required directories, run:

      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/storage
      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/bootstrap/cache
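
      If you’d like to review the resulting permissions, you can inspect the ACLs with getfacl, which ships in the same acl package as setfacl:

      • getfacl /var/www/travellist/storage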

      The application files are now in order, but we still need to configure Nginx to serve the content. To do this, we’ll create a new virtual host configuration file at /etc/nginx/sites-available:

      • sudo nano /etc/nginx/sites-available/travellist

      The following configuration file contains the recommended settings for Laravel applications on Nginx:

      /etc/nginx/sites-available/travellist

      server {
          listen 80;
          server_name server_domain_or_IP;
          root /var/www/travellist/public;
      
          add_header X-Frame-Options "SAMEORIGIN";
          add_header X-XSS-Protection "1; mode=block";
          add_header X-Content-Type-Options "nosniff";
      
          index index.html index.htm index.php;
      
          charset utf-8;
      
          location / {
              try_files $uri $uri/ /index.php?$query_string;
          }
      
          location = /favicon.ico { access_log off; log_not_found off; }
          location = /robots.txt  { access_log off; log_not_found off; }
      
          error_page 404 /index.php;
      
          # Pass PHP requests to the PHP-FPM socket
          location ~ \.php$ {
              fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
              include fastcgi_params;
          }
      
          # Deny access to dotfiles, except the .well-known directory
          location ~ /\.(?!well-known).* {
              deny all;
          }
      }
      

      Copy this content to your /etc/nginx/sites-available/travellist file, adjusting the server_name value (and the root path, if needed) to align with your own configuration. Save and close the file when you’re done editing.

      To activate the new virtual host configuration file, create a symbolic link to travellist in sites-enabled:

      • sudo ln -s /etc/nginx/sites-available/travellist /etc/nginx/sites-enabled/

      Note: If you have another virtual host file that was previously configured for the same server_name used in the travellist virtual host, you might need to deactivate the old configuration by removing the corresponding symbolic link inside /etc/nginx/sites-enabled/.

      To confirm that the configuration doesn’t contain any syntax errors, you can use:

      • sudo nginx -t

      You should see output like this:

      Output

      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful

      To apply the changes, reload Nginx with:

      • sudo systemctl reload nginx

      If you reload your browser now, the application images will appear broken. That happens because we moved the application directory to a new location on the server, so the symbolic link to the application storage folder must be re-created.

      Remove the old link with:

      • cd /var/www/travellist
      • rm -f public/storage

      Now run the artisan command once again to generate the storage link:

      • php artisan storage:link
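
      If you want to double-check the result, listing the link should show it pointing back at the application storage directory; the target below assumes the /var/www/travellist location used in this guide:

      • ls -l public/storage

      The output should include a line ending in public/storage -> /var/www/travellist/storage/app/public.
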
      Now go to your browser and access the application using the server’s domain name or IP address, as defined by the server_name directive in your configuration file:

      http://server_domain_or_IP
      

      Travellist v1.0

      In the next step, we’ll integrate an object storage service into the application. This will replace the current local disk storage used for the travel photos.

      Step 5 — Integrating an S3-Compatible Object Storage into the Application

      We’ll now set up the application to use an S3-compatible object storage service for storing the travel photos exhibited on the index page. Because the application already has a few sample photos stored in the local disk, we’ll also use the s3cmd tool to upload the existing local image files to the remote object storage.

      Setting Up the S3 Driver for Laravel

      Laravel uses league/flysystem, a filesystem abstraction library that enables a Laravel application to use and combine multiple storage solutions, including local disk and cloud services. An additional package is required to use the s3 driver. We can install this package using the composer require command.

      Access the application directory and install the driver package with composer require:

      • cd /var/www/travellist
      • composer require league/flysystem-aws-s3-v3

      You will see output similar to this:

      Output

      Using version ^1.0 for league/flysystem-aws-s3-v3
      ./composer.json has been updated
      Loading composer repositories with package information
      Updating dependencies (including require-dev)
      Package operations: 8 installs, 0 updates, 0 removals
        - Installing mtdowling/jmespath.php (2.4.0): Loading from cache
        - Installing ralouphie/getallheaders (3.0.3): Loading from cache
        - Installing psr/http-message (1.0.1): Loading from cache
        - Installing guzzlehttp/psr7 (1.6.1): Loading from cache
        - Installing guzzlehttp/promises (v1.3.1): Loading from cache
        - Installing guzzlehttp/guzzle (6.4.1): Downloading (100%)
        - Installing aws/aws-sdk-php (3.112.28): Downloading (100%)
        - Installing league/flysystem-aws-s3-v3 (1.0.23): Loading from cache
      ...

      Now that the required packages are installed, we can update the application to connect to the object storage. First, we’ll open the .env file again to set up configuration details such as keys, bucket name, and region for your object storage service.

      Open the .env file:

      • nano .env

      Include the following environment variables, replacing the sample values with your own object store configuration details:

      /var/www/travellist/.env

      DO_SPACES_KEY=EXAMPLE7UQOTHDTF3GK4
      DO_SPACES_SECRET=exampleb8e1ec97b97bff326955375c5
      DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com
      DO_SPACES_REGION=ams3
      DO_SPACES_BUCKET=sammy-travellist
      

      Save and close the file when you’re done. Now open the config/filesystems.php file:

      • nano config/filesystems.php

      Within this file, we’ll create a new disk entry in the disks array. We’ll name this disk spaces, and we’ll use the environment variables we’ve set in the .env file to configure the new disk. Include the following entry in the disks array:

      config/filesystems.php

      
      'spaces' => [
         'driver' => 's3',
         'key' => env('DO_SPACES_KEY'),
         'secret' => env('DO_SPACES_SECRET'),
         'endpoint' => env('DO_SPACES_ENDPOINT'),
         'region' => env('DO_SPACES_REGION'),
         'bucket' => env('DO_SPACES_BUCKET'),
      ],
      
      

      Still in the same file, locate the cloud entry and change it so that the new spaces disk is set as the default cloud filesystem disk:

      config/filesystems.php

      'cloud' => env('FILESYSTEM_CLOUD', 'spaces'),
      

      Save and close the file when you’re done editing. From your controllers, you can now use the Storage::cloud() method as a shortcut to access the default cloud disk. This way, the application stays flexible to use multiple storage solutions, and you can switch between providers on a per-environment basis.
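
      As a quick illustration, a controller or a php artisan tinker session could exercise the new disk as in the minimal sketch below (hello.txt is just a placeholder file name, not part of the demo application):

      use Illuminate\Support\Facades\Storage;

      // Write a test file through the default cloud disk ('spaces')
      Storage::cloud()->put('hello.txt', 'Hello from the object store');

      // Build the public URL for the file on the object storage service
      $url = Storage::cloud()->url('hello.txt');

      // The same operations work with an explicit disk name
      Storage::disk('spaces')->delete('hello.txt');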

      The application is now configured to use object storage, but we still need to update the code that uploads new photos to the application.

      Let’s first examine the current uploadPhoto route, located in the PhotoController class. Open the file using your text editor:

      • nano app/Http/Controllers/PhotoController.php

      app/Http/Controllers/PhotoController.php

      …
      
      public function uploadPhoto(Request $request)
      {
         $photo = new Photo();
         $place = Place::find($request->input('place'));
      
         if (!$place) {
             //add new place
             $place = new Place();
             $place->name = $request->input('place_name');
             $place->lat = $request->input('place_lat');
             $place->lng = $request->input('place_lng');
         }
      
         $place->visited = 1;
         $place->save();
      
         $photo->place()->associate($place);
         $photo->image = $request->image->store('/', 'public');
         $photo->save();
      
         return redirect()->route('Main');
      }
      
      

      This method accepts a POST request and creates a new photo entry in the photos table. It begins by checking if an existing place was selected in the photo upload form, and if that’s not the case, it will create a new place using the provided information. The place is then set to visited and saved to the database. Following that, an association is created so that the new photo is linked to the designated place. The image file is then stored in the root folder of the public disk. Finally, the photo is saved to the database. The user is then redirected to the main route, which is the index page of the application.

      The line we’re interested in is the one that calls the store method to save the image file. The store method saves files to any of the disks defined in the filesystems.php configuration file; here, it is using the public disk to store uploaded images.

      We will change this behavior so that the image is saved to the object storage instead of the local disk. To do that, we need to replace the public disk with the spaces disk in the store method call. We also need to set the uploaded file’s visibility to public instead of private.
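
      As an aside, Laravel also provides a storePublicly method on uploaded files, which combines storing and setting public visibility in a single call. A one-line alternative sketch, assuming the same spaces disk (this guide keeps the two explicit calls shown in the code below):

      $photo->image = $request->image->storePublicly('/', 'spaces');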

      The following code contains the full PhotoController class, including the updated uploadPhoto method:

      app/Http/Controllers/PhotoController.php

      <?php
      
      namespace App\Http\Controllers;

      use Illuminate\Http\Request;
      use App\Photo;
      use App\Place;
      use Illuminate\Support\Facades\Storage;
      
      class PhotoController extends Controller
      {
         public function uploadForm()
         {
             $places = Place::all();
      
             return view('upload_photo', [
                 'places' => $places
             ]);
         }
      
         public function uploadPhoto(Request $request)
         {
             $photo = new Photo();
             $place = Place::find($request->input('place'));
      
             if (!$place) {
                 //add new place
                 $place = new Place();
                 $place->name = $request->input('place_name');
                 $place->lat = $request->input('place_lat');
                 $place->lng = $request->input('place_lng');
             }
      
             $place->visited = 1;
             $place->save();
      
             $photo->place()->associate($place);
             $photo->image = $request->image->store('/', 'spaces');
             Storage::setVisibility($photo->image, 'public');
             $photo->save();
      
             return redirect()->route('Main');
         }
      }
      
      
      

      Copy the updated code to your own PhotoController so that it reflects these changes. Save and close the file when you’re done editing.

      We still need to modify the application’s main view so that it uses the object storage file URL to render the image. Open the travel_list.blade.php template:

      • nano resources/views/travel_list.blade.php

      Now locate the footer section of the page, which currently looks like this:

      resources/views/travel_list.blade.php

      @section('footer')
         <h2>Travel Photos <small>[ <a href="{{ route('Upload.form') }}">Upload Photo</a> ]</small></h2>
         @foreach ($photos as $photo)
             <div class="photo">
                <img src="https://www.digitalocean.com/{{ asset('storage') . '/' . $photo->image }}" />
                 <p>{{ $photo->place->name }}</p>
             </div>
         @endforeach
      
      @endsection
      

      Replace the current image src attribute to use the file URL from the spaces storage disk:

      <img src="https://www.digitalocean.com/{{ Storage::disk('spaces')->url($photo->image) }}" />
      

      If you go to your browser now and reload the application page, it will show only broken images. That happens because the image files for those travel photos are still stored only on the local disk. We need to upload the existing image files to the object storage so that the photos already registered in the database can be displayed on the application page.

      Syncing local images with s3cmd

      The s3cmd tool can be used to sync local files with an S3-compatible object storage service. We’ll run a sync command to upload all files inside storage/app/public/photos to the object storage service.

      Access the public app storage directory:

      • cd /var/www/travellist/storage/app/public

      To have a look at the files already stored in your remote disk, you can use the s3cmd ls command:

      • s3cmd ls s3://your_bucket_name

      Now run the sync command to upload existing files in the public storage folder to the object storage:

      • s3cmd sync ./ s3://your_bucket_name --acl-public --exclude=.gitignore

      This will synchronize the current folder (storage/app/public) with the remote object storage’s root directory. You will get output similar to this:

      Output

      upload: './bermudas.jpg' -> 's3://sammy-travellist/bermudas.jpg'  [1 of 3]
       2538230 of 2538230   100% in    7s   329.12 kB/s  done
      upload: './grindavik.jpg' -> 's3://sammy-travellist/grindavik.jpg'  [2 of 3]
       1295260 of 1295260   100% in    5s   230.45 kB/s  done
      upload: './japan.jpg' -> 's3://sammy-travellist/japan.jpg'  [3 of 3]
       8940470 of 8940470   100% in   24s   363.61 kB/s  done
      Done. Uploaded 12773960 bytes in 37.1 seconds, 336.68 kB/s.

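      Note: If you’d like to preview which files a sync would transfer without actually uploading anything, s3cmd accepts a --dry-run flag:

      • s3cmd sync --dry-run ./ s3://your_bucket_name --acl-public --exclude=.gitignore
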
      Now, if you run s3cmd ls again, you will see that three new files were added to the root folder of your object storage bucket:

      • s3cmd ls s3://your_bucket_name

      Output

      2019-10-25 11:49   2538230   s3://sammy-travellist/bermudas.jpg
      2019-10-25 11:49   1295260   s3://sammy-travellist/grindavik.jpg
      2019-10-25 11:49   8940470   s3://sammy-travellist/japan.jpg

      Go to your browser and reload the application page. All images should be visible now, and if you inspect them using your browser debug tools, you’ll notice that they’re all using URLs from your object storage.

      Testing the Integration

      The demo application is now fully functional, storing files in a remote object storage service, and saving data to a managed MySQL database. We can now upload a few photos to test our setup.

      Access the /upload application route from your browser:

      http://server_domain_or_IP/upload
      

      You will see the following form:

      Travellist  Photo Upload Form

      You can now upload a few photos to test the object storage integration. After choosing an image from your computer, you can select an existing place from the dropdown menu, or you can add a new place by providing its name and geographic coordinates so it can be loaded in the application map.

      Step 6 — Scaling Up a DigitalOcean Managed MySQL Database with Read-Only Nodes (Optional)

      Because read operations are typically more frequent than write operations on database servers, it is common practice to scale up a database cluster by setting up multiple read-only nodes. This distributes the load generated by SELECT operations.

      To demonstrate this setup, we’ll first add 2 read-only nodes to our DigitalOcean Managed MySQL cluster. Then, we’ll configure the Laravel application to use these nodes.

      Access the DigitalOcean Cloud Panel and follow these instructions:

      1. Go to Databases and select your MySQL cluster.
      2. Click Actions and choose Add a read-only node from the drop-down menu.
      3. Configure the node options and hit the Create button. Notice that it might take several minutes for the new node to be ready.
      4. Repeat steps 1-3 one more time so that you have 2 read-only nodes.
      5. Note down the hosts of the two nodes as we will need them for our Laravel configuration.

      Once you have your read-only nodes ready, head back to your terminal.

      We’ll now configure our Laravel application to work with multiple database nodes. When we’re finished, queries such as INSERT and UPDATE will be forwarded to your primary cluster node, while all SELECT queries will be redirected to your read-only nodes.

      First, go to the application’s directory on the server and open your .env file using your text editor of choice:

      • cd /var/www/travellist
      • nano .env

      Locate the MySQL database configuration and comment out the DB_HOST line:

      /var/www/travellist/.env

      DB_CONNECTION=mysql
      #DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      

      Save and close the file when you’re done. Now open the config/database.php file in your text editor:

      • nano config/database.php

      Look for the mysql entry inside the connections array and include three new items in this configuration array: read, write, and sticky. The read and write entries set up the cluster nodes, and the sticky option, when set to true, reuses write connections for reads within the same request cycle, so that data written to the database is immediately visible in that request. You can set it to false if you don’t want this behavior. The remaining options in the mysql entry (driver, port, database name, and credentials) stay as they are and are shared by both connection types.

      /var/www/travellist/config/database.php

      ...
            'mysql' => [
               'read' => [
                 'host' => [
                    'READONLY_NODE1_HOST',
                    'READONLY_NODE2_HOST',
                 ],
               ],
               'write' => [
                 'host' => [
                    'MANAGED_MYSQL_HOST',
                 ],
               ],
               'sticky' => true,
      ...
      

      Save and close the file when you are done editing. To test that everything works as expected, we can create a temporary route inside routes/web.php to pull some data from the database and show details about the connection being used. This way we will be able to see how the requests are being load balanced between the read-only nodes.

      Open the routes/web.php file:

      • nano routes/web.php

      Include the following route:

      /var/www/travellist/routes/web.php

      ...
      
      Route::get('/mysql-test', function () {
        // Run a simple SELECT so that a read connection is used
        $places = App\Place::all();
        // Ask MySQL which server answered, to identify the node
        $results = DB::select( DB::raw("SHOW VARIABLES LIKE 'server_id'") );

        return "Server ID: " . $results[0]->Value;
      });
      

      Now go to your browser and access the /mysql-test application route:

      http://server_domain_or_IP/mysql-test
      

      You’ll see a page like this:

      mysql node test page

      Reload the page a few times and you will notice that the Server ID value changes, indicating that the requests are being randomly distributed between the two read-only nodes.
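
      To observe the sticky option itself, you could add a similar temporary route, a hypothetical sketch that is not part of the demo application, which writes and then reads within the same request. With sticky set to true, the read reuses the write connection, so the fresh row is visible even before it replicates to the read-only nodes:

      Route::get('/sticky-test', function () {
          // The save() call is routed to the write (primary) node
          $place = new App\Place();
          $place->name = 'Sticky Test';
          $place->lat = 0;
          $place->lng = 0;
          $place->visited = 0;
          $place->save();

          // With 'sticky' => true, this read in the same request cycle
          // reuses the write connection instead of a read-only node
          return App\Place::where('name', 'Sticky Test')->get();
      });

      Remember to remove these test routes when you’re done experimenting.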

      Conclusion

      In this guide, we’ve prepared a Laravel 6 application for a highly available and scalable environment. We’ve outsourced the database system to an external managed MySQL service, and we’ve integrated an S3-compatible object storage service into the application to store files uploaded by users. Finally, we’ve seen how to scale up the application’s database by including additional read-only cluster nodes in the app’s configuration file.

      The updated demo application code containing all modifications made in this guide can be found within the 2.1 tag in the application’s repository on GitHub.

      From here, you can set up a Load Balancer to distribute traffic and scale your application across multiple nodes. You can also leverage this setup to create a containerized environment to run your application with Docker.


