

      How to Enact Access Control Lists (ACLs) and Bucket Policies with Linode Object Storage


Updated by Linode

Contributed by Linode

      Linode Object Storage allows users to share access to objects and buckets with other Object Storage users. There are two mechanisms for setting up sharing: Access Control Lists (ACLs), and bucket policies. These mechanisms perform similar functions: both can be used to restrict and grant access to Object Storage resources.

In this guide you will learn:

• How to retrieve a user’s canonical ID.
• The differences between ACLs and bucket policies, and how to choose between them.
• How to apply ACL permissions to buckets and objects.
• How to write and enable bucket policies.

      Before You Begin

      • This guide will use the s3cmd command line utility to interact with Object Storage. For s3cmd installation and configuration instructions, visit our How to Use Object Storage guide.

      • You’ll also need the canonical ID of every user you wish to grant additional permissions to.

      Retrieve a User’s Canonical ID

      Follow these steps to determine the canonical ID of the Object Storage users you want to share with:

      1. The following command will return the canonical ID of a user, given any of the user’s buckets:

        s3cmd info s3://other-users-bucket
        

        Note

        The bucket referred to in this section is an arbitrary bucket on the target user’s account. It is not related to the bucket on your account that you would like to set ACLs or bucket policies on.

        There are two options for running this command:

        • The users you’re granting or restricting access to can run this command on one of their buckets and share their canonical ID with you, or:

        • You can run this command yourself if you have use of their access tokens (you will need to configure s3cmd to use their access tokens instead of your own).

      2. Run the above command, replacing other-users-bucket with the name of the bucket. You’ll see output similar to the following:

          
        s3://other-users-bucket/ (bucket):
        Location:  default
        Payer:     BucketOwner
        Expiration Rule: none
        Policy:    none
        CORS:      none
        ACL:       a0000000-000a-0000-0000-00d0ff0f0000: FULL_CONTROL
        
        
      3. The canonical ID of the owner of the bucket is the long string of letters, dashes, and numbers found in the line labeled ACL, which in this case is a0000000-000a-0000-0000-00d0ff0f0000.

      4. Alternatively, you may be able to retrieve the canonical ID by curling a bucket and retrieving the Owner ID field from the returned XML. This method is an option when both of these conditions are true:

        • The bucket has objects within it and has already been set to public (with a command like s3cmd setacl s3://other-users-bucket --acl-public).
        • The bucket has not been set to serve static websites.
      5. Run the curl command, replacing the bucket name and cluster URL with the relevant values:

        curl other-users-bucket.us-east-1.linodeobjects.com
        
      6. This will result in the following output:

        <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Name>other-users-bucket</Name>
            <Prefix/>
            <Marker/>
            <MaxKeys>1000</MaxKeys>
            <IsTruncated>false</IsTruncated>
            <Contents>
            <Key>cpanel_one-click.gif</Key>
            <LastModified>2019-11-20T16:52:49.946Z</LastModified>
            <ETag>"9aeafcb192a8e540e7be5b51f7249e2e"</ETag>
            <Size>961023</Size>
            <StorageClass>STANDARD</StorageClass>
            <Owner>
                <ID>a0000000-000a-0000-0000-00d0ff0f0000</ID>
                <DisplayName>a0000000-000a-0000-0000-00d0ff0f0000</DisplayName>
            </Owner>
            <Type>Normal</Type>
            </Contents>
        </ListBucketResult>
        

        In the above output, the canonical ID is a0000000-000a-0000-0000-00d0ff0f0000.

      ACLs vs Bucket Policies

      ACLs and bucket policies perform similar functions: both can restrict or grant access to buckets. ACLs can also restrict or grant access to individual objects, but they don’t offer as many fine-grained access modes as bucket policies.

      How to Choose Between ACLs and Bucket Policies

      If you can organize objects with similar permission needs into their own buckets, then it’s strongly suggested that you use bucket policies. However, if you cannot organize your objects in this fashion, ACLs are still a good option.

      ACLs offer permissions with less fine-grained control than the permissions available through bucket policies. If you are looking for more granular permissions beyond read and write access, choose bucket policies over ACLs.

      Additionally, bucket policies are created by applying a written bucket policy file to the bucket. This file cannot exceed 20KB in size. If you have a policy with a lengthy list of policy rules, you may want to look into ACLs instead.

      Note

      ACLs and bucket policies can be used at the same time. When this happens, any rule that limits access to an Object Storage resource will override a rule that grants access. For instance, if an ACL allows a user access to a bucket, but a bucket policy denies that user access, the user will not be able to access that bucket.

      ACLs

      Access Control Lists (ACLs) are a legacy method of defining access to Object Storage resources. You can apply an ACL to a bucket or to a specific object. There are two generalized modes of access: setting buckets and/or objects to be private or public. A few other more granular settings are also available.

      With s3cmd, you can set a bucket to be public with the setacl command and the --acl-public flag:

      s3cmd setacl s3://acl-example --acl-public
      

      This will cause the bucket and its contents to be downloadable over the general Internet.
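For example, once the bucket is public, anyone can download an object from it anonymously (this sketch assumes a hypothetical object named example.txt and a bucket in the us-east-1 cluster):

  curl https://acl-example.us-east-1.linodeobjects.com/example.txt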

      To set an object or bucket to private, you can use the setacl command and the --acl-private flag:

      s3cmd setacl s3://acl-example --acl-private
      

This will prevent users from accessing the bucket’s contents over the general Internet.

      Other ACL Permissions

      The more granular permissions are:

Permission     Description
read           Users can list objects within a bucket.
write          Users can upload objects to a bucket and delete objects from a bucket.
read_acp       Users can read the ACL currently applied to a bucket.
write_acp      Users can change the ACL applied to a bucket.
full_control   Users have read and write access over both objects and ACLs.
      • Setting a permission: To apply these more granular permissions for a specific user with s3cmd, use the following setacl command with the --acl-grant flag:

        s3cmd setacl s3://acl-example --acl-grant=PERMISSION:CANONICAL_ID
        

Substitute acl-example with the name of the bucket (and the object, if necessary), PERMISSION with a permission from the above table, and CANONICAL_ID with the canonical ID of the user to which you would like to grant permissions. A worked example follows this list.

      • Revoking a permission: To revoke a specific permission, you can use the setacl command with the --acl-revoke flag:

        s3cmd setacl s3://acl-example --acl-revoke=PERMISSION:CANONICAL_ID
        

        Substitute the bucket name (and optional object), PERMISSION, and CANONICAL_ID with your relevant values.

      • View current ACLs: To view the current ACLs applied to a bucket or object, use the info command, replacing acl-example with the name of your bucket (and object, if necessary):

        s3cmd info s3://acl-example
        

        You should see output like the following:

          
s3://acl-example/ (bucket):
           Location:  default
           Payer:     BucketOwner
           Expiration Rule: none
           Policy:    none
           CORS:      b'<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedMethod>GET</AllowedMethod><AllowedMethod>PUT</AllowedMethod><AllowedMethod>DELETE</AllowedMethod><AllowedMethod>HEAD</AllowedMethod><AllowedMethod>POST</AllowedMethod><AllowedOrigin>*</AllowedOrigin><AllowedHeader>*</AllowedHeader></CORSRule></CORSConfiguration>'
           ACL:       *anon*: READ
           ACL:       a0000000-000a-0000-0000-00d0ff0f0000: FULL_CONTROL
           URL:       http://us-east-1.linodeobjects.com/acl-example/
        
        

        Note

        The owner of the bucket will always have the full_control permission.
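As a worked example of the grant and revoke commands above, the following grants the write permission to the user whose canonical ID was retrieved earlier in this guide, then revokes it again (the ID shown is this guide’s placeholder):

  s3cmd setacl s3://acl-example --acl-grant=write:a0000000-000a-0000-0000-00d0ff0f0000
  s3cmd setacl s3://acl-example --acl-revoke=write:a0000000-000a-0000-0000-00d0ff0f0000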

      Bucket Policies

      Bucket policies can offer finer control over the types of permissions you can grant to a user. Below is an example bucket policy written in JSON:

      bucket_policy_example.txt
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam:::a0000000-000a-0000-0000-00d0ff0f0000"
            ]
          },
          "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::bucket-policy-example/*"
          ]
        }]
      }

      This policy allows the user with the canonical ID a0000000-000a-0000-0000-00d0ff0f0000, known here as the “principal”, to interact with the bucket, known as the “resource”. The “resource” that is listed (bucket-policy-example) is the only bucket the user will have access to.

      Note

      The principal (a.k.a. the user) must have the prefix of arn:aws:iam:::, and the resource (a.k.a. the bucket) must have the prefix of arn:aws:s3:::.

The permissions are specified in the Action array. For the current example, these are:

• s3:PutObject: upload objects to the bucket.
• s3:GetObject: retrieve (download) objects from the bucket.
• s3:ListBucket: list the objects within the bucket.

      The Action and Principal.AWS fields of the bucket policy are arrays, so you can easily add additional users and permissions to the bucket policy, separating them by a comma. To grant permissions to all users, you can supply a wildcard (*) to the Principal.AWS field.

      If you instead wanted to deny access to the user, you could change the Effect field to Deny.
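For instance, the following statement (a sketch reusing this guide’s placeholder canonical ID and bucket name) would block that user from downloading objects; supplying the wildcard * as the principal would instead apply the rule to all users:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Principal": {
      "AWS": [
        "arn:aws:iam:::a0000000-000a-0000-0000-00d0ff0f0000"
      ]
    },
    "Action": [
      "s3:GetObject"
    ],
    "Resource": [
      "arn:aws:s3:::bucket-policy-example/*"
    ]
  }]
}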

      Enable a Bucket Policy

      To enable the bucket policy, use the setpolicy s3cmd command, supplying the file name of the bucket policy as the first argument, and the S3 bucket address as the second argument:

      s3cmd setpolicy bucket_policy_example.txt s3://bucket-policy-example
      

      To ensure that it has been applied correctly, you can use the info command:

      s3cmd info s3://bucket-policy-example
      

      You should see output like the following:

        
      s3://bucket-policy-example/ (bucket):
         Location:  default
         Payer:     BucketOwner
         Expiration Rule: none
   Policy:    b'{\n  "Version": "2012-10-17",\n  "Statement": [{\n    "Effect": "Allow",\n    "Principal": {"AWS": ["arn:aws:iam:::a0000000-000a-0000-0000-00d0ff0f0000"]},\n    "Action": ["s3:PutObject","s3:GetObject","s3:ListBucket"],\n    "Resource": [\n      "arn:aws:s3:::bucket-policy-example/*"\n    ]\n  }]\n}'
         CORS:      none
         ACL:       a0000000-000a-0000-0000-00d0ff0f0000: FULL_CONTROL
      
      

      Note

      The policy is visible in the output.


      This guide is published under a CC BY-ND 4.0 license.




      How To Set Up an Object Storage Server Using Minio on Ubuntu 18.04


      The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      From cloud-based backup solutions to high-availability content delivery networks (CDNs), the ability to store unstructured blobs of object data and make them accessible through HTTP APIs, known as object storage, has become an integral part of the modern technology landscape.

      Minio is a popular open-source object storage server compatible with the Amazon S3 cloud storage service. Applications that have been configured to talk to Amazon S3 can also be configured to talk to Minio, allowing Minio to be a viable alternative to S3 if you want more control over your object storage server. The service stores unstructured data such as photos, videos, log files, backups, and container/VM images, and can even provide a single object storage server that pools multiple drives spread across many servers.

      Minio is written in Go, comes with a command line client plus a browser interface, and supports simple queuing service for Advanced Message Queuing Protocol (AMQP), Elasticsearch, Redis, NATS, and PostgreSQL targets. For all of these reasons, learning to set up a Minio object storage server can add a wide range of flexibility and utility to your project.

      In this tutorial, you will:

      • Install the Minio server on your Ubuntu 18.04 server and configure it as a systemd service.

      • Set up an SSL/TLS certificate using Let’s Encrypt to secure communication between the server and the client.

      • Access Minio’s browser interface via HTTPS to use and administrate the server.

      Prerequisites

      To complete this tutorial, you will need:

      • One Ubuntu 18.04 server set up by following our Ubuntu 18.04 initial server setup tutorial, including a sudo non-root user and a firewall.

      • A fully registered domain name. You can purchase one on Namecheap or get one for free on Freenom. In this tutorial, your domain will be represented as your_domain.

      • The following DNS records set up for your Minio server. You can follow our DNS records documentation for details on how to add them for a DigitalOcean Droplet.

        • An A record with your server name (e.g. minio-server.your_domain) pointing to your object server’s IPv4 address.
        • (Optional) If you want your server reachable via IPv6, you’ll need an AAAA record with your server name pointing to your object server’s IPv6 address.

      Step 1 — Installing and Configuring the Minio Server

      You can install the Minio server by compiling the source code or via a binary file. To install it from the source, you need to have at least Go 1.12 installed on your system.

      In this step, you will install the server through the precompiled binary and then configure the Minio server afterward.

First, log in to your server, replacing sammy with your username and your_server_ip with your Ubuntu 18.04 server’s IP address:

• ssh sammy@your_server_ip

If you haven’t updated the package database recently, update it now:

• sudo apt update

      Next, download the Minio server’s binary file from the official website:

      • wget https://dl.min.io/server/minio/release/linux-amd64/minio

      You will receive output similar to the following:

      Output

--2019-08-27 15:08:49--  https://dl.min.io/server/minio/release/linux-amd64/minio
Resolving dl.min.io (dl.min.io)... 178.128.69.202
Connecting to dl.min.io (dl.min.io)|178.128.69.202|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 44511616 (42M) [application/octet-stream]
Saving to: ‘minio’

minio                100%[===================>]  42.45M  21.9MB/s    in 1.9s

2019-08-27 15:08:51 (21.9 MB/s) - ‘minio’ saved [44511616/44511616]

Once the download is finished, a file named minio will be in your working directory. Use the following command to make it executable:

• sudo chmod +x minio

      Now, move the file into the /usr/local/bin directory where Minio’s systemd startup script expects to find it:

      • sudo mv minio /usr/local/bin

      This will allow us to write a service unit file later in this tutorial to automatically run Minio on startup.

      For security reasons, it is best to avoid running the Minio server as root. This will limit the damage that can be done to your system if compromised. Since the systemd script you’ll use in Step 2 looks for a user account and group called minio-user, make a new user with this name:

      • sudo useradd -r minio-user -s /sbin/nologin

In this command, you used the -s flag to set /sbin/nologin as the shell for minio-user. This shell does not allow user logins, which minio-user does not need.

      Next, change ownership of the Minio binary to minio-user:

      • sudo chown minio-user:minio-user /usr/local/bin/minio

      Next, you will create a directory where Minio will store files. This will be the storage location for the buckets that you will use later to organize the objects you store on your Minio server. This tutorial will name the directory minio:

      • sudo mkdir /usr/local/share/minio

      Give ownership of that directory to minio-user:

      • sudo chown minio-user:minio-user /usr/local/share/minio

Most server configuration files are stored in the /etc directory, so create a Minio configuration directory there:

• sudo mkdir /etc/minio

      Give ownership of that directory to minio-user, too:

      • sudo chown minio-user:minio-user /etc/minio

      Use Nano or your favorite text editor to create the environment file needed to modify the default configuration:

      • sudo nano /etc/default/minio

      Once the file is open, add in the following lines to set some important environment variables in your environment file:

      /etc/default/minio

      MINIO_ACCESS_KEY="minio"
      MINIO_VOLUMES="/usr/local/share/minio/"
      MINIO_OPTS="-C /etc/minio --address your_server_ip:9000"
      MINIO_SECRET_KEY="miniostorage"
      

      Let’s take a look at these variables and the values you set:

      • MINIO_ACCESS_KEY: This sets the access key you will use to access the Minio browser user interface.
      • MINIO_SECRET_KEY: This sets the private key you will use to complete your login credentials into the Minio interface. This tutorial has set the value to miniostorage, but we advise choosing a different, more complicated password to secure your server.
      • MINIO_VOLUMES: This identifies the storage directory that you created for your buckets.
      • MINIO_OPTS: This changes where and how the server serves data. The -C flag points Minio to the configuration directory it should use, while the --address flag tells Minio the IP address and port to bind to. If the IP address is not specified, Minio will bind to every address configured on the server, including localhost and any Docker-related IP addresses, so directly specifying the IP address here is recommended. The default port 9000 can be changed if you would like.

      Finally, save and close the environment file when you’re finished making changes.

      You’ve now installed Minio and set some important environment variables. Next, you’ll configure the server to run as a system service.

      Step 2 — Installing the Minio Systemd Startup Script

      In this step, you’ll configure the Minio server to be managed as a systemd service.

      First, download the official Minio service descriptor file using the following command:

      • curl -O https://raw.githubusercontent.com/minio/minio-service/master/linux-systemd/minio.service

      You will receive output similar to the following:

      Output

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   835  100   835    0     0   6139      0 --:--:-- --:--:-- --:--:--  6139

      After the download has finished, a file named minio.service will be in your working directory.

To audit the contents of minio.service before applying it, open the file in a text editor:

• nano minio.service

      This will show the following:

      /etc/systemd/system/minio.service

      [Unit]
      Description=MinIO
      Documentation=https://docs.min.io
      Wants=network-online.target
      After=network-online.target
      AssertFileIsExecutable=/usr/local/bin/minio
      
      [Service]
      WorkingDirectory=/usr/local/
      
      User=minio-user
      Group=minio-user
      
      EnvironmentFile=/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"
      
      ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
      
      # Let systemd restart this service always
      Restart=always
      
      # Specifies the maximum file descriptor number that can be opened by this process
      LimitNOFILE=65536
      
      # Disable timeout logic and wait until process is stopped
      TimeoutStopSec=infinity
      SendSIGKILL=no
      
      [Install]
      WantedBy=multi-user.target
      
      # Built for ${project.name}-${project.version} (${project.name})
      

      This service unit file starts the Minio server using the minio-user user that you created earlier. It also implements the environment variables you set in the last step, and makes the server run automatically on startup. For more information on systemd unit files, see our guide Understanding Systemd Units and Unit Files.

      Once you’ve looked over the script’s contents, close your text editor.

      Systemd requires that unit files be stored in the systemd configuration directory, so move minio.service there:

      • sudo mv minio.service /etc/systemd/system

      Then, run the following command to reload all systemd units:

      • sudo systemctl daemon-reload

      Finally, enable Minio to start on boot:

      • sudo systemctl enable minio

      This will give the following output:

      Output

      Created symlink from /etc/systemd/system/multi-user.target.wants/minio.service to /etc/systemd/system/minio.service.

      Now that the systemd script is installed and configured, it’s time to start the server.

      Step 3 — Starting the Minio Server

      In this step, you’ll start the server and modify the firewall to allow access through the browser interface.

      First, start the Minio server:

      • sudo systemctl start minio

      Next, verify Minio’s status, the IP address it’s bound to, its memory usage, and more by running this command:

      • sudo systemctl status minio

      You will get the following output:

      Output

● minio.service - MinIO
   Loaded: loaded (/etc/systemd/system/minio.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2019-12-09 21:54:02 UTC; 46s ago
     Docs: https://docs.min.io
  Process: 3405 ExecStartPre=/bin/bash -c if [ -z "${MINIO_VOLUMES}" ]; then echo "Variable MINIO_VOLUMES not set in /etc/default/minio"; exit 1; fi (code=exited, status=0/SUCCESS)
 Main PID: 3407 (minio)
    Tasks: 7 (limit: 1152)
   CGroup: /system.slice/minio.service
           └─3407 /usr/local/bin/minio server -C /etc/minio --address your_server_IP:9000 /usr/local/share/minio/

Dec 09 21:54:02 cart-Minion-Object-1804-1 systemd[1]: Started MinIO.
Dec 09 21:54:03 cart-Minion-Object-1804-1 minio[3407]: Endpoint:  http://your_server_IP:9000
Dec 09 21:54:03 cart-Minion-Object-1804-1 minio[3407]: Browser Access:
Dec 09 21:54:03 cart-Minion-Object-1804-1 minio[3407]:    http://your_server_IP:9000
...

      Next, enable access through the firewall to the Minio server on the configured port. In this tutorial, that’s port 9000.

First add the rule:

• sudo ufw allow 9000

Then, enable the firewall:

• sudo ufw enable

      You will get the following prompt:

      Output

      Command may disrupt existing ssh connections. Proceed with operation (y|n)?

      Press y and ENTER to confirm this. You will then get the following output:

      Output

      Firewall is active and enabled on system startup

      Minio is now ready to accept traffic, but before connecting to the server, you will secure communication by installing an SSL/TLS certificate.

      Step 4 — Securing Access to Your Minio Server With a TLS Certificate

      In this step, you will secure access to your Minio server using a private key and public certificate that has been obtained from a certificate authority (CA), in this case Let’s Encrypt. To get a free SSL certificate, you will use Certbot.

First, allow HTTP and HTTPS access through your firewall. To do this, open port 80, which is the port for HTTP:

• sudo ufw allow 80

Next, open up port 443 for HTTPS:

• sudo ufw allow 443

Once you’ve added these rules, check on your firewall’s status with the following command:

• sudo ufw status verbose

      You will receive output similar to the following:

      Output

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp (OpenSSH)           ALLOW IN    Anywhere
9000                       ALLOW IN    Anywhere
443                        ALLOW IN    Anywhere
80                         ALLOW IN    Anywhere
22/tcp (OpenSSH (v6))      ALLOW IN    Anywhere (v6)
9000 (v6)                  ALLOW IN    Anywhere (v6)
443 (v6)                   ALLOW IN    Anywhere (v6)
80 (v6)                    ALLOW IN    Anywhere (v6)

      This confirms that ports 80 and 443 are open, ensuring that your server accepts requests from the internet.

      Next, you will install Certbot. Since Certbot maintains a separate PPA repository, you will first have to add it to your list of repositories before installing Certbot as shown:

      To prepare to add the PPA repository, first install software-properties-common, a package for managing PPAs:

      • sudo apt install software-properties-common

      This package provides some useful scripts for adding and removing PPAs instead of doing it manually.

      Now add the Universe repository:

      • sudo add-apt-repository universe

      This repository contains free and open source software maintained by the Ubuntu community, but is not officially maintained by Canonical, the developers of Ubuntu. This is where we will find the repository for Certbot.

      Next, add the Certbot repository:

      • sudo add-apt-repository ppa:certbot/certbot

      You will receive the following output:

      Output

This is the PPA for packages prepared by Debian Let's Encrypt Team and backported for Ubuntu(s).

More info: https://launchpad.net/~certbot/+archive/ubuntu/certbot
Press [ENTER] to continue or ctrl-c to cancel adding it

      Press ENTER to accept.

Then update the package list:

• sudo apt update

Finally, install certbot:

• sudo apt install certbot

      Next, you will use certbot to generate a new SSL certificate.

      Since Ubuntu 18.04 doesn’t yet support automatic installation, you will use the certonly command and --standalone to obtain the certificate:

      • sudo certbot certonly --standalone -d minio-server.your_domain

      --standalone means that this certificate is for a built-in standalone web server. For more information on this, see our How To Use Certbot Standalone Mode to Retrieve Let’s Encrypt SSL Certificates on Ubuntu 18.04 tutorial.

      You will receive the following output:

      Output

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Enter email address (used for urgent renewal and security notices) (Enter 'c' to cancel):

      Add your email and press ENTER.

      Certbot will then ask you to register with Let’s Encrypt:

      Output

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel:

      Type A and press ENTER to agree.

      Next, you will be asked if you are willing to share your email with the Electronic Frontier Foundation:

      Output

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o:

      Once you answer Y or N, your public and private keys will be generated and saved in the /etc/letsencrypt/live/minio-server.your_domain_name directory.

Next, copy these two files (privkey.pem and fullchain.pem) into the certs directory under Minio’s server configuration folder, which is /etc/minio for this tutorial. First, create that certs directory if it does not already exist:

• sudo mkdir -p /etc/minio/certs

Then copy privkey.pem, renaming the file to private.key:

      • sudo cp /etc/letsencrypt/live/minio-server.your_domain_name/privkey.pem /etc/minio/certs/private.key

      Then do the same for fullchain.pem, naming the result public.crt:

      • sudo cp /etc/letsencrypt/live/minio-server.your_domain_name/fullchain.pem /etc/minio/certs/public.crt

      Now, change the ownership of the files to minio-user. First, do this for private.key:

      • sudo chown minio-user:minio-user /etc/minio/certs/private.key

      Then public.crt:

      • sudo chown minio-user:minio-user /etc/minio/certs/public.crt

      Restart the Minio server, so that it becomes aware of the certificate and starts using HTTPS:

      • sudo systemctl restart minio

      Let’s Encrypt certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The Certbot package you installed automatically adds a renew script to /etc/cron.d. This script runs twice a day and will automatically renew any certificate that’s within thirty days of expiration.
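Note that Minio reads the copied files in /etc/minio/certs rather than the originals under /etc/letsencrypt, so a renewed certificate will not be picked up automatically. One way to close that gap is a Certbot deploy hook that re-copies the files and restarts Minio after every successful renewal; the following is a minimal sketch assuming the paths used in this tutorial. Create the hook script:

• sudo nano /etc/letsencrypt/renewal-hooks/deploy/minio.sh

/etc/letsencrypt/renewal-hooks/deploy/minio.sh

#!/bin/bash
# Hypothetical deploy hook: re-copy the renewed key and certificate into
# Minio's certs directory, restore ownership, and restart the service.
cp /etc/letsencrypt/live/minio-server.your_domain_name/privkey.pem /etc/minio/certs/private.key
cp /etc/letsencrypt/live/minio-server.your_domain_name/fullchain.pem /etc/minio/certs/public.crt
chown minio-user:minio-user /etc/minio/certs/private.key /etc/minio/certs/public.crt
systemctl restart minio

Then make the script executable so Certbot can run it:

• sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/minio.sh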

      With that, Minio’s connection is now secure, and the SSL/TLS certificate will automatically renew for you. In the next step, you’ll connect to Minio through the browser to use the server.

      Step 5 — Securely Connecting to Minio’s Web Interface Using HTTPS

      In this step, you’ll securely connect to the Minio web interface via HTTPS, and then you’ll create buckets and upload objects into them.

      Access the web interface by pointing your browser to https://minio-server.your_domain:9000.

      You will see the Minio server login screen:

      Minio login screen

      Now, log in to the main interface by entering your credentials. For Access Key, enter the MINIO_ACCESS_KEY you set in the /etc/default/minio environment file in Step 1. For Secret Key, type the MINIO_SECRET_KEY you set in the same file. Once you’ve entered the credentials, click the round button with the arrow directly below the input fields.

      You will then be presented with the Minio user interface. To create a new bucket in which you can store objects, click the light-red + button on the bottom right of the main interface to bring up two additional yellow buttons.

      Minio's main interface

      Click the middle yellow button and enter a name for your new bucket in the prompt, pressing the ENTER key to save your response. Your new bucket is now ready to be used for storage.

      Note: When naming your Minio bucket, make sure that your name only contains lowercase letters, numbers, or hyphens. Minio limits bucket naming conventions in order to be compatible with AWS S3 standards.

      When you want to add objects into your bucket, click the same light-red button as before and then click the top yellow button to open a file-upload prompt.

      At this point, you’ve worked through the entire basic web interface of creating buckets and uploading objects.

      Conclusion

      You now have your own Minio object storage server that you can connect to securely from the web interface using a Let’s Encrypt SSL/TLS certificate. Optionally, you may want to look at the Minio desktop clients for FreeBSD, Linux, Mac, and Windows as an alternative way to use and administrate your object storage server.

      Additionally, if you’d like to increase your Minio installation’s storage capacity beyond your server’s disk size, you can use DigitalOcean’s block storage service to attach a volume to your server, extending storage capacity by as much as 80 TB.

      More information about Minio is available at the project’s documentation website. If you’d like to learn more about object storage, browse our Object Storage tutorials.




      How to Set Up a Scalable Laravel 6 Application using Managed Databases and Object Storage


      Introduction

      When scaling web applications horizontally, the first difficulties you’ll typically face are dealing with file storage and data persistence. This is mainly due to the fact that it is hard to maintain consistency of variable data between multiple application nodes; appropriate strategies must be in place to make sure data created in one node is immediately available to other nodes in a cluster.

A practical way of solving the consistency problem is by using managed databases and object storage systems. The former outsources data persistence to a managed database, and the latter provides a remote storage service where you can keep static files and variable content such as images uploaded by users. Each node can then connect to these services at the application level.

      The following diagram demonstrates how such a setup can be used for horizontal scalability in the context of PHP applications:

      Laravel at scale diagram

      In this guide, we will update an existing Laravel 6 application to prepare it for horizontal scalability by connecting it to a managed MySQL database and setting up an S3-compatible object store to save user-generated files. By the end, you will have a travel list application running on an Nginx + PHP-FPM web server:

      Travellist v1.0

      Note: this guide uses DigitalOcean Managed MySQL and Spaces to demonstrate a scalable application setup using managed databases and object storage. The instructions contained here should work in a similar way for other service providers.

      Prerequisites

      To begin this tutorial, you will first need the following prerequisites:

      • Access to an Ubuntu 18.04 server as a non-root user with sudo privileges, and an active firewall installed on your server. To set these up, please refer to our Initial Server Setup Guide for Ubuntu 18.04.
      • Nginx and PHP-FPM installed and configured on your server, as explained in steps 1 and 3 of How to Install LEMP on Ubuntu 18.04. You should skip the step where MySQL is installed.
      • Composer installed on your server, as explained in steps 1 and 2 of How to Install and Use Composer on Ubuntu 18.04.
      • Admin credentials to a managed MySQL 8 database. For this guide, we’ll be using a DigitalOcean Managed MySQL cluster, but the instructions here should work similarly for other managed database services.
      • A set of API keys with read and write permissions to an S3-compatible object storage service. In this guide, we’ll use DigitalOcean Spaces, but you are free to use a provider of your choice.
      • The s3cmd tool installed and configured to connect to your object storage drive. For instructions on how to set this up for DigitalOcean Spaces, please refer to our product documentation.

      Step 1 — Installing the MySQL 8 Client

      The default Ubuntu apt repositories come with the MySQL 5 client, which is not compatible with the MySQL 8 server we’ll be using in this guide. To install the compatible MySQL client, we’ll need to use the MySQL APT Repository provided by Oracle.

      Begin by navigating to the MySQL APT Repository page in your web browser. Find the Download button in the lower-right corner and click through to the next page. This page will prompt you to log in or sign up for an Oracle web account. You can skip that and instead look for the link that says No thanks, just start my download. Copy the link address and go back to your terminal window.

      This link should point to a .deb package that will set up the MySQL APT Repository in your server. After installing it, you’ll be able to use apt to install the latest releases of MySQL. We’ll use curl to download this file into a temporary location.

Go to your server’s tmp folder:

• cd /tmp

      Now download the package with curl and using the URL you copied from the MySQL APT Repository page:

      • curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb

      After the download is finished, you can use dpkg to install the package:

      • sudo dpkg -i mysql-apt-config_0.8.13-1_all.deb

      You will be presented with a screen where you can choose which MySQL version you’d like to select as default, as well as which MySQL components you’re interested in:

      MySQL APT Repository Install

      You don’t need to change anything here, because the default options will install the repositories we need. Select “Ok” and the configuration will be finished.

Next, you’ll need to update your apt cache with:

• sudo apt update

      Now we can finally install the MySQL 8 client with:

      • sudo apt install mysql-client

Once that command finishes, check the software version number to ensure that you have the latest release:

• mysql --version

      You’ll see output like this:

      Output

      mysql Ver 8.0.18 for Linux on x86_64 (MySQL Community Server - GPL)

      In the next step, we’ll use the MySQL client to connect to your managed MySQL server and prepare the database for the application.

      Step 2 — Creating a new MySQL User and Database

At the time of this writing, the native MySQL PHP library mysqlnd doesn’t support caching_sha2_password, the default authentication method for MySQL 8. We’ll need to create a new user with the mysql_native_password authentication method in order to be able to connect our Laravel application to the MySQL 8 server. We’ll also create a dedicated database for our demo application.

      To get started, log into your server using an admin account. Replace the highlighted values with your own MySQL user, host, and port:

      • mysql -u MYSQL_USER -p -h MYSQL_HOST -P MYSQL_PORT

      When prompted, provide the admin user’s password. After logging in, you will have access to the MySQL 8 server command line interface.

      First, we’ll create a new database for the application. Run the following command to create a new database named travellist:

      • CREATE DATABASE travellist;

      Next, we’ll create a new user and set a password, using mysql_native_password as default authentication method for this user. You are encouraged to replace the highlighted values with values of your own, and to use a strong password:

• CREATE USER 'travellist-user'@'%' IDENTIFIED WITH mysql_native_password BY 'MYSQL_PASSWORD';

      Now we need to give this user permission over our application database:

• GRANT ALL ON travellist.* TO 'travellist-user'@'%';

You can now exit the MySQL prompt with:

• EXIT;

      You now have a dedicated database and a compatible user to connect from your Laravel application. In the next step, we’ll get the application code and set up configuration details, so your app can connect to your managed MySQL database.

      In this guide, we’ll use Laravel Migrations and database seeds to set up our application tables. If you need to migrate an existing local database to a DigitalOcean Managed MySQL database, please refer to our documentation on How to Import MySQL Databases into DigitalOcean Managed Databases.

      Step 3 — Setting Up the Demo Application

      To get started, we’ll fetch the demo Laravel application from its Github repository. Feel free to inspect the contents of the application before running the next commands.

      The demo application is a travel bucket list app that was initially developed in our guide on How to Install and Configure Laravel with LEMP on Ubuntu 18.04. The updated app now contains visual improvements including travel photos that can be uploaded by a visitor, and a world map. It also introduces a database migration script and database seeds to create the application tables and populate them with sample data, using artisan commands.

      To obtain the application code that is compatible with this tutorial, we’ll download the 1.1 release from the project’s repository on Github. We’ll save the downloaded zip file as travellist.zip inside our home directory:

      • cd ~
      • curl -L https://github.com/do-community/travellist-laravel-demo/archive/1.1.zip -o travellist.zip

      Now, unzip the contents of the application and rename its directory with:

      • unzip travellist.zip
      • mv travellist-laravel-demo-1.1 travellist

      Navigate to the travellist directory:

Before going ahead, we’ll need to install a few PHP modules that are required by the Laravel framework, namely php-xml, php-mbstring, and php-bcmath. To install these packages, along with unzip, run:

• sudo apt install unzip php-xml php-mbstring php-bcmath

To install the application dependencies, run:

• composer install

      You will see output similar to this:

      Output

Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 80 installs, 0 updates, 0 removals
  - Installing doctrine/inflector (v1.3.0): Downloading (100%)
  - Installing doctrine/lexer (1.1.0): Downloading (100%)
  - Installing dragonmantank/cron-expression (v2.3.0): Downloading (100%)
  - Installing erusev/parsedown (1.7.3): Downloading (100%)
...
Generating optimized autoload files
> Illuminate\Foundation\ComposerScripts::postAutoloadDump
> @php artisan package:discover --ansi
Discovered Package: beyondcode/laravel-dump-server
Discovered Package: fideloper/proxy
Discovered Package: laravel/tinker
Discovered Package: nesbot/carbon
Discovered Package: nunomaduro/collision
Package manifest generated successfully.

      The application dependencies are now installed. Next, we’ll configure the application to connect to the managed MySQL Database.

      Creating the .env configuration file and setting the App Key

We’ll now create a .env file containing variables that will be used to configure the Laravel application on a per-environment basis. The application includes an example file that we can copy, then modify its values to reflect our environment settings.

Copy the .env.example file to a new file named .env:

• cp .env.example .env

Now we need to set the application key. This key is used to encrypt session data, and should be set to a unique, 32-character-long string. We can generate this key automatically with the artisan tool:

• php artisan key:generate

Let’s edit the environment configuration file to set up the database details. Open the .env file using your command line editor of choice. Here, we will be using nano:

• nano .env

      Look for the database credentials section. The following variables need your attention:

• DB_HOST: your managed MySQL server host.
• DB_PORT: your managed MySQL server port.
• DB_DATABASE: the name of the application database we created in Step 2.
• DB_USERNAME: the database user we created in Step 2.
• DB_PASSWORD: the password for the database user we defined in Step 2.

      Update the highlighted values with your own managed MySQL info and credentials:

      ...
      DB_CONNECTION=mysql
      DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      ...
      

      Save and close the file by typing CTRL+X then Y and ENTER when you’re done editing.

      Now that the application is configured to connect to the MySQL database, we can use Laravel’s command line tool artisan to create the database tables and populate them with sample data.

      Migrating and populating the database

      We’ll now use Laravel Migrations and database seeds to set up the application tables. This will help us determine if our database configuration works as expected.

To execute the migration script that will create the tables used by the application, run:

• php artisan migrate

      You will see output similar to this:

      Output

Migration table created successfully.
Migrating: 2019_09_19_123737_create_places_table
Migrated:  2019_09_19_123737_create_places_table (0.26 seconds)
Migrating: 2019_10_14_124700_create_photos_table
Migrated:  2019_10_14_124700_create_photos_table (0.42 seconds)

To populate the database with sample data, run:

• php artisan db:seed

      You will see output like this:

      Output

Seeding: PlacesTableSeeder
Seeded:  PlacesTableSeeder (0.86 seconds)
Database seeding completed successfully.

      The application tables are now created and populated with sample data.

To finish the application setup, we also need to create a symbolic link to the public storage folder that will host the travel photos we’re using in the application. You can do that using the artisan tool:

• php artisan storage:link

      Output

      The [public/storage] directory has been linked.

This will create a symbolic link inside the public directory pointing to storage/app/public, where we’ll save the travel photos. To check that the link was created and where it points to, you can run:

• ls -la public

      You’ll see output like this:

      Output

total 36
drwxrwxr-x  5 sammy sammy 4096 Oct 25 14:59 .
drwxrwxr-x 12 sammy sammy 4096 Oct 25 14:58 ..
-rw-rw-r--  1 sammy sammy  593 Oct 25 06:29 .htaccess
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 css
-rw-rw-r--  1 sammy sammy    0 Oct 25 06:29 favicon.ico
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 img
-rw-rw-r--  1 sammy sammy 1823 Oct 25 06:29 index.php
drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 js
-rw-rw-r--  1 sammy sammy   24 Oct 25 06:29 robots.txt
lrwxrwxrwx  1 sammy sammy   41 Oct 25 14:59 storage -> /home/sammy/travellist/storage/app/public
-rw-rw-r--  1 sammy sammy 1194 Oct 25 06:29 web.config

      Running the test server (optional)

      You can use the artisan serve command to quickly verify that everything is set up correctly within the application, before having to configure a full-featured web server like Nginx to serve the application for the long term.

We’ll use port 8000 to temporarily serve the application for testing. If you have the UFW firewall enabled on your server, you should first allow access to this port with:

• sudo ufw allow 8000

Now, to run the built-in PHP server that Laravel exposes through the artisan tool, run:

      • php artisan serve --host=0.0.0.0 --port=8000

      This command will block your terminal until interrupted with a CTRL+C. It will use the built-in PHP web server to serve the application for test purposes on all network interfaces, using port 8000.

      Now go to your browser and access the application using the server’s domain name or IP address on port 8000:

      http://server_domain_or_IP:8000
      

      You will see the following page:

      Travellist v1.0

      If you see this page, it means the application is successfully pulling data about locations and photos from the configured managed database. The image files are still stored in the local disk, but we’ll change this in a following step of this guide.

      When you are finished testing the application, you can stop the serve command by hitting CTRL+C.

Don’t forget to close port 8000 again if you are running UFW on your server:

• sudo ufw deny 8000

      Step 4 — Configuring Nginx to Serve the Application

      Although the built-in PHP web server is very useful for development and testing purposes, it is not intended to be used as a long term solution to serve PHP applications. Using a full featured web server like Nginx is the recommended way of doing that.

      To get started, we’ll move the application folder to /var/www, which is the usual location for web applications running on Nginx. First, use the mv command to move the application folder with all its contents to /var/www/travellist:

      • sudo mv ~/travellist /var/www/travellist

      Now we need to give the web server user write access to the storage and bootstrap/cache folders, where Laravel stores application-generated files. We’ll set these permissions using setfacl, a command line utility that allows for more robust and fine-grained permission settings in files and folders.

      To include read, write and execution (rwx) permissions to the web server user over the required directories, run:

      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/storage
      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/bootstrap/cache
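If you’d like to verify the resulting permissions, you can print the ACL entries with getfacl, which ships in the same acl package as setfacl:

• getfacl /var/www/travellist/storage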

      The application files are now in order, but we still need to configure Nginx to serve the content. To do this, we’ll create a new virtual host configuration file at /etc/nginx/sites-available:

      • sudo nano /etc/nginx/sites-available/travellist

      The following configuration file contains the recommended settings for Laravel applications on Nginx:

      /etc/nginx/sites-available/travellist

      server {
          listen 80;
          server_name server_domain_or_IP;
          root /var/www/travellist/public;
      
          add_header X-Frame-Options "SAMEORIGIN";
          add_header X-XSS-Protection "1; mode=block";
          add_header X-Content-Type-Options "nosniff";
      
          index index.html index.htm index.php;
      
          charset utf-8;
      
          location / {
              try_files $uri $uri/ /index.php?$query_string;
          }
      
          location = /favicon.ico { access_log off; log_not_found off; }
          location = /robots.txt  { access_log off; log_not_found off; }
      
          error_page 404 /index.php;
      
    location ~ \.php$ {
              fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
              include fastcgi_params;
          }
      
    location ~ /\.(?!well-known).* {
              deny all;
          }
      }
      

      Copy this content to your /etc/nginx/sites-available/travellist file and adjust the highlighted values to align with your own configuration. Save and close the file when you’re done editing.

      To activate the new virtual host configuration file, create a symbolic link to travellist in sites-enabled:

      • sudo ln -s /etc/nginx/sites-available/travellist /etc/nginx/sites-enabled/

      Note: If you have another virtual host file that was previously configured for the same server_name used in the travellist virtual host, you might need to deactivate the old configuration by removing the corresponding symbolic link inside /etc/nginx/sites-enabled/.

To confirm that the configuration doesn’t contain any syntax errors, you can use:

• sudo nginx -t

      You should see output like this:

      Output

      • nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      • nginx: configuration file /etc/nginx/nginx.conf test is successful

      To apply the changes, reload Nginx with:

      • sudo systemctl reload nginx

      If you reload your browser now, the application images will be broken. That happens because we moved the application directory to a new location inside the server, and for that reason we need to re-create the symbolic link to the application storage folder.

      Remove the old link with:

      • cd /var/www/travellist
      • rm -f public/storage

Now run the artisan command once again to generate the storage link:

• php artisan storage:link

      Now go to your browser and access the application using the server’s domain name or IP address, as defined by the server_name directive in your configuration file:

      http://server_domain_or_IP
      

      Travellist v1.0

      In the next step, we’ll integrate an object storage service into the application. This will replace the current local disk storage used for the travel photos.

      Step 5 — Integrating an S3-Compatible Object Storage into the Application

      We’ll now set up the application to use an S3-compatible object storage service for storing the travel photos exhibited on the index page. Because the application already has a few sample photos stored in the local disk, we’ll also use the s3cmd tool to upload the existing local image files to the remote object storage.

      Setting Up the S3 Driver for Laravel

      Laravel uses league/flysystem, a filesystem abstraction library that enables a Laravel application to use and combine multiple storage solutions, including local disk and cloud services. An additional package is required to use the s3 driver. We can install this package using the composer require command.

Access the application directory:

• cd /var/www/travellist

Then install the package:

      • composer require league/flysystem-aws-s3-v3

      You will see output similar to this:

      Output

Using version ^1.0 for league/flysystem-aws-s3-v3
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 8 installs, 0 updates, 0 removals
  - Installing mtdowling/jmespath.php (2.4.0): Loading from cache
  - Installing ralouphie/getallheaders (3.0.3): Loading from cache
  - Installing psr/http-message (1.0.1): Loading from cache
  - Installing guzzlehttp/psr7 (1.6.1): Loading from cache
  - Installing guzzlehttp/promises (v1.3.1): Loading from cache
  - Installing guzzlehttp/guzzle (6.4.1): Downloading (100%)
  - Installing aws/aws-sdk-php (3.112.28): Downloading (100%)
  - Installing league/flysystem-aws-s3-v3 (1.0.23): Loading from cache
...

      Now that the required packages are installed, we can update the application to connect to the object storage. First, we’ll open the .env file again to set up configuration details such as keys, bucket name, and region for your object storage service.

Open the .env file:

• nano .env

      Include the following environment variables, replacing the highlighted values with your object store configuration details:

      /var/www/travellist/.env

      DO_SPACES_KEY=EXAMPLE7UQOTHDTF3GK4
      DO_SPACES_SECRET=exampleb8e1ec97b97bff326955375c5
      DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com
      DO_SPACES_REGION=ams3
      DO_SPACES_BUCKET=sammy-travellist
      

      Save and close the file when you’re done. Now open the config/filesystems.php file:

      • nano config/filesystems.php

      Within this file, we’ll create a new disk entry in the disks array. We’ll name this disk spaces, and we’ll use the environment variables we’ve set in the .env file to configure the new disk. Include the following entry in the disks array:

      config/filesystems.php

      
      'spaces' => [
         'driver' => 's3',
         'key' => env('DO_SPACES_KEY'),
         'secret' => env('DO_SPACES_SECRET'),
         'endpoint' => env('DO_SPACES_ENDPOINT'),
         'region' => env('DO_SPACES_REGION'),
         'bucket' => env('DO_SPACES_BUCKET'),
      ],
      
      

      Still in the same file, locate the cloud entry and change it to set the new spaces disk as default cloud filesystem disk:

      config/filesystems.php

'cloud' => env('FILESYSTEM_CLOUD', 'spaces'),
      

      Save and close the file when you’re done editing. From your controllers, you can now use the Storage::cloud() method as a shortcut to access the default cloud disk. This way, the application stays flexible to use multiple storage solutions, and you can switch between providers on a per-environment basis.
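As a quick illustration of that shortcut, the following sketch (hypothetical file name and contents, not part of the demo application code) writes a file to the default cloud disk and reads it back:

use Illuminate\Support\Facades\Storage;

// Store a file on the default cloud disk ("spaces" after the change above)...
Storage::cloud()->put('example.txt', 'stored on the spaces disk');

// ...then read it back from the same disk.
$contents = Storage::cloud()->get('example.txt');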

      The application is now configured to use object storage, but we still need to update the code that uploads new photos to the application.

      Let’s first examine the current uploadPhoto route, located in the PhotoController class. Open the file using your text editor:

      • nano app/Http/Controllers/PhotoController.php

      app/Http/Controllers/PhotoController.php

      …
      
      public function uploadPhoto(Request $request)
      {
         $photo = new Photo();
         $place = Place::find($request->input('place'));
      
         if (!$place) {
             //add new place
             $place = new Place();
             $place->name = $request->input('place_name');
             $place->lat = $request->input('place_lat');
             $place->lng = $request->input('place_lng');
         }
      
         $place->visited = 1;
         $place->save();
      
         $photo->place()->associate($place);
         $photo->image = $request->image->store('/', 'public');
         $photo->save();
      
         return redirect()->route('Main');
      }
      
      

      This method accepts a POST request and creates a new photo entry in the photos table. It begins by checking if an existing place was selected in the photo upload form, and if that’s not the case, it will create a new place using the provided information. The place is then set to visited and saved to the database. Following that, an association is created so that the new photo is linked to the designated place. The image file is then stored in the root folder of the public disk. Finally, the photo is saved to the database. The user is then redirected to the main route, which is the index page of the application.

The line we’re interested in is the one that calls the store method, which saves files to any of the disks defined in the config/filesystems.php configuration file. In this case, it uses the default public disk to store uploaded images.
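To make the mechanics concrete, here is a minimal, hypothetical sketch of what that call does (the image field name matches the upload form):

// store() writes the uploaded file to the given path on the given disk
// and returns the generated relative path, e.g. 'a1b2c3d4e5.jpg'.
$path = $request->file('image')->store('/', 'public');

// That returned path is what ends up saved in the photos table.
$photo->image = $path;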

We will change this behavior so that the image is saved to the object store instead of the local disk. In order to do that, we need to replace the public disk with the spaces disk in the store method call. We also need to make sure the uploaded file’s visibility is set to public instead of private.

      The following code contains the full PhotoController class, including the updated uploadPhoto method:

      app/Http/Controllers/PhotoController.php

      <?php
      
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Photo;
use App\Place;
use Illuminate\Support\Facades\Storage;
      
      class PhotoController extends Controller
      {
         public function uploadForm()
         {
             $places = Place::all();
      
             return view('upload_photo', [
                 'places' => $places
             ]);
         }
      
         public function uploadPhoto(Request $request)
         {
             $photo = new Photo();
             $place = Place::find($request->input('place'));
      
             if (!$place) {
                 //add new place
                 $place = new Place();
                 $place->name = $request->input('place_name');
                 $place->lat = $request->input('place_lat');
                 $place->lng = $request->input('place_lng');
             }
      
             $place->visited = 1;
             $place->save();
      
             $photo->place()->associate($place);
             $photo->image = $request->image->store('/', 'spaces');
        Storage::disk('spaces')->setVisibility($photo->image, 'public');
             $photo->save();
      
             return redirect()->route('Main');
         }
      }
      
      
      

Copy the updated code to your own PhotoController so that it reflects these changes. Save and close the file when you’re done editing.

      We still need to modify the application’s main view so that it uses the object storage file URL to render the image. Open the travel_list.blade.php template:

      • nano resources/views/travel_list.blade.php

      Now locate the footer section of the page, which currently looks like this:

      resources/views/travel_list.blade.php

      @section('footer')
         <h2>Travel Photos <small>[ <a href="{{ route('Upload.form') }}">Upload Photo</a> ]</small></h2>
         @foreach ($photos as $photo)
             <div class="photo">
                <img src="https://www.digitalocean.com/{{ asset('storage') . '/' . $photo->image }}" />
                 <p>{{ $photo->place->name }}</p>
             </div>
         @endforeach
      
      @endsection
      

      Replace the current image src attribute to use the file URL from the spaces storage disk:

      <img src="https://www.digitalocean.com/{{ Storage::disk('spaces')->url($photo->image) }}" />
      

If you go to your browser now and reload the application page, it will show only broken images. That happens because the image files for those travel photos are still only on the local disk. We need to upload the existing image files to the object storage so that the photos already stored in the database can be displayed on the application page.

      Syncing local images with s3cmd

The s3cmd tool can be used to sync local files with an S3-compatible object storage service. We’ll run a sync command to upload all files inside storage/app/public to the object storage service.

      Access the public app storage directory:

      • cd /var/www/travellist/storage/app/public

      To have a look at the files already stored in your remote disk, you can use the s3cmd ls command:

      • s3cmd ls s3://your_bucket_name

      Now run the sync command to upload existing files in the public storage folder to the object storage:

      • s3cmd sync ./ s3://your_bucket_name --acl-public --exclude=.gitignore

This will synchronize the current folder (storage/app/public) with the remote object storage’s root directory. You will get output similar to this:

      Output

upload: './bermudas.jpg' -> 's3://sammy-travellist/bermudas.jpg'  [1 of 3]
 2538230 of 2538230   100% in    7s   329.12 kB/s  done
upload: './grindavik.jpg' -> 's3://sammy-travellist/grindavik.jpg'  [2 of 3]
 1295260 of 1295260   100% in    5s   230.45 kB/s  done
upload: './japan.jpg' -> 's3://sammy-travellist/japan.jpg'  [3 of 3]
 8940470 of 8940470   100% in   24s   363.61 kB/s  done
Done. Uploaded 12773960 bytes in 37.1 seconds, 336.68 kB/s.
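If you’d like to preview which files a sync would transfer before actually uploading anything, s3cmd supports a --dry-run flag. This optional check is not part of the original steps:

• s3cmd sync --dry-run ./ s3://your_bucket_name --acl-public --exclude=.gitignore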

      Now, if you run s3cmd ls again, you will see that three new files were added to the root folder of your object storage bucket:

      • s3cmd ls s3://your_bucket_name

      Output

2019-10-25 11:49   2538230   s3://sammy-travellist/bermudas.jpg
2019-10-25 11:49   1295260   s3://sammy-travellist/grindavik.jpg
2019-10-25 11:49   8940470   s3://sammy-travellist/japan.jpg

      Go to your browser and reload the application page. All images should be visible now, and if you inspect them using your browser debug tools, you’ll notice that they’re all using URLs from your object storage.

      Testing the Integration

      The demo application is now fully functional, storing files in a remote object storage service, and saving data to a managed MySQL database. We can now upload a few photos to test our setup.

      Access the /upload application route from your browser:

      http://server_domain_or_IP/upload
      

      You will see the following form:

      Travellist  Photo Upload Form

      You can now upload a few photos to test the object storage integration. After choosing an image from your computer, you can select an existing place from the dropdown menu, or you can add a new place by providing its name and geographic coordinates so it can be loaded in the application map.

      Step 6 — Scaling Up a DigitalOcean Managed MySQL Database with Read-Only Nodes (Optional)

Because read operations are typically more frequent than write operations on database servers, it is common practice to scale a database cluster up by setting up multiple read-only nodes. This will distribute the load generated by SELECT operations.

      To demonstrate this setup, we’ll first add 2 read-only nodes to our DigitalOcean Managed MySQL cluster. Then, we’ll configure the Laravel application to use these nodes.

      Access the DigitalOcean Cloud Panel and follow these instructions:

      1. Go to Databases and select your MySQL cluster.
      2. Click Actions and choose Add a read-only node from the drop-down menu.
      3. Configure the node options and hit the Create button. Notice that it might take several minutes for the new node to be ready.
4. Repeat steps 2 and 3 one more time so that you have 2 read-only nodes.
      5. Note down the hosts of the two nodes as we will need them for our Laravel configuration.

      Once you have your read-only nodes ready, head back to your terminal.

      We’ll now configure our Laravel application to work with multiple database nodes. When we’re finished, queries such as INSERT and UPDATE will be forwarded to your primary cluster node, while all SELECT queries will be redirected to your read-only nodes.

      First, go to the application’s directory on the server and open your .env file using your text editor of choice:

      • cd /var/www/travellist
      • nano .env

      Locate the MySQL database configuration and comment out the DB_HOST line:

      /var/www/travellist/.env

      DB_CONNECTION=mysql
      #DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      

Save and close the file when you’re done. Now open the config/database.php file in your text editor:

• nano config/database.php

      Look for the mysql entry inside the connections array. You should include three new items in this configuration array: read, write, and sticky. The read and write entries will set up the cluster nodes, and the sticky option set to true will reuse write connections so that data written to the database is immediately available in the same request cycle. You can set it to false if you don’t want this behavior.

/var/www/travellist/config/database.php

      ...
            'mysql' => [
               'read' => [
                 'host' => [
                    "http://www.digitalocean.com/READONLY_NODE1_HOST',
                    "http://www.digitalocean.com/READONLY_NODE2_HOST',
                 ],
               ],
               'write' => [
                 'host' => [
                   "http://www.digitalocean.com/MANAGED_MYSQL_HOST',
                 ],
               ],
             'sticky' => true,
      ...
      

      Save and close the file when you are done editing. To test that everything works as expected, we can create a temporary route inside routes/web.php to pull some data from the database and show details about the connection being used. This way we will be able to see how the requests are being load balanced between the read-only nodes.

Open the routes/web.php file:

• nano routes/web.php

      Include the following route:

/var/www/travellist/routes/web.php

      ...
      
Route::get('/mysql-test', function () {
  // Run a SELECT query so it gets routed to one of the read-only nodes
  $places = App\Place::all();

  // Report which MySQL server handled the connection
  $results = DB::select( DB::raw("SHOW VARIABLES LIKE 'server_id'") );

  return "Server ID: " . $results[0]->Value;
});
      

      Now go to your browser and access the /mysql-test application route:

      http://server_domain_or_IP/mysql-test
      

      You’ll see a page like this:

      mysql node test page

      Reload the page a few times and you will notice that the Server ID value changes, indicating that the requests are being randomly distributed between the two read-only nodes.
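As a side note, if you ever need to force a specific query onto the primary (write) node, for example to read back a row you’ve just written when sticky is disabled, Laravel’s DB::select() accepts a third $useReadPdo argument. The following is a minimal sketch with a hypothetical /mysql-write-test route, not part of the original application:

Route::get('/mysql-write-test', function () {
    // Passing false as the third argument makes Laravel run this SELECT
    // on the write (primary) connection instead of a read-only node.
    $results = DB::select("SHOW VARIABLES LIKE 'server_id'", [], false);

    return "Write node Server ID: " . $results[0]->Value;
});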

      Conclusion

      In this guide, we’ve prepared a Laravel 6 application for a highly available and scalable environment. We’ve outsourced the database system to an external managed MySQL service, and we’ve integrated an S3-compatible object storage service into the application to store files uploaded by users. Finally, we’ve seen how to scale up the application’s database by including additional read-only cluster nodes in the app’s configuration file.

The updated demo application code containing all modifications made in this guide can be found within the 2.1 tag in the application’s repository on GitHub.

      From here, you can set up a Load Balancer to distribute load and scale your application among multiple nodes. You can also leverage this setup to create a containerized environment to run your application on Docker.


