
      How to Set Up a Private Docker Registry with Linode Kubernetes Engine and Object Storage


      Updated by Leslie Salazar. Contributed by Leslie Salazar.


      Hosting a private Docker registry alongside your Kubernetes cluster allows you to securely manage your Docker images while also providing quick deployment of your apps. This guide will walk you through the steps needed to deploy a private Docker registry on a Linode Kubernetes Engine (LKE) cluster. At the end of this tutorial, you will be able to locally push and pull Docker images to your registry. Similarly, your LKE cluster’s pods will also be able to pull Docker images from the registry to complete their deployments.

      Before you Begin

      1. Deploy an LKE cluster. This example was written using a node pool with two 2 GB nodes. Depending on the workloads you will deploy on your cluster, you may want to use nodes with more resources.

      2. Install Helm 3, kubectl, and Docker to your local environment.

      3. Ensure Object Storage is enabled on your Linode account. Generate an Object Storage key pair and save it in a secure location; you will need the key pair in a later section of this guide. Finally, create an Object Storage bucket to store your registry’s images. Throughout this guide, the example bucket name will be registry.

      4. Purchase a domain name from a reliable domain registrar. Using Linode’s DNS Manager, create a new domain and add a DNS “A” record for a subdomain named registry. This subdomain will host your Docker registry. This guide will use registry.example.com as the example domain.

        Note

        Optionally, you can create a wildcard DNS record, *.example.com. In a later section, you will point your DNS A record to a Linode NodeBalancer’s external IP address. Using a wildcard DNS record allows you to expose your Kubernetes services without requiring further configuration in the Linode DNS Manager.
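
        If you manage DNS from the command line, the record can also be created with the Linode CLI. The following is a minimal sketch, not taken from this guide: it assumes linode-cli is installed and configured, and the domain ID 12345 and the 192.0.2.0 placeholder IP are illustrative (you will not know your NodeBalancer’s IP until a later section).

          # List your domains to find the numeric ID for example.com.
          linode-cli domains list

          # Create the registry A record with a 5 minute TTL; replace 12345 with
          # your domain's ID and 192.0.2.0 with your NodeBalancer's IP.
          linode-cli domains records-create 12345 \
            --type A --name registry --target 192.0.2.0 --ttl_sec 300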

      In this Guide

      In this guide you will:

      • Install the NGINX Ingress Controller using Helm
      • Enable HTTPS on your registry with cert-manager and Let’s Encrypt
      • Deploy a private Docker registry backed by Linode Object Storage
      • Push an image to your registry and use it in a test deployment

      Install the NGINX Ingress Controller

      An Ingress is used to provide external routes, via HTTP or HTTPS, to your cluster’s services. An Ingress Controller, like the NGINX Ingress Controller, fulfills the requirements presented by the Ingress using a load balancer.

      In this section, you will install the NGINX Ingress Controller using Helm, which will create a Linode NodeBalancer to handle your cluster’s traffic.

      1. Add the Google stable Helm charts repository to your Helm repos:

        helm repo add stable https://kubernetes-charts.storage.googleapis.com/
        
      2. Update your Helm repositories:

        helm repo update
        
      3. Install the NGINX Ingress Controller. This installation will result in a Linode NodeBalancer being created.

        helm install nginx-ingress stable/nginx-ingress
        

        You will see a similar output after issuing the above command (the output has been truncated for brevity):

          
        NAME: nginx-ingress
        LAST DEPLOYED: Wed Apr  8 09:55:47 2020
        NAMESPACE: default
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
        NOTES:
        The nginx-ingress controller has been installed.
        It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller'
        ...
            
        

        In the next section, you will use your Linode NodeBalancer’s external IP address to update your registry’s domain record.

      Update your Subdomain’s IP Address

      1. Access your NodeBalancer’s assigned external IP address.

        kubectl --namespace default get services -o wide -w nginx-ingress-controller
        

        The command will return a similar output:

          
        NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE     SELECTOR
        nginx-ingress-controller   LoadBalancer   10.128.169.60   192.0.2.0   80:32401/TCP,443:30830/TCP   7h51m   app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
            
        
      2. Copy the IP address of the EXTERNAL-IP field, then navigate to Linode’s DNS Manager and update your domain’s registry A record with the external IP address. Ensure that the entry’s TTL field is set to 5 minutes.
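
        If you prefer to script this step, kubectl’s jsonpath output can extract just the external IP address. A minimal sketch, assuming the nginx-ingress-controller service name from the installation above:

        kubectl --namespace default get service nginx-ingress-controller \
          -o jsonpath='{.status.loadBalancer.ingress[0].ip}'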

      Now that your NGINX Ingress Controller has been deployed and your subdomain’s A record has been updated, you are ready to enable HTTPS on your Docker registry.

      Enable HTTPS

      Note

      Before performing the commands in this section, ensure that your DNS changes have had time to propagate across the internet. This process can take several hours. You can check the status of your DNS record with the following command, substituting your own subdomain and domain for registry.example.com.

      dig +short registry.example.com
      

      If successful, the output should return the IP address of your NodeBalancer.
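
      Rather than re-running the command by hand, you can poll until the record resolves. A small sketch using the same dig query; the 30 second interval is arbitrary:

      # Loop until dig returns a non-empty answer.
      until dig +short registry.example.com | grep -q '.'; do
        echo "Waiting for DNS to propagate..."
        sleep 30
      done
      dig +short registry.example.com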

      To enable HTTPS on your Docker registry, you will create a Transport Layer Security (TLS) certificate from the Let’s Encrypt certificate authority (CA) using the ACME protocol. This will be facilitated by cert-manager, the native Kubernetes certificate management controller.

      In this section you will install cert-manager using Helm and the required cert-manager CustomResourceDefinitions (CRDs). Then, you will create a ClusterIssuer and Certificate resource to create your cluster’s TLS certificate.

      Install cert-manager

      1. Install cert-manager’s CRDs.

        kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager.crds.yaml
        
      2. Create a cert-manager namespace.

        kubectl create namespace cert-manager
        
      3. Add the Helm repository which contains the cert-manager Helm chart.

        helm repo add jetstack https://charts.jetstack.io
        
      4. Update your Helm repositories.

        helm repo update
        
      5. Install the cert-manager Helm chart. These basic configurations should be sufficient for many use cases; however, additional cert-manager configurable parameters can be found in cert-manager’s official documentation.

        helm install \
        cert-manager jetstack/cert-manager \
        --namespace cert-manager \
        --version v0.14.1
        
      6. Verify that the corresponding cert-manager pods are now running.

        kubectl get pods --namespace cert-manager
        

        You should see a similar output:

          
        NAME                                       READY   STATUS    RESTARTS   AGE
        cert-manager-579d48dff8-84nw9              1/1     Running   3          1m
        cert-manager-cainjector-789955d9b7-jfskr   1/1     Running   3          1m
        cert-manager-webhook-64869c4997-hnx6n      1/1     Running   0          1m
            
        
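        You can also confirm that the CustomResourceDefinitions from step 1 were registered with your cluster; the output should list resources such as certificates.cert-manager.io and clusterissuers.cert-manager.io:

        kubectl get crds | grep cert-manager
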

      Create a ClusterIssuer Resource

      Now that cert-manager is installed and running on your cluster, you will need to create a ClusterIssuer resource which defines which CA can create signed certificates when a certificate request is received. A ClusterIssuer is not a namespaced resource, so it can be used by more than one namespace.

      1. Create a directory named registry to store all of your Docker registry’s related manifest files and move into the new directory.

        mkdir ~/registry && cd ~/registry
        
      2. Using the text editor of your choice, create a file named acme-issuer-prod.yaml with the example configurations. Replace the value of email with your own email address.

        ~/registry/acme-issuer-prod.yaml
        apiVersion: cert-manager.io/v1alpha2
        kind: ClusterIssuer
        metadata:
          name: letsencrypt-prod
        spec:
          acme:
            email: user@example.com
            server: https://acme-v02.api.letsencrypt.org/directory
            privateKeySecretRef:
              name: letsencrypt-secret-prod
            solvers:
            - http01:
                ingress:
                  class: nginx
            
        • This manifest file creates a ClusterIssuer resource that will register an account on an ACME server. The value of spec.acme.server designates Let’s Encrypt’s production ACME server, which should be trusted by most browsers.

          Note

          Let’s Encrypt provides a staging ACME server that can be used to test issuing trusted certificates, while not worrying about hitting Let’s Encrypt’s production rate limits. The staging URL is https://acme-staging-v02.api.letsencrypt.org/directory.
        • The value of privateKeySecretRef.name provides the name of a secret containing the private key for this user’s ACME server account (this is tied to the email address you provide in the manifest file). The ACME server will use this key to identify you.

        • To ensure that you own the domain for which you will create a certificate, the ACME server will issue a challenge to a client. cert-manager provides two options for solving challenges, http01 and dns01. In this example, the http01 challenge solver is used, and it is configured in the solvers array. cert-manager spins up challenge solver Pods to solve the issued challenges and uses Ingress resources to route each challenge to the appropriate Pod.

      3. Create the ClusterIssuer resource:

        kubectl create -f acme-issuer-prod.yaml
        
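        To confirm that the ClusterIssuer successfully registered an account with the ACME server, you can inspect its status; in the Conditions section of the output, the Ready condition should be True:

        kubectl describe clusterissuer letsencrypt-prod
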

      Create a Certificate Resource

      After you have a ClusterIssuer resource, you can create a Certificate resource. This describes your X.509 public key certificate and is used to automatically generate a CertificateRequest, which is sent to your ClusterIssuer.

      1. Using the text editor of your choice, create a file named certificate-prod.yaml with the example configurations. Replace the value of spec.dnsNames with your own domain that you will use to host your Docker registry.

        ~/registry/certificate-prod.yaml
        apiVersion: cert-manager.io/v1alpha2
        kind: Certificate
        metadata:
          name: docker-registry-prod
        spec:
          secretName: letsencrypt-secret-prod
          duration: 2160h # 90d
          renewBefore: 360h # 15d
          issuerRef:
            name: letsencrypt-prod
            kind: ClusterIssuer
          dnsNames:
          - registry.example.com
            

        Note

        The configurations in this example create a Certificate that is valid for 90 days and renews 15 days before expiry.

      2. Create the Certificate resource:

        kubectl create -f certificate-prod.yaml
        
      3. Verify that the Certificate has been successfully issued:

        kubectl get certs
        

        When your certificate is ready, you should see a similar output:

          
        NAME                   READY   SECRET                    AGE
        docker-registry-prod   True    letsencrypt-secret-prod   42s
            
        

        All the necessary components are now in place to enable HTTPS on your Docker registry. In the next section, you will complete the steps needed to deploy your registry.
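
        If the READY column instead remains False, the Certificate’s status conditions and events usually explain why (for example, a failed http01 challenge or a pending order). A quick way to inspect them:

        # Show the Certificate's status conditions and recent events.
        kubectl describe certificate docker-registry-prod

        # Drill into the underlying request objects if needed.
        kubectl get certificaterequests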

      Deploy your Docker Registry

      You will now deploy your Docker registry to your Kubernetes cluster using a Helm chart. Prior to deploying it, you will create a username and password to enable basic authentication for your registry. This restricts access to your Docker registry and keeps your images private. Since your registry will require authentication, you will also add a Kubernetes secret to your cluster so that it can provide your registry’s credentials when pulling images.

      Enable Basic Authentication

      To enable basic access restriction for your Docker registry, you will use the htpasswd utility. This utility stores usernames and passwords in a file for basic HTTP authentication, and requires you to log in to your Docker registry before you can push or pull images.

      1. Install the htpasswd utility. This example is for an Ubuntu 18.04 instance, but you can use your system’s package manager to install it.

        sudo apt install apache2-utils -y
        
      2. Create a file to store your Docker registry’s username and password.

        touch my_docker_pass
        
      3. Create a username and password using htpasswd. Replace example_user with your own username. Follow the prompt to create a password.

        htpasswd -B my_docker_pass example_user
        
      4. View the contents of your password file.

        cat my_docker_pass
        

        Your output will resemble the following. You will need these values when deploying your registry in the Configure your Docker Registry section of the guide.

          
        example_user:$2y$05$8VhvzCVCB4txq8mNGh8eu.8GMyBEEeUInqQJHKJUD.KUwxastPG4m
          
        
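        The htpasswd utility can manage multiple users in the same file, which is useful if several people will push to the registry. A brief sketch; the second_user name is illustrative:

        # Add another user to the existing password file (-B uses bcrypt).
        htpasswd -B my_docker_pass second_user

        # Remove a user from the file.
        htpasswd -D my_docker_pass second_user
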

      Grant your Cluster Access to your Docker Registry

      Your LKE cluster will also need to authenticate to your Docker registry in order to pull images from it. In this section, you will create a Kubernetes Secret that grants your cluster’s kubelet access to your registry’s images.

      1. Create a secret to store your registry’s authentication information. Replace the option values with your own registry’s details. The --docker-username and --docker-password should be the username and password that you used when generating credentials using the htpasswd utility.

        kubectl create secret docker-registry regcred \
          --docker-server=registry.example.com \
          --docker-username=example_user \
          --docker-password=3xampl3Passw0rd \
          --docker-email=user@example.com
        
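        To confirm the secret stores the credentials you expect, you can decode it. The .dockerconfigjson key holds the Docker config generated by the command above:

        kubectl get secret regcred \
          --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
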

      Configure your Docker Registry

      Before deploying the Docker Registry Helm chart to your cluster, you will define some configurations so that the Docker registry uses the NGINX Ingress controller, your registry Object Storage bucket, and your cert-manager created TLS certificate. See the Docker Registry Helm Chart’s official documentation for a full list of all available configurations.

      1. Create a new file named docker-configs.yaml using the example configurations. Ensure you replace the following values in your file:

        • ingress.hosts with your own Docker registry’s domain
        • ingress.tls.secretName with the secret name you used when creating your ClusterIssuer
        • ingress.tls.hosts with the domain you wish to secure with your TLS certificate.
        • secrets.s3.accessKey with the value of your Object Storage account’s access key and secrets.s3.secretKey with the corresponding secret key.
        • secrets.htpasswd with the value returned when you view the contents of your my_docker_pass file. However, ensure you do not remove the |- characters. This ensures that your YAML is properly formatted. See step 4 in the Enable Basic Authentication section for details on viewing the contents of your password file.
        • s3.region with your Object Storage bucket’s cluster region, s3.regionEndpoint with your Object Storage bucket’s region endpoint, and s3.bucket with your registry’s Object Storage bucket name.
        ~/registry/docker-configs.yaml
        ingress:
          enabled: true
          hosts:
            - registry.example.com
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: letsencrypt-prod
            nginx.ingress.kubernetes.io/proxy-body-size: "0"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "6000"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "6000"
          tls:
            - secretName: letsencrypt-secret-prod
              hosts:
              - registry.example.com
        storage: s3
        secrets:
          htpasswd: |-
            example_user:$2y$05$8VhvzCVCB4txq8mNGh8eu.8GMyBEEeUInqQJHKJUD.KUwxastPG4m
          s3:
            accessKey: "myaccesskey"
            secretKey: "mysecretkey"
        s3:
          region: us-east-1
          regionEndpoint: us-east-1.linodeobjects.com/
          secure: true
          bucket: registry
              
        • The NGINX Ingress annotation nginx.ingress.kubernetes.io/proxy-body-size: "0" disables the maximum allowed client request body size check and ensures that you won’t receive a 413 error when pushing larger Docker images to your registry. The values for nginx.ingress.kubernetes.io/proxy-read-timeout: "6000" and nginx.ingress.kubernetes.io/proxy-send-timeout: "6000" are sane starting values, but may be adjusted as needed.
      2. Deploy your Docker registry using the configurations you created in the previous step:

        helm install docker-registry stable/docker-registry -f docker-configs.yaml
        
      3. Navigate to your registry’s domain and verify that your browser loads the TLS certificate.

        Verify that your Docker registry's site loads your TLS certificate

        You will interact with your registry via the Docker CLI, so you should not expect to see any content load on the page.
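
        You can also verify the registry from the command line by querying the Docker Registry HTTP API’s base endpoint. An unauthenticated request to /v2/ should return a 401 with a basic authentication challenge, while supplying the credentials you created should return a 200:

        # Without credentials: expect HTTP/1.1 401 Unauthorized.
        curl -sI https://registry.example.com/v2/

        # With credentials (curl will prompt for the password): expect 200 OK.
        curl -sI -u example_user https://registry.example.com/v2/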

      Push an Image to your Docker Registry

      You are now ready to push and pull images to your Docker registry. In this section you will pull an existing image from Docker Hub and then push it to your registry. Then, in the next section, you will use your registry’s image to deploy an example static site.

      1. Use Docker to pull an image from Docker Hub. This example uses an image that was created following our Create and Deploy a Docker Container Image to a Kubernetes Cluster guide. The image builds a Hugo static site with some boilerplate content. However, you can use any image from Docker Hub that you prefer.

        sudo docker pull leslitagordita/hugo-site:v10
        
      2. Tag your local Docker image with your private registry’s hostname. This is required when pushing an image to a private registry and not the central Docker registry. Ensure that you replace registry.example.com with your own registry’s domain.

        sudo docker tag leslitagordita/hugo-site:v10 registry.example.com/leslitagordita/hugo-site:v10
        
      3. You have not yet authenticated to your private registry, so you must log in to it before pushing up any images. Issue the example command, replacing registry.example.com with your own registry’s domain. Follow the prompts to enter the username and password you created in the Enable Basic Authentication section.

        sudo docker login registry.example.com
        
      4. Push the image to your registry. Ensure that you replace registry.example.com with your own registry’s domain.

        sudo docker push registry.example.com/leslitagordita/hugo-site:v10
        

        You should see a similar output when your image push is complete:

          
        The push refers to repository [registry.example.com/leslitagordita/hugo-site]
        925cbd794bd8: Pushed
        b9fee92b7ac7: Pushed
        1658c062e6a8: Pushed
        21acf2dde3fe: Pushed
        588c407f9029: Pushed
        bcf2f368fe23: Pushed
        v10: digest: sha256:3db7ab6bc5a893375af6f7cf505bac2f4957d8a03701d7fd56853712b0900312 size: 1570
            
        
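        To confirm that the image now lives in your registry, you can list its repositories and tags through the Docker Registry HTTP API; replace the domain and credentials with your own:

        # List repositories stored in the registry.
        curl -u example_user https://registry.example.com/v2/_catalog

        # List the available tags for the pushed image.
        curl -u example_user https://registry.example.com/v2/leslitagordita/hugo-site/tags/list
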

      Create a Test Deployment Using an Image from Your Docker Registry

      In this section, you will create a test deployment using the image that you pushed to your registry in the previous section. This will ensure that your cluster can authenticate to your Docker registry and pull images from it.

      1. Use Linode’s DNS manager to create a new subdomain A record to host your static site. The example will use static.example.com. When creating your record, assign your cluster’s NodeBalancer external IP address as the IP address. You can find the external IP address with the following command:

        kubectl --namespace default get services -o wide -w nginx-ingress-controller
        

        The command will return a similar output. Use the value of the EXTERNAL-IP field to create your static site’s new subdomain A record.

          
        NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE     SELECTOR
        nginx-ingress-controller   LoadBalancer   10.128.169.60   192.0.2.0   80:32401/TCP,443:30830/TCP   7h51m   app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
            
        
      2. Using a text editor, create the static-site-test.yaml file with the example configurations. This file will create a deployment, service, and an ingress.

        ~/registry/static-site-test.yaml
        apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: static-site-ingress
          annotations:
            kubernetes.io/ingress.class: nginx
            nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
        spec:
          rules:
          - host: static.example.com
            http:
              paths:
              - path: /
                backend:
                  serviceName: static-site
                  servicePort: 80
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: static-site
        spec:
          type: NodePort
          ports:
          - port: 80
            targetPort: 80
          selector:
            app: static-site
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: static-site
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: static-site
          template:
            metadata:
              labels:
                app: static-site
            spec:
              containers:
              - name: static-site
                image: registry.example.com/leslitagordita/hugo-site:v10
                ports:
                - containerPort: 80
              imagePullSecrets:
              - name: regcred
              
        • In the Deployment section of the manifest, the imagePullSecrets field references the secret you created in the Grant your Cluster Access to your Docker Registry section. This secret contains the authentication credentials that your cluster’s kubelet can use to pull your private registry’s image.
        • The image field provides the image to pull from your Docker registry.
      3. Create the deployment.

        kubectl create -f static-site-test.yaml
        
      4. Open a browser and navigate to your site’s domain and view the example static site. Using our example, you would navigate to static.example.com. The example Hugo site should load.
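
        If the site does not load, you can check whether the Deployment’s pods pulled the image from your private registry successfully:

        # Wait for all 3 replicas to become available.
        kubectl rollout status deployment/static-site

        # Inspect pod status; an ImagePullBackOff status here usually points
        # to a problem with the regcred secret or registry credentials.
        kubectl get pods -l app=static-site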

      (Optional) Tear Down your Kubernetes Cluster

      To avoid being further billed for your Kubernetes cluster and NodeBalancer, delete your cluster using the Linode Cloud Manager. Similarly, to avoid being further billed for your registry’s Object Storage bucket, follow the steps in the cancel the Object Storage service on your account section of our How to Use Object Storage guide.

      This guide is published under a CC BY-ND 4.0 license.




      How to Access Objects with Linode Object Storage


      Updated by Linode

      Contributed by Linode

      Object Storage gives each object a unique URL with which you can access your data. An object can be publicly accessible, or you can set it to be private and only visible to you. This makes Object Storage great for sharing and storing unstructured data like images, documents, archives, streaming media assets, and file backups, and the amount of data you store can range from small collections of files up to massive libraries of information.

      In this guide you will learn how to access the objects you have stored in Linode’s Object Storage using:

      • Object URLs
      • Signed URLs
      • Website URLs

      Before You Begin

      To learn how to enable Object Storage, see the How to Use Object Storage guide.

      Object Storage is similar to a subscription service. Once enabled, you will be billed at the flat rate regardless of whether or not there are active buckets on your account. Cancelling Object Storage will stop billing for this flat rate.

      In all Object Storage URLs, the cluster where your bucket is hosted is part of the URL string.

      Note

      A cluster is defined as all buckets hosted by a unique URL, for example, us-east-1.linodeobjects.com or us-east-2.linodeobjects.com.

      Object URLs

      Objects stored in Linode Object Storage are generally accessible using this format:

      http://my-example-bucket.us-east-1.linodeobjects.com/example.txt
      
      • Replace the following fields with your information:

        • my-example-bucket with your bucket name
        • us-east-1 with the cluster where your bucket is hosted
        • example.txt with the object you wish to access
      • This assumes that the object is publicly accessible. For more on object permissions, see the How to Use Object Storage guide.
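
      For example, to upload a file so that it is publicly accessible at a URL of this form, you can use s3cmd’s -P (public ACL) flag; the bucket and file names below are illustrative:

      # Upload example.txt and apply a public-read ACL in one step.
      s3cmd put example.txt s3://my-example-bucket -P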

      Signed URLs

      Creating a signed URL allows you to share a link to an object with limited permissions for a short amount of time. Signed URLs have a similar format:

      http://my-example-bucket.us-east-1.linodeobjects.com/example.txt?AWSAccessKeyId=YOUROBJECTSTORAGEACCESSKEY&Expires=1579725476&Signature=rAnDomKeySigNAtuRe
      
      • This is returned when you use a tool like the Linode CLI or s3cmd to generate a signed URL.

      • Replace the following fields with your information:

        • my-example-bucket with your bucket name
        • us-east-1 with the cluster where your bucket is hosted
        • example.txt with the object you are giving access to
      • The remaining query parameters (AWSAccessKeyId, Expires, and Signature) are the parts that make this URL public for a limited amount of time.
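
      You can generate a signed URL of this form with s3cmd’s signurl command. The expiry is given as an epoch timestamp or a relative +seconds value:

      # Create a URL for example.txt that expires in one hour (3600 seconds).
      s3cmd signurl s3://my-example-bucket/example.txt +3600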

      Websites

      Static sites are served from URLs that are different from the standard URLs you would normally use to access objects. Static sites prepend website- to the cluster name to create a subdomain such as website-us-east-1. Using my-example-bucket as an example, a full URL would look like this:

      http://my-example-bucket.website-us-east-1.linodeobjects.com
      
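      A bucket serves a static site only after website hosting has been enabled on it. With s3cmd this is done using the ws-create command; the index and error page names below are illustrative:

      # Enable static site hosting on the bucket.
      s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://my-example-bucket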

      This guide is published under a CC BY-ND 4.0 license.




      How to Enact Access Control Lists (ACLs) and Bucket Policies with Linode Object Storage


      Updated by Linode

      Contributed by Linode

      Linode Object Storage allows users to share access to objects and buckets with other Object Storage users. There are two mechanisms for setting up sharing: Access Control Lists (ACLs), and bucket policies. These mechanisms perform similar functions: both can be used to restrict and grant access to Object Storage resources.

      In this guide you will learn:

      • How to retrieve an Object Storage user’s canonical ID
      • The differences between ACLs and bucket policies, and how to choose between them
      • How to apply ACLs to buckets and objects with s3cmd
      • How to write and apply bucket policies

      Before You Begin

      • This guide will use the s3cmd command line utility to interact with Object Storage. For s3cmd installation and configuration instructions, visit our How to Use Object Storage guide.

      • You’ll also need the canonical ID of every user you wish to grant additional permissions to.

      Retrieve a User’s Canonical ID

      Follow these steps to determine the canonical ID of the Object Storage users you want to share with:

      1. The following command will return the canonical ID of a user, given any of the user’s buckets:

        s3cmd info s3://other-users-bucket
        

        Note

        The bucket referred to in this section is an arbitrary bucket on the target user’s account. It is not related to the bucket on your account that you would like to set ACLs or bucket policies on.

        There are two options for running this command:

        • The users you’re granting or restricting access to can run this command on one of their buckets and share their canonical ID with you, or:

        • You can run this command yourself if you have use of their access tokens (you will need to configure s3cmd to use their access tokens instead of your own).

      2. Run the above command, replacing other-users-bucket with the name of the bucket. You’ll see output similar to the following:

          
        s3://other-users-bucket/ (bucket):
        Location:  default
        Payer:     BucketOwner
        Expiration Rule: none
        Policy:    none
        CORS:      none
        ACL:       a0000000-000a-0000-0000-00d0ff0f0000: FULL_CONTROL
        
        
      3. The canonical ID of the owner of the bucket is the long string of letters, dashes, and numbers found in the line labeled ACL, which in this case is a0000000-000a-0000-0000-00d0ff0f0000.

      4. Alternatively, you may be able to retrieve the canonical ID by curling a bucket and retrieving the Owner ID field from the returned XML. This method is an option when both of these conditions are true:

        • The bucket has objects within it and has already been set to public (with a command like s3cmd setacl s3://other-users-bucket --acl-public).
        • The bucket has not been set to serve static websites.
      5. Run the curl command, replacing the bucket name and cluster URL with the relevant values:

        curl other-users-bucket.us-east-1.linodeobjects.com
        
      6. This will result in the following output:

        <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
            <Name>other-users-bucket</Name>
            <Prefix/>
            <Marker/>
            <MaxKeys>1000</MaxKeys>
            <IsTruncated>false</IsTruncated>
            <Contents>
            <Key>cpanel_one-click.gif</Key>
            <LastModified>2019-11-20T16:52:49.946Z</LastModified>
            <ETag>"9aeafcb192a8e540e7be5b51f7249e2e"</ETag>
            <Size>961023</Size>
            <StorageClass>STANDARD</StorageClass>
            <Owner>
                <ID>a0000000-000a-0000-0000-00d0ff0f0000</ID>
                <DisplayName>a0000000-000a-0000-0000-00d0ff0f0000</DisplayName>
            </Owner>
            <Type>Normal</Type>
            </Contents>
        </ListBucketResult>
        

        In the above output, the canonical ID is a0000000-000a-0000-0000-00d0ff0f0000.

      ACLs vs Bucket Policies

      ACLs and bucket policies perform similar functions: both can restrict or grant access to buckets. ACLs can also restrict or grant access to individual objects, but they don’t offer as many fine-grained access modes as bucket policies.

      How to Choose Between ACLs and Bucket Policies

      If you can organize objects with similar permission needs into their own buckets, then it’s strongly suggested that you use bucket policies. However, if you cannot organize your objects in this fashion, ACLs are still a good option.

      ACLs offer permissions with less fine-grained control than the permissions available through bucket policies. If you are looking for more granular permissions beyond read and write access, choose bucket policies over ACLs.

      Additionally, bucket policies are created by applying a written bucket policy file to the bucket. This file cannot exceed 20KB in size. If you have a policy with a lengthy list of policy rules, you may want to look into ACLs instead.

      Note

      ACLs and bucket policies can be used at the same time. When this happens, any rule that limits access to an Object Storage resource will override a rule that grants access. For instance, if an ACL allows a user access to a bucket, but a bucket policy denies that user access, the user will not be able to access that bucket.

      ACLs

      Access Control Lists (ACLs) are a legacy method of defining access to Object Storage resources. You can apply an ACL to a bucket or to a specific object. There are two generalized modes of access: setting buckets and/or objects to be private or public. A few other more granular settings are also available.

      With s3cmd, you can set a bucket to be public with the setacl command and the --acl-public flag:

      s3cmd setacl s3://acl-example --acl-public
      

      This will cause the bucket and its contents to be downloadable over the general Internet.

      To set an object or bucket to private, you can use the setacl command and the --acl-private flag:

      s3cmd setacl s3://acl-example --acl-private
      

      This will prevent users from accessing the bucket’s contents over the general Internet.

      Other ACL Permissions

      The more granular permissions are:

      Permission     Description
      read           Users can list objects within a bucket.
      write          Users can upload objects to a bucket and delete objects from a bucket.
      read_acp       Users can read the ACL currently applied to a bucket.
      write_acp      Users can change the ACL applied to the bucket.
      full_control   Users have read and write access over both objects and ACLs.
      • Setting a permission: To apply these more granular permissions for a specific user with s3cmd, use the following setacl command with the --acl-grant flag:

        s3cmd setacl s3://acl-example --acl-grant=PERMISSION:CANONICAL_ID
        

        Substitute acl-example with the name of the bucket (and the object, if necessary), PERMISSION with a permission from the above table, and CANONICAL_ID with the canonical ID of the user to which you would like to grant permissions.

      • Revoking a permission: To revoke a specific permission, you can use the setacl command with the --acl-revoke flag:

        s3cmd setacl s3://acl-example --acl-revoke=PERMISSION:CANONICAL_ID
        

        Substitute the bucket name (and optional object), PERMISSION, and CANONICAL_ID with your relevant values.

      • View current ACLs: To view the current ACLs applied to a bucket or object, use the info command, replacing acl-example with the name of your bucket (and object, if necessary):

        s3cmd info s3://acl-example
        

        You should see output like the following:

          
        s3://acl-example/ (bucket):
           Location:  default
           Payer:     BucketOwner
           Expiration Rule: none
           Policy:    none
           CORS:      b'<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedMethod>GET</AllowedMethod><AllowedMethod>PUT</AllowedMethod><AllowedMethod>DELETE</AllowedMethod><AllowedMethod>HEAD</AllowedMethod><AllowedMethod>POST</AllowedMethod><AllowedOrigin>*</AllowedOrigin><AllowedHeader>*</AllowedHeader></CORSRule></CORSConfiguration>'
           ACL:       *anon*: READ
           ACL:       a0000000-000a-0000-0000-00d0ff0f0000: FULL_CONTROL
           URL:       http://us-east-1.linodeobjects.com/acl-example/
        
        

        Note

        The owner of the bucket will always have the full_control permission.

      Bucket Policies

      Bucket policies can offer finer control over the types of permissions you can grant to a user. Below is an example bucket policy written in JSON:

      bucket_policy_example.txt
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam:::a0000000-000a-0000-0000-00d0ff0f0000"
            ]
          },
          "Action": [
            "s3:PutObject",
            "s3:GetObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::bucket-policy-example/*"
          ]
        }]
      }

      This policy allows the user with the canonical ID a0000000-000a-0000-0000-00d0ff0f0000, known here as the “principal”, to interact with the bucket, known as the “resource”. The “resource” that is listed (bucket-policy-example) is the only bucket the user will have access to.

      Note

      The principal (a.k.a. the user) must have the prefix of arn:aws:iam:::, and the resource (a.k.a. the bucket) must have the prefix of arn:aws:s3:::.

      The permissions granted are specified in the Action array. For the current example, these are:

      • s3:PutObject, which allows the user to upload objects to the bucket
      • s3:GetObject, which allows the user to retrieve objects from the bucket
      • s3:ListBucket, which allows the user to list the bucket’s contents

      The Action and Principal.AWS fields of the bucket policy are arrays, so you can easily add additional users and permissions to the bucket policy, separating them by a comma. To grant permissions to all users, you can supply a wildcard (*) in the Principal.AWS field.

      If you instead wanted to deny access to the user, you could change the Effect field to Deny.

      Enable a Bucket Policy

      To enable the bucket policy, use the setpolicy s3cmd command, supplying the file name of the bucket policy as the first argument, and the S3 bucket address as the second argument:

      s3cmd setpolicy bucket_policy_example.txt s3://bucket-policy-example
      

      To ensure that it has been applied correctly, you can use the info command:

      s3cmd info s3://bucket-policy-example
      

      You should see output like the following:

        
      s3://bucket-policy-example/ (bucket):
         Location:  default
         Payer:     BucketOwner
         Expiration Rule: none
         Policy:    b'{\n  "Version": "2012-10-17",\n  "Statement": [{\n    "Effect": "Allow",\n    "Principal": {"AWS": ["arn:aws:iam:::a0000000-000a-0000-0000-00d0ff0f0000"]},\n    "Action": ["s3:PutObject","s3:GetObject","s3:ListBucket"],\n    "Resource": [\n      "arn:aws:s3:::bucket-policy-example/*"\n    ]\n  }]\n}'
         CORS:      none
         ACL:       a0000000-000a-0000-0000-00d0ff0f0000: FULL_CONTROL
      
      

      Note

      The policy is visible in the output.
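
      If you later need to remove the policy from the bucket, s3cmd provides a matching delpolicy command:

      s3cmd delpolicy s3://bucket-policy-example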


      This guide is published under a CC BY-ND 4.0 license.


