      Dedicated Private Cloud vs. Virtual Private Cloud: What’s the Difference?


      What is the difference between a dedicated private cloud and a virtual private cloud? As solutions architects, this is a question my teammates and I hear often. Simply put:

      • Dedicated Private Cloud (DPC): a physically isolated, single-tenant collection of compute, network and sometimes storage resources provisioned exclusively to one organization or application.
      • Virtual Private Cloud (VPC): a multi-tenant but virtually isolated collection of compute, network and storage resources.

      A simple analogy: choosing between the two is like choosing between a single-family private home (DPC) and a condominium building (VPC).

      Despite the differences, both dedicated and virtual private clouds offer secure environments with flexible management options, which allow you to concentrate on your core business instead of struggling to keep up with daily infrastructure monitoring and maintenance.

      Let’s discuss each cloud product in greater depth and review use cases for dedicated vs. virtual private clouds. I’ll use INAP’s dedicated private cloud (DPC) and virtual private cloud (VPC) products as examples for the DPC and VPC differentiators.

      Dedicated Private Cloud (DPC)

      DPCs are scalable, isolated computing environments that are tailored to fit unique requirements and right-sized for any workload or application. DPCs are ideal for mission-critical or legacy applications. When applications can’t be easily refactored for the cloud, a DPC can be a viable solution. A DPC is also ideal for organizations seeking to reduce time spent maintaining infrastructure. You do not need to sacrifice control, compliance or performance with a DPC. INAP DPCs are built with trusted enterprise-class technologies powered by VMware or Hyper-V.

      DPC use cases:

      • Compliance and audit requirements, such as PCI or HIPAA
      • Stringent security requirements
      • Large scale applications with rigorous performance and/or data storage requirements
      • Legacy applications, which may require hardware keys or specific software licensing components
      • Data center migration — scale physical compute, network and storage capacity as needed without significant investments in data center build outs
      • Complex network requirements, which may include MPLS, SD-WAN or private layer 2 connections to customers, vendors or partners
      • Fully-integrated active or hot-standby disaster recovery environments
      • Infrastructure Management Services, all the way to the operating system
      • High CPU/GPU/RAM requirements
      • AI environments
      • Big Data
      • Always-on applications that are not a fit for hyperscale providers

      INAP’s DPC differentiators:

      • Designed and “right-sized” to fit your application, economics and compliance requirements
      • Built with enterprise-class technologies and powered by VMware or Hyper-V.
      • Utilize 100 percent isolated compute and highly secure, single-tenant environments perfect for PCI or HIPAA compliance.
      • Flexible compute and data storage options that allow you to meet any application performance and growth requirements.
      • OS managed services free up time spent on routine patching tasks.
      • Transparency into the core infrastructure technology gives you complete visibility into the inner workings of the environment.
      • No restrictions on sizing of the VMs or application workloads, because the infrastructure is custom designed for your organization’s specific technology needs.
      • SDN switching for flexible, quick and easy network management or dedicated switching for complex network configurations to meet any network requirements.
      • MDR security services available, which include vulnerability scanning, IDS/IPS, log management with SOC (Security Operations Center)
      • Off-site cloud backups and fully integrated and managed DRaaS available.

      Virtual Private Cloud (VPC)

      VPCs are ideal for applications with variable resource requirements and for organizations seeking to reduce time spent maintaining infrastructure without sacrificing control of their virtual machines, compliance or elasticity. They provide a customized landscape of users, groups, computing resources and a virtual network that you define. Different organizations or users of VPC resources do not have access to the underlying hypervisor for customization or monitoring plugin installation.

      VPCs are pre-designed for smaller to medium workloads and provide management and monitoring tools. They allow for very fast application deployment because the highly available compute, security, storage and hypervisors are already deployed and ready for your workload.

      VPC use cases:

      • Small to medium-sized workloads with 10 to 25 VMs and simple network requirements
      • Applications with lower RAM requirements
      • Additional capacity needed for projects; deploy in hours, not days
      • Quickly spin up unlimited virtual machines (VMs) per host to support new projects or to add resources on demand during peak business cycles

      INAP’s VPC differentiators:

      • Designed for fast deployments enabling you to eliminate lengthy sourcing and procurement timelines
      • Shield Managed Security services included
        • 24/7 physical security in SSAE 16/SOC 2 certified Data Centers
        • Private networks & segmentation
        • Account security for secure portal access
        • DDoS protection & Mitigation
      • OS managed services free up time spent on routine patching tasks
      • Easy-to-use interface simplifies management and reduces the operational expense of training IT staff
      • Off-site cloud backups and fully integrated on-demand (pay-as-you-go) DRaaS available
      • MDR security services available, which include vulnerability scanning, IDS/IPS, log management with SOC (Security Operations Center)

      Next Steps

      Do you know which private cloud model will work with your company’s workload and applications? Whether you’re certain that a DPC or VPC will be a good fit or you’re still unsure, INAP’s experts can help take your cloud infrastructure to the next level. Chat today to talk all things private cloud.


      Rob Lerner



      How to Set Up a Private Docker Registry with Linode Kubernetes Engine and Object Storage


      Updated and contributed by Leslie Salazar


      Hosting a private Docker registry alongside your Kubernetes cluster allows you to securely manage your Docker images while also providing quick deployment of your apps. This guide will walk you through the steps needed to deploy a private Docker registry on a Linode Kubernetes Engine (LKE) cluster. At the end of this tutorial, you will be able to locally push and pull Docker images to your registry. Similarly, your LKE cluster’s pods will also be able to pull Docker images from the registry to complete their deployments.

      Before you Begin


      1. Deploy an LKE cluster. This example was written using a node pool with two 2 GB nodes. Depending on the workloads you will be deploying on your cluster, you may consider using nodes with more resources.

      2. Install Helm 3, kubectl, and Docker to your local environment.

      3. Ensure Object Storage is enabled on your Linode account, generate an Object Storage key pair, and save the key pair in a secure location; you will need it in a later section of this guide. Finally, create an Object Storage bucket to store your registry’s images. Throughout this guide, the example bucket name will be registry.

      4. Purchase a domain name from a reliable domain registrar. Using Linode’s DNS Manager, create a new domain and add a DNS “A” record for a subdomain named registry. This subdomain will host your Docker registry. This guide will use registry.example.com as the example domain.

        Note

        Optionally, you can create a wildcard DNS record, *.example.com. In a later section, you will point your DNS A record to a Linode NodeBalancer’s external IP address. Using a wildcard DNS record allows you to expose your Kubernetes services without requiring further configuration in the Linode DNS Manager.

      In this Guide

      In this guide you will install the NGINX Ingress Controller, enable HTTPS by creating a TLS certificate with cert-manager, deploy a private Docker registry backed by Object Storage, push an image to the registry, and create a test deployment that pulls from it.

      Install the NGINX Ingress Controller

      An Ingress is used to provide external routes, via HTTP or HTTPS, to your cluster’s services. An Ingress Controller, like the NGINX Ingress Controller, fulfills the requirements presented by the Ingress using a load balancer.
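      For orientation, here is a minimal Ingress manifest of the kind the controller fulfills. The resource and service names below are placeholders for illustration, not part of this guide's deployment:

```yaml
# Hypothetical Ingress: routes HTTP traffic for registry.example.com
# to a Service named my-service on port 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: registry.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
```

      The NGINX Ingress Controller watches for Ingress resources like this one and configures its load balancer to route the matching traffic.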

      In this section, you will install the NGINX Ingress Controller using Helm, which will create a Linode NodeBalancer to handle your cluster’s traffic.

      1. Add the Google stable Helm charts repository to your Helm repos:

        helm repo add stable https://kubernetes-charts.storage.googleapis.com/
        
      2. Update your Helm repositories:

        helm repo update
        
      3. Install the NGINX Ingress Controller. This installation will result in a Linode NodeBalancer being created.

        helm install nginx-ingress stable/nginx-ingress
        

        You will see a similar output after issuing the above command (the output has been truncated for brevity):

          
        NAME: nginx-ingress
        LAST DEPLOYED: Wed Apr  8 09:55:47 2020
        NAMESPACE: default
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
        NOTES:
        The nginx-ingress controller has been installed.
        It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller'
        ...
            
        

        In the next section, you will use your Linode NodeBalancer’s external IP address to update your registry’s domain record.

      Update your Subdomain’s IP Address

      1. Access your NodeBalancer’s assigned external IP address.

        kubectl --namespace default get services -o wide -w nginx-ingress-controller
        

        The command will return a similar output:

          
        NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE     SELECTOR
        nginx-ingress-controller   LoadBalancer   10.128.169.60   192.0.2.0   80:32401/TCP,443:30830/TCP   7h51m   app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
            
        
      2. Copy the IP address of the EXTERNAL-IP field, navigate to Linode’s DNS Manager, and update your domain’s registry A record with the external IP address. Ensure that the entry’s TTL field is set to 5 minutes.

      Now that your NGINX Ingress Controller has been deployed and your subdomain’s A record has been updated, you are ready to enable HTTPS on your Docker registry.

      Enable HTTPS

      Note

      Before performing the commands in this section, ensure that your DNS has had time to propagate across the internet. This process can take several hours. You can query the status of your DNS by using the following command, substituting registry.example.com for your subdomain and domain.

      dig +short registry.example.com
      

      If successful, the output should return the IP address of your NodeBalancer.

      To enable HTTPS on your Docker registry, you will create a Transport Layer Security (TLS) certificate from the Let’s Encrypt certificate authority (CA) using the ACME protocol. This will be facilitated by cert-manager, the native Kubernetes certificate management controller.

      In this section you will install cert-manager using Helm and the required cert-manager CustomResourceDefinitions (CRDs). Then, you will create a ClusterIssuer and Certificate resource to create your cluster’s TLS certificate.

      Install cert-manager

      1. Install cert-manager’s CRDs.

        kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager.crds.yaml
        
      2. Create a cert-manager namespace.

        kubectl create namespace cert-manager
        
      3. Add the Helm repository which contains the cert-manager Helm chart.

        helm repo add jetstack https://charts.jetstack.io
        
      4. Update your Helm repositories.

        helm repo update
        
      5. Install the cert-manager Helm chart. These basic configurations should be sufficient for many use cases; additional configurable parameters can be found in cert-manager’s official documentation.

        helm install \
        cert-manager jetstack/cert-manager \
        --namespace cert-manager \
        --version v0.14.1
        
      6. Verify that the corresponding cert-manager pods are now running.

        kubectl get pods --namespace cert-manager
        

        You should see a similar output:

          
        NAME                                       READY   STATUS    RESTARTS   AGE
        cert-manager-579d48dff8-84nw9              1/1     Running   3          1m
        cert-manager-cainjector-789955d9b7-jfskr   1/1     Running   3          1m
        cert-manager-webhook-64869c4997-hnx6n      1/1     Running   0          1m
            
        

      Create a ClusterIssuer Resource

      Now that cert-manager is installed and running on your cluster, you will need to create a ClusterIssuer resource which defines which CA can create signed certificates when a certificate request is received. A ClusterIssuer is not a namespaced resource, so it can be used by more than one namespace.

      1. Create a directory named registry to store all of your Docker registry’s related manifest files and move into the new directory.

        mkdir ~/registry && cd ~/registry
        
      2. Using the text editor of your choice, create a file named acme-issuer-prod.yaml with the example configurations. Replace the value of email with your own email address.

        ~/registry/acme-issuer-prod.yaml
        apiVersion: cert-manager.io/v1alpha2
        kind: ClusterIssuer
        metadata:
          name: letsencrypt-prod
        spec:
          acme:
            email: user@example.com
            server: https://acme-v02.api.letsencrypt.org/directory
            privateKeySecretRef:
              name: letsencrypt-secret-prod
            solvers:
            - http01:
                ingress:
                  class: nginx
            
        • This manifest file creates a ClusterIssuer resource that will register an account on an ACME server. The value of spec.acme.server designates Let’s Encrypt’s production ACME server, which should be trusted by most browsers.

          Note

          Let’s Encrypt provides a staging ACME server that can be used to test issuing trusted certificates, while not worrying about hitting Let’s Encrypt’s production rate limits. The staging URL is https://acme-staging-v02.api.letsencrypt.org/directory.
        • The value of privateKeySecretRef.name provides the name of a secret containing the private key for this user’s ACME server account (this is tied to the email address you provide in the manifest file). The ACME server will use this key to identify you.

        • To ensure that you own the domain for which you will create a certificate, the ACME server will issue a challenge to a client. cert-manager provides two options for solving challenges, http01 and DNS01. In this example, the http01 challenge solver will be used and it is configured in the solvers array. cert-manager will spin up challenge solver Pods to solve the issued challenges and use Ingress resources to route the challenge to the appropriate Pod.

      3. Create the ClusterIssuer resource:

        kubectl create -f acme-issuer-prod.yaml
        

      Create a Certificate Resource

      After you have a ClusterIssuer resource, you can create a Certificate resource. This will describe your x509 public key certificate and will be used to automatically generate a CertificateRequest which will be sent to your ClusterIssuer.

      1. Using the text editor of your choice, create a file named certificate-prod.yaml with the example configurations. Replace the value of spec.dnsNames with the domain that you will use to host your Docker registry.

        ~/registry/certificate-prod.yaml
        apiVersion: cert-manager.io/v1alpha2
        kind: Certificate
        metadata:
          name: docker-registry-prod
        spec:
          secretName: letsencrypt-secret-prod
          duration: 2160h # 90d
          renewBefore: 360h # 15d
          issuerRef:
            name: letsencrypt-prod
            kind: ClusterIssuer
          dnsNames:
          - registry.example.com
            

        Note

        The configurations in this example create a Certificate that is valid for 90 days and renews 15 days before expiry.
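        The hour values map to days as the inline comments indicate; as a quick arithmetic check:

```shell
# cert-manager expresses durations in hours; convert to days
echo "duration:    $((2160 / 24)) days"   # 90 days
echo "renewBefore: $((360 / 24)) days"    # 15 days
```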

      2. Create the Certificate resource:

        kubectl create -f certificate-prod.yaml
        
      3. Verify that the Certificate has been successfully issued:

        kubectl get certs
        

        When your certificate is ready, you should see a similar output:

          
        NAME                   READY   SECRET                    AGE
        docker-registry-prod   True    letsencrypt-secret-prod   42s
            
        

        All the necessary components are now in place to enable HTTPS on your Docker registry. In the next section, you will complete the steps needed to deploy your registry.

      Deploy your Docker Registry

      You will now complete the steps to deploy your Docker registry to your Kubernetes cluster using a Helm chart. Prior to deploying your Docker registry, you will first need to create a username and password in order to enable basic authentication, which restricts access to your Docker registry and keeps your images private. Since your registry will require authentication, you will also add a Kubernetes secret to your cluster to provide it with your registry’s authentication credentials, so that it can pull images from the registry.

      Enable Basic Authentication

      To enable basic access restriction for your Docker registry, you will use the htpasswd utility. This utility allows you to use a file to store usernames and passwords for basic HTTP authentication. With it in place, you will have to log in to your Docker registry before you can push or pull images.

      1. Install the htpasswd utility. This example is for an Ubuntu 18.04 instance, but you can use your system’s package manager to install it.

        sudo apt install apache2-utils -y
        
      2. Create a file to store your Docker registry’s username and password.

        touch my_docker_pass
        
      3. Create a username and password using htpasswd. Replace example_user with your own username. Follow the prompt to create a password.

        htpasswd -B my_docker_pass example_user
        
      4. View the contents of your password file.

        cat my_docker_pass
        

        Your output will resemble the following. You will need these values when deploying your registry in the Configure your Docker Registry section of the guide.

          
        example_user:$2y$05$8VhvzCVCB4txq8mNGh8eu.8GMyBEEeUInqQJHKJUD.KUwxastPG4m
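
        Each line of the file follows the username:hash format, and a $2y$ prefix marks a bcrypt hash (produced by the -B flag). As a sketch using a made-up hash, standard shell parameter expansion can pull the fields apart:

```shell
# Illustrative htpasswd entry (this hash is an example, not a real credential)
line='example_user:$2y$05$8VhvzCVCB4txq8mNGh8eu.8GMyBEEeUInqQJHKJUD.KUwxastPG4m'

# Username: everything before the first colon
user="${line%%:*}"
echo "$user"                  # example_user

# Hash: everything after the first colon; the prefix names the algorithm
hash="${line#*:}"
printf '%.4s\n' "$hash"       # $2y$ indicates bcrypt
```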
          
        

      Grant your Cluster Access to your Docker Registry

      Your LKE cluster will also need to authenticate to your Docker registry in order to pull images from it. In this section, you will create a Kubernetes secret that grants your cluster’s kubelet access to your registry’s images.

      1. Create a secret to store your registry’s authentication information. Replace the option values with your own registry’s details. The --docker-username and --docker-password should be the username and password that you used when generating credentials using the htpasswd utility.

        kubectl create secret docker-registry regcred \
          --docker-server=registry.example.com \
          --docker-username=example_user \
          --docker-password=3xampl3Passw0rd \
          --docker-email=user@example.com
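
        For background (not a step you need to run): the secret created above stores a .dockerconfigjson payload, and its auth field is simply the base64 encoding of username:password. A sketch using this guide's example credentials:

```shell
# Encode credentials the way .dockerconfigjson's "auth" field stores them
auth=$(printf '%s' 'example_user:3xampl3Passw0rd' | base64)
echo "$auth"

# Decoding returns the original user:password pair
printf '%s\n' "$(printf '%s' "$auth" | base64 -d)"   # example_user:3xampl3Passw0rd
```

        You can inspect the stored payload with kubectl get secret regcred -o yaml.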
        

      Configure your Docker Registry

      Before deploying the Docker Registry Helm chart to your cluster, you will define some configurations so that the Docker registry uses the NGINX Ingress controller, your registry Object Storage bucket, and your cert-manager created TLS certificate. See the Docker Registry Helm Chart’s official documentation for a full list of all available configurations.


      1. Create a new file named docker-configs.yaml using the example configurations. Ensure you replace the following values in your file:

        • ingress.hosts with your own Docker registry’s domain
        • ingress.tls.secretName with the secret name you used when creating your ClusterIssuer
        • ingress.tls.hosts with the domain you wish to secure with your TLS certificate.
        • secrets.s3.accessKey with the value of your Object Storage account’s access key and secrets.s3.secretKey with the corresponding secret key.
        • secrets.htpasswd with the value returned when you view the contents of your my_docker_pass file. However, ensure you do not remove the |- characters. This ensures that your YAML is properly formatted. See step 4 in the Enable Basic Authentication section for details on viewing the contents of your password file.
        • s3.region with your Object Storage bucket’s cluster region, s3.regionEndpoint with your Object Storage bucket’s region endpoint, and s3.bucket with your registry’s Object Storage bucket name.
        ~/registry/docker-configs.yaml
        ingress:
          enabled: true
          hosts:
            - registry.example.com
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: letsencrypt-prod
            nginx.ingress.kubernetes.io/proxy-body-size: "0"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "6000"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "6000"
          tls:
            - secretName: letsencrypt-secret-prod
              hosts:
              - registry.example.com
        storage: s3
        secrets:
          htpasswd: |-
            example_user:$2y$05$8VhvzCVCB4txq8mNGh8eu.8GMyBEEeUInqQJHKJUD.KUwxastPG4m
          s3:
            accessKey: "myaccesskey"
            secretKey: "mysecretkey"
        s3:
          region: us-east-1
          regionEndpoint: us-east-1.linodeobjects.com/
          secure: true
          bucket: registry
              
        • The NGINX Ingress annotation nginx.ingress.kubernetes.io/proxy-body-size: "0" disables the check on the maximum allowed size of the client request body and ensures that you won’t receive a 413 error when pushing larger Docker images to your registry. The values for nginx.ingress.kubernetes.io/proxy-read-timeout: "6000" and nginx.ingress.kubernetes.io/proxy-send-timeout: "6000" are sane starting values, but may be adjusted as needed.
      2. Deploy your Docker registry using the configurations you created in the previous step:

        helm install docker-registry stable/docker-registry -f docker-configs.yaml
        
      3. Navigate to your registry’s domain and verify that your browser loads the TLS certificate.

        Verify that your Docker registry's site loads your TLS certificate

        You will interact with your registry via the Docker CLI, so you should not expect to see any content load on the page.

      Push an Image to your Docker Registry

      You are now ready to push and pull images to your Docker registry. In this section you will pull an existing image from Docker Hub and then push it to your registry. Then, in the next section, you will use your registry’s image to deploy an example static site.

      1. Use Docker to pull an image from Docker Hub. This example uses an image that was created following our Create and Deploy a Docker Container Image to a Kubernetes Cluster guide. The image builds a Hugo static site with some boilerplate content. However, you can use any image from Docker Hub that you prefer.

        sudo docker pull leslitagordita/hugo-site:v10
        
      2. Tag your local Docker image with your private registry’s hostname. This is required when pushing an image to a private registry and not the central Docker registry. Ensure that you replace registry.example.com with your own registry’s domain.

        sudo docker tag leslitagordita/hugo-site:v10 registry.example.com/leslitagordita/hugo-site:v10
        
      3. At this point, you have not yet authenticated to your private registry. You will need to log in before pushing up any images. Issue the example command, replacing registry.example.com with your own registry’s URL. Follow the prompts to enter the username and password you created in the Enable Basic Authentication section.

        sudo docker login registry.example.com
        
      4. Push the image to your registry. Ensure that you replace registry.example.com with your own registry’s domain.

        sudo docker push registry.example.com/leslitagordita/hugo-site:v10
        

        You should see a similar output when your image push is complete:

          
        The push refers to repository [registry.example.com/leslitagordita/hugo-site]
        925cbd794bd8: Pushed
        b9fee92b7ac7: Pushed
        1658c062e6a8: Pushed
        21acf2dde3fe: Pushed
        588c407f9029: Pushed
        bcf2f368fe23: Pushed
        v10: digest: sha256:3db7ab6bc5a893375af6f7cf505bac2f4957d8a03701d7fd56853712b0900312 size: 1570
            
        

      Create a Test Deployment Using an Image from Your Docker Registry

      In this section, you will create a test deployment using the image that you pushed to your registry in the previous section. This will ensure that your cluster can authenticate to your Docker registry and pull images from it.

      1. Use Linode’s DNS Manager to create a new subdomain A record to host your static site. The example will use static.example.com. When creating your record, assign your cluster’s NodeBalancer’s external IP address as the IP address. You can find the external IP address with the following command:

        kubectl --namespace default get services -o wide -w nginx-ingress-controller
        

        The command will return a similar output. Use the value of the EXTERNAL-IP field to create your static site’s new subdomain A record.

          
        NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE     SELECTOR
        nginx-ingress-controller   LoadBalancer   10.128.169.60   192.0.2.0   80:32401/TCP,443:30830/TCP   7h51m   app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
            
        
      2. Using a text editor, create the static-site-test.yaml file with the example configurations. This file will create a deployment, service, and an ingress.

        ~/registry/static-site-test.yaml
        apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: static-site-ingress
          annotations:
            kubernetes.io/ingress.class: nginx
            nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
        spec:
          rules:
          - host: static.example.com
            http:
              paths:
              - path: /
                backend:
                  serviceName: static-site
                  servicePort: 80
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: static-site
        spec:
          type: NodePort
          ports:
          - port: 80
            targetPort: 80
          selector:
            app: static-site
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: static-site
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: static-site
          template:
            metadata:
              labels:
                app: static-site
            spec:
              containers:
              - name: static-site
                image: registry.example.com/leslitagordita/hugo-site:v10
                ports:
                - containerPort: 80
              imagePullSecrets:
              - name: regcred
              
        • In the Deployment section of the manifest, the imagePullSecrets field references the secret you created in the Grant your Cluster Access to your Docker Registry section. This secret contains the authentication credentials that your cluster’s kubelet can use to pull your private registry’s image.
        • The image field provides the image to pull from your Docker registry.
      3. Create the deployment.

        kubectl create -f static-site-test.yaml
        
      4. Open a browser and navigate to your site’s domain and view the example static site. Using our example, you would navigate to static.example.com. The example Hugo site should load.

      (Optional) Tear Down your Kubernetes Cluster

      To avoid being further billed for your Kubernetes cluster and NodeBalancer, delete your cluster using the Linode Cloud Manager. Similarly, to avoid being further billed for your registry’s Object Storage bucket, follow the steps in the cancel the Object Storage service on your account section of our How to Use Object Storage guide.

      This guide is published under a CC BY-ND 4.0 license.




      Are Private Clouds HIPAA Compliant?


      HIPAA compliant practices are in place to protect the privacy and security of protected health information (PHI). With the rise in popularity of cloud infrastructure solutions, more health providers are moving their IT infrastructure off premises. But before a move can happen, providers must ensure that their practices, the cloud provider and the solution itself follow HIPAA’s rules and guidelines. Here, we’ll explore whether private clouds meet these guidelines.

      So, are hosted private clouds a HIPAA compliant option? The short answer is, “Yes!” But that doesn’t mean all private cloud environments are ready out of the gate. For more nuance, let’s first discuss some HIPAA basics.

      HIPAA Privacy and Security Rules

      Where does third-party IT infrastructure and HIPAA compliance intersect?

      HIPAA includes numerous rules, covering privacy, security and breach notification, that establish protections around PHI which covered entities (healthcare providers, insurance providers, etc.) and business associates (those performing functions or activities for, or providing services to, a covered entity involving PHI) must follow. Cloud service providers are considered business associates.

      PHI includes any identifiable information about a patient, such as last name, first name and date of birth. And today’s electronic health record (EHR) systems store much more identifiable information, such as social security numbers, addresses and phone numbers, insurance cards and driver licenses, which can be used to identify a person or build a more complete patient profile.

      The HIPAA Privacy Rule relates to the covered entities and business associates and defines and limits when a person’s PHI may be used or disclosed.

      The HIPAA Security Rule establishes the security standards for protecting PHI stored or transferred in electronic form. This rule, in conjunction with the Privacy Rule, is critical to keep in mind as consumers research cloud providers, as covered entities must have technical and non-technical safeguards to secure PHI.

      According to the U.S. Department of Health & Human Services, the general rules around security are:

      • Ensure the confidentiality, integrity and availability of all e-PHI they create, receive, maintain or transmit;
      • Identify and protect against reasonably anticipated threats to the security or integrity of the information;
      • Protect against reasonably anticipated, impermissible uses or disclosures; and
      • Ensure compliance by their workforce.

      Compliance is a shared effort between the covered entity and the business associate. With that in mind, how do cloud providers address these rules?

      HIPAA: Private vs. Public Cloud

      A cloud can be most simply defined as remote servers providing compute and storage resources, which are available through the internet or other communication channels. Cloud resources can be consumed and billed per minute or hour or by flat monthly fees. The main difference between private and public clouds is that private cloud compute resources are fully dedicated to one client (single-tenant) while public cloud resources are shared between two or more clients (multi-tenant). Storage resources can also be single or multi-tenant in private clouds while still complying with HIPAA policies.

      HIPAA compliance can be achieved in both private and public clouds by effectively securing, monitoring and tracking access to patient data. Private clouds, however, allow more granular control and visibility into the underlying layers of the infrastructure, such as servers, switches, firewalls and storage. This extra visibility, combined with the assurance that the environment is physically isolated, is very helpful when auditing your cloud environment against HIPAA requirements.

      Customers and vendors will normally divide control and responsibility for PHI protection clearly between the parties. For example, a cloud provider may draw the line of responsibility at the physical, hypervisor or operating system layer, while the customer’s responsibility starts at the application layer.

      Other Benefits of Private Clouds for HIPAA Compliance

      As noted, HIPAA has many provisions, but keeping PHI secure from breaches and unauthorized access is the main objective. PHI commands a high price on the black market: while credit card information sells for about $5 to $10 per record, PHI sells for upwards of $300 per record.

      Private cloud providers ensure that a customer’s environment is protected from unauthorized access at breach points controlled by the provider. Breach points include physical access to the building or data center, external threats and attacks over the internet against the core infrastructure, internal threats from malicious actors, and viruses, spyware and ransomware. Private cloud providers also make sure that data is protected from accidental edits, deletions or corruption via backup and DRaaS services. These same breach points apply to on-premises (customer-owned) infrastructure, too.

      A HIPAA compliant private cloud provider will make sure that the security, technology, tools, training, policies and procedures relating to the protection of PHI are applied and followed every step of the way throughout the business association with the customer.

      What a HIPAA Compliant Cloud Supports

      Let’s take a closer look at what a HIPAA compliant private cloud needs to have in place and support.

      • BAA: A provider of a HIPAA compliant private cloud will start the relationship with a signed Business Associate Agreement (BAA). A BAA is required between customer and vendor whenever the customer plans to store or access PHI in the private cloud. If a prospective provider hesitates to sign any type of BAA, it’s probably a good idea to walk away.
      • Training: Annual HIPAA training must be provided to every staff member of the private cloud vendor.
      • Physical Security: A Tier III data center with SSAE certifications will provide the physical security and uptime guarantees for your private cloud’s basic needs such as power and cooling.
      • External Threats and Attacks: Your private cloud will need to be secured with industry best practice security measures to defend against viruses, spyware, ransomware and hacking attacks. These measures include firewalls, intrusion detection with log management, monitoring, anti-virus software, patch management and frequent backups with off-site storage, as well as disaster recovery with regular testing.
      • Internal Threats: A private cloud for PHI must be secured against internal attacks by malicious actors. Cloud vendors are required to have policies and procedures for performing background checks and regularly auditing staff members’ security profiles to make sure the proper level of access is granted, along with thorough onboarding and termination processes.
      • Data Protection and Security: A private cloud must protect your data from theft and from malicious or accidental deletion and corruption. Physical theft of data is not common in secured data centers; even so, encrypting your data at rest should be standard in today’s solutions. To protect private clouds from disasters, a well-executed backup and disaster recovery plan is required. Backups and DR plans must be tested regularly to make sure they will work when needed. I recommend testing DR twice a year and backup restores once a week.
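      Part of that weekly restore testing can be automated. As a minimal, hypothetical sketch (not part of any INAP tooling), the following Python compares SHA-256 checksums of source files against their restored copies to flag anything missing or corrupted:

```python
import hashlib
from pathlib import Path


def file_checksum(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large backup files don't load into memory at once.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_restore(source_dir: Path, restore_dir: Path) -> list[str]:
    """Compare every file under source_dir against its restored copy.

    Returns the relative paths of files that are missing from the
    restore or whose contents differ from the source.
    """
    mismatches = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restore_dir / rel
        if not restored.is_file() or file_checksum(src) != file_checksum(restored):
            mismatches.append(str(rel))
    return mismatches
```

      Run against the live data set and the freshly restored copy after each test restore; an empty result means every file came back intact.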

      Private cloud customers also have a responsibility to continue protection of PHI from the point where they take over management duties. This line of responsibility is often drawn at the application level. Customers must ensure that any application that stores and manages PHI has gone through all the necessary audits and certifications.

      Closing Thoughts

      Well-defined policies and procedures, best practice use of tools and technologies, a proper security footprint, and regular auditing and testing yield a HIPAA compliant private cloud. Doing the work up front to vet a strong partner for your private cloud, or putting in the time to review processes with your current provider, will go a long way toward meeting HIPAA compliance requirements.

      Explore INAP Private Cloud.


      Rob Lerner




