
      How to Migrate Your WordPress Website to DreamHost (In 6 Steps)

      Choosing a web host is typically one of the first decisions you’ll make when setting up your website. However, over time you may come to regret your choice of provider. If this happens, you’ll be faced with the prospect of migrating your website from one host to another.

      Fortunately, that process is more straightforward than it might first appear. This is especially true if you’re moving your WordPress site to a host that will provide a better, safer home for it — like DreamHost! What’s more, this can be done in just a few steps, thanks to our new Automated Migration plugin.

      In this article, we’ll show you how to easily migrate your site to DreamHost in six steps using our plugin. We can’t wait to show you how it works, so let’s dive right in! 

      Easily Migrate Your WordPress Site

      Leave your current hosting hassles behind and get back to focusing on what matters most. Move to DreamHost today with our free, automated migration plugin.

      Why You Might Want to Migrate Your WordPress Website

      DreamHost WordPress hosting plans.
      DreamHost hosting plans provide excellent performance and useful features.

      When we talk about migrating your website, we mean moving it from one web host to another. Typically, one of the first things you’ll do when creating a new site is to sign up with your chosen hosting provider. 

      Your host will then take care of getting your site online, so people are actually able to visit it.
      However, if you decide you want to move your site to a different host after it’s been up and running for a while, you’ll need to migrate all of your files and database information. 

      There are many reasons why you might want to migrate your WordPress site to a different host, including:

      • Your website has grown over time, and you need more options, functionality, or space to keep up with it
      • Your current host doesn’t provide the security features or support options you’re looking for
      • Your site is experiencing poor performance or a lot of downtime

      If you feel it’s time for a change, whether due to the above reasons or something else entirely, your first step will be to find a new host that’s a better fit. You should look for one providing excellent performance and uptime, plenty of flexibility and “scalable” options, and top-notch security and support.

      Here at DreamHost, we offer hosting plans that meet all of these criteria and more. You can choose from affordable shared hosting packages, or opt for DreamPress — a fully-managed solution. Either way, you’ll benefit from great performance, lots of useful, WordPress-specific features, and expert support.

      Best of all, migrating your site to DreamHost is a simple process when you use our Automated Migration plugin. 

      Introducing DreamHost Automated Migration

      “The DreamHost Automated Migration plugin.”

      If you’ve decided to migrate your WordPress website, you have several options. As we mentioned above, you can often have your new host take care of the process for you. Advanced users may be interested in manual migration options, such as using WP-CLI. However, in many cases, the simplest option is to use a plugin.

      That’s why we worked with the team over at BlogVault to create a tool specifically designed to help you migrate your site to DreamHost. With DreamHost Automated Migration, not only do you get the ease of using a plugin, but you also know that your migration will be tailored to our hosting services. This also means you won’t have to take the extra step of cloning your website before you move it. 

      Key Features:

      • Enables you to move or migrate your WordPress site to DreamHost.
      • Lets you copy your files and database over to your new hosting account without cloning your site separately.
      • Only requires six simple steps, from installation to migration.

      This really is the simplest way to migrate your website and know that it will fit right into your DreamHost account. 

      Pricing: The DreamHost Automated Migration plugin is available for free to DreamHost account holders. This means that if you’ve recently switched to DreamHost or are considering it, your migration process just got easier and a whole lot more budget-friendly!

      How to Migrate Your WordPress Website to DreamHost (In 6 Steps)

      Now that we’ve covered the basics, it’s time to discuss how to actually perform your site migration. The six steps below will help you move your site from your current hosting provider to your new DreamHost plan.

      Step 1: Prepare for the Migration Process

      Migrating your site to DreamHost (or any web host) is a fairly simple process. However, there are a few tasks you’ll want to take care of before proceeding:

      1. Make sure you’ve updated your WordPress installation to the most recent version. 
      2. Check your themes and plugins to ensure that they’ve all been updated or deleted if no longer in use. 
      3. Choose and purchase a DreamHost plan, or add hosting to your existing DreamHost domain.

      After you have all of the above items in place, you’re ready to get started! 

      Step 2: Locate Your Migration Key in Your DreamHost Account

      Once you’ve established your account, log into your user panel and navigate to the Free Migration tab directly from your Home page.

      “Accessing free migration tools in your DreamHost user panel.”

      There, you’ll see where you can click on Generate Migration Key. You’ll need this to complete the migration process. Once you select it, your key will appear below the button.

      “Where to locate your migration key in a shared plan user panel.”

      You’ll need this information for the next step, so it’s best to leave your browser tab open for easy access. It’s important to note that the migration key generator is not available on Virtual Private Servers (VPSs) or dedicated hosting plans. On those plans, you can request a key by contacting our support team, or you can enter your host details manually.

      If you have a shared plan where you host more than one website, you may not see the generator in your account. If that’s the case, you’ll need to contact our support team to request your migration key or any additional keys you may need. 

      Locating Your Migration Key in DreamPress Accounts

      It’s important to note that for DreamPress accounts, this process is the same, but your migration option will be located in a different place. In your user panel, you’ll go to WordPress > Managed WordPress in the left-hand menu. You’ll then find a Migration tab among your options.

      “Locating the migration options in a DreamPress account.”

      Next, you’ll see the Generate Migration Key button. Click on this, and you’ll receive your unique key to be used during the migration process. This is the key our plugin needs to access and move your site’s files and database. 

      It’s recommended that you leave this page of your user panel open, and head over to your WordPress website by opening another browser tab. 

      Step 3: Install the Free DreamHost Automated Migration Plugin

      In your WordPress website dashboard, navigate to Plugins > Add New and use the Search field to find the DreamHost Automated Migration plugin.

      “The DreamHost Automated Migration plugin for WordPress.”

      Click on Install Now, and then Activate the plugin once the installation is complete. 

      Step 4: Use Your Migration Key to Start Your Migration

      You’ll recall that we recommended leaving your DreamHost user panel open in another tab. Now you’ll go back to that tab, copy your migration key, and head back to WordPress.

      You’ll see a DreamHost option in your left-hand menu. Click on that, and you’ll be able to add your migration key.

      “The Automated Migration plugin interface.”

      Paste the migration key from your DreamHost account into the Migration Token field. You’ll need to agree to our partner BlogVault’s Terms of Service as well. Once you check that box, click on Migrate.

      Step 5: Track the Progress of Your Migration

      Next, you can either wait for an email from the DreamHost team or watch the progress of the migration in your WordPress dashboard. This will let you know when your migration is complete.

      “Monitoring the migration process.”

      Additionally, if there are any issues with your migration, you’ll receive the relevant information on this screen.

      Step 6: Update Your DNS Records

      Once you receive notice that your site has been successfully migrated, you should review it within your DreamHost account. If your domain is the same at DreamHost and your old host, you can review this by adding “” to the end of your DreamHost domain. If you’ll be using a different domain name at DreamHost or are using a temporary subdomain, visit our knowledge base for additional steps.

      You’ll also want to make sure that your domain is pointing to your newly-migrated website. This means updating your Domain Name System (DNS) information. 

      You can find DreamHost’s DNS address by going to Domains in your user panel and clicking on DNS underneath your domain.

      “Finding DNS information in the DreamHost user panel.”

      Providing this DNS information to your existing domain registrar will ensure that your domain is pointed at the correct web content, now hosted with us at DreamHost.

      That’s it! Your migrated site should now be up and running. It’s a smart idea to test your new site thoroughly and make sure everything has been transferred correctly. Then you can delete your old site, and enjoy your new quality hosting with DreamHost.

      Make Your Move

      Whether you need help migrating a website, installing WordPress, or vetting web hosting plans, we can help! Subscribe to our monthly digest so you never miss an article.

      It’s Simple to Switch Web Hosts

      Migrating your website to a new host might be a bit of a hassle, but it can be well worth it in the long run. If your current host isn’t up to par, you’ll want to switch to one that provides the performance, stability, and security your site needs to thrive. Plus, WordPress users will find that the process can be handled easily using our own Automated Migration plugin.


      Recommended Steps to Secure a DigitalOcean Kubernetes Cluster

      The author selected Open Sourcing Mental Illness to receive a donation as part of the Write for DOnations program.


      Kubernetes, the open-source container orchestration platform, is steadily becoming the preferred solution for automating, scaling, and managing high-availability clusters. As a result of its increasing popularity, Kubernetes security has become more and more relevant.

      Considering the moving parts involved in Kubernetes and the variety of deployment scenarios, securing Kubernetes can sometimes be complex. Because of this, the objective of this article is to provide a solid security foundation for a DigitalOcean Kubernetes (DOKS) cluster. Note that this tutorial covers basic security measures for Kubernetes, and is meant to be a starting point rather than an exhaustive guide. For additional steps, see the official Kubernetes documentation.

      In this guide, you will take basic steps to secure your DigitalOcean Kubernetes cluster. You will configure secure local authentication with TLS/SSL certificates, grant permissions to local users with Role-based access controls (RBAC), grant permissions to Kubernetes applications and deployments with service accounts, and set up resource limits with the ResourceQuota and LimitRange admission controllers.


      In order to complete this tutorial you will need:

      • A DigitalOcean Kubernetes (DOKS) managed cluster with 3 Standard nodes configured with at least 2 GB RAM and 1 vCPU each. For detailed instructions on how to create a DOKS cluster, read our Kubernetes Quickstart guide. This tutorial uses DOKS version 1.16.2-do.1.
      • A local client configured to manage the DOKS cluster, with a cluster configuration file downloaded from the DigitalOcean Control Panel and saved as ~/.kube/config. For detailed instructions on how to configure remote DOKS management, read our guide How to Connect to a DigitalOcean Kubernetes Cluster. In particular, you will need:
        • The kubectl command-line interface installed on your local machine. You can read more about installing and configuring kubectl in its official documentation. This tutorial will use kubectl version 1.17.0-00.
        • The official DigitalOcean command-line tool, doctl. For instructions on how to install this, see the doctl GitHub page. This tutorial will use doctl version 1.36.0.

      Step 1 — Enabling Remote User Authentication

      After completing the prerequisites, you will end up with one Kubernetes superuser that authenticates through a predefined DigitalOcean bearer token. However, sharing those credentials is not a good security practice, since this account can cause large-scale and possibly destructive changes to your cluster. To mitigate this possibility, you can set up additional users to be authenticated from their respective local clients.

      In this section, you will authenticate new users to the remote DOKS cluster from local clients using secure SSL/TLS certificates. This will be a three-step process: First, you will create Certificate Signing Requests (CSR) for each user, then you will approve those certificates directly in the cluster through kubectl. Finally, you will build each user a kubeconfig file with the appropriate certificates. For more information regarding additional authentication methods supported by Kubernetes, refer to the Kubernetes authentication documentation.

      Creating Certificate Signing Requests for New Users

      Before starting, check the DOKS cluster connection from the local machine configured during the prerequisites:

      • kubectl cluster-info

      Depending on your configuration, the output will be similar to this one:


      Kubernetes master is running at ...
      CoreDNS is running at ...

      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

      This means that you are connected to the DOKS cluster.

      Next, create a local folder for the client’s certificates. For the purpose of this guide, ~/certs will be used to store all certificates:

      • mkdir ~/certs

      In this tutorial, we will authorize a new user called sammy to access the cluster. Feel free to change this to a user of your choice. Using the SSL and TLS library OpenSSL, generate a new private key for your user using the following command:

      • openssl genrsa -out ~/certs/sammy.key 4096

      The -out flag will make the output file ~/certs/sammy.key, and 4096 sets the key as 4096-bit. For more information on OpenSSL, see our OpenSSL Essentials guide.

      Now, create a certificate signing request configuration file. Open the following file with a text editor (for this tutorial, we will use nano):

      • nano ~/certs/sammy.csr.cnf

      Add the following content into the sammy.csr.cnf file to specify in the subject the desired username as common name (CN), and the group as organization (O):


      [ req ]
      default_bits = 2048
      prompt = no
      default_md = sha256
      distinguished_name = dn
      [ dn ]
      CN = sammy
      O = developers
      [ v3_ext ]
      extendedKeyUsage=serverAuth,clientAuth

      The certificate signing request configuration file contains the user’s identity and the proper usage parameters for the certificate. The last line, extendedKeyUsage=serverAuth,clientAuth, will allow users to authenticate their local clients with the DOKS cluster using the certificate once it’s signed.

      Next, create the sammy certificate signing request:

      • openssl req -config ~/certs/sammy.csr.cnf -new -key ~/certs/sammy.key -nodes -out ~/certs/sammy.csr

      The -config lets you specify the configuration file for the CSR, and -new signals that you are creating a new CSR for the key specified by -key.

      You can check your certificate signing request by running the following command:

      • openssl req -in ~/certs/sammy.csr -noout -text

      Here you pass in the CSR with -in and use -text to print out the certificate request in text.

      The output will show the certificate request, the beginning of which will look like this:


      Certificate Request:
          Data:
              Version: 1 (0x0)
              Subject: CN = sammy, O = developers
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      RSA Public-Key: (4096 bit)
      ...

      Repeat the same procedure to create CSRs for any additional users. Once you have all certificate signing requests saved in the administrator’s ~/certs folder, proceed with the next step to approve them.
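If you have several users to onboard, the two openssl commands above can be wrapped in a small loop. This is a hedged sketch: the usernames jamie and alex are illustrative assumptions, and the -subj flag stands in for a per-user .csr.cnf file (the certificate usages are declared later in the Kubernetes CertificateSigningRequest object anyway):

```shell
# Generate a private key and CSR for each additional user.
# The usernames here are illustrative assumptions.
mkdir -p "$HOME/certs"
for user in jamie alex; do
  openssl genrsa -out "$HOME/certs/${user}.key" 4096
  # -subj sets CN (username) and O (group) directly,
  # standing in for a per-user .csr.cnf configuration file
  openssl req -new -key "$HOME/certs/${user}.key" \
    -subj "/CN=${user}/O=developers" \
    -out "$HOME/certs/${user}.csr"
done
```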

      Managing Certificate Signing Requests with the Kubernetes API

      You can either approve or deny TLS certificate requests sent to the Kubernetes API by using the kubectl command-line tool. This gives you the ability to ensure that the requested access is appropriate for the given user. In this section, you will send the certificate request for sammy and approve it.

      To send a CSR to the DOKS cluster use the following command:

      cat <<EOF | kubectl apply -f -
      apiVersion: certificates.k8s.io/v1beta1
      kind: CertificateSigningRequest
      metadata:
        name: sammy-authentication
      spec:
        groups:
        - system:authenticated
        request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n')
        usages:
        - digital signature
        - key encipherment
        - server auth
        - client auth
      EOF

      Using a Bash here document, this command uses cat to pass the certificate request to kubectl apply.

      Let’s take a closer look at the certificate request:

      • name: sammy-authentication creates a metadata identifier, in this case called sammy-authentication.
      • request: $(cat ~/certs/sammy.csr | base64 | tr -d '\n') sends the sammy.csr certificate signing request to the cluster encoded as Base64, with newlines stripped so the encoded value fits on a single line.
      • server auth and client auth specify the intended usage of the certificate. In this case, the purpose is user authentication.

      The output will look similar to this:

      Output
      certificatesigningrequest.certificates.k8s.io/sammy-authentication created

      You can check the certificate signing request status using the command:

      • kubectl get csr

      Depending on your cluster configuration, the output will be similar to this:


      NAME                   AGE   REQUESTOR       CONDITION
      sammy-authentication   37s   your_DO_email   Pending

      Next, approve the CSR by using the command:

      • kubectl certificate approve sammy-authentication

      You will get a message confirming the operation:

      Output
      certificatesigningrequest.certificates.k8s.io/sammy-authentication approved

      Note: As an administrator you can also deny a CSR by using the command kubectl certificate deny sammy-authentication. For more information about managing TLS certificates, please read Kubernetes official documentation.

      Now that the CSR is approved, you can download it to the local machine by running:

      • kubectl get csr sammy-authentication -o jsonpath='{.status.certificate}' | base64 --decode > ~/certs/sammy.crt

      This command decodes the Base64 certificate for proper usage by kubectl, then saves it as ~/certs/sammy.crt.

      With the sammy signed certificate in hand, you can now build the user’s kubeconfig file.

      Building the Remote User’s Kubeconfig

      Next, you will create a specific kubeconfig file for the sammy user. This will give you more control over the user’s access to your cluster.

      The first step in building a new kubeconfig is making a copy of the current kubeconfig file. For the purpose of this guide, the new kubeconfig file will be called config-sammy:

      • cp ~/.kube/config ~/.kube/config-sammy

      Next, edit the new file:

      • nano ~/.kube/config-sammy

      Keep the first eight lines of this file, as they contain the necessary information for the SSL/TLS connection with the cluster. Then, starting from the user parameter, replace the rest of the text so that the file looks similar to the following:


      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: certificate_data
        name: do-nyc1-do-cluster
      contexts:
      - context:
          cluster: do-nyc1-do-cluster
          user: sammy
        name: do-nyc1-do-cluster
      current-context: do-nyc1-do-cluster
      kind: Config
      preferences: {}
      users:
      - name: sammy
        user:
          client-certificate: /home/your_local_user/certs/sammy.crt
          client-key: /home/your_local_user/certs/sammy.key

      Note: For both client-certificate and client-key, use the absolute path to their corresponding certificate location. Otherwise, kubectl will produce an error.

      Save and exit the file.

      You can test the new user connection using kubectl cluster-info:

      • kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy cluster-info

      You will see an error similar to this:


      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
      Error from server (Forbidden): services is forbidden: User "sammy" cannot list resource "services" in API group "" in the namespace "kube-system"

      This error is expected because the user sammy has no authorization to list any resource on the cluster yet. Granting authorization to users will be covered in the next step. For now, the output is confirming that the SSL/TLS connection was successful and the sammy authentication credentials were accepted by the Kubernetes API.

      Step 2 — Authorizing Users Through Role Based Access Control (RBAC)

      Once a user is authenticated, the API determines their permissions using Kubernetes’ built-in Role Based Access Control (RBAC) model. RBAC is an effective method of restricting user rights based on the role assigned to each user. From a security point of view, RBAC allows setting fine-grained permissions to prevent users from accessing sensitive data or executing superuser-level commands. For more detailed information regarding user roles, refer to the Kubernetes RBAC documentation.

      In this step, you will use kubectl to assign the predefined role edit to the user sammy in the default namespace. In a production environment, you may want to use custom roles and/or custom role bindings.
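For reference, a custom role is just another Kubernetes object. The following is a minimal sketch of a namespaced Role that only allows reading pods; the name pod-reader and its rules are illustrative assumptions, not something this tutorial creates:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader    # illustrative name, not used elsewhere in this tutorial
rules:
- apiGroups: [""]     # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```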

      Granting Permissions

      In Kubernetes, granting permissions means assigning the desired role to a user. Assign edit permissions to the user sammy in the default namespace using the following command:

      • kubectl create rolebinding sammy-edit-role --clusterrole=edit --user=sammy --namespace=default

      This will give output similar to the following:

      Output
      rolebinding.rbac.authorization.k8s.io/sammy-edit-role created

      Let’s analyze this command in more detail:

      • create rolebinding sammy-edit-role creates a new role binding, in this case called sammy-edit-role.
      • --clusterrole=edit assigns the predefined role edit at a global scope (cluster role).
      • --user=sammy specifies what user to bind the role to.
      • --namespace=default grants the user role permissions within the specified namespace, in this case default.
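The imperative command above generates a RoleBinding object for you. If you prefer to keep access rules in version control, the equivalent manifest would look roughly like this sketch (the field values mirror the command; you would apply it with kubectl apply -f instead of running kubectl create rolebinding):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sammy-edit-role
  namespace: default        # the binding only applies within this namespace
subjects:
- kind: User
  name: sammy
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole         # "edit" is a predefined cluster-wide role
  name: edit
  apiGroup: rbac.authorization.k8s.io
```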

      Next, verify the user’s permissions by asking the API whether sammy can read pods in the default namespace:

      • kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy auth can-i get pods

      You will get the following output:

      Output
      yes
      Now that you have assigned permissions to sammy, you can practice revoking those permissions in the next section.

      Revoking Permissions

      Revoking permissions in Kubernetes is done by removing the user role binding.

      For this tutorial, delete the edit role from the user sammy by running the following command:

      • kubectl delete rolebinding sammy-edit-role

      You will get the following output:

      Output
      rolebinding.rbac.authorization.k8s.io "sammy-edit-role" deleted

      Verify if user permissions were revoked as expected by listing the default namespace pods:

      • kubectl --kubeconfig=/home/your_local_user/.kube/config-sammy --namespace=default get pods

      You will receive the following error:


      Error from server (Forbidden): pods is forbidden: User "sammy" cannot list resource "pods" in API group "" in the namespace "default"

      This shows that the authorization has been revoked.

      From a security standpoint, the Kubernetes authorization model gives cluster administrators the flexibility to change users’ rights on demand as required. Moreover, role-based access control is not limited to physical users; you can also grant and remove permissions for cluster services, as you will learn in the next section.

      For more information about RBAC authorization and how to create custom roles, please read the official documentation.

      Step 3 — Managing Application Permissions with Service Accounts

      As mentioned in the previous section, RBAC authorization mechanisms extend beyond human users. Non-human cluster users, such as applications, services, and processes running inside pods, authenticate with the API server using what Kubernetes calls service accounts (SAs). When a pod is created within a namespace, you can either let it use the default service account or define a service account of your choice. The ability to assign individual SAs to applications and processes gives administrators the freedom of granting or revoking permissions as required. Moreover, assigning specific SAs to production-critical applications is considered a best security practice. Since service accounts are used for authentication, and thus for RBAC authorization checks, cluster administrators can contain security threats by changing a service account’s access rights and isolating the offending process.

      To demonstrate service accounts, this tutorial will use an Nginx web server as a sample application.

      Before assigning a particular SA to your application, you need to create the SA. Create a new service account called nginx-sa in the default namespace:

      • kubectl create sa nginx-sa

      You will get:


      serviceaccount/nginx-sa created

      Verify that the service account was created by running the following:

      • kubectl get sa

      This will give you a list of your service accounts:


      NAME       SECRETS   AGE
      default    1         22h
      nginx-sa   1         80s

      Now you will assign a role to the nginx-sa service account. For this example, grant nginx-sa the same permissions as the sammy user:

      • kubectl create rolebinding nginx-sa-edit \
        --clusterrole=edit \
        --serviceaccount=default:nginx-sa \
        --namespace=default

      Running this will yield the following:

      Output
      rolebinding.rbac.authorization.k8s.io/nginx-sa-edit created

      This command uses the same format as for the user sammy, except for the --serviceaccount=default:nginx-sa flag, where you assign the nginx-sa service account in the default namespace.

      Check that the role binding was successful using this command:

      • kubectl get rolebinding

      This will give the following output:


      NAME            AGE
      nginx-sa-edit   23s

      Once you’ve confirmed that the role binding for the service account was successfully configured, you can assign the service account to an application. Assigning a particular service account to an application will allow you to manage its access rights in real-time and therefore enhance cluster security.

      For the purpose of this tutorial, an nginx pod will serve as the sample application. Create the new pod and specify the nginx-sa service account with the following command:

      • kubectl run nginx --image=nginx --port 80 --serviceaccount="nginx-sa"

      The first portion of the command creates a new deployment running an nginx web server on port 80, and the last portion, --serviceaccount="nginx-sa", indicates that its pod should use the nginx-sa service account and not the default SA.

      This will give you output similar to the following:


      deployment.apps/nginx created
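Note that the --serviceaccount flag has since been removed from kubectl run in newer kubectl releases. As an alternative that does not depend on that flag, you can declare the service account in a plain Pod manifest via serviceAccountName — a sketch under the assumption that a single pod (rather than a deployment) is sufficient:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  serviceAccountName: nginx-sa   # use the custom SA instead of "default"
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```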

      Verify that the new application is using the service account by using kubectl describe:

      • kubectl describe deployment nginx

      This will output a lengthy description of the deployment parameters. Under the Pod Template section, you will see output similar to this:


      ...
      Pod Template:
        Labels:           run=nginx
        Service Account:  nginx-sa
      ...

      In this section, you created the nginx-sa service account in the default namespace and assigned it to the nginx webserver. Now you can control nginx permissions in real-time by changing its role as needed. You can also group applications by assigning the same service account to each one and then make bulk changes to permissions. Finally, you could isolate critical applications by assigning them a unique SA.

      Summing up, the idea behind assigning roles to your applications/deployments is to fine-tune permissions. In real-world production environments, you may have several deployments requiring different permissions ranging from read-only to full administrative privileges. Using RBAC brings you the flexibility to restrict the access to the cluster as needed.

      Next, you will set up admission controllers to control resources and safeguard against resource starvation attacks.

      Step 4 — Setting Up Admission Controllers

      Kubernetes admission controllers are optional plug-ins that are compiled into the kube-apiserver binary to broaden security options. Admission controllers intercept requests after they pass the authentication and authorization phase. Once the request is intercepted, admission controllers execute the specified code just before the request is applied.

      While the outcome of either an authentication or authorization check is a boolean that allows or denies the request, admission controllers can be much more diverse. Admission controllers can validate requests in the same manner as authentication, but can also mutate or change the requests and modify objects before they are admitted.

      In this step, you will use the ResourceQuota and LimitRange admission controllers to protect your cluster from requests that could contribute to resource starvation or a Denial-of-Service attack. The ResourceQuota admission controller allows administrators to restrict computing resources, storage resources, and the quantity of any object within a namespace, while the LimitRange admission controller limits the amount of resources used by individual containers. Using these two admission controllers together will protect your cluster from attacks that render your resources unavailable.

      To demonstrate how ResourceQuota works, you will implement a few restrictions in the default namespace. Start by creating a new ResourceQuota object file:

      • nano resource-quota-default.yaml

      Add in the following object definition to set constraints for resource consumption in the default namespace. You can adjust the values as needed depending on your nodes’ physical resources:


      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: resource-quota-default
      spec:
        hard:
          pods: "2"
          requests.cpu: "500m"
          requests.memory: 1Gi
          limits.cpu: "1000m"
          limits.memory: 2Gi
          configmaps: "5"
          persistentvolumeclaims: "2"
          replicationcontrollers: "10"
          secrets: "3"
          services: "4"
          services.loadbalancers: "2"
      This definition uses the hard keyword to set hard constraints, such as the maximum number of pods, configmaps, PersistentVolumeClaims, ReplicationControllers, secrets, services, and load balancers. It also sets constraints on compute resources, like:

      • requests.cpu, which sets the maximum CPU value of requests in milliCPU, or one thousandth of a CPU core.
      • requests.memory, which sets the maximum memory value of requests in bytes.
      • limits.cpu, which sets the maximum CPU value of limits in milliCPUs.
      • limits.memory, which sets the maximum memory value of limits in bytes.

      Save and exit the file.

      Now, create the object in the namespace by running the following command:

      • kubectl create -f resource-quota-default.yaml --namespace=default

      This will yield the following:


      resourcequota/resource-quota-default created

      Notice that you are using the -f flag to indicate to Kubernetes the location of the ResourceQuota file and the --namespace flag to specify which namespace will be updated.

      Once the object has been created, your ResourceQuota will be active. You can check the default namespace quotas with describe quota:

      • kubectl describe quota --namespace=default

      The output will look similar to this, with the hard limits you set in the resource-quota-default.yaml file:


      Name:                    resource-quota-default
      Namespace:               default
      Resource                 Used  Hard
      --------                 ----  ----
      configmaps               0     5
      limits.cpu               0     1
      limits.memory            0     2Gi
      persistentvolumeclaims   0     2
      pods                     1     2
      replicationcontrollers   0     10
      requests.cpu             0     500m
      requests.memory          0     1Gi
      secrets                  2     3
      services                 1     4
      services.loadbalancers   0     2

      ResourceQuotas are expressed in absolute units, so adding additional nodes will not automatically increase the values defined here. If more nodes are added, you will need to manually edit the values to keep them proportionate to the available resources. ResourceQuotas can be modified as often as you need, and they remain in effect for the life of the namespace.
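      To see how these hard limits apply in practice, consider a hypothetical pod whose requests and limits fit within the quota above (the pod name, image, and values here are illustrative, not part of the tutorial's manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo        # illustrative name
  namespace: default
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m         # counts toward requests.cpu (quota: 500m)
        memory: 512Mi     # counts toward requests.memory (quota: 1Gi)
      limits:
        cpu: 500m         # counts toward limits.cpu (quota: 1000m)
        memory: 1Gi       # counts toward limits.memory (quota: 2Gi)
```

      A second identical pod would exactly exhaust these quotas, and any further pod requesting resources in the namespace would be rejected by the admission controller at creation time.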

      If you need to modify a particular ResourceQuota, update the corresponding .yaml file and apply the changes using the following command:

      • kubectl apply -f resource-quota-default.yaml --namespace=default

      For more information regarding the ResourceQuota Admission Controller, refer to the official documentation.

      Now that your ResourceQuota is set up, you will move on to configuring the LimitRange Admission Controller. Similar to how the ResourceQuota enforces limits on namespaces, the LimitRange enforces limits on individual containers by validating and mutating their resource declarations.

      In a similar way to before, start by creating the object file:

      • nano limit-range-default.yaml

      Now, you can use the LimitRange object to restrict resource usage as needed. Add the following content as an example of a typical use case:


      apiVersion: v1
      kind: LimitRange
      metadata:
        name: limit-range-default
      spec:
        limits:
        - max:
            cpu: "400m"
            memory: "1Gi"
          min:
            cpu: "100m"
            memory: "100Mi"
          default:
            cpu: "250m"
            memory: "800Mi"
          defaultRequest:
            cpu: "150m"
            memory: "256Mi"
          type: Container

      The sample values used in limit-range-default.yaml restrict container memory to a maximum of 1Gi and limit CPU usage to a maximum of 400m, a Kubernetes metric equivalent to 400 milliCPU, meaning each container is limited to just under half of a CPU core.

      Next, deploy the object to the API server using the following command:

      • kubectl create -f limit-range-default.yaml --namespace=default

      This will give the following output:


      limitrange/limit-range-default created

      Now you can check the new limits with the following command:

      • kubectl describe limits --namespace=default

      Your output will look similar to this:


      Name:       limit-range-default
      Namespace:  default
      Type        Resource  Min    Max   Default Request  Default Limit  Max Limit/Request Ratio
      ----        --------  ---    ---   ---------------  -------------  -----------------------
      Container   cpu       100m   400m  150m             250m           -
      Container   memory    100Mi  1Gi   256Mi            800Mi          -

      To see LimitRanger in action, deploy a standard nginx container with the following command:

      • kubectl run nginx --image=nginx --port=80 --restart=Never

      This will give the following output:


      pod/nginx created

      Check how the admission controller mutated the container by running the following command:

      • kubectl get pod nginx -o yaml

      This will give many lines of output. Look in the container specification section to find the resource limits specified in the LimitRange Admission Controller:


      ...
      spec:
        containers:
        - image: nginx
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
          - containerPort: 80
            protocol: TCP
          resources:
            limits:
              cpu: 250m
              memory: 800Mi
            requests:
              cpu: 150m
              memory: 256Mi
      ...

      This would be the same as if you manually declared the resources and requests in the container specification.
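      For comparison, declaring those same values manually in the pod manifest would look like the following sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: 150m       # same as the LimitRange defaultRequest
        memory: 256Mi
      limits:
        cpu: 250m       # same as the LimitRange default
        memory: 800Mi
```

      With the LimitRange in place, these values are injected automatically whenever a container omits them, so there is no need to declare them by hand.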

      In this step, you used the ResourceQuota and LimitRange admission controllers to protect against malicious attacks on your cluster’s resources. For more information about the LimitRange admission controller, read the official documentation.


      Throughout this guide, you configured a basic Kubernetes security template. This established user authentication and authorization, application privileges, and cluster resource protection. By combining all the suggestions covered in this article, you will have a solid foundation for a production Kubernetes cluster deployment. From there, you can start hardening individual aspects of your cluster depending on your scenario.

      If you would like to learn more about Kubernetes, check out our Kubernetes resource page, or follow our Kubernetes for Full-Stack Developers self-guided course.


      Recommended Steps To Harden Apache HTTP on FreeBSD 12.0

      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.


      Although the default installation of an Apache HTTP server is reasonably secure, its configuration can be substantially improved with a few modifications. You can complement the security mechanisms already present, for example, by setting protections around cookies and headers so connections can’t be tampered with at the user’s client level. By doing this you can dramatically reduce the possibilities of several attack methods, like Cross-Site Scripting attacks (also known as XSS). You can also prevent other types of attacks, such as Cross-Site Request Forgery, session hijacking, and Denial of Service attacks.

      In this tutorial you’ll implement some recommended steps to reduce how much information your server exposes. You will verify the directory listings and disable indexing to control access to resources. You’ll also change the default value of the Timeout directive to help mitigate Denial of Service attacks. Furthermore, you’ll disable the TRACE method so sessions can’t be traced and hijacked. Finally, you’ll secure headers and cookies.

      Most of the configuration settings will be applied to the Apache HTTP main configuration file found at /usr/local/etc/apache24/httpd.conf.


      Before you begin this guide you’ll need the following:

      With the prerequisites in place, you have a FreeBSD system running a stack capable of serving web content written in PHP, such as major CMS software. Furthermore, you’ve secured connections through Let’s Encrypt.

      Reducing Server Information

      The operating system banner is a method used by computers, servers, and devices of all kinds to identify themselves on a network. Malicious actors can use this information to select exploits targeting the relevant systems. In this section you’ll reduce the amount of information published by this banner.

      Sets of directives control how this information is displayed. For this purpose the ServerTokens directive is important; by default it displays all details about the operating system and compiled modules to the client that’s connecting to it.

      You’ll use a network scanning tool to check what information is currently revealed prior to applying any changes. To install nmap, run the following command:

      • sudo pkg install nmap

      To get your server’s IP address, you can run the following command:

      • ifconfig vtnet0 | awk '/inet / {print $2}'

      You can check the web server response by using the following command:

      • nmap -sV -p 80 your-server-ip

      You invoke nmap to perform a service and version detection scan (the -sV flag) on port 80 (the -p flag) against the given IP or domain.

      You’ll receive information about your web server, similar to the following:


      Starting Nmap 7.80 ( ) at 2020-01-22 00:30 CET
      Nmap scan report for
      Host is up (0.054s latency).

      PORT   STATE SERVICE VERSION
      80/tcp open  http    Apache httpd 2.4.41 ((FreeBSD) OpenSSL/1.1.1d-freebsd)

      Service detection performed. Please report any incorrect results at .
      Nmap done: 1 IP address (1 host up) scanned in 7.59 seconds

      This output shows that information such as the operating system, the Apache HTTP version, and OpenSSL are visible. This can be useful for attackers to gain information about the server and choose the right tools to exploit, for example, a vulnerability in the software running on the server.

      The ServerTokens directive doesn’t appear in the main configuration file by default, and without it Apache HTTP displays the full information about the server, as the documentation states. To limit the information revealed about your server and configuration, you’ll place the ServerTokens directive inside the main configuration file.

      You’ll place this directive following the ServerName entry in the configuration file. Run the following command to find the directive:

      • grep -n 'ServerName' /usr/local/etc/apache24/httpd.conf

      You’ll find the line number that you can then search with vi:


      226 #ServerName

      Run the following command:

      • sudo vi +226 /usr/local/etc/apache24/httpd.conf

      Add the following highlighted line:


      . . .
      ServerTokens Prod

      Save and exit the file with :wq and ENTER.

      Setting the ServerTokens directive to Prod will make it only display that this is an Apache web server.
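      For reference, ServerTokens accepts several values, each revealing progressively less detail in the Server response header. The sketch below summarizes them (the version numbers shown in the comments are illustrative):

```apache
# ServerTokens Full     -> Server: Apache/2.4.41 (FreeBSD) OpenSSL/1.1.1d
# ServerTokens OS       -> Server: Apache/2.4.41 (FreeBSD)
# ServerTokens Minimal  -> Server: Apache/2.4.41
# ServerTokens Major    -> Server: Apache/2
# ServerTokens Prod     -> Server: Apache
ServerTokens Prod
```

      Prod is the least verbose option and is generally the appropriate choice for production servers.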

      For this to take effect, restart the Apache HTTP server:

      • sudo apachectl restart

      To test the changes, run the following command:

      • nmap -sV -p 80 your-server-ip

      You’ll see similar output to the following with more minimal information on your Apache web server:


      Starting Nmap 7.80 ( ) at 2020-01-22 00:58 CET
      Nmap scan report for WPressBSD ( )
      Host is up (0.056s latency).

      PORT   STATE SERVICE VERSION
      80/tcp open  http    Apache httpd

      Service detection performed. Please report any incorrect results at .
      Nmap done: 1 IP address (1 host up) scanned in 7.59 seconds

      You’ve seen what information the server was announcing prior to the change and you’ve now reduced this to the minimum. With this you’re providing fewer clues about your server to an external actor. In the next step you’ll manage the directory listings for your web server.

      Managing Directory Listings

      In this step you’ll ensure the directory listing is correctly configured, so the right parts of the system are publicly available as intended, while the remainder are protected.

      Note: When an argument is declared without a prefix it is active, but a + prefix can visually reinforce that it is in fact enabled. An argument prefixed with a minus sign - is denied, for example, Options -Indexes.

      Prefixed (+/-) and unprefixed arguments can not be mixed in the same Options directive; Apache HTTP considers this invalid syntax and may refuse to start.

      Adding the statement Options -Indexes stops Apache HTTP from automatically indexing (that is, listing) the content inside the data path /usr/local/www/apache24/data when no .html file exists, so nothing is shown if a URL maps to this directory. This also applies to virtual host configurations, such as the one used in the prerequisite tutorial for the Let’s Encrypt certificate.

      You will set the Options directive with the -Indexes argument and the +FollowSymLinks argument, which allows symbolic links to be followed. You’ll use the + symbol in order to comply with Apache HTTP’s conventions.

      Run the following command to find the line to edit in the configuration file:

      • grep -n 'Options Indexes FollowSymLinks' /usr/local/etc/apache24/httpd.conf

      You’ll see output similar to the following:


      263:            Options Indexes FollowSymLinks

      Run this command to directly access the line for editing:

      • sudo vi +263 /usr/local/etc/apache24/httpd.conf

      Now edit the line as per the configuration:


      . . .
      Options -Indexes +FollowSymLinks
      . . .

      Save and exit the file with :wq and ENTER.

      Restart Apache HTTP to implement these changes:

      • sudo apachectl restart

      At your domain in the browser, you’ll see a forbidden access message, also known as the 403 error. This is due to the changes you’ve applied. Placing -Indexes into the Options directive has disabled the auto-index capability of Apache HTTP and therefore there’s no index.html file inside the data path.

      You can solve this by placing an index.html file inside the VirtualHost you enabled in the prerequisite tutorial for the Let’s Encrypt certificate. You’ll use Apache HTTP’s default index.html and place it in the same folder as the DocumentRoot that you declared in the virtual host.


      <VirtualHost *:80>
          DocumentRoot "/usr/local/www/apache24/data/"
          ErrorLog "/var/log/"
          CustomLog "/var/log/" common
      </VirtualHost>

      Use the following command to do this:

      • sudo cp /usr/local/www/apache24/data/index.html /usr/local/www/apache24/data/

      Now you’ll see an It works! message when visiting your domain.

      In this section you’ve restricted the Indexes option so Apache HTTP does not automatically list and display content other than what you intend. Now, if there is no index.html file inside the data path, Apache HTTP will not automatically create an index of contents. In the next step you’ll move beyond obscuring information and customize different directives.
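      If you ever do want automatic listings for one specific location, such as a public downloads folder, you can re-enable them in a scoped Directory block rather than globally. The path below is a hypothetical example, not part of this tutorial’s setup:

```apache
# Hypothetical example: allow automatic listings only for this directory
<Directory "/usr/local/www/apache24/data/downloads">
    Options +Indexes
</Directory>
```

      Scoping the override this way keeps the rest of the data path protected by the global -Indexes setting.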

      Reducing the Timeout Directive Value

      The Timeout directive sets the limit of time Apache HTTP will wait for new input/output before failing the connection request. This failure can occur due to different circumstances such as packets not arriving to the server or data not being confirmed as received by the client.

      By default the timeout is set to 60 seconds. In environments where the internet service is slow this default value may be sensible, but one minute is quite a long time, particularly if the server is serving users with faster internet connections. Furthermore, the time during which the server keeps connections open can be abused to perform Denial of Service (DoS) attacks. If a flood of these malicious connections occurs, the server may become saturated and unresponsive.

      To change the value you’ll find the Timeout entries in the httpd-default.conf file:

      • grep -n 'Timeout' /usr/local/etc/apache24/extra/httpd-default.conf

      You’ll see similar output to:


      8   # Timeout: The number of seconds before receives and sends time out.
      10  Timeout 60
      26  # KeepAliveTimeout: Number of seconds to wait for the next request from the
      29  KeepAliveTimeout 5
      89  RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500

      In the output, line 10 sets the Timeout directive value. To directly access this line run the following command:

      • sudo vi +10 /usr/local/etc/apache24/extra/httpd-default.conf

      You’ll change it to 30 seconds, for example, like the following:


      # Timeout: The number of seconds before receives and sends time out.
      Timeout 30

      Save and exit the file with :wq and ENTER.

      The value of the Timeout directive has to strike a balance: long enough to allow a legitimate connection to complete successfully, but short enough to cut off undesired connection attempts.

      Note: Denial of Service attacks can drain the server’s resources quite effectively. A complementary and very capable counter measure is using a threaded MPM to get the best performance out of how Apache HTTP handles connections and processes. In this tutorial How To Configure Apache HTTP with MPM Event and PHP-FPM on FreeBSD 12.0 there are steps on enabling this capability.

      For this change to take effect, restart the Apache HTTP server:

      • sudo apachectl restart

      You’ve changed the default value of the Timeout directive in order to partially mitigate DoS attacks.

      Disabling the TRACE method

      The Hypertext Transfer Protocol (HTTP) was developed following a client-server model, and as such the protocol has request methods to retrieve information from, or place information on, the server. The server needs to understand these sets of methods and the interaction between them. In this step you’ll configure the minimum necessary methods.

      The TRACE method, once considered harmless, can be leveraged to perform Cross-Site Tracing (XST) attacks, which allow malicious actors to steal user sessions. The method was designed for debugging purposes: the server returns the same request originally sent by the client. Because the cookie from the browser’s session is sent to the server, it is sent back again. However, it could potentially be intercepted by a malicious actor, who can then redirect a browser’s connection to a site under their control rather than the original server.
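      To illustrate, a TRACE exchange echoes the client’s request back in the response body, including any Cookie header. The hostname and cookie value below are illustrative:

```http
TRACE / HTTP/1.1
Host: your_domain
Cookie: session_id=abc123

HTTP/1.1 200 OK
Content-Type: message/http

TRACE / HTTP/1.1
Host: your_domain
Cookie: session_id=abc123
```

      A script injected through an XSS flaw could issue such a request and read the session cookie out of the echoed response, which is the essence of a Cross-Site Tracing attack.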

      Because of the possibility of the misuse of the TRACE method it is recommended to only use it for debugging and not in production. In this section you’ll disable this method.

      Edit the httpd.conf file with the following command and then press G to reach the end of the file:

      • sudo vi /usr/local/etc/apache24/httpd.conf

      Add the following line at the end of the file:


      . . .
      TraceEnable off

      A good practice is to only specify the methods you’ll use in your Apache HTTP web server. This will help limit potential entry points for malicious actors.

      LimitExcept can be useful for this purpose since it will not allow any other methods than those declared in it. For example a configuration can be established like this one:


      DocumentRoot "/usr/local/www/apache24/data"
      <Directory "/usr/local/www/apache24/data">
          Options -Indexes +FollowSymLinks -Includes
          AllowOverride none
          <LimitExcept GET POST HEAD>
             deny from all
          </LimitExcept>
          Require all granted
      </Directory>
      As declared within the LimitExcept directive only the GET, POST, and HEAD methods are allowed in the configuration.

      • The GET method is part of the HTTP protocol and it is used to retrieve data.
      • The POST method is also part of the HTTP protocol and is used to send data to the server.
      • The HEAD method is similar to GET, however this has no response body.

      You’ll use the following command and place the LimitExcept block inside the file:

      • sudo vi +272 /usr/local/etc/apache24/httpd.conf

      To set this configuration you’ll place the following block inside the Directory entry for the DocumentRoot that content is served from:


      . . .
      <LimitExcept GET POST HEAD>
         deny from all
      </LimitExcept>
      . . .

      To apply the changes, restart Apache HTTP:

      • sudo apachectl restart

      The newer directive AllowedMethods provides similar functionality, although its status is still experimental.

      You’ve seen what HTTP methods are, how the TRACE method can be leveraged for malicious activity, and how to declare which methods to use. Next you’ll work with further protections dedicated to HTTP headers and cookies.

      Securing Headers and Cookies

      In this step you’ll set specific directives to protect the sessions that the client machines will open when visiting your Apache HTTP web server. This way your server will not load unwanted content, encryption will not be downgraded, and you’ll avoid content sniffing.

      Headers are components of the request methods. There are headers to adjust authentication, communication between server and client, caching, content negotiation, and so on.

      Cookies are bits of information sent by the server to the browser. These bits allow the server to recognize the client browser from one request to the next. They also allow servers to recognize user sessions. For example, they can track the shopping cart of a logged-in user, payment information, history, and so on. Cookies are used and retained in the client’s web browser because HTTP is a stateless protocol, meaning once the connection closes the server does not remember which client sent which request.

      It is important to protect headers as well as cookies because they provide communication between the web browser client and the web server.

      The headers module is enabled by default. To check that it’s loaded, use the following command:

      • sudo apachectl -M | grep 'headers'

      You’ll see the following output:


      headers_module (shared)

      If you don’t see any output, check if the module is activated inside Apache’s httpd.conf file:

      • grep -n 'mod_headers' /usr/local/etc/apache24/httpd.conf

      As output you’ll see an uncommented line referring to the specific module for headers:


      . . .
      122  LoadModule headers_module libexec/apache24/
      . . .

      Remove the hash symbol (#) at the beginning of the line, if present, to activate the module.

      By making use of the following Apache HTTP directives you’ll protect headers and cookies from malicious activity to reduce the risk for clients and servers.

      Now you’ll set the header’s protection. You’ll place all these header values in one block. You can choose to apply these values as you wish, but all are recommended.

      Edit the httpd.conf file with the following command and then press G to reach the end of the file:

      • sudo vi /usr/local/etc/apache24/httpd.conf

      Place the following block at the end of the file:


      . . .
      <IfModule mod_headers.c>
        # Add security and privacy related headers
        Header set Content-Security-Policy "default-src 'self'; upgrade-insecure-requests;"
        Header set Strict-Transport-Security "max-age=31536000; includeSubDomains"
        Header always edit Set-Cookie (.*) "$1; HttpOnly; Secure"
        Header set X-Content-Type-Options "nosniff"
        Header set X-XSS-Protection "1; mode=block"
        Header set Referrer-Policy "strict-origin"
        Header set X-Frame-Options "deny"
        SetEnv modHeadersAvailable true
      </IfModule>
      • Header set Strict-Transport-Security "max-age=31536000; includeSubDomains": HTTP Strict Transport Security (HSTS) is a mechanism for web servers and clients (mainly browsers) to establish communications using only HTTPS. By implementing this you’re avoiding man-in-the-middle attacks, where a third party in between the communication could potentially intercept the traffic and also tamper with it.

      • Header always edit Set-Cookie (.*) "$1; HttpOnly; Secure": The HttpOnly and Secure flags on cookies help prevent cross-site scripting (XSS) attacks. Cookies can be misused by attackers to pose as legitimate visitors (identity theft), or be tampered with.

      • Header set Referrer-Policy "strict-origin": The Referrer-Policy header sets what information is included as the referrer information in the header field.

      • Header set Content-Security-Policy "default-src 'self'; upgrade-insecure-requests;": The Content-Security-Policy header (CSP) will completely prevent loading content not specified in the parameters, which is helpful to prevent cross-site scripting (XSS) attacks. There are many possible parameters to configure the policy for this header. The bottom line is configuring it to load content from the same site and upgrade any content with an HTTP origin.

      • Header set X-XSS-Protection "1; mode=block": This supports older browsers that do not cope with Content-Security-Policy headers. The X-XSS-Protection header provides protection against Cross-Site Scripting attacks. You do not need to set this header unless you need to support old browser versions, which is rare.

      • Header set X-Frame-Options "deny": This prevents clickjacking attacks. The X-Frame-Options header tells a browser whether a page can be rendered in a <frame>, <iframe>, <embed>, or <object>. This way content from one site cannot be embedded into another, preventing clickjacking attacks. Here you’re denying all frame rendering so the web page can’t be embedded anywhere else, not even inside the same web site. You can adapt this to your needs if, for example, you must authorize rendering some pages because they are advertisements or collaborations with specific websites.

      • Header set X-Content-Type-Options "nosniff": The X-Content-Type-Options header controls MIME types so they’re not changed and followed. MIME types are file format standards; they work for text, audio, video, image, and so on. This header blocks malicious actors from content sniffing those files and trying to alter the file types.
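      With the block above in place, a response from the server will carry headers similar to the following (the status line is illustrative and header order may differ):

```http
HTTP/1.1 200 OK
Content-Security-Policy: default-src 'self'; upgrade-insecure-requests;
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Referrer-Policy: strict-origin
X-Frame-Options: deny
```

      You can inspect these yourself in a browser’s developer tools under the network tab.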

      Now restart Apache for the changes to take effect:

      • sudo apachectl restart

      To check the security levels of your configuration settings, visit the security headers website. Having followed the steps in this tutorial, your domain will score an A grade.

      Note: If you check your headers on the scanning site and get an F grade, it could be because there is no index.html inside the DocumentRoot of your site, as instructed at the end of Step 2. If you get a grade other than A or F, examine each Header set line for any misspelling that may have caused the downgrade.

      In this step you have worked with up to seven settings to improve the security of your headers and cookies. These will help prevent cross-site scripting, clickjacking, and other types of attacks.


      In this tutorial you’ve addressed several security aspects, from reducing information disclosure, to protecting sessions, to adjusting configuration settings for important functionality.

      For further resources on hardening Apache, here are some other references:

      For extra tools to protect Apache HTTP:
