
      How to Set Up an Elasticsearch, Fluentd and Kibana (EFK) Logging Stack on Kubernetes


      Introduction

      When running multiple services and applications on a Kubernetes cluster, a centralized, cluster-level logging stack can help you quickly sort through and analyze the heavy volume of log data produced by your Pods. One popular centralized logging solution is the Elasticsearch, Fluentd, and Kibana (EFK) stack.

      Elasticsearch is a real-time, distributed, and scalable search engine which allows for full-text and structured search, as well as analytics. It is commonly used to index and search through large volumes of log data, but can also be used to search many different kinds of documents.

      Elasticsearch is commonly deployed alongside Kibana, a powerful data visualization frontend and dashboard for Elasticsearch. Kibana allows you to explore your Elasticsearch log data through a web interface, and build dashboards and queries to quickly answer questions and gain insight into your Kubernetes applications.

      In this tutorial we’ll use Fluentd to collect, transform, and ship log data to the Elasticsearch backend. Fluentd is a popular open-source data collector that we’ll set up on our Kubernetes nodes to tail container log files, filter and transform the log data, and deliver it to the Elasticsearch cluster, where it will be indexed and stored.

      We’ll begin by configuring and launching a scalable Elasticsearch cluster, and then create the Kibana Kubernetes Service and Deployment. To conclude, we’ll set up Fluentd as a DaemonSet so it runs on every Kubernetes worker node.

      Prerequisites

      Before you begin with this guide, ensure you have the following available to you:

      • A Kubernetes 1.10+ cluster with role-based access control (RBAC) enabled

• Ensure your cluster has enough resources available to roll out the EFK stack, and if not, scale your cluster by adding worker nodes. We’ll be deploying a 3-Pod Elasticsearch cluster (you can scale this down to 1 if necessary), as well as a single Kibana Pod. Every worker node will also run a Fluentd Pod. The cluster in this guide consists of 3 worker nodes and a managed control plane.
      • The kubectl command-line tool installed on your local machine, configured to connect to your cluster. You can read more about installing kubectl in the official documentation.

      Once you have these components set up, you’re ready to begin with this guide.

      Step 1 — Creating a Namespace

      Before we roll out an Elasticsearch cluster, we’ll first create a Namespace into which we’ll install all of our logging instrumentation. Kubernetes lets you separate objects running in your cluster using a “virtual cluster” abstraction called Namespaces. In this guide, we’ll create a kube-logging namespace into which we’ll install the EFK stack components. This Namespace will also allow us to quickly clean up and remove the logging stack without any loss of function to the Kubernetes cluster.

      To begin, first investigate the existing Namespaces in your cluster using kubectl:

      kubectl get namespaces
      

      You should see the following three initial Namespaces, which come preinstalled with your Kubernetes cluster:

      Output

NAME          STATUS    AGE
default       Active    5m
kube-system   Active    5m
kube-public   Active    5m

      The default Namespace houses objects that are created without specifying a Namespace. The kube-system Namespace contains objects created and used by the Kubernetes system, like kube-dns, kube-proxy, and kubernetes-dashboard. It’s good practice to keep this Namespace clean and not pollute it with your application and instrumentation workloads.

      The kube-public Namespace is another automatically created Namespace that can be used to store objects you’d like to be readable and accessible throughout the whole cluster, even to unauthenticated users.

To create the kube-logging Namespace, first open and edit a file called kube-logging.yaml using your favorite editor, such as nano:

nano kube-logging.yaml

      Inside your editor, paste the following Namespace object YAML:

      kube-logging.yaml

      kind: Namespace
      apiVersion: v1
      metadata:
        name: kube-logging
      

      Then, save and close the file.

      Here, we specify the Kubernetes object's kind as a Namespace object. To learn more about Namespace objects, consult the Namespaces Walkthrough in the official Kubernetes documentation. We also specify the Kubernetes API version used to create the object (v1), and give it a name, kube-logging.

      Once you've created the kube-logging.yaml Namespace object file, create the Namespace using kubectl create with the -f filename flag:

kubectl create -f kube-logging.yaml

      You should see the following output:

      Output

      namespace/kube-logging created

You can then confirm that the Namespace was successfully created:

kubectl get namespaces

      At this point, you should see the new kube-logging Namespace:

      Output

NAME           STATUS    AGE
default        Active    23m
kube-logging   Active    1m
kube-public    Active    23m
kube-system    Active    23m

      We can now deploy an Elasticsearch cluster into this isolated logging Namespace.

      Step 2 — Creating the Elasticsearch StatefulSet

      Now that we've created a Namespace to house our logging stack, we can begin rolling out its various components. We'll first begin by deploying a 3-node Elasticsearch cluster.

In this guide, we use 3 Elasticsearch Pods to avoid the "split-brain" issue that occurs in highly available, multi-node clusters. At a high level, "split-brain" arises when one or more nodes can't communicate with the others, and several "split" masters get elected. To learn more, consult “Avoiding split brain.”

One key takeaway is that you should set the discovery.zen.minimum_master_nodes Elasticsearch parameter to N/2 + 1 (rounding down in the case of fractional numbers), where N is the number of master-eligible nodes in your Elasticsearch cluster. For our 3-node cluster, this means that we'll set this value to 2. That way, if one node gets disconnected from the cluster temporarily, the other two nodes can elect a new master and the cluster can continue functioning while the last node attempts to rejoin. It's important to keep this parameter in mind when scaling your Elasticsearch cluster.
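To make the arithmetic concrete: with N = 3 master-eligible nodes, 3/2 + 1 = 2.5, which rounds down to 2; with N = 5, 5/2 + 1 = 3.5 rounds down to 3. In both cases this is a strict majority of the master-eligible nodes, so the two sides of a network partition can never both assemble a quorum and elect separate masters.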

      Creating the Headless Service

      To start, we'll create a headless Kubernetes service called elasticsearch that will define a DNS domain for the 3 Pods. A headless service does not perform load balancing or have a static IP; to learn more about headless services, consult the official Kubernetes documentation.

      Open a file called elasticsearch_svc.yaml using your favorite editor:

nano elasticsearch_svc.yaml

      Paste in the following Kubernetes service YAML:

      elasticsearch_svc.yaml

      kind: Service
      apiVersion: v1
      metadata:
        name: elasticsearch
        namespace: kube-logging
        labels:
          app: elasticsearch
      spec:
        selector:
          app: elasticsearch
        clusterIP: None
        ports:
          - port: 9200
            name: rest
          - port: 9300
            name: inter-node
      

      Then, save and close the file.

      We define a Service called elasticsearch in the kube-logging Namespace, and give it the app: elasticsearch label. We then set the .spec.selector to app: elasticsearch so that the Service selects Pods with the app: elasticsearch label. When we associate our Elasticsearch StatefulSet with this Service, the Service will return DNS A records that point to Elasticsearch Pods with the app: elasticsearch label.

      We then set clusterIP: None, which renders the service headless. Finally, we define ports 9200 and 9300 which are used to interact with the REST API, and for inter-node communication, respectively.

      Create the service using kubectl:

kubectl create -f elasticsearch_svc.yaml

      You should see the following output:

      Output

      service/elasticsearch created

      Finally, double-check that the service was successfully created using kubectl get:

      kubectl get services --namespace=kube-logging
      

      You should see the following:

      Output

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   26s
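The CLUSTER-IP value of None confirms that the Service is headless. Later, once the Elasticsearch Pods from the next section are running, you can list the Pod IPs behind this Service as a quick sanity check; this is optional, and the addresses shown will vary by cluster:

kubectl get endpoints elasticsearch --namespace=kube-logging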

      Now that we've set up our headless service and a stable .elasticsearch.kube-logging.svc.cluster.local domain for our Pods, we can go ahead and create the StatefulSet.

      Creating the StatefulSet

A Kubernetes StatefulSet allows you to assign a stable identity to Pods and grant them stable, persistent storage. Elasticsearch requires stable storage to persist data across Pod rescheduling and restarts. To learn more about the StatefulSet workload, consult the StatefulSets page from the Kubernetes docs.

      Open a file called elasticsearch_statefulset.yaml in your favorite editor:

nano elasticsearch_statefulset.yaml

      We will move through the StatefulSet object definition section by section, pasting blocks into this file.

      Begin by pasting in the following block:

      elasticsearch_statefulset.yaml

      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: es-cluster
        namespace: kube-logging
      spec:
        serviceName: elasticsearch
        replicas: 3
        selector:
          matchLabels:
            app: elasticsearch
        template:
          metadata:
            labels:
              app: elasticsearch
      

      In this block, we define a StatefulSet called es-cluster in the kube-logging namespace. We then associate it with our previously created elasticsearch Service using the serviceName field. This ensures that each Pod in the StatefulSet will be accessible using the following DNS address: es-cluster-[0,1,2].elasticsearch.kube-logging.svc.cluster.local, where [0,1,2] corresponds to the Pod's assigned integer ordinal.

We specify 3 replicas (Pods) and set the matchLabels selector to app: elasticsearch, which we then mirror in the .spec.template.metadata section. The .spec.selector.matchLabels and .spec.template.metadata.labels fields must match.

      We can now move on to the object spec. Paste in the following block of YAML immediately below the preceding block:

      elasticsearch_statefulset.yaml

      . . .
          spec:
            containers:
            - name: elasticsearch
              image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
              resources:
                  limits:
                    cpu: 1000m
                  requests:
                    cpu: 100m
              ports:
              - containerPort: 9200
                name: rest
                protocol: TCP
              - containerPort: 9300
                name: inter-node
                protocol: TCP
              volumeMounts:
              - name: data
                mountPath: /usr/share/elasticsearch/data
              env:
                - name: cluster.name
                  value: k8s-logs
                - name: node.name
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: discovery.zen.ping.unicast.hosts
                  value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
                - name: discovery.zen.minimum_master_nodes
                  value: "2"
                - name: ES_JAVA_OPTS
                  value: "-Xms512m -Xmx512m"
      

      Here we define the Pods in the StatefulSet. We name the containers elasticsearch and choose the docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3 Docker image. At this point, you may modify this image tag to correspond to your own internal Elasticsearch image, or a different version. Note that for the purposes of this guide, only Elasticsearch 6.4.3 has been tested.

      The -oss suffix ensures that we use the open-source version of Elasticsearch. If you'd like to use the default version containing X-Pack (which includes a free license), omit the -oss suffix. Note that you will have to modify the steps in this guide slightly to account for the added authentication provided by X-Pack.

      We then use the resources field to specify that the container needs at least 0.1 vCPU guaranteed to it, and can burst up to 1 vCPU (which limits the Pod's resource usage when performing an initial large ingest or dealing with a load spike). You should modify these values depending on your anticipated load and available resources. To learn more about resource requests and limits, consult the official Kubernetes Documentation.

      We then open and name ports 9200 and 9300 for REST API and inter-node communication, respectively. We specify a volumeMount called data that will mount the PersistentVolume named data to the container at the path /usr/share/elasticsearch/data. We will define the VolumeClaims for this StatefulSet in a later YAML block.

      Finally, we set some environment variables in the container:

      • cluster.name: The Elasticsearch cluster's name, which in this guide is k8s-logs.
      • node.name: The node's name, which we set to the .metadata.name field using valueFrom. This will resolve to es-cluster-[0,1,2], depending on the node's assigned ordinal.
• discovery.zen.ping.unicast.hosts: This field sets the discovery method used to connect nodes to each other within an Elasticsearch cluster. We use unicast discovery, which specifies a static list of hosts for our cluster. In this guide, thanks to the headless service we configured earlier, our Pods have domains of the form es-cluster-[0,1,2].elasticsearch.kube-logging.svc.cluster.local, so we set this variable accordingly. Using local namespace Kubernetes DNS resolution, we can shorten this to es-cluster-[0,1,2].elasticsearch (see the lookup sketch after this list). To learn more about Elasticsearch discovery, consult the official Elasticsearch documentation.
• discovery.zen.minimum_master_nodes: We set this to (N/2) + 1, where N is the number of master-eligible nodes in our cluster. In this guide we have 3 Elasticsearch nodes, so we set this value to 2 (rounding down to the nearest integer). To learn more about this parameter, consult the official Elasticsearch documentation.
      • ES_JAVA_OPTS: Here we set this to -Xms512m -Xmx512m which tells the JVM to use a minimum and maximum heap size of 512 MB. You should tune these parameters depending on your cluster's resource availability and needs. To learn more, consult Setting the heap size.
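As mentioned in the discovery.zen.ping.unicast.hosts item above, the short es-cluster-[0,1,2].elasticsearch names rely on namespace-local DNS resolution. Once the StatefulSet has been deployed at the end of this step, you can verify this resolution from a throwaway Pod. This is an optional sketch, and the output format varies with the busybox image version:

kubectl run -it --rm dns-test --image=busybox --restart=Never --namespace=kube-logging -- nslookup es-cluster-0.elasticsearch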

      The next block we'll paste in looks as follows:

      elasticsearch_statefulset.yaml

      . . .
            initContainers:
            - name: fix-permissions
              image: busybox
              command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
              securityContext:
                privileged: true
              volumeMounts:
              - name: data
                mountPath: /usr/share/elasticsearch/data
            - name: increase-vm-max-map
              image: busybox
              command: ["sysctl", "-w", "vm.max_map_count=262144"]
              securityContext:
                privileged: true
            - name: increase-fd-ulimit
              image: busybox
              command: ["sh", "-c", "ulimit -n 65536"]
              securityContext:
                privileged: true
      

      In this block, we define several Init Containers that run before the main elasticsearch app container. These Init Containers each run to completion in the order they are defined. To learn more about Init Containers, consult the official Kubernetes Documentation.

The first, named fix-permissions, runs a chown command to change the owner and group of the Elasticsearch data directory to 1000:1000, the Elasticsearch user's UID and GID. By default Kubernetes mounts the data directory as root, which renders it inaccessible to Elasticsearch. To learn more about this step, consult Elasticsearch's “Notes for production use and defaults.”

      The second, named increase-vm-max-map, runs a command to increase the operating system's limits on mmap counts, which by default may be too low, resulting in out of memory errors. To learn more about this step, consult the official Elasticsearch documentation.

      The next Init Container to run is increase-fd-ulimit, which runs the ulimit command to increase the maximum number of open file descriptors. To learn more about this step, consult the “Notes for Production Use and Defaults” from the official Elasticsearch documentation.

      Note: The Elasticsearch Notes for Production Use also mentions disabling swapping for performance reasons. Depending on your Kubernetes installation or provider, swapping may already be disabled. To check this, exec into a running container and run cat /proc/swaps to list active swap devices. If you see nothing there, swap is disabled.
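For example, once the Elasticsearch Pods are running, you can perform the check described in the note above against the first node:

kubectl exec -it es-cluster-0 --namespace=kube-logging -- cat /proc/swaps

If the command prints only the header line, no swap devices are active on that node.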

      Now that we've defined our main app container and the Init Containers that run before it to tune the container OS, we can add the final piece to our StatefulSet object definition file: the volumeClaimTemplates.

      Paste in the following volumeClaimTemplate block:

      elasticsearch_statefulset.yaml

      . . .
        volumeClaimTemplates:
        - metadata:
            name: data
            labels:
              app: elasticsearch
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: do-block-storage
            resources:
              requests:
                storage: 100Gi
      

      In this block, we define the StatefulSet's volumeClaimTemplates. Kubernetes will use this to create PersistentVolumes for the Pods. In the block above, we name it data (which is the name we refer to in the volumeMounts defined previously), and give it the same app: elasticsearch label as our StatefulSet.

      We then specify its access mode as ReadWriteOnce, which means that it can only be mounted as read-write by a single node. We define the storage class as do-block-storage in this guide since we use a DigitalOcean Kubernetes cluster for demonstration purposes. You should change this value depending on where you are running your Kubernetes cluster. To learn more, consult the Persistent Volume documentation.
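To see which storage classes your own cluster offers, you can list them with kubectl and substitute the appropriate name into storageClassName:

kubectl get storageclass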

      Finally, we specify that we'd like each PersistentVolume to be 100GiB in size. You should adjust this value depending on your production needs.

      The complete StatefulSet spec should look something like this:

      elasticsearch_statefulset.yaml

      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: es-cluster
        namespace: kube-logging
      spec:
        serviceName: elasticsearch
        replicas: 3
        selector:
          matchLabels:
            app: elasticsearch
        template:
          metadata:
            labels:
              app: elasticsearch
          spec:
            containers:
            - name: elasticsearch
              image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
              resources:
                  limits:
                    cpu: 1000m
                  requests:
                    cpu: 100m
              ports:
              - containerPort: 9200
                name: rest
                protocol: TCP
              - containerPort: 9300
                name: inter-node
                protocol: TCP
              volumeMounts:
              - name: data
                mountPath: /usr/share/elasticsearch/data
              env:
                - name: cluster.name
                  value: k8s-logs
                - name: node.name
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: discovery.zen.ping.unicast.hosts
                  value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
                - name: discovery.zen.minimum_master_nodes
                  value: "2"
                - name: ES_JAVA_OPTS
                  value: "-Xms512m -Xmx512m"
            initContainers:
            - name: fix-permissions
              image: busybox
              command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
              securityContext:
                privileged: true
              volumeMounts:
              - name: data
                mountPath: /usr/share/elasticsearch/data
            - name: increase-vm-max-map
              image: busybox
              command: ["sysctl", "-w", "vm.max_map_count=262144"]
              securityContext:
                privileged: true
            - name: increase-fd-ulimit
              image: busybox
              command: ["sh", "-c", "ulimit -n 65536"]
              securityContext:
                privileged: true
        volumeClaimTemplates:
        - metadata:
            name: data
            labels:
              app: elasticsearch
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: do-block-storage
            resources:
              requests:
                storage: 100Gi
      

      Once you're satisfied with your Elasticsearch configuration, save and close the file.

      Now, deploy the StatefulSet using kubectl:

kubectl create -f elasticsearch_statefulset.yaml

      You should see the following output:

      Output

      statefulset.apps/es-cluster created

      You can monitor the StatefulSet as it is rolled out using kubectl rollout status:

kubectl rollout status sts/es-cluster --namespace=kube-logging

      You should see the following output as the cluster is rolled out:

      Output

Waiting for 3 pods to be ready...
Waiting for 2 pods to be ready...
Waiting for 1 pods to be ready...
partitioned roll out complete: 3 new pods have been updated...

      Once all the Pods have been deployed, you can check that your Elasticsearch cluster is functioning correctly by performing a request against the REST API.

To do so, first forward local port 9200 to port 9200 on one of the Elasticsearch nodes (es-cluster-0) using kubectl port-forward:

kubectl port-forward es-cluster-0 9200:9200 --namespace=kube-logging

      Then, in a separate terminal window, perform a curl request against the REST API:

curl http://localhost:9200/_cluster/state?pretty

You should see the following output:

      Output

      { "cluster_name" : "k8s-logs", "compressed_size_in_bytes" : 348, "cluster_uuid" : "QD06dK7CQgids-GQZooNVw", "version" : 3, "state_uuid" : "mjNIWXAzQVuxNNOQ7xR-qg", "master_node" : "IdM5B7cUQWqFgIHXBp0JDg", "blocks" : { }, "nodes" : { "u7DoTpMmSCixOoictzHItA" : { "name" : "es-cluster-1", "ephemeral_id" : "ZlBflnXKRMC4RvEACHIVdg", "transport_address" : "10.244.8.2:9300", "attributes" : { } }, "IdM5B7cUQWqFgIHXBp0JDg" : { "name" : "es-cluster-0", "ephemeral_id" : "JTk1FDdFQuWbSFAtBxdxAQ", "transport_address" : "10.244.44.3:9300", "attributes" : { } }, "R8E7xcSUSbGbgrhAdyAKmQ" : { "name" : "es-cluster-2", "ephemeral_id" : "9wv6ke71Qqy9vk2LgJTqaA", "transport_address" : "10.244.40.4:9300", "attributes" : { } } }, ...

      This indicates that our Elasticsearch cluster k8s-logs has successfully been created with 3 nodes: es-cluster-0, es-cluster-1, and es-cluster-2. The current master node is es-cluster-0.
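With the port-forward still running, you can also query the cluster health endpoint for a more compact summary. In a healthy 3-node deployment, the status field should be green and number_of_nodes should be 3:

curl http://localhost:9200/_cluster/health?pretty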

      Now that your Elasticsearch cluster is up and running, you can move on to setting up a Kibana frontend for it.

      Step 3 — Creating the Kibana Deployment and Service

      To launch Kibana on Kubernetes, we'll create a Service called kibana, and a Deployment consisting of one Pod replica. You can scale the number of replicas depending on your production needs, and optionally specify a LoadBalancer type for the Service to load balance requests across the Deployment pods.

This time, we'll create the Service and Deployment in the same file. Open up a file called kibana.yaml in your favorite editor:

nano kibana.yaml

Paste in the following Service and Deployment spec:

      kibana.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: kibana
        namespace: kube-logging
        labels:
          app: kibana
      spec:
        ports:
        - port: 5601
        selector:
          app: kibana
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kibana
        namespace: kube-logging
        labels:
          app: kibana
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: kibana
        template:
          metadata:
            labels:
              app: kibana
          spec:
            containers:
            - name: kibana
              image: docker.elastic.co/kibana/kibana-oss:6.4.3
              resources:
                limits:
                  cpu: 1000m
                requests:
                  cpu: 100m
              env:
                - name: ELASTICSEARCH_URL
                  value: http://elasticsearch:9200
              ports:
              - containerPort: 5601
      

      Then, save and close the file.

In this spec we've defined a Service called kibana in the kube-logging namespace and given it the app: kibana label.

      We've also specified that it should be accessible on port 5601 and use the app: kibana label to select the Service's target Pods.

      In the Deployment spec, we define a Deployment called kibana and specify that we'd like 1 Pod replica.

      We use the docker.elastic.co/kibana/kibana-oss:6.4.3 image. At this point you may substitute your own private or public Kibana image to use. We once again use the -oss suffix to specify that we'd like the open-source version.

      We specify that we'd like at the very least 0.1 vCPU guaranteed to the Pod, bursting up to a limit of 1 vCPU. You may change these parameters depending on your anticipated load and available resources.

      Next, we use the ELASTICSEARCH_URL environment variable to set the endpoint and port for the Elasticsearch cluster. Using Kubernetes DNS, this endpoint corresponds to its Service name elasticsearch. This domain will resolve to a list of IP addresses for the 3 Elasticsearch Pods. To learn more about Kubernetes DNS, consult DNS for Services and Pods.

      Finally, we set Kibana's container port to 5601, to which the kibana Service will forward requests.

      Once you're satisfied with your Kibana configuration, you can roll out the Service and Deployment using kubectl:

kubectl create -f kibana.yaml

      You should see the following output:

      Output

service/kibana created
deployment.apps/kibana created

      You can check that the rollout succeeded by running the following command:

kubectl rollout status deployment/kibana --namespace=kube-logging

      You should see the following output:

      Output

      deployment "kibana" successfully rolled out

To access the Kibana interface, we'll once again forward a local port, this time to the Pod running Kibana. Grab the Kibana Pod details using kubectl get:

kubectl get pods --namespace=kube-logging

      Output

NAME                      READY     STATUS    RESTARTS   AGE
es-cluster-0              1/1       Running   0          55m
es-cluster-1              1/1       Running   0          54m
es-cluster-2              1/1       Running   0          54m
kibana-6c9fb4b5b7-plbg2   1/1       Running   0          4m27s

      Here we observe that our Kibana Pod is called kibana-6c9fb4b5b7-plbg2.

      Forward the local port 5601 to port 5601 on this Pod:

kubectl port-forward kibana-6c9fb4b5b7-plbg2 5601:5601 --namespace=kube-logging

      You should see the following output:

      Output

Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601

      Now, in your web browser, visit the following URL:

      http://localhost:5601
      

      If you see the following Kibana welcome page, you've successfully deployed Kibana into your Kubernetes cluster:

      Kibana Welcome Screen
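If you'd prefer to verify from the command line, Kibana also exposes a status API that you can query through the same port-forward. This is an optional check, and the response format varies between Kibana versions:

curl http://localhost:5601/api/status

The response is a JSON document summarizing the state of the Kibana server and its plugins.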

      You can now move on to rolling out the final component of the EFK stack: the log collector, Fluentd.

      Step 4 — Creating the Fluentd DaemonSet

      In this guide, we'll set up Fluentd as a DaemonSet, which is a Kubernetes workload type that runs a copy of a given Pod on each Node in the Kubernetes cluster. Using this DaemonSet controller, we'll roll out a Fluentd logging agent Pod on every node in our cluster. To learn more about this logging architecture, consult “Using a node logging agent” from the official Kubernetes docs.

      In Kubernetes, containerized applications that log to stdout and stderr have their log streams captured and redirected to JSON files on the nodes. The Fluentd Pod will tail these log files, filter log events, transform the log data, and ship it off to the Elasticsearch logging backend we deployed in Step 2.
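On nodes using Docker's default json-file logging driver, each captured line is stored as a small JSON object. A single entry looks roughly like the following; the exact fields and file paths depend on your container runtime configuration:

{"log":"some application output\n","stream":"stdout","time":"2018-11-01T12:00:00.000000000Z"}

Fluentd reads these records from the node's filesystem, parses the JSON, and attaches Kubernetes metadata before forwarding them to Elasticsearch.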

      In addition to container logs, the Fluentd agent will tail Kubernetes system component logs like kubelet, kube-proxy, and Docker logs. To see a full list of sources tailed by the Fluentd logging agent, consult the kubernetes.conf file used to configure the logging agent. To learn more about logging in Kubernetes clusters, consult “Logging at the node level” from the official Kubernetes documentation.

Begin by opening a file called fluentd.yaml in your favorite text editor:

nano fluentd.yaml

      Once again, we'll paste in the Kubernetes object definitions block by block, providing context as we go along. In this guide, we use the Fluentd DaemonSet spec provided by the Fluentd maintainers. Another helpful resource provided by the Fluentd maintainers is Kubernetes Logging with Fluentd.

      First, paste in the following ServiceAccount definition:

      fluentd.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: fluentd
        namespace: kube-logging
        labels:
          app: fluentd
      

      Here, we create a Service Account called fluentd that the Fluentd Pods will use to access the Kubernetes API. We create it in the kube-logging Namespace and once again give it the label app: fluentd. To learn more about Service Accounts in Kubernetes, consult Configure Service Accounts for Pods in the official Kubernetes docs.

      Next, paste in the following ClusterRole block:

      fluentd.yaml

      . . .
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: fluentd
        labels:
          app: fluentd
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - namespaces
        verbs:
        - get
        - list
        - watch
      

      Here we define a ClusterRole called fluentd to which we grant the get, list, and watch permissions on the pods and namespaces objects. ClusterRoles allow you to grant access to cluster-scoped Kubernetes resources like Nodes. To learn more about Role-Based Access Control and Cluster Roles, consult Using RBAC Authorization from the official Kubernetes documentation.

      Now, paste in the following ClusterRoleBinding block:

      fluentd.yaml

      . . .
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: fluentd
      roleRef:
        kind: ClusterRole
        name: fluentd
        apiGroup: rbac.authorization.k8s.io
      subjects:
      - kind: ServiceAccount
        name: fluentd
        namespace: kube-logging
      

      In this block, we define a ClusterRoleBinding called fluentd which binds the fluentd ClusterRole to the fluentd Service Account. This grants the fluentd ServiceAccount the permissions listed in the fluentd Cluster Role.

      At this point we can begin pasting in the actual DaemonSet spec:

      fluentd.yaml

      . . .
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: fluentd
        namespace: kube-logging
        labels:
          app: fluentd
      

      Here, we define a DaemonSet called fluentd in the kube-logging Namespace and give it the app: fluentd label.

      Next, paste in the following section:

      fluentd.yaml

      . . .
      spec:
        selector:
          matchLabels:
            app: fluentd
        template:
          metadata:
            labels:
              app: fluentd
          spec:
            serviceAccount: fluentd
            serviceAccountName: fluentd
            tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
            containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch
              env:
                - name:  FLUENT_ELASTICSEARCH_HOST
                  value: "elasticsearch.kube-logging.svc.cluster.local"
                - name:  FLUENT_ELASTICSEARCH_PORT
                  value: "9200"
                - name: FLUENT_ELASTICSEARCH_SCHEME
                  value: "http"
                - name: FLUENT_UID
                  value: "0"
      

Here, we match the app: fluentd label defined in .metadata.labels and assign the DaemonSet the fluentd Service Account. We also use app: fluentd as the selector for the Pods managed by this DaemonSet.

      Next, we define a NoSchedule toleration to match the equivalent taint on Kubernetes master nodes. This will ensure that the DaemonSet also gets rolled out to the Kubernetes masters. If you don't want to run a Fluentd Pod on your master nodes, remove this toleration. To learn more about Kubernetes taints and tolerations, consult “Taints and Tolerations” from the official Kubernetes docs.

      Next, we begin defining the Pod container, which we call fluentd.

We use the official v0.12 Debian image provided by the Fluentd maintainers. If you'd like to use your own private or public Fluentd image, or use a different image version, modify the image tag in the container spec. The Dockerfile and contents of this image are available in Fluentd's fluentd-kubernetes-daemonset GitHub repo.

      Next, we configure Fluentd using some environment variables:

      • FLUENT_ELASTICSEARCH_HOST: We set this to the Elasticsearch headless Service address defined earlier: elasticsearch.kube-logging.svc.cluster.local. This will resolve to a list of IP addresses for the 3 Elasticsearch Pods. The actual Elasticsearch host will most likely be the first IP address returned in this list. To distribute logs across the cluster, you will need to modify the configuration for Fluentd’s Elasticsearch Output plugin. To learn more about this plugin, consult Elasticsearch Output Plugin.
      • FLUENT_ELASTICSEARCH_PORT: We set this to the Elasticsearch port we configured earlier, 9200.
      • FLUENT_ELASTICSEARCH_SCHEME: We set this to http.
      • FLUENT_UID: We set this to 0 (superuser) so that Fluentd can access the files in /var/log.

      Finally, paste in the following section:

      fluentd.yaml

      . . .
              resources:
                limits:
                  memory: 512Mi
                requests:
                  cpu: 100m
                  memory: 200Mi
              volumeMounts:
              - name: varlog
                mountPath: /var/log
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
                readOnly: true
            terminationGracePeriodSeconds: 30
            volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: varlibdockercontainers
              hostPath:
                path: /var/lib/docker/containers
      

Here we specify a 512 MiB memory limit on the Fluentd Pod, and guarantee it 0.1 vCPU and 200 MiB of memory. You can tune these resource limits and requests depending on your anticipated log volume and available resources.

      Next, we mount the /var/log and /var/lib/docker/containers host paths into the container using the varlog and varlibdockercontainers volumeMounts. These volumes are defined at the end of the block.

      The final parameter we define in this block is terminationGracePeriodSeconds, which gives Fluentd 30 seconds to shut down gracefully upon receiving a SIGTERM signal. After 30 seconds, the containers are sent a SIGKILL signal. The default value for terminationGracePeriodSeconds is 30s, so in most cases this parameter can be omitted. To learn more about gracefully terminating Kubernetes workloads, consult Google's “Kubernetes best practices: terminating with grace.”

      The entire Fluentd spec should look something like this:

      fluentd.yaml

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: fluentd
        namespace: kube-logging
        labels:
          app: fluentd
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: fluentd
        labels:
          app: fluentd
      rules:
      - apiGroups:
        - ""
        resources:
        - pods
        - namespaces
        verbs:
        - get
        - list
        - watch
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: fluentd
      roleRef:
        kind: ClusterRole
        name: fluentd
        apiGroup: rbac.authorization.k8s.io
      subjects:
      - kind: ServiceAccount
        name: fluentd
        namespace: kube-logging
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: fluentd
        namespace: kube-logging
        labels:
          app: fluentd
      spec:
        selector:
          matchLabels:
            app: fluentd
        template:
          metadata:
            labels:
              app: fluentd
          spec:
            serviceAccount: fluentd
            serviceAccountName: fluentd
            tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
            containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v0.12-debian-elasticsearch
              env:
                - name:  FLUENT_ELASTICSEARCH_HOST
                  value: "elasticsearch.kube-logging.svc.cluster.local"
                - name:  FLUENT_ELASTICSEARCH_PORT
                  value: "9200"
                - name: FLUENT_ELASTICSEARCH_SCHEME
                  value: "http"
                - name: FLUENT_UID
                  value: "0"
              resources:
                limits:
                  memory: 512Mi
                requests:
                  cpu: 100m
                  memory: 200Mi
              volumeMounts:
              - name: varlog
                mountPath: /var/log
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
                readOnly: true
            terminationGracePeriodSeconds: 30
            volumes:
            - name: varlog
              hostPath:
                path: /var/log
            - name: varlibdockercontainers
              hostPath:
                path: /var/lib/docker/containers
      

      Once you've finished configuring the Fluentd DaemonSet, save and close the file.

      Now, roll out the DaemonSet using kubectl:

kubectl create -f fluentd.yaml

      You should see the following output:

      Output

serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created
daemonset.extensions/fluentd created

      Verify that your DaemonSet rolled out successfully using kubectl:

kubectl get ds --namespace=kube-logging

      You should see the following status output:

      Output

NAME      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd   3         3         3         3            3           <none>          58s

      This indicates that there are 3 fluentd Pods running, which corresponds to the number of nodes in our Kubernetes cluster.
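To confirm that the agents can reach Elasticsearch, you can tail the logs of the Fluentd Pods using their app: fluentd label. This is an optional sanity check, and the exact log lines vary by image version:

kubectl logs --namespace=kube-logging -l app=fluentd --tail=20

Look for a line indicating a successful connection to elasticsearch.kube-logging.svc.cluster.local on port 9200.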

      We can now check Kibana to verify that log data is being properly collected and shipped to Elasticsearch.
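You can also verify on the Elasticsearch side that log indices are being created. If the port-forward to es-cluster-0 on port 9200 from Step 2 is still running, list the indices; by default, Fluentd's Elasticsearch output writes daily indices named logstash-YYYY.MM.DD:

curl http://localhost:9200/_cat/indices?v

You should see one or more index names beginning with logstash-.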

With the kubectl port-forward to the Kibana Pod still open, navigate to http://localhost:5601.

      Click on Discover in the left-hand navigation menu.

      You should see the following configuration window:

      Kibana Index Pattern Configuration

      This allows you to define the Elasticsearch indices you'd like to explore in Kibana. To learn more, consult Defining your index patterns in the official Kibana docs. For now, we'll just use the logstash-* wildcard pattern to capture all the log data in our Elasticsearch cluster. Enter logstash-* in the text box and click on Next step.

      You'll then be brought to the following page:

      Kibana Index Pattern Settings

      This allows you to configure which field Kibana will use to filter log data by time. In the dropdown, select the @timestamp field, and hit Create index pattern.

      Now, hit Discover in the left hand navigation menu.

      You should see a histogram graph and some recent log entries:

      Kibana Incoming Logs

      At this point you've successfully configured and rolled out the EFK stack on your Kubernetes cluster. To learn how to use Kibana to analyze your log data, consult the Kibana User Guide.

      In the next optional section, we'll deploy a simple counter Pod that prints numbers to stdout, and find its logs in Kibana.

      Step 5 (Optional) — Testing Container Logging

      To demonstrate a basic Kibana use case of exploring the latest logs for a given Pod, we'll deploy a minimal counter Pod that prints sequential numbers to stdout.

Let’s begin by creating the Pod. Open up a file called counter.yaml in your favorite editor:

nano counter.yaml

      Then, paste in the following Pod spec:

      counter.yaml

      apiVersion: v1
      kind: Pod
      metadata:
        name: counter
      spec:
        containers:
        - name: count
          image: busybox
          args: [/bin/sh, -c,
                  'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
      

      Save and close the file.

      This is a minimal Pod called counter that runs a while loop, printing numbers sequentially.

      Deploy the counter Pod using kubectl:

kubectl create -f counter.yaml
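Before heading back to Kibana, you can optionally confirm that the Pod is up and emitting output by reading its logs directly with kubectl:

kubectl logs counter --tail=5

You should see the most recent lines printed by the loop, each containing an incrementing number and a timestamp.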

      Once the Pod has been created and is running, navigate back to your Kibana dashboard.

      From the Discover page, in the search bar enter kubernetes.pod_name:counter. This filters the log data for Pods named counter.

      You should then see a list of log entries for the counter Pod:

      Counter Logs in Kibana

      You can click into any of the log entries to see additional metadata like the container name, Kubernetes node, Namespace, and more.

      Conclusion

      In this guide we've demonstrated how to set up and configure Elasticsearch, Fluentd, and Kibana on a Kubernetes cluster. We've used a minimal logging architecture that consists of a single logging agent Pod running on each Kubernetes worker node.

      Before deploying this logging stack into your production Kubernetes cluster, it’s best to tune the resource requirements and limits as indicated throughout this guide. You may also want to use the X-Pack enabled image with built-in monitoring and security.

      The logging architecture we’ve used here consists of 3 Elasticsearch Pods, a single Kibana Pod (not load-balanced), and a set of Fluentd Pods rolled out as a DaemonSet. You may wish to scale this setup depending on your production use case. To learn more about scaling your Elasticsearch and Kibana stack, consult Scaling Elasticsearch.

      Kubernetes also allows for more complex logging agent architectures that may better suit your use case. To learn more, consult Logging Architecture from the Kubernetes docs.




      How to Make the Most of Black Friday and Cyber Monday’s Tech Deals


      Thanksgiving weekend means quality time, stuffing, pie, and, of course, some of the best shopping deals of the year.

      After the leftovers are put away, it’s time for Black Friday and Cyber Monday. Whether you’re looking to simply cross a few gifts off your list or spend big bucks on a new TV or computer, it’s a good idea to do a little pre-gaming.

      So whip out that wallet, power up your laptop, and follow these 12 strategies to score the best deals.

      Before You Carve the Turkey

      Do Your Homework

      While Cyber Monday used to just take place on — you guessed it — Monday, it has become so popular that many retailers stretch it out for as long as a week. Some sites have different deals every day (even by the hour!). That’s why it’s crucial to come up with a game plan before you load up your browser tabs with your favorite e-tailers.

While many shops keep sale details under wraps until the big day, a little digging online will often turn up leaked ads or deal roundups on shopping sites, so do your homework.

      If you haven’t started browsing yet, you might be behind — according to the National Retail Federation, more than half of holiday shoppers begin planning their gift list in October or earlier.

      Know if the Price Is Right

      Some “price drops” on Black Friday and Cyber Monday aren’t actually deals at all. It’s common for stores to raise their prices just before the big days just so that they can “slash” them. That’s why it’s important to know the going price of the items on your list before the holiday shopping frenzy rolls around.

      To start tracking prices, you can’t go wrong with a classic spreadsheet. There are free sites that help you do the work too, such as Finery and Shoptagr, which will alert you when an item you’re watching goes on sale. There’s even a price tracker devoted just to Amazon: camelcamelcamel.

      Similarly, try to do some comparison shopping on Black Friday and Cyber Monday to ensure that you really are getting the lowest price. The Google Shopping app is an easy way to measure prices and inventory.

      Connect With Brands

      Many brands have their own newsletters (by the way, do you want to sign up for DreamHost’s?) and now’s the best time to add your email address to the list. They’ll often share special deals ahead of time, exclusive offers, and codes for bargains like free shipping. And if you’re really into snooping out savings, follow your favorite brands on Twitter or Facebook.

      Make a List

      It’s too easy to get carried away when you’re snagging great deals that are only available for a limited time. To avoid falling into the “sale trap” — and the buyer’s remorse that comes along with it — create a list of the gifts and items you actually need. While you’re at it, it’s a good idea to also write down how much you expect to spend for each item or person, since going into debt for gifts doesn’t put anyone in the holiday spirit.

      Use Bookmarks

Often only a limited number of products are available at rock-bottom prices on Cyber Monday. To make sure you’re first in that virtual line, bookmark the pages with the deals you’re interested in so you can click through the second the sale starts. Even a few seconds can give you an advantage when millions of people are shopping.

      On Black Friday and Cyber Monday

      Hit the Sales Early

      We’ve all seen those viral videos of people lining up at big box stores before dawn. While you don’t necessarily have to wake up before the roosters, it’s a good idea to get there early. To make sure you don’t miss a deal online, set calendar reminders for time-sensitive deals, especially for products that will be low in inventory.

      Go in Person

      It might seem like Cyber Monday has the upper hand over Black Friday, particularly when it comes to electronics and tech, but that isn’t always the case. Many stores offer in-store exclusives so you’ll have to show up in person to get the prices that aren’t always matched online. Another bonus of dropping by a brick and mortar store? Many have surprise deals that aren’t advertised anywhere else.

      Read the Fine Print

      If a deal is a limited-time offer or you’re afraid your cart will be emptied out before you have time to check out, it’s easy to get a little trigger happy. And impulse buys are a lot more tempting when less dough is at stake. But when discounts are deeper than usual, that sometimes means that the usual customer service rules don’t apply. Be sure to find out return policies before you commit to a deal. After all, if you get stuck with something you can’t return, it won’t be a deal in the end.

      Stay on the Safe Side

      If a deal on a site you’ve never heard of is too good to be true, there’s a chance it really is. Around the holidays, phishing emails and scams become even more common. If a deal in an email looks suspicious, instead of clicking on the links in the email, go directly to the website offering the sale. Big name stores and brands are your best bet when it comes to online safety.

      While You’re Checking Out

      Have Everything Ready to Go

      When deals are hot, e-tailers will commonly give you a countdown for how long you can keep something in your cart. Or it may sell out before you even have time to whip out that credit card. To prevent that, have your card ready by your side, and even better, create a profile on the site before Black Friday and Cyber Monday to speed up the checkout process. If you have discount codes, make sure those are ready too so that you can quickly type them in without having to scan your email or scour the internet for coupon codes.


      Don’t Forget About Shipping Fees

      That flat screen TV online is an amazing deal — until you’re checking out and see the insane shipping fee. Few things are more disappointing on Black Friday and Cyber Monday than to find out that the shipping costs more than the item itself, which can be the case when it comes to larger items.

      And remember: just because you can buy something online doesn’t mean that it will be shipped to you instantly. Items often sell out quickly, meaning that it could take weeks to get restocked. Before you buy, look at the shipping dates to ensure it’ll arrive before the holiday.

      Save Receipts

      Even if you’re shopping in person, not every store will give you a print receipt. Decide in advance how you want to track your spending and keep a record. Whether you opt for old-school receipts or an email version, store them all in one place, particularly for high-ticket purchases, in case you need to make a return or exchange. If you decide on email confirmations, have them all sent to the same email address and create a folder to stash them together. Not having to scroll through all your emails will save you major time later.






      How to Start a Review Site With WordPress


      Reviews have become ubiquitous online, and it’s not hard to see why. Both professional and user reviews provide first-hand information that can help you make informed purchasing decisions. The best part is that anyone with some writing skills and passion can start a review site for themselves.

      A review site is one of the best ways you can use your knowledge and interests to create valuable content. By reviewing products in a particular niche, you can be creative while leveraging your expertise to help readers find the best solutions and services. Plus, you can even earn a decent income at the same time.

      In this article, we’ll talk about the basics of a review site and discuss why you should consider starting one. We’ll also show you what’s needed to make it successful and talk about how you can enhance it using the right theme and plugins. Let’s get to work!

      A Brief Introduction to Review Sites

      An example of a review on AllMusic.com.

      What do you do when you’re considering buying a particular product, but you’re not sure it’s right for you or worth the money? Like many people, you most likely seek out reviews to help answer your questions. Whether these are written by individual users or provided on dedicated sites, they can be immensely helpful when you’re trying to make informed decisions.

      Sites dedicated to offering reviews are aptly referred to as review sites. While this moniker is accurate, it’s also somewhat vague, as it refers to a variety of websites. For example, some sites aggregate many different people’s reviews. Rotten Tomatoes fits into this category, as it combines film reviews from professional critics and users to create an average score for each movie.

      Similarly, sites like TripAdvisor are entirely devoted to user reviews of hotels and other establishments.

      Two user reviews from TripAdvisor.

      However, a review site could also feature content created by one or more specific writers. These sites function similarly to print magazines, in that they usually have a roster of hired authors or freelancers who produce content. They can also vary widely in scope and subject matter, from huge international brands like Eurogamer to one-person operations such as Wake Up For Makeup.

      This broad spectrum of possibilities means it’s both possible and easy for pretty much anyone to create their own review site. We’ll be showing you how to do just that throughout this article.

      The Benefits of Running a Review Site

      In many cases, the main reason you would want to start your own review site is simply that you enjoy the work. Most sites of this nature are run by people with a passion for a particular topic. However, review sites offer a number of more practical benefits as well.

      For one, review sites can be excellent at driving traffic. We mentioned earlier that a lot of people will go looking for reviews before making purchases. So if you can write content that is clear, engaging, and authoritative, you’ll be primed to receive plenty of visitors.

      One of the reasons for this popularity is that review sites are uniquely suited to Search Engine Optimization (SEO). That’s because your posts will almost by default match the keywords users are most likely to search for.

      For example, imagine that you run a review blog about WordPress plugins, and you write a post about Contact Form 7. You’ll most likely name it something to the effect of “Contact Form 7 – Review.” Someone curious about this plugin is most likely going to use a very similar search phrase, such as “contact form 7 review,” which makes it a lot more probable that they’ll stumble across your article.

      In addition to the SEO benefits, a review site also provides you with a lot of freedom over how you structure and display your reviews. You could make your site very basic and just use a standard blog interface, which is familiar to many people and easy to maintain.

      One example of this in action is IsItWP.

The IsItWP.com home page.

However, you could also go bigger and create a more structurally-complex site with an advanced scoring system, multiple reviews per product, and more. For example, HostingAdvice offers granular scores for added precision.

      An example of a review of DreamHost on HostingAdvice.com.

      Finally, a review site is particularly well-suited to being monetized. You have many options — such as including affiliate links alongside your reviews or featuring paid advertisements that are separate from your main content.

      However, it’s critical to remember that you don’t have to (and, in fact, shouldn’t) change the content of your reviews to suit your advertisers. If you’re not honest and frank with your users about the products you’re reviewing, they aren’t going to trust you and won’t stick around for long.

      What to Consider Before Starting a Review Site

      Before you start sharpening your critical wit, you’ll need to do some planning. First of all, you’ll naturally need to decide what the subject of your review site is going to be. As we discussed earlier, an excellent place to start when picking a niche is by considering your own interests.

      This will help you produce more authoritative reviews, as you’ll have some pre-existing knowledge to rely on. After all, few would be interested in reading reviews about board games, for example, if the writer clearly had little understanding of or experience playing them. Being passionate about your chosen topic will also make the overall experience much more enjoyable.

      When it comes to finding a niche you can fill, it’s a good idea to do some market research. You can start by looking at other review sites and investigating forums related to your subject matter, to see what users think of your competitors. This might give you some ideas about how you could tailor your reviews to better serve your target audience. If you can find an angle that no other site is using, you’ll have a better chance of success.

      You should also decide what methods you want to use to monetize your site. This could involve featuring paid advertisements, such as banners, or including affiliate links alongside your reviews. You could also offer exclusive content to those who sign up for a paid subscription.

      Finally, you’ll need to consider the more practical aspects: what will your website look like, and who will actually be writing the reviews? If you’re starting small, you might want to begin with a simple blog and yourself as the sole author. However, you can also go big right away with a more intricate structure and even hire a whole team of writers.

      Naturally, the scope of your site will depend largely on your goals and budget. It’s often best to start smaller and then expand your site over time, as this will minimize risks and enable you to grow organically as you receive more traffic. This is similar to creating a Minimum Viable Product (MVP), where you start with a bare-bones approach, focusing on a simple layout and high-quality content, and then scale it up gradually.

      How to Start a Review Site With WordPress (In 5 Steps)

      Once you have a plan and a niche in place, you’re ready to get busy with the fun part — actually creating your review site. To help you along, we’re going to walk you through the main steps involved.

      We’ll be using WordPress, so you’ll first need to install and set it up, which should only take you a few minutes. After that, you’re ready to get to work!

      Step 1: Pick a Name and Host

      First and foremost, you’ll need to think up a name for your site. This part can be a lot of fun, as you get to be creative in order to find a name that suits your site’s intended tone and branding. While you can pick pretty much any name you want, there are some considerations to keep in mind. For example, your site’s name should be:

      • Memorable. It’s important that your name sticks in people’s memories. Making it short and punchy is a smart way to ensure this.
      • Unique. Naturally, you don’t want your site to get confused with anyone else’s. Once you have a list of possible names, simply use a search engine like Google to ensure that no other site is already using it (or a name that’s too similar).
      • On-brand. Make sure that your site’s name matches its identity and target audience. For example, a ‘quirky,’ modern name might not be ideal if you’re aiming for a straitlaced, professional market. However, that type of name could be perfectly suited to a site with a more casual approach.

      It’s also essential that you can purchase a domain that matches your site’s name. As such, it’s a good idea to use a domain checker to see if your top choices are available at a reasonable price.

      If you’re still struggling to think of a decent name and matching domain, there are also name generation tools that can help you brainstorm ideas. DomainWheel, for example, will create suggested names based on a specific term or category.

      A search on DomainWheel.com.

      Once you have your domain in place, you’ll also need to consider hosting. Since you’re likely expecting a decent amount of traffic, you’ll need a hosting plan that can ensure top-notch performance at all times, and one that can scale as your site grows over time. Our recommendation would be to go with a WordPress-specific hosting plan, as this will make setting up and maintaining your site simple.

      Step 2: Install a Suitable Theme

      Next, you’ll want to consider your site’s appearance. Fortunately, there are plenty of WordPress themes tailored specifically to review sites. While you don’t need to use a dedicated review theme, it can offer you several unique benefits.

      First of all, a review theme will be able to accommodate the layout and style of a review site easily. Many review themes also include specific functionality that can come in handy, like styles for displaying scores or the ability to create lists of your highest-rated reviews.

      One example is the InReview theme.

      Example of a page using the InReview theme.

      This theme enables you to showcase your reviews alongside your final scores. You can also display ratings from your users to give readers a more rounded overview of each item.

      If you want something more stylish and with a magazine-like feel, there are also lots of suitable options. One of our favorites is the GoodLife theme.

      Example of a review using the GoodLife theme.

      With this theme, you can style your reviews using several different templates. Its goal is to help you create a modern, clean look, where the content is the central focus.

      Ultimately, the theme you decide to use depends mainly on your goals and target market. As such, it’s worth spending some time to find the perfect option.

      Step 3: Enhance Your Site With Review Plugins

      With the right theme installed, your site might already be equipped with some useful review features. However, you can improve its functionality even further by adding some select plugins. In this section, we’re going to introduce a few of the best plugins to enhance your review site.

      Let’s start with WP Product Review, which enables you to design a scoring interface.

      The WP Product Review plugin.

      Once you’ve installed this plugin, you can specify whether a post is a review. Then you can assign scores to the post and designate parameters, such as Pros and Cons. Plus, everything can be fully customized with new colors and icons.

      In addition to displaying scoring information on your site, you can also highlight it right in Google’s search results. To do that, you can use All In One Schema Rich Snippets.

      The All In One Schema Rich Snippets plugin.

      This tool will add schema markup to your pages, which will display information such as scores when your posts appear in search results. This can help your content stand out more, which is crucial for encouraging organic traffic.
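      To give you a sense of what this looks like behind the scenes, here’s a minimal, illustrative sketch of the kind of JSON-LD markup a rich snippets plugin embeds in a page’s HTML. The product name, rating values, and author below are placeholders for illustration, not output from any specific plugin:

      <!-- Example schema.org Review markup; all values are illustrative -->
      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "Review",
        "itemReviewed": {
          "@type": "Product",
          "name": "Contact Form 7"
        },
        "reviewRating": {
          "@type": "Rating",
          "ratingValue": "4.5",
          "bestRating": "5"
        },
        "author": {
          "@type": "Person",
          "name": "Jane Reviewer"
        }
      }
      </script>

      Search engines that understand schema.org’s Review type can read fields like ratingValue and render them as star ratings next to your listing. A plugin like this generates the markup from the details you enter, so you won’t need to write any of it by hand.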

      Finally, you may want to give your users the chance to submit their own reviews and scores. One plugin that lets you do this is Ultimate Reviews.

      The Ultimate Reviews plugin.

      This plugin lets you support user reviews, even enabling you to tailor precisely what information they can include. You could implement a simple score-only system, for example, or provide the tools needed to write long-form reviews.

      Naturally, this is only scratching the surface of the plugins that are available. For instance, you can also use a plugin like Reviewer WordPress to create a review comparison table, and Taqyeem to implement a summary box for your reviews. The possibilities are just about endless.

      Step 4: Start Writing Reviews

      Finally, the moment has come to actually start writing your reviews. Of course, we can’t help you much with this part, as you’ll need to rely on your own writing skills and critical thinking. However, to get started you may want to check out our blogging checklist and take a look at our expert blogging tips.

      We also recommend that you create a style guide. This will help you write consistent reviews that follow a specific set of standards, especially when it comes to the style of writing and the criteria you’ll use to rate products.

      A style guide is particularly helpful when you’re bringing in other writers, as it ensures that all posts follow a consistent ruleset. However, you’ll also want each writer’s personal style to shine through, so try not to get too prescriptive and risk stifling their unique voices.

      It’s also a smart idea to have a handful of reviews ready before you launch the site. This will ensure that your site doesn’t feel empty when it goes live. You want to give your new visitors a good first impression, after all, and provide them with a reason to stay around longer.

      Step 5: Share Your Reviews

      Once your site has gone live, you’ll need to make the world aware of its existence. As such, you’ll want to start marketing your website right away, to ensure a steady stream of traffic right out of the gate.

      Naturally, you’ll want to spend some time on SEO and make sure your site has a presence on social media. Share your reviews frequently and encourage your readers to do the same. The more your content is spread around, the more traffic you should see as a result.

      You might also consider submitting your site to a review aggregator. As we mentioned earlier, these sites collect reviews from multiple places to calculate average scores. Being featured on this type of site can help your reviews become more visible and reach new readers.

      Rave Reviews

      If you want to build an audience and make money online, while working with a subject matter that interests you, a review site is an ideal vehicle. By creating well-written and engaging reviews, you can provide valuable information to your readers, and create ample opportunity to monetize your work.

      Do you have any questions about starting your own review site with WordPress? Join the DreamHost Community today and ask away!


