
      Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode



      Linode offers several pathways for users to easily deploy a Kubernetes cluster. If you prefer the command line, you can create a Kubernetes cluster with one command using the Linode CLI’s k8s-alpha plugin, and Terraform. Or, if you prefer a full featured GUI, Linode’s Rancher integration enables you to deploy and manage Kubernetes clusters with a simple web interface. The Linode Kubernetes Engine, currently under development with an early access beta version on its way this summer, allows you to spin up a Kubernetes cluster with Linode handling the management and maintenance of your control plane. These are all great options for production ready deployments.

      Kubeadm is a cloud-provider-agnostic tool that automates many of the tasks required to get a cluster up and running. Users of kubeadm can run a few simple commands on individual servers to turn them into a Kubernetes cluster consisting of a master node and worker nodes. This guide will walk you through installing kubeadm and using it to deploy a Kubernetes cluster on Linode. While the kubeadm approach requires more manual steps than the other Kubernetes cluster creation pathways offered by Linode, it is covered here as a way to dive deeper into the various components that make up a Kubernetes cluster and the ways in which they interact with each other to provide a scalable and reliable container orchestration mechanism.

      Note

      This guide’s example instructions will result in the creation of three billable Linodes. Information on how to tear down the Linodes is provided at the end of the guide. Interacting with the Linodes via the command line will provide the most opportunity for learning; however, this guide is written so that users can also benefit by reading along.

      Before You Begin

      1. Deploy three Linodes running Ubuntu 18.04 with the following system requirements:

        • One Linode to use as the master node with 4GB RAM and 2 CPU cores.
        • Two Linodes to use as worker nodes, each with 1GB RAM and 1 CPU core.
      2. Follow the Getting Started and the Securing Your Server guides for instructions on setting up your Linodes. The steps in this guide assume the use of a limited user account with sudo privileges.

      Note

      When following the Getting Started guide, make sure that each Linode is using a different hostname. Not following this guideline will leave you unable to join some or all nodes to the cluster in a later step.
      3. Disable swap memory on your Linodes. Kubernetes requires that you disable swap memory on any cluster nodes to prevent the Kubernetes scheduler (kube-scheduler) from ever sending a pod to a node that has run out of CPU/memory or reached its designated CPU/memory limit.

        sudo swapoff -a
        

        Verify that your swap has been disabled. You should expect to see a value of 0 returned.

        cat /proc/meminfo | grep 'SwapTotal'
        

        To learn more about managing compute resources for containers, see the official Kubernetes documentation.

      4. Read the Beginner’s Guide to Kubernetes to familiarize yourself with the major components and concepts of Kubernetes. The current guide assumes a working knowledge of common Kubernetes concepts and terminology.

      Build a Kubernetes Cluster

      Kubernetes Cluster Architecture

      A Kubernetes cluster consists of a master node and worker nodes. The master node hosts the control plane, which is the combination of all the components that provide it the ability to maintain the desired cluster state. This cluster state is defined by manifest files and the kubectl tool. While the control plane components can be run on any cluster node, it is a best practice to isolate the control plane on its own node and to run any application containers on a separate worker node. A cluster can have a single worker node or up to 5000. Each worker node must be able to maintain running containers in a pod and be able to communicate with the master node’s control plane.
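      To make the idea of declared cluster state concrete, below is a minimal sketch of a manifest and the kubectl command that submits it. The file name and pod name are arbitrary examples, and the apply step assumes a working cluster and a configured kubectl, both of which are set up later in this guide:

        cat <<EOF > nginx-pod.yaml
        apiVersion: v1
        kind: Pod
        metadata:
          name: nginx-example
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
        EOF

        kubectl apply -f nginx-pod.yaml

      Kubernetes then works continuously to make the cluster’s actual state match the state declared in the manifest.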

      The table below provides a list of the Kubernetes tooling you will need to install on your master and worker nodes in order to meet the minimum requirements for a functioning Kubernetes cluster as described above.

      | Tool | Description | Master Node | Worker Nodes |
      |------|-------------|-------------|--------------|
      | kubeadm | Provides a simple way to create a Kubernetes cluster by automating the tasks required to get a cluster up and running. New Kubernetes users with access to a cloud hosting provider, like Linode, can use kubeadm to build out a playground cluster. kubeadm is also used as a foundation for more mature Kubernetes deployment tooling. | x | x |
      | Container runtime | Responsible for running the containers that make up a cluster’s pods. This guide uses Docker as the container runtime. | x | x |
      | kubelet | Ensures that all pod containers running on a node are healthy and meet the specifications for a pod’s desired behavior. | x | x |
      | kubectl | A command line tool used to manage a Kubernetes cluster. | x | x |
      | Control plane | The set of services that form the Kubernetes master and allow it to control the cluster. kubeadm runs the control plane services as containers on the master node. The control plane is created when you initialize kubeadm later in this guide. | x | |

      Install the Container Runtime: Docker

      Docker is the software responsible for running the pod containers on each node. You can use other container runtime software with Kubernetes, such as Containerd and CRI-O. You will need to install Docker on all three Linodes.

      These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

      1. Remove any older installations of Docker that may be on your system:

        sudo apt remove docker docker-engine docker.io
        
      2. Make sure you have the necessary packages to allow the use of Docker’s repository:

        sudo apt install apt-transport-https ca-certificates curl software-properties-common
        
      3. Add Docker’s GPG key:

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        
      4. Verify the fingerprint of the GPG key:

        sudo apt-key fingerprint 0EBFCD88
        

        You should see output similar to the following:

          
        pub   4096R/0EBFCD88 2017-02-22
                Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
        uid                  Docker Release (CE deb) <docker@docker.com>
        sub   4096R/F273FCD8 2017-02-22
        
        
      5. Add the stable Docker repository:

        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        
      6. Update your package index and install Docker CE:

        sudo apt update
        sudo apt install docker-ce
        
      7. Add your limited Linux user account to the docker group. Replace $USER with your username:

        sudo usermod -aG docker $USER
        

        Note

        After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.

      8. Check that the installation was successful by running the built-in “Hello World” program:

        sudo docker run hello-world
        
      9. Set up the Docker daemon to use systemd as the cgroup driver, instead of the default cgroupfs. This is a recommended step so that the kubelet and Docker both use the same cgroup manager, which makes it easier for Kubernetes to know which resources are available on your cluster’s nodes.

        sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2"
        }
        EOF'
        
      10. Create a systemd directory for Docker:

        sudo mkdir -p /etc/systemd/system/docker.service.d
        
      11. Restart Docker:

        sudo systemctl daemon-reload
        sudo systemctl restart docker
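
      If you would like to confirm that Docker is now using the systemd cgroup driver configured above, you can check the daemon information (an optional sanity check):

        sudo docker info | grep -i cgroup

      The output should include a line reading Cgroup Driver: systemd.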
        

      Install kubeadm, kubelet, and kubectl

      Complete the steps outlined in this section on all three Linodes.

      1. Update the system and install the required dependencies for installation:

        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        
      2. Add the required GPG key to your apt-sources keyring to authenticate the Kubernetes related packages you will install:

        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
        
      3. Add Kubernetes to the package manager’s list of sources:

        sudo bash -c "cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
        deb https://apt.kubernetes.io/ kubernetes-xenial main
        EOF"
        
      4. Update apt, install Kubeadm, Kubelet, and Kubectl, and hold the installed packages at their installed versions:

        sudo apt-get update
        sudo apt-get install -y kubelet kubeadm kubectl
        sudo apt-mark hold kubelet kubeadm kubectl
        
      5. Verify that kubeadm, kubelet, and kubectl have installed by retrieving their version information. Each command should return version information about each package.

        kubeadm version
        kubelet --version
        kubectl version
        

      Set up the Kubernetes Control Plane

      After installing the Kubernetes related tooling on all your Linodes, you are ready to set up the Kubernetes control plane on the master node. The control plane is responsible for allocating resources to your cluster, maintaining the health of your cluster, and ensuring that it meets the minimum requirements you designate for the cluster.

      The primary components of the control plane are the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm provides a way to easily initialize the Kubernetes master node with all the necessary control plane components. For more information on each control plane component, see the Beginner’s Guide to Kubernetes.

      In addition to the baseline control plane components, there are several add-ons that can be installed on the master node to access additional cluster features. You will need to install a networking and network policy provider add-on that implements Kubernetes’ network model on the cluster’s pod network.

      This guide will use Calico as the pod network add-on. Calico is a secure and open source L3 networking and network policy provider for containers. There are several other network and network policy providers to choose from. To view a full list of providers, refer to the official Kubernetes documentation.

      Note

      kubeadm only supports Container Network Interface (CNI) based networks. CNI consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers.

      1. Initialize kubeadm on the master node. This command runs checks against the node to ensure it contains all required Kubernetes dependencies; if the checks pass, it then installs the control plane components.

        When issuing this command, it is necessary to set the pod network range that Calico will use to allow your pods to communicate with each other. It is recommended to use the private IP address space, 10.2.0.0/16.

        Note

        The pod network IP range should not overlap with the service IP network range. The default service IP address range is 10.96.0.0/12. You can provide an alternative service IP address range using the --service-cidr=10.97.0.0/12 option when initializing kubeadm. Replace 10.97.0.0/12 with the desired service IP range.

        For a full list of available kubeadm initialization options, see the official Kubernetes documentation.

        sudo kubeadm init --pod-network-cidr=10.2.0.0/16
        

        You should see a similar output:

          
        Your Kubernetes control-plane has initialized successfully!
        
        To start using your cluster, you need to run the following as a regular user:
        
          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
        You should now deploy a pod network to the cluster.
        Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
          https://kubernetes.io/docs/concepts/cluster-administration/addons/
        
        Then you can join any number of worker nodes by running the following on each as root:
        
        kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
              
        

        The kubeadm join command will be used in the Join a Worker Node to the Cluster section of this guide to bootstrap the worker nodes to the Kubernetes cluster. This command should be kept handy for later use. Below is a description of the required options you will need to pass in with the kubeadm join command:

        • The master node’s IP address and the Kubernetes API server’s port number. In the example output, this is 192.0.2.0:6443. The Kubernetes API server’s port number is 6443 by default on all Kubernetes installations.
        • A bootstrap token. The bootstrap token has a 24-hour TTL (time to live). A new bootstrap token can be generated if your current token expires.
        • A CA key hash. This is used to verify the authenticity of the data retrieved from the Kubernetes API server during the bootstrap process.
      2. Copy the admin.conf configuration file to your limited user account. This file allows you to communicate with your cluster via kubectl and provides superuser privileges over the cluster. It contains a description of the cluster, users, and contexts. Copying the admin.conf to your limited user account will provide you with administrative privileges over your cluster.

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
      3. Install the necessary Calico manifests to your master node and apply them using kubectl. The first file, rbac-kdd.yaml, works with Kubernetes’ role-based access control (RBAC) to provide Calico components access to necessary parts of the Kubernetes API. The second file, calico.yaml, configures a self-hosted Calico installation that uses the Kubernetes API directly as the datastore (instead of etcd).

        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
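
      Applying these manifests schedules the Calico pods in the kube-system namespace. If you would like to watch them start, you can optionally run the following command; it may take a minute or two for all pods to reach the Running state. Press CTRL+C to stop watching.

        kubectl get pods -n kube-system -w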
        

      Inspect the Master Node with Kubectl

      After completing the previous section, your Kubernetes master node is ready with all the necessary components to manage a cluster. To gain a better understanding of all the parts that make up the master’s control plane, this section will walk you through inspecting your master node. If you have not yet reviewed the Beginner’s Guide to Kubernetes, it will be helpful to do so prior to continuing with this section as it relies on the understanding of basic Kubernetes concepts.

      1. View the current state of all nodes in your cluster. At this stage, the only node you should expect to see is the master node, since worker nodes have yet to be bootstrapped. A STATUS of Ready indicates that the master node contains all necessary components, including the pod network add-on, to start managing clusters.

        kubectl get nodes
        

        Your output should resemble the following:

          
        NAME        STATUS     ROLES     AGE   VERSION
        kube-master   Ready     master      1h    v1.14.1
            
        
      2. Inspect the available namespaces in your cluster.

        kubectl get namespaces
        

        Your output should resemble the following:

          
        NAME              STATUS   AGE
        default           Active   23h
        kube-node-lease   Active   23h
        kube-public       Active   23h
        kube-system       Active   23h
            
        

        Below is an overview of each namespace installed by default on the master node by kubeadm:

        • default: The default namespace contains objects with no other assigned namespace. By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.
        • kube-system: The namespace for objects created by the Kubernetes system. This includes all resources used by the master node.
        • kube-public: This namespace is created automatically and is readable by all users. It contains information, like certificate authority data (CA), that helps kubeadm join and authenticate worker nodes.
        • kube-node-lease: The kube-node-lease namespace contains lease objects that are used by kubelet to determine node health. kubelet creates and periodically renews a Lease on a node. The node lifecycle controller treats this lease as a health signal. kube-node-lease was released to beta in Kubernetes 1.14.
      3. View all resources available in the kube-system namespace. The kube-system namespace contains the widest range of resources, since it houses all control plane resources. Replace kube-system with another namespace to view its corresponding resources.

        kubectl get all -n kube-system
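
      Namespaces are not limited to the defaults listed above; you can create your own to group related resources. A brief sketch, using an arbitrary example name:

        kubectl create namespace example-apps
        kubectl get all -n example-apps
        kubectl delete namespace example-apps

      The second command reports that no resources were found, since nothing has been deployed to the new namespace yet; the final command removes it again.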
        

      Join a Worker Node to the Cluster

      Now that your Kubernetes master node is set up, you can join worker nodes to your cluster. In order for a worker node to join a cluster, it must trust the cluster’s control plane, and the control plane must trust the worker node. This trust is managed via a shared bootstrap token and a certificate authority (CA) key hash. kubeadm handles the exchange between the control plane and the worker node. At a high-level the worker node bootstrap process is the following:

      1. kubeadm retrieves information about the cluster from the Kubernetes API server. The bootstrap token and CA key hash are used to ensure the information originates from a trusted source.

      2. kubelet can take over and begin the bootstrap process, since it has the necessary cluster information retrieved in the previous step. The bootstrap token is used to gain access to the Kubernetes API server and submit a certificate signing request (CSR), which is then signed by the control plane.

      3. The worker node’s kubelet is now able to connect to the Kubernetes API server using the node’s established identity.

      Before continuing, you will need to make sure that you know your Kubernetes API server’s IP address, that you have a bootstrap token, and a CA key hash. This information was provided when kubeadm was initialized on the master node in the Set up the Kubernetes Control Plane section of this guide. If you no longer have this information, you can regenerate the necessary information from the master node.

      Regenerate a Bootstrap Token

      These commands should be issued from your master node.

      1. Generate a new bootstrap token and display the kubeadm join command with the necessary options to join a worker node to the master node’s control plane:

        kubeadm token create --print-join-command
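
      The printed command contains a newly generated token along with the existing CA key hash. If you only need to recompute the CA key hash, it can be derived from the cluster’s CA certificate on the master node; the following sketch uses the openssl approach described in the kubeadm documentation:

        openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
            openssl dgst -sha256 -hex | sed 's/^.* //'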
        

      Follow the steps below on each node you would like to bootstrap to the cluster as a worker node.

      1. SSH into the Linode that will be used as a worker node in the Kubernetes cluster.

        ssh username@192.0.2.1
        
      2. Join the node to your cluster using kubeadm. Ensure you replace 192.0.2.0:6443 with the IP address for your master node along with its Kubernetes API server’s port number, udb8fn.nih6n1f1aijmbnx5 with your bootstrap token, and sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26 with your CA key hash. The bootstrap process will take a few moments.

        sudo kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
        

        When the bootstrap process has completed, you should see a similar output:

          
          This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
        
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
              
        
      3. Repeat the steps outlined above on the second worker node to bootstrap it to the cluster.

      4. SSH into the master node and verify the worker nodes have joined the cluster:

         kubectl get nodes
        

        You should see a similar output.

          
        NAME          STATUS   ROLES    AGE     VERSION
        kube-master   Ready    master   1d22h   v1.14.1
        kube-node-1   Ready    <none>   1d22h   v1.14.1
        kube-node-2   Ready    <none>   1d22h   v1.14.1
              
        

      Next Steps

      Now that you have a Kubernetes cluster up and running, you can begin experimenting with the various ways to configure pods, group resources, and deploy services that are exposed to the public internet. To help you get started with this, move on to follow along with the Deploy a Static Site on Linode using Kubernetes guide.
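
      As a quick first experiment, you can create a Deployment from the master node and expose it with a NodePort Service. This is only a sketch; the deployment name and image are arbitrary examples:

        kubectl create deployment nginx --image=nginx
        kubectl get pods -o wide
        kubectl expose deployment nginx --port=80 --type=NodePort
        kubectl get service nginx

      The NodePort shown in the service output can typically be reached on any node’s IP address. When you are done, remove the test resources with kubectl delete service nginx and kubectl delete deployment nginx.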

      Tear Down Your Cluster

      If you are done experimenting with your Kubernetes Cluster, be sure to remove the Linodes you have running in order to avoid being further billed for them. See the Removing Services section of the Billing and Payments guide.


      Getting Started with Puppet – Basic Installation and Setup



      Puppet is a configuration management tool that simplifies system administration. Puppet uses a client/server model in which your managed nodes, running a process called the Puppet agent, talk to and pull down configuration profiles from a Puppet master.

      Puppet deployments can range from small groups of servers up to enterprise-level operations. This guide will demonstrate how to install Puppet 6.1 on three servers:

      • A Puppet master running Ubuntu 18.04
      • A managed Puppet node running Ubuntu 18.04
      • A managed Puppet node running CentOS 7

      After installation, the next section will show you how to secure these servers via Puppet. This section will demonstrate core features of the Puppet language.

      Note

      Most guides will instruct you to follow the How to Secure your Server guide before proceeding. Because Puppet will be used to perform this task, you should begin this guide as the root user. A limited user with administrative privileges will be configured via Puppet in later steps.

      Before You Begin

      The following table displays example system information for the servers that will be deployed in this guide:

      | Description | OS | Hostname | FQDN | IP |
      |-------------|----|----------|------|----|
      | Puppet master | Ubuntu 18.04 | puppet | puppet.example.com | 192.0.2.2 |
      | Node 1 (Ubuntu) | Ubuntu 18.04 | puppet-agent-ubuntu | puppet-agent-ubuntu.example.com | 192.0.2.3 |
      | Node 2 (CentOS) | CentOS 7 | puppet-agent-centos | puppet-agent-centos.example.com | 192.0.2.4 |

      You can choose different hostnames and fully qualified domain names (FQDN) for each of your servers, and the IP addresses for your servers will be different from the example addresses listed. You will need to have a registered domain name in order to specify FQDNs for your servers.

      Throughout this guide, commands and code snippets will reference the values displayed in this table. Wherever such a value appears, replace it with your own value.

      Create your Linodes

      1. Create three Linodes corresponding to the servers listed in the table above. Your Puppet master Linode should have at least four CPU cores; the Linode 8GB plan is recommended. The two other nodes can be of any plan size, depending on how you intend to use them after Puppet is installed and configured.

      2. Configure your timezone on your master and agent nodes so that they all have the same time data.

      3. Set the hostname for each server. (A brief sketch of both of these steps appears after this list.)

      4. Set the FQDN for each Linode by editing the servers’ /etc/hosts files.

        Example content for the hosts file

        You can model the contents of your /etc/hosts files on these snippets:

        Master
        
        127.0.0.1   localhost
        192.0.2.2   puppet.example.com puppet
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        Node 1 (Ubuntu)
        
        127.0.0.1   localhost
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        Node 2 (CentOS)
        
        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos
      5. Set up DNS records for your Linodes’ FQDNs. For each Linode, create a new A record with the name specified by its FQDN and assign it to that Linode’s IP address.

        If you don’t use Linode’s name servers for your domain, consult your name server authority’s website for instructions on how to edit your DNS records.
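
      As referenced in step 3 above, the timezone and hostname can be set with systemd’s standard tools. A minimal sketch, run as root on each server; the timezone and hostname shown are examples, so substitute your own values:

        timedatectl set-timezone 'America/New_York'
        hostnamectl set-hostname puppet

      You can list valid timezone names with timedatectl list-timezones.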


      Puppet Master

      Install the Puppet Server Software

      The Puppet master runs the puppetserver service, which is responsible for compiling and supplying configuration profiles to your managed nodes.

      The puppetserver service has the Puppet agent service as a dependency (which is just called puppet when running on your system). This means that the agent software will also be installed and can be run on your master. Because your master can run the agent service, you can configure your master via Puppet just as you can configure your other managed nodes.

      1. Log in to your Puppet master via SSH (as root):

        ssh root@puppet.example.com
        
      2. Download the Puppet repository, update your system packages, and install puppetserver:

        wget https://apt.puppetlabs.com/puppet-release-bionic.deb
        dpkg -i puppet-release-bionic.deb
        apt update
        apt install puppetserver
        

      Configure the Server Software

      1. Use the puppet config command to set values for the dns_alt_names setting:

        /opt/puppetlabs/bin/puppet config set dns_alt_names 'puppet,puppet.example.com' --section main
        

        If you inspect the configuration file, you’ll see that the setting has been added:

        cat /etc/puppetlabs/puppet/puppet.conf
        
          
        [main]
        dns_alt_names = puppet,puppet.example.com
        # ...
        
        

        Note

        The puppet command by default is not added to your PATH. Using Puppet’s interactive commands requires a full file path. To avoid this, update your PATH for your existing shell session:

        export PATH=/opt/puppetlabs/bin:$PATH
        

        A more permanent solution would be to add this to your .profile or .bashrc files.

      2. Update your Puppet master’s /etc/hosts to resolve your managed nodes’ IP addresses. For example, your /etc/hosts file might look like the following:

        /etc/hosts
        
        127.0.0.1   localhost
        192.0.2.2   puppet.example.com puppet
        
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters


      3. Start and enable the puppetserver service:

        systemctl start puppetserver
        systemctl enable puppetserver
        

        By default, the Puppet master listens for client connections on port 8140. If the puppetserver service fails to start, check that the port is not already in use:

        netstat -anpl | grep 8140
        

      Puppet Agents

      Install Puppet Agent

      1. On your managed node running Ubuntu 18.04, install the puppet-agent package:

        wget https://apt.puppetlabs.com/puppet-release-bionic.deb
        dpkg -i puppet-release-bionic.deb
        apt update
        apt install puppet-agent
        
      2. On your managed node running CentOS 7, enter:

        rpm -Uvh https://yum.puppet.com/puppet/puppet-release-el-7.noarch.rpm
        yum install puppet-agent
        

      Configure Puppet Agent

      1. Modify your managed nodes’ hosts files to resolve the Puppet master’s IP. To do so, add a line like:

        /etc/hosts
        
        192.0.2.2    puppet.example.com puppet

        Example content for the hosts file

        You can model the contents of your managed nodes’ /etc/hosts files on the following snippets. These incorporate the FQDN declarations described in the Create your Linodes section:

        Node 1 (Ubuntu)
        
        127.0.0.1   localhost
        192.0.2.3   puppet-agent-ubuntu.example.com puppet-agent-ubuntu
        
        192.0.2.2   puppet.example.com puppet
        
        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        Node 2 (CentOS)
        
        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        192.0.2.4   puppet-agent-centos.example.com puppet-agent-centos
        
        192.0.2.2   puppet.example.com puppet
      2. On each managed node, use the puppet config command to set the value for your server setting to the FQDN of the master:

        /opt/puppetlabs/bin/puppet config set server 'puppet.example.com' --section main
        

        If you inspect the configuration file on the nodes, you’ll see that the setting has been added:

        cat /etc/puppetlabs/puppet/puppet.conf
        
          
        [main]
        server = puppet.example.com
        # ...
        
        
      3. Use the puppet resource command to start and enable the Puppet agent service:

        /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
        

        Note

        On systemd systems, the above command is equivalent to using these two systemctl commands:

        systemctl start puppet
        systemctl enable puppet
        

      Generate and Sign Certificates

      Before your managed nodes can receive configurations from the master, they first need to be authenticated:

      1. On your Puppet agents, generate a certificate for the Puppet master to sign:

        /opt/puppetlabs/bin/puppet agent -t
        

        This command will output an error, stating that no certificate has been found. This error is because the generated certificate needs to be approved by the Puppet master.

      2. Log in to your Puppet master and list the certificates that need approval:

        /opt/puppetlabs/bin/puppetserver ca list
        

        It should output a list with your agent nodes’ hostnames.

      3. Approve the certificates:

        /opt/puppetlabs/bin/puppetserver ca sign --certname puppet-agent-ubuntu.example.com,puppet-agent-centos.example.com
        
      4. Return to the Puppet agent nodes and run the Puppet agent again:

        /opt/puppetlabs/bin/puppet agent -t
        

        You should see something like the following:

          
        Info: Downloaded certificate for hostname.example.com from puppet
        Info: Using configured environment 'production'
        Info: Retrieving pluginfacts
        Info: Retrieving plugin
        Info: Retrieving locales
        Info: Caching catalog for hostname.example.com
        Info: Applying configuration version '1547066428'
        Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
        Notice: Applied catalog in 0.02 seconds
        
        

      Add Modules to Configure Agent Nodes

      The Puppet master and agent nodes are now functional, but they are not secure. Based on concepts from the How to Secure your Server guide, a limited user and a firewall should be configured. This can be done on all nodes through the creation of basic Puppet modules, shown below.

      Note

      This is not meant to provide a basis for a fully-hardened server, and is intended only as a starting point. Alter and add firewall rules and other configuration options, depending on your specific needs.

      Puppet modules are Puppet’s prescribed way of organizing configuration code to serve specific purposes, like installing and configuring an application. You can create custom modules, or you can download and use modules published on the Puppet Forge.
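
      For example, you can list the modules currently available to the Puppet server, whether custom or installed from the Forge, with the module subcommand on the master:

        /opt/puppetlabs/bin/puppet module list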

      Add a Limited User

      To create a new limited user on your nodes, you will create and apply a new module called accounts. This module will employ the user resource.

      1. From the Puppet master, navigate to the /etc/puppetlabs/code/environments/production/modules directory. When a managed node requests its configuration from the master, the Puppet server process will look in this location for your modules:

        cd /etc/puppetlabs/code/environments/production/modules/
        
      2. Create the directory for a new accounts module:

        mkdir accounts
        cd accounts
        
      3. Create the following directories inside the accounts module:

        mkdir {examples,files,manifests,templates}
        
        | Directory | Description |
        |-----------|-------------|
        | manifests | The Puppet code which powers the module |
        | files | Static files to be copied to managed nodes |
        | templates | Template files to be copied to managed nodes that can be customized with variables |
        | examples | Example code which shows how to use the module |

        Note

        Review Puppet’s Module fundamentals article for more information on how a module is structured.
      4. Navigate to the manifests directory:

        cd manifests
        
      5. Any file which contains Puppet code is called a manifest, and each manifest file ends in .pp. When located inside a module, a manifest should only define one class. If a module’s manifests directory has an init.pp file, the class definition it contains is considered the main class for the module. The class definition inside init.pp should have the same name as the module.

        Create an init.pp file with the contents of the following snippet. Replace all instances of username with a username of your choosing:

        accounts/manifests/init.pp
        
        class accounts {
        
          user { 'username':
            ensure      => present,
            home        => '/home/username',
            shell       => '/bin/bash',
            managehome  => true,
            gid         => 'username',
          }
        
        }
        | Option | Description |
        |--------|-------------|
        | ensure | Ensures that the user exists if set to present, or does not exist if set to absent |
        | home | The path for the user’s home directory |
        | managehome | Controls whether a home directory should be created when creating the user |
        | shell | The path to the shell for the user |
        | gid | The user’s primary group |
      6. Although the class declares what the user’s primary group should be, it will not create the group itself. Create a new file called groups.pp inside the manifests directory with the following contents. Replace username with your chosen username:

        accounts/manifests/groups.pp
        
        class accounts::groups {
        
          group { 'username':
            ensure  => present,
          }
        
        }
      7. Your accounts class can declare your new accounts::groups class for use within the accounts class scope. Open your init.pp in your editor and enter a new include declaration at the beginning of the class:

        accounts/manifests/init.pp
        
        class accounts {
        
          include accounts::groups
        
          # ...
        
        }
      8. The new user should have administrative privileges. Because we have agent nodes on both Debian- and Red Hat-based systems, the new user needs to be in the sudo group on Debian systems, and the wheel group on Red Hat systems.

        This value can be set dynamically through the use of Puppet facts. The facts system collects system information about your nodes and makes it available in your manifests.

        Add a selector statement to the top of your accounts class:

        accounts/manifests/init.pp
        
        class accounts {
        
          $rootgroup = $osfamily ? {
            'Debian'  => 'sudo',
            'RedHat'  => 'wheel',
            default   => warning('This distribution is not supported by the Accounts module'),
          }
        
          include accounts::groups
        
          # ...
        
        }

        This code defines the value for the $rootgroup variable by checking the value of $osfamily, which is one of Puppet’s core facts. If the value for $osfamily does not match Debian or Red Hat, the default value will output a warning that the distribution is not supported by this module. (A quick way to inspect this fact on a node is shown after this list.)

        Note

        The Puppet Configuration Language executes code from top to bottom. Because the user resource declaration will reference the $rootgroup variable, you must define $rootgroup before the user declaration.

      9. Update the user resource to include the groups option as follows:

        accounts/manifests/init.pp
        
        # ...
        
        user { 'username':
          ensure      => present,
          home        => '/home/username',
          shell       => '/bin/bash',
          managehome  => true,
          gid         => 'username',
          groups      => "$rootgroup",
        }
        
        # ...

        The value "$rootgroup" is enclosed in double quotes " " instead of single quotes ' ' because it is a variable which needs to be interpolated in your code.

      10. The final value that needs to be added is the user’s password. Since we do not want to store it in plain text, the password should be supplied to Puppet as a password hash. Generate a suitable hash with the openssl command:

        openssl passwd -1
        

        You will be prompted to enter your password. A hashed password will be output. Copy this value to your clipboard.

      11. Update the user resource to include the password option as follows; insert your copied password hash as the value for the option:

        accounts/manifests/init.pp
        
        # ...
        
        user { 'username':
          ensure      => present,
          home        => '/home/username',
          shell       => '/bin/bash',
          managehome  => true,
          gid         => 'username',
          groups      => "$rootgroup",
          password    => 'your_password_hash',
        }
        
        # ...

        Caution

        The hashed password must be included in single quotes ' '.

      12. After saving your changes, use the Puppet parser to ensure that the code is correct:

        /opt/puppetlabs/bin/puppet parser validate init.pp
        

        Any errors that need to be addressed will be logged to standard output. If nothing is returned, your code is valid.

      13. Navigate to the examples directory and create another init.pp file. This example manifest simply declares the accounts class so that the module can be tested:

        cd ../examples
        

        accounts/examples/init.pp

        include accounts
      14. While still in the examples directory, test the module:

        /opt/puppetlabs/bin/puppet apply --noop init.pp
        

        Note

        The --noop parameter prevents Puppet from actually applying the module to your system and making any changes.

        It should return:

          
        Notice: Compiled catalog for puppet.example.com in environment production in 0.26 seconds
        Notice: /Stage[main]/Accounts::Groups/Group[username]/ensure: current_value absent, should be present (noop)
        Notice: Class[Accounts::Groups]: Would have triggered 'refresh' from 1 events
        Notice: /Stage[main]/Accounts/User[username]/ensure: current_value absent, should be present (noop)
        Notice: Class[Accounts]: Would have triggered 'refresh' from 1 events
        Notice: Stage[main]: Would have triggered 'refresh' from 2 events
        Notice: Finished catalog run in 0.02 seconds
        
        
      15. Again from the examples directory, run puppet apply to make these changes to the Puppet master server:

        /opt/puppetlabs/bin/puppet apply init.pp
        

        Puppet will create your limited Linux user on your master.

      16. Log out as root and log in to the Puppet master as your new user.
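
      As referenced in step 8, the $osfamily value used by the selector comes from facter, which you can query directly on any node where the Puppet agent software is installed:

        /opt/puppetlabs/bin/facter osfamily

      On the Ubuntu nodes this returns Debian, and on the CentOS node it returns RedHat.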

      Edit SSH Settings

      Although a new limited user has successfully been added to the Puppet master, it is still possible to log in to the system as root. To properly secure your system, root access should be disabled.

      Note

      Because you are now logged in to the Puppet master as a limited user, you will need to execute commands and edit files with the user’s sudo privileges.

      1. Navigate to the files directory within the accounts module:

        cd /etc/puppetlabs/code/environments/production/modules/accounts/files
        
      2. Copy your system’s existing sshd_config file to this directory:

        sudo cp /etc/ssh/sshd_config .
        
      3. Open the file in your editor (making sure that you open it with sudo privileges) and set the PermitRootLogin value to no:

        accounts/files/sshd_config

        PermitRootLogin no
      4. Navigate back to the manifests directory:

        cd ../manifests
        
      5. Create a new manifest called ssh.pp. Use the file resource to replace the default SSH configuration file with one managed by Puppet:

        accounts/manifests/ssh.pp
        
        class accounts::ssh {
        
          file { '/etc/ssh/sshd_config':
            ensure  => present,
            source  => 'puppet:///modules/accounts/sshd_config',
          }
        
        }

        Note

        The files directory is omitted from the source line because the files folder is the default location of files within a module. For more information on the format used to access resources in a module, refer to the official Puppet module documentation.
      6. Create a second resource to restart the SSH service and set it to run whenever sshd_config is changed. This will also require a selector statement because the SSH service is named ssh on Debian systems and sshd on Red Hat systems:

        accounts/manifests/ssh.pp
        
        class accounts::ssh {
        
          $sshname = $osfamily ? {
            'Debian'  => 'ssh',
            'RedHat'  => 'sshd',
            default   => warning('This distribution is not supported by the Accounts module'),
          }
        
          file { '/etc/ssh/sshd_config':
            ensure  => present,
            source  => 'puppet:///modules/accounts/sshd_config',
            notify  => Service["$sshname"],
          }
        
          service { "$sshname":
            hasrestart  => true,
          }
        
        }


      7. Include the accounts::ssh class within the accounts class in init.pp:

        accounts/manifests/init.pp
        
        class accounts {
        
          # ...
        
          include accounts::groups
          include accounts::ssh
        
          # ...
        
        }

        The complete init.pp

        The contents of your init.pp should now look like the following snippet:

        accounts/manifests/init.pp
        
        class accounts {
        
            $rootgroup = $osfamily ? {
                'Debian' => 'sudo',
                'RedHat' => 'wheel',
                default => warning('This distro not supported by Accounts module'),
            }
        
            include accounts::groups
            include accounts::ssh
        
            user { 'username':
                ensure  => present,
                home    => '/home/username',
                shell   => '/bin/bash',
                managehome  => true,
                gid     => 'username',
                groups  => "$rootgroup",
                password => 'your_password_hash'
            }
        
        }
      8. Run the Puppet parser to test the syntax of the new class, then navigate to the examples directory to test and run the update to your accounts class:

        sudo /opt/puppetlabs/bin/puppet parser validate ssh.pp
        cd ../examples
        sudo /opt/puppetlabs/bin/puppet apply --noop init.pp
        sudo /opt/puppetlabs/bin/puppet apply init.pp
        

        Note

        You may see the following line in your output when validating:

          
        Error: Removing mount "files": /etc/puppet/files does not exist or is not a directory
        
        

        This refers to a Puppet configuration file, not the module resource you’re trying to copy. If this is the only error in your output, the operation should still succeed.

      9. To ensure that the ssh class is working properly, log out of the Puppet master and then try to log in as root. You should not be able to do so.
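
      You can also confirm on the Puppet master that the managed sshd_config is in place (a quick optional check, run with your limited user’s sudo privileges):

        sudo grep '^PermitRootLogin' /etc/ssh/sshd_config

      It should return PermitRootLogin no.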

      Add and Configure IPtables

      To complete this guide’s security settings, the firewall needs to be configured on your Puppet master and nodes. The iptables firewall software will be used.

      1. By default, changes to your iptables rules will not persist across reboots. To avoid this, install the appropriate package on your Puppet master and nodes (a note on saving the applied rules follows this list):

        Ubuntu/Debian:

        sudo apt install iptables-persistent
        

        CentOS 7:

        CentOS 7 uses firewalld by default as a controller for iptables. Be sure firewalld is stopped and disabled before starting to work directly with iptables:

        sudo systemctl stop firewalld && sudo systemctl disable firewalld
        sudo yum install iptables-services
        
      2. On your Puppet master, install Puppet Labs’ firewall module from the Puppet Forge:

        sudo /opt/puppetlabs/bin/puppet module install puppetlabs-firewall
        

        The module will be installed in your /etc/puppetlabs/code/environments/production/modules directory.

      3. Navigate to the manifests directory inside the new firewall module:

        cd /etc/puppetlabs/code/environments/production/modules/firewall/manifests/
        
      4. Create a file titled pre.pp, which will contain all basic networking rules that should be run first:

        firewall/manifests/pre.pp
        
        class firewall::pre {
        
          Firewall {
            require => undef,
          }
        
           # Accept all loopback traffic
          firewall { '000 lo traffic':
            proto       => 'all',
            iniface     => 'lo',
            action      => 'accept',
          }->
        
           #Drop non-loopback traffic
          firewall { '001 reject non-lo':
            proto       => 'all',
            iniface     => '! lo',
            destination => '127.0.0.0/8',
            action      => 'reject',
          }->
        
           #Accept established inbound connections
          firewall { '002 accept established':
            proto       => 'all',
            state       => ['RELATED', 'ESTABLISHED'],
            action      => 'accept',
          }->
        
           #Allow all outbound traffic
          firewall { '003 allow outbound':
            chain       => 'OUTPUT',
            action      => 'accept',
          }->
        
           #Allow ICMP/ping
          firewall { '004 allow icmp':
            proto       => 'icmp',
            action      => 'accept',
          }
        
           #Allow SSH connections
          firewall { '005 Allow SSH':
            dport    => '22',
            proto   => 'tcp',
            action  => 'accept',
          }->
        
           #Allow HTTP/HTTPS connections
          firewall { '006 HTTP/HTTPS connections':
            dport    => ['80', '443'],
            proto   => 'tcp',
            action  => 'accept',
          }
        
        }
      5. In the same directory, create post.pp, which will run any firewall rules that need to be input last:

        firewall/manifests/post.pp
        
        class firewall::post {
        
          firewall { '999 drop all':
            proto  => 'all',
            action => 'drop',
            before => undef,
          }
        
        }

        These rules will direct the system to drop all inbound traffic that is not already permitted in the firewall.

      6. Run the Puppet parser on both files to check their syntax for errors:

        sudo /opt/puppetlabs/bin/puppet parser validate pre.pp
        sudo /opt/puppetlabs/bin/puppet parser validate post.pp
        
      7. Navigate to the main manifests directory:

        cd /etc/puppetlabs/code/environments/production/manifests
        
      8. Create a file named site.pp inside /etc/puppetlabs/code/environments/production/manifests. This file is the main manifest for the Puppet server service. It is used to map modules, classes, and resources to the nodes that they should be applied to.

        site.pp
        
        node default {
        
        }
        
        node 'puppet.example.com' {
        
          include accounts
        
          resources { 'firewall':
            purge => true,
          }
        
          Firewall {
            before        => Class['firewall::post'],
            require       => Class['firewall::pre'],
          }
        
          class { ['firewall::pre', 'firewall::post']: }
        
          firewall { '200 Allow Puppet Master':
            dport         => '8140',
            proto         => 'tcp',
            action        => 'accept',
          }
        
        }
      9. Run the site.pp file through the Puppet parser to check its syntax for errors. Then, test the file with the --noop option to see if it will run:

        sudo /opt/puppetlabs/bin/puppet parser validate site.pp
        sudo /opt/puppetlabs/bin/puppet apply --noop site.pp
        

        If successful, run puppet apply without the --noop option:

        sudo /opt/puppetlabs/bin/puppet apply site.pp
        
      10. Once Puppet has finished applying the changes, check the Puppet master’s iptables rules:

        sudo iptables -L
        

        It should return:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             anywhere             /* 000 lo traffic */
        REJECT     all  --  anywhere             127.0.0.0/8          /* 001 reject non-lo */ reject-with icmp-port-unreachable
        ACCEPT     all  --  anywhere             anywhere             /* 002 accept established */ state RELATED,ESTABLISHED
        ACCEPT     icmp --  anywhere             anywhere             /* 004 allow icmp */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports ssh /* 005 Allow SSH */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports http,https /* 006 HTTP/HTTPS connections */
        ACCEPT     tcp  --  anywhere             anywhere             multiport ports 8140 /* 200 Allow Puppet Master */
        DROP       all  --  anywhere             anywhere             /* 999 drop all */
        
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     tcp  --  anywhere             anywhere             /* 003 allow outbound */
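
      As noted in step 1, the iptables-persistent (Ubuntu/Debian) and iptables-services (CentOS 7) packages only restore rules that have been saved. If you want the rules Puppet just applied to survive a reboot without waiting for the next agent run, a sketch of saving them on each platform:

        # Ubuntu/Debian
        sudo netfilter-persistent save

        # CentOS 7
        sudo service iptables save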
        

      Apply Modules to the Agent Nodes

      Now that the accounts and firewall modules have been created, tested, and run on the Puppet master, it is time to apply them to your managed nodes.

      1. On the Puppet master, navigate to /etc/puppetlabs/code/environments/production/manifests:

        cd /etc/puppetlabs/code/environments/production/manifests
        
      2. Update site.pp to declare the modules, classes, and resources that should be applied to each managed node:

        site.pp
        
        node default {
        
        }
        
        node 'puppet.example.com' {
          # ...
        }
        
        node 'puppet-agent-ubuntu.example.com' {
        
          include accounts
        
          resources { 'firewall':
            purge => true,
          }
        
          Firewall {
            before        => Class['firewall::post'],
            require       => Class['firewall::pre'],
          }
        
          class { ['firewall::pre', 'firewall::post']: }
        
        }
        
        node 'puppet-agent-centos.example.com' {
        
          include accounts
        
          resources { 'firewall':
            purge => true,
          }
        
          Firewall {
            before        => Class['firewall::post'],
            require       => Class['firewall::pre'],
          }
        
          class { ['firewall::pre', 'firewall::post']: }
        
        }
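
        Since this manifest has grown, it can be worth re-checking its syntax with the same parser command used earlier before the agents pick it up:

        sudo /opt/puppetlabs/bin/puppet parser validate site.pp
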
      3. By default, the Puppet agent service on your managed nodes automatically checks in with the Puppet master once every 30 minutes and applies any new configurations it finds. You can also manually invoke the Puppet agent process in between automatic agent runs.

        Log in to each managed node (as root) and run the Puppet agent:

        /opt/puppetlabs/bin/puppet agent -t
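
        The 30-minute default is controlled by the agent's runinterval setting. If you ever want a different schedule, one way to change it is with the puppet config command on a managed node; the 1h value below is only an illustrative example, not something this guide requires:

        /opt/puppetlabs/bin/puppet config set runinterval 1h --section agent   # 1h is an example value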
        
      4. To confirm that the Puppet agent run worked, check each managed node's firewall rules, as shown below.
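
        One straightforward check, mirroring the verification done on the Puppet master, is to list the iptables rules on a managed node. You should see the baseline rules from the firewall::pre and firewall::post classes, although not the 8140 rule, which was declared only for the master:

        sudo iptables -L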

      Congratulations! You’ve successfully installed Puppet on a master and two managed nodes. Now that you’ve confirmed everything is working, you can create additional modules to automate configuration management on your nodes. For more information, review Puppet’s open source documentation. You can also install and use modules others have created on the Puppet Forge.


      This guide is published under a CC BY-ND 4.0 license.



      Source link

      How to Get Started with FreeBSD


      Introduction

      FreeBSD is a secure, high performance operating system that is suitable for a variety of server roles. In this guide, we will cover some basic information about how to get started with a FreeBSD server.

      This guide is intended to provide a general setup for FreeBSD servers, but please be aware that different versions of FreeBSD may have different functionalities. Depending on which version of FreeBSD your server is running, the instructions provided here may not work as described.

      Logging in with SSH

      The first step you need to take to begin configuring your FreeBSD server is to log in.

      On DigitalOcean, you must provide a public SSH key when creating a FreeBSD server. This key is added to the server instance, allowing you to securely log in from your local machine using the associated private key. To learn more about how to use SSH keys with FreeBSD on DigitalOcean, follow this guide.

      To log in to your server, you will need to know your server’s public IP address. For DigitalOcean Droplets, you can find this information in the control panel. The main user account available on FreeBSD servers created through DigitalOcean is called freebsd. This user account is configured with sudo privileges, allowing you to complete administrative tasks.

      To log in to your FreeBSD server, use the ssh command. You will need to specify the freebsd user account along with your server’s public IP address:

      • ssh freebsd@your_server_ip

      You should be automatically authenticated and logged in. You will be dropped into a command line interface.
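
      If your private key is stored somewhere other than the default location, you can point ssh at it explicitly with the -i flag; the path below is only a placeholder for wherever your key actually lives:

      • ssh -i ~/.ssh/your_private_key freebsd@your_server_ip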

      Changing the Default Shell to tcsh (Optional)

      If you logged into a DigitalOcean Droplet running FreeBSD 11, you will be presented with a very minimal command prompt: a bare $ character.

      If you're new to working with FreeBSD, this prompt may look somewhat unfamiliar. To get a clearer sense of the environment you're working in, run the following command to see what the default shell for your freebsd user is:

      • echo $SHELL

      Output

      /bin/sh

      In this output, you can see that the default shell for the freebsd user is sh (also known as the Bourne shell). On Linux systems, sh is often an alias for bash, a free software replacement for the Bourne shell that includes a few extra features. In FreeBSD, however, it's actually the classic sh shell program, rather than an alias.

      The default command line shell for FreeBSD is tcsh, but DigitalOcean Droplets running FreeBSD use sh by default. If you'd like to set tcsh as your freebsd user's default shell, run the following command:

      • sudo chsh -s /bin/tcsh freebsd
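
      If you'd like to confirm that the change was recorded before logging out, you can look at the freebsd user's entry in the password database; the last field of the line is the login shell:

      • grep '^freebsd:' /etc/passwd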

      The next time you log in to your server, you will see the tcsh prompt instead of the sh prompt. You can also invoke the tcsh shell for the current session by running:

      • tcsh

      Your prompt will immediately change to reflect the new shell.

      If you ever want to return to the Bourne shell you can do so with the sh command.

      Although tcsh is typically the default shell for FreeBSD systems, it has a few default settings that users tend to tweak on their own, such as the default pager and editor, as well as the behaviors of certain keys. To illustrate how to change some of these defaults, we will modify the shell's configuration file.

      An example configuration file is already included in the filesystem. Copy it into your home directory so that you can modify it as you wish:

      • cp /usr/share/skel/dot.cshrc ~/.cshrc

      After the file has been copied into your home directory, you can edit it. The vi editor is included on the system by default, but if you want a simpler editor, you can use the ee editor instead:

      • ee ~/.cshrc

      As you go through this file, you can decide what entries you may want to modify. In particular, you may want to change the setenv entries to have specific defaults that you may be more familiar with.

      ~/.cshrc

      . . .
      
      setenv  EDITOR  vi
      setenv  PAGER   more
      
      . . .
      

      If you are not familiar with the vi editor and would like a more basic editing environment, you could change the EDITOR environment variable to something like ee. Most users will want to change the PAGER to less instead of more. This will allow you to scroll up and down in man pages without exiting the pager:

      ~/.cshrc

      . . .
      setenv  EDITOR  ee
      setenv  PAGER   less
      . . .
      

      Another thing that you will likely want to add to this configuration file is a block of code that will correctly map some of your keyboard keys inside the tcsh session. At the bottom of the file, add the following code. Without these lines, DELETE and other keys will not work correctly:

      ~/.cshrc

      . . .
      if ($term == "xterm" || $term == "vt100" \
                  || $term == "vt102" || $term !~ "con*") then
                # bind keypad keys for console, vt100, vt102, xterm
                bindkey "\e[1~" beginning-of-line  # Home
                bindkey "\e[7~" beginning-of-line  # Home rxvt
                bindkey "\e[2~" overwrite-mode     # Ins
                bindkey "\e[3~" delete-char        # Delete
                bindkey "\e[4~" end-of-line        # End
                bindkey "\e[8~" end-of-line        # End rxvt
      endif
      

      When you are finished, save and close the file by pressing CTRL+C, typing exit, and then pressing ENTER. If you instead edited the file with vi, save and close the file by pressing ESC, typing :wq, and then pressing ENTER.

      To make your current session reflect these changes immediately, source the configuration file:

      • source ~/.cshrc

      It might not be immediately apparent, but the Home, Insert, Delete, and End keys will work as expected now.

      One thing to note at this point is that if you are using the tcsh or csh shells, you will need to execute the rehash command whenever any changes are made that may affect the executable path. Common scenarios where this may happen occur when you are installing or uninstalling applications.

      After installing programs, you may need to type this in order for the shell to find the new application files:

      • rehash
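
      For example, if you were to install a new package (tree is used here purely as an illustration), its command becomes available to tcsh only after rehashing:

      • sudo pkg install tree   # "tree" is just an example package
      • rehash
      • tree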

      With that, the tcsh shell is not only set as your freebsd user's default, but it is also much more usable.

      Setting bash as the Default Shell (Optional)

      If you are more familiar with the bash shell and would prefer to use that as your default shell, you can make that adjustment in a few short steps.

      Note: bash is not supported on FreeBSD 11.1, and the instructions in this section will not work for that particular version.

      First, install the bash shell from the FreeBSD package repository by typing:

      • sudo pkg install bash

      You will be prompted to confirm that you want to download the package. Do so by pressing y and then ENTER.

      After the installation is complete, you can start bash by running:

      • bash

      Your shell prompt will change to the default bash prompt.

      To change freebsd's default shell to bash, you can type:

      • sudo chsh -s /usr/local/bin/bash freebsd

      The next time you log in, the bash shell will be started automatically instead of the current default.

      If you wish to change the default pager or editor in the bash shell, you can do so in a file called ~/.bash_profile. This file does not exist by default, so you will need to create it:

      • ee ~/.bash_profile

      Inside, to change the default pager or editor, add your selections like this:

      ~/.bash_profile

      export PAGER=less
      export EDITOR=ee
      

      Save and close the file when you are finished by pressing CTRL+C, typing exit, and then pressing ENTER.

      To implement your changes immediately, source the file:

      • source ~/.bash_profile

      If you'd like to make further changes to your shell environment, like setting up special command aliases or setting environment variables, you can reopen that file and add your new changes to it.
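
      For instance, a couple of illustrative additions might look like the following; the alias and history size shown here are only examples, not anything this guide depends on:

      ~/.bash_profile

      # Example additions only: a directory-listing alias and a larger history buffer
      alias ll='ls -lhF'
      export HISTSIZE=2000
      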

      Setting a Root Password (Optional)

      By default, FreeBSD servers do not allow SSH logins for the root account. On DigitalOcean, this policy is supplemented by directing users to log in with the freebsd account instead.

      Because the root user account is inaccessible over SSH, it is relatively safe to set a root account password. While you will not be able to use this to log in through SSH, you can use this password to log in as root through the DigitalOcean web console.
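
      If you're curious how root SSH logins are blocked, you can check how the PermitRootLogin directive is set in the SSH daemon's configuration (a line beginning with # indicates that the default value is in effect):

      • sudo grep -i 'PermitRootLogin' /etc/ssh/sshd_config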

      To set a root password, type:

      sudo passwd
      

      You will be asked to select and confirm a password for the root account. As mentioned above, you still won't be able to use this for SSH authentication (this is a security decision), but you will be able to use it to log in through the DigitalOcean console.

      To do so, click the Console button in the upper-right corner of your Droplet's page to bring up the web console:

      DigitalOcean web console

      If you choose not to set a password and you get locked out of your server (for instance if you accidentally set overly restrictive firewall rules), you can always set one later by booting your Droplet into single user mode. We have a guide that shows you how to do that here.

      Conclusion

      By now, you should know how to log into a FreeBSD server and how to set up a bash shell environment. A good next step is to familiarize yourself with some FreeBSD basics as well as what makes it different from Linux-based distributions.

      Once you become familiar with FreeBSD and configure it to your needs, you will be able to take greater advantage of its flexibility, security, and performance.



      Source link