
      How To Use Certbot Standalone Mode to Retrieve Let’s Encrypt SSL Certificates on Debian 9


      Introduction

      Let’s Encrypt is a service offering free SSL certificates through an automated API. The most popular Let’s Encrypt client is EFF’s Certbot.

      Certbot offers a variety of ways to validate your domain, fetch certificates, and automatically configure Apache and Nginx. In this tutorial, we’ll discuss Certbot’s standalone mode and how to use it to secure other types of services, such as a mail server or a message broker like RabbitMQ.

      We won’t discuss the details of SSL configuration, but when you are done you will have a valid certificate that is automatically renewed. Additionally, you will be able to automate reloading your service to pick up the renewed certificate.

      Prerequisites

      Before starting this tutorial, you will need:

      • A Debian 9 server with a non-root, sudo-enabled user and basic firewall set up, as detailed in this Debian 9 server setup tutorial.
      • A domain name pointed at your server, which you can accomplish by following “How to Set Up a Host Name with DigitalOcean.” This tutorial will use example.com throughout.
      • Port 80 or 443 must be unused on your server. If the service you’re trying to secure is on a machine with a web server that occupies both of those ports, you’ll need to use a different mode such as Certbot’s webroot mode or DNS-based challenge mode.

      Step 1 — Installing Certbot

Debian 9 includes the Certbot client in its default repository, and it should be up-to-date enough for basic use. If you need to do DNS-based challenges or use other newer Certbot features, you should instead install from the stretch-backports repo as instructed by the official Certbot documentation.

Use apt to install the certbot package:

• sudo apt install certbot

You may test your install by asking certbot to output its version number:

• certbot --version

      Output

      certbot 0.28.0

      Now that we have Certbot installed, let's run it to get our certificate.

      Step 2 — Running Certbot

Certbot needs to answer a cryptographic challenge issued by the Let's Encrypt API in order to prove we control our domain. It uses port 80 (HTTP) or 443 (HTTPS) to accomplish this. Open up the appropriate port in your firewall:

• sudo ufw allow 80

      Substitute 443 above if that's the port you're using. ufw will output confirmation that your rule was added:

      Output

Rule added
Rule added (v6)

      We can now run Certbot to get our certificate. We'll use the --standalone option to tell Certbot to handle the challenge using its own built-in web server. The --preferred-challenges option instructs Certbot to use port 80 or port 443. If you're using port 80, you want --preferred-challenges http. For port 443 it would be --preferred-challenges tls-sni. Finally, the -d flag is used to specify the domain you're requesting a certificate for. You can add multiple -d options to cover multiple domains in one certificate.

      • sudo certbot certonly --standalone --preferred-challenges http -d example.com
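
For example, to cover a second hostname with the same certificate (www.example.com here is purely illustrative; any extra name must also point at this server), add another -d flag:

• sudo certbot certonly --standalone --preferred-challenges http -d example.com -d www.example.com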

      When running the command, you will be prompted to enter an email address and agree to the terms of service. After doing so, you should see a message telling you the process was successful and where your certificates are stored:

      Output

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/example.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/example.com/privkey.pem
   Your cert will expire on 2019-08-28. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

      We've got our certificates. Let's take a look at what we downloaded and how to use the files with our software.

      Step 3 — Configuring Your Application

      Configuring your application for SSL is beyond the scope of this article, as each application has different requirements and configuration options, but let's take a look at what Certbot has downloaded for us. Use ls to list out the directory that holds our keys and certificates:

      • sudo ls /etc/letsencrypt/live/example.com

      Output

      cert.pem chain.pem fullchain.pem privkey.pem README

      The README file in this directory has more information about each of these files. Most often you'll only need two of these files:

      • privkey.pem: This is the private key for the certificate. This needs to be kept safe and secret, which is why most of the /etc/letsencrypt directory has very restrictive permissions and is accessible by only the root user. Most software configuration will refer to this as something similar to ssl-certificate-key or ssl-certificate-key-file.
      • fullchain.pem: This is our certificate, bundled with all intermediate certificates. Most software will use this file for the actual certificate, and will refer to it in their configuration with a name like 'ssl-certificate'.

      For more information on the other files present, refer to the "Where are my certificates" section of the Certbot docs.

      Some software will need its certificates in other formats, in other locations, or with other user permissions. It is best to leave everything in the letsencrypt directory, and not change any permissions in there (permissions will just be overwritten upon renewal anyway), but sometimes that's just not an option. In that case, you'll need to write a script to move files and change permissions as needed. This script will need to be run whenever Certbot renews the certificates, which we'll talk about next.
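
As a rough sketch only (the /etc/myservice/ssl directory, the myservice user, and the service name below are hypothetical placeholders, not anything Certbot requires), such a script might look like this:

#!/bin/bash
# Hypothetical example: copy the renewed files somewhere the service can read them.
cp /etc/letsencrypt/live/example.com/fullchain.pem /etc/myservice/ssl/
cp /etc/letsencrypt/live/example.com/privkey.pem /etc/myservice/ssl/
# Adjust ownership and permissions on the copies only, never on /etc/letsencrypt itself.
chown myservice:myservice /etc/myservice/ssl/fullchain.pem /etc/myservice/ssl/privkey.pem
chmod 640 /etc/myservice/ssl/fullchain.pem /etc/myservice/ssl/privkey.pem
# Reload the (placeholder) service so it picks up the new files.
systemctl reload myservice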

      Step 4 — Handling Certbot Automatic Renewals

      Let's Encrypt's certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot package we installed takes care of this for us by adding a renew script to /etc/cron.d. This script runs twice a day and will renew any certificate that's within thirty days of expiration.
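
If you're curious, you can inspect the schedule the package installed; on Debian the cron file is typically named after the package:

• cat /etc/cron.d/certbot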

      With our certificates renewing automatically, we still need a way to run other tasks after a renewal. We need to at least restart or reload our server to pick up the new certificates, and as mentioned in Step 3 we may need to manipulate the certificate files in some way to make them work with the software we're using. This is the purpose of Certbot's renew_hook option.

To add a renew_hook, we update Certbot's renewal config file. Certbot remembers all the details of how you first fetched the certificate, and will run with the same options upon renewal. We just need to add in our hook. Open the config file with your favorite editor:

      • sudo nano /etc/letsencrypt/renewal/example.com.conf

      A text file will open with some configuration options. Add your hook on the last line:

      /etc/letsencrypt/renewal/example.com.conf

      renew_hook = systemctl reload rabbitmq
      

Update the command above to whatever you need to run to reload your server or run your custom file munging script. On Debian, you'll usually use systemctl to reload a service. Save and close the file, then run a Certbot dry run to make sure the syntax is ok:

      • sudo certbot renew --dry-run

      If you see no errors, you're all set. Certbot is set to renew when necessary and run any commands needed to get your service using the new files.

      Conclusion

      In this tutorial, we've installed the Certbot Let's Encrypt client, downloaded an SSL certificate using standalone mode, and enabled automatic renewals with renew hooks. This should give you a good start on using Let's Encrypt certificates with services other than your typical web server.

      For more information, please refer to Certbot's documentation.




      How to Install and Secure the Mosquitto MQTT Messaging Broker on Debian 9


      Introduction

      MQTT is a machine-to-machine messaging protocol, designed to provide lightweight publish/subscribe communication to “Internet of Things” devices. It is commonly used for geo-tracking fleets of vehicles, home automation, environmental sensor networks, and utility-scale data collection.

      Mosquitto is a popular MQTT server (or broker, in MQTT parlance) that has great community support and is easy to install and configure.

      In this tutorial, we’ll install Mosquitto and set up our broker to use SSL to secure our password-protected MQTT communications.

      Prerequisites

Before starting this tutorial, you will need:

• A Debian 9 server with a non-root, sudo-enabled user and a basic firewall set up, as in the Certbot tutorial above.
• A domain name pointed at your server. This tutorial uses mqtt.example.com throughout.
• A Let's Encrypt SSL certificate for that domain, stored under /etc/letsencrypt/live/mqtt.example.com, which you can obtain with Certbot's standalone mode as described above.

      Step 1 — Installing Mosquitto

      Debian 9 has a fairly recent version of Mosquitto in its default software repository, so we can install it from there.

First, log in using your non-root user and update the package lists using apt update:

• sudo apt update

      Now, install Mosquitto using apt install:

      • sudo apt install mosquitto mosquitto-clients

      By default, Debian will start the Mosquitto service after install. Let's test the default configuration. We'll use one of the Mosquitto clients we just installed to subscribe to a topic on our broker.

      Topics are labels that you publish messages to and subscribe to. They are arranged as a hierarchy, so you could have sensors/outside/temp and sensors/outside/humidity, for example. How you arrange topics is up to you and your needs. Throughout this tutorial we will use a simple test topic to test our configuration changes.
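
As an aside, MQTT wildcards let a single subscription cover a whole branch of the hierarchy. Using the hypothetical topics above, the single-level wildcard + would match both sensors/outside/temp and sensors/outside/humidity (-v prints each message's topic alongside its payload):

• mosquitto_sub -h localhost -t "sensors/outside/+" -v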

      Log in to your server a second time, so you have two terminals side-by-side. In the new terminal, use mosquitto_sub to subscribe to the test topic:

      • mosquitto_sub -h localhost -t test

      -h is used to specify the hostname of the MQTT server, and -t is the topic name. You'll see no output after hitting ENTER because mosquitto_sub is waiting for messages to arrive. Switch back to your other terminal and publish a message:

      • mosquitto_pub -h localhost -t test -m "hello world"

      The options for mosquitto_pub are the same as mosquitto_sub, though this time we use the additional -m option to specify our message. Hit ENTER, and you should see hello world pop up in the other terminal. You've sent your first MQTT message!

      Enter CTRL+C in the second terminal to exit out of mosquitto_sub, but keep the connection to the server open. We'll use it again for another test in Step 5.

      Next, we'll secure our installation using password-based authentication.

      Step 2 — Configuring MQTT Passwords

Let's configure Mosquitto to use passwords. Mosquitto includes a utility called mosquitto_passwd for generating a special password file. The following command will prompt you to enter a password for the specified username and place the result in /etc/mosquitto/passwd:

      • sudo mosquitto_passwd -c /etc/mosquitto/passwd sammy
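
If you later need additional accounts, run the same utility without -c so the existing file is updated rather than recreated (the username below is just a placeholder):

• sudo mosquitto_passwd /etc/mosquitto/passwd another_user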

      Now we'll open up a new configuration file for Mosquitto and tell it to use this password file to require logins for all connections:

      • sudo nano /etc/mosquitto/conf.d/default.conf

      This should open an empty file. Paste in the following:

      /etc/mosquitto/conf.d/default.conf

      allow_anonymous false
      password_file /etc/mosquitto/passwd
      
      

      Be sure to leave a trailing newline at the end of the file.

      allow_anonymous false will disable all non-authenticated connections, and the password_file line tells Mosquitto where to look for user and password information. Save and exit the file.

      Now we need to restart Mosquitto and test our changes.

      • sudo systemctl restart mosquitto

      Try to publish a message without a password:

      • mosquitto_pub -h localhost -t "test" -m "hello world"

      The message should be rejected:

      Output

Connection Refused: not authorised.
Error: The connection was refused.

      Before we try again with the password, switch to your second terminal window again, and subscribe to the 'test' topic, using the username and password this time:

      • mosquitto_sub -h localhost -t test -u "sammy" -P "password"

      It should connect and sit, waiting for messages. You can leave this terminal open and connected for the rest of the tutorial, as we'll periodically send it test messages.

      Now publish a message with your other terminal, again using the username and password:

      • mosquitto_pub -h localhost -t "test" -m "hello world" -u "sammy" -P "password"

      The message should go through as in Step 1. We've successfully added password protection to Mosquitto. Unfortunately, we're sending passwords unencrypted over the internet. We'll fix that next by adding SSL encryption to Mosquitto.

      Step 3 — Configuring MQTT SSL

      To enable SSL encryption, we need to tell Mosquitto where our Let's Encrypt certificates are stored. Open up the configuration file we previously started:

      • sudo nano /etc/mosquitto/conf.d/default.conf

      Paste in the following at the end of the file, leaving the two lines we already added:

      /etc/mosquitto/conf.d/default.conf

      . . .
      listener 1883 localhost
      
      listener 8883
      certfile /etc/letsencrypt/live/mqtt.example.com/cert.pem
      cafile /etc/letsencrypt/live/mqtt.example.com/chain.pem
      keyfile /etc/letsencrypt/live/mqtt.example.com/privkey.pem
      
      

      Again, be sure to leave a trailing newline at the end of the file.

      We're adding two separate listener blocks to the config. The first, listener 1883 localhost, updates the default MQTT listener on port 1883, which is what we've been connecting to so far. 1883 is the standard unencrypted MQTT port. The localhost portion of the line instructs Mosquitto to only bind this port to the localhost interface, so it's not accessible externally. External requests would have been blocked by our firewall anyway, but it's good to be explicit.

      listener 8883 sets up an encrypted listener on port 8883. This is the standard port for MQTT + SSL, often referred to as MQTTS. The next three lines, certfile, cafile, and keyfile, all point Mosquitto to the appropriate Let's Encrypt files to set up the encrypted connections.

      Save and exit the file, then restart Mosquitto to update the settings:

      • sudo systemctl restart mosquitto

Update the firewall to allow connections to port 8883:

• sudo ufw allow 8883

      Output

Rule added
Rule added (v6)

      Now we test again using mosquitto_pub, with a few different options for SSL:

      • mosquitto_pub -h mqtt.example.com -t test -m "hello again" -p 8883 --capath /etc/ssl/certs/ -u "sammy" -P "password"

      Note that we're using the full hostname instead of localhost. Because our SSL certificate is issued for mqtt.example.com, if we attempt a secure connection to localhost we'll get an error saying the hostname does not match the certificate hostname (even though they both point to the same Mosquitto server).

      --capath /etc/ssl/certs/ enables SSL for mosquitto_pub, and tells it where to look for root certificates. These are typically installed by your operating system, so the path is different for Mac OS, Windows, etc. mosquitto_pub uses the root certificate to verify that the Mosquitto server's certificate was properly signed by the Let's Encrypt certificate authority. It's important to note that mosquitto_pub and mosquitto_sub will not attempt an SSL connection without this option (or the similar --cafile option), even if you're connecting to the standard secure port of 8883.
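
The equivalent mosquitto_sub command uses the same SSL-related options; for example, an encrypted subscription to the test topic would look like this:

• mosquitto_sub -h mqtt.example.com -t test -p 8883 --capath /etc/ssl/certs/ -u "sammy" -P "password"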

      If all goes well with the test, we'll see hello again show up in the other mosquitto_sub terminal. This means your server is fully set up! If you'd like to extend the MQTT protocol to work with websockets, you can follow the final step.

      Step 4 — Configuring MQTT Over Websockets (Optional)

      In order to speak MQTT using JavaScript from within web browsers, the protocol was adapted to work over standard websockets. If you don't need this functionality, you may skip this step.

      We need to add one more listener block to our Mosquitto config:

      • sudo nano /etc/mosquitto/conf.d/default.conf

      At the end of the file, add the following:

      /etc/mosquitto/conf.d/default.conf

      . . .
      listener 8083
      protocol websockets
      certfile /etc/letsencrypt/live/mqtt.example.com/cert.pem
      cafile /etc/letsencrypt/live/mqtt.example.com/chain.pem
      keyfile /etc/letsencrypt/live/mqtt.example.com/privkey.pem
      
      

      Again, be sure to leave a trailing newline at the end of the file.

      This is mostly the same as the previous block, except for the port number and the protocol websockets line. There is no official standardized port for MQTT over websockets, but 8083 is the most common.

      Save and exit the file, then restart Mosquitto.

      • sudo systemctl restart mosquitto

Now, open up port 8083 in the firewall:

• sudo ufw allow 8083

      To test this functionality, we'll use a public, browser-based MQTT client. There are a few out there, but the Eclipse Paho JavaScript Client is simple and straightforward to use. Open the Paho client in your browser. You'll see the following:

(Image: the Paho client connection screen)

      Fill out the connection information as follows:

      • Host should be the domain for your Mosquitto server, mqtt.example.com.
      • Port should be 8083.
      • ClientId can be left to the default value, js-utility-DI1m6.
      • Path can be left to the default value, /ws.
      • Username should be your Mosquitto username; here, we used sammy.
      • Password should be the password you chose.

      The remaining fields can be left to their default values.

      After pressing Connect, the Paho browser-based client will connect to your Mosquitto server.

      To publish a message, navigate to the Publish Message pane, fill out Topic as test, and enter any message in the Message section. Next, press Publish. The message will show up in your mosquitto_sub terminal.

      Conclusion

      We've now set up a secure, password-protected and SSL-secured MQTT server. This can serve as a robust and secure messaging platform for whatever projects you dream up. Some popular software and hardware that work well with the MQTT protocol include:

      • OwnTracks, an open-source geo-tracking app you can install on your phone. OwnTracks will periodically report position information to your MQTT server, which you could then store and display on a map, or create alerts and activate IoT hardware based on your location.
      • Node-RED is a browser-based graphical interface for 'wiring' together the Internet of Things. You drag the output of one node to the input of another, and can route information through filters, between various protocols, into databases, and so on. MQTT is very well supported by Node-RED.
      • The ESP8266 is an inexpensive wifi microcontroller with MQTT capabilities. You could wire one up to publish temperature data to a topic, or perhaps subscribe to a barometric pressure topic and sound a buzzer when a storm is coming!

      These are just a few popular examples from the MQTT ecosystem. There is much more hardware and software out there that speaks the protocol. If you already have a favorite hardware platform, or software language, it probably has MQTT capabilities. Have fun getting your "things" talking to each other!




      How To Add Swap Space on Debian 8


      Introduction

One of the easiest ways of guarding against out-of-memory errors in applications is to add some swap space to your server. In this guide, we will cover how to add a swap file to a Debian 8 server.

      Warning: Although swap is generally recommended for systems using traditional spinning hard drives, using swap with SSDs can cause issues with hardware degradation over time. Due to this consideration, we do not recommend enabling swap on DigitalOcean or any other provider that utilizes SSD storage. Doing so can impact the reliability of the underlying hardware for you and your neighbors. This guide is provided as reference for users who may have spinning disk systems elsewhere.

      If you need to improve the performance of your server on DigitalOcean, we recommend upgrading your Droplet. This will lead to better results in general and will decrease the likelihood of contributing to hardware issues that can affect your service.

      What is Swap?

      Swap is an area on a hard drive that has been designated as a place where the operating system can temporarily store data that it can no longer hold in RAM. Basically, this gives you the ability to increase the amount of information that your server can keep in its working “memory”, with some caveats. The swap space on the hard drive will be used mainly when there is no longer sufficient space in RAM to hold in-use application data.

      The information written to disk will be significantly slower than information kept in RAM, but the operating system will prefer to keep running application data in memory and use swap for the older data. Overall, having swap space as a fallback for when your system’s RAM is depleted can be a good safety net against out-of-memory exceptions on systems with non-SSD storage available.

      Step 1 – Checking the System for Swap Information

      Before we begin, we can check if the system already has some swap space available. It is possible to have multiple swap files or swap partitions, but generally one should be enough.

We can see if the system has any configured swap by typing:

• sudo swapon --show

      If you don't get back any output, this means your system does not have swap space available currently.

You can verify that there is no active swap using the free utility:

• free -h

      Output

             total       used       free     shared    buffers     cached
Mem:          1.0G       331M       668M       4.3M        11M       276M
-/+ buffers/cache:         44M       955M
Swap:           0B         0B         0B

      As you can see in the Swap row of the output, no swap is active on the system.

      Step 2 – Checking Available Space on the Hard Drive Partition

Before we create our swap file, we'll check our current disk usage to make sure we have enough space. Do this by entering:

• df -h

      Output

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G  946M   23G   4% /
udev             10M     0   10M   0% /dev
tmpfs           201M  4.3M  196M   3% /run
tmpfs           501M     0  501M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           501M     0  501M   0% /sys/fs/cgroup
tmpfs           101M     0  101M   0% /run/user/1001

      The device with / in the Mounted on column is our disk in this case. We have plenty of space available in this example (only 946M used). Your usage will probably be different.

      Although there are many opinions about the appropriate size of a swap space, it really depends on your personal preferences and your application requirements. Generally, an amount equal to or double the amount of RAM on your system is a good starting point. Another good rule of thumb is that anything over 4G of swap is probably unnecessary if you are just using it as a RAM fallback.

      Step 3 – Creating a Swap File

      Now that we know our available hard drive space, we can go about creating a swap file within our filesystem.

      We will create a file called swapfile in our root (/) directory. The file must allocate the amount of space we want for our swap file. There are two main ways of doing this:

      The Traditional, Slow Way

Traditionally, we would create a file with preallocated space by using the dd command. This versatile disk utility writes data from one location to another.

      We can use this to write zeros to the file from a special device in Linux systems located at /dev/zero that just spits out as many zeros as requested.

      We specify the file size by using a combination of bs for block size and count for the number of blocks. What we assign to each parameter is almost entirely arbitrary. What matters is what the product of multiplying them turns out to be.

      For instance, in our example, we're looking to create a 1 Gigabyte file. We can do this by specifying a block size of 1 megabyte and a count of 1024:

      • sudo dd if=/dev/zero of=/swapfile bs=1M count=1024

      Output

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.36622 s, 786 MB/s

      Check your command before pressing ENTER because this has the potential to destroy data if you point the of (which stands for output file) to the wrong location.

We can see that 1 Gigabyte has been allocated by typing:

• ls -lh /swapfile

      Output

      -rw-r--r-- 1 root root 1.0G May 30 15:07 /swapfile

      If you've completed the command above, you may notice that it took a few seconds. Only 1.3 seconds for this small swapfile, but that could increase significantly for larger files on slower hard drives.

      If you want to learn how to create the file faster, remove the file swapfile using sudo rm /swapfile, then follow along below:

      The Faster Way

      The quicker way of getting the same file is by using the fallocate program. Note that this command only works with more modern filesystems, so if you're using an ext3 system, for instance, this option is not available to you.

      The fallocate command creates a file of a preallocated size instantly, without actually having to write dummy contents.

      We can create a 1 Gigabyte file by typing:

• sudo fallocate -l 1G /swapfile

The prompt will be returned to you almost immediately. We can verify that the correct amount of space was reserved by typing:

• ls -lh /swapfile

      Output

      -rw-r--r-- 1 root root 1.0G May 30 15:07 /swapfile

      As you can see, our file is created with the correct amount of space set aside.

      Step 4 – Enabling the Swap File

      Now that we have a file of the correct size available, we need to actually turn this into swap space.

      First, we need to lock down the permissions of the file so that only the users with root privileges can read the contents. This prevents normal users from being able to access the file, which would have significant security implications.

Make the file only accessible to root by typing:

• sudo chmod 600 /swapfile

Verify the permissions change by typing:

• ls -lh /swapfile

      Output

      -rw------- 1 root root 1.0G May 29 17:34 /swapfile

      As you can see, only the root user has the read and write flags enabled.

We can now mark the file as swap space by typing:

• sudo mkswap /swapfile

      Output

Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=757ee0b7-db04-46bd-aafb-adf6954ea077

After marking the file, we can enable the swap file, allowing our system to start utilizing it:

• sudo swapon /swapfile

Verify that the swap is available by typing:

• sudo swapon --show

      Output

NAME      TYPE  SIZE USED PRIO
/swapfile file 1024M   0B   -1

We can check the output of the free utility again to corroborate our findings:

• free -h

      Output

             total       used       free     shared    buffers     cached
Mem:          1.0G       925M        74M       4.3M        13M       848M
-/+ buffers/cache:         63M       936M
Swap:         1.0G         0B       1.0G

      Our swap has been set up successfully and our operating system will begin to use it as necessary.

      Step 5 – Making the Swap File Permanent

      Our recent changes have enabled the swap file for the current session. However, if we reboot, the server will not retain the swap settings automatically. We can change this by adding the swap file to our /etc/fstab file.

      Back up the /etc/fstab file in case anything goes wrong:

      • sudo cp /etc/fstab /etc/fstab.bak

      Add the swap file information to the end of your /etc/fstab file by typing:

      • echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
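
You can confirm the entry was appended correctly by printing the file:

• cat /etc/fstab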

      Next we'll review some settings we can update to tune our swap space.

      Step 6 – Tuning your Swap Settings

      There are a few options that you can configure that will have an impact on your system's performance when dealing with swap.

      Adjusting the Swappiness Property

      The swappiness parameter configures how often your system swaps data out of RAM to the swap space. This is a value between 0 and 100 that represents a percentage.

      With values close to zero, the kernel will not swap data to the disk unless absolutely necessary. Remember, interactions with the swap file are "expensive" in that they take a lot longer than interactions with RAM and they can cause a significant reduction in performance. Telling the system not to rely on the swap much will generally make your system faster.

      Values that are closer to 100 will try to put more data into swap in an effort to keep more RAM space free. Depending on your applications' memory profile or what you are using your server for, this might be better in some cases.

      We can see the current swappiness value by typing:

      • cat /proc/sys/vm/swappiness

      Output

      60

For a desktop, a swappiness setting of 60 is not a bad value. For a server, you might want to move it closer to 0.

      We can set the swappiness to a different value by using the sysctl command.

      For instance, to set the swappiness to 10, we could type:

      • sudo sysctl vm.swappiness=10

      Output

      vm.swappiness = 10

      This setting will persist until the next reboot. We can set this value automatically at restart by adding the line to our /etc/sysctl.conf file:

      • sudo nano /etc/sysctl.conf

      At the bottom, you can add:

      /etc/sysctl.conf

      vm.swappiness=10
      

      Save and close the file when you are finished.

      Adjusting the Cache Pressure Setting

      Another related value that you might want to modify is the vfs_cache_pressure. This setting configures how much the system will choose to cache inode and dentry information over other data.

      Basically, this is access data about the filesystem. This is generally very costly to look up and very frequently requested, so it's an excellent thing for your system to cache. You can see the current value by querying the proc filesystem again:

      • cat /proc/sys/vm/vfs_cache_pressure

      Output

      100

      As it is currently configured, our system removes inode information from the cache too quickly. We can set this to a more conservative setting like 50 by typing:

      • sudo sysctl vm.vfs_cache_pressure=50

      Output

      vm.vfs_cache_pressure = 50

      Again, this is only valid for our current session. We can change that by adding it to our configuration file like we did with our swappiness setting:

      • sudo nano /etc/sysctl.conf

      At the bottom, add the line that specifies your new value:

      /etc/sysctl.conf

      vm.vfs_cache_pressure=50
      

      Save and close the file when you are finished.
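
Both values have already been applied with the sysctl commands above, but if you later edit /etc/sysctl.conf directly, you can load it without rebooting (sysctl -p reads /etc/sysctl.conf by default):

• sudo sysctl -p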

      Conclusion

      Following the steps in this guide will give you some breathing room in cases that would otherwise lead to out-of-memory exceptions. Swap space can be incredibly useful in avoiding some of these common problems.

      If you are running into OOM (out of memory) errors, or if you find that your system is unable to use the applications you need, the best solution is to optimize your application configurations or upgrade your server.


