
      April 2021

      How To Fix the WordPress Memory Exhausted Error by Increasing Your Site’s PHP Memory Limit


      As you may know, WordPress is built using PHP. This programming language is incredibly flexible, but it also has a few drawbacks. For example, if you don’t allocate enough memory for your WordPress installation, you might start running into the occasional “PHP Memory Exhausted” error.

      In a nutshell, this error means your server isn’t allocating enough resources for WordPress to execute the PHP scripts it needs to function properly. This issue can negatively affect your site’s functionality, but there are several ways you can fix and even prevent it.

In this article, we’ll show you how to fix the memory exhausted problem by increasing your PHP memory limit. First, however, let’s talk about how to recognize this error and what it means!

      Why You’re Seeing a WordPress Memory Limit Error on Your Site

      As we mentioned earlier, the PHP memory limit error means you’re not allocating enough resources for your WordPress installation to function correctly. The problem usually presents itself with a message such as:

Fatal error: Allowed memory size of XXXXXXX bytes exhausted (tried to allocate XXXXX bytes)

      Don’t be scared by the word “fatal,” though. Your website isn’t broken, but you will need to make some changes to your WordPress installation if you want it to work properly. Specifically, you’ll want to increase your PHP memory limit.

By “PHP memory limit,” we mean the amount of server memory that’s allocated to run PHP scripts. By default, that number should be 64 MB or higher, and in most cases, 64 MB is more than enough.

      Most hosting servers provide you with far more memory than that, so increasing the PHP allowed memory size shouldn’t negatively impact your website’s performance whatsoever. In fact, unless you’re using a cheap web host or you set up WordPress manually, your PHP memory limit shouldn’t be an issue at all.

      You can easily check to see what your PHP memory limit is by accessing your WordPress dashboard and navigating to Tools > Site Health > Info. Next, you can click on the Server tab and look for the PHP memory limit entry.

      A website with a high PHP memory size.

Within the Server tab, you can also check other information, such as your PHP version and the PHP time limit. The latter value, measured in seconds, defines how long PHP scripts can run before they time out.

      For now, let’s focus on the PHP memory limit. As you can see, the above example has quite a high limit, which means that the website is unlikely to run into a WordPress Memory Exhausted error.

      If your site has a low memory limit (<64 MB), it’s in your best interests to increase it. There are a couple of ways you can do so.


      How to Resolve the WordPress Memory Limit Error (2 Methods)

      As far as WordPress errors go, this one has a clear-cut cause and solution. You’re not allocating enough memory for your PHP installation, so you need to increase that number. In this section, we’ll go over two methods you can use: one manual technique and one that requires your wallet.

      1. Increase the PHP Memory Allocated to Your Website Manually

WordPress enables you to declare your allowed memory size manually by modifying one of two files: .htaccess and wp-config.php. However, changing your WordPress installation’s .htaccess file can lead to site-wide errors, since that file governs how your site interacts with your server.

      Increasing your PHP memory limit through wp-config.php is, in most cases, the safest option, and it’s remarkably easy to do. All you need is a Secure File Transfer Protocol (SFTP) client such as FileZilla that you can use to connect to your website.

      Once you access your website via SFTP, open the WordPress root folder and look for the wp-config.php file within it.

      A WordPress wp-config.php file.

      Open that file using a text editor, and you should see something like this:

      Editing a wp-config.php file.

      To increase your PHP memory limit, you can simply add a single line of code anywhere after the <?php tag and before the part of the file that reads “/* That’s all, stop editing! Happy blogging. */”.

      This is the line of code to add:

      define( 'WP_MEMORY_LIMIT', 'XXXM' );

You’ll need to replace the “XXX” placeholder within that line with the amount of memory you want to allocate to PHP. As we mentioned before, the absolute minimum you should settle for is 64 MB.

      However, you can also double the number to play it safe or increase it even further. For example, if you set a PHP memory limit of 256 MB, it would look like this:

define( 'WP_MEMORY_LIMIT', '256M' );
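If you’re unsure exactly where the line should go, here’s a rough sketch of the relevant region of wp-config.php; the surrounding constants in your file may differ:

define( 'DB_COLLATE', '' );

/* Raise the memory available to WordPress. */
define( 'WP_MEMORY_LIMIT', '256M' );

/* That's all, stop editing! Happy blogging. */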

      Once you’re set on a number, save the changes to wp-config.php and close the editor. Now return to your WordPress dashboard and navigate to Tools > Site Health > Info > Server to see if the changes went through.
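Alternatively, if you have shell access and the WP-CLI tool is available on your server (an assumption; not every host provides it), you can make and verify the same change from the command line:

• wp config set WP_MEMORY_LIMIT 256M
• wp config get WP_MEMORY_LIMIT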

      In some cases, declaring your PHP memory limit manually won’t work because you don’t have the necessary permissions to change that value. If you can’t adjust your WordPress memory size manually, that leaves you with one other option.

      2. Upgrade Your Website’s Hosting Plan

      Typically, if you use a decent WordPress hosting provider, you won’t need to worry about increasing your PHP memory limit. One caveat is that if you’re using shared hosting, you’ll likely face limited resources. So if you’re encountering this error, it might be time to upgrade to a better hosting plan.

      Upgrading your hosting package will usually result in an increase in available PHP memory. That means you’re much less likely to run into a WordPress memory limit error. The only limiting factor is your budget.

      If you can’t upgrade hosting plans right now, it might be worth contacting your provider’s support team and seeing if they can increase your PHP memory limit on their end. If they can’t, it might be time to switch to a better WordPress host that offers high PHP memory limits on affordable plans.


      Want More WordPress Error Tips?

Once you increase PHP memory on your WordPress website, we can help tackle other issues. We’ve put together several tutorials to help you troubleshoot common error messages.

      Want more information on WordPress site management? Check out our WordPress Tutorials, a collection of guides designed to help you navigate the WordPress dashboard like an expert.

      Increasing PHP Memory Limit

Running into a PHP fatal error can be worrying, but it’s rarely a serious problem. Learning how to increase your PHP memory limit is relatively simple if you don’t mind using an SFTP client and adding a single line of code to one of WordPress’ core files.

The alternative is to upgrade your hosting plan or opt for a better provider. Most WordPress-friendly hosting options offer high limits by default, so you’re unlikely to run into a PHP memory exhausted error again.

      If you’re ready to use a web host optimized for WordPress websites, check out our DreamPress hosting packages! We offer optimized WordPress setups, so you spend less time troubleshooting errors and more time working on your website.




      How To Use Telepresence on Kubernetes for Rapid Development on Ubuntu 20.04


      The author selected the Tech Education Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Application developers building microservices on Kubernetes often encounter two major problems that slow them down:

• Slow feedback loops. Once a code change is made, it must be deployed to Kubernetes to be tested. This requires building a container image, pushing it to a container registry, and deploying it to the cluster, which adds minutes to every code iteration.
      • Insufficient memory and CPU locally. Developers attempt to speed up the feedback loop by running Kubernetes locally with minikube or the equivalent. However, resource-hungry applications quickly exceed the compute and memory available locally.

      Telepresence is a Cloud-Native Computing Foundation project for fast, efficient development on Kubernetes. With Telepresence, you run your service locally, while you run the rest of your application in the cloud. Telepresence creates a bi-directional network connection between your Kubernetes cluster and your local workstation. This way, the service you’re running locally can communicate with services in the cluster, and vice versa. That allows you to use the compute and memory resources of the cluster, but without having to go through a complete deployment cycle for each change.

      In this tutorial, you’ll configure Telepresence on your local machine running Ubuntu 20.04 to work with a Kubernetes cluster. You’ll intercept traffic to your cluster and redirect it to your local environment.

To complete this tutorial, you will need a Kubernetes cluster, along with a local machine running Ubuntu 20.04 with kubectl installed and configured to connect to that cluster.

      Step 1 — Installing Telepresence

In this step, you’ll install Telepresence and connect it to your Kubernetes cluster. First, make sure that you have kubectl configured and that you can connect to your Kubernetes cluster from your local workstation. Use the get services command to check your cluster’s status:

• kubectl get services

      The output will look like this, with your own cluster’s IP address listed:

      Output

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.245.0.1   <none>        443/TCP   116m

      Next you’ll install Telepresence locally. Telepresence comes as a single binary.

      Use curl to download the latest binary for Linux (around 50 MB):

      • sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o /usr/local/bin/telepresence

      Then use chmod to make the binary executable:

      • sudo chmod a+x /usr/local/bin/telepresence

Now that you have Telepresence installed locally, you can verify that it worked by connecting to your Kubernetes cluster:

• telepresence connect

      You’ll see the following output:

      Output

Launching Telepresence Daemon
...
Connected to context default (https://<cluster public IP>)

      If Telepresence doesn’t connect, check your kubectl configuration.

Verify that Telepresence is working properly by connecting to the Kubernetes API server with the status command:

• telepresence status

      You will see the following output. Telepresence Proxy: ON indicates that Telepresence has configured a proxy to access services on the cluster.

      Output

Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://7c10e553-10d1-4fee-9b7d-1ccbce4cdd34.k8s.ondigitalocean.com
  Kubernetes context: <your_kubernetes_context>
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 0 total
Connected Context: do-tor1-k8s-bg-telepresence (https://bee66877-1b07-4bb1-8c8f-4fd62e416865.k8s.ondigitalocean.com)
  Proxy      : ON (networking to the cluster is enabled)
  Intercepts : 0 total

When you use telepresence connect, on the server side, Telepresence creates a namespace called ambassador and runs a traffic manager. On the client side, Telepresence sets up DNS to enable local access to remote servers, which means you do not have to use kubectl port-forward to manually configure access to remote services. When you access a remote service, the DNS resolves to a specific IP address. For more details, see the Telepresence architecture documentation.
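If you’re curious, you can see the cluster-side components Telepresence created by listing the pods in that namespace (this assumes the default ambassador namespace):

• kubectl get pods -n ambassador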

      You can now connect to the remote Kubernetes cluster from your local workstation, as if the Kubernetes cluster were running on your laptop. Next you’ll try out a sample application.

      Step 2 — Adding a Sample Node.js Application

In this step, you’ll use a simple Node.js application to simulate a complex service running on your Kubernetes cluster. Instead of building the application locally, you’ll deploy a prebuilt image from Docker Hub to your cluster. The application, hello-node, returns a text string:

      var http = require('http');
      
      var handleRequest = function(request, response) {
        console.log('Received request for URL: ' + request.url);
        response.writeHead(200, {'Content-Type': 'text/plain'});
        response.write('Hello, Node!');
        response.end();
      };
      
      http.createServer(handleRequest).listen(9001);
      console.log('Use curl <hostname>:9001 to access this server...');
      

Use the kubectl create deployment command to create a deployment called hello-node:

      • kubectl create deployment hello-node --image=docommunity/hello-node:1.0

      You will see the following output:

      Output

      deployment.apps/hello-node created

Use the get pod command to confirm that the deployment has occurred and the app is now running on the cluster:

• kubectl get pod

      The output will show a READY status of 1/1.

      Output

NAME                          READY   STATUS    RESTARTS   AGE
hello-node-86b49779bf-9zqvn   1/1     Running   0          11s

      Use the expose deployment command to make the application available on port 9001:

      • kubectl expose deployment hello-node --type=LoadBalancer --port=9001

      The output will look like this:

      Output

      service/hello-node exposed

Use the kubectl get svc command to check that the load balancer is running:

• kubectl get svc

      The output will look like this, with your own IP addresses:

      Output

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.245.75.48   <pending>     9001:30682/TCP   4s
kubernetes   ClusterIP      10.245.0.1     <none>        443/TCP          6d

If you are using local Kubernetes without load balancer support, the external IP value for LoadBalancer will show as <pending> permanently, which is fine for the purposes of this tutorial. If you are using DigitalOcean Kubernetes, the external IP value will display an IP address after a short delay.

Next, verify that the application is running by using curl to access the load balancer at its external IP address:

• curl <external-ip>:9001

      If you’re not running a load balancer, you can use curl to access the service directly:

      • curl <servicename>.<namespace>:9001

      The output will look like this:

      Output

      Hello, Node!

Next, use the telepresence connect command to connect Telepresence to the cluster:

• telepresence connect

This allows you to reach all remote services as if they were local, so you can access the service by name:

      • curl hello-node.default:9001

      You’ll receive the same response as you did when you accessed the service via its IP:

      Output

      Hello, Node!

      The service is up and running on the cluster, and you can access it remotely. If you make any changes to the hello-node.js app, you’d need to take the following steps:

      • Modify the app.
      • Rebuild the container image.
      • Push it to a container registry.
      • Deploy to Kubernetes.

      That is a lot of steps. You could use tooling (automated pipelines, such as Skaffold) to reduce the manual work. But the steps themselves cannot be bypassed.
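For comparison, a single iteration of that conventional loop might look something like this (the image name and registry here are hypothetical):

• docker build -t registry.example.com/hello-node:1.1 .
• docker push registry.example.com/hello-node:1.1
• kubectl set image deployment/hello-node hello-node=registry.example.com/hello-node:1.1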

Now you’ll build another version of your hello-node app and use Telepresence to test it without having to build a container image, push it to a registry, or deploy it to Kubernetes.

      Step 3 — Running a New Version of the Service Locally

      In this step, you’ll modify the existing hello-node application on your local machine. You’ll then use Telepresence to route traffic to the local version with a Telepresence intercept. The intercept takes traffic intended for your cluster and reroutes it to your local version of the service, so you can continue working in your development environment.

Create a new file containing a modified version of the sample application, using nano or your preferred text editor:

• nano hello-node-v2.js

      Add the following code to the new file:

      hello-node-v2.js

      var http = require('http');
      
      var handleRequest = function(request, response) {
        console.log('Received request for URL: ' + request.url);
        response.writeHead(200, {'Content-Type': 'text/plain'});
        response.write('Hello, Node V2!');
        response.end();
      };
      
      http.createServer(handleRequest).listen(9001);
      

      Save and exit the file.

Start the service with Node:

• node hello-node-v2.js

Leave the service running, then open a new terminal window and access the service with curl:

• curl localhost:9001

      The output will look like this:

      Output

      Hello, Node V2!

This service is only running locally, however. The remote cluster is still running version 1 of hello-node. To fix that, you’ll enable an intercept to route all traffic going to the hello-node service in the cluster to the local version of the service.

      Use the intercept command to set up the intercept:

      • telepresence intercept hello-node --port 9001

      The output will look like this:

      Output

Using deployment hello-node
intercepted
    Intercept name    : hello-node
    State             : ACTIVE
    Destination       : 127.0.0.1:9001
    Volume Mount Error: sshfs is not installed on your local machine
    Intercepting      : all TCP connections

Check that the intercept has been set up correctly with the status command:

• telepresence status

      The output will look like this:

      Output

Root Daemon: Running
  Version     : v2.1.4 (api 3)
  Primary DNS : ""
  Fallback DNS: ""
User Daemon: Running
  Version           : v2.1.4 (api 3)
  Ambassador Cloud  : Logged out
  Status            : Connected
  Kubernetes server : https://7c10e553-10d1-4fee-9b7d-1ccbce4cdd34.k8s.ondigitalocean.com
  Kubernetes context: <your_kubernetes_context>
  Telepresence proxy: ON (networking to the cluster is enabled)
  Intercepts        : 1 total
    hello-node: brian@telepresence-tutorial

Now access the remote service with curl as you did previously:

• curl hello-node.default:9001

      The output will look like this:

      Output

      Hello, Node V2!

      Now, any messages sent to the service on the cluster are redirected to the local service. This is useful in the development stage, because you can avoid the deployment loop (build, push, deploy) for every individual change to your code.
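When you’re finished testing, you can stop redirecting traffic and restore normal routing to the cluster’s version of the service with Telepresence’s leave command, using the intercept name from the earlier output:

• telepresence leave hello-node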

      Conclusion

      In this tutorial, you’ve installed Telepresence on your local machine, and demonstrated how to make code changes in your local environment without having to deploy to Kubernetes every time you make a change. For more tutorials and information about Telepresence, see the Telepresence documentation.




      Creating and Running your First Ansible Playbook



      Part of the Series:
      How To Write Ansible Playbooks

Ansible is a modern configuration management tool that doesn’t require agent software on remote nodes, using only SSH and Python to communicate and execute commands on managed servers. This series will walk you through the main Ansible features that you can use to write playbooks for server automation. At the end, we’ll see a practical example of how to create a playbook to automate setting up a remote Nginx web server and deploying a static HTML website to it.

Playbooks use the YAML format to define one or more plays. A play is an ordered set of tasks arranged to automate a process, such as setting up a web server or deploying an application to production.

      In a playbook file, plays are defined as a YAML list. A typical play starts off by determining which hosts are the target of that particular setup. This is done with the hosts directive.

      Setting the hosts directive to all is a common choice because you can limit the targets of a play at execution time by running the ansible-playbook command with the -l parameter. That allows you to run the same playbook on different servers or groups without the need to change the playbook file every time.
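For example, assuming your inventory defines a group named webservers (a hypothetical group name), you could limit a run to just that group:

• ansible-playbook -i inventory playbook-01.yml -l webservers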

Start by creating a new directory in your home folder where you can save your practice playbooks. First, make sure you’re in your Ubuntu user’s home directory. From there, create a directory named ansible-practice and then navigate into that directory with the cd command:

      • cd ~
      • mkdir ansible-practice
      • cd ansible-practice

      If you followed all prerequisites, you should already have a working inventory file. You can copy that file into your new ansible-practice directory now. For instance, if you created your test inventory file in an ansible directory in your home folder, you could copy the file to the new directory with:

      • cp ~/ansible/inventory ~/ansible-practice/inventory

Next, create a new playbook file with nano or your preferred text editor:

• nano playbook-01.yml

      The following playbook defines a play targeting all hosts from a given inventory. It contains a single task to print a debug message.

      Note: We’ll learn more about tasks in the next section of this series.

      Add the following content to your playbook-01.yml file:

      ~/ansible-practice/playbook-01.yml

      ---
      - hosts: all
        tasks:
          - name: Print message
            debug:
              msg: Hello Ansible World
      

      Save and close the file when you’re done. If you’re using nano, you can do that by typing CTRL+X, then Y and ENTER to confirm.

      To try this playbook on the server(s) that you set up in your inventory file, run ansible-playbook with the same connection arguments you used when running a connection test within the introduction of this series. Here, we’ll be using an inventory file named inventory and the sammy user to connect to the remote server, but be sure to change these details to align with your own inventory file and administrative user:

      • ansible-playbook -i inventory playbook-01.yml -u sammy

      You’ll see output like this:

      Output

PLAY [all] ***********************************************************************************

TASK [Gathering Facts] ***********************************************************************
ok: [203.0.113.10]

TASK [Print message] *************************************************************************
ok: [203.0.113.10] => {
    "msg": "Hello Ansible World"
}

PLAY RECAP ***********************************************************************************
203.0.113.10               : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

You might have noticed that even though you defined only one task within your playbook, two tasks were listed in the play output. At the beginning of each play, Ansible executes an additional task by default, one that gathers information (referred to as facts) about the remote nodes. Because facts can be used in playbooks to customize the behavior of tasks, the fact-gathering task must happen before any other tasks are executed.

      We’ll learn more about Ansible facts in a later section of this series.
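As a quick preview, here’s a minimal sketch of a task that uses a gathered fact; the exact facts available depend on the managed host:

- name: Show the target's OS distribution
  debug:
    msg: "This host runs {{ ansible_facts['distribution'] }}"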


