
      How To Run a PHP Job Multiple Times in a Minute with Crontab on Ubuntu 20.04


      The author selected Girls Who Code to receive a donation as part of the Write for DOnations program.

      Introduction

In Linux, you can use the versatile cron tool to process long-running tasks in the background at specific times. While the cron daemon is great for running repetitive tasks, it has one limitation: the minimum time interval at which it can execute tasks is 1 minute.

However, in many applications it’s preferable for jobs to execute more frequently to avoid a poor user experience. For instance, if you’re using the job-queue model to schedule file-processing tasks on your website, a significant wait will have a negative impact on end users.

      Another scenario is an application that uses the job-queue model either to send text messages or emails to clients once they’ve completed a certain task in an application (for example, sending money to a recipient). If users have to wait a minute before the delivery of a confirmation message, they might think that the transaction failed and try to repeat the same transaction.

To overcome these challenges, you can program a PHP script that loops and processes tasks repeatedly for 60 seconds while it waits for the cron daemon to call it again at the next minute. Once the cron daemon calls the PHP script for the first time, the script can execute tasks at a time interval that matches the logic of your application without keeping users waiting.

      In this guide, you will create a sample cron_jobs database on an Ubuntu 20.04 server. Then, you’ll set up a tasks table and a script that executes the jobs in your table in intervals of 5 seconds using the PHP while(...){...} loop and sleep() functions.

      Prerequisites

To complete this tutorial, you will need the following:

      Step 1 — Setting Up a Database

      In this step, you’ll create a sample database and table. First, SSH to your server and log in to MySQL as root:
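
• sudo mysql -u root -p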

Enter your root password for the MySQL server and press ENTER to proceed. Then, run the following command to create a cron_jobs database:

      • CREATE DATABASE cron_jobs;

      Create a non-root user for the database. You’ll need the credentials of this user to connect to the cron_jobs database from PHP. Remember to replace EXAMPLE_PASSWORD with a strong value:

      • CREATE USER 'cron_jobs_user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'EXAMPLE_PASSWORD';
      • GRANT ALL PRIVILEGES ON cron_jobs.* TO 'cron_jobs_user'@'localhost';
      • FLUSH PRIVILEGES;

      Next, switch to the cron_jobs database:
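
• USE cron_jobs;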

      Output

      Database changed

Once you’ve selected the database, create a tasks table. In this table, you’ll insert some tasks that will be automatically executed by a cron job. Since the minimum time interval for running a cron job is 1 minute, you’ll later code a PHP script that overrides this limitation and instead executes the jobs at intervals of 5 seconds.

      For now, create your tasks table:

      • CREATE TABLE tasks (
      • task_id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      • task_name VARCHAR(50),
      • queued_at DATETIME,
      • completed_at DATETIME,
      • is_processed CHAR(1)
      • ) ENGINE = InnoDB;

Insert three records into the tasks table. Use the MySQL NOW() function in the queued_at column to record the current date and time when the tasks are queued. Also, for the completed_at column, use the MySQL CURDATE() function to set a default time of 00:00:00. Later, as tasks complete, your script will update this column:

      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 1', NOW(), CURDATE(), 'N');
      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 2', NOW(), CURDATE(), 'N');
      • INSERT INTO tasks (task_name, queued_at, completed_at, is_processed) VALUES ('TASK 3', NOW(), CURDATE(), 'N');

      Confirm the output after running each INSERT command:

      Output

      Query OK, 1 row affected (0.00 sec) ...

      Make sure the data is in place by running a SELECT statement against the tasks table:

      • SELECT task_id, task_name, queued_at, completed_at, is_processed FROM tasks;

      You will find a list of all tasks:

      Output

+---------+-----------+---------------------+---------------------+--------------+
| task_id | task_name | queued_at           | completed_at        | is_processed |
+---------+-----------+---------------------+---------------------+--------------+
|       1 | TASK 1    | 2021-03-06 06:27:19 | 2021-03-06 00:00:00 | N            |
|       2 | TASK 2    | 2021-03-06 06:27:28 | 2021-03-06 00:00:00 | N            |
|       3 | TASK 3    | 2021-03-06 06:27:36 | 2021-03-06 00:00:00 | N            |
+---------+-----------+---------------------+---------------------+--------------+
3 rows in set (0.00 sec)

The time in the completed_at column is set to 00:00:00; this column will update once the tasks are processed by the PHP script that you will create next.

      Exit from the MySQL command-line interface:
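
• EXIT;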

      Output

      Bye

      Your cron_jobs database and tasks table are now in place and you can now create a PHP script that processes the jobs.

      Step 2 — Creating a PHP Script that Runs Tasks After 5 Seconds

In this step, you’ll create a script that uses a combination of the PHP while(...){...} loop and sleep() functions to run tasks every 5 seconds.

      Open a new /var/www/html/tasks.php file in the root directory of your web server using nano:

      • sudo nano /var/www/html/tasks.php

Next, create a new try { block after a <?php tag and declare the database connection variables using the credentials you created in Step 1. Remember to replace EXAMPLE_PASSWORD with the actual password for your database user:

      /var/www/html/tasks.php

      <?php
      try {
          $db_name="cron_jobs";
          $db_user="cron_jobs_user";
          $db_password = 'EXAMPLE_PASSWORD';
          $db_host="localhost";
      

Next, instantiate a new PDO (PHP Data Object) class and set the ATTR_ERRMODE attribute to ERRMODE_EXCEPTION to catch any PDO errors. Also, set ATTR_EMULATE_PREPARES to false so that the native MySQL database engine handles prepared statements instead of PDO emulating them. Prepared statements allow you to send SQL queries and data separately, which enhances security and reduces the chances of an SQL injection attack:

      /var/www/html/tasks.php

      
    $pdo = new PDO("mysql:host=" . $db_host . ";dbname=" . $db_name, $db_user, $db_password);
          $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);  
          $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);       
      

Then, create a new variable named $loop_expiry_time and set it to the current time plus 60 seconds. Next, open a PHP while (time() < $loop_expiry_time) { statement. The idea here is to create a loop that runs until the current time (time()) reaches the value of $loop_expiry_time:

      /var/www/html/tasks.php

             
          $loop_expiry_time = time() + 60;
      
          while (time() < $loop_expiry_time) { 
      

      Next, declare a prepared SQL statement that retrieves unprocessed jobs from the tasks table:

      /var/www/html/tasks.php

         
              $data = [];
              $sql  = "select 
                       task_id
                       from tasks
                       where is_processed = :is_processed
                       ";
      

      Execute the SQL command and fetch all rows from the tasks table that have the column is_processed set to N. This means the rows are not processed:

      /var/www/html/tasks.php

        
              $data["is_processed'] = 'N';  
      
              $stmt = $pdo->prepare($sql);
              $stmt->execute($data);
      

Next, loop through the retrieved rows using a PHP while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {...} statement and create another SQL statement. This time, the SQL command updates the is_processed and completed_at columns for each processed task. This ensures that you don’t process tasks more than once:

      /var/www/html/tasks.php

        
              while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { 
                  $data_update   = [];         
                  $sql_update    = "update tasks set 
                                    is_processed  = :is_processed,
                                    completed_at  = :completed_at
                                    where task_id = :task_id                                 
                                    ";
      
                  $data_update   = [                
                                   'is_processed' => 'Y',                          
                                   'completed_at' => date("Y-m-d H:i:s"),
                                   'task_id'      => $row['task_id']                         
                                   ];
                  $stmt = $pdo->prepare($sql_update);
                  $stmt->execute($data_update);
              }
      

      Note: If you have a large queue to be processed (for example, 100,000 records per second), you might consider queueing jobs in a Redis Server since it is faster than MySQL when it comes to implementing the job-queue model. Nevertheless, this guide will process a smaller dataset.

Before you close the first PHP while (time() < $loop_expiry_time) { statement, include a sleep(5); statement to pause job execution for 5 seconds and free up server resources.

      You may change the 5 seconds period depending on your business logic and how fast you want tasks to execute. For instance, if you would like the tasks to be processed 3 times in a minute, set this value to 20 seconds.

      Remember to catch any PDO error messages inside a } catch (PDOException $ex) { echo $ex->getMessage(); } block:

      /var/www/html/tasks.php

                       sleep(5); 
      
              }       
      
      } catch (PDOException $ex) {
          echo $ex->getMessage(); 
      }
      

      Your complete tasks.php file will be as follows:

      /var/www/html/tasks.php

      <?php
      try {
          $db_name="cron_jobs";
          $db_user="cron_jobs_user";
          $db_password = 'EXAMPLE_PASSWORD';
          $db_host="localhost";
      
    $pdo = new PDO("mysql:host=" . $db_host . ";dbname=" . $db_name, $db_user, $db_password);
          $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);  
          $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);               
      
          $loop_expiry_time = time() + 60;
      
          while (time() < $loop_expiry_time) { 
              $data = [];
              $sql  = "select 
                       task_id
                       from tasks
                       where is_processed = :is_processed
                       ";
      
              $data["is_processed'] = 'N';             
      
              $stmt = $pdo->prepare($sql);
              $stmt->execute($data);
      
              while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { 
                  $data_update   = [];         
                  $sql_update    = "update tasks set 
                                    is_processed  = :is_processed,
                                    completed_at  = :completed_at
                                    where task_id = :task_id                                 
                                    ";
      
                  $data_update   = [                
                                   'is_processed' => 'Y',                          
                                   'completed_at' => date("Y-m-d H:i:s"),
                                   'task_id'      => $row['task_id']                         
                                   ];
                  $stmt = $pdo->prepare($sql_update);
                  $stmt->execute($data_update);
              }   
      
              sleep(5); 
      
              }       
      
      } catch (PDOException $ex) {
          echo $ex->getMessage(); 
      }
      

      Save the file by pressing CTRL + X, Y then ENTER.

      Once you’ve completed coding the logic in the /var/www/html/tasks.php file, you’ll schedule the crontab daemon to execute the file after every 1 minute in the next step.

      Step 3 — Scheduling the PHP Script to Run After 1 Minute

      In Linux, you can schedule jobs to run automatically after a stipulated time by entering a command into the crontab file. In this step, you will instruct the crontab daemon to run your /var/www/html/tasks.php script after every minute. So, open the /etc/crontab file using nano:
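
• sudo nano /etc/crontab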

Then add the following line toward the end of the file to execute http://localhost/tasks.php every minute:

      /etc/crontab

      ...
      * * * * * root /usr/bin/wget -O - http://localhost/tasks.php
      

      Save and close the file.

      This guide assumes that you have a basic knowledge of how cron jobs work. Consider reading our guide on How to Use Cron to Automate Tasks on Ubuntu.

      As earlier indicated, although the cron daemon runs the tasks.php file after every 1 minute, once the file is executed for the first time, it will loop through the open tasks for another 60 seconds. By the time the loop time expires, the cron daemon will execute the file again and the process will continue.

      After updating and closing the /etc/crontab file, the crontab daemon should begin executing the MySQL tasks that you inserted in the tasks table immediately. To confirm whether everything is working as expected, you’ll query your cron_jobs database next.

      Step 4 — Confirming Job Execution

      In this step, you will open your database one more time to check whether the tasks.php file is processing queued jobs when executed automatically by the crontab.

      Log back in to your MySQL server as root:
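
• sudo mysql -u root -p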

Enter your MySQL server’s root password and press ENTER to proceed. Then, switch to the cron_jobs database:
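
• USE cron_jobs;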

      Output

      Database changed

      Run a SELECT statement against the tasks table:

      • SELECT task_id, task_name, queued_at, completed_at, is_processed FROM tasks;

      You will receive output similar to the following. In the completed_at column, tasks have been processed at intervals of 5 seconds. Also, the tasks have been marked as completed since the is_processed column is now set to Y, which means YES.

      Output

+---------+-----------+---------------------+---------------------+--------------+
| task_id | task_name | queued_at           | completed_at        | is_processed |
+---------+-----------+---------------------+---------------------+--------------+
|       1 | TASK 1    | 2021-03-06 06:27:19 | 2021-03-06 06:30:01 | Y            |
|       2 | TASK 2    | 2021-03-06 06:27:28 | 2021-03-06 06:30:06 | Y            |
|       3 | TASK 3    | 2021-03-06 06:27:36 | 2021-03-06 06:30:11 | Y            |
+---------+-----------+---------------------+---------------------+--------------+
3 rows in set (0.00 sec)

      This confirms that your PHP script is working as expected; you have run tasks in a shorter time interval by overriding the limitation of the 1 minute time period set by the crontab daemon.

      Conclusion

In this guide, you set up a sample database on an Ubuntu 20.04 server. Then, you created jobs in a table and ran them at intervals of 5 seconds using the PHP while(...){...} loop and sleep() functions. Use the logic in this tutorial the next time you implement a job-queue-based application where tasks need to run multiple times within a 1-minute period.

      For more PHP tutorials, check out our PHP topic page.




      How To Use subprocess to Run External Programs in Python 3


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Python 3 includes the subprocess module for running external programs and reading their outputs in your Python code.

      You might find subprocess useful if you want to use another program on your computer from within your Python code. For example, you might want to invoke git from within your Python code to retrieve files in your project that are tracked in git version control. Since any program you can access on your computer can be controlled by subprocess, the examples shown here will be applicable to any external program you might want to invoke from your Python code.

      subprocess includes several classes and functions, but in this tutorial we’ll cover one of subprocess’s most useful functions: subprocess.run. We’ll review its different uses and main keyword arguments.

      Prerequisites

      To get the most out of this tutorial, it is recommended to have some familiarity with programming in Python 3. You can review these tutorials for the necessary background information:

      Running an External Program

      You can use the subprocess.run function to run an external program from your Python code. First, though, you need to import the subprocess and sys modules into your program:

      import subprocess
      import sys
      
      result = subprocess.run([sys.executable, "-c", "print('ocean')"])
      

      If you run this, you will receive output like the following:

      Output

      ocean

      Let’s review this example:

      • sys.executable is the absolute path to the Python executable that your program was originally invoked with. For example, sys.executable might be a path like /usr/local/bin/python.
      • subprocess.run is given a list of strings consisting of the components of the command we are trying to run. Since the first string we pass is sys.executable, we are instructing subprocess.run to execute a new Python program.
      • The -c component is a Python command-line option that allows you to pass a string with an entire Python program to execute. In our case, we pass a program that prints the string ocean.

      You can think of each entry in the list that we pass to subprocess.run as being separated by a space. For example, [sys.executable, "-c", "print('ocean')"] translates roughly to /usr/local/bin/python -c "print('ocean')". Note that subprocess automatically quotes the components of the command before trying to run them on the underlying operating system so that, for example, you can pass a filename that has spaces in it.
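
For example, here is a minimal sketch (using a hypothetical filename, my file.txt) that passes an argument containing a space as a single list element, with no manual shell quoting required:

import subprocess
import sys

# "my file.txt" reaches the child process as one argument, even though it
# contains a space, because each list element maps to exactly one argument.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "my file.txt"]
)

Running this prints my file.txt on its own line.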

      Warning: Never pass untrusted input to subprocess.run. Since subprocess.run has the ability to perform arbitrary commands on your computer, malicious actors can use it to manipulate your computer in unexpected ways.

      Capturing Output From an External Program

      Now that we can invoke an external program using subprocess.run, let’s see how we can capture output from that program. For example, this process could be useful if we wanted to use git ls-files to output all your files currently stored under version control.

      Note: The examples shown in this section require Python 3.7 or higher. In particular, the capture_output and text keyword arguments were added in Python 3.7 when it was released in June 2018.

      Let’s add to our previous example:

      import subprocess
      import sys
      
      result = subprocess.run(
          [sys.executable, "-c", "print('ocean')"], capture_output=True, text=True
      )
      print("stdout:", result.stdout)
      print("stderr:", result.stderr)
      

      If we run this code, we’ll receive output like the following:

      Output

stdout: ocean

stderr:

      This example is largely the same as the one introduced in the first section: we are still running a subprocess to print ocean. Importantly, however, we pass the capture_output=True and text=True keyword arguments to subprocess.run.

      subprocess.run returns a subprocess.CompletedProcess object that is bound to result. The subprocess.CompletedProcess object includes details about the external program’s exit code and its output. capture_output=True ensures that result.stdout and result.stderr are filled in with the corresponding output from the external program. By default, result.stdout and result.stderr are bound as bytes, but the text=True keyword argument instructs Python to instead decode the bytes into strings.

      In the output section, stdout is ocean (plus the trailing newline that print adds implicitly), and we have no stderr.

      Let’s try an example that produces a non-empty value for stderr:

      import subprocess
      import sys
      
      result = subprocess.run(
          [sys.executable, "-c", "raise ValueError('oops')"], capture_output=True, text=True
      )
      print("stdout:", result.stdout)
      print("stderr:", result.stderr)
      

      If we run this code, we receive output like the following:

      Output

stdout: 
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: oops

      This code runs a Python subprocess that immediately raises a ValueError. When we inspect the final result, we see nothing in stdout and a Traceback of our ValueError in stderr. This is because by default Python writes the Traceback of the unhandled exception to stderr.

      Raising an Exception on a Bad Exit Code

      Sometimes it’s useful to raise an exception if a program we run exits with a bad exit code. Programs that exit with a zero code are considered successful, but programs that exit with a non-zero code are considered to have encountered an error. As an example, this pattern could be useful if we wanted to raise an exception in the event that we run git ls-files in a directory that wasn’t actually a git repository.

      We can use the check=True keyword argument to subprocess.run to have an exception raised if the external program returns a non-zero exit code:

      import subprocess
      import sys
      
      result = subprocess.run([sys.executable, "-c", "raise ValueError('oops')"], check=True)
      

      If we run this code, we receive output like the following:

      Output

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: oops
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-c', "raise ValueError('oops')"]' returned non-zero exit status 1.

      This output shows that we ran a subprocess that raised an error, which is printed in stderr in our terminal. Then subprocess.run dutifully raised a subprocess.CalledProcessError on our behalf in our main Python program.

      Alternatively, the subprocess module also includes the subprocess.CompletedProcess.check_returncode method, which we can invoke for similar effect:

      import subprocess
      import sys
      
      result = subprocess.run([sys.executable, "-c", "raise ValueError('oops')"])
      result.check_returncode()
      

      If we run this code, we’ll receive:

      Output

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ValueError: oops
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/subprocess.py", line 444, in check_returncode
    raise CalledProcessError(self.returncode, self.args, self.stdout,
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-c', "raise ValueError('oops')"]' returned non-zero exit status 1.

      Since we didn’t pass check=True to subprocess.run, we successfully bound a subprocess.CompletedProcess instance to result even though our program exited with a non-zero code. Calling result.check_returncode(), however, raises a subprocess.CalledProcessError because it detects the completed process exited with a bad code.

      Using timeout to Exit Programs Early

      subprocess.run includes the timeout argument to allow you to stop an external program if it is taking too long to execute:

      import subprocess
      import sys
      
      result = subprocess.run([sys.executable, "-c", "import time; time.sleep(2)"], timeout=1)
      

      If we run this code, we’ll receive output like the following:

      Output

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/subprocess.py", line 491, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/usr/local/lib/python3.8/subprocess.py", line 1024, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/local/lib/python3.8/subprocess.py", line 1892, in _communicate
    self.wait(timeout=self._remaining_time(endtime))
  File "/usr/local/lib/python3.8/subprocess.py", line 1079, in wait
    return self._wait(timeout=timeout)
  File "/usr/local/lib/python3.8/subprocess.py", line 1796, in _wait
    raise TimeoutExpired(self.args, timeout)
subprocess.TimeoutExpired: Command '['/usr/local/bin/python', '-c', 'import time; time.sleep(2)']' timed out after 0.9997982999999522 seconds

      The subprocess we tried to run used the time.sleep function to sleep for 2 seconds. However, we passed the timeout=1 keyword argument to subprocess.run to time out our subprocess after 1 second. This explains why our call to subprocess.run ultimately raised a subprocess.TimeoutExpired exception.

      Note that the timeout keyword argument to subprocess.run is approximate. Python will make a best effort to kill the subprocess after the timeout number of seconds, but it won’t necessarily be exact.

      Passing Input to Programs

      Sometimes programs expect input to be passed to them via stdin.

      The input keyword argument to subprocess.run allows you to pass data to the stdin of the subprocess. For example:

      import subprocess
      import sys
      
      result = subprocess.run(
          [sys.executable, "-c", "import sys; print(sys.stdin.read())"], input=b"underwater"
      )
      

      We’ll receive output like the following after running this code:

      Output

      underwater

      In this case, we passed the bytes underwater to input. Our target subprocess used sys.stdin to read the passed in stdin (underwater) and printed it out in our output.

The input keyword argument can be useful if you want to chain multiple subprocess.run calls together, passing the output of one program as the input to another.
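
As a brief sketch of that pattern, the following chains two illustrative subprocesses: the first prints some text, and its captured stdout is fed to the second through input (this assumes Python 3.7 or higher for capture_output and text):

import subprocess
import sys

# First subprocess: write some text to stdout and capture it as a string.
first = subprocess.run(
    [sys.executable, "-c", "print('ocean')"], capture_output=True, text=True
)

# Second subprocess: read stdin and print it in uppercase, using the first
# subprocess's stdout as its input.
second = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"],
    input=first.stdout,
    capture_output=True,
    text=True,
)

print("chained output:", second.stdout)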

      Conclusion

      The subprocess module is a powerful part of the Python standard library that lets you run external programs and inspect their outputs easily. In this tutorial, you have learned to use subprocess.run to control external programs, pass input to them, parse their output, and check their return codes.

      The subprocess module exposes additional classes and utilities that we did not cover in this tutorial. Now that you have a baseline, you can use the subprocess module’s documentation to learn more about other available classes and utilities.




      How To Run Multiple PHP Versions on One Server Using Apache and PHP-FPM on Ubuntu 20.04



      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

The Apache web server uses virtual hosts to manage multiple domains on a single instance. Similarly, PHP-FPM uses a daemon to manage multiple PHP versions on a single instance. Together, you can use Apache and PHP-FPM to host multiple PHP web applications, each using a different version of PHP, all on the same server, and all at the same time. This is useful because different applications may require different versions of PHP, but some server stacks, like a regularly configured LAMP stack, can only manage one. Combining Apache with PHP-FPM is also a more cost-efficient solution than hosting each application on its own instance.

      PHP-FPM also offers configuration options for stderr and stdout logging, emergency restarts, and adaptive process spawning, which is useful for heavy-loaded sites. In fact, using Apache with PHP-FPM is one of the best stacks for hosting PHP applications, especially when it comes to performance.


      In this tutorial you will set up two PHP sites on a single instance. Each site will use its own domain, and each domain will deploy its own version of PHP. The first, site1.your_domain, will deploy PHP 7.2. The second, site2.your_domain, will deploy PHP 7.3.

      Prerequisites

      Step 1 — Installing PHP Versions 7.2 and 7.3 with PHP-FPM

      With the prerequisites completed, you will now install PHP versions 7.2 and 7.3, as well as PHP-FPM and several additional extensions. But to accomplish this, you will first need to add the Ondrej PHP repository to your system.

      Execute the apt-get command to install software-properties-common:

      • sudo apt-get install software-properties-common -y

The software-properties-common package provides the add-apt-repository command-line utility, which you will use to add the ondrej/php PPA (Personal Package Archive) repository.

      Now add the ondrej/php repository to your system. The ondrej/php PPA will have more up-to-date versions of PHP than the official Ubuntu repositories, and it will also allow you to install multiple versions of PHP in the same system:

      • sudo add-apt-repository ppa:ondrej/php

Update your package list to pick up the new repository:
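
• sudo apt-get update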

Next, install php7.2, php7.2-fpm, php7.2-mysql, libapache2-mod-php7.2, and libapache2-mod-fcgid with the following command:

      • sudo apt-get install php7.2 php7.2-fpm php7.2-mysql libapache2-mod-php7.2 libapache2-mod-fcgid -y
      • php7.2 is a metapackage used to run PHP applications.
      • php7.2-fpm provides the Fast Process Manager interpreter that runs as a daemon and receives Fast/CGI requests.
      • php7.2-mysql connects PHP to the MySQL database.
      • libapache2-mod-php7.2 provides the PHP module for the Apache webserver.
      • libapache2-mod-fcgid contains a mod_fcgid that starts a number of CGI program instances to handle concurrent requests.

Now repeat the process for PHP version 7.3. Install php7.3, php7.3-fpm, php7.3-mysql, and libapache2-mod-php7.3:

      • sudo apt-get install php7.3 php7.3-fpm php7.3-mysql libapache2-mod-php7.3 -y

      After installing both PHP versions, start the php7.2-fpm service:

      • sudo systemctl start php7.2-fpm

      Next, verify the status of php7.2-fpm service:

      • sudo systemctl status php7.2-fpm

      You’ll see the following output:

      Output

      • ● php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
      • Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
      • Active: active (running) since Fri 2020-06-05 11:25:07 UTC; 1min 38s ago
      • Docs: man:php-fpm7.2(8)
      • Main PID: 13703 (php-fpm7.2)
      • Status: "Processes active: 0, idle: 2, Requests: 0, slow: 0, Traffic: 0req/sec"
      • Tasks: 3 (limit: 2353)
      • Memory: 6.2M
      • CGroup: /system.slice/php7.2-fpm.service
      • ├─13703 php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
      • ├─13719 php-fpm: pool www
      • └─13720 php-fpm: pool www
      • Jun 05 11:25:07 ubuntu systemd[1]: Starting The PHP 7.2 FastCGI Process Manager...
      • Jun 05 11:25:07 ubuntu systemd[1]: Started The PHP 7.2 FastCGI Process Manager.

      Repeating this process, now start the php7.3-fpm service:

      • sudo systemctl start php7.3-fpm

      Next, verify the status of php7.3-fpm service:

      • sudo systemctl status php7.3-fpm

      You’ll see the following output:

      Output

      • ● php7.3-fpm.service - The PHP 7.3 FastCGI Process Manager
      • Loaded: loaded (/lib/systemd/system/php7.3-fpm.service; enabled; vendor preset: enabled)
      • Active: active (running) since Fri 2020-06-05 11:26:33 UTC; 56s ago
      • Docs: man:php-fpm7.3(8)
      • Process: 23470 ExecStartPost=/usr/lib/php/php-fpm-socket-helper install /run/php/php-fpm.sock /etc/php/7.3/fpm/pool.d/www.conf 73 (code=ex>
      • Main PID: 23452 (php-fpm7.3)
      • Status: "Processes active: 0, idle: 2, Requests: 0, slow: 0, Traffic: 0req/sec"
      • Tasks: 3 (limit: 2353)
      • Memory: 7.1M
      • CGroup: /system.slice/php7.3-fpm.service
      • ├─23452 php-fpm: master process (/etc/php/7.3/fpm/php-fpm.conf)
      • ├─23468 php-fpm: pool www
      • └─23469 php-fpm: pool www
      • Jun 05 11:26:33 ubuntu systemd[1]: Starting The PHP 7.3 FastCGI Process Manager...
      • Jun 05 11:26:33 ubuntu systemd[1]: Started The PHP 7.3 FastCGI Process Manager.

      Lastly, you must enable several modules so that your Apache2 service can work with multiple PHP versions:

      • sudo a2enmod actions fcgid alias proxy_fcgi
      • actions is used for executing CGI scripts based on media type or request method.

      • fcgid is a high performance alternative to mod_cgi that starts a sufficient number of instances of the CGI program to handle concurrent requests.

      • alias provides for the mapping of different parts of the host filesystem in the document tree, and for URL redirection.

      • proxy_fcgi allows Apache to forward requests to PHP-FPM.

      Now restart the Apache service to apply your changes:

      • sudo systemctl restart apache2

      At this point you have installed two PHP versions on your server. Next, you will create a directory structure for each website you want to deploy.

      Step 2 — Creating Directory Structures for Both Websites

      In this section, you will create a document root directory and an index page for each of your two websites.

      First, create document root directories for both site1.your_domain and site2.your_domain:

      • sudo mkdir /var/www/site1.your_domain
      • sudo mkdir /var/www/site2.your_domain

By default, the Apache web server runs as the www-data user and www-data group. To ensure that your website root directories have the correct ownership and permissions, execute the following commands:

      • sudo chown -R www-data:www-data /var/www/site1.your_domain
      • sudo chown -R www-data:www-data /var/www/site2.your_domain
      • sudo chmod -R 755 /var/www/site1.your_domain
      • sudo chmod -R 755 /var/www/site2.your_domain

      Next you will create an info.php file inside each website root directory. This will display each website’s PHP version information. Begin with site1:

      • sudo nano /var/www/site1.your_domain/info.php

      Add the following line:

      /var/www/site1.your_domain/info.php

      <?php phpinfo(); ?>
      

      Save and close the file. Now copy the info.php file you created to site2:

      • sudo cp /var/www/site1.your_domain/info.php /var/www/site2.your_domain/info.php

      Your web server should now have the document root directories that each site requires to serve data to visitors. Next, you will configure your Apache web server to work with two different PHP versions.

      Step 3 — Configuring Apache for Both Websites

      In this section, you will create two virtual host configuration files. This will enable your two websites to work simultaneously with two different PHP versions.

      In order for Apache to serve this content, it is necessary to create a virtual host file with the correct directives. Instead of modifying the default configuration file located at /etc/apache2/sites-available/000-default.conf, you’ll create two new ones inside the directory /etc/apache2/sites-available/.

      First create a new virtual host configuration file for the website site1.your_domain. Here you will direct Apache to render content using php7.2:

      • sudo nano /etc/apache2/sites-available/site1.your_domain.conf

      Add the following content. Make sure the website directory path, server name, and PHP version match your setup:

      /etc/apache2/sites-available/site1.your_domain.conf

      
      <VirtualHost *:80>
           ServerAdmin admin@site1.your_domain
           ServerName site1.your_domain
           DocumentRoot /var/www/site1.your_domain
           DirectoryIndex info.php
      
           <Directory /var/www/site1.your_domain>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
           </Directory>
      
    <FilesMatch \.php$>
        # For Apache version 2.4.10 and above, use SetHandler to run PHP as a FastCGI process server
               SetHandler "proxy:unix:/run/php/php7.2-fpm.sock|fcgi://localhost"
          </FilesMatch>
      
           ErrorLog ${APACHE_LOG_DIR}/site1.your_domain_error.log
           CustomLog ${APACHE_LOG_DIR}/site1.your_domain_access.log combined
      </VirtualHost>
      

      In this file you updated the DocumentRoot to your new directory and ServerAdmin to an email that the your_domain site administrator can access. You’ve also updated ServerName, which establishes the base domain for this virtual host configuration, and you’ve added a SetHandler directive to run PHP as a fastCGI process server.

      Save and close the file.

Next, create a new virtual host configuration file for the website site2.your_domain. You will configure this virtual host to deploy php7.3:

      • sudo nano /etc/apache2/sites-available/site2.your_domain.conf

      Add the following content. Again, make sure the website directory path, server name, and PHP version match your unique information:

      /etc/apache2/sites-available/site2.your_domain.conf

      <VirtualHost *:80>
           ServerAdmin admin@site2.your_domain
           ServerName site2.your_domain
           DocumentRoot /var/www/site2.your_domain
           DirectoryIndex info.php
      
           <Directory /var/www/site2.your_domain>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
           </Directory>
      
    <FilesMatch \.php$>
        # Apache 2.4.10 and above can proxy to a unix socket
               SetHandler "proxy:unix:/run/php/php7.3-fpm.sock|fcgi://localhost"
          </FilesMatch>
      
           ErrorLog ${APACHE_LOG_DIR}/site2.your_domain_error.log
           CustomLog ${APACHE_LOG_DIR}/site2.your_domain_access.log combined
      </VirtualHost>
      

      Save and close the file when you are finished. Then, check the Apache configuration file for any syntax errors:

      • sudo apachectl configtest

      You’ll see the following output:

      Output

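Syntax OK
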
      Next, enable both virtual host configuration files with the following commands:

      • sudo a2ensite site1.your_domain
      • sudo a2ensite site2.your_domain

Now disable the default site, since you won’t need it:

      • sudo a2dissite 000-default.conf

      Finally, restart the Apache service to implement your changes:

      • sudo systemctl restart apache2

      Now that you have configured Apache to serve each site, you will test them to make sure the proper PHP versions are running.

      Step 4 — Testing Both Websites

      At this point, you have configured two websites to run two different versions of PHP. Now test the results.

      Open your web browser and visit both sites http://site1.your_domain and http://site2.your_domain. You will see two pages that look like this:

      PHP 7.2 info page
      PHP 7.3 info page

      Note the titles. The first page indicates that site1.your_domain deployed PHP version 7.2. The second indicates that site2.your_domain deployed PHP version 7.3.

      Now that you’ve tested your sites, remove the info.php files. Because they contain sensitive information about your server and are accessible to unauthorized users, they pose a security vulnerability. To remove both files, run the following commands:

      • sudo rm -rf /var/www/site1.your_domain/info.php
      • sudo rm -rf /var/www/site2.your_domain/info.php

      You now have a single Ubuntu 20.04 server handling two websites with two different PHP versions. PHP-FPM, however, is not limited to this one application.

      Conclusion

      You have now combined virtual hosts and PHP-FPM to serve multiple websites and multiple versions of PHP on a single server. The only practical limit on the number of PHP sites and PHP versions that your Apache service can handle is the processing power of your instance.

From here you might consider exploring PHP-FPM’s more advanced features, like its adaptive process spawning or how it can log stdout and stderr. Alternatively, you could now secure your websites. To accomplish this, you can follow our tutorial on how to secure your sites with free TLS/SSL certificates from Let’s Encrypt.


