
      September 2022

      Working with CORS Policies on Linode Object Storage


      Linode Object Storage offers a globally-available, S3-compatible storage solution. Whether you are storing critical backup files or data for a static website, S3 object storage can efficiently answer the call.

      To make the most of object storage, you may need to access the data from other domains. For instance, your dynamic applications may opt to use S3 for static file storage.

      This leaves you dealing with Cross-Origin Resource Sharing, or CORS. However, it’s often not clear how to effectively navigate CORS policies or deal with issues as they come up.

      This tutorial aims to clarify how to work with CORS and S3. It covers tools and approaches for effectively reviewing and managing CORS policies for Linode Object Storage or most other S3-compatible storage solutions.

      CORS and S3 Storage – What You Need to Know

      Linode Object Storage is an S3-compatible service; S3 stands for Simple Storage Service. With S3, data gets stored as objects in “buckets.” This gives S3 a flat approach to storage, in contrast to hierarchical and logistically more complicated structures like traditional file systems. Objects stored in S3 can also be given rich metadata.

      CORS defines how clients and servers from different domains may share resources. Generally, CORS policies restrict access to resources to requests from the same domain. By managing your CORS policies, you can open up services to requests from specified origin domains, or from any domains whatsoever.

      An S3-compatible service like Linode Object Storage can provide excellent storage for applications. However, you want to keep your data as secure as possible while still allowing your applications the access they need.

      This is where managing CORS policies on your object storage service becomes imperative. Applications and other tools often need to access stored resources from particular domains. Implementing specific CORS policies controls what kinds of requests, and responses, each origin domain is allowed.
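
      To get a concrete sense of what this looks like on the wire, the sketch below uses cURL to imitate a cross-origin request by sending an Origin header. The bucket URL here is a placeholder; substitute an object in your own bucket and its cluster endpoint. If the bucket’s CORS policy allows that origin, the response headers should include Access-Control-Allow-Origin; if it does not, that header is simply absent:

        curl -s -D - -o /dev/null \
            -H "Origin: https://www.example.com" \
            "https://my-bucket.us-southeast-1.linodeobjects.com/example.txt"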

      Working with CORS Policies on Linode Object Storage

      One of the best tools for managing policies on your S3, including Linode Object Storage, is s3cmd. Follow along with our guide
      Using S3cmd with Object Storage to:

      1. Install s3cmd on your system. The installation takes place on the system from which you intend to manage your S3 instance.

      2. Configure s3cmd for your Linode Object Storage instance. This includes indicating the instance’s access key, endpoint, etc. A sample configuration is sketched below.
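
      After configuration, s3cmd stores its settings in the ~/.s3cfg file. As a rough sketch, a working configuration for Linode Object Storage includes entries along these lines; the keys shown are placeholders, and the endpoint depends on the region your bucket lives in (us-southeast-1 here, to match the examples later in this tutorial):

        File: ~/.s3cfg

        [default]
        access_key = EXAMPLE_ACCESS_KEY
        secret_key = EXAMPLE_SECRET_KEY
        host_base = us-southeast-1.linodeobjects.com
        host_bucket = %(bucket)s.us-southeast-1.linodeobjects.com
        use_https = True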

      You can verify the connection to your object storage instance with the command to list your buckets. This example lists the one bucket used for this tutorial, example-cors-bucket:

      s3cmd ls
      
      2022-09-24 16:13  s3://example-cors-bucket

      Once you have s3cmd set up for your S3 instance, use it to follow along with the upcoming sections of this tutorial. These show you how to use the tool to review and deploy CORS policies.

      Reviewing CORS Policies for Linode Object Storage

      You can get the current CORS policies for your S3 bucket using s3cmd’s info command. The command provides general information on the designated bucket, including its policies:

      s3cmd info s3://example-cors-bucket
      
      s3://example-cors-bucket/ (bucket):
         Location:  default
         Payer:     BucketOwner
         Expiration Rule: none
         Policy:    none
         CORS:      <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedMethod>GET</AllowedMethod><AllowedMethod>PUT</AllowedMethod><AllowedMethod>DELETE</AllowedMethod><AllowedMethod>HEAD</AllowedMethod><AllowedMethod>POST</AllowedMethod><AllowedOrigin>*</AllowedOrigin><AllowedHeader>*</AllowedHeader></CORSRule></CORSConfiguration>
         ACL:       31ffbc26-d6ed-4bc3-8a14-ad78fe8f95b6: FULL_CONTROL

      This bucket already has a CORS policy in place. This is because it was set up with the CORS Enabled setting using the Linode Cloud Manager web interface.

      The basic CORS policy above is fairly permissive, allowing access for any request method from any domain. Keep reading to see how you can fine-tune such policies to better fit your particular needs.

      Deploying CORS Policies on Linode Object Storage

      As you can see above, the Linode Cloud Manager can set up a general CORS policy for your bucket. However, if you need more fine-grained control, you need to deploy custom CORS policies.

      Creating CORS policies follows a similar methodology to the one outlined in our
      Define Access and Permissions using Bucket Policies tutorial.

      These next sections break down the particular fields needed for CORS policies and how each affects your bucket’s availability.

      Configuring Policies

      The overall structure for CORS policies on S3 looks like the following. While policies on your object storage instance can generally be set with JSON or XML, CORS policies must use the XML format:

      File: cors_policies.xml
      
      <CORSConfiguration>
        <CORSRule>
          <AllowedHeader>*</AllowedHeader>
      
          <AllowedMethod>GET</AllowedMethod>
          <AllowedMethod>PUT</AllowedMethod>
          <AllowedMethod>POST</AllowedMethod>
          <AllowedMethod>DELETE</AllowedMethod>
          <AllowedMethod>HEAD</AllowedMethod>
      
          <AllowedOrigin>*</AllowedOrigin>
      
          <ExposeHeader>*</ExposeHeader>
      
          <MaxAgeSeconds>3000</MaxAgeSeconds>
        </CORSRule>
      </CORSConfiguration>

      To break this structure down:

      • The policy introduces a list of one or more <CORSRule> elements within a <CORSConfiguration> element. Each <CORSRule> element contains policy details.

      • Policies tend to have some combination of the five types of elements shown in the example above.

        The <AllowedHeader>, <AllowedMethod>, and <AllowedOrigin> elements are almost always present. Further, there may be multiple of these elements within a single <CORSRule>.

        The other two elements, <ExposeHeader> and <MaxAgeSeconds>, are optional. There can be multiple <ExposeHeader> elements, but only one <MaxAgeSeconds>.

      • <AllowedHeader> lets you specify request headers allowed for the given policy. You can find a list of commonly used request headers in AWS’s
        Common Request Headers documentation.

      • <AllowedMethod> lets you specify request methods that the given policy applies to. The full range of supported HTTP request methods is shown in the example above.

      • <AllowedOrigin> lets you specify request origins for the policy. These are the domains from which cross-origin requests can be made.

      • <ExposeHeader> can specify which response headers the policy allows to be exposed. You can find a list of commonly used response headers in AWS’s
        Common Response Headers documentation.

      • <MaxAgeSeconds> can specify the amount of time, in seconds, that browsers are allowed to cache the response to preflight requests. Having this cache allows the browser to repeat the original requests without having to send another preflight request.

      Example CORS Policies

      To give a more concrete idea of how you can work with CORS policies, the following are two additional examples. The first is another simple but more limited policy, while the second is a more complicated configuration containing two rules.

      • First, a public access read-only policy. This lets any origin, with any request headers, make GET and HEAD requests to the bucket. However, the policy does not expose custom response headers.

        File: cors_policies.xml
        
        <CORSConfiguration>
          <CORSRule>
            <AllowedHeader>*</AllowedHeader>
        
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>HEAD</AllowedMethod>
        
            <AllowedOrigin>*</AllowedOrigin>
          </CORSRule>
        </CORSConfiguration>
            
      • Next, a set of policies for fine control over requests from example.com. The <AllowedOrigin> elements specify the range of possible example.com domains. The two policies distinguish the kinds of headers allowed based on the kinds of request methods.

        File: cors_policies.xml
        
        <CORSConfiguration>
          <CORSRule>
            <AllowedHeader>Authorization</AllowedHeader>
        
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>HEAD</AllowedMethod>
        
            <AllowedOrigin>http://example.com</AllowedOrigin>
            <AllowedOrigin>http://*.example.com</AllowedOrigin>
            <AllowedOrigin>https://example.com</AllowedOrigin>
            <AllowedOrigin>https://*.example.com</AllowedOrigin>
        
            <ExposeHeader>Access-Control-Allow-Origin</ExposeHeader>
        
            <MaxAgeSeconds>3000</MaxAgeSeconds>
          </CORSRule>
          <CORSRule>
            <AllowedHeader>Authorization</AllowedHeader>
            <AllowedHeader>Origin</AllowedHeader>
            <AllowedHeader>Content-*</AllowedHeader>
        
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>DELETE</AllowedMethod>
        
            <AllowedOrigin>http://example.com</AllowedOrigin>
            <AllowedOrigin>http://*.example.com</AllowedOrigin>
            <AllowedOrigin>https://example.com</AllowedOrigin>
            <AllowedOrigin>https://*.example.com</AllowedOrigin>
        
            <ExposeHeader>ETag</ExposeHeader>
        
            <MaxAgeSeconds>3000</MaxAgeSeconds>
          </CORSRule>
        </CORSConfiguration>
            

      Deploying Policies

      The next step is to actually deploy your CORS policies. Once you do, your S3 bucket starts following them to determine what origins to allow and what request and response information to permit.

      Follow these steps to put your CORS policies into practice on your S3 instance.

      1. Save your CORS policy into an XML file. This example uses a file named cors_policies.xml which contains the second example policy XML above.

      2. Use s3cmd’s setcors command to deploy the CORS policies to the bucket. This command takes the policy XML file and the bucket identifier as arguments:

        s3cmd setcors cors_policies.xml s3://example-cors-bucket
        
      3. Verify the new CORS policies using the info command as shown earlier in this tutorial:

        s3cmd info s3://example-cors-bucket
        
        s3://example-cors-bucket/ (bucket):
           Location:  default
           Payer:     BucketOwner
           Expiration Rule: none
           Policy:    none
           CORS:      <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedMethod>GET</AllowedMethod><AllowedMethod>HEAD</AllowedMethod><AllowedOrigin>http://*.example.com</AllowedOrigin><AllowedOrigin>http://example.com</AllowedOrigin><AllowedOrigin>https://*.example.com</AllowedOrigin><AllowedOrigin>https://example.com</AllowedOrigin><AllowedHeader>Authorization</AllowedHeader><MaxAgeSeconds>3000</MaxAgeSeconds><ExposeHeader>Access-Control-Allow-Origin</ExposeHeader></CORSRule><CORSRule><AllowedMethod>PUT</AllowedMethod><AllowedMethod>DELETE</AllowedMethod><AllowedMethod>POST</AllowedMethod><AllowedOrigin>http://*.example.com</AllowedOrigin><AllowedOrigin>http://example.com</AllowedOrigin><AllowedOrigin>https://*.example.com</AllowedOrigin><AllowedOrigin>https://example.com</AllowedOrigin><AllowedHeader>Authorization</AllowedHeader><AllowedHeader>Content-*</AllowedHeader><AllowedHeader>Origin</AllowedHeader><MaxAgeSeconds>3000</MaxAgeSeconds><ExposeHeader>ETag</ExposeHeader></CORSRule></CORSConfiguration>
           ACL:       31ffbc26-d6ed-4bc3-8a14-ad78fe8f95b6: FULL_CONTROL
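
      If you later need to remove a bucket’s CORS policies entirely, s3cmd also provides a delcors command. A minimal sketch, using the same bucket as above:

        s3cmd delcors s3://example-cors-bucket

      Running the info command again should then show the CORS entry as none.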

      Troubleshooting Common CORS Errors

      Having CORS-related issues on your S3 instance? Take these steps to help narrow down the issue and figure out the kind of policy change needed to resolve it.

      1. Review your instance’s CORS policies using s3cmd:

        s3cmd info s3://example-cors-bucket
        

        This can give you a concrete reference for what policies are in place and the specific details of each, like header and origin information.

      2. Review the request and response data. This can give you insights on any possible inconsistencies between existing CORS policies and the actual requests and responses.

        You can use a tool like cURL for this. First, use s3cmd to create a signed URL to an object on your storage instance. This example command creates a URL for an example.txt object and makes the URL last 300 seconds:

        s3cmd signurl s3://example-cors-bucket/example.txt +300
        

        Now, until the URL expires, you can use a cURL command like this one to send a request for the object:

        curl -v "http://example-cors-bucket.us-southeast-1.linodeobjects.com/index.md?AWSAccessKeyId=example-access-key&Expires=1664121793&Signature=example-signature"
        

        The -v option gives you verbose results, outputting more details to help you dissect any request and response issues.

      3. Compare the results of the cURL request to the CORS policy on your instance.
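
      You can also simulate a browser’s preflight request directly with cURL. The sketch below sends an OPTIONS request with an example origin and requested method; substitute your own bucket, object, and origin. If the deployed policy allows the combination, the response should include headers such as Access-Control-Allow-Origin and Access-Control-Allow-Methods; if it does not, those headers are absent:

        curl -s -D - -o /dev/null -X OPTIONS \
            -H "Origin: https://example.com" \
            -H "Access-Control-Request-Method: PUT" \
            "https://example-cors-bucket.us-southeast-1.linodeobjects.com/example.txt"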

      Conclusion

      This covers the tools and approaches you need to start managing CORS for your Linode Object Storage or other S3 instance. Once you have these, addressing CORS issues is a matter of reviewing and adjusting policies against desired origins and request types.

      Keep improving your resources for managing your S3 through our collection of
      object storage guides. These cover a range of topics to help you with S3 generally, and Linode Object Storage in particular.

      Have more questions or want some help getting started? Feel free to reach out to our
      Support team.





      How to Create a LEMP Stack on Linux


      The LEMP stack refers to a development framework for Web and mobile applications based on four open source components:

      1. Linux operating system
      2. NGINX Web server
      3. MySQL relational database management system (RDBMS)
      4. PHP,
        Perl, or
        Python programming language

      NGINX contributes to the acronym “LEMP” because English speakers pronounce NGINX as “engine-x”, hence the “E”.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      Note

      The steps in this guide require root privileges. Be sure to run the steps below as root or with the sudo prefix. For more information on privileges, see our
      Users and Groups guide.

      How Does LEMP Differ from LAMP?

      LAMP is just like LEMP, except with Apache in place of NGINX.

      LAMP has played a crucial role in the Web for
      over twenty years. NGINX was released publicly in 2004, largely to address faults in LAMP. LEMP use spread widely after 2008, and NGINX is now the second
      most popular Web server, after the
      Apache Web server that LAMP uses.

      Both LEMP and LAMP combine open source tools to supply the essentials for a Web application. This includes an underlying Linux operating system which hosts everything else, including:

      • The NGINX or Apache Web server that receives and responds to end-user actions.
      • The MySQL RDBMS, which stores information including user profiles, event histories, and application-specific content that has a lifespan beyond an individual transaction.
      • A programming language for business logic that defines a particular application.

      Abundant documentation and rich communities of practitioners make both LEMP and LAMP natural choices for development. The difference between them is confined to the Web server part of the stack.

      Apache Versus NGINX

      In broad terms, the two Web servers have much in common.
      NGINX is generally faster than Apache, especially for static content and large numbers of concurrent connections, but requires more expertise in certain aspects of its configuration and use, and is less robust on Windows than Apache. Apache works usefully “out of the box”, while, as we see below, NGINX demands a couple of additional steps before its installation is truly usable.

      RDBMS and Programming Language

      Two other variations, involving the initials “M” and “P”, deserve clarification.
      MariaDB is a drop-in replacement for MySQL. The differences between the two are explained in
      this tutorial. Everything you do with MySQL applies immediately with MariaDB as well.

      While several different programming languages work well in a LEMP stack, this guide focuses on PHP. However, nearly all the principles of LEMP illustrated below apply with Python or another alternative.

      LEMP Benefits

      LEMP has a deep track record of successful deliveries. Hundreds of millions of working Web applications depend on it.

      LEMP’s suitability extends beyond purely technical dimensions. Its flexible open-source licensing enables development teams to focus on their programming and operations, with few legal constraints to complicate their engineering.

      Install the LEMP Stack

      Linode’s support for LEMP begins with abundant documentation, including
      How to Install the LEMP Stack on Ubuntu 18.04.

      Rich collections of documentation are available to readers
      new to Linux and its command line. This guide assumes familiarity with the command line and Linux filesystems, along with permission to run as root or with sudo privileges. With the “L” (Linux) in place, the installation in this Guide focuses purely on the “EMP” layers of LEMP.

      Install “E”, “M”, and “P” Within “L”

      Different distributions of Linux require subtly different LEMP installations. The sequence below works across a range of Ubuntu versions, and is a good model for other Debian-based distributions.

      1. Update your host package index with:

        sudo apt-get update -y
        
      2. Now upgrade your installed packages:

        sudo apt-get upgrade -y
        
      3. Install software-properties-common and apt-transport-https to manage the PHP PPA repository:

        sudo apt-get install software-properties-common apt-transport-https -y
        
      4. Now provide a reference to the current PHP repository:

        sudo add-apt-repository ppa:ondrej/php -y
        
      5. Update the package index again:

        sudo apt update -y
        
      6. Install the rest of the LEMP stack:

        sudo apt-get install nginx php-mysql mysql-server php8.1-fpm -y
        

      The installation demands a small amount of interaction to give information about geographic location and timezone. Depending on circumstances, you may need to verify the country and timezone your server is located in.
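
      If you prefer to avoid the interactive prompts, one option is to run the installation non-interactively. This is an optional sketch rather than a required step: the DEBIAN_FRONTEND=noninteractive setting tells apt to accept default answers, and you can adjust the timezone afterward with dpkg-reconfigure tzdata:

        sudo DEBIAN_FRONTEND=noninteractive apt-get install nginx php-mysql mysql-server php8.1-fpm -y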

      Start Services

      1. Start the “E” (NGINX), “M” (MySQL), and “P” (PHP) services:

        sudo service nginx start
        sudo service mysql start
        sudo service php8.1-fpm start
        
      2. Check on these services:

        sudo service --status-all
        

        You should see them all running:

        [ + ]  mysql
        [ + ]  nginx
        [ + ]  php8.1-fpm
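
      The service commands above start the services for the current session. On systemd-based Ubuntu releases you can also enable them so they start automatically at boot; the unit names below assume the package versions installed earlier:

        sudo systemctl enable nginx mysql php8.1-fpm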

      Verify PHP

      Verify the healthy operation of these services.

      1. For PHP, launch:

        php --version
        

        You should see:

        PHP 8.1.x (cli) (built: ...
        Copyright © The PHP Group ...
      2. Go one step further and verify the PHP configuration with the command:

        php -m
        

        The result you see is:

        [PHP Modules]
        calendar
        Core
        ...
        mysqli
        mysqlnd
        ...

      This demonstrates that PHP is installed and that the modules needed to communicate with the rest of the LEMP stack are in place.

      Verify NGINX

      Verifying the NGINX service is a little more involved. The first step is to
      identify the IP address of the host.

      1. Navigate your browser to a URL such as http://localhost or http://23.77.NNN.NNN, henceforth referred to as $LEMP_HOST.

        Your Web browser shows a default display of:

        Welcome to nginx!
        If you see this page, the nginx web server is successfully installed and working.  ...
      2. With the default NGINX configuration verified, update it to enable PHP. Edit the file located at /etc/nginx/sites-enabled/default and change this section:

        File: /etc/nginx/sites-enabled/default
        
        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

        To become:

        File: /etc/nginx/sites-enabled/default
        
        location / {
               # First attempt to serve request as file, then
               # as directory, then fall back to displaying a 404.
               try_files $uri $uri/ =404;
        }
        location ~ \.php {
               include snippets/fastcgi-php.conf;
               fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
        }
      3. Back at the Linux command line, activate this new configuration with:

        service nginx restart
        
      4. Next, ensure that NGINX communicates with PHP by creating the file /var/www/html/php-test.php with contents:

        File: /var/www/html/php-test.php
        
        <?php
        phpinfo();
        ?>
        
      5. Now direct your browser to http://$LEMP_HOST/php-test.php.

        Your browser shows several pages of diagnostic output, starting with:

        PHP Version 8.1.9
           System Linux ... 5.10.76-linuxkit #1 SMP Mon Nov 8 ...
           ...

      The location of /var/www/html/php-test.php is configurable. This means that a particular distribution of Linux and NGINX might designate a different directory. /var/www/html is common, especially for a new Ubuntu instance with NGINX “out of the box”. In practice, it’s common to modify the NGINX defaults a great deal to allow for tasks such as caching, special handling of static requests, virtual hosts, logging, and security.
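
      Whenever you edit the NGINX configuration, it is worth checking the syntax before restarting the service. NGINX includes a built-in test option for this:

        sudo nginx -t

      A healthy configuration reports that the syntax is ok and the test is successful.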

      Verify MySQL

      When you install MySQL according to the directions above, it doesn’t require a password for local connections as the root user.

      1. No password is required. You only need one command:

        mysql
        

        And you see:

        Welcome to the MySQL monitor ...
      2. You can leave the MySQL monitor and return to the Linux command line with:

        \q
        

      Your LEMP stack is now installed, activated, and ready for application development. For a basic LEMP installation, this consists of placing programming source code in the /var/www/html directory, and occasionally updating the configurations of the LEMP layers.

      Use the LEMP Stack to Create an Example Application

      You can create a minimal model application that exercises each component and typical interactions between them. This application collects a record of each Web request made to the server in its backend database. A more refined version of this application could be used to collect:

      • Sightings of a rare bird at different locations.
      • Traffic at voting stations.
      • Requests for customer support.
      • Tracking data for a company automobile.

      The configuration and source below apply to LEMP environments. Even if your LEMP stack used different commands during installation, the directions that follow apply with a minimum amount of customization or disruption.

      Prepare a Database to Receive Data

      Start application development by configuring the database to receive program data.

      1. Re-enter the MySQL monitor with:

        mysql
        
      2. While connected to MySQL, create a database instance specific to this development:

        CREATE DATABASE model_application;
        
      3. Enter that database with:

        USE model_application;
        
      4. Define a table for the program data:

        CREATE TABLE events (
            timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            client_ip INT(4) UNSIGNED NOT NULL
        );
        
      5. Create a database account:

        CREATE USER 'automation'@'localhost' IDENTIFIED BY 'abc123';
        
      6. Now allow PHP to access it:

        GRANT ALL PRIVILEGES ON model_application.* TO 'automation'@'localhost' WITH GRANT OPTION;
        
      7. Quit MySQL:

        \q
        

      A polished application uses tighter security privileges, but this sample application adopts simple choices to maintain focus on the teamwork between the different LEMP layers.

      Create Application Source Code

      Create /var/www/html/event.php with the following content:

      File: /var/www/html/event.php
      
      <?php
          $connection = new mysqli("127.0.0.1", "automation", "abc123", "model_application");
          $client_ip = $_SERVER['REMOTE_ADDR'];
          // INET_ATON() packs an IPv4 string representation into
          // four octets in a standard way.
          $query = "INSERT INTO events(client_ip)
      VALUES(INET_ATON('$client_ip'))";
          $connection->query($query);
          echo 'Your request has successfully created one database record.';
      ?>
      

      Verify Operation of the Application

      1. event.php is the only program source code for our minimal model application. With it in place, instruct your browser to visit http://$LEMP_HOST/event.php.

        You should see:

        Your request has successfully created one database record.
      2. You can also exercise the application from different remote browser connections. With a different browser, perhaps from a different desktop, again navigate to http://$LEMP_HOST/event.php. Requests can also be generated from the command line, as sketched below.
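
      If you want to generate requests without a browser, a quick sketch with cURL works as well. Replace $LEMP_HOST with your server’s address; each request should add one more row to the events table:

        curl "http://$LEMP_HOST/event.php"

        Your request has successfully created one database record.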

      View Collected Data

      The model application exhibits the expected behavior of a Web application: viewed through the Web browser, it does the right thing and reports success.

      1. To confirm it updated the database, re-enter the MySQL monitor:

        mysql
        
      2. Enter the example application database:

        USE model_application;
        
      3. Pose the query:

        select timestamp, inet_ntoa(client_ip) from events;
        

        You should see output such as:

        +---------------------+----------------------+
        | timestamp           | inet_ntoa(client_ip) |
        +---------------------+----------------------+
        | 2022-08-03 02:26:44 | 127.0.0.1            |
        | 2022-08-03 02:27:18 | 98.200.8.79          |
        | 2022-08-05 02:27:23 | 107.77.220.62        |
        +---------------------+----------------------+

      This demonstrates the flow of data from a Web browser to the database server. Each row in the events table reflects one request from a Web browser to connect to the application. As the application goes into practical use, rows accumulate in the table.

      Application Context

      LEMP is a trustworthy basis for Web development, with decades of successful deliveries over a range of requirements. It directly supports only
      server-side processing. The model application above delivers pure HTML to the browser. LEMP is equally capable of serving up CSS and
      JavaScript, but does not build in tooling for these client-side technologies. Projects reliant on elaborate user interface effects usually choose a framework focused on the client side.
      React is an example of such a framework.

      Server-side orientation remains adequate for many applications, and LEMP fits these well. Server-side computation typically involves several functions beyond the model application above, including:

      • Account Management
      • Forms Processing
      • Security Restrictions
      • Analytic and Cost Reporting
      • Exception Handling
      • Quality Assurance Instrumentation

      Contemporary applications often build in a
      model-view-controller (MVC) architecture, and/or define a
      representational state transfer (REST) perspective. A commercial-grade installation usually migrates the database server to a separate dedicated host. Additionally, high-volume applications often introduce load balancers, security-oriented proxies,
      content delivery network (CDN) services, and other refinements. These functions are layers over the basic data flow between user, browser, business logic processing, and datastore that the model application embodies. The model application is a good first example.

      Conclusion

      You just installed a working LEMP stack, activated it, and created a model application. The needs of your own specific Web applications fit into this same model.




      Recover from Unexpected Shutdowns with Lassie





      Linode Compute Instances have a feature called Lassie (Linode Autonomous System Shutdown Intelligent rEbooter), also referred to as the Shutdown Watchdog. When this feature is enabled, a Compute Instance automatically reboots if it ever powers off unexpectedly.

      Shutdown Recovery Behavior

      The Shutdown Watchdog feature detects when a Compute Instance is powered off and checks if that directive came from the Linode platform (such as the Cloud Manager or Linode API). If the power off command did not originate from the Linode platform, the shutdown is considered unexpected and the Compute Instance is automatically powered back on.

      Note

      Shutdown Watchdog can power back on a Compute Instance up to 5 times within a 15-minute period. If a recurring issue causes 6 or more shutdowns within this time period, the instance remains powered off until it is manually powered back on. This prevents endless reboot loops when there is an issue with the internal software of a Compute Instance.

      Enable (or Disable) Shutdown Watchdog

      By default, Shutdown Watchdog is enabled on all new Compute Instances. If you wish to disable or re-enable this feature, follow the instructions below:

      1. Log in to the
        Cloud Manager and navigate to the Linodes link in the sidebar.

      2. Select the Linode Compute Instance that you wish to modify.

      3. Navigate to the Settings tab.

      4. Scroll down to the section labeled Shutdown Watchdog.

      5. From here, click the corresponding toggle button to update this setting to the desired state, either enabled or disabled.
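
      If you prefer to manage this setting from the command line, the Linode API exposes it as the watchdog_enabled field on a Compute Instance. As a rough sketch using the Linode CLI (assuming the CLI is installed and configured, and substituting your own instance ID; the exact argument syntax can vary between CLI versions):

        linode-cli linodes update 12345678 --watchdog_enabled true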

      Reasons for an Unexpected Shutdown

      An unexpected shutdown is when a Compute Instance powers off without receiving a power off command from the Linode platform (such as one issued by a user in the Cloud Manager or API). In general, it is caused by something within a Compute Instance’s internal system or software configuration. The following list includes potential reasons for these unexpected shutdowns.

      • A user issues the
        shutdown command
        in the shell environment of a Compute Instance. In Linux, a system can be powered off by entering the shutdown command (or other similar commands) in the system’s terminal. Since Linode has no knowledge of internal commands issued on a Compute Instance, it is considered an unexpected shutdown.

      • Kernel panic: A kernel panic can occur when your system detects a fatal error and it isn’t able to safely recover. Here is an example of a console log entry that indicates a kernel panic has occurred:

        Kernel panic - not syncing: No working init found.
        
      • Out of memory (OOM) error: When a Linux system runs out of memory, it can start killing processes to free up additional memory. In many cases, your system remains accessible but some of the software you use may stop functioning properly. OOMing can occasionally result in your system becoming unresponsive or crashing, causing an unexpected shutdown.

        kernel: Out of memory: Kill process [...]
        
      • Other system crashes, such as a crash caused by the software installed on your system or a malicious process (such as malware).
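
      For example, to check whether the OOM killer was involved in a recent shutdown, you can search the kernel log for the message shown above. A minimal sketch, using journalctl on systems with systemd-journald, or the kernel log file on syslog-based systems:

        journalctl -k | grep -i "out of memory"

        grep -i "out of memory" /var/log/kern.log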

      Note

      The Shutdown Watchdog feature never causes a Compute Instance to shut down and only ever powers on an instance if it detects an unexpected shutdown.

      Investigate the Cause of a Shutdown

      The underlying cause of these issues can vary. The most helpful course of action is to review your system logs.

      1. Open the
        Lish console. This displays your system’s boot log and, if your system boot was normal, a login prompt appears. If you do not see a login prompt, look for any errors or unexpected output that indicates a kernel panic, file system corruption, or other type of system crash.

      2. Log in to your system through either
        SSH or
        Lish and review the log files for your system using either journald or syslog. For systems using systemd-journald for logging, you can use the journalctl command to review system logs. See
        Use journalctl to View Your System’s Logs for instructions.

        • journalctl -b: Log entries for the last system boot
        • journalctl -k: Kernel messages

        For systems using syslog, you should review the following log files using your preferred text editor (such as
        nano or
        vim) or file viewer (such as cat or
        less).

        • /var/log/syslog: Most logs as recorded by
          syslog.
        • /var/log/boot.log: Log entries for the last system boot
        • /var/log/kern.log: Kernel messages
        • /var/log/messages: Various system notifications and messages typically recorded at boot.

        You may also want to review log files for any other software you have installed on your system that might be causing these issues.
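
      Keep in mind that after an unexpected shutdown and reboot, the most relevant entries are usually in the logs from the boot that crashed, not the current one. If your system keeps a persistent journal, a sketch like the following lists error-level messages from the previous boot:

        journalctl -b -1 -p err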

      Note

      Unexpected shutdowns are primarily caused by issues with the internal software configuration of a Compute Instance. To investigate these issues further, it is recommended that you reach out to your own system administrators or ask on our
      Community Site. These issues are generally
      outside the scope of the Linode Support team.

      File System Corruption

      In some cases, unexpected shutdowns can cause file system corruption on a Compute Instance. If an error message (such as the one below) appears within your console logs, your file system may be corrupt or otherwise be in an inconsistent state.

      /dev/sda: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
      

      In cases like this, it is recommended that you attempt to correct the issue by running the fsck tool in
      Rescue Mode. See
      Using fsck to Find and Repair Disk Errors and Bad Sectors for instructions.
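
      As a rough sketch of that repair, once the instance is booted into Rescue Mode you can run fsck against the affected disk. The device name below is the typical assignment for the primary disk in Rescue Mode, but confirm it for your own instance before running anything:

        fsck -f /dev/sda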



