

      How To Install Ruby on Rails with rbenv on Ubuntu 20.04


      Introduction

      Ruby on Rails is one of the most popular application stacks for developers looking to create sites and web apps. The Ruby programming language, combined with the Rails development framework, allows you to build and deploy scalable apps quickly.

      You can install Ruby and Rails with the command line tool rbenv. Using rbenv provides you with a solid environment for developing your Ruby on Rails applications and allows you to switch between Ruby versions, keeping your entire team on the same version. rbenv also provides support for specifying application-specific versions of Ruby, allows you to change the global Ruby for each user, and the option to use an environment variable to override the Ruby version.

      In this tutorial, we will guide you through the Ruby and Rails installation processes with rbenv and gem. First, you’ll install the appropriate packages to install rbenv and then Ruby. After, you’ll install the ruby-build plugin so that you can install available versions of Ruby. Last, you’ll use gem to install Rails and can then use Ruby on Rails to begin your web development. We will also provide steps on how to check if your rbenv version is up-to-date, and how to uninstall Ruby versions and rbenv.

      Prerequisites

      To follow this tutorial, you need:

      Step 1 – Install rbenv and Dependencies

      Ruby relies on several packages that you can install through your package manager. Once those are installed, you can install rbenv and use it to install Ruby.

      First, update your package list:

      • sudo apt update

      Next, install the dependencies required to install Ruby:

      • sudo apt install git curl libssl-dev libreadline-dev zlib1g-dev autoconf bison build-essential libyaml-dev libncurses5-dev libffi-dev libgdbm-dev

      After installing the dependencies, you can install rbenv itself. Use curl to transfer information from the rbenv repository on GitHub into the directory ~/.rbenv:

      • curl -fsSL https://github.com/rbenv/rbenv-installer/raw/HEAD/bin/rbenv-installer | bash

      Next, add ~/.rbenv/bin to your $PATH so that you can use the rbenv command line utility. Do this by altering your ~/.bashrc file so that it affects future login sessions:

      • echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc

      Then, add the command eval "$(rbenv init -)" to your ~/.bashrc file so rbenv loads automatically:

      • echo 'eval "$(rbenv init -)"' >> ~/.bashrc

      Next, apply the changes you made to your ~/.bashrc file to your current shell session:

      • source ~/.bashrc

      Verify that rbenv is set up properly by running the type command, which will display more information about the rbenv command:

      • type rbenv

      Your terminal window will display the following:

      Output

      rbenv is a function
      rbenv ()
      {
          local command;
          command="${1:-}";
          if [ "$#" -gt 0 ]; then
              shift;
          fi;
          case "$command" in
              rehash | shell)
                  eval "$(rbenv "sh-$command" "$@")"
              ;;
              *)
                  command rbenv "$command" "$@"
              ;;
          esac
      }

      Next, install the ruby-build plugin. This plugin adds the rbenv install command, which makes the installation process of new versions of Ruby less complex. To install ruby-build, first clone the ruby-build GitHub repository:

      • git clone https://github.com/rbenv/ruby-build.git

      After running this command, you’ll have a directory named ruby-build in your working directory. Within the ruby-build directory is a script named install.sh which you’ll use to actually install ruby-build.

      Before running this script, take a moment to review its contents. Rather than opening the file with a text editor, you can print its contents to your terminal’s output with the following command:

      • cat ruby-build/install.sh

      Output

      #!/bin/sh
      # Usage: PREFIX=/usr/local ./install.sh
      #
      # Installs ruby-build under $PREFIX.

      set -e

      cd "$(dirname "$0")"

      if [ -z "${PREFIX}" ]; then
        PREFIX="/usr/local"
      fi

      BIN_PATH="${PREFIX}/bin"
      SHARE_PATH="${PREFIX}/share/ruby-build"

      mkdir -p "$BIN_PATH" "$SHARE_PATH"

      install -p bin/* "$BIN_PATH"
      install -p -m 0644 share/ruby-build/* "$SHARE_PATH"

      Notice the second line of this file that reads # Usage: PREFIX=/usr/local ./install.sh. This commented-out line explains that in order to execute this script and install ruby-build, you must precede the script with PREFIX=/usr/local. This will create a temporary environment variable that will affect how the script is run. Essentially, this will cause the variable $PREFIX to expand to /usr/local any time it appears in the script and will ultimately cause all the necessary ruby-build files to be installed within the /usr/local directory. This environment variable is only temporary and will cease to exist once the script terminates.
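      This default-when-unset pattern is easy to see in plain Ruby, the language this tutorial ultimately installs. The sketch below is purely illustrative (the variable names mirror the script but are not part of the installation steps):

```ruby
# Mirror of the script's `if [ -z "${PREFIX}" ]` fallback: use the PREFIX
# environment variable when it is set, otherwise default to /usr/local.
prefix     = ENV.fetch("PREFIX", "/usr/local")
bin_path   = "#{prefix}/bin"
share_path = "#{prefix}/share/ruby-build"

puts bin_path    # "/usr/local/bin" when PREFIX is not set in the environment
```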

      Create this temporary environment variable and run the script with the following command. Note that this command includes sudo before calling the script. This is necessary since you must have advanced privileges to install files to the /usr/local directory:

      • PREFIX=/usr/local sudo ./ruby-build/install.sh

      At this point, you have both rbenv and ruby-build installed. Let’s install Ruby next.

      Step 2 – Installing Ruby with ruby-build

      With the ruby-build plugin now installed, you can install whatever versions of Ruby you may need with a single command. First, list all the available versions of Ruby:

      • rbenv install -l

      The output of that command will be a list of versions that you can choose to install:

      Output

      2.6.8
      2.7.4
      3.0.2
      jruby-9.2.19.0
      mruby-3.0.0
      rbx-5.0
      truffleruby-21.2.0.1
      truffleruby+graalvm-21.2.0

      Only latest stable releases for each Ruby implementation are shown.
      Use 'rbenv install --list-all / -L' to show all local versions.

      Now let’s install Ruby 3.0.2:

      • rbenv install 3.0.2

      Installing Ruby can be a lengthy process, so be prepared for the installation to take some time to complete.

      Once it’s done installing, set it as your default version of Ruby with the global sub-command:

      • rbenv global 3.0.2

      Verify that Ruby was properly installed by checking its version number:

      • ruby -v

      If you installed version 3.0.2 of Ruby, this command will return output like this:

      Output

      ruby 3.0.2p107 (2021-07-07 revision 0db68f0233) [x86_64-linux]

      To install and use a different version of Ruby, run the rbenv commands with a different version number, as in rbenv install 2.3.0 followed by rbenv global 2.3.0.

      You now have at least one version of Ruby installed and have set your default Ruby version. Next, you will set up gems and Rails.

      Step 3 – Working with Gems

      Gems are the way Ruby libraries are distributed. You use the gem command to manage these gems, and you’ll use it to install Rails.

      When you install a gem, the installation process generates local documentation, which can add a significant amount of time to each gem’s installation. Disable this by creating a file called ~/.gemrc with a configuration setting that turns off documentation generation:

      • echo "gem: --no-document" > ~/.gemrc

      Bundler is a tool that manages gem dependencies for projects. Install the Bundler gem next, as Rails depends on it:

      • gem install bundler

      You’ll receive the following output:

      Output

      Fetching bundler-2.2.27.gem
      Successfully installed bundler-2.2.27
      1 gem installed

      You can use the gem env command (the subcommand env is short for environment) to learn more about the environment and configuration of gems. You can confirm where gems are being installed by using the home argument, like this:

      • gem env home

      You’ll receive an output similar to this:

      Output

      /home/sammy/.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0

      Once you have gems set up, you can install Rails.

      Step 4 – Installing Rails

      To install Rails, use the gem install command along with the -v flag to specify the version. For this tutorial, you’ll use version 6.1.4.1:

      • gem install rails -v 6.1.4.1

      The gem command installs the gem you specify, as well as any of its dependencies. Rails is a complex web development framework and has many dependencies, so the process will take some time to complete. Eventually, you’ll receive a message stating that Rails is installed, along with its dependencies:

      Output

      ...
      Successfully installed rails-6.1.4.1
      37 gems installed

      Note: If you would like to install a different version of Rails, you can list the valid versions of Rails by doing a search, which will output a list of possible versions. You can then install a specific version, such as 4.2.7:

      • gem search '^rails$' --all
      • gem install rails -v 4.2.7

      If you would like to install the latest version of Rails, run the command without a version specified:

      • gem install rails

      rbenv works by creating a directory of shims, which point to the files used by the Ruby version that’s currently enabled. Through the rehash sub-command, rbenv maintains shims in that directory to match every Ruby command across every installed version of Ruby on your server. Whenever you install a new version of Ruby or a gem that provides commands, as Rails does, you should run the following:

      • rbenv rehash

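      Conceptually, a shim is a small launcher named after a command, and rehash rebuilds the set of launchers so every installed command resolves to the binary of the active Ruby version. Here is a toy Ruby model of that dispatch (the versions, paths, and resolve helper are invented for illustration; this is not rbenv’s implementation):

```ruby
# Toy model of rbenv's shim table: command name -> per-version binary path.
SHIMS = {
  "2.7.4" => {
    "ruby" => "/home/sammy/.rbenv/versions/2.7.4/bin/ruby",
  },
  "3.0.2" => {
    "ruby"  => "/home/sammy/.rbenv/versions/3.0.2/bin/ruby",
    "rails" => "/home/sammy/.rbenv/versions/3.0.2/bin/rails",
  },
}

# A shim forwards a command to whichever version is currently active;
# rehashing is what adds entries for newly installed commands like rails.
def resolve(command, active_version)
  SHIMS.fetch(active_version).fetch(command) do
    raise "command not found: #{command}"
  end
end

puts resolve("rails", "3.0.2")   # "/home/sammy/.rbenv/versions/3.0.2/bin/rails"
```

      Note that rails only resolves under 3.0.2 here, which is why a freshly installed gem command is invisible until a rehash records it.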
      Verify that Rails has been installed properly by printing its version with the following command:

      • rails -v

      If it’s installed properly, this command will return the version of Rails that was installed:

      Output

      Rails 6.1.4.1

      At this point, you can begin testing your Ruby on Rails installation and start to develop web applications. Now let’s review how to keep rbenv up to date.

      Step 5 – Updating rbenv

      Since you installed rbenv manually using Git, you can upgrade your installation to the most recent version at any time by using the git pull command in the ~/.rbenv directory:

      • cd ~/.rbenv && git pull

      This will ensure that you are using the most up-to-date version of rbenv available.

      Step 6 – Uninstalling Ruby versions

      As you download additional versions of Ruby, you may accumulate more versions than you would like in your ~/.rbenv/versions directory. Use the ruby-build plugin’s uninstall subcommand to remove these previous versions.

      The following command will uninstall Ruby version 3.0.2:

      • rbenv uninstall 3.0.2

      With the rbenv uninstall command you can clean up old versions of Ruby so that you do not have more installed than you are currently using.

      Step 7 – Uninstalling rbenv

      If you’ve decided you no longer want to use rbenv, you can remove it from your system.

      To do this, first open your ~/.bashrc file in your editor. In this example, we will use nano:

      • nano ~/.bashrc

      Find and remove the following two lines from the file:

      ~/.bashrc

      ...
      export PATH="$HOME/.rbenv/bin:$PATH"
      eval "$(rbenv init -)"
      

      After removing these lines, save the file and exit the editor. If you used nano, you can exit by pressing CTRL + X then Y and ENTER.

      Then remove rbenv and all installed Ruby versions with the following command:

      • rm -rf `rbenv root`

      Log out and back in to apply the changes to your shell.

      Conclusion

      In this tutorial, you used rbenv and gem to install the entire Ruby on Rails framework. From here, you can begin creating your web development projects. If you want to learn more about making those environments more robust, you can check out our series on How To Code In Ruby.




      How To Use ActiveStorage in Rails 6 with DigitalOcean Spaces


      The author selected the Diversity in Tech fund to receive a donation as part of the Write for DOnations program.

      Introduction

      When you’re building web applications that let users upload and store files, you’ll want to use a scalable file storage solution. This way you’re not in danger of running out of space if your application gets wildly popular. After all, these uploads can be anything from profile pictures to house photos to PDF reports. You also want your file storage solution to be reliable so you don’t lose your important customer files, and fast so your visitors aren’t waiting for files to transfer. You’ll want all of this to be affordable, too.

      DigitalOcean Spaces can address all of these needs. Because it’s compatible with Amazon’s S3 service, you can quickly integrate it into a Ruby on Rails application using the new ActiveStorage library that ships with Rails 6.

      In this guide, you’ll configure a Rails application so that it uses ActiveStorage with DigitalOcean Spaces. You’ll then run through the configuration necessary to make uploads and downloads blazing fast using direct uploads and Spaces’ built-in CDN (Content Delivery Network).

      When you’re finished, you’ll be ready to integrate file storage with DigitalOcean Spaces into your own Rails application.

      Prerequisites

      Before you begin this guide, you’ll need the following:

      Step 1 — Getting the Sample App Running

      Rather than build a complete Rails application from scratch, you’ll clone an existing Rails 6 application that uses ActiveStorage and modify it to use DigitalOcean Spaces as its image storage backend. The app you’ll work with is Space Puppies, an image gallery that will let people upload and view photographs of their favorite puppies. The application looks like the following figure:

      The Space Puppies application running in a web browser

      Open your terminal and clone the application from GitHub with the following command:

      • git clone https://github.com/do-community/space-puppies

      You’ll see output that looks similar to this:

      Output

      Cloning into 'space-puppies'...
      remote: Enumerating objects: 122, done.
      remote: Counting objects: 100% (122/122), done.
      remote: Compressing objects: 100% (103/103), done.
      remote: Total 122 (delta 3), reused 122 (delta 3), pack-reused 0
      Receiving objects: 100% (122/122), 163.17 KiB | 1018.00 KiB/s, done.
      Resolving deltas: 100% (3/3), done.

      Next, check your Ruby version. Space Puppies uses Ruby 2.7.1, so run rbenv versions to check which version you have installed:

      • rbenv versions

      If you’ve followed the prerequisite tutorials, you’ll only have Ruby 2.5.1 in that list, and your output will look like this:

      Output

      * system
        2.5.1

      If you don’t have Ruby 2.7.1 in that list, install it using ruby-build:

      • rbenv install 2.7.1

      Depending on your machine’s speed and operating system, this might take a while. You’ll see output that looks like this:

      Output

      Downloading ruby-2.7.1.tar.bz2...
      -> https://cache.ruby-lang.org/pub/ruby/2.7/ruby-2.7.1.tar.bz2
      Installing ruby-2.7.1...
      Installed ruby-2.7.1 to /root/.rbenv/versions/2.7.1

      Change to the space-puppies directory:

      • cd space-puppies

      rbenv will automatically change your Ruby version when you enter the directory. Verify the version:

      • ruby -v

      You’ll see output similar to the following:

      Output

      ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux]

      Next, you will install the Ruby gems and JavaScript packages that the app needs to run. Then you’ll run the database migrations the Space Puppies app needs.

      Install all the necessary gems using the bundle command:

      • bundle install

      Then, to tell rbenv about any new binaries installed by Bundler, use the rehash command:

      • rbenv rehash

      Next, tell yarn to install the necessary JavaScript dependencies:

      • yarn install

      Now create the database schema with Rails’ built-in migration tool:

      • rails db:migrate

      With all the libraries installed and the database created, start the built-in web server with the following command:

      • rails s

      Note: By default, rails s only binds to the local loopback address, meaning you can only access the server from the same computer that runs the command. If you’re running on a Droplet and you’d like to access your server from a browser running on your local machine, you’ll need to tell the Rails server to respond to remote requests by binding to 0.0.0.0. You can do that with this command:

      • rails s -b 0.0.0.0

      Your server starts, and you’ll receive output like this:

      Output

      => Booting Puma
      => Rails 6.0.3.2 application starting in development
      => Run `rails server --help` for more startup options
      Puma starting in single mode...
      * Version 4.3.5 (ruby 2.7.1-p83), codename: Mysterious Traveller
      * Min threads: 5, max threads: 5
      * Environment: development
      * Listening on tcp://127.0.0.1:3000
      * Listening on tcp://[::1]:3000
      Use Ctrl-C to stop

      Now you can access your application in a web browser. If you’re running the application on your local machine, navigate to http://localhost:3000. If you’re running on a Droplet or other remote server, then navigate to http://your_server_ip:3000.

      You’ll see the app’s interface, only this time without any puppies. Try adding a couple of images by clicking the New Puppy button.

      The Space Puppies application running in a web browser

      If you need puppy photos for testing, Unsplash has an extensive collection you can use. Review the Unsplash license if you plan to use these images in your projects.

      Before moving on, let’s walk through each layer of the application and look at how ActiveStorage works with each part so you can make the necessary changes for DigitalOcean Spaces. For a more detailed look at ActiveStorage, read the Active Storage Overview page in the official Rails documentation.

      First, look at the model, which represents an object in your application that you’re storing in the database. You’ll find the Puppy model in app/models/puppy.rb. Open this file in your text editor and you’ll see this code:

      app/models/puppy.rb

      class Puppy < ApplicationRecord
      
        has_one_attached :photo
      
      end
      

      You’ll find the has_one_attached macro in the model, which indicates there’s a photo attached to each Puppy model instance. These photos will be stored as ActiveStorage::Blob instances via an ActiveStorage::Attached::One proxy.

      Close this file.

      The next layer up the stack is the controller. In a Rails application, the controller is responsible for controlling access to database models and responding to requests from the user. The corresponding controller for the Puppy model is the PuppiesController which you will find in app/controllers/puppies_controller.rb. Open this file in your editor and you’ll see the following code:

      app/controllers/puppies_controller.rb

      class PuppiesController < ApplicationController
      
        def index
          @puppies = Puppy.with_attached_photo
        end
      
        # ... snipped other actions ...
      
      end
      

      Everything in the file is standard Rails code, apart from the with_attached_photo call. This call causes ActiveRecord to load all of the associated ActiveStorage::Blob associations when you fetch the list of Puppy models. This is a scope that ActiveStorage provides to help you avoid an expensive N+1 database query.

      Finally, let’s look at the views, which generate the HTML your application will send to the user’s browser. There are a few views in this app, but you’ll want to focus on the view responsible for showing the uploaded puppy image. You’ll find this file at app/views/puppies/_puppy.html.erb. Open it in your editor, and you’ll see code like this:

      app/views/puppies/_puppy.html.erb

      <div class="puppy">
        <%= image_tag puppy.photo.variant(resize_to_fill: [250, 250]) %>
      </div>
      

      ActiveStorage is designed to work with Rails, so you can use the built-in image_tag helper to generate a URL that points to an attached photo, wherever it happens to be stored. In this case, the app is using the variant support for images. When the user first requests this variant, ActiveStorage will automatically use ImageMagick, via the image_processing gem, to generate a modified image fitting our requirements. In this case, it will create a puppy photo filling a 250x250 pixel box. The variant will be stored for you in the same place as your original photo, which means you’ll only need to generate each variant once. Rails will serve the generated version on subsequent requests.
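      The generate-once, serve-from-storage behavior described above is essentially memoization. Here is a minimal plain-Ruby sketch of the idea (VariantCache is an illustrative stand-in, not ActiveStorage’s internals):

```ruby
# Each "variant" is computed the first time it is requested and reused
# afterwards, much like ActiveStorage storing a generated variant.
class VariantCache
  attr_reader :generated

  def initialize
    @store = {}
    @generated = 0
  end

  def variant(size)
    @store[size] ||= begin
      @generated += 1                       # the expensive resize runs once
      "image resized to #{size.join("x")}"  # stand-in for ImageMagick work
    end
  end
end

cache = VariantCache.new
cache.variant([250, 250])   # generated on the first request
cache.variant([250, 250])   # served from the cache on later requests
puts cache.generated        # 1
```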

      Note: Generating image variants can be slow, and you potentially don’t want your users waiting. If you know you’re going to need a particular variant, you can eagerly generate it using the .processed method:

      puppy.photo.variant(resize_to_fill: [250, 250]).processed
      

      It’s a good idea to do this kind of processing in a background job when you deploy to production. Explore Active Job and create a task to call processed to generate your images ahead of time.

      Now your application is running locally, and you know how all the code pieces fit together. Next, it’s time to set up a new DigitalOcean Space so you can move your uploads to the cloud.

      Step 2 — Setting up your DigitalOcean Space

      At the moment, your Space Puppies application stores images locally, which is fine for development or testing, but you almost certainly don’t want to use this mode in production. In order to scale the application horizontally by adding more application server instances, you’d need copies of each image on every server.

      In this step, you’ll create a DigitalOcean Space to use for your app’s images.

      Sign in to your DigitalOcean management console, click Create in the top right, and choose Spaces.

      Pick any data center and leave the CDN disabled for now; you’ll come back to this later. Ensure the file listing is set to Restrict File Listing.

      Choose a name for your Space. Remember that this will have to be unique across all Spaces users, so pick a unique name, like yourname-space-puppies. Click Create a Space:

      A screenshot of the DigitalOcean create space form with a name filled  in

      Warning: Be careful about access to the files you store on behalf of your customers. There have been many examples of data leaks and hacks due to misconfigured file storage. By default, ActiveStorage files are only accessible if you generate an authenticated URL, but it’s worth being vigilant if you’re dealing with customer data.

      You’ll then see your brand new Space.

      Click the Settings tab and take a note of your Space’s endpoint. You’ll need that when you configure your Rails application.

      Next, you’ll configure the Rails application to store ActiveStorage files in this Space. To do that securely, you need to create a new Spaces Access Key and Secret.

      Click API in the left navigation, then click Generate New Key in the bottom right. Give your new key a descriptive name like “Development Machine”. Your secret will only appear once, so be sure to copy it somewhere safe for a moment.

      A screenshot showing a Spaces access key

      In your Rails app, you’ll need a secure way to store that access token, so you’ll use Rails’ secure credential management feature. To edit your credentials, execute the following command in your terminal:

      • EDITOR="nano -w" rails credentials:edit

      This generates a master key and launches the nano editor so you can edit the values.

      In nano, add the following to your credentials.yml file, using your API key and secret from DigitalOcean:

      config/credentials.yml

      digitalocean:
        access_key: YOUR_API_ACCESS_KEY
  secret: YOUR_API_ACCESS_SECRET
      

      Save and close the file (Ctrl+X, then Y, then Enter), and Rails will store an encrypted version that’s safe to commit to source control in config/credentials.yml.enc.

      You will see output like the following:

      Output

      Adding config/master.key to store the encryption key: RANDOM_HASH_HERE

      Save this in a password manager your team can access.

      If you lose the key, no one, including you, can access anything encrypted with it.

            create  config/master.key

      File encrypted and saved.

      Now that you’ve configured your credentials, you’re ready to point your app to your new Spaces bucket.

      Open the file config/storage.yml in your editor and add the following definition to the bottom of that file:

      config/storage.yml

      digitalocean:
        service: S3
        endpoint: https://your-spaces-endpoint-here
        access_key_id: <%= Rails.application.credentials.dig(:digitalocean, :access_key) %>
        secret_access_key: <%= Rails.application.credentials.dig(:digitalocean, :secret) %>
        bucket: your-space-name-here
        region: unused
      

      Note that the service says S3 rather than Spaces. Spaces has an S3-compatible API, and Rails supports S3 natively. Your endpoint is https:// followed by your Space’s endpoint, which you copied previously, and the bucket name is the name of your Space, which you entered when creating it. The bucket name is also displayed as the title in your Control Panel when you view your Space.

      This configuration file will be stored unencrypted, so instead of entering your access key and secret, you’re referencing the ones you just entered securely in credentials.yml.enc.

      Note: DigitalOcean determines the region from the endpoint, but ActiveStorage requires a region value and will complain without one. Since DigitalOcean will ignore it, you can set it to whatever value you’d like. The value unused in the example code makes it clear that you’re not using it.

      Save the configuration file.

      Now, you need to tell Rails to use Spaces for your file storage backend instead of the local file system. Open config/environments/development.rb in your editor and change the config.active_storage.service entry from :local to :digitalocean:

      config/environments/development.rb

      
        # ...
      
        # Store uploaded files on the local file system (see config/storage.yml for options).
        config.active_storage.service = :digitalocean
      
        # ... 
      

      Save the file and exit your editor. Now start your server again:

      • rails s

      Visit http://localhost:3000 or http://your_server_ip:3000 in a browser once again.

      Upload some images, and the app will store them in your DigitalOcean Space. You can see this by visiting your Space in the DigitalOcean console. You will see the uploaded files and variants listed:

      files uploaded to a Space

      ActiveStorage uses random filenames by default, which is helpful when protecting uploaded customer data. Metadata, including the original filename, is stored in your database instead.
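      For example, a blob key is just a short random string; conceptually it is along these lines (SecureRandom here is illustrative, not ActiveStorage’s exact key generator):

```ruby
require "securerandom"

# A random, URL-safe key in the spirit of ActiveStorage blob keys; the
# original filename is stored as metadata in the database, not in the key.
key = SecureRandom.alphanumeric(28).downcase

puts key.length   # 28
```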

      Note: If you are getting an Aws::S3::Errors::SignatureDoesNotMatch error, your credentials might be incorrect. Run rails credentials:edit again and double-check them.

      Rails stores the names and some metadata about your files as ActiveStorage::Blob records. You can access the ActiveStorage::Blob for any of your records by calling an accessor method named after your attachment. In this case, the attachment is called photo.

      Try it out. Start a Rails console in your terminal:

      • rails c

      Grab the blob from the last puppy photo you uploaded:

      > Puppy.last.photo.blob
#=> #<ActiveStorage::Blob ...>
      

      You now have a Rails Application storing uploads in a scalable, reliable, and affordable object store.

      In the next two steps, you’ll explore two optional additions you can make to the app that will help improve this solution’s performance and speed for your users.

      Step 3 — Configuring the Spaces CDN (Optional)

      Note: For this step, you will need a domain with name servers pointing to DigitalOcean. You can follow the How to Add Domains guide to do that.

      Using a Content Delivery Network (CDN) will allow you to provide faster downloads of files for your users by locating copies of the files closer to them.

      You can investigate CDN performance using a tool like Uptrends CDN Performance Check. If you add the URL for one of the photos you uploaded in the previous step, you’ll see things are fast if you happen to be nearby, but things get a little slower as you move away geographically. You can get the URL using the Developer Tools in your browser, or by starting a Rails console (rails c) and calling service_url on an attachment.

      > Puppy.last.photo.service_url
      

      Here’s an example Uptrends report with a file located in the San Francisco data center. Notice that the times decrease depending on the distance from San Francisco. San Diego has a short time, while Paris has a much longer time:

      An example Uptrends CDN Performance Report

      You can improve speeds by enabling Spaces’ built-in CDN. Go to Spaces in your DigitalOcean Control Panel and click the name of the Space you created in Step 2. Next, choose the Settings tab and click Edit next to CDN (Content Delivery Network), then click Enable CDN.

      Now you need to choose a domain to use for your CDN and create an SSL Certificate for the domain. You can do this automatically using Let’s Encrypt. Click the Use a custom subdomain dropdown and then Add a new subdomain certificate.

      Find the domain you’d like to use, then choose the option to create a subdomain. Something like cdn.yourdomain.com is a standard naming convention. You can then give the certificate a name and click the “Generate Certificate and Use Subdomain” button.

      The filled-in Add Custom Subdomain form

      Press the Save button under CDN (Content Delivery Network).

      Your CDN is now enabled, but you need to tell your Rails Application to use it. This isn’t built into ActiveStorage in this version of Rails, so you’ll override some built-in Rails framework methods to make it work.

      Create a new Rails initializer called config/initializers/active_storage_cdn.rb and add the following code which will rewrite the URLs:

      config/initializers/active_storage_cdn.rb

      Rails.application.config.after_initialize do
        require "active_storage/service/s3_service"
      
        module SimpleCDNUrlReplacement
          CDN_HOST = "cdn.yourdomain.com"
      
          def url(...)
            url = super
            original_host = "#{bucket.name}.#{client.client.config.endpoint.host}"      
            url.gsub(original_host, CDN_HOST)
          end
        end
      
        ActiveStorage::Service::S3Service.prepend(SimpleCDNUrlReplacement)
      end
      

      This initializer runs each time your application asks for a URL from an ActiveStorage::Service::S3Service provider. It then replaces the original, non-CDN host with your CDN host, defined as the CDN_HOST constant.
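      This works because of Ruby’s Module#prepend, which inserts a module before the class in the method lookup chain so the module’s url can wrap the original via super. Here is a self-contained sketch of the same pattern (HypotheticalService and both hostnames are invented for illustration):

```ruby
# Stand-in for a storage service whose URLs we want to rewrite.
class HypotheticalService
  def url
    "https://bucket.sfo2.digitaloceanspaces.com/photo.jpg"
  end
end

# Prepended module: its #url runs first and wraps the original via super.
module SwapCDNHost
  CDN_HOST = "cdn.example.com"

  def url
    super.gsub("bucket.sfo2.digitaloceanspaces.com", CDN_HOST)
  end
end

HypotheticalService.prepend(SwapCDNHost)

puts HypotheticalService.new.url
# "https://cdn.example.com/photo.jpg"
```

      Prepending (rather than reopening the class) keeps the original method intact, so super always reaches the unmodified URL.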

      You can now restart your server, and you’ll notice that each of your photos comes from the CDN. You won’t need to re-upload them, as DigitalOcean will take care of forwarding the content from the data center where you set up your Space out to the edge nodes.

      You might like to compare the speed of accessing one of your photos on Uptrends’ Performance Check site now to the pre-CDN speed. Here’s an example of using the CDN on a San Francisco-based Space. You can see a significant global speed improvement.

      The Uptrends CDN Performance Report after enabling the CDN

      Next you’ll configure the application to receive files directly from the browser.

      Step 4 — Setting up Direct Uploads (Optional)

      One last feature of ActiveStorage that you might like to consider is called a Direct Upload. Now, when your users upload a file, the data is sent to your server, processed by Rails, then forwarded to your Space. This can cause problems if you have many simultaneous users, or if your users are uploading large files, as each file will (in most cases) use a single app server thread for the entire duration of an upload.

      By contrast, a Direct Upload will go straight to your DigitalOcean Space with no Rails server hop in between. To do this, you’ll enable some built-in JavaScript that ships with Rails and configure Cross-Origin Resource Sharing (CORS) on your Space so that the browser can securely send requests directly to the Space even though they originate from a different host.

      First, you’ll configure CORS for your Space. You will use s3cmd to do this, and you can follow Setting Up s3cmd 2.x with DigitalOcean Spaces if you haven’t configured this to work with Spaces yet.

      Create a new file called cors.xml and add the following code to the file, replacing your_domain with the domain you’re using for development. If you are developing on your local machine, you’ll use http://localhost:3000. If you’re developing on a Droplet, this will be your Droplet IP address:

      cors.xml

      <CORSConfiguration>
       <CORSRule>
         <AllowedOrigin>your_domain</AllowedOrigin>
         <AllowedMethod>PUT</AllowedMethod>
         <AllowedHeader>*</AllowedHeader>
         <ExposeHeader>Origin</ExposeHeader>
         <ExposeHeader>Content-Type</ExposeHeader>
         <ExposeHeader>Content-MD5</ExposeHeader>
         <ExposeHeader>Content-Disposition</ExposeHeader>
         <MaxAgeSeconds>3600</MaxAgeSeconds>
       </CORSRule>
      </CORSConfiguration>
      

      You can then use s3cmd to set this as the CORS configuration for your Space:

      • s3cmd setcors cors.xml s3://your-space-name-here

      There’s no output when this command runs successfully, but you can check that it worked by looking at your Space in the DigitalOcean Control Panel. Choose Spaces, then select the name of your Space, then select the Settings tab. You’ll see your configuration under the CORS Configurations heading:

      A successful CORS configuration for direct uploads

      Note: At the moment you need to use s3cmd rather than the Control Panel to configure CORS for “localhost” domains, because the Control Panel treats these as invalid domains. If you’re using a non-localhost domain (like a Droplet IP address), you can configure CORS in the Control Panel instead.

      Now you need to tell Rails to use direct uploads, which you do by passing the direct_upload option to the file_field helper. Open app/views/puppies/new.html.erb in your editor and modify the file_field helper:

      app/views/puppies/new.html.erb

      <h2>New Puppy</h2>
      
      <%= form_with(model: @puppy) do |f| %>
      
        <div class="form-item">
          <%= f.label :photo %>
          <%= f.file_field :photo, accept: "image/*", direct_upload: true %>
        </div>
      
        <div class="form-item">
          <%= f.submit "Create puppy", class: "btn", data: { disable_with: "Creating..." } %>
        </div>
      
      <% end %>
      

      Save the file and start your server again:

      When you upload a new photo, your photo is uploaded directly to DigitalOcean Spaces. You can verify this by looking at the PUT request that’s made when you click the Create puppy button. You can find the requests by looking in your browser’s web console, or by reading the Rails server logs. You’ll notice that the image upload is significantly faster, especially for larger images.

      Conclusion

      In this article you modified a basic Rails application using ActiveStorage to store files that are secure, fast, and scalable on DigitalOcean Spaces. You configured a CDN for fast downloads no matter where your users are located, and you implemented direct uploads so that your app servers will not be overwhelmed.

      You can now take this code and configuration and adapt it to fit your own Rails application.



      Source link

      How To Migrate a Docker Compose Workflow for Rails Development to Kubernetes


      Introduction

      When building modern, stateless applications, containerizing your application’s components is the first step in deploying and scaling on distributed platforms. If you have used Docker Compose in development, you will have modernized and containerized your application by:

      • Extracting necessary configuration information from your code.
      • Offloading your application’s state.
      • Packaging your application for repeated use.

      You will also have written service definitions that specify how your container images should run.

      To run your services on a distributed platform like Kubernetes, you will need to translate your Compose service definitions to Kubernetes objects. This will allow you to scale your application with resiliency. One tool that can speed up the translation process to Kubernetes is kompose, a conversion tool that helps developers move Compose workflows to container orchestrators like Kubernetes or OpenShift.

      In this tutorial, you will translate Compose services to Kubernetes objects using kompose. You will use the object definitions that kompose provides as a starting point and make adjustments to ensure that your setup will use Secrets, Services, and PersistentVolumeClaims in the way that Kubernetes expects. By the end of the tutorial, you will have a single-instance Rails application with a PostgreSQL database running on a Kubernetes cluster. This setup will mirror the functionality of the code described in Containerizing a Ruby on Rails Application for Development with Docker Compose and will be a good starting point to build out a production-ready solution that will scale with your needs.

      Prerequisites

      Step 1 — Installing kompose

      To begin using kompose, navigate to the project’s GitHub Releases page, and copy the link to the current release (version 1.22.0 as of this writing). Paste this link into the following curl command to download the latest version of kompose:

      • curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose

      For details about installing on non-Linux systems, please refer to the installation instructions.

      Make the binary executable:

      • chmod +x kompose

      Move it to your PATH:

      • sudo mv ./kompose /usr/local/bin/kompose

      To verify that it has been installed properly, you can do a version check:

      • kompose version

      If the installation was successful, you will see output like the following:

      Output

      1.22.0 (955b78124)

      With kompose installed and ready to use, you can now clone the Rails project code that you will be translating to Kubernetes.

      Step 2 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to clone the project code and package the application so that the kubelet service can pull the image.

      Our first step will be to clone the rails-sidekiq repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Ruby on Rails Application for Development with Docker Compose, which uses a demo Rails application to demonstrate how to set up a development environment using Docker Compose. You can find more information about the application itself in the series Rails on Containers.

      Clone the repository into a directory called rails_project:

      • git clone https://github.com/do-community/rails-sidekiq.git rails_project

      Navigate to the rails_project directory:

      • cd rails_project

      Now check out the code for this tutorial from the compose-workflow branch:

      • git checkout compose-workflow

      Output

      Branch 'compose-workflow' set up to track remote branch 'compose-workflow' from 'origin'.
      Switched to a new branch 'compose-workflow'

      The rails_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a PostgreSQL database.

      For more information about designing modern, stateless applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      The project directory includes a Dockerfile with instructions for building the application image. Let’s build the image now so that you can push it to your Docker Hub account and use it in your Kubernetes setup.

      Using the docker build command, build the image with the -t flag, which allows you to tag it with a memorable name. In this case, tag the image with your Docker Hub username and name it rails-kubernetes or a name of your own choosing:

      • docker build -t your_dockerhub_user/rails-kubernetes .

      The . in the command specifies that the build context is the current directory.

      It will take a minute or two to build the image. Once it is complete, check your images:

      • docker images

      You will see the following output:

      Output

      REPOSITORY                             TAG      IMAGE ID       CREATED       SIZE
      your_dockerhub_user/rails-kubernetes   latest   24f7e88b6ef2   2 days ago    606MB
      alpine                                 latest   d6e46aa2470d   6 weeks ago   5.57MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_user

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your user’s home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_user with your own Docker Hub username:

      • docker push your_dockerhub_user/rails-kubernetes

      You now have an application image that you can pull to run your application with Kubernetes. The next step will be to translate your application service definitions to Kubernetes objects.

      Step 3 — Translating Compose Services to Kubernetes Objects with kompose

      Our Docker Compose file, here called docker-compose.yml, lays out the definitions that will run our services with Compose. A service in Compose is a running container, and service definitions contain information about how each container image will run. In this step, we will translate these definitions to Kubernetes objects by using kompose to create yaml files. These files will contain specs for the Kubernetes objects that describe their desired state.

      We will use these files to create different types of objects: Services, which will ensure that the Pods running our containers remain accessible; Deployments, which will contain information about the desired state of our Pods; a PersistentVolumeClaim to provision storage for our database data; a ConfigMap for environment variables injected at runtime; and a Secret for our application’s database user and password. Some of these definitions will be in the files kompose will create for us, and others we will need to create ourselves.

      First, we will need to modify some of the definitions in our docker-compose.yml file to work with Kubernetes. We will include a reference to our newly-built application image in our app service definition and remove the bind mounts, volumes, and additional commands that we used to run the application container in development with Compose. Additionally, we’ll redefine both containers’ restart policies to be in line with the behavior Kubernetes expects.

      If you have followed the steps in this tutorial and checked out the compose-workflow branch with git, then you should have a docker-compose.yml file in your working directory.

      If you don’t have a docker-compose.yml then be sure to visit the previous tutorial in this series, Containerizing a Ruby on Rails Application for Development with Docker Compose, and paste the contents from the linked section into a new docker-compose.yml file.

      Open the file with nano or your favorite editor:

      • nano docker-compose.yml

      The current definition for the app application service looks like this:

      ~/rails_project/docker-compose.yml

      . . .
      services:
        app:
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - database
            - redis
          ports:
            - "3000:3000"
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
      . . .
      

      Make the following edits to your service definition:

      • Replace the build: line with image: your_dockerhub_user/rails-kubernetes
      • Remove the following context: ., and dockerfile: Dockerfile lines.
      • Remove the volumes list.

      The finished service definition will now look like this:

      ~/rails_project/docker-compose.yml

      . . .
      services:
        app:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
            - database
            - redis
          ports:
            - "3000:3000"
          env_file: .env
          environment:
            RAILS_ENV: development
      . . .
      

      Next, scroll down to the database service definition and make the following edits:

      • Remove the - ./init.sql:/docker-entrypoint-initdb.d/init.sql volume line. Instead of using values from the local SQL file, we will pass the values for our POSTGRES_USER and POSTGRES_PASSWORD to the database container using the Secret we will create in Step 4.
      • Add a ports: section that will make PostgreSQL available inside your Kubernetes cluster on port 5432.
      • Add an environment: section with a PGDATA variable that points to a directory inside /var/lib/postgresql/data. This setting is required when PostgreSQL is configured to use block storage, since the database engine expects to find its data files in a sub-directory.

      The database service definition should look like this when you are finished editing it:

      ~/rails_project/docker-compose.yml

      . . .
        database:
          image: postgres:12.1
          volumes:
            - db_data:/var/lib/postgresql/data
          ports:
            - "5432:5432"
          environment:
            PGDATA: /var/lib/postgresql/data/pgdata
      . . .
      

      Next, edit the redis service definition to expose its default TCP port by adding a ports: section with the default 6379 port. Adding the ports: section will make Redis available inside your Kubernetes cluster. Your edited redis service should resemble the following:

      ~/rails_project/docker-compose.yml

      . . .
        redis:
          image: redis:5.0.7
          ports:
            - "6379:6379"
      

      After editing the redis section of the file, continue to the sidekiq service definition. Just as with the app service, you’ll need to switch from building a local docker image to pulling from Docker Hub. Make the following edits to your sidekiq service definition:

      • Replace the build: line with image: your_dockerhub_user/rails-kubernetes
      • Remove the following context: ., and dockerfile: Dockerfile lines.
      • Remove the volumes list.

      ~/rails_project/docker-compose.yml

      . . .
        sidekiq:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
            - app
            - database
            - redis
          env_file: .env
          environment:
              RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      

      Finally, at the bottom of the file, remove the gem_cache and node_modules volumes from the top-level volumes key. The key will now look like this:

      ~/rails_project/docker-compose.yml

      . . .
      volumes:
        db_data:
      

      Save and close the file when you are finished editing.

      For reference, your completed docker-compose.yml file should contain the following:

      ~/rails_project/docker-compose.yml

      version: '3'
      
      services:
        app:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
              - database
              - redis
          ports:
              - "3000:3000"
          env_file: .env
          environment:
              RAILS_ENV: development
      
        database:
          image: postgres:12.1
          volumes:
              - db_data:/var/lib/postgresql/data
          ports:
              - "5432:5432"
          environment:
              PGDATA: /var/lib/postgresql/data/pgdata
      
        redis:
          image: redis:5.0.7
          ports:
              - "6379:6379"
      
        sidekiq:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
              - app
              - database
              - redis
          env_file: .env
          environment:
              RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      
      volumes:
        db_data:
      

      Before translating our service definitions, we will need to write the .env file that kompose will use to create the ConfigMap with our non-sensitive information. Please see Step 2 of Containerizing a Ruby on Rails Application for Development with Docker Compose for a longer explanation of this file.

      In that tutorial, we added .env to our .gitignore file to ensure that it would not copy to version control. This means that it did not copy over when we cloned the rails-sidekiq repository in Step 2 of this tutorial. We will therefore need to recreate it now.

      Create the file:

      • nano .env

      kompose will use this file to create a ConfigMap for our application. However, instead of assigning all of the variables from the app service definition in our Compose file, we will only add connection settings for PostgreSQL and Redis. We will assign the database name, username, and password separately when we manually create a Secret object in Step 4.

      Add the following port and database name information to the .env file. Feel free to rename your database if you would like:

      ~/rails_project/.env

      DATABASE_HOST=database
      DATABASE_PORT=5432
      REDIS_HOST=redis
      REDIS_PORT=6379
      

      Save and close the file when you are finished editing.

      You are now ready to create the files with your object specs. kompose offers multiple options for translating your resources. You can:

      • Create yaml files based on the service definitions in your docker-compose.yml file with kompose convert.
      • Create Kubernetes objects directly with kompose up.
      • Create a Helm chart with kompose convert -c.

      For now, we will convert our service definitions to yaml files and then add to and revise the files that kompose creates.

      Convert your service definitions to yaml files with the following command:

      • kompose convert

      After you run this command, kompose will output information about the files it has created:

      Output

      INFO Kubernetes file "app-service.yaml" created
      INFO Kubernetes file "database-service.yaml" created
      INFO Kubernetes file "redis-service.yaml" created
      INFO Kubernetes file "app-deployment.yaml" created
      INFO Kubernetes file "env-configmap.yaml" created
      INFO Kubernetes file "database-deployment.yaml" created
      INFO Kubernetes file "db-data-persistentvolumeclaim.yaml" created
      INFO Kubernetes file "redis-deployment.yaml" created
      INFO Kubernetes file "sidekiq-deployment.yaml" created

      These include yaml files with specs for the Rails application Service, Deployment, and ConfigMap, as well as for the db-data PersistentVolumeClaim and PostgreSQL database Deployment. kompose also created Service and Deployment files for Redis, and a Deployment file for Sidekiq.

      To keep these manifests out of the main directory for your Rails project, create a new directory called k8s-manifests and then use the mv command to move the generated files into it:

      • mkdir k8s-manifests
      • mv *.yaml k8s-manifests

      Finally, cd into the k8s-manifests directory. We’ll work from inside this directory from now on to keep things tidy:

      • cd k8s-manifests

      These files are a good starting point, but in order for our application’s functionality to match the setup described in Containerizing a Ruby on Rails Application for Development with Docker Compose we will need to make a few additions and changes to the files that kompose has generated.

      Step 4 — Creating Kubernetes Secrets

      In order for our application to function in the way we expect, we will need to make a few modifications to the files that kompose has created. The first of these changes will be generating a Secret for our database user and password and adding it to our application and database Deployments. Kubernetes offers two ways of working with environment variables: ConfigMaps and Secrets. kompose has already created a ConfigMap with the non-confidential information we included in our .env file, so we will now create a Secret with our confidential information: our database name, username and password.

      The first step in manually creating a Secret will be to convert the data to base64, an encoding scheme that allows you to uniformly transmit data, including binary data. Note that base64 is an encoding, not encryption: anyone who can read the Secret can decode the original values.

      First convert the database name to base64 encoded data:

      • echo -n 'your_database_name' | base64

      Note down the encoded value.

      Next convert your database username:

      • echo -n 'your_database_username' | base64

      Again record the value you see in the output.

      Finally, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.
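      If you prefer, you can produce the same encodings with Ruby’s standard library instead of the base64 command line utility. This sketch uses a hypothetical username, sammy; substitute your real values:

```ruby
require "base64"

# "sammy" is a hypothetical username for illustration only.
encoded = Base64.strict_encode64("sammy")
puts encoded # => c2FtbXk=

# The encoding is reversible -- base64 is not encryption.
puts Base64.decode64(encoded) # => sammy
```

strict_encode64 emits no trailing newline, matching the output of echo -n piped to base64.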

      Open a file for the Secret:

      • nano secret.yaml

      Note: Kubernetes objects are typically defined using YAML, which forbids tabs for indentation (two-space indentation is the convention). If you would like to check the formatting of any of your yaml files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.
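      As a quick local sanity check before reaching for kubectl, you can also confirm that a file parses as YAML at all using Ruby’s standard library. This sketch uses inline strings rather than a real manifest; note how the tab-indented document is rejected:

```ruby
require "yaml"

good = "data:\n  DATABASE_NAME: abc\n"  # indented with spaces
bad  = "data:\n\tDATABASE_NAME: abc\n"  # indented with a tab

puts YAML.safe_load(good).inspect  # parses into a Hash

begin
  YAML.safe_load(bad)
rescue Psych::SyntaxError => e
  puts "invalid YAML: #{e.message}"  # tabs cannot be used for indentation
end
```

To check a real file, pass File.read("secret.yaml") to YAML.safe_load instead of the inline strings.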

      Add the following code to the file to create a Secret that will define your DATABASE_NAME, DATABASE_USER and DATABASE_PASSWORD using the encoded values you just created. Be sure to replace the highlighted placeholder values here with your encoded database name, username and password:

      ~/rails_project/k8s-manifests/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: database-secret
      data:
        DATABASE_NAME: your_encoded_database_name
        DATABASE_PASSWORD: your_encoded_password
        DATABASE_USER: your_encoded_username
      

      We have named the Secret object database-secret, but you are free to name it anything you would like.

      These secrets are used by the Rails application so that it can connect to PostgreSQL. However, the database itself needs to be initialized with these same values, so next, copy the three DATABASE_* lines and paste them at the end of the file. In the pasted copies, change the DATABASE prefix of each variable to POSTGRES, then rename the resulting POSTGRES_NAME variable to POSTGRES_DB.

      Your final secret.yaml file should contain the following:

      ~/rails_project/k8s-manifests/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: database-secret
      data:
        DATABASE_NAME: your_encoded_database_name
        DATABASE_PASSWORD: your_encoded_password
        DATABASE_USER: your_encoded_username
        POSTGRES_DB: your_encoded_database_name
        POSTGRES_PASSWORD: your_encoded_password
        POSTGRES_USER: your_encoded_username
      

      Save and close this file when you are finished editing. As you did with your .env file, be sure to add secret.yaml to your .gitignore file to keep it out of version control.

      With secret.yaml written, our next step will be to ensure that our application and database Deployments both use the values that we added to the file. Let’s start by adding references to the Secret to our application Deployment.

      Open the file called app-deployment.yaml:

      • nano app-deployment.yaml

      The file’s container specifications include the following environment variables defined under the env key:

      ~/rails_project/k8s-manifests/app-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      . . .
          spec:
            containers:
              - env:
                  - name: DATABASE_HOST
                    valueFrom:
                      configMapKeyRef:
                        key: DATABASE_HOST
                        name: env
                  - name: DATABASE_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: DATABASE_PORT
                        name: env
                  - name: RAILS_ENV
                    value: development
                  - name: REDIS_HOST
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_HOST
                        name: env
                  - name: REDIS_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_PORT
                        name: env
      . . .
      

      We will need to add references to our Secret so that our application will have access to those values. Instead of including a configMapKeyRef key to point to our env ConfigMap, as is the case with the existing values, we’ll include a secretKeyRef key to point to the values in our database-secret secret.

      Add the following Secret references after the - name: REDIS_PORT variable section:

      ~/rails_project/k8s-manifests/app-deployment.yaml

      . . .
          spec:
            containers:
              - env:
              . . .  
                  - name: REDIS_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_PORT
                        name: env
                  - name: DATABASE_NAME
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_NAME
                  - name: DATABASE_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_PASSWORD
                  - name: DATABASE_USER
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_USER
      . . .
      
      

      Save and close the file when you are finished editing. As with your secret.yaml file, be sure to validate your edits using kubectl to ensure there are no issues with spaces, tabs, and indentation:

      • kubectl create -f app-deployment.yaml --dry-run --validate=true

      Output

      deployment.apps/app created (dry run)

      Next, we’ll add the same values to the database-deployment.yaml file.

      Open the file for editing:

      • nano database-deployment.yaml

      In this file, we will add references to our Secret for following variable keys: POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD. The postgres image makes these variables available so that you can modify the initialization of your database instance. The POSTGRES_DB creates a default database that is available when the container starts. The POSTGRES_USER and POSTGRES_PASSWORD together create a privileged user that can access the created database.

      Using these values means that the user we create has access to all of the administrative and operational privileges of that role in PostgreSQL. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

      Under the POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD variables, add references to the Secret values:

      ~/rails_project/k8s-manifests/database-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      . . .
          spec:
            containers:
              - env:
                  - name: PGDATA
                    value: /var/lib/postgresql/data/pgdata
                  - name: POSTGRES_DB
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_DB
                  - name: POSTGRES_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_PASSWORD        
                  - name: POSTGRES_USER
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_USER
      . . .
      

      Save and close the file when you are finished editing. Again be sure to lint your edited file using kubectl with the --dry-run --validate=true arguments.

      With your Secret in place, you can move on to modifying the PersistentVolumeClaim that will provision your database storage and exposing the application frontend.

      Step 5 — Modifying the PersistentVolumeClaim and Exposing the Application Frontend

      Before running our application, we will make two final changes to ensure that our database storage will be provisioned properly and that we can expose our application frontend using a LoadBalancer.

      First, let’s modify the storage resource defined in the PersistentVolumeClaim that kompose created for us. This Claim allows us to dynamically provision storage to manage our application’s state.

      To work with PersistentVolumeClaims, you must have a StorageClass created and configured to provision storage resources. In our case, because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage.

      We can check this by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   Delete          Immediate           true                   76m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

      When kompose created db-data-persistentvolumeclaim.yaml, it set the storage resource to a size that does not meet the minimum size requirements of our provisioner. We will therefore need to modify our PersistentVolumeClaim to use the minimum viable DigitalOcean Block Storage unit: 1GB. Please feel free to modify this to meet your storage requirements.

      Open db-data-persistentvolumeclaim.yaml:

      • nano db-data-persistentvolumeclaim.yaml

      Replace the storage value with 1Gi:

      ~/rails_project/k8s-manifests/db-data-persistentvolumeclaim.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: db-data
        name: db-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      status: {}
      

      Also note that the accessModes: ReadWriteOnce setting means the volume provisioned as a result of this Claim can be mounted read-write by only a single node. Please see the documentation for more information about different access modes.

      Save and close the file when you are finished.

      Next, open app-service.yaml:

      • nano app-service.yaml

      We are going to expose this Service externally using a DigitalOcean Load Balancer. If you are not using a DigitalOcean cluster, please consult the relevant documentation from your cloud provider for information about their load balancers. Alternatively, you can follow the official Kubernetes documentation on setting up a highly available cluster with kubeadm, but in this case you will not be able to use PersistentVolumeClaims to provision storage.

      Within the Service spec, specify LoadBalancer as the Service type:

      ~/rails_project/k8s-manifests/app-service.yaml

      apiVersion: v1
      kind: Service
      . . .
      spec:
        type: LoadBalancer
        ports:
      . . .
      

      When we create the app Service, a load balancer will be automatically created, providing us with an external IP where we can access our application.

      Save and close the file when you are finished editing.
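      Cloud providers typically expose additional load balancer settings through annotations on the Service. As a hedged sketch, DigitalOcean's cloud controller manager reads annotations like the following; these names are taken from DigitalOcean's documentation, are not required for this tutorial, and other providers define their own:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
  annotations:
    # Assumption: DigitalOcean-specific annotations; other providers use their own
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/"
spec:
  type: LoadBalancer
. . .
```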

      With all of our files in place, we are ready to start and test the application.

      Note:
      If you would like to compare your edited Kubernetes manifests to a set of reference files to be certain that your changes match this tutorial, the companion GitHub repository contains a set of tested manifests. You can compare each file individually, or you can also switch your local git branch to use the kubernetes-workflow branch.

      If you opt to switch branches, be sure to copy your secrets.yaml file into the newly checked-out branch, since we added it to .gitignore earlier in the tutorial.

      Step 6 — Starting and Accessing the Application

      It’s time to create our Kubernetes objects and test that our application is working as expected.

      To create the objects we’ve defined, we’ll use kubectl create with the -f flag, which allows us to specify the files that kompose created for us along with the files we wrote. Run the following command to create the Services and Deployments for the Rails application, PostgreSQL database, Redis cache, and Sidekiq worker, along with your Secret, ConfigMap, and PersistentVolumeClaim:

      • kubectl create -f app-deployment.yaml,app-service.yaml,database-deployment.yaml,database-service.yaml,db-data-persistentvolumeclaim.yaml,env-configmap.yaml,redis-deployment.yaml,redis-service.yaml,secret.yaml,sidekiq-deployment.yaml

      You will receive the following output, indicating that the objects have been created:

      Output

      deployment.apps/app created
      service/app created
      deployment.apps/database created
      service/database created
      persistentvolumeclaim/db-data created
      configmap/env created
      deployment.apps/redis created
      service/redis created
      secret/database-secret created
      deployment.apps/sidekiq created

      To check that your Pods are running, type:

      • kubectl get pods

      You don’t need to specify a Namespace here, since we created our objects in the default Namespace. If you are working with multiple Namespaces, be sure to include the -n flag, along with the name of your Namespace, when running kubectl get pods.

      You will see output similar to the following while your database container is starting (the status will be either Pending or ContainerCreating):

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          23s
      database-c77d55fbb-bmfm8   0/1     Pending   0          23s
      redis-7d65467b4d-9hcxk     1/1     Running   0          23s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          23s

      Once the database container is started, you will have output like this:

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          30s
      database-c77d55fbb-bmfm8   1/1     Running   0          30s
      redis-7d65467b4d-9hcxk     1/1     Running   0          30s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          30s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod

      Now that your application is up and running, the last step that is required is to run Rails’ database migrations. This step will load a schema into the PostgreSQL database for the demo application.

      To run pending migrations, you’ll exec into the running application pod and then call the rake db:migrate command.

      First, find the name of the application pod with the following command:

      • kubectl get pods

      Find the pod that corresponds to your application like the highlighted pod name in the following output:

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          30s
      database-c77d55fbb-bmfm8   1/1     Running   0          30s
      redis-7d65467b4d-9hcxk     1/1     Running   0          30s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          30s

      With that pod name noted down, you can now run the kubectl exec command to complete the database migration step.

      Run the migrations with this command:

      • kubectl exec your_app_pod_name -- rake db:migrate

      You should receive output similar to the following, which indicates that the database schema has been loaded:

      Output

      == 20190927142853 CreateSharks: migrating =====================================
      -- create_table(:sharks)
         -> 0.0190s
      == 20190927142853 CreateSharks: migrated (0.0208s) ============================
      == 20190927143639 CreatePosts: migrating ======================================
      -- create_table(:posts)
         -> 0.0398s
      == 20190927143639 CreatePosts: migrated (0.0421s) =============================
      == 20191120132043 CreateEndangereds: migrating ================================
      -- create_table(:endangereds)
         -> 0.8359s
      == 20191120132043 CreateEndangereds: migrated (0.8367s) =======================
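      If you’d rather not copy the pod name by hand, you can extract it from the kubectl get pods listing and run the migration in one step. This is a sketch, assuming the Deployment is named app as in the kompose-generated manifests, so the pod name starts with app-:

```shell
# Capture the first pod whose name begins with "app-"
APP_POD=$(kubectl get pods --no-headers | awk '/^app-/ {print $1; exit}')

# Run the migrations inside that pod
kubectl exec "$APP_POD" -- rake db:migrate
```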

      With your containers running and data loaded, you can now access the application. To get the IP for the app LoadBalancer, type:

      • kubectl get svc

      You will receive output like the following:

      Output

      NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
      app          LoadBalancer   10.245.73.142   your_lb_ip    3000:31186/TCP   21m
      database     ClusterIP      10.245.155.87   <none>        5432/TCP         21m
      kubernetes   ClusterIP      10.245.0.1      <none>        443/TCP          21m
      redis        ClusterIP      10.245.119.67   <none>        6379/TCP         21m

      The EXTERNAL-IP associated with the app service is the IP address where you can access the application. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.
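      To print just that address instead of the whole table, you can cut the EXTERNAL-IP column out of the listing. A sketch: field 4 corresponds to the default kubectl get svc column layout shown above:

```shell
# Print only the EXTERNAL-IP column (4th field) of the app Service
kubectl get svc app --no-headers | awk '{print $4}'
```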

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip:3000.

      You should see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will have a page with a button to create a new shark:

      Shark Info Form

      Click it and when prompted, enter the username and password from earlier in the tutorial series. If you did not change these values then the defaults are sammy and shark respectively.

      In the form, add a shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      You now have a single instance setup of a Rails application with a PostgreSQL database running on a Kubernetes cluster. You also have a Redis cache and a Sidekiq worker to process data that users submit.

      Conclusion

      The files you have created in this tutorial are a good starting point to build from as you move toward production. As you develop your application, you can continue building on this setup to meet your production requirements.


