
      How To Optimize MySQL with Query Cache on Ubuntu 18.04

      The author selected the Apache Software Foundation to receive a donation as part of the Write for DOnations program.


      Query cache is a prominent MySQL feature that speeds up data retrieval from a database. It achieves this by storing MySQL SELECT statements together with the retrieved record set in memory, then if a client requests identical queries it can serve the data faster without executing commands again from the database.

      Compared to data read from disk, cached data from RAM (Random Access Memory) has a shorter access time, which reduces latency and improves input/output (I/O) operations. As an example, for a WordPress site or an e-commerce portal with high read calls and infrequent data changes, query cache can drastically boost the performance of the database server and make it more scalable.
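As a rough sketch of the idea (illustrative only, not MySQL's implementation), the query cache behaves like an in-memory map from exact query text to a stored result set, flushed whenever the underlying tables change:

```python
class QueryCache:
    """Toy model of a query cache: illustrative, not MySQL's implementation."""

    def __init__(self):
        self._cache = {}

    def execute(self, sql, run_query):
        # The cache key is the exact query text; like MySQL's query cache,
        # two queries must be byte-identical to share a cache entry.
        if sql in self._cache:
            return self._cache[sql], True    # cache hit: no execution needed
        result = run_query(sql)
        self._cache[sql] = result
        return result, False                 # cache miss: query was executed

    def invalidate(self):
        # MySQL invalidates affected entries whenever a cached table changes.
        self._cache.clear()

cache = QueryCache()
run = lambda sql: ["JANE DOE", "JANIE DOE"]  # stands in for a real table read
rows1, hit1 = cache.execute("SELECT * FROM customers", run)
rows2, hit2 = cache.execute("SELECT * FROM customers", run)
print(hit1, hit2)  # False True: the second identical query is served from memory
```

This is why read-heavy, rarely-updated workloads benefit most: frequent writes would keep invalidating the cache.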

      In this tutorial, you will first configure MySQL without query cache and run queries to see how quickly they are executed. Then you’ll set up query cache and test your MySQL server with it enabled to show the difference in performance.

      Note: Although query cache is deprecated as of MySQL 5.7.20, and removed in MySQL 8.0, it is still a powerful tool if you’re using supported versions of MySQL. However, if you are using newer versions of MySQL, you may adopt alternative third-party tools like ProxySQL to optimize performance on your MySQL database.


Before you begin, you will need the following:

      • One Ubuntu 18.04 server with a non-root user with sudo privileges.
      • MySQL installed on your server.

      Step 1 — Checking the Availability of Query Cache

Before you set up query cache, you’ll check whether your version of MySQL supports this feature. First, SSH into your Ubuntu 18.04 server:

      • ssh user_name@your_server_ip

Then, run the following command to log in to the MySQL server as the root user:

      • sudo mysql -u root -p

      Enter your MySQL server root password when prompted and then press ENTER to continue.

      Use the following command to check if query cache is supported:

      • show variables like 'have_query_cache';

      You should get an output similar to the following:


+------------------+-------+
      | Variable_name    | Value |
      +------------------+-------+
      | have_query_cache | YES   |
      +------------------+-------+
      1 row in set (0.01 sec)

      You can see the value of have_query_cache is set to YES and this means query cache is supported. If you receive an output showing that your version does not support query cache, please see the note in the Introduction section for more information.

      Now that you have checked and confirmed that your version of MySQL supports query cache, you will move on to examining the variables that control this feature on your database server.

      Step 2 — Checking the Default Query Cache Variables

      In MySQL, a number of variables control query cache. In this step, you'll check the default values that ship with MySQL and understand what each variable controls.

      You can examine these variables using the following command:

      • show variables like 'query_cache_%' ;

      You will see the variables listed in your output:


+------------------------------+----------+
      | Variable_name                | Value    |
      +------------------------------+----------+
      | query_cache_limit            | 1048576  |
      | query_cache_min_res_unit     | 4096     |
      | query_cache_size             | 16777216 |
      | query_cache_type             | OFF      |
      | query_cache_wlock_invalidate | OFF      |
      +------------------------------+----------+
      5 rows in set (0.00 sec)

      The query_cache_limit value determines the maximum size of individual query results that can be cached. The default value is 1,048,576 bytes and this is equivalent to 1MB.

      MySQL does not handle cached data in one big chunk; instead it is handled in blocks. The minimum amount of memory allocated to each block is determined by the query_cache_min_res_unit variable. The default value is 4096 bytes or 4KB.

      query_cache_size controls the total amount of memory allocated to the query cache. If the value is set to zero, it means query cache is disabled. In most cases, the default value may be set to 16,777,216 (around 16MB). Also, keep in mind that query_cache_size needs at least 40KB to allocate its structures. The value allocated here is aligned to the nearest 1024 byte block. This means the reported value may be slightly different from what you set.
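The 1024-byte alignment can be sketched as follows (a simplified illustration; MySQL's internal accounting is more involved):

```python
def aligned(size_bytes, block=1024):
    # Align a configured cache size down to the nearest 1024-byte block,
    # which is why the value MySQL reports may differ from what you set.
    return (size_bytes // block) * block

print(aligned(16777216))  # 16777216 -- the 16MB default is already aligned
print(aligned(1000000))   # 999424  -- an unaligned value gets rounded down
```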

MySQL determines the queries to cache by examining the query_cache_type variable. Setting this value to 0 or OFF prevents caching or retrieval of cached queries. You can also set it to 1 to enable caching for all queries except ones beginning with the SELECT SQL_NO_CACHE statement. A value of 2 tells MySQL to only cache queries that begin with the SELECT SQL_CACHE statement.
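For example, the per-statement modifiers look like this (assuming a hypothetical customers table; these statements only change behavior when query cache is enabled):

```sql
-- With query_cache_type = 1: everything is cached except explicit opt-outs.
SELECT SQL_NO_CACHE * FROM customers;  -- never cached or served from cache

-- With query_cache_type = 2 (on demand): only explicit opt-ins are cached.
SELECT SQL_CACHE * FROM customers;     -- eligible for caching
```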

The variable query_cache_wlock_invalidate controls whether MySQL should retrieve results from the cache if the table used in the query is locked. The default value is OFF.

      Note: The query_cache_wlock_invalidate variable is deprecated as of MySQL version 5.7.20. As a result, you may not see this in your output depending on the MySQL version you're using.

      Having reviewed the system variables that control the MySQL query cache, you'll now test how MySQL performs without first enabling the feature.

      Step 3 — Testing Your MySQL Server Without Query Cache

      The goal of this tutorial is to optimize your MySQL server by using the query cache feature. To see the difference in speed, you're going to run queries and see their performance before and after implementing the feature.

      In this step you're going to create a sample database and insert some data to see how MySQL performs without query cache.

      While still logged in to your MySQL server, create a database and name it sample_db by running the following command:

      • Create database sample_db;


      Query OK, 1 row affected (0.00 sec)

Then switch to the database:

      • Use sample_db;


      Database changed

      Create a table with two fields (customer_id and customer_name) and name it customers:

      • Create table customers (customer_id INT PRIMARY KEY, customer_name VARCHAR(50) NOT NULL) Engine = InnoDB;


      Query OK, 0 rows affected (0.01 sec)

      Then, run the following commands to insert some sample data:

      • Insert into customers(customer_id, customer_name) values ('1', 'JANE DOE');
      • Insert into customers(customer_id, customer_name) values ('2', 'JANIE DOE');
      • Insert into customers(customer_id, customer_name) values ('3', 'JOHN ROE');
      • Insert into customers(customer_id, customer_name) values ('4', 'MARY ROE');
      • Insert into customers(customer_id, customer_name) values ('5', 'RICHARD ROE');
      • Insert into customers(customer_id, customer_name) values ('6', 'JOHNNY DOE');
      • Insert into customers(customer_id, customer_name) values ('7', 'JOHN SMITH');
      • Insert into customers(customer_id, customer_name) values ('8', 'JOE BLOGGS');
      • Insert into customers(customer_id, customer_name) values ('9', 'JANE POE');
      • Insert into customers(customer_id, customer_name) values ('10', 'MARK MOE');


Query OK, 1 row affected (0.01 sec)
      Query OK, 1 row affected (0.00 sec)
      ...

The next step is starting the MySQL profiler, which is an analysis service for monitoring the performance of MySQL queries. To turn profiling on for the current session, run the following command, setting it to 1, which is on:

      • SET profiling = 1;


      Query OK, 0 rows affected, 1 warning (0.00 sec)

Then, run the following query to retrieve all customers:

      • Select * from customers;

      You'll receive the following output:


+-------------+---------------+
      | customer_id | customer_name |
      +-------------+---------------+
      |           1 | JANE DOE      |
      |           2 | JANIE DOE     |
      |           3 | JOHN ROE      |
      |           4 | MARY ROE      |
      |           5 | RICHARD ROE   |
      |           6 | JOHNNY DOE    |
      |           7 | JOHN SMITH    |
      |           8 | JOE BLOGGS    |
      |           9 | JANE POE      |
      |          10 | MARK MOE      |
      +-------------+---------------+
      10 rows in set (0.00 sec)

Then, run the SHOW PROFILES command to retrieve performance information about the SELECT query you just ran:

      • SHOW PROFILES;

      You will get output similar to the following:


+----------+------------+-------------------------+
      | Query_ID | Duration   | Query                   |
      +----------+------------+-------------------------+
      |        1 | 0.00044075 | Select * from customers |
      +----------+------------+-------------------------+
      1 row in set, 1 warning (0.00 sec)

The output shows the total time spent by MySQL when retrieving records from the database. You are going to compare this data in the next steps when query cache is enabled, so keep note of your Duration. You can ignore the warning within the output, since it simply indicates that the SHOW PROFILES command will be removed in a future MySQL release and replaced with the Performance Schema.

Next, exit from the MySQL Command Line Interface:

      • quit;

You have run a query with MySQL before enabling query cache and noted down the Duration, or time spent to retrieve records. Next, you will enable query cache and see if there is a performance boost when running the same query.

      Step 4 — Setting Up Query Cache

      In the previous step, you created sample data and ran a SELECT statement before you enabled query cache. In this step, you'll enable query cache by editing the MySQL configuration file.

      Use nano to edit the file:

      • sudo nano /etc/mysql/my.cnf

Add the following information to the end of your file:


      [mysqld]
      query_cache_type=1
      query_cache_size = 10M
      query_cache_limit=256K

      Here you've enabled query cache by setting the query_cache_type to 1. You've also set the individual query limit size to 256K and instructed MySQL to allocate 10 megabytes to query cache by setting the value of query_cache_size to 10M.

      Save and close the file by pressing CTRL + X, Y, then ENTER. Then, restart your MySQL server to implement the changes:

      • sudo systemctl restart mysql

      You have now enabled query cache.

      Once you have configured query cache and restarted MySQL to apply the changes, you will go ahead and test the performance of MySQL with the feature enabled.

      Step 5 — Testing Your MySQL Server with Query Cache Enabled

      In this step, you'll run the same query you ran in Step 3 one more time to check how query cache has optimized the performance of your MySQL server.

First, connect to your MySQL server as the root user:

      • sudo mysql -u root -p

      Enter your root password for the database server and hit ENTER to continue.

      Now confirm your configuration set in the previous step to ensure you enabled query cache:

      • show variables like 'query_cache_%' ;

      You'll see the following output:


+------------------------------+----------+
      | Variable_name                | Value    |
      +------------------------------+----------+
      | query_cache_limit            | 262144   |
      | query_cache_min_res_unit     | 4096     |
      | query_cache_size             | 10485760 |
      | query_cache_type             | ON       |
      | query_cache_wlock_invalidate | OFF      |
      +------------------------------+----------+
      5 rows in set (0.01 sec)

      The variable query_cache_type is set to ON; this confirms that you enabled query cache with the parameters defined in the previous step.

Switch to the sample_db database that you created earlier:

      • Use sample_db;

Start the MySQL profiler:

      • SET profiling = 1;

      Then, run the query to retrieve all customers at least two times in order to generate enough profiling information.

      Remember, once you've run the first query, MySQL will create a cache of the results and therefore, you must run the query twice to trigger the cache:

      • Select * from customers;
      • Select * from customers;

Then, list the profiles information:

      • SHOW PROFILES;

      You'll receive an output similar to the following:


+----------+------------+-------------------------+
      | Query_ID | Duration   | Query                   |
      +----------+------------+-------------------------+
      |        1 | 0.00049250 | Select * from customers |
      |        2 | 0.00026000 | Select * from customers |
      +----------+------------+-------------------------+
      2 rows in set, 1 warning (0.00 sec)

As you can see, the time taken to run the query has been drastically reduced, from 0.00044075 seconds (without query cache, in Step 3) to 0.00026000 seconds (the second query in this step).
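A quick calculation (using the durations reported in this tutorial's output; your numbers will differ) shows the scale of the improvement:

```python
uncached = 0.00044075  # Duration from Step 3, before enabling query cache
cached = 0.00026000    # Duration of the second (cached) query in Step 5

speedup = uncached / cached
reduction_pct = (1 - cached / uncached) * 100
print(f"{speedup:.2f}x faster ({reduction_pct:.0f}% less time)")
```

On these sample numbers, that works out to roughly a 1.7x speedup; larger result sets and slower disks typically widen the gap.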

You can see the optimization from enabling the query cache feature by profiling the first query in detail:

      • SHOW PROFILE FOR QUERY 1;



+--------------------------------+----------+
      | Status                         | Duration |
      +--------------------------------+----------+
      | starting                       | 0.000025 |
      | Waiting for query cache lock   | 0.000004 |
      | starting                       | 0.000003 |
      | checking query cache for query | 0.000045 |
      | checking permissions           | 0.000008 |
      | Opening tables                 | 0.000014 |
      | init                           | 0.000018 |
      | System lock                    | 0.000008 |
      | Waiting for query cache lock   | 0.000002 |
      | System lock                    | 0.000018 |
      | optimizing                     | 0.000003 |
      | statistics                     | 0.000013 |
      | preparing                      | 0.000010 |
      | executing                      | 0.000003 |
      | Sending data                   | 0.000048 |
      | end                            | 0.000004 |
      | query end                      | 0.000006 |
      | closing tables                 | 0.000006 |
      | freeing items                  | 0.000006 |
      | Waiting for query cache lock   | 0.000003 |
      | freeing items                  | 0.000213 |
      | Waiting for query cache lock   | 0.000019 |
      | freeing items                  | 0.000002 |
      | storing result in query cache  | 0.000003 |
      | cleaning up                    | 0.000012 |
      +--------------------------------+----------+
      25 rows in set, 1 warning (0.00 sec)

Run the following command to show profile information for the second query, which is cached:

      • SHOW PROFILE FOR QUERY 2;



+--------------------------------+----------+
      | Status                         | Duration |
      +--------------------------------+----------+
      | starting                       | 0.000024 |
      | Waiting for query cache lock   | 0.000003 |
      | starting                       | 0.000002 |
      | checking query cache for query | 0.000006 |
      | checking privileges on cached  | 0.000003 |
      | checking permissions           | 0.000027 |
      | sending cached result to clien | 0.000187 |
      | cleaning up                    | 0.000008 |
      +--------------------------------+----------+
      8 rows in set, 1 warning (0.00 sec)

The outputs from the profiler show that MySQL took less time on the second query because it was able to retrieve data from the query cache instead of reading it from disk. You can compare the two sets of output for each of the queries. In the profile information for QUERY 2, the sending cached result to client status shows that data was read from the cache, and no tables were opened since the Opening tables status is missing.

      With the MySQL query cache feature enabled on your server, you'll now experience improved read speeds.
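Beyond per-query profiling, you can also check MySQL's aggregate cache counters at any point while logged in to the server:

```sql
SHOW STATUS LIKE 'Qcache%';
```

Counters such as Qcache_hits and Qcache_inserts show how often results were served from the cache versus newly cached, while Qcache_lowmem_prunes indicates entries evicted because query_cache_size was too small.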


You have set up query cache to speed up your MySQL server on Ubuntu 18.04. Using features like MySQL's query cache can enhance the speed of your website or web application. Caching reduces unnecessary execution of SQL statements and is a highly recommended and popular method for optimizing your database. For more on speeding up your MySQL server, try the How To Set Up a Remote Database to Optimize Site Performance with MySQL on Ubuntu 18.04 tutorial.


      How to Write Product Descriptions That Really Sell: 8 Simple Tips

      Congratulations! You’ve done the hard marketing work to lead your target customer right to your product pages. They are currently reading through a product description to decide whether or not they will purchase something from your e-commerce business.

      The million dollar question: will they buy what you’re selling?

      The answer, in large part, depends on how much time and effort you put into your product description. It may seem drastic to weigh product descriptions so heavily, but stats show that a well-written product description is a surefire conversion tool. Here’s a closer look:

      • 87% of consumers ranked product content extremely or very important when deciding to buy.
      • Millennials are 40% more likely than other adults to say product content is extremely important to their purchasing decisions.
      • Consumers purchasing clothing and online groceries ranked product descriptions as the second most influential factor in their decision to buy — just after price.
      • 20% of purchase failures are potentially a result of missing or unclear product information.

      The stats don’t lie. If you want to increase sales, it’s time to polish your e-commerce product descriptions.


      8 Ways to Write an Excellent Product Description

      But what actually makes a good product description? In this guide, we’re giving you eight tips (along with winning examples) that provide a comprehensive look into what makes an effective product description. Let’s go!

      1. Identify Your Buyer Personas

      It can be difficult to write a product description if you don’t know who your target audience is. To successfully write about product features that resonate with your potential buyers, you have to know who they are.

      This means you need to reference your buyer persona(s)  — a fictional representation of your ideal customer based on market research. If you don’t already have a buyer persona to guide the copywriting on your website, the time to create one is now.

      A buyer persona should answer all of the following general questions:

      • What is the demographic information of your buyers?
      • What are their interests?
      • What is their native language?
      • What kind of language appeals to them? (e.g., Does industry jargon appeal to them or turn them off?)
      • How do they spend their free time?
      • How do they find your website?
      • Why are they interested in your store?

      If you have the luxury of big data at your hands, collect data on your current customers to also understand:

      • Product preferences
      • Behavioral patterns
      • Purchasing patterns

      Access to this data will help you fine-tune your buyer personas. Once you know who you are selling to, it will be easier to write product descriptions that resonate well with them.

      2. Focus on Product Benefits and Features

      As crucial as it is to speak the language of your buyers, your buyers don’t come to your page to connect. They come to learn precisely what your product can do and how it will meet their needs and fulfill their pain points. To accomplish this, you need to write an extensive list of your product’s features and benefits.

Start with the features. For example, if you sell shoes, include size information, material, color information, the weight of the shoe, etc. Your features section should be comprehensive and tell consumers everything they need to know about what makes your product special.

      A list of features is a great start, but it’s only half the battle. Potential customers also want to know the benefits of your particular product. And this is where your product description can shine.

      With the shoe example, benefits would include things like comfort, flexibility, odor-resistance, wet and dry traction, etc.

      Allbirds does a fantastic job showing off the benefits of their shoe without being verbose. Their advantages are spelled out in short, sweet blurbs that get right to the point.

      Allbirds product benefits.
      Allbirds clearly identifies its products’ main benefits for customers.

      Benefits are your main selling points, your differentiators, and the reasons why customers will end up selecting your product over your competitors. Don’t neglect clearly identifying them.

      3. Stay True to Your Brand’s Voice

      If your brand’s voice is professional, your product descriptions should be professional. If your brand is snarky and sarcastic, then your product descriptions should match. Is your brand funny? Be funny when writing your product descriptions.

      Everyone is familiar with the hilarious Poo-Pourri advertising videos. You know, the videos that took Poo-Pourri from a $10 million company to a $30 million company almost overnight?

      Poo-Pourri has a unique brand identity and tone of voice, which they stay true to even when describing their products.

      Poo-Pourri product description.
      Poo-Pourri stays true to their brand’s unique voice in product descriptions.

      4. Tell a Full Story

      Every good story has a beginning, a middle, and an end. Unless, of course, you’re one of the writers on Game of Thrones, but I digress.

      With product descriptions, the formula for good writing is no different. You need to present a complete story that engages your readers. This doesn’t mean you need to write a novel, but at the same time, your product description shouldn’t just be a list of features and benefits either.

      Instead, show (not tell) your customers how the product will improve their lives. Help them visualize a real-life scenario where your product solves a problem. The goal is to create a narrative arc in which the reader is the hero and your product is the tool that enables them to succeed.

      For example, check out the impressive product storytelling of Malicious Women Candle Co.

      Customers aren’t just buying a candle at Malicious Women Candle Company. They are purchasing a product that promotes empowerment with a side of hustle and energy. Now that’s a product story.

      5. Use Active Language to Persuade Buyers

Your mom was right; the words you use make a difference — especially with product descriptions. The truth is that some words are just more persuasive than others. In fact, experts have road-tested all kinds of language to come up with 189 words and phrases that actually improve conversion rates.

Consider these 20 tried-and-tested words recommended by David Ogilvy, the proverbial ‘Father of Advertising’:

      • Suddenly
      • Now
      • Announcing
      • Introducing
      • Improvement
      • Amazing
      • Sensational
      • Remarkable
      • Revolutionary
      • Startling
      • Miracle
      • Magic
      • Offer
      • Quick
      • Easy
      • Wanted
      • Challenge
      • Compare
      • Bargain
      • Hurry

      The common theme? Persuasive words encourage consumers to take action.

Jon Morrow has his own list of 600 power words that will tap into your customer’s emotions, making them more likely to engage with your message.

Sample of Jon Morrow’s 600-word list

      Since many companies use awe-inspiring (see what we did there?) power words in their product descriptions, it’s easy to find good examples — even for seemingly bland products. Here’s one about shaving cream from Ulta Beauty.

      Ulta Beauty product description.
      Ulta Beauty utilizes power words to make shaving cream seem swanky.

      When writing product descriptions, take a moment to scan through your copy and make sure each word is pulling its weight.

      6. Make Text Scannable with Bullet Points

      Making your text scannable is one of the most critical elements of writing a good product description. Studies suggest humans have an attention span that’s shorter than that of a goldfish — a bleak eight seconds.

      This means it’s essential to make your content easily digestible. The solution to packing a narrative punch in a relatively small space? Create a bulleted list.

      J. Crew does this well. Customers can click on a picture to see the item of interest and quickly read the scannable bullet points for more information.

      J. Crew product description with bullet points.
      Bullet points make it easy for J. Crew customers to scan the fine print.

      The more you can do to make a product description scannable, the better.

      7. Optimize Copy for Search Engines

      Copywriters have a unique challenge when it comes to writing product descriptions. They must persuade readers, but there’s another audience to keep in mind too: search engine algorithms.

      Search Engine Optimization (SEO) — including identifying and using the appropriate keywords for your products — should be a critical part of your product description writing process.

      The SEO world is constantly changing, along with Google’s algorithms, so what works one day might not be ideal the next. However, there are still some keyword strategies that stand the test of time, such as avoiding duplicate content and including keywords in the following places:

      • Page title
      • Product title
      • Meta descriptions
      • Alt tags
      • Product descriptions

The keywords you use in your copy help Google and other search engines identify what the page is about. This information is then used to determine how to rank your site on the search engine results page (SERP) so that relevant results are served up to people inputting related search queries.

      For example, when you type “shaving cream” into Google, Google offers a list of products.

      Google search result for 'shaving cream'.
      Google displays popular products when you search for ‘shaving cream.’

      There are literally hundreds of shaving cream products on the market today, but these five products have the best SEO keyword strategy.

      Take Cremo Shave Cream, for example. When visiting their product page, it’s clear they have maximized the use of keywords, such as shave cream and shave.

      Cremo product descriptions focused on keywords.
      Cremo focused on incorporating keywords into its product descriptions.

      Additionally, when you check out the page source, you can see the back-end (e.g., alt tags) are optimized with the keyword as well.

      8. Add Images and Video

      It should go without saying that a great product description must include images. If you need extra persuasion, remember that 63% of consumers believe good images are more important than product descriptions.

      If your e-commerce store can afford to hire a product photographer, awesome! If not, there are lots of DIY product photography tutorials to help get you started. Of course, good photos start with good equipment, including:

      • Camera
      • Tripod
      • Nice background
      • White bounce cards made of foam board
      • Table
      • Tape

Once you’ve gathered your gear, you’ll need some tips on how to actually take stellar photos. This guide from Bigcommerce provides beginner-friendly tips on a budget: how to shoot exceptional product photos for under $50. Suggestions include:

      • Using a light-colored backdrop so it’s easier to touch up images.
      • Creating your own lightbox to distribute light evenly.
      • Using a tripod to steady your camera.
      • Retouching images before posting them.

      If you don’t think a smartphone will do the trick, think again. All you need for affirmation is to take a gander at some of the DIY photographers on Instagram. Jennifer Steinkop of @aloeandglow, for example, uses an iPhone 8 Plus, the Lightbox app, and some of the tips mentioned above to create gorgeous beauty shots.

@aloeandglow Instagram account

      Looking for a more corporate example? iRobot has excellent product photography on its website. The company includes at least four images and often a video (bonus!) to show consumers exactly how the product works.

iRobot’s Roomba i7 product page.

      With a few clicks of a button in a second or two, consumers know exactly what they are getting when they buy a Roomba.

      Another tip courtesy of iRobot: consider adding customer reviews to your product description. In addition to quality imagery, social proof can be hugely motivating for prospective buyers.


      How to Create a Product Description Template

      While we’ve just outlined eight tips for writing product descriptions that really sell, it’s important to note that there is no one-size-fits-all solution. That’s because all products have different features, benefits, and selling points.

      However, if you have a list of similar products and you don’t want to start from scratch every time you write a product description, it can be beneficial to create a template.

There are lots of handy product description template examples you can download from e-commerce websites. To really maximize their value, though, we’d recommend you focus on the eight tips we outlined above. Start by asking:

      • What are your buyer personas?
      • What are the pain points of your customers?
      • How does your product solve customer pain points?
      • What power words can you use in your copy?
      • Do you have a unique story or brand voice?
      • Is your language accessible and free of industry jargon?
      • What are the main features and benefits of your products?
      • Do you have an image and video library?

      Once you’ve answered these questions, you can tweak your template and test it with your audience. If you find a specific template is outperforming others, then you’ve found your winner.

      Your Products, Our Hosting

      Ready to revolutionize the way you write product descriptions and how you display them on your website? At DreamHost, we offer low-cost shared WordPress hosting, and a variety of other resources to help you build the perfect custom website for your online store. Check out our shared hosting plans today!


      Use HashiCorp Vault to Manage Secrets

Updated by Linode; contributed by Linode.

      HashiCorp Vault is a secrets management tool that helps to provide secure, automated access to sensitive data. Vault meets these use cases by coupling authentication methods (such as application tokens) to secret engines (such as simple key/value pairs) using policies to control how access is granted. In this guide, you will install, configure, and access Vault in an example deployment to illustrate Vault’s features and API.

      This guide will use the latest version of Vault, which is 1.1.0 at the time of this writing.

      Why Use Vault?

      A service such as Vault requires operational effort to run securely and effectively. Given the added complexity of using Vault as part of an application, in what way does it add value?

      Consider a simple application that must use an API token or other secret value. How should this sensitive credential be given to the application at runtime?

      • Committing the secret alongside the rest of the application code in a version control system such as git is a poor security practice for a number of reasons, including that the sensitive value is recorded in plaintext and not protected in any way.
      • Recording a secret in a file that is passed to an application requires that the file be securely populated in the first place and strictly access-controlled.
      • Static credentials are challenging to rotate or restrict access to if an application is compromised.

      Vault solves these and other problems in a number of ways, including:

      • Services and applications that run without operator interaction can authenticate to Vault using values that can be rotated, revoked, and permission-controlled.
      • Some secrets engines can generate temporary, dynamically-generated secrets to ensure that credentials expire after a period of time.
      • Policies for users and machine accounts can be strictly controlled for specific types of access to particular paths.


      Before continuing, you should familiarize yourself with important Vault terms and concepts that will be used later in this guide.

• A token is the underlying mechanism that underpins access to Vault resources. Whether a user authenticates to Vault using a GitHub token or an application-driven service authenticates using an AppRole RoleID and SecretID, all forms of authentication are eventually normalized to a token. Tokens are typically short-lived (that is, they expire after a period of time, or time-to-live, also called the ttl) and have one or more policies attached to them.
      • A Vault policy dictates certain actions that may be performed upon a Vault path. Capabilities such as the ability to read a secret, write secrets, and delete them are all examples of actions that are defined in a policy for a particular path.
      • A path in Vault is similar in form to a Unix filesystem path (like /etc) or a URL (such as /blog/title). Users and machine accounts interact with Vault over particular paths in order to retrieve secrets, change settings, or otherwise interact with a running Vault service. All Vault access is performed over a REST interface, so these paths eventually take the form of an HTTP URL. While some paths interact with the Vault service itself to manage resources such as policies or settings, many paths serve as an endpoint to either authenticate to Vault or interact with a secret engine.
      • A secret engine is a backend used in Vault to provide secrets to Vault users. The simplest example of a secret engine is the key/value backend, which simply returns plain text values that may be stored at particular paths (these secrets remain encrypted on the backend). Other examples of secret backends include the PKI backend, which can generate and manage TLS certificates, and the TOTP backend, which can generate temporary one-time passwords for web sites that require multi-factor authentication (including the Linode Manager).
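
      Because all Vault access happens over REST, a Vault path maps directly to an HTTP URL under the v1 API prefix. The following sketch uses a hypothetical helper function (not part of Vault) purely to illustrate that mapping; the host name is an example:

```shell
# Hypothetical helper: show how a Vault path such as kv/data/myservice
# becomes an HTTP URL under Vault's v1 REST API prefix.
vault_url() {
  local addr="$1" path="$2"
  printf '%s/v1/%s\n' "$addr" "$path"
}

# For example, reading a KV secret over REST:
vault_url "https://vault.example.com:8200" "kv/data/myservice"
# → https://vault.example.com:8200/v1/kv/data/myservice
```

A request to such a URL would carry a token in the X-Vault-Token header, which is how Vault authenticates REST calls.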


      Installation

      This guide will set up Vault in a simple, local-filesystem-only configuration. The steps listed here apply equally to any distribution.

      These installation steps will:

      • Procure a TLS certificate to ensure that all communications between Vault and clients are encrypted.
      • Configure Vault for local filesystem storage.
      • Install the vault binary and set up the operating system to operate Vault as a service.


      The configuration outlined in this guide is suitable for small deployments. In situations that call for highly-available or fault-tolerant services, consider running more than one Vault instance with a highly-available storage backend such as Consul.

      Before you Begin

      1. Familiarize yourself with Linode’s Getting Started guide and complete the steps for deploying and setting up a Linode running a recent Linux distribution (such as Ubuntu 18.04 or CentOS 7), including setting the hostname and timezone.


        Setting the full hostname correctly in /etc/hosts is important in this guide in order to terminate TLS on Vault correctly. Your Linode’s fully qualified domain name and short hostname should be present in the /etc/hosts file before continuing.

      2. This guide uses sudo wherever possible. Complete the sections of our Securing Your Server guide to create a standard user account, harden SSH access, and remove unnecessary network services.

      3. Follow our UFW Guide in order to install and configure a firewall on your Ubuntu or Debian-based system, or our FirewallD Guide for rpm or CentOS-based systems. Consider reviewing Vault’s Production Hardening recommendations if this will be used in a production environment.


        When configuring a firewall, keep in mind that Vault listens on port 8200 by default and Let’s Encrypt utilizes ports 80 (HTTP) and 443 (HTTPS).

      4. Ensure your system is up to date. On Debian-based systems, use:

        sudo apt update && sudo apt upgrade

        While on rpm-based systems, such as CentOS, use:

        sudo yum update

      Acquire a TLS Certificate

      1. Follow the steps in our Secure HTTP Traffic with Certbot guide to acquire a TLS certificate.

      2. Add a system group in order to grant limited read access to the TLS files created by Certbot.

        sudo groupadd tls
      3. Change the group ownership of certificate files in the Let’s Encrypt directory to tls.

        sudo chgrp -R tls /etc/letsencrypt/{archive,live}
      4. Grant members of the tls group read access to the necessary directories and files.

        sudo chmod g+rx /etc/letsencrypt/{archive,live}
        sudo find /etc/letsencrypt/archive -name 'privkey*' -exec chmod g+r {} ';'
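
      The effect of these permission commands can be previewed without root by replaying them against a scratch directory. This is only an illustration; the real commands above operate on /etc/letsencrypt:

```shell
# Sketch of what the chmod/find commands do, demonstrated on a scratch
# tree instead of /etc/letsencrypt (so no root access is needed).
mkdir -p /tmp/le-demo/archive
touch /tmp/le-demo/archive/privkey1.pem
chmod 600 /tmp/le-demo/archive/privkey1.pem   # private keys start owner-only
chmod g+rx /tmp/le-demo/archive               # group may enter the directory
find /tmp/le-demo/archive -name 'privkey*' -exec chmod g+r {} ';'
stat -c '%A %n' /tmp/le-demo/archive/privkey1.pem
# → -rw-r----- /tmp/le-demo/archive/privkey1.pem
```

The group read bit is all that members of the tls group need in order to serve the key material.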

      Download Vault Files

      1. Download the release binary for Vault.



        If you receive an error that indicates wget is missing from your system, install the wget package and try again.

      2. Download the checksum file, which will verify that the zip file is not corrupt.

      3. Download the checksum signature file, which verifies that the checksum file has not been tampered with.


      Verify the Downloads

      1. Import the HashiCorp Security GPG key (listed on the HashiCorp Security page under Secure Communications):

        gpg --recv-keys 51852D87348FFC4C

        The output should show that the key was imported:

        gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
        gpg: key 51852D87348FFC4C: public key "HashiCorp Security " imported
        gpg: no ultimately trusted keys found
        gpg: Total number processed: 1
        gpg:               imported: 1


        If an error occurs with the error message keyserver receive failed: Syntax error in URI, simply try rerunning the gpg command.


        If you receive errors that indicate the dirmngr software is missing or inaccessible, install dirmngr using your package manager and run the GPG command again.

      2. Verify the checksum file’s GPG signature:

        gpg --verify vault*.sig vault*SHA256SUMS

        The output should contain the Good signature from "HashiCorp Security <>" confirmation message:

        gpg: Signature made Mon 18 Mar 2019 01:44:51 PM MDT
        gpg:                using RSA key 91A6E7F85D05C65630BEF18951852D87348FFC4C
        gpg: Good signature from "HashiCorp Security <>" [unknown]
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.
        Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE  F189 5185 2D87 348F FC4C
      3. Verify that the fingerprint output matches the fingerprint listed in the Secure Communications section of the HashiCorp Security page.

      4. Verify the .zip archive’s checksum:

        sha256sum -c vault*SHA256SUMS 2>&1 | grep OK

        The output should show the file’s name as given in the vault*SHA256SUMS file:

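      If you want to see how the checksum verification mechanism works before running it against the Vault archive, the following throwaway example exercises the same sha256sum -c workflow on a locally generated file:

```shell
# Illustration of how `sha256sum -c` verifies a file: generate a file,
# record its checksum, then check the file against the recorded sum.
printf 'example data\n' > /tmp/demo.txt
sha256sum /tmp/demo.txt > /tmp/demo.SHA256SUMS
sha256sum -c /tmp/demo.SHA256SUMS
# → /tmp/demo.txt: OK
```

If the file were modified after the checksum was recorded, the same check would instead report FAILED, which is exactly how a corrupt or tampered Vault download would be detected.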

      Install the Vault Executable

      1. Extract the Vault executable to the local directory.

        unzip vault_*


        If you receive an error that indicates unzip is missing from your system, install the unzip package and try again.

      2. Move the vault executable into a system-wide location.

        sudo mv vault /usr/local/bin
      3. Reset the ownership and permissions on the executable.

        sudo chown root:root /usr/local/bin/vault
        sudo chmod 755 /usr/local/bin/vault
      4. Set executable capabilities on the vault binary. This will grant Vault privileges to lock memory, which is a best practice for running Vault securely (see the Vault documentation for additional information).

        sudo setcap cap_ipc_lock=+ep /usr/local/bin/vault
      5. Verify that vault is now available in the local shell.

        vault --version

        The output of this command should return the following.

        Vault v1.1.0 ('36aa8c8dd1936e10ebd7a4c1d412ae0e6f7900bd')

      System Vault Configuration

      1. Create a system user that vault will run as when the service is started.

        sudo useradd --system -d /etc/vault.d -s /bin/nologin vault
      2. Add the vault user to the previously created tls group, which will grant the user the ability to read Let’s Encrypt certificates.

        sudo gpasswd -a vault tls
      3. Create the data directory and configuration directory for vault with limited permissions.

        sudo install -o vault -g vault -m 750 -d /var/lib/vault
        sudo install -o vault -g vault -m 750 -d /etc/vault.d
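
      The install utility creates each directory and applies its mode in a single step. The owner flags (-o/-g) require root, but the mode behavior can be demonstrated on a temporary path:

```shell
# What `install -m 750 -d` does: create the directory and set its
# permissions in one step (owner: rwx, group: r-x, other: none).
install -m 750 -d /tmp/vault-demo
stat -c '%a %n' /tmp/vault-demo
# → 750 /tmp/vault-demo
```

Mode 750 keeps the Vault data and configuration readable by the vault user and group while denying access to all other accounts.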
      4. Create a systemd service file at /etc/systemd/system/vault.service that will control how to run vault persistently as a system daemon.

        [Unit]
        Description="a tool for managing secrets"

        [Service]
        User=vault
        Group=vault
        CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
        ExecStart=/usr/local/bin/vault server -config=/etc/vault.d/vault.hcl
        ExecReload=/bin/kill --signal HUP $MAINPID

        [Install]
        WantedBy=multi-user.target

        These systemd service options define a number of important settings to ensure that Vault runs securely and reliably. Review the Vault documentation for a complete explanation of what these options achieve.


      Configure Vault

      1. Create a configuration file for Vault at /etc/vault.d/vault.hcl with the following contents, replacing example.com with the domain used in your Let’s Encrypt certificates.

        listener "tcp" {
          address = "0.0.0.0:8200"
          tls_cert_file = "/etc/letsencrypt/live/example.com/fullchain.pem"
          tls_key_file = "/etc/letsencrypt/live/example.com/privkey.pem"
        }

        storage "file" {
          path = "/var/lib/vault"
        }
        This configuration will use the Let’s Encrypt certificates created in the previous steps to terminate TLS for the Vault service. This ensures that secrets will never be transmitted in plaintext. The actual storage for Vault will be on the local filesystem at /var/lib/vault.

      Run The Vault Service

      1. Vault is now ready to run. Start the service using systemctl.

        sudo systemctl start vault
      2. If desired, enable the service as well so that Vault starts at system boot time.

        sudo systemctl enable vault
      3. Confirm that Vault is operational by using the vault executable to check for the service’s status. Set the VAULT_ADDR environment variable to point at your Vault instance, replacing example.com with your own domain:

        export VAULT_ADDR=https://example.com:8200
      4. vault commands should now be sent to your local Vault instance. To confirm this, run the vault status command:

        vault status

        The command should return output similar to the following:

        Key                Value
        ---                -----
        Seal Type          shamir
        Initialized        false
        Sealed             true
        Total Shares       0
        Threshold          0
        Unseal Progress    0/0
        Unseal Nonce       n/a
        Version            n/a
        HA Enabled         false

      The remainder of this tutorial assumes that the environment variable VAULT_ADDR is set to this value to ensure that requests are sent to the correct Vault host.

      Initializing Vault

      At this stage, Vault is installed and running, but not yet initialized. The following steps will initialize the Vault backend, which sets unseal keys and returns the initial root token. Initialization occurs only one time for a Vault deployment.

      There are two configurable options to choose when performing the initialization step. The first value is the number of key shares, which controls the total number of unseal keys that Vault will generate. The second value is the key threshold, which controls how many of these unseal key shares are required before Vault will successfully unseal itself. Unsealing is required whenever Vault is restarted or otherwise brought online after being in a previously offline state.

      To illustrate this concept, consider a secure server in a data center. Because the Vault database is only decrypted in memory, stealing the server or bringing it offline for any reason will leave the only copy of Vault’s database on the filesystem in encrypted form, or “sealed”.

      When starting the server again, a key share of 3 and key threshold of 2 means that 3 keys exist, but at least 2 must be provided at startup for Vault to derive its decryption key and load its database into memory for access once again.

      The key share count ensures that multiple keys can exist in different locations for a degree of fault tolerance and for backup purposes. The key threshold count ensures that compromising one unseal key alone is not sufficient to decrypt Vault data.

      1. Choose a value for the number of key shares and key threshold. Your situation may vary, but as an example, consider a team of three people in charge of operating Vault. A key share of 3 ensures that each member holds one unseal key. A key threshold of 2 means that no single operator can lose their key and compromise the system or steal the Vault database without coordinating with another operator.

      2. Using these chosen values, execute the initialization command. Be prepared to save the output that is returned from the following command, as it is only viewable once.

        vault operator init -key-shares=3 -key-threshold=2

        This command will return output similar to the following:

        Unseal Key 1: BaR6GUWRY8hIeNyuzAn7FTa82DiIldgvEZhOKhVsl0X5
        Unseal Key 2: jzh7lji1NX9TsNVGycUudSIy/X4lczJgsCpRfm3m8Q03
        Unseal Key 3: JfdH8LqEyc4B+xLMBX6/LT9o8G/6isC2ZFfz+iNMIW/0
        Initial Root Token: s.YijNa8lqSDeho1tJBtY02983
        Vault initialized with 3 key shares and a key threshold of 2. Please securely
        distribute the key shares printed above. When the Vault is re-sealed,
        restarted, or stopped, you must supply at least 2 of these keys to unseal it
        before it can start servicing requests.
        Vault does not store the generated master key. Without at least 2 key to
        reconstruct the master key, Vault will remain permanently sealed!
        It is possible to generate new unseal keys, provided you have a quorum of
        existing unseal keys shares. See "vault operator rekey" for more information.
      3. In a production scenario, these unseal keys should be stored in separate locations. For example, store one in a password manager such as LastPass, encrypt one with gpg, and store another offline on a USB key. Doing so ensures that compromising one storage location is not sufficient to recover the number of unseal keys required to decrypt the Vault database.

      4. The Initial Root Token is equivalent to the “root” or superuser account for the Vault API. Record and protect this token in a similar fashion. Like the root account on a Unix system, this token should be used to create less-privileged accounts to use for day-to-day interactions with Vault and the root token should be used infrequently due to its widespread privileges.

      Unseal Vault

      After initialization, Vault will be sealed. The following unseal steps must be performed any time the vault service is brought down and then brought up again, such as when performing systemctl restart vault or restarting the host machine.

      1. With VAULT_ADDR set appropriately, execute the unseal command.

        vault operator unseal

        A prompt will appear:

        Unseal Key (will be hidden):
      2. Paste or enter one unseal key and press Enter. The command will finish with output similar to the following:

        Unseal Key (will be hidden):
        Key                Value
        ---                -----
        Seal Type          shamir
        Initialized        true
        Sealed             true
        Total Shares       3
        Threshold          2
        Unseal Progress    1/2
        Unseal Nonce       0124ce2a-6229-fac1-0e3f-da3e97e00583
        Version            1.1.0
        HA Enabled         false

        Notice that the output indicates that one of the two required unseal keys has been provided.

      3. Perform the unseal command again.

        vault operator unseal
      4. Enter a different unseal key when the prompt appears.

        Unseal Key (will be hidden):
      5. The resulting output should indicate that Vault is now unsealed (notice the Sealed false line).

        Unseal Key (will be hidden):
        Key             Value
        ---             -----
        Seal Type       shamir
        Initialized     true
        Sealed          false
        Total Shares    3
        Threshold       2
        Version         1.1.0
        Cluster Name    vault-cluster-a397153e
        Cluster ID      a065557e-3ee8-9d26-4d90-b90c8d69fa5d
        HA Enabled      false

      Vault is now operational.

      Using Vault

      Token Authentication

      When interacting with Vault over its REST API, Vault identifies and authenticates most requests by the presence of a token. While the initial root token can be used for now, the Policies section of this guide explains how to provision additional tokens.

      1. Set the VAULT_TOKEN environment variable to the value of the previously-obtained root token. This token is the authentication mechanism that the vault command will rely on for future interaction with Vault. The actual root token will be different in your environment.

        export VAULT_TOKEN=s.YijNa8lqSDeho1tJBtY02983
      2. Use the token lookup subcommand to confirm that the token is valid and has the expected permissions.

        vault token lookup
      3. The output of this command should include the following:

        policies            [root]

      The KV Secret Backend

      Vault backends are the core mechanism Vault uses to permit users to read and write secret values. The simplest backend to illustrate this functionality is the KV backend. This backend lets clients write key/value pairs (such as mysecret=apikey) that can be read later.

      1. Enable the secret backend by using the enable Vault subcommand.

        vault secrets enable -version=2 kv
      2. Write an example value to the KV backend using the kv put Vault subcommand.

        vault kv put kv/myservice api_token=secretvalue

        This command should return output similar to the following:

        Key              Value
        ---              -----
        created_time     2019-03-31T04:35:38.631167678Z
        deletion_time    n/a
        destroyed        false
        version          1
      3. Read this value from the kv/myservice path.

        vault kv get kv/myservice

        This command should return output similar to the following:

        ====== Metadata ======
        Key              Value
        ---              -----
        created_time     2019-03-31T04:35:38.631167678Z
        deletion_time    n/a
        destroyed        false
        version          1
        ====== Data ======
        Key          Value
        ---          -----
        api_token    secretvalue
      4. Many utilities and scripts are better suited to processing JSON output. Use the -format=json flag to perform the read once more, with the results returned in JSON form.

        vault kv get -format=json kv/myservice

        {
          "request_id": "2734ea81-6f39-c017-4c73-2719b2018b65",
          "lease_id": "",
          "lease_duration": 0,
          "renewable": false,
          "data": {
            "data": {
              "api_token": "secretvalue"
            },
            "metadata": {
              "created_time": "2019-03-31T04:35:38.631167678Z",
              "deletion_time": "",
              "destroyed": false,
              "version": 1
            }
          },
          "warnings": null
        }

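      One common use of the JSON form is pulling a single field out of the response in a script. The following sketch inlines a trimmed copy of the response for demonstration; in practice you would pipe vault kv get -format=json into the same command (jq works equally well if installed):

```shell
# Extract the secret value from Vault's JSON output using python3.
# The JSON here is an inlined, trimmed example of the response shown above.
json='{"data":{"data":{"api_token":"secretvalue"}}}'
printf '%s' "$json" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["data"]["data"]["api_token"])'
# → secretvalue
```

Note the nested data.data structure: the version 2 KV backend wraps the stored key/value pairs inside an outer data object alongside the metadata.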

      Policies

      Up until this point, we have performed API calls to Vault with the root token. Production best practices dictate that this token should rarely be used and that most operations should be performed with lesser-privileged tokens associated with controlled policies.

      Policies are defined by specifying a particular path and the set of capabilities that a user is permitted to exercise upon that path. In our previous commands, the path has been kv/myservice, so we can create a policy that permits reading this one secret and nothing else, including listing other secrets. When no policy exists for a particular path, Vault denies operations by default.

      In the case of the KV backend, Vault distinguishes operations upon the stored data, which are the actual stored values, and metadata, which includes information such as version history. In this example, we will create a policy to control access to the key/value data alone.

      1. Create the following Vault policy file, saved as policy.hcl.

        path "kv/data/myservice" {
          capabilities = ["read"]
        }

        This simple policy will permit any token associated with it to read the secret stored at the KV secret backend path kv/myservice (which the version 2 KV backend exposes over the API as kv/data/myservice).

      2. Load this policy into Vault using the policy write subcommand. The following command names the aforementioned policy read-myservice.

        vault policy write read-myservice policy.hcl
      3. To illustrate the use of this policy, create a new token with this new policy associated with it.

        vault token create -policy=read-myservice

        This command should return output similar to the following.

        Key                  Value
        ---                  -----
        token                s.YdpJWRRaEIgdOW4y72sSVygy
        token_accessor       07akQfzg0TDjj3YoZSGMPkHA
        token_duration       768h
        token_renewable      true
        token_policies       ["default" "read-myservice"]
        identity_policies    []
        policies             ["default" "read-myservice"]
      4. Open another terminal window or tab and log in to the same host that Vault is running on. Set VAULT_ADDR to ensure that new vault commands point at the local instance of Vault, replacing example.com with your domain.

        export VAULT_ADDR=https://example.com:8200
      5. Set the VAULT_TOKEN environment variable to the new token just created by the token create command. Remember that your actual token will be different than the one in this example.

        export VAULT_TOKEN=s.YdpJWRRaEIgdOW4y72sSVygy
      6. Now attempt to read our secret in Vault at the kv/myservice path.

        vault kv get kv/myservice

        Vault should return the key/value data.

        ====== Metadata ======
        Key              Value
        ---              -----
        created_time     2019-03-31T04:35:38.631167678Z
        deletion_time    n/a
        destroyed        false
        version          1
        ====== Data ======
        Key          Value
        ---          -----
        api_token    secretvalue
      7. To illustrate forbidden operations, attempt to list all secrets in the KV backend.

        vault kv list kv/

        Vault should deny this request.

        Error listing kv/metadata: Error making API request.
        URL: GET
        Code: 403. Errors:
        * 1 error occurred:
                * permission denied
      8. In contrast, attempt to perform the same operation in the previous terminal window that has been configured with the root token.

        vault kv list kv/

        The root token should have sufficient rights to return a list of all secret keys under the kv/ path.

      Authentication Methods

      In practice, when services that require secret values are deployed, a token should not be distributed as part of the deployment or configuration management. Rather, services should authenticate themselves to Vault in order to acquire a token that has a limited lifetime. This ensures that credentials eventually expire and cannot be reused if they are ever leaked or disclosed.

      Vault supports many types of authentication methods. For example, the Kubernetes authentication method can retrieve a token for individual pods. As a simple illustrative example, the following steps will demonstrate how to use the AppRole method.

      The AppRole authentication method works by requiring that clients provide two pieces of information: the AppRole RoleID and SecretID. The recommended approach to using this method is to store these two pieces of information in separate locations, as either alone is not sufficient to authenticate against Vault; together, they permit a client to retrieve a valid Vault token. For example, in a production service, a RoleID might be present in a service’s configuration file, while the SecretID could be provided as an environment variable.
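
      The separate-locations idea can be sketched in a few lines of shell. The file name, variable name, and the combination step here are all hypothetical; the IDs are the example values used later in this section:

```shell
# Sketch: the RoleID lives in a config file, the SecretID arrives via the
# environment, and only at login time are the two combined.
printf 'role_id=147cd412-d1c2-4d2c-c57e-d660da0b1fa8\n' > /tmp/app.conf
export APP_SECRET_ID=2225c0c3-9b9f-9a9c-a0a5-10bf06df7b25

. /tmp/app.conf   # app.conf is plain shell assignment here, for brevity

# Print the login command a service would run against Vault:
echo vault write auth/approle/login role_id="$role_id" secret_id="$APP_SECRET_ID"
```

An attacker who obtains only the configuration file, or only the environment, holds half of the credential and cannot log in to Vault.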

      1. Enable the AppRole authentication method using the auth subcommand. Remember to perform these steps in the terminal window with the root token stored in the VAULT_TOKEN environment variable, otherwise Vault commands will fail.

        vault auth enable approle
      2. Create a named role. This will define a role that can be used to “log in” to Vault and retrieve a token with a policy associated with it. The following command creates a named role called my-application, which issues tokens valid for 10 minutes with the read-myservice policy associated with them.

        vault write auth/approle/role/my-application policies=read-myservice token_ttl=10m
      3. Retrieve the RoleID of the named role, which uniquely identifies the AppRole. Note this value for later use.

        vault read auth/approle/role/my-application/role-id
        Key        Value
        ---        -----
        role_id    147cd412-d1c2-4d2c-c57e-d660da0b1fa8

        In this example case, RoleID is 147cd412-d1c2-4d2c-c57e-d660da0b1fa8. Note that your value will be different.

      4. Finally, read the secret-id of the named role, and save this value for later use as well.

        vault write -f auth/approle/role/my-application/secret-id
        Key                   Value
        ---                   -----
        secret_id             2225c0c3-9b9f-9a9c-a0a5-10bf06df7b25
        secret_id_accessor    30cbef6a-8834-94fe-6cf3-cf2e4598dd6a

        In this example output, the SecretID is 2225c0c3-9b9f-9a9c-a0a5-10bf06df7b25.

      5. Use these values to generate a limited-use token by performing a write operation against the AppRole API. Replace the RoleID and SecretID values here with your own.

        vault write auth/approle/login role_id=147cd412-d1c2-4d2c-c57e-d660da0b1fa8 secret_id=2225c0c3-9b9f-9a9c-a0a5-10bf06df7b25

        The resulting output should include a new token, which in this example case is s.3uu4vwFO8D1mG5S76IG04mck.

        Key                     Value
        ---                     -----
        token                   s.3uu4vwFO8D1mG5S76IG04mck
        token_accessor          fi3aW4W9kZNB3FAC20HRXeoT
        token_duration          10m
        token_renewable         true
        token_policies          ["default" "read-myservice"]
        identity_policies       []
        policies                ["default" "read-myservice"]
        token_meta_role_name    my-application
      6. Open one more terminal tab or window and log in to your remote host running Vault.

      7. Once again, set the VAULT_ADDR environment variable to the correct value to communicate with your local Vault instance.

        export VAULT_ADDR=https://example.com:8200
      8. Set the VAULT_TOKEN environment variable to this newly created token. From the previous example output, this would be the following (note that your token will be different).

        export VAULT_TOKEN=s.3uu4vwFO8D1mG5S76IG04mck
      9. Read the KV path that this token should be able to access.

        vault kv get kv/myservice

        The example secret should be readable and accessible.

      10. If you read this value using this Vault token after more than 10 minutes have elapsed, the token will have expired and any read operations using the token should be denied. Performing another vault write auth/approle/login operation (detailed in step 5) can generate new tokens to use.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.
