
      What’s New in Debian 10 Buster


      The Debian operating system’s most recent stable release, version 10 (Buster), was published on July 6, 2019, and will be supported until 2022. Long-term support may be provided through 2024 as part of the Debian LTS Project.

      This guide is a brief overview of the new features and significant changes to Debian since the previous release. It focuses mainly on changes that will affect users running Debian in a typical server environment. It synthesizes information from the official Debian 10 release notes, the Debian 10 release blog post, and other sources.

      Summary of Changes and Major Package Versions

      Generally, Debian stable releases contain very few surprises or major changes. This remains the case with Debian 10. Beyond a few networking and security changes — which we will cover in subsequent sections — most updates are small modifications to the base system and new versions of available software packages.

      The following list summarizes a selection of Debian 10 software updates, with the versions that shipped in Debian 9 included in parentheses:


      Programming Languages

      • Go 1.11 (from 1.7)
      • Node.js 10.15.2 (from 4.8.2)
      • PHP 7.3 (from 7.0)
      • Python 3.7.2 (from 3.5.3)
      • Ruby 2.5 (from 2.3)
      • Rust 1.34 (from 1.24)


      The following sections explain some of the more extensive changes to Debian 10.

      Linux Kernel 4.19

      The Linux kernel has been updated to version 4.19. This is a long-term support kernel that was released on October 22, 2018 and will be supported until December of 2020. For more information on the different types of Linux kernel releases, take a look at the official Linux kernel release and support schedule.

      Some new features and updates that were released between kernels 4.9 and 4.19 include:

      • Virtual GPU support, which enables GPU hardware to be shared between multiple virtual machines instead of being passed through directly to one. (4.10)
      • Performance improvements for large-scale SSD-based swap. (4.11)
      • Improved in-kernel TLS acceleration. (4.13)
      • Improvements to the Ext4 filesystem, including support for billions of directory entries, and extended attributes that can be up to 64k in size. (4.13)
      • Support for 4 petabytes of physical memory, up from 64 terabytes. (4.14)
      • Meltdown and Spectre vulnerability updates, along with other CPU vulnerability patches. (4.15)
      • Support for using cgroups to set I/O latency targets for block devices. (4.19)

      For more information on Linux kernel updates, the Kernel Newbies site maintains a detailed and beginner-friendly changelog summary for each release.

      AppArmor Enabled by Default

      AppArmor is an access control system that focuses on limiting the resources an application can use. It is supplemental to more traditional user-based access control mechanisms.

      AppArmor works by loading application profiles into the kernel, and then using those profiles to enforce limits on capabilities such as file reads and writes, networking access, mounts, and raw socket access.

      Debian 10 ships with AppArmor enabled and some default profiles for common applications such as Apache, Bash, Python, and PHP. More profiles can be installed via the apparmor-profiles-extra package.
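      As an illustration, an AppArmor profile is a plain-text file under /etc/apparmor.d/ that whitelists what a program may do. The following is a hypothetical sketch; the program name and rules are invented for illustration and are not a shipped Debian profile:

```
# Hypothetical profile for /usr/bin/example
/usr/bin/example {
  #include <abstractions/base>

  /etc/example.conf r,        # may read its own configuration
  /var/log/example.log w,     # may write to its log file
  network inet stream,        # may open TCP sockets
}
```

      Anything not granted by a rule is denied when the profile is in enforce mode, which is what makes AppArmor supplemental to user-based permissions.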

      See the AppArmor documentation for more information, including guidelines on how to write your own AppArmor application profiles.

      nftables Replaces iptables for Packet Filtering

      In Debian Buster the iptables subsystem is replaced by nftables, a newer packet filtering system with improved syntax, streamlined IPv4/IPv6 support, and built-in support for data sets such as dictionaries and maps. You can read a more detailed list of differences on the nftables wiki.

      Compatibility with existing iptables scripts is provided by the iptables-nft command. The nftables wiki also has advice on transitioning from iptables to nftables.
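      For example, a rule accepting inbound SSH traffic looks similar in both systems. The nft command below assumes an inet table named filter with an input chain has already been created; both commands require root privileges:

```
# iptables syntax:
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# nftables equivalent:
nft add rule inet filter input tcp dport 22 accept
```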

      Apt Supports HTTPS by Default

      Apt supports HTTPS repositories by default in Debian 10. Users no longer need to install an additional package (apt-transport-https) before using HTTPS-based package repositories.
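      For example, a sources.list entry pointing at the official Debian mirror network over HTTPS now works out of the box:

```
deb https://deb.debian.org/debian buster main
```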

      Additionally, unattended-upgrades — the system Debian uses to perform automatic updates from the security repository — now also supports automating point-release upgrades from any repo. These upgrades are usually small bug fixes and security updates.


      While this guide is not exhaustive, you should now have a general idea of the major changes and new features in Debian 10 Buster. Please refer to the official Debian 10 release notes for more information.


      HTTP/1.1 vs HTTP/2: What’s the Difference?

      The author selected the Society of Women Engineers to receive a donation as part of the Write for DOnations program.


      The Hypertext Transfer Protocol, or HTTP, is an application protocol that has been the de facto standard for communication on the World Wide Web since its invention in 1989. From the release of HTTP/1.1 in 1997 until recently, there have been few revisions to the protocol. But in 2015, a reimagined version called HTTP/2 came into use, which offered several methods to decrease latency, especially when dealing with mobile platforms and server-intensive graphics and videos. HTTP/2 has since become increasingly popular, with some estimates suggesting that around a third of all websites in the world support it. In this changing landscape, web developers can benefit from understanding the technical differences between HTTP/1.1 and HTTP/2, allowing them to make informed and efficient decisions about evolving best practices.

      After reading this article, you will understand the main differences between HTTP/1.1 and HTTP/2, concentrating on the technical changes HTTP/2 has adopted to achieve a more efficient Web protocol.


      To contextualize the specific changes that HTTP/2 made to HTTP/1.1, let’s first take a high-level look at the historical development and basic workings of each.


      Developed by Timothy Berners-Lee in 1989 as a communication standard for the World Wide Web, HTTP is a top-level application protocol that exchanges information between a client computer and a local or remote web server. In this process, a client sends a text-based request to a server by calling a method like GET or POST. In response, the server sends a resource like an HTML page back to the client.

      For example, let’s say you are visiting a website at the domain www.example.com. When you navigate to this URL, the web browser on your computer sends an HTTP request in the form of a text-based message, similar to the one shown here:

      GET /index.html HTTP/1.1
      Host: www.example.com

      This request uses the GET method, which asks for data from the host server listed after Host:. In response to this request, the web server returns an HTML page to the requesting client, in addition to any images, stylesheets, or other resources called for in the HTML. Note that not all of the resources are returned to the client in the first call for data. The requests and responses will go back and forth between the server and client until the web browser has received all the resources necessary to render the contents of the HTML page on your screen.
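      The server’s reply is likewise a text-based message, with headers followed by the resource itself. The headers and length shown here are abbreviated, illustrative values:

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 5124

<!DOCTYPE html>
<html>
...
```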

      You can think of this exchange of requests and responses as a single application layer of the internet protocol stack, sitting on top of the transfer layer (usually using the Transmission Control Protocol, or TCP) and networking layers (using the Internet Protocol, or IP):

      Internet Protocol Stack

      There is much to discuss about the lower levels of this stack, but in order to gain a high-level understanding of HTTP/2, you only need to know this abstracted layer model and where HTTP figures into it.

      With this basic overview of HTTP/1.1 out of the way, we can now move on to recounting the early development of HTTP/2.


      HTTP/2 began as the SPDY protocol, developed primarily at Google with the intention of reducing web page load latency by using techniques such as compression, multiplexing, and prioritization. This protocol served as a template for HTTP/2 when the Hypertext Transfer Protocol working group httpbis of the IETF (Internet Engineering Task Force) put the standard together, culminating in the publication of HTTP/2 in May 2015. From the beginning, many browsers supported this standardization effort, including Chrome, Opera, Internet Explorer, and Safari. Due in part to this browser support, there has been a significant adoption rate of the protocol since 2015, with especially high rates among new sites.

      From a technical point of view, one of the most significant features that distinguishes HTTP/1.1 and HTTP/2 is the binary framing layer, which can be thought of as a part of the application layer in the internet protocol stack. As opposed to HTTP/1.1, which keeps all requests and responses in plain text format, HTTP/2 uses the binary framing layer to encapsulate all messages in binary format, while still maintaining HTTP semantics, such as verbs, methods, and headers. An application level API would still create messages in the conventional HTTP formats, but the underlying layer would then convert these messages into binary. This ensures that web applications created before HTTP/2 can continue functioning as normal when interacting with the new protocol.

      The conversion of messages into binary allows HTTP/2 to try new approaches to data delivery not available in HTTP/1.1, a contrast that is at the root of the practical differences between the two protocols. The next section will take a look at the delivery model of HTTP/1.1, followed by what new models are made possible by HTTP/2.

      Delivery Models

      As mentioned in the previous section, HTTP/1.1 and HTTP/2 share semantics, ensuring that the requests and responses traveling between the server and client in both protocols reach their destinations as traditionally formatted messages with headers and bodies, using familiar methods like GET and POST. But while HTTP/1.1 transfers these in plain-text messages, HTTP/2 encodes these into binary, allowing for significantly different delivery model possibilities. In this section, we will first briefly examine how HTTP/1.1 tries to optimize efficiency with its delivery model and the problems that come up from this, followed by the advantages of the binary framing layer of HTTP/2 and a description of how it prioritizes requests.

      HTTP/1.1 — Pipelining and Head-of-Line Blocking

      The first response that a client receives on an HTTP GET request is often not the fully rendered page. Instead, it contains links to additional resources needed by the requested page. The client discovers that the full rendering of the page requires these additional resources from the server only after it downloads the page. Because of this, the client will have to make additional requests to retrieve these resources. In HTTP/1.0, the client had to break and remake the TCP connection with every new request, a costly affair in terms of both time and resources.

      HTTP/1.1 takes care of this problem by introducing persistent connections and pipelining. With persistent connections, HTTP/1.1 assumes that a TCP connection should be kept open unless directly told to close. This allows the client to send multiple requests along the same connection without waiting for a response to each, greatly improving the performance of HTTP/1.1 over HTTP/1.0.

      Unfortunately, there is a natural bottleneck to this optimization strategy. Since multiple data packets cannot pass each other when traveling to the same destination, there are situations in which a request at the head of the queue that cannot retrieve its required resource will block all the requests behind it. This is known as head-of-line (HOL) blocking, and is a significant problem with optimizing connection efficiency in HTTP/1.1. Adding separate, parallel TCP connections could alleviate this issue, but there are limits to the number of concurrent TCP connections possible between a client and server, and each new connection requires significant resources.

      These problems were at the forefront of the minds of HTTP/2 developers, who proposed to use the aforementioned binary framing layer to fix these issues, a topic you will learn more about in the next section.

      HTTP/2 — Advantages of the Binary Framing Layer

      In HTTP/2, the binary framing layer encodes requests/responses and cuts them up into smaller packets of information, greatly increasing the flexibility of data transfer.

      Let’s take a closer look at how this works. As opposed to HTTP/1.1, which must make use of multiple TCP connections to lessen the effect of HOL blocking, HTTP/2 establishes a single connection object between the two machines. Within this connection there are multiple streams of data. Each stream consists of multiple messages in the familiar request/response format. Finally, each of these messages is split into smaller units called frames:

      Streams, Messages, and Frames

      At the most granular level, the communication channel consists of a series of binary-encoded frames, each tagged to a particular stream. The identifying tags allow the connection to interleave these frames during transfer and reassemble them at the other end. The interleaved requests and responses can run in parallel without blocking the messages behind them, a process called multiplexing. Multiplexing resolves the head-of-line blocking issue in HTTP/1.1 by ensuring that no message has to wait for another to finish. This also means that servers and clients can send concurrent requests and responses, allowing for greater control and more efficient connection management.

      Since multiplexing allows the client to construct multiple streams in parallel, these streams only need to make use of a single TCP connection. Having a single persistent connection per origin improves upon HTTP/1.1 by reducing the memory and processing footprint throughout the network. This results in better network and bandwidth utilization and thus decreases the overall operational cost.

      A single TCP connection also improves the performance of the HTTPS protocol, since the client and server can reuse the same secured session for multiple requests/responses. In HTTPS, during the TLS or SSL handshake, both parties agree on the use of a single key throughout the session. If the connection breaks, a new session starts, requiring a newly generated key for further communication. Thus, maintaining a single connection can greatly reduce the resources required for HTTPS performance. Note that, though HTTP/2 specifications do not make it mandatory to use the TLS layer, many major browsers only support HTTP/2 with HTTPS.

      Although the multiplexing inherent in the binary framing layer solves certain issues of HTTP/1.1, multiple streams awaiting the same resource can still cause performance issues. The design of HTTP/2 takes this into account, however, by using stream prioritization, a topic we will discuss in the next section.

      HTTP/2 — Stream Prioritization

      Stream prioritization not only solves the possible issue of requests competing for the same resource, but also allows developers to customize the relative weight of requests to better optimize application performance. In this section, we will break down the process of this prioritization in order to provide better insight into how you can leverage this feature of HTTP/2.

      As you know now, the binary framing layer organizes messages into parallel streams of data. When a client sends concurrent requests to a server, it can prioritize the responses it is requesting by assigning a weight between 1 and 256 to each stream. The higher number indicates higher priority. In addition to this, the client also states each stream’s dependency on another stream by specifying the ID of the stream on which it depends. If the parent identifier is omitted, the stream is considered to be dependent on the root stream. This is illustrated in the following figure:

      Stream Prioritization

      In the illustration, the channel contains six streams, each with a unique ID and associated with a specific weight. Stream 1 does not have a parent ID associated with it and is by default associated with the root node. All other streams have some parent ID marked. The resource allocation for each stream will be based on the weight that they hold and the dependencies they require. Streams 5 and 6, for example, which in the figure have been assigned the same weight and the same parent stream, will have the same prioritization for resource allocation.

      The server uses this information to create a dependency tree, which allows the server to determine the order in which the requests will retrieve their data. Based on the streams in the preceding figure, the dependency tree will be as follows:

      Dependency Tree

      In this dependency tree, stream 1 depends on the root stream and no other stream derives from the root, so all the available resources will be allocated to stream 1 ahead of the other streams. Since the tree indicates that stream 2 depends on the completion of stream 1, stream 2 will not proceed until the stream 1 task is completed. Now, let us look at streams 3 and 4. Both of these streams depend on stream 2. As in the case of stream 1, stream 2 will get all the available resources ahead of streams 3 and 4. After stream 2 completes its task, streams 3 and 4 will get the resources; these are split in the ratio of 2:4 as indicated by their weights, giving stream 4 the larger share. Finally, when stream 3 finishes, streams 5 and 6 will get the available resources in equal parts. This can happen before stream 4 has finished its task, even though stream 4 receives a larger share of resources; streams at a lower level are allowed to start as soon as the streams they depend on at an upper level have finished.
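      The weight arithmetic above can be sketched as a small function that splits a parent’s resources among its child streams. The stream IDs and weights are the illustrative ones from the figure:

```python
# Sketch: sibling streams share their parent's resources in proportion
# to their HTTP/2 weights (values between 1 and 256).

def allocate(siblings):
    """Return each stream's fraction of the parent's resources.

    siblings: dict mapping stream ID -> weight.
    """
    total = sum(siblings.values())
    return {sid: weight / total for sid, weight in siblings.items()}

# Streams 3 and 4 both depend on stream 2, with weights 2 and 4:
shares = allocate({3: 2, 4: 4})
print(shares)  # stream 4 gets twice the share of stream 3
```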

      As an application developer, you can set the weights in your requests based on your needs. For example, you may assign a lower priority for loading an image with high resolution after providing a thumbnail image on the web page. By providing this facility of weight assignment, HTTP/2 enables developers to gain better control over web page rendering. The protocol also allows the client to change dependencies and reallocate weights at runtime in response to user interaction. It is important to note, however, that a server may change assigned priorities on its own if a certain stream is blocked from accessing a specific resource.

      Buffer Overflow

      In any TCP connection between two machines, both the client and the server have a certain amount of buffer space available to hold incoming requests that have not yet been processed. These buffers offer flexibility to account for numerous or particularly large requests, in addition to uneven speeds of downstream and upstream connections.

      There are situations, however, in which a buffer is not enough. For example, the server may be pushing a large amount of data at a pace that the client application is not able to cope with due to a limited buffer size or a lower bandwidth. Likewise, when a client uploads a huge image or a video to a server, the server buffer may overflow, causing some additional packets to be lost.

      In order to avoid buffer overflow, a flow control mechanism must prevent the sender from overwhelming the receiver with data. This section will provide an overview of how HTTP/1.1 and HTTP/2 use different versions of this mechanism to deal with flow control according to their different delivery models.


      HTTP/1.1 — Flow Control

      In HTTP/1.1, flow control relies on the underlying TCP connection. When this connection initiates, both client and server establish their buffer sizes using their system default settings. If the receiver’s buffer is partially filled with data, it will tell the sender its receive window, i.e., the amount of available space that remains in its buffer. This receive window is advertised in a signal known as an ACK packet, which is the data packet that the receiver sends to acknowledge that it received the opening signal. If this advertised receive window size is zero, the sender will send no more data until the client clears its internal buffer and then requests to resume data transmission. It is important to note here that using receive windows based on the underlying TCP connection can only implement flow control on either end of the connection.
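      The receive-window mechanics can be sketched as a toy model. The buffer sizes here are arbitrary; in a real TCP stack the kernel negotiates and advertises these values:

```python
# Sketch: a receiver advertises how much buffer space it has left (its
# receive window); the sender must never exceed it. Draining the buffer
# (processing data) opens the window back up.

class Receiver:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffered = 0              # bytes received, not yet processed

    def receive_window(self):
        """Space advertised back to the sender (e.g., in ACK packets)."""
        return self.buffer_size - self.buffered

    def receive(self, nbytes):
        assert nbytes <= self.receive_window(), "sender overran the window"
        self.buffered += nbytes

    def process(self, nbytes):
        """Application drains the buffer, freeing window space."""
        self.buffered -= min(nbytes, self.buffered)

r = Receiver(buffer_size=1000)
r.receive(800)
print(r.receive_window())  # 200 -- the sender may send at most this much
r.process(500)
print(r.receive_window())  # 700 -- draining the buffer reopens the window
```

      A window of zero corresponds to the "send no more data" state described above, and HTTP/2’s WINDOW_UPDATE frames apply the same idea per stream rather than per connection.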

      Because HTTP/1.1 relies on the transport layer to avoid buffer overflow, each new TCP connection requires a separate flow control mechanism. HTTP/2, however, multiplexes streams within a single TCP connection, and will have to implement flow control in a different manner.


      HTTP/2 — Stream-Level Flow Control

      HTTP/2 multiplexes streams of data within a single TCP connection. As a result, receive windows on the level of the TCP connection are not sufficient to regulate the delivery of individual streams. HTTP/2 solves this problem by allowing the client and server to implement their own flow controls, rather than relying on the transport layer. The application layer communicates the available buffer space, allowing the client and server to set the receive window on the level of the multiplexed streams. This fine-scale flow control can be modified or maintained after the initial connection via a WINDOW_UPDATE frame.

      Since this method controls data flow on the level of the application layer, the flow control mechanism does not have to wait for a signal to reach its ultimate destination before adjusting the receive window. Intermediary nodes can use the flow control settings information to determine their own resource allocations and modify accordingly. In this way, each intermediary server can implement its own custom resource strategy, allowing for greater connection efficiency.

      This flexibility in flow control can be advantageous when creating appropriate resource strategies. For example, the client may fetch the first scan of an image, display it to the user, and allow the user to preview it while fetching more critical resources. Once the client fetches these critical resources, the browser will resume the retrieval of the remaining part of the image. Deferring the implementation of flow control to the client and server can thus improve the perceived performance of web applications.

      In terms of flow control and the stream prioritization mentioned in an earlier section, HTTP/2 provides a more detailed level of control that opens up the possibility of greater optimization. The next section will explain another method unique to the protocol that can enhance a connection in a similar way: predicting resource requests with server push.

      Predicting Resource Requests

      In a typical web application, the client will send a GET request and receive a page in HTML, usually the index page of the site. While examining the index page contents, the client may discover that it needs to fetch additional resources, such as CSS and JavaScript files, in order to fully render the page. The client determines that it needs these additional resources only after receiving the response from its initial GET request, and thus must make additional requests to fetch these resources and complete putting the page together. These additional requests ultimately increase the connection load time.

      There are solutions to this problem, however: since the server knows in advance that the client will require additional files, the server can save the client time by sending these resources to the client before it asks for them. HTTP/1.1 and HTTP/2 have different strategies of accomplishing this, each of which will be described in the next section.

      HTTP/1.1 — Resource Inlining

      In HTTP/1.1, if the developer knows in advance which additional resources the client machine will need to render the page, they can use a technique called resource inlining to include the required resource directly within the HTML document that the server sends in response to the initial GET request. For example, if a client needs a specific CSS file to render a page, inlining that CSS file will provide the client with the needed resource before it asks for it, reducing the total number of requests that the client must send.
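      For instance, a stylesheet that would normally require its own request can be embedded directly in the page. This is a simplified illustration; the file name and rules are invented:

```
<!-- Instead of requesting style.css as a separate resource: -->
<!-- <link rel="stylesheet" href="style.css"> -->

<!-- ...the CSS rules are inlined into the HTML document itself: -->
<style>
  body { font-family: sans-serif; color: #333; }
</style>
```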

      But there are a few problems with resource inlining. Including the resource in the HTML document is a viable solution for smaller, text-based resources, but larger files in non-text formats can greatly increase the size of the HTML document, which can ultimately decrease the connection speed and nullify the original advantage gained from using this technique. Also, since the inlined resources are no longer separate from the HTML document, there is no mechanism for the client to decline resources that it already has, or to place a resource in its cache. If multiple pages require the resource, each new HTML document will have the same resource inlined in its code, leading to larger HTML documents and longer load times than if the resource were simply cached in the beginning.

      A major drawback of resource inlining, then, is that the client cannot separate the resource and the document. A finer level of control is needed to optimize the connection, a need that HTTP/2 seeks to meet with server push.

      HTTP/2 — Server Push

      Since HTTP/2 enables multiple concurrent responses to a client’s initial GET request, a server can send a resource to a client along with the requested HTML page, providing the resource before the client asks for it. This process is called server push. In this way, an HTTP/2 connection can accomplish the same goal of resource inlining while maintaining the separation between the pushed resource and the document. This means that the client can decide to cache or decline the pushed resource separate from the main HTML document, fixing the major drawback of resource inlining.

      In HTTP/2, this process begins when the server sends a PUSH_PROMISE frame to inform the client that it is going to push a resource. This frame includes only the header of the message, and allows the client to know ahead of time which resource the server will push. If it already has the resource cached, the client can decline the push by sending a RST_STREAM frame in response. The PUSH_PROMISE frame also saves the client from sending a duplicate request to the server, since it knows which resources the server is going to push.

      It is important to note here that the emphasis of server push is client control. If a client needed to adjust the priority of server push, or even disable it, it could at any time send a SETTINGS frame to modify this HTTP/2 feature.

      Although this feature has a lot of potential, server push is not always the answer to optimizing your web application. For example, some web browsers cannot always cancel pushed requests, even if the client already has the resource cached. If the client mistakenly allows the server to send a duplicate resource, the server push can use up the connection unnecessarily. In the end, server push should be used at the discretion of the developer. For more on how to strategically use server push and optimize web applications, check out the PRPL pattern developed by Google. To learn more about the possible issues with server push, see Jake Archibald’s blog post HTTP/2 push is tougher than I thought.


      Compression

      A common method of optimizing web applications is to use compression algorithms to reduce the size of HTTP messages that travel between the client and the server. HTTP/1.1 and HTTP/2 both use this strategy, but there are implementation problems in the former that prohibit compressing the entire message. The following section will discuss why this is the case, and how HTTP/2 can provide a solution.


      Programs like gzip have long been used to compress the data sent in HTTP messages, especially to decrease the size of CSS and JavaScript files. The header component of a message, however, is always sent as plain text. Although each header is quite small, the burden of this uncompressed data weighs heavier and heavier on the connection as more requests are made, particularly penalizing complicated, API-heavy web applications that require many different resources and thus many different resource requests. Additionally, the use of cookies can sometimes make headers much larger, increasing the need for some kind of compression.

      In order to solve this bottleneck, HTTP/2 uses HPACK compression to shrink the size of headers, a topic discussed further in the next section.


      HTTP/2 — HPACK Compression

      One of the themes that has come up again and again in HTTP/2 is its ability to use the binary framing layer to exercise greater control over finer detail. The same is true when it comes to header compression. HTTP/2 can split headers from their data, resulting in a header frame and a data frame. The HTTP/2-specific compression algorithm HPACK can then compress this header frame. This algorithm can encode the header metadata using Huffman coding, thereby greatly decreasing its size. Additionally, HPACK can keep track of previously conveyed metadata fields and further compress them according to a dynamically altered index shared between the client and the server. For example, take the following two requests:

      Request #1

      method:     GET
      scheme:     https
      path:       /academy
      accept:     image/jpeg
      user-agent: Mozilla/5.0 ...

      Request #2

      method:     GET
      scheme:     https
      path:       /academy/images
      accept:     image/jpeg
      user-agent: Mozilla/5.0 ...

      The various fields in these requests, such as method, scheme, accept, and user-agent, have the same values; only the path field uses a different value. As a result, when sending Request #2, the client can use HPACK to send only the indexed values needed to reconstruct these common fields and newly encode the path field. The resulting header frames will be as follows:
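      The differential idea behind this can be sketched as follows. Real HPACK additionally Huffman-codes the literal strings and uses a structured static/dynamic table (RFC 7541); the sketch below keeps only the "send an index for unchanged fields, a literal for new ones" behavior, with illustrative field names:

```python
# Sketch: fields already known to both sides are replaced by a reference
# into a shared table; only changed fields travel in full.

def encode(headers, table):
    """Return (frame, updated_table); unchanged fields become indexes."""
    frame = []
    for name, value in headers.items():
        if table.get(name) == value:
            frame.append(("indexed", name))         # tiny reference
        else:
            frame.append(("literal", name, value))  # full field
            table[name] = value
    return frame, table

table = {}
req1 = {"method": "GET", "scheme": "https", "path": "/academy"}
req2 = {"method": "GET", "scheme": "https", "path": "/academy/images"}

frame1, table = encode(req1, table)  # first request: all fields literal
frame2, table = encode(req2, table)  # second request: only path in full
print(frame2)
```

      This mirrors the header frames shown below: the second frame carries only the path field in full, while the rest collapse to index references.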

      Header Frame for Request #1

      method:     GET
      scheme:     https
      path:       /academy
      accept:     image/jpeg
      user-agent: Mozilla/5.0 ...

      Header Frame for Request #2

      path:       /academy/images

      Using HPACK and other compression methods, HTTP/2 provides one more feature that can reduce client-server latency.


      As you can see from this point-by-point analysis, HTTP/2 differs from HTTP/1.1 in many ways, with some features providing greater levels of control that can be used to better optimize web application performance and other features simply improving upon the previous protocol. Now that you have gained a high-level perspective on the variations between the two protocols, you can consider how such factors as multiplexing, stream prioritization, flow control, server push, and compression in HTTP/2 will affect the changing landscape of web development.

      If you would like to see a performance comparison between HTTP/1.1 and HTTP/2, check out this Google demo that compares the protocols for different latencies. Note that when you run the test on your computer, page load times may vary depending on several factors such as bandwidth, client and server resources available at the time of testing, and so on. If you’d like to study the results of more exhaustive testing, take a look at the article HTTP/2 – A Real-World Performance Test and Analysis. Finally, if you would like to explore how to build a modern web application, you could follow our How To Build a Modern Web Application to Manage Customer Information with Django and React on Ubuntu 18.04 tutorial, or set up your own HTTP/2 server with our How To Set Up Nginx with HTTP/2 Support on Ubuntu 16.04 tutorial.


      What’s My Domain Worth? How to Value Your Domain Name

      Many of us nurture a dream of finding out that something we own is secretly immensely valuable. Perhaps that old vase in the attic is actually worth thousands of dollars, for example. However, it’s not just dusty antiques that could be unexpectedly profitable. Something as simple as a domain name could be worth a considerable amount.

      However, in order to find out if you’re sitting on a potential moneymaker, you’ll need to know what makes a domain valuable in the first place. Fortunately, this is not nearly as complex or time-consuming to figure out as you might expect. In practice, estimating the value of your domains simply requires you to do a little research.

      In this article, we’ll discuss the importance of domain valuation. We’ll also explore some of the most critical factors that determine a domain’s worth, and show you what you can do to estimate your own domain’s price. Let’s get started!

      An Introduction to Domain Valuation

      Whether you’re planning on selling a domain, or you just want to know how much one you own is worth, you’ll be looking to perform a process called domain valuation. This can also be referred to as domain appraisal, but the principle remains the same. Either way, this is a method for estimating the value of a specific domain name.

      Before moving on, let’s quickly recap some domain name basics. A domain name refers to the main part of a site’s URL, which visitors use to access the main page of that site. For example, our domain name is dreamhost.com.

      In turn, a domain name consists of two primary components:

      • Second-Level Domain (SLD): This refers to the main part of the URL, which most commonly contains the name of the website or the business that owns it. In our example, this is “dreamhost”.
      • Top-Level Domain (TLD): This is what comes at the end of the domain name, which in our case is .com. There are hundreds of TLDs available to use, but some of the most popular include .com, .net, .org, .gov, and .edu.
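As a quick illustration of the two components above, a naive split of a domain into its SLD and TLD might look like the following sketch. Note that this simple approach ignores subdomains and multi-part TLDs such as .co.uk; a real parser would consult the Public Suffix List.

```python
def split_domain(domain):
    """Naively split a domain into (second-level, top-level) parts.

    Ignores subdomains and multi-part TLDs such as .co.uk -- a real
    parser would consult the Public Suffix List instead.
    """
    sld, _, tld = domain.rpartition(".")
    return sld, tld

sld, tld = split_domain("dreamhost.com")  # ("dreamhost", "com")
```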

      To add a domain name to your site, you’ll need to register one with a vendor known as a Domain Name Registrar (DNR). You can also buy domain names from several hosting companies. In fact, DreamHost specializes in domains by providing an easy interface to search for, purchase, and manage your assets. This can be particularly useful if you want to get your website hosting and domains from the same provider and keep the administration of your entire site under one roof.

      Once you’ve purchased a domain name and linked it to your site, you’ll often give it little more attention. However, sometimes it’s worth finding out how valuable a domain you own is. The main reason to do this is when you’re considering selling a domain you’re not using, or one you no longer need. In these cases, it’s smart to find out how much your domain is worth ahead of time, so you know if you’re selling it at a fair price.

      What Makes a Domain Name Valuable

      Regardless of your reasons for wanting to find out, estimating the value of a domain doesn’t have to be too difficult. However, it will require you to understand what makes a domain desirable in the first place.

      Of course, it’s important to note that as with most evaluations of this nature, this is not an exact science. A domain will always be worth what people are willing to pay for it, and the bottom line is that sometimes the theory may not line up with the practical reality. By calculating the estimated worth, however, you’re providing yourself with a handy baseline — so you don’t end up selling a valuable domain for pennies.

      With that in mind, let’s look at some of the features generally considered important when it comes to domain names. These include:

      • The Top-Level Domain. A domain’s TLD can be a big part of what makes it desirable. For example, .com remains the most popular option (as it’s recognizable and common), so many buyers will gravitate towards it. However, newer alternatives can also become trendy (and valuable).
      • Popularity and traffic. If the domain name is currently used for a specific website, the level of traffic that site receives can become a vital factor in calculating the domain’s worth. The reason for this is pretty straightforward. If the domain comes with an existing audience attached, the buyer can leverage that traffic for their site right away. If the domain has been active for a while, this can also help its Search Engine Optimization (SEO) for the new owner, which may make it even more appealing.
      • Keywords. Including the right keywords in your domain name is another crucial aspect of SEO. According to a study by Higher Visibility, in most industries, a majority of top-ranking sites include high-quality keywords in their domains. As such, if your domain contains a desirable keyword, this could increase its value.
      • Brandability. While a domain’s brandability can be very difficult to define, it’s also an important consideration many site owners make when choosing a name. Many of the most visited websites in the world have clear, memorable, and unique domains. If your domain is similarly catchy and attention-grabbing, it may make buyers take special notice.
      • Spelling. This may seem obvious, but making sure your domain is spelled correctly can be critical. After all, few buyers will be swayed to use something that looks sloppy and unprofessional. At the same time, using unexpected spelling can sometimes be a benefit, as it could make the domain more brandable. Some well-known brands have taken technically incorrect spellings and used them to create memorable, lasting identities.
      • Length. A general rule of thumb is that the shorter a domain is, the more people will pay for it. This isn’t always the case – brevity alone isn’t going to make an otherwise cumbersome domain more appealing to potential buyers. However, a concise domain is often considered rarer and therefore more valuable. This is due to shorter domains being more memorable, easier to share, and more marketable.

      These points are all worth paying attention to as you research your domain’s potential value. However, remember that there are no guarantees, and that the value of a domain can shift dramatically over time.
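As a rough, purely hypothetical illustration, the factors above could be combined into a toy scoring heuristic. The weights, TLD values, and keyword list here are invented for the example; real appraisal models are far more sophisticated.

```python
# Toy domain-scoring heuristic based on the factors discussed above.
# Weights, TLD scores, and the keyword list are all invented for
# illustration -- this is not a real appraisal model.

POPULAR_TLDS = {"com": 3, "net": 2, "org": 2}
SAMPLE_KEYWORDS = {"hotel", "shop", "crypto"}  # hypothetical keyword list

def toy_domain_score(domain):
    sld, _, tld = domain.rpartition(".")
    score = 0
    score += POPULAR_TLDS.get(tld, 1)            # TLD desirability
    score += 3 if sld in SAMPLE_KEYWORDS else 0  # exact-match keyword
    score += 2 if len(sld) <= 6 else 0           # brevity bonus
    return score

high = toy_domain_score("hotel.com")                  # short keyword on .com
low = toy_domain_score("myreallylongdomainname.biz")  # long, obscure TLD
```

Even a crude heuristic like this reflects the general intuition: a short, keyword-rich .com outscores a long name on an unpopular TLD.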

      How to Determine the Value of Your Domain Name (In 3 Steps)

      Theory is all well and good, but to find out how much your domain might be worth, you’ll need to get practical. To do this, we’re going to show you how to perform a domain valuation in three simple steps.

      Bear in mind that these steps don’t need to be performed all at the same time or in this specific order. However, what follows is a recommended process that can help you get a clear and comprehensive understanding of your domain’s worth. Let’s get started!

      Step 1: Research What Similar Domains Are Sold For

      To figure out what people may be willing to pay for your domain name, you need to know what people are charging for similar domains. We mentioned earlier that the value of a given domain can shift dramatically over time, so you probably can’t use what you initially paid for the domain as a baseline.

      At this stage, you will need to do some digging to see what domain names are currently being sold for and compare them to the ones you own. Fortunately, several sites collect information about domain sales. One of these is DN Journal, which can tell you about the past three weeks’ highest reported sales.

      The DN Journal website.

      Another site that does something similar is Domain Name Wire. On its blog, you can find regular roundups of prominent, recent domain sales.

      The Domain Name Wire website.

      If your domain is concise, you may find ShortNames especially useful. This site also collects recent sales, but specializes in (you guessed it) short domain names.

      The ShortNames website.

      These sites are all handy resources to help you track what names are selling for the highest amounts. For example, if we take a look at the most recent list provided by DN Journal (at the time of this writing), we can spot several domains that match the criteria we outlined earlier.

      The top 10 recent domain sales from DN Journal.

      Featured prominently in this list are short, memorable, clear, and keyword-heavy domains. If one of your domains is very similar, you might be sitting on a potential goldmine. These roundups can also help you catch recent trends in domain names.

      However, if you want to find information about sales for domains that are more directly comparable to yours, you will need to dig a little deeper. Let’s look at how to do that next.

      Step 2: Use an Appraisal Service

      A domain appraisal service is exactly what it sounds like. It’s a website that enables you to find information about your domain, helping you estimate its value and compare it to similar names.

      This kind of service will do a lot of the hard work for you. It automatically compares your domain against similar names and collects information about what those other domains sold for. It will also measure your domain’s worth, based on many of the factors we outlined previously. If you’re serious about putting an estimated price on your domain, this is one of the easiest methods for getting an educated answer.

      While there are many appraisal sites you can use, we’re going to look specifically at one of the most well-known. Say hello to EstiBot.

      The EstiBot website.

      EstiBot is the most widely used domain appraisal tool, with over two million appraisals performed on a daily basis. To try it for yourself, you can start by entering the domain you want to check in the field on the homepage and clicking Appraise.

      At that point, you should be presented with a report on the specified domain.

      A domain appraisal in EstiBot.

      At the top of the page, you’ll see the estimated value and some basic information about the domain. However, if you scroll down, you’ll find more details regarding how EstiBot arrived at this value. For instance, you can see the total amounts that similar domains sold for.

      Comparable domain sales in EstiBot.

      You can also view a huge amount of statistics and analytics for the domain, giving you an even clearer picture of its ultimate worth.

      Domain analytics and statistics.

      All of this information should give you a pretty good picture of what you could potentially ask for your domain. You may also want to use additional appraisal tools to get an even better average, by comparing their different evaluations. For this, you could use DomainIndex and NamePros, to name a few options.

      This should provide you with a substantial theoretical value for your domain. However, as we’ve already mentioned, a domain’s real value is what someone is actually willing to pay for it. As such, if you want a definite answer, you may need to go straight to your potential buyers.

      Step 3: Find Out What People Are Willing to Pay for Your Domain Name

      Taking the time to do research and use estimation tools will undoubtedly be useful. This gives you an idea of the potential price you may be able to charge for your domain. However, it’s often best to go straight to potential buyers for a definitive answer.

      For instance, a domain could theoretically tick many of the boxes we outlined earlier, but still not be one that people are keen to purchase. It could be that the domain simply isn’t relevant to anyone, is too similar to an existing prominent site, or it has an unfortunate connotation that makes it less desirable.

      On the flip side, the opposite could also be true. A domain that shouldn’t be particularly valuable might be just what a particular website owner is looking for. This can make it much more lucrative than anticipated.

      The best way to find these things out is by putting the domain name up for sale. That might sound drastic, especially if you don’t actually want to lose the domain just yet, but it doesn’t have to be a risk. In fact, many sites that enable you to buy and sell domains will also let you set a reserve price for them. If you’ve ever sold anything on eBay before, this should be a familiar concept.

      Essentially, a reserve price is a figure you specify that serves as the minimum price you will accept. If the reserve is not matched or exceeded by any offers, the auction ends without a transaction actually taking place. This lets you put up a domain for auction and see what people are willing to pay, without the risk of selling the domain for less than you’d be satisfied with.
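The reserve mechanic can be expressed as a short sketch. This is a generic illustration of how reserve auctions resolve, not any particular marketplace's actual rules.

```python
def auction_outcome(bids, reserve):
    """Return the sale price if the highest bid meets the reserve,
    otherwise None (no transaction takes place).

    Generic illustration of a reserve auction -- actual marketplace
    rules vary.
    """
    if not bids:
        return None
    best = max(bids)
    return best if best >= reserve else None

auction_outcome([120, 250, 410], reserve=300)  # sells for 410
auction_outcome([120, 250], reserve=300)       # reserve not met -> no sale
```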

      Let’s try this out for ourselves to illustrate the process. The site we’ll use is Flippa, which is an online marketplace for domains.

      The Flippa homepage.

      To get started, you’ll need to sign up for an account. Bear in mind that while accounts are free, actually selling a domain costs at least $9 per listing. As such, you should only do this if you’re willing to spend a little money on valuing your domain. Other sites vary, but most charge similar token fees.

      If you decide that the cost of entry is worth it, click Sign Up on Flippa’s homepage and enter your personal details.

      Signing up for a Flippa account.

      You’ll then receive a confirmation email to activate your account. Click on the attached link and you’ll be taken back to Flippa, where you’ll be asked if you’re signing up as a buyer or a seller.

      Choosing to be either a buyer or seller.

      Naturally, you’ll want to select the latter option for now. When you do so, you’ll be asked some further questions, so add in your details as appropriate.

      Activating a Flippa account.

      Once you’ve done that, you’re ready to start selling. Select the Domains option, and enter the domain you want to value in the text field that’s provided.

      Start selling on Flippa.

      The next step entails providing information about the domain and how you want to sell it. You should select the Auction method, as this will let other users set whatever price they’re willing to pay.

      Creating a domain auction.

      You also need to specify a duration for your auction. Fourteen days should generally be enough to give you a good estimate of your domain name’s value. Then, towards the bottom of the page, you can also set the domain’s starting price and your Reserve price.

      Setting the start and reserve price.

      If you don’t want to sell the domain under any circumstances, you should set this value very high. In addition, make sure that the Show reserve price option is not checked, as users will otherwise be able to see it. Bear in mind that the highest you can set a reserve price on Flippa is $10,000.

      The next few steps in this process will ask how much you want to pay for the listing, which determines its visibility. Stick with the cheapest option for now, and provide your payment details.

      Paying to create a domain listing.

      Once you’ve provided all your details, you can verify your listing. When this is done, it will appear on the site, where other users will be able to bid on it.

       An example of a Flippa domain auction.

      You can now sit back and watch as the bids roll in, giving you an accurate estimate of your domain’s value to real users. If you’re lucky, somebody will take a fancy to your domain, and may even bid above the reserve right away. Either way, at the end of this process you should have a much clearer idea of how much your domain name is worth.

      Claim That Domain

      Whether you’re planning on selling your domain name or you simply want to know what it’s worth, performing domain valuation will help you find the answer. By understanding what factors are essential in determining a domain’s value and conducting a little research, you can quickly learn how badly other people might want to own it.

      Do you have any questions about estimating the value of a domain name? Join the DreamHost Community and start the conversation!
