

      INAP Executive Spotlight: Mary Jane Horne, SVP, Global Network Services

      In the INAP Executive Spotlight series, we interview senior leaders across the organization, hearing candid reflections about their careers, the mentors who shaped them and big lessons learned along the way.

      Next in the series is Mary Jane Horne, SVP of Global Network Services. With over 25 years of network and operations experience, Horne currently oversees INAP’s network engineering, carrier management, and global support teams, and is responsible for these activities across INAP’s worldwide footprint.

      Horne shares the lessons she’s learned throughout her career, working in the technology, media and telecommunications industries in the U.S. and abroad. Read on to learn what she loves about her role in tech, and the advice that she has for those looking to progress along their career path.

      The interview has been lightly edited for clarity and length.

      How did you get started in network engineering? What inspired you to pursue it?

      Growing up, my dad was an engineer. I started out in college as a computer science major, but switched after my first year to engineering. I spent five years at Northeastern University in Boston studying electrical and computer engineering, and I worked for the federal government while in school.

      After graduation, I went to work for the phone company, and my first job was as a central office design engineer. I was given some of the best advice of my career by my first manager, which was to move around as much as I could at the “doer” level, to figure out how the company worked. I had 10 jobs in the 13 and a half years I worked there, with a variety of roles in field engineering, technical sales support, customer service and corporate development. I learned how interdependent everyone was, and how best to improve important processes.

      After deciding to change companies to a small fiber start-up, I realized the most important part of any company is its foundation. In the roles I held there, we created the strategy for the company, built out the network, thought out of the box for customer solutions and drove sales from $100K in year one to $64.5M in year five. This is where I truly embraced the role the network plays in driving the success of the company.

      Can you tell us more about your work with the global network services team? What are some challenges with that part of the business?

      Our global network strategy started by going from metro to metro and grooming the network components (both fiber and lit services), which eliminated a lot of unnecessary costs in running the network. We also lit an express 100-gig ring between three key data center locations (Dallas/NY/San Jose) to carry more of our own traffic on-net. Since the completion of these first two initiatives, we have been upgrading a majority of the US and trans-Atlantic backbones to 100-gig as well, to provide much-needed additional capacity. We’re deploying new state-of-the-art technology from Ciena on the fiber and bandwidth we are purchasing, allowing us to provide scalability and redundancy while giving us the opportunity to develop new products in the future. When all is said and done with these three initiatives, the network operating expenses will be flat with what they were before, but our capacity will be three times what it was in the old network.

      We also have the software side of the network. We have CDN, Performance IP®, Managed DNS, as well as other in-house tools supported by the team. They are continuously evaluating where we need to take these products in order to stay competitive, which may include partnering and white labeling. How do we get these products launched across this network that we are deploying and upgrading? Global network services is not just a foundation, but it’s also the product and services that ride across the network. We have infrastructure evolution, as well as product evolution, and that’s where I focus with the team.

      What do you love about your role in tech?

      Learning new things and trying new things is part of who I am. Because tech is ever changing, it’s always been very exciting for me. I think as tech has evolved, some people have fallen off the bandwagon since they don’t keep up with the latest and greatest trends.

      In tech, you must be a person who looks to the future. I look at what’s coming up, not just how I need to design a network for today, and what the customers need today, but what I need three years from now. What should I consider now to prepare for any changes that might come down the road? That’s one of the things that I’ve always been attracted to in the tech industry: looking far enough ahead to say, “I need to do this, but I don’t want to be shortsighted and do it the cheap way just to get done with today. I want to look at how to do it the best way, so we are ready for the future, and we can then move forward faster.” Tech gives me exciting opportunities to do that.

      Of the qualities you possess, which do you think has been the greatest influence on your success?

      The ability to try anything and rise to challenges, even when I have no idea what I’m doing. I credit my boss, Pete Aquino [INAP CEO], for challenging me over the course of our working relationship. He would say, “I have a need for X.” And I’d say, “I’ve never done that before.” He’d respond, “That’s fine. I know you’ll figure it out.”

      I have learned so much because I did things that I never would have done anywhere else in my career, because somebody trusted me to figure it out. The only thing you need to say to me is that it’s impossible, or that everyone else who tried couldn’t do it, because then I’m sure I’m going to get it done. I love a challenge. I think that’s driven me through my career.

      Who are some of the people that have mentored you in your career?

      Some of the best advice I’ve ever been given came from another female leader in the industry. When I wanted to make the jump from being a manager to the next level, my boss at the time was a female director, and that was considered quite the accomplishment (back then) at a phone company. I said to her, “I’m ready, I’m looking to move up. I’m really excited.” She gave me the second best piece of advice I’ve ever received: just because you are really good at what you do today does not mean you are ready for the next level. She pointed out that, in order to be considered for the next level, you need to continuously demonstrate leadership qualities and focus on how you embrace and lead change.

      That was an eye-opening, great piece of advice. That’s when I made some drastic changes and left the big, stable environment to go to a risky startup, where you have to lead every day to be successful.

      If you had to pick a piece of advice that you’d give to someone pursuing IT or network engineering as a career path, what would that be?

      I just approved some training for people who want to learn more. Don’t be afraid to ask for that. Always stay current, always stay hungry, always learn as much as you can, and learn across platforms. It’ll make you more valuable.

      Also, tell your boss what you need and what you’re interested in. You must have open communication with your manager. We are not mind readers, so talk about what your plans might be, or ask for help in developing them. We are the ones who have to drive our own careers.

      Are there any other big lessons you’ve learned in your career that you want to share?

      I learned to take a step back and think about things in the big picture, instead of just what I’m doing today. What I decide to do today could affect what other people will be doing well into the future, especially in technology. Ask yourself, am I really making the right choice, or do I need to evaluate other options?

      I also believe we should cross-train people. At a minimum, I think we should have people sit in somebody else’s job for a week or two, and swap chairs. It gives employees appreciation for other roles and responsibilities that they may not truly understand or have misjudged. It also may help folks develop a path to pursue other roles in the future.

      I was lucky enough in my career to be able to move from department to department, so I could get a better view of how a company worked. You can’t always do that in smaller companies, but I think those are valuable lessons to learn. We should spend more time educating one another on how things work at INAP.

      Laura Vietmeyer



      How To Build a Neural Network to Recognize Handwritten Digits with TensorFlow


      Neural networks are used as a method of deep learning, one of the many subfields of artificial intelligence. They were first proposed around 70 years ago as an attempt at simulating the way the human brain works, though in a much more simplified form. Individual ‘neurons’ are connected in layers, with weights assigned to determine how the neuron responds when signals are propagated through the network. Previously, neural networks were limited in the number of neurons they were able to simulate, and therefore the complexity of learning they could achieve. But in recent years, due to advancements in hardware development, we have been able to build very deep networks, and train them on enormous datasets to achieve breakthroughs in machine intelligence.

      These breakthroughs have allowed machines to match and exceed the capabilities of humans at performing certain tasks. One such task is object recognition. Though machines have historically been unable to match human vision, recent advances in deep learning have made it possible to build neural networks which can recognize objects, faces, text, and even emotions.

      In this tutorial, you will implement a small subsection of object recognition—digit recognition. Using TensorFlow, an open-source Python library developed by the Google Brain labs for deep learning research, you will take hand-drawn images of the numbers 0-9 and build and train a neural network to recognize and predict the correct label for the digit displayed.

      While you won’t need prior experience in practical deep learning or TensorFlow to follow along with this tutorial, we’ll assume some familiarity with machine learning terms and concepts such as training and testing, features and labels, optimization, and evaluation. You can learn more about these concepts in An Introduction to Machine Learning.


      To complete this tutorial, you’ll need:

      Step 1 — Configuring the Project

      Before you can develop the recognition program, you’ll need to install a few dependencies and create a workspace to hold your files.

      We’ll use a Python 3 virtual environment to manage our project’s dependencies. Create a new directory for your project and navigate to the new directory:

      • mkdir tensorflow-demo
      • cd tensorflow-demo

      Execute the following commands to set up the virtual environment for this tutorial:

      • python3 -m venv tensorflow-demo
      • source tensorflow-demo/bin/activate

      Next, install the libraries you’ll use in this tutorial. We’ll pin specific versions of these libraries by creating a requirements.txt file in the project directory, which specifies each requirement and the version we need. Create the requirements.txt file:

      • touch requirements.txt

      Open the file in your text editor and add the following lines to specify the Image, NumPy, and TensorFlow libraries and their versions:



      Save the file and exit the editor. Then install these libraries with the following command:

      • pip install -r requirements.txt

      With the dependencies installed, we can start working on our project.

      Step 2 — Importing the MNIST Dataset

      The dataset we will be using in this tutorial is called the MNIST dataset, and it is a classic in the machine learning community. This dataset is made up of images of handwritten digits, 28x28 pixels in size. Here are some examples of the digits included in the dataset:

      Examples of MNIST images

      Let's create a Python program to work with this dataset. We will use one file for all of our work in this tutorial. Create a new file called

      Now open this file in your text editor of choice and add this line of code to the file to import the TensorFlow library:

      import tensorflow as tf

      Add the following lines of code to your file to import the MNIST dataset and store the image data in the variable mnist:

      from tensorflow.examples.tutorials.mnist import input_data
      mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # y labels are one-hot encoded

      When reading in the data, we are using one-hot-encoding to represent the labels (the actual digit drawn, e.g. "3") of the images. One-hot-encoding uses a vector of binary values to represent numeric or categorical values. As our labels are for the digits 0-9, the vector contains ten values, one for each possible digit. One of these values is set to 1, to represent the digit at that index of the vector, and the rest are set to 0. For example, the digit 3 is represented using the vector [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]. As the value at index 3 is stored as 1, the vector therefore represents the digit 3.
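As a plain-Python sketch of the encoding described above (TensorFlow's loader does this for us when one_hot=True is passed):

```python
def one_hot(digit, num_classes=10):
    """Return a one-hot vector: 1 at the digit's index, 0 everywhere else."""
    vec = [0] * num_classes
    vec[digit] = 1
    return vec

print(one_hot(3))  # [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
```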

      To represent the actual images themselves, the 28x28 pixels are flattened into a 1D vector which is 784 pixels in size. Each of the 784 pixels making up the image is stored as a value between 0 and 255. This determines the grayscale of the pixel, as our images are presented in black and white only. So a black pixel is represented by 255, and a white pixel by 0, with the various shades of gray somewhere in between.
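The flattening step can be sketched in plain Python; here a tiny 2x2 "image" stands in for a real 28x28 one, which would flatten to 784 values:

```python
def flatten(image):
    """Flatten a 2D list of pixel rows into a single 1D list of pixels."""
    return [pixel for row in image for pixel in row]

tiny = [[0, 255],
        [128, 64]]
print(flatten(tiny))  # [0, 255, 128, 64]
```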

      We can use the mnist variable to find out the size of the dataset we have just imported. Looking at the num_examples for each of the three subsets, we can determine that the dataset has been split into 55,000 images for training, 5000 for validation, and 10,000 for testing. Add the following lines to your file:

      n_train = mnist.train.num_examples # 55,000
      n_validation = mnist.validation.num_examples # 5000
      n_test = mnist.test.num_examples # 10,000

      Now that we have our data imported, it’s time to think about the neural network.

      Step 3 — Defining the Neural Network Architecture

      The architecture of the neural network refers to elements such as the number of layers in the network, the number of units in each layer, and how the units are connected between layers. As neural networks are loosely inspired by the workings of the human brain, here the term unit is used to represent what we would biologically think of as a neuron. Like neurons passing signals around the brain, units take some values from previous units as input, perform a computation, and then pass on the new value as output to other units. These units are layered to form the network, starting at a minimum with one layer for inputting values, and one layer to output values. The term hidden layer is used for all of the layers in between the input and output layers, i.e. those "hidden" from the real world.
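As a simplified sketch of the computation one unit performs, it combines its incoming values with its weights and adds a bias (the weights and inputs below are hypothetical, and any activation function a network might apply is omitted):

```python
def unit_output(inputs, weights, bias):
    """One unit: weighted sum of incoming values, plus a bias term."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# 1.0*0.2 + 0.5*(-0.4) + 0.1 = 0.1
print(unit_output([1.0, 0.5], [0.2, -0.4], 0.1))
```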

      Different architectures can yield drastically different results, as the performance can be thought of as a function of the architecture among other things, such as the parameters, the data, and the duration of training.

      Add the following lines of code to your file to store the number of units per layer in global variables. This allows us to alter the network architecture in one place, and at the end of the tutorial you can test for yourself how different numbers of layers and units will impact the results of our model:

      n_input = 784   # input layer (28x28 pixels)
      n_hidden1 = 512 # 1st hidden layer
      n_hidden2 = 256 # 2nd hidden layer
      n_hidden3 = 128 # 3rd hidden layer
      n_output = 10   # output layer (0-9 digits)

      The following diagram shows a visualization of the architecture we've designed, with each layer fully connected to the surrounding layers:

      Diagram of a neural network

      The term "deep neural network" relates to the number of hidden layers, with "shallow" usually meaning just one hidden layer, and "deep" referring to multiple hidden layers. Given enough training data, a shallow neural network with a sufficient number of units should theoretically be able to represent any function that a deep neural network can. But it is often more computationally efficient to use a smaller deep neural network to achieve the same task that would require a shallow network with exponentially more hidden units. Shallow neural networks also often encounter overfitting, where the network essentially memorizes the training data that it has seen, and is not able to generalize the knowledge to new data. This is why deep neural networks are more commonly used: the multiple layers between the raw input data and the output label allow the network to learn features at various levels of abstraction, making the network itself better able to generalize.

      Other elements of the neural network that need to be defined here are the hyperparameters. Unlike the parameters that will get updated during training, these values are set initially and remain constant throughout the process. In your file, set the following variables and values:

      learning_rate = 1e-4
      n_iterations = 1000
      batch_size = 128
      dropout = 0.5

      The learning rate represents how much the parameters will adjust at each step of the learning process. These adjustments are a key component of training: after each pass through the network we tune the weights slightly to try and reduce the loss. Larger learning rates can converge faster, but also have the potential to overshoot the optimal values as they are updated. The number of iterations refers to how many times we go through the training step, and the batch size refers to how many training examples we are using at each step. The dropout variable represents a threshold at which we eliminate some units at random. We will be using dropout in our final hidden layer to give each unit a 50% chance of being eliminated at every training step. This helps prevent overfitting.
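To illustrate what the dropout threshold means, here is a simplified plain-Python sketch. Note that TensorFlow's dropout also rescales the surviving values by 1/keep_prob, which is omitted here for brevity:

```python
import random

def apply_dropout(values, keep_prob, rng):
    """Zero out each value with probability (1 - keep_prob)."""
    return [v if rng.random() < keep_prob else 0.0 for v in values]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
print(apply_dropout([1.0, 1.0, 1.0, 1.0], 0.5, rng))
```

With keep_prob of 1.0, as used during testing, every unit survives; with 0.5, roughly half are zeroed at each step.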

      We have now defined the architecture of our neural network, and the hyperparameters that impact the learning process. The next step is to build the network as a TensorFlow graph.

      Step 4 — Building the TensorFlow Graph

      To build our network, we will set up the network as a computational graph for TensorFlow to execute. The core concept of TensorFlow is the tensor, a data structure similar to an array or list. Tensors are initialized, manipulated as they are passed through the graph, and updated through the learning process.

      We’ll start by defining three tensors as placeholders, which are tensors that we'll feed values into later. Add the following to your file:

      X = tf.placeholder("float", [None, n_input])
      Y = tf.placeholder("float", [None, n_output])
      keep_prob = tf.placeholder(tf.float32) 

      The only parameter that needs to be specified at its declaration is the size of the data we will be feeding in. For X we use a shape of [None, 784], where None represents any amount, as we will be feeding in an undefined number of 784-pixel images. The shape of Y is [None, 10] as we will be using it for an undefined number of label outputs, with 10 possible classes. The keep_prob tensor is used to control the dropout rate, and we initialize it as a placeholder rather than an immutable variable because we want to use the same tensor both for training (when dropout is set to 0.5) and testing (when dropout is set to 1.0).

      The parameters that the network will update in the training process are the weight and bias values, so for these we need to set an initial value rather than an empty placeholder. These values are essentially where the network does its learning, as they are used in the activation functions of the neurons, representing the strength of the connections between units.

      Since the values are optimized during training, we could set them to zero for now. But the initial value actually has a significant impact on the final accuracy of the model. We'll use random values from a truncated normal distribution for the weights. We want them to be close to zero, so they can adjust in either a positive or negative direction, and slightly different, so they generate different errors. This will ensure that the model learns something useful. Add these lines:

      weights = {
          'w1': tf.Variable(tf.truncated_normal([n_input, n_hidden1], stddev=0.1)),
          'w2': tf.Variable(tf.truncated_normal([n_hidden1, n_hidden2], stddev=0.1)),
          'w3': tf.Variable(tf.truncated_normal([n_hidden2, n_hidden3], stddev=0.1)),
          'out': tf.Variable(tf.truncated_normal([n_hidden3, n_output], stddev=0.1)),
      }

      For the bias, we use a small constant value to ensure that the tensors activate in the intial stages and therefore contribute to the propagation. The weights and bias tensors are stored in dictionary objects for ease of access. Add this code to your file to define the biases:

      biases = {
          'b1': tf.Variable(tf.constant(0.1, shape=[n_hidden1])),
          'b2': tf.Variable(tf.constant(0.1, shape=[n_hidden2])),
          'b3': tf.Variable(tf.constant(0.1, shape=[n_hidden3])),
          'out': tf.Variable(tf.constant(0.1, shape=[n_output]))
      }

      Next, set up the layers of the network by defining the operations that will manipulate the tensors. Add these lines to your file:

      layer_1 = tf.add(tf.matmul(X, weights['w1']), biases['b1'])
      layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2'])
      layer_3 = tf.add(tf.matmul(layer_2, weights['w3']), biases['b3'])
      layer_drop = tf.nn.dropout(layer_3, keep_prob)
      output_layer = tf.matmul(layer_drop, weights['out']) + biases['out']

      Each hidden layer will execute matrix multiplication on the previous layer’s outputs and the current layer’s weights, and add the bias to these values. At the last hidden layer, we will apply a dropout operation using our keep_prob value of 0.5.

      The final step in building the graph is to define the loss function that we want to optimize. A popular choice of loss function in TensorFlow programs is cross-entropy, also known as log-loss, which quantifies the difference between two probability distributions (the predictions and the labels). A perfect classification would result in a cross-entropy of 0, with the loss completely minimized.
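As a plain-Python sketch of the idea (TensorFlow computes this for us in the graph), cross-entropy against a one-hot label reduces to the negative log of the probability the model assigned to the correct class:

```python
import math

def cross_entropy(predicted_probs, one_hot_label):
    """Cross-entropy between a predicted distribution and a one-hot label."""
    return -sum(y * math.log(p)
                for y, p in zip(one_hot_label, predicted_probs) if y)

# A confident, correct prediction gives a loss near 0; an uncertain one is penalized.
print(cross_entropy([0.05, 0.9, 0.05], [0, 1, 0]))  # -ln(0.9), about 0.105
print(cross_entropy([0.4, 0.3, 0.3], [0, 1, 0]))    # -ln(0.3), about 1.204
```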

      We also need to choose the optimization algorithm which will be used to minimize the loss function. A process named gradient descent optimization is a common method for finding the (local) minimum of a function by taking iterative steps along the gradient in a negative (descending) direction. There are several choices of gradient descent optimization algorithms already implemented in TensorFlow, and in this tutorial we will be using the Adam optimizer. This extends upon gradient descent optimization by using momentum to speed up the process through computing an exponentially weighted average of the gradients and using that in the adjustments. Add the following code to your file:

      cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=output_layer))
      train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
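To see why stepping against the gradient minimizes a function, here is a minimal plain-Python sketch of vanilla gradient descent on a one-variable function; Adam layers momentum and adaptive step sizes on top of this basic idea:

```python
def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    """Repeatedly step against the gradient to approach a local minimum."""
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # converges toward 3.0
```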

      We've now defined the network and built it out with TensorFlow. The next step is to feed data through the graph to train it, and then test that it has actually learnt something.

      Step 5 — Training and Testing

      The training process involves feeding the training dataset through the graph and optimizing the loss function. Every time the network iterates through a batch of more training images, it updates the parameters to reduce the loss in order to more accurately predict the digits shown. The testing process involves running our testing dataset through the trained graph, and keeping track of the number of images that are correctly predicted, so that we can calculate the accuracy.

      Before starting the training process, we will define our method of evaluating the accuracy so we can print it out on mini-batches of data while we train. These printed statements will allow us to check that, from the first iteration to the last, loss decreases and accuracy increases; they will also allow us to track whether or not we have run enough iterations to reach a consistent and optimal result:

      correct_pred = tf.equal(tf.argmax(output_layer, 1), tf.argmax(Y, 1))
      accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

      In correct_pred, we use the argmax function to compare which images are being predicted correctly by looking at the output_layer (predictions) and Y (labels), and we use the equal function to return this as a list of Booleans. We can then cast this list to floats and calculate the mean to get a total accuracy score.
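The same argmax-compare-average logic can be sketched in plain Python to make the computation concrete:

```python
def argmax(values):
    """Index of the largest value in a list."""
    return max(range(len(values)), key=values.__getitem__)

def accuracy(predictions, labels):
    """Fraction of rows whose predicted class matches the labeled class."""
    correct = [argmax(p) == argmax(y) for p, y in zip(predictions, labels)]
    return sum(correct) / len(correct)

preds  = [[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]]
labels = [[0, 1, 0],       [0, 0, 1]]
print(accuracy(preds, labels))  # 0.5: the first row is correct, the second is not
```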

      We are now ready to initialize a session for running the graph. In this session we will feed the network with our training examples, and once trained, we feed the same graph with new test examples to determine the accuracy of the model. Add the following lines of code to your file:

      init = tf.global_variables_initializer()
      sess = tf.Session()
      sess.run(init)

      The essence of the training process in deep learning is to optimize the loss function. Here we are aiming to minimize the difference between the predicted labels of the images, and the true labels of the images. The process involves four steps which are repeated for a set number of iterations:

      • Propagate values forward through the network
      • Compute the loss
      • Propagate values backward through the network
      • Update the parameters

      At each training step, the parameters are adjusted slightly to try and reduce the loss for the next step. As the learning progresses, we should see a reduction in loss, and eventually we can stop training and use the network as a model for testing our new data.

      Add this code to the file:

      # train on mini batches
      for i in range(n_iterations):
          batch_x, batch_y = mnist.train.next_batch(batch_size)
          sess.run(train_step, feed_dict={X: batch_x, Y: batch_y, keep_prob: dropout})

          # print loss and accuracy (per minibatch)
          if i % 100 == 0:
              minibatch_loss, minibatch_accuracy = sess.run([cross_entropy, accuracy], feed_dict={X: batch_x, Y: batch_y, keep_prob: 1.0})
              print("Iteration", str(i), "\t| Loss =", str(minibatch_loss), "\t| Accuracy =", str(minibatch_accuracy))

      After every 100 iterations of the training step, in which we feed a mini-batch of images through the network, we print out the loss and accuracy of that batch. Note that we should not expect a steadily decreasing loss and increasing accuracy here, as the values are per batch, not for the entire model. We use mini-batches of images rather than feeding them through individually to speed up the training process and allow the network to see a number of different examples before updating the parameters.

      Once the training is complete, we can run the session on the test images. This time we are using a keep_prob dropout rate of 1.0 to ensure all units are active in the testing process.

      Add this code to the file:

      test_accuracy = sess.run(accuracy, feed_dict={X: mnist.test.images, Y: mnist.test.labels, keep_prob: 1.0})
      print("\nAccuracy on test set:", test_accuracy)

      It’s now time to run our program and see how accurately our neural network can recognize these handwritten digits. Save the file and execute the following command in the terminal to run the script:

      You'll see an output similar to the following, although individual loss and accuracy results may vary slightly:


      Iteration 0 | Loss = 3.67079 | Accuracy = 0.140625
      Iteration 100 | Loss = 0.492122 | Accuracy = 0.84375
      Iteration 200 | Loss = 0.421595 | Accuracy = 0.882812
      Iteration 300 | Loss = 0.307726 | Accuracy = 0.921875
      Iteration 400 | Loss = 0.392948 | Accuracy = 0.882812
      Iteration 500 | Loss = 0.371461 | Accuracy = 0.90625
      Iteration 600 | Loss = 0.378425 | Accuracy = 0.882812
      Iteration 700 | Loss = 0.338605 | Accuracy = 0.914062
      Iteration 800 | Loss = 0.379697 | Accuracy = 0.875
      Iteration 900 | Loss = 0.444303 | Accuracy = 0.90625

      Accuracy on test set: 0.9206

      To try and improve the accuracy of our model, or to learn more about the impact of tuning hyperparameters, we can test the effect of changing the learning rate, the dropout threshold, the batch size, and the number of iterations. We can also change the number of units in our hidden layers, and change the amount of hidden layers themselves, to see how different architectures increase or decrease the model accuracy.

      To demonstrate that the network is actually recognizing the hand-drawn images, let's test it on a single image of our own.

      First either download this sample test image or open up a graphics editor and create your own 28x28 pixel image of a digit.

      Open the file in your editor and add the following lines of code to the top of the file to import two libraries necessary for image manipulation.

      import numpy as np
      from PIL import Image

      Then at the end of the file, add the following line of code to load the test image of the handwritten digit:

      img = np.invert(Image.open("test_img.png").convert('L')).ravel()

      The open function of the Image library loads the test image as a 4D array containing the three RGB color channels and the Alpha transparency. This is not the same representation we used previously when reading in the dataset with TensorFlow, so we'll need to do some extra work to match the format.

      First, we use the convert function with the L parameter to reduce the 4D RGBA representation to one grayscale color channel. We store this as a numpy array and invert it using np.invert, because the current matrix represents black as 0 and white as 255, whereas we need the opposite. Finally, we call ravel to flatten the array.
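The inversion and flattening can be sketched in plain Python; NumPy's np.invert and ravel perform the same element-wise 255 - x and flattening for us:

```python
def prepare_pixels(grayscale_rows):
    """Invert 0-255 grayscale values and flatten, mirroring np.invert + ravel."""
    return [255 - pixel for row in grayscale_rows for pixel in row]

# A white background (255) becomes 0, and black ink (0) becomes 255.
print(prepare_pixels([[255, 0], [255, 128]]))  # [0, 255, 0, 127]
```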

      Now that the image data is structured correctly, we can run a session in the same way as previously, but this time only feeding in the single image for testing. Add the following code to your file to test the image and print the outputted label.

      prediction = sess.run(tf.argmax(output_layer, 1), feed_dict={X: [img]})
      print("Prediction for test image:", np.squeeze(prediction))

      The np.squeeze function is called on the prediction to return the single integer from the array (i.e. to go from [2] to 2). The resulting output demonstrates that the network has recognized this image as the digit 2.


      Prediction for test image: 2

      You can try testing the network with more complex images (digits that look like other digits, for example, or digits that have been drawn poorly or incorrectly) to see how well it fares.


      In this tutorial you successfully trained a neural network to classify the MNIST dataset with around 92% accuracy and tested it on an image of your own. Current state-of-the-art research achieves around 99% on this same problem, using more complex network architectures involving convolutional layers. These use the 2D structure of the image to better represent the contents, unlike our method which flattened all the pixels into one vector of 784 units. You can read more about this topic on the TensorFlow website, and see the research papers detailing the most accurate results on the MNIST website.

      Now that you know how to build and train a neural network, you can try and use this implementation on your own data, or test it on other popular datasets such as the Google StreetView House Numbers, or the CIFAR-10 dataset for more general image recognition.


      How To Configure BIND as a Private Network DNS Server on Debian 9


      An important part of managing server configuration and infrastructure includes maintaining an easy way to look up network interfaces and IP addresses by name, by setting up a proper Domain Name System (DNS). Using fully qualified domain names (FQDNs), instead of IP addresses, to specify network addresses eases the configuration of services and applications, and increases the maintainability of configuration files. Setting up your own DNS for your private network is a great way to improve the management of your servers.

      In this tutorial, we will go over how to set up an internal DNS server, using the BIND name server software (BIND9) on Debian 9, that can be used by your servers to resolve private hostnames and private IP addresses. This provides a central way to manage your internal hostnames and private IP addresses, which is indispensable when your environment expands to more than a few hosts.


      To complete this tutorial, you will need the following infrastructure. Create each server in the same datacenter with private networking enabled:

      • A fresh Debian 9 server to serve as the Primary DNS server, ns1
      • (Recommended) A second Debian 9 server to serve as a Secondary DNS server, ns2
      • Additional servers in the same datacenter that will be using your DNS servers

      On each of these servers, configure administrative access via a sudo user and a firewall by following our Debian 9 initial server setup guide.

      If you are unfamiliar with DNS concepts, it is recommended that you read at least the first three parts of our Introduction to Managing DNS.

      Example Infrastructure and Goals

      For the purposes of this article, we will assume the following:

      • We have two servers which will be designated as our DNS name servers. We will refer to these as ns1 and ns2 in this guide.
      • We have two additional client servers that will be using the DNS infrastructure we create. We will call these host1 and host2 in this guide. You can add as many as you’d like for your infrastructure.
      • All of these servers exist in the same datacenter. We will assume that this is the nyc3 datacenter.
      • All of these servers have private networking enabled (and are on the subnet. You will likely have to adjust this for your servers).
      • All servers are connected to a project that runs on “”. Since our DNS system will be entirely internal and private, you do not have to purchase a domain name. However, using a domain you own may help avoid conflicts with publicly routable domains.

      With these assumptions, we decide that it makes sense to use a naming scheme that uses "" to refer to our private subnet or zone. Therefore, host1's private Fully-Qualified Domain Name (FQDN) will be its hostname followed by that zone. Refer to the following table for the relevant details:

      Host Role Private FQDN Private IP Address
      ns1 Primary DNS Server
      ns2 Secondary DNS Server
      host1 Generic Host 1
      host2 Generic Host 2


      Your existing setup will be different, but the example names and IP addresses will be used to demonstrate how to configure a DNS server to provide a functioning internal DNS. You should be able to easily adapt this setup to your own environment by replacing the host names and private IP addresses with your own. It is not necessary to use the region name of the datacenter in your naming scheme, but we use it here to denote that these hosts belong to a particular datacenter’s private network. If you utilize multiple datacenters, you can set up an internal DNS within each respective datacenter.

      By the end of this tutorial, we will have a primary DNS server, ns1, and optionally a secondary DNS server, ns2, which will serve as a backup.

      Let’s get started by installing our Primary DNS server, ns1.

      Installing BIND on DNS Servers


      Text that is highlighted in red is important! It will often denote something that needs to be replaced with your own settings or that should be modified or added to a configuration file. For example, if you see an example FQDN, replace it with the FQDN of your own server. Likewise, if you see host1_private_IP, replace it with the private IP address of your own server.

      On both DNS servers, ns1 and ns2, update the apt package cache by typing:

      • sudo apt update

      Now install BIND:

      • sudo apt install bind9 bind9utils bind9-doc

      Setting BIND to IPv4 Mode

      Before continuing, let's set BIND to IPv4 mode since our private networking uses IPv4 exclusively. On both servers, edit the bind9 default settings file by typing:

      • sudo nano /etc/default/bind9

      Add "-4" to the end of the OPTIONS parameter. It should look like the following:


      . . .
      OPTIONS="-u bind -4"

      Save and close the file when you are finished.

      Restart BIND to implement the changes:

      • sudo systemctl restart bind9

      Now that BIND is installed, let's configure the primary DNS server.

      Configuring the Primary DNS Server

      BIND's configuration consists of multiple files, which are included from the main configuration file, named.conf. These filenames begin with named because that is the name of the process that BIND runs (short for "domain name daemon"). We will start with configuring the options file.

      Configuring the Options File

      On ns1, open the named.conf.options file for editing:

      • sudo nano /etc/bind/named.conf.options

      Above the existing options block, create a new ACL (access control list) block called "trusted". This is where we will define a list of clients that we will allow recursive DNS queries from (i.e. your servers that are in the same datacenter as ns1). Using our example private IP addresses, we will add ns1, ns2, host1, and host2 to our list of trusted clients:

      /etc/bind/named.conf.options — 1 of 3

      acl "trusted" {
    ;    # ns1 - can be set to localhost
    ;    # ns2
    ;  # host1
    ;  # host2
      };

      options {
              . . .

      Now that we have our list of trusted DNS clients, we will want to edit the options block. Currently, the start of the block looks like the following:

      /etc/bind/named.conf.options — 2 of 3

              . . .
      options {
              directory "/var/cache/bind";
              . . .

      Below the directory directive, add the highlighted configuration lines (and substitute in the proper ns1 IP address) so it looks something like this:

      /etc/bind/named.conf.options — 3 of 3

              . . .
      options {
              directory "/var/cache/bind";
              recursion yes;                 # enables recursive queries
              allow-recursion { trusted; };  # allows recursive queries from "trusted" clients
              listen-on {; };   # ns1 private IP address - listen on private network only
              allow-transfer { none; };      # disable zone transfers by default
              forwarders {
              . . .

      When you are finished, save and close the named.conf.options file. The above configuration specifies that only your own servers (the "trusted" ones) will be able to query your DNS server for outside domains.

      Next, we will configure the local file, to specify our DNS zones.

      Configuring the Local File

      On ns1, open the named.conf.local file for editing:

      • sudo nano /etc/bind/named.conf.local

      Aside from a few comments, the file should be empty. Here, we will specify our forward and reverse zones. DNS zones designate a specific scope for managing and defining DNS records. Since our domains will all be within the "" subdomain, we will use that as our forward zone. Because our servers' private IP addresses are each in the IP space, we will set up a reverse zone so that we can define reverse lookups within that range.

      Add the forward zone with the following lines, substituting the zone name with your own and the secondary DNS server's private IP address in the allow-transfer directive:

      /etc/bind/named.conf.local — 1 of 2

      zone "" {
          type master;
          file "/etc/bind/zones/"; # zone file path
          allow-transfer {; };           # ns2 private IP address - secondary
      };

      Assuming that our private subnet is, add the reverse zone with the following lines (note that our reverse zone name starts with "128.10", which is the octet reversal of "10.128"):

      /etc/bind/named.conf.local — 2 of 2

          . . .
      zone "" {
          type master;
          file "/etc/bind/zones/db.10.128";  # subnet
          allow-transfer {; };  # ns2 private IP address - secondary
      };

      If your servers span multiple private subnets but are in the same datacenter, be sure to specify an additional zone and zone file for each distinct subnet. When you are finished adding all of your desired zones, save and exit the named.conf.local file.

      Now that our zones are specified in BIND, we need to create the corresponding forward and reverse zone files.

      Creating the Forward Zone File

      The forward zone file is where we define DNS records for forward DNS lookups. That is, when the DNS receives a name query, "" for example, it will look in the forward zone file to resolve host1's corresponding private IP address.

      Let's create the directory where our zone files will reside. According to our named.conf.local configuration, that location should be /etc/bind/zones:

      • sudo mkdir /etc/bind/zones

      We will base our forward zone file on the sample db.local zone file. Copy it to the proper location with the following commands:

      • sudo cp /etc/bind/db.local /etc/bind/zones/

      Now let's edit our forward zone file:

      • sudo nano /etc/bind/zones/

      Initially, it will look something like the following:

      /etc/bind/zones/ — original

      $TTL    604800
      @       IN      SOA     localhost. root.localhost. (
                                    2         ; Serial
                               604800         ; Refresh
                                86400         ; Retry
                              2419200         ; Expire
                               604800 )       ; Negative Cache TTL
      @       IN      NS      localhost.      ; delete this line
      @       IN      A       ; delete this line
      @       IN      AAAA    ::1             ; delete this line

      First, you will want to edit the SOA record. Replace the first "localhost" with ns1's FQDN, then replace "root.localhost" with "". Every time you edit a zone file, you need to increment the serial value before you restart the named process. We will increment it to "3". It should now look something like this:

      /etc/bind/zones/ — updated 1 of 3

      @       IN      SOA (
                                    3         ; Serial
                                    . . .

      Next, delete the three records at the end of the file (after the SOA record). If you're not sure which lines to delete, they are marked with a "delete this line" comment above.

      At the end of the file, add your name server records with the following lines (replace the names with your own). Note that the second column specifies that these are "NS" records:

      /etc/bind/zones/ — updated 2 of 3

      . . .
      ; name servers - NS records
          IN      NS
          IN      NS

      Now, add the A records for your hosts that belong in this zone. This includes any server whose name we want to end with "" (substitute the names and private IP addresses). Using our example names and private IP addresses, we will add A records for ns1, ns2, host1, and host2 like so:

      /etc/bind/zones/ — updated 3 of 3

      . . .
      ; name servers - A records
                IN      A
                IN      A
      ; - A records
                IN      A
                IN      A

      Save and close the file.

      Our final example forward zone file looks like the following:

      /etc/bind/zones/ — updated

      $TTL    604800
      @       IN      SOA (
                        3     ; Serial
                   604800     ; Refresh
                    86400     ; Retry
                  2419200     ; Expire
                   604800 )   ; Negative Cache TTL
      ; name servers - NS records
           IN      NS
           IN      NS
      ; name servers - A records
                IN      A
                IN      A
      ; - A records
                IN      A
                IN      A

      Now let's move onto the reverse zone file(s).

      Creating the Reverse Zone File(s)

      Reverse zone files are where we define DNS PTR records for reverse DNS lookups. That is, when the DNS receives a query by IP address, "" for example, it will look in the reverse zone file(s) to resolve the corresponding FQDN, "" in this case.

      On ns1, for each reverse zone specified in the named.conf.local file, create a reverse zone file. We will base our reverse zone file(s) on the sample db.127 zone file. Copy it to the proper location with the following commands (substituting the destination filename so it matches your reverse zone definition):

      • sudo cp /etc/bind/db.127 /etc/bind/zones/db.10.128

      Edit the reverse zone file that corresponds to the reverse zone(s) defined in named.conf.local:

      • sudo nano /etc/bind/zones/db.10.128

      Initially, it will look something like the following:

      /etc/bind/zones/db.10.128 — original

      $TTL    604800
      @       IN      SOA     localhost. root.localhost. (
                                    1         ; Serial
                               604800         ; Refresh
                                86400         ; Retry
                              2419200         ; Expire
                               604800 )       ; Negative Cache TTL
      @       IN      NS      localhost.      ; delete this line
      1.0.0   IN      PTR     localhost.      ; delete this line

      In the same manner as the forward zone file, you will want to edit the SOA record and increment the serial value. It should look something like this:

      /etc/bind/zones/db.10.128 — updated 1 of 3

      @       IN      SOA (
                                    3         ; Serial
                                    . . .

      Now delete the two records at the end of the file (after the SOA record). If you're not sure which lines to delete, they are marked with a "delete this line" comment above.

      At the end of the file, add your name server records with the following lines (replace the names with your own). Note that the second column specifies that these are "NS" records:

      /etc/bind/zones/db.10.128 — updated 2 of 3

      . . .
      ; name servers - NS records
            IN      NS
            IN      NS

      Then add PTR records for all of your servers whose IP addresses are on the subnet of the zone file that you are editing. In our example, this includes all of our hosts because they are all on the subnet. Note that the first column consists of the last two octets of your servers' private IP addresses in reversed order. Be sure to substitute names and private IP addresses to match your servers:

      /etc/bind/zones/db.10.128 — updated 3 of 3

      . . .
      ; PTR Records
      11.10   IN      PTR    ;
      12.20   IN      PTR    ;
      101.100 IN      PTR  ;
      102.200 IN      PTR  ;

      Save and close the reverse zone file (repeat this section if you need to add more reverse zone files).

      Our final example reverse zone file looks like the following:

      /etc/bind/zones/db.10.128 — updated

      $TTL    604800
      @       IN      SOA (
                                    3         ; Serial
                               604800         ; Refresh
                                86400         ; Retry
                              2419200         ; Expire
                               604800 )       ; Negative Cache TTL
      ; name servers
            IN      NS
            IN      NS
      ; PTR Records
      11.10   IN      PTR    ;
      12.20   IN      PTR    ;
      101.100 IN      PTR  ;
      102.200 IN      PTR  ;

      We're done editing our files, so next we can check them for errors.

      Checking the BIND Configuration Syntax

      Run the following command to check the syntax of the named.conf* files:

      • sudo named-checkconf

      If your named configuration files have no syntax errors, you will return to your shell prompt and see no error messages. If there are problems with your configuration files, review the error message and the "Configuring the Primary DNS Server" section, then try named-checkconf again.

      The named-checkzone command can be used to check the correctness of your zone files. Its first argument specifies a zone name, and the second argument specifies the corresponding zone file, which are both defined in named.conf.local.

      For example, to check the "" forward zone configuration, run the following command (change the names to match your forward zone and file):

      • sudo named-checkzone /etc/bind/zones/

      And to check the "" reverse zone configuration, run the following command (change the numbers to match your reverse zone and file):

      • sudo named-checkzone /etc/bind/zones/db.10.128

      When all of your configuration and zone files have no errors in them, you should be ready to restart the BIND service.

      Restarting BIND

      Restart BIND:

      • sudo systemctl restart bind9

      If you have the UFW firewall configured, open up access to BIND by typing:

      • sudo ufw allow Bind9

      Your primary DNS server is now set up and ready to respond to DNS queries. Let's move on to creating the secondary DNS server.

      Configuring the Secondary DNS Server

      In most environments, it is a good idea to set up a secondary DNS server that will respond to requests if the primary becomes unavailable. Luckily, the secondary DNS server is much easier to configure.

      On ns2, edit the named.conf.options file:

      • sudo nano /etc/bind/named.conf.options

      At the top of the file, add the ACL with the private IP addresses of all of your trusted servers:

      /etc/bind/named.conf.options — updated 1 of 2 (secondary)

      acl "trusted" {
    ;   # ns1
    ;   # ns2 - can be set to localhost
    ;  # host1
    ;  # host2
      };

      options {
              . . .

      Below the directory directive, add the following lines:

      /etc/bind/named.conf.options — updated 2 of 2 (secondary)

              recursion yes;
              allow-recursion { trusted; };
              listen-on {; };      # ns2 private IP address
              allow-transfer { none; };          # disable zone transfers by default
              forwarders {

      Save and close the named.conf.options file. This file should look exactly like ns1's named.conf.options file except it should be configured to listen on ns2's private IP address.

      Now edit the named.conf.local file:

      • sudo nano /etc/bind/named.conf.local

      Define slave zones that correspond to the master zones on the primary DNS server. Note that the type is "slave", the file does not contain a path, and there is a masters directive which should be set to the primary DNS server's private IP address. If you defined multiple reverse zones in the primary DNS server, make sure to add them all here:

      /etc/bind/named.conf.local — updated (secondary)

      zone "" {
          type slave;
          file "";
          masters {; };  # ns1 private IP
      };

      zone "" {
          type slave;
          file "db.10.128";
          masters {; };  # ns1 private IP
      };

      Now save and close the named.conf.local file.

      Run the following command to check the validity of your configuration files:

      • sudo named-checkconf
      Once that checks out, restart BIND:

      • sudo systemctl restart bind9

      Allow DNS connections to the server by altering the UFW firewall rules:

      • sudo ufw allow Bind9
      Now you have primary and secondary DNS servers for private network name and IP address resolution. Next, you must configure your client servers to use them.

      Configuring DNS Clients

      Before all of your servers in the "trusted" ACL can query your DNS servers, you must configure each of them to use ns1 and ns2 as name servers. This process varies depending on OS, but for most Linux distributions it involves adding your name servers to the /etc/resolv.conf file.

      Ubuntu 18.04 Clients

      On Ubuntu 18.04, networking is configured with Netplan, an abstraction that allows you to write standardized network configuration and apply it to incompatible backend networking software. To configure DNS, we need to write a Netplan configuration file.

      First, find the device associated with your private network by querying the private subnet with the ip address command:

      • ip address show to


      3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
          inet brd scope global eth1
             valid_lft forever preferred_lft forever

      In this example, the private interface is eth1.

      Next, create a new file in /etc/netplan called 00-private-nameservers.yaml:

      • sudo nano /etc/netplan/00-private-nameservers.yaml

      Inside, paste the following contents. You will need to modify the interface of the private network, the addresses of your ns1 and ns2 DNS servers, and the DNS zone:

      Note: Netplan uses the YAML data serialization format for its configuration files. Because YAML uses indentation and whitespace to define its data structure, make sure that your definition uses consistent indentation to avoid errors.

      /etc/netplan/00-private-nameservers.yaml

      network:
          version: 2
          ethernets:
              eth1:                                 # Private network interface
                  nameservers:
                      addresses:
                          -                # Private IP for ns1
                          -                # Private IP for ns2
                      search: [ ]  # DNS zone

      Save and close the file when you are finished.

      Next, tell Netplan to attempt to use the new configuration file by using netplan try. If there are problems that cause a loss of networking, Netplan will automatically roll back the changes after a timeout:


      Warning: Stopping systemd-networkd.service, but it can still be activated by:
        systemd-networkd.socket

      Do you want to keep these settings?

      Press ENTER before the timeout to accept the new configuration

      Changes will revert in 120 seconds

      If the countdown is updating correctly at the bottom, the new configuration is at least functional enough to not break your SSH connection. Press ENTER to accept the new configuration.

      Now, check the system's DNS resolver to determine whether your DNS configuration has been applied:

      • sudo systemd-resolve --status

      Scroll down until you see the section for your private network interface. You should see the private IP addresses for your DNS servers listed first, followed by some fallback values. Your domain should appear next to "DNS Domain":


      . . .
      Link 3 (eth1)
            Current Scopes: DNS
             LLMNR setting: yes
      MulticastDNS setting: no
            DNSSEC setting: no
          DNSSEC supported: no
               DNS Servers:
                DNS Domain:
      . . .

      Your client should now be configured to use your internal DNS servers.

      Ubuntu 16.04 and Debian Clients

      On Ubuntu 16.04 and Debian Linux servers, you can edit the /etc/network/interfaces file:

      • sudo nano /etc/network/interfaces

      Inside, find the dns-nameservers line. If it is attached to the lo interface, move it to your networking interface (eth0 or eth1, for example). Next, prepend your own name servers to the list that is currently there. Below that line, add a dns-search option set to the base domain of your infrastructure. In our case, this would be "":


          . . .
          . . .

      Save and close the file when you are finished.

      Make sure that the resolvconf package is installed on your system:

      • sudo apt update
      • sudo apt install resolvconf

      Now, restart your networking services, applying the new changes with the following commands. Make sure you replace eth0 with the name of your networking interface:

      • sudo ifdown --force eth0 && sudo ip addr flush dev eth0 && sudo ifup --force eth0

      This should restart your network without dropping your current connection. If it worked correctly, you should see something like this:


      RTNETLINK answers: No such process
      Waiting for DAD... Done

      Double check that your settings were applied by typing:

      • cat /etc/resolv.conf

      You should see your name servers in the /etc/resolv.conf file, as well as your search domain:


      # Dynamic resolv.conf(5) file for glibc resolver(3)
      #     generated by resolvconf(8)
      # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
      nameserver
      nameserver
      nameserver
      search

      Your client is now configured to use your DNS servers.

      CentOS Clients

      On CentOS, RedHat, and Fedora Linux, edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file. You may have to substitute eth0 with the name of your primary network interface:

      • sudo nano /etc/sysconfig/network-scripts/ifcfg-eth0

      Search for the DNS1 and DNS2 options and set them to the private IP addresses of your primary and secondary name servers. Add a DOMAIN parameter set to your infrastructure's base domain. In this guide, that would be "":


      . . .
      . . .

      Save and close the file when you are finished.

      Now, restart the networking service by typing:

      • sudo systemctl restart network

      The command may hang for a few seconds, but should return you to the prompt shortly.

      Check that your changes were applied by typing:

      • cat /etc/resolv.conf

      You should see your name servers and search domain in the list:



      Your client should now be able to connect to and use your DNS servers.

      Testing Clients

      Use nslookup to test if your clients can query your name servers. You should be able to do this on all of the clients that you have configured and are in the "trusted" ACL.

      For CentOS clients, you may need to install the utility with:

      • sudo yum install bind-utils

      For Debian clients, you can install with:

      • sudo apt install dnsutils

      We can start by performing a forward lookup.

      Forward Lookup

      For example, we can perform a forward lookup to retrieve the IP address of host1 by running the following command:

      • nslookup host1

      Querying "host1" expands to the host's FQDN because the search option is set to your private subdomain, and DNS queries will attempt to look on that subdomain before looking for the host elsewhere. The output of the command above would look like the following:


      Server:
      Address:

      Non-authoritative answer:
      Name:
      Address:

      Next, we can check reverse lookups.

      Reverse Lookup

      To test the reverse lookup, query the DNS server with host1's private IP address:

      • nslookup host1_private_IP

      You should see output that looks like the following:

      Output
      name =

      Authoritative answers can be found from:

      If all of the names and IP addresses resolve to the correct values, that means that your zone files are configured properly. If you receive unexpected values, be sure to review the zone files on your primary DNS server (e.g. and db.10.128).

      Congratulations! Your internal DNS servers are now set up properly! Now we will cover maintaining your zone records.

      Maintaining DNS Records

      Now that you have a working internal DNS, you need to maintain your DNS records so they accurately reflect your server environment.

      Adding a Host to DNS

      Whenever you add a host to your environment (in the same datacenter), you will want to add it to DNS. Here is a list of steps that you need to take:

      Primary Name Server

      • Forward zone file: Add an "A" record for the new host, increment the value of "Serial"
      • Reverse zone file: Add a "PTR" record for the new host, increment the value of "Serial"
      • Add your new host's private IP address to the "trusted" ACL (named.conf.options)

      Test your configuration files:

      • sudo named-checkconf
      • sudo named-checkzone
      • sudo named-checkzone /etc/bind/zones/db.10.128

      Then reload BIND:

      • sudo systemctl reload bind9

      Your primary server should be configured for the new host now.

      Secondary Name Server

      • Add your new host's private IP address to the "trusted" ACL (named.conf.options)

      Check the configuration syntax:

      Then reload BIND:

      • sudo systemctl reload bind9

      Your secondary server will now accept connections from the new host.

      Configure New Host to Use Your DNS

      • Configure /etc/resolv.conf to use your DNS servers
      • Test using nslookup

      Removing Host from DNS

      If you remove a host from your environment or want to take it out of DNS, simply reverse the steps above: remove the records and ACL entries that were added when the server was added to DNS.


      Now you may refer to your servers' private network interfaces by name, rather than by IP address. This makes the configuration of services and applications easier because you no longer have to remember the private IP addresses, and the files will be easier to read and understand. Also, you can now change your configurations to point to new servers in a single place, your primary DNS server, instead of having to edit a variety of distributed configuration files, which eases maintenance.

      Once you have your internal DNS set up, and your configuration files are using private FQDNs to specify network connections, it is critical that your DNS servers are properly maintained. If they both become unavailable, your services and applications that rely on them will cease to function properly. This is why it is recommended to set up your DNS with at least one secondary server, and to maintain working backups of all of them.
