
      How To Visualize and Interpret Neural Networks in Python


      The author selected Open Sourcing Mental Illness to receive a donation as part of the Write for DOnations program.

      Introduction

      Neural networks achieve state-of-the-art accuracy in many fields such as computer vision, natural-language processing, and reinforcement learning. However, neural networks are complex, easily containing hundreds of thousands, or even millions, of operations (MFLOPs or GFLOPs). This complexity makes interpreting a neural network difficult. For example: How did the network arrive at the final prediction? Which parts of the input influenced the prediction? This lack of understanding is exacerbated for high-dimensional inputs like images: What does an explanation for an image classification even look like?

      Research in Explainable AI (XAI) works to answer these questions with a number of different explanations. In this tutorial, you’ll specifically explore two types of explanations: (1) saliency maps, which highlight the most important parts of the input image, and (2) decision trees, which break down each prediction into a sequence of intermediate decisions. For both of these approaches, you’ll produce code that generates these explanations from a neural network.

      Along the way, you’ll also use the deep-learning Python library PyTorch, the computer-vision library OpenCV, and the linear-algebra library numpy. By following this tutorial, you will gain an understanding of current XAI efforts to understand and visualize neural networks.

      Prerequisites

      To complete this tutorial, you will need the following:

      You can find all the code and assets from this tutorial in this repository.

      Step 1 — Creating Your Project and Installing Dependencies

      Let’s create a workspace for this project and install the dependencies you’ll need. You’ll call your workspace XAI, short for Explainable Artificial Intelligence. For example, create it in your home directory:
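
      • mkdir ~/XAI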

      Navigate to the XAI directory:
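
      • cd ~/XAI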

      Make a directory to hold all your assets:
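
      • mkdir assets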

      Then create a new virtual environment for the project; here, the environment is named xai:
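
      • python3 -m venv xai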

      Activate your environment:
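
      • source xai/bin/activate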

      Then install PyTorch, a deep-learning framework for Python that you’ll use in this tutorial.

      On macOS, install PyTorch with the following command:

      • python -m pip install torch==1.4.0 torchvision==0.5.0

      On Linux and Windows, use the following commands for a CPU-only build:

      • pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
      • pip install torchvision
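
      Regardless of platform, you can optionally confirm that the installation succeeded by printing the installed versions:

      • python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"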

      Now install prepackaged binaries for OpenCV, Pillow, numpy, and matplotlib. OpenCV and Pillow offer image utilities, such as image rotations; numpy offers linear-algebra utilities, such as matrix inversion; and matplotlib provides the colormaps you’ll use to draw heatmaps later in this tutorial:

      • python -m pip install opencv-python==3.4.3.18 pillow==7.1.0 numpy==1.14.5 matplotlib==3.3.2

      On Debian-based Linux distributions, such as Ubuntu, you will also need to install libSM.so:

      • sudo apt-get install libsm6 libxext6 libxrender-dev

      Finally, install nbdt, a deep-learning library for neural-backed decision trees, which we will discuss in the last step of this tutorial:

      • python -m pip install nbdt==0.0.4

      With the dependencies installed, let’s run an image classifier that has already been trained.

      Step 2 — Running a Pretrained Classifier

      In this step, you will set up an image classifier that has already been trained.

      First, an image classifier accepts images as input and outputs a predicted class (like Cat or Dog). Second, pretrained means this model has already been trained, so it will be able to predict classes accurately straightaway. Your goal will be to visualize and interpret this image classifier: How does it make decisions? Which parts of the image did the model use for its prediction?

      First, download a JSON file to convert neural network output to a human-readable class name:

      • wget -O assets/imagenet_idx_to_label.json https://raw.githubusercontent.com/do-community/tricking-neural-networks/master/utils/imagenet_idx_to_label.json

      Download the following Python script, which will load an image, load a neural network with its weights, and classify the image using the neural network:

      • wget https://raw.githubusercontent.com/do-community/tricking-neural-networks/master/step_2_pretrained.py

      Note: For a more detailed walkthrough of this file step_2_pretrained.py, please see Step 2 — Running a Pretrained Animal Classifier in the How To Trick a Neural Network tutorial.

      Next, download the following image of a cat and dog to run the image classifier on.

      Image of Cat and dog on sofa

      • wget -O assets/catdog.jpg https://assets.digitalocean.com/articles/visualize_neural_network/step2b.jpg

      Finally, run the pretrained image classifier on the newly downloaded image:

      • python step_2_pretrained.py assets/catdog.jpg

      This will produce the following output, showing your animal classifier works as expected:

      Output

      Prediction: Persian cat

      That concludes running inference with your pretrained model.

      Although this neural network produces correct predictions, we don’t understand how the model arrived at its prediction. To better understand this, start by considering the cat and dog image that you provided to the image classifier.

      Image of Cat and dog on sofa

      The image classifier predicts Persian cat. One question you can ask is: Was the model looking at the cat on the left? Or the dog on the right? Which pixels did the model use to make that prediction? Fortunately, we have a visualization that answers this exact question. Following is a visualization that highlights the pixels the model used to determine Persian cat.

      A visualization that highlights pixels that the model used

      The model classifies the image as Persian cat by looking at the cat. For this tutorial, we will refer to visualizations like this example as saliency maps, which we define to be heatmaps that highlight pixels influencing the final prediction. There are two types of saliency maps:

      1. Model-agnostic Saliency Maps (often called “black-box” methods): These approaches do not need access to the model’s weights. In general, these methods change the image and observe the changed image’s impact on accuracy. For example, you might remove the center of the image (pictured following). The intuition is: if the image classifier now misclassifies the image, the image center must have been important. We can repeat this, randomly removing a different part of the image each time. In this way, we can produce a heatmap like the previous one by highlighting the patches that damaged accuracy the most. (A minimal code sketch of this occlusion idea appears after this list.)

      A heatmap highlighting the patches that damaged accuracy the most.

      2. Model-aware Saliency Maps (often called “white-box” methods): These approaches require access to the model’s weights. We will discuss one such method in more detail in the next section.
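
      To make the occlusion idea from the first approach concrete, here is a minimal sketch in PyTorch. It is not part of this tutorial’s scripts, and it assumes you already have some classifier net and a preprocessed input tensor x of shape (1, 3, H, W), both stand-ins here. It slides a blacked-out patch across the image and records how much the probability of the target class drops:

      import torch

      def occlusion_saliency(net, x, target_class, patch=32, stride=16):
          """Score each patch by how much occluding it lowers the target class probability."""
          net.eval()
          with torch.no_grad():
              base = torch.softmax(net(x), 1)[0, target_class].item()
              _, _, height, width = x.shape
              rows = (height - patch) // stride + 1
              cols = (width - patch) // stride + 1
              heatmap = torch.zeros(rows, cols)
              for i in range(rows):
                  for j in range(cols):
                      occluded = x.clone()
                      # black out one patch and re-run the classifier
                      occluded[:, :, i * stride:i * stride + patch, j * stride:j * stride + patch] = 0
                      prob = torch.softmax(net(occluded), 1)[0, target_class].item()
                      heatmap[i, j] = base - prob  # a large drop means an important patch
          return heatmap

      Patches with the largest probability drop are the ones the model relied on most; resizing the heatmap back up to the image size yields a saliency map like the one shown previously.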

      This concludes our brief overview of saliency maps. In the next step, you will implement one model-aware technique called a Class Activation Map (CAM).

      Step 3 — Generating Class Activation Maps (CAM)

      Class Activation Maps (CAMs) are a type of model-aware saliency method. To understand how a CAM is computed, we first need to discuss what the last few layers in a classification network do. Following is an illustration of a typical image-classification neural network, for the method in this paper on Learning Deep Features for Discriminative Localization.

      Diagram of an existing image classification neural network.

      The figure describes the following process in a classification neural network. Note the image is represented as a stack of rectangles; for a refresher on how images are represented as a tensor, see How to Build an Emotion-Based Dog Filter in Python 3 (Step 4):

      1. Focus on the second-to-last layer’s outputs, labeled LAST CONV with blue, red, and green rectangles.
      2. This output undergoes a global average pool (denoted as GAP). GAP averages values in each channel (colored rectangle) to produce a single value (corresponding colored box, in LINEAR).
      3. Finally, those values are combined in a weighted sum (with weights denoted by w1, w2, w3) to produce a probability (dark gray box) of a class. In this case, these weights correspond to CAT. In essence, each wi answers: “How important is the ith channel to detecting a Cat?”
      4. Repeat for all classes (light gray circles) to obtain probabilities for all classes.

      We’ve omitted several details that are not necessary to explain CAM. Now, we can use this to compute CAM. Let us revisit an expanded version of this figure, still for the method in the same paper. Focus on the second row.

      Diagram of how class activation maps are computed from an image classification neural network.

      1. To compute a class activation map, take the second-to-last layer’s outputs. This is depicted in the second row, outlined by blue, red, and green rectangles corresponding to the same colored rectangles in the first row.
      2. Pick a class. In this case, we pick “Australian Terrier”. Find the weights w1, w2, ... wn corresponding to that class.
      3. Each channel (colored rectangle) is then weighted by w1, w2, ... wn. Note we do not perform a global average pool (step 2 from the previous figure). Compute the weighted sum to obtain a class activation map (far right, second row in the figure).

      This final weighted sum is the class activation map.
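
      Written out: if A1, A2, ... An are the channels of the second-to-last layer’s output and w1, w2, ... wn are the weights for the chosen class, then the class activation map at each location (x, y) is CAM(x, y) = w1 * A1(x, y) + w2 * A2(x, y) + ... + wn * An(x, y).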

      Next, we will implement class activation maps. This section will be broken into the three steps that we’ve already discussed:

      1. Take the second-to-last layer’s outputs.
      2. Find weights w1, w2, ... wn.
      3. Compute a weighted sum of outputs.

      Start by creating a new file step_3_cam.py, using nano or your preferred text editor:
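
      • nano step_3_cam.py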

      First, add the Python boilerplate; import the necessary packages and declare a main function:

      step_3_cam.py

      """Generate Class Activation Maps"""
      import numpy as np
      import sys
      import torch
      import torchvision.models as models
      import torchvision.transforms as transforms
      import matplotlib.cm as cm
      
      from PIL import Image
      from step_2_pretrained import load_image
      
      
      def main():
          pass
      
      
      if __name__ == '__main__':
          main()
      

      Create an image loader that will load, resize, and crop your image, but leave the color untouched. This ensures your image has the correct dimensions. Add this before your main function:

      step_3_cam.py

      . . .
      def load_raw_image():
          """Load raw 224x224 center crop of image"""
          image = Image.open(sys.argv[1])
          transform = transforms.Compose([
            transforms.Resize(224),  # resize smaller side of image to 224
            transforms.CenterCrop(224),  # take center 224x224 crop
          ])
          return transform(image)
      . . .
      

      In load_raw_image, you first access the one argument passed to the script, sys.argv[1]. Then, you open the specified image using Image.open. Next, you define a number of transformations to apply to the image:

      • transforms.Resize(224): Resizes the smaller side of the image to 224. For example, if your image is 448 x 672, this operation would downsample the image to 224 x 336.
      • transforms.CenterCrop(224): Takes a crop from the center of the image, of size 224 x 224.
      • transform(image): Applies the sequence of image transformations defined in the previous lines.

      This concludes image loading.

      Next, load the pretrained model. Add this function after your first load_raw_image function, but before the main function:

      step_3_cam.py

      . . .
      def get_model():
          """Get model, set forward hook to save second-to-last layer's output"""
          net = models.resnet18(pretrained=True).eval()
          layer = net.layer4[1].conv2
      
          def store_feature_map(self, _, output):
              self._parameters['out'] = output
          layer.register_forward_hook(store_feature_map)
      
          return net, layer
      . . .
      

      In the get_model function, you:

      1. Instantiate a pretrained model models.resnet18(pretrained=True).
      2. Change the model’s inference mode to eval by calling .eval().
      3. Define layer, the second-to-last layer, which we will use later.
      4. Add a “forward hook” function. This function will save the layer’s output when the layer is executed. We do this in two steps, first defining a store_feature_map hook and then binding the hook with register_forward_hook.
      5. Return both the network and the second-to-last layer.

      This concludes model loading.
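
      If you would like to sanity-check the hook before moving on, you can optionally run a quick test from the same directory; this snippet is not part of step_3_cam.py. For a 224 x 224 input, the hooked ResNet18 layer stores a feature map of shape (1, 512, 7, 7):

      import torch
      from step_3_cam import get_model

      net, layer = get_model()
      with torch.no_grad():
          net(torch.randn(1, 3, 224, 224))  # dummy image; running it triggers the forward hook
      print(layer._parameters['out'].shape)  # expected: torch.Size([1, 512, 7, 7])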

      Next, compute the class activation map itself. Add this function before your main function:

      step_3_cam.py

      . . .
      def compute_cam(net, layer, pred):
          """Compute class activation maps
      
          :param net: network that ran inference
          :param layer: layer to compute cam on
          :param int pred: prediction to compute cam for
          """
      
          # 1. get second-to-last-layer output
          features = layer._parameters['out'][0]
      
          # 2. get weights w_1, w_2, ... w_n
          weights = net.fc._parameters['weight'][pred]
      
          # 3. compute weighted sum of output
          cam = (features.T * weights).sum(2)
      
          # normalize cam
          cam -= cam.min()
          cam /= cam.max()
          cam = cam.detach().numpy()
          return cam
      . . .
      

      The compute_cam function mirrors the three steps outlined at the start of this section and in the section before.

      1. Take the second-to-last layer’s outputs, using the feature maps our forward hook saved in layer._parameters.
      2. Find weights w1, w2, ... wn in the final linear layer net.fc._parameters['weight']. Access the predth row of weights to obtain the weights for our predicted class.
      3. Compute a weighted sum of outputs with (features.T * weights).sum(2). Here, features has shape (512, 7, 7), so features.T has shape (7, 7, 512); multiplying by the 512 class weights broadcasts across the channel dimension, and the argument 2 means we sum along the index 2 (channel) dimension, leaving a 7 x 7 map.
      4. Normalize the class activation map so that all values fall between 0 and 1: cam -= cam.min(); cam /= cam.max().
      5. Detach the PyTorch tensor from the computation graph with .detach(), then convert the CAM from a PyTorch tensor into a numpy array with .numpy().

      This concludes computation for a class activation map.

      Our last helper function is a utility that saves the class activation map. Add this function before your main function:

      step_3_cam.py

      . . .
      def save_cam(cam):
          # save heatmap
          heatmap = (cm.jet_r(cam) * 255.0)[..., 2::-1].astype(np.uint8)
          heatmap = Image.fromarray(heatmap).resize((224, 224))
          heatmap.save('heatmap.jpg')
          print(' * Wrote heatmap to heatmap.jpg')
      
          # save heatmap on image
          image = load_raw_image()
          combined = (np.array(image) * 0.5 + np.array(heatmap) * 0.5).astype(np.uint8)
          Image.fromarray(combined).save('combined.jpg')
          print(' * Wrote heatmap on image to combined.jpg')
      . . .
      

      This utility save_cam performs the following:

      1. Colorize the heatmap with cm.jet_r(cam). The output is in the range [0, 1], so multiply by 255.0. Furthermore, the output contains a fourth alpha channel; the indexing [..., 2::-1] drops that alpha channel and reverses the order of the color channels. Finally, cast the result to unsigned integers.
      2. Convert the array into a PIL image with Image.fromarray, resize it with the image’s .resize(...) utility, then save it with .save(...).
      3. Load a raw image, using the utility load_raw_image we wrote earlier.
      4. Superimpose the heatmap on top of the image by adding 0.5 weight of each. Like before, cast the result to unsigned integers with .astype(...).
      5. Finally, convert the combined array into a PIL image and save it.

      Next, populate the main function with some code to run the neural network on a provided image:

      step_3_cam.py

      . . .
      def main():
          """Generate CAM for network's predicted class"""
          x = load_image()
          net, layer = get_model()
      
          out = net(x)
          _, (pred,) = torch.max(out, 1)  # get class with highest probability
      
          cam = compute_cam(net, layer, pred)
          save_cam(cam)
      . . .
      

      In main, you perform the following steps to run the network and obtain a prediction:

      1. Load the image.
      2. Fetch the pretrained neural network.
      3. Run the neural network on the image.
      4. Find the highest probability with torch.max. pred is now a number with the index of the most likely class.
      5. Compute the CAM using compute_cam.
      6. Finally, save the CAM using save_cam.

      This now concludes our class activation script. Save and close your file. Check that your script matches the step_3_cam.py in this repository.

      Then, run the script:

      • python step_3_cam.py assets/catdog.jpg

      Your script will output the following:

      Output

      * Wrote heatmap to heatmap.jpg
      * Wrote heatmap on image to combined.jpg

      This will produce a heatmap.jpg and combined.jpg akin to the following images showing the heatmap and the heatmap combined with the cat/dog image.

      Heatmap highlighting
      Saliency map superimposed on top of the original image

      You have produced your first saliency map. We will end the article with more links and resources for generating other kinds of saliency maps. In the meantime, let us now explore a second approach to explainability—namely, making the model itself interpretable.

      Step 4 — Using Neural-Backed Decision Trees

      Decision Trees belong to a family of rule-based models. A decision tree is a data tree that displays possible decision pathways. Each prediction is the result of a series of intermediate decisions.

      Decision tree for hot dog, burger, super burger, waffle fries

      Instead of just outputting a prediction, each prediction also comes with a justification. For example, to arrive at the conclusion of “Hotdog” for this figure, the model must first ask: “Does it have a bun?”, then ask: “Does it have a sausage?” Each of these intermediate decisions can be verified or challenged separately. As a result, classic machine learning calls these rule-based systems “interpretable.”

      One question is: How are these rules created? Decision Trees warrant a far more detailed discussion of their own, but in short, rules are created to “split classes as much as possible.” Formally, this is “maximizing information gain.” In the limit, maximizing this split makes sense: if the rules perfectly split classes, then our final predictions will always be correct.
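
      To make “maximizing information gain” concrete, here is a toy calculation using the classes from the figure above; it is not part of this tutorial’s scripts. Information gain is the entropy of the labels before a split minus the weighted entropy of the two branches after the split, so a rule that cleanly separates classes has high gain:

      import numpy as np

      def entropy(labels):
          """Shannon entropy of a label array, in bits."""
          _, counts = np.unique(labels, return_counts=True)
          probabilities = counts / counts.sum()
          return -(probabilities * np.log2(probabilities)).sum()

      def information_gain(labels, goes_left):
          """Entropy before the split minus the weighted entropy of the two branches."""
          left, right = labels[goes_left], labels[~goes_left]
          weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
          return entropy(labels) - weighted

      labels = np.array(['hotdog', 'hotdog', 'burger', 'waffle fries'])
      has_sausage = np.array([True, True, False, False])  # candidate rule: "Does it have a sausage?"
      print(information_gain(labels, has_sausage))  # 1.0 bit: the rule separates the hotdogs perfectly

      A decision tree greedily picks, at each node, the candidate rule with the highest information gain.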

      Now, we will move on to a neural network and decision tree hybrid and run inference with it. As we will find, this gives us a different type of explainability: direct model interpretability. For more on decision trees, see this Classification and Regression Trees (CART) overview.

      Start by creating a new file called step_4_nbdt.py, again using nano or your preferred text editor:
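
      • nano step_4_nbdt.py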

      First, add the Python boilerplate. Import the necessary packages and declare a main function. maybe_install_wordnet sets up a prerequisite that our program may need:

      step_4_nbdt.py

      """Run evaluation on a single image, using an NBDT"""
      
      from nbdt.model import SoftNBDT, HardNBDT
      from pytorchcv.models.wrn_cifar import wrn28_10_cifar10
      from torchvision import transforms
      from nbdt.utils import DATASET_TO_CLASSES, load_image_from_path, maybe_install_wordnet
      import sys
      
      maybe_install_wordnet()
      
      
      def main():
          pass
      
      
      if __name__ == '__main__':
          main()
      

      Start by loading the pretrained model, as before. Add the following before your main function:

      step_4_nbdt.py

      . . .
      def get_model():
          """Load pretrained NBDT"""
          model = wrn28_10_cifar10()
          model = HardNBDT(
            pretrained=True,
            dataset="CIFAR10",
            arch="wrn28_10_cifar10",
            model=model)
          return model
      . . .
      

      This function does the following:

      1. Creates a new WideResNet model with wrn28_10_cifar10().
      2. Next, it creates the neural-backed decision tree variant of that model, by wrapping it with HardNBDT(..., model=model).

      This concludes model loading.

      Next, load and preprocess the image for model inference. Add the following before your main function:

      step_4_nbdt.py

      . . .
      def load_image():
          """Load + transform image"""
          assert len(sys.argv) > 1, "Need to pass image URL or image path as argument"
          im = load_image_from_path(sys.argv[1])
          transform = transforms.Compose([
            transforms.Resize(32),
            transforms.CenterCrop(32),
            transforms.ToTensor(),
            transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
          ])
          x = transform(im)[None]
          return x
      . . .
      

      In load_image, you start by loading the image from the provided path or URL, using a custom utility method called load_image_from_path. Next, you define a number of different transformations to apply to the images that are passed to your neural network:

      • transforms.Resize(32): Resizes the smaller side of the image to 32. For example, if your image is 448 x 672, this operation would downsample the image to 32 x 48.
      • transforms.CenterCrop(32): Takes a crop from the center of the image, of size 32 x 32.
      • transforms.ToTensor(): Converts the image into a PyTorch tensor. All PyTorch models require PyTorch tensors as input.
      • transforms.Normalize(mean=..., std=...): Standardizes your input by subtracting the mean, then dividing by the standard deviation. This is described more precisely in the torchvision documentation.

      Finally, apply the image transformations with transform(im)[None]. The [None] adds a batch dimension, so the result x has shape (1, 3, 32, 32), which is the input shape the model expects.

      Next, define a utility function to log both the prediction and the intermediate decisions that led up to it. Place this before your main function:

      step_4_nbdt.py

      . . .
      def print_explanation(outputs, decisions):
          """Print the prediction and decisions"""
          _, predicted = outputs.max(1)
          cls = DATASET_TO_CLASSES['CIFAR10'][predicted[0]]
          print('Prediction:', cls, '// Decisions:', ', '.join([
              '{} ({:.2f}%)'.format(info['name'], info['prob'] * 100) for info in decisions[0]
          ][1:]))  # [1:] to skip the root
      . . .
      

      The print_explanation function computes and logs predictions and decisions:

      1. Starts by computing the index of the highest probability class outputs.max(1).
      2. Then, it converts that prediction into a human readable class name using the dictionary DATASET_TO_CLASSES['CIFAR10'][predicted[0]].
      3. Finally, it prints the prediction cls and the decisions info['name'], info['prob']....

      Conclude the script by populating the main function with the utilities we have written so far:

      step_4_nbdt.py

      . . .
      def main():
          model = get_model()
          x = load_image()
          outputs, decisions = model.forward_with_decisions(x)  # use `model(x)` to obtain just logits
          print_explanation(outputs, decisions)
      

      We perform model inference with explanations in several steps:

      1. Load the model get_model.
      2. Load the image load_image.
      3. Run model inference model.forward_with_decisions.
      4. Finally, print the prediction and explanations with print_explanation.

      Save and close your file, and double-check that your file’s contents match step_4_nbdt.py. Then, run your script on the photo from earlier of the two pets side-by-side.

      • python step_4_nbdt.py assets/catdog.jpg

      This will output both the prediction and the corresponding justifications:

      Output

      Prediction: cat // Decisions: animal (99.34%), chordate (92.79%), carnivore (99.15%), cat (99.53%)

      This concludes the neural-backed decision tree section.

      Conclusion

      You have now run two types of Explainable AI approaches: post-hoc explanations, such as saliency maps, and interpretable models that use a rule-based system, such as neural-backed decision trees.

      There are many explainability techniques not covered in this tutorial. For further reading, be sure to check out other ways to visualize and interpret neural networks; their uses are many, from debugging to debiasing to avoiding catastrophic errors. There are many applications for Explainable AI (XAI), from sensitive domains like medicine to other mission-critical systems such as self-driving cars.




      Connecting Our ‘Inner Networks’ Through Yoga, Powered by INAP


      As the new reality of the work-from-home lifestyle began to sink in, we at INAP started looking for new ways to connect with our employees, partners and clients. Enter INAP Marketing Specialist Nicolette Downs. As a yoga instructor and owner of Chicago’s own Big Shoulders Yoga Studio, she’s graciously stepped onto her mat to offer twice-weekly yoga classes via Instagram Live.

      “The virtual INAP yoga classes are a great break in the day and have helped with my flexibility and focus,” said Matt Cutler, National Account Manager.

      You can join Downs and other members of the INAP family on Tuesdays and Thursdays at 12 p.m. Eastern Time. All you have to do is follow us @poweredbyinap on Instagram and you can join the free classes in real time or watch them on our Instagram story for up to 24 hours after the live class. The 30-minute classes are the perfect way to take a short break, connect with others, reconnect with yourself and get your body moving.

      “It’s great to take 30 minutes in the middle of the day to step away from my desk and move my body,” said Kandace Hyland, Senior Marketing Manager. “I come back re-energized to do whatever project I’m working on that day.”

      Attending live classes can also bring some much needed routine to your day, as Human Resources Generalist Anastacia Cesario can attest. “Taking live classes during this quarantine helps me feel like there is some sort of normality in my day still and helps me stick to a schedule.”

      Yogis of all levels can participate, whether you’re a seasoned expert or have never taken a yoga class in your life.

      National Account Manager Joseph Shaughnessy has been able to find a new challenge and goal to strive for through these classes. “I’m still trying to perfect my ‘crow pose,’” he said. “A lot of practice is needed!”

      You won’t need any special equipment to participate. If you don’t have access to a mat, a towel or blanket can serve as a stand-in. Downs often offers up substitutes for the typical props used in the yoga studio at the beginning of the live broadcast and throughout the practice.

      We hope we’re able to provide the small break you need in your day to keep you feeling grounded as we make our way through these unprecedented times. And as always, we’re here to help you find the solutions you need to maintain your connectivity, whether it be through yoga to keep you up and running, or an IT solution to keep your business going.

      Know Your Yoga Moves

      New to yoga and want to be better prepared? There are many poses to explore in class that aren’t covered below, including crow pose, which Shaughnessy mentioned he’s working on, a pose that allows you to get some “network uptime.” But the poses that follow are some of the moves you’ll frequently see in our classes.

      Downs also provides modifications during the class in order to help accommodate limitations.

      Backbone Connection Pose (Downward Dog)

      Yoga Downward Dog

      This is a common pose across all styles of yoga. This spine-lengthening pose will help you shake off long days sitting in front of a computer.

      From all fours, ground your hands into the mat, putting the weight into your thumbs and forefingers to take any strain off of your wrists. Lift your hips and bend your knees, coming onto the balls of your feet. Bring your shins parallel to the mat and keep your sit bones lifting high and back as you straighten your legs. Once in position, you can pedal out your feet as you work to melt your heels toward the floor.

      Hyper-V (Boat Pose)

      Yoga Boat Pose

      This pose will really rev up your core. You’ll begin seated with your knees bent and feet flat on the mat. Lean back slightly and lift your legs to bring your shins parallel to the floor. Maintain tension in your core to ensure that your spine doesn’t round down. We want a nice straight back and a lifted chest in this pose.

      As you maintain the balance on your sit bones, straighten your legs to your comfort level. In the photo above, Downs is demonstrating the maximum extension. Lift your arms and reach forward, keeping your arms and hands actively engaged. We’ll typically draw several rounds of breath in this pose.

      To release, exhale as you lower your legs and hands to the floor.

      Network Branch Pose (Tree Pose)

      Yoga Network Branch Pose

      Challenge your balance! Throughout this balance pose, keep your gaze fixed on an unmoving point in front of you. Begin by standing on your mat with your arms at your side. For this example, we’ll pretend you’re starting with the balance on your left leg. Shift your weight to your left foot and bend your right knee.

      Lift your right leg or use your hand to draw your right foot alongside your inner left thigh, your left calf or your left ankle, depending on your flexibility level. To protect your knee, do not rest your foot against the knee joint.

      Place your hands on your hips and lengthen your tailbone toward the floor. Then, press your palms together in a prayer position at your chest, with your thumbs resting on your sternum. You can stay in this position, or take the balance a step further by reaching your network branches up overhead.

      Web Application Firewall Pose (Warrior II)

      Yoga Web Application Firewall Pose

      This pose will give your quad muscles a run for their money. Step your feet apart, using your mat or towel as a guide. Raise your arms parallel to the floor, palms facing down, and reach them actively out to the sides. Keep the front foot pointed forward and turn your back foot out. The front and back heels should be aligned as the feet run perpendicular to each other.

      Bend your left knee over the left ankle, so that the shin is perpendicular to the floor. This is where you’ll feel the quad go to work. Anchor this movement of the front knee by strengthening the back leg as you press the outer back heel firmly to the floor. Turn your head forward to look out over your fingers.

      Concluding the Practice

      Yoga Sealing Practice

      Each practice concludes with savasana, or what we’re calling NAP Pose. You’ll get a chance to lie back on the mat, relax and thank yourself for the work you just did. Then, after that final relaxation, we seal the practice by sitting cross legged and saying, “Namaste,” which roughly means, “The light within me honors and respects the light within you.”

      Join us on Instagram Live for free yoga classes!


      Laura Vietmeyer








      Networks and Online Gaming: 3 Ways to Improve Performance and Retain Your Audience


      What makes or breaks the technical success of a new multiplayer video game? Or for that matter, the success of any given online gaming session or match? There are a lot of reasons, to be sure, but success typically boils down to factors outside of the end users’ control. At the top of the list, arguably, is network performance.

      In June 2018, Fortnite experienced a network interruption that caused the world-famous streamer Ninja to swap mid-stream to Hi-Rez’s Realm Royale. Ninja gave the game rave reviews, resulting in a huge number of users jumping over to play Realm Royale. And just this month, the launch of Wolcen: Lords of Mayhem was darkened by infrastructure issues, as the servers couldn’t handle the number of users flocking to the game. While both popular games might not have experienced long-term damage, ongoing issues like these can turn users toward a competitor’s game or drive them away for good.

      Low latency is so vital that in a 2019 survey, seven in 10 gamers said they will play a laggy game for less than 10 minutes before quitting. And nearly three in 10 say what matters most about an online game is having a seamless gaming experience without lag. What can game publishers do to prevent lag, increase network performance and increase the chances that their users won’t “rage quit”?

      Taking Control of the Network to Avoid Log Offs

      There are a few different ways to answer the question and avoid the scenario outlined above, but some solutions are stronger than others.

      Increase Network Presence with Edge Deployments

      One option is to spread nodes across multiple geographical presences to reduce the distance a user must traverse to connect. Latency starts as a physics problem, so the shorter the distance between data centers and users, the lower the latency.

      This approach isn’t always the best answer, however, as every day there can be both physical and logical network issues between a user and a host just miles apart. Some of these problems can mean a difference of tens to thousands of milliseconds across a single carrier.

      Games are also increasingly global. You can put a server in Los Angeles to be close to users on the West Coast, but they’re going to want to play with their friends on the East Coast, or somewhere even further away.

      Connect Through the Same Carriers as the End Users

      Another answer is to purchase connectivity to some of the same networks end users will connect from, such as Comcast, AT&T, Time Warner, Telecom, Verizon, etc.

      A drawback of this option, though, stems from the abolishment of Net Neutrality. Carriers don’t necessarily need to honor best-route methodology anymore, meaning they can prioritize cost efficiency over performance in their network configurations. I’ve personally observed traffic going from Miami to Tampa being routed all the way to Houston and back, as shown in the images below.

      Network routing
      The traffic on the left follows best-route methodology, while the traffic on the right going from Miami to Tampa is being routed through Houston. This is one consequence of the abolishment of Net Neutrality.

      Purchasing connectivity that gets you directly into the homes of end users may seem like the best method to reduce latency, but bottlenecks or indirect routing inside these large carriers’ networks can cause issues. A major metro market in the United States can also have three to four incumbent consumer carriers providing residential services to gamers, necessitating an IP blend to effectively reach end users. However, startups or gaming companies don’t want to build their own blended IP solution in every market they want to build out in.

      Choose a Host with a Blended Carrier Agreement

      The best possible solution to the initial scenario is to host with a provider that has a blended carrier agreement, along with a network route optimization technology to algorithmically traverse all of those carriers.

      Take, for example, INAP’s Performance IP® solution. This technology makes a daily average of nearly 500 million optimizations across INAP’s global network to automatically put a customer’s outbound traffic on the best-performing route. This type of technology reduces latency by upwards of 44 percent and prevents packet loss, sparing users the lag that can change the fate of a game’s commercial success. You can explore our IP solution by running your own performance test.

      Taking Control When Uncontrollable Factors are at Play

      There will be times that game play is affected by end user hardware. It makes a difference, and it always will, but unfortunately publishers can’t control the type of access their users have to the internet. In some regions of the world, high-speed internet is just a dream, while in others it would be unfathomable to go without high-speed internet access.

      Inline end user networking equipment can also play a role in network behavior. Modems, switches, routers and carrier equipment can cause poor performance. Connectivity being switched through an entire neighborhood, throughput issues during peak neighborhood activities, satellite dishes angled in an unoptimized position limiting throughput—there’s a myriad of reasons that user experience can be impacted.

      With these scenarios, end users often understand what they are working with and make mental allowances to cope with any limitations. Or they’ll upgrade their internet service and gaming hardware accordingly.

      The impact of network performance on streaming services and game play can’t be overstated. Most end users will make the corrections they can in order to optimize game play and connectivity. The rest is up to the publisher.

      Explore INAP’s Global Network.


      Dan Lotterman




