

      INAP Executive Spotlight: Matt Cuneio, Vice President, Network Operations Center


      In the INAP Executive Spotlight series, we interview senior leaders across the organization, hearing candid reflections about their careers, what they love about their work and big lessons learned along the way.

Next in the series is Matt Cuneio, Vice President, Network Operations Center (NOC). He oversees our customer support teams, ensuring that our customers get top-tier technical support. During his two-and-a-half-year tenure at INAP, he’s aligned all support employees to form a team of roughly 95 members who support customers across all INAP products. This shift has yielded fantastic customer survey results and has put INAP on the map for top-notch service.

      Read on to learn more about what makes Matt tick, and how he’s worked to shatter silos in order to build a stronger INAP.

      The interview has been lightly edited for clarity and length.

      What do you think makes the NOC team successful?

Hustle! Customers want to be ‘happily uninvolved’ in their products and services. When they log on, they just want things to work. It’s really that simple. When that does not occur, responsiveness and crisp communication are the key to a good experience. I’ve been in the industry 20+ years now and what sets INAP apart is our product and engineering team. They work around the clock to ensure our platform is robust. When events happen, it’s all hands on deck and we work together to drive resolution. We have industry-leading ASA (Average Speed to Answer) times and our customer surveys reinforce our effort to be a ‘Best In Class’ support organization.

      What do you love about your role in tech? What is the best part of being in the industry?

      I love people, and I love having the ability to set people up for success. I’ve used this phrase over the years: Come, Grow and Go. I want people to come into the support organization. I want them to take advantage of the opportunity to crack into the industry, to learn technology and to grow their skillset. And then I want them to spread their wings and make a difference—make a difference for the company, their families, their community. Having the opportunity to lead in the technology industry is exciting and very rewarding.

      Of all the qualities you possess, which do you think has the greatest influence on your success?

      When I started at INAP, the silos that existed were extreme for a company of our size. I’ve been able to bring folks together. If I were to market my skillset as a leader and what I can do, it’s that I’m pretty damn good at bringing people together. The talent we possess as a company is unrivaled in the industry. We have some of the greatest engineering minds I’ve ever been around. Aligning all of this talent and getting everyone to pull the rope in the same direction is what we will do better than anyone in the days ahead.

      What does a typical day look like for you?

      One thing I tell our leaders is that the NOC life is a ‘lifestyle’. Last night I was on the phone with a customer at 9:30 and helped them work through their issue. It’s a 24/7/365 gig. There’s no walking away at 5 p.m. when you’re done with your last call. We have a great team, we have fun together and the feedback we receive from customers makes it all worth it.

      What advice would you have for someone pursuing a career in tech?

      Zone in on certifications. I talk to my kids about this, about getting their CCNA and the different technology tools that are available. There is a lot and it’s always changing, so if you can get in on the front end of technology, then that’s going to really benefit you.

And my other advice is to be a great teammate. If you’re a great teammate and you work hard and you give your best, it’s going to work out. I’ve seen it time and time again with people who have worked for me. It’s what’s worked for me personally. If you treat people with respect, if you hustle, if you don’t cut corners and you do the small things right, it will all work out. Pay attention to the details. If you focus on the basics, success lies ahead.

      Who are some of the people that have mentored or been your role models throughout your career?

      I love this question because I’ve been incredibly fortunate in this area. So many people to talk about, but I’ll keep it to two. Keith Hayes took me under his wing and gave me my first vice president role. I still talk to him often. He taught me those core principles of treating people with respect, staying grateful and having a servant approach in leadership. Greg Wood is another mentor who has had a lifelong impact on me. He really emphasized the relationship piece. He stressed that you can get things done by yourself, but you can get a lot more done as a team and with people aligned and all focused on the same thing. And he gave me a lot of great tools to do that.

      What are the biggest lessons you’ve learned in your career?

People make the difference. Process, products and services are all critical to success. The foundation of delivering for the business, though, all comes down to having the right people in the right place. Treating people with respect and encouraging them has a greater influence than constant criticism. Mistakes happen, and when they do it’s critical you address them head on. But an energized, motivated workforce can accomplish great things, and that’s what we have in motion here at INAP!

      Laura Vietmeyer






      Apache Network Error AH00072: make_sock: could not bind to address



      Part of the Series:
      Common Apache Errors

      This tutorial series explains how to troubleshoot and fix some of the most common errors that you may encounter when using the Apache web server.

      Each tutorial in this series includes descriptions of common Apache configuration, network, filesystem, or permission errors. The series begins with an overview of the commands and log files that you can use to troubleshoot Apache. Subsequent tutorials examine specific errors in detail.

      Introduction

      An Apache AH00072: make_sock: could not bind to address error message is generated when there is another process listening on the same port that Apache is configured to use. Typically the port will be the standard port 80 for HTTP connections, or port 443 for HTTPS connections. However, any port conflict with another process can cause an AH00072 error.

The error is derived from the underlying operating system’s network stack. The issue is that only a single process can be bound to a given address and port at any one time. If another web server like Nginx is configured to listen on port 80 and it is running, then Apache will not be able to claim the port for itself.
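
If you would like to see this operating system behavior in isolation, here is a minimal Python sketch (not part of the Apache troubleshooting steps in this tutorial) that binds one TCP socket to a port and then attempts a second bind to the same address; on Linux the second bind fails with the same errno 98, Address already in use, that appears in the Apache logs later in this tutorial:

import socket

# First socket claims the port (8080 is used here to avoid needing root privileges).
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(('0.0.0.0', 8080))
first.listen()

# Second socket tries to bind to the exact same address and port, which fails.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(('0.0.0.0', 8080))
except OSError as error:
    print(error)  # [Errno 98] Address already in use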

      To detect a port conflict with Apache, you will need to examine systemctl and journalctl output to determine the IP address and port that are causing the error. Then you can decide how to resolve the issue, whether it is by switching web servers, changing the IP address that Apache uses, the port, or any combination of these options.

      Troubleshooting with systemctl

      Following the troubleshooting steps from the How to Troubleshoot Common Apache Errors tutorial at the beginning of this series, the first step when you are troubleshooting an AH00072: make_sock: could not bind to address error message is to check Apache’s status with systemctl.

If systemctl does not include output that describes the problem, then the Troubleshooting Using journalctl Logs section of this tutorial explains how to examine the systemd logs to find the conflicting port.

      The output from systemctl status will in many cases contain all the diagnostic information that you need to resolve the error. It will include the IP address that Apache is using, as well as the port that it is attempting to bind to. The output will also indicate how long Apache has been unable to start so that you can determine how long the issue has been affecting Apache.

      On Ubuntu and Debian-derived Linux distributions, run the following to check Apache’s status:

      Ubuntu and Debian Systems

      • sudo systemctl status apache2.service -l --no-pager

      On CentOS and Fedora systems, use this command to examine Apache’s status:

      CentOS and Fedora Systems

      • sudo systemctl status httpd.service -l --no-pager

The -l flag will ensure that systemctl outputs the entire contents of a line, instead of substituting in ellipses (…) for long lines. The --no-pager flag will output the entire log to your screen without invoking a tool like less that only shows a screen of content at a time.

      Since you are troubleshooting an AH00072: make_sock error message, you should receive output that is similar to the following:

      Output

● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2020-07-28 13:58:40 UTC; 8s ago
     Docs: man:httpd.service(8)
  Process: 69 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)
 Main PID: 69 (code=exited, status=1/FAILURE)
   Status: "Reading configuration..."
    Tasks: 213 (limit: 205060)
   Memory: 25.9M
   CGroup: /system.slice/containerd.service/system.slice/httpd.service

Jul 28 13:58:40 e3633cbfc65e systemd[1]: Starting The Apache HTTP Server…
Jul 28 13:58:40 e3633cbfc65e httpd[69]: (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
Jul 28 13:58:40 e3633cbfc65e httpd[69]: (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
Jul 28 13:58:40 e3633cbfc65e httpd[69]: no listening sockets available, shutting down
Jul 28 13:58:40 e3633cbfc65e httpd[69]: AH00015: Unable to open logs
Jul 28 13:58:40 e3633cbfc65e systemd[1]: httpd.service: Main process exited, code=exited, status=1/FAILURE
Jul 28 13:58:40 e3633cbfc65e systemd[1]: httpd.service: Failed with result 'exit-code'.
Jul 28 13:58:40 e3633cbfc65e systemd[1]: Failed to start The Apache HTTP Server.

      Note that your output may be slightly different if you are using an Ubuntu or Debian-derived distribution, where the name of the Apache process is not httpd but is apache2.

This example systemctl output includes some highlighted lines from the systemd journal that describe the AH00072 error. These lines, both of which begin with (98)Address already in use: AH00072: make_sock: could not bind to address, give you all the information about the AH00072 error that you need to troubleshoot it further, so you can skip the following journalctl steps and instead proceed to the Troubleshooting with ss and ps Utilities section at the end of this tutorial.

      If your systemctl output does not give specific information about the IP address and port or ports that are causing the AH00072 error, you will need to examine journalctl output from the systemd logs. The following section explains how to use journalctl to troubleshoot an AH00072 error.

      Troubleshooting Using journalctl Logs

      If your systemctl output does not include specifics about an AH00072 error, you should proceed with using the journalctl command to examine systemd logs for Apache.

      On Ubuntu and Debian-derived systems, run the following command:

      • sudo journalctl -u apache2.service --since today --no-pager

      On CentOS, Fedora, and RedHat-derived systems, use this command to inspect the logs:

      • sudo journalctl -u httpd.service --since today --no-pager

      The --since today flag will limit the output of the command to log entries beginning at 00:00:00 of the current day only. Using this option will help restrict the volume of log entries that you need to examine when checking for errors.

      If Apache is unable to bind to a port that is in use, search through the output for lines that are similar to the following log entries, specifically lines that contain the AH00072 error code as highlighted in this example:

      Output

-- Logs begin at Tue 2020-07-14 20:10:37 UTC, end at Tue 2020-07-28 14:01:40 UTC. --
. . .
Jul 28 14:03:01 b06f9c91975d apachectl[71]: (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
Jul 28 14:03:01 b06f9c91975d apachectl[71]: (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
Jul 28 14:03:01 b06f9c91975d apachectl[71]: no listening sockets available, shutting down

      This output indicates two AH00072 errors. The first of these explains that Apache cannot bind to the [::]:80 address, which is port 80 on all available IPv6 interfaces. The next line, with the address 0.0.0.0:80, indicates Apache cannot bind to port 80 on all available IPv4 interfaces. Depending on your system’s configuration, the IP addresses may be different and only show individual IPs, and may only include IPv4 or IPv6 errors.

      Even though your own system may have different conflicting interfaces and ports, the errors will be similar to the output shown here. With output from journalctl you will be able to diagnose the issue using ss in the following section of this tutorial.

      Troubleshooting with ss and ps Utilities

      To troubleshoot an AH00072 error you need to determine what other process is listening on the IP address and port that Apache is attempting to use. Most modern Linux distributions include a utility called ss which can be used to gather information about the state of a system’s network sockets.

      In the previous journalctl section, something was already bound to the IPv4 and IPv6 addresses on port 80. The following command will determine the name of the process that is already bound to an IPv4 interface on port 80. Ensure that you substitute the port from the error message if it is different from 80 in the following command:

      • sudo ss -4 -tlnp | grep 80

      The flags to the ss command alter its default output in the following ways:

      • -4 restricts ss to only display IPv4-related socket information.
      • -t restricts the output to tcp sockets only.
      • -l displays all listening sockets with the -4 and -t restrictions taken into account.
• -n ensures that port numbers are displayed, as opposed to protocol names like http or https. This is important since Apache may be attempting to bind to a non-standard port, and a service name can be more confusing than the actual port number.
      • -p outputs information about the process that is bound to a port.

      With all of those flags, you will receive output like the following:

      Output

      LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=40,fd=6))

      The first three fields are not important when troubleshooting an AH00072 error so they can be ignored. The important fields are the fourth (0.0.0.0:80), which matches the journalctl error that you discovered earlier, along with the last users:(("nginx",pid=40,fd=6)), specifically the pid=40 portion.

      If you have an AH00072 error that is related to an IPv6 interface, repeat the ss invocation, this time using the -6 flag to restrict the interfaces to the IPv6 network stack like this:

• sudo ss -6 -tlnp | grep 80

      Output

      LISTEN 0 511 [::]:80 [::]:* users:(("nginx",pid=40,fd=7))

      Again, substitute the port number in question from your journalctl output if it is different from the highlighted 80 given here.

      In both these cases of IPv4 and IPv6 errors, the ss output indicates that there is a program with process ID 40 (the pid=40 in the output) that is bound to the 0.0.0.0:80 and [::]:80 interfaces respectively. This process is preventing Apache from starting since it already owns the port. To determine the name of the program, use the ps utility like this, substituting the process ID from your output in place of the highlighted 40 value in this example:
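
      • sudo ps -p 40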

      You will receive output that is similar to the following:

      Output

  PID TTY          TIME CMD
   40 ?        00:00:00 nginx

      The highlighted nginx in the output is the name of the process that is listening on the interfaces. Now that you have the name of the program that is preventing Apache from starting, you can decide how to resolve the error. You could stop the nginx process, reconfigure nginx to listen on a different interface and port, or reconfigure Apache to avoid the port collision.
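
      For example, if Nginx is not needed on this server, one possible resolution (assuming Nginx is managed by systemd, as in the output above) is to stop and disable it, then start Apache again:

      • sudo systemctl stop nginx
      • sudo systemctl disable nginx
      • sudo systemctl start apache2

      On CentOS and Fedora systems, substitute httpd.service for apache2 in the last command.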

      It is important to note that the process may be different from nginx and the port and IP addresses may not always be 0.0.0.0 or [::] if you are diagnosing an AH00072 error. Oftentimes, different web servers and proxies will be in use on the same server. Each may be attempting to bind to different IPv4 ports and IPv6 interfaces to handle different web traffic. For example, a server that is configured with HAProxy listening on the IPv4 loopback address (also referred to as localhost) on port 8080 will show ss output like this:

      Output

      LISTEN 0 2000 127.0.0.1:8080 0.0.0.0:* users:(("haproxy",pid=545,fd=7))

      It is important to combine systemctl output, or journalctl output that indicates specific IP addresses and ports, with diagnostic data from ss, and then ps to narrow down the process that is causing Apache to fail to start.

      Conclusion

      In this tutorial you learned how to troubleshoot an Apache AH00072 make_sock: could not bind to address error message on both IPv4 and IPv6 interfaces. You learned how to use systemctl to examine the status of the Apache server and try to find error messages. You also learned how to use journalctl to examine the systemd logs for specific information about an AH00072 error.

      With the appropriate error messages from the logs, you then learned about the ss utility and how to use it to examine the state of a system’s network sockets. After that you learned how to combine process ID information from ss with the ps utility to find the name of the process that is causing Apache to be unable to start.




      How To Trick a Neural Network in Python 3


      The author selected Dev Color to receive a donation as part of the Write for DOnations program.

Could a neural network for animal classification be fooled? Fooling an animal classifier may have few consequences, but what if our face authenticator could be fooled? Or our self-driving car prototype’s software? Fortunately, legions of engineers and researchers stand between a prototype computer-vision model and the production-quality models on our mobile devices or cars. Still, these risks have significant implications and are important to consider as a machine-learning practitioner.

      In this tutorial, you will try “fooling” or tricking an animal classifier. As you work through the tutorial, you’ll use OpenCV, a computer-vision library, and PyTorch, a deep learning library. You will cover the following topics in the associated field of adversarial machine learning:

      • Create a targeted adversarial example. Pick an image, say, of a dog. Pick a target class, say, a cat. Your goal is to trick the neural network into believing the pictured dog is a cat.
      • Create an adversarial defense. In short, protect your neural network against these tricky images, without knowing what the trick is.

      By the end of the tutorial, you will have a tool for tricking neural networks and an understanding of how to defend against tricks.

      Prerequisites

      To complete this tutorial, you will need the following:

      Step 1 — Creating Your Project and Installing Dependencies

      Let’s create a workspace for this project and install the dependencies you’ll need. You’ll call your workspace AdversarialML:
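
      • mkdir ~/AdversarialML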

      Navigate to the AdversarialML directory:
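
      • cd ~/AdversarialML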

      Make a directory to hold all your assets:

      • mkdir ~/AdversarialML/assets

      Then create a new virtual environment for the project:

      • python3 -m venv adversarialml

      Activate your environment:

      • source adversarialml/bin/activate

      Then install PyTorch, a deep-learning framework for Python that you’ll use in this tutorial.

On macOS, install PyTorch with the following command:

      • python -m pip install torch==1.2.0 torchvision==0.4.0

      On Linux and Windows, use the following commands for a CPU-only build:

      • pip install torch==1.2.0+cpu torchvision==0.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
      • pip install torchvision

Now install prepackaged binaries for OpenCV and numpy, which are libraries for computer vision and linear algebra, respectively. OpenCV offers utilities such as image rotations, and numpy offers linear algebra utilities such as matrix inversion:

      • python -m pip install opencv-python==3.4.3.18 numpy==1.14.5

      On Linux distributions, you will need to install libSM.so:

      • sudo apt-get install libsm6 libxext6 libxrender-dev

      With the dependencies installed, let’s run an animal classifier called ResNet18, which we describe next.

      Step 2 — Running a Pretrained Animal Classifier

      The torchvision library, the official computer vision library for PyTorch, contains pretrained versions of commonly used computer vision neural networks. These neural networks are all trained on ImageNet 2012, a dataset of 1.2 million training images with 1000 classes. These classes include vehicles, places, and most importantly, animals. In this step, you will run one of these pretrained neural networks, called ResNet18. We will refer to ResNet18 trained on ImageNet as an “animal classifier”.

What is ResNet18? ResNet18 is the smallest neural network in a family of neural networks called residual neural networks, developed by MSR (He et al.). In short, He et al. found that a neural network (denoted as a function f, with input x, and output f(x)) would perform better with a “residual connection” x + f(x). This residual connection is used prolifically in state-of-the-art neural networks even today, for example in FBNetV2 and FBNetV3.
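
      To make the idea concrete, here is a toy sketch of a residual connection in PyTorch. It only illustrates the x + f(x) pattern; it is not the actual block used inside ResNet18:

      import torch
      import torch.nn as nn

      class ToyResidualBlock(nn.Module):
          """Illustrative only: the output is the input plus a learned function of the input."""
          def __init__(self, channels):
              super().__init__()
              self.f = nn.Sequential(
                  nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                  nn.ReLU(),
                  nn.Conv2d(channels, channels, kernel_size=3, padding=1),
              )

          def forward(self, x):
              return x + self.f(x)  # the residual connection

      block = ToyResidualBlock(channels=8)
      print(block(torch.rand(1, 8, 32, 32)).shape)  # torch.Size([1, 8, 32, 32])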

      Download this image of a dog with the following command:

      • wget -O assets/dog.jpg https://www.xpresservers.com/wp-content/uploads/2020/06/How-To-Trick-a-Neural-Network-in-Python-3.png

      Image of corgi running near pond

      Then, download a JSON file to convert neural network output to a human-readable class name:

      • wget -O assets/imagenet_idx_to_label.json https://raw.githubusercontent.com/do-community/tricking-neural-networks/master/utils/imagenet_idx_to_label.json

      Next, create a script to run your pretrained model on the dog image. Create a new file called step_2_pretrained.py:

      • nano step_2_pretrained.py

      First, add the Python boilerplate by importing the necessary packages and declaring a main function:

      step_2_pretrained.py

      from PIL import Image
      import json
      import torchvision.models as models
      import torchvision.transforms as transforms
      import torch
      import sys
      
      def main():
          pass
      
      if __name__ == '__main__':
          main()
      

      Next, load the mapping from neural network output to human-readable class names. Add this directly after your import statements and before your main function:

      step_2_pretrained.py

      . . .
      def get_idx_to_label():
          with open("assets/imagenet_idx_to_label.json") as f:
              return json.load(f)
      . . .
      

      Create an image transformation function that will ensure your input image firstly has the correct dimensions, and secondly is normalized correctly. Add the following function directly after the last:

      step_2_pretrained.py

      . . .
      def get_image_transform():
          transform = transforms.Compose([
            transforms.Resize(224),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
          ])
          return transform
      . . .
      

      In get_image_transform, you define a number of different transformations to apply to the images that are passed to your neural network:

      • transforms.Resize(224): Resizes the smaller side of the image to 224. For example, if your image is 448 x 672, this operation would downsample the image to 224 x 336.
      • transforms.CenterCrop(224): Takes a crop from the center of the image, of size 224 x 224.
      • transforms.ToTensor(): Converts the image into a PyTorch tensor. All PyTorch models require PyTorch tensors as input.
      • transforms.Normalize(mean=..., std=...): Standardizes your input by subtracting the mean, then dividing by the standard deviation. This is described more precisely in the torchvision documentation.
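
      If you want to convince yourself of what these transformations do before wiring them into the script, the following short snippet is a standalone check (using a hypothetical solid-gray dummy image; it is not part of step_2_pretrained.py). It applies the same pipeline and prints the resulting shape and one normalized value:

      from PIL import Image
      import torchvision.transforms as transforms

      transform = transforms.Compose([
          transforms.Resize(224),
          transforms.CenterCrop(224),
          transforms.ToTensor(),
          transforms.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225]),
      ])

      dummy = Image.new('RGB', (672, 448), color=(128, 128, 128))  # 672 x 448 solid gray image
      tensor = transform(dummy)
      print(tensor.shape)     # torch.Size([3, 224, 224])
      print(tensor[0, 0, 0])  # roughly (128/255 - 0.485) / 0.229 ≈ 0.074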

      Add a utility to predict the animal class, given the image. This method uses both the previous utilities to perform animal classification:

      step_2_pretrained.py

      . . .
      def predict(image):
          model = models.resnet18(pretrained=True)
          model.eval()
      
          out = model(image)
      
          _, pred = torch.max(out, 1)  
          idx_to_label = get_idx_to_label()  
          cls = idx_to_label[str(int(pred))]  
          return cls
      . . .
      

      Here the predict function classifies the provided image using a pretrained neural network:

      • models.resnet18(pretrained=True): Loads a pretrained neural network called ResNet18.
      • model.eval(): Modifies the model in-place to run in ‘evaluation’ mode. The only other mode is ‘training’ mode, but training mode isn’t needed, as you aren’t training the model (that is, updating the model’s parameters) in this tutorial.
      • out = model(image): Runs the neural network on the provided, transformed image.
      • _, pred = torch.max(out, 1): The neural network outputs one probability for each possible class. This step computes the index of the class with the highest probability. For example, if out = [0.4, 0.1, 0.2], then pred = 0.
      • idx_to_label = get_idx_to_label(): Obtains a mapping from class index to human-readable class names. For example, the mapping could be {0: cat, 1: dog, 2: fish}.
• cls = idx_to_label[str(int(pred))]: Converts the predicted class index to a class name. The examples provided in the last two bullet points would yield cls = idx_to_label[0] = 'cat'.

      Next, following the last function, add a utility to load images:

      step_2_pretrained.py

      . . .
      def load_image():
          assert len(sys.argv) > 1, 'Need to pass path to image'
          image = Image.open(sys.argv[1])
      
          transform = get_image_transform()
          image = transform(image)[None]
          return image
      . . .
      

      This will load an image from the path provided in the first argument to the script. transform(image)[None] applies the sequence of image transformations defined in the previous lines.

      Finally, populate your main function with the following, to load your image and classify the animal in the image:

      step_2_pretrained.py

      def main():
          x = load_image()
          print(f'Prediction: {predict(x)}')
      

      Double check that your file matches our final step 2 script at step_2_pretrained.py on GitHub. Save and exit your script, and run the animal classifier:

      • python step_2_pretrained.py assets/dog.jpg

      This will produce the following output, showing your animal classifier works as expected:

      Output

      Prediction: Pembroke, Pembroke Welsh corgi

That concludes running inference with your pretrained model. Next, you will see an adversarial example in action by tricking a neural network with imperceptible differences in the image.

      Step 3 — Trying an Adversarial Example

      Now, you will synthesize an adversarial example, and test the neural network on that example. For this tutorial, you will build adversarial examples of the form x + r, where x is the original image and r is some “perturbation”. You will eventually create the perturbation r yourself, but in this step, you will download one we created for you beforehand. Start by downloading the perturbation r:

      • wget -O assets/adversarial_r.npy https://github.com/do-community/tricking-neural-networks/blob/master/outputs/adversarial_r.npy?raw=true

      Now composite the picture with the perturbation. Create a new file called step_3_adversarial.py:

      • nano step_3_adversarial.py

      In this file, you will perform the following three-step process, to produce an adversarial example:

      1. Transform an image
      2. Apply the perturbation r
      3. Inverse transform the perturbed image

      At the end of step 3, you will have an adversarial image. First, import the necessary packages and declare a main function:

      step_3_adversarial.py

      from PIL import Image
      import torchvision.transforms as transforms
      import torch
      import numpy as np
      import os
      import sys
      
      from step_2_pretrained import get_idx_to_label, get_image_transform, predict, load_image
      
      
      def main():
          pass
      
      
      if __name__ == '__main__':
          main()
      

      Next, create an “image transformation” that inverts the earlier image transformation. Place this after your imports, before the main function:

      step_3_adversarial.py

      . . .
      def get_inverse_transform():
          return transforms.Normalize(
        mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],  # INVERSE normalize images, according to https://pytorch.org/docs/stable/torchvision/models.html
        std=[1/0.229, 1/0.224, 1/0.225])
      . . .
      

As before, the transforms.Normalize operation subtracts the mean and divides by the standard deviation (that is, for the original image x, y = transforms.Normalize(mean=u, std=o)(x) = (x - u) / o). With a little algebra you can define a new operation that reverses this normalization: transforms.Normalize(mean=-u/o, std=1/o) applied to y gives (y - (-u/o)) / (1/o) = (y + u/o) * o = y*o + u = x.
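
      As a quick standalone sanity check (not part of step_3_adversarial.py), you can confirm that this inverse operation undoes the normalization on a random tensor:

      import torch
      import torchvision.transforms as transforms

      normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                       std=[0.229, 0.224, 0.225])
      inverse = transforms.Normalize(mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
                                     std=[1/0.229, 1/0.224, 1/0.225])

      x = torch.rand(3, 224, 224)                # stand-in for a transformed image tensor
      recovered = inverse(normalize(x.clone()))  # clone in case Normalize operates in place
      print(torch.allclose(x, recovered, atol=1e-5))  # True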

      As part of the inverse transformation, add a method that transforms a PyTorch tensor back to a PIL image. Add this following the last function:

      step_3_adversarial.py

      . . .
      def tensor_to_image(tensor):
          x = tensor.data.numpy().transpose(1, 2, 0) * 255.  
          x = np.clip(x, 0, 255)
          return Image.fromarray(x.astype(np.uint8))
      . . .
      
      • tensor.data.numpy() converts the PyTorch tensor into a NumPy array. .transpose(1, 2, 0) rearranges (channels, width, height) into (height, width, channels). This NumPy array is approximately in the range (0, 1). Finally, multiply by 255 to ensure the image is now in the range (0, 255).
      • np.clip ensures that all values in the image are between (0, 255).
      • x.astype(np.uint8) ensures all image values are integers. Finally, Image.fromarray(...) creates a PIL image object from the NumPy array.

      Then, use these utilities to create the adversarial example with the following:

      step_3_adversarial.py

      . . .
      def get_adversarial_example(x, r):
          y = x + r
          y = get_inverse_transform()(y[0])
          image = tensor_to_image(y)
          return image
      . . .
      

      This function generates the adversarial example as described at the start of the section:

      1. y = x + r. Take your perturbation r and add it to the original image x.
      2. get_inverse_transform: Obtain and apply the reverse image transformation you defined several lines earlier.
      3. tensor_to_image: Finally, convert the PyTorch tensor back to an image object.

      Finally, modify your main function to load the image, load the adversarial perturbation r, apply the perturbation, save the adversarial example to disk, and run prediction on the adversarial example:

      step_3_adversarial.py

      def main():
          x = load_image()
          r = torch.Tensor(np.load('assets/adversarial_r.npy'))
      
          # save perturbed image
          os.makedirs('outputs', exist_ok=True)
          adversarial = get_adversarial_example(x, r)
          adversarial.save('outputs/adversarial.png')
      
          # check prediction is new class
          print(f'Old prediction: {predict(x)}')
          print(f'New prediction: {predict(x + r)}')
      

      Your completed file should match step_3_adversarial.py on GitHub. Save the file, exit the editor, and launch your script with:

      • python step_3_adversarial.py assets/dog.jpg

      You’ll see this output:

      Output

Old prediction: Pembroke, Pembroke Welsh corgi
New prediction: goldfish, Carassius auratus

      You’ve now created an adversarial example: tricking the neural network into thinking a corgi is a goldfish. In the next step, you will actually create the perturbation r that you used here.

      Step 4 — Understanding an Adversarial Example

      For a primer on classification, see “How to Build an Emotion-Based Dog Filter”.

      Taking a step back, recall that your classification model outputs a probability for each class. During inference, the model predicts the class with the highest probability. During training, you update the model parameters t to maximize the probability of the correct class y, given your data x.

      argmax_y P(y|x,t)
      

      However, to generate adversarial examples, you now modify your goal. Instead of finding a class, your goal is now to find a new image, x. Take any class other than the correct one. Let us call this new class w. Your new objective is to maximize the probability of the wrong class.

      argmax_x P(w|x)
      

      Note that the neural network weights t are missing from the above expression. This is because you now assume the role of the adversary: Someone else has trained and deployed a model. You are only allowed to create adversarial inputs and are not allowed to modify the deployed model. To generate the adversarial example x, you can run “training”, except instead of updating the neural network weights, you update the input image with the new objective.

      As a reminder, for this tutorial, you assume that the adversarial example is an affine transformation of x. In other words, your adversarial example takes the form x + r for some r. In the next step, you will write a script to generate this r.
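
      Written in terms of the loss you will actually minimize in the next step, this objective becomes the following (a restatement under the same assumptions, where f is the fixed, pretrained network and nn.CrossEntropyLoss stands in for -log P(w|x + r)):

      argmin_r CrossEntropyLoss(f(x + r), w),  keeping every entry of r small

      Minimizing the cross-entropy of the wrong class w is the same as maximizing its probability, and keeping r small is what makes the change imperceptible.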

      Step 5 — Creating an Adversarial Example

      In this step, you will learn a perturbation r, so that your corgi is misclassified as a goldfish. Create a new file called step_5_perturb.py:
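
      • nano step_5_perturb.py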

      Import the necessary packages and declare a main function:

      step_5_perturb.py

      from torch.autograd import Variable
      import torchvision.models as models
      import torch.nn as nn
      import torch.optim as optim
      import numpy as np
      import torch
      import os
      
      from step_2_pretrained import get_idx_to_label, get_image_transform, predict, load_image
      from step_3_adversarial import get_adversarial_example
      
      
      def main():
          pass
      
      
      if __name__ == '__main__':
          main()
      

      Directly following your imports and before the main function, define two constants:

      step_5_perturb.py

      . . .
      TARGET_LABEL = 1
      EPSILON = 10 / 255.
      . . .
      

The first constant, TARGET_LABEL, is the class to misclassify the corgi as; in this case, index 1 corresponds to “goldfish”. The second constant, EPSILON (10/255, or roughly 0.04), is the maximum amount of perturbation allowed for each image value. This limit is introduced so that the image is only imperceptibly altered.

      Following your two constants, add a helper function to define a neural network and the perturbation parameter r:

      step_5_perturb.py

      . . .
      def get_model():
          net = models.resnet18(pretrained=True).eval()
          r = nn.Parameter(data=torch.zeros(1, 3, 224, 224), requires_grad=True)
          return net, r
      . . .
      
• models.resnet18(pretrained=True) loads a pretrained neural network called ResNet18, like before. Also like before, you set the model to evaluation mode using .eval().
      • nn.Parameter(...) defines a new perturbation r, the size of the input image. The input image is also of size (1, 3, 224, 224). The requires_grad=True keyword argument ensures that you can update this perturbation r in later lines, in this file.

      Next, begin modifying your main function. Start by loading the model net, loading the inputs x, and defining the label label:

      step_5_perturb.py

      . . .
      def main():
          print(f'Target class: {get_idx_to_label()[str(TARGET_LABEL)]}')
          net, r = get_model()
          x = load_image()
          labels = Variable(torch.Tensor([TARGET_LABEL])).long()
        . . .
      

      Next, define both the criterion and the optimizer in your main function. The former tells PyTorch what the objective is—that is, what loss to minimize. The latter tells PyTorch how to train your parameter r:

      step_5_perturb.py

      . . .
          criterion = nn.CrossEntropyLoss()
          optimizer = optim.SGD([r], lr=0.1, momentum=0.1)
      . . .
      

      Directly following, add the main training loop for your parameter r:

      step_5_perturb.py

      . . .
          for i in range(30):
              r.data.clamp_(-EPSILON, EPSILON)
              optimizer.zero_grad()
      
              outputs = net(x + r)
              loss = criterion(outputs, labels)
              loss.backward()
              optimizer.step()
      
              _, pred = torch.max(outputs, 1)
              if i % 5 == 0:
                  print(f'Loss: {loss.item():.2f} / Class: {get_idx_to_label()[str(int(pred))]}')
      . . .
      

      On each iteration of this training loop, you:

      • r.data.clamp_(...): Ensure the parameter r is small, within EPSILON of 0.
      • optimizer.zero_grad(): Clear any gradients you computed in the previous iteration.
• net(x + r): Run inference on the modified image x + r.
• Compute the loss with criterion(outputs, labels).
• Compute the gradient with loss.backward().
• Take a gradient descent step with optimizer.step().
• Compute the prediction pred with torch.max.
• Finally, report the loss and predicted class with print(...).

      Next, save the final perturbation r:

      step_5_perturb.py

      def main():
          . . .
          for i in range(30):
              . . .
          . . .
          np.save('outputs/adversarial_r.npy', r.data.numpy())
      

      Directly following, still in the main function, save the perturbed image:

      step_5_perturb.py

      . . .
          os.makedirs('outputs', exist_ok=True)
          adversarial = get_adversarial_example(x, r)
      

      Finally, run prediction on both the original image and the adversarial example:

      step_5_perturb.py

          print(f'Old prediction: {predict(x)}')
          print(f'New prediction: {predict(x + r)}')
      

      Double check your script matches step_5_perturb.py on GitHub. Save, exit, and run the script:

      • python step_5_perturb.py assets/dog.jpg

      Your script will output the following.

      Output

Target class: goldfish, Carassius auratus
Loss: 17.03 / Class: Pembroke, Pembroke Welsh corgi
Loss: 8.19 / Class: Pembroke, Pembroke Welsh corgi
Loss: 5.56 / Class: Pembroke, Pembroke Welsh corgi
Loss: 3.53 / Class: Pembroke, Pembroke Welsh corgi
Loss: 1.99 / Class: Pembroke, Pembroke Welsh corgi
Loss: 1.00 / Class: goldfish, Carassius auratus
Old prediction: Pembroke, Pembroke Welsh corgi
New prediction: goldfish, Carassius auratus

      The last two lines indicate you have now completed construction of an adversarial example from scratch. Your neural network now classifies a perfectly reasonable corgi image as a goldfish.

      You’ve now shown that neural networks can be fooled easily—what’s more, the lack of robustness to adversarial examples has significant consequences. A natural next question is this: How can you combat adversarial examples? A good amount of research has been conducted by various organizations, including OpenAI. In the next section, you’ll run a defense to thwart this adversarial example.

      Step 6 — Defending Against Adversarial Examples

      In this step, you will implement a defense against adversarial examples. The idea is the following: You are now the owner of the animal classifier being deployed to production. You don’t know what adversarial examples may be generated, but you can modify the image or the model to protect against attacks.

      Before you defend, you should see for yourself how imperceptible the image manipulation is. Open both of the following images:

      1. assets/dog.jpg
      2. outputs/adversarial.png

Here they are shown side by side; your original image will have a different aspect ratio. Can you tell which is the adversarial example?

(left) Corgi as goldfish, adversarial; (right) Corgi as itself, not adversarial

      Notice that the new image looks identical to the original. As it turns out, the left image is your adversarial image. To be certain, download the image and run your evaluation script:

      • wget -O assets/adversarial.png https://github.com/alvinwan/fooling-neural-network/blob/master/outputs/adversarial.png?raw=true
      • python step_2_pretrained.py assets/adversarial.png

      This will output the goldfish class, to prove its adversarial nature:

      Output

      Prediction: goldfish, Carassius auratus

      You will run a fairly naive, but effective, defense: Compress the image by writing to a lossy JPEG format. Open the Python interactive prompt:
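
      • python3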

      Then, load the adversarial image as PNG, and save it back as a JPEG.

      • from PIL import Image
      • image = Image.open('assets/adversarial.png')
      • image.save('outputs/adversarial.jpg')

      Type CTRL + D to leave the Python interactive prompt. Next, run inference with your model on the compressed adversarial example:

      • python step_2_pretrained.py outputs/adversarial.jpg

      This will now output the corgi class, proving the efficacy of your naive defense.

      Output

      Prediction: Pembroke, Pembroke Welsh corgi

You’ve now completed your very first adversarial defense. Note that this defense does not require knowing how the adversarial example was generated, which is what makes it an effective defense. There are also many other forms of defense, many of which involve retraining the neural network. However, those retraining procedures are a topic of their own and beyond the scope of this tutorial. With that, this concludes your guide to adversarial machine learning.

      Conclusion

      To understand the implications of your work in this tutorial, revisit the two images side-by-side—the original and the adversarial example.

(left) Corgi as goldfish, adversarial; (right) Corgi as itself, not adversarial

Despite the fact that both images look identical to the human eye, one has been manipulated to fool your model: both images clearly feature a corgi, and yet the model is entirely confident that the adversarial image contains a goldfish. This should concern you and, as you wrap up this tutorial, keep in mind the fragility of your model. Just by applying a simple transformation, you can fool it. These are real, plausible dangers that evade even cutting-edge research. Research beyond machine-learning security is just as susceptible to these flaws, and, as a practitioner, it is up to you to apply machine learning safely. For more readings, check out the following links:

      For more machine learning content and tutorials, you can visit our Machine Learning Topic page.


