
      Detect Responsive Screen Sizes in Angular


      Most of the time, we use CSS media queries to handle responsive screen size changes and lay out our content differently. However, there are times when CSS media queries alone aren't sufficient. In those cases, we need to handle the responsiveness in our code.

      In this article, I would like to share how to detect responsive breakpoints in Angular, with a twist: we won't maintain responsive breakpoint sizes in our TypeScript code (because the responsive breakpoints are already defined in CSS).

      We will use Angular with Bootstrap in this example, but the approach works with any CSS framework and classes. Let's start.

      What’s the Plan

      We will be using CSS classes to determine the current responsive breakpoint. There are 5 breakpoints in Bootstrap CSS. The CSS classes that control the visibility of each breakpoint are:

      • Visible only on xs: .d-block .d-sm-none
      • Visible only on sm: .d-none .d-sm-block .d-md-none
      • Visible only on md: .d-none .d-md-block .d-lg-none
      • Visible only on lg: .d-none .d-lg-block .d-xl-none
      • Visible only on xl: .d-none .d-xl-block

      The CSS display property will be toggled between none and block. We will apply these classes to HTML elements.

      Every time the screen size changes, we will loop through these elements and find the one with display: block. That is how we will detect the current breakpoint.

      Here is the code if you can't wait to see the solution: https://stackblitz.com/edit/angular-size.

      The Implementation: Component

      Let’s create an Angular component size-detector.

      The component HTML template:

      <!-- size-detector.component.html -->
      <div *ngFor="let s of sizes" class="{{s.css + ' ' + (prefix + s.id) }}">{{s.name}}</div>
      

      The component Typescript code:

      // size-detector.component.ts
      ...
      export class SizeDetectorComponent implements AfterViewInit {
        prefix = 'is-';
        sizes = [
          {
            id: SCREEN_SIZE.XS, name: 'xs', css: `d-block d-sm-none`
          },
          {
            id: SCREEN_SIZE.SM, name: 'sm', css: `d-none d-sm-block d-md-none`
          },
          {
            id: SCREEN_SIZE.MD, name: 'md', css: `d-none d-md-block d-lg-none`
          },
          {
            id: SCREEN_SIZE.LG, name: 'lg', css: `d-none d-lg-block d-xl-none`
          },
          {
            id: SCREEN_SIZE.XL, name: 'xl', css: `d-none d-xl-block`
          },
        ];
      
        @HostListener("window:resize", [])
        private onResize() {
          this.detectScreenSize();
        }
      
        ngAfterViewInit() {
          this.detectScreenSize();
        }
      
        private detectScreenSize() {
          // we will write this logic later
        }
      }
      

      After looking at the component code, you might be wondering where those SCREEN_SIZE.* values come from. They come from an enum. Let's create the screen size enum (you may create a new file or just place the enum in the same component file):

      // screen-size.enum.ts
      
      /* An enum that defines all screen sizes the application supports */
      export enum SCREEN_SIZE {
        XS,
        SM,
        MD,
        LG,
        XL
      }
      

      Also, remember to add Bootstrap to your project! You may add it via npm or yarn, but in this example we will use the easier way: add the CDN link to index.html.

      <!-- index.html -->
      <link rel="stylesheet" 
          href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css">
      

      The code is pretty self-explanatory:

      1. First, we define a list of the sizes we support and the CSS classes used to determine each breakpoint.
      2. In the HTML, we loop through the size list, create a div element for each size, assign the CSS classes, and display the size name. Note that we also give each div an additional unique CSS class, is-<SIZE_ENUM>.
      3. We have a function detectScreenSize. This is where we will write our logic to detect screen size changes. We will complete it later.
      4. We need to run the logic every time the screen size changes, so we use the HostListener decorator to listen to the window resize event.
      5. We also need to run the logic when the application first initializes, so we run it in the AfterViewInit component lifecycle hook.

      The Implementation: Service & Component

      Now that the component code is “almost” ready, let’s start implementing our resize service.

      // resize.service.ts
      
      @Injectable()
      export class ResizeService {
      
        get onResize$(): Observable<SCREEN_SIZE> {
          return this.resizeSubject.asObservable().pipe(distinctUntilChanged());
        }
      
        private resizeSubject: Subject<SCREEN_SIZE>;
      
        constructor() {
          this.resizeSubject = new Subject();
        }
      
        onResize(size: SCREEN_SIZE) {
          this.resizeSubject.next(size);
        }
      
      }
      

      The resize service code is simple:

      1. We create an RxJS subject, resizeSubject.
      2. We have a public method onResize that receives a size as its parameter and pushes the value into the resize stream. (We will call this method later from our size-detector component.)
      3. Notice that we use the distinctUntilChanged operator in the resize observable. We use it to reduce unnecessary notifications. For example, when your screen width changes from 200px to 300px, it is still considered xs in Bootstrap, so there is no need to notify. (You can remove the operator if you need to.)
      4. We expose the resize stream as an observable via onResize$. Any component, service, directive, etc. can then subscribe to this stream to get notified whenever the size changes (see the wiring sketch after this list).

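      One thing the snippets above don’t show is where ResizeService is provided and where SizeDetectorComponent is declared. Here is a minimal wiring sketch (the module and import names are assumptions based on a default Angular CLI project; you could equally use providedIn: 'root' on the service instead):

      // app.module.ts (illustrative sketch, not part of the original snippets)
      ...
      @NgModule({
        declarations: [AppComponent, HelloComponent, SizeDetectorComponent],
        imports: [BrowserModule],
        providers: [ResizeService], // makes ResizeService injectable app-wide
        bootstrap: [AppComponent]
      })
      export class AppModule { }
      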
      Next, let’s go back to our size-detector component and update the detectScreenSize logic.

      // size-detector.component.ts
      ...
      
      constructor(private elementRef: ElementRef, private resizeSvc: ResizeService) { }
      
      private detectScreenSize() {
        const currentSize = this.sizes.find(x => {
          // get the HTML element
          const el = this.elementRef.nativeElement.querySelector(`.${this.prefix}${x.id}`);
      
          // check its display property value
          const isVisible = window.getComputedStyle(el).display != 'none';
      
          return isVisible;
        });
      
        this.resizeSvc.onResize(currentSize.id);
      }
      
      ...
      

      Let’s break down and go through the logic together:

      1. First, we need to inject the ElementRef and our newly created ResizeService into our component.
      2. Based on our CSS classes, at any point in time there will be ONLY ONE visible HTML element. We loop through our sizes array to find it.
      3. For each size in our sizes array, we use the element's querySelector to find the element by the unique CSS class we defined earlier, is-<SIZE_ENUM>.
      4. Once we find the current visible element, we then notify our resize service by calling the onResize method.

      Using the Service and Component

      You may place the size-detector component under our root component app-component. For example:

      <!-- app.component.html -->
      
      <hello name="{{ name }}"></hello>
      <!-- Your size-detector component place here -->
      <app-size-detector></app-size-detector>
      

      In this example, I have another hello-component in the app-component, but that doesn’t matter.

      Since I placed the component in app-component, I can use the ResizeService everywhere (in directives, components, services, etc.).

      For instance, let’s say I want to detect screen size changes in hello-component. I can do so by injecting the ResizeService in the constructor, then subscribing to the onResize$ observable and doing whatever I need.

      // hello.component.ts
      
      @Component({
        selector: 'hello',
        template: `<h1>Hello {{size}}!</h1>`,
      })
      export class HelloComponent  {
      
        size: SCREEN_SIZE;
      
        constructor(private resizeSvc: ResizeService) { 
          // subscribe to the size change stream
          this.resizeSvc.onResize$.subscribe(x => {
            this.size = x;
          });
        }
      
      }
      

      In the above code, we detect the screen size changes and simply display the current screen size value.
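
      One caveat: the subscription above is never cleaned up, which is fine for a long-lived demo component but can leak if the component gets destroyed and re-created. A minimal sketch of tearing it down in ngOnDestroy (the sub field and the OnDestroy/Subscription imports are additions, not part of the original example):

      // hello.component.ts (variation with cleanup)
      ...
      export class HelloComponent implements OnDestroy {
      
        size: SCREEN_SIZE;
        private sub: Subscription;
      
        constructor(private resizeSvc: ResizeService) {
          this.sub = this.resizeSvc.onResize$.subscribe(x => this.size = x);
        }
      
        ngOnDestroy() {
          // stop listening when the component is destroyed
          this.sub.unsubscribe();
        }
      }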

      See it in action!

      One real-life use case: you have an accordion on the screen. On mobile, you would like to collapse all accordion panels and show only the active one at a time. On desktop, however, you might want to expand all panels.

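      For example, a component backing that accordion could subscribe to the ResizeService and derive a flag from the reported size. A rough sketch (AccordionComponent and expandAll are made-up names for illustration):

      // accordion.component.ts (illustrative sketch)
      ...
      export class AccordionComponent {
      
        // expand all panels on md and larger, collapse to one panel on xs/sm
        expandAll = false;
      
        constructor(private resizeSvc: ResizeService) {
          this.resizeSvc.onResize$.subscribe(size => {
            // SCREEN_SIZE is a numeric enum declared in order XS, SM, MD, LG, XL
            this.expandAll = size >= SCREEN_SIZE.MD;
          });
        }
      }
      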
      Summary

      This is how we can detect screen size changes without maintaining the actual breakpoint sizes in our TypeScript code. Here is the code: https://stackblitz.com/edit/angular-size.

      If you think about it, it is not very often that the user changes the screen size while browsing the app. You may handle screen size changes application-wide (like our example above) or handle them only where you need them (on a per use case / per component basis).

      Besides that, if you don’t mind duplicating and maintaining the breakpoint sizes in JavaScript code, you may remove the component, move detectScreenSize into your service, and adjust the logic a bit. It is not difficult to implement. (Give it a try?)

      That’s all. Happy coding!




      How To Detect and Extract Faces from an Image with OpenCV and Python


      The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Images make up a large amount of the data that gets generated each day, which makes the ability to process these images important. One method of processing images is via face detection. Face detection is a branch of image processing that uses machine learning to detect faces in images.

      A Haar Cascade is an object detection method used to locate an object of interest in images. The algorithm is trained on a large number of positive and negative samples, where positive samples are images that contain the object of interest. Negative samples are images that may contain anything but the desired object. Once trained, the classifier can then locate the object of interest in any new images.

      In this tutorial, you will use a pre-trained Haar Cascade model from OpenCV and Python to detect and extract faces from an image. OpenCV is an open-source programming library that is used to process images.

      Prerequisites

      Step 1 — Configuring the Local Environment

      Before you begin writing your code, you will first create a workspace to hold the code and install a few dependencies.

      Create a directory for the project with the mkdir command:

      Change into the newly created directory:

      Next, you will create a virtual environment for this project. Virtual environments isolate different projects so that differing dependencies won't cause any disruptions. Create a virtual environment named face_scrapper to use with this project:

      • python3 -m venv face_scrapper

      Activate the isolated environment:

      • source face_scrapper/bin/activate

      You will now see that your prompt is prefixed with the name of your virtual environment:

      Now that you've activated your virtual environment, you will use nano or your favorite text editor to create a requirements.txt file. This file indicates the necessary Python dependencies:

      Next, you need to install three dependencies to complete this tutorial:

      • numpy: numpy is a Python library that adds support for large, multi-dimensional arrays. It also includes a large collection of mathematical functions to operate on the arrays.
      • opencv-utils: This is the extended library for OpenCV that includes helper functions.
      • opencv-python: This is the core OpenCV module that Python uses.

      Add the following dependencies to the file:

      requirements.txt

      numpy 
      opencv-utils
      opencv-python
      

      Save and close the file.

      Install the dependencies by passing the requirements.txt file to the Python package manager, pip. The -r flag specifies the location of the requirements.txt file.

      • pip install -r requirements.txt

      In this step, you set up a virtual environment for your project and installed the necessary dependencies. You're now ready to start writing the code to detect faces from an input image in the next step.

      Step 2 — Writing and Running the Face Detector Script

      In this section, you will write code that will take an image as input and return two things:

      • The number of faces found in the input image.
      • A new image with a rectangular plot around each detected face.

      Start by creating a new file to hold your code:

      In this new file, start writing your code by first importing the necessary libraries. You will import two modules here: cv2 and sys. The cv2 module makes the OpenCV library available to the program, and sys gives your code access to common interpreter features, such as the argv list of command-line arguments.

      app.py

      import cv2
      import sys
      

      Next, you will specify that the input image will be passed as an argument to the script at runtime. The Pythonic way of reading the first argument is to assign the value of sys.argv[1] to a variable:

      app.py

      ...
      imagePath = sys.argv[1]
      

      A common practice in image processing is to first convert the input image to gray scale. This is because detecting luminance, as opposed to color, will generally yield better results in object detection. Add the following code to take an input image as an argument and convert it to grayscale:

      app.py

      ...
      image = cv2.imread(imagePath)
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      

      The .imread() function takes the input image, which is passed as an argument to the script, and converts it to an OpenCV object. Next, OpenCV's .cvtColor() function converts the input image object to a grayscale object.

      Now that you've added the code to load an image, you will add the code that detects faces in the specified image:

      app.py

      ...
      faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      faces = faceCascade.detectMultiScale(
              gray,
              scaleFactor=1.3,
              minNeighbors=3,
              minSize=(30, 30)
      ) 
      
      print("Found {0} Faces!".format(len(faces)))
      
      

      This code will create a faceCascade object that will load the Haar Cascade file with the cv2.CascadeClassifier method. This allows Python and your code to use the Haar Cascade.

      Next, the code applies OpenCV's .detectMultiScale() method on the faceCascade object. This generates a list of rectangles for all of the detected faces in the image. The list of rectangles is a collection of pixel locations from the image, in the form of Rect(x,y,w,h).
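
      If you want to see what that list looks like, you could temporarily print the raw rectangles before drawing anything (this print loop is only for inspection and is not part of the final script):

      app.py
      
      ...
      # each entry is (x, y, w, h): the top-left corner plus width and height of one detection
      for (x, y, w, h) in faces:
          print("Face at x={0}, y={1}, w={2}, h={3}".format(x, y, w, h))
      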

      Here is a summary of the other parameters your code uses (a small tuning sketch follows this list):

      • gray: This specifies the use of the OpenCV grayscale image object that you loaded earlier.
      • scaleFactor: This parameter specifies the rate to reduce the image size at each image scale. Your model has a fixed scale during training, so input images can be scaled down for improved detection. This process stops after reaching a threshold limit, defined by maxSize and minSize.
      • minNeighbors: This parameter specifies how many neighbors, or detections, each candidate rectangle should have to retain it. A higher value may result in fewer false positives, but a value too high can eliminate true positives.
      • minSize: This allows you to define the minimum possible object size measured in pixels. Objects smaller than this parameter are ignored.
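
      These values are a trade-off between speed, missed faces, and false positives, and the numbers used in this tutorial are just a starting point. As an illustration only (these alternative values are not from the original script), a stricter configuration could look like this:

      app.py
      
      ...
      faces = faceCascade.detectMultiScale(
          gray,
          scaleFactor=1.1,   # smaller steps between scales: slower, but can catch more faces
          minNeighbors=5,    # require more overlapping detections: fewer false positives
          minSize=(60, 60)   # ignore detections smaller than 60x60 pixels
      )
      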

      After generating a list of rectangles, the faces are counted with the len function. The number of detected faces is then printed as output when you run the script.

      Next, you will use OpenCV's .rectangle() method to draw a rectangle around the detected faces:

      app.py

      ...
      for (x, y, w, h) in faces:
          cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
      
      

      This code uses a for loop to iterate through the list of pixel locations returned from faceCascade.detectMultiScale method for each detected object. The rectangle method will take four arguments:

      • image tells the code to draw rectangles on the original input image.
      • (x, y) and (x+w, y+h) are the top-left and bottom-right pixel coordinates of the detected object. rectangle uses these to locate and draw a rectangle around each detected object in the input image.
      • (0, 255, 0) is the color of the shape. This argument gets passed as a tuple for BGR. For example, you would use (255, 0, 0) for blue. We are using green in this case.
      • 2 is the thickness of the line measured in pixels.

      Now that you've added the code to draw the rectangles, use OpenCV's .imwrite() method to write the new image to your local filesystem as faces_detected.jpg. This method will return true if the write was successful and false if it wasn't able to write the new image.

      app.py

      ...
      status = cv2.imwrite('faces_detected.jpg', image)
      

      Finally, add this code to print the true or false return status of the .imwrite() function to the console. This will let you know if the write was successful after running the script.

      app.py

      ...
      print ("Image faces_detected.jpg written to filesystem: ",status)
      

      The completed file will look like this:

      app.py

      import cv2
      import sys
      
      imagePath = sys.argv[1]
      
      image = cv2.imread(imagePath)
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      
      faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      faces = faceCascade.detectMultiScale(
          gray,
          scaleFactor=1.3,
          minNeighbors=3,
          minSize=(30, 30)
      )
      
      print("[INFO] Found {0} Faces!".format(len(faces)))
      
      for (x, y, w, h) in faces:
          cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
      
      status = cv2.imwrite('faces_detected.jpg', image)
      print("[INFO] Image faces_detected.jpg written to filesystem: ", status)
      

      Once you've verified that everything is entered correctly, save and close the file.

      Note: This code was sourced from the publicly available OpenCV documentation.

      Your code is complete and you are ready to run the script.

      Step 3 — Running the Script

      In this step, you will use an image to test your script. When you find an image you'd like to use to test, save it in the same directory as your app.py script. This tutorial will use the following image:

      Input Image of four people looking at phones

      If you would like to test with the same image, use the following command to download it:

      • curl -O https://www.xpresservers.com/wp-content/uploads/2019/03/How-To-Detect-and-Extract-Faces-from-an-Image-with-OpenCV-and-Python.png

      Once you have an image to test the script, run the script and provide the image path as an argument:

      • python app.py path/to/input_image

      Once the script finishes running, you will receive output like this:

      Output

      [INFO] Found 4 Faces!
      [INFO] Image faces_detected.jpg written to filesystem:  True

      The true output tells you that the updated image was successfully written to the filesystem. Open the image on your local machine to see the changes on the new file:

      Output Image with detected faces

      You should see that your script detected four faces in the input image and drew rectangles to mark them. In the next step, you will use the pixel locations to extract faces from the image.

      Step 4 — Extracting Faces and Saving them Locally (Optional)

      In the previous step, you wrote code to use OpenCV and a Haar Cascade to detect and draw rectangles around faces in an image. In this section, you will modify your code to extract the detected faces from the image into their own files.

      Start by reopening the app.py file with your text editor:

      Next, add the following lines under the cv2.rectangle line:

      app.py

      ...
      for (x, y, w, h) in faces:
          cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
          roi_color = image[y:y + h, x:x + w] 
          print("[INFO] Object found. Saving locally.") 
          cv2.imwrite(str(w) + str(h) + '_faces.jpg', roi_color) 
      ...
      

      The roi_color object crops the original input image using the pixel locations from the faces list. The x, y, w, and h variables are the pixel locations for each of the objects detected by the faceCascade.detectMultiScale method. The code then prints output stating that an object was found and will be saved locally.

      Once that is done, the code saves the crop as a new image using the cv2.imwrite method. It appends the width and height of the crop to the name of the image being written. This helps keep the names unique when multiple faces are detected.

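      Note that two detections with the same width and height would still produce the same filename under this scheme. If you want guaranteed-unique names, one small variation (not used in the script below) is to number the crops instead:

      app.py
      
      ...
      # numbering the crops avoids filename collisions between same-sized faces
      for i, (x, y, w, h) in enumerate(faces):
          cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
          roi_color = image[y:y + h, x:x + w]
          cv2.imwrite('face_{0}.jpg'.format(i), roi_color)
      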
      The updated app.py script will look like this:

      app.py

      import cv2
      import sys
      
      imagePath = sys.argv[1]
      
      image = cv2.imread(imagePath)
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      
      faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      faces = faceCascade.detectMultiScale(
          gray,
          scaleFactor=1.3,
          minNeighbors=3,
          minSize=(30, 30)
      )
      
      print("[INFO] Found {0} Faces.".format(len(faces)))
      
      for (x, y, w, h) in faces:
          cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
          roi_color = image[y:y + h, x:x + w]
          print("[INFO] Object found. Saving locally.")
          cv2.imwrite(str(w) + str(h) + '_faces.jpg', roi_color)
      
      status = cv2.imwrite('faces_detected.jpg', image)
      print("[INFO] Image faces_detected.jpg written to filesystem: ", status)
      

      To summarize, the updated code uses the pixel locations to extract the faces from the image into a new file. Once you have finished updating the code, save and close the file.

      Now that you've updated the code, you are ready to run the script once more:

      • python app.py path/to/image

      You will see similar output once your script is done processing the image:

      Output

      [INFO] Found 4 Faces.
      [INFO] Object found. Saving locally.
      [INFO] Object found. Saving locally.
      [INFO] Object found. Saving locally.
      [INFO] Object found. Saving locally.
      [INFO] Image faces_detected.jpg written to filesystem:  True

      Depending on how many faces are in your sample image, you may see more or fewer lines of output.

      Looking at the contents of the working directory after the execution of the script, you'll see files for the head shots of all faces found in the input image.

      Directory Listing

      You will now see head shots extracted from the input image collected in the working directory:

      Extracted Faces

      In this step, you modified your script to extract the detected objects from the input image and save them locally.

      Conclusion

      In this tutorial, you wrote a script that uses OpenCV and Python to detect, count, and extract faces from an input image. You can update this script to detect different objects by using a different pre-trained Haar Cascade from the OpenCV library, or you can learn how to train your own Haar Cascade.
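
      For example, opencv-python bundles several other pre-trained cascades alongside the frontal-face one. A minimal sketch of swapping in the bundled eye cascade (only the classifier and the call change; the rest of the script stays the same):

      app.py
      
      ...
      # haarcascade_eye.xml ships with opencv-python in cv2.data.haarcascades
      eyeCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
      eyes = eyeCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(20, 20))
      print("[INFO] Found {0} eyes.".format(len(eyes)))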


