      Getting Started With Python Requests – GET Requests


      In many web apps, it’s normal to connect to various third-party services by using APIs. When you use these APIs you can get access to data like weather information, sports scores, movie listings, tweets, search engine results, and pictures. You can also use APIs to add functionality to your app. Examples of these are payments, scheduling, emails, translations, maps, and file transfers. If you were to create any of those on your own it would take a ton of time, but with APIs, it can take only minutes to connect to one and access its features and data.

      In this article, I’m going to talk about the Python Requests library, which allows you to send HTTP requests in Python.

      And since using an API is simply sending HTTP requests and receiving responses, Requests allows you to use APIs in Python. I’ll demonstrate the use of a language translation API here so you can see an example of how it works and how you might use it in your own apps.

      Quick Overview of HTTP Requests

      HTTP requests are how the web works. Every time you navigate to a web page, your browser makes multiple requests to the web page’s server. The server then responds with all the data necessary to render the page, and your browser then actually renders the page so you can see it.

      The generic process is this: a client (like a browser or Python script using Requests) will send some data to a URL, and then the server located at the URL will read the data, decide what to do with it, and return a response to the client. Finally, the client can decide what to do with the data in the response.

      Part of the data the client sends in a request is the request method. Some common request methods are GET, POST, and PUT. GET requests are normally for reading data only without making a change to something, while POST and PUT requests generally are for modifying data on the server. So for example, the Stripe API allows you to use POST requests to create a new charge so a user can purchase something from your app.

      This article will cover GET requests only because we won’t be modifying any data on a server.

      When sending a request from a Python script or inside a web app, you, the developer, get to decide what gets sent in each request and what to do with the response. So let’s explore that by first sending a request to Scotch.io and then by using a language translation API.

      Install Python Requests

      Before we can do anything, we need to install the library. So let’s go ahead and install requests using pip. It’s a good idea to create a virtual environment first if you don’t already have one.
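
      If you want one, a minimal setup on macOS or Linux might look like this (the environment name venv is just an example):

      python3 -m venv venv
      source venv/bin/activate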

      pip install requests
      

      Our First Request

      To start, let’s use Requests for something simple: requesting the Scotch.io site. Create a file called script.py and add the following code to it. In this article, we won’t have much code to work with, so when something changes you can just update the existing code instead of adding new lines.

      import requests
      
      res = requests.get('https://scotch.io')
      
      print(res)
      

      So all this code is doing is sending a GET request to Scotch.io. This is the same type of request your browser sent to view this page, but the only difference is that Requests can’t actually render the HTML, so instead you will just get the raw HTML and the other response information.

      We’re using the .get() function here, but Requests allows you to use other functions like .post() and .put() to send those requests as well.

      You can run it by executing the script.py file.

      python script.py
      

      And here’s what you get in return when you print the response: something like <Response [200]>, which shows the status code of the request.

      Status Codes

      The first thing we can do is check the status code. HTTP status codes range from 1XX to 5XX. Common status codes that you have probably seen are 200, 404, and 500.

      Here’s a quick overview of what each status code means:

      • 1XX – Information
      • 2XX – Success
      • 3XX – Redirect
      • 4XX – Client Error (you messed up)
      • 5XX – Server Error (they messed up)

      Generally, what you’re looking for when you perform your own requests are status codes in the 200s.

      Requests recognizes that 4XX and 5XX status codes are errors, so if those status codes get returned, the response object from the request evaluates to False.

      You can test if a request responded successfully by simply checking the response for truth. For example:

      if res:
          print('Response OK')
      else:
          print('Response Failed')
      

      The message “Response Failed” will only appear if a 4XX or 5XX status code is returned. Try changing the URL to some nonsense to see the response fail with a 404.

      You can take a look at the status code directly by doing:

      print(res.status_code)
      

      This will show you the status code directly so you can check the number yourself.

      Another thing you can get from the response are the headers. You can take a look at them by using the headers dictionary on the response object.

      print(res.headers)
      


      Headers are sent along with the request and returned in the response. Headers are used so both the client and the server know how to interpret the data that is being sent and received in the request and response.

      We see the various headers that are returned. A lot of times you won’t need to use the header information directly, but it’s there if you need it.

      The content type is usually the one you’ll need, because it reveals the format of the data, for example HTML, JSON, PDF, or plain text. But the content type is normally handled by Requests, so you can easily access the data that gets returned.
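
      For example, you could read it straight from the response headers (the lookup is case-insensitive, so 'content-type' works just as well):

      print(res.headers['Content-Type'])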

      Response Text

      And finally, if we take a look at res.text (this works for textual data, like the HTML page we are requesting here), we can see all the HTML needed to build the home page of Scotch. It won’t be rendered, but we can see that it belongs to Scotch. If you saved this to a file and opened it, you would see something that resembles the Scotch site. In a real situation, a browser makes multiple requests for a single web page to load things like images, scripts, and stylesheets, so if you save only the HTML to a file, it won’t look anything like the Scotch.io page does in your browser, because only a single request was performed to get the HTML data.

      print(res.text)
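
      If you want to try saving the HTML to a file as described above, a quick sketch (the filename is just an example):

      with open('scotch.html', 'w') as f:
          f.write(res.text)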
      

      Using the Translate API

      So now let’s move on to something more interesting. We’ll use the Yandex Translate API to perform a request to translate some text to a different language.

      To use the API, first you need to sign up. After you sign up, go to the Translate API and create an API key. Once you have the API key, add it to your file as a constant. Here’s the link where you can do all those things: https://tech.yandex.com/translate/

      API_KEY = 'your yandex api key'
      

      The reason why we need an API key is so Yandex can authenticate us every time we want to use their API. The API key is probably the simplest form of authentication, because it’s simply appended to the request URL as a query parameter when the request is sent.

      To know which URL we need to send to use the API, we can look at the documentation for Yandex here: https://tech.yandex.com/translate/doc/dg/reference/translate-docpage/

      If we look there, we’ll see all the information needed to use their Translate API to translate text.

      API documentation can be difficult to read at times, but in this case it’s simple. When you see a URL with ampersands (&), question marks (?), and equals signs (=), it’s a good sign that the URL is meant for GET requests. Those symbols specify the query parameters that go along with the URL.

      Normally things in square brackets ([]) will be optional. In this case, format, options, and callback are optional, while the key, text, and lang are required for the request.

      And of course it’s easy to see the URL. So let’s add some code to send to that URL. You can replace the first request we created with this:

      url = 'https://translate.yandex.net/api/v1.5/tr.json/translate'
      res = requests.get(url)
      

      There are two ways we can add the parameters. We can either append them to the end of the URL directly, or we can have Requests do it for us. Having Requests do it for us is much easier.

      To do that, we can create a dictionary for our parameters. The three items we need are the key, the text, and the language.

      Let’s create the dictionary using the API key, ‘Hello’ for the text, and ‘en-es’ as the lang, which means we want to translate from English to Spanish.

      If you need to know any other language codes, you can look here: https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes You are looking for the 639-1 column.

      We create a params dictionary by using the dict() function and passing in the keys and values we want in our dictionary.

      params = dict(key=API_KEY, text="Hello", lang='en-es')
      

      Now we take the parameters dictionary and pass it to the .get() function.

      res = requests.get(url, params=params)
      

      When we pass the parameters this way, Requests will go ahead and add the parameters to the URL for us.
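
      If you’re curious about the final URL that Requests built, you can print it after sending the request; you should see the key, text, and lang values appended as a query string.

      print(res.url)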

      Now let’s add a print statement for the response text and view what gets returned in the response.

      print(res.text)
      

      We see three things. We see a status code in the body, which matches the status code of the response itself, we see the language pair that we specified, and we see the translated text inside of a list. So you should see ‘Hola’ for the translated text.
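
      Based on that description, the raw body printed by res.text should look something like this (the exact formatting may differ):

      {"code":200,"lang":"en-es","text":["Hola"]}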

      Try again with en-fr as the language code, and you should see ‘Bonjour’ in the response now.

      params = dict(key=API_KEY, text="Hello", lang='en-fr')
      


      Let’s take a look at the headers for this particular response.

      print(res.headers)
      


      Obviously the headers should be different because we’re communicating with a different server, but in this case the content type is application/json instead of text/html. This means that the data can be interpreted as JSON.

      When application/json is the content type of the response, we are able to have Requests convert the response to a dictionary and list so we can access the data more easily.

      To have the data parsed as JSON, we use the .json() method on the response object.

      If you print it, you’ll see that the data looks the same, but the format is slightly different.

      json = res.json()
      print(json)
      


      The reason it’s different is that it’s no longer the plain text you get from res.text. This time it’s a printed version of a dictionary.

      Let’s say we want to access the text. Since this is now a dictionary, we can use the text key.

      print(json['text'])
      


      And now we only see the data for that one key. In this case we are looking at a list of one item, so if we wanted to get that text in the list directly, we can access it by the index.

      print(json['text'][0])
      


      And now the only thing we see is the translated word.

      So of course if we change things in our parameters, we’ll get different results. Let’s change the text to be translated from Hello to Goodbye, change the target language back to Spanish, and send the request again.

      params = dict(key=API_KEY, text="Goodbye", lang='en-es')
      


      Try translating longer text in different languages and see what responses the API gives you.

      Translate API Error Cases

      Finally, we’ll take a look at an error case. Things don’t always work, so we need to know when that happens.

      Try changing your API key by removing one character. When you do this, your API key will no longer be valid. Then try sending a request.

      If you take a look at the status code now, you’ll see an error code instead of a 200:

      print(res.status_code)
      


      So when you are using the API, you’ll want to check if things are successful or not so you can handle the error cases according to the needs of your app.
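
      One simple way to do that is to check the response before using it; here’s a small sketch that builds on the code above:

      res = requests.get(url, params=params)

      if res.ok:
          print(res.json()['text'][0])
      else:
          print('Request failed with status code:', res.status_code)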

      Conclusion

      Here’s what we learned:

      • How HTTP requests work
      • The various status codes possible in a response
      • How to send requests and receive responses using the Python Requests library
      • How to use a language translation API to translate text
      • How to convert application/json content responses to dictionaries

      That covers the basics of Requests in Python. Of course you can do so much more, but what I talked about in this article is the foundation of most requests. Things may change slightly depending on the circumstances, but the basic ideas will remain the same.

      If you want to do more, check out https://apilist.fun/ to see different APIs that are available, and try to use them with Python Requests.




      Getting Started With Laravel


      How to Join

      This Tech Talk is free and open to everyone. Register below to get a link to join the live event.

      Format: Presentation and Q&A
      Date: August 12, 2020, 1:00–2:00 p.m. ET

      If you can’t join us live, the video recording will be published here as soon as it’s available.

      About the Talk

      Laravel — a free, open-source PHP web application framework — is one of the best ways to build web applications. It is a strong framework that gives many niceties out of the box, freeing you to create without sweating the small things.

      What You’ll Learn

      • How to use Laravel to start an app
      • How Laravel helps with routing
      • How Laravel helps with the front end and UI

      This Talk is Designed For

      Developers who want to build their own apps using PHP.

      About the Presenter

      Chris Sevilleja (@chrisoncode) is the founder of scotch.io and Senior Developer Advocate at DigitalOcean. He loves trying to figure out the most efficient and practical way to build apps that we can ship to our customers.




      Getting Started with Load Balancing on a Linode Kubernetes Engine (LKE) Cluster


      Updated by Linode. Contributed by Linode.

      The Linode Kubernetes Engine (LKE) is Linode’s managed Kubernetes service. When you deploy an LKE cluster, you receive a Kubernetes Master which runs your cluster’s control plane components, at no additional cost. The control plane includes Linode’s Cloud Controller Manager (CCM), which provides a way for your cluster to access additional Linode services. Linode’s CCM provides access to Linode’s load balancing service, Linode NodeBalancers.

      NodeBalancers provide your Kubernetes cluster with a reliable way of exposing resources to the public internet. The LKE control plane handles the creation and deletion of the NodeBalancer, and correctly identifies the resources, and their networking, that the NodeBalancer will route traffic to. Whenever a Kubernetes Service of the LoadBalancer type is created, your Kubernetes cluster will create a Linode NodeBalancer service with the help of the Linode CCM.

      Note

      Adding external Linode NodeBalancers to your LKE cluster will incur additional costs. See Linode’s Pricing page for details.

      Note

      All existing LKE clusters receive CCM updates automatically every two weeks when a new LKE release is deployed. See the LKE Changelog for information on the latest LKE release.

      Note

      The Linode Terraform K8s module also deploys a Kubernetes cluster with the Linode CCM installed by default. Any Kubernetes cluster with a Linode CCM installation can make use of Linode NodeBalancers in the ways described in this guide.

      In this Guide

      This guide will show you how to:

      • Add Linode NodeBalancers to your Kubernetes cluster
      • View Linode NodeBalancer details
      • Configure your NodeBalancers with annotations
      • Set up TLS termination for your NodeBalancers
      • Configure session affinity for cluster Pods
      • Remove Linode NodeBalancers from your Kubernetes cluster

      Before You Begin

      This guide assumes you have a working Kubernetes cluster that was deployed using the Linode Kubernetes Engine (LKE), for example through the Linode Cloud Manager or the Linode API.

      Adding Linode NodeBalancers to your Kubernetes Cluster

      To add an external load balancer to your Kubernetes cluster you can add the example lines to a new configuration file, or more commonly, to a Service file. When the configuration is applied to your cluster, Linode NodeBalancers will be created, and added to your Kubernetes cluster. Your cluster will be accessible via a public IP address and the NodeBalancers will route external traffic to a Service running on healthy nodes in your cluster.

      Note

      Billing for Linode NodeBalancers begins as soon as the example configuration is successfully applied to your Kubernetes cluster.

      spec:
        type: LoadBalancer
        ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      • The spec.type of LoadBalancer is responsible for telling Kubernetes to create a Linode NodeBalancer.
      • The remaining lines provide port definitions for your Service’s Pods and map an incoming port to a container’s targetPort.
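
      For context, here is a sketch of a complete Service manifest that these lines could fit into. The Service name and the app selector are placeholders you would replace with your own values:

      apiVersion: v1
      kind: Service
      metadata:
        name: example-service
      spec:
        type: LoadBalancer
        selector:
          app: example-app
        ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80

      You could then apply it to your cluster with kubectl apply -f example-service.yaml.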

      Viewing Linode NodeBalancer Details

      To view details about running NodeBalancers on your cluster:

      1. Get the services running on your cluster:

        kubectl get services
        

        You will see a similar output:

          
        NAME            TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
        kubernetes      ClusterIP      10.128.0.1      none           443/TCP        3h5m
        example-service LoadBalancer   10.128.171.88   45.79.246.55   80:30028/TCP   36m
              
        
        • Viewing the entry for the example-service, you can find your NodeBalancer’s public IP under the EXTERNAL-IP column.
        • The PORT(S) column displays the example-service incoming port and NodePort.
      2. View details about the example-service to retrieve information about the deployed NodeBalancers:

        kubectl describe service example-service
        
          
        Name:                     nginx-service
        Namespace:                default
        Labels:                   app=nginx
        Annotations:              service.beta.kubernetes.io/linode-loadbalancer-throttle: 4
        Selector:                 app=nginx
        Type:                     LoadBalancer
        IP:                       10.128.171.88
        LoadBalancer Ingress:     192.0.2.0
        Port:                     http  80/TCP
        TargetPort:               80/TCP
        NodePort:                 http  30028/TCP
        Endpoints:                10.2.1.2:80,10.2.1.3:80,10.2.2.2:80
        Session Affinity:         None
        External Traffic Policy:  Cluster
        Events:                   
        

      Configuring your Linode NodeBalancers with Annotations

      The Linode CCM accepts annotations that configure the behavior and settings of your cluster’s underlying NodeBalancers.

      • The table below provides a list of all available annotation suffixes.
      • Each annotation must be prefixed with service.beta.kubernetes.io/linode-loadbalancer-. For example, the complete value for the throttle annotation is service.beta.kubernetes.io/linode-loadbalancer-throttle.
      • Annotation values such as http are case-sensitive.

      Annotations Reference

      throttle
        Values: integer between 0 and 20 (0 disables the throttle)
        Default value: 20
        Description: The client connection throttle limits the number of new connections-per-second from the same client IP.

      default-protocol
        Values: string; one of tcp, http, https
        Default value: tcp
        Description: Specifies the protocol for the NodeBalancer to use.

      port-*
        Values: a JSON object of port configurations, for example: { "tls-secret-name": "prod-app-tls", "protocol": "https" }
        Default value: none
        Description: Specifies a NodeBalancer port to configure, i.e. port-443. Ports 1-65534 are available for balancing. The available port configurations are:
        • "tls-secret-name": use this key to provide a Kubernetes Secret name when setting up TLS termination for a Service to be accessed over HTTPS. The Secret type should be kubernetes.io/tls.
        • "protocol": specifies the protocol to use for this port, i.e. tcp, http, or https. The default protocol is tcp, unless you provided a different configuration for the default-protocol annotation.

      check-type
        Values: string; one of none, connection, http, http_body
        Default value: none
        Description: The type of health check to perform on Nodes to ensure that they are serving requests. The behavior for each check is the following:
        • none: no check is performed
        • connection: checks for a valid TCP handshake
        • http: checks for a 2xx or 3xx response code
        • http_body: checks for a specific string within the response body of the health check URL. Use the check-body annotation to provide the string to use for the check.

      check-path
        Values: string
        Default value: none
        Description: The URL path that the NodeBalancer will use to check on the health of the back-end Nodes.

      check-body
        Values: string
        Default value: none
        Description: The string that must be present in the response body of the URL path used for health checks. You must have a check-type annotation configured for an http_body check.

      check-interval
        Values: integer
        Default value: none
        Description: The duration, in seconds, between health checks.

      check-timeout
        Values: integer between 1 and 30
        Default value: none
        Description: Duration, in seconds, to wait for a health check to succeed before it is considered a failure.

      check-attempts
        Values: integer between 1 and 30
        Default value: none
        Description: Number of health checks to perform before removing a back-end Node from service.

      check-passive
        Values: boolean
        Default value: false
        Description: When true, 5xx status codes will cause the health check to fail.

      preserve
        Values: boolean
        Default value: false
        Description: When true, deleting a LoadBalancer Service does not delete the underlying NodeBalancer.
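
      As a sketch of how these annotations fit together, the following Service metadata enables HTTP health checks against a hypothetical /healthz path every 5 seconds; the path and interval values are only examples:

      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-check-type: http
          service.beta.kubernetes.io/linode-loadbalancer-check-path: /healthz
          service.beta.kubernetes.io/linode-loadbalancer-check-interval: "5"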


      Configuring Linode NodeBalancers for TLS Encryption

      This section describes how to set up TLS termination on your Linode NodeBalancers so a Kubernetes Service can be accessed over HTTPS.

      Generating a TLS type Secret

      Kubernetes allows you to store sensitive information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In this section, you will create a Kubernetes secret to store Transport Layer Security (TLS) certificates and keys that you will then use to configure TLS termination on your Linode NodeBalancers.

      In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type tls. Follow the steps in this section to create a Kubernetes TLS Secret.


      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

        openssl req -newkey rsa:4096 \
            -x509 \
            -sha256 \
            -days 3650 \
            -nodes \
            -out tls.crt \
            -keyout tls.key \
            -subj "/CN=mywebsite.com/O=mywebsite.com"
        
      2. Create the secret using the create secret tls command. Ensure you substitute $SECRET_NAME for the name you’d like to give to your secret. This will be how you reference the secret in your Service manifest.

        kubectl create secret tls $SECRET_NAME --cert tls.crt --key tls.key
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
        kubectl describe secret my-secret
        Name:         my-secret
        Namespace:    default
        Labels:       
        Annotations:  
        
        Type:  kubernetes.io/tls
        
        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.

      Configuring TLS within a Service

      In order to use https you’ll need to instruct the Service to use the correct port using the required annotations. You can add the following code snippet to a Service file to enable TLS termination on your NodeBalancers:

      example-service.yaml
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-default-protocol: http
          service.beta.kubernetes.io/linode-loadbalancer-port-443: '{ "tls-secret-name": "example-secret", "protocol": "https" }'
      ...
      • The service.beta.kubernetes.io/linode-loadbalancer-default-protocol annotation configures the NodeBalancer’s default protocol.

      • service.beta.kubernetes.io/linode-loadbalancer-port-443 specifies port 443 as the port to be configured. The value of this annotation is a JSON object designating the TLS secret name to use (example-secret) and the protocol to use for the port being configured (https).

      If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

      example-service.yaml
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-default-protocol: http
          service.beta.kubernetes.io/linode-loadbalancer-port-443: '{ "tls-secret-name": "example-secret", "protocol": "https" }'
          service.beta.kubernetes.io/linode-loadbalancer-port-8443: '{ "tls-secret-name": "example-secret-staging", "protocol": "https" }'
      ...

      Configuring Session Affinity for Cluster Pods

      kube-proxy will always attempt to proxy traffic to a random backend Pod. To direct traffic to the same Pod, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity will ensure that all traffic from the same IP address is directed to the same Pod. You can add the example lines to a Service configuration file to enable session affinity based on the client IP:

      spec:
        type: LoadBalancer
        selector:
          app: example-app
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100

      Removing Linode NodeBalancers from your Kubernetes Cluster

      To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f example-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service example-service
      

      After deleting your service, its corresponding NodeBalancer will be removed from your Linode account.

      Note

      If your Service file used the preserve annotation, the underlying NodeBalancer will not be removed from your Linode account. See the annotations reference for details.
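
      Following the same prefix convention as the other annotations, a minimal example of setting it in a Service manifest looks like this:

      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-preserve: "true"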

      This guide is published under a CC BY-ND 4.0 license.


