
      How To Create an Intelligent Chatbot in Python Using the spaCy NLP Library


      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Interacting with software can be a daunting task in cases where there are a lot of features. In some cases, performing similar actions requires repeating steps, like navigating menus or filling forms each time an action is performed. Chatbots are virtual assistants that help users of a software system access information or perform actions without having to go through long processes. Many of these assistants are conversational, and that provides a more natural way to interact with the system.

      To create a conversational chatbot, you could use platforms like Dialogflow that help you design chatbots at a high level. Or, you can build one yourself using a library like spaCy, which is a fast and robust Python-based natural language processing (NLP) library. spaCy provides helpful features like determining the parts of speech that words belong to in a statement, finding how similar two statements are in meaning, and so on.

      In this tutorial, you will create a chatbot that not only helps users simplify their interactions with a software system, but is also intelligent enough to communicate with the user in natural language (American English in this tutorial). The chatbot will use the OpenWeather API to tell the user what the current weather is in any city of the world, but you can implement your chatbot to handle a use case with another API.

      Prerequisites

      Before you begin, you will need the following:

      • Python 3 installed on your machine, with a virtual environment set up for the project (this tutorial uses one named my_env).
      • An OpenWeather API key, which you can get by creating a free account on the OpenWeather website.

      This tutorial assumes you are already familiar with Python—if you would like to improve your knowledge of Python, check out our How To Code in Python 3 series. This tutorial does not require foreknowledge of natural language processing.

      Step 1 — Setting Up Your Environment

      In this step, you will install the spaCy library that will help your chatbot understand the user’s sentences.

      Having set up Python following the Prerequisites, you’ll have a virtual environment. Let’s activate that environment.

      Make sure you are in the directory where you set up your environment and then run the following command:

      • source my_env/bin/activate

      Now install spaCy:

      • pip install -U spacy

      Finally, you will download a language model. spaCy’s language models are pre-trained NLP models that you can use to process statements to extract meaning. You’ll be working with the English language model, so you’ll download that.

      Run the following command:

      • python -m spacy download en_core_web_md

      If you run into an error like the following:

      Output

      ERROR: Failed building wheel for en-core-web-md

      You need to install wheel:

      • pip install wheel

      Then download the English-language model again:

      • python -m spacy download en_core_web_md

      To confirm that you have spaCy installed properly, open the Python interpreter:

      • python

      Next, import spaCy and load the English-language model:

      >>> import spacy
      >>> nlp = spacy.load("en_core_web_md")
      

      If those two statements execute without any errors, then you have spaCy installed.

      Now close the Python interpreter:

      >>> exit()
      

      You now have everything needed to begin working on the chatbot. In the next section, you’ll create a script to query the OpenWeather API for the current weather in a city.

      Step 2 — Creating the City Weather Program

      In this section, you will create a script that accepts a city name from the user, queries the OpenWeather API for the current weather in that city, and displays the response.

      First, create and open a Python file called weather_bot.py with your preferred editor:

      • nano weather_bot.py

      Next, you’ll create a function to get the current weather in a city from the OpenWeather API. This function will take the city name as a parameter and return the weather description of the city.

      Add the following code into your weather_bot.py file:

      weather_bot.py

      import requests
      
      api_key = "your_api_key"
      
      def get_weather(city_name):
          api_url = "http://api.openweathermap.org/data/2.5/weather?q={}&appid={}".format(city_name, api_key)

          response = requests.get(api_url)
          response_dict = response.json()

          if response.status_code == 200:
              weather = response_dict["weather"][0]["description"]
              return weather
          else:
              print('[!] HTTP {0} calling [{1}]'.format(response.status_code, api_url))
              return None
      

      First, you import the requests library so that you can make HTTP requests (if it isn't already available in your environment, you can install it with pip install requests). Make sure to replace your_api_key with your own API key. The next line begins the definition of the get_weather() function, which retrieves the weather for the specified city.

      In this function, you construct the URL for the OpenWeather API. This URL returns the weather information (temperature, weather description, humidity, and so on) of the city and provides the result in JSON format. After that, you make a GET request to the API endpoint, store the result in a response variable, and then convert the response to a Python dictionary for easier access.

      Next, you check that the status code of the API response is 200, meaning the request succeeded. If it did, you extract just the weather description into a weather variable and return it. Performing the extraction only after the status check matters: on a failed request (for example, a misspelled city name), the response body contains no weather key, and indexing it would raise a KeyError.

      If there is an issue with the request, the status code is printed out to the console, and you return None.
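      For reference, here is a truncated sketch of the JSON that the current-weather endpoint returns. The field names follow the OpenWeather documentation; the values are illustrative, not a live response:

```python
# Illustrative (not live) response body from the OpenWeather current-weather endpoint.
sample_response = {
    "weather": [{"main": "Clouds", "description": "scattered clouds"}],
    "main": {"temp": 288.15, "humidity": 72},  # temperature is in Kelvin by default
    "name": "London",
    "cod": 200,
}

# The same lookup get_weather() performs:
description = sample_response["weather"][0]["description"]
print(description)  # scattered clouds
```

      An error response (for example, for an unknown city) contains no weather key at all, which is why the function only performs this lookup after confirming the status code.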

      To test the script, call the get_weather() function with a city of your choice (for example, London) and print the result. Add the following lines after your function:

      ~/weather_bot.py

      import requests
      
      def get_weather(city_name):
      
        ...
      
        return weather
      
      weather = get_weather("London")
      print(weather)
      

      Save and run the script:

      • python weather_bot.py

      You will receive a result like the following:

      Output

      scattered clouds

      Having completed that successfully, you can now delete the last two lines from the script.

      Open it with:

      • nano weather_bot.py

      Then delete the two test lines at the end of the file:

      ~/weather_bot.py

      import requests
      
      def get_weather(city_name):
      
        ...
      
        return weather
      
      weather = get_weather("London")
      print(weather)
      

      Save and close the file.

      You now have a function that returns the weather description for a particular city.

      In the next step, you’ll create a chatbot capable of figuring out whether the user wants to get the current weather in a city, and if so, the chatbot will use the get_weather() function to respond appropriately.

      Step 3 — Creating the Chatbot

      In the previous two steps, you installed spaCy and created a function for getting the weather in a specific city. Now, you will create a chatbot to interact with a user in natural language using the weather_bot.py script.

      You’ll write a chatbot() function that compares the user’s statement with a statement that represents checking the weather in a city. To make this comparison, you will use the spaCy similarity() method. This method computes the semantic similarity of two statements, that is, how similar they are in meaning. This will help you determine if the user is trying to check the weather or not.

      To begin, open the script:

      • nano weather_bot.py

      Then, import spaCy and load the English language model:

      ~/weather_bot.py

      import spacy
      import requests
      
      nlp = spacy.load("en_core_web_md")
      
      . . .
      
      

      After the get_weather() function in your file, create a chatbot() function representing the chatbot that will accept a user’s statement and return a response.

      Inside your definition, add the following code to create tokens for the two statements you'll be comparing. Tokens are the meaningful segments of a statement, such as words and punctuation. spaCy needs these tokens to compute semantic similarity:

      ~/weather_bot.py

      import spacy
      
      . . .
      
      def chatbot(statement):
        weather = nlp("Current weather in a city")
        statement = nlp(statement)
      

      Here the weather and statement variables contain spaCy tokens as a result of passing each corresponding string to the nlp() function.

      Save and close your file.

      Next you’ll be introducing the spaCy similarity() method to your chatbot() function. The similarity() method computes the semantic similarity of two statements as a value between 0 and 1, where a higher number means a greater similarity. You need to specify a minimum value that the similarity must have in order to be confident the user wants to check the weather.
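      Under the hood, similarity() is, by default, the cosine similarity of the two documents' averaged word vectors. A minimal pure-Python sketch of that computation, using made-up 3-dimensional vectors in place of spaCy's real 300-dimensional ones:

```python
import math

def cosine_similarity(v1, v2):
    # Dot product of the vectors divided by the product of their magnitudes.
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

# Hypothetical document vectors (spaCy's are 300-dimensional).
weather_vec = [0.8, 0.1, 0.3]
question_vec = [0.7, 0.2, 0.4]
unrelated_vec = [-0.2, 0.9, -0.5]

print(cosine_similarity(weather_vec, question_vec))   # high: similar direction, similar meaning
print(cosine_similarity(weather_vec, unrelated_vec))  # low: unrelated meaning
```

      The result is always between -1 and 1, with identical directions scoring 1, which is why similarity() scores can be compared against a fixed threshold.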

      For example, if you check the similarity of statements 2 and 3 below against statement 1, you get:

      1. Current weather in a city
      2. What is the weather in London? (similarity = 0.86)
      3. Peanut butter and jelly (similarity = 0.31)

      To try this for yourself, open the Python interpreter:

      • python

      Next, import spaCy and load the English-language model:

      >>> import spacy
      >>> nlp = spacy.load("en_core_web_md")
      

      Now let’s create tokens from statements 1 and 2:

      >>> statement1 = nlp("Current weather in a city")
      >>> statement2 = nlp("What is the weather in London?")
      

      Finally, let’s obtain the semantic similarity of the two statements:

      >>> print(statement1.similarity(statement2))
      

      You will receive a result like this:

      Output

      0.8557684354027663

      Setting a low minimum value (for example, 0.1) will cause the chatbot to misinterpret the user by taking statements (like statement 3) as similar to statement 1, which is incorrect. Setting a minimum value that’s too high (like 0.9) will exclude some statements that are actually similar to statement 1, such as statement 2.

      We will arbitrarily choose 0.75 for the sake of this tutorial, but you may want to test different values when working on your project.
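      To see what this threshold does to the example scores from earlier, here is a small sketch (the 0.86 and 0.31 values are the similarities quoted above):

```python
def is_weather_request(similarity, min_similarity=0.75):
    """Accept the statement as a weather request only if it clears the threshold."""
    return similarity >= min_similarity

# Similarity scores from the examples above.
weather_question = 0.86   # "What is the weather in London?"
unrelated = 0.31          # "Peanut butter and jelly"

# With the tutorial's threshold of 0.75:
print(is_weather_request(weather_question))  # True
print(is_weather_request(unrelated))         # False

# A threshold that is too low misfires on unrelated statements:
print(is_weather_request(unrelated, min_similarity=0.1))  # True
```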

      Let’s add this value to the script. First, open the file:

      • nano weather_bot.py

      Then add the following line to introduce the minimum value:

      ~/weather_bot.py

      import spacy
      
      . . .
      
      def chatbot(statement):
        weather = nlp("Current weather in a city")
        statement = nlp(statement)
        min_similarity = 0.75
      

      Now check if the similarity of the user’s statement to the statement about the weather is greater than or equal to the minimum similarity value you specified. Add the following if statement to check this:

      ~/weather_bot.py

      import spacy
      
      . . .
      
      def chatbot(statement):
        weather = nlp("Current weather in a city")
        statement = nlp(statement)
        min_similarity = 0.75
      
        if weather.similarity(statement) >= min_similarity:
          pass
      

      The final step is to extract the city from the user’s statement so you can pass it to the get_weather() function to retrieve the weather from the API call. Add the following for loop to implement this:

      ~/weather_bot.py

      import spacy
      
      ...
      
      def chatbot(statement):
        weather = nlp("Current weather in a city")
        statement = nlp(statement)
        min_similarity = 0.75
      
        if weather.similarity(statement) >= min_similarity:
          for ent in statement.ents:
            if ent.label_ == "GPE": # GeoPolitical Entity
              city = ent.text
              break
      

      To do this, you’re using spaCy’s named entity recognition feature. A named entity is a real-world noun that has a name, like a person, or in our case, a city. You want to extract the name of the city from the user’s statement.

      To extract the city name, you get all the named entities in the user’s statement and check which of them is a geopolitical entity (country, state, city). To do this, you loop through all the entities spaCy has extracted from the statement in the ents property, then check whether the entity label (or class) is “GPE” representing Geo-Political Entity. If it is, then you save the name of the entity (its text) in a variable called city.
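      The extraction logic can be sketched with plain (text, label) tuples standing in for spaCy's entity objects, which expose the same information through ent.text and ent.label_. The entities below are hypothetical examples of what the model might return:

```python
def extract_city(entities):
    # entities: list of (text, label) pairs, mirroring ent.text and ent.label_ in spaCy.
    for text, label in entities:
        if label == "GPE":  # GeoPolitical Entity: countries, states, cities
            return text
    return None  # no city found in the statement

# Hypothetical entities for "What is the weather in London today?"
print(extract_city([("London", "GPE"), ("today", "DATE")]))  # London
print(extract_city([("today", "DATE")]))                     # None
```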

      You also need to catch cases where no city was entered. You can handle this with an else clause attached to the for loop; in Python, a loop's else clause runs only when the loop finishes without hitting break, which here means no geopolitical entity was found in the statement:

      ~/weather_bot.py

      import spacy
      
      ...
      
      def chatbot(statement):
        weather = nlp("Current weather in a city")
        statement = nlp(statement)
        min_similarity = 0.75
      
        if weather.similarity(statement) >= min_similarity:
          for ent in statement.ents:
            if ent.label_ == "GPE": # GeoPolitical Entity
              city = ent.text
              break
          else:
            return "You need to tell me a city to check."
      

      Now that you have the city, you can call the get_weather() function:

      ~/weather_bot.py

      import spacy
      
      ...
      
      def chatbot(statement):
        weather = nlp("Current weather in a city")
        statement = nlp(statement)
        min_similarity = 0.75
      
        if weather.similarity(statement) >= min_similarity:
          for ent in statement.ents:
            if ent.label_ == "GPE": # GeoPolitical Entity
              city = ent.text
              break
          else:
            return "You need to tell me a city to check."
      
          city_weather = get_weather(city)
          if city_weather is not None:
            return "In " + city + ", the current weather is: " + city_weather
          else:
            return "Something went wrong."
        else:
          return "Sorry I don't understand that. Please rephrase your statement."
      

      Recall that if the OpenWeather API returns an error, you print the error code to the terminal and the get_weather() function returns None. In this code, you first check whether get_weather() returned None. If it didn’t, you return the weather for the city, but if it did, you return a string saying something went wrong. The final else block handles the case where the similarity between the user’s statement and the reference statement does not reach the threshold; in that case, you ask the user to rephrase their statement.

      Having completed all of that, you now have a chatbot capable of telling a user conversationally what the weather is in a city. The difference between this bot and rule-based chatbots is that the user does not have to enter the same statement every time. Instead, they can phrase their request in different ways and even make minor typos, and the chatbot will often still understand them thanks to spaCy’s NLP features.

      Let’s test the bot. Call the chatbot() function and pass in a statement asking what the weather is in a city, for example:

      ~/weather_bot.py

      import spacy
      
      . . .
      
      def chatbot(statement):
      
      . . .
      
      response = chatbot("Is it going to rain in Rome today?")
      print(response)
      

      Save and close the file, then run the script in your terminal:

      • python weather_bot.py

      You will receive output similar to the following:

      Output

      In Rome, the current weather is: clear sky

      You have successfully created an intelligent chatbot capable of responding to dynamic user requests. You can try out more examples to discover the full capabilities of the bot. To do this, you can get other API endpoints from OpenWeather and other sources. Another way to extend the chatbot is to make it capable of responding to more user requests. For this, you could compare the user’s statement with more than one option and find which has the highest semantic similarity.
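      One way to sketch that extension: keep one canonical sentence per intent, compute the similarity of the user's statement against each, and act on the best match only if it clears the threshold. The scores below are hypothetical stand-ins for similarity() results:

```python
def best_intent(statement_scores, min_similarity=0.75):
    """Pick the intent with the highest similarity score, if any clears the threshold.

    statement_scores maps an intent name to the similarity between the user's
    statement and that intent's canonical sentence (as similarity() would return).
    """
    intent, score = max(statement_scores.items(), key=lambda item: item[1])
    return intent if score >= min_similarity else None

# Hypothetical scores for "Will I need an umbrella in Paris?"
scores = {"current_weather": 0.81, "weather_forecast": 0.88, "greeting": 0.12}
print(best_intent(scores))  # weather_forecast
```

      Each intent that wins the comparison would then dispatch to its own handler, the way the weather intent dispatches to get_weather().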

      Conclusion

      You have created a chatbot that is intelligent enough to respond to a user’s statement—even when the user phrases their statement in different ways. The chatbot uses the OpenWeather API to get the current weather in a city specified by the user.

      To further improve the chatbot, you can:

      • Use other API endpoints from OpenWeather and other sources to answer more kinds of weather-related questions.
      • Handle more user requests by comparing the user’s statement against several reference statements and acting on the one with the highest semantic similarity.


      Exploring the Features of Intelligent Monitoring, powered by INAP INblue


      What would you do if you didn’t have to spend time on routine server- or cloud-related maintenance and monitoring?

      According to INAP’s The State of IT Infrastructure Management report, a vast majority of IT professionals say they are not spending enough time designing or implementing new solutions, working on expansions or upgrades, or focusing on information security projects. As it stands, 25 percent of participants say they spend too many hours on monitoring, and it’s clear that there’s a desire to set aside the busywork for value-added projects, allowing IT to be a center for innovation rather than viewed as “purely keeping the lights on” by the company’s senior management.

      Intelligent Monitoring, powered by INAP INblue, a multicloud infrastructure management platform, gives you time for what matters. It’s a premium managed cloud and monitoring service—available today for INAP Bare Metal customers—that raises the bar for managed hosting solutions by ensuring proactive support, service transparency and consistent performance.

      “Infrastructure monitoring strategies are only as good as the actions that follow alerts,” said Jennifer Curry, SVP of Global Cloud Services at INAP. “We built Intelligent Monitoring to not only improve cloud performance and availability, but to set a new benchmark for managed services transparency.”

      In addition to an improved service experience, Managed Bare Metal customers also have access to the same enterprise-grade monitoring and management tools used by INAP technicians. These tools—which include remote execution and scripting, unified log management, patch management and automation, and port, service and URL monitoring—offer functionality and control that eliminate the need for customers to invest in third-party remote monitoring and management solutions.

      Let’s take a closer look at the features that make Intelligent Monitoring a one-of-a-kind solution.

      Advanced Monitoring & Action Items

      Built from the ground up with leading technologies like SaltStack and Elastic Beats, the Intelligent Monitoring agent tracks everything from server resource usage to Apache and MySQL connections. The in-depth, proprietary monitoring technology is installed directly onto your server, enabling INAP technicians to respond to alerts before performance degrading issues arise. Default trigger thresholds are chosen by INAP’s support team based on years of data and first-hand expertise. You have full access to all monitoring metrics and can request custom alert triggers, or modifications to trigger thresholds.

      When you log into INblue, the dashboard will give you a snapshot of your server environment through system events called Action Items. If you subscribe to the fully managed Service First edition, these items allow your INAP support team to proactively manage your environment and rapidly respond to alerts. Action Items are triggered in a variety of ways, including when infrastructure or network monitoring thresholds are surpassed, when a critical service shuts down or when a new software patch becomes available.

      Action Items

      Support Remediation Aided by Smart Workflow System

      INAP technicians remediate Action Items using our proprietary Smart Workflow System, which enables fast, accurate and consistent troubleshooting. Here’s a brief look at how it works:

      1. The Smart Workflow System determines the Action Item type and initiates the appropriate workflow process.
      2. The system automatically creates a support case for the Action Item, pulling historical correlated issue data, trigger metrics and detailed log info.
      3. Using the data and Action Item type, the assigned INAP Service First support technician investigates the issue following a branching series of software-defined and expert-tested remediation steps. Customers may request custom workflows for scenarios unique to their environment.
      4. Upon resolution of the Action Item, your assigned technician will notify you via the Action Item details page and include relevant root cause data.
      5. The Smart Workflow System constantly improves as new system data and insights from INAP experts modify issue definitions and remediation steps.

      On the other side of the glass, the INblue platform is your vehicle for ensuring absolute transparency. At the top of any Action Item details page, you’ll see the INAP technician assigned to the workflow, the current status of the event and tasks they are currently performing or have already performed. You can review information about correlated past issues, metric and log data pinpointing a trigger, and your full support history for any Action Item.

      However, for most Action Items, you won’t have to do a single thing. Intelligent Monitoring’s Smart Workflow System and the INAP Service First support team are on top of every case.

      Patching and Log Management

      Intelligent Monitoring radically simplifies two activities that most IT professionals consider especially tedious: patching and log management.

      The patching update process is streamlined, as all available patches for your server are proactively listed in groups. You can handle this process in one of two ways, depending on how much control you want. You can confirm and schedule the patch to complete the process with INAP support, or—if you want a hands-off approach—you can choose to auto-patch your server daily and your team will receive calendar invites for each scheduled patch.

      Intelligent Monitoring will also save you from manually browsing events by providing a chronological event log for all your servers. Easily filter by server and file path, or dive deep with a keyword search. You’ll be able to accelerate analysis and locate critical information. Plus, the log management feature provides your INAP technicians critical, actionable intelligence to keep your environment compliant and secure.

      Log Management

      Remote Execution and Scripting

      Intelligent Monitoring allows you to easily create and run remote execution scripts on any of your servers with the monitoring agent installed, giving you a single portal for taking control of your environment. You can choose from scripts you’ve already created and schedule them by inserting a token from your two-factor authentication application. You will automatically receive an email when the script successfully executes.

      If you want to create your own Bash or PowerShell scripts, you can do so via the Script Editor, located in the side navigation of the INblue platform.

      Port, Service and URL Monitoring

      Under the Ports tab on any Server Details page, you can review, edit and monitor triggers for your server’s open and closed ports. Port Status changes will be shown in your Action Items list.

      Intelligent Monitoring also allows you to monitor any available services running on your system. You can also stop, start or restart services from the Services tab. For example, you can enable service monitoring on crond, the cron daemon, and turn on the auto-restart feature. With this monitoring feature enabled, you can rest assured that if the service ever fails, a new Action Item will automatically be created.

      Looking Ahead

      This is just the beginning for Intelligent Monitoring, powered by the INAP INblue platform. Many more features and capabilities are on the way, but in the meantime, we hope you enjoy exploring the tool and look forward to hearing your feedback.


      Laura Vietmeyer






      Three Reasons Why We Built Intelligent Monitoring, Powered by INblue


      This week, we officially launched our premium managed cloud and monitoring service, Intelligent Monitoring. Powered by INAP’s new infrastructure and support platform, INblue, the service is an important milestone for our cloud business and one we expect to set new benchmarks for managed services transparency and support experience consistency.

      You can take a closer look at Intelligent Monitoring and its many great features in the video below or over at our product page. Today, however, I’d like to share some insight into why we developed Intelligent Monitoring, and why we think it will benefit the countless businesses who rely on service providers to enable meaningful IT transformations.

      1. To Retire the Concept of ‘Just Trust Us’ in Managed Hosting  

      We built Intelligent Monitoring because traditional managed hosting is stuck in an innovation rut.

      Automation transformed data center operations. It made networks faster. It made the cloud possible by pushing the limits of scalability. But there’s one item automation hasn’t yet cracked: the disjointed, opaque experience of working with a managed service provider after the infrastructure is online.

      Any managed infrastructure service inherently relies on the concept of “just trust us.” Hand over the keys to your critical infrastructure and hope the provider knows what they’re doing if something goes wrong. What too often results from this is a gauntlet of service-related pain points: reactive and inconsistent troubleshooting of regularly occurring issues, a rotating cast of anonymous technicians, unreliable response times, and the inability to see what is being done to the server or cloud environment and why.

      Intelligent Monitoring acknowledges that “just trust us” is no longer acceptable. The service combines the very best elements of advanced monitoring automation with a proactive, software-defined support experience that values transparency and consistency above all else.

      Here’s how it works. A lightweight and secure agent installed on your servers and VMs tracks virtually every aspect of the environment—hardware, resources, network, application metrics, available patches, open ports, etc.—in real time. If a monitoring threshold is triggered for subscribers of our managed cloud service, INAP technicians will respond to alerts before performance degrading issues arise. Tickets are created proactively, and issues are mitigated using a proprietary Smart Workflow System that assists our experts with finding accurate solutions.

      Better yet, customers will immediately know who on the INAP team is assigned to a task via the event’s unique Action Item page. From this single screen, users can track progress on the open task, communicate directly with the technician and view forensic information associated with the event using logs and data from correlated past issues.

      INblue Launch

      2. To Streamline the Actions That Follow Alerts 

      We built Intelligent Monitoring because an infrastructure management strategy is only as good as the actions that follow alerts.

      While there are plenty of standalone tools available for monitoring, patching and managing remotely hosted environments, they often take a significant amount of time to configure. Certain monitoring programs are limited to read-only outputs, meaning admins have to manually remediate identified issues or jump to separate tools to complete the job.

      Intelligent Monitoring and the INblue platform seamlessly integrate with customer server environments, allowing IT pros to not just keep tabs on the health of their infrastructure, but take control of it via a powerful toolkit that includes universal log management, patch automation, service controls, and remote execution and script management. With the INblue platform, users are also able to manage their accounts, provision new services, manage firewalls and reboot servers. By hosting with INAP, organizations will eliminate the need for many third-party tools.

      With Intelligent Monitoring, fully managed cloud customers will enjoy the backing of INAP’s Service First Support team. For the do-it-yourself IT admin, we also offer the Intelligent Monitoring toolkit for a low per-server cost. To compare our Service First and Insight editions, download the product overview guide. At launch, Intelligent Monitoring is optimized for Bare Metal customers. Multicloud functionality and support is on the way in future releases.

      3. To Help IT Pros Make Time for What Matters Most  

      Finally, the most important reason why we built Intelligent Monitoring: IT professionals want to be the centers of innovation at their organizations, but are too weighed down by time-intensive, routine infrastructure tasks to achieve that goal.

      Late last year, we interviewed 500 IT professionals about time spent on infrastructure upkeep. Unsurprisingly, the No. 1 task IT professionals spend too much time on is infrastructure monitoring. Additionally, a majority of respondents (52 percent) say the time it takes their teams to address critical patches and updates leaves their organizations exposed to security risks.

      While maintaining the performance and ensuring the security of infrastructure is critically important, our survey respondents acknowledged that the extent of the effort detracts from the broader mission of their job function. If hypothetically given back 16 hours of their week previously spent monitoring and troubleshooting infrastructure, IT pros said a majority of that time would be dedicated to activities like researching new tech, developing new apps and/or products, skill development and application performance optimization.

      Overall, 77 percent of IT professionals agreed they could “bring more value to my organization if I spent less time on routine tasks like server monitoring and maintenance.”

      We believe that Intelligent Monitoring and INAP Managed Bare Metal are just the vehicles for giving IT the time to make a truly value-added difference. I hope you’ll take a moment to see for yourself.


      Jennifer Curry
      • SVP, Global Cloud Services


      Jennifer Curry is SVP, Global Cloud Services. She is an operational and technology leader with over 17 years of experience in the IT industry, including seven years in the hosting/cloud market.


