A Quick Guide to Planning, Scheduling, and Optimization

What is planning, scheduling & optimization?

Seth Adler
07/08/2019

Planning & Scheduling

Artificial Intelligence, by definition, is the process of augmenting or simulating human intelligence. Before trying to solve a complex problem in the physical world, any logical human being will first plan a course of action in order to reach her goal. For example, a seasoned chef whose goal is to prepare a delicious meal will almost certainly plan ahead: listing the necessary ingredients, deciding on portion sizes, and choosing cooking methods.

AI implements this same process of planning. An AI-based system uses planning to determine ‘what steps to take’ (planning) and ‘when to carry out each step’ (scheduling) in order to achieve a goal. The planning process includes the following:

  • an initial state, I
  • a goal state, G
  • some actions, A, defined in a domain

A plan enables an AI system to perform a series of actions from A that turns I into G. In other words, the plan tells the AI what to do and when to do it. For an autonomous vehicle, I might be the car sitting parked in neutral; G would be the car out of the parking spot and at the front entrance; and A would include actions such as move forward/backward, steer left/right, scan for obstacles, and adjust course.
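To make this formulation concrete, here is a minimal sketch in Python; the state names, the action table, and the plan_reaches_goal helper are purely illustrative and not taken from any particular planning library.

    # Minimal sketch of a planning problem: an initial state I, a goal state G,
    # and a set of actions A, each mapping one state to the next.
    # All state and action names are made up for illustration.
    initial_state = "parked_in_spot"        # I
    goal_state = "at_front_entrance"        # G

    # Actions defined in the domain: current state -> {action: next state}
    actions = {
        "parked_in_spot": {"move_backward": "out_of_spot"},
        "out_of_spot":    {"steer_right":   "facing_exit"},
        "facing_exit":    {"move_forward":  "at_front_entrance"},
    }

    def plan_reaches_goal(plan, state=initial_state):
        """Apply each action in the plan; return True if we end in G."""
        for action in plan:
            state = actions.get(state, {}).get(action)
            if state is None:               # action not applicable here
                return False
        return state == goal_state

    print(plan_reaches_goal(["move_backward", "steer_right", "move_forward"]))  # True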

Most of the systems developed are domain-independent and therefore allow planning in different application areas. What makes domain-independent planning so valuable is that one solver can be used to solve many problems. This is especially helpful for a small company or an individual researcher with a problem who cannot afford to pay a team of software engineers; for bigger companies, a domain-independent solver can get things up and running more quickly.

The common term for an intelligent system that implements planning is ‘agent’. The primary characteristics of software agents and physical agents (robots) are that they are autonomous (they decide for themselves what to do) and intelligent (they can work out the best way to act in a complex and unpredictable environment). We delegate goals to an agent without having to worry about exactly how those goals should be achieved.

The plan implemented by the agent enables it to search through every possible action or route to reach the set goal. There are two types of search that the agent performs in order to achieve its goal:

  • Breadth First Search (BFS)
  • Depth First Search (DFS)

The main difference between BFS and DFS is the order in which the agent explores its available actions. When it comes to learning, there are generally two approaches one can take: go wide and try to cover as much of the spectrum of a field as possible, or go deep and get really specific with the topic of interest. Most good learners know that, to some extent, everything learned in life, from algorithms to basic life skills, involves some combination of these two approaches. The names BFS and DFS are quite intuitive and correspond to the wide and the deep styles of searching, respectively. The diagram below (refer to Figure 1) shows a basic unit of DFS and BFS. Each node represents an action available to the agent. The nodes can be thought of as leaves on a tree, and the lines connecting them as the tree’s branches.

Figure 1 Depth-First Search vs. Breadth-First Search

Depth-first search traverses down through one branch of a tree until it reaches a leaf, then works its way back toward the “trunk” of the tree; in other words, a DFS explores the subtrees of a tree one at a time, exhausting each before moving on. The only real alternative to travelling down one branch and then another is to travel down the tree section by section, that is, level by level. That is exactly what BFS does.
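As a rough illustration of the two traversal orders, the sketch below (in Python) runs both searches over a small hand-built tree; the tree and its node labels are invented for the example.

    from collections import deque

    # A small tree: each node maps to its child nodes (labels are made up).
    tree = {
        "A": ["B", "C"],
        "B": ["D", "E"],
        "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": [],
    }

    def bfs(root):
        """Breadth-first: visit nodes level by level using a queue."""
        order, queue = [], deque([root])
        while queue:
            node = queue.popleft()
            order.append(node)
            queue.extend(tree[node])
        return order

    def dfs(root):
        """Depth-first: follow one branch down to a leaf before backtracking."""
        order, stack = [], [root]
        while stack:
            node = stack.pop()
            order.append(node)
            stack.extend(reversed(tree[node]))  # keep left-to-right visiting order
        return order

    print(bfs("A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
    print(dfs("A"))  # ['A', 'B', 'D', 'E', 'C', 'F', 'G']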

The biggest challenge with a search algorithm such as BFS or DFS is the potentially enormous set of candidate solution plans, which significantly increases the time required to solve a problem. For instance, if there are 20 actions to choose from and the solution plan is 20 steps long, then in the worst case the search algorithm would need to examine all 20^20 possibilities, a number greater than 1 followed by 26 zeroes. That is far more than the estimated age of the universe in seconds, so a classic search algorithm could not work through them all within a lifetime. Therefore, in order for autonomous AI systems to solve complex real-world problems, the search has to be steered toward the plans that look most promising. This is where ‘optimization’ plays a key role in the functioning of any AI.
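The arithmetic behind that claim is easy to check; the figure of roughly 13.8 billion years used below for the age of the universe is the usual rough estimate.

    # 20 possible actions at each of 20 steps in the plan
    possibilities = 20 ** 20
    print(possibilities)                      # 104857600000000000000000000, about 1.05e26

    # Age of the universe in seconds, assuming roughly 13.8 billion years
    age_of_universe_seconds = 13.8e9 * 365.25 * 24 * 3600
    print(f"{age_of_universe_seconds:.2e}")   # about 4.35e+17

    print(possibilities > age_of_universe_seconds)  # True, by roughly eight orders of magnitude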

 


Listen to the Director of Digital Operations for Western Union share his thoughts on thinking, planning, and evaluating.

Source: The AIIA Network Podcast

 

Optimization

A more advanced type of search (beyond BFS and DFS) is an approach in which the agent picks a random node on a model tree (refer to Figure 1) and guesses that node to be the goal state. The agent then estimates the probability of that guessed node being the correct goal state. But in order to do so, the agent first needs a lot of data related to the problem it is trying to solve, so that it can be trained to recognize what a correct goal state really looks like. This, in a nutshell, is an integral part of Machine Learning and Neural Networks.
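A minimal sketch of that idea is shown below; the candidate nodes and the hand-coded scoring function are stand-ins for what would, in a real system, be a model trained on data about the problem.

    import random

    # Candidate nodes the agent might guess as the goal (made-up labels).
    candidates = ["A", "B", "C", "D", "E", "F", "G"]

    def goal_probability(node):
        """Stand-in for a trained model: estimate how likely `node` is the goal.
        In practice this score would be learned from data about the problem."""
        hand_coded_scores = {"A": 0.05, "B": 0.10, "C": 0.15, "D": 0.05,
                             "E": 0.20, "F": 0.10, "G": 0.35}
        return hand_coded_scores[node]

    guess = random.choice(candidates)         # the agent's first random guess
    print(guess, goal_probability(guess))

    # After scoring, the agent prefers the most promising node it has evaluated.
    print("most promising:", max(candidates, key=goal_probability))  # 'G'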

So where does optimization fit into all of this? Once the agent makes its first random guess, chances are that the guess will be way off from the correct value. Imagine you have never explored, or even seen, the streets of a city in a foreign country, and you are asked to guess the shortest route from the airport to downtown; it is not very likely that your random selection out of the hundreds of connecting roads will turn out to be the optimal one. The same goes for an agent in an AI system. After calculating the probability of its guess (how accurate it is compared to the true value), the agent has to make adjustments so that its prediction gets closer to the goal. Optimization is the process by which the agent finds the best way to make all the adjustments needed to increase the accuracy of its guess, i.e. to bring its prediction as close as possible to reality and the goal state, in the most efficient manner.

One of the most common methods of optimization in the field of AI is known as ‘Gradient Descent’. As Lex Fridman (MIT) puts it, “A gradient measures how much the output of a function changes if you change the inputs a little bit.” It measures the change in all the weights (the necessary adjustments to the predictions) with respect to the change in error. You can also think of a gradient as the slope of a function: the higher the gradient, the steeper the slope and the faster a model can learn, but if the slope is zero, the model stops learning. The simplest way to understand the application of gradient descent in the physical world is to imagine a person standing at the peak of a mountain, as shown in the diagram below (refer to Figure 2).

Figure 2 A Man Trying To Climb Down A Mountain

The man’s goal is to climb down the mountain, and in order to do so he needs to plan each step carefully. However, if he chooses a route that takes too long, he might freeze to death. It is therefore very important for him to choose the route that will take him down in the most efficient manner possible. This process of carefully calculating each small step in the most efficient manner is exactly what optimization means.

In mathematical terms, the mountain can be treated as an upside-down ‘U-shaped’ curve, and the way down from the maximum point on the curve (the top of the mountain) to the minimum point (the bottom) can be found by calculating the gradient (derivative) of the curve. The agent in an AI system optimizes through many small incremental steps toward its goal state, reducing its error at each step; in mathematical terms, this means taking the ‘partial derivatives’ of the function and following the local gradient downhill. Hence the term ‘gradient descent’.
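A minimal sketch of gradient descent on a one-variable function is shown below; the quadratic loss, the starting point, and the learning rate are arbitrary choices for illustration.

    # Gradient descent on a simple valley: loss(x) = (x - 3)^2, minimum at x = 3.
    def loss(x):
        return (x - 3) ** 2

    def gradient(x):
        # Derivative of (x - 3)^2 with respect to x
        return 2 * (x - 3)

    x = 10.0              # start far from the minimum (the "top of the mountain")
    learning_rate = 0.1   # size of each small step

    for _ in range(50):
        x -= learning_rate * gradient(x)      # step downhill, against the slope

    print(round(x, 4))    # close to 3.0, the bottom of the valley

Each update moves x a small distance down the slope, which is exactly the careful, efficient step the mountain analogy describes.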

Planning and optimization are the building blocks of every branch of AI. The elegant mathematical and statistical properties of optimization are what make AI systems so powerful, unique, and almost human-like. Computer scientists and engineers are still conducting a great deal of research to develop more powerful optimization techniques.

 


Learn about the nuances of optimizing your Robotic Process Automation in this webinar

Source: AIIA.net: AI and Intelligent Automation

 

References

Coles, A. (2013, July 7). Introduction of AI Planning.
            kcl.com

Joshi, V. (2017, April 10). Breaking down Breadth-First Search.
            medium.com

Khan Academy. (n.d.). The Breadth-First Search Algorithm.
            khanacademy.com

Google Developers. (n.d.). Reducing Loss: Gradient Descent.
            developers.google.com

Udacity. (n.d.). Deep Learning by Google.
            udacity.com

Donges, N. (2018, May 7). Gradient Descent in a Nutshell.
            medium.com

Domingos, P. (n.d.). A Few Useful Things to Know about Machine Learning.
            washington.edu


 

 

 

           

