Routing and Resource Allocation with Probabilistic Objectives

In this thesis, we study routing and resource allocation problems with probabilistic objective functions. This class of problems has received limited attention in the literature despite its promising applications. A probabilistic objective function can incorporate business targets into the problem model and represent the risk attitude of a decision maker. Generally, if the probability of reaching a business target is low, a decision maker tends to take risks, and if the chance of reaching the target is high, she tends to avoid risks.

First, we study the Orienteering Problem with Stochastic Rewards (OPSR), where the objective is to maximize the probability of reaching a target reward level by constructing a tour that visits a subset of customer locations with random rewards, subject to a time constraint. We develop exact solution methods and bi-objective genetic algorithms, analyze the characteristics of the problem, and test the algorithms computationally. The experimental results identify conditions under which the OPSR yields significant improvements in the probability of reaching the target reward compared with the solution of the deterministic variant of the problem.

Next, we present the Static Stochastic Knapsack Problem (SSKP), a probabilistic extension of the classical knapsack problem. We present specialized algorithms for this problem and then turn our attention to a more interesting problem: the Adaptive Knapsack Problem (AKP). The AKP is similar to the SSKP with one important difference: in the AKP, items are selected sequentially, with each selection depending on previous reward realizations. We formulate the AKP as a Dynamic Programming (DP) problem for discrete random rewards and propose a heuristic that mixes adaptive and static policies to overcome "the curse of dimensionality" in the DP. We then extend the proposed heuristic to problems with Normally distributed rewards. The proposed heuristic is shown to always outperform the optimal static policy, and our numerical study indicates that much of the benefit of a fully adaptive policy can be obtained by using a policy with limited look-ahead capabilities.
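To make the probabilistic objective concrete, the Python sketch below evaluates the probability of reaching a reward target for a fixed selection of items with independent, Normally distributed rewards, the distributional setting to which the heuristic is extended. This is a minimal illustration under those assumptions, not an algorithm from the thesis; the function name prob_reach_target, the item means and standard deviations, and the target value are all hypothetical.

    # A minimal sketch of the probabilistic objective, assuming independent,
    # Normally distributed item rewards. All numeric values are hypothetical.
    from math import erf, sqrt

    def prob_reach_target(mus, sigmas, target):
        """P(sum of rewards >= target) for independent Normal rewards.

        The sum of independent N(mu_i, sigma_i^2) variables is
        N(sum mu_i, sum sigma_i^2), so the objective reduces to a single
        Normal tail probability.
        """
        mu = sum(mus)
        var = sum(s * s for s in sigmas)
        if var == 0:
            return 1.0 if mu >= target else 0.0
        z = (target - mu) / sqrt(var)
        # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); return the upper tail.
        return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

    # Two candidate selections with the same expected total reward (10.0)
    # but different variances, evaluated against an ambitious target.
    safe = prob_reach_target([5.0, 5.0], [0.5, 0.5], target=12.0)
    risky = prob_reach_target([5.0, 5.0], [3.0, 3.0], target=12.0)
    print(f"safe:  {safe:.4f}")   # ~0.0023
    print(f"risky: {risky:.4f}")  # ~0.3187

The example also illustrates the risk attitude described above: both selections have the same expected reward, but once the target exceeds that expectation, the higher-variance selection has a far better chance of reaching it, so a decision maker facing a low success probability is drawn to the riskier choice.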
