A question posted to OR Stack Exchange, "Linear optimization problem with user-defined cost function", caught my eye. The question has gone through multiple edits, and the title is a bit misleading: the objective function is, at least in some cases, nonlinear. The constraints, on the other hand, are linear and very simple. The user is looking for weights to assign to $n$ vectors, where the weights $x_i$ must satisfy $$\sum_{i=1}^n x_i = 1\\x \ge 0.$$ Emma, the original poster, put a working example (in Python) on GitHub. The simplified version of her cost function divides one linear expression by another, with an adjustment to deal with division-by-zero errors (converting the resulting NaN to 0).
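To make that concrete, here is a toy version in R (the language I used for the rest of this experiment). The ratio form and the coefficient vectors `a` and `b` are stand-ins for illustration, not Emma's actual code.

```r
# Hypothetical stand-in for the simplified cost: the ratio of two linear
# expressions in x, with a 0/0 result (NaN) converted to 0. The coefficient
# vectors a and b are invented for illustration.
cost <- function(x, a, b) {
  r <- sum(a * x) / sum(b * x)
  if (is.nan(r)) 0 else r   # in R, 0/0 yields NaN; convert it to 0
}
```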
The feasible region of the problem is a simplex, which triggered a memory of the Nelder-Mead algorithm (known as the "Nelder-Mead simplex algorithm" when I learned it, despite the potential for confusion with Dantzig's simplex algorithm for linear programs). The Nelder-Mead algorithm, published in 1965, attempts to optimize a nonlinear function using only function evaluations (no derivatives), with no general guarantee of convergence to the optimum. It is based on an earlier algorithm by Spendley, Hext and Himsworth (1962), and I'm pretty sure there have been tweaks to Nelder-Mead over the subsequent years.
The Nelder-Mead algorithm is designed for unconstrained problems. That said, my somewhat fuzzy recollection was that Nelder-Mead starts with a simplex (hopefully containing an optimal solution) and progressively shrinks the uncertainty region, each time producing a simplex that is a subset of the previous one. So if we start with the unit simplex (the convex hull of the vertices $(1,0,\dots,0), (0,1,\dots,0), \dots, (0,0,\dots,1)$), which is the full feasible region, every subsequent simplex should consist of feasible points. It turns out I was not quite right. Depending on the parameter values you use, there is one step (expansion) that can leave the current simplex and thus possibly violate the sign restrictions. That is easily fixed, though, by checking the step size and shrinking it if necessary, as sketched below.
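Here is a minimal sketch of that fix (the function and variable names are mine, not taken from my notebook code): given the centroid of the retained vertices and a search direction, halve the step until the trial point is nonnegative.

```r
# Sketch of the feasibility fix: compute the trial point x_o + t * d and
# halve the step t until all coordinates are nonnegative.
clamp_step <- function(x_o, d, t) {
  x_t <- x_o + t * d
  while (any(x_t < 0) && t > 1e-12) {
    t <- t / 2                # shrink the step size
    x_t <- x_o + t * d
  }
  pmax(x_t, 0)                # clean up tiny negative round-off
}
```

Note that shrinking the step never breaks the equality constraint: the centroid sums to 1 and the trial direction (a difference of two points that each sum to 1) sums to 0, so only nonnegativity needs checking.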
There are several R packages containing a Nelder-Mead function, but most of them look like they are designed for univariate optimization, and the one multivariate implementation I found that allowed specification of the initial simplex would not work for me. So I coded my own, based on the Wikipedia page, which was easy enough. I used what that page describes as typical values for the four step size parameters. It hit my convergence limit (too small a change in the simplex) after 29 iterations, producing a solution that appears to be not quite optimal but close.
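For the curious, here is roughly what that looks like: a compact reconstruction from the Wikipedia pseudocode (not the notebook code itself), using the typical parameter values (reflection $\alpha=1$, expansion $\gamma=2$, contraction $\rho=1/2$, shrink $\sigma=1/2$) and the `clamp_step` helper from the previous sketch to keep trial points feasible.

```r
# Compact Nelder-Mead sketch (reconstructed from the Wikipedia pseudocode).
# simplex is a matrix whose rows are the vertices.
nelder_mead <- function(f, simplex, alpha = 1, gamma = 2, rho = 0.5,
                        sigma = 0.5, tol = 1e-8, maxit = 500) {
  fv <- apply(simplex, 1, f)
  for (it in seq_len(maxit)) {
    ord <- order(fv)
    simplex <- simplex[ord, , drop = FALSE]       # sort best to worst
    fv <- fv[ord]
    m <- nrow(simplex)
    if (max(dist(simplex)) < tol) break           # simplex too small: stop
    x_o <- colMeans(simplex[-m, , drop = FALSE])  # centroid, worst excluded
    x_r <- clamp_step(x_o, x_o - simplex[m, ], alpha)  # feasible reflection
    f_r <- f(x_r)
    if (f_r < fv[1]) {                            # best so far: try expanding
      x_e <- clamp_step(x_o, x_r - x_o, gamma)    # feasible expansion
      f_e <- f(x_e)
      if (f_e < f_r) { simplex[m, ] <- x_e; fv[m] <- f_e }
      else { simplex[m, ] <- x_r; fv[m] <- f_r }
    } else if (f_r < fv[m - 1]) {                 # better than second worst
      simplex[m, ] <- x_r; fv[m] <- f_r
    } else {                                      # contract toward the worst
      x_c <- x_o + rho * (simplex[m, ] - x_o)     # convex combo: feasible
      f_c <- f(x_c)
      if (f_c < fv[m]) { simplex[m, ] <- x_c; fv[m] <- f_c }
      else {                                      # shrink toward the best
        for (i in 2:m) {
          simplex[i, ] <- simplex[1, ] + sigma * (simplex[i, ] - simplex[1, ])
          fv[i] <- f(simplex[i, ])
        }
      }
    }
  }
  list(par = simplex[1, ], value = fv[1], iterations = it)
}

# Usage: start from the vertices of the unit simplex (here n = 4, with a
# toy objective whose optimum is the barycenter).
res <- nelder_mead(function(x) sum((x - 0.25)^2), diag(4))
```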
Just for comparison purposes, I thought I would try a genetic algorithm (GA). GAs are generally not designed for constrained problems, although there are exceptions. (Search "random key genetic algorithm" to find one.) That's easy to finesse in our case. Getting a GA to produce only nonnegative values is easy: you just require the code that generates new solutions (used to seed the initial population, and possibly for immigration) and the code that mutates existing solutions to use only nonnegative numbers. That might actually be the default in a lot of GA libraries. Crossover (the GA term for combining parent solutions to produce children) takes care of itself. So we just need to enforce the lone equation constraint, which we can do by redefining the objective function. We let the GA produce solutions without regard to the sum of their components, and instead optimize the function $$\hat{f}(x)=f\left(\frac{x}{\sum_{i=1}^n x_i}\right)$$ where $f()$ is the original objective function.
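In code, the wrapper is essentially a one-liner (here `f` stands for the original objective, whatever it is):

```r
# Normalization wrapper: scale any nonnegative candidate onto the unit
# simplex before evaluating the original objective f. An all-zero candidate
# has no valid scaling, so it is penalized with +Inf.
f_hat <- function(x) {
  s <- sum(x)
  if (s == 0) Inf else f(x / s)
}
```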
R has multiple GA packages. I used the `genalg` package in my experiments. Running 100 generations with a population of size 200 took several seconds (longer than Nelder-Mead took), but it produced a somewhat better solution. Since the GA is a randomized algorithm, running it repeatedly will produce different results, some worse, possibly some better. You could also try restarting Nelder-Mead when the simplex gets too small, starting from a new simplex centered around the current best solution, which might improve on the solution obtained.
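A minimal version of that run might look like the sketch below, assuming the usual `rbga()` interface from `genalg` (which minimizes `evalFunc` over box constraints) and reusing `f_hat` from above; the dimension `n` is made up for illustration.

```r
library(genalg)

n <- 5  # hypothetical problem dimension
# Box constraints [0, 1] keep every gene nonnegative; f_hat handles the
# equality constraint by rescaling. rbga() minimizes evalFunc by default.
ga <- rbga(stringMin = rep(0, n), stringMax = rep(1, n),
           popSize = 200, iters = 100, evalFunc = f_hat)

# Best member of the final population, rescaled onto the unit simplex.
best <- ga$population[which.min(ga$evaluations), ]
best <- best / sum(best)
```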
This was mostly just to satisfy my curiosity. My R code for both the Nelder-Mead and GA approaches is in an R notebook that you are free to download.