Comments on OR in an OB World: "A Minimum Variance Convex Combination" (blog by Paul A. Rubin)

---

Paul A. Rubin (2013-09-11, 11:54):

Thanks for letting me know how it turned out. (Also, I'm gratified to know that the application is medical imaging and not, say, pricing of bizarre financial instruments. :-)) Given that I thought you were dealing with sample covariance or correlation matrices (somewhat noisy buggers), I had thought that using a really tight convergence tolerance would probably be wasteful, but I also figured you were the one to decide that, and I see you did.

Glad it's working for you.

---

Anonymous (2013-09-11, 11:36):

(I'm the original poster on Quora.)
Just wanted to let you know: we've implemented your algorithm in Python and C++ with great success!

For our initial tests, we had 6-by-6 matrices. The algorithm usually converges in around 10 iterations (with a tolerance of 1e-4; precision is not very important for us) and gives very good solutions. Our first run on a full image (the application is medical image segmentation), with around 1.5 million voxels and thus 1.5 million problems, took less than five minutes, which is very reasonable from our point of view.

Thanks again for taking the time to think about my problem and for coming up with such a nice solution!

---

Paul A. Rubin (2013-09-06, 11:56):

Paul: Thanks for the comment and reference. I'll confess that my choice of method was motivated by intellectual laziness: I did not want to track changes to the active set, or, equivalently, to worry about which face I would be projecting the gradient onto. It turns out that the work per iteration is pretty minimal. The key question is whether there are problem instances for which my method would require too many iterations; if so, gradient projection using the method you cite might pay for its extra cost per iteration by cutting the iteration count sufficiently.

---

Paul Khuong (2013-09-06, 06:01):

FWIW, (Euclidean) projection onto the feasible set is an easy problem [1], with a nifty combinatorial approach for solving the dual obtained by applying Lagrangian relaxation to the convexity constraint.
If the Hessian is well enough conditioned, an accelerated constrained gradient method should provably converge nicely (in O(log(1/ε)) iterations), and each iteration is just a gradient evaluation and one (or two) projection step(s).

[1] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra, "Efficient Projections onto the ℓ1-Ball for Learning in High Dimensions," ICML 2008.
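For readers curious how the projection Khuong describes looks in practice, here is a minimal NumPy sketch. The simplex projection follows the sort-and-threshold algorithm of Duchi et al. [1]; the surrounding projected-gradient loop (minimizing w'Σw over the probability simplex) is my own illustration, not the algorithm from the original post, and the function names and step-size choice are assumptions for the sketch.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {w : w >= 0, sum(w) = 1}, via the sort-based algorithm of
    Duchi et al. (2008)."""
    n = len(v)
    u = np.sort(v)[::-1]            # sort components in decreasing order
    css = np.cumsum(u)
    # Largest index rho (0-based) with u[rho] * (rho+1) > css[rho] - 1.
    rho = np.nonzero(u * np.arange(1, n + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def min_variance_weights(sigma, tol=1e-4, max_iter=1000):
    """Projected gradient descent for min_w w' sigma w over the simplex.
    Illustrative only: fixed step 1/L, where L = 2 * lambda_max(sigma)
    is the Lipschitz constant of the gradient 2 * sigma @ w."""
    n = sigma.shape[0]
    w = np.full(n, 1.0 / n)                      # start at the centroid
    step = 1.0 / (2.0 * np.linalg.eigvalsh(sigma)[-1])
    for _ in range(max_iter):
        w_new = project_to_simplex(w - step * 2.0 * (sigma @ w))
        if np.linalg.norm(w_new - w, np.inf) < tol:
            return w_new
        w = w_new
    return w
```

With sigma = diag(1, 2), the optimal weights are (2/3, 1/3), and the loop reaches them in a handful of iterations, consistent with the iteration counts reported above.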