OptSolve++, in development since 1998, quickly brings powerful optimization capabilities to your C++ applications, giving developers a more cost-effective option than developing from scratch or relying on unproven solutions. Designed and developed for re-use, OptSolve++ offers a set of scalable, reliable pre-built software libraries that increase your productivity and substantially reduce your schedule and resource risk.
Powerful, standard optimization algorithms; easy-to-use object-oriented design
OptSolve++ offers efficient and robust optimization of user-defined merit functions and an easy-to-use object-oriented programming API. A canonical interface to the included optimizers is provided, making it straightforward to substitute one optimizer for another.
Taking full advantage of templating techniques and object-oriented design, OptSolve++ provides maximum flexibility in the choice of argument and return type for the merit function and in the configuration of options for the built-in algorithms.
OptSolve++ allows you to choose between algorithms requiring analytic derivatives and those that do not require gradient information. Functions for estimating numerical derivatives are also available.
With extensible object hierarchies that let you readily implement new algorithms or create a thin interface to existing C or C++ algorithms, OptSolve++ helps accelerate project deployment by reducing the planning, development and testing workload.
OptSolve++ library of nonlinear optimization algorithms
- Powell - A good choice for problems where the slowness of nonlinear simplex is an issue but a gradient calculation is not available. It is not as robust as nonlinear simplex, but it converges to a solution faster.
- Conjugate Gradient - Very robust; however, it requires the gradient of the function. A numerical gradient can be used, but, as with all numerical approximations, the accuracy of the solution suffers as a result.
- Nonlinear Simplex - Very accurate for low-dimensional optimization problems, and since it does not require a gradient calculation it can be used when that information is unavailable or the function is too costly to evaluate repeatedly. Nonlinear simplex algorithms are quick to come "in range" of the minimum, but often very slow to reach a high degree of accuracy. This slow convergence is the trade-off for not relying on estimates of the function or gradient and the imprecision they introduce.
- Levenberg-Marquardt - The de facto standard for nonlinear parametric optimizations.
OptSolve++ - Nonlinear Optimization