In constrained optimization problems with very simple (constant) constraints, it is sometimes useful to simply use a global optimization algorithm with an appropriate one-to-one transformation of the parameters. Suppose we want to optimize an objective function $f(x)$ where $x \in C \subseteq \mathbb{R}$. If there exists a continuous one-to-one mapping $g : C \to \mathbb{R}$ such that for every $y \in \mathbb{R}$ there is a unique $x \in C$ with $y = g(x)$ (that is, $x \in C$ if and only if $x = g^{-1}(y)$ for some $y \in \mathbb{R}$), then it is equivalent to optimize $f(g^{-1}(y))$ over $y \in \mathbb{R}$, which is unconstrained. If $y^*$ is the resulting optimum, then we can apply the inverse transformation to obtain $x^* = g^{-1}(y^*)$.
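As a minimal sketch of the idea (the objective $f(x) = (x-2)^2$, the constraint $x > 0$, the substitution $x = e^y$, and the hand-rolled gradient descent are all illustrative choices, not part of the original text):

```python
import math

# Minimize f(x) = (x - 2)^2 subject to x > 0 by substituting x = exp(y)
# and running plain gradient descent on the unconstrained h(y) = f(exp(y)).

def f(x):
    return (x - 2.0) ** 2

def h(y):
    return f(math.exp(y))  # unconstrained objective in y

def num_grad(fun, y, eps=1e-6):
    # central-difference approximation to the derivative
    return (fun(y + eps) - fun(y - eps)) / (2 * eps)

y = 0.0                    # any real starting point is feasible
for _ in range(2000):
    y -= 0.1 * num_grad(h, y)

x_star = math.exp(y)       # map the optimum back into the constraint set
print(x_star)              # close to 2.0
```

Any unconstrained optimizer (e.g. a derivative-free global method) could replace the gradient loop here; the point is only that the search over $y$ never leaves the feasible region.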
Below is a table of useful transformations of this type. Most of them are not very difficult to derive, but it seems useful to have a list of them in one place. $C$ denotes the constraint set; the transformations can be scaled and shifted as needed for other intervals.

| $C$ | $y = g(x)$ | $x = g^{-1}(y)$ |
| --- | --- | --- |
| $(0, \infty)$ | $y = \log x$ | $x = e^y$ |
| $(-\infty, 0)$ | $y = \log(-x)$ | $x = -e^y$ |
| $(0, 1)$ | $y = \log \frac{x}{1 - x}$ | $x = \frac{1}{1 + e^{-y}}$ |
| $(-1, 1)$ | $y = \tanh^{-1} x$ | $x = \tanh y$ |
Note that the mapping $x = \frac{1}{1 + e^{-y}}$ to $(0, 1)$ is the sigmoid function.
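The sigmoid transformation can be used the same way. As a sketch, here is a maximum-likelihood estimate of a Bernoulli probability $p \in (0, 1)$ with $p = \mathrm{sigmoid}(y)$ optimized over unconstrained $y$ (the data, $k = 3$ successes in $n = 10$ trials, and the descent loop are illustrative assumptions):

```python
import math

# MLE of a Bernoulli probability: minimize the negative log-likelihood
# -(k log p + (n - k) log(1 - p)) with p = sigmoid(y), y unconstrained.
# The closed-form answer is k / n = 0.3, so we can check the result.

k, n = 3, 10

def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

def nll(y):
    p = sigmoid(y)
    return -(k * math.log(p) + (n - k) * math.log(1.0 - p))

def num_grad(fun, y, eps=1e-6):
    return (fun(y + eps) - fun(y - eps)) / (2 * eps)

y = 0.0
for _ in range(5000):
    y -= 0.05 * num_grad(nll, y)

p_star = sigmoid(y)        # back-transform; lies in (0, 1) by construction
print(p_star)              # close to 0.3
```

Because $p$ is produced by the sigmoid, the optimizer can never propose an infeasible value such as $p \le 0$ or $p \ge 1$, so no clipping or penalty terms are needed.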
Finally, a multidimensional transformation is useful when the parameters represent probabilities. Suppose $x \in \Delta^{n-1}$, where $\Delta^{n-1} = \left\{ x \in \mathbb{R}^n : x_i > 0, \ \sum_{i=1}^{n} x_i = 1 \right\}.$ Here, $\Delta^{n-1}$ is the standard simplex. The corresponding mapping is $g^{-1} : \mathbb{R}^{n-1} \to \Delta^{n-1}$, where $x_i = \frac{e^{y_i}}{1 + \sum_{j=1}^{n-1} e^{y_j}}$ for $i = 1, \dots, n-1$ and $x_n = \frac{1}{1 + \sum_{j=1}^{n-1} e^{y_j}}.$ This is the same mapping that arises in the multinomial logit and conditional logit regression models.
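The simplex mapping above can be sketched as follows. The example fits multinomial probabilities to a set of observed counts by unconstrained gradient descent over $y \in \mathbb{R}^{n-1}$; the counts and the descent loop are illustrative assumptions, but `to_simplex` implements exactly the mapping in the text:

```python
import math

# Map y in R^(n-1) onto the standard simplex (the multinomial-logit
# mapping with category n as the baseline), then fit probabilities to
# observed counts.  The MLE proportions are counts / total = [0.2, 0.3, 0.5].

counts = [2.0, 3.0, 5.0]
n = len(counts)

def to_simplex(y):
    """x_i = exp(y_i) / (1 + sum_j exp(y_j)) for i < n; x_n = 1 / (1 + sum)."""
    denom = 1.0 + sum(math.exp(v) for v in y)
    return [math.exp(v) / denom for v in y] + [1.0 / denom]

def nll(y):
    # multinomial negative log-likelihood of the counts
    x = to_simplex(y)
    return -sum(c * math.log(p) for c, p in zip(counts, x))

def num_grad(fun, y, eps=1e-6):
    g = []
    for i in range(len(y)):
        yp, ym = list(y), list(y)
        yp[i] += eps
        ym[i] -= eps
        g.append((fun(yp) - fun(ym)) / (2 * eps))
    return g

y = [0.0] * (n - 1)        # n - 1 free parameters for n probabilities
for _ in range(5000):
    g = num_grad(nll, y)
    y = [v - 0.05 * gi for v, gi in zip(y, g)]

x_star = to_simplex(y)
print(x_star)              # close to [0.2, 0.3, 0.5]
```

Note that only $n - 1$ unconstrained parameters are optimized: the $n$-th probability is pinned down by the sum-to-one constraint, exactly as in the multinomial logit model where one category serves as the baseline.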