### The Kelly solution for one continuous distribution of possible returns

By Elliot Noma with comments from Yu Bai

The Kelly criterion is usually invoked to solve for the optimal bet size in a two-outcome gamble. In other posts, we have considered the three-outcome solution. Here we consider the optimal bet size for a specific family of continuous distributions: uniform distributions of outcomes over a unit interval. We show how a solution is derived and how the shape of the solution space varies as a function of the distribution of gamble outcomes.

As in the two-outcome case, we consider the long-run return for an infinitely long series of bets and find the bet size that maximizes that return. For the continuous case, the long-run return is $z=(1+Sa_1)^{p_1}(1+Sa_2)^{p_2}(1+Sa_3)^{p_3}...$

with bet size S and outcomes $a_i$ that occur with probability density $p_i$.

Maximizing z is equivalent to maximizing its logarithm: $log(z)={p_1}log(1+Sa_1)+{p_2}log(1+Sa_2)+{p_3}log(1+Sa_3)...$

or $log(z)=\int{p_x}log(1+Sx)\,dx$.                            (equation 1)

We find the bet size S that maximizes the return by differentiating the integral with respect to S and setting the derivative to zero: $\frac{d log(z)}{dS}=\int\frac{p_xx}{1+Sx}\,dx=0$

The density function of the outcomes, ${p_x}$, can be any distribution, but the bet size can never be greater than $\frac{-1}{\min(x \mid p_x>0)}$. This means that the optimal bet size is zero for any distribution that is unbounded below, such as the Gaussian distribution.

To illustrate the leverage calculations, we consider a family of rectangular distributions of outcomes running from ${x_0}$ to ${1 + x_0}$. Since all outcomes in the range from ${x_0}$ to ${1 + x_0}$ are equally probable and the probabilities integrate to one (so $p_x = 1$ on the interval), we can rewrite the optimization problem as $\frac{d log(z)}{dS}=\int_{x=x_0}^{1 + x_0}\frac{x}{1+Sx}\,dx=0$

We solve for the integral to get $\frac{x}{S} - \frac{1}{S^2}log(1 + Sx)\bigg\vert_{x=x_0}^{1 + x_0}=0$

which can be rewritten $1 - \frac{1}{S}log(\frac{1 + S(1 + x_0)}{1 + S x_0})=0$

or $e^S = \frac{1 + S(1 + x_0)}{1 + Sx_0}$                                   (equation 2)
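As a quick spot check on the integration step above (a Python sketch of our own; the values of $x_0$ and S below are arbitrary test inputs), the antiderivative can be compared against direct midpoint-rule quadrature:

```python
import math

# Spot-check: the antiderivative of x/(1 + S*x) used above is
# F(x) = x/S - log(1 + S*x)/S**2.  Compare its change over [x0, 1 + x0]
# with midpoint-rule quadrature.  x0 and S are arbitrary test values.
def F(x, S):
    return x / S - math.log(1.0 + S * x) / S**2

x0, S = -0.4, 0.8                      # any values with 1 + S*x0 > 0
exact = F(1.0 + x0, S) - F(x0, S)

n = 100_000                            # midpoint rule on [x0, 1 + x0]
h = 1.0 / n
quad = h * sum((x0 + (i + 0.5) * h) / (1.0 + S * (x0 + (i + 0.5) * h))
               for i in range(n))
```

The two values agree to well within the quadrature error, confirming the antiderivative behind equation 2.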

There is no closed-form solution for S given a lower boundary ${x_0}$. However, numerical methods can be used to solve for S, using the fact that S takes values between 0 and $\frac{-1}{x_0}$. The value of ${x_0}$ is bounded below at -0.5, since the expected value of the rectangular distribution, $x_0 + \frac{1}{2}$, must be positive for a positive optimal bet size. ${x_0}$ is bounded above at zero, since a distribution without negative outcomes would justify an infinite optimal bet size.
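Since S is bracketed between 0 and $-1/x_0$, simple bisection suffices. A minimal Python sketch (the function name `kelly_uniform` is ours, not from the post):

```python
import math

def kelly_uniform(x0, tol=1e-12):
    """Optimal Kelly bet for outcomes uniform on [x0, 1 + x0],
    found as the nonzero root of e^S = (1 + S*(1+x0)) / (1 + S*x0).
    Requires -0.5 < x0 < 0 so that a finite positive root exists."""
    # f(S) = e^S*(1 + S*x0) - (1 + S*(1+x0)) is positive to the left of
    # the nonzero root and negative to the right, so bisection converges.
    def f(s):
        return math.exp(s) * (1.0 + s * x0) - (1.0 + s * (1.0 + x0))

    lo, hi = 1e-9, -1.0 / x0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, at $x_0 = (2-e)/(e-1)$ (the special case discussed below) the routine returns a bet size of 1.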

To determine the optimal long-run return for the levels of ${x_0}$, we rewrite equation 1 for the rectangular distributions as follows $log(z)=\int_{x=x_0}^{1 + x_0}log(1+Sx)\,dx = (x + \frac{1}{S})log(1 + Sx) - x\bigg\vert_{x=x_0}^{1 + x_0}$

so $z=e^{ (1+x_0 + \frac{1}{S})log(1 + S(1+x_0)) - (1+x_0)-(x_0 + \frac{1}{S})log(1 + Sx_0) + x_0}=\frac{(1+S(1+x_0))^{(1+x_0+\frac{1}{S})}}{e(1+Sx_0)^{(x_0+\frac{1}{S})}}$

which can be written $z=\frac{1+S(1+x_0)}{e}(\frac{1+S(1+x_0)}{1+Sx_0})^{(x_0+\frac{1}{S})}$

Substituting the solution from equation 2 yields $z=\frac{1+S(1+x_0)}{e}e^{S(x_0+\frac{1}{S})}=e^{Sx_0}(1+S(1+x_0))=e^{S(1+x_0)}(1+Sx_0)$
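This closed form can be sanity-checked numerically (our own Python sketch). At $x_0=(2-e)/(e-1)$, direct substitution shows that S = 1 solves equation 2, so z from the formula above should agree with quadrature of the log-growth integral in equation 1:

```python
import math

# Check (our sketch): at x0 = (2 - e)/(e - 1), S = 1 solves equation 2,
# so the closed form z = e^(S*x0) * (1 + S*(1 + x0)) should match
# exp of the integral log(z) = ∫ log(1 + S*x) dx over [x0, 1 + x0].
x0 = (2.0 - math.e) / (math.e - 1.0)
S = 1.0

z_closed = math.exp(S * x0) * (1.0 + S * (1.0 + x0))

n = 200_000                            # midpoint-rule quadrature
h = 1.0 / n
log_z = h * sum(math.log(1.0 + S * (x0 + (i + 0.5) * h)) for i in range(n))
z_quad = math.exp(log_z)
```

The two values match, and z exceeds one, confirming that the optimal bet yields a positive long-run growth rate at this $x_0$.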

The optimal bet size and the expected return per bet are plotted below for different lower bounds. As ${x_0}$ increases, both the expected return and the optimal bet size increase. Also, the long-run return using the optimal bet size always performs at least as well as the unit bet size. From equation 2, we see that the optimal bet equals one only when $e = \frac{1 + (1 + x_0)}{1 + x_0}$

which occurs when $x_0=\frac{2-e}{e-1}=-0.4180233...$
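This value is easy to confirm (a trivial numeric check, using only Python's standard library):

```python
import math

# Check the algebra above: x0 = (2 - e)/(e - 1) should satisfy
# e = (2 + x0)/(1 + x0), i.e. equation 2 with S = 1.
x0 = (2.0 - math.e) / (math.e - 1.0)
lhs = math.e                       # e^S with S = 1
rhs = (2.0 + x0) / (1.0 + x0)      # right-hand side of equation 2 at S = 1
```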

3 Responses to “The Kelly solution for one continuous distribution of possible returns”
1. Vasily Nekrasov says:

Kelly Criterion may also be extended for the optimal asset allocation (consider it as multiple simultaneous games with continuous distribution of possible returns)
Have a look at my paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2259133

In principle, we may solve the optimization problem numerically, although VERY many simulations might be necessary.
The problem is that Kelly is pretty sensitive to the errors of parameter estimation…

• Elliot Noma says:

One thing to remember when using simulations on distributions with infinitely long negative tails is that the theoretical Kelly solution will always be zero since there is a non-zero probability of an infinitely negative return. Since none of the simulated paths will show this result we need to be careful that the most extreme path is not dictating the solution. Alternatively, we can consider the optimal solution as optimal subject to the possibility of a small, but non-zero, probability of ruin.

• Vasily Nekrasov says:

Yes, it is so, unless we have tails cut, and this is what I explicitly assume in my paper.
And we can cut the tails in practice, e.g. by means of options.