## ohenery!

ohenery package to CRAN


Sun 30 June 2019
by Steven E. Pav

## Market timing with a discrete feature

Wed 19 June 2019
by Steven E. Pav

When I first started working at a quant fund I tried to read
about portfolio theory. (Beyond, you know, "*Hedge Funds for Dummies*.")
I learned about various objectives and portfolio constraints,
including the Markowitz portfolio, which felt very natural.
Markowitz solves the mean-variance optimization problem, as
well as the Sharpe maximization problem, namely

$$
\operatorname{argmax}_w \frac{w^{\top}\mu}{\sqrt{w^{\top} \Sigma w}}.
$$

This is solved, up to scaling, by the Markowitz portfolio \(\Sigma^{-1}\mu\).
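As a quick numerical sketch (simulated data, names my own), the Markowitz portfolio can be computed from the vanilla sample estimates, and its ex ante Sharpe equals \(\sqrt{\mu^{\top}\Sigma^{-1}\mu}\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns for p = 3 assets (purely illustrative data).
X = rng.normal(loc=0.001, scale=0.01, size=(250, 3))

mu = X.mean(axis=0)               # vanilla sample mean
Sigma = np.cov(X, rowvar=False)   # vanilla sample covariance

# The Markowitz portfolio, up to scaling: solve Sigma w = mu.
w = np.linalg.solve(Sigma, mu)

# The maximized Sharpe is invariant to positive rescaling of w,
# and equals sqrt(mu' Sigma^{-1} mu).
sharpe = (w @ mu) / np.sqrt(w @ Sigma @ w)
```

Any rescaled \(c w\) with \(c > 0\) attains the same objective value, which is why the solution is only defined "up to scaling".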

When I first read about the theory behind Markowitz, I did not read anything about where \(\mu\) and \(\Sigma\) come from. I assumed the authors I was reading were talking about the vanilla sample estimates of the mean and covariance, though the theory does not require this.

There are some problems with the Markowitz portfolio. For us, as a small quant fund, the most pressing issue was that holding the Markowitz portfolio based on the historical mean and covariance was not a good look. You don't get paid "2 and twenty" for computing some long term averages.

Rather than holding an *unconditional* portfolio,
we sought to construct a *conditional* one,
conditional on some "features".
(I now believe this topic falls under the rubric of "Tactical Asset
Allocation".)
We stumbled on two simple methods for adapting
Markowitz theory to accept conditioning information:
Conditional Markowitz, and "Flattening".

Suppose you observe an \(l\)-vector of features, \(f_i\), prior to the time you have to allocate into \(p\) assets to enjoy returns \(x_i\). Assume that the expected returns are linear in the features, while the covariance is a long-term average. That is,

$$
E\left[x_i \left|f_i\right.\right] = B f_i,\quad\operatorname{Var}\left(x_i \left|f_i\right.\right) = \Sigma.
$$
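Plugging the conditional moments into the Markowitz solution gives, up to scaling, the portfolio \(\Sigma^{-1} B f_i\). A minimal sketch of this (simulated data, all names mine): estimate \(B\) by multivariate least squares and \(\Sigma\) from the regression residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, l = 500, 3, 2

# Simulated features and returns obeying E[x_i | f_i] = B f_i (illustrative).
F = rng.normal(size=(n, l))                    # n rows of feature vectors f_i
B_true = rng.normal(scale=0.01, size=(p, l))
X = F @ B_true.T + rng.normal(scale=0.02, size=(n, p))   # n rows of returns

# Estimate B by multivariate least squares; estimate Sigma from the residuals.
Bt_hat, *_ = np.linalg.lstsq(F, X, rcond=None)  # (l x p), the transpose of B
Sigma_hat = np.cov(X - F @ Bt_hat, rowvar=False)

def conditional_markowitz(f):
    """Conditional portfolio, up to scaling: Sigma^{-1} B f."""
    return np.linalg.solve(Sigma_hat, Bt_hat.T @ f)
```

Note the resulting portfolio is linear in the observed features: doubling \(f_i\) doubles the (unscaled) allocation.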

Note that Markowitz theory never really said how to estimate mean …

Sun 09 June 2019
by Steven E. Pav

Consider the problem of *portfolio selection*, where you observe
some historical data on \(p\) assets, say \(n\) days worth in an \(n\times p\)
matrix, \(X\), and then are required to construct a (dollarwise)
portfolio \(w\).
You can view this task as a function \(w\left(X\right)\).
There are a few different kinds of \(w\) function: Markowitz,
equal dollar, minimum variance, equal risk contribution ("risk parity"),
and so on.
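As a concrete illustration (a sketch, with my own choice of normalization to unit gross leverage), a few such \(w\) functions in Python:

```python
import numpy as np

def w_markowitz(X):
    """Markowitz: Sigma^{-1} mu, normalized to unit gross leverage."""
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    w = np.linalg.solve(Sigma, mu)
    return w / np.abs(w).sum()

def w_equal_dollar(X):
    """Equal dollar: ignores the data beyond counting the assets."""
    p = X.shape[1]
    return np.full(p, 1.0 / p)

def w_min_variance(X):
    """Minimum variance: Sigma^{-1} 1, normalized."""
    Sigma = np.cov(X, rowvar=False)
    w = np.linalg.solve(Sigma, np.ones(X.shape[1]))
    return w / np.abs(w).sum()
```

Each takes the \(n\times p\) history \(X\) and returns a \(p\)-vector of dollar weights, making the shared signature \(w\left(X\right)\) explicit.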

How are we to choose among these competing approaches? Their supporters can point to theoretical underpinnings, but these often seem a bit shaky even from a distance. Usually evidence is provided in the form of backtests on the historical returns of some universe of assets. It can be hard to generalize from a single history, and these backtests rarely offer theoretical justification for the differential performance of the methods.

One way to consider these different methods of portfolio
construction is via the lens of *exchangeability*.
Roughly speaking, how does the function \(w\left(X\right)\) react
under certain systematic changes in \(X\) that "shouldn't" matter?
For example, suppose that the ticker changed on
one stock in your universe. Suppose you order the columns of
\(X\) alphabetically, so now you must reorder your \(X\).
Assuming no new data has been observed, shouldn't
\(w\left(X\right)\) simply reorder its output in the same way?

Put another way, suppose a method \(w\) systematically
overweights the first element of the universe
(this seems more like a bug than a feature),
and you observe backtests over the 2000's on
U.S. equities where `AAPL` happened to be the
first stock in the universe. Your \(w\) might
seem to outperform other methods for no good reason.

Equivariance to order is a kind of exchangeability condition. The 'right' kind of \(w\) is 'order …
