No Parity like a Risk Parity.
Sun 09 June 2019
by Steven E. Pav
Portfolio Selection and Exchangeability
Consider the problem of portfolio selection, where you observe
some historical data on \(p\) assets, say \(n\) days worth in an \(n\times p\)
matrix, \(X\), and then are required to construct a (dollarwise)
portfolio of those assets.
You can view this task as a function \(w\left(X\right)\).
There are a few different kinds of \(w\) function: Markowitz,
equal dollar, minimum variance, equal risk contribution ('risk parity'),
and so on.
How are we to choose among these competing approaches?
Their supporters can point to theoretical underpinnings,
but these often seem a bit shaky even from a distance.
Usually evidence is provided in the form of backtests
on the historical returns of some universe of assets.
It can be hard to generalize from a single history,
and these backtests rarely offer theoretical justification
for the differential performance in methods.
One way to consider these different methods of portfolio
construction is via the lens of exchangeability.
Roughly speaking, how does the function \(w\left(X\right)\) react
under certain systematic changes in \(X\) that "shouldn't" matter?
For example, suppose that the ticker changed on
one stock in your universe. Suppose you order the columns of
\(X\) alphabetically, so now you must reorder your \(X\).
Assuming no new data has been observed, shouldn't
\(w\left(X\right)\) simply reorder its output in the same way?
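This order-equivariance property is easy to state in code. Here is a sketch in Python, using a hypothetical minimum-variance \(w\) purely for illustration (the function name and universe are mine, not from the post):

```python
import numpy as np

def min_var_weights(X):
    """A stand-in w(X): minimum-variance weights from the sample covariance."""
    Sigma = np.cov(X, rowvar=False)
    w = np.linalg.solve(Sigma, np.ones(X.shape[1]))
    return w / w.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 4))   # 250 days of returns on 4 assets
perm = rng.permutation(4)       # relabel/reorder the universe

w = min_var_weights(X)
w_perm = min_var_weights(X[:, perm])

# order equivariance: w applied to reordered data gives reordered weights
assert np.allclose(w_perm, w[perm])
```

Minimum variance passes this check because the sample covariance of the permuted columns is just the permuted covariance; a method that hard-codes column positions would fail it.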
Put another way, suppose a method \(w\) systematically
overweights the first element of the universe
(this seems more like a bug than a feature),
and you observe backtests over the 2000s on
U.S. equities where AAPL happened to be the
first stock in the universe. Your \(w\) might
seem to outperform other methods for no good reason.
Equivariance to order is a kind of exchangeability condition.
The 'right' kind of \(w\) is 'order …
Sun 13 January 2019
by Steven E. Pav
I recently pushed version 0.2.0 of my
fromo package to CRAN.
This package implements (relatively) fast, numerically robust
computation of moments via Welford's method.
The big changes in this release are:
- Support for weighted moment estimation.
- Computation of running moments over windows defined
by time (or some other increasing index), rather
than vector index.
- Some modest improvements in speed for the 'dangerous'
use cases (no checking for
NA, no weights, etc.)
Time-based running moments are supported for means, standard deviation,
skew, kurtosis, centered and
standardized moments and cumulants, z-score, Sharpe, and t-stat. The
idea is that your observations are associated with some increasing
index, which you can think of as the observation time, and you wish
to compute moments over a fixed time window. To bloat the API, the
times from which you 'look back' can optionally be something other
than the time indices of the input, so the input and output size
can be different.
Some example uses might be:
- Compute the volatility of an asset's returns over the previous 6 months,
on every trade day.
- Compute the total monthly sales of a company at month ends.
Because the API also allows you to use weights as implicit time deltas, you can
also do weird and unadvisable things like compute the Sharpe of an asset
over the last 1 million shares traded.
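The semantics can be sketched with a naive reference implementation. This is a Python illustration of the look-back behavior, not the package's code (which is a fast single pass in C++), and the function name `t_running_sd` and its signature are my own invention:

```python
import numpy as np

def t_running_sd(x, times, window, lookback_times=None):
    """sd of observations whose time lies in (t - window, t], for each t.

    lookback_times lets the output times differ from the input times,
    mirroring the option to 'look back' from times other than the
    observation times, so input and output sizes can differ.
    """
    x = np.asarray(x, dtype=float)
    times = np.asarray(times, dtype=float)
    if lookback_times is None:
        lookback_times = times
    out = np.full(len(lookback_times), np.nan)
    for i, t in enumerate(lookback_times):
        vals = x[(times > t - window) & (times <= t)]
        if vals.size > 1:
            out[i] = vals.std(ddof=1)   # too few observations stays NaN
    return out
```

For the first example above, `x` would be daily returns, `times` the trade-day index, and `window` roughly 126 trade days; for the second, `lookback_times` would be the month-end dates.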
Speed improvements come from my random walk through C++ design idioms.
I also implemented a 'swap' procedure for the running standard deviation
which combines a Welford's-method addition and removal into a single
step. I do not believe that Welford's method is the fastest algorithm
for a summarizing moment computation: a two-pass solution that computes
the mean first, then the centered moments, is probably faster. However,
for the …
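For reference, here is a minimal Python sketch of Welford-style updates with both addition and removal, the two halves that such a 'swap' fuses. It is illustrative only, not the package's C++ implementation:

```python
class Welford:
    """Running mean/variance with add and remove (Welford-style updates)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the current mean

    def add(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def remove(self, x):
        # inverse of add; assumes at least one observation remains
        d = x - self.mean
        self.n -= 1
        self.mean -= d / self.n
        self.m2 -= d * (x - self.mean)

    def var(self):
        return self.m2 / (self.n - 1)

# slide a window of size 3 over a stream: a 'swap' is an add plus a remove
w = Welford()
for v in (1.0, 2.0, 4.0):
    w.add(v)
w.add(8.0)     # newest observation enters the window
w.remove(1.0)  # oldest observation leaves it
```

Fusing the two updates into one step saves a pass over the accumulator state, which is where the speedup comes from.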
Twelve Dimensional Chess is Stupid
Tue 16 October 2018
Chess and the Curse of Dimensionality
R in Finance 2018
Fri 01 June 2018
Review of R in Finance 2018 conference