Gilgamathhttps://www.gilgamath.com/Sat, 04 Jan 2020 21:07:06 -0800Nonparametric Market Timinghttps://www.gilgamath.com/nonparametric_market_timing.html<p>Market timing a single instrument with a single feature</p>StevenSat, 04 Jan 2020 21:07:06 -0800tag:www.gilgamath.com,2020-01-04:/nonparametric_market_timing.htmlquant-financeRanalysisstatisticsMarkowitzportfoliotactical-asset-allocationohenery!https://www.gilgamath.com/ohenery.html<p>ohenery package to CRAN</p>StevenWed, 25 Sep 2019 21:32:52 -0700tag:www.gilgamath.com,2019-09-25:/ohenery.htmlRpackageDiscrete State Market Timinghttps://www.gilgamath.com/market-timing.html<p>Market timing with a discrete feature</p>StevenSun, 30 Jun 2019 10:22:58 -0700tag:www.gilgamath.com,2019-06-30:/market-timing.htmlquant-financeRanalysisstatisticsMarkowitzportfoliotactical-asset-allocationConditional Portfolios with Feature Flatteninghttps://www.gilgamath.com/portfolio-flattening.html<h2>Conditional Portfolios</h2>
<p>When I first started working at a quant fund I tried to read
about portfolio theory. (Beyond, you know, "<em>Hedge Funds for Dummies</em>.")
I learned about various objectives and portfolio constraints,
including the Markowitz portfolio, which felt very natural.
Markowitz solves the mean-variance optimization problem, as
well as the Sharpe maximization problem, namely
</p>
<div class="math">$$
\operatorname{argmax}_w \frac{w^{\top}\mu}{\sqrt{w^{\top} \Sigma w}}.
$$</div>
<p>
This is solved, up to scaling, by the Markowitz portfolio <span class="math">\(\Sigma^{-1}\mu\)</span>.</p>
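<p>A small numerical illustration (the two-asset numbers below are invented for the example, not taken from any post): the portfolio <span class="math">\(\Sigma^{-1}\mu\)</span> attains the maximal Sharpe, and rescaling it leaves the ratio unchanged.</p>

```python
import numpy as np

# Hypothetical two-asset inputs (numbers invented for illustration).
mu = np.array([0.08, 0.05])                 # expected returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])            # covariance of returns

# The Markowitz portfolio, up to positive scaling.
w = np.linalg.solve(Sigma, mu)

# The Sharpe ratio it attains; any positive rescaling of w gives the same value.
sharpe = (w @ mu) / np.sqrt(w @ Sigma @ w)
print(w, sharpe)
```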
<p>When I first read about the theory behind Markowitz, I
did not read anything about where <span class="math">\(\mu\)</span> and <span class="math">\(\Sigma\)</span> come from.
I assumed the authors I was reading were talking about the
vanilla sample estimates of the mean and covariance,
though the theory does not require this.</p>
<p>There are some problems with the Markowitz portfolio.
For us, as a small quant fund, the most pressing issue
was that holding the Markowitz portfolio based on the
historical mean and covariance was not a good look.
You don't get paid "2 and twenty" for computing some
long term averages.</p>
<!-- PELICAN_END_SUMMARY -->
<p>Rather than holding an <em>unconditional</em> portfolio,
we sought to construct a <em>conditional</em> one,
conditional on some "features".
(I now believe this topic falls under the rubric of "Tactical Asset
Allocation".)
We stumbled on two simple methods for adapting
Markowitz theory to accept conditioning information:
Conditional Markowitz, and "Flattening".</p>
<h2>Conditional Markowitz</h2>
<p>Suppose you observe an <span class="math">\(l\)</span> vector of features, <span class="math">\(f_i\)</span>, prior
to the time you must allocate into <span class="math">\(p\)</span> assets to enjoy
returns <span class="math">\(x_i\)</span>. Assume that the expected returns are linear in the features,
while the covariance is a long-term average. That is,
</p>
<div class="math">$$
E\left[x_i \left|f_i\right.\right] = B f_i,\quad\mbox{Var}\left(x_i \left|f_i\right.\right) = \Sigma.
$$</div>
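<p>A minimal sketch of this setup, assuming <span class="math">\(B\)</span> is fit by ordinary least squares of returns on features and <span class="math">\(\Sigma\)</span> is estimated from the residuals (all names and simulated numbers below are hypothetical, not the author's code):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, l = 500, 3, 2

# Simulate returns under the assumed model: x_i = B f_i + noise.
B_true = rng.normal(size=(p, l))
F = rng.normal(size=(n, l))                             # features f_i
X = F @ B_true.T + rng.normal(scale=0.1, size=(n, p))   # returns x_i

# Fit B by least squares of returns on features,
# and estimate Sigma from the regression residuals.
B_hat = np.linalg.lstsq(F, X, rcond=None)[0].T          # p x l
resid = X - F @ B_hat.T
Sigma_hat = np.cov(resid, rowvar=False)

# The conditional Markowitz portfolio for a newly observed feature vector f.
f_new = rng.normal(size=l)
w = np.linalg.solve(Sigma_hat, B_hat @ f_new)
```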
<p>Note that Markowitz theory never really said how to estimate
mean …</p>Steven E. PavWed, 19 Jun 2019 21:04:21 -0700tag:www.gilgamath.com,2019-06-19:/portfolio-flattening.htmlquant-financeanalysisstatisticsMarkowitzportfoliotactical-asset-allocationNo Parity like a Risk Parity.https://www.gilgamath.com/risk-parity.html<h2>Portfolio Selection and Exchangeability</h2>
<p>Consider the problem of <em>portfolio selection</em>, where you observe
some historical data on <span class="math">\(p\)</span> assets, say <span class="math">\(n\)</span> days worth in an <span class="math">\(n\times p\)</span>
matrix, <span class="math">\(X\)</span>, and then are required to construct a (dollarwise)
portfolio <span class="math">\(w\)</span>.
You can view this task as a function <span class="math">\(w\left(X\right)\)</span>.
There are a few different kinds of <span class="math">\(w\)</span> function: Markowitz,
equal dollar, Minimum Variance, Equal Risk Contribution ('Risk Parity'),
and so on.</p>
<p>How are we to choose among these competing approaches?
Their supporters can point to theoretical underpinnings,
but these often seem a bit shaky even from a distance.
Usually evidence is provided in the form of backtests
on the historical returns of some universe of assets.
It can be hard to generalize from a single history,
and these backtests rarely offer a theoretical justification
for the performance differences across methods.</p>
<!-- PELICAN_END_SUMMARY -->
<p>One way to consider these different methods of portfolio
construction is via the lens of <em>exchangeability</em>.
Roughly speaking: how does the function <span class="math">\(w\left(X\right)\)</span> react
under certain systematic changes in <span class="math">\(X\)</span> that "shouldn't" matter?
For example, suppose that the ticker changed on
one stock in your universe. Suppose you order the columns of
<span class="math">\(X\)</span> alphabetically, so now you must reorder your <span class="math">\(X\)</span>.
Assuming no new data has been observed, shouldn't
<span class="math">\(w\left(X\right)\)</span> simply reorder its output in the same way?</p>
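<p>This reordering condition is easy to check empirically. Here is a sketch using the minimum variance rule, chosen only for concreteness (the code and data are illustrative, not from the post):</p>

```python
import numpy as np

def min_var(X):
    """Minimum variance portfolio from an n x p returns matrix,
    normalized to sum to one."""
    Sigma = np.cov(X, rowvar=False)
    w = np.linalg.solve(Sigma, np.ones(X.shape[1]))
    return w / w.sum()

rng = np.random.default_rng(1)
X = rng.normal(size=(250, 4))
perm = rng.permutation(4)

# Relabeling (reordering) the assets should simply reorder the weights:
w_full = min_var(X)
w_perm = min_var(X[:, perm])
print(np.allclose(w_perm, w_full[perm]))
```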
<p>Put another way, suppose a method <span class="math">\(w\)</span> systematically
overweights the first element of the universe
(this seems more like a bug than a feature),
and you observe backtests over the 2000s on
U.S. equities where <code>AAPL</code> happened to be the
first stock in the universe. Your <span class="math">\(w\)</span> might
seem to outperform other methods for no good reason.</p>
<p>Equivariance to order is a kind of exchangeability condition.
The 'right' kind of <span class="math">\(w\)</span> is 'order …</p>Steven E. PavSun, 09 Jun 2019 22:53:04 -0700tag:www.gilgamath.com,2019-06-09:/risk-parity.htmlquant-financeanalysisstatisticsMarkowitzportfolioRfromo 0.2.0https://www.gilgamath.com/fromo-two.html<p>I recently pushed version 0.2.0 of my <code>fromo</code> package to
<a href="https://cran.r-project.org/package=fromo">CRAN</a>.
This package implements (relatively) fast, numerically robust
computation of moments via <code>Rcpp</code>.
<!-- PELICAN_END_SUMMARY --></p>
<p>The big changes in this release are:</p>
<ul>
<li>Support for weighted moment estimation.</li>
<li>Computation of running moments over windows defined
by time (or some other increasing index), rather
than vector index.</li>
<li>Some modest improvements in speed for the 'dangerous'
use cases (no checking for <code>NA</code>, no weights, <em>etc.</em>)</li>
</ul>
<p>The time-based running moments are supported via the <code>t_running_*</code> operations,
and we support means, standard deviation, skew, kurtosis, centered and
standardized moments and cumulants, z-score, Sharpe, and t-stat. The
idea is that your observations are associated with some increasing
index, which you can think of as the observation time, and you wish
to compute moments over a fixed time window. To bloat the API, the
times from which you 'look back' can optionally be something other
than the time indices of the input, so the input and output size
can be different.</p>
<p>Some example uses might be:</p>
<ul>
<li>Compute the volatility of an asset's returns over the previous 6 months,
on every trade day.</li>
<li>Compute the total monthly sales of a company at month ends.</li>
</ul>
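<p>The <code>t_running_*</code> operations live in the R package; the Python sketch below is only a naive stand-in for the idea of a trailing time window, not <code>fromo</code>'s actual API (which does the work efficiently via <code>Rcpp</code>):</p>

```python
import numpy as np

def trailing_sd(x, times, window):
    """Standard deviation of the observations with time stamp in
    (t - window, t], evaluated at each observation time t.
    A naive O(n^2) sketch of the trailing-window idea."""
    x = np.asarray(x, dtype=float)
    times = np.asarray(times, dtype=float)
    out = np.full(x.shape, np.nan)
    for i, t in enumerate(times):
        vals = x[(times > t - window) & (times <= t)]
        if vals.size > 1:
            out[i] = vals.std(ddof=1)
    return out

# e.g. volatility over a trailing 126-day window, computed on
# irregularly spaced trade days:
rng = np.random.default_rng(2)
days = np.cumsum(rng.integers(1, 3, size=300))  # some days skipped
rets = rng.normal(scale=0.01, size=300)
vol = trailing_sd(rets, days, window=126)
```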
<p>Because the API also allows you to use weights as implicit time deltas, you can
also do weird and unadvisable things like compute the Sharpe of an asset
over the last 1 million shares traded.</p>
<p>Speed improvements come from my random walk through C++ design idioms.
I also implemented a 'swap' procedure for the running standard deviation
which combines a Welford's method addition and removal into a single
step. I do not believe that Welford's method is the fastest algorithm
for a summarizing moment computation: a two-pass solution, computing
the mean first and then the centered moments, is probably faster. However,
for the …</p>Steven E. PavSun, 13 Jan 2019 10:23:39 -0800tag:www.gilgamath.com,2019-01-13:/fromo-two.htmlRpackageTwelve Dimensional Chess is Stupidhttps://www.gilgamath.com/twelve_dimensional_chess.html<p>Chess and the Curse of Dimensionality</p>StevenTue, 16 Oct 2018 22:24:30 -0700tag:www.gilgamath.com,2018-10-16:/twelve_dimensional_chess.htmlanalysischessR in Finance 2018https://www.gilgamath.com/rfin2018.html<p>Review of R in Finance 2018 conference</p>StevenFri, 01 Jun 2018 10:00:32 -0700tag:www.gilgamath.com,2018-06-01:/rfin2018.htmlquant-financereportsAnother Confidence Limit for the Markowitz Signal Noise ratiohttps://www.gilgamath.com/new_mp_ci.html<p>Another confidence limit on the Signal Noise ratio of the Markowitz portfolio.</p>StevenWed, 28 Mar 2018 21:33:59 -0700tag:www.gilgamath.com,2018-03-28:/new_mp_ci.htmlstatisticsquant-financeanalysisRMarkowitz Portfolio Covariance, Elliptical Returnshttps://www.gilgamath.com/markowitz-cov-elliptical.html<p>In a <a href="bad-cis">previous blog post</a>, I looked at asymptotic confidence
intervals for the Signal to Noise ratio of the (sample) Markowitz
portfolio, finding them to be deficient. (Perhaps they are useful if
one has hundreds of thousands of days of data, but are otherwise
awful.) Those confidence intervals came from revision four of my paper
on the <a href="https://arxiv.org/abs/1312.0557">Asymptotic distribution of the Markowitz Portfolio</a>.
In that same update, I also describe, albeit in an obfuscated form,
the asymptotic distribution of the sample Markowitz portfolio for
elliptical returns. Here I check that finding empirically.
<!-- PELICAN_END_SUMMARY --></p>
<p>Suppose you observe a <span class="math">\(p\)</span> vector of returns drawn from an elliptical
distribution with mean <span class="math">\(\mu\)</span>, covariance <span class="math">\(\Sigma\)</span> and 'kurtosis factor',
<span class="math">\(\kappa\)</span>. Three times the kurtosis factor is the kurtosis of the marginals
under this assumed model; the factor takes the value <span class="math">\(1\)</span> for a multivariate normal.
This model of returns is slightly more realistic than the multivariate normal,
but it does not allow for skewness of asset returns, which seems unrealistic.</p>
<p>Nonetheless, let <span class="math">\(\hat{\nu}\)</span> be the Markowitz portfolio built on a sample
of <span class="math">\(n\)</span> days of independent returns:
</p>
<div class="math">$$
\hat{\nu} = \hat{\Sigma}^{-1} \hat{\mu},
$$</div>
<p>
where <span class="math">\(\hat{\mu}, \hat{\Sigma}\)</span> are the regular 'vanilla' estimates
of mean and covariance. The vector <span class="math">\(\hat{\nu}\)</span> is, in a sense, over-corrected,
and we need to cancel out a square root of <span class="math">\(\Sigma\)</span> (the population value). So
we will consider the distribution of <span class="math">\(Q \Sigma^{\top/2} \hat{\nu}\)</span>, where
<span class="math">\(\Sigma^{\top/2}\)</span> is the upper triangular Cholesky factor of <span class="math">\(\Sigma\)</span>,
and where <span class="math">\(Q\)</span> is an orthogonal matrix (<span class="math">\(Q Q^{\top} = I\)</span>), and where
<span class="math">\(Q\)</span> rotates <span class="math">\(\Sigma^{-1/2}\mu\)</span> onto <span class="math">\(e_1\)</span>, the first basis vector:
</p>
<div class="math">$$
Q \Sigma^{-1/2}\mu = \zeta e_1,
$$</div>
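<p>One concrete way to build such a <span class="math">\(Q\)</span> (my choice of construction for illustration; the paper need not use this one) is a Householder reflection sending <span class="math">\(\Sigma^{-1/2}\mu\)</span> onto <span class="math">\(\zeta e_1\)</span>:</p>

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4
mu = rng.normal(scale=0.1, size=p)
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)   # a positive definite covariance

# One choice of square root: the lower Cholesky factor L, with
# Sigma = L L^T; then v = L^{-1} mu has norm zeta.
L = np.linalg.cholesky(Sigma)
v = np.linalg.solve(L, mu)
zeta = np.linalg.norm(v)          # the population Signal to Noise ratio

# Householder reflection sending v onto zeta * e_1;
# Q is orthogonal and Q v = zeta * e_1.
e1 = np.zeros(p)
e1[0] = 1.0
u = v - zeta * e1
Q = np.eye(p) - 2.0 * np.outer(u, u) / (u @ u)
```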
<p>
where <span class="math">\(\zeta\)</span> is the Signal to Noise ratio of the population Markowitz
portfolio: <span class="math">\(\zeta = \sqrt{\mu^{\top}\Sigma^{-1}\mu} = \left\Vert …</span></p>Steven E. PavMon, 12 Mar 2018 22:28:31 -0700tag:www.gilgamath.com,2018-03-12:/markowitz-cov-elliptical.htmlquant-financeanalysisstatisticsMarkowitzportfolioR