This was my second year at the eighth annual R in Finance conference, my sixth year in Chicago. The weather was a bummer, but the content was fresh. What follows are my (biased, incomplete, sketchy) notes. You should go look up Robert Krzyzanowski's notes for another view. The talks were, for the first time, recorded, and I think they can be viewed later.

Day One, Morning Lightning Round

Without a morning keynote, the welcome quickly segued into the lightning round.

  1. Marcelo Perlin kicked off with a talk about his getHFData package, which gets high frequency data from BOVESPA, the Brazilian exchange. You read that correctly, you can get trade and quote data for Brazilian equities and derivatives for free via an R command, with order book data 'coming soon'. I'll take it! He also has a free book.

  2. Jeffrey Mazar talked about the obmodeling package for modeling Order Book data, which provides some 'standard' analytics on OB: spread, imbalance, depth, VPIN and so on.

  3. Yuting Tan then gave a presentation on institutional investors and volatility. The key technical point here is that the 'classical' measure of volatility of returns is confounded by noise in the price. This is a problem I have looked at in the context of the 'bid-ask bounce', which is a phantom negative correlation in returns. Suppose that \(P_t = L_t + z_t\), where \(L_t\) is the 'latent' or 'true' price, and \(z_t\) is price noise. Suppose the log returns of the latent price follow some stochastic law, where the \(z_t\) might, for example, reflect whether the last print was at the bid or ask. Then log returns of \(P_t\) will show a negative autocorrelation because of price noise, with the magnitude determined by the volatility of the price noise and the volatility of latent returns. (A minimal simulation of this effect follows this lightning round.) Popping the stack on that diversion, Yuting presented an estimator of volatility, TSRV, that is less affected (not affected?) by price noise. Then she used it to analyze whether trading by institutional investors is correlated with certain market characteristics, finding positive correlation with both volatility and price noise. As for future work, computing Amihud liquidity still seems to be confounded by price noise.

  4. Stephen Rush opened his talk with some shade on data scientists. He analyzed trade execution quality at different trade execution venues: execution speed, spread, and best execution, finding that market share affects these, but, if I have understood this correctly, some of the smaller venues give better execution at the expense of slower execution time. Stephen can correct me on this.

  5. Jerzy Pawlowski was a no-show due to time zones, but maybe he will reschedule.
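To make the bid-ask bounce point in item 3 concrete, here is a minimal simulation of my own (not from the talk): latent log prices follow a random walk, the observed price adds i.i.d. noise, and the observed returns pick up a negative lag-one autocorrelation while the naive volatility is inflated. This is exactly the effect that TSRV-style estimators are designed to remove.

```r
# Minimal sketch (my own, not from the talk): additive price noise induces a
# negative lag-one autocorrelation in observed returns and inflates naive volatility.
set.seed(1234)
n     <- 1e5
sig_l <- 0.001    # volatility of latent log returns
sig_z <- 0.002    # volatility of the price noise (e.g. bid-ask bounce)

lat_ret <- rnorm(n, sd = sig_l)            # latent log returns
L       <- cumsum(lat_ret)                 # latent log price
P       <- L + rnorm(n, sd = sig_z)        # observed log price = latent + noise
obs_ret <- diff(P)                         # observed log returns

c(naive = sd(obs_ret), latent = sig_l)     # naive volatility is inflated

# lag-one autocorrelation is roughly -sig_z^2 / (sig_l^2 + 2 * sig_z^2)
acf(obs_ret, lag.max = 1, plot = FALSE)$acf[2]
```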

Day One, Morning Talks

Michael Hirsh talked about some analysis of trades data from the ASX, the Australian Stock Exchange. He has about a year of high frequency trade (and book?) data, which also includes the IDs of the buyer and seller, presumably anonymized. Having IDs allows one to perform longitudinal analyses. Michael characterized trades as active or passive, then presented some graphs showing changes in behavior for certain HFT players on certain stocks over time. Some players were liquidity makers, but on the whole they appeared to take liquidity. One interesting analysis was the construction of 2D plots of the 'time to next trade' versus 'time from previous trade'; I did not catch all the details, but it would seem this would be a basic tool for reverse-engineering market players. (I realize that makes it sound like it would be easy; it shouldn't be.) Then followed some network plots where the nodes were pie charts, and the jet lag set in.


Ross Bennett gave a presentation on his work on building factor portfolios for the Arizona Pension system. He started by mentioning the factor zoo, i.e. the growing list of published results on nearly 60 factors which are purported to drive investor behavior or otherwise affect returns. It turns out that different index providers have different ideas about how to build indices on these factors.

Another takeaway is that many of these factors have very short out-of-sample histories from time of publication, and presumably the authors have dredged the hell out of the history up to publication; moreover, some index providers appear to have avoided huge drawdowns in the 2008 crisis by making questionable choices in weighting functions, as the indices were constructed in the interim. Then followed some kabuki about random portfolio construction with constraints and different objectives, and bam, everyone loves Low Vol. As a refugee from a Quant Fund, I will attest that you cannot charge your "two and twenty" for replicating SPLV unless you're really good at bullshitting clients. That aside, analysis of portfolios is my jam, so I was happy to see it.


Seoyung Kim gave a very nice talk on sentiment analysis of a corpus of Enron emails, connecting characteristics of emails to external data on the Enron collapse: public news, the stock price, and so on. She was able to find a shift in email length, but not necessarily sentiment, at the onset of collapse. Joint work with Sanjiv Das, who maybe never sleeps?

Break.


Szilard Pafka gave the first keynote, on 'No Bullshit Data Science'. He compared the performance of different databases, and he benchmarked implementations of standard machine learning techniques across platforms: Spark, H2O, python, R, xgboost. The time performance and achieved objective were roughly compared, with H2O and xgboost looking pretty good. There was a bunch of proselytizing about using open source; I'm a convert already, but the takeaway is to fight your problem, don't fight your tools.

Lunch.

Day One, Afternoon

Lightning Round.

  1. Jerzy Pawlowski snuck in a lightning talk: Machine Learning, backtesting, and cross validation. His talk was marred by technical difficulties, so I need to take a second look. (Mental note: never use HTML for presentations.) He looks at the chances of selecting a zero-alpha strategy or manager based on in-sample performance, as a function of sample size and breadth; a toy simulation of this selection effect follows this round. Again, this is my jam. In a previous job I looked at this situation in the context of selecting an allocation on sub-strategies (with a positivity constraint! It's hard to explain to an investor that you've built 10 alpha models, but are shorting two of them!), and tested many of the same heuristics: winner take all, equal dollar, one over var, Markowitz with positivity, and so on. The statistical question of the out-of-sample performance is still really interesting.

  2. Francesco Bianchi, lightning talk on coGARCH and the cogarch.rm package. This one flew by, and we landed on forecasting VaR and Expected Shortfall, forward, using a co-GARCH.

  3. Eina Ooka, from The Energy Authority, on modeling the risk of a portfolio of energy producing assets, using Monte Carlo methods. (Side note: power prices are sometimes negative?) The technical challenge was tuning the number of Random Forest trees to achieve the 'proper' amount of volatility? There were a lot of plots here of different kinds, check out the poster. She lamented the lack of a good scoring function for simulations.

  4. Matteo Crimella, from GS, on Operational Risk Stress Testing. This touches near to my professional interests. The setup: summarize historical macroeconomic variables using PCA, use the latent variables as independent variables to forecast loss, and then apply the PCA weightings to scenarios for walking forward. Then the talk got faster: we had a drive-by overview of Multivariate Adaptive Regression Splines (Breiman, 1996), Neural Networks, and Error Correction Models; then plots of scenario forecasts for the different models; then a recommendation of bagged MARS. It's always nice to see a bunch of different models give similar results (again, something I am starting to appreciate more in my professional life).

  5. Thomas Zakrzewski, from S&P Global Market Intelligence, talking about using R for stress testing. This is my bread and butter now. Review of the regulatory environment: DFAST and CCAR. Then an advertorial on Risk Services from S&P Market Intelligence. Using ARIMA to model PD & LGD from market, yield, inflation, and so on. Using BUGS?

  6. Andy Tang, from William Blair, asking, "how much structure is best?" In particular, structure of the covariance matrix. At a high level, compare unstructured and structured covariance estimates. Unstructured ones include the sample covariance, shrinkage estimators, etc. Structured ones include some factor structure like Barra, or something like coGARCH. The structured models consume fewer degrees of freedom, but may be misspecified. A third approach, apparently, is that of a 'conditionally structured' covariance? DGM (Dynamic Graph Methods?) were used to model 100 stocks during the European Debt Crisis, somehow with the Fama French factors seeding the structure? He made a nice plot showing error versus the amount of structure, with DGM in the middle, balancing dimensionality and model error.
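As promised in the Pawlowski item above, here is a toy simulation of my own (not from the talk) of the selection effect: pick the best of many zero-alpha strategies on in-sample Sharpe, and watch the out-of-sample Sharpe collapse to zero.

```r
# My own toy sketch, not from the talk: pick the best of several zero-alpha
# strategies on in-sample Sharpe, then check its out-of-sample Sharpe.
set.seed(2017)
n_strat <- 50      # breadth: number of candidate zero-alpha strategies
n_is    <- 252     # one year of daily in-sample returns
n_oos   <- 252     # one year out-of-sample
n_sim   <- 500     # Monte Carlo repetitions

sharpe <- function(x) mean(x) / sd(x) * sqrt(252)    # annualized Sharpe

res <- replicate(n_sim, {
  rets_is  <- matrix(rnorm(n_is  * n_strat, sd = 0.01), ncol = n_strat)
  rets_oos <- matrix(rnorm(n_oos * n_strat, sd = 0.01), ncol = n_strat)
  best <- which.max(apply(rets_is, 2, sharpe))       # winner-take-all selection
  c(in_sample = sharpe(rets_is[, best]), out_of_sample = sharpe(rets_oos[, best]))
})
rowMeans(res)    # in-sample Sharpe looks great; out-of-sample is about zero
```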

Keynote

Bob McDonald comes back to RFin. He talked about category ratings, such as Morningstar (bond) ratings, which are ordinal factors. The questions: how are investors affected by ratings, changes in ratings, and differences in rating standards across asset classes, and can category ratings improve decision making for investors? A sociological experiment was conducted where subjects were given 12 dollars in cash to invest in six investments (plus a seventh, 'hold cash'). Subjects in two groups were presented the same data on these investments, with the experimental group given an additional 'category rating' breaking the investments into two arbitrary categories, with a double line separating the categories. The investments are sometimes presented with stars, based on the Sharpe within asset classes (i.e. over all investments when no categorical division is given). Investors are told how the stars are defined. So, are investors affected by stars, and how does the effect change when 'grading on a curve' based on asset category? Findings: adding or losing a star affects, in the direction you would expect, the amount invested in an asset. Lots of stuff to unpack here, including a replication of the study on faculty and staff (the first round was on undergrads at U. Iowa). I love behavioural studies like this. Mental note: look up the texreg package for smoothing the R to LaTeX bridge.

Break.


Dries Cornilly on co-moment estimation with factors and linear shrinkage. That's a lot of words: you can think of co-moments as a generalization of the covariance matrix of multiple assets. The coskewness array for two assets, for example, is 2 by 2 by 2, but is usually expressed as a 2 by 4 matrix. Symmetry usually eats a lot of the independence in these: the co-kurtosis matrix is 2 by 8, but has only 5 unique elements. Expressing these quantities as matrices is useful in the following way: if \(\Phi\) is the co-skewness matrix of some assets, the skewness (strictly, the third central moment) of the returns of a portfolio \(w\) allocated dollarwise on those assets is

$$w^{\top} \Phi \left(w \otimes w\right).$$

See how that generalizes covariance? Nice, right? That aside, how do you estimate these matrices? There are a few issues here:

  1. One should abuse symmetry rather than perform redundant computations.
  2. How do you make these computations numerically robust? (That's my paranoia.)
  3. Given that one is estimating potentially thousands of co-moments, will the errors swamp your application?

To deal with estimation error, assume some structure. This can reduce error, but subjects you to model misspecification error. One way to impose structure is via shrinkage. Another is to use observed (Boudt, Lu & Peeters, 2015) or unobserved factors (working paper).
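Before moving on, here is a quick numerical check of the portfolio co-moment identity above (my own sketch; it has nothing to do with the shrinkage or factor estimators in the talk): build the unfolded coskewness matrix by brute force and compare \(w^{\top} \Phi \left(w \otimes w\right)\) with the directly computed third central moment of the portfolio returns.

```r
# My own sketch (not from the talk): brute-force unfolded coskewness matrix and
# the portfolio identity  m3(w) = w' Phi (w %x% w).
set.seed(101)
p <- 3
n <- 1e5
X <- matrix(rnorm(n * p), ncol = p)
X[, 1] <- X[, 1]^2 - 1                 # give one asset some skew

Xc <- sweep(X, 2, colMeans(X))         # demeaned returns

# unfolded p x p^2 coskewness matrix: Phi[i, (j-1)*p + k] = E[xc_i xc_j xc_k]
Phi <- matrix(0, p, p^2)
for (j in 1:p) for (k in 1:p) {
  Phi[, (j - 1) * p + k] <- colMeans(Xc * Xc[, j] * Xc[, k])
}

w <- c(0.5, 0.3, 0.2)                                # a dollarwise allocation
m3_formula <- drop(t(w) %*% Phi %*% kronecker(w, w))
m3_direct  <- mean((Xc %*% w)^3)                     # third central moment, directly
c(formula = m3_formula, direct = m3_direct)          # the two should agree
```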


Bernhard Pfaff returns again to RFin. Consider ERC, or 'risk parity' (cf. Qian 2005, 2006, 2011; Maillard et al. 2010; Roncalli 2013), where each element in the portfolio contributes an equal share of the standard deviation of risk. This segued into multiple-criteria risk optimization. How do you do multiple risk criteria optimization? Take a linear combination of your objectives, for some choice of weights, and then something something? How do you choose the weights? He then presents the mcrp package, and takes it for a spin with six objectives on five assets.

Lightning Round.

  1. Oliver Haynold on Practical Options Modeling with the sn package. Heavy stuff here: modeling volatility that captures smirk, using four parameters. Somehow this is described as a skew-t distribution, which has parameters for location, scale, stretch and tails? Ended with a market forecast for December 2017.

  2. Shuang Zhou on Nonparametric Estimate of the Risk-Neutral Density for options. I missed the first part, and then it was a lot of tables, and then some demos, and it's a wrap.

  3. Luis Damiano: A Quick Intro to Hidden Markov Models Applied to Stock Volatility. The key insight is that markets do not behave the same every day, and so have some kind of hidden state. Thus a hidden Markov Model. Nice slides, no equations, good talk.

  4. Oleg Bondarenko: Rearrangement Algorithm and Maximum Entropy. "Can you infer dependence of variables given marginal distributions of the assets and of their weighted sum?" Cue the Block Rearrangement Algorithm (BRA). Create a matrix of quantiles of returns, then perform a greedy rearrangement; a bare-bones sketch of the basic idea follows this round. (Am I really describing this on my blog? It's too technical!)

  5. Xin Chen: Risk and Performance Estimator Standard Errors for Serially Correlated Returns. Hey, guess what? Performance estimators, like Sharpe, are random variables, and come with noise, which can be biased or large. OK, I knew that. Let's check out some Hedge Fund returns data. I didn't get what the method was, but check out the package, and look forward to results from GSoC 2017.
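As promised in the Bondarenko item above, here is my own bare-bones sketch of the plain column-wise rearrangement step (the Block RA from the talk rearranges blocks of columns and brings in the maximum-entropy objective, which this does not attempt): reorder each column so it is antimonotonic with the sum of the others, which flattens the row sums.

```r
# My own bare-bones sketch of the column-wise rearrangement step, not the full
# Block RA from the talk: reorder each column to be antimonotonic with the sum
# of the others, which flattens the distribution of the row sums.
rearrange <- function(Q, n_iter = 50) {
  for (it in seq_len(n_iter)) {
    for (j in seq_len(ncol(Q))) {
      other <- rowSums(Q[, -j, drop = FALSE])
      # largest values of column j go against the smallest sums of the rest
      Q[order(other, decreasing = TRUE), j] <- sort(Q[, j])
    }
  }
  Q
}

# toy example: three N(0,1) marginals represented by their quantiles
n <- 1000
Q <- replicate(3, qnorm((1:n - 0.5) / n))
sd(rowSums(Q))               # comonotonic start: row sums are very spread out
sd(rowSums(rearrange(Q)))    # after rearrangement the row sums are nearly constant
```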


Qiang Kou ('KK') on text analysis using Apache MxNet. MxNet is a deep learning platform, with buy-in from Amazon. MxNet is designed to work on multiple platforms, with bindings in many languages besides just R. As an example usage, consider Amazon product review data: a whole lot of text with some characteristics around it. They get very good accuracy on binary classification merely by considering the reviews as matrices, not by extracting keywords. That is, an \(n\) character review is encoded as a \(k \times n\) matrix of 0/1 values, where \(k\) is the alphabet size (like \(k=63\) to include upper and lower Roman letters, numerals, and a few symbols). This seems like voodoo. He then trains an LSTM model on Shakespeare (?) and generates some random text using it. This is going to raise the authorship debate to a new level. Check out his talk.
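The character-level encoding is simple enough to sketch (my own toy version, not KK's MxNet pipeline): a string becomes a \(k \times n\) 0/1 matrix over a fixed alphabet.

```r
# My own toy version of the character-level encoding (not KK's MxNet pipeline):
# a review becomes a k x n matrix of 0/1 indicators over a fixed alphabet.
alphabet <- c(letters, LETTERS, 0:9, " ")              # k = 63 symbols here
encode_review <- function(txt, alphabet) {
  chars <- strsplit(txt, "")[[1]]
  M <- matrix(0L, nrow = length(alphabet), ncol = length(chars),
              dimnames = list(alphabet, NULL))
  hit <- match(chars, alphabet)                        # NA for out-of-alphabet characters
  M[cbind(hit[!is.na(hit)], which(!is.na(hit)))] <- 1L
  M
}
M <- encode_review("Great product, would buy again", alphabet)
dim(M)           # 63 x nchar(review)
colSums(M)       # at most one nonzero entry per character position
```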

Robert Krzyzanowski on Syberia, a 'development framework for R'. I was disappointed he fell sick last year and couldn't give a talk. The impetus for Syberia: R workflows are loosely organized scripts, which are a hairball. Sharing code and making it reproducible is hard as the number of developers grows. The solution: impose order via a framework. That would be Syberia. It introduces 'adapters' to abstract away sinking and sourcing of data. And code? So "everything is a resource". He mentions 'wide' vs. 'narrow' transformations. Idempotent files? Package dependency management for reproducibility using a lockbox. Check out his talk, and go look up Syberia. In talking with Rob, we agree, I think, that a lot of R programming has a gunslinger nature to it, while the more CS-oriented people seem to land in Python or Haskell, and so on. Syberia is Rob's answer, and I think I'm sold.

Lightning Round.

  1. Matt Dancho, New Tools for Performing Financial Analysis Within the 'Tidy' Ecosystem. That would be tidyquant. Apparently 'tidy' is a dirty word in these parts of Chicago. The idea is to use the flexibility of the tidyverse and the speed of xts. Keep your eye on this package.

  2. Leonardo Silvestri, on ztsdb, a time-series DBMS for R users. It seems pretty cool: with seamless R integration, C/C++ bindings, and too much to mention in 6 minutes. Take it for a spin in docker.

On to drinks! Not at the tower that shall not be named!

Day Two

Lightning Round

  1. Stephen Bronder: Integrating Forecasting and Machine Learning in the mlr Framework. I am sad to say I overslept and missed this one. I believe the talks were taped (they certainly were live streamed), so I am going to go back and look this one up, since mlr seems like a nice framework for ML work.

  2. Leopoldo Catania: Generalized Autoregressive Score Models in R: The GAS Package. I had never seen the Generalized Autoregressive Score idea before. The idea seems to be to compute the likelihoodist's score function at each time point, and then compute a kind of moving average of scores as an estimate of the underlying population parameter of interest. I do want to follow up on this; a toy illustration of the score-driven recursion follows this round. See the package.

  3. Guanhao Feng: Regularizing Bayesian Predictive Regressions. Again, too early in the morning for me, but it looked like Bayesian something something.

  4. Jonas Rende on partialCI: An R package for the analysis of partially cointegrated time series. Partial cointegration generalizes the notion of cointegration by allowing the residual series to contain both mean-reverting and random walk components. See the paper.

  5. Carson Sievert: Interactive visualization for multiple time series. I was hoping for new methods of visualizing multivariate time series. The talk was nice, however, covering some tools for plotting, via plotly, time series. One upside is that the underlying engine, WebGL, can comfortably handle many plot points. (The downside is that it runs in a web browser, so will never work in the constraints of my professional work environment.)
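As promised in the Catania item above, here is a toy score-driven recursion of my own (an illustration only, not the GAS package): for a Gaussian observation with time-varying variance \(f_t\), the score with respect to \(f_t\) is \((y_t^2 - f_t)/(2 f_t^2)\), and scaling by the inverse Fisher information gives the scaled score \(y_t^2 - f_t\), so the update looks very GARCH-like.

```r
# My own toy sketch of a score-driven (GAS-style) filter for a Gaussian with
# time-varying variance f_t; an illustration only, not the GAS package.
# Score wrt f: (y^2 - f) / (2 f^2); inverse-Fisher scaling gives s_t = y^2 - f.
gas_filter <- function(y, omega = 0.1, alpha = 0.1, beta = 0.9) {
  f <- numeric(length(y))
  f[1] <- var(y)                              # initialize at the sample variance
  for (i in seq_along(y)[-1]) {
    s    <- y[i - 1]^2 - f[i - 1]             # scaled score at time i - 1
    f[i] <- omega + alpha * s + beta * f[i - 1]
  }
  f
}

# simulate a volatility burst and watch the filter track it
set.seed(42)
y <- rnorm(1000, sd = rep(c(1, 3, 1), times = c(400, 200, 400)))
f <- gas_filter(y)
plot(sqrt(f), type = "l", ylab = "filtered volatility")
```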

Talks

In one of the more memorable moments of the conference, Emanuele Guidotti presented a movie of using the yuimaGUI package in the style of 'The Matrix'. yuimaGUI, a wrapper on yuima, is intended to be enabling to users, rather than just another interface. I am married to my vim setup, so am unlikely to switch, but I am curious about yuima, which seems to be (yet another) framework for defining models, estimating parameters, simulating models, and so on.

Daniel Kowal then gave a talk on Bayesian Multivariate Functional Dynamic Linear Model, with code in the FDLM package. The idea appears to be to model evolution in observed functional relationships. The running example was the U.S. Yield curve, modeled as a function of time to maturity, with that function changing over time. There is an added constraint of some kind of autocorrelation of the functional relationship, and its decomposition into basis functions. Besides having the blessing of the Bayesian Elders, I was a bit confused why you would run to a Gibbs sampler for this kind of thing, when it seems you could set up a simple matrix factorization with regularization and call it a day.

Break


Jason Foster on Scenario Analysis of Risk Parity using RcppParallel. OK, this one kind of burned my butter. Jason wrote and maintains the roll package for rolling computations using RcppParallel. As was noted in the talk, and I have lamented elsewhere, the runtimes of roll functions grow with window size, and they should not. On the upside, I should say that roll functions are 'obviously' correct, since they apply the correct upstream functions on windowed views of the vector. But since they do not reuse computations, they are slower than they should be.
(Full disclosure/advertisement: I maintain fromo, which is an alternative to roll.) Already triggered, I nearly had an aneurysm when Jason trotted out Euler's decomposition as an expression of the risk of each asset in a portfolio. (cf. Bai, Scheinberg, and Tutuncu; oh hey, my former Optimization prof!) While this is fine for some uses, it is not the "risk in each asset". It can be negative. You do not have -8% risk in an asset, that makes no goddamned sense.

If I can calm down for a moment, this is a problem I have considered before. If you hold a dollarwise portfolio \(w\) on assets with covariance \(\Sigma\), your volatility is \(\sigma=\sqrt{w^{\top}\Sigma w}\). Then if you let \(\Sigma^{1/2}\) be the symmetric square root of \(\Sigma\), define \(r=\left|\Sigma^{1/2}w\right|\). I will claim that the elements of \(r\) are the 'risks' of each asset: we have \(\sigma = \sqrt{r^{\top} r}\), or the total volatility is the length of \(r\). The elements of \(r\) are positive, so no negative risks. And furthermore, when you use the symmetric square root, the answers you get are not dependent on the ordering of the assets in your vector (which is arbitrary), as would be the case for a Cholesky factorization. That's how you define the risk of each asset. (Sorry for the rant.)
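For concreteness, here is a small numerical check of the claim (my own sketch): build the symmetric square root from the eigendecomposition, and compare the resulting per-asset risks with the Euler contributions.

```r
# A small check of the rant above (my own sketch): per-asset risks from the
# symmetric square root of Sigma, versus the Euler (marginal) contributions.
set.seed(7)
p <- 4
A <- matrix(rnorm(p * p), p, p)
Sigma <- crossprod(A) / p                   # a positive definite covariance
w <- c(1, -0.5, 0.25, 0.25)                 # a dollarwise portfolio, with a short

sqrtm_sym <- function(S) {                  # symmetric square root via eigendecomposition
  e <- eigen(S, symmetric = TRUE)
  e$vectors %*% diag(sqrt(e$values)) %*% t(e$vectors)
}

sigma <- sqrt(drop(t(w) %*% Sigma %*% w))   # portfolio volatility
r     <- abs(sqrtm_sym(Sigma) %*% w)        # per-asset 'risks': nonnegative by construction
euler <- drop(Sigma %*% w) * w / sigma      # Euler contributions: sum to sigma, can go negative

c(sigma = sigma, from_r = sqrt(sum(r^2)), from_euler = sum(euler))  # all equal
cbind(r = drop(r), euler = euler)           # r is nonnegative; euler need not be
```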

EDIT My rant here was undeservedly pissy. Rather than pretend I didn't write it, I will leave it here, but will write a more restrained and balanced followup, and invite Jason to give his view. I'll put the link here when it exists.

Lightning Round

  1. Michael Weylandt: Convex Optimization for High-Dimensional Portfolio Construction. The idea is to recast portfolio optimization with an \(L_0\) constraint (which is NP hard) as a statistical problem (presumably also hard), which can be transformed into a LASSO, i.e. \(L_1\) regularization.

  2. Lukas Elmiger: Risk Parity Under Parameter Uncertainty. This was a comparison of common portfolio construction techniques on returns from the S&P 500 universe, and from global futures data. The gist appears to be to measure the uncertainty in the outcome as a function of the uncertainty in the portfolio.

  3. Ilya Kipnis: Global Adaptive Asset Allocation, and the Possible End of Momentum. Ilya presented a strategy on a list of ETFs given in Meb Faber's 2015 book, tuning the lookback window for the momentum computation and the risk estimation, finding that the former affects performance while the latter does not. That said, there is a huge gap between in- and out-of-sample Sharpe.

  4. Vyacheslav Arbuzov traveled from Siberia to give a talk on a dividend strategy. The interesting idea here is looking at price changes around the ex-dividend date for an equity. It turns out the gap in stock prices is not explained by dividend size; rather, they find that the stock price drops less than the dividend payment would suggest. This implies a trading strategy, which he analyzes. There is uncertainty around the execution costs and so on, but a nice talk.

  5. Nabil Bouamara on The Alpha and Beta of Equity Hedge UCITS Funds. I am late to the party here, but UCITS are European mutual funds which follow guidelines for transparency, liquidity, risk management, regulatory oversight, and so on. Nabil used them as building blocks (like ETFs, say) for a fund of funds portfolio. There was a focus on false discovery.

Keynote

Dave DeMers gave a talk on Risk Fast and Slow. This talk was chock-full of war stories ('bedtime stories,' as the speaker put it) about risk management at a few very large funds over the last 20 years. He talked about ways of forecasting risk in real time: liquidity risk, a kind of crowded-market risk, and so on. He gave some interesting heuristics which could/should be used by a fund. One of these is the 'absorption ratio', which is the ratio of short to long term "percent variance explained by the first \(k\) PCA factors", or something along those lines. He mentioned how his fund had started to de-lever before the 'Quantquake of 2007', though not quickly enough. As a funny aside, at my first hedge fund job, we launched our quant fund on August 1, 2007, taking off right into the shitstorm. Good times.
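My rough reading of the absorption ratio (this sketch is mine, not the speaker's, and the exact definition in the talk may differ; cf. Kritzman et al. on principal components as a systemic risk measure): the fraction of total variance 'absorbed' by the first \(k\) principal components, with the short-to-long-window ratio serving as a crude fragility indicator.

```r
# My rough sketch of the absorption ratio (not the speaker's code): the fraction
# of total variance absorbed by the first k principal components.
absorption_ratio <- function(R, k = 2) {
  ev <- eigen(cov(R), symmetric = TRUE, only.values = TRUE)$values
  sum(ev[1:k]) / sum(ev)
}

# toy data: 10 assets driven by one common factor plus idiosyncratic noise
set.seed(99)
n <- 500; p <- 10
common <- rnorm(n)
R <- sapply(1:p, function(i) 0.7 * common + rnorm(n, sd = 0.7))

absorption_ratio(R, k = 2)                   # high when a few factors dominate

# the 'fast and slow' flavor: short-window AR relative to long-window AR
absorption_ratio(tail(R, 60), k = 2) / absorption_ratio(tail(R, 250), k = 2)
```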

Lunch. A different lunch than last year. I somehow got the low sodium option. The dessert was coconut gloop with fruit, which is actually not too bad. And more coffee.

Day Two, Afternoon

Matthew Dixon on MLEMVD, an R Package for Maximum Likelihood Estimation of Multivariate Diffusion Models. Matthew is a math guy, my people, and he talked about techniques for modeling multivariate time-homogeneous stochastic diffusions. I don't think I caught all the details: there was something interesting about using the transition function directly, as developed by Aït-Sahalia, instead of relying on least squares (in general the errors do not seem to follow a nice form, so summed quadratic error is not necessarily the right objective).


Jonathan Regenstein on Reproducible Finance with R: A Global ETF Map. This is a shiny app with clickable maps and time series, then some stuff about code blocks and markdown. As a side note, there is a lot of good work going on around mapping and geospatial data in R. The speaker brought out the naturalEarth package, which apparently simplifies making maps. Also, as a mental note, I should check out the sf package for representing 'simple features'.


David Ardia on Markov-Switching GARCH Models in R via the MSGARCH package. MSGARCH generalizes GARCH, which models volatility clustering via conditional variance, by adding regime switching, or 'Markov-Switching', to deal with structural breaks. The vanilla GARCH is then declared a 'single regime' model. David compares MS and SR, built using MCMC and MLE, and using different models for the underlying (Normal? t for fat tails? skewed t?) for some returns data. MSGARCH comes out looking good compared to brand Z.

Keven Bluteau followed up with a talk describing the capabilities of the MSGARCH package. See also the paper.

Lightning Round

  1. Riccardo Porreca. The title was 'Efficient, Consistent and Flexible Credit Default Simulation', but the real action in this talk was the use of a new Pseudo Random Number Generator, 'TRNG', which is wrapped by the [rTRNG package](https://github.com/miraisolutions/rTRNG). This fancy PRNG was needed to allow parallelized Monte Carlo simulations which are reproducible, which is a neat trick.

  2. Maisa Aniceto: Machine Learning and the Analysis of Consumer Lending. The speaker applied a bunch of ML techniques on the classification problem of predicting default in consumer lending. She used a database of around 100K consumers, with 21 independent variables, from a Brazilian financial institution. She compared logistic regression, RF, bagging, SVM, boosting. Conclusion: the ensemble methods work better than logistic, though not by terribly much.

Talks

David Smith: Detecting Fraud at 1 Million Transactions per Second. This was a demo of the R computation stack provided by RevolutionR, integrated with the MS R server, some kind of Data Science VM. I'm not a Windows person, and I cannot use frameworks like this in my professional life, but it was interesting nonetheless. For the finale, David parallelized his computations by using three remote desktop sessions.

Break


Thomas Harte on the PE package, 'Modeling private equity in the 21st century.' PE is the high yield part of alternative assets. Total AUM in PE exceeds US$3 trillion or so. But few quants play in this space. A rundown of PE: limited partners (LPs) invest in funds managed by the general partners (GPs), who invest in portfolio companies; he talked about the timeline of PE funds. The landscape of PE data: TVE/Thomson ONE was the former gold standard; Cepres is a question mark; Cambridge Associates provides indices; and Preqin uses FOIAs to get data. The GPs use discounted cash flow (DCF) models, and report modeled NAV. LPs build models from scratch, if that. The Yale Endowment Model is the model used by secondary parties.

Why is PE difficult? PE investments are long term and illiquid, with fund lifetimes around 10 years or so. In addition to market and liquidity risks, there is funding risk: the LPs are on the hook to pay cash in (up to some amount) to the GPs when requested. This cash piece apparently complicates computation of VaR, so there are adjustments to the VaR to deal with liquidity and so on.

He does some computations to aggregate the various kinds of fees which have to be added on to any VaR computation. As a mental note: investing in PE seems like a good way to lose your lunch money.


Guanhao Feng on The Market for English Premier League (EPL) Odds. Using math to beat the bookies in soccer? Feng wanted to prove you cannot do this. EPL gambling is a trillion dollar market. He creates a model for real time odds of game outcomes, and calibrates it to real data. The scores of each team are modeled as Poisson processes with different intensities, but with some correlated component. The difference of Poissons is a Skellam, apparently. This gives the probabilities of outcomes in terms of Skellams. Calibration is performed by taking the empirical odds provided by the bookies and using the identities for the Skellam mean and variance to get the intensities. I really enjoyed this talk. It seemed to follow from the stochastic processes/applied probability school of thought, rather than, say, a typical statistical framework.
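A sketch of my own of the Skellam bookkeeping (not the authors' calibration, which works off bookmaker odds in real time): with independent Poisson goal counts, the goal difference is Skellam with mean \(\lambda_1 - \lambda_2\) and variance \(\lambda_1 + \lambda_2\), so the intensities can be backed out from an implied mean and variance, and match-outcome probabilities follow.

```r
# My own sketch (not the authors' calibration) of the Skellam bookkeeping.
# Goal difference D = H - A with H ~ Pois(l1), A ~ Pois(l2) independent:
# E[D] = l1 - l2 and Var[D] = l1 + l2, so intensities follow from implied moments.
dskellam <- function(d, l1, l2, kmax = 50) {
  sapply(d, function(dd) {
    a <- max(0, -dd):kmax                      # away goals; home goals a + dd >= 0
    sum(dpois(a, l2) * dpois(a + dd, l1))
  })
}

# suppose the implied goal difference has mean 0.4 and variance 2.6
m <- 0.4; v <- 2.6
l1 <- (v + m) / 2                              # home intensity
l2 <- (v - m) / 2                              # away intensity

d <- -10:10
probs <- dskellam(d, l1, l2)
c(home_win = sum(probs[d > 0]), draw = probs[d == 0], away_win = sum(probs[d < 0]))
```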


Bryan Lewis closes the conference with Project and conquer! Bryan talked about the idea of projection onto a subspace (a line in his toy examples), pointing out that points in a space are at least as far away from each other as they are in the projection. Leveraging this idea allows you to quickly perform a number of computations, somewhat surprisingly. For example, fast thresholded distances and correlations can be so computed, apparently in subquadratic time. As an example (from the tcor vignette), finding all pairs of columns of a 1K x 20K matrix with greater than 0.99 Pearson correlation requires more than 200 Gflop via brute force, but around 1 Gflop using tcor. Bryan has extended this analysis to cases of millions of columns. He also mentioned Krylov subspaces, the span of \(\beta, X\beta, X^2\beta, ..., X^k\beta\), which is a real old school Numerical Analysis trick (IIRC, they are used in the analysis of the conjugate gradient method). Look up the tcor package for more information.
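Here is my own toy version of the projection filter (tcor itself is smarter and faster): for centered, unit-norm columns, \(\|x - y\|^2 = 2(1 - \mathrm{cor}(x, y))\), and distances can only shrink under orthogonal projection, so distances computed in the span of a few leading singular vectors give a safe way to discard most pairs before any exact correlations are computed.

```r
# My own toy version of the projection filter (tcor itself is smarter and faster).
# For centered, unit-norm columns, ||x - y||^2 = 2 (1 - cor(x, y)); orthogonal
# projection can only shrink distances, so projected distances give a safe pre-filter.
set.seed(3)
n <- 500; p <- 2000; k <- 5
fac <- matrix(rnorm(n * k), n, k)                    # a few common factors
X   <- fac %*% matrix(rnorm(k * p), k, p) + 0.1 * matrix(rnorm(n * p), n, p)
X[, 2] <- X[, 1] + 0.01 * rnorm(n)                   # plant a nearly identical pair

Xs <- scale(X, center = TRUE, scale = FALSE)
Xs <- sweep(Xs, 2, sqrt(colSums(Xs^2)), "/")         # unit-norm columns

thresh <- 0.99
d_max  <- sqrt(2 * (1 - thresh))                     # distance equivalent of cor > 0.99

U  <- svd(Xs, nu = k, nv = 0)$u                      # top k left singular vectors
Pr <- crossprod(U, Xs)                               # k x p projected coordinates

D2   <- as.matrix(dist(t(Pr)))^2                     # projected squared distances
cand <- which(D2 <= d_max^2 & upper.tri(D2), arr.ind = TRUE)
nrow(cand) / choose(p, 2)                            # only a small fraction needs an exact check

# exact correlations only for the surviving candidates
hits <- cand[apply(cand, 1, function(ij) sum(Xs[, ij[1]] * Xs[, ij[2]]) > thresh), , drop = FALSE]
nrow(hits)                                           # pairs with cor > 0.99, planted pair included
```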

I had to leave after that to catch the flight home. In all, another great conference. I hope to be back next year.

EDITS

  1. Mon May 22 2017 21:05:22 edit affiliation of Matteo Crimella. Add note regarding 'Risk Parity', with intention of writing further blog post.