Posts

Thoughts on Models with Regularization

Lately I’ve been reflecting on regularization. Early in my data science career I spent some time working with generalized additive models, but over time I shifted more and more toward traditional statistical methods. I’m rediscovering the value of regularization and expect to use more of it going forward.

The Winner’s Curse: Why It Happens and What to Do About It

When running an A/B test with many variants (say, more than five), we often run into a phenomenon known as the Winner’s Curse: the winning variant performs worse once we adopt it universally than it did during the test itself. In this post, we discuss why this phenomenon occurs and what to do about it.
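
To see the selection effect at work, here is a minimal simulation (my own sketch, not code from the post): every variant has the same true lift, yet the apparent winner’s measured lift consistently overstates it, because picking the maximum of noisy estimates also picks favorable noise.

```python
import numpy as np

rng = np.random.default_rng(0)

n_variants = 10      # hypothetical test with many variants
true_lift = 0.02     # every variant shares the same true lift
se = 0.01            # standard error of each variant's estimate
n_sims = 100_000

# Simulate noisy lift estimates and always "ship" the apparent winner.
estimates = rng.normal(true_lift, se, size=(n_sims, n_variants))
winner_estimate = estimates.max(axis=1)

print(f"true lift:             {true_lift:.4f}")
print(f"mean winning estimate: {winner_estimate.mean():.4f}")
# The winner's estimate exceeds the true lift on average: that gap
# is the Winner's Curse.
```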

Reflections on 2021 and Interests Going Into 2022

As 2021 wraps up, I’ve been reflecting on the past year and thinking about the next. I was similarly reflective this time last year, when I wrote about how 2020, for me, was the Year of Emacs.

Robust Portfolio Optimization in Models with Diminishing Returns

In our last post, we discussed how model uncertainty poses a risk when allocating resources among productive assets. In this post, we expand the discussion to models with diminishing returns, which are common in economics. As before, we can incorporate model uncertainty directly into the problem, achieving good performance regardless of the true model with minimal impact on nominal performance. Robust optimization is both powerful and practical.
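
As a flavor of the approach, here is a toy sketch (my own construction, using cvxpy; the post’s formulation may differ): each candidate model assigns concave, diminishing returns to each asset, and we allocate a budget to maximize the worst case across the models.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n_assets, n_models, budget = 4, 3, 10.0

# Candidate model k says asset i yields a[k, i] * log(1 + x_i):
# concave, so returns diminish. The true model is unknown.
a = rng.uniform(0.5, 2.0, size=(n_models, n_assets))

x = cp.Variable(n_assets, nonneg=True)
model_returns = [a[k] @ cp.log(1 + x) for k in range(n_models)]

# Robust objective: maximize the worst case over candidate models.
problem = cp.Problem(cp.Maximize(cp.minimum(*model_returns)),
                     [cp.sum(x) == budget])
problem.solve()
print("robust allocation:", np.round(x.value, 3))
```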

Robust Portfolio Optimization in Generalized Linear Models

We often run an A/B test to inform some decision. But every A/B test involves uncertainty, no matter the sample size. This uncertainty poses a risk to our decision, one we can hedge by a process analogous to diversifying an investment portfolio. Finding a robust-optimal portfolio is both practical and fast.
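
To illustrate the hedging idea (a toy sketch of my own; the post works through the full generalized-linear-model treatment), suppose three variants have estimated lifts measured with different precisions. Maximizing the worst-case lift over an ellipsoid around the estimates spreads our bet rather than going all-in on the highest point estimate:

```python
import cvxpy as cp
import numpy as np

# Hypothetical per-variant lift estimates and standard errors,
# treated as independent for this sketch.
lift = np.array([0.030, 0.028, 0.022])
se = np.array([0.015, 0.008, 0.004])
kappa = 2.0  # radius of the uncertainty ellipsoid (~95%)

w = cp.Variable(len(lift), nonneg=True)

# Worst-case lift over the ellipsoid: w @ lift - kappa * ||diag(se) w||
worst_case = lift @ w - kappa * cp.norm(cp.multiply(se, w), 2)

cp.Problem(cp.Maximize(worst_case), [cp.sum(w) == 1]).solve()
print("hedged allocation:", np.round(w.value, 3))
# The allocation tilts toward precisely measured variants instead of
# betting everything on the noisiest (if highest) point estimate.
```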

Focus on Iteration Speed

The OODA loop (Observe-Orient-Decide-Act) was developed by USAF Colonel John Boyd to improve fighter pilot performance in the field. We can apply a similar framework to developing data science models more efficiently. The key insight is to embrace the iterative nature of model development and to streamline each step of the loop.

The Alternative to Causal Inference is Worse Causal Inference

Some of the most important questions data scientists investigate are causal questions. They’re also some of the hardest to answer! A well-designed A/B test often provides the cleanest answer, but when a test is infeasible, there are plenty of other causal inference techniques that may be useful. While not perfect, these techniques are much better than the alternative: ad hoc methods with no logical foundation.

Bayesian A/B Testing Considered Harmful

In science we study physically meaningful quantities that have some kind of objective reality, which means multiple people should draw substantively equivalent conclusions from the same data. But in some situations this principle is at odds with the Bayesian Coherency Principle, and so we have to choose between internal consistency and consistency with external reality.

Edgeworth Series in Python

We often use distributions that can be reasonably approximated as Gaussian, typically due to the Central Limit Theorem. When the sample size is large (and the tails of the distribution are reasonable), the approximation is really good and there’s no point worrying about it. But with modest sample sizes, or if the underlying distribution is heavily skewed, the approximation may not be good enough.
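
As a taste of what the post covers, here is a sketch (my own, using scipy; the post’s implementation may differ) of the one-term Edgeworth correction, compared against the exact and plain-normal CDFs for the standardized mean of a skewed distribution:

```python
import numpy as np
from scipy import stats

def edgeworth_cdf(x, skew, n):
    """One-term Edgeworth approximation to the CDF of the
    standardized mean of n iid draws with the given skewness."""
    return (stats.norm.cdf(x)
            - stats.norm.pdf(x) * skew * (x**2 - 1) / (6 * np.sqrt(n)))

# Standardized mean of n = 10 Exponential(1) draws (skewness = 2).
n, skew, x = 10, 2.0, 0.5

# Exact: the sum of n Exp(1) draws is Gamma(n, 1).
exact = stats.gamma.cdf(n + np.sqrt(n) * x, a=n)
print(f"exact:     {exact:.4f}")
print(f"normal:    {stats.norm.cdf(x):.4f}")
print(f"edgeworth: {edgeworth_cdf(x, skew, n):.4f}")
```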

Testing with Many Variants

This is a long drive for someone with nothing to think about.