A Machine Learning Blog
Bob Wilson is a decision scientist at Meta, where he focuses on marketing AR/VR devices such as the Meta Quest. Prior roles include Director of Data Science (Marketing) at Ticketmaster and Director of Analytics at Tinder. His interests include causal inference, natural language processing, and convex optimization. When not tweaking his Emacs init file, Bob enjoys gardening, listening/singing along to Broadway musical soundtracks, and surfeiting on tacos.
M.S.E.E. in Machine Learning, 2013
B.S. in Aerospace Engineering, 2008
University of Illinois, Urbana-Champaign
Imagine we are attempting to identify segments within an audience, perhaps so we can market to them more effectively through personalization. A common approach is to apply a clustering algorithm (such as k-means) to various user covariates.
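As a minimal sketch of that approach, the snippet below clusters users on a few hypothetical covariates (the features and counts are invented for illustration, not taken from the post):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical user covariates: e.g. age, sessions per week, average spend.
X = rng.normal(size=(200, 3))

# Standardize so no single covariate dominates the distance metric.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Assign each user to one of three segments.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = kmeans.fit_predict(X)
print(np.bincount(segments))  # number of users in each segment
```

Standardizing first matters because k-means uses Euclidean distance, so covariates on larger scales would otherwise dominate the segmentation.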
A simple approach to heterogeneous treatment effect estimation relies on the difference between approximations to the outcome function in the two treatment groups. In this post, I derive the conditions under which this approach works.
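The approach can be sketched as follows: fit a separate outcome model in each treatment group and take the difference of their predictions (the simulated data and linear models here are illustrative assumptions, not the post's derivation):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))
w = rng.integers(0, 2, size=n)      # treatment indicator
tau = 2.0                           # true (constant) treatment effect
y = X @ np.array([1.0, -0.5]) + tau * w + rng.normal(scale=0.1, size=n)

# Approximate the outcome function separately in each treatment group...
mu0 = LinearRegression().fit(X[w == 0], y[w == 0])
mu1 = LinearRegression().fit(X[w == 1], y[w == 1])

# ...and estimate the treatment effect as the difference of the two fits.
cate = mu1.predict(X) - mu0.predict(X)
print(cate.mean())  # close to the true effect of 2.0
```

This recovers the effect here because treatment was randomized and both outcome models are well specified; when those conditions fail, the difference of fits can be badly biased.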
Lately I’ve been reflecting on regularization. Early in my data science career I spent some time working with generalized additive models, but over time I focused more and more on traditional statistical methods. I am rediscovering the value of regularization and expect to use more of it going forward.
Over the last five years, gamdist has formed the backbone of my research agenda. While it is very much a work in progress, this paper summarizes everything I have learned about regression. I think it is most useful as a collection of references! Still to come: details on regularization and the alternating direction method of multipliers.
We present a method of orbit determination robust against non-normal measurement errors. We approach the non-convex optimization problem by repeatedly linearizing the dynamics about the current estimate of the orbital parameters, then minimizing a convex cost function involving a robust penalty on the measurement residuals and a trust region penalty.
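The scheme described in the abstract, repeatedly linearize, then minimize a robust cost with a trust-region penalty, can be sketched on a toy nonlinear model (a sinusoid standing in for the orbital dynamics; the model, Huber threshold, and damping are all illustrative assumptions, not the paper's setup):

```python
import numpy as np

def model(theta, t):
    # Toy nonlinear "dynamics": amplitude and frequency of a sinusoid.
    a, b = theta
    return a * np.sin(b * t)

def jacobian(theta, t):
    a, b = theta
    return np.column_stack([np.sin(b * t), a * t * np.cos(b * t)])

def robust_fit(t, y, theta0, delta=0.5, lam=1.0, iters=20):
    theta = np.asarray(theta0, float)
    for _ in range(iters):
        r = y - model(theta, t)   # residuals at the current estimate
        J = jacobian(theta, t)    # linearize dynamics about the estimate
        # Huber-style weights: residuals beyond `delta` are down-weighted.
        absr = np.maximum(np.abs(r), 1e-12)
        w = np.where(absr <= delta, 1.0, delta / absr)
        # Weighted least-squares step with a ridge term acting as the
        # trust-region penalty on the step size.
        A = J.T @ (w[:, None] * J) + lam * np.eye(len(theta))
        theta = theta + np.linalg.solve(A, J.T @ (w * r))
    return theta

rng = np.random.default_rng(2)
t = np.linspace(0, 4, 200)
y = model([2.0, 1.3], t) + rng.normal(scale=0.05, size=t.size)
y[::25] += 5.0  # gross, non-normal outliers in the measurements
print(robust_fit(t, y, theta0=[1.5, 1.2]))
```

Because the Huber weights cap each outlier's influence, the gross errors barely perturb the recovered parameters, which is the point of the robust penalty.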
We discuss a beer recommendation engine that predicts both whether a user has had a given beer and the rating the user would assign it, based on the beers the user has already had and rated. k-means clustering is used to group similar users for both prediction problems. This framework may be valuable to bars or breweries trying to learn the preferences of their demographic, to consumers wondering what beer to order next, or to beer judges trying to objectively assess quality despite subjective preferences.
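A minimal sketch of the clustering idea, using a hypothetical user-by-beer rating matrix (the data, cluster count, and cluster-mean prediction rule are assumptions for illustration, not the engine's actual model):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical user-by-beer rating matrix; 0 means the beer was not tried.
ratings = rng.integers(1, 6, size=(60, 8)).astype(float)
ratings[rng.random(ratings.shape) < 0.3] = 0.0

# Group users with similar rating profiles.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(ratings)

def predict_rating(user, beer):
    """Predict a rating as the mean rating of the user's cluster for that beer,
    falling back to the beer's overall mean if no cluster peer has tried it."""
    peers = ratings[labels == labels[user], beer]
    rated = peers[peers > 0]
    if rated.size:
        return rated.mean()
    everyone = ratings[ratings[:, beer] > 0, beer]
    return everyone.mean()

print(round(predict_rating(0, 2), 2))
```

The same cluster assignments can serve the "has the user had this beer?" problem by predicting from the fraction of cluster peers who have tried it.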