Upcoming Events
Topic: How should we do linear regression?
Date: 24/03/2026
Time: 2:15 pm - 3:15 pm
Venue: YIA LT2, Yasumoto International Academic Park, The Chinese University of Hong Kong
Category: Distinguished Lecture
Speaker: Professor Richard Samworth
PDF: R20260324-DL-Samworth.pdf
Details:

Abstract:

In the context of linear regression, we construct a data-driven convex loss function with respect to which empirical risk minimisation yields optimal asymptotic variance in the downstream estimation of the regression coefficients. At the population level, the negative derivative of the optimal convex loss is the best decreasing approximation of the derivative of the log-density of the noise distribution. This motivates a fitting process via a nonparametric extension of score matching, corresponding to a log-concave projection of the noise distribution with respect to the Fisher divergence. At the sample level, our semiparametric estimator is computationally efficient, and we prove that it attains the minimal asymptotic covariance among all convex $M$-estimators. As an example of a non-log-concave setting, the optimal convex loss function for Cauchy errors is Huber-like, and our procedure yields asymptotic efficiency greater than 0.87 relative to the maximum likelihood estimator of the regression coefficients that uses oracle knowledge of this error distribution. In this sense, we provide robustness and facilitate computation without sacrificing much statistical efficiency. Numerical experiments using our accompanying R package 'asm' confirm the practical merits of our proposal.
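To illustrate the flavour of convex M-estimation described in the abstract, the sketch below fits regression coefficients under Cauchy noise by minimising a Huber empirical risk (the abstract notes the optimal convex loss for Cauchy errors is Huber-like). This is a minimal illustration only, not the speaker's procedure or the 'asm' package: the Huber threshold `delta`, the simulated data, and the use of `scipy.optimize.minimize` are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated linear model with heavy-tailed (Cauchy) errors.
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_cauchy(n)

def huber_risk(beta, X, y, delta=1.0):
    """Empirical risk under the Huber loss: quadratic near zero, linear in the tails."""
    r = y - X @ beta
    small = np.abs(r) <= delta
    return np.sum(np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta)))

# Convex M-estimation: minimise the empirical Huber risk over beta.
beta_huber = minimize(huber_risk, np.zeros(p), args=(X, y)).x

# Ordinary least squares for comparison; it is not robust to Cauchy errors.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

print("Huber:", np.round(beta_huber, 2))
print("OLS:  ", np.round(beta_ols, 2))
```

The point of the comparison is that a bounded-influence convex loss keeps the estimator stable under heavy tails, whereas squared-error loss does not; the talk's contribution is to learn the best such convex loss from the data itself.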

This will be the second of a trilogy of talks that I will give at PolyU (23 March), CUHK (24 March) and HKU (25 March). It will not be necessary to attend either of the other talks, but the material will be complementary.