I work as a Quant at G-Research, where I strive to generate innovative and systematic ideas for predicting financial markets. I design and develop novel neural network architectures and training methods to find orthogonal signals and form predictions from large, noisy datasets.
I obtained my PhD from the Department of Electrical Engineering at Stanford University under the supervision of Mert Pilanci.
My primary research interest is neural network compression and dimensionality reduction, both during training and at inference. To that end, I explore two seemingly unrelated research areas and their combination: randomized compression techniques (a.k.a. structured random projections) and (unorthodox) neural network representations and optimization methods that trade high dimensionality for convexity.
Google Scholar & Github.
Throughout my PhD, I also had the pleasure of developing reinforcement and imitation learning algorithms for safety-critical applications with Marco Pavone, visiting Laurent El Ghaoui at UC Berkeley, interning under Mohammad Ghavamzadeh at Facebook AI Research, and collaborating with many great researchers.