I am currently writing a master's thesis for an MS in Computer Science, under Dr. Song-Chun Zhu at the Center for Vision, Cognition, Learning, and Autonomy (VCLA) at UCLA.
Before my master's, I worked as a Full Stack Software Engineer in the San Francisco Bay Area for almost three years, most recently at NatureBox.
My research interests are in generative modeling and multi-agent systems. I have contributed to several research projects that apply MCMC sampling and generative, inference, and energy-based models to vision and language tasks. Many of these projects are on my GitHub.
PhD Statement of Purpose (Updated 1/28/2020)
Speech recognition algorithm for trigger word detection, the technology behind Alexa, Google Home, and Siri
Neural style transfer generates images that reflect the content of one image but the artistic “style” of another
Estimating the number of self-avoiding walks with importance sampling and effective sample size
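A minimal sketch of this kind of estimator, using Rosenbluth-style sequential importance sampling on the 2D square lattice (function and parameter names here are illustrative, not taken from the project):

```python
import random

def estimate_saw_count(n_steps, n_samples, seed=0):
    """Estimate the number of n-step self-avoiding walks on the 2D square
    lattice via sequential importance sampling, and report the effective
    sample size (ESS) of the importance weights."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    weights = []
    for _ in range(n_samples):
        pos = (0, 0)
        visited = {pos}
        w = 1.0
        for _ in range(n_steps):
            # Neighbors not yet visited by this walk.
            free = [(pos[0] + dx, pos[1] + dy) for dx, dy in moves
                    if (pos[0] + dx, pos[1] + dy) not in visited]
            if not free:      # dead end: this sample contributes weight 0
                w = 0.0
                break
            w *= len(free)    # weight accumulates the number of choices
            pos = rng.choice(free)
            visited.add(pos)
        weights.append(w)
    estimate = sum(weights) / n_samples          # unbiased count estimate
    ss = sum(w * w for w in weights)
    ess = (sum(weights) ** 2) / ss if ss > 0 else 0.0
    return estimate, ess
```

For two-step walks every sample has weight 4 × 3 = 12, so the estimate is exactly 12 and the ESS equals the sample size; for longer walks the weights spread out and the ESS diagnoses how reliable the estimate is.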
Explores three techniques that use image gradients to generate new images
Trained SVM classifiers to infer 14 facial traits from low-level image features and used that information to make election predictions
Used AdaBoost, RealBoost, Haar filters, non-maximum suppression, and hard negative mining for the task of face detection
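The non-maximum suppression step can be sketched as greedy filtering over overlapping detections; the box format and IoU threshold below are assumptions for illustration, not details of the project:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop remaining boxes that overlap it above the threshold, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

On a sliding-window detector this collapses the cluster of high-scoring windows around each face into a single detection.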
Compares Principal Component Analysis, an autoencoder, and Fisher Linear Discriminants for the task of analyzing human faces