I am currently editing and adding significant content to a book covering statistical modeling and learning in vision and cognition, soon to be released by my advisor, Dr. Song-Chun Zhu, and Dr. Ying Nian Wu at UCLA. I am also writing a Master's thesis for an MS in Computer Science under Dr. Song-Chun Zhu at the Center for Vision, Cognition, Learning, and Autonomy (VCLA) at UCLA.
Before my Master's, I worked as a full-stack software engineer in the San Francisco Bay Area for 3 years, most recently at NatureBox.
My research interests are in generative modeling for computer vision and natural language processing, particularly in applying generative learning techniques to language problems. I have contributed to several research projects that apply MCMC sampling and generative, inference, and energy-based models to various vision and language tasks. Many of these projects are on my GitHub.
PhD Statement of Purpose (Updated 1/28/2020)
Speech recognition algorithm for trigger word detection, the technology behind Alexa, Google Home, and Siri
Neural style transfer generates images that reflect the content of one image but the artistic “style” of another
Estimated the number of self-avoiding walks with importance sampling and effective sample size
Explored three techniques that use image gradients to generate new images
Trained SVM classifiers to infer 14 facial traits from low-level image features and used that information to make election predictions
Used AdaBoost, RealBoost, Haar filters, non-maximum suppression, and hard negative mining for the task of face detection
Compared Principal Component Analysis, an autoencoder, and Fisher Linear Discriminants for the task of analyzing human faces
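As a flavor of the self-avoiding-walk project above, here is a minimal sketch of one standard way to estimate the number of self-avoiding walks by importance sampling (Rosenbluth-style sequential growth on the square lattice), together with the effective sample size of the weights. The function names, the choice of lattice, and the growth scheme are illustrative assumptions, not necessarily those used in the original project:

```python
import random

def rosenbluth_sample(n):
    """Grow one self-avoiding walk of n steps on the square lattice.

    At each step we pick uniformly among unvisited neighbors and multiply
    the weight by the number of available choices; the weight is then an
    unbiased estimate of the number of n-step self-avoiding walks.
    Returns 0 if the walk traps itself before reaching length n.
    """
    pos = (0, 0)
    visited = {pos}
    weight = 1
    for _ in range(n):
        x, y = pos
        choices = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                   if p not in visited]
        if not choices:
            return 0  # walk is trapped; contributes zero weight
        weight *= len(choices)
        pos = random.choice(choices)
        visited.add(pos)
    return weight

def estimate_saw_count(n, num_samples=10000, seed=0):
    """Average the importance weights to estimate the SAW count,
    and report the effective sample size (sum w)^2 / sum w^2."""
    random.seed(seed)
    weights = [rosenbluth_sample(n) for _ in range(num_samples)]
    estimate = sum(weights) / num_samples
    ess = sum(weights) ** 2 / sum(w * w for w in weights)
    return estimate, ess
```

For short walks the estimate is exact because every sample carries the same weight (e.g. every 2-step walk has weight 4 × 3 = 12, the true count); for longer walks the weights spread out and the effective sample size drops below the raw sample count.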