I graduated from Stanford in 2020 with my BS and MS in Computer Science, with a focus in AI. I'm all about building impactful AI applications, empowering women in tech, singing and guitar, skiing, wilderness photography, and fashion.
August 2021 - Present
- Tech Lead on Agents Quality: We built the initial product, and we continue to hill-climb on LLM-judge metrics and incorporate new AI-powered features to keep it cutting edge.
- Assistant Quality: Built new features and use cases all around the Agentic (and pre-Agentic) engines of our famous Glean Assistant.
- Doc Q&A: Area owner: built and scaled the primary tool that reads document contents in LLM-powered products.
- Query Understanding: Developed early query parsing systems, spellcheck, acronym expansion, etc.
- Patented: Expert detection, a system that determines who knows the most about any given topic or area.
June 2020 - September 2020
Deep Learning: Diagnosed data sampling processes and implemented new techniques to improve performance of the SOTA stack, e.g. sharding and upsampling low-resource classes. Refactored and maintained eval pipelines to increase their throughput and efficiency.
June 2019 - September 2019
Autonomous vehicles. Bonus: Our team won the hackathon with a road-quality monitoring and mapping system.
June 2018 - August 2018
Worked on Zero-Shot Multilingual Neural Machine Translation under Professor Rico Sennrich, and designed a discriminator that used an adversarial objective to universalize language representations during training.
June 2017 - September 2017
Computer vision and object detection. Trained deep learning models from scratch to build gun detection capabilities on mobile.
Dive deep into my work, both professional and personal.
Reinforcement Learning
This paper tackles a hierarchical reinforcement learning task: collision avoidance for an autonomous rocket in a field of asteroids. The problem is modeled as a Markov decision process and solved with the MAXQ decomposition and the MAXQ-0 learning algorithm, which are compared against flat Q-learning.
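For a flavor of the flat Q-learning baseline the paper compares against, here is a minimal tabular sketch; the state/action sizes, rewards, and step sizes are illustrative, not from the paper.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning backup: move Q(s, a) toward the TD target."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Toy run: 5 states, 4 actions, one observed transition.
Q = np.zeros((5, 4))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
```

MAXQ instead decomposes the value function over a hierarchy of subtasks, so each subtask learns its own (much smaller) completion function rather than one monolithic table like this.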
Spoken Language Processing
We mitigate state-of-the-art models' tendencies to overfit by using a combination of augmentation techniques: pitch, amplitude, noise, and vocal-tract-length perturbations, as well as time and frequency masking. All our experiments outperform the baseline on multiple speech recognition metrics.
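The time and frequency masking mentioned above can be sketched as SpecAugment-style zeroing of random bands in a spectrogram; the mask widths and spectrogram shape here are illustrative assumptions, not the project's actual settings.

```python
import numpy as np

def mask_spectrogram(spec, max_freq_width=8, max_time_width=16, rng=None):
    """Zero out one random frequency band and one random time span."""
    rng = rng or np.random.default_rng()
    out = spec.copy()
    n_freq, n_time = out.shape
    # Frequency mask: rows [f0, f0 + f).
    f = int(rng.integers(0, max_freq_width + 1))
    f0 = int(rng.integers(0, n_freq - f + 1))
    out[f0:f0 + f, :] = 0.0
    # Time mask: columns [t0, t0 + t).
    t = int(rng.integers(0, max_time_width + 1))
    t0 = int(rng.integers(0, n_time - t + 1))
    out[:, t0:t0 + t] = 0.0
    return out

spec = np.random.default_rng(0).random((80, 100))  # 80 mel bins, 100 frames
aug = mask_spectrogram(spec, rng=np.random.default_rng(1))
```

Masking forces the model to rely on the surrounding context rather than any single band, which is what curbs overfitting.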
Computer Vision
Preserving art is a test of time: paintings often have damaged or missing portions. With no reference to how a painting looked at its creation, producing truthful reconstructions of unique, rare art pieces is a real problem. Utilizing Model-Agnostic Meta-Learning (MAML) on top of a CNN, we develop a regeneration model that accurately restores paintings and generalizes across varying artistic complexity.
Knowledge Graphs
Get this: 55% of users read online articles for less than 15 seconds. The general problem of understanding large spans of content is painstaking, with no efficient solution. We create a tool that generates analytical concept graphs over text, providing the first-ever standardized, structured, and interpretable medium to understand and draw connections from large spans of content.
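As a toy illustration of a concept graph over text, here is a co-occurrence sketch that links two concepts whenever they share a sentence; the keyword matching and example concepts are stand-ins, and real concept extraction would be far richer.

```python
from collections import defaultdict
from itertools import combinations

def build_concept_graph(sentences, concepts):
    """Link two concepts with an edge whenever both appear in a sentence."""
    graph = defaultdict(set)
    for sentence in sentences:
        found = sorted({c for c in concepts if c in sentence.lower()})
        for a, b in combinations(found, 2):
            graph[a].add(b)
            graph[b].add(a)
    return dict(graph)

g = build_concept_graph(
    ["Neural networks power modern translation.",
     "Translation quality depends on training data."],
    concepts={"neural networks", "translation", "training data"},
)
```

Even this crude graph lets a reader jump between related ideas ("translation" bridges both sentences) instead of scanning the full text linearly.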
Let's get in touch.
Just shoot me an email,
or connect with me on LinkedIn.
San Francisco
(858) 380-9511
laurenzhu@alumni.stanford.edu