Hello! I'm a PhD student at Georgia Tech interested in understanding how people learn, generate, and use abstractions and concepts. I'm particularly fascinated by unsupervised learning and how it may explain why people can learn so quickly from so little data. I'm also interested in how people represent functions and data structures.
I'm interested in postdoctoral positions in cognitive science starting Fall 2025 that involve any of the following research topics:
- How people learn concepts, especially without feedback (unsupervised). Particularly interested in concepts/domains such as:
  - functions
  - recursion
  - structure / analogy
  - number
- How people learn and make decisions under uncertainty (using computational cognitive models)
- How people solve difficult (NP-hard/NP-complete) problems such as search, constraint satisfaction, etc.
- How people understand information visualizations
- How people comprehend computer science concepts and textual programming languages
I use the following research methodologies, and I'm always eager to learn new ones:
- large online [multiplayer] experiments
- eye-tracking
- computational models of categorization and unsupervised learning
- large language models and other vector representations
- deep convolutional neural network models
- standard statistical and AI techniques: linear mixed-effects models, factor analysis, cluster analysis, SVMs, decision trees, Bayes nets, etc.
Research overview
In my research, advised by Professor Sashank Varma at Georgia Tech, I've investigated how people may use clustering, an unsupervised learning technique, to solve the traveling salesperson problem, an intractable problem for which finding the optimal solution is difficult. We've found evidence suggesting that people are strongly guided by the clusters they perceive in dot clouds.
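To make the idea concrete, here is a minimal sketch (my own illustration, not the model from our papers) of a cluster-first heuristic for the traveling salesperson problem: group the points with plain k-means, then chain greedy nearest-neighbor sub-tours through each cluster in turn.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2D points; returns a list of non-empty clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        # move each center to its cluster's mean (keep old center if empty)
        centers = [
            ((sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
             if c else centers[i])
            for i, c in enumerate(clusters)
        ]
    return [c for c in clusters if c]

def nearest_neighbor_tour(points, start):
    """Greedy nearest-neighbor ordering of points, beginning near `start`."""
    remaining = list(points)
    tour = []
    current = start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

def cluster_first_tour(points, k=3):
    """Cluster the points, order the clusters by centroid, then visit
    each cluster's points with a nearest-neighbor sub-tour."""
    clusters = kmeans(points, k)
    centroids = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                 for c in clusters]
    cluster_order = nearest_neighbor_tour(centroids, (0.0, 0.0))
    tour = []
    current = (0.0, 0.0)
    for cen in cluster_order:
        sub = nearest_neighbor_tour(clusters[centroids.index(cen)], current)
        tour.extend(sub)
        current = sub[-1]
    return tour
```

This is only a caricature of the behavioral claim, of course: the interesting question is whether the clusters people *perceive* play the role that k-means plays here.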
This led me to research how people perceive clusters in dot clouds, a type of stimulus in widespread use across the fields of cognitive science. In my opinion, there is surprisingly little research on this topic, despite it being historically important. I mean, the Gestaltists were doing good work on this question back in 1914! I have a paper under review outlining the properties of the clusters people perceive and initial evidence of their strategies. An ongoing program of research is to determine whether visual clustering of dot clouds is similar to unsupervised learning of non-visual stimuli. I've found that models of human categorization such as SUSTAIN and the Rational model do a decent job of predicting people's clusters. A model that I have developed with visual clustering and biological principles in mind, the competitive clustering model, does better, but there's lots of work to be done to improve it.
I've recently begun a line of work investigating how people may use clustering to understand the numerosity of a visual stimulus. Prior work has produced mixed results on whether the distribution of the points affects the perceived numerosity of a stimulus, and whether people use clusters of points to enumerate it. We conducted a large online experiment which suggests that people use the cluster structure of a stimulus as a heuristic for numerosity, and that the number of clusters people perceive is predictive of their magnitude comparisons and numerosity estimates. I intend to extend this line of work on unsupervised learning to non-dot-cloud stimuli, and to investigate the properties of semantic spaces that people find easier or harder to learn.
In a different (but not completely separate) line of work, I am investigating how people understand functions, both mathematical and computational. We asked people to estimate functions such as log, sqrt, n^2, n^3, n!, and 2^n to understand why people perform poorly at estimating such functions at large inputs. The results imply that people have a linearization bias in their estimates, similar to the biases found when people learn functions for the first time. In future work, I intend to investigate how people understand computational (procedural) functions and the circumstances in which they decide to create a function.
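Part of what makes these estimates hard is sheer scale: at even a modest input, the functions above span many orders of magnitude. A quick illustration (my own, not a stimulus from the study):

```python
import math

n = 20
values = {
    "log(n)": math.log(n),      # ~3
    "sqrt(n)": math.sqrt(n),    # ~4.5
    "n^2": n**2,                # 400
    "n^3": n**3,                # 8,000
    "2^n": 2**n,                # 1,048,576
    "n!": math.factorial(n),    # ~2.4 quintillion
}
for name, v in sorted(values.items(), key=lambda kv: kv[1]):
    print(f"{name:>7}: {v:,.0f}")
```

A linear extrapolation from small inputs badly underestimates the fast-growing functions here, which is exactly the kind of bias we observe.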
I am interested in a variety of other topics as well. In collaboration with others, I have investigated the similarities between LLMs' and humans' representations of number, how vision models do better at continual learning tasks when the environment follows a power law, and how LLMs are unlike humans in how they process the magnitudes of words. I've also looked at individual differences in how people detect geometric and topological properties, and at how debugging in computer science is a complicated, multifaceted endeavour, evident in how students grapple with errors in their own code. During my undergrad years at the University of Wisconsin-Madison, I was interested in how people understand fractions and proportions. Together with Prof. Andreas Obersteiner and Prof. Martha Alibali, I described how people use benchmarks in fraction comparisons, in addition to other self-reported strategies. My undergraduate senior thesis was on how people map tape diagram visualizations to fractions and proportions.
Software
The computer, along with the internet, is one of the most powerful tools cognitive scientists possess in the modern age, and I am passionate about using it to its full potential. I've been a contributor to jsPsych in the past, and now maintain a flexible (but sparsely documented) library called reaction-time that I use to run all my experiments. Here's an introductory talk I've given on making online experiments. I'm happy to run more workshops on how to design and run online experiments, so please reach out if you're interested.
I strongly believe computational models are one of the few ways cognitive scientists can get to a 'grand theory of cognition'. I have used CNNs, LLMs, and other custom models to both successfully and unsuccessfully predict human behavior. However, I believe there are many rivers to cross before such models can serve as explanations of cognitive properties. This is one aspect of my skillset I hope to develop in the future.
I strongly believe in the free software movement. I use Linux (GNU/Linux, fine....) as my main operating system, GNU Guile Scheme as my main scripting language, and GNU Emacs as my primary editor. Unfortunately, my most popular software project is a Neovim plugin. I'm happy to say that my PhD has been almost entirely powered by free software. Even this website, which is written in Guile Scheme and my custom markup language called nectar, is open-source!
Fun
Welcome to the part of the page I would put at the very top if I weren't searching for a darn postdoc!
I really love music. While my primary genres are ambient and electronica, my tastes range from post rock to black metal, and from easy-going deep house to noise music. Please check out my music library if you're curious. I always appreciate recommendations; send me an email with some anytime!
I also enjoy making "music" (sounds or noise according to others). Modular synthesis is my jam. FM synthesis continues to tickle my ears. I love using my Eurorack synthesizer. Here's my ModularGrid for Rack 1 and Rack 2. I also enjoy interacting with the monome and VCV Rack ecosystems. I would love to jam with someone with similar interests. I'll come out with an album soon, I swear...
I enjoy reading. A lot of world news and non-fiction, but some fiction too! Currently reading A Canticle for Leibowitz. I wasn't a huge fan of A Year of Rest and Relaxation. Recommendations welcome!
I also appreciate typography and fonts. Feel free to check out my font collection.