One of the human mind’s most impressive feats is to think beyond the here and now: People can generalize from their current experiences, make predictions for the future, and even reason about other people’s beliefs. What gives rise to these abstract thoughts, how do they become integrated with concrete everyday experiences, and what happens when such integration fails? My research incorporates insights and methods from social, cognitive, and quantitative psychology to better understand how people connect abstract ideas to concrete experiences.
Interpersonal Evaluations as Abstract Ideas versus Concrete Experiences
One of the most important abstract ideas people hold concerns what they like. People often think about and describe their ideas of liking (e.g., a person might tell his friends that he likes intelligence in a romantic partner). But to what extent do these ideas of liking match people’s actual experiences of liking (e.g., whether that same person likes more intelligent romantic partners when he actually encounters them)? In one line of research, I explore when, how, and why ideas about liking differ from actual experiences of liking in interpersonal settings.
Do ideas of liking and experienced liking reflect merely two ways of measuring the same evaluative construct, or do they differ in meaningful ways?
Empathy is widely considered to be a moral virtue, but do people always see empathizers positively?
Why do abstract ideas and concrete experiences sometimes become disconnected, and what cognitive processes might lead to such a disconnect?
Bridging Ideas and Experiences in Statistical Inference
The challenge of connecting abstract ideas with concrete experiences exists not only in the everyday phenomena that psychologists study but also in the research process itself: Researchers’ ideas (and ideals) about the analytic methods they use might not match how they experience and use those methods in practice. Abstract guidelines on research practices (e.g., increase statistical power, minimize false-positive rates) often fail to translate into concrete practice when researchers confront barriers such as resource constraints or a lack of user-friendly statistical tools. As a result, researchers might end up relying on suboptimal methods or even drawing inaccurate inferences (e.g., committing a Type II error on the basis of an underpowered study). My research program on statistical methodology seeks to connect abstract ideals of research practice to the concrete and often messy reality of doing research. I approach this work with a specific focus on identifying practical solutions that balance multiple research goals and on providing researchers with user-friendly tools.
Covariate use can boost power by soaking up noise in a dependent variable, but flexible, data-dependent covariate selection can also inflate Type I error rates. What is the tradeoff between power and Type I error in covariate use, and how can researchers deploy covariates smartly in different contexts (e.g., experimental designs, incremental validity testing)? The sketch below illustrates the power side of this tradeoff.
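To make the power gain concrete, here is a minimal Monte Carlo sketch, assuming a hypothetical two-group experiment with one baseline covariate. The sample size, effect size, and covariate correlation are illustrative choices, not values from any study described above.

```python
# Minimal power simulation: does adjusting for a baseline covariate boost
# power to detect a two-group treatment effect? All parameter values are
# illustrative assumptions, not estimates from actual data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)

def one_replication(n_per_group=50, effect=0.3, cov_r=0.6):
    """Return p-values for the condition effect, without and with the covariate."""
    n = 2 * n_per_group
    condition = np.repeat([0.0, 1.0], n_per_group)
    covariate = rng.normal(size=n)
    # The covariate explains cov_r**2 of the DV's variance; scaling the
    # residual keeps the DV's total variance near 1 within each condition.
    dv = (effect * condition
          + cov_r * covariate
          + rng.normal(scale=np.sqrt(1 - cov_r**2), size=n))
    p_raw = sm.OLS(dv, sm.add_constant(condition)).fit().pvalues[1]
    X = sm.add_constant(np.column_stack([condition, covariate]))
    p_adj = sm.OLS(dv, X).fit().pvalues[1]
    return p_raw, p_adj

reps = 2000
pvals = np.array([one_replication() for _ in range(reps)])
print(f"Power, unadjusted: {np.mean(pvals[:, 0] < 0.05):.2f}")
print(f"Power, covariate-adjusted: {np.mean(pvals[:, 1] < 0.05):.2f}")
# Rerunning with effect=0.0 estimates the Type I error rate instead.
```

With a single pre-registered covariate, the adjusted test keeps its nominal Type I error rate; inflation arises when many candidate covariates are tried and only the significant specification is reported.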
Structural equation modeling (SEM) has become increasingly popular in psychology, yet confusion remains about how to plan studies that use SEM so that they achieve adequate statistical power (especially power to detect a target effect within a model, such as a regression coefficient among latent variables). What should researchers do? One widely recommended approach, simulation-based power analysis, is sketched below.
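The logic of simulation-based (Monte Carlo) power analysis is to specify a plausible population model, repeatedly simulate datasets from it, fit the analysis model to each, and count how often the target coefficient is significant. The following is a minimal sketch of that logic, not a tool from my research: the population values (loadings of .7, a structural path of .3, N = 200) are illustrative assumptions, and the analysis step assumes the lavaan-style Model/fit/inspect interface of the semopy package.

```python
# Sketch of Monte Carlo power analysis for a target structural coefficient
# in an SEM. All population values are illustrative assumptions.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)

MODEL_DESC = """
eta_x =~ x1 + x2 + x3
eta_y =~ y1 + y2 + y3
eta_y ~ eta_x
"""

def simulate_dataset(n=200, beta=0.3, loading=0.7):
    """Generate indicator data from a two-latent-variable population model."""
    eta_x = rng.normal(size=n)
    eta_y = beta * eta_x + rng.normal(scale=np.sqrt(1 - beta**2), size=n)
    err_sd = np.sqrt(1 - loading**2)  # keeps indicator variances near 1
    cols = {}
    for i in (1, 2, 3):
        cols[f"x{i}"] = loading * eta_x + rng.normal(scale=err_sd, size=n)
        cols[f"y{i}"] = loading * eta_y + rng.normal(scale=err_sd, size=n)
    return pd.DataFrame(cols)

def path_is_significant(df, alpha=0.05):
    """Fit the SEM and test the eta_y ~ eta_x path. Column names assume
    semopy's inspect() output ('lval', 'op', 'rval', 'p-value')."""
    model = semopy.Model(MODEL_DESC)
    model.fit(df)
    est = model.inspect()
    row = est[(est["lval"] == "eta_y") & (est["op"] == "~") & (est["rval"] == "eta_x")]
    return float(row["p-value"].iloc[0]) < alpha

reps = 200  # increase for more stable power estimates
power = np.mean([path_is_significant(simulate_dataset()) for _ in range(reps)])
print(f"Estimated power for the structural path: {power:.2f}")
```

In practice, one would rerun this loop over a grid of candidate sample sizes and choose the smallest N at which estimated power crosses a target such as .80.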