Structural equation modeling (SEM) has become increasingly popular in psychology as a tool for modeling multivariate relations and testing theories. Compared to standard regression on observed variables, SEM allows researchers to account for measurement error in psychological constructs and thereby estimate true effect sizes more accurately. Yet confusion remains about how to plan studies that use this analytic technique so that they achieve adequate statistical power. Extant guidance consists primarily of sample-size rules of thumb that lack empirical support and of power analyses for detecting model misfit (i.e., how well or poorly a model fits the data overall). Missing from current practice is power analysis for detecting a target effect within a model (e.g., a regression coefficient between latent variables), which is often central to researchers’ hypotheses. To show the difference between power to detect model misfit and power to detect a target effect, I conducted simulations on the factors that determine power to detect a target effect in SEM, and I created a user-friendly Shiny web app, pwrSEM, that lets researchers run power analyses for detecting a target effect in SEM without needing to learn code or simulation procedures (Wang & Rhemtulla, invited revision). I plan to extend this work by examining how power to detect a target effect varies with the method used to test a parameter (e.g., Wald vs. chi-square difference tests) and with exogenous covariances.
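The logic behind simulation-based power analysis for a target effect can be sketched in a few lines: repeatedly generate data under an assumed effect size, test the target parameter in each sample, and report the proportion of significant results. The minimal Python sketch below illustrates this loop, and also why measurement error matters, by regressing on a noisy proxy of a latent predictor (a real SEM power analysis, as in pwrSEM, would fit the full latent-variable model rather than OLS; the function name, effect size, and reliability values here are hypothetical choices for illustration).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power_for_slope(beta, n, reliability, n_sims=500, alpha=0.05):
    """Monte Carlo power to detect a regression slope when the predictor
    is measured with error. `reliability` is the share of observed-score
    variance due to the true score; values below 1 attenuate the observed
    slope and reduce power. Illustrative sketch only, not pwrSEM's procedure."""
    hits = 0
    for _ in range(n_sims):
        x_true = rng.standard_normal(n)              # latent predictor
        y = beta * x_true + rng.standard_normal(n)   # outcome
        # observed predictor = true score + measurement error
        err_sd = np.sqrt((1 - reliability) / reliability)
        x_obs = x_true + err_sd * rng.standard_normal(n)
        # simple OLS slope, standard error, and two-sided t-test
        sxx = (n - 1) * np.var(x_obs, ddof=1)
        b = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
        resid = (y - y.mean()) - b * (x_obs - x_obs.mean())
        se = np.sqrt(resid @ resid / (n - 2) / sxx)
        p = 2 * stats.t.sf(abs(b / se), df=n - 2)
        hits += p < alpha
    return hits / n_sims
```

For a given true slope and sample size (say, `power_for_slope(0.3, 100, 1.0)` vs. `power_for_slope(0.3, 100, 0.5)`), power drops noticeably as reliability falls, which is the attenuation that latent-variable modeling is meant to correct.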