Joseph Simmons

Associate Professor, Operations, Information, and Decisions at The Wharton School

Schools

  • The Wharton School

Biography

Joe Simmons is an Associate Professor at the Wharton School of the University of Pennsylvania, where he teaches a course on Managerial Decision Making. He has two primary areas of research. The first explores the psychology of judgment and decision-making, with an emphasis on understanding and fixing the errors and biases that plague people’s judgments, predictions, and choices. The second area focuses on identifying and promoting easy-to-adopt research practices that improve the integrity of published findings. Joe is also an author of Data Colada, an online resource that attempts to improve our understanding of scientific methods, evidence, and human behavior, and a co-founder of AsPredicted.org, a website that makes it easy for researchers to preregister their studies.

Leif D. Nelson, Joseph Simmons, Uri Simonsohn (2017), Psychology's Renaissance, Annual Review of Psychology, forthcoming.

Abstract: In 2010-2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call “psychology’s renaissance.” We begin by describing how psychology’s concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodological changes that psychologists have proposed and, in some cases, embraced. In describing how the renaissance has unfolded, we attempt to describe different points of view fairly but not neutrally, so as to identify the most promising paths forward. In so doing, we champion disclosure and preregistration, express skepticism about most statistical solutions to publication bias, take positions on the analysis and interpretation of replication failures, and contend that “meta-analytical thinking” increases the prevalence of false positives. Our general thesis is that the scientific practices of experimental psychologists have improved dramatically.

Joseph Simmons and Uri Simonsohn (2017), Power Posing: P-curving the Evidence, Psychological Science, 28 (May), pp. 687-693.

Abstract: In a well-known article, Carney, Cuddy, and Yap (2010) documented the benefits of “power posing.” In their study, participants (N=42) who were randomly assigned to briefly adopt expansive, powerful postures sought more risk, had higher testosterone levels, and had lower cortisol levels than those assigned to adopt contractive, powerless postures. In their response to a failed replication by Ranehill et al. (2015), Carney, Cuddy, and Yap (2015) reviewed 33 successful studies investigating the effects of expansive vs. contractive posing, focusing on differences between these studies and the failed replication, to identify possible moderators that future studies could explore. But before spending valuable resources on that, it is useful to establish whether the literature that Carney et al. (2015) cited actually suggests that power posing is effective. In this paper we rely on p-curve analysis to answer the following question: Does the literature reviewed by Carney et al. (2015) suggest the existence of an effect once we account for selective reporting? We conclude that it does not. The distribution of p-values from those 33 studies is indistinguishable from what is expected if (1) the average effect size were zero, and (2) selective reporting (of studies and/or analyses) were solely responsible for the significant effects that are published. Although more highly powered future research may find replicable evidence of the purported benefits of power posing (or unexpected detriments), the existing evidence is too weak to justify a search for moderators or to advocate for people to engage in power posing to better their lives.

Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2017), False-Positive Citations, Perspectives on Psychological Science, forthcoming.

Abstract: This invited paper describes how we came to write an article called "False-Positive Psychology."

Berkeley J. Dietvorst, Joseph Simmons, Cade Massey (2016), Overcoming Algorithm Aversion: People Will Use Algorithms If They Can (Even Slightly) Modify Them, Management Science, forthcoming.

Abstract: Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, and not for a desire for greater control over the forecasting outcome, as participants’ preference for modifiable algorithms was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control, even a slight amount, over an imperfect algorithm’s forecast.

Theresa F. Kelly and Joseph Simmons (2016), When Does Making Detailed Predictions Make Predictions Worse?, Journal of Experimental Psychology: General, 145 (October), pp. 1298-1311.

Abstract: In this paper, we investigate whether making detailed predictions about an event worsens other predictions of the event. Across 19 experiments, 10,896 participants, and 407,045 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes useless or redundant information more accessible and thus more likely to be incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of events will and will not be susceptible to the negative effect of making detailed predictions.

Uri Simonsohn, Joseph Simmons, Leif D. Nelson (Working), Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications.

Abstract: Empirical results often hinge on data analytic decisions that are simultaneously defensible, arbitrary, and motivated. To mitigate this problem we introduce Specification-Curve Analysis. This approach consists of three steps: (i) estimating the full set of theoretically justified, statistically valid, and non-redundant analytic specifications, (ii) displaying the results graphically in a manner that allows identifying which analytic decisions produce different results, and (iii) conducting statistical tests to determine whether the full set of results is inconsistent with the null hypothesis of no effect. We illustrate its use by applying it to three published findings. One proves robust, one weak, one not robust at all. Although it is impossible to eliminate subjectivity in data analysis, Specification-Curve Analysis minimizes the impact of subjectivity on the reporting of results, resulting in a more systematic, thorough, and objective presentation of the data.
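
To make step (i) concrete, here is a minimal, hypothetical Python sketch of the enumeration stage of a specification-curve analysis. The column names, covariate sets, and outlier rules below are illustrative assumptions, not taken from the paper; the idea is simply to fit one model per combination of defensible analytic choices and collect every estimate.

    # Hypothetical sketch of the enumeration step of a specification-curve analysis.
    # Each combination of covariate set and outlier rule is one "specification".
    import itertools
    import pandas as pd
    import statsmodels.formula.api as smf

    def run_specifications(df, dv, predictor, covariate_sets, outlier_rules):
        results = []
        for covs, (rule_name, keep) in itertools.product(covariate_sets, outlier_rules.items()):
            sample = df[keep(df)]                          # apply one outlier rule
            formula = f"{dv} ~ {predictor}" + "".join(f" + {c}" for c in covs)
            fit = smf.ols(formula, data=sample).fit()      # one defensible specification
            results.append({"covariates": covs, "outlier_rule": rule_name,
                            "estimate": fit.params[predictor],
                            "p_value": fit.pvalues[predictor]})
        # sorting by estimate gives the "curve" that is then plotted and tested
        return pd.DataFrame(results).sort_values("estimate").reset_index(drop=True)

    # Illustrative usage (assumed column names):
    # specs = run_specifications(data, "outcome", "treatment",
    #                            covariate_sets=[(), ("age",), ("age", "income")],
    #                            outlier_rules={"all": lambda d: d["outcome"].notna(),
    #                                           "trimmed": lambda d: d["outcome"] < d["outcome"].quantile(0.99)})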

Uri Simonsohn, Joseph Simmons, Leif D. Nelson (2015), Better P-curves: Making P-curve Analysis More Robust to Errors, Fraud, and Ambitious P-hacking, A Reply to Ulrich and Miller (2015), Journal of Experimental Psychology: General, 144 (December), pp. 1146-1152.

Abstract: When studies examine true effects, they generate right-skewed p-curves, distributions of statistically significant results with more low (.01s) than high (.04s) p-values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p-value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (1) p-curvers selecting the wrong p-values, (2) fake data, (3) honest errors, and (4) ambitiously p-hacked (beyond p<.05) results. We evaluate the impact of these common problems on the validity of p-curve analysis, and provide practical solutions that substantially increase its robustness.

Berkeley Dietvorst, Joseph Simmons, Cade Massey (2015), Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err, Journal of Experimental Psychology: General, 144 (February), pp. 114-126.

Abstract: Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2014), P-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results, Perspectives on Psychological Science, 9 (December), pp. 666-681.

Abstract: Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p-values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many Labs Replication project. We demonstrate that p-curve can arrive at inferences opposite that of existing tools by reanalyzing the meta-analysis of the “choice overload” literature.
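
As a rough illustration of the idea only (under simplifying assumptions; this is not the published estimator or its implementation), the sketch below searches, for a set of significant two-sample t-tests, for the effect size whose implied distribution of significant results best matches the observed one. The t-values, degrees of freedom, and cell size are hypothetical.

    # Hypothetical sketch of p-curve-based effect-size estimation for two-sample t-tests.
    import numpy as np
    from scipy import stats, optimize

    def loss(d, t_obs, df, n_per_cell, alpha=0.05):
        ncp = d * np.sqrt(n_per_cell / 2.0)            # noncentrality implied by effect size d
        t_crit = stats.t.ppf(1 - alpha / 2, df)        # two-tailed significance threshold
        # probability of a result at least as extreme as each t, conditional on significance
        pp = stats.nct.sf(t_obs, df, ncp) / stats.nct.sf(t_crit, df, ncp)
        # if d were the true effect, these conditional probabilities would be uniform;
        # measure the departure from uniformity with a Kolmogorov-Smirnov statistic
        return stats.kstest(pp, "uniform").statistic

    # Hypothetical inputs: significant t-values from four studies with 20 participants per cell.
    t_obs, df, n = np.array([2.3, 2.8, 2.1, 3.0]), 38, 20
    d_hat = optimize.minimize_scalar(loss, bounds=(0.0, 2.0), args=(t_obs, df, n),
                                     method="bounded").x
    print(round(d_hat, 2))   # effect-size estimate that best explains the significant results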

Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2014), P-Curve: A Key to the File Drawer, Journal of Experimental Psychology: General, 143 (April), pp. 534-547.

Abstract: Because scientists tend to report only studies (publication bias) or analyses (p-hacking) that “work”, readers must ask, “Are these effects true, or do they merely reflect selective reporting?” We introduce p-curve as a way to answer this question. P-curve is the distribution of statistically significant p-values for a set of studies (ps < .05). Because only true effects are expected to generate right-skewed p-curves, containing more low (.01s) than high (.04s) significant p-values, only right-skewed p-curves are diagnostic of evidential value. By telling us whether we can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file drawers of failed studies and analyses.
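
A minimal sketch of the basic object, assuming nothing beyond the definition above: bin a set of statistically significant p-values and inspect whether the mass piles up near .01 (right skew) rather than near .05. The example p-values are invented for illustration.

    # Hypothetical sketch: bin significant p-values to form a p-curve and inspect its skew.
    import numpy as np

    def p_curve(p_values, alpha=0.05):
        """Count significant p-values in the bins (0, .01], (.01, .02], ..., (.04, .05)."""
        ps = np.asarray(p_values, dtype=float)
        sig = ps[ps < alpha]                           # p-curve uses only significant results
        edges = [0.0, 0.01, 0.02, 0.03, 0.04, alpha]
        counts, _ = np.histogram(sig, bins=edges)
        labels = ["<.01", ".01-.02", ".02-.03", ".03-.04", ".04-.05"]
        return dict(zip(labels, counts.tolist()))

    # Invented example: more mass near .01 than near .05 would suggest evidential value;
    # a flat or left-skewed curve would not.
    print(p_curve([0.003, 0.011, 0.024, 0.041, 0.049, 0.008, 0.017, 0.062]))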

Past Courses

OIDD290 DECISION PROCESSES

This course is an intensive introduction to various scientific perspectives on the processes through which people make decisions. Perspectives covered include cognitive psychology of human problem-solving, judgment and choice, theories of rational judgment and decision, and the mathematical theory of games. Much of the material is technically rigorous. Prior or current enrollment in STAT 101 or the equivalent, although not required, is strongly recommended.

OIDD690 MANAGERIAL DECISION MAKING

The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

  • MBA Excellence in Teaching Award, 2014
  • MBA Excellence in Teaching Award, 2013
  • Winner of the Helen Kardon Moss Anvil Award, awarded to the one Wharton faculty member “who has exemplified outstanding teaching quality during the last year”, 2013
  • One of ten faculty nominated by the MBA student body for the Helen Kardon Moss Anvil Award, 2012
  • MBA Excellence in Teaching Award, 2012
  • Wharton Excellence in Teaching Award, Undergraduate Division, 2011

Knowledge @ Wharton

  • Why Humans Distrust Algorithms – and How That Can Change, Knowledge @ Wharton 02/13/2017
  • Confidence Games: Why People Don’t Trust Machines to Be Right, Knowledge @ Wharton 02/13/2015
