Uri Simonsohn
Professor of Operations, Information and Decisions at The Wharton School
Biography
Professor Simonsohn studies judgment, decision making, and methodological topics.
He is a reviewing editor for the journal Science, an associate editor of Management Science, and a consulting editor for the journal Perspectives on Psychological Science.
He teaches decision-making courses to undergraduate, MBA, and PhD students (OID290, OID690, OID900, and OID937).
He has published in psychology, management, marketing, and economics journals.
Leif D. Nelson, Joseph Simmons, Uri Simonsohn (2017), Psychology's Renaissance, Annual Review of Psychology, forthcoming.
Abstract: In 2010–2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call “psychology’s renaissance.” We begin by describing how psychology’s concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodological changes that psychologists have proposed and, in some cases, embraced. In describing how the renaissance has unfolded, we attempt to describe different points of view fairly but not neutrally, so as to identify the most promising paths forward. In so doing, we champion disclosure and preregistration, express skepticism about most statistical solutions to publication bias, take positions on the analysis and interpretation of replication failures, and contend that “meta-analytical thinking” increases the prevalence of false positives. Our general thesis is that the scientific practices of experimental psychologists have improved dramatically.
Joseph Simmons and Uri Simonsohn (2017), Power Posing: P-Curving the Evidence, Psychological Science, 28 (May), pp. 687-693.
Abstract: In a well-known article, Carney, Cuddy, and Yap (2010) documented the benefits of “power posing.” In their study, participants (N=42) who were randomly assigned to briefly adopt expansive, powerful postures sought more risk, had higher testosterone levels, and had lower cortisol levels than those assigned to adopt contractive, powerless postures. In their response to a failed replication by Ranehill et al. (2015), Carney, Cuddy, and Yap (2015) reviewed 33 successful studies investigating the effects of expansive vs. contractive posing, focusing on differences between these studies and the failed replication, to identify possible moderators that future studies could explore. But before spending valuable resources on that, it is useful to establish whether the literature that Carney et al. (2015) cited actually suggests that power posing is effective. In this paper we rely on p-curve analysis to answer the following question: Does the literature reviewed by Carney et al. (2015) suggest the existence of an effect once we account for selective reporting? We conclude not. The distribution of p-values from those 33 studies is indistinguishable from what is expected if (1) the average effect size were zero, and (2) selective reporting (of studies and/or analyses) were solely responsible for the significant effects that are published. Although more highly powered future research may find replicable evidence of the purported benefits of power posing (or unexpected detriments), the existing evidence is too weak to justify a search for moderators or to advocate for people to engage in power posing to better their lives.
Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2017), False-Positive Citations, Perspectives on Psychological Science, forthcoming.
Abstract: This invited paper describes how we came to write an article called "False-Positive Psychology."
Robert Mislavsky and Uri Simonsohn (Forthcoming), When Risk is Weird: Unexplained Transaction Features Lower Valuations.
Abstract: We define transactions as weird when they include unexplained features, that is, features not implicitly, explicitly, or self-evidently justified, and propose that people are averse to weird transactions. In six experiments, we show that risky options used in previous research paradigms often attained uncertainty by adding an unexplained transaction feature (e.g., purchasing a coin flip or lottery), and behavior that appears to reflect risk aversion could instead reflect an aversion to weird transactions. Specifically, willingness to pay drops just as much when adding risk to a transaction as when adding unexplained features. Holding transaction features constant, adding additional risk does not further reduce willingness to pay. We interpret our work as generalizing ambiguity aversion to riskless choice.
Robert Mislavsky, Berkeley Dietvorst, Uri Simonsohn (Under Review), Critical Condition: People Only Object to Corporate Experiments if They Object to a Condition.
Abstract: Why have companies faced a backlash for running experiments? Academics and pundits have argued that it is because the public finds corporate experimentation objectionable. In this paper we investigate “experiment aversion,” finding evidence that, if anything, experiments are rated more highly than the least acceptable policies that they contain. In six studies participants evaluated the acceptability of either corporate policy changes or of experiments testing those policy changes. When all policy changes were deemed acceptable, so was the experiment, even when it involved deception, unequal outcomes, and lack of consent. When a policy change was unacceptable, the experiment that included it was deemed less unacceptable than the policy itself. Experiments are not unpopular; unpopular policies are unpopular.
Uri Simonsohn, Joseph Simmons, Leif D. Nelson (Working), Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications.
Abstract: Empirical results often hinge on data analytic decisions that are simultaneously defensible, arbitrary, and motivated. To mitigate this problem we introduce Specification-Curve Analysis. This approach consists of three steps: (i) estimating the full set of theoretically justified, statistically valid, and non-redundant analytic specifications, (ii) displaying the results graphically in a manner that allows identifying which analytic decisions produce different results, and (iii) conducting statistical tests to determine whether the full set of results is inconsistent with the null hypothesis of no effect. We illustrate its use by applying it to three published findings. One proves robust, one weak, one not robust at all. Although it is impossible to eliminate subjectivity in data analysis, Specification-Curve Analysis minimizes the impact of subjectivity on the reporting of results, resulting in a more systematic, thorough, and objective presentation of the data.
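The three steps described in the abstract can be sketched on toy data. This is a hypothetical illustration of my own, not the authors' code: the dataset, variable names, and the tiny set of specifications are all assumptions, and a real application would enumerate far more specifications and include the inferential third step.

```python
# Hypothetical sketch of Specification-Curve Analysis on toy data.
# Steps: (i) enumerate specifications, (ii) estimate each, (iii) inspect the curve.
import itertools
import math
import random
import statistics

random.seed(0)
# Toy data: (group, outcome) pairs; treatment (group 1) shifts the outcome by ~0.5.
data = [(g, random.gauss(0.5 * g, 1)) for g in (0, 1) for _ in range(200)]

def estimate(exclude_outliers, log_transform):
    """One specification: treatment-control mean difference under the given analytic choices."""
    rows = data
    if exclude_outliers:                     # analytic choice 1: outlier rule
        rows = [(g, y) for g, y in rows if abs(y) < 2.5]
    if log_transform:                        # analytic choice 2: DV transform
        shift = min(y for _, y in rows)      # shift so log1p's argument is >= 0
        rows = [(g, math.log1p(y - shift)) for g, y in rows]
    treated = [y for g, y in rows if g == 1]
    control = [y for g, y in rows if g == 0]
    return statistics.mean(treated) - statistics.mean(control)

# (i) The full set of specifications (here only 2 x 2 = 4).
specs = list(itertools.product([False, True], repeat=2))
# (ii) One estimate per specification, sorted for plotting as a "curve".
curve = sorted(estimate(*spec) for spec in specs)
# (iii) A real analysis would then test whether the whole curve is
# inconsistent with the null hypothesis of no effect.
```

Plotting the sorted estimates against the analytic choices that produced each one is what makes it easy to see which decisions drive the results.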
Uri Simonsohn, Joseph Simmons, Leif D. Nelson (2015), Better P-Curves: Making P-Curve Analysis More Robust to Errors, Fraud, and Ambitious P-Hacking, A Reply to Ulrich and Miller (2015), Journal of Experimental Psychology: General, 144 (December), pp. 1146-1152.
Abstract: When studies examine true effects, they generate right-skewed p-curves, distributions of statistically significant results with more low (.01s) than high (.04s) p-values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p-value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (1) p-curvers selecting the wrong p-values, (2) fake data, (3) honest errors, and (4) ambitiously p-hacked (beyond p<.05) results. We evaluate the impact of these common problems on the validity of p-curve analysis, and provide practical solutions that substantially increase its robustness.
Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2014), P-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results, Perspectives on Psychological Science, 9 (December), pp. 666-681.
Abstract: Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p-values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many Labs Replication project. We demonstrate that p-curve can arrive at inferences opposite those of existing tools by reanalyzing the meta-analysis of the “choice overload” literature.
Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2014), P-Curve: A Key to the File Drawer, Journal of Experimental Psychology: General, 143 (April), pp. 534-547.
Abstract: Because scientists tend to report only studies (publication bias) or analyses (p-hacking) that “work”, readers must ask, “Are these effects true, or do they merely reflect selective reporting?” We introduce p-curve as a way to answer this question. P-curve is the distribution of statistically significant p-values for a set of studies (ps < .05). Because only true effects are expected to generate right-skewed p-curves – containing more low (.01s) than high (.04s) significant p-values – only right-skewed p-curves are diagnostic of evidential value. By telling us whether we can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file drawers of failed studies and analyses.
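The diagnostic at the heart of p-curve, that only true effects produce right-skewed distributions of significant p-values, can be illustrated with a short simulation. This is a sketch of my own, not the authors' code; the two-cell design, the known unit variance, and all parameter choices are assumptions.

```python
# Illustrative sketch (not the authors' code): why true effects yield
# right-skewed p-curves while null effects yield flat ones.
# Assumptions: two-cell studies with known population sd of 1, so a z-test is exact.
import math
import random
import statistics

random.seed(1)

def one_study(effect, n=20):
    """Run one two-cell study; return the two-sided p-value of a z-test."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    z = (statistics.mean(b) - statistics.mean(a)) / math.sqrt(2 / n)
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

def p_curve(effect, studies=20000):
    """Among significant results, the share of low (p < .01) and high (p > .04) p-values."""
    sig = [p for p in (one_study(effect) for _ in range(studies)) if p < .05]
    low = sum(p < .01 for p in sig) / len(sig)
    high = sum(p > .04 for p in sig) / len(sig)
    return low, high

# A true effect produces far more .01s than .04s (right skew);
# a null effect spreads significant p-values evenly (~20% per .01-wide bin).
```

Under the null, selective reporting alone fills the significant bins, and it fills them uniformly; that flat shape is what p-curve analysis tests against.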
Leif D. Nelson, Joseph Simmons, Uri Simonsohn (2012), Let's Publish Fewer Papers, Psychological Inquiry, 23 (3), pp. 291-293.
Past Courses
OIDD290 DECISION PROCESSES
This course is an intensive introduction to various scientific perspectives on the processes through which people make decisions. Perspectives covered include cognitive psychology of human problem-solving, judgment and choice, theories of rational judgment and decision, and the mathematical theory of games. Much of the material is technically rigorous. Prior or current enrollment in STAT 101 or the equivalent, although not required, is strongly recommended.
OIDD900 FOUNDATIONS OF DEC PROC
The course is an introduction to research on normative, descriptive, and prescriptive models of judgment and choice under uncertainty. We will be studying the underlying theory of decision processes as well as applications in individual, group, and organizational choice. Guest speakers will relate the concepts of decision processes and behavioral economics to applied problems in their area of expertise. As part of the course there will be a theoretical or empirical term paper on the application of decision processes to each student's particular area of interest.
OIDD937 METHODS STUMBLERS
This PhD-level course is for students who have already completed at least a year of basic stats/methods training. It assumes students already received a solid theoretical foundation and seeks to pragmatically bridge the gap between standard textbook coverage of methodological and statistical issues and the complexities of everyday behavioral science research. This course focuses on issues that (i) behavioral researchers are likely to encounter as they conduct research, but (ii) may struggle to figure out independently by consulting a textbook or published article.
Wharton Excellence in Teaching Award, Undergraduate Division, 2011
Wharton Excellence in Teaching Award, Undergraduate Division, 2009
Knowledge @ Wharton
- Why Being the Last Interview of the Day Could Crush Your Chances, Knowledge @ Wharton, 02/13/2013
- Pseudo Science: How Lack of Disclosure in Academic Research Can Damage Credibility, Knowledge @ Wharton, 06/20/2012
- Marketing Crash Course: It’s Not All Bad News When Consumers Collide with Wrong Information, Knowledge @ Wharton, 06/23/2010
- Predictions and Perceptions: Downloading Wisdom from Online Crowds, Knowledge @ Wharton, 08/08/2007
Videos
Uri Simonsohn, SPSP 2014 Session on Defining Research Integrity