SPOTLIGHT 1. Take a deep breath, everybody.
Great stuff this week reminding us that a finding doesn't necessarily answer a meaningful question. Let's revive the practice of counting to 10 before posting remarkable, data-driven insights… just in case.

This sums up everything that's right, and wrong, with data. In a recent discussion of some impressive accomplishments in sports analytics, a string of prior successes leads a data scientist to claim: “The bottom line is, if you have enough data, you can come pretty close to predicting almost anything.” Hmmm. This sounds like someone who has yet to be punched in the face by reality. Thanks to Mara Averick (@dataandme).

Sitting doesn't typically kill people. On the KDnuggets blog, William Schmarzo remarks on the critical-thinking part of the equation: for instance, the kerfuffle over evidence that people who sit for most of the day are 54% more likely to die of a heart attack. Even brief contemplation, though, raises questions about confounding variables such as exercise, diet, and age.

Basic stats – Common sense = Dangerous conclusions viewed as fact
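The sitting study is a textbook confounding story, and it takes only a few lines to see how one works. In this toy simulation (every number is invented for illustration), heart-attack risk depends only on age, but older people also sit more, so a naive comparison makes sitting look dangerous until you adjust for age:

```python
import random

random.seed(42)

# Hypothetical toy model (all parameters invented): attack risk is driven
# entirely by age, while age also makes people more likely to sit a lot.
def simulate(n=100_000):
    rows = []
    for _ in range(n):
        age = random.randint(25, 75)
        sits_a_lot = random.random() < (age - 25) / 50        # confounder: older -> more sitting
        attack = random.random() < 0.02 + 0.002 * (age - 25)  # risk depends on age only
        rows.append((age, sits_a_lot, attack))
    return rows

def rate(group):
    return sum(attack for (_, _, attack) in group) / len(group)

rows = simulate()
sitters = [r for r in rows if r[1]]
movers = [r for r in rows if not r[1]]
overall_gap = rate(sitters) - rate(movers)  # large: sitting "predicts" attacks

# Stratify by age: among the under-40s, the sitting gap mostly vanishes.
young_sitters = [r for r in sitters if r[0] < 40]
young_movers = [r for r in movers if r[0] < 40]
stratified_gap = rate(young_sitters) - rate(young_movers)

print(f"naive gap: {overall_gap:.3f}, age-adjusted gap: {stratified_gap:.3f}")
```

The naive comparison shows a sizable risk gap even though sitting has zero causal effect in the model; holding age roughly constant shrinks it to nearly nothing. That is the "common sense" term in the equation above.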

P-hacking is resilient. On Data Colada, Uri Simonsohn explains why P-Hacked Hypotheses are Deceivingly Robust. Direct, or conceptual, replications are needed now more than ever.
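One reason p-hacked findings keep appearing is simple multiple comparisons, which a short simulation makes concrete. In this sketch (all parameters invented), each "study" compares treatment and control groups on 20 pure-noise metrics; testing each at roughly p < .05 means well over half of these null studies still "find" an effect:

```python
import random

random.seed(1)

def one_study(n_metrics=20, n_subjects=200, threshold=1.96):
    """Run one null 'study': compare group means on many pure-noise metrics.

    Returns True if at least one metric clears a ~p < .05 two-sided test.
    """
    # Standard error of the difference of two means of uniform(0, 1) draws:
    # each mean has variance (1/12) / n_subjects.
    se = (2 * (1 / 12) / n_subjects) ** 0.5
    for _ in range(n_metrics):
        treat = sum(random.random() for _ in range(n_subjects)) / n_subjects
        ctrl = sum(random.random() for _ in range(n_subjects)) / n_subjects
        if abs(treat - ctrl) / se > threshold:
            return True  # a "significant" difference that is pure noise
    return False

studies = 1000
false_hits = sum(one_study() for _ in range(studies))
print(f"{false_hits / studies:.0%} of null studies 'found' an effect")
```

With 20 independent tests at the 5% level, the chance of at least one false positive is about 1 − 0.95²⁰ ≈ 64%, which is what the simulation recovers. Direct replication catches this because the lucky metric rarely wins twice.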

2. Science about science.
The world needs more meta-research. What's the best way to fund research? How can research impact be optimized, and how can that impact be measured? These are the questions being addressed at the International School on Research Impact Assessment, founded by RAND Europe, King's College London, and others. Registration is open for the autumn session, September 19-23 in Melbourne.

Evidence map by Bernadette Wright

3. Three Ways of Getting to Evidence-Based Policy.
In the Stanford Social Innovation Review, Bernadette Wright (@MeaningflEvdenc) does a nice job of describing three ideologies for gathering evidence to inform policy.

  1. Randomista: Views randomized experiments and quasi-experimental research designs as the only reliable evidence for choosing programs.
  2. Explainista: Believes useful evidence needs to provide trustworthy data and strong explanation. This often means synthesizing existing information from reliable sources.
  3. Mapista: Creates a knowledge map of a policy, program, or issue. Visualizes the understanding developed in each study, where studies agree, and where each adds new understanding.
