14 February 2018

We've moved: INSIGHTS WEEKLY posts are at UglyResearch.com

During our launch of a new marketplace for Data Storytellers, we moved INSIGHTS WEEKLY to the Ugly Research home page. Scroll down to see our latest posts on how data science is changing business, or click on Research/Insights Weekly newsletter. While you're there, take a look at emerging trends for Analytics Translators and Data Storytelling.

31 January 2018

Cognitive bias in algorithms, baseball analytics denied, and soft skills ROI.

Mexico baseball analytics - New York Times
1. Recognize bias → Create better algorithms. Can we humans learn to recognize our biases before we turn the machines loose and fully automate them? Here's a sample of recent caveats about decision-making failures: while improving some lives, we're making others worse.

Yikes. From HBR, Hiring algorithms are not neutral. If you set up your resume-screening algorithm to duplicate a particular employee or team, you’re probably breaking the rules of ethics and the law, too.

Our biases are well established, yet we continue to repeat our mistakes. Amos Tversky and Daniel Kahneman brilliantly challenged traditional economic theory while producing evidence of our decision biases. Their early research identified three key, potentially flawed heuristics (mental shortcuts) commonly employed in decision-making: representativeness, availability, and anchoring/adjustment. The implications for today's software development must not be overlooked. For a recap, view our founder's recent Papers We Love talk on behavioral economics and bias in software design.

Algorithms might be making the poor even less equal. In Automating Inequality, Virginia Eubanks argues that the poor "are the testing ground for new technology that increases inequality," and that our "moralistic view of poverty... has been wrapped into today's automated and predictive decision-making tools." These algorithms can make it harder for people to get services while forcing them to deal with an invasive process of personal data collection. As examples, she profiles a Medicaid application process in Indiana, homeless services in Los Angeles, and child protective services in Pittsburgh.

Prison-sentencing algorithms are also taking fire. “Imagine you’re a judge, and you have a commercial piece of software that says we have big data, and it says this person is high risk...now imagine I tell you I asked 10 people online the same question, and this is what they said. You’d weigh those things differently.” [Wired article] Dartmouth researchers claim that a popular risk-assessment algorithm predicts recidivism about as well as a random online poll. Science Friday also covered similar issues with prison sentencing algorithms. 

Tversky & Kahneman - Papers We Love Denver talk

2. Lack of acceptance → Analytics denied. Not every baseball manager is enamored with the explosion of available analytics. Great New York Times story about the extreme reluctance of a Mexican league team to change its conventional decision-making. Baseball enthusiast and Johns Hopkins computer science professor Anton Dahbura learned the hard way that The Analytics Guy Failed to Compute One Thing: How to Be Accepted in Mexico. Said a team VP: "It's completely new down here so, yeah, it's been a bit of a culture clash."

3. Soft skills training → 250% ROI. Encouraging results for the value of 'soft' skills training for workers on the factory floor. (When will we stop referring to these crucial, hard-to-master capabilities as soft skills?) Namrata Kala and colleagues ran a randomized controlled trial in five Bangalore factories, and found substantial returns [pdf here] from a 12-month soft skills training program focused on communication, decision-making, time and stress management, and financial literacy. ROI was measured as boosts in worker productivity, faster completion of complex tasks, short-term gains in attendance, and increased employee retention.
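
Back-of-the-envelope, here's how an ROI figure like that is computed - a minimal sketch with hypothetical cost and benefit numbers, not figures from the Kala study:

```python
# Minimal ROI sketch with hypothetical numbers -- not figures from the study.
def roi(total_benefits: float, total_costs: float) -> float:
    """Return on investment as a percentage: (benefits - costs) / costs * 100."""
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical example: $35,000 in productivity/retention gains on a $10,000 program.
print(roi(total_benefits=35_000, total_costs=10_000))  # 250.0
```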

4. Gather evidence → Retain employees. On Science for Work, Iulia Alina Cioca explains The Thorny Issue of Employee Turnover: An Evidence-Based Approach.

24 January 2018

Long-term thinking, systems of intelligence, and the dangers of sloppy evidence.

Vending machine photo

1. Long view → Better financial performance. A McKinsey Global Institute team sought hard evidence supporting their observation that “Companies deliver superior results when executives manage for long-term value creation,” resisting pressure to focus on quarterly earnings (think Amazon or Unilever). So MGI developed the corporate horizon index, or CHI, to compare performance by firms exhibiting what they call long-termism vs. short-termism.

Findings are relevant to executive decision makers: “Companies that operate with a true long-term mindset have consistently outperformed their industry peers since 2001 across almost every financial measure that matters.” Average revenue and earnings growth were 47% and 36% higher, respectively, and market capitalization also grew faster. Yet short-term thinking appears to be on the rise: “We can all see what appear to be the results of excessive short-termism in the form of record levels of stock buybacks in the U.S. and historic lows in new capital investment.”

CHI - McKinsey long-termism chart

Developing the CHI required systematic measurement of 5 indicators for 615 large and mid-cap US public firms. For example, “investment” was evaluated based on the ratio of capital expenditures to depreciation. Read about the methodology in Where companies with a long-term view outperform their peers by Dominic Barton et al.
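
For illustration only, here's a tiny sketch of how an 'investment' indicator like that might be computed - the field names and the 1.0 threshold are our assumptions, not MGI's actual CHI methodology:

```python
# Illustrative sketch only: compute a capex-to-depreciation "investment" indicator
# for hypothetical firms. Field names and the 1.0 threshold are assumptions,
# not MGI's actual CHI methodology.
firms = [
    {"name": "FirmA", "capex": 120.0, "depreciation": 80.0},
    {"name": "FirmB", "capex": 60.0,  "depreciation": 75.0},
]

for firm in firms:
    ratio = firm["capex"] / firm["depreciation"]
    leaning = "long-term" if ratio > 1.0 else "short-term"
    print(f"{firm['name']}: capex/depreciation = {ratio:.2f} ({leaning} leaning)")
```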

Additional reading. The full research report, Measuring the economic impact of short-termism, is available as a pdf. See a recap of these insights in the Harvard Business Review: Finally, Evidence That Managing for the Long Term Pays Off.

2. Analytics maturity → Systems of intelligence. The advanced analytics platform is dead, long live the advanced analytics platform. On Data Science Central, William Vorhies has a nice writeup about how machine learning and data science are evolving. In Data Science is Changing and Data Scientists will Need to Change Too, he explains why technology vendors must focus on an end-user or customer problem: Show them the evidence, not the sausage-making.

System of Intelligence

Most people don't want or need to know the details of the "invisible secret sauce middle layer." Experts say the "next movement will see the advanced analytic platform disappear into an integrated enterprise stack as the critical middle System of Intelligence.... Suddenly Systems of Intelligence is on everyone's tongue as the next great generational shift in enterprise infrastructure, the great pivot in the ML platform revolution." Data scientists must feed the Systems of Engagement, where people consume insights and findings.

3. Sloppy evidence → Rethinking the clearinghouse

Patrick Lester of the Social Innovation Research Center (@SIRC_tweets) examines the recent 'evidence-based' crisis in Canary in a Coal Mine? SAMHSA's Clearinghouse Signals Larger Threat to Evidence-based Policy.

First we heard the US government prohibited use of the term 'evidence-based'. But there's lots more going on: The Substance Abuse and Mental Health Services Administration (SAMHSA) caused a stir with concerns about the validity of evidence recommended by its clearinghouse. The administration revoked the contract for the National Registry of Evidence-based Programs and Practices, which reviews studies of mental health and drug treatment programs. An independent review highlighted substantial problems with clearinghouse ratings, including potential conflicts of interest. One reviewer of the 113 newly listed programs found that more than 50% were approved on the basis of a single published article, non-peer-reviewed online report, or unpublished report. Plus, many of the studies had design flaws such as small samples and brief follow-up. Alas, this casts a shadow over sanctioned, evidence-based policymaking.

4. Strong perceptions → Strong placebo effect

The excellent Knowable Magazine did a piece on the placebo effect and imagination. Many studies of placebo effects show them to be strongest in conditions where perceptions are key, such as pain, anxiety and depression. “American anesthesiologist Henry K. Beecher observed that some wounded men from the battlefields of World War II often fared well without morphine.” Thanks to @Koenfucius.

03 August 2017

Underwriters + algorithms, avoiding bad choices, and evidence for rare illness.

Bad Choices book cover

1. Underwriters + algorithms = Best of both worlds.
We hear so much about machine automation replacing humans. But several promising applications are designed to supplement complex human knowledge and guide decisions, not replace them: Think primary care physicians, policy makers, or underwriters. Leslie Scism writes in the Wall Street Journal that AIG “pairs its models with its underwriters. The approach reflects the company’s belief that human judgment is still needed in sizing up most of the midsize to large businesses that it insures.” See Insurance: Where Humans Still Rule Over Machines [paywall] or the podcast Insurance Rates Set by ... Machine Intelligence?

Who wants to be called a flat liner? Does this setup compel people to make changes to algorithmic findings - necessary or not - so their value/contributions are visible? Scism says “AIG even has a nickname for underwriters who keep the same price as the model every time: ‘flat liners.’” This observation is consistent with research we covered last week, showing that people are more comfortable with algorithms they can tweak to reflect their own methods.

AIG “analysts and executives say algorithms work well for standardized policies, such as for homes, cars and small businesses. Data scientists can feed millions of claims into computers to find patterns, and the risks are similar enough that a premium rate spit out by the model can be trusted.” On the human side, analytics teams work with AIG decision makers to foster more methodical, evidence-based decision making, as described in the excellent Harvard Business Review piece How AIG Moved Toward Evidence-Based Decision Making.


2. Another gem from Ali Almossawi.
An Illustrated Book of Bad Arguments was a grass-roots project that blossomed into a stellar book about logical fallacy and barriers to successful, evidence-based decisions. Now Ali Almossawi brings us Bad Choices: How Algorithms Can Help You Think Smarter and Live Happier.

It’s a superb example of explaining complex concepts in simple language. For instance, Chapter 7 on ‘Update that Status’ discusses how crafting a succinct Tweet draws on ideas from data compression. Granted, not everyone wants to understand algorithms - but Bad Choices illustrates useful ways to think methodically, and sort through evidence to solve problems more creatively. From the publisher: “With Bad Choices, Ali Almossawi presents twelve scenes from everyday life that help demonstrate and demystify the fundamental algorithms that drive computer science, bringing these seemingly elusive concepts into the understandable realms of the everyday.”
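
To make the compression analogy concrete, here's a tiny run-length-encoding sketch - our own illustration, not an example from the book - showing the same instinct behind trimming a tweet: squeeze out repetition, keep the signal:

```python
# Tiny run-length encoding sketch -- our illustration, not an example from Bad Choices.
from itertools import groupby

def run_length_encode(text: str) -> str:
    """Collapse runs of repeated characters, e.g. 'oooo' -> 'o4'."""
    return "".join(
        ch if n == 1 else f"{ch}{n}"
        for ch, n in ((ch, len(list(group))) for ch, group in groupby(text))
    )

print(run_length_encode("noooo waaaay"))  # 'no4 wa4y'
```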


3. Value guidelines adjusted for novel treatment of rare disease.
Like it or not, oftentimes the assigned “value” of a health treatment depends on how much it costs, compared to how much benefit it provides. Healthcare, time, and money are scarce resources, and payers must balance effectiveness, ethics, and equity.

Guidelines for assessing value are useful when comparing alternative treatments for common diseases. But they fail when considering an emerging treatment or a small patient population suffering from a rare condition. ICER, the Institute for Clinical and Economic Review, has developed a value assessment framework that’s being widely adopted. However, acknowledging the need for more flexibility, ICER has proposed a Value Assessment Framework for Treatments That Represent a Potential Major Advance for Serious Ultra-Rare Conditions.

In a request for comments, ICER recognizes the challenges of generating evidence for rare treatments, including the difficulty of conducting randomized controlled trials, and the need to validate surrogate outcome measures. “They intend to calculate a value-based price benchmark for these treatments using the standard range from $100,000 to $150,000 per QALY [quality adjusted life year], but will [acknowledge] that decision-makers... often give special weighting to other benefits and to contextual considerations that lead to coverage and funding decisions at higher prices, and thus higher cost-effectiveness ratios, than applied to decisions about other treatments.”
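
As a back-of-the-envelope illustration of how a QALY threshold translates into a value-based price benchmark, consider this sketch; the QALY gain is a made-up number, and ICER's actual calculations are far more involved:

```python
# Hypothetical sketch of a value-based price benchmark from a QALY threshold.
# The QALY gain below is made up; ICER's actual framework is far more involved.
def value_based_price(qaly_gain: float, threshold_per_qaly: float) -> float:
    """Price at which cost per QALY gained equals the willingness-to-pay threshold."""
    return qaly_gain * threshold_per_qaly

for threshold in (100_000, 150_000):
    print(f"${value_based_price(qaly_gain=0.5, threshold_per_qaly=threshold):,.0f}")
# 0.5 extra QALYs at $100k-$150k per QALY -> benchmark price of $50,000-$75,000
```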

28 July 2017

Algorithm reluctance, home-visit showdown, and the problem with wearables.

Kitty with laptop

Hello there. We had to step away from the keyboard for a while, but we're back. And yikes, evidence-based decisions seem to be taking on water. Decision makers still resist handing the car keys to others, even when machines make better predictions. And government agencies continue to, ahem, struggle with making evidence-based policy. — Tracy Altman, editor


1. Evidence-based home visit program loses funding.
The evidence base has developed over 30+ years. Advocates for home visit programs - where professionals visit at-risk families - cite immediate and long-term benefits for parents and for children. Things like positive health-related behavior, fewer arrests, community ties, lower substance abuse [Long-term Effects of Nurse Home Visitation on Children's Criminal and Antisocial Behavior: 15-Year Follow-up of a Randomized Controlled Trial (JAMA, 1998)]. Or Nobel Laureate-led findings that "Every dollar spent on high-quality, birth-to-five programs for disadvantaged children delivers a 13% per annum return on investment" [Research Summary: The Lifecycle Benefits of an Influential Early Childhood Program (2016)].

The Nurse-Family Partnership (@NFP_nursefamily), a well-known provider of home visit programs, is getting the word out in the New York Times and on NPR.

AEI funnel chart

Yet this bipartisan, evidence-based policy is now defunded. @Jyebreck explains that advocates are "staring down a Sept. 30 deadline.... The Maternal, Infant and Early Childhood Home Visiting program, or MIECHV, supports paying for trained counselors or medical professionals" who establish long-term relationships with at-risk families.

It's worth noting that the evidence on childhood programs is often conflated. AEI's Katharine Stevens and Elizabeth English break it down in their excellent, deep-dive report Does Pre-K Work? They illustrate the dangers of drawing sweeping conclusions about research findings, especially when mixing studies about infants with studies of three- or four-year-olds. And home visit advocates emphasize that disadvantage begins in utero and infancy, making a standard pre-K program inherently inadequate. This issue is complex, and Congress' defunding decision will only hurt efforts to gather evidence about how best to level the playing field for children.

AEI Does Pre-K Work

2. Why do people reject algorithms?
Researchers want to understand our ‘irrational’ responses to algorithmic findings. Why do we resist change, despite evidence that a machine can reliably beat human judgment? Berkeley J. Dietvorst (great name, wasn’t he in Hunger Games?) comments in the MIT Sloan Management Review that “What I find so interesting is that it’s not limited to comparing human and algorithmic judgment; it’s my current method versus a new method, irrelevant of whether that new method is human or technology.”

Job-security concerns might help explain this reluctance. And Dietvorst has studied another cause: We lose trust in an algorithm when we see its imperfections. This hesitation extends to cases where an ‘imperfect’ algorithm remains demonstrably capable of outpredicting us. On the bright side, he found that “people were substantially more willing to use algorithms when they could tweak them, even if just a tiny amount”. Dietvorst is inspired by the work of Robyn Dawes, a pioneering behavioral decision scientist who investigated the Man vs. Machine dilemma. Dawes famously developed a simple model for predicting how students will rank against one another, which significantly outperformed admissions officers. Yet both then and now, humans don’t like to let go of the wheel.
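
To give a feel for what a Dawes-style 'simple model' looks like, here's a minimal sketch of an equal-weight (improper) linear model - the predictors and data are invented for illustration, not Dawes's actual study:

```python
# Minimal sketch of a Dawes-style "improper" linear model: standardize each predictor
# and weight them equally. Predictor names and values are invented for illustration.
from statistics import mean, stdev

applicants = {
    "A": {"gpa": 3.9, "test_score": 720, "essay_rating": 3.0},
    "B": {"gpa": 3.4, "test_score": 780, "essay_rating": 4.5},
    "C": {"gpa": 3.6, "test_score": 690, "essay_rating": 4.0},
}

predictors = ["gpa", "test_score", "essay_rating"]
stats = {p: (mean(a[p] for a in applicants.values()),
             stdev(a[p] for a in applicants.values())) for p in predictors}

def equal_weight_score(record: dict) -> float:
    """Sum of z-scores across predictors -- no fitted weights, just equal ones."""
    return sum((record[p] - stats[p][0]) / stats[p][1] for p in predictors)

ranking = sorted(applicants, key=lambda k: equal_weight_score(applicants[k]), reverse=True)
print(ranking)  # applicants ordered by the equal-weight model
```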

Wearables Graveyard by Aaron Parecki

3. Massive data still does not equal evidence.
For those who doubted the viability of consumer health wearables and the notion of the quantified self, there's plenty of validation: Jawbone liquidated, Intel dropped out, and Fitbit struggles. People need a compelling reason to wear one (such as fitness coaching, or condition diagnosis and treatment).

Rather than a data stream, we need hard evidence about something actionable: Evidence is "the available body of facts or information indicating whether a belief or proposition is true or valid" (Google: define evidence). To be sure, some consumers enjoy wearing a device that tracks sleep patterns or spots out-of-normal-range values - but that market is proving to be limited.

But Rock Health points to positive developments, too. Some wearables demonstrate specific value: Clinical use cases are emerging, including assistance for the blind.

Photo credit: Kitty on Laptop by Ryan Forsythe, CC BY-SA 2.0 via Wikimedia Commons.
Photo credit: Wearables Graveyard by Aaron Parecki on Flickr.

28 December 2016

Valuing patient perspective, moneyball for tenure, visualizing education impacts.

Patient value
1. Formalized decision process → Conflict about criteria

It's usually a good idea to establish a methodology for making repeatable, complex decisions. But inevitably you'll have to allow wiggle room for the unquantifiable or the unexpected; leaving this gray area exposes you to criticism that it's not a rigorous methodology after all. Other sources of criticism are the weighting and the calculations applied in your decision formulas - and the extent of transparency provided.

How do you set priorities? In healthcare, how do you decide who to treat, at what cost? To formalize the process of choosing among options, several groups have created so-called value frameworks for assessing medical treatments - though not without criticism. Recently Ugly Research co-authored a post summarizing industry reaction to the ICER value framework developed by the Institute for Clinical and Economic Review. Incorporation of patient preferences (or lack thereof) is a hot topic of discussion.

To address this proactively, Faster Cures has led creation of the Patient Perspective Value Framework to inform other frameworks about what's important to patients (cost? impact on daily life? outcomes?). They're asking for comments on their draft report; comment using this questionnaire.

2. Analytics → Better tenure decisions
New analysis in the MIT Sloan Management Review observes "Using analytics to improve hiring decisions has transformed industries from baseball to investment banking. So why are tenure decisions for professors still made the old-fashioned way?"

Ironically, academia often proves to be one of the last fields to adopt change. Erik Brynjolfsson and John Silberholz explain that "Tenure decisions for the scholars of computer science, economics, and statistics — the very pioneers of quantitative metrics and predictive analytics — are often insulated from these tools." The authors say "data-driven models can significantly improve decisions for academic and financial committees. In fact, the scholars recommended for tenure by our model had better future research records, on average, than those who were actually granted tenure by the tenure committees at top institutions."

Education evidence

3. Visuals of research findings → Useful evidence
The UK Sutton Trust-EEF Teaching and Learning Toolkit is an accessible summary of educational research. The purpose is to help teachers and schools more easily decide how to apply resources to improve outcomes for disadvantaged students. Research findings on selected topics are nicely visualized in terms of implementation cost, strength of supporting evidence, and the average impact on student attainment.

4. Absence of patterns → File-drawer problem
We're only human. We want to see patterns, and are often guilty of 'seeing' patterns that really aren't there. So it's no surprise we're uninterested in research that lacks significance, and disregard findings revealing no discernible pattern. When we stash away projects like this, it's called the file-drawer problem: the unpublished evidence could have been valuable to others who might otherwise have pursued a similar line of investigation. But Data Colada says the file-drawer problem is unfixable, and that's OK.

5. Optimal stopping algorithm → Practical advice?
In a writeup of Algorithms to Live By (the book by Brian Christian and Tom Griffiths), Stewart Brand describes an innovative way to help us make complex decisions. "Deciding when to stop your quest for the ideal apartment, or ideal spouse, depends entirely on how long you expect to be looking.... [Y]ou keep looking and keep finding new bests, though ever less frequently, and you start to wonder if maybe you refused the very best you'll ever find. And the search is wearing you down. When should you take the leap and look no further?"

Optimal Stopping is a mathematical concept for optimizing a choice, such as making the right hire or landing the right job. Brand says "The answer from computer science is precise: 37% of the way through your search period." The question is, how can people translate this concept into practical steps guiding real decisions? And how can we apply it while we live with the consequences?
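
For the curious, here's a small simulation of the 37% stopping rule - our own illustration: look at the first 37% of options without committing, then take the first one that beats everything seen so far.

```python
# Simulation sketch of the 37% (1/e) stopping rule for the classic secretary problem.
import random

def pick_with_stopping_rule(candidates: list[float], look_fraction: float = 0.37) -> float:
    """Reject the first `look_fraction` of candidates, then take the first one
    better than all of those; settle for the last candidate if none is."""
    cutoff = int(len(candidates) * look_fraction)
    benchmark = max(candidates[:cutoff]) if cutoff else float("-inf")
    for score in candidates[cutoff:]:
        if score > benchmark:
            return score
    return candidates[-1]

trials = 10_000
wins = 0
for _ in range(trials):
    pool = [random.random() for _ in range(100)]
    if pick_with_stopping_rule(pool) == max(pool):
        wins += 1
print(f"Picked the single best candidate in {wins / trials:.0%} of trials")  # roughly 37%
```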

14 December 2016

Choices, policy, and evidence-based investment.

Bad Arguments book cover

1. Bad Arguments → Bad Choices
Great news. There will be a follow-on to the excellent Bad Arguments book by @alialmossawi. The book of Bad Choices will be released this April by major publishers. You can preorder now.

2. Evidence-based decisions → Effective policy outcomes
The conservative think tank Heritage Foundation is advocating for evidence-based decisions in the Trump administration. Their recommendations include resurrection of PART (the Program Assessment Rating Tool) from the George W. Bush era, which ranked federal programs according to effectiveness. "Blueprint for a New Administration offers specific steps that the new President and the top officers of all 15 cabinet-level departments and six key executive agencies can take to implement the long-term policy visions reflected in Blueprint for Reform." Read a nice summary here by Patrick Lester at the Social Innovation Research Center (@SIRC_tweets).

Pharmagellan

3. Pioneer drugs → Investment value
"Why do pharma firms sometimes prioritize 'me-too' R&D projects over high-risk, high-reward 'pioneer' programs?" asks Frank David at Pharmagellan (@Frank_S_David). "[M]any pharma financial models assume first-in-class drugs will gain commercial traction more slowly than 'followers.' The problem is that when a drug’s projected revenues are delayed in a financial forecast, this lowers its net present value – which can torpedo the already tenuous investment case for a risky, innovative R&D program." Their research suggests that pioneer drugs see peak sales around 6 years, similar to followers: "Our finding that pioneer drugs are adopted no more slowly than me-too ones could help level the economic playing field and make riskier, but often higher-impact, R&D programs more attractive to executives and investors."

Details appear in the Nature Reviews article, Drug launch curves in the modern era. Pharmagellan will soon release a book on biotech financial modeling.

4. Unrealistic expectations → Questioning 'evidence-based medicine'
As we've noted before, @EvidenceLive has a manifesto addressing how to make healthcare decisions, and how to communicate evidence. The online comments are telling: Evidence-based medicine is perhaps more of a concept than a practical thing. The spot-on @trishgreenhalgh says "The world is messy. There is no view from nowhere, no perspective that is free from bias."

Evidence & Insights Calendar.

Jan 23-25, London: Advanced Pharma Analytics 2017. Spans topics from machine learning to drug discovery, real-world evidence, and commercial decision making.

Feb 1-2, San Francisco. Advanced Analytics for Clinical Data 2017. All about accelerating clinical R&D with data-driven decision making for drug development.

11 November 2016

Building trust with evidence-based insights.

Trust

This week we examine how executives can more fully grasp complex evidence/analysis affecting their outcomes - and how analytics professionals can better communicate these findings to executives. Better performance and more trust are the payoffs.

1. Show how A → B. Our new guide to Promoting Evidence-Based Insights explains how to engage stakeholders with a data value story. Shape content around four essential elements: Top-line, evidence-based, bite-size, and reusable. It's a suitable approach whether you're in marketing, R&D, analytics, or advocacy.

No knowledge salad. To avoid tl;dr or MEGO (My Eyes Glaze Over), be sure to emphasize insights that matter to stakeholders. Explicitly connect specific actions with important outcomes, identify your methods, and provide a simple visual - this establishes trust and credibility. Be succinct; you can drill down into detailed evidence later. The guide is free from Ugly Research.

Guide to Insights by Ugly Research


2. Lack of analytics understanding → Lack of trust.
Great stuff from KPMG: Building trust in analytics: Breaking the cycle of mistrust in D&A. "We believe that organizations must think about trusted analytics as a strategic way to bridge the gap between decision-makers, data scientists and customers, and deliver sustainable business results. In this study, we define four ‘anchors of trust’ which underpin trusted analytics. And we offer seven key recommendations to help executives improve trust throughout the D&A value chain.... It is not a one-time communication exercise or a compliance tick-box. It is a continuous endeavor that should span the D&A lifecycle from data through to insights and ultimately to generating value."

Analytics professionals aren't feeling the C-Suite love. Information Week laments the lack of transparency around analytics: When non-data professionals don't know or understand how the analysis is performed, it leads to a lack of trust. But that doesn't mean the data analytics efforts themselves are not worthy of trust. It means the non-data pros don't know enough about these efforts to trust them.

KPMG Trust in data and analytics


3. Execs understand advanced analytics → See how to improve business
McKinsey has an interesting take on this. "Execs can't avoid understanding advanced analytics - can no longer just 'leave it to the experts' because they must understand the art of the possible for improving their business."

Analytics expertise is widespread in operational realms such as manufacturing and HR. Finance data science must be a priority for CFOs to secure a place at the planning table. Mary Driscoll explains that CFOs want analysts trained in finance data science. "To be blunt: When [line-of-business] decision makers are using advanced analytics to compare, say, new strategies for volume, pricing and packaging, finance looks silly talking only in terms of past accounting results."

4. Macroeconomics is a pseudoscience.
NYU professor Paul Romer's The Trouble With Macroeconomics is a widely discussed, skeptical analysis of macroeconomics. The opening to his abstract is excellent, making a strong point right out of the gate. Great writing, great questioning of tradition. "For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as 'tight monetary policy can cause a recession.'" Other critics also seek transparency: Alan Jay Levinovitz writes in @aeonmag The new astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience.

5. Better health evidence to a wider audience.
From the Evidence Live Manifesto: Improving the development, dissemination, and implementation of research evidence for better health.

"7. Evidence Communication.... 7.2 Better communication of research: High quality, important research that matters has to be understandable and informative to a wide audience. Yet , much of what is currently produced is not directed to a lay audience, is often poorly constructed and is underpinned by a lack of training and guidance in this area." Thanks to Daniel Barth-Jones (@dbarthjones).

Photo credit: Steve Lav - Trust on Flickr

06 October 2016

When nudging fails, defensive baseball stats, and cognitive bias cheat sheet.

What works reading list


1. When nudging fails, what else can be done?
Bravo to @CassSunstein, co-author of the popular book Nudge, for a journal abstract that is understandable and clearly identifies recommended actions. This from his upcoming article Nudges that Fail:

"Why are some nudges ineffective, or at least less effective than choice architects hope and expect? Focusing primarily on default rules, this essay emphasizes two reasons. The first involves strong antecedent preferences on the part of choosers. The second involves successful “counternudges,” which persuade people to choose in a way that confounds the efforts of choice architects. Nudges might also be ineffective, and less effective than expected, for five other reasons. (1) Some nudges produce confusion on the part of the target audience. (2) Some nudges have only short-term effects. (3) Some nudges produce “reactance” (though this appears to be rare) (4) Some nudges are based on an inaccurate (though initially plausible) understanding on the part of choice architects of what kinds of choice architecture will move people in particular contexts. (5) Some nudges produce compensating behavior, resulting in no net effect. When a nudge turns out to be insufficiently effective, choice architects have three potential responses: (1) Do nothing; (2) nudge better (or different); and (3) fortify the effects of the nudge, perhaps through counter-counternudges, perhaps through incentives, mandates, or bans."

This work will appear in a promising new journal, behavioral science & policy, "an international, peer-reviewed journal that features short, accessible articles describing actionable policy applications of behavioral scientific research that serves the public interest. articles submitted to bsp undergo a dual-review process. leading scholars from specific disciplinary areas review articles to assess their scientific rigor; at the same time, experts in relevant policy areas evaluate them for relevance and feasibility of implementation.... bsp is a publication of the behavioral science & policy association and the brookings institution press."

Slice of the week @ PepperSlice.

Author: Cass Sunstein

Analytical method: Behavioral economics

Relationship: Counter-nudges → interfere with → behavioral public policy initiatives


2. There will be defensive baseball stats!
Highly recommended: Bruce Schoenfeld's writeup about Statcast, and how it will support development of meaningful statistics for baseball fielding. Cool insight into the work done by insiders like Daren Willman (@darenw). Finally, it won't just be about the slash line.


3. Cognitive bias cheat sheet.
Buster Benson (@buster) posted a cognitive bias cheat sheet that's worth a look. (Thanks @brentrt.)


4. CATO says Donald Trump is wrong.
Conservative think tank @CatoInstitute shares evidence that immigrants don’t commit more crimes. "No matter how researchers slice the data, the numbers show that immigrants commit fewer crimes than native-born Americans.... What the anti-immigration crowd needs to understand is not only are immigrants less likely to commit crimes than native-born Americans, but they also protect us from crimes in several ways."


5. The What Works reading list.
Don't miss the #WhatWorks Reading List: Good Reads That Can Help Make Evidence-Based Policy-Making The New Normal. The group @Results4America has assembled a thought-provoking list of "resources from current and former government officials, university professors, economists and other thought-leaders committed to making evidence-based policy-making the new normal in government."


Evidence & Insights Calendar

Oct 18, online: How Nonprofits Can Attract Corporate Funding: What Goes On Behind Closed Doors. Presented by the Stanford Social Innovation Review (@SSIReview).

Nov 25, Oxford: Intro to Evidence-Based Medicine presented by CEBM. Note: In 2017 CEBM will offer a course on teaching evidence-based medicine.

Dec 13, San Francisco: The all-new Systems We Love, inspired by the excellent Papers We Love meetup series. Background here.

October 19-22, Copenhagen. ISOQOL 23rd annual conference on quality of life research. Pro tip: The Wall Street Journal says Copenhagen is hot.

November 9-10, Philadelphia: Real-World Evidence & Market Access Summit 2016. "No more scandals! Access for Patients. Value for Pharma."

22 September 2016

Improving vs. proving, plus bad evidence reporting.

Turtle slow down and learn something

If you view gathering evidence as simply a means of demonstrating outcomes, you’re missing a trick. It’s most valuable when part of a journey of iterative improvement. - Frances Flaxington

1. Immigrants to US don't disrupt employment.
There is little evidence that immigration significantly affects overall employment of native-born US workers. This according to an expert panel's 500-page report. We thought you might like this condensed version from PepperSlice.

Bad presentation alert: The report, The Economic and Fiscal Consequences of Immigration, offers no summary visuals and buries its conclusions deep within dense chapters. Perhaps the methodology is part of the problem: the report documents the "evidence-based consensus of an authoring committee of experts". People need concise synthesis and actionable findings: What can policy makers do with this information?

Bad reporting alert: Perhaps unsatisfied with these findings, Julia Preston of the New York Times slipped her own claim into the coverage, saying the report "did not focus on American technology workers [true], many of whom have been displaced from their jobs in recent years by immigrants on temporary visas [unfounded claim]". Rather sloppy reporting, particularly when covering an extensive economic study of immigration impacts.


Immigration

Key evidence: "Empirical research in recent decades suggests that findings remain by and large consistent with those in The New Americans (National Research Council, 1997) in that, when measured over a period of 10 years or more, the impact of immigration on the wages of natives overall is very small." [page 204]

"Immigration also contributes to the nation's economic growth.... Perhaps even more important than the contribution to labor supply is the infusion by high-skilled immigration of human capital that has boosted the nation's capacity for innovation and technological change. The contribution of immigrants to human and physical capital formation, entrepreneurship, and innovation are essential to long-run sustained economic growth." [page 243]

Author: @theNASEM, the National Academies of Sciences, Engineering, and Medicine.

Relationship: immigration → sustains → economic growth


2. Improving vs. proving.
On @A4UEvidence: "We often assume that generating evidence is a linear progression towards proving whether a service works. In reality the process is often two steps forward, one step back." Ugly Research supports the 'what works' concept, but wholeheartedly agrees that "The fact is that evidence rarely provides a clear-cut truth – that a service works or is cost-beneficial. Rather, evidence can support or challenge the beliefs that we, and others, have and it can point to ways in which a service might be improved."


3. Who should make sure policy is evidence-based and transparent?
Bad PR alert? Is it government's responsibility to make policy transparent and balanced? If so, some are accusing the FDA of not holding up their end on drug and medical device policy. A recent 'close-held embargo' of an FDA announcement made NPR squirm. Scientific American says the deal was this: "NPR, along with a select group of media outlets, would get a briefing about an upcoming announcement by the U.S. Food and Drug Administration a day before anyone else. But in exchange for the scoop, NPR would have to abandon its reportorial independence. The FDA would dictate whom NPR's reporter could and couldn't interview.

"'My editors are uncomfortable with the condition that we cannot seek reaction,' NPR reporter Rob Stein wrote back to the government officials offering the deal. Stein asked for a little bit of leeway to do some independent reporting but was turned down flat. Take the deal or leave it."


Evidence & Insights Calendar

November 9-10, Philadelphia: Real-World Evidence & Market Access Summit 2016. "No more scandals! Access for Patients. Value for Pharma."

29 Oct-2 Nov, Vienna, Austria: ISPOR 19th Annual European Congress. Plenary: "What Synergies Could Be Created Between Regulatory and Health Technology Assessments?"

October 3-6, National Harbor, Maryland. AMCP Nexus 2016. Special topic: "Behavioral Economics - What Does it All Mean?"


Photo credit: Turtle on Flickr.
