37 posts categorized "healthcare & health tech"

03 August 2017

Underwriters + algorithms, avoiding bad choices, and evidence for rare illness.

Bad Choices book cover

1. Underwriters + algorithms = Best of both worlds.
We hear so much about machine automation replacing humans. But several promising applications are designed to supplement complex human knowledge and guide decisions, not replace them: Think primary care physicians, policy makers, or underwriters. Leslie Scism writes in the Wall Street Journal that AIG “pairs its models with its underwriters. The approach reflects the company’s belief that human judgment is still needed in sizing up most of the midsize to large businesses that it insures.” See Insurance: Where Humans Still Rule Over Machines [paywall] or the podcast Insurance Rates Set by ... Machine Intelligence?

Who wants to be called a flat liner? Does this setup compel people to make changes to algorithmic findings - necessary or not - so their value/contributions are visible? Scism says “AIG even has a nickname for underwriters who keep the same price as the model every time: ‘flat liners.’” This observation is consistent with research we covered last week, showing that people are more comfortable with algorithms they can tweak to reflect their own methods.

AIG “analysts and executives say algorithms work well for standardized policies, such as for homes, cars and small businesses. Data scientists can feed millions of claims into computers to find patterns, and the risks are similar enough that a premium rate spit out by the model can be trusted.” On the human side, analytics teams work with AIG decision makers to foster more methodical, evidence-based decision making, as described in the excellent Harvard Business Review piece How AIG Moved Toward Evidence-Based Decision Making.


2. Another gem from Ali Almossawi.
An Illustrated Book of Bad Arguments was a grass-roots project that blossomed into a stellar book about logical fallacy and barriers to successful, evidence-based decisions. Now Ali Almossawi brings us Bad Choices: How Algorithms Can Help You Think Smarter and Live Happier.

It’s a superb example of explaining complex concepts in simple language. For instance, Chapter 7 on ‘Update that Status’ discusses how crafting a succinct Tweet draws on ideas from data compression. Granted, not everyone wants to understand algorithms - but Bad Choices illustrates useful ways to think methodically, and sort through evidence to solve problems more creatively. From the publisher: “With Bad Choices, Ali Almossawi presents twelve scenes from everyday life that help demonstrate and demystify the fundamental algorithms that drive computer science, bringing these seemingly elusive concepts into the understandable realms of the everyday.”


3. Value guidelines adjusted for novel treatment of rare disease.
Like it or not, oftentimes the assigned “value” of a health treatment depends on how much it costs, compared to how much benefit it provides. Healthcare, time, and money are scarce resources, and payers must balance effectiveness, ethics, and equity.

Guidelines for assessing value are useful when comparing alternative treatments for common diseases. But they fail when considering an emerging treatment or a small patient population suffering from a rare condition. ICER, the Institute for Clinical and Economic Review, has developed a value assessment framework that’s being widely adopted. However, acknowledging the need for more flexibility, ICER has proposed a Value Assessment Framework for Treatments That Represent a Potential Major Advance for Serious Ultra-Rare Conditions.

In a request for comments, ICER recognizes the challenges of generating evidence for rare treatments, including the difficulty of conducting randomized controlled trials, and the need to validate surrogate outcome measures. “They intend to calculate a value-based price benchmark for these treatments using the standard range from $100,000 to $150,000 per QALY [quality adjusted life year], but will [acknowledge] that decision-makers... often give special weighting to other benefits and to contextual considerations that lead to coverage and funding decisions at higher prices, and thus higher cost-effectiveness ratios, than applied to decisions about other treatments.”
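For readers who want to see the arithmetic, here is a minimal sketch of how a cost-per-QALY threshold translates into a value-based price. The treatment, QALY gain, and cost figures below are hypothetical; this is not ICER's model.

```python
# Illustrative cost-per-QALY arithmetic -- hypothetical numbers, not ICER's model
# or any real treatment.

def value_based_price(qalys_gained, other_costs, threshold_per_qaly):
    """Highest treatment price that keeps cost per QALY at or below the threshold."""
    return threshold_per_qaly * qalys_gained - other_costs

# Hypothetical ultra-rare-disease treatment: 2.5 QALYs gained vs. standard care,
# plus $50,000 in added non-drug costs (monitoring, administration).
qalys_gained, other_costs = 2.5, 50_000

for threshold in (100_000, 150_000):  # ICER's standard range per QALY
    price = value_based_price(qalys_gained, other_costs, threshold)
    cost_per_qaly = (price + other_costs) / qalys_gained  # sanity check: equals threshold
    print(f"${threshold:,}/QALY threshold -> value-based price ${price:,.0f} "
          f"(cost per QALY: ${cost_per_qaly:,.0f})")
```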

28 July 2017

Algorithm reluctance, home-visit showdown, and the problem with wearables.

Kitty with laptop

Hello there. We had to step away from the keyboard for a while, but we’re back. And yikes, evidence-based decisions seem to be taking on water. Decision makers still resist handing the car keys to others, even when machines make better predictions. And government agencies continue to, ahem, struggle with making evidence-based policy. — Tracy Altman, editor


1. Evidence-based home visit program loses funding.
The evidence base has developed over 30+ years. Advocates for home visit programs - where professionals visit at-risk families - cite immediate and long-term benefits for parents and for children. Things like positive health-related behavior, fewer arrests, community ties, lower substance abuse [Long-term Effects of Nurse Home Visitation on Children's Criminal and Antisocial Behavior: 15-Year Follow-up of a Randomized Controlled Trial (JAMA, 1998)]. Or Nobel Laureate-led findings that "Every dollar spent on high-quality, birth-to-five programs for disadvantaged children delivers a 13% per annum return on investment" [Research Summary: The Lifecycle Benefits of an Influential Early Childhood Program (2016)].

The Nurse-Family Partnership (@NFP_nursefamily), a well-known provider of home visit programs, is getting the word out in the New York Times and on NPR.

AEI funnel chart

Yet this bipartisan, evidence-based policy is now defunded. @Jyebreck explains that advocates are “staring down a Sept. 30 deadline.... The Maternal, Infant and Early Childhood Home Visiting program, or MIECHV, supports paying for trained counselors or medical professionals” who establish long-term relationships with the families they visit.

It’s worth noting that the evidence on childhood programs is often conflated. AEI’s Katharine Stevens and Elizabeth English break it down in their excellent, deep-dive report Does Pre-K Work? They illustrate the dangers of drawing sweeping conclusions about research findings, especially when mixing studies about infants with studies of three- or four-year olds. And home visit advocates emphasize that disadvantage begins in utero and infancy, making a standard pre-K program inherently inadequate. This issue is complex, and Congress’ defunding decision will only hurt efforts to gather evidence about how best to level the playing field for children.

AEI Does Pre-K Work

2. Why do people reject algorithms?
Researchers want to understand our ‘irrational’ responses to algorithmic findings. Why do we resist change, despite evidence that a machine can reliably beat human judgment? Berkeley J. Dietvorst (great name, wasn’t he in Hunger Games?) comments in the MIT Sloan Management Review that “What I find so interesting is that it’s not limited to comparing human and algorithmic judgment; it’s my current method versus a new method, irrelevant of whether that new method is human or technology.”

Job-security concerns might help explain this reluctance. And Dietvorst has studied another cause: We lose trust in an algorithm when we see its imperfections. This hesitation extends to cases where an ‘imperfect’ algorithm remains demonstrably capable of outpredicting us. On the bright side, he found that “people were substantially more willing to use algorithms when they could tweak them, even if just a tiny amount”. Dietvorst is inspired by the work of Robyn Dawes, a pioneering behavioral decision scientist who investigated the Man vs. Machine dilemma. Dawes famously developed a simple model for predicting how students will rank against one another, which significantly outperformed admissions officers. Yet both then and now, humans don’t like to let go of the wheel.
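To make Dawes's point concrete, here is a minimal sketch of the kind of unit-weighted ("improper") linear model he championed: standardize a few relevant predictors, add them with equal weights, and rank candidates on the sum. The applicants and predictors are made up, not Dawes's actual study.

```python
# Minimal sketch of a Dawes-style unit-weighted linear model.
# Hypothetical applicant data; not Dawes's variables or results.
import statistics

applicants = {
    "A": {"gpa": 3.9, "test_score": 155, "essay_rating": 3},
    "B": {"gpa": 3.4, "test_score": 168, "essay_rating": 5},
    "C": {"gpa": 3.7, "test_score": 160, "essay_rating": 4},
}
predictors = ["gpa", "test_score", "essay_rating"]

def zscores(values):
    mu, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

# Standardize each predictor, then add them with equal (unit) weights.
columns = {p: zscores([a[p] for a in applicants.values()]) for p in predictors}
scores = {name: sum(columns[p][i] for p in predictors)
          for i, name in enumerate(applicants)}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.2f}")
```

Dawes's finding was that even crude models like this tend to match or beat unaided expert judgment, in part because they weigh the same factors the same way every time.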

Wearables Graveyard by Aaron Parecki

3. Massive data still does not equal evidence.
For those who doubted the viability of consumer health wearables and the notion of the quantified self, there’s plenty of validation: Jawbone liquidated, Intel dropped out, and Fitbit struggles. People need a compelling reason to wear one (such as fitness coaching, or help with diagnosing and treating a condition).

Rather than a data stream, we need hard evidence about something actionable: Evidence is “the available body of facts or information indicating whether a belief or proposition is true or valid (Google: define evidence).” To be sure, some consumers enjoy wearing a device that tracks sleep patterns or spots out-of-normal-range values - but that market is proving to be limited.

But Rock Health points to positive developments, too. Some wearables demonstrate specific value: Clinical use cases are emerging, including assistance for the blind.

Photo credit: Kitty on Laptop by Ryan Forsythe, CC BY-SA 2.0 via Wikimedia Commons.
Photo credit: Wearables Graveyard by Aaron Parecki on Flickr.

15 September 2016

Social program science, gut-bias decision test, and enough evidence already.

Paperwork

"The driving force behind MDRC is a conviction that reliable evidence, well communicated, can make an important difference in social policy." -Gordon L. Berlin, President, MDRC

1. Slice of the week: Can behavioral science improve the delivery of child support programs? Yes. Understanding how people respond to communications has improved outcomes. State programs supplemented heavy packets of detailed requirements with simple emails and postcard reminders. (Really, did this require behavioral science? Not to discount the excellent work by @CABS_MDRC, but it seems pretty obvious. Still, a promising outcome.)

Applying Behavioral Science to Child Support: Building a Body of Evidence comes to us from MDRC, a New York-based institute that builds knowledge around social policy.

Data: Collected using random assignment and analyzed with descriptive statistics.

Evidence: Support payments increased with reminders. Simple notices (email or postcards) sent to people not previously receiving them increased the number of parents making at least one payment by 3%.

Relationship: behaviorally informed interventions → solve → child support problems
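As a rough illustration of the descriptive comparison behind a finding like this, here is a minimal sketch. The group sizes and baseline rate are made up; only the roughly three-point gap echoes the report.

```python
# Back-of-the-envelope comparison of payment rates between randomized groups.
# Group sizes and baseline rate are hypothetical; only the ~3-point gap echoes the study.
import math

n_control, paid_control = 2000, 760     # 38.0% made at least one payment
n_reminder, paid_reminder = 2000, 820   # 41.0%

p_c = paid_control / n_control
p_r = paid_reminder / n_reminder
diff = p_r - p_c

# Standard error of a difference in proportions, for a rough 95% interval.
se = math.sqrt(p_c * (1 - p_c) / n_control + p_r * (1 - p_r) / n_reminder)
print(f"Control: {p_c:.1%}   Reminder: {p_r:.1%}   Difference: {diff:+.1%}")
print(f"Approx. 95% CI for the difference: ({diff - 1.96 * se:+.1%}, {diff + 1.96 * se:+.1%})")
```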


“A commitment to using best evidence to support decision making in any field is an ethical commitment.”
-Dónal O’Mathuna @DublinCityUni

2. How to test your decision-making instincts.
McKinsey's Andrew Campbell and Jo Whitehead have studied decision-making for execs. They suggest asking yourself these four questions to ensure you're drawing on appropriate experiences and emotions. "Leaders cannot prevent gut instinct from influencing their judgments. What they can do is identify situations where it is likely to be biased and then strengthen the decision process to reduce the resulting risk."

Familiarity test: Have we frequently experienced identical or similar situations?
Feedback test: Did we get reliable feedback in past situations?
Measured-emotions test: Are the emotions we have experienced in similar or related situations measured?
Independence test: Are we likely to be influenced by any inappropriate personal interests or attachments?

Relationship: Test of instincts → reduces → decision bias


3. When is enough evidence enough?
At what point should we agree on the evidence, stop evaluating, and move on? Determining this is particularly difficult where public health is concerned. Despite all the available findings, investigators continue to study the costs and benefits of statin drugs. A new Lancet review takes a comprehensive look and makes a strong case for this important drug class. "Large-scale evidence from randomised trials shows that statin therapy reduces the risk of major vascular events" and "claims that statins commonly cause adverse effects reflect a failure to recognise the limitations of other sources of evidence about the effects of treatment".

The insightful Richard Lehman (@RichardLehman1) provides a straightforward summary: The treatment is so successful that the "main adverse effect of statins is to induce arrogance in their proponents." And Larry Husten explains that Statin Trialists Seek To Bury Debate With Evidence.


Photo credit: paperwork by Camilo Rueda López on Flickr.

09 September 2016

Battling antimicrobial resistance, visualizing data, and value in health.

Dentist-antibiotic-board

PepperSlice Board of the Week: Dentists will slow down on antibiotics if you show them a chart of their prescribing numbers. 

Antimicrobial resistance is a serious public health concern. PLOS Medicine has published findings from an RCT studying whether an audit and feedback intervention targeting prescribing patterns will reduce dentists' antibiotic prescriptions. The intervention group prescribed substantially fewer antibiotics per 100 cases.

The Evidence. Peer-reviewed: An Audit and Feedback Intervention for Reducing Antibiotic Prescribing in General Dental Practice.

Data: Collected using RAPiD Cluster Randomised Controlled Trial, and analyzed with ANCOVA.

Relationship: historical data ➞ influence ➞ dentist antibiotic prescribing rates

This study evaluated the impact of providing general-practice dentists with individualised feedback consisting of a line graph of their monthly antibiotic prescribing rate. Rates in the intervention group were substantially lower than in the control group.

From the authors: "The feedback provided in this study is a relatively straightforward, low-cost public health and patient safety intervention that could potentially help the entire healthcare profession address the increasing challenge of antimicrobial resistance." Authors: Paula Elouafkaoui et al.
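For the curious, here is roughly what the ANCOVA step looks like in code: compare post-intervention prescribing between arms while adjusting for each practice's baseline rate. This is a simulated sketch using statsmodels, with our own variable names and numbers; it is not the RAPiD trial's data and it ignores the cluster design.

```python
# Minimal ANCOVA sketch: post-intervention prescribing rate by arm, adjusted for
# baseline rate. Simulated data, not the RAPiD trial; clustering is ignored here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200                                        # dental practices
baseline = rng.normal(9.0, 2.0, n)             # antibiotic items per 100 cases
group = rng.integers(0, 2, n)                  # 0 = control, 1 = feedback
post = 0.8 * baseline - 0.6 * group + rng.normal(0, 1.0, n)

df = pd.DataFrame({"post": post, "baseline": baseline, "group": group})
model = smf.ols("post ~ baseline + C(group)", data=df).fit()
print(model.params)  # the C(group)[T.1] coefficient estimates the feedback effect
```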

#: evidentista, antibiotics, evidence-based practice


Distribution plots

2. Visualizing data distributions.
Nathan Yau's fantastic blog, Flowing Data, offers a simple explanation of distributions - the spread of a dataset - and how to compare them. Highly recommended. "Single data points from a large dataset can make it more relatable, but those individual numbers don’t mean much without something to compare to. That’s where distributions come in."
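If you want to experiment yourself, here is a minimal sketch (ours, not from the Flowing Data post) that overlays two histograms to compare spread.

```python
# Compare the spread of two made-up distributions with overlaid histograms.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=5, size=1000)    # tighter spread
group_b = rng.normal(loc=55, scale=12, size=1000)   # wider spread

bins = np.linspace(0, 100, 50)
plt.hist(group_a, bins=bins, alpha=0.5, density=True, label="Group A")
plt.hist(group_b, bins=bins, alpha=0.5, density=True, label="Group B")
plt.xlabel("Value")
plt.ylabel("Density")
plt.title("Comparing two distributions")
plt.legend()
plt.show()
```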


3. Calculating 'expected value' of health interventions.
Frank David provides a useful reminder of the realities of computing 'expected value'. Sooner or later, we must make simplifying assumptions, and compare costs and benefits on similar terms (usually $). On Forbes he walks us through a straightforward calculation of the value of an EpiPen. (Frank's firm, Pharmagellan, is coming out with a book on biotech financial modeling, and we look forward to that.)
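The core arithmetic is a probability-weighted sum of outcomes minus cost. Here is a stripped-down sketch with made-up numbers, not Frank David's figures.

```python
# Expected-value sketch: probability-weighted benefit minus annual cost, in dollars.
# All numbers are invented for illustration; see the Forbes piece for real figures.

scenarios = [
    # (probability this year, dollar value assigned to the outcome)
    (0.01, 1_000_000),   # severe reaction averted by having the device on hand
    (0.99, 0),           # device never needed this year
]
annual_cost = 600        # hypothetical out-of-pocket cost

expected_benefit = sum(p * value for p, value in scenarios)
print(f"Expected benefit: ${expected_benefit:,.0f}")
print(f"Expected net value: ${expected_benefit - annual_cost:,.0f}")
```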



4. What is Bayesian, really?
In the Three Faces of Bayes, @burrsettles beautifully describes three uses of the term Bayesian, and wonders "Why is it that Bayesian networks, for example, aren’t considered… y’know… Bayesian?" Recommended for readers wanting to know more about these algorithms for machine learning and decision analysis.
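As a reminder of the thread that ties the three senses together, here is the bare Bayes' rule update, sketched with made-up numbers.

```python
# Bayes' rule sketch: update a prior with a likelihood. Numbers are illustrative.

def posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive evidence) for a binary hypothesis."""
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

# Example: 1% prevalence, a test with 95% sensitivity and a 5% false-positive rate.
print(f"P(condition | positive test) = {posterior(0.01, 0.95, 0.05):.1%}")
```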


Fun Fact: Everyone can stop carrying around fake babies. Evidence tells us baby simulators don't deter teen pregnancy after all.


Evidence & Insights Calendar:

September 19-21; Boston. FierceBiotech Drug Development Forum. Evaluate challenges, trends, and innovation in drug discovery and R&D. Covering the entire drug development process, from basic research through clinical trials.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.

25 August 2016

Social determinants of health, nonfinancial performance metrics, and satisficers.

Dear reader: Insights Weekly is starting a new chapter. Our spotlight topics are now accompanied by a 'newsletter' version of a PepperSlice, the capsule form of evidence-based analysis we've created at PepperSlice.com. Let me know what you think, and thanks for your continued readership. - Tracy Altman, Ugly Research

1. Is social services spending associated with better health outcomes? Yes.
Evidence has revealed a significant association between healthcare outcomes and the ratio of social service to healthcare spending in various OECD countries. Now a new study, published in Health Affairs, finds this same pattern within the US. The health differences were substantial. For instance, a 20% change in the median social-to-health spending ratio was associated with 85,000 fewer adults with obesity and more than 950,000 fewer adults with mental illness. Elizabeth Bradley and Lauren Taylor explain on the RWJF Culture of Health blog.

This is great, but we wonder: Where/what is the cause-effect relationship?

The Evidence. Peer-reviewed: Variation In Health Outcomes: The Role Of Spending On Social Services, Public Health, And Health Care, 2000-09.

Data: Collected using longitudinal state-level spending data and analyzed with repeated measures multivariable modeling, retrospective.

Relationship: Social : medical spending → associated → better health outcomes

From the authors: "Reorienting attention and resources from the health care sector to upstream social factors is critical, but it’s also an uphill battle. More research is needed to characterize how the health effects of social determinants like education and poverty act over longer time horizons. Stakeholders need to use information about data on health—not just health care—to make resource allocation decisions."

#: statistical_modeling social_determinants population_health social_services health_policy

2. Are nonfinancial metrics good leading indicators of financial performance? Maybe.
During the 1990s and early 2000s we heard a lot about Kaplan and Norton's Balanced Scorecard. A key concept was the use of nonfinancial management metrics such as customer satisfaction, employee engagement, and openness to innovation. This was thought to encourage actions that increased a company’s long-term value, rather than maximizing short-term financials.

The idea has taken hold, and nonfinancial metrics are often used in designing performance management systems and executive compensation plans. But not everyone is a fan: Some argue this can actually be harmful; for instance, execs might prioritize customer satisfaction over other performance areas. Recent findings in the MIT Sloan Management Review look at whether these metrics truly are leading indicators of financial performance.

The Evidence. Business journal: Are Nonfinancial Metrics Good Leading Indicators of Future Financial Performance?

Data: Collected from American Customer Satisfaction Index, ExecuComp, and Compustat and analyzed with econometrics: panel data analysis.

Relationship: Nonfinancial metrics → predict → Financial performance

From the authors: "We found that there were notable variations in the lead indicator strength of customer satisfaction in a sample of companies drawn from different industries. For instance, for a chemical company in our sample, customer satisfaction’s lead indicator strength was negative; this finding is consistent with prior research suggesting that in many industries, the expense required to increase customer satisfaction can’t be justified. By contrast, for a telecommunications company we studied, customer satisfaction was a strong leading indicator; this finding is consistent with evidence showing that in many service industries, customer satisfaction reduces customer churn and price sensitivity. For a professional service firm in our sample, the lead indicator strength of customer satisfaction was weak; this is consistent with evidence showing that for such services, measures such as trust provide a clearer indication of the economic benefits than customer satisfaction.... Knowledge of whether a nonfinancial metric such as customer satisfaction is a strongly positive, weakly positive, or negative lead indicator of future financial performance can help companies avoid the pitfalls of using a nonfinancial metric to incentivize the wrong behavior."
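For the analytically inclined, a lead-indicator test boils down to regressing next period's financial performance on this period's nonfinancial metric, with firm effects. Here is a minimal sketch on simulated data; it is not the authors' dataset or specification.

```python
# Minimal lead-indicator sketch: does this quarter's customer satisfaction predict
# next quarter's revenue growth? Simulated panel with firm fixed effects (dummies).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
firms, quarters = 30, 12
rows = []
for firm in range(firms):
    base_growth = rng.normal(0.02, 0.01)
    sat = rng.normal(70, 5, quarters)
    for q in range(quarters - 1):
        # In this simulation, future growth depends weakly on current satisfaction.
        future_growth = base_growth + 0.001 * (sat[q] - 70) + rng.normal(0, 0.01)
        rows.append({"firm": firm, "sat": sat[q], "future_growth": future_growth})

panel = pd.DataFrame(rows)
model = smf.ols("future_growth ~ sat + C(firm)", data=panel).fit()
print(model.params["sat"])  # lead-indicator strength of satisfaction in this fake panel
```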

#: customer_satisfaction nonfinancial balanced_scorecard CEO_compensation performance_management

3. Reliable evidence about p values.
Daniël Lakens (@lakens) puts it very well, saying "One of the most robust findings in psychology replicates again: Psychologists misinterpret p-values." This from Frontiers in Psychology.

4. Satisficers are happier.
Fast Company's article sounds at first just like clickbait, but they have a point. You can change how you see things, and reset your expectations. The Surprising Scientific Link Between Happiness And Decision Making.

Evidence & Insights Calendar:

September 19-21; Boston. FierceBiotech Drug Development Forum. Evaluate challenges, trends, and innovation in drug discovery and R&D. Covering the entire drug development process, from basic research through clinical trials.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.


Photo credit: Fat cat by brokinhrt2 on Flickr.

18 August 2016

Science of CEO success?, drug valuation kerfuffle, and event attribution science.

  Fingerpointing


1. Management research: Alchemy → Chemistry?
McKinsey's Michael Birshan and Thomas Meakin set out to "take a data-driven look" at the strategic moves of newly appointed CEOs, and how those moves influenced company returns. The accompanying podcast (with transcript), CEO transitions: The science of success, says "A lot of the existing literature is quite qualitative, anecdotal, and we’ve been able to build a database of 599 CEO transitions and add a bunch of other sources to it and really try and mine that database hard for what we hope are new insights. We are really trying to move the conversation from alchemy to chemistry, if you like."

The research was first reported in How new CEOs can boost their odds of success. McKinsey's evidence says new CEOs make similar moves, with similar frequency, whether they're taking over a struggling company or a profitable one (see chart). For companies not performing well, Birshan says the data support his advice to be bold, and make multiple moves at once. Depending on how you slice the numbers, both external and internal hires fared well in the CEO role (8).

  CEO-science-success

Is this science? CEO performance was associated with the metric excess total returns to shareholders, "which is the performance of one company over or beneath the average performance of its industry peers over the same time period". Bottom line, can you attribute CEO activities directly to excess TRS? Organizational redesign was correlated with significant excess TRS (+1.9 percent) for well-performing companies. The authors say "We recognize that excess TRS CAGR does not prove a causal link; too many other variables, some beyond a CEO’s control, have an influence. But we do find the differences that emerged quite plausible." Hmm, correlation still does not equal causation.
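The metric itself is easy to compute; here is a quick sketch with hypothetical returns, not McKinsey's data.

```python
# Excess TRS sketch: a company's total return to shareholders minus the average
# of its industry peers over the same period. Hypothetical annualized returns.

company_trs = 0.124                        # 12.4% annualized TRS over the CEO's tenure
peer_trs = [0.081, 0.095, 0.110, 0.102]    # industry peers over the same window

excess_trs = company_trs - sum(peer_trs) / len(peer_trs)
print(f"Excess TRS: {excess_trs:+.1%}")    # positive means the company beat its peers
```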

Examine the evidence. The report's end notes answer some key questions: Can you observe or measure whether a CEO inspires the top team? Probably not (1). Where do you draw the line between a total re-org and a management change? They define 'management reshuffle' as 50+% turnover in first two years (5). But we have other questions: How were these data collected and analyzed? Some form of content analysis would likely be required to assign values to variables. How were the 599 CEOs chosen as the sample? Selection bias is a concern. Were some items self-reported? Interviews or survey results? Were findings validated by assigning a second team to check for internal reliability? External reliability?


2. ICER + pharma → Fingerpointing.
There's a kerfuffle between pharma companies and the nonprofit ICER (@ICER_review). The Institute for Clinical and Economic Review publishes reports on drug valuation, and studies comparative efficacy. Biopharma Dive explains that "Drugmakers have argued ICER's reviews are driven by the interests of insurers, and fail to take the patient perspective into account." The National Pharmaceutical Council (@npcnow) takes issue with how ICER characterizes its funding sources.

ICER has been doing some damage control, responding to a list of 'myths' about its purpose and methods. Its rebuttal, Addressing the Myths About ICER and Value Assessment, examines criticisms such as "ICER only cares about the short-term cost to insurers, and uses an arbitrary budget cap to suggest low-ball prices." Also, ICER's economic models "use the Quality-Adjusted Life Year (QALY) which discriminates against those with serious conditions and the disabled, 'devaluing' their lives in a way that diminishes the importance of treatments to help them."


Cupid-lesser-known-arrow

3. Immortal time bias → Overstated findings.
You can’t get a heart transplant after you’re dead. The must-read Hilda Bastian writes on Statistically Funny about immortal time bias, a/k/a event-free time or competing risk bias. This happens when an analysis mishandles follow-up time during which the outcome can’t occur, or events whose occurrence precludes the outcome of interest - heart transplant studies are the classic example, since patients must survive long enough to receive a transplant. Numerous published studies, particularly those including Kaplan-Meier analyses, may suffer from this bias.
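To see how much damage this can do, here is a small simulation sketch (ours, not Bastian's): a "treatment" with zero effect looks strongly protective simply because patients must survive long enough to receive it.

```python
# Immortal time bias sketch: a treatment with NO effect looks protective when
# patients must survive a waiting period before they can receive it.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
survival = rng.exponential(scale=24, size=n)        # months survived; treatment does nothing
wait_for_organ = rng.exponential(scale=12, size=n)  # months until a transplant is available

# Naive (biased) grouping: anyone who lived long enough to be transplanted is "treated".
treated = survival > wait_for_organ
print(f"Mean survival, 'treated':   {survival[treated].mean():.1f} months")
print(f"Mean survival, 'untreated': {survival[~treated].mean():.1f} months")
# The large gap is pure immortal time bias -- the treatment did nothing in this simulation.
```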


4. Climate change → Weird weather?
This week the US is battling huge fires and disastrous floods: Climate change, right? Maybe. There’s now a thing called event attribution science, where people apply probabilistic methods in an effort to determine whether an extreme weather event resulted from climate change. The idea is to establish, and eventually predict, adverse impacts.
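One common quantity in this work is the fraction of attributable risk (FAR), which compares an event's probability in today's climate with its probability in a counterfactual climate without human influence. Here is a minimal sketch with made-up probabilities.

```python
# Fraction of attributable risk (FAR) sketch, with made-up probabilities.
# FAR = 1 - P0/P1, where P0 is the chance of exceeding an extreme threshold in a
# counterfactual climate without human influence, and P1 is the chance today.

p0 = 0.01  # e.g., a 1-in-100-year heat event in the counterfactual climate
p1 = 0.04  # the same event in the current climate (hypothetical)

far = 1 - p0 / p1
print(f"Risk ratio: {p1 / p0:.1f}x   FAR: {far:.0%} of the event's risk attributable")
```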


Evidence & Insights Calendar:

September 20-22; Newark, New Jersey. Advanced Pharma Analytics. How to harness real-world evidence to optimize decision-making and improve patient-centric strategies.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.


Photo credit: Fingerpointing by Tom Hilton.

04 August 2016

Health innovation, foster teens, NBA, Gwyneth Paltrow.

Foster care youth

1. Behavioral economics → Healthcare innovation.
Jaan Sidorov (@DisMgtCareBlog) writes on the @Health_Affairs blog about roadblocks to healthcare innovation. Behavioral economics can help us truly understand resistance to change, including unconscious bias, so valuable improvements will gain more traction. Sidorov offers concise explanations of hyperbolic discounting, experience weighting, social utility, predictive value, and other relevant economic concepts. He also recommends specific tactics when presenting a technology-based innovation to the C-Suite.

2. Laptops → Foster teen success.
Nobody should have to type their high school essays on their phone. A coalition including Silicon Valley leaders and public sector agencies will ensure all California foster teens can own a laptop computer. Foster Care Counts reports evidence that "providing laptop computers to transition age youth shows measurable improvement in self-esteem and academic performance". KQED's California Report ran a fine story.

For a year, researchers at USC's School of Social Work surveyed 730 foster youth who received laptops, finding that "not only do grades and class attendance improve, but self-esteem and life satisfaction increase, while depression drops precipitously."

3. Analytical meritocracy → Better NBA outcomes.
The Innovation Enterprise Sports Channel explains how the NBA draft is becoming an analytical meritocracy. Predictive models help teams evaluate potential picks, including some they might have overlooked. Example: Andre Roberson, who played very little college ball, was drafted successfully by Oklahoma City based on analytics. It's tricky combining projections for active NBA teams with prospects who may never take the court. One decision aid is ESPN’s Draft Projection model, using Statistical Plus/Minus to predict how someone would perform through season five of a hypothetical NBA career. ESPN designates each player as a Superstar, Starter, Role Player, or Bust, to facilitate risk-reward assessments.

4. Celebrity culture → Clash with scientific evidence.
Health law and policy professor Timothy Caulfield (@CaulfieldTim) examines the impact of celebrity culture on people's choices of diet and healthcare. His new book asks Is Gwyneth Paltrow Wrong About Everything?: How the Famous Sell Us Elixirs of Health, Beauty & Happiness. Caulfield cites many, many peer-reviewed sources of evidence.

Evidence & Insights Calendar:

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

February 22-23; London UK. Evidence Europe 2017. How pharma, payers, and patients use real-world evidence to understand and demonstrate drug value and improve care.

Photo credit: Foster Care Counts.

19 July 2016

Are you causing a ripple? How to assess the impact of research.

Raindrops-in-a-bucket

People are recognizing the critical need for meta-research, or the 'science of science'. One focus area is understanding whether research produces desired outcomes, and identifying how to ensure that truly happens going forward. Research impact assessment (RIA) is particularly important when holding organizations accountable for their management of public and donor funding. An RIA community of practice is emerging.

Are you causing a ripple? For those wanting to lead an RIA effort, the International School on Research Impact Assessment was developed to "empower participants on how to assess, measure and optimise research impact with a focus on biomedical and health sciences." ISRIA is a partnership between Alberta Innovates, the Agency for Health Quality and Assessment of Catalonia, and RAND Europe. They're presenting their fourth annual program Sept 19-23 in Melbourne, Australia, hosted by the Commonwealth Scientific and Industrial Research Organisation, Australia’s national research agency.

ISRIA participants are typically in program management, evaluation, knowledge translation, or policy roles. They learn a range of frameworks, tools, and approaches for assessing research impact, and how to develop evidence about 'what works'.

Make an impact with your impact assessment. Management strategies are also part of the ISRIA curriculum: Embedding RIA systemically into organizational practice, reaching agreement on effective methods and reporting, understanding the audience for RIAs, and knowing how to effectively communicate results to various stakeholders.

The 2016 program will cover both qualitative and quantitative analytical methods, along with a mixed design. It will include sessions on evaluating economic, environmental and social impacts. The aim is to expose participants to as many options as possible, including new methods, such as altmetrics. (Plus, there's a black tie event on the first evening.)

 

Photo credit: Raindrops in a Bucket by Smabs Sputzer.

 

13 July 2016

Academic clickbait, FCC doesn't use economics, and tobacco surcharges don't work.

Brady

1. Academics use crazy tricks for clickbait.
Turn to @TheWinnower for an insightful analysis of academic article titles, and how their authors sometimes mimic techniques used for clickbait. Positively framed titles (those stating a specific finding) fare better than vague ones: For example, 'smoking causes lung cancer' vs. 'the relationship between smoking and lung cancer'. Nice use of altmetrics to perform the analysis.

2. FCC doesn't use cost-benefit analysis.
Critics claim Federal Communications Commission policymaking has swerved away from econometric evidence and economic theory. Federal agencies including the EPA must submit cost-benefit analyses to support new regulations, but the FCC is exempted, "free to embrace populism as its guiding principle". @CALinnovates has published a new paper, The Curious Absence of Economic Analysis at the Federal Communications Commission: An Agency In Search of a Mission. Former FCC Chief Economist Gerald Faulhaber, PhD and Hal Singer, PhD review the agency’s "proud history at the cutting edge of industrial economics and its recent divergence from policymaking grounded in facts and analysis".

3. No bias in US police shootings?
There's plenty of evidence showing bias in US police use of force, but not in shootings, says one researcher. But Data Colada, among others, notes that "an interesting empirical challenge for interpreting the shares of Whites vs Blacks shot by police while being arrested is that biased officers, those overestimating the threat posed by a Black civilian, will arrest less dangerous Blacks on average. They will arrest those posing a real threat, but also some not posing a real threat, resulting in lower average threat among those arrested by biased officers."

4. Tobacco surcharges don't work.
The Affordable Care Act allows insurers to impose tobacco surcharges on smokers. But findings suggest the surcharges have not led more people to stop smoking.

5. CEOs lose faith in forecasts.
Some CEOs say big-data predictions are failing. “The so-called experts and global economists are proven as often to be wrong as right these days,” claims a WSJ piece, In Uncertain Times, CEOs Lose Faith in Forecasts. One consultant advises people to "rely less on forecasts and instead road-test ideas with customers and make fast adjustments when needed," and to supplement big-data predictions with close observation of their customers.

6. Is fMRI evidence flawed?
Motherboard's Why Two Decades of Brain Research Could Be Seriously Flawed recaps research by Anders Eklund. Cost is one reason, he argues: fMRI scans are notoriously expensive. "That makes it hard for researchers to perform large-scale studies with lots of patients". Eklund has written elsewhere about this (Can parametric statistical methods be trusted for fMRI based group studies?), and the issue is being noticed by Neuroskeptic and Science-Based Medicine ("It’s tempting to think that the new idea or technology is going to revolutionize science or medicine, but history has taught us to be cautious. For instance, antioxidants, it turns out, are not going to cure a long list of diseases").

Evidence & Insights Calendar:

August 24-25; San Francisco. Sports Analytics Innovation Summit.

September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.

30 June 2016

Brain training isn't smart, physician peer pressure, and #AskforEvidence.

Brain-Training

1. Spending $ on brain training isn't so smart.
It seems impossible to listen to NPR without hearing from their sponsor, Lumosity, the brain-training company. The target demo is spot on: NPR will be the first to tell you its listeners are the "nation's best and brightest". And bright people don't want to slow down. Alas, spending hard-earned money on brain training isn't looking like a smart investment. New evidence seems to confirm suspicions that this $1 billion industry is built on hope, sampling bias, and placebo effect. Ars Technica says researchers have concluded that earlier, mildly positive "findings suggest that recruitment methods used in past studies created a self-selected group of participants who believed the training would improve cognition and thus were susceptible to the placebo effect." The study, Placebo Effects in Cognitive Training, was published in the Proceedings of the National Academy of Sciences.

It's not a new theme: In 2014, 70 cognitive scientists signed a statement saying "The strong consensus of this group is that the scientific literature does not support claims that the use of software-based 'brain games' alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease."


Table from Ioannidis, Why Most Clinical Research Is Not Useful (PLOS Medicine)

2. Ioannidis speaks out on usefulness of research.
After famously claiming that most published research findings are false, John Ioannidis now tells us Why Most Clinical Research Is Not Useful (PLOS Medicine). So, what are the key features of 'useful' research? The problem needs to be important enough to fix. Prior evidence must be evaluated to place the problem into context. Plus, we should expect pragmatism, patient-centeredness, monetary value, and transparency.


Antibiotic use

3. To nudge physicians, compare them to peers.
Doctors are overwhelmed with alerts and guidance. So how do you intervene when a physician prescribes antibiotics for a virus, despite boatloads of evidence showing they're ineffective? Comparing a doc's records to peers is one promising strategy. Laura Landro recaps research by Jeffrey Linder (Brigham and Women's, Harvard): "Peer comparison helped reduce prescriptions that weren’t warranted from 20% to 4% as doctors got monthly individual feedback about their own prescribing habits for 18 months.

"Doctors with the lower rates were told they were top performers, while the rest were pointedly told they weren’t, in an email that included the number and proportion of antibiotic prescriptions they wrote compared with the top performers." Linder says “You can imagine a bunch of doctors at Harvard being told ‘You aren’t a top performer.’ We expected and got a lot of pushback, but it was the most effective intervention.” Perhaps this same approach would work outside the medical field.

4. Sports analytics taxonomy.
INFORMS is a professional society focused on Operations Research and Management Science. The June issue of their ORMS Today magazine presents v1.0 of a sports analytics taxonomy (page 40). This work, by Gary Cokins et al., demonstrates how classification techniques can be applied to better understand sports analytics. Naturally this includes analytics for players and managers in the major leagues. But it also includes individual sports, amateur sports, franchise management, and venue management.

5. Who writes the Internet, anyway? #AskforEvidence
Ask for Evidence is a public campaign that helps people request for themselves the evidence behind news stories, marketing claims, and policies. Sponsored by @senseaboutsci, the campaign has new animations on YouTube, Twitter, and Facebook. Definitely worth a like or a retweet.

Calendar:
September 13-14; Palo Alto, California. Nonprofit Management Institute: The Power of Network Leadership to Drive Social Change, hosted by Stanford Social Innovation Review.

September 19-23; Melbourne, Australia. International School on Research Impact Assessment. Founded in 2013 by the Agency of Health Quality and Assessment (AQuAS), RAND Europe, and Alberta Innovates.
