37 posts categorized "healthcare & health tech"

23 June 2016

Open innovation, the value of pharmaceuticals, and liberal-vs-conservative stalemates.


1. Open Innovation can up your game.
Open Innovation → Better Evidence. Scientists with an agricultural company tell a fascinating story about open innovation success. Improving Analytics Capabilities Through Crowdsourcing (Sloan Review) describes a years-long effort to tap into expertise outside the organization. Over eight years, Syngenta used open-innovation platforms to develop a dozen data-analytics tools, which ultimately revolutionized the way it breeds soybean plants. "By replacing guesswork with science, we are able to grow more with less."

Many open innovation platforms run contests between individuals (think Kaggle), and some facilitate teams. One of these platforms, InnoCentive, hosts mathematicians, physicists, and computer scientists eager to put their problem-solving skills to the test. There was a learning curve, to be sure (example: divide big problems into smaller pieces). Articulating the research question was challenging to say the least.

Several of the associated projects could be tackled by people without subject matter expertise; other steps required knowledge of the biological science, complicating the task of finding team members. But eventually Syngenta "harnessed outside talent to come up with a tool that manages the genetic component of the breeding process — figuring out which soybean varieties to cross with one another and which breeding technique will most likely lead to success." The company reports substantial results from this collaboration: The average rate of improvement of its portfolio grew from 0.8 to 2.5 bushels per acre per year.

 


 

2. How do you tie drug prices to value?
Systematic Analysis → Better Value for Patients. It's the age-old question: How do you put a dollar value on intangibles - particularly human health and wellbeing? As sophisticated pharmaceuticals succeed in curing more diseases, their prices are climbing. Healthcare groups have developed 'value frameworks' to guide decision-making about these molecules. It's still a touchy subject to weigh the cost of a prescription against potential benefits to a human life.

These frameworks address classic problems, and are useful examples for anyone formalizing the steps of complex decision-making - inside or outside of healthcare. For example, one cancer treatment may be likely to extend a patient's life by 30 to 45 days compared to another, but at much higher cost, or with unacceptable side effects. Value frameworks help people consider these factors.
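The arithmetic underneath these frameworks is often a cost-effectiveness ratio: the extra dollars spent per extra unit of health gained. A minimal sketch, with invented numbers (no real drug or framework implied):

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra
# unit of health benefit when choosing treatment A over treatment B.
# All figures below are invented for illustration.

def icer(cost_a, cost_b, benefit_a, benefit_b):
    """Cost per additional unit of benefit (e.g., per life-year gained)."""
    return (cost_a - cost_b) / (benefit_a - benefit_b)

# Treatment A extends life ~37 days (0.10 years) more than B,
# but costs $40,000 more per course.
extra_cost_per_life_year = icer(
    cost_a=90_000, cost_b=50_000,
    benefit_a=0.85, benefit_b=0.75,  # life-years gained
)
print(f"${extra_cost_per_life_year:,.0f} per life-year gained")  # $400,000
```

At that ratio, a decision maker can ask whether $400,000 per life-year clears whatever threshold the framework sets - which is exactly where the touchy discussions begin.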

@ContextMatters studies processes for drug evaluation and regulatory approval. In Creating a Global Context for Value, they compare the different methods of determining whether patients are getting high value. Their Value Framework Comparison Table highlights key evaluation elements from three value frameworks (ASCO, NCCN, ICER) and three health technology assessments (CADTH, G-BA, NICE).

 


3. Evidence overcomes the liberal-vs-conservative stalemate.
Evidence-based Programs → Lower Poverty. Veterans of the Bloomberg mayoral administration describe a data-driven strategy to reduce poverty in New York. Results for America Senior Fellows Robert Doar and Linda Gibbs share an insider's perspective in "New York City's Turnaround on Poverty: Why poverty in New York – unlike in other major cities – is dropping."

Experimentation was combined with careful attention to which programs succeeded (Paycheck Plus) and which didn't (Family Rewards). A key factor, common to any successful decision analysis effort: When a program didn't produce the intended results, advocates weren't cast aside as failures. Instead, that evidence was blended with the rest to continuously improve. The authors found that "Solid evidence can trump the liberal-versus-conservative stalemate when the welfare of the country’s most vulnerable people is at stake."

17 May 2016

How women decide, Pay for Success, and Chief Cognitive Officers.


1. Do we judge women's decisions differently?
Cognitive psychologist Therese Huston's new book is How Women Decide: What's True, What's Not, and What Strategies Spark the Best Choices. It may sound unscientific to suggest there's a particular way that several billion people make decisions, but the author is careful not to overreach with her conclusions.

The book covers some of the usual decision analysis territory: the process of analyzing data to inform decisions. By far the most interesting material isn't about how choices are made, but how they are judged: The author makes a good argument that women's decisions are evaluated differently than men's, by both men and women. Quick example: Marissa Mayer being hung out to dry for her ban on Yahoo! staff working from home, while Best Buy's CEO mostly avoided bad press after a similar move. Why are we often quick to question a woman's decision, but inclined to accept a man's?

Huston offers concrete strategies for defusing the stereotypes that can lead to this double standard. Again, it's dangerous to speak too generally. But the book presents evidence of gender bias in how choices are interpreted, and of how that bias colors perceptions of the people making them. Worthwhile reading. Sheelah Kolhatkar reviewed it for NYTimes Books.

2. Better government through Pay for Success.
In Five things to know about pay for success legislation, Urban Institute staff explain their support for the Social Impact Partnership to Pay for Results Act (SIPPRA), which is being considered in the US House. Authors are Justin Milner (@jhmilner), Ben Holston (@benholston), and Rebecca TeKolste.

Under SIPPRA, state and local governments could apply for funding through outcomes-driven “social impact partnerships” like Pay for Success (PFS). This funding would require strong evidence and rigorous evaluation, and would accommodate projects targeting a wide range of outcomes: unemployment, child welfare, homelessness, and high school graduation rates.

One of the key drivers behind SIPPRA is its proposed fix for the so-called wrong pockets problem, where one agency bears the cost of a program, while others benefit as free riders. "The bill would provide a backstop to PFS projects and compensate state and local governments for savings that accrue to federal coffers." Thanks to Meg Massey (@blondnerd).

3. The rise of the Chief Cognitive Officer.
On The Health Care Blog, Dan Housman describes The Rise of the Chief Cognitive Officer. "The upshot of the shift to cognitive clinical decision support is that we will likely increasingly see an evolving marriage and interdependency between the worlds of AI (artificial intelligence) thinking and human provider thinking within medicine." Housman, CMO for ConvergeHealth by Deloitte, proposes a new title of CCO (Chief Cognitive Officer) or CCMO (Chief Cognitive Medical Officer) to modernize the construct of CMIO (Chief Medical Information Officer), and maintain a balance between AI and humans. For example, "If left untrained for a year or two, should the AI lose credentials? How would training be combined between organizations who have different styles or systems of care?"

4. Creating a sports analytics culture.
Stylianos Kampakis describes on the Experfy blog how to create a data-driven culture within a soccer club organization.

5. Blockchain is forcing new decisions.
@mattleising writes for Bloomberg about what happened Inside the Secret Meeting Where Wall Street Tested Digital Cash. Thanks @stevesi. Everywhere you look are examples of how Blockchain will change things.

12 May 2016

Magical thinking about ev-gen, your TA is a bot, and Foursquare predicts stuff really well.


1. Magical thinking about ev-gen.
Rachel E. Sherman, M.D., M.P.H., and Robert M. Califf, M.D. of the US FDA have described what is needed to develop an evidence generation system - and must be playing a really long game. "The result? Researchers will be able to distill the data into actionable evidence that can ultimately guide clinical, regulatory, and personal decision-making about health and health care." Recent posts are Part I: Laying the Foundation and Part II: Building Out a National System. Sherman and Califf say "There must be a common approach to how data is presented, reported and analyzed and strict methods for ensuring patient privacy and data security. Rules of engagement must be transparent and developed through a process that builds consensus across the relevant ecosystem and its stakeholders." Examples of projects reflecting these concepts include: Sentinel Initiative (querying claims data to identify safety issues), PCORNet (leveraging EHR data in support of pragmatic clinical research), and NDES (the National Device Evaluation System).

2. It pays to play the long game with data.
Michael Carney shares great examples in So you want to build a data business? Play the long game. These include "Foursquare demonstrating, once again, that it’s capable of predicting public company earnings with an incredible degree of accuracy based on real world foot traffic data.... On April 12, two weeks in advance of the beleaguered restaurant chain’s quarterly earnings report, Foursquare CEO Jeff Glueck published a detailed blog post outlining a decline in foot traffic to Chipotle’s stores and predicting Q1 sales would be 'Down Nearly 30%.' Yesterday, the burrito brand reported a 29.7% decline in quarter over quarter earnings.... Kudos to the company for persisting in the face of public scrutiny and realizing the true potential of its location-based behavioral graph."

3. Meet Jill Watson, AI TA.
Turns out, college students often submit 10,000 questions to their teaching assistants. Per class, per semester. So a Georgia Tech prof experimented with using IBM's Watson Analytics AI engine to pretend to be a live TA - and pulled it off. Cool stories from The Verge and Wall Street Journal.

4. Burst of unsettling healthcare news.
- So now that we know more about the cost of our healthcare, evidence suggests price transparency doesn't cut our outpatient spending. Healthcare reform is hard.

- Recent findings indicate patient-centered medical homes aren't cutting Medicare costs. Buzzkill via THCB.

- Ever been told to have surgery where they do the most procedures? Some data show high-volume surgeries aren't so closely linked to better patient outcomes. Ouch.

07 April 2016

Better evidence for patients, and geeking out on baseball.


1. SPOTLIGHT: Redefining how patients get health evidence.

How can people truly understand evidence and the tradeoffs associated with health treatments? How can the medical community lead them through decision-making that's shared - but also evidence-based?

Hoping for cures, patients and their families anxiously Google medical research. Meanwhile, the quantified selves are gathering data at breakneck speed. Neither of these will solve the problem on its own. However, this month's entire Health Affairs issue (April 2016) focuses on consumer uses of evidence and highlights promising ideas.

  • Translating medical evidence. Lots of synthesis and many guidelines are targeted at healthcare professionals, not civilians. Knowledge translation has become an essential piece, although it doesn't always involve patients at early stages. The Boot Camp Translation process is changing that. The method enables leaders to engage patients and develop healthcare language that is accessible and understandable. Topics include colon cancer, asthma, and blood pressure management.
  • Truly patient-centered medicine. Patient engagement is a buzzword, but capturing patient-reported outcomes in the clinical environment is a real thing that might make a big difference. Danielle Lavallee led an investigation into how patients and providers can find more common ground for communicating.
  • Meaningful insight from wearables. These are early days, so it's probably not fair to take shots at the gizmos out there. It will be a beautiful thing when sensors and other devices can deliver more than alerts and reports - and make valuable recommendations in a consumable way. And of course these wearables can play a role in routine collection of patient-reported outcomes.



2. Roll your own analytics for fantasy baseball.
For some of us, it's that special time of year when we come to the realization that our favorite baseball team is likely going home early again this season. There's always fantasy baseball, and it's getting easier to geek out with analytics to improve your results.

3. AI engine emerges after 30 years.
No one ever said machine learning was easy. Cyc is an AI engine that reflects 30 years of building a knowledge base. Now its creator, Doug Lenat, says it's ready for prime time. Lucid is commercializing the technology. Personal assistants and healthcare applications are in the works.

Photo credit: fitbit one by Tatsuo Yamashita on Flickr.

23 March 2016

Rapid is the new black, how to ask for money, and should research articles be free?


1. #rapidisthenewblack

The need for speed is paramount, so it's crucial that we test ideas and synthesize evidence quickly without losing necessary rigor. Examples of people working hard to get it right:

  • The Digital Health Breakthrough Network is a very cool idea, supported by an A-list team. They (@AskDHBN) seek New York City-based startups who want to test technology in rigorous pilot studies. The goal is rapid validation of early-stage startups with real end users. Apply here.
  • The UK's fantastic Alliance for Useful Evidence (@A4UEvidence) asks Rapid Evidence Assessments: A bright idea or a false dawn? "Research synthesis will be at the heart of the government’s new What Works centres" - equally true in the US. The idea is "seductive: the rigour of a systematic review, but one that is cheaper and quicker to complete." Much depends on whether the review maps easily onto an existing field of study.
  • Jon Brassey of the Trip database is exploring methods for rapid reviews of health evidence. See Rapid-Reviews.info or @rapidreviews_i.
  • Miles McNall and Pennie G. Foster-Fishman of Michigan State (ouch, still can't get over that bracket-busting March Madness loss) present methods and case studies for rapid evaluations and assessments. In the American Journal of Evaluation, they caution that the central issue is balancing speed and trustworthiness.

2. The science of asking for donations: Unit asking method.
How much would you give to help one person in need? How much would you give to help 20 people? This is the concept behind the unit asking method, a way to make philanthropic fund-raising more successful.

3. Should all research papers be free? 
Good stuff from the New York Times on the conflict between scholarly journal paywalls and Sci-Hub.

4. Now your spreadsheet can tell you what's going on.
Savvy generates a narrative for business intelligence charts in Qlik or Excel.

02 March 2016

NBA heat maps, FICO vs Facebook, and peer review.

[Heat maps: Steph Curry shot locations, earlier season vs. 2016]

1. Resistance is futile. You must watch Steph Curry.
The Golden State Warriors grow more irresistible every year, in large part because of Curry’s shooting. With sports data analytics from Basketball-Reference.com, these heat maps illustrate his shift to 3-pointers (and leave no doubt why Curry was called the Babyfaced Assassin; now of course he’s simply MVP).

2. Facebook vs FICO.
Fintech startups are exploring new business models, such as peer-to-peer lending (Lending Club). Another big idea is replacing traditional credit scores with rankings derived from social media profiles and other data: Just 3 months ago, Affirm and others were touted in Fortune’s Why Facebook Profiles are Replacing Credit Scores. But now the Wall Street Journal says that approach is falling out of favor, in Facebook Isn’t So Good at Judging Your Credit After All. Turns out, regulations and data-sharing policies are interfering. Besides, executives with startups like ZestFinance find social-media lending “creepy”.

3. How to fix science journals.
Harvard Med School’s Jeffrey Flier wrote an excellent op-ed for the Wall Street Journal, How to Keep Bad Science from Getting into Print [paywall]. Key issues: anonymous peer reviewers, and lack of transparent post-publishing dialogue with authors (@PubPeer being a notable exception). Flier says we need a science about how to publish science. Amen to that.

4. Longing for civil, evidence-based discourse?
ProCon.org publishes balanced coverage of controversial issues, presenting side-by-side pros and cons supported by evidence. The nonprofit’s site is ideal for schoolteachers, or anyone wanting a quick glance at important findings.

28 January 2016

Everyone's decision process, C-Suite judgment, and the Golden Gut.


1. SPOTLIGHT: MCDA, a decision process for everyone. 'Multiple criteria decision analysis' is a crummy name for a great concept (aren't all big decisions analyzed using multiple criteria?). MCDA means assessing alternatives while simultaneously considering several objectives. It's a useful way to look at difficult choices in healthcare, oil production, or real estate. But oftentimes, results of these analyses aren't communicated clearly, limiting their usefulness.

Fundamentally, MCDA means listing options, defining decision criteria, weighting those criteria, and then scoring each option. Some experts build complex economic models, but anyone can apply MCDA in effective, less rigorous ways.

You know those checklists at the end of every HouseHunters episode where people weigh location and size against budget? That's essentially it: Making important decisions, applying judgment, and juggling multiple goals (raise the kids in the city or the burbs?) - and even though they start out by ranking priorities, once buyers see their actual options, deciding on a house becomes substantially more complex.
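That checklist logic can be sketched as a weighted sum - the simplest form of MCDA. The criteria, weights, and scores below are invented:

```python
# Weighted-sum MCDA: score each option on each criterion (0-10 scale),
# weight the criteria, and pick the highest total. Data is invented.

weights = {"location": 0.5, "size": 0.3, "budget": 0.2}  # must sum to 1

options = {
    "city condo":     {"location": 9, "size": 4, "budget": 5},
    "suburban house": {"location": 5, "size": 8, "budget": 7},
}

def score(ratings, weights):
    """Weighted total for one option."""
    return sum(weights[c] * ratings[c] for c in weights)

for name, ratings in options.items():
    print(f"{name}: {score(ratings, weights):.1f}")
```

Formal MCDA adds rigor around how the weights and scores are elicited, but the skeleton is the same.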

[Diagram: MCDA guidance from ISPOR]

As shown in the diagram (source: ISPOR), the analysis hinges on assigning relative weights to individual decision criteria. While this brings rationality and transparency to complex decisions, it also invites passionate discussions. Some might expect these techniques to remove human judgment from the process, but MCDA leaves it front and center.

Pros and cons. Let’s not kid ourselves: You have to optimize on something. MCDA is both beautiful and terrifying because it forces us to identify tradeoffs: Quality, short-term benefits, long-term results? Uncertain outcomes only complicate things further.

This method is a good way to bring interdisciplinary groups into a conversation. One of the downsides is that, upon seeing elaborate projections and models, people can become over-confident in the numbers. Uncertainty is never fully recognized or quantified. (Recall the Rumsfeldian unknown unknown.) Sensitivity analysis is essential, to illustrate which predicted outcomes are strongly influenced by small adjustments.
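A sensitivity analysis can be as simple as sweeping one weight and watching for the point where the recommended option flips. A sketch, again with invented house-hunting numbers:

```python
# One-way sensitivity analysis on a weighted-sum MCDA model:
# sweep a single criterion weight and see where the ranking flips.
# All numbers are invented.

options = {
    "city condo":     {"location": 9, "size": 4, "budget": 5},
    "suburban house": {"location": 5, "size": 8, "budget": 7},
}

def best_option(w_location):
    # Split the remaining weight evenly between size and budget.
    w_rest = (1 - w_location) / 2
    weights = {"location": w_location, "size": w_rest, "budget": w_rest}
    totals = {
        name: sum(weights[c] * r[c] for c in weights)
        for name, r in options.items()
    }
    return max(totals, key=totals.get)

for w in (0.2, 0.4, 0.6, 0.8):
    print(f"location weight {w:.1f} -> {best_option(w)}")
```

If the winner flips within a plausible range of weights, the decision deserves more discussion; if it never flips, the recommendation is robust to that disagreement.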

MCDA is gaining traction in healthcare. The International Society For Pharmacoeconomics and Outcomes Research has developed new MCDA guidance, available in the latest issue of Value in Health (paywall). To put it mildly, it’s difficult to balance saving lives with saving money. To be sure, healthcare decision makers have always weighed medical, social, and economic factors: MCDA helps stakeholders bring concrete choices and transparency to the process of evaluating outcomes research - where controversy is always a possibility.

Resources to learn more. If you want to try MCDA, pick up one of the classic texts, such as Smart Choices: A Practical Guide to Making Better Decisions. Additionally, ISPOR's members offer useful insights into the pluses and minuses of this methodology - see, for example, Does the Future Belong to MCDA? The level of discourse over this guidance illustrates how challenging healthcare decisions have become.  

2. C-Suite judgment must blend with analytics. Paul Blase of PricewaterhouseCoopers hits the nail on the head, describing how a single analytics department can't be expected to capture the whole story of an enterprise. He explains better ways to involve both the C-Suite and the quants in crucial decision-making.

3. The Man with the Golden Gut. Netflix CEO Reed Hastings explains how and when intuition is more valuable than big data. Algorithms can make only some of the decisions.

4. Embedding analytics culture. How do you compare to the Red Sox? Since Moneyball, clubs have changed dramatically. Is it possible baseball organizations have embedded analytics processes more successfully than other business enterprises?

12 January 2016

Game theory for Jeopardy!, evidence for gun control, and causality.

1. Deep knowledge → Wagering strategy → Jeopardy! win Some Jeopardy! contestants struggle with the strategic elements of the show. Rescuing us is Keith Williams (@TheFinalWager), with the definitive primer on Jeopardy! strategy, applying game theory to every episode and introducing "the fascinating world of determining the optimal approach to almost anything".
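The most familiar rule in that primer is the leader's Final Jeopardy! shut-out wager: assume the runner-up bets everything, and wager just enough to finish $1 ahead if you answer correctly. A sketch of that single case (Williams covers many more):

```python
# Standard "shut-out" wager for the Final Jeopardy! leader:
# assume the runner-up bets everything; wager just enough that a
# correct answer still wins by $1. This is only the simplest case
# of the game theory Williams analyzes.

def leader_wager(leader, runner_up):
    """Smallest wager that guarantees a win if the leader answers correctly."""
    return max(0, 2 * runner_up - leader + 1)

print(leader_wager(20_000, 12_000))  # runner-up can reach 24,000 at most
print(leader_wager(30_000, 10_000))  # a "lock game": no wager needed
```

When the leader has more than double the runner-up's score, the optimal wager is zero - the game is already won.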

2. Gun controls → Less violence? → Less tragedy? Does the evidence support new US gun control proposals? In the Pacific Standard, Francie Diep cites several supporting scientific studies.

3. New data sources → Transparent methods → Health evidence Is 'real-world' health evidence closer to the truth than data from more traditional categories? FDA staff explain in What We Mean When We Talk About Data. Thanks to @MandiBPro.

4. Data model → Cause → Effect In Why: A Guide to Finding and Using Causes, Samantha Kleinberg aims to explain why causality is often misunderstood and misused: What is it, why is it so hard to find, and how can we do better at interpreting it? The book excerpt explains that "Understanding when our inferences are likely to be wrong is particularly important for data science, where we’re often confronted with observational data that is large and messy (rather than well-curated for research)."

5. Empirical results → Verification → Scientific understanding Independent verification is essential to scientific progress. But in academia, verifying empirical results is difficult and not rewarded. This is the reason for Curate Science, a tool making it easier for researchers to independently verify each other’s evidence and award credit for doing so. Follow @CurateScience.

Join me at the HEOR writing workshop March 17 in Philadelphia. I'm speaking about communicating data, and leading an interactive session on data visualization. Save $300 before Jan 15.

22 December 2015

Asthma heartbreak, cranky economists, and prediction markets.

1. Childhood stress → Cortisol → Asthma Heartbreaking stories explain likely connections between difficult childhoods and asthma. Children in Detroit suffer a high incidence of attacks - regardless of allergens, air quality, and other factors. Peer-reviewed research shows excess cortisol may be to blame.

2. Prediction → Research heads up → Better evidence Promising technique for meta-research. A prediction market was created to quantify the reproducibility of 44 studies published in prominent psychology journals, and estimate likelihood of hypothesis acceptance at different stages. The market outperformed individual forecasts, as described in PNAS (Proceedings of the National Academy of Sciences).
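A prediction market turns individual bets into a price that can be read as a probability. The sketch below uses the logarithmic market scoring rule (LMSR), a common market-maker mechanism - an illustration of the idea, not necessarily the mechanism used in the study:

```python
import math

# Logarithmic market scoring rule (LMSR) for a yes/no claim such as
# "this study will replicate". The price of the YES share is a
# probability estimate that updates as traders buy shares.
# Generic sketch; not the exact setup from the PNAS study.

class LMSRMarket:
    def __init__(self, b=100.0):
        self.b = b          # liquidity parameter
        self.q_yes = 0.0    # outstanding YES shares
        self.q_no = 0.0     # outstanding NO shares

    def price_yes(self):
        """Current probability the market assigns to YES."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def _cost(self):
        return self.b * math.log(
            math.exp(self.q_yes / self.b) + math.exp(self.q_no / self.b)
        )

    def buy_yes(self, shares):
        """Buy YES shares; returns the trader's cost."""
        before = self._cost()
        self.q_yes += shares
        return self._cost() - before

market = LMSRMarket(b=100.0)
print(round(market.price_yes(), 2))   # 0.5 before any trades
market.buy_yes(50)                    # a trader bets on replication
print(round(market.price_yes(), 2))   # the probability estimate rises
```

Each trade moves the price, so the market continuously aggregates whatever its participants know.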

3. Fuzzy evidence → Wage debate → Policy fail More fuel for the minimum-wage fire. Depending on who you ask, a high minimum wage either bolsters the security of hourly workers or destroys the jobs they depend on. Recent example: David Neumark's claims about unfavorable evidence.

4. Decision tools → Flexible analysis → Value-based medicine Drug Abacus is an interactive tool for understanding drug pricing. This very interesting project, led by Peter Bach at Memorial Sloan Kettering, compares the price of a drug (US$) with its "worth", based on outcomes, toxicity, and other factors. Hopefully @drugabacus signals the future for health technology assessment and value-based medicine.

5. Cognitive therapy → Depression relief → Fewer side effects A BMJ systematic review and meta-analysis show that depression can be treated with cognitive behavior therapy, possibly with outcomes equivalent to antidepressants. Consistent CBT treatment is a challenge, however. AHRQ reports similar findings from comparative effectiveness research; the CER study illustrates how to employ expert panels to transparently select research questions and parameters.

11 December 2015

Social program RCTs, health guidelines, and evidence-based mentoring.

1. Evidence → Social RCTs → Transformational change More progress toward evidence-based social programs. The Laura and John Arnold foundation expanded its funding of low-cost randomized controlled trials. @LJA_Foundation, an advocate for evidence-based, multidisciplinary approaches, has committed $100,000+ for all RCT proposals satisfying its RFP criteria and earning a high rating from its expert review panel.

2. Stakeholder input → Evidence-based health guidelines Canada's Agency for Drugs and Technologies in Health seeks stakeholder input for its Guidelines for the Economic Evaluation of Health Technologies. The @CADTH_ACMTS guidelines detail best practices for conducting economic evaluations and promote the use of high-quality economic evidence in policy, practice, and reimbursement decision-making.

3. Research evidence → Standards → Mentoring effectiveness At the National Mentoring Summit (January 27, Washington DC), practitioners, researchers, corporate partners, and civic leaders will review how best to incorporate research evidence into practice standards for youth mentoring. Topics at #MentoringSummit2016 include benchmarks for different program models (e.g., school-based, group, e-mentoring) and particular populations (e.g., youth in foster care, children of incarcerated parents).

4. Feature creep → Too many choices → Decision fatigue Hoa Loranger at Nielsen Norman Group offers an insightful explanation of how Simplicity Wins Over Abundance of Choice in user interface design. "The paradox is that consumers are attracted to a large number of choices and may consider a product more appealing if it has many capabilities, but when it comes to making decisions and actually using the product, having fewer options makes it easier for people to make a selection." Thanks to @LoveStats.

5. Hot hand → Home run → Another home run? Evidence of a hot hand in baseball? Findings published on the Social Science Research Network suggest that "recent performance is highly significant in predicting performance.... [A] batter who is 'hot' in home runs is 15-25% more likely... to hit a home run in his next at bat." Not so fast, says @PhilBirnbaum on his Sabermetric blog: the authors' "regression coefficient confounds two factors - streakiness, and additional evidence of the players' relative talent."
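Birnbaum's confound is easy to reproduce: simulate batters with fixed, player-specific home-run rates and no streakiness at all, then pool them. A recent home run still "predicts" the next one, simply because it flags the high-talent batters. A quick sketch with invented talent levels:

```python
import random

# Simulate a league with NO hot hand: each batter has a fixed,
# player-specific home-run probability, and at-bats are independent.
# Pooling across players still makes "hot" at-bats look predictive,
# because a recent HR is evidence the batter is a high-talent player.
random.seed(42)

talents = [0.01, 0.03, 0.05, 0.08]   # invented HR rates per at-bat
at_bats = 200_000                    # per player

hot, cold = [], []                   # outcomes after a HR / after no HR
for p in talents:
    prev = None
    for _ in range(at_bats):
        hr = random.random() < p
        if prev is not None:
            (hot if prev else cold).append(hr)
        prev = hr

p_after_hr = sum(hot) / len(hot)
p_after_out = sum(cold) / len(cold)
print(f"P(HR | previous HR)    = {p_after_hr:.3f}")
print(f"P(HR | no previous HR) = {p_after_out:.3f}")
# The gap reflects talent differences, not streakiness.
```

Within any single player the two conditional probabilities are equal in expectation; the gap appears only in the pooled data - which is exactly the confound Birnbaum describes.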
