
11 November 2016

Building trust with evidence-based insights.


This week we examine how executives can more fully grasp the complex evidence and analysis affecting their outcomes - and how analytics professionals can better communicate those findings. The payoffs: better performance and more trust.

1. Show how A → B. Our new guide to Promoting Evidence-Based Insights explains how to engage stakeholders with a data value story. Shape content around four essential elements: Top-line, evidence-based, bite-size, and reusable. It's a suitable approach whether you're in marketing, R&D, analytics, or advocacy.

No knowledge salad. To avoid tl;dr or MEGO (My Eyes Glaze Over), emphasize the insights that matter to stakeholders. Explicitly connect specific actions with important outcomes, identify your methods, and provide a simple visual - this establishes trust and credibility. Be succinct; you can drill down into detailed evidence later. The guide is free from Ugly Research.



2. Lack of analytics understanding → Lack of trust.
Great stuff from KPMG: Building trust in analytics: Breaking the cycle of mistrust in D&A. "We believe that organizations must think about trusted analytics as a strategic way to bridge the gap between decision-makers, data scientists and customers, and deliver sustainable business results. In this study, we define four ‘anchors of trust’ which underpin trusted analytics. And we offer seven key recommendations to help executives improve trust throughout the D&A value chain.... It is not a one-time communication exercise or a compliance tick-box. It is a continuous endeavor that should span the D&A lifecycle from data through to insights and ultimately to generating value."

Analytics professionals aren't feeling the C-Suite love. Information Week laments the lack of transparency around analytics: When non-data professionals don't know or understand how the analysis is performed, trust suffers. That doesn't mean the analytics efforts themselves aren't worthy of trust - it means the non-data pros don't know enough about those efforts to trust them.



3. Execs understand advanced analytics → See how to improve business
McKinsey has an interesting take on this: Executives can no longer avoid understanding advanced analytics, or simply "leave it to the experts," because they must understand the art of the possible for improving their business.

Analytics expertise is widespread in operational realms such as manufacturing and HR. Finance data science must be a priority for CFOs to secure a place at the planning table. Mary Driscoll explains that CFOs want analysts trained in finance data science. "To be blunt: When [line-of-business] decision makers are using advanced analytics to compare, say, new strategies for volume, pricing and packaging, finance looks silly talking only in terms of past accounting results."

4. Macroeconomics is a pseudoscience.
NYU professor Paul Romer's The Trouble With Macroeconomics is a widely discussed, skeptical analysis of macroeconomics. The opening to his abstract is excellent, making a strong point right out of the gate. Great writing, great questioning of tradition. "For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as 'tight monetary policy can cause a recession.'" Other critics also seek transparency: Alan Jay Levinovitz writes in @aeonmag The new astrology: By fetishising mathematical models, economists turned economics into a highly paid pseudoscience.

5. Better health evidence to a wider audience.
From the Evidence Live Manifesto: Improving the development, dissemination, and implementation of research evidence for better health.

"7. Evidence Communication.... 7.2 Better communication of research: High quality, important research that matters has to be understandable and informative to a wide audience. Yet , much of what is currently produced is not directed to a lay audience, is often poorly constructed and is underpinned by a lack of training and guidance in this area." Thanks to Daniel Barth-Jones (@dbarthjones).

Photo credit: Steve Lav - Trust on Flickr

15 June 2016

Free beer! and the "Science of X".


1. Free beer for a year for anyone who can work perfume, velvety voice, and 'Q1 revenue goals were met' into an appropriate C-Suite presentation.
Prezi is a very nice tool enabling you to structure a visual story, without forcing a linear, slide-by-slide presentation format. The best part is you can center an entire talk around one graphic or model, and then dive into details depending on audience response. (Learn more in our writeup on How to Present Like a Boss.)

Now there's a new marketing campaign, the Science of Presentations. Prezi made a darn nice web page. And the ebook offers several useful insights into how to craft and deliver a memorable presentation (e.g., enough with the bullet points already).

But in their pursuit of click-throughs, they've gone too far. It's tempting to claim you're following the "Science of X". To some extent, Prezi provides citations to support its recommendations: The ebook links to a few studies on audience response and so forth. But that's not a "science" - they don't always connect what they're citing with what they're suggesting to business professionals. Example: "Numerous studies have found that metaphors and descriptive words or phrases — things like 'perfume' and 'she had a velvety voice' - trigger the sensory cortex.... On the other hand, when presented with nondescriptive information — for example, 'The marketing team reached all of its revenue goals in Q1' — the only parts of our brain that are activated are the ones responsible for understanding language. Instead of experiencing the content with which we are being presented, we are simply processing it."

Perhaps in this case "simply processing" the good news is enough experience for a busy executive. But our free beer offer still stands.

2. How should medical guidelines be communicated to patients?

And now for the 'Science of Explaining Guidelines'. It's hard enough to get healthcare professionals to agree on a medical guideline - and then follow it. But it's also hard to decide whether/how those recommendations should be communicated to patients. Many of the specifics are intended for providers' consumption, to improve their practice of medicine. Although it's essential that patients understand relevant evidence, translating a set of recommendations into lay terms is quite problematic.

Groups publish medical guidelines to capture evidence-based recommendations for addressing a particular disease. Sometimes these are widely accepted - and other times not. The poster-child example of breast cancer screening illustrates why patients, and not just providers, must be able to understand guidelines. Implementation Science recently published the first systematic review of methods for disseminating guidelines to patients.

Not surprisingly, the study found weak evidence of methods that are consistently feasible. "Key factors of success were a dissemination plan, written at the start of the recommendation development process, involvement of patients in this development process, and the use of a combination of traditional and innovative dissemination tools." (Schipper et al.)

3. Telling a story with data.
In the Stanford Social Innovation Review (SSIR), @JakePorway explains three things great data storytellers do differently [possible paywall]. Jake is with @DataKind, "harnessing the power of data science in service of humanity".

 

Photo credit: Christian Hornick on Flickr.

17 May 2016

How women decide, Pay for Success, and Chief Cognitive Officers.


1. Do we judge women's decisions differently?
Cognitive psychologist Therese Huston's new book is How Women Decide: What's True, What's Not, and What Strategies Spark the Best Choices. It may sound unscientific to suggest there's a particular way that several billion people make decisions, but the author is careful, not cavalier, about drawing specific conclusions.

The book covers some of the usual decision analysis territory: The process of analyzing data to inform decisions. By far the most interesting material isn't about how choices are made, but how they are judged: The author makes a good argument that women's decisions are evaluated differently than men's, by both men and women. Quick example: Marissa Mayer was hung out to dry for her ban on Yahoo! staff working from home, while Best Buy's CEO mostly avoided bad press after a similar move. Why are we often quick to question a woman's decision, but inclined to accept a man's?

Huston offers concrete strategies for defusing the stereotypes that can lead to this double standard. Again, it's dangerous to speak too generally. But the book presents evidence of gender bias in how people's choices are interpreted and perceived. Worthwhile reading. Sheelah Kolhatkar reviewed the book for The New York Times.

2. Better government through Pay for Success.
In Five things to know about pay for success legislation, Urban Institute staff explain their support for the Social Impact Partnership to Pay for Results Act (SIPPRA), which is being considered in the US House. Authors are Justin Milner (@jhmilner), Ben Holston (@benholston), and Rebecca TeKolste.

Under SIPPRA, state and local governments could apply for funding through outcomes-driven "social impact partnerships" like Pay for Success (PFS). This funding would require strong evidence and rigorous evaluation, and would accommodate projects targeting a wide range of outcomes: unemployment, child welfare, homelessness, and high school graduation rates.

One of the key drivers behind SIPPRA is its proposed fix for the so-called wrong pockets problem, where one agency bears the cost of a program, while others benefit as free riders. "The bill would provide a backstop to PFS projects and compensate state and local governments for savings that accrue to federal coffers." Thanks to Meg Massey (@blondnerd).

3. The rise of the Chief Cognitive Officer.
On The Health Care Blog, Dan Housman describes The Rise of the Chief Cognitive Officer. "The upshot of the shift to cognitive clinical decision support is that we will likely increasingly see an evolving marriage and interdependency between the worlds of AI (artificial intelligence) thinking and human provider thinking within medicine." Housman, CMO for ConvergeHealth by Deloitte, proposes a new title of CCO (Chief Cognitive Officer) or CCMO (Chief Cognitive Medical Officer) to modernize the construct of CMIO (Chief Medical Information Officer), and maintain a balance between AI and humans. For example, "If left untrained for a year or two, should the AI lose credentials? How would training be combined between organizations who have different styles or systems of care?"

4. Creating a sports analytics culture.
Stylianos Kampakis describes on the Experfy blog how to create a data-driven culture within a soccer club organization.

5. Blockchain is forcing new decisions.
@mattleising reports for Bloomberg in Inside the Secret Meeting Where Wall Street Tested Digital Cash. Thanks @stevesi. Everywhere you look, there are examples of how blockchain will change things.

23 March 2016

Rapid is the new black, how to ask for money, and should research articles be free?


1. #rapidisthenewblack

The need for speed is paramount, so it's crucial that we test ideas and synthesize evidence quickly without losing necessary rigor. Examples of people working hard to get it right:

  • The Digital Health Breakthrough Network is a very cool idea, supported by an A-list team. They (@AskDHBN) seek New York City-based startups who want to test technology in rigorous pilot studies. The goal is rapid validation of early-stage startups with real end users. Apply here.
  • The UK's fantastic Alliance for Useful Evidence (@A4UEvidence) asks Rapid Evidence Assessments: A bright idea or a false dawn? "Research synthesis will be at the heart of the government’s new What Works centres" - equally true in the US. The idea is "seductive: the rigour of a systematic review, but one that is cheaper and quicker to complete." Much depends on whether the review maps easily onto an existing field of study.
  • Jon Brassey of the Trip database is exploring methods for rapid reviews of health evidence. See Rapid-Reviews.info or @rapidreviews_i.
  • Miles McNall and Pennie G. Foster-Fishman of Michigan State (ouch, still can't get over that bracket-busting March Madness loss) present methods and case studies for rapid evaluations and assessments. In the American Journal of Evaluation, they caution that the central issue is balancing speed and trustworthiness.

2. The science of asking for donations: Unit asking method.
How much would you give to help one person in need? And how much to help 20 people? These two questions are the heart of the unit asking method: Ask donors first about a single person (the unit), then about the whole group. Anchoring on one identifiable person makes the group-level ask more generous - a simple way to make philanthropic fund-raising more successful.

3. Should all research papers be free? 
Good stuff from the New York Times on the conflict between scholarly journal paywalls and Sci-Hub.

4. Now your spreadsheet can tell you what's going on.
Savvy generates a narrative for business intelligence charts in Qlik or Excel.
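If you're curious what's under the hood of tools like this, here's a minimal sketch in Python - emphatically not Savvy's actual method, just the general idea of turning a data series into a sentence. The wording rules and sample numbers are invented for illustration.

    # A hedged sketch of chart-to-narrative generation.
    def narrate(label, values):
        first, last = values[0], values[-1]
        change = (last - first) / first
        direction = "rose" if change >= 0 else "fell"
        return (f"{label} {direction} {abs(change):.0%} over the period, "
                f"ending at {last:,.0f} (peak: {max(values):,.0f}).")

    print(narrate("Quarterly revenue", [1200, 1350, 1280, 1500]))
    # -> Quarterly revenue rose 25% over the period, ending at 1,500 (peak: 1,500).

Real products layer on trend detection, anomaly callouts, and language templates, but the core move is the same: map the numbers to the sentence a human analyst would have written.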

25 February 2016

Inspire people with insights, Part 2.


To be inspired, your audience needs to see how findings are reliable and relevant. Part 1 talked about creating practical checklists to ensure data-driven research is reproducible. This post describes how to deliver results that resonate with your audience.

It’s nice when people review analytical findings, think "Hmmm, interesting," and add the link to bitly. It’s exponentially nicer when they say “Holy smokes, let’s get started!” Certainly there are big differences between publishing a report, populating an executive dashboard, and presenting face-to-face. But these three techniques can be applied in many settings.

1. Avoid navel-gazing. Regardless of how elegant the analytics are, if your audience doesn’t understand what they might do with them, your efforts won’t have impact. All of us must resist the urge to overemphasize our expertise and hard work, and focus on helping others achieve more. Ask yourself which insights can help someone grow their business, improve team performance, or create social good.

2. Show relationships explicitly. Now more than ever, organizations urgently need *actionable insights* rather than findings. Of course you won’t always know people’s potential actions or decisions; addressing them directly can make you sound presumptuous, or just plain wrong. But you should know the subject matter well enough to anticipate objectives, values, or priorities. Be sure to connect to outcomes that are meaningful: Whenever possible, include a simple illustration, so people see key relationships at a glance.

Example: Before. Writeup of results (paragraphs or bullet points). “Patient engagement enables substantial provider cost savings. In a recent RCT, interactive, web-based patient engagement cut sedation needs 18% and procedure time 14% for first-time colonoscopy patients.”

Example: After. Use a simple illustration of associations, cause-effect, or before-after data relationships.
Interactive colonoscopy education → 14% faster procedures

EMMI offers an excellent example in this writeup of patient engagement research. Note how they name-drop respected medical centers doing a randomized, controlled trial - but quickly shift to simple, powerful visuals and descriptions of the business problem, evidence, and value message. (Bonus points for this Vimeo.)


3. Build a better dashboard.
Data visualizations on dashboards effectively show what’s happening now, or what already happened. But when you are in a position to specifically advise decision makers, more is required. This spot-on observation by James Taylor (@jamet123) at Decision Management Solutions says it well: “Dashboards are decision support systems, but paradoxically, their design does not usually consider decisions explicitly.”

Example: Before. Graphics can indeed be worth 1,000 words, but simple information feeds and routine forecasts are a commodity.

Example: After. Decision makers need predictions and recommendations/prescriptive analytics. Most powerful are insights into the expected outcomes from untried, hopefully curve-bending or needle-moving activities. Some innovative variations on the standard dashboard are:

- List of specific decisions that could influence the numbers being predicted.

- List of actions that have influenced the numbers being displayed. 

- Environment where people can do what-ifs and think through different operational decisions. Formalizing this in a dashboard isn't realistic for every organization. Accenture provides excellent perspective on industrializing the insight-action-outcome sequence.
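To make the what-if variation concrete, here's a minimal Python sketch. The churn model, the decision levers, and every coefficient are invented for illustration - the point is simply that the dashboard ties candidate decisions to the number being forecast.

    # A hedged sketch of a what-if widget behind a dashboard: connect
    # decision levers to the forecast. Model and numbers are invented.
    def projected_churn(base_rate, discount_pct, support_hires):
        """Toy model: each lever nudges the baseline churn rate."""
        rate = base_rate
        rate -= 0.002 * discount_pct      # retention discount helps a little
        rate -= 0.004 * support_hires     # each support hire helps more
        return max(rate, 0.0)

    baseline = projected_churn(0.15, discount_pct=0, support_hires=0)
    scenario = projected_churn(0.15, discount_pct=5, support_hires=3)
    print(f"Baseline churn forecast: {baseline:.1%}")
    print(f"With 5% discount + 3 support hires: {scenario:.1%}")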

Ugly Research is the creator of PepperSlice: Insights Manager as a Service.

18 February 2016

Inspire people with insights, Part 1.


When presenting findings, it’s essential to show their reliability and relevance. Today’s post discusses how to show your evidence is reproducible; next week in Part 2, we’ll cover how to show it’s relevant.

Show that your insights are reproducible. With complexity on the rise, there’s no shortage of quality problems with traditional research: People are finding it impossible to replicate everything from peer-reviewed, published findings to Amy Cuddy's power pose study. A recent examination of psychology evidence was particularly painful.

In a corporate setting, the problem is no less difficult. How do you know a data scientist’s results can be replicated?* How can you be sure an analyst’s Excel model is flawless? Much confusion could be avoided if people produced documentation to add transparency.

Demystify, demystify, demystify. To establish credibility, the audience needs to believe your numbers and your methods are reliable and reproducible. Numerous efforts are bringing transparency to academic research (@figshare, #openscience). Technologies such as self-serve business intelligence and data visualization have added traceability to corporate analyses. Data scientists are coming to grips with the need for replication, evidenced by the Johns Hopkins/Coursera class on reproducible research. At presentation time, include highlights of data collection and analysis so the audience clearly understands the source of your insights.

Make a list: What would you need to know? Imagine a colleague will be auditing or replicating your work - whether it’s a straightforward business analysis, data science, or scientific research. Put together a list of the things they would need to do, and the data they would access, to arrive at your result. Work with your team to set expectations for how projects are completed and documented. No doubt this can be a burdensome task, but the more good habits people develop (e.g., no one-off spreadsheet tweaking), the less pain they’ll experience when defending their insights.
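One lightweight way to turn that list into a habit: emit a small manifest alongside every analysis. Here's a hedged Python sketch - the file names and steps are hypothetical placeholders - recording what an auditor or replicator would need:

    # A sketch of the "what would an auditor need?" list as code:
    # record inputs, environment, and seed next to the result.
    import hashlib
    import json
    import platform
    import random
    from datetime import datetime, timezone

    SEED = 20160218
    random.seed(SEED)

    def file_sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    manifest = {
        "run_at": datetime.now(timezone.utc).isoformat(),
        "python": platform.python_version(),
        "random_seed": SEED,
        # "input_data": file_sha256("data/q4_survey.csv"),  # hash each input
        "steps": [
            "load data/q4_survey.csv",
            "drop rows with missing responses",
            "compute mean satisfaction by region",
        ],
    }

    with open("analysis_manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)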

*What is a “reproducible” finding, anyway? Does this mean literally replicated, as in producing essentially the exact same result? Or does it mean a concept or research theory is supported? Is a finding replicated if effect size is different, but direction is the same? Sanjay Srivastava has an excellent explanation of the differences as they apply to psychology research in What counts as a successful or failed replication?

image source: Barney Moss (creative commons)

10 February 2016

How to present like a boss.

1. SPOTLIGHT: Present controversial evidence with just one slide. Throw out your slide deck and try the Extreme Presentation method, developed by Andrew Abela and Paul Radich during years of presentations at Procter & Gamble, McKinsey, and other leading companies. The technique involves first showing the audience the big-picture concept so they'll immediately have a sense of the problem, and where you’re going. Then zero in on the various issues - no need to plod along through slide after slide.

Ballroom or conference room? Who’s your audience? Let that determine the tone and format of your talk. Here, we’re focused on presenting to executive decision makers, communicating complex information such as market research findings or solutions-oriented sales proposals.


Avoid SME’s disease. As subject matter experts, it’s easy to fall into the trap of going into far more detail than the audience can absorb - and running out of time before reaching the important conclusion. Extreme Presentation helps people avoid that trap by focusing on a single, clear problem or idea. The accompanying handbook, Encyclopedia of Slide Layouts, offers numerous example diagrams for telling a visual story - with names like minefield, process improvement, and patient path. And the website has a 10-step design tool for specifying objectives, sequencing evidence and anecdotes, and measuring success.

Radich recently explained the concept in an excellent webcast, Unleash the Power of Your Data and Evidence With Visual Storytelling. He presented one detailed diagram, and used Prezi to zoom into different sections as he discussed them.

Complication ⇒ Resolution. In the webcast, Abela offered excellent advice on handling audience objections. Rather than waiting for Q&A (and going on the defensive), it's better to address likely concerns during the talk, and resolve each one with an example.

2. What to do with your hands. Distracting gestures can substantially weaken the impact of a presentation. @PowerSpeaking offers several great tips on what to do with your hands during your talk. (Open palms are good.) PowerSpeaking offers well-respected programs designed specifically for polishing executive presentation skills, and they write an excellent blog.

04 February 2016

How Warby Parker created a data-driven culture.

 


 

1. SPOTLIGHT: Warby Parker data scientist on creating data-driven organizations. What does it take to become a data-driven organization? "Far more than having big data or a crack team of unicorn data scientists, it requires establishing an effective, deeply ingrained data culture," says Carl Anderson. In his recent O'Reilly book Creating a Data-Driven Organization, he explains how to build the analytics value chain required for valuable, predictive business models: From data collection and analysis to insights and leadership that drive concrete actions. Follow him @LeapingLlamas.

Practical advice, in a conversational style, is combined with references and examples from the management literature. The book is an excellent resource for real-world examples and highlights of current management research. The chapter on creating the right culture is a good reminder that leadership and transparency are must-haves.

Diagram: Ugly Research action → outcome model

Although the scope is quite ambitious, Anderson offers thoughtful organization, hitting the highlights without an overwhelmingly lengthy literature survey. Ugly Research is delighted to be mentioned in the decision-making chapter (page 196 in the hard copy, page 212 in the pdf download). As shown in the diagram, with PepperSlice we provide a way to present evidence to decision makers in the context of a specific 'action-outcome' prediction or particular decision step.

Devil's advocate point of view. Becoming 'data-driven' is context sensitive, no doubt. The author is Director of Data Science at Warby Parker, so unsurprisingly the emphasis is on technologies that enable data-gathering for consumer marketing. While it does address several management and leadership issues, such as selling a data-driven idea internally, the book primarily takes the perspective of someone two or three steps removed from the data; a senior executive working with an old-style C-Suite would likely need to take additional steps to fill the gaps. The book isn't so much about how to make decisions as about how to create an environment where decision makers are open to new ideas, and to testing those ideas with data-driven insights. Because without ideas and evidence, what's the point of a good decision process?

2. People management needs prescriptive analytics. There are three types of analytics: descriptive (showing what already happened), predictive (predicting what will happen), and prescriptive (delivering recommended actions to produce optimal results). For HR, this might mean answering "What is our staff retention? What retention is expected for 2016? And more importantly, what concrete steps will improve staff retention for this year?" While smart analytics power many of our interactions as consumers, it is still unusual to get specific business recommendations from enterprise applications. That is changing. Thanks @ISpeakAnalytics.
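For the code-minded, here's a toy Python sketch of the three types applied to staff retention. Every number, and the assumed mentoring-program effect, is invented for illustration:

    # Descriptive vs. predictive vs. prescriptive, on hypothetical data.
    import random

    random.seed(7)

    def leave_probability(engagement, mentoring=False):
        """Toy risk model: low engagement doubles attrition risk."""
        risk = 0.30 if engagement < 0.4 else 0.15
        if mentoring and engagement < 0.4:
            risk /= 2          # assumed effect of a mentoring program
        return risk

    staff = [random.random() for _ in range(1000)]   # engagement scores
    left = [random.random() < leave_probability(e) for e in staff]

    # Descriptive: what happened?
    print(f"Descriptive:  attrition last year was {sum(left)/len(left):.1%}")

    # Predictive: what do we expect if nothing changes?
    expected = sum(leave_probability(e) for e in staff) / len(staff)
    print(f"Predictive:   expected attrition next year ~ {expected:.1%}")

    # Prescriptive: what action should we take? Simulate the intervention.
    mentored = sum(leave_probability(e, mentoring=True) for e in staff) / len(staff)
    print(f"Prescriptive: mentoring low-engagement staff -> ~{mentored:.1%};"
          " recommend funding the program")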

3. Algorithms need managers, too. Leave it to the machines, and they'll optimize on click-through rates 'til kingdom come - even if customer satisfaction takes a nose dive. That's why people must actively manage marketing algorithms, explain analytics experts in the latest Harvard Business Review.

4. Nonreligious children are more generous? Evidence shows religion doesn't make kids more generous or altruistic. The LA Times reports that a series of experiments suggests children who grow up in nonreligious homes are more generous and altruistic than those from observant families. Thanks @VivooshkaC.

5. Housing-based welfare strategies do not work, and will not work. So says evidence from LSE research on the failures of asset-based welfare.

11 December 2015

Social program RCTs, health guidelines, and evidence-based mentoring.

1. Evidence → Social RCTs → Transformational change. More progress toward evidence-based social programs: The Laura and John Arnold Foundation expanded its funding of low-cost randomized controlled trials. @LJA_Foundation, an advocate for evidence-based, multidisciplinary approaches, has committed $100,000+ for all RCT proposals satisfying its RFP criteria and earning a high rating from its expert review panel.

2. Stakeholder input → Evidence-based health guidelines. Canada's Agency for Drugs and Technologies in Health seeks stakeholder input for its Guidelines for the Economic Evaluation of Health Technologies. The @CADTH_ACMTS guidelines detail best practices for conducting economic evaluations and promote the use of high-quality economic evidence in policy, practice, and reimbursement decision-making.

3. Research evidence → Standards → Mentoring effectiveness. At the National Mentoring Summit (January 27, Washington DC), practitioners, researchers, corporate partners, and civic leaders will review how best to incorporate research evidence into practice standards for youth mentoring. Topics at #MentoringSummit2016 include benchmarks for different program models (e.g., school-based, group, e-mentoring) and particular populations (e.g., youth in foster care, children of incarcerated parents).

4. Feature creep → Too many choices → Decision fatigue. Hoa Loranger at Nielsen Norman Group offers an insightful explanation of how Simplicity Wins Over Abundance of Choice in user interface design. "The paradox is that consumers are attracted to a large number of choices and may consider a product more appealing if it has many capabilities, but when it comes to making decisions and actually using the product, having fewer options makes it easier for people to make a selection." Thanks to @LoveStats.

5. Hot hand → Home run → Another home run? Evidence of a hot hand in baseball? Findings published on the Social Science Research Network suggest that "recent performance is highly significant in predicting performance.... [A] batter who is 'hot' in home runs is 15-25% more likely... to hit a home run in his next at bat." Not so fast, says @PhilBirnbaum on his Sabermetric blog, saying that the authors' "regression coefficient confounds two factors - streakiness, and additional evidence of the players' relative talent."
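Birnbaum's confound is easy to demonstrate in a few lines of Python: simulate players with fixed, differing talent and no streakiness whatsoever, and "recent performance" still predicts the next at bat. The rates below are invented for illustration.

    # With zero streakiness, talent differences alone make a recent HR
    # predictive: it's evidence of a better hitter, not a hot hand.
    import random

    random.seed(1)

    ab_after_hr = hr_after_hr = ab_after_out = hr_after_out = 0

    for _ in range(1000):                      # players with varying talent
        hr_rate = random.uniform(0.01, 0.08)   # each player's FIXED HR rate
        outcomes = [random.random() < hr_rate for _ in range(300)]
        for prev, nxt in zip(outcomes, outcomes[1:]):
            if prev:
                ab_after_hr += 1
                hr_after_hr += nxt
            else:
                ab_after_out += 1
                hr_after_out += nxt

    print(f"P(HR | previous HR):    {hr_after_hr / ab_after_hr:.4f}")
    print(f"P(HR | no previous HR): {hr_after_out / ab_after_out:.4f}")
    # The first probability comes out meaningfully higher, even though
    # every simulated player's rate is constant.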

08 December 2015

Biased hiring algorithms and Uber is not disruptive.

1. Unconscious bias → Biased algorithms → Less hiring diversity. On Science Friday (@SciFri), experts pointed out unintended consequences in algorithms for hiring. But even better was the discussion with the caller from Google, who wrote an algorithm predicting tech employee performance and seemed to be relying on unvalidated, self-reported variables. Talk about reinforcing unconscious bias. He seemed sadly unaware of the irony of the situation.

2. Business theory → Narrow definitions → Subtle distinctions. If Uber isn't disruptive, then what is? Clayton Christensen (@claychristensen) has chronicled important concepts about business innovation. But now his definition of 'disruptive innovation' tells us Uber isn't disruptive - something about entrants and incumbents, and there are charts. Do these distinctions matter? Plus, ever try to get a cab in SF circa 1999? Yet this new HBR article claims Uber didn't "primarily target nonconsumers — people who found the existing alternatives so expensive or inconvenient that they took public transit or drove themselves instead: Uber was launched in San Francisco (a well-served taxi market)".

3. Meta evidence → Research quality → Lower health cost. The fantastic Evidence Live conference posted a call for abstracts. Be sure to follow the @EvidenceLive happenings at Oxford University, June 2016. Speakers include luminaries in the movement for better meta research.

4. Mythbusting → Evidence-based HR → People performance. The UK group Science for Work is helping organizations gather evidence for HR mythbusting (@ScienceForWork).

5. Misunderstanding behavior → Misguided mandates → Food label fail. Aaron E. Carroll (@aaronecarroll), the Incidental Economist, explains on NYTimes Upshot why U.S. requirements for menu labeling don't change consumer behavior.

*** Tracy Altman will be speaking on writing about data at the HEOR and Market Access workshop March 17-18 in Philadelphia. ***
