We’ve all known this person (or maybe we are this person): Activity tracker firmly attached, obsessively monitoring sleep cycles, heart rate, blood pressure, steps taken, distance covered, calories consumed, calories burned.
Technology has made it possible for endless data about our health to be captured at every (literal) step. What was once nice-to-have health tech for that little extra motivation became a must-have, a simple and quick way to keep tabs on your well-being.
But while wearables data can signal that something’s not quite right, it presents a conundrum for your healthcare provider: Physicians work with regulated medical devices, and most people wearing this tech aren’t qualified to self-diagnose. Health trends and data can be misread, fears of illness can prove unfounded, and trips to the doctor can turn out to be unnecessary.
The data only becomes really valuable in the hands of a healthcare professional, when it can be combined with your historical health records and other data points collected over time, your doctor’s specialized training, and the tools and ability to do additional testing.
As unrelated as it may seem, the current state of cross-functional research feels a lot like buying wearable technology and using it to diagnose your health.
Disruptive technologies and agile research methodologies have ushered in a period of ubiquitous data access, blurring the lines between job functions across large swaths of industries and companies.
Gathering and analyzing data no longer falls squarely on the shoulders of market research or consumer insights teams. More and more non-research teams are demanding insights, and, in the interest of speed, they often turn to a tech vendor, software platform, or even online survey tools to deliver them.
This isn’t unique to the world of research. A similar scenario has been playing out for decades in IT, exacerbated by the emergence of cloud computing. Whether it’s photo editing software, file sharing systems, or email programs that spring up in departments unbeknownst to IT, these rogue tools are adopted with the intention of improving productivity, but they ultimately threaten corporate data management best practices. While the impulses behind using unauthorized technology are generally good, the cost can be huge: unknown, unvetted software always carries cybersecurity risk, and inefficient, potentially duplicative spending carries an economic one. Recent Cisco studies found that organizations were running between 17 and 20 times more cloud applications than IT had estimated.
Like any impatient worker who’s guilty of signing up for a quick web app instead of going through the proper IT channels, teams at organizations big and small are under immense pressure to perform. Non-research departments in particular must make faster decisions about their customers and products – and at a low cost, no less. Product and marketing teams are searching for any solution that can help them quickly gather consumer insights and feedback.
The intent is good: an affection for experimentation and efficiency. But the affection isn’t mutual among market researchers, because in the process, these teams are unwittingly exposing their companies and their colleagues to an enormous, unmanageable amount of risk.
And it will have a ripple effect on organizations if they continue to operate this way:
- No research guardrails are in place to prevent bias, inaccurate data collection, misleading sample analysis, or other bad practices.
- Limited licenses for certain tools mean gated data and no visibility into what is being tested or studied across the company. Without that visibility, teams risk duplicating each other’s efforts, wasting time and money in the process.
- Ad hoc testing and pricing create a barrier to building a solid test-and-learn, insights-as-a-capability culture.
- Potential to overpay as different departments go their own way with testing solutions.
- Potential data privacy risks if ad hoc systems and workflows are collecting and storing any identifying data from respondents.
This shared goal of rapid customer insights, combined with a lack of understanding of how each team gets there, stands between researchers and their data-loving counterparts in product and marketing. I believe there’s an opportunity to make accessibility and transparency work for researchers and non-researchers alike.
What does that winning combination look like from where I sit, admittedly as the connective tissue between product and marketing for one of those very SaaS platform solutions?
Consider the term “all you can seat,” which describes a collaborative, enterprise-wide approach to conducting research, tests, and experiments. Unlike traditional software, which is priced per license (“seat”) and limits access by design, an all-you-can-seat model opens the door to flexible access to information. Not only does this involve everyone in an organization who can and should be accessing research (and its results), but it also provides an org-wide view of each department’s research efforts, which aligns the organization and avoids wasted resources, time, and money.
Market researchers can contribute the most expertise by setting org-wide testing strategies and best practices. Researchers will also be able to dig deeper into the insights that get generated and provide historical context or sentiment analysis. With the right guardrails in place, teams should be able to run their own tests without the risk of unintended bias, duplicated effort, or decisions based on flawed data.
There’s no escaping the rapid access to consumer data (or the tech advancements to analyze it), but the value of human expertise cannot be overstated, and it should not be underutilized.
Keep tracking your health. Just know when to call the doctor.