Data Masters vs. Data Mastery: Removing Obstacles to Digital Insurance 2.0
What if you could know THE FUTURE?
This used to be our industry’s “data” question. Predictability used to be the standard by which data and analytics worked within insurance. Can we use historical patterns to predict risk? Clearly, we can. Our industry is adept at keeping risks to a minimum.
The industry’s historical business model is based on gathering and using data about risk (much of it historical internal data), then deciding which large buckets of similar risks should be consolidated for product offerings and pricing. But the definition of a product in Digital Insurance 2.0 has expanded. Products now include value-added services and customer experience along with pricing and coverages. This shift demands that insurers significantly improve their capabilities to capture, visualize, understand and actively use new forms of data.
Fueling this shift are customers and their willingness to share new sources of data with insurers in exchange for a personalized experience, customized products, personalized pricing, relevant services and meaningful, personal interactions. In our 2018 consumer research, we found a strong willingness to share data from a range of sources, including real-time or on-demand data, in return for personalized pricing.
What if you could know THE NOW?
This is one of the many new questions insurers will be asking themselves as they position their organizations to build Digital Insurance 2.0 data frameworks. Real-time data, expansive growth in data sources and customers’ willingness to share data in exchange for better service or pricing will allow insurers to shift from their roles as report-driven developers to data-driven life managers with the potential to make major advancements. Digital Insurance 2.0 capabilities will allow them to prioritize (and activate) preventive measures over aggregate claim predictions. Digital Insurance 2.0 services will allow for faster, intuitive sales processes at the point of need. Digital Insurance 2.0 will be characterized by enterprise data models that can support both operational needs, using internal data sources for business intelligence, and strategic needs, driven by advanced and emerging analytics capabilities coupled with new sources of external data.
Insurers that formerly used the past to predict the future can begin to use the past, the present and the probable to alter and shape the future. They can approach data strategies with an enterprise vision that breaks down data silos and empowers insurers with new levels of insight. That is the power of data-driven insurance. Majesco’s recent thought leadership report, Digital Insurance 2.0: Building Your Future on a Robust Data Foundation, offers an in-depth perspective on the practical aspects of gathering and using data in the new realm of data management.
Data Masters or Data Mastery
To gain access to data’s new sources and heightened analytical power, a major shift must occur in how insurers view their relationship to the data. Are they attempting to be data masters or are they seeking to master the data?
Insurers have been masters of data for centuries, pioneering the use of vast amounts of historical data to predict the likelihood of future events and powering decades of growth and progress in the insurance industry. But with the challenges of multiple systems, new and growing data sources and the lack of an enterprise data warehouse and model, many organizations are faltering in their mastery of data. Data and analytics were always “owned” or controlled by individual departments, each with its own systems, methods and accessibility. While data is obviously the lifeblood of any insurance company, it has not been circulated throughout the organization; instead, it has been held closely in data fiefdoms. Thus, insurers have had many internal data masters, each enslaving the data to its particular needs.
Insurers, as they move into Digital Insurance 2.0, must change how they view data within their enterprise. They must move from many departmental data masters to enterprise data mastery. Data is the fuel that enables an insurer to run efficiently. The fuel must be clean and flow seamlessly through the enterprise, giving each component within the insurance company’s engine the energy and vitality to operate efficiently and effectively. If the fuel is constricted or constrained, the company suffers. This is especially true in the Digital Insurance 2.0 age, where data insights may come from many different sources, internal and external, structured and unstructured. If data is the fuel that lets insurers operate efficiently, then analytics is the spark plug that gets operations running.
In Digital Insurance 2.0, data is a source of competitive advantage for identifying unserved or underserved markets, finding profitable niches, reducing or eliminating risk, driving channel optimization, enhancing service and improving customer experiences. Combining data from traditional internal sources and new external sources can enrich the information used to make a wide array of business decisions, widening the gap between Insurance 1.0 and Digital Insurance 2.0 at an accelerating rate. This includes data derived from technologies like IoT, drones, vehicles and more, as well as leveraging new analytics tools.
Mastery at the Speed of Opportunity
If we assume that most insurers grasp the value and necessity of data-driven strategies, then we must also assume that major hurdles stand in the way of building robust data foundations. In Digital Insurance 2.0: Building Your Future on a Robust Data Foundation, we acknowledge these issues and discuss the correlation between the exploding growth of data and the difficulty most insurers have in harnessing its capabilities. We then look at the reality of the opportunities and the basic steps involved in becoming a data-driven insurer.
In our next blog, we’ll look at how to “Get the Basics Right,” examining how “People, Processes and Priorities” can be organized as a first step toward optimizing our data foundations.