DIY BI? Don’t forget the data! And know where you are headed.
Is self-service BI really here, empowering non-technical business users to find the answers they want, when they want them? Steve Sydee thinks it isn’t – yet – and looks at where DIY BI could fit, and when.
Getting the right picture
Fuelled by users’ clamour for ever more insight and visualisations, and the inevitable inability of internal IT departments to meet this demand, more and more organisations are adopting self-service BI and analytics tools. This is disrupting traditional BI and data warehouse models, moving the “creation of BI” from the IT department to the business. Vendors of self-service BI tools claim this approach enables end-users to create the queries, reports and visualisations they want, while freeing up IT to focus on other tasks – thus benefiting both groups.
However, getting great visualisations is not an end in itself, as BI experts like IMGROUP regularly point out – see ‘Lipstick on a pig?’. Understanding the data, and making sure it is trusted, is key. So adopting these tools requires a clear understanding of how to support their use – and not only in terms of handling the “how do I use it?” calls to the helpdesk. There also needs to be a solid architecture for analytics, and this is not something that users – or, in many cases, in-house IT departments – can set up optimally, or even satisfactorily. Reliable, consistent and integrated data is key to trusted insight.
Industry watchers such as Gartner have also spotted this key issue around infrastructure and data-quality governance: “Through 2016, less than 10 percent of self-service BI initiatives will be governed sufficiently to prevent inconsistencies that adversely affect the business,” they said in a press release of 27/1/15. So is that an example of DIY BI insight, or a warning of a lack of foresight?
Getting data into the picture
In the 2014 Magic Quadrant for Business Intelligence and Analytics Platforms, Gartner explains that although BI tools with data discovery abilities will become a top priority over the next few years, few of the BI vendors offering tools that enable end-users to find and combine data as they need it also provide IT and BI teams with effective data governance and security. As a result, self-service BI tools are creating the very thing they were meant to circumvent – tension between the business and IT.
The report goes on: “These innovations have the potential to expand access to sophisticated interactive analysis and insights to business consumers and non-traditional BI users — the approximately 70% of users in organizations that currently do not use BI tools or have statistical backgrounds”. And in the press release of 27/1/15 they put a target date for this to start to become a reality: “By 2017, most business users and analysts in organizations will have access to self-service tools to prepare data for analysis”.
So, the challenges are clear:
First, within a few years, end-users will have access to great visualisation tools, and the ability to find and combine data sets to some degree, but who will provide the data scrutiny, governance, statistical validity or security to ensure that any insight gleaned is actually meaningful and useful as a basis for key business decision-making?
Second, these data discovery and preparation tools will probably be perfectly acceptable over limited datasets, but who will provide the same service over the wide datasets that are vital to true corporate insight?
Getting the right data into the picture
This future generation of “smart” data preparation and data discovery tools will certainly enable a step-change in the average user’s ability to create more meaningful insight. But it is not a replacement for a scalable and integrated enterprise information management capability. Going back to IMGROUP’s ‘Lipstick on a pig’ metaphor, this is just another shade of lipstick, this time provided in easy-to-apply packaging… underneath the glossy picture, the data is still the pig it always was, and will need sorting out properly.
To understand what may be possible in the near-to-mid term, let’s look at a couple of scenarios.
The first is the number of projects currently looking to “industrialise” local or niche solutions – for instance, clickstream analysis. The development of these applications usually follows an opportunistic evolution path, with a clearly defined and limited data set and a clearly defined and limited set of outcomes – based on certain sites, hierarchies and measures. Over time, site usage grows to include new user groups who were not the originally intended audience; they extend the requirement, so the sites become more complex and outcomes need to be measured at a more granular level.
In this example it is possible to see how tools which can identify data sources, compare semantics, match and combine them, and suggest ways of looking at that data based on natural-language instruction could, within just a few years, work effectively for the initial, clearly focused opportunistic solution. But will this last the course of the evolution – or might it start to slow it down?
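To make the “compare semantics, match and combine” idea concrete, here is a minimal sketch of the kind of column matching such tools attempt under the hood – pairing fields from two datasets by name similarity. The column names and the 0.7 threshold are illustrative assumptions, not taken from any vendor’s product:

```python
from difflib import SequenceMatcher

def match_columns(left_cols, right_cols, threshold=0.7):
    """Pair up columns from two datasets by (case-insensitive) name similarity."""
    matches = []
    for left in left_cols:
        # Pick the closest-named column on the right-hand side.
        best = max(right_cols,
                   key=lambda r: SequenceMatcher(None, left.lower(), r.lower()).ratio())
        score = SequenceMatcher(None, left.lower(), best.lower()).ratio()
        if score >= threshold:
            matches.append((left, best, round(score, 2)))
    return matches

# A hypothetical clickstream feed matched against a CRM extract:
print(match_columns(["customer_id", "page_url", "visit_ts"],
                    ["CustomerID", "URL", "timestamp"]))
```

Note that only `customer_id`/`CustomerID` clears the threshold here – which is exactly the article’s point: name-based matching works for obvious local cases, but semantically related fields with dissimilar names still need human governance.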
Many BI insight projects that firms like IMGROUP deliver serve a wide range of purposes across a wide set of user types, and with that comes the need to combine much wider datasets that, on the face of it, do not have a clear correlation. A good example is the increasing demand for in-depth insight across global territories, driving the need for a powerful and sustainable data backbone.
As in many projects of this nature, the insight needed in the future cannot usually be defined in advance, so as wide a set of meaningful data as possible needs to be assimilated, ready to support ad hoc analysis as and when required. The backbone should provide a set of common interfaces to which all of the various business systems – regional financial, operational and customer-facing systems, using different technologies, multiple protocol standards and a variety of ages – can connect. The data then needs to be assimilated, matched and prepared for analytics. It is hard to see how automated data assimilation tools, although possibly great at the local level, would enable this to happen consistently across the enterprise.
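One way to picture that “set of common interfaces” is an adapter layer: every source system, whatever its age or protocol, implements the same small contract, and the backbone assimilates records into one canonical shape before analytics. The names and fields below are purely illustrative, not from the article:

```python
from abc import ABC, abstractmethod

class SourceAdapter(ABC):
    """Contract every business system must implement to join the backbone."""

    @abstractmethod
    def extract(self):
        """Yield records as dicts in the source's native shape."""

    @abstractmethod
    def to_canonical(self, record):
        """Map a native record onto the backbone's canonical fields."""

class RegionalFinanceAdapter(SourceAdapter):
    """Hypothetical regional finance system with its own field names."""

    def extract(self):
        yield {"acct": "A-1", "amt_eur": 120.0}

    def to_canonical(self, record):
        return {"account_id": record["acct"],
                "amount": record["amt_eur"],
                "currency": "EUR"}

def assimilate(adapters):
    """Pull from every connected system through the shared interface."""
    for adapter in adapters:
        for record in adapter.extract():
            yield adapter.to_canonical(record)

print(list(assimilate([RegionalFinanceAdapter()])))
```

The hard part, as the article argues, is not the plumbing sketched here but writing and governing a correct `to_canonical` for every system, consistently, across the whole enterprise.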
So you need to know where you are likely to end up, before you start the journey.
Getting the balance right
The advent of great visualisation tools, and good data discovery and preparation applications, will be viewed as democratising BI and Insight. However, it is important to realise that, in the same way that statistics can be deliberately misused, insight can be unintentionally misinterpreted. And on those misinterpretations, incorrect and costly big decisions can be based. So there will always be a place in important BI projects for those underlying tasks including data integration, assimilation, preparation, quality, and statistical validity.
This article was first published here by Steve Sydee.