Gartner’s statistic that between 70 and 80 per cent of Business Intelligence (BI) projects fail has been well publicised. Interestingly, despite this failure rate, Gartner’s latest report shows that worldwide revenue from BI platforms, analytic applications and performance management (PM) software reached $10.5 billion in 2010, an increase of 13.4 per cent from 2009. Further, BI spending has outpaced overall IT budget growth for several years, Gartner claims.
Clearly, BI as a technology is delivering value to organisations in today’s information-driven business environment, where data volumes are increasing exponentially. The importance of BI and analytics is independently corroborated by a survey conducted at the end of 2010 by MIT Sloan Management Review, in partnership with the IBM Institute for Business Value, across more than 30 industries and 100 countries: top-performing organisations use analytics five times more than lower performers, and the top performers unanimously agreed that the use of business information and analytics differentiated them within their industry.
Against this backdrop, while there are numerous reasons why BI projects fail, the most fundamental causes are:
• The big bang approach – There is a tendency to conceptualise and create projects that are large in scope, unrealistically ambitious and with long time scales. The fact is that business conditions change rapidly in today’s dynamic world, resulting in the corresponding business goals and objectives changing too.
• Historical experiences – IT departments often leverage personal relationships and historical experiences when making technology choices. However, two projects are seldom identical in characteristics and scope. This results in the wrong solution being selected for the job.
• Lack of proper solution architecture considerations – Many different types of BI and analytic technologies are available today, and their underlying computing components relate to each other differently because each solution is designed to meet different business objectives. For instance, solutions based on traditional row-oriented databases are well suited to storing data initially, but ill-suited to analysing it: they struggle with large data volumes, high query speeds and users’ need for immediate analysis. By contrast, solutions built on column databases are a compelling choice for high-volume analytics, and some combine column orientation with the ability to exploit knowledge about the data itself.
The explosion in the volume of data being collected by organisations, and the desire to mine it for business advantage, has been well documented. What may be less well understood is that machine-generated information – such as call detail records, Web and event logs, financial trade data and RFID readings – is growing at the highest rate. It’s hardly surprising that rapidly expanding data volumes are outstripping many organisations’ ability to store and manage them.
Especially given the growth of machine-generated data, or ‘big data’, many organisations now appreciate the rationale for a columnar approach to analytics. Instead of data being stored row by row, it is stored column by column. This allows more efficient data compression, because each column holds a single data type (whereas rows typically contain several), so compression can be optimised for each particular data type.
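The compression advantage of a columnar layout can be illustrated with a small sketch. This is purely an illustration, not Infobright’s actual engine: the toy table, its values and the use of general-purpose zlib compression are all assumptions made for the example.

```python
import zlib

# Toy "machine-generated" records: (timestamp, status, bytes_transferred).
rows = [(1000 + i, "OK" if i % 10 else "ERR", 512) for i in range(10_000)]

# Row-oriented layout: values of mixed types are interleaved record by record.
row_blob = "\n".join(",".join(map(str, r)) for r in rows).encode()

# Column-oriented layout: each column holds a single type, so runs of
# similar values sit next to each other and compress far better.
cols = list(zip(*rows))
col_blobs = ["\n".join(map(str, c)).encode() for c in cols]

row_compressed = len(zlib.compress(row_blob))
col_compressed = sum(len(zlib.compress(b)) for b in col_blobs)

print(f"row-store compressed:    {row_compressed} bytes")
print(f"column-store compressed: {col_compressed} bytes")
```

Even with a generic compressor, the column layout wins here because the constant and highly repetitive columns collapse to almost nothing; a real column store goes further by choosing a codec per data type.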
As most analytic queries only involve a subset of the columns in a table, a column database retrieves just the required data; this speeds up query time, while reducing disk I/O (input/output) and computing resources. This also means that enterprises don’t need as many servers or as much storage as is required with the more traditional systems, and hence helps to reduce infrastructure footprints that are extremely costly to scale, house, power and maintain.
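The I/O saving described above can be sketched with a simple in-memory model of a column store. The table contents and the `count_where` helper are hypothetical, chosen only to show that a query reads just the columns it names:

```python
# A column store keeps each column as its own contiguous structure.
table = {
    "timestamp": list(range(100_000)),
    "status":    ["OK"] * 99_000 + ["ERR"] * 1_000,
    "bytes":     [512] * 100_000,
}

def count_where(col_name, predicate):
    # Scans exactly one column; the other columns are never touched,
    # which is where the disk I/O saving comes from in a real column store.
    return sum(1 for v in table[col_name] if predicate(v))

errors = count_where("status", lambda s: s == "ERR")
print(errors)  # 1000
```

In a row store, the same query would have to read every row in full, dragging the unneeded timestamp and bytes values through memory along the way.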
The choice of solution must be driven by the objective, scope and business requirement. This will deliver a faster return on investment and create value within the organisation, ultimately helping to meet business goals.
For further media information, please contact:
Tel: 07958 474 632
Infobright’s high-performance database is the preferred choice for applications and data marts that analyse large volumes of “machine-generated data” such as Web data, network logs, telecom records, stock tick data and sensor data. Easy to implement and with unmatched data compression, operational simplicity and low cost, Infobright is being used by enterprises, SaaS and software companies in online businesses, telecommunications, financial services and other industries to provide rapid access to critical business data. For more information, please visit www.infobright.com or join our open source community at www.infobright.org.
This press release was distributed by ResponseSource Press Release Wire on behalf of Clearview Communications in the following categories: Business & Finance, Computing & Telecoms, for more information visit https://pressreleasewire.responsesource.com/about.