Business intelligence software is a set of tools used by companies to retrieve, analyze, and transform data into meaningful information. Examples of business intelligence tools include data visualization, data warehousing, dashboards, and reporting.
Covering a range of technologies, business intelligence (BI) loosely refers to tools that retrieve, analyze, and transform data into meaningful information that helps businesses make more intelligent decisions. In contrast to competitive intelligence, business intelligence software pulls from internal data that the business produces, rather than from outside sources.
The term business intelligence started being used sometime around the late 1950s, and grew from a set of technologies called decision support systems. It’s fitting to consider business intelligence in relation to decision support systems, because that’s exactly what business intelligence does: it helps businesses gain a competitive edge by supporting and improving their decisions with relevant, insightful information.
The rise in popularity of BI software is closely linked to the rise of “Big Data.” As technology has progressed and more activities have shifted to the Internet, it has become possible to track and compile behavioral data like never before. And not just human data, but market data, environmental data, and more. By 2018, it’s predicted that big data will be a $20.8 billion market.
To make informed choices, businesses need to base their decisions on evidence. The mountains of data that businesses - not to mention their customers - are producing contain evidence of purchasing patterns and market trends. And thus, business intelligence was born. As mentioned before, BI has been around for a while, usually in the form of quarterly or yearly reports. However, the business intelligence we’re referring to happens at light speed, and can help a company choose a course of action in a matter of minutes.
In the Information Age, everyone produces data. Walmart handles more than 1 million customer transactions per hour. IDC estimates that by 2020, online business-to-business (B2B) and business-to-consumer (B2C) transactions will exceed $450 billion a day. Answers and insight live inside that data, waiting to be uncovered. The businesses that harness that intelligence first, will gain a competitive advantage by predicting customer behavior, forecasting market trends, and outsmarting their rivals.
BI software interprets a sea of quantifiable customer and business actions and returns queries based on patterns in the data. BI comes in many forms, and spans many different types of technology. For the purpose of this guide, we’ll look at three main areas to which BI can be applied, and examine the tools used for each.
Data lives in a number of systems throughout an organization. For example, large enterprises could have information about their customers in their customer relationship management (CRM) application, and have financial data in their enterprise resource planning (ERP) application. The most common first step in utilizing BI is often taking an inventory of all the data your business produces.
Best BI Software (By Category)
| Self-Service | Data Visualization | Data Warehousing | BI Platforms |
| --- | --- | --- | --- |
| SAP Crystal Reports | iDashboards | Sisense | Tableau |
Business intelligence combines disparate data sources into one database by building a data warehouse. Data warehouses act as a central repository for data to be queried and analyzed by other BI applications. Using the extract, transform, and load method, data warehouses aggregate data from across an organization and make it easier for other applications to quickly access them.
Analytics and reporting tools can still function without data warehouses, but running reports through CRM software, or even point of sale (POS) software not only limits the focus of the intelligence, it also negatively affects the performance of those applications. Also, the data in these systems exist in different formats, making it exceptionally difficult to draw conclusions and identify patterns without restructuring the data into a common format and housing it in a common area.
Data are stored in a data warehouse in dimensions and facts. Facts represent numbers for a specific action, like the sales of a widget. Dimensions give context to facts by adding dates and locations. For instance, dimensions could break apart the sales of a widget by month or year, making queries easier to perform.
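As a sketch, the fact/dimension split can be modeled with a tiny star schema. The table names, columns, and sales figures below are invented for illustration:

```python
import sqlite3

# A minimal star schema: a fact table of widget sales plus a date dimension.
# Table and column names here are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE fact_sales (date_id INTEGER, units_sold INTEGER, revenue REAL);
""")
conn.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                 [(1, 2014, 1), (2, 2014, 2), (3, 2014, 2)])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 10, 100.0), (2, 7, 70.0), (3, 5, 50.0)])

# The dimension gives context to the facts: break widget sales apart by month.
rows = conn.execute("""
    SELECT d.year, d.month, SUM(f.units_sold), SUM(f.revenue)
    FROM fact_sales f JOIN dim_date d ON f.date_id = d.date_id
    GROUP BY d.year, d.month ORDER BY d.year, d.month
""").fetchall()
print(rows)  # [(2014, 1, 10, 100.0), (2014, 2, 12, 120.0)]
```

Because the numeric measures live in one fact table and the context lives in dimension tables, the same `GROUP BY` pattern answers “by month,” “by year,” or “by location” questions without restructuring the data.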
Essentially simpler, narrower versions of data warehouses, data marts focus on a specific subset of data instead of storing data from across the entire company. This could be data that are used frequently, or by only one department. Data marts are cheaper to implement than data warehouses and could provide non-IT staff with a better user experience by limiting the complexity of the database.
Extract, Transform and Load (ETL)
Named for the process by which data is transferred into a data warehouse, ETL applications are for normalizing data in a central location. ETL software can be included with data warehouse software or be purchased as an add-on application. Let’s examine each letter in ETL:
Extract: Often the most difficult aspect of the process, the degree of success with which data are extracted from their source systems - ERP or CRM systems, for example - influences the success of the rest of the process. Often data are unstructured, meaning they aren’t formatted to fit neatly into rows and columns, which makes them more difficult to analyze once they’ve been stored in a data warehouse. Tagging unstructured data with metadata -- information about the author, type of content, and so on -- can make it easier to find once it’s been extracted.
Transform: To prepare the data for storage in the data warehouse, the second stage of ETL applies rules to incoming data in order to “clean” or normalize it. For analyses to work properly, data must exist in the same format - think apples to apples - or else the queries won’t be accurate.
Load: Now that the data have been extracted from their source systems and normalized through the transform phase, they’re ready to be loaded into the central database, most commonly the data warehouse. Load frequencies vary by organization: some businesses load new data weekly, while others do it daily.
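The three steps above can be sketched end to end. The source “systems” here are just CSV exports, and the file layouts, field names, and date formats are all invented for illustration:

```python
import csv
import io
import sqlite3
from datetime import datetime

# Toy ETL run: extract order rows from two source "systems" (CSV exports here),
# transform them into one common format, and load them into a warehouse table.
crm_csv = "customer,signup\nAda,2014-03-01\nBob,2014-04-15\n"
erp_csv = "customer;signup\nCleo;15/05/2014\n"  # different delimiter and date format

def extract(text, delimiter):
    """Extract: pull raw rows out of a source system's export."""
    return list(csv.DictReader(io.StringIO(text), delimiter=delimiter))

def transform(row, date_format):
    """Transform: normalize every source's date to ISO 8601 -- apples to apples."""
    signup = datetime.strptime(row["signup"], date_format).date().isoformat()
    return (row["customer"], signup)

# Load: write the normalized rows into the central database.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE customers (name TEXT, signup_date TEXT)")
rows = [transform(r, "%Y-%m-%d") for r in extract(crm_csv, ",")]
rows += [transform(r, "%d/%m/%Y") for r in extract(erp_csv, ";")]
warehouse.executemany("INSERT INTO customers VALUES (?, ?)", rows)
result = warehouse.execute("SELECT * FROM customers ORDER BY signup_date").fetchall()
print(result)  # [('Ada', '2014-03-01'), ('Bob', '2014-04-15'), ('Cleo', '2014-05-15')]
```

Once both sources share one date format and one table, a single query can compare records that were previously trapped in incompatible systems.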
A very popular data storage framework, Hadoop is an infrastructure for storing and processing large sets of data. Though Hadoop stores data, it does so differently from a traditional data warehouse: it uses a cluster system -- the Hadoop Distributed File System, or HDFS -- that lets users store files across multiple servers.
Regardless of whether businesses choose to store their data in a data warehouse or run queries on the source system, the analysis part of business intelligence is what produces the insight that makes the entire field so appealing. Analytics technologies vary in terms of complexity, but the general method of combining large amounts of normalized data to identify patterns remains consistent across platforms.
Also known as “data discovery,” data mining involves automated and semi-automated analyses of sometimes large sets of data to uncover patterns and inconsistencies. Common correlations drawn from data mining include grouping specific sets of data, finding outliers in data, and drawing connections or dependencies from disparate data sets.
Data mining often uncovers the patterns used in more complex analyses, like predictive modeling, which makes it an essential part of the BI process. Indeed, it could be argued that all the “intelligence” in business intelligence is derived from data mining.
Of the standard processes performed by data mining, association rule learning presents the greatest benefit. By examining data to draw dependencies and construct correlations, the association rule can help businesses better understand the way customers interact with their website or even what factors influence their purchasing behavior.
Association rule learning was originally introduced to uncover connections between purchase data recorded in point of sale systems at supermarkets. For example, if a customer bought ketchup and cheese, association rules would likely uncover that that customer was purchasing hamburger meat as well. While this is a simplistic example, it works to illustrate a type of analysis that now connects incredibly complex chains of events, and helps users find correlations that would have stayed hidden otherwise.
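The supermarket example boils down to a simple confidence calculation: of the baskets containing the antecedent items, what fraction also contain the consequent? The basket data below is invented to match the text’s example:

```python
# Hypothetical point-of-sale baskets, echoing the ketchup/cheese example above.
baskets = [
    {"ketchup", "cheese", "hamburger meat"},
    {"ketchup", "cheese", "hamburger meat", "soda"},
    {"ketchup", "bread"},
    {"cheese", "crackers"},
    {"ketchup", "cheese", "hamburger meat", "buns"},
]

def confidence(antecedent, consequent):
    # confidence(A -> B): of the baskets containing A, how many also contain B?
    has_a = [b for b in baskets if antecedent <= b]
    return sum(consequent <= b for b in has_a) / len(has_a)

conf = confidence({"ketchup", "cheese"}, {"hamburger meat"})
print(conf)  # 1.0 -- every ketchup+cheese basket here also held hamburger meat
```

Real association-rule miners (such as the Apriori algorithm) search over all item combinations and filter rules by support and confidence thresholds, but the measure they report is this same conditional frequency.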
Perhaps one of the most exciting aspects of BI, predictive analytics applications function as an advanced subset of data mining. As the name suggests, predictive analytics forecasts future events based on current and historical data. By drawing connections between data sets, these software applications predict the likelihood of future events, which can lead to a huge competitive advantage for businesses.
Predictive analysis involves very detailed modeling, and even ventures into the realm of machine learning, where software actually learns from past events to predict future consequences. For our purposes, let’s focus on the three main forms of predictive analysis:
The most well-known segment of predictive analytics, this type of software does what its name implies: it predicts, particularly in reference to a single element. Predictive models search for correlations between a particular unit of measurement and one or more features pertaining to that unit. The goal is to find the same correlation across different data sets.
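A minimal sketch of the idea: an ordinary least-squares fit of one feature against one unit of measurement, followed by a forecast for an unseen value. The ad-spend and sales numbers are invented for illustration:

```python
# Fit a line to an invented feature (ad spend) vs. the unit being predicted
# (widget sales), then forecast sales at an unseen spend level.
ad_spend = [1.0, 2.0, 3.0, 4.0]   # feature
sales    = [3.0, 5.0, 7.0, 9.0]   # unit of measurement being predicted

n = len(ad_spend)
mean_x = sum(ad_spend) / n
mean_y = sum(sales) / n

# Ordinary least squares: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(ad_spend, sales)) \
        / sum((x - mean_x) ** 2 for x in ad_spend)
intercept = mean_y - slope * mean_x

predicted = slope * 5.0 + intercept  # forecast for a spend level not yet observed
print(slope, intercept, predicted)  # 2.0 1.0 11.0
```

Production predictive models use many features and far more sophisticated algorithms, but the shape is the same: learn a correlation from historical data, then apply it to new inputs.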
Whereas predictive modeling searches for a single correlation between a unit and its features – in order to predict the likelihood of a customer switching insurance providers for example – descriptive modeling seeks to reduce data into manageable sizes and groupings. Descriptive analytics works well for helping to summarize information, such as unique page views or social media mentions.
Decision analytics takes into account all the factors related to a particular decision, predicting the cascading effect a particular action will have across all the variables involved in making that decision. In other words, decision analytics gives businesses the concrete information they need to take action.
The processing arm of the Hadoop framework, MapReduce processes data in its storage location rather than transporting the data across a server to the location of the processing software. MapReduce then transfers only the finished analysis -- a much smaller file than the large datasets it analyzed -- back to the software location for reporting. And because Hadoop works as a cluster system, MapReduce can analyze data across multiple servers.
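The classic illustration of the map/shuffle/reduce split is a word count, sketched here in-process. In a real Hadoop job, the same map and reduce functions would run on the cluster nodes that hold the data blocks, and only the small result set would travel:

```python
from collections import defaultdict

# Toy input "blocks"; in Hadoop these would live on different HDFS nodes.
documents = ["big data big insight", "data drives decisions", "big decisions"]

def map_phase(doc):
    # Map: emit a (key, value) pair for each word, locally on each node.
    for word in doc.split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values emitted under the same key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: collapse each key's values into one small result.
    return {key: sum(values) for key, values in grouped.items()}

pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"], counts["data"], counts["decisions"])  # 3 2 2
```

The payoff is in the sizes: the intermediate pairs scale with the data, but the reduced output (one count per distinct word) is tiny, which is exactly why MapReduce ships results rather than raw data.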
Synonymous with text mining, text analytics software combs unstructured data to find patterns hidden within large sets of text data. This type of data is usually difficult to analyze with traditional mining methods. Text analytics are particularly interesting for businesses that work with social media. Using the right software, a business can set up a rule for the software to track certain words or phrases – a business’s name for example – to find patterns in how they’re being mentioned.
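A toy version of such a mention-tracking rule is sketched below; the posts and the “Acme” brand name are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical social media posts to scan for a business's name.
posts = [
    "Love the new Acme widget, great battery life",
    "acme support was slow today",
    "Is the Acme widget worth it?",
]

# The tracking rule: match the brand name as a whole word, any capitalization.
rule = re.compile(r"\bacme\b", re.IGNORECASE)

context = Counter()
for post in posts:
    if rule.search(post):
        # Count the words that co-occur with the brand mention.
        context.update(w.lower().strip("?,.") for w in post.split())

print(context["widget"])  # 2 -- "widget" appears alongside the brand twice
```

Real text analytics platforms add tokenization, sentiment scoring, and entity recognition on top, but the core pattern is the same: a rule selects relevant unstructured text, and aggregation turns the matches into countable structure.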
Data comes in three main forms: structured, semistructured, and unstructured. Unstructured data is the most common, and includes text documents and other types of files that don’t have an easily readable format (for a computer, at least). It’s widely accepted that the vast majority of data that businesses produce - as much as 85 percent - comes in an unstructured form.
Unstructured data can’t be stored in rows or columns, which makes it impossible for traditional data mining software to analyze. However, utilizing this data is often crucial to figuring out how to move forward. With so much data stored in unstructured form, text analytics should be a key consideration when trying to find the best business intelligence software.
The previous two applications dealt with the mechanics of business intelligence: how business data are stored, and how those data are refined into meaningful intelligence. Business intelligence reporting focuses on the presentation of these findings.
Online Analytical Processing
Most often used with multidimensional databases, online analytical processing (OLAP) enables users to query data warehouses and create reports that view data from multiple perspectives, say by monthly sales or by number of transactions for a particular item.
OLAP allows users to interact with data in three ways: consolidation, drill-down, and slicing and dicing. Consolidation gathers data from multiple dimensions and helps users anticipate trends. Drill-down, by contrast, navigates down into more specific areas of analysis. Finally, OLAP’s slice-and-dice functionality lets BI professionals exclude and include certain data in their analysis.
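Two of these interactions can be sketched against a tiny in-memory “cube.” The dimensions (region, month, product) and sales figures below are invented:

```python
from collections import defaultdict

# A toy sales cube keyed by (region, month, product); values are units sold.
cube = {
    ("East", "Jan", "widget"): 10, ("East", "Feb", "widget"): 12,
    ("West", "Jan", "widget"): 8,  ("West", "Jan", "gadget"): 5,
    ("West", "Feb", "gadget"): 7,
}

def consolidate(dimension):
    # Consolidation: roll the data up along one dimension
    # (0 = region, 1 = month, 2 = product).
    totals = defaultdict(int)
    for key, value in cube.items():
        totals[key[dimension]] += value
    return dict(totals)

def slice_cube(dimension, member):
    # Slicing: keep only the cells matching one member of a dimension.
    return {k: v for k, v in cube.items() if k[dimension] == member}

monthly = consolidate(1)
gadget_units = sum(slice_cube(2, "gadget").values())
print(monthly, gadget_units)  # {'Jan': 23, 'Feb': 19} 12
```

Drill-down is the reverse of consolidation: starting from the monthly totals, a user would expand back out into the per-region, per-product cells that produced them.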
One of the more popular trends in BI, data visualization allows companies to graphically display the results of data mining or other analytics. As part of a broader shift towards better BI usability, the data visualization UX may become a larger factor in the software purchasing decision.
Another, albeit narrower, form of data visualization, dashboard functionality refers to the interface that represents specific analyses. Dashboard software is another segment of business intelligence software that’s growing in popularity due to demand for better BI interfaces.
The state of business intelligence is changing. Far from a misunderstood buzzword, BI is being implemented in a number of different organizations to great effect.
Another survey by InformationWeek found:
Forrester’s research provides some insight into how successful businesses have been in implementing BI:
So while organizations continue to adopt business intelligence, the percentage that have derived a perceivable competitive advantage from it remains low.
One of the most difficult parts of implementing business intelligence lies in finding the proper expertise: 47 percent of the InformationWeek respondents cited finding employees with the right data skills as their biggest holdup to implementation.
In-Memory Processing
In-memory processing reads information from RAM instead of from disk. Accessing information this way improves application performance dramatically, sometimes by a factor of hundreds. The increasing amounts of RAM in our computing environments, coupled with the demand for more agile systems, suggest that in-memory processing software will hold a large stake in the future of BI.
In-memory processing was originally introduced in the 1990s, but dramatic drops in memory prices are making it a more popular alternative to running analyses through multidimensional databases and cubes.
Usability and Visualization
More and more, BI users aren’t IT staff; they’re employees with a standard amount of technological savvy who want to harness the power of BI to gain a competitive advantage. Consequently, reporting mechanisms and analytics functions are being designed with a lower barrier to entry. It’s no longer enough to have excellent analysis or data warehousing features; they must also be usable by employees who fall somewhere between the BI layperson and the data expert in the IT department.
In 2013, many of the major BI vendors - SAP, IBM, Microsoft, and SAS - responded to the rise of new, smaller companies offering easy-to-use visual functionality by totally redesigning their interfaces. Further, TDWI’s research found that 63 percent of the time, big data analysis is used by departments other than IT. Several vendors specialize in the 'self-service' BI space, including Tableau and TIBCO Spotfire, which we compare in our post: Tableau vs Spotfire.
Business intelligence is great for sifting through data to find patterns or insights about your customers. The key is centralizing the information in a data warehouse before it’s analyzed, so the data are cleaned and formatted properly.
O2 Ireland, a cell phone carrier, noticed that a number of its customers would buy pre-paid SIM cards and leave the country a few days after the purchase. From a business perspective, the company wanted to get the most out of its relationships with its customers, but it needed to know which ones to target.
O2 Ireland had the same data problems that plague many businesses today: a number of systems were capturing data, but the data wasn’t unified, and had little overall management.
“The upshot was a very high-cost IT infrastructure, decisions that didn't make sense, and data latency. An event may be analysed up to 10 days after it happened, by which time the opportunity to do anything was very limited,” said head of business intelligence Peter McKenna.
O2’s first step was to create a central data environment, i.e., a data warehouse. Using Teradata as a vendor, O2 funnelled all of its disparate data into one application, making it simpler to perform queries and analysis. Once the data warehouse was built, O2 turned to Cognos for its business intelligence tool.
By running a series of analyses on the now-centralized data, O2 was able to segment the 65 percent of its customers who stayed in the country after purchasing SIM cards, and who therefore merited a heavier investment than the other 35 percent who left the country shortly after their purchase.
Based on this intelligence, O2 was able to launch location-specific marketing that’s been successful in driving foot traffic to brick-and-mortar locations during peak buying periods, such as Christmas. O2 is an excellent example of implementing business intelligence with a business goal in mind. Instead of focusing solely on the technology, the company focused on what the technology could do for its business.