What are the pitfalls?
Do you know where your data is?
It's no use setting up a big-data product for analysis only to realise that critical data is spread across the organisation in inaccessible and possibly unknown locations.
As mentioned earlier, QlikView's VP of global field marketing, Henry Sedden, said that most companies aren't on top of the data inside their organisations, and would struggle if they tried to analyse additional data in pursuit of the big-data ideal.
A lack of direction
According to IDC, the big-data market is expected to grow from US$3.2 billion in 2010 to US$16.9 billion in 2015, a compound annual growth rate (CAGR) of 40 per cent and about seven times the growth rate of the overall ICT market.
Unfortunately, Gartner predicted that through 2015, more than 85 per cent of Fortune 500 organisations will fail to exploit big data to gain a competitive advantage. The firm said:
"Collecting and analysing the data is not enough; it must be presented in a timely fashion, so that decisions are made as a direct consequence that have a material impact on the productivity, profitability or efficiency of the organisation. Most organisations are ill prepared to address both the technical and management challenges posed by big data; as a direct result, few will be able to effectively exploit this trend for competitive advantage."
Unless firms know what questions they want to answer and what business objectives they hope to achieve, big-data projects just won't bear fruit, according to commentators.
Ovum advised in its report "2012 Trends to Watch: Big Data" that firms should not analyse data just because it's there, but should build a business case for doing so.
"Look to existing business issues, such as maximising customer retention or improving operational efficiency, and determine whether expanding and deepening the scope of the analytics will deliver tangible business value," Ovum said.
The skills shortage
Even if a company decides to go down the big-data path, it may be difficult to hire the right people.
According to Australian research firm Longhaus:
The data scientist requires a unique blend of skills, including a strong statistical and mathematical background, a good command of statistical tools such as SAS, SPSS or the open-source R and an ability to detect patterns in data (like a data-mining specialist), all backed by the domain knowledge and communications skills to understand what to look for and how to deliver it.
This is already proving to be a rare combination: according to McKinsey, the United States faces a shortage of 140,000 to 190,000 people with deep analytical skills, as well as 1.5 million managers and analysts capable of using big-data analysis to make decisions.
It's important for staff members to know what they're doing, according to Stuart Long, chief technology officer of Systems at Oracle Asia Pacific.
"[Big data] creates a relationship, and then it's up to you to determine whether that relationship is statistically valid or not," he said.
"The amount of permutations and possibilities you can start to do means that a lot of people can start to spin their wheels. Understanding what you're looking for is the key."
Data scientist DJ Patil, who until last year was LinkedIn's head of data products, said in his paper "Building data science teams" that he looks for people who have technical expertise in a scientific discipline; the curiosity to work on a problem until they have a hypothesis that can be tested; the ability to use data to tell a story; and the cleverness to look at a problem in different ways.
He said that companies will either need to hire people who have a history of playing with data to create something new, or hire graduates straight out of university and put them into an intern program. He also believes in using competitions to attract data scientist hires.
Privacy
Tracking individuals' data in order to sell to them more effectively will be attractive to a company, but not necessarily to the consumer being sold the products. Not everyone wants their lives analysed, and depending on how privacy regulations develop, which is likely to vary from country to country, companies will need to be careful about how invasive their big-data efforts are, including how they collect data. Regulations could lead to fines for invasive policies, but perhaps the greater risk is loss of trust.
One illustration of the distrust that can arise from companies mining data about people's lives is the famous Target example, where the company sent a teenager coupons for pregnancy-related products because, based on her purchasing behaviour, Target's algorithms believed her to be pregnant. The teenager's father had no idea about the pregnancy, and he angrily confronted the company, only to admit later that his daughter actually was pregnant. Target subsequently acknowledged that people might feel their privacy has been invaded when buying data is used to work out that a customer is pregnant, and it was forced to change its coupon strategy as a result.
Security
Individuals trust companies to keep their data safe. However, because big data is such a new area, many products haven't been built with security in mind, even though the sheer volume of data stored means there is more at stake than ever before if data goes missing.
There have been a number of highly publicised data breaches in the last year or two, including the breach of hundreds of thousands of Nvidia customer accounts, millions of Sony customer accounts and hundreds of thousands of Telstra customer accounts. The Australian Government has been promising to consider data-breach notification laws since it conducted a privacy review in 2008, but, according to the Office of the Australian Information Commissioner (OAIC), the wait is almost over. The OAIC advised companies to prepare for a world in which they must notify customers when data is lost, and said that it would take a hard line on companies that are reckless with data.