The Challenges of Successfully Implementing a Big Data Program

Why enterprises need to be cognizant of the challenges in preparing and executing a Big Data Program to gain insights from their data.


With nearly half of companies worldwide investing in big data, it is clear that big data is changing the way organizations make some of their most important decisions.

A global report by EMC found that the benefits of a big data program, including greater efficiency, cost reduction and improved customer experience, are so valuable that 65% of C-level executives believe they risk becoming irrelevant and/or uncompetitive if they do not embrace big data.

So, how damaging is it to not have a big data program? IBM found that poor data can actually cost businesses an average of 20-35% of operating revenue.

 

Managing Volume, Velocity, & Variety

Much of big data’s value comes from its ability to support proactive rather than reactive decision-making. Proactive decision-making better informs strategy, product innovation and the customer experience.

However, big data programs are not without their challenges. The sheer magnitude of data and a shortage of the right talent are creating significant long-term challenges.

Volume, velocity and variety are central to big data, offering countless opportunities for success, but each also brings its own structural hurdles.

 

Volume

IDC reports that about 2.7 zettabytes of data exist worldwide today. For enterprises, this growth is driven by rising transaction volumes and traditional data types, as well as an influx of new types of data.

All of this new data gives businesses enormous potential to discover new patterns and predictions, but the sheer volume of data is extremely challenging to manage.

This level of volume creates several issues, especially as it relates to storage, analysis and processing. When data isn’t processed quickly enough, companies miss out on the real-time benefits data can provide.

This massive increase in volume has created an immediate need for new approaches in data management. As long as volume continues to increase without proper management, companies will fail to remain competitive and continue to spend money in areas that may hold less value.
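To make the storage and processing challenge concrete, here is a minimal, hypothetical sketch of one common volume-management tactic: processing a large dataset in bounded chunks instead of loading it into memory all at once. The file name and column names are assumptions made for illustration only.

```python
# Minimal sketch: summarizing a large CSV in fixed-size chunks instead of loading it all at once.
# The file name "transactions.csv" and its columns ("region", "amount") are hypothetical.
import pandas as pd

totals = {}

# Stream the file one million rows at a time so memory use stays bounded
# even when the full dataset is far larger than available RAM.
for chunk in pd.read_csv("transactions.csv", chunksize=1_000_000):
    grouped = chunk.groupby("region")["amount"].sum()
    for region, amount in grouped.items():
        totals[region] = totals.get(region, 0.0) + float(amount)

print(totals)
```

The same idea, partitioning work so that no single step has to hold the full dataset, underlies most large-scale storage and processing approaches.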

 

Velocity

With data coming in at such a rapid rate, the problem becomes how that data can be processed effectively.

Exactly how fast is data being produced? Gartner reports that the average organization’s data is growing at a rate of 40-60% each year. For greater perspective, consider the fact that 90% of today’s data was created just within the past 2-3 years.

Analyzing data in real time is essential to meeting customer needs through product innovation, creating personalized offers, and providing excellent customer service.

However, adequately managing an already massive, and still growing, amount of data is a huge challenge.

For instance, historical data streams can be analyzed and used to improve business initiatives, but the data often arrives faster than it can be processed. As a result, proactive decision-making becomes nearly impossible and companies fall back on reactive decision-making.
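As a purely illustrative sketch of velocity-oriented processing, the snippet below keeps a rolling one-minute summary that is updated as each event arrives, rather than waiting for a later batch job. The event source here is a hypothetical stand-in for a real stream consumer such as a message-queue client.

```python
# Minimal sketch: a rolling one-minute event count maintained as data arrives,
# so the "right now" picture is always available instead of waiting for a batch job.
import time
from collections import deque

def events():
    """Hypothetical stand-in for a real stream consumer (e.g. a message queue client)."""
    while True:
        yield (time.time(), 1.0)
        time.sleep(0.1)

WINDOW_SECONDS = 60
window = deque()  # (timestamp, value) pairs currently inside the rolling window

for ts, value in events():
    window.append((ts, value))
    # Evict events that have aged out of the one-minute window.
    while window and window[0][0] < ts - WINDOW_SECONDS:
        window.popleft()
    events_per_second = len(window) / WINDOW_SECONDS
    # In a real system this figure would feed an alert, a dashboard, or a model.
```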

 

Variety

The obvious goal of examining massive amounts of data is to help companies make better decisions. However, the fact that much of this data is stored in multiple locations and formats creates yet another hurdle.

Media files, social media interactions, sensor data, and encrypted packets all hold troves of valuable information, but much of this data is unstructured and incredibly varied. This raises the question of how the data can be standardized and consolidated, a question that is often answered only with a heavy allocation of time and resources.

In fact, the Harvard Business Review found that 80% of analysts’ time is spent simply discovering and preparing data, and that more than half of analysts consider this task more challenging than the sheer volume of data itself.

Additionally, variety makes correlation of data difficult, which can lead to analytical instability and the lack of a “big picture” view.
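To illustrate what standardizing and consolidating varied data can involve, the sketch below maps two hypothetical sources, a JSON social-media feed and a CSV support-ticket export, onto one common schema. All field names are assumptions made for this example.

```python
# Minimal sketch: mapping two hypothetical data sources onto one common schema
# so they can be stored and analyzed together. All field names are assumptions.
import csv
import json
from io import StringIO

def from_social_json(raw):
    """Map a JSON social-media interaction onto the common schema."""
    record = json.loads(raw)
    return {
        "customer_id": record.get("user", {}).get("id"),
        "timestamp": record.get("created_at"),
        "channel": "social",
        "text": record.get("message", ""),
    }

def from_support_csv(raw):
    """Map rows of a CSV support-ticket export onto the common schema."""
    return [
        {
            "customer_id": row["customer"],
            "timestamp": row["opened"],
            "channel": "support",
            "text": row["summary"],
        }
        for row in csv.DictReader(StringIO(raw))
    ]

consolidated = [from_social_json('{"user": {"id": "42"}, "created_at": "2017-05-01", "message": "love it"}')]
consolidated += from_support_csv("customer,opened,summary\n42,2017-05-02,billing question\n")
print(consolidated)
```

In practice, this kind of mapping work, multiplied across many more sources and formats, makes up a large part of the discovery and preparation effort described above.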

The Need For a Diverse Skillset

Many of the obstacles involved in building and managing a big data program are new challenges, which require a unique and diverse set of skills and knowledge.

Big data analysts are expected to command a wide variety of skills, spanning everything from programming and data warehousing to computational frameworks and machine learning.

There is no exact “data scientist” skillset since big data is a relatively new concept; most data analysts bring different skills from a variety of professional backgrounds. Each company’s big data is different, and each requires a different mix of skills. On top of finding a person whose skillset is the right fit, a big data scientist must also be analytical and have a mind for business. The range of tools and technologies required across the various phases of big data processing is massively diverse and ever-expanding, which only magnifies the challenge.

The Talent Shortage

There is huge demand for big data talent, but a limited supply. In fact, supply is so limited that nearly half of companies report that finding and retaining data analysts is among their greatest challenges, and demand is only increasing.

IBM predicts that annual demand for the roles of data scientist, data developer and data engineer will reach nearly 700,000 openings by 2020.

This surge in demand is creating a significant talent shortage. In fact, according to IBM, data science and analytics (DSA) jobs remain open an average of 45 days, which is five days longer than the market average. And, once filled, these roles are becoming increasingly costly to companies.

Will this shortage level out in the near future? It is not likely. While there are now more than 100 analytics and data science programs at US universities, they are still not producing enough talent to meet this rapidly growing demand.

The limited number of big data analysts, coupled with the outrageously high demand, makes finding the right employee extremely difficult. Furthermore, high turnover rates interrupt the process of building a consistent big data program.

The pragmatic style of software engineering required to successfully execute a big data project is unconventional and a departure from traditional IT project execution. This makes training and retooling existing resources for big data technologies difficult.

Conclusion

Big data holds enormous potential to create opportunities for both improved and more proactive decision-making, which can lead to reduced costs, higher efficiency and a competitive advantage.

The evidence of big data’s advantages is well documented. However, the question that remains unanswered for many companies is how to get started, and how best to manage and control big data in order to fully realize its benefits.



Ready for a Big Data Project… TechO2 can be your success enabler!


contact us