What data management leaders see for the sector in 2022

AI and machine learning will be working overtime in the data management space on numerous new use cases.
Written by Chris Preimesberger, Contributor

According to thought leaders in the data management sector, we'll be using some new terminology when talking about enterprise data in 2022.

CTOs and IT managers at all levels will be defining and testing terms like "data as code" and "just-in-time" data analytics for their own production use cases.

AI will be working overtime in the data management space, enabling call centers to mine more meaningful information from customer conversations, patching gaps in supply chains, and bolstering healthcare services, both locally and in the cloud.

Here are some cogent predictions about what we can expect to see on the data management side of IT in 2022:

We will begin to hear 'data as code' frequently

The Infrastructure-as-Code movement, in which infrastructure is defined and deployed automatically through code, is gaining traction. Infrastructure and applications, however, deliver little value without data. Organizations will need to be able to dynamically clone, distribute, and destroy data copies on demand, so they can develop, test, analyze, build AI/ML models, and meet regulatory requirements. As machines generate more data, IT organizations will not be able to manage data by hand. They will need to make data as dynamic and automatic as infrastructure and applications.

– Stephen Manley, CTO, Druva
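
Manley doesn't prescribe an implementation, but the idea maps naturally onto the reconcile loop that Infrastructure-as-Code tools use. Here is a minimal, hypothetical sketch in Python (all class, dataset, and field names are invented): declare the data copies you need, then let a controller clone what's missing and destroy what's stale.

```python
# A minimal "data as code" sketch (all names hypothetical): declare the
# data copies you need, then reconcile reality against the declaration,
# the same way Infrastructure-as-Code tools reconcile servers.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataCopy:
    source: str      # golden dataset to clone from
    purpose: str     # e.g. "test", "analytics", "ml-training"
    masked: bool     # scrub PII before handing the copy out

desired = {
    DataCopy("orders-prod", "test", masked=True),
    DataCopy("orders-prod", "ml-training", masked=True),
}

def reconcile(desired: set[DataCopy], existing: set[DataCopy]) -> None:
    for copy in desired - existing:
        print(f"clone {copy.source} for {copy.purpose} (masked={copy.masked})")
    for copy in existing - desired:
        print(f"destroy stale copy of {copy.source} ({copy.purpose})")

reconcile(desired, existing=set())
```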

AI will be reading between the lines with customers

A major trend in customer-service data management for 2022 will be the use of AI to unlock the data buried in all of the conversations that customers have with contact-center agents. The technology used to review these conversations has been in place for a while, but vendors in the space (Genesys, NICE, Twilio, Cisco) are all actively promoting it as a use case -- and even using it as a springboard to start providing solutions for sales and marketing. It's a big play for them. There have been a number of acquisitions in this sector, notably Twilio buying Segment for $3 billion.

– Max Ball, Principal Analyst, Forrester Research
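
None of the vendors' products are shown here, but as a generic illustration of the technique, conversation mining often starts with something as simple as scoring transcript snippets with an off-the-shelf sentiment model. A minimal sketch using the open-source Hugging Face transformers library (the transcripts are made up):

```python
# Generic illustration (not any vendor's product): score contact-center
# transcript snippets for sentiment with an off-the-shelf model.
from transformers import pipeline  # pip install transformers

classifier = pipeline("sentiment-analysis")

transcripts = [
    "I've been on hold for an hour and nobody can fix my billing issue.",
    "Thanks, the agent sorted out my upgrade in five minutes.",
]

for text, result in zip(transcripts, classifier(transcripts)):
    print(f"{result['label']:<8} {result['score']:.2f}  {text[:50]}")
```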

Supply-chain failures will fuel the meteoric rise of 'just-in-time' data analytics

Faced with a full-blown supply chain crisis, companies will have to address long-standing issues in their data pipelines -- bottlenecks and other fragilities -- that prevent teams from gaining the visibility into supply chains they need to survive the decade. No longer held back by the gravity of legacy models, systems, and approaches, companies will embrace innovative new solutions in a bid to make "just-in-time" data analytics a reality for their businesses. 

– Matthew Halliday, EVP Product, Incorta

AI services will play a major role in generating revenue

Adoption rates and revenue generated from artificial intelligence services are projected to skyrocket as ongoing issues, including the healthcare crisis, labor shortages, and supply chain problems, continue to present considerable risks to businesses. For example, AI-based chatbots and virtual agents are easing the pressure that labor shortages put on businesses. In healthcare, AI-based solutions allow care teams to manage a wider patient population with a personalized approach at the patient level. Health and human services agencies are keen on implementing whole-person health initiatives, which require access to high-quality, accurate clinical, social-determinants, and public-health data to develop customized care programs at an individual level.

During the next few years, we can expect to see the emergence of federated machine learning, which allows researchers to train predictive models on sensitive data without centralizing it, transparently and with high traceability. Applications range from better disease prediction to faster responses for autonomous vehicles.

– Zakir Hussain, EY Americas IoT Leader
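
Federated learning itself is well defined even if the products are still emerging: each site trains on its own sensitive data and shares only model parameters, which a coordinator averages (the FedAvg scheme). A bare-bones numerical sketch, with synthetic data standing in for the sensitive records:

```python
# A bare-bones sketch of federated averaging (FedAvg): each site trains
# on its own private data; only model weights ever leave the premises.
import numpy as np

def local_step(w, X, y, lr=0.5):
    # One gradient step of linear regression on one site's private data.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])          # ground truth for the demo
sites = []                                   # four "hospitals", say
for _ in range(4):
    X = rng.normal(size=(50, 3))
    sites.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(3)
for _ in range(50):                          # communication rounds
    # Each site updates locally; a coordinator averages the weights.
    w = np.mean([local_step(w, X, y) for X, y in sites], axis=0)

print("learned:", w.round(2), " true:", true_w)
```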

New privacy-focused legislation will shift attention to data sovereignty clouds 

With increased focus on the General Data Protection Regulation (GDPR) regulating data protection and privacy in the EU and the California Consumer Privacy Act (CCPA) enhancing privacy rights and consumer protection for Californians, other states and countries are facing pressure to enact comprehensive data privacy legislation. As this continues in 2022, I expect we'll see much more focus on data-sovereignty clouds that keep data within nations or within a certain physical location. This is a far more localized cloud model, one we're starting to see in EMEA with Gaia-X. Some will see this as an obstacle, but once implemented it will be a good thing, as it puts consumer privacy at the core of the business strategy.

– Danny Allan, CTO, Veeam

New data management approaches at the edge will come to the fore

We'll see data analytics scale at the edge, reducing or "thinning" data so that analytics software can deliver better insights and value to an organization's management team and keep up with the volume of data now being generated outside the data center and cloud (Gartner says that by 2025 this will be 75% of all data). Today's edge computing platforms weren't designed to handle this -- a new approach is needed to store the data cost-effectively and "thin" it by keeping only the useful parts, making it easy for analytics, machine learning, and AI to extract value for organizations.

– Bruce Kornfeld, CPMO at StorMagic
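
"Thinning" can take many forms; one of the simplest is deadband filtering, where an edge device forwards a reading only when it differs meaningfully from the last value sent. A toy sketch (the threshold and readings are invented):

```python
# One simple form of "data thinning" at the edge (illustrative only):
# forward a sensor reading only when it deviates meaningfully from the
# last value sent, so the cloud sees the signal, not the noise.
def thin(readings, threshold=0.5):
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) >= threshold:
            last_sent = value
            yield value  # worth shipping upstream for analytics

raw = [20.0, 20.1, 20.0, 20.2, 23.5, 23.6, 23.4, 19.9]
print(list(thin(raw)))  # [20.0, 23.5, 19.9] -- a fraction of the raw stream
```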

The data science industry is making the mistake of putting models before clean data

Without cleaning the data, every model developed and deployed will be fed dirty, meaningless data, which makes it impossible to tell whether AI that is designed to standardize data is working as it should. This practice affects the data-scientist position and creates trust issues around the use of AI in data management. To utilize data to its full potential, companies must take the first step of standardizing datasets in order to truly transform an industry.

– Dr. Ron Bekkerman, CTO, Cherre
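
As a concrete, hypothetical illustration of the standardization step Bekkerman argues must come first: even a few lines of pandas can reconcile spellings and numeric formats before any model sees the data (the columns and rules here are invented).

```python
# A minimal example of standardizing a dataset before any modeling
# (column names and cleanup rules are hypothetical):
import pandas as pd

df = pd.DataFrame({
    "city":  ["New York", "new york ", "NYC", "Boston"],
    "price": ["1,200", "1200", None, "950"],
})

# Standardize categorical spellings and numeric formats, then dedupe.
df["city"] = (df["city"].str.strip().str.lower()
                        .replace({"nyc": "new york"}))
df["price"] = pd.to_numeric(df["price"].str.replace(",", ""), errors="coerce")
df = df.dropna(subset=["price"]).drop_duplicates()
print(df)  # two clean rows survive: new york / 1200, boston / 950
```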

We'll embrace data fabrics 

Data management challenges will not go away in 2022, so enterprises will need to build and embrace data fabric architectures for agility and dynamic decision-making. Instead of simply sending data down a road to be stored, scaled, or analyzed, a data fabric is able to direct data into a holding area so it can be used while it's most relevant. With big data supporting the business goals of 72% of organizations, proper implementation of data fabric is a natural evolution that helps companies to be more informed more quickly.

– Stefan Sigg, Chief Product Officer, Software AG 
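
Sigg's "holding area" is an architectural idea rather than a specific product; one way to picture it is a small time-windowed buffer that serves events to consumers while they are still relevant and expires them toward cold storage afterward. An illustrative sketch (class and field names are invented):

```python
# Illustrative sketch of a data-fabric "holding area": events stay
# available to consumers only while they are still relevant.
import time
from collections import deque

class HoldingArea:
    def __init__(self, relevance_seconds: float):
        self.ttl = relevance_seconds
        self.events = deque()  # (timestamp, payload), oldest first

    def ingest(self, payload) -> None:
        self.events.append((time.time(), payload))

    def relevant(self):
        cutoff = time.time() - self.ttl
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()  # expired: hand off to cold storage instead
        return [payload for _, payload in self.events]

area = HoldingArea(relevance_seconds=60)
area.ingest({"sensor": "line-3", "temp": 81})
print(area.relevant())  # consumers act on the data while it still matters
```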

Graph databases: A must-have component of the 2022 data landscape

According to Gartner Research, by 2025, graph technologies will be used in 80% of data and analytics innovations, up from 10% in 2021, facilitating rapid decision-making across the enterprise. As the volume of data created and replicated throughout the enterprise continues to increase, scalable graph technology has become the critical link between massive amounts of data and key business insights. Graph will become a major competitive differentiator for companies in multiple industries -- from financial services and healthcare to retail and manufacturing. Graphs can quickly highlight, discover, and predict complex relationships within data -- insights that uncover financial fraud or help solve logistics challenges within the supply chain.

Throughout 2022, more companies will apply the power of graph analytics to support advanced analytics and machine learning applications, including fraud detection, anti-money laundering (AML), entity resolution, customer 360, recommendations, knowledge graph, cybersecurity, supply chain, IoT, and network analysis. Graphs will become even more linked with ML and AI. Gartner even reports that "as many as 50% of Gartner client inquiries around the topic of AI involve a discussion around the use of graph technology." 

– Richard Henderson, Technical Director at TigerGraph 
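
As a toy example of the relationship queries Henderson describes, consider accounts that share a phone number, a classic fraud-ring signal. A sketch using the open-source networkx library (the account data is made up); a production system would run an equivalent query inside a graph database:

```python
# Toy graph-analytics example: flag accounts that share a phone number,
# a relationship that is awkward to find with joins but trivial on a graph.
import networkx as nx  # pip install networkx

G = nx.Graph()
G.add_edges_from([
    ("acct:A", "phone:555-0001"), ("acct:B", "phone:555-0001"),
    ("acct:C", "phone:555-0002"), ("acct:D", "phone:555-0001"),
])

for node in G:
    if node.startswith("phone:") and G.degree(node) > 1:
        ring = sorted(G.neighbors(node))
        print(f"{node} shared by {ring}")  # phone:555-0001 -> A, B, D
```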

Get value from data and AI, or lose out to competitors and be shorted by investors

We'll find a better definition of the "democratization of data," particularly in data engineering. As more people across organizations and job functions embrace and engage with data, data engineering will continue to evolve to let these individuals work collaboratively in the same space. Effective data engineering is required for meaningful downstream uses, including machine learning and analytics. So collaborative data engineering will be vital to allowing developers who prefer SQL and Python to work right alongside those who turn to AI-assisted visual tools. Cloud-based tools will make this increasingly accessible.

Thus, low-code and no-code solutions will become increasingly widespread, especially when they enable coders to do their work in the same space as business users. These more sophisticated, next-generation tools will have automatic programmer assistants and embrace modern techniques that essentially allow non-coders to create custom programs without realizing it.

Finally, AI engineering is changing: Think "machine-learning operations." This field will explode in growth as many start-ups make components of this more accessible and practical. 

– Adam Wilson, CEO of Trifacta
