Why the future of AI hinges on trust

Do you trust that your data is not just secure but also accurate and complete? The validity of your AI processes increasingly depends on it.

LESSONS FOR LEADERS
  • Increased reliance on AI and ML means that trust has an outsized influence on an organization's long-term sustainability.
  • Before the rise of AI, enterprises might have had a certain tolerance for inaccuracy or unreliability in their data, but this is no longer a tenable position.
  • Any insights-driven solution is only as effective as the information fed into it and the security systems in place to protect it.

Amid a rising tide of disruption, transformation, and innovation, trust means different things to different people. Most commonly, it evokes connotations of security: trust that data is safe from misuse or neglect. But as more organizations navigate an increasingly complex digital transformation journey, trust is coming to mean much more than that.

Can you trust that the data fueling critical systems such as analytics or artificial intelligence is reliable and secure? For example, if AI is deployed to help forecast the effectiveness rates of trial pharmaceuticals and the underlying data doesn't capture a full, representative sample of patients, the findings are useless. Research from Hewlett Packard Enterprise has found that roughly 70 percent of the data generated by its customers remains unserviceable and is therefore inadequate for building trustworthy AI models.

Viewed through this lens, trust has an outsized influence on an organization's long-term sustainability. In an environment where analytics and AI radically transform customer experiences, trust becomes an invaluable differentiator. Trustworthy data ecosystems, best-in-class usage practices, and IT solutions that are continuously and automatically verified and cryptographically assured combine to create a new standard of excellence as these systems proliferate.

Adapting to and embracing a model of trust that continuously attests and authenticates access and privileges is an organization-level imperative. Without it, IT leaders risk falling behind on the analytics and AI-driven strategies that will increasingly define the coming years. So what will it take to infuse a more robust culture of trust within your organization, where trustworthy AI consistently generates valuable, actionable outcomes? And how can leaders leverage this to build an ironclad foundation for their analytics and AI integrations?

Any insights-driven solution is only as effective as the information fed into it and the security systems in place to protect it. The higher the data fidelity, the stronger the analysis will be. The model translates easily to the real world: a summer beach trip is a great idea in theory, but the calculus changes once you check the weather. The challenge, however, is the scale at which enterprises are collecting, generating, and storing data. From customers to partners to employees, every touchpoint is a potential data generation point that must be accounted for. From a trust standpoint, each represents opportunity and risk in equal measure.
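To make the idea of data fidelity concrete, here is a minimal sketch of an automated quality gate, assuming a tabular dataset in pandas; the column names and thresholds are illustrative, not a prescribed standard:

```python
# Minimal sketch: scoring the fidelity of a dataset before it feeds an
# analytics or AI pipeline. Column names and thresholds are illustrative.
import pandas as pd

def fidelity_report(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Return simple completeness and validity metrics for a dataset."""
    report = {
        "missing_columns": [c for c in required_columns if c not in df.columns],
        "row_count": len(df),
        "null_fraction": float(df.isna().mean().mean()),  # average share of empty cells
        "duplicate_fraction": float(df.duplicated().mean()),
    }
    # A simple pass/fail gate; a real pipeline would tune these thresholds.
    report["usable"] = (
        not report["missing_columns"]
        and report["null_fraction"] < 0.05
        and report["duplicate_fraction"] < 0.01
    )
    return report

if __name__ == "__main__":
    patients = pd.DataFrame({
        "patient_id": [1, 2, 3, 3],
        "age": [34, None, 58, 58],
        "outcome": ["improved", "improved", None, None],
    })
    print(fidelity_report(patients, ["patient_id", "age", "outcome", "dosage"]))
```

A gate like this is cheap enough to run on every ingest, which is what makes verification viable at enterprise scale.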

Systems through the lens of zero trust

In the face of so many potential failure points—whether it be corrupted data or a malicious actor jeopardizing the fidelity of an entire data lake—organizations are increasingly adopting zero trust architecture principles. That is, they're building in guardrails across all access points, with multiple tiers of permissions, to better control when and where data moves.

If, for example, an organization has its data stolen, automatic processes should isolate access to partner networks. This not only limits the damage but also keeps the compromised data from making its way further into the larger analytics or AI ecosystem, preventing contamination. That said, these guardrails are just a first step. True zero trust architectures require continuous attestation and authentication starting at the lowest level of your IT systems. This ongoing process regularly monitors access so permissions can remain fluid, loosening or tightening as people or systems join or leave a given community.
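As a rough illustration of what continuous attestation and rapid isolation might look like in code, consider this sketch; the policy engine, identities, and 15-minute attestation window are all hypothetical:

```python
# Minimal sketch of a zero trust access decision: every request is
# re-evaluated against current policy, and a compromised resource can be
# isolated immediately. All names and rules here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AccessRequest:
    principal: str         # human or service identity
    resource: str          # e.g., a dataset in the lake
    attested_at: datetime  # time of last device/workload attestation

@dataclass
class PolicyEngine:
    grants: dict = field(default_factory=dict)     # principal -> set of resources
    quarantined: set = field(default_factory=set)  # resources isolated after an incident
    max_attestation_age: timedelta = timedelta(minutes=15)

    def decide(self, req: AccessRequest) -> bool:
        """Grant access only if attestation is fresh, the grant exists,
        and the resource has not been quarantined. No implicit trust."""
        fresh = datetime.now(timezone.utc) - req.attested_at < self.max_attestation_age
        allowed = req.resource in self.grants.get(req.principal, set())
        return fresh and allowed and req.resource not in self.quarantined

    def isolate(self, resource: str) -> None:
        """Incident response: cut the resource off from every principal at once."""
        self.quarantined.add(resource)

engine = PolicyEngine(grants={"analyst-7": {"trial-data"}})
req = AccessRequest("analyst-7", "trial-data", datetime.now(timezone.utc))
print(engine.decide(req))   # True: fresh attestation, valid grant
engine.isolate("trial-data")
print(engine.decide(req))   # False: resource quarantined after an incident
```

The key design choice is that permissions are evaluated per request rather than cached at login, so revocation takes effect on the very next access attempt.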

Here again, however, the issue of scale must be addressed. When data workers spend 90 percent of their working week searching for, preparing, and analyzing data, automation is required to make verification feasible at an organizational level. AI, deployed strategically, can improve the process. By adding a layer of intelligent context collection and mapping behavioral data across the entire IT stack, these systems can flag when an individual's behavior turns suspicious. Together, these capabilities allow zero trust to protect data across its entire lifecycle.
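One simple form of that behavioral mapping is comparing a user's current activity against their own historical baseline. The sketch below uses a basic z-score test; a production system would draw on far richer signals, and the threshold here is illustrative:

```python
# Minimal sketch of behavioral anomaly flagging: compare a user's current
# activity against their own historical baseline. Threshold is illustrative.
import statistics

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag activity that deviates sharply from the user's baseline
    (e.g., records accessed per hour)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold

# A user who normally touches 40-60 records/hour suddenly pulls 5,000.
baseline = [52, 48, 55, 41, 60, 47]
print(is_anomalous(baseline, 5000))  # True -> tighten permissions, alert
print(is_anomalous(baseline, 58))    # False -> normal behavior
```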

A roadmap for tomorrow and beyond

AI presents organizations with the potential to optimize efficiency at an unprecedented scale. But as IT leaders know, transformation is a journey, one that can take years and must contend with evolving tastes, trends, and standards. Thus, it's important to create a roadmap that can guide efforts while remaining flexible to the needs of the business.

There are, however, a few things to keep in mind regarding the next five to 10 years of digital transformation, namely the growth of organizational responsibility in education and transparency. Customers, partners, and employees alike look to businesses to take the lead. There will be an increasing expectation that enterprises embody a new role as educators, championing security investment and trust.

Please read: When should old data be deleted?

Further, trustworthy AI will lead more organizations to pursue a data-centric rather than a model-centric implementation. Instead of focusing on squeezing the best model, and the best outcomes, out of whatever data is available, more organizations will focus first and foremost on iteratively improving data quality, allowing the models to evolve from the data itself.

This is because data-centric approaches provide consistently high-quality data in all phases, generating the best possible AI insights. Holding the model fixed and concentrating on data quality encourages experimentation and broader parameters, allowing multiple models to perform well. The resulting model is more trustworthy because it is engineered for transparency and robustness from the start.
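The following sketch illustrates the data-centric loop in miniature, using scikit-learn and a public dataset: the model is held fixed while label noise is introduced and then cleaned, so validation accuracy isolates the effect of the data fix alone. The noise rate and model choice are illustrative:

```python
# Minimal sketch of a data-centric iteration: the model is held fixed
# while the data improves, and validation accuracy measures the data fix.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Simulate a poor-quality dataset: flip ~20% of the training labels.
rng = np.random.default_rng(0)
noisy_y = y_train.copy()
flip = rng.random(len(noisy_y)) < 0.2
noisy_y[flip] = 1 - noisy_y[flip]

def evaluate(labels):
    """Same fixed model every time; only the training data changes."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, labels)
    return model.score(X_val, y_val)

print(f"with noisy labels:   {evaluate(noisy_y):.3f}")
print(f"with cleaned labels: {evaluate(y_train):.3f}")  # data fix, same model
```

In a real pipeline, each iteration would target a specific data defect (mislabeled records, under-sampled populations, stale entries) and re-run the same fixed evaluation to confirm the fix helped.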

In the short term, build a collaborative service platform with data governance, security, and trust from the ground up, one that works across the edge, in the cloud, and on premises to gain the best insights for your business. In addition, embrace an open, partner-centric foundation to enable broader adoption, agility, and efficiency. In doing so, transparency grows, fostering greater trust and investment in the success of your organizational community.
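As a sketch of what governance from the ground up could look like, the example below evaluates one declarative policy identically for edge, cloud, and on-premises placements. The field names and tiers are hypothetical, not any particular platform's API:

```python
# Minimal sketch: a single declarative governance policy, evaluated the
# same way wherever the data lands. Fields and tiers are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernancePolicy:
    classification: str
    require_encryption: bool
    allowed_tiers: frozenset   # e.g., {"edge", "cloud", "onprem"}
    max_retention_days: int

@dataclass(frozen=True)
class Placement:
    tier: str                  # "edge", "cloud", or "onprem"
    encrypted: bool
    retention_days: int

def compliant(p: Placement, policy: GovernancePolicy) -> bool:
    """One rule set, enforced identically across every environment."""
    return (
        p.tier in policy.allowed_tiers
        and (p.encrypted or not policy.require_encryption)
        and p.retention_days <= policy.max_retention_days
    )

policy = GovernancePolicy("confidential", True,
                          frozenset({"edge", "cloud", "onprem"}), 365)
print(compliant(Placement("edge", True, 180), policy))    # True
print(compliant(Placement("cloud", False, 180), policy))  # False: unencrypted
```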

Connecting trustworthy data and AI

All of these efforts are part of a wider enterprise mandate to glean the most value out of one's data. While the value of data has traditionally been difficult to define, AI can help determine it by putting parameters in place that make data more digestible and actionable. Data that seems worthless on the surface could be the impetus for highly valuable action in new and unexpected ways. Indeed, the promise of AI, machine learning, and analytics has driven significant investments among many organizations: Gartner says 33 percent of technology providers will invest millions of dollars in AI over the next two years. That financial vote of confidence, however, belies the practical results seen by many undergoing digital transformation. Despite major investment, 80 percent of machine learning projects fail to achieve their intended outcome.

Please read: Constant scrutiny is the key to making zero trust happen

While AI could help address many of society's biggest, most pressing challenges, it can be deeply flawed because it lacks human empathy and compassion. Raw data cannot always account for more abstract issues such as systemic racism or sexism and still deliver actionable, effective insights. Nor is the challenge limited to such problems; the human element is a powerful driver of exceptional customer experiences and robust, dependable outcomes, and AI has yet to reach a stage where it can reliably deliver this perspective.

Attention to ethical and responsible AI principles during development will improve AI modeling by accounting for the complex decision-making that happens at the human-to-human level. Under such principles, AI should be:

  • Privacy-enabled and secure – respect individual privacy and be secure
  • Human focused – respect human rights and be designed with mechanisms and safeguards to support human oversight and prevent misuse
  • Inclusive – minimize harmful bias and support equal treatment
  • Responsible – be designed for responsible and accountable use, inform an understanding of the AI, and enable outcomes to be challenged
  • Robust – be engineered to allow for quality testing and include safeguards to maintain functionality and minimize misuse and the impact of failure

This opens up a different set of trust issues originating from the lack of deep understanding of stored and processed data: How can we connect data consumers to data that will lead to trustworthy AI models that abide by the above principles?

Fundamental techniques are needed to address key technology gaps in today's conventional AI and introduce built-in trust mechanisms that track, analyze, and improve the selection of data and models throughout the entire AI lifecycle, covering both data- and model-centric views. The goal is to ease the process as much as possible through automated, metadata-driven interaction according to a common standard shared between data producers and data consumers. Instead of relying on proprietary metadata, it will be important to work with the open source community to leverage existing tools and resources for extracting metadata.
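As an illustration of what such a metadata contract might look like, here is a sketch of a dataset card that a producer publishes alongside the data, including a content hash so consumers can verify what they received. The schema is hypothetical; a real deployment would adopt a community standard rather than invent its own:

```python
# Minimal sketch of a metadata contract between data producers and
# consumers. The schema is hypothetical, loosely inspired by "datasheets
# for datasets"; a real deployment would follow a community standard.
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class DatasetCard:
    name: str
    version: str
    description: str
    collection_method: str  # how/where the data was gathered
    known_gaps: list        # documented blind spots (e.g., under-sampled groups)
    license: str
    content_sha256: str     # lets consumers verify the bytes they received

def publish_card(name, version, description, collection_method,
                 known_gaps, license, payload: bytes) -> str:
    """Serialize a card the consumer side can parse and validate."""
    card = DatasetCard(
        name=name, version=version, description=description,
        collection_method=collection_method, known_gaps=known_gaps,
        license=license,
        content_sha256=hashlib.sha256(payload).hexdigest(),
    )
    return json.dumps(asdict(card), indent=2)

print(publish_card(
    name="trial-outcomes", version="2.1.0",
    description="De-identified pharmaceutical trial outcomes",
    collection_method="Clinical sites, 2019-2023",
    known_gaps=["patients under 18 excluded"],
    license="internal-use-only",
    payload=b"...dataset bytes...",
))
```

Because the card travels with the data, a consumer can check the hash, read the known gaps, and decide whether the dataset supports a trustworthy model before training ever begins.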

Properly implemented, this approach will deliver outcomes that are measurable in terms of trust, reliability, interpretability, and robustness. Users should have the broadest possible selection of quality data to achieve high-value outcomes with more confidence, accuracy, equity, sustainability, privacy, and compliance. Anyone should be able to identify potential corruption within a model or root out potential bias.
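As one concrete example of rooting out potential bias, the sketch below computes a disparate impact ratio between two groups and applies the common four-fifths rule. The group labels, outcomes, and threshold are illustrative; a real fairness audit would go much further:

```python
# Minimal sketch of a bias check: the disparate impact ratio compares
# favorable-outcome rates between groups. The 0.8 cutoff is the common
# "four-fifths rule"; data and labels here are illustrative.
def disparate_impact(outcomes: list[int], groups: list[str],
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    def rate(g: str) -> float:
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # 1 = favorable model decision
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(outcomes, groups, protected="b", reference="a")
print(f"disparate impact ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "within four-fifths rule")
```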
