Déjà vu: AI is Amplifying Mistakes of the Past

As AI accelerates along an exponential curve of features and intelligence, there are hidden risks and opportunities that require active governance: governance strong enough to deliver transparency, yet adaptable enough to absorb emerging regulations and unintended consequences.  Domain and industry leaders are facing unfamiliar challenges as the rush for scalable results breaks proven deployment and oversight practices.

Full article and graphics are available at Thomson Reuters Institute: https://www.thomsonreuters.com/en-us/posts/technology/ai-amplifying-past-mistakes/

The acronym “AI” (artificial intelligence) has become like the air: it is all around us and touches everything we do.  Indeed, AI’s advancements drive efficiency, increase revenue, and leverage humans-in-the-loop.  However, when it comes to AI in all its ever-changing kaleidoscope of forms, its growing functionalities, its demands for data, and its advancing “intelligence,” who is responsible for creating, managing, and retiring the roadmaps of integration?

Simply put, how do all these AI solution pieces fit together and “talk” to each other?  What happens when there is a need to audit the cascading inputs and outputs or implement error corrections?  Is there any way to distinguish AI-created data from data in traditional systems?  While unique in character, AI’s rapid growth is exposing the fractures and fallacies of cascading upstream and downstream integrations, and straining our ability to assess quality, accuracy, and even systems-of-record.  Indeed, history is repeating itself.

These fundamental questions echo decades of business and IT solutions developed in isolation that still struggle with human definitions and connections: APIs, data quality, synchronicity and concurrency, customer complaints, and regulatory enforcement.  Additionally, and growing in importance, is a mandate that data generated by an AI system be granularly traceable to its source, even as the algorithms learn and the outcomes vary.

The urgency and criticality of this requirement rest on “privacy protection, algorithmic bias, data transparency, accountability, and the impact of AI” not just on efficiency and profitability, but also on consumers and society.
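
To make granular traceability concrete, the sketch below is a hypothetical illustration rather than the article’s design: names such as ProvenanceTag and tag_output are invented for this example.  It simply shows how an AI-generated record could carry an origin flag, a model version, and pointers to its source data so it can be audited and distinguished from system-of-record data.

```python
# Hypothetical sketch: attaching provenance to an AI-generated record so it can be
# distinguished from system-of-record data and traced back to its sources.
# All names (ProvenanceTag, tag_output, the example model version) are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass(frozen=True)
class ProvenanceTag:
    origin: str          # "ai_generated" vs. "system_of_record"
    model_version: str   # which model (and version) produced the output
    source_ids: tuple    # identifiers of the upstream records that fed the model
    created_at: str      # UTC timestamp for audit ordering
    content_hash: str    # fingerprint of the output for tamper detection


def tag_output(payload: dict, model_version: str, source_ids: tuple) -> dict:
    """Wrap an AI output with a provenance tag so downstream systems can audit it."""
    content_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    tag = ProvenanceTag(
        origin="ai_generated",
        model_version=model_version,
        source_ids=source_ids,
        created_at=datetime.now(timezone.utc).isoformat(),
        content_hash=content_hash,
    )
    return {"payload": payload, "provenance": asdict(tag)}


# Example: a summary produced by a hypothetical model from two upstream records.
record = tag_output(
    {"summary": "Quarterly risk exposure increased."},
    model_version="summarizer-2.3",
    source_ids=("ledger:2024-Q3", "filings:10-Q"),
)
print(record["provenance"]["origin"], record["provenance"]["model_version"])
```

The particular fields matter less than the principle: provenance travels with the data at the moment of creation, rather than being reconstructed after the fact.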

The future requires a proactive integration of innovative research tempered by domain market forces, consumer behaviors, AI technology (e.g., chips and software), and the explosion of digital data, all glued together by security, legal, and regulatory requirements.  It is a future that demands layers of integrated solutions, all requiring transparency, heterogeneity, and risk attribution.

At its core, AI is a data-driven solution.  At its edges, AI represents an ability to extend data ideation using uniquely assembled building blocks of functionality.  But how?  Let’s discuss an illustrative representation of delivering AI Governance by design versus the traditional siloed, “one-and-done” product mindset.
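
The article’s own illustration and graphics live at the link above; purely as a hypothetical sketch of the contrast, the code below imagines governance “by design” as checks attached to every pipeline stage rather than a single review bolted on at the end.  Every name here (GovernanceCheck, Stage, run_pipeline) is an assumption for illustration only.

```python
# Hypothetical sketch: governance "by design" means each pipeline stage carries its
# own governance checks, versus a one-and-done review after deployment. Names are
# illustrative, not drawn from the article or any particular framework.
from typing import Callable, List


class GovernanceCheck:
    def __init__(self, name: str, predicate: Callable[[dict], bool]):
        self.name = name
        self.predicate = predicate

    def verify(self, data: dict) -> None:
        if not self.predicate(data):
            raise ValueError(f"Governance check failed: {self.name}")


class Stage:
    """A pipeline stage that travels with its governance checks."""
    def __init__(self, name: str, transform: Callable[[dict], dict],
                 checks: List[GovernanceCheck]):
        self.name = name
        self.transform = transform
        self.checks = checks

    def run(self, data: dict) -> dict:
        out = self.transform(data)
        for check in self.checks:   # checks run at every stage, not once at the end
            check.verify(out)
        return out


def run_pipeline(stages: List[Stage], data: dict) -> dict:
    for stage in stages:
        data = stage.run(data)
    return data


# Example: provenance and PII checks enforced at each stage by design.
has_provenance = GovernanceCheck("provenance present", lambda d: "provenance" in d)
no_raw_pii = GovernanceCheck("no raw PII", lambda d: "ssn" not in d.get("payload", {}))

pipeline = [
    Stage("ingest", lambda d: {"payload": d, "provenance": "source:crm"},
          [has_provenance]),
    Stage("enrich", lambda d: {**d, "payload": {**d["payload"], "score": 0.87}},
          [has_provenance, no_raw_pii]),
]
print(run_pipeline(pipeline, {"customer_id": 42}))
```

In a one-and-done mindset, the equivalent checks would run once, after the fact, against whatever the pipeline happened to produce; by design, they fail fast at the stage where the problem is introduced.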