A Framework of Design for AI-Enabled RegTech (AIERT)—Part 1
AI dominates industry discussions, market impacts, and the agendas of rule-making bodies. Yet defining innovative frameworks for AI-enabled regulatory compliance with traditional rules-design thinking may add cost and complexity for industries hoping for fewer burdens, more insight, and greater cost-efficiency.
For the full text and graphics of this three-part series, please visit: https://www.thomsonreuters.com/en-us/posts/technology/ai-enabled-regtech/
Measured against the history of innovation, the embrace and capabilities of artificial intelligence (AI) amount to nothing short of a quantum shift, one that has exploded within the last 18 months. Moreover, the foundations of AI's value proposition, its business cases, and even its applicability to industry-defined regulatory compliance are advancing weekly, all underpinned by decades of rule-defined computer science trial and error.
Traditionally, regulatory compliance represents a set of rules: a set of knowns produced by careful design, comment periods, and data reporting. It is often framed as designing the future by fitting historical transactions and targeted demographics so that controls and risks stay within acceptable guidelines. But will that history, the training data, or even the expert trends identified be enough to "fit" AI responsibly within and across existing operational capabilities?
Frequently, industry leaders and solution providers view the answers to regulatory compliance as a "prescription" of rules and reports that satisfy the letter of the regulation, but perhaps not the spirit of the underlying designs, let alone the constant changes being adopted. Regulatory compliance is historically viewed as a burden, a cost of doing business, and an overhead function that demands continuous efficiencies. These "innovative" mindsets in turn govern solution capabilities, investment models, and, most importantly, the data-ingestion designs that feed rules-based systems.
Yet as innovations undergo rapid-cycle advancement, the promise of intelligent, data-driven decision making creates competitive and consumer pressures for adoption (e.g., intelligent robotic process automation, machine learning algorithms, crosslinked AI solutions). As the rationale for adoption goes mainstream, the implications of implementing these designs remain evolving, opaque, and frequently unknown. The results, once the initial enthusiasm and benefits fade, can include organizational disillusionment, financial burdens, and potential brand damage.
So where can improvements be made in sustainability, design, and the management of constant change? How can the solution "pains" be mitigated while the release-cycle "gains" are sustained?
Asking Different Questions: What is Possible? What is Likely? What is Happening?
While AI stirs the imagination about what is possible when human experiential skills are leveraged, it also fundamentally "black boxes" many of the traditional balances and controls common to prior generations of computing design. Indeed, AI will likely displace workers whose purpose was to gather, align, and report against governance criteria after the fact, transforming their "to-be" purpose into anticipating outcomes and identifying improvements as "Humans within the (AI) Loop." …