Joe Schifano, Global Head of Regulatory Affairs, Eventus
Trade surveillance is no longer confined to a rules-based function – it’s evolving into an interdisciplinary program where compliance, technology and data science converge. As firms grapple with increasing market complexity, growing data volumes and evolving regulatory expectations, AI is emerging as the connective tissue uniting these disciplines.
A growing number of firms are embracing large language models (LLMs) and operationalizing these capabilities across their platforms – yet their expectations of AI have grown in tandem. Demand for AI is no longer just about workflow automation. Models must deliver insight, trust and defensibility at scale. While we’re not there yet, we’re moving quickly toward that reality.
We’ve written previously about the importance of explainability and deterministic modeling in surveillance. But there’s value in thinking bigger about the role of AI – beyond enhancing a rules-based alert engine, it will someday serve as a fundamental infrastructure layer that ensures model integrity, preserves auditability and enables truly adaptive surveillance. That includes leveraging AI to enhance the entire data science workflow and – eventually – govern itself.
Embedding Data Science: AI as an Enabler of Surveillance Innovation
The future of trade surveillance is data science. As surveillance grows more complex, firms are recognizing the need to embed technical expertise directly within the compliance function. That doesn’t just mean hiring data scientists – it means giving them the tools and autonomy to build, monitor and evolve the models that underpin modern surveillance. Increasingly, that toolkit includes successive generations of AI technology. By accelerating core workflows and bridging longstanding operational gaps, AI can enable technical staff to work more efficiently, more collaboratively and with greater alignment to the realities of the business.
As always, discussions about automation and AI governance must be grounded in careful oversight. But it’s precisely by thinking ahead that firms can prepare for – and help shape – what comes next. While many of these capabilities are already taking shape, note that some use cases mentioned in this article remain aspirational – not yet widely deployed, but increasingly plausible as AI systems grow more sophisticated and surveillance teams grow more interdisciplinary.
Take data normalization – one of the most common pain points for surveillance teams. LLMs could be trained to identify, and in some cases correct, inconsistent field formats, missing metadata and other discrepancies across trading venues, reducing the hours of manual work required to clean and structure inputs. The result would be significant time savings, not to mention greater consistency across the downstream analytics pipeline. Reconciliation and mapping efforts would benefit too, with LLMs surfacing mismatches across datasets and even generating proxy data where inputs are incomplete – albeit with caution, as proxy generation would introduce risks around bias and model reliability.
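To make the idea concrete, here is a minimal Python sketch of the pattern, assuming a generic call_llm hook that stands in for whatever LLM endpoint a firm actually uses; the canonical field names and validation rules are illustrative, not a prescribed schema. The key design point is that the model only proposes a mapping, while a deterministic layer decides what enters the pipeline, which is what preserves auditability.

    import json

    CANONICAL_FIELDS = ["trade_id", "symbol", "side", "quantity", "price", "timestamp_utc"]

    def normalize_record(raw: dict, call_llm) -> dict:
        # call_llm is a placeholder for whatever LLM endpoint a firm uses.
        prompt = (
            "Map this venue-specific trade record onto the canonical schema "
            f"{CANONICAL_FIELDS}. Reply with JSON only; use null for fields "
            f"you cannot infer. Record: {json.dumps(raw)}"
        )
        candidate = json.loads(call_llm(prompt))
        # Deterministic guardrails: the model proposes, the pipeline verifies.
        missing = [f for f in CANONICAL_FIELDS if f not in candidate]
        if missing:
            raise ValueError(f"LLM output missing fields: {missing}")
        if candidate["side"] not in ("BUY", "SELL", None):
            raise ValueError(f"Unrecognized side: {candidate['side']}")
        return candidate

    # Stand-in endpoint for demonstration only:
    fake_llm = lambda prompt: json.dumps({f: None for f in CANONICAL_FIELDS})
    print(normalize_record({"px": "101.5", "qty": 300}, fake_llm))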
These capabilities would help teams both move faster and, more importantly, build stronger models. Feature engineering, a time-consuming and often manual step today, could be enhanced with machine learning tools that suggest relevant inputs based on evolving patterns – such as clustering behaviors or shifts in order book depth. Model monitoring is another logical area of innovation, with AI agents offering significant potential to track statistical drift, spot performance degradation and flag when thresholds or assumptions may need to be recalibrated.
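One concrete way such an agent could track statistical drift is the population stability index, a standard data-science measure that compares a feature’s live distribution against its calibration baseline. The sketch below assumes NumPy and synthetic data; the 0.2 alert level is a widely used rule of thumb, not a regulatory standard.

    import numpy as np

    def population_stability_index(baseline, current, bins=10):
        # PSI between a feature's calibration-time and live distributions.
        edges = np.histogram_bin_edges(baseline, bins=bins)
        b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        c_pct = np.histogram(current, bins=edges)[0] / len(current)
        b_pct = np.clip(b_pct, 1e-6, None)  # avoid log(0) on empty bins
        c_pct = np.clip(c_pct, 1e-6, None)
        return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # feature values at calibration
    current = rng.normal(0.5, 1.0, 10_000)   # live values with a shifted mean
    if population_stability_index(baseline, current) > 0.2:  # common rule of thumb
        print("Drift detected: review thresholds and consider recalibration")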
One more area where AI holds tantalizing potential is enhancing the feedback loop between compliance analysts and model builders. Rather than relying on disconnected handoffs or periodic tuning cycles, surveillance programs could ingest analyst input directly into model refinement processes – enabling a more responsive and iterative approach.
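As a hedged illustration of that ingestion, the sketch below assumes analysts label each reviewed alert as a true or false positive, and a simple routine proposes raising the alert threshold only while the alerts it would suppress are predominantly false positives. The function name, label scheme and parameters are all hypothetical.

    def recommend_threshold(dispositions, current, step=0.05, min_precision=0.5):
        # dispositions: list of (alert_score, label) pairs, where label is the
        # analyst's verdict: "TP" (true positive) or "FP" (false positive).
        threshold = current
        while True:
            band = [label for score, label in dispositions
                    if threshold <= score < threshold + step]
            if not band:
                return threshold  # no reviewed alerts left just above the line
            if band.count("TP") / len(band) >= min_precision:
                return threshold  # this band carries real signal; stop raising
            threshold += step     # band is mostly noise; propose a higher bar

    reviews = [(0.62, "FP"), (0.64, "FP"), (0.71, "TP"), (0.73, "FP"), (0.9, "TP")]
    print(round(recommend_threshold(reviews, current=0.6), 2))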
None of this will be achievable without data integrity. Poor or incomplete inputs still represent a material risk – weakening model performance, injecting bias and complicating auditability. But AI may prove instrumental in improving data governance. By surfacing anomalies, tracking lineage and highlighting gaps before models are even trained, it can support a more trustworthy surveillance foundation.
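A minimal sketch of such a pre-training gate, assuming pandas and illustrative tolerances: surface the gaps and anomalies first, and refuse to train until the findings list is empty.

    import pandas as pd

    def pre_training_checks(df: pd.DataFrame, required: list) -> list:
        # Surface integrity problems before any model ever sees the data.
        findings = []
        for col in required:
            if col not in df.columns:
                findings.append(f"missing column: {col}")
                continue
            null_rate = df[col].isna().mean()
            if null_rate > 0.01:  # illustrative tolerance, not a standard
                findings.append(f"{col}: {null_rate:.1%} null values")
        if "timestamp_utc" in df.columns and not df["timestamp_utc"].is_monotonic_increasing:
            findings.append("timestamps out of order: possible feed gap or replay")
        return findings

    trades = pd.DataFrame({"trade_id": [1, 2], "price": [101.5, None]})
    print(pre_training_checks(trades, ["trade_id", "price", "quantity"]))
    # -> ['price: 50.0% null values', 'missing column: quantity']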
For firms seeking to embed data science within the surveillance function, preparing for this kind of AI enablement is no longer optional. It’s the clearest path toward building a surveillance program that is adaptive, defensible and ready to meet the scale and complexity of today’s markets.
AI-on-AI Monitoring: The Next Frontier
This brings us to the emerging concept of meta-AI, or AI systems designed to monitor and govern other AI systems – watching for shifts in input quality, identifying blank fields or corrupted values and surfacing inconsistencies that human operators may miss. The focus goes beyond the data itself to the functions and models that manage that data.
For a high-stakes domain like trade surveillance, AI plainly requires careful human oversight; without it, models can gradually lose effectiveness or develop blind spots, especially in fast-moving or volatile markets. That said, there’s theoretically no reason humans need to be the only oversight layer. With the right approach, AI-on-AI frameworks could potentially detect deviations from expected or established outputs, generate alerts, recommend retraining and even score models for explainability, ensuring they meet both internal standards and external regulatory expectations. The humans in the loop would be empowered to move faster and more confidently, focusing less on identifying what needs to be improved and more on how to make it happen.
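To ground the concept, here is one hypothetical shape an AI-on-AI monitor could take: a lightweight agent that tracks another model’s daily alert rate against the level established at validation and escalates when output deviates beyond tolerance. The class, warm-up length and tolerance are assumptions for illustration, not a production design.

    import numpy as np

    class MetaMonitor:
        # Watches another model's daily alert rate and escalates when it
        # deviates from the level established at validation sign-off.
        def __init__(self, expected_rate, warmup=20, tolerance=3.0):
            self.expected_rate = expected_rate  # e.g., alerts per 10k orders
            self.warmup = warmup                # observations before judging
            self.tolerance = tolerance          # allowed deviation, in std devs
            self.history = []

        def observe(self, daily_rate):
            self.history.append(daily_rate)
            if len(self.history) < self.warmup:
                return "collecting baseline"
            spread = np.std(self.history) or 1e-9  # guard against zero spread
            if abs(daily_rate - self.expected_rate) / spread > self.tolerance:
                return "escalate: output outside validated range; review or retrain"
            return "within expected range"

    monitor = MetaMonitor(expected_rate=12.0)
    # Feed monitor.observe(rate) from the nightly batch; route escalations to humans.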
In some implementations, the meta-AI approach could enable generative models to fill in data gaps or simulate edge-case scenarios for validation purposes. In others, models would act strictly as self-monitoring agents that flag when another model’s assumptions no longer hold.
This kind of embedded governance will represent a major step forward for compliance at large – moving from reactive oversight to proactive surveillance and system optimization. It also aligns with emerging regulatory expectations around data accountability, auditability and risk management. This innovation is still in its infancy, but it’s an instructive example of where we may be heading if visions for the future of AI ultimately become reality.
The Road Ahead: Trade Surveillance’s Interdisciplinary Future
The expectations placed on surveillance teams are growing faster than the tools many firms still rely on. Static rules, siloed workflows and bolt-on fixes are no match for today’s data volumes, trading velocity or regulatory scrutiny. What’s needed is a shift in mindset – one that treats surveillance as an interdisciplinary function powered by embedded data science and enabled and enhanced by AI from the ground up.
In this new model, data scientists are no longer peripheral actors; they’re central to how surveillance is conceived, executed and evolved. AI will act as both a force multiplier and a connective tissue – accelerating model development, improving input quality and helping compliance and technical teams work in concert rather than in sequence. Done right, this convergence will enable firms to move from reactive controls to proactive risk identification – and to build systems that improve continuously, not periodically.
The firms best positioned for the future will be those that see surveillance not just as a regulatory obligation, but as a strategic asset. AI and data science, working hand-in-hand, are redefining what’s possible in this space. The sooner firms invest in that alignment, the better equipped they’ll be to meet the demands of tomorrow’s markets – wherever, and however, they emerge.