Sandeep Parekh, Managing Partner, Finsec Law Advisors
In one of the first measures taken by an Indian regulator vis-à-vis artificial intelligence (AI), last month the Securities and Exchange Board of India (SEBI) issued a consultation paper seeking feedback on its proposals to regulate the use of AI/machine learning (ML) in the securities market.
As defined in the consultation paper, AI refers to technologies that allow machines to "mimic human decisions to solve problems". ML is a subset of AI, and refers to the automatic learning of rules to perform a task by analysing relevant data.
Currently, SEBI requires market infrastructure intermediaries such as stock exchanges, clearing corporations and depositories, and intermediaries such as mutual funds, to report the AI/ML systems employed by them, thereby giving the regulator insight into their use cases.
SEBI has identified that AI/ML is being used for various purposes. For instance, stock exchanges are leveraging AI for sophisticated surveillance and pattern recognition, and brokers are deploying it for product recommendations and algorithmic order execution. AI is also used for customer support.
Based on who creates an AI/ML system, such systems can be classified into two categories: built in-house or sourced from a third party. In this context, it is also important to remember that AI/ML systems can be integrated with each other as well as with existing systems. Further, the capabilities of AI are expanding rapidly, with models making near-accurate predictions in finance and generating model portfolios that could, not too long from now, give a fund manager a run for their money.
In a forward-looking approach, SEBI's consultation paper proposes guidelines to be framed around five core principles: a model governance framework, investor protection, testing mechanisms, fairness and bias, and data/cybersecurity.
Importantly, SEBI has proposed that services provided by a third party would be deemed to be provided by the intermediary concerned, which would thus be liable for any violation of securities laws. Further, it has extended the applicability of the investor grievance mechanism to AI/ML systems as well.
SEBI has proposed a "regulatory lite framework" that seeks to segregate AI/ML systems that have an impact on clients from those used for internal business operations. Further, even if a system is outsourced, intermediaries will remain liable. The real challenge for intermediaries lies in building sophisticated internal teams, robust audit trails and the technical capacity to manage such systems. In this context, it is worth considering whether SEBI should revisit this approach and borrow a leaf out of its own playbook.
In February, the regulator introduced a revised framework for safer participation of retail investors in algo trading. In view of several entities providing algo strategies to customers and the consequent risks, SEBI decided to introduce a new class of regulated entities, viz. algo providers. While they aren't directly regulated by it, algo providers would have to become agents of stock brokers and be registered and empanelled with the stock exchanges.
A similar approach can be evaluated in respect of AI/ML systems, with a new class of persons, AI providers, introduced. While it is not necessary for SEBI to directly regulate such persons, doing so could result in better oversight and understanding of the evolving nature of the AI industry and its nexus with, and impact on, the securities market. Further, liability can be fixed on the person or entity actually responsible if an AI/ML system goes wrong, especially if the intermediary had no role in the violation. The alternative results in cascading litigation, as the investor would sue the intermediary, which in turn would seek to recover losses from the third-party vendor (the AI provider). While the investor grievance mechanism is proposed to be extended to AI/ML systems, introducing a new class of semi-regulated players in the securities market could have a better impact on fostering growth in a transparent, accountable manner with appropriate oversight.
SEBI's proposal includes testing requirements at the time of commencement as well as on an ongoing basis, to ensure that AI/ML systems work as expected. Here, a key reform that could propel their growth is allowing players to access the regulatory sandbox framework to test their products and systems. This would result in heightened scrutiny of such systems and allow SEBI to work with emerging players in the AI industry, while also providing key data points that aid in evolving best practices across the board. Such a framework would help SEBI become a proactive regulator rather than one that merely reacts to technological developments, and would be a first step towards regulatory frameworks that lay the foundation for further innovation, positioning SEBI as an enabler rather than a roadblock to new technology.
The paper highlights the potential dangers of AI too. The regulator explicitly flags the threat of generative AI being used for market manipulation through deepfakes and misinformation, as well as a systemic concentration risk if the industry leans too heavily on a few dominant AI providers. The identification of concentration risk is particularly salient, as there is a danger of unregulated tech providers becoming systemic choke-points for the industry. Further, since there is only a handful of foundational models, a risk of synthetic data loops emerges: when everyone uses the same AI model, trained on the same data, collusive behaviour and herding may result.
There is much to applaud in SEBI's proposal of a principle-based regulatory lite framework, which reflects its intention to adapt to the technological innovation that will shape financial markets in the future. At the same time, there are steps it can take to not only regulate, but also design a regulatory framework that is ahead of the curve and supports growth and innovation.
Co-authored with Parker Karia, senior associate, and Varun Matlani, associate, Finsec Law Advisors
Note: Views expressed in the article are of the Author(s) and do not necessarily represent IICA’s stand on the subject matter.