
    From Brussels to New Delhi: Charting India’s Path to Responsible AI Governance

    By Nidhi Singh and Aishwarya Giridhar

    In April, the European Union passed the AI Act, the first comprehensive effort to regulate AI at scale. This is a significant development that underscores the EU’s role as a global standard-setter and is likely to have far-reaching influence around the world.

    In contrast, India’s approach to AI regulation has been reactive and ad hoc. While the government has attempted some policy thinking around AI governance, the closest it has come to implementation is releasing advisories on the use of AI systems. The EU AI Act gives India an opportunity to draw insights from the EU’s experience as it shapes its own strategies for AI governance and policy development.

    The EU AI Act

    The EU AI Act takes a horizontal, cross-sector approach by applying its rules across all types of AI systems, regardless of the industry or use case. It employs a risk-based framework that assigns different levels of obligations based on the assessed risk level of a particular AI system. Those posing an ‘unacceptable risk’ to the safety, rights, and livelihood of individuals are banned. This includes social credit scoring systems and ‘real-time’ biometric identification systems (with certain exemptions), among others.

    Those deemed ‘high-risk’ due to their potential impact on areas like safety, fundamental rights, or the environment face stringent requirements such as risk assessments, transparency, and human oversight. For example, AI systems that could influence the outcome of an election, or systems used by public authorities to grant, revoke, or reduce welfare benefits, are considered high-risk.

    In addition to the risk stratification, the EU AI Act also introduces obligations for general-purpose AI such as ChatGPT. The Act mandates increased transparency, extensive documentation, and registration of general-purpose AI systems before they are launched. It also requires providers to submit detailed reports on the datasets used to train these models.

    The good…

    The AI Act provides a baseline level of protection and governance for all AI applications throughout the EU. For example, all high-risk systems would have to maintain risk management and mitigation systems, comply with requirements on training and testing datasets, have human oversight, and provide enough information for users to assess outputs and use the models effectively.

    Moreover, by specifically covering general-purpose AI models that are applicable across diverse sectors and use-cases, the AI Act provides the opportunity to enhance transparency and accountability in the use and deployment of this emerging field of technology.

    …and the bad

    The risk-based approach is relatively narrow and requires an AI system to be classified into one of the tiers of risk. This may not always be possible – for example, general-purpose AI models like ChatGPT sit outside the risk stratification because of the wide array of tasks they can be used for. Over time, developments in technologies such as quantum computing are likely to produce more outliers, which may require more modularity than the Act can afford.

    It is also unclear how the risks of certain AI systems are being assessed. For instance, while emotion recognition systems have been heavily criticised as inaccurate and are banned in certain contexts such as education and workplaces, they are still permitted for migration and law enforcement uses. This demonstrates that socio-political contexts, and not just the potential harm posed by the AI systems, factor into risk assessments.

    The race to regulate AI

    As conversations around AI governance become more concrete, India must formalise its own approach to AI regulation so that it can contribute to the international discourse.

    In 2023, the IT Minister clarified that there was no plan to introduce a law to regulate AI as a whole. The government has also indicated that the upcoming Digital India Act may have provisions for emerging technologies and some forms of AI, which would apply across sectors. India is also starting to see vertical, sector-specific regulation on AI in sectors like healthcare and finance.

    Meanwhile, AI systems continue to be deployed in different contexts like insurance, facial recognition, and employment in India. The absence of regulation prescribing baseline standards means that companies do not have the legal obligation to disclose their use of AI systems, or to prioritise non-discriminatory, accurate, and fair outcomes. This leaves individuals subject to such systems without recourse, and can exacerbate potential harm.

    There is an urgent need for India to introduce some standards on transparency, explainability, and accuracy for all AI models, so that they can be held accountable for their outcomes. Developing a forward-looking framework for AI governance is likely to be time-intensive – for example, the EU AI Act was the result of a five-year consultation process – and Indian policy-makers must start to think about and invite perspectives on what AI regulation should look like in India.

    Aishwarya Giridhar and Nidhi Singh are Project Managers at the Centre for Communication Governance at National Law University Delhi.

