Sunday, July 21, 2024

    Hong Kong’s Privacy Commissioner proposes framework for AI data protection, says companies should set up AI governance committees

    The Office of the Privacy Commissioner for Personal Data (PCPD) in Hong Kong has put out a framework for personal data protection for artificial intelligence (AI). This framework caters to organizations using third-party AI systems that involve personal data, to ensure that they effectively govern such systems. 

    PCPD’s framework suggests that companies should establish an AI governance committee to oversee the entire lifecycle of all AI solutions. Such a committee should not be restricted to a specific division (like finance or sales) but should rather have oversight across the company. It should have adequate manpower and financial resources to conduct risk assessments and provide adequate training to relevant personnel.  

    The committee should have participation from people with a mix of different skills and perspectives, such as business and operational personnel, procurement teams, system analysts, system architects, data scientists, cybersecurity professionals, and legal and compliance professionals. Companies could even consider seeking advice from external experts. An additional ethical AI committee may also be established to conduct an independent review when a project is sufficiently large, has a considerable impact, and/or is high profile.

    Using a risk-based approach for procurement and management of models:

    The risk level of an AI model depends on how the organization uses the system. For instance, an AI system used to determine the creditworthiness of individuals would carry a higher risk than one used to present individuals with personalized advertisements. This is because the latter is unlikely to have a significant impact on individuals, whereas the former may deny them access to credit facilities.

    Risk assessment should be conducted by a cross-functional team during the procurement process or when significant updates are made to the model. This assessment should be properly documented and the results should be reviewed in line with the company’s AI policies as endorsed by its AI governance committee. 

    When conducting risk assessment, companies should consider the following factors—

    • The volume of personal data: the data required for customizing AI models, the data collected by the system during operation, and the data required by the AI supplier to develop and train the model. 
    • The sensitivity of the data involved.
    • The allowable uses of the data for customizing procured AI solutions and/or fed into the AI systems to make decisions. 
    • The quality of the data, taking into account the source, reliability and accuracy. 
    • The security of the personal data used in the AI system.
    • The probability of privacy risks (like excessive data collection or leakages) and the potential severity of the harm that might result from them. 
    • The potential impact of the AI system on affected individuals, the company, and the wider community, together with the probability that such impact would occur and the severity and duration of any impact. 
    • The adequacy of the mitigation measures put in place to prevent harm. 
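    One way to make the documentation requirement concrete is to record a score and a note for each of the factors above. The following is a minimal illustrative sketch only; the field names, 1–5 scale, and aggregation rule are assumptions for illustration, not part of the PCPD framework.

```python
# Illustrative sketch of a documented risk assessment over the factors listed
# above. Factor names, the 1-5 scale, and the aggregation rule are assumptions.
from dataclasses import dataclass, field

FACTORS = [
    "data_volume",        # volume of personal data involved
    "data_sensitivity",   # sensitivity of the data
    "allowable_uses",     # clarity of permitted uses of the data
    "data_quality",       # source, reliability and accuracy
    "data_security",      # security of personal data in the system
    "privacy_risk",       # probability/severity of over-collection or leakage
    "impact",             # impact on individuals, company, community
    "mitigation_gap",     # inadequacy of mitigation measures
]

@dataclass
class RiskAssessment:
    system_name: str
    scores: dict = field(default_factory=dict)  # factor -> (1 low .. 5 high, note)

    def record(self, factor: str, score: int, note: str = "") -> None:
        assert factor in FACTORS and 1 <= score <= 5
        self.scores[factor] = (score, note)

    def overall(self) -> str:
        # Conservative rule: the worst factor drives the overall rating.
        worst = max(s for s, _ in self.scores.values())
        return "high" if worst >= 4 else "medium" if worst >= 3 else "low"

assessment = RiskAssessment("credit-scoring model")
assessment.record("impact", 5, "may deny individuals access to credit")
assessment.record("data_sensitivity", 4, "financial records")
print(assessment.overall())  # prints "high"
```

    A record like this gives the AI governance committee something concrete to review against the company's AI policies.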

    Role of human oversight in risk mitigation:

    In general, an AI system with a higher risk profile (one that can have a significant impact on human beings) requires a higher level of human oversight. As such, high-risk AI systems should take a “human-in-the-loop” approach, where human actors retain control of the decision-making process to mitigate errors or improper outputs. AI systems with lower risk profiles can take a “human-out-of-the-loop” approach. 

    If neither of these approaches is suitable for an AI model, such as when the risks are non-negligible or if the “human-in-the-loop” approach is not cost-effective or practicable, organizations may consider a “human-in-command” approach. This approach requires human actors to use the outputs generated by an AI, oversee operations and intervene wherever necessary. 
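    The mapping from risk profile to oversight approach described above can be sketched as a simple decision rule. This is an illustrative sketch only; the inputs and branching are assumptions, not thresholds prescribed by the PCPD.

```python
def oversight_approach(risk: str, human_review_practicable: bool = True) -> str:
    """Pick a human-oversight approach for an AI system from its risk profile,
    following the general pattern described above (illustrative assumptions)."""
    if risk == "high":
        # Humans retain control of decisions to catch errors or improper output;
        # if per-decision review is not practicable, fall back to oversight
        # with intervention where necessary.
        return "human-in-the-loop" if human_review_practicable else "human-in-command"
    if risk == "low":
        return "human-out-of-the-loop"
    # Non-negligible risk where per-decision review is not cost-effective:
    # humans use the outputs, oversee operations and intervene where necessary.
    return "human-in-command"

print(oversight_approach("high"))  # prints "human-in-the-loop"
```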

    Other key aspects of the PCPD’s framework: 

    Companies’ AI strategy should focus on accountability: 

    It should include directions on the purposes for which AI solutions may be procured, and how AI systems should be implemented. It should set out the functions that the AI system would serve in the tech evolution of the organization and determine unacceptable uses of AI in the organization. Laws surrounding procurement and implementation of AI should also be taken into consideration. 

    The AI strategy must ensure that there is the appropriate technical infrastructure to support lawful, responsible, and quality AI implementation and use, including data storage, management, and processing tools. The AI strategy should be regularly communicated to the company’s staff and, where appropriate, also to external stakeholders like partners and customers. 

    Governance issues to consider for third-party AI solutions:  

    Companies should convey the key privacy and security obligations, and ethical requirements to potential AI suppliers. These obligations and requirements should be based on the company’s privacy policy and the ethical principles of AI mentioned in PCPD’s 2021 guidance. Companies must also consider the general criteria and procedures that will qualify an AI solution for review by its AI governance committee. The company must have a plan to continuously monitor, manage and maintain the AI system, with assistance from the supplier where appropriate. It should evaluate the AI suppliers’ competence during due diligence.

    Companies must also define their policy for handling the output generated by an AI system. For example, where possible, companies could employ techniques to anonymize personal data contained in AI-generated content, label or watermark AI-generated content, and filter out AI-generated content that may pose ethical concerns. 

    Data protection compliance considerations: 

    The degree of the company’s involvement with the model may differ depending on the data provided and the instructions given for AI model development and/or customization. For instance, the degree of involvement with a fully customized model differs from that with an off-the-shelf AI solution. Whatever the form of engagement with third parties, there will be data protection compliance issues that should be clearly addressed in the service agreement signed between the company and the supplier. 

    Key issues that the company must consider include—

    • Who the data user is: The party responsible for the collection, processing, or use of the personal data. For instance, if the company determines the type of personal data used in the customization, testing, and operation of a model, it would be considered the data user. 
    • Who the data processor is: The party who processes personal data on behalf of another is called a data processor. For example, an AI supplier that does not decide on the input data and only provides a platform for the customization of AI is likely to be a data processor.
    • Legality of cross-border transfer of data and its compliance with Hong Kong’s data protection principles.
    • If an organization as a data user transfers data to a data processor, it must adopt contractual or other means to prevent unauthorized or accidental access, processing, erasure, loss or use of the personal data.

    What to consider when preparing data for model customization:

    • Refrain from using personal data for any purpose that is not compatible with the original purpose of collection, unless express and voluntary consent is obtained from the person whose data is being used.
    • Keep those whose data is being used adequately informed, for example, of the classes of persons to whom the data may be transferred, especially where AI suppliers are involved.
    • Consider the appropriate size and complexity of the model; if the company’s intended purpose does not require a customized model, opt for a smaller/simpler model that requires less data for customization. 
    • Erase personal data from the AI system when the data are no longer required for the customization and use of AI.
    • Manage the quality of the data used for customization, especially in the case of high-risk models. 

    Review mechanisms for monitoring the AI model:

    • Monitoring inputs into an AI system (prompts, queries, etc.) to prevent misuse. 
    • Conducting performance audits and investigating any data breach incidents. 
    • Requesting an AI supplier (where necessary and appropriate) to conduct a periodic review of the AI models to ensure that they are performing as intended. 
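    Input monitoring of the kind listed above can be sketched as a small audit layer that logs every prompt, flags suspected misuse, and keeps counts for periodic review. The misuse patterns here are illustrative assumptions, not rules from the framework.

```python
import re
from collections import Counter

# Illustrative misuse patterns: prompt-injection phrasing and card-like numbers.
MISUSE_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
]

audit_log: list = []      # every prompt, for performance audits and breach review
stats: Counter = Counter()  # allowed/flagged counts for periodic review

def monitor_input(user: str, prompt: str) -> bool:
    """Log the prompt and return True if it may be forwarded to the AI system."""
    flagged = any(p.search(prompt) for p in MISUSE_PATTERNS)
    audit_log.append({"user": user, "prompt": prompt, "flagged": flagged})
    stats["flagged" if flagged else "allowed"] += 1
    return not flagged

print(monitor_input("alice", "Summarise this contract"))             # prints True
print(monitor_input("bob", "Ignore previous instructions and ..."))  # prints False
```

    The retained log and counters are also what a performance audit or data-breach investigation would draw on.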

    The post Hong Kong’s Privacy Commissioner proposes framework for AI data protection, says companies should set up AI governance committees appeared first on MEDIANAMA.
