In March 2022, Baker McKenzie’s cross-office Data Privacy & Security Team presented the Asia Pacific edition of Deciphering Data, the Firm’s webinar series that aims to help companies and organizations decode complex developments in data privacy and cybersecurity. Our diverse team of cross-border experts offered their insight in this webinar series to help you understand the legal lay of the land and prepare for the future of privacy in Asia Pacific and beyond.
Session 1: Spotlight on Privacy Developments in Asia Pacific
The data privacy landscape in Asia Pacific has undergone major changes in recent years. The region has seen a wave of new privacy laws, regulations and amendments, which bring with them a new set of regulatory and legislative requirements. Our first session provided a roundup of the developments in the region, and how these new requirements are set to impact multinational businesses.
China’s Personal Information Protection Law (PIPL) came into effect on 1 November 2021. Together with the Data Security Law (DSL) which came into effect on 1 September 2021 and the Cybersecurity Law (CSL) which took effect on 1 June 2017, they form a three-pillar data protection and cybersecurity system in China. The PIPL is the first comprehensive personal data protection law in China and adopts certain concepts under the GDPR. It applies to personal information processing activities conducted within China, cross-border transfers of personal information outside of China and certain cross-border processing activities concerning data subjects in China. The DSL applies to all types of data and data processing activities carried out within the territory of China, with a focus on “important data” and state core data. The CSL has introduced the concept of “Critical Information Infrastructure” and sets out the general rules on cyber data and information security.
Companies should watch out for Vietnam’s four key pieces of upcoming legislation that touch on data protection and cybersecurity issues, namely the Draft Decree detailing the Law on Cybersecurity (“Draft Cybersecurity Decree“), the Draft Decree on Personal Data Protection (“Draft PDPD“), the Draft Decree on Penalties for Administrative Violations in Cybersecurity (“Draft PAVCD“), and the Draft Law on Consumer Protection (“Draft LPCR“).
The Draft Cybersecurity Decree was circulated to the Government Members for approval in May 2019, but the approval is yet to be granted (as of March 2022). The revised Draft PDPD (which has been kept confidential) is expected to be promulgated by May 2022, with further amendments to apply by 2024. The public consultation on the Draft PAVCD ended last November, but it appears that the Ministry of Public Security has not yet submitted the Draft PAVCD to the Government. The public consultation on the Draft LPCR closed in March 2022; once enacted, the law will require traders to comply with the regulations on personal information protection.
Hong Kong’s amended Personal Data (Privacy) Ordinance, which criminalises “doxxing”, took effect on 8 October 2021. “Doxxing” refers to gathering personal data of a specific targeted person and/or related persons (such as family members) through various means, e.g., public registers and discussion platforms, and disclosing such personal data on the Internet, social media or other open platforms (such as public places).
Australia is in the process of reforming the Privacy Act. There is also draft legislation for the creation of a binding online privacy code which would apply to social media services, data brokers, and certain large online platforms operating in Australia. Further, there are developments in Critical Infrastructure and Cyber Security Laws which seek to expand existing protections to more industries, with mandatory reporting requirements when the critical infrastructure is subject to a cyber-attack. The Online Safety Act came into effect in January 2022 and targets inappropriate material and cyberbullying, allowing the eSafety Commissioner to identify offending accounts in order to enforce the Act.
Access the session recording here.
Session 2: Artificial Intelligence (AI) and Privacy
Global digital transformation has resulted in advancements in AI technology, alongside a keen cross-sector interest in utilizing it. However, this technology comes hand in hand with concerns around privacy, ethics, bias and discrimination. Our second session examined the implementation of certain AI technologies in the region, potential regulatory developments in Asia Pacific in view of the EU proposals seeking to govern AI systems more stringently, and key privacy considerations when deploying AI solutions.
AI has become a regular sight in consumer technology, from automated text messaging to computer-controlled video game opponents and many applications embedded in our daily lives. However, a number of AI applications come with an increased data privacy risk which must be taken into account, such as the use of AI in an employment context, machine learning and facial recognition. Such applications necessitate the collation and use of large amounts of user data, prompting questions surrounding data privacy, data minimization, storage, legitimate purpose and data subjects’ consent.
Developments in the regulation of AI
- EU developments: The EU approach has thus far set the benchmark for regulation of AI technology, with Regulations expected in the latter half of 2022 poised to divide AI programs into categories based on their level of risk. High-risk AI, such as facial recognition and infrastructure-related systems, will be subject to strict obligations, requiring risk assessments similar to the GDPR’s Data Protection Impact Assessment (DPIA). The Regulations also propose requirements related to transparency, traceability and human oversight. Obligations for lower-risk AI, such as chatbots, primarily relate to transparency and security.
- Japan: There is currently limited regulation of AI in Japan. While the Ministry of Internal Affairs and Communications issued the Guidelines in 2018 warning those implementing AI to ensure privacy rights of users and data providers are not violated, these are quite high-level and do not address many of the issues raised by AI.
- Australia: While Australia does not have specific privacy laws governing AI, existing privacy legislation applies broadly and affects AI compliance requirements. Australia’s Privacy Act is technology-neutral, principle-based and largely limits the ways in which entities can use information for secondary purposes, requiring data controllers to disclose the purpose for which personal information is collected and to obtain consent for any further purposes. The Australian government is also examining AI regulation outside of the privacy-specific framework, having issued the AI Ethics Framework in 2019, which introduced the AI Ethics Principles. While these Principles are currently voluntary, a number of them address data privacy concerns in the development of AI technology, mirroring the EU’s “Privacy by Design” approach. Enforcement relating to AI has focused primarily on biometrics and facial recognition, with multiple enforcement actions on this front in 2021.
- Singapore: The Model AI Governance Framework, while not mandatory, highlights Singapore’s current approach to AI regulation. The Framework provides a baseline for industry and technology, focusing on the introduction of AI and including a compendium of use cases and a checklist for safe implementation. The approach is similar to other jurisdictions in that the Framework focuses on the principles of transparency, explicability and fairness.
Emerging privacy considerations when using AI
AI systems can be subject to a number of cybersecurity concerns — if the system is not secure, information can be extracted which can constitute a data breach and potential violation of data protection laws. The absence of specific legislation does not mean absence of repercussions, as existing privacy frameworks can apply to the data that powers the AI. Companies using AI should also be aware of where the data comes from, to ensure that the data enabling the AI to make decisions has been gathered lawfully and with the data subjects’ consent.
Access the session recording here.
Session 3: Effective and Sustainable Privacy Compliance Programs
The fast-evolving global and regional data privacy landscape presents privacy counsels with the challenge of implementing and sustaining effective privacy compliance programs for their organisations. Our final session discussed the building of effective and sustainable privacy compliance frameworks, including the EU GDPR considerations.
The big picture – essential components of an effective and sustainable privacy compliance program
Cybersecurity and privacy compliance have become major concerns for companies in recent years. There are four key elements to consider in ensuring that a company’s privacy compliance program is both effective and sustainable: knowledge, organisation, processes and procedures, and balance. Companies can only comply with what they know, and keeping a close watch on the data protection landscape is key to maintaining an effective privacy compliance program. Such programs also require an appropriate set of resources and a structure that aligns with the business’ priorities and organisation. While the days of one-size-fits-all policies are over, maintaining a standard set of procedures across the board remains essential. With increased globalization, digitalization and the growing complexity of products and services, compliance can be difficult when laws are not easily translated into points of action. Companies must be aware of the fast-evolving global and local data protection landscape and be able to respond as appropriate. While the GDPR remains a good starting point in designing privacy compliance programs, it is by no means the only barometer, as compliance with local privacy regimes is becoming more nuanced, particularly in the Asia Pacific region. Commercial and operational considerations are also key factors to take into account, with the company’s objectives, stakeholders, structure and resources playing a critical role in the program’s design.
Where and how to start
A key starting point is identifying a person who will be responsible for the program’s design and implementation, as well as ensuring cooperation across legal, technology, HR and commercial/marketing teams. Privacy should be built into the company’s leadership structure, and seamlessly incorporated into the day-to-day running of the business as well as its culture of compliance. Privacy programs and policies are not one-size-fits-all models: the company’s type and activities naturally influence the amount and type of data it processes, with varying compliance requirements arising as a result. Data mapping exercises are key in identifying the types of data that companies collect and process, and therefore what they need to manage from a privacy perspective. Technologies such as data centralization, anonymization and organisational software should also be leveraged to improve the efficacy of a privacy compliance program.