2. An increasingly fractured global rulebook for data, cyber and AI
Data law trends 2026
In brief
The global landscape for data, cyber and AI is shifting fast. Deregulatory moves under the Trump 2.0 administration are in direct tension with the EU’s enforcement-driven digital strategy.
The US is betting on an innovation-first model, while the EU AI Act is reshaping how companies operate. Meanwhile, the UK and countries across the APAC region are pursuing their own, often divergent approaches.
For businesses, the result is a fractured environment where policies have areas that align and conflict across AI governance, data transfers, cybersecurity and consumer protection. Navigating these crosscurrents is now critical to managing risk – and unlocking opportunity – in the digital economy.
Trump’s AI reset: Innovation first
Since retaking office in January 2025, the Trump administration has made clear its commitment to AI innovation and desire to remove regulatory barriers and boost investment in US-based AI companies.
The day after his inauguration, President Trump announced a US$500bn private sector investment project in AI infrastructure. The following month, Vice President J.D. Vance spoke at the AI Action Summit in Paris, outlining the administration’s plans to clear the way for AI innovation and move away from the Biden administration’s focus on AI safety.
The actions taken by President Trump in the immediate weeks following inauguration confirmed this shift, including the signing of a flurry of AI-related executive orders to enact an innovation-forward approach and the revocation of some of Biden’s executive orders focused on AI safety.
The release of an unprecedented American AI Action Plan and additional related executive orders in July 2025 affirmed the administration’s new direction.
Despite the federal government’s change in approach, some US states are maintaining a focus on AI safety regulation. For example, California recently passed the ‘Transparency in Frontier Artificial Intelligence Act’ — a new AI law that is narrower in scope than the EU AI Act but imposes overlapping requirements related to AI transparency, governance and incident reporting.
The Trump administration’s new approach to AI innovation has also led to recent policy and personnel changes at US federal agencies. For example, in January 2025, the Equal Employment Opportunity Commission removed Biden-era AI guidance on the application of federal anti-discrimination law to the use of AI in employment decisions.
The Department of Labor similarly signaled its ‘AI & Inclusive Hiring Framework’ may no longer reflect current policies. In May 2025, soon after the US Copyright Office published a report assessing the legality of the use of copyrighted material to train AI models, the Trump administration fired the head of that agency, which could be construed as a rejection of the report’s conclusions.
US AI and free speech
At the same time the AI Action Plan was released, President Trump signed an executive order entitled ‘Preventing Woke AI in the Federal Government,’ which signaled the Trump administration’s other top priority alongside American-led AI innovation: ensuring this AI is free from ‘ideological bias.’ While this executive order echoed themes of deregulation (‘the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace’), it also emphasized the obligation on federal agencies to ensure they are only procuring AI technologies that are ‘truth-seeking’ and developed with ‘ideological neutrality.’
This focus on 'ideological neutrality' in technology is not new for President Trump. On his first day back in office, he made clear his aggressive stance on countering perceived censorship on online platforms when he signed an executive order entitled ‘Restoring Freedom of Speech and Ending Federal Censorship.’
Also in the realm of speech issues, combatting AI-washing, AI-generated deepfakes and other AI-related consumer harms has been a continued focus of federal agencies such as the US Federal Trade Commission (FTC) and the US Securities and Exchange Commission, as well as of Congress.
For example, in 2024, the FTC announced a crackdown on deceptive AI practices; in line with this AI-washing focus, the FTC issued an order in April 2025 alleging a marketing content company had made false claims about its AI capabilities. In May 2025, Congress passed the ‘TAKE IT DOWN Act’ to criminalize non-consensual publication of intimate images, including deepfakes.
AI and other tech companies face strong crosswinds from the Trump administration. Many see opportunity in the Trump administration’s removal of AI regulations. However, companies must take care to ensure they do not run afoul of rules the Trump administration has set for 'ideological neutrality,' and can continue to expect scrutiny of their products, including by the Federal Trade Commission and state Attorneys General.
Beth George, Partner
Continuity in US cyber and child safety
While the Trump administration has diverged from the Biden administration in notable ways when it comes to its approach to AI, it has continued the efforts of the prior administration in other areas of tech regulation.
The expansion of AI solutions presents unique security risk exposures that merit analysis and the development of relevant mitigation strategies.
Brock Dahl, Partner
For example, the US Department of Justice and a member of the FTC have signaled that their agencies will enforce rules from the prior administration regulating US data transfers to foreign adversaries, including under the new Data Security Program initiated under a Biden-era executive order and the Protecting Americans’ Data from Foreign Adversaries Act passed in 2024. In June 2025, President Trump signed an executive order maintaining certain federal cybersecurity efforts by President Biden (including federal efforts around post-quantum cryptography, Border Gateway Protocol, and advanced encryption).
Lastly, the new FTC chairman has stated that the agency remains focused on protecting children online, particularly related to social media.
For example, in June 2025, FTC regulations related to the Children’s Online Privacy Protection Act came into effect after they were proposed during the Biden administration.
The UK bets on balance
The UK is forging a distinctive path in digital governance: while the Online Safety Act introduces strict obligations on platforms, particularly to protect children, our flexible, pro-innovation approach to AI and data signals a clear ambition for the UK. Businesses should prepare for a dual landscape of compliance and opportunity, balancing regulatory risk with data-driven growth.
Rachael Annear, Partner
The UK is positioning itself as a distinct ‘third pillar’ in global digital governance. While the UK cannot always be neatly characterized as occupying a true middle ground between the EU and US, its approach aims to balance competing pressures: it is selectively aligning with more prescriptive EU rules where legal certainty and cross-border data flows require it, while championing innovation-led, agile supervision at home.
This third pillar strategy has begun to deliver results, with US companies including Microsoft, Nvidia and Google pledging over £150bn of investment in the UK during President Donald Trump’s visit to the UK in September 2025.
This third pillar is most apparent in the UK’s divergence from the EU on AI and data regulation. The UK government has rejected repeated calls for a specific, EU-style AI bill.
Instead, the UK published a policy paper in March 2025 outlining a new approach for regulators to support growth, stating that the UK should ‘cut red tape’ and ‘create a more effective system’.
Read Chapter 3 of this report to learn more about the UK’s Online Safety Act and why UK and global businesses must rethink their approach towards young people’s data.
The UK’s digital divergence
This pro-innovation approach avoids prescriptive legislation, while still ensuring there is regulatory oversight. In addition, the government’s Data (Use and Access) Act 2025 (DUAA), which became law on 19 June 2025, illustrates a targeted intention to depart from the EU’s General Data Protection Regulation (GDPR) framework in specific areas.
Positioned as a more flexible and innovation-friendly model, the DUAA seeks to streamline compliance obligations and introduce mechanisms that support data-driven growth, with particular emphasis on easing burdens for small and medium-sized enterprises and fostering responsible AI development; for example, by allowing the use of certain cookies without explicit consent in specific low-risk situations.
However, this pro-innovation approach is not without its challenges. The UK government abandoned its plans to introduce a broad copyright exemption for text and data mining following intense backlash from creative industries. While the DUAA requires the government to prepare and publish a report on the use of copyright works in the development of AI systems and an assessment of the economic impact of AI and copyright, the issue remains unresolved, leaving the UK with a clear policy choice: liberalize copyright to align with the US approach or strengthen protections and transparency obligations for rights holders, more akin to the EU AI Act.
Online safety is another area where the UK is pursuing a blend of alignment and divergence. The Online Safety Act (OSA) became law in October 2023, although its obligations have only recently begun to take effect – platforms assumed a legal duty to protect users from illegal content from 17 March 2025 and a duty to protect children online from 25 July 2025.
The OSA bears notable similarities to the EU’s Digital Services Act (DSA); both regimes adopt a prescriptive structure, including proactive content moderation, risk assessments and transparency reporting, with significant penalties for non-compliance, and an emphasis on platform accountability. At the same time, the UK has given special prominence to child safety, introducing obligations that go beyond the EU model.
UK chooses to converge, diverge, compete
In other areas, the UK has moved to align more closely with the EU, while allowing room to differentiate where it wants to maintain an edge. For example, the policy statement for the proposed Cyber Security and Resilience Bill commits to modernizing the UK’s cyber resilience framework and ensuring it ‘aligns where appropriate’ with the EU’s updated Network and Information Security Directive, NIS2.
In the consumer protection space, there are also clear parallels between the EU’s proposed Digital Fairness Act and the UK’s Digital Markets, Competition and Consumers Act 2024 (DMCCA). The new powers granted to the Competition and Markets Authority under the DMCCA allow it to take direct enforcement action against companies using deceptive ‘dark patterns’ in interface design or hosting fake reviews, tackling many of the same digital fairness issues identified in the EU.
This pattern of selective alignment and strategic divergence signals how the UK is pursuing a dual objective: mirroring Europe’s pro-regulatory instincts where it serves domestic priorities, while prioritizing competitiveness and practical interoperability with global markets, including the US, in high-growth areas like AI.
The EU’s next phase: From rules to rollout
The EU is currently navigating a complex period in digital governance, marked by a drive towards both regulatory coherence and simplification. While often perceived as having a rigid framework, recent developments indicate a more nuanced approach that acknowledges the impact of extensive legislation on innovation and economic competitiveness.
Following the 2019-2024 institutional term – a period of intense legislative activity that produced landmark regulations such as the Data Act, DSA, Digital Markets Act (DMA) and AI Act – the EU is now exploring simplification initiatives (e.g. discussions about ‘targeted changes’ to the GDPR, AI Act and cybersecurity laws as part of the upcoming Digital Omnibus Package) and focusing more on technical implementation (e.g. the General-Purpose AI (GPAI) Code of Practice). These efforts aim to reduce administrative burdens, streamline compliance procedures, and eliminate overlapping requirements across different digital laws.
The EU’s focus is on making the existing framework more efficient, particularly for SMEs, rather than a wholesale deregulation.
Following a wave of major digital legislation, the EU now appears to be slowly shifting its focus from (only) creating new rules to refining existing ones. The goal is to make compliance simpler and reduce burden on businesses by increasing efficiency and providing more practical and detailed technical guidance. Businesses should therefore pay even closer attention to the publication of secondary legislation and official guidelines.
Theresa Ehlen, Partner
Most prominently, over the past year, the AI Act has moved firmly into its implementation phase following its entry into force on 1 August 2024. Significant milestones included the February 2025 prohibition of AI systems posing an unacceptable risk, such as those used for social scoring, and the August 2025 application of the rules on GPAI models.
The newly established European AI Office has been central to guiding this rollout, in particular with regard to the finalization of the GPAI Code of Practice. Concurrently, Member States have been actively designating national competent authorities to oversee the application of the regulation, with Italy being the first Member State to pass a comprehensive law regulating the use of AI. While progress is evident, the implementation has not been without its challenges, sparking ongoing discussions around the complexities of compliance and the harmonization of the AI Act with existing digital legislation.
Despite its current focus on technical implementation and simplification of existing EU digital rules, we do not expect the EU’s legislative momentum to fade anytime soon. In fact, new digital proposals such as the Digital Fairness Act and the Digital Networks Act are currently under consultation:
- The Digital Fairness Act is the EU’s attempt to regulate unethical techniques and commercial practices on the internet. These include deceptive or manipulative interface design (such as ‘dark patterns’), addictive design of digital products and unfair personalization practices. Rather than creating entirely new rules, it will update existing EU consumer laws to address these emerging digital challenges.
- The Digital Networks Act seeks to create a genuine single market for telecoms, simplifying regulations to encourage investment in secure, high-speed networks such as fiber and 6G. The draft act also aims to address the economic relationship between network operators and large tech companies that generate significant data traffic. The ultimate goal is to improve access to secure, fast and reliable connectivity in order to facilitate the transition to cloud-based infrastructure and AI.
EU enforcement meets geopolitics
However, the enforcement of existing EU digital rules, in particular against US and Chinese tech companies, presents a challenge for EU regulators. Especially under the second Trump administration, the US government has demonstrated a readiness to defend US tech interests, characterizing EU fines as tariffs and threatening retaliatory trade measures. This could increase geopolitical friction and put pressure on the EU to balance its regulatory ambitions with broader transatlantic relations.
The EU Commission, while affirming its commitment to enforce the EU’s digital rulebook fairly and without bias, could therefore be confronted with demands from Member States to suspend supervisory proceedings against US technology companies in return for the lifting of retaliatory tariffs (if that has not already happened de facto). So far, however, the EU has resolutely defended its position that EU digital laws such as the DSA and DMA are non-negotiable. We would therefore not expect any changes to this position in the short-to-medium term.
APAC charts its own course
Asian governments are taking a considered and thoughtful approach to AI regulation, forging their own individual pathways between hard law approaches and voluntary frameworks.
Richard Bird, Partner
Unlike the data privacy landscape, where GDPR’s impact on regulation in APAC is indisputable, the extent of the EU AI Act’s influence on the region’s emerging AI regulations is less clear at this stage. Overall, the picture across the region is diverse, reflecting the different economic priorities, political systems and technological maturity of its many constituent countries, and the emergence of several distinct, locally tailored models. And while some Asian governments had initially leaned towards adopting elements of the EU’s risk-based model, the predominant direction of travel has now shifted towards lighter-touch approaches.
China was one of the very first countries to specifically regulate AI, reflecting its policy priorities to ensure control over the content of GenAI outputs, coupled with targeted consumer protection interventions such as mandating the labeling of AI-generated synthetic content and provision of opt-outs from recommendation algorithms.
China had also been understood to be developing a comprehensive AI law, but this no longer features in the 2025 legislative plan. Instead, the 2025 plan lays down an objective of ‘promoting legislation for the healthy development of artificial intelligence’ – an apparent pause that perhaps comes as a response to the unexpected recent technological breakthroughs in this area by the likes of DeepSeek.
For AI developers, the recent Beijing Free Trade Zone (FTZ) negative list for cross-border transfers of ‘important data’ creates a narrow but valuable channel for the export of certain types of training datasets without requiring prior approval, which has also since been adopted by other FTZs.
Other countries in Asia, such as Japan, Vietnam and South Korea, have also recently enacted laws to regulate AI, and preparatory legislative work has begun in Thailand as well. Both the Vietnamese and Korean laws introduce a concept of high-risk AI seen in the EU AI Act, but South Korea has emphasized that its AI law is more business friendly than its European counterpart, and Japan’s law does not impose any financial penalties for breach. These are all measures clearly intended to avoid stifling innovation.
Ultimately, these laws mostly set out high-level principles that require further implementing regulations or guidelines to be issued. The practical implications of these laws, as well as the enforcement risks, are therefore unclear for now. Vietnam has also recently published for public consultation the draft of a comprehensive AI law that is modeled on the EU AI Act and will supersede the provisions on AI in the existing law.
Also focused on promoting the adoption of AI and avoiding overregulation are Hong Kong and Singapore. Both Hong Kong and Singapore have thus far favored guidelines that promote the adoption of good governance practices and internal controls over regulation. Singapore has also been working closely with businesses and other stakeholders to create a trustworthy ecosystem for AI development and adoption (e.g. through AI testing tools) and has played a leading role in formulating APAC governance and ethics guidelines.
Looking ahead
Global rules on data, cybersecurity and AI are fragmenting fast. Divergent approaches in the US, EU, UK and APAC mean businesses need strategies that are proactive, flexible and geopolitically aware.
Key takeaways for clients:
- Track divergence: Monitor policy shifts closely – from the US’s deregulatory stance to the EU’s prescriptive frameworks – while noting areas of continuity such as cybersecurity, child safety and AI-washing.
- Strengthen governance: Reinforce internal data classification, processing and transfer frameworks to withstand scrutiny across jurisdictions.
- Stay adaptable: Build global principles for AI, data and cyber governance, but tailor controls to meet regional demands like the EU AI Act or UK OSA.
- Factor in geopolitics: Assess how enforcement may be shaped by broader political tensions, adding complexity to compliance and trade.
- Keep ethics central: Regulators remain focused on responsible AI, deceptive practices and child safety. Embedding these principles into products and disclosures reduces legal and reputational risk.
The landscape will only grow more complex. Businesses that anticipate change, integrate ethics and build resilience into governance will be best placed to manage risk and seize opportunity.