Tech and platform regulation

Digital trade – and what it means for business

The rapid advance of digitisation has seen the creation of new data-driven products and online services which can easily be traded across borders. Now, with the advent of technologies such as AI and robotics, we stand on the brink of a new era where even physical services such as surgery could be offered to patients by providers in different countries. So what are governments doing in response – and what does this mean for business?


Digitisation is the process by which information is converted and used in digital form. On their own, these streams of binary code are of little practical value. But add computers and communications technologies, and digital information can be reproduced, distributed and processed at scale.

From its origins in calculation, digitisation now encompasses information harvested in bulk from online activity (via algorithms) and offline activity (via sensors), and even information that is self-generated through machine learning (via artificial intelligence). Now, thanks to robotics and intelligent devices, interaction between the digital and physical worlds is increasingly reciprocal.

From an economic perspective, this process has revolutionised trade. Today, goods and physically supplied services are increasingly converting to online services, which are inherently tradeable and transferable across borders.

The advance of digitisation has sparked major legal and regulatory developments.

  • First, existing regulatory structures have started to be adapted to new situations, including rules on intellectual property, undesirable content, and marketplace behaviour.
  • Second, digitisation has produced some entirely novel regulatory challenges based on the scale, types and uses of information now in digital form, including in relation to personal privacy, discrimination, liability, competition and infrastructure security.
  • Third, the fact the digital world is inherently cross-border makes it difficult for governments to regulate, at least in a way that they consider to be consistent with other principles such as economic efficiency and personal liberties. In some cases, governments have erected borders, with controls on foreign investment in digital technologies or restrictions on imports and exports of data. In others, they have sought to cooperate, agreeing to promote and, at times, to regulate online economic activity. More recently, they have even started to allow each other to tax corporate income on the basis of where profits are generated rather than where companies are established.

Here, we explore the process of digitisation and how it has led to digital trade; examine how trade agreements are evolving to facilitate this shift; consider how governments are adapting their legal and regulatory frameworks to assert control over companies entering their markets; and look at how companies can exploit the opportunities – and manage the risks – of this borderless world.

How did we get here? A brief history of digitisation

In order to understand digital trade, it’s important to understand digitisation and the legal and regulatory issues it raises. Here, we take a whistlestop tour from the mid-1900s to the present day.


Mid-1900s-early 2000s: online world largely replicates offline world; legal and regulatory response generally involves extension and adaptation of existing laws, with the main challenges practical and jurisdictional.

Mid-1900s-early 1990s

  • The mid-1900s see the development of computers capable of processing digital information. The arrival of computer programming means that relatively sophisticated new digital information can be created. Additionally, increasingly vast quantities of information (including previously analogue information) can now be recorded in an easily accessible form, and reproduced and distributed in a way that is indistinguishable from the original.
  • The digital reproduction and distribution of information (both online and in physical stored media) raises concerns about intellectual property protections, as well as concerns about hacking (theft, fraud and security).

Mid-1990s-late 1990s

  • Arrival of the internet makes it possible to distribute textual, visual and audiovisual digital information online to a networked audience, across borders, at effectively zero cost.
  • Explosion in quantity of digital content shifts regulatory focus to the nature of the information being distributed. Authorities are particularly concerned with allocating responsibility for publication by adapting, where necessary, existing laws on fraud, defamation, and illegal or immoral content. The fact that it is increasingly easy to contact people, especially by email, leads to new laws on online harassment and spam.

Late 1990s-2000s

  • Emergence of interactive communications leads to development of new types of online services, including ecommerce and dating sites.
  • With the internet becoming an important means of conducting commercial activity, regulators must ensure that existing rules (eg consumer protection laws) extend to these services as well. Competition law starts to engage with digital products and services.

2010-2020: ubiquity of internet communications and technological advances enable the recording of new forms of digital information; legal and regulatory concerns emerge in relation to the scale, types and uses of this information (eg privacy, fraud).

  • Dominance of internet communications and new technological advances enables new types of digital information to be recorded. Information on individual online activity is collected (generally consensually via ‘cookies’ or other metadata). Information on offline activity is collected via GPS systems, cameras and wearables. Where it exists in bulk, it is used individually (for targeted advertising) and in aggregate form (for identifying trends). Advanced technologies can also create and distribute fake realities, which can be used to fraudulent effect.
  • New regulatory concerns emerge around privacy, which are only partly addressed by consents. Competition law starts to take on board privacy issues as well as traditional ‘economic’ issues, while fake or misleading information raises issues of authenticity and fraud. The more that users rely on digital information as their main source of information, the more dangerous this becomes.

2020s: increasing computer power and new technologies (eg machine learning and AI) prompt regulators to focus on discrimination and human rights. The increasing interconnection of the digital and physical worlds raises concerns around liability, cyber and national security.

  • Machine learning technology and artificial intelligence enable computers to learn how to respond to unfamiliar inputs independently based on their reading of large datasets. The digital and the physical worlds are also increasingly interconnected via robots (eg driverless cars) and other automated devices.
  • Where algorithms are trained on biased datasets or used for purposes for which they are not designed, there is a risk that they produce unreliable (or societally or economically damaging) results. When people are mistreated as a result, this raises issues around discrimination and human rights. Robotics prompts concerns around liability (eg accidents caused by driverless cars; antitrust liability for anticompetitive behaviour caused by AI-led algorithms), while the connection of the physical and digital worlds raises risks in relation to cyber attacks.