Why are we talking about ethics now?

As we push the boundaries with data, some innovations will raise concerns. Here we look at a selection of grey areas – and highlight six ethical questions for business.

Although big data and advanced analytics projects risk many of the same pitfalls as traditional projects, in most cases, these risks are accentuated due to the volume and variety of data, or the sophistication of advanced analytics capabilities.

Alexander Linden, research director, Gartner

Social influencing

Sophisticated algorithms and advanced analytics can surface incredible insights from data. When used positively, the benefits are evident. But such tools can also be deployed to influence or manipulate the decisions we make.

Automated decision-making

Automated decision-making also raises concerns. AI algorithms are trained to ‘think’ using large data sets, but their decisions are only as good as the information they ingest: when that data is incomplete or skewed towards a particular demographic, algorithms can amplify biases and perpetuate inequalities. This is particularly so when multiple algorithms are linked together to work alongside and learn from one another, for example in neural networks and deep learning.

Facial recognition technology 

Although systems are improving fast, research from MIT in 2018 revealed that commercial products exhibited racial and gender bias. Systems were able to identify the gender of a person from a photograph with 99 per cent accuracy if the picture was of a white man. Where the subject was a darker-skinned woman, the software was only accurate 65 per cent of the time. It is vital to understand how algorithms have been developed and their potential flaws, particularly when they can be applied to data sets for which they have not been designed.

You get free social media services and free funny cat videos. In exchange, you give up the most valuable asset you have, which is your personal data.

Yuval Noah Harari, author and historian

Biased models and faulty findings

These sorts of biases against minority groups can be exacerbated where algorithms are used in techniques such as predictive analytics, whereby historic data is analysed to make predictions about the future.

Examples include:

  • an insurer that tried (and failed) to monitor social media posts to gauge how dangerously a person might drive;
  • credit card companies that have reportedly limited credit for individuals seeking marriage counselling, owing to a correlation between divorce and default;
  • price comparison websites that have reportedly quoted higher premiums for names implying ethnic minority status; and
  • crime prediction software that increases police surveillance of marginalised groups, perpetuating bias and affecting social cohesion.

There is also an issue around transparency, for example where AI systems are used to screen candidates for jobs and reject applicants for reasons that may be unclear.

Then there are cases where algorithms simply don’t work very well. Google’s much-hyped flu prediction algorithm, which analysed search activity to predict where outbreaks of the virus might strike next, suffered ‘model drift’: its accuracy degraded as people’s search behaviour changed, and it ended up significantly overestimating flu levels.

Twitter analytics during Hurricane Sandy wrongly identified Manhattan as the disaster hub, because the worst-hit areas of Breezy Point, Coney Island and Rockaway saw little Twitter activity amid blackouts, drained batteries and patchy cellular access.

‘Psychographics’, profiling and behavioural targeting

China’s social credit system, criticised by human rights groups, ranks and blacklists citizens based on social and behavioural data, ranging from their levels of debt to whether they cheat in online games or leave fake product reviews. It offers perks like preferential loans and quick access to doctors to those who score well, and restricted travel or access to public services to those who don’t.

Secondary use

Many organisations will want to use data they have collected for another purpose, or to aggregate data before they decide what to do with it. In the EU, the GDPR has rules about how this can be done with personal data.

  • The US National Security Agency’s PRISM initiative, a surveillance programme that grew out of post-9/11 surveillance practices, acquired telephone records and ‘backdoor access’ to private electronic records on citizens held by major technology companies.
  • Property records and geographic profiling were allegedly used to identify pseudonymous artist Banksy.
  • Community maps used to identify properties or clarify land rights could be reused to identify opportunities for redevelopment.
  • Data mining tools are identifying criminals through the ‘digital breadcrumbs’ they leave online. Digital evidence can be admissible in court, even when private.
  • Mobile phone and social media data have been used to halt or throw out unfounded rape allegations; however, the victims’ commissioner for London worries that women may not report cases for fear that past digital communications will be misinterpreted.

Large companies that have built successful data empires have no doubt realized the ethical implications of these practices, but they have been slow on the draw in terms of taking concrete steps.

Raegan MacDonald, senior policy manager, Mozilla Foundation

Six ethical questions for business

The six areas below form the foundation of data ethics.

Transparency

Questions to ask include: ‘What data are we collecting?’, ‘How are we collecting it?’, ‘What are we going to do with it?’ and ‘Who are we going to share it with?’ (From a personal data perspective, Google was challenged over its approach to transparency when it was fined €50m by the French data protection regulator.)


Respect

How might what we’re doing affect people? Could the insights we glean from our data limit their opportunities?


Access

Are people able to find out what data we hold on them?


Control

Are people able to decide what we offer them as a result of that data?


Expectation

Is what we’re doing with their data what they would expect us to do? Are our safeguards aligned with these expectations?


Accountability

How will we be held to account if we get it wrong?

Uncertainty about what it means to use customer data appropriately could cause a loss of trust that could lead to instability in the financial services system.

The Appropriate Use of Customer Data in Financial Services, World Economic Forum

Further insights on data ethics

Data ethics explained

  • What do we mean by data ethics?

  • What are the benefits?

  • How is data use controlled?