Digital Rights and Emerging Technologies
The rapid expansion of digital technologies and artificial intelligence is creating a new frontier of human rights risk that traditional human rights due diligence (HRDD) frameworks were not designed to address. When the UN Guiding Principles were developed in 2011, smartphones had just become mainstream, social media was in its early years, and machine learning was an academic discipline rather than a commercial product deployed at mass scale. The UNGP framework remains fully applicable - the obligation to identify, prevent, mitigate, and account for adverse human rights impacts does not change depending on the technology involved. But the specific risks, the mechanisms of harm, and the appropriate due diligence responses differ profoundly in the digital context.
The B-Tech Project: Applying UNGPs to Technology Companies
Recognising the need for sector-specific guidance, the UN Office of the High Commissioner for Human Rights (OHCHR) launched the B-Tech Project in 2019. The project aims to produce authoritative guidance on what "respect for human rights" means in practice for technology companies, grounded in the UNGP framework. Its foundational papers cover the technology sector's specific role types - as product manufacturers, platform operators, and data intermediaries - and identify the heightened responsibilities that arise from the scale and speed at which technology companies can affect human rights.
The B-Tech Project identifies four key areas of concern specific to technology companies:
- Content governance and freedom of expression.
- Data privacy and the right to a private life.
- Non-discrimination in algorithmic decision-making.
- The use of technology by governments for surveillance and repression.
B-Tech's Core Insight: Scale and Leverage
The B-Tech Project recognises that technology companies operate at a scale and speed that creates responsibilities qualitatively different from most other businesses. A social media platform can be used to incite violence against an ethnic minority within hours. An algorithmic hiring tool can screen out women or racialised minorities from employment consideration across thousands of recruitment processes simultaneously. The UNGP principle of "leverage" - using influence over business relationships to improve human rights outcomes - takes on specific meaning for platforms that mediate access to information, employment, credit, and public services at global scale.
AI and Algorithmic Bias
Artificial intelligence systems trained on historical data often reproduce and amplify existing patterns of discrimination. This is not a theoretical risk but a documented phenomenon across multiple domains:
- Hiring and recruitment: Amazon's internally developed AI recruitment tool, which was decommissioned in 2018 after internal review, was found to systematically downgrade applications from women because it had been trained on historical hiring data reflecting male-dominated hiring patterns.
- Credit scoring and financial services: Studies have found that algorithmic credit scoring systems in the US have systematically disadvantaged Black and Latino applicants, replicating historical patterns of redlining through the proxy variables used in predictive models.
- Criminal justice: Predictive policing tools and recidivism risk assessment algorithms have been found to assign higher risk scores to Black defendants in the US at rates disproportionate to their actual recidivism outcomes, raising serious concerns about their use in sentencing and bail decisions.
- Healthcare: Algorithms used to allocate healthcare resources in the US were found to systematically underestimate the health needs of Black patients because they used healthcare expenditure as a proxy for health needs, reflecting historically unequal access to care.
From an HRDD perspective, companies deploying AI systems are responsible for identifying these risks before deployment, assessing their potential scale and severity, and implementing mitigation measures - including algorithmic auditing, diverse training data requirements, and meaningful human oversight of high-stakes decisions.
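One common pre-deployment audit check is the "four-fifths rule": compare each group's selection rate against the most-favoured group and flag any group whose rate falls below 80% of it. The sketch below is illustrative only - the group labels, data, and threshold are assumptions, not a compliance standard, and a real audit would examine far more than selection rates.

```python
# Minimal sketch of a disparate-impact audit using the "four-fifths rule":
# compare each group's selection rate to the most-favoured group's rate.
# All data and the 0.8 threshold here are illustrative assumptions.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Return {group: (rate, passes)} where `passes` is False when the
    group's selection rate is below `threshold` x the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate / best >= threshold) for g, rate in rates.items()}

# Hypothetical outcomes: group A selected 60/100, group B selected 30/100.
decisions = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 30 + [("B", False)] * 70
print(disparate_impact(decisions))
# Group B's rate (0.30) is half of group A's (0.60), below 0.8 -> flagged
```

A check like this is only a screening signal; a flagged disparity triggers deeper investigation of the model's features and training data, not an automatic conclusion.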
Surveillance in the Workplace
Employee monitoring technologies have expanded dramatically with the rise of remote work. Software that tracks keystrokes, screenshots, webcam images, and productivity metrics raises significant concerns around the right to privacy (ICCPR Article 17), freedom of thought and expression, and psychological health. The ILO's supervisory bodies have noted that intrusive monitoring can undermine workers' dignity and create environments of stress and distrust that adversely affect mental health.
The key human rights questions companies must address when deploying monitoring technologies are:
- Is the monitoring proportionate to a legitimate business objective, or is it intrusive beyond what is necessary?
- Have workers been informed of what is monitored, why, how the data is used, and who has access to it?
- Have worker representatives been consulted on monitoring policies, consistent with freedom of association principles?
- Are there safeguards preventing monitoring data from being used for discriminatory purposes?
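The four questions above lend themselves to a structured pre-deployment review. A minimal sketch, with illustrative field names that are assumptions rather than any legal standard:

```python
# Sketch of the four monitoring questions as a pre-deployment checklist.
# Item names are illustrative assumptions, not a regulatory taxonomy.

MONITORING_CHECKLIST = [
    "proportionate_to_legitimate_objective",
    "workers_informed_of_scope_and_use",
    "worker_representatives_consulted",
    "anti_discrimination_safeguards_in_place",
]

def assess_monitoring_policy(answers):
    """answers: dict mapping checklist item -> bool.
    Returns the list of unmet criteria; an empty list means all four pass."""
    return [item for item in MONITORING_CHECKLIST if not answers.get(item, False)]

# Hypothetical review where worker representatives were never consulted:
gaps = assess_monitoring_policy({
    "proportionate_to_legitimate_objective": True,
    "workers_informed_of_scope_and_use": True,
    "worker_representatives_consulted": False,
    "anti_discrimination_safeguards_in_place": True,
})
print(gaps)  # ['worker_representatives_consulted']
```

In practice each item would carry documented evidence (a proportionality assessment, consultation minutes), not a bare boolean, but the gating logic is the same: any unmet criterion blocks deployment pending remediation.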
Data Privacy and the Right to Private Life
The right to privacy is enshrined in Article 12 of the Universal Declaration of Human Rights and Article 17 of the International Covenant on Civil and Political Rights. In the digital age, this right is exercised primarily through control over personal data. The European Union's General Data Protection Regulation (GDPR) and analogous laws in Brazil (LGPD), India (DPDP Act 2023), and other jurisdictions translate privacy rights into specific legal obligations around data collection, processing, storage, and deletion.
For companies operating internationally, data privacy due diligence requires:
- Mapping the personal data the company collects, the legal basis for processing it in each jurisdiction, and the risks of unauthorised access or disclosure.
- Assessing whether data sharing arrangements with government authorities or third parties create risks to individuals, particularly in jurisdictions where data may be used for surveillance or repression.
- Evaluating the human rights implications of data localisation requirements in authoritarian states, where compliance with local law may require placing data within reach of security services.
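The mapping step above is often implemented as a structured data inventory. The record sketch below is a hypothetical illustration - the field names, risk heuristic, and example values are assumptions, not a regulatory schema such as a GDPR Article 30 record:

```python
# Illustrative sketch of a personal-data inventory record supporting
# privacy due diligence mapping. Field names and the high-risk heuristic
# are assumptions for illustration, not a regulatory schema.

from dataclasses import dataclass

@dataclass
class DataInventoryRecord:
    data_category: str           # e.g. "customer messaging metadata"
    jurisdictions: list          # where the data is collected or stored
    legal_basis: dict            # jurisdiction -> legal basis for processing
    shared_with: list            # third parties / authorities with access
    localisation_required: bool  # must the data remain in-country?
    risk_notes: str = ""         # surveillance or repression exposure, etc.

    def high_risk(self):
        """Flag records needing escalated due diligence: government access
        combined with an in-country data localisation requirement."""
        gov_access = any("authority" in p.lower() for p in self.shared_with)
        return gov_access and self.localisation_required

record = DataInventoryRecord(
    data_category="customer messaging metadata",
    jurisdictions=["Country X"],
    legal_basis={"Country X": "legal obligation"},
    shared_with=["local telecom authority"],
    localisation_required=True,
    risk_notes="data reachable by security services under local law",
)
print(record.high_risk())  # True -> escalate for human rights review
```

The point of structuring the inventory this way is that risk flags become queryable: the company can systematically surface every data category exposed to a given jurisdiction's authorities rather than relying on ad hoc review.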
Analogy: Technology as a Powerful Tool That Can Harm
A power tool can drive a screw or injure a hand, depending on how it is designed, what safety guards are in place, and who operates it with what level of training. AI and digital surveillance technologies work similarly: they are powerful instruments that can achieve legitimate purposes - efficiency, safety, accessibility - but can also cause serious harm if deployed without adequate safeguards, oversight, and accountability. The HRDD obligation for technology companies is not to avoid using powerful tools, but to ensure those tools are designed, deployed, and governed in ways that respect the people they affect.
Gig Economy and Platform Work
Digital platform companies have created new forms of work that sit outside traditional employment frameworks. Ride-hailing drivers, food delivery couriers, freelance task workers, and content moderators, often classified as "independent contractors", may lack basic labour protections: minimum wage guarantees, social security coverage, sick leave, and the right to organise and bargain collectively. The ILO's 2021 World Employment and Social Outlook report on digital labour platforms highlighted that platform work is associated with income volatility, lack of social protection, and algorithmic management that can feel more punitive and opaque than traditional supervision.
Content moderators deserve particular attention. Workers employed (often through third-party contractors) to review graphic and violent content on social media platforms face severe psychological harm from repeated exposure to images of violence, child exploitation, and self-harm. Several major technology companies have faced legal claims from content moderators in the Philippines, Kenya, and Ghana regarding inadequate psychological support and failure to provide safe working conditions.
Example: Algorithmic Surveillance in the Workplace
Amazon's use of productivity monitoring in its fulfilment warehouses - tracking "time off task" through scanners and cameras and using algorithms to flag workers for disciplinary action - has drawn regulatory scrutiny in several jurisdictions: in January 2024, France's data protection authority (CNIL) fined Amazon France Logistique EUR 32 million for an excessively intrusive worker monitoring system. Beyond data protection, labour rights advocates have argued the system creates working conditions incompatible with dignity and physical health, with workers reporting that they avoid bathroom breaks to maintain productivity metrics. This case illustrates how digital rights and labour rights violations frequently co-occur in algorithmic management systems.
Key Takeaways
- The UN Guiding Principles apply fully to technology companies, but the B-Tech Project (OHCHR, 2019) provides sector-specific guidance addressing the particular scale and leverage of technology platforms in areas including content governance, data privacy, algorithmic discrimination, and enabling state surveillance.
- AI systems trained on historical data systematically reproduce patterns of discrimination in hiring, credit, criminal justice, and healthcare, requiring pre-deployment auditing, diverse training data, and meaningful human oversight of high-stakes decisions.
- Workplace surveillance technologies engage ICCPR Article 17 privacy rights and ILO decent work standards, requiring proportionality, informed consent, worker consultation, and non-discrimination safeguards.
- Platform and gig economy workers are particularly vulnerable to labour rights violations because they often lack legal employment status, access to social protection, and the practical ability to exercise freedom of association.
- Content moderation workers face severe psychological harm from exposure to graphic material, and companies' duty of care obligations under the UNGPs apply equally to these workers whether they are directly employed or supplied through third-party contractors.