In 2024, Computer Weekly’s coverage of data and ethics repeatedly examined the ethical dilemmas surrounding data-driven systems, especially AI. They reported on the copyright debates around generative AI tools, raised concerns about AI’s environmental toll, highlighted internet tracking tools that invade privacy, and explored how autonomous weapons challenge our moral awareness.
There were significant stories on the social consequences of these technologies, including how they can fuel violence against migrants and shape political and social dynamics.

On January 14, the IMF released an analysis of AI’s impact on the labor market. It warned that while AI could “jumpstart productivity and boost global growth,” it might just as easily “replace jobs and deepen inequality” if not managed correctly. The report noted that capital income inequality, unlike labor income inequality, tends to rise as AI adoption increases: AI displaces workers while raising demand for capital, enhancing returns for those who already hold assets, predominantly high earners.
In January, Anthropic, a generative AI company, made waves in a US court by arguing that using copyrighted material to train large language models falls under “fair use.” The company claimed that restricting access to copyrighted content would stifle the development of essential AI tools. This came in response to a lawsuit from music publishers alleging copyright infringement over song lyrics. Anthropic insisted that its training methods constituted lawful analysis rather than any violation of expressive rights.
During a conversation with the Migrants Rights Network (MRN) and the Anti-Raids Network (ARN), the groups highlighted how data-sharing between public and private entities fuels a hostile environment for migrants in the UK. With the Labour government ramping up immigration enforcement, they revealed how enforcement teams draw on data from a wide range of sources, instilling fear and deterring migrants from seeking help. Julia Tinsley-Kent from MRN explained that this approach creates an atmosphere of self-policing among vulnerable populations.
A major conference in Vienna brought military tech experts together to discuss the chilling psychological effects of AI-powered weapons systems. Concerns included dehumanization, biased target selection, and the emotional detachment of operators from their actions. Experts questioned whether genuine human control over these systems is even possible given the accelerating pace of warfare.
The second global AI summit in Seoul saw governments and companies reaffirm commitments to safer AI development. Participants voiced the need for mandatory safety protocols and broader public involvement. The summit produced tangible outcomes, such as the Seoul Declaration affirming the importance of multi-stakeholder collaboration. Over two dozen governments agreed to establish common risk thresholds for advanced AI models to mitigate potential harm.
However, not all developments were positive. Privacy experts raised alarms about university and charity websites that aim to support people in crisis but inadvertently share sensitive visitor data with advertisers. The embedded tracking tools often collect personal information without visitors’ consent, raising serious privacy concerns.
Documentary director Thomas Dekeyser shared insights about Clodo, a secretive group of French IT workers who sabotaged tech in the 1980s. He highlighted the diverse motivations behind technological refusal, pushing back against the stigmas surrounding those who oppose certain tech developments.
In May, hundreds of workers on Amazon’s Mechanical Turk platform found themselves locked out of their accounts due to a suspected glitch in Amazon’s payment system, leaving them unable to earn income on the platform. Advocacy group Turkopticon called attention to the broader issues around worker protections on such platforms.
Refugee lawyer Petra Molnar discussed the extreme dangers faced by migrants at borders, exacerbated by surveillance technologies and hostile political climates. She noted that these tech solutions create lethal conditions for individuals seeking safety, often leading them to desperate measures to avoid detection.
Finally, discussions at the AI Summit London emphasized how AI can support sustainability initiatives, while also acknowledging its environmental impacts. Speakers noted the need to integrate sustainability considerations early in the tech development lifecycle to mitigate the considerable energy and resource consumption associated with AI systems.