
Nvidia advances accelerated computing advantage

In the second quarter of its 2025 fiscal year, Nvidia's datacentre business posted a 154% year-on-year increase in revenue, reaching $26.3bn. This drove the company's record overall quarterly revenue of $30bn, up 15% from the previous quarter and 122% from the same period a year earlier.
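As a rough check on what those growth rates imply, the short sketch below back-calculates the approximate comparison-period figures from the percentages quoted above. The derived dollar amounts are rounded approximations of my own, not numbers reported by Nvidia.

# Back-of-the-envelope check: revenue in the comparison periods implied by the
# reported growth rates. The inputs come from the article; the outputs are
# rounded approximations, not Nvidia's reported figures.

def implied_base(current_bn: float, growth_pct: float) -> float:
    """Revenue ($bn) in the comparison period implied by the current figure and growth rate."""
    return current_bn / (1 + growth_pct / 100)

datacentre_q2 = 26.3   # $bn, up 154% year on year
total_q2 = 30.0        # $bn, up 15% on the quarter, 122% on the year

print(f"Implied year-ago datacentre revenue: ${implied_base(datacentre_q2, 154):.1f}bn")  # ~ $10.4bn
print(f"Implied prior-quarter total revenue: ${implied_base(total_q2, 15):.1f}bn")        # ~ $26.1bn
print(f"Implied year-ago total revenue: ${implied_base(total_q2, 122):.1f}bn")            # ~ $13.5bn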

Nvidia’s founder and CEO, Jensen Huang, attributed this success to datacentre operators worldwide shifting to accelerated computing. Growth in the datacentre business was driven by demand for Nvidia’s Hopper GPU computing platform for training and inference in large language models, recommendation engines and generative AI applications.

The company said the growth in revenue came from cloud service providers as well as consumer internet and enterprise companies. Nvidia’s networking products also recorded a 16% sequential increase in revenue.

Huang emphasised the importance of accelerated computing in meeting the growing demand for computational power. He highlighted the cost savings of GPU-powered servers and predicted that all datacentres would eventually incorporate GPU technology.

Huang’s vision for accelerated computing is to give datacentre operators more computational power in a sustainable way, while reducing the cost of computing and avoiding “computing inflation”. He pointed to the advantages of liquid-cooled datacentres, which he said can deliver three to five times the AI throughput of conventionally cooled facilities.