Thursday, November 21, 2024

The Emergence of Cyber Clones: When What You See Isn’t Truth

Equipped with just a Flipper Zero, some deepfake apps, and a mischievous spirit, I recently demonstrated how easy it is to convincingly impersonate almost anyone.

I can be a top Hollywood actor without ever leaving my hometown of Bournemouth, channel Jason Bourne without climbing any buildings, or even become the “Wolf of Wall Street,” albeit without any romantic escapades with Margot Robbie. How is this possible? Welcome to the era of AI.

We’re living in a time when AI and deepfakes enable us to unleash our creativity and, unfortunately, spread misinformation. We can clone public figures and even infiltrate offices, impersonating CEOs and taking selfies in boardrooms—exactly what I did recently.

My aim was to showcase that when AI is harnessed for malicious purposes, it creates a perilous new landscape in the cyber realm, giving criminals unprecedented opportunities for elaborate scams straight out of a Hollywood thriller. To substantiate my claim, I needed a willing participant.

Having already approached many friends for my research-based hacking experiments, I found a candidate in Jason Gault, founder of TeamJobs. The objective was clear: infiltrate Jason’s office, bypass all security protocols, and do so without raising any suspicion.

Jason was an ideal choice for this experiment, boasting thousands of LinkedIn followers, a fancy office in Dorset, and a naïve belief that what I was suggesting was impossible. Spoiler: he was mistaken.

During the initial phase of my experiment, I successfully cloned Jason’s office RFID card during a meeting, knowing it would grant me access to the building. I used a device called the Flipper Zero, capable of cloning or reading nearly any signal it encounters—car keys, hotel key cards, office access fobs, even contactless bank cards. It’s available on Amazon for under £180, a small investment considering the potential for criminal exploits.
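To give a sense of just how low the barrier is, here is a minimal sketch using the open-source nfcpy Python library with a compatible USB NFC reader. To be clear, this is not the Flipper Zero’s workflow, just an analogous illustration of how few lines of code it takes to read a card’s unique identifier, which is the very value many legacy door-entry systems rely on for authentication.

    # Illustrative only: reading a contactless card's UID with the
    # open-source nfcpy library and a compatible USB NFC reader.
    # Many legacy access-control systems authenticate on little more
    # than this identifier, which is why cloning devices are so
    # effective against them.
    import nfc

    def read_card_uid():
        clf = nfc.ContactlessFrontend('usb')  # open the first USB NFC reader
        try:
            # Wait for a card; returning False from on-connect hands the
            # tag object straight back instead of holding the connection.
            tag = clf.connect(rdwr={'on-connect': lambda tag: False})
            return tag.identifier.hex() if tag else None
        finally:
            clf.close()

    if __name__ == '__main__':
        uid = read_card_uid()
        print(f"Card UID: {uid}" if uid else "No card detected")

Modern systems mitigate this by using challenge-response cards rather than static identifiers, which is precisely why cards that transmit a bare UID are such easy prey.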

After cloning the card, I had to navigate past CCTV cameras and security personnel. This turned out to be a breeze, thanks to a real-time face-swapping tool called Swapface, which allowed me to alter my appearance. As I passed the cameras, the security guards were none the wiser.

Once inside the building, I made my way to the boardroom, where I persuaded an unsuspecting employee to take a picture of me. It was alarmingly easy—both Jason and I were taken aback by how well it worked.

But this was merely the first phase of the experiment. For the next step, we set out to see whether a deepfake of Jason could deceive his more than 2,000 LinkedIn followers. Once again, it proved surprisingly simple.

Using HeyGen’s AI video generator, I created and shared a LinkedIn video of Jason announcing a fictitious cycling trip from the UK to Australia. Given Jason’s avid interest in cycling, it wasn’t entirely far-fetched to his audience. When the video garnered over 4,000 views and hundreds of likes and comments, I knew I had succeeded.

In the end, a panicked call from Jason’s CFO inquiring about his six-month plans forced us to take the video down earlier than planned. Still, the experiment highlighted how effectively deepfakes can propagate misinformation.

These were just experiments conducted with easily accessible tools, which raises the question: what could a malicious actor achieve? Imagine a criminal impersonating a CEO to trick employees into processing an urgent bank transfer, or posting a video on LinkedIn to promote a fake charity fundraiser.

People instinctively trust familiar faces, but in the digital landscape, that trust is no longer guaranteed. While individuals are advised to be wary of emails from unknown sources, how should they respond to requests from recognized faces? With advancements in deepfakes and AI, we may not always be able to discern authenticity in these cases.

This calls for a new level of security awareness training, one that equips computer users to recognize deepfakes, prioritizing keen observation to identify tell-tale signs such as mismatched lip syncing or unnatural visual artifacts.

These challenges are no longer hypothetical, and it’s crucial to prepare for them now. At DTX London this week, I hosted a session titled “The Rise of Cyber Clones,” illustrating how AI is enabling criminals in unprecedented ways. I shared my recent exploits and detailed how I impersonated Jason Gault, infiltrated his office, and deceived many of his LinkedIn followers—all in just a few simple steps.

The aim of the session was to educate and remind everyone that seeing isn’t always believing in our digital world. Stay vigilant out there.

Jake Moore is a global cybersecurity advisor at ESET. A former police officer from Dorset specializing in digital forensics and cybercrime, Jake transitioned to the private sector in 2018, guiding clients through their security challenges. He also engages in security research and analysis and enjoys exploring innovative ethical hacking methods, often utilizing AI. He is a regular speaker and commentator on significant cybersecurity issues.