A significant security flaw in Apple’s Vision Pro mixed reality headset, dubbed GAZEploit by the researchers who discovered it, has brought concerns about the safety of virtual keyboard input to the forefront. The flaw, which has since been patched, allowed attackers to infer sensitive data, such as passwords, by analyzing a user’s eye movements as they typed on the device’s virtual keyboard.
The attack, identified by security researchers from the University of Florida, CertiK Skyfall Team, and Texas Tech University, has been assigned the CVE-2024-40865 identifier. GAZEploit exploits vulnerabilities in the Vision Pro’s gaze-controlled text entry system, enabling attackers to remotely track and reconstruct text input by monitoring virtual avatars.
How GAZEploit Works
GAZEploit works by tracking the eyes of a Vision Pro user’s virtual avatar, which mirror the wearer’s real eye movements. Because the headset relies on eye tracking for typing and other interactions, the rendered avatar inadvertently leaked enough biometric signal for malicious actors to infer what the user was typing on the virtual keyboard. According to the researchers, the attack uses machine learning to analyze those eye movements and distinguish typing sessions from other activities such as watching videos or playing games.
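The article does not describe the researchers’ model in detail, but the idea behind this stage can be illustrated with a minimal sketch. Everything below is an assumption made for demonstration: the window features, the synthetic data standing in for labeled Persona recordings, and the choice of a random-forest classifier are not taken from the GAZEploit paper.

```python
# Illustrative sketch: classifying windows of a gaze trace as "typing" vs.
# "other activity". Feature choices, window length, and the classifier are
# assumptions for demonstration, not the researchers' actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(gaze_xy: np.ndarray) -> np.ndarray:
    """Summarize one fixed-length window of 2-D gaze points (N x 2).

    Typing tends to produce short dwells separated by small, regular
    saccades confined to the keyboard region; these crude statistics
    try to capture that pattern.
    """
    steps = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)  # frame-to-frame movement
    return np.array([
        steps.mean(),                              # average movement per frame
        steps.std(),                               # regularity of movement
        (steps < 0.01).mean(),                     # fraction of near-still "dwell" frames
        gaze_xy[:, 0].std(), gaze_xy[:, 1].std(),  # spatial spread of the gaze
    ])

# Synthetic stand-in for labeled recordings: typing gaze stays tightly
# clustered around the keyboard, other activity wanders across the scene.
rng = np.random.default_rng(0)
typing = [rng.normal(0.5, 0.05, (120, 2)) for _ in range(50)]
other = [rng.normal(0.5, 0.30, (120, 2)) for _ in range(50)]
X = np.array([window_features(w) for w in typing + other])
y = np.array([1] * 50 + [0] * 50)  # 1 = typing, 0 = other

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([window_features(rng.normal(0.5, 0.05, (120, 2)))]))  # expected: [1]
```

A real attack would extract far richer features from the rendered avatar’s eyes, but the overall shape, featurize a window of gaze samples and then classify it, is what such a pipeline typically looks like.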
By analyzing eye aspect ratio (EAR) and gaze direction, attackers could estimate where the user was looking on the virtual keyboard and potentially reconstruct keystrokes. A threat actor could exploit this vulnerability simply by capturing the victim’s virtual avatar through online meeting apps, live-streaming platforms, or video calls, effectively compromising the user’s privacy.
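The eye aspect ratio referenced above is a standard quantity in eye-tracking work: roughly, the eye’s vertical opening divided by its width, computed from eyelid landmarks. The sketch below, which uses an assumed landmark ordering and a toy keyboard layout rather than anything from the paper, shows how an EAR value and a gaze point might be turned into a key guess.

```python
# Illustrative sketch: estimating eye aspect ratio (EAR) from six eye
# landmarks and snapping a gaze point to the nearest virtual key.
# The landmark ordering and keyboard layout are assumptions for
# demonstration only.

import numpy as np

def eye_aspect_ratio(landmarks: np.ndarray) -> float:
    """EAR for one eye from six landmarks (p1..p6) as a 6x2 array:
    p1/p4 are the horizontal corners, p2/p6 and p3/p5 the upper and
    lower lids. A low EAR means a closing eye (e.g. a blink)."""
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Toy keyboard: key label -> (x, y) center in normalized screen coordinates.
KEY_CENTERS = {
    "q": (0.05, 0.2), "w": (0.15, 0.2), "e": (0.25, 0.2), "r": (0.35, 0.2),
    "a": (0.10, 0.5), "s": (0.20, 0.5), "d": (0.30, 0.5), "f": (0.40, 0.5),
}

def gaze_to_key(gaze_xy: tuple[float, float]) -> str:
    """Return the key whose center lies closest to the current gaze point."""
    gx, gy = gaze_xy
    return min(KEY_CENTERS, key=lambda k: (KEY_CENTERS[k][0] - gx) ** 2
                                          + (KEY_CENTERS[k][1] - gy) ** 2)

# Example: a wide-open eye while the gaze rests near the "s" key.
open_eye = np.array([[0.0, 0.5], [0.3, 0.9], [0.7, 0.9],
                     [1.0, 0.5], [0.7, 0.1], [0.3, 0.1]])
print(round(eye_aspect_ratio(open_eye), 2), gaze_to_key((0.19, 0.52)))  # -> 0.8 s
```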
A New Frontier in Cybersecurity Threats
“This is the first known attack that exploits leaked gaze information to remotely perform keystroke inference,” the researchers noted. The attack leverages supervised learning models trained on recordings of Personas (the virtual avatars that represent users in the Vision Pro ecosystem), making it possible to infer keystrokes without any physical access to the headset.
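A rough intuition for the reconstruction step, once typing activity has been detected, is to segment the gaze trace into dwell periods and treat each dwell as a candidate keystroke. The sketch below is a deliberately simplified stand-in for the researchers’ trained models; the dwell radius and minimum duration are assumed values, not figures from the paper.

```python
# Illustrative sketch: segmenting a gaze trace into candidate keystrokes by
# detecting dwell periods, i.e. stretches of frames where the gaze stays
# nearly still. The radius and minimum duration are assumed values.

import numpy as np

def detect_dwells(gaze_xy: np.ndarray, radius: float = 0.02,
                  min_frames: int = 8) -> list[tuple[int, int]]:
    """Return (start, end) frame indices of stretches where the gaze stays
    within `radius` of the running dwell center for at least `min_frames`
    frames. Each such dwell is a candidate keystroke."""
    dwells, start = [], 0
    center = gaze_xy[0]
    for i in range(1, len(gaze_xy)):
        if np.linalg.norm(gaze_xy[i] - center) > radius:
            if i - start >= min_frames:
                dwells.append((start, i))
            start, center = i, gaze_xy[i]
        else:
            # Update a running average so slow drift does not split a dwell.
            center = (center * (i - start) + gaze_xy[i]) / (i - start + 1)
    if len(gaze_xy) - start >= min_frames:
        dwells.append((start, len(gaze_xy)))
    return dwells

# Toy trace: the gaze holds on one spot, jumps quickly, then holds on another.
trace = np.vstack([np.full((20, 2), (0.20, 0.50)),                # first dwell
                   np.linspace((0.20, 0.50), (0.35, 0.20), 5),    # saccade
                   np.full((20, 2), (0.35, 0.20))])               # second dwell
print(detect_dwells(trace))  # -> [(0, 21), (24, 45)]: two candidate keystrokes
```

Each dwell could then be mapped to the nearest key, as in the earlier sketch, and the resulting key sequence post-processed into likely words or passwords.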
The implications of this vulnerability are serious. Given that virtual reality (VR) and augmented reality (AR) technologies are becoming more prevalent, GAZEploit highlights a critical weakness in how personal data is handled in these emerging tech ecosystems. Attackers using this method could potentially steal passwords, sensitive business information, or personal details—all by analyzing a user’s avatar during a video call or virtual meeting.
Apple’s Response: Patch in visionOS 1.3
Apple patched the vulnerability in visionOS 1.3, released on July 29, 2024. The update addresses the issue by suspending the Persona (the virtual avatar component) while the virtual keyboard is active, preventing attackers from collecting and analyzing eye-movement data during typing sessions.
According to Apple’s security advisory, the vulnerability was tracked to a component called Presence, which allowed inputs to the virtual keyboard to be inferred from the virtual avatar, or Persona. With the update in place, the Persona is suspended whenever the keyboard is in use, so eye movements made during sensitive data entry are no longer exposed to observers.
What This Means for the Future of VR/AR Security
GAZEploit represents a pivotal moment in cybersecurity for AR/VR devices: it demonstrates that biometric signals such as eye tracking can be exploited through channels previously assumed to be safe. This is particularly relevant for industries and consumers adopting mixed-reality technologies for daily use, and it underscores the need for continued innovation in safeguarding sensitive data within these digital environments.
As more businesses and individuals move towards virtual meeting platforms and remote collaborations, the risk of similar attacks exploiting biometric data is likely to grow. Cybersecurity researchers are calling for more robust encryption and stricter controls on how biometric information is used in virtual spaces to protect users from emerging threats like GAZEploit.