Apple has unveiled its much-anticipated mixed reality headset, the Apple Vision Pro, describing it as “a revolutionary spatial computer that seamlessly blends digital content with the physical world, while allowing users to stay present and connected to others”.
At the launch event, Apple described what the headset delivers as augmented reality, though the device itself looks more like a virtual reality headset. Powered by visionOS, which Apple describes as “the world’s first spatial operating system”, the headset introduces a three-dimensional user interface controlled by the user’s eyes, hands, and voice.
Vision Pro will be available early next year, initially in the US, with more countries to follow, priced at $3,499 (£2,816) – seven times the cost of Meta’s recently announced Meta Quest 3 headset, due to launch in the autumn.
“Today marks the beginning of a new era for computing,” said Apple CEO, Tim Cook. “Just as the Mac introduced us to personal computing, and iPhone introduced us to mobile computing, Apple Vision Pro introduces us to spatial computing. Built upon decades of Apple innovation, Vision Pro is years ahead and unlike anything created before – with a revolutionary new input system and thousands of groundbreaking innovations. It unlocks incredible experiences for our users and exciting new opportunities for our developers.”
The device supports Magic Keyboard and Magic Trackpad, so users can bring the capabilities of their Mac into Vision Pro wirelessly, creating a private, portable 4K display.
With two ultra-high-resolution displays and an advanced spatial audio system, Apple said that Vision Pro can transform any space into a personal movie theatre with a screen that feels 100 feet wide. Users can watch movies and TV shows, including in three dimensions.
Vision Pro also offers Immersive Environments, where a user’s world can grow beyond the dimensions of a physical room with dynamic landscapes that can help them focus or reduce clutter in busy spaces. A twist of the Digital Crown lets a user control how present or immersed they are in an environment.
FaceTime calls take advantage of the room around the user, with everyone on the call shown in life-size tiles, as well as spatial audio. Users wearing Vision Pro during a FaceTime call appear as a Persona – a digital representation of themselves created using Apple’s most advanced machine learning techniques – which reflects face and hand movements in real time. Users can do things together like watch a movie, browse photos, or collaborate on a presentation.
Finally, Apple Vision Pro has an all-new App Store where users can discover apps and content from developers, and access hundreds of thousands of familiar iPhone and iPad apps that run well and automatically work with the headset’s new input system. Apple’s developer community can use Vision Pro and visionOS to design new app experiences, and reimagine existing ones for spatial computing.