Business Week 70 - Not inspired, yet
Is the spatial interface the next big thing?
We've already had several iterations of HMI (Human Machine Interface). To mention just a few of the most important ones - physical buttons on machines, keyboard & mouse for PCs, and touch & multi-touch gestures for smartphones. For a couple of years it sounded like the next iteration would be voice control, but it looks like the challenges with natural language and privacy have kept adoption of audio assistants low.
Now, Apple wants to achieve something that other big players have failed at - widespread VR/AR (MR?) and a new spatial interface, where voice, eye movements, and finger gestures act together as the controllers.
We are at the very beginning of this journey, but I am already anxious about the future. Will the spatial interface be the next big step towards the final goal - a direct interface between a human brain and a machine? Or will it be another failure, as people are not yet ready for the (creepy?) vision of a mask that connects and disconnects them at the same time?
I believe it's too early to judge. The first iteration of the Apple Watch was a disaster, but now, a few years later, it's the most popular watch. Will the same happen to the (for now) extremely expensive Apple Vision?
⏳ Time spent: 30 minutes (12.5%)
Plan for week #71
Play with visionOS at least a bit to get more familiar with this idea for an interface...