Interfaces of the future must focus on human needs

31.08.2018 | Artificial Intelligence | Intuitive Interfaces | Territory Projects

Virtual interfaces that were once Hollywood fantasies are fast becoming a reality. But, asks Lee Fasciani, founder and creative director of Territory Projects, will they make our lives better?

Proletariat.ai by Eirini Malliaraki

To truly transform interaction, we need customisable, product-agnostic virtual assistants that unite disparate hardware, operating systems and software as one seamless ecosystem.

Lee Fasciani, founder and creative director, Territory Projects

From Hollywood fantasy to our own reality, technology is nearing a level of sophistication where the fictional interfaces and virtual platforms we’ve seen in blockbuster films will soon be part of our daily lives.

Even today, a host of once-futuristic interfaces are in advanced development, from gesture-based to holographic, voice, and brain-computer interfaces, as well as the already-familiar AI assistants, and virtual reality (VR) and augmented reality (AR) applications that have put intuitive experiences within our grasp.

In fact, I believe the key developments that will shape the future of human experiences and interaction will be driven by virtual assistants, conversational user interfaces, and a wealth of AR applications. Together with advances in machine learning and AI, these technologies will create more harmonious relationships between people and technology.

Speech recognition software still needs to reach a point where it treats users as individuals and recognises dialects, idiosyncratic use of language, and even children’s voices.

Humanised user interfaces are already with us: since Apple launched Siri in 2011, natural language interfaces have become standard for voice devices, and in the three years since Amazon released its Echo smart speakers in 2015, consumers have grown comfortable talking to technology.

While answers can still be by turns entertaining and frustrating, speech recognition has grown increasingly sophisticated. But, arguably, there is still some way to go. Speech recognition software has yet to reach the point where it treats users as individuals, recognising dialects, idiosyncratic use of language, children’s voices and, more crucially than ever, professional and intellectual routines and preferences. Combined with genuine conversational flow, that will bring a time when speaking instructions to a voice assistant and refining requests mid-conversation feels as natural as talking to a friend.
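To make that idea concrete, here is a minimal, purely illustrative sketch in Python of how an assistant might pair a per-user profile with conversational context so that a follow-up remark refines the previous request. The class names, fields and phrasing rules are hypothetical, invented for this example rather than drawn from any real speech-recognition product.

```python
# Illustrative sketch only: hypothetical names, no real speech-recognition API.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """Per-user traits a future recogniser might adapt to."""
    name: str
    dialect: str                                      # e.g. "en-GB"
    vocabulary: dict = field(default_factory=dict)    # personal shorthand -> meaning


@dataclass
class Conversation:
    """Keeps just enough context for a follow-up to refine the last request."""
    profile: UserProfile
    last_intent: dict | None = None

    def request(self, utterance: str) -> dict:
        # Expand the user's personal shorthand before interpreting the request.
        for phrase, meaning in self.profile.vocabulary.items():
            utterance = utterance.replace(phrase, meaning)

        if utterance.startswith("make that") and self.last_intent:
            # A refinement: adjust the previous intent instead of starting over.
            self.last_intent["detail"] = utterance.removeprefix("make that ").strip()
        else:
            self.last_intent = {"intent": utterance, "detail": None}
        return self.last_intent


# A short exchange: an initial instruction, then a conversational refinement.
profile = UserProfile(name="Ana", dialect="en-GB", vocabulary={"the usual": "a flat white"})
chat = Conversation(profile)
print(chat.request("order the usual"))   # {'intent': 'order a flat white', 'detail': None}
print(chat.request("make that decaf"))   # refines the previous request rather than restarting
```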

More recently, connected virtual assistants arrived in our lives promising frictionless services. Perhaps unsurprisingly, the assistants currently on the market reflect the interests and priorities of their developers: Amazon’s Alexa, for example, is strongest as a shopping assistant, while Google Assistant is optimised to work with Google’s own suite of products.

In future educational environments, students will put on MR glasses and immediately gain access to interactive material, tailor-made to support individuals’ learning abilities.

Yet to truly transform interaction and harness the potential of these applications, consumers need customisable, product-agnostic virtual assistants that function across disparate hardware, operating systems and software, uniting them as one seamless ecosystem. Such dynamic and intelligent systems would simplify consumers’ busy, distraction-filled lives, letting them carry out complex research, manage home utilities, order shopping and make appointments through slick, voice-led interfaces.
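As a rough illustration of what “product-agnostic” could mean in practice, the sketch below shows a single assistant front end delegating requests to interchangeable vendor adapters. The provider names and routing rules are hypothetical, invented for illustration; they do not correspond to any vendor’s actual API.

```python
# Hypothetical sketch of a product-agnostic assistant layer.
from abc import ABC, abstractmethod


class Provider(ABC):
    """One adapter per vendor ecosystem (smart home, shopping, calendar...)."""

    @abstractmethod
    def can_handle(self, request: str) -> bool: ...

    @abstractmethod
    def handle(self, request: str) -> str: ...


class SmartHomeProvider(Provider):
    def can_handle(self, request: str) -> bool:
        return any(word in request for word in ("lights", "heating", "thermostat"))

    def handle(self, request: str) -> str:
        return f"[home] done: {request}"


class ShoppingProvider(Provider):
    def can_handle(self, request: str) -> bool:
        return "order" in request or "buy" in request

    def handle(self, request: str) -> str:
        return f"[shop] added to basket: {request}"


class Assistant:
    """A single voice-led front end that routes each request to whichever
    connected ecosystem can serve it, keeping the seams invisible to the user."""

    def __init__(self, providers: list[Provider]):
        self.providers = providers

    def ask(self, request: str) -> str:
        for provider in self.providers:
            if provider.can_handle(request):
                return provider.handle(request)
        return f"sorry, no connected service can handle: {request}"


assistant = Assistant([SmartHomeProvider(), ShoppingProvider()])
print(assistant.ask("turn down the heating"))
print(assistant.ask("order more coffee"))
```

The point of the abstraction is that adding a new vendor means writing one more adapter, not rebuilding the assistant, which is what would let disparate hardware, operating systems and software appear as one seamless ecosystem.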

There is also great potential, beyond entertainment, edutainment and retail, for AR and mixed-reality (MR) technology to transform human knowledge and skills. AR and MR already show great promise in workplace training, surgical interventions and manufacturing. In time, I believe these technologies will merge with AI, allowing users not only to see and interact with virtual imagery but, as the technology is refined, to revolutionise product prototyping, medical care and, in particular, education.

In fact, the education sector could undergo massive transformation, turning what were once Hollywood projections of virtual learning into reality. In future educational environments, students will put on MR glasses and immediately gain access to engaging visual and interactive material, tailor-made to their subject, individual learning abilities and curricular level. Such positive interactions with richly detailed learning material could truly democratise learning, opening up educational opportunities around the world that simply don’t exist today.

Lee Fasciani is founder and creative director of Territory Projects, a branding and digital product design agency offering innovative brand-led solutions.