Naho Kitano watches as a small mechanical doll bearing a tea cup on a saucer rolls towards her on the table. She lifts the empty cup and the doll stops.
She mimes drinking tea from the empty cup and places it on the saucer again. The doll whirrs to life, executes a graceful half turn and returns to the doll maker at the end of the table.
The doll is a Karakuri puppet, one of the automatons made in Japan between the 17th and 19th centuries primarily for entertainment. (The word karakuri means ‘mechanism’ or ‘trick’.) And Naho Kitano, robot sociologist, roboethics researcher and the founder of HiBot, has been studying the relationship between robots and humans in society.
In an interview for the documentary ‘The Human Robot’, where she interacts with the tea-serving mechanical doll, she talks about how the old Shinto philosophy of imbuing objects with a soul has influenced the Japanese people’s relationship with modern robots.
Kitano explains that the Japanese live in harmony with technology and interact naturally with robots because they see them as ‘alive’, but only in a sense defined by their interactions with the machine, a relationship that evolves with their needs.
With time, our dolls and robots have become highly advanced. Some robots even look, talk and behave like us. But many view them with suspicion. Will they become more and more human-like? What does that mean for us — the original humans?
But what we don’t often see is that as we shape technology, it shapes us too. We’re augmenting ourselves in ways we couldn’t have imagined a hundred years ago.
Mechanising Humans and Humanising Tech
The line between biology and technology is blurring. For decades, we’ve been using computers as extensions of our brains, storing large amounts of data and making sense of it. That raised important questions like ‘what is intelligence?’ and ‘how are we more intelligent than the most intelligent machine?’ As we built machines to do our work for us, we began to understand ourselves better too.
From crafting mechanical dolls and automatons to designing wearables like Intel’s Spider Dress and cyborg bodies, our history with machines is on a clear path towards merging with them to overcome human limitations.
Julian Huxley, evolutionary biologist and the first Director-General of UNESCO, is credited with coining the term ‘transhumanism’. In a set of essays called New Bottles for New Wine, he writes that transhumanism can help us collectively transcend our current selves, and that once we believe it is possible, “the human species will be on the threshold of a new kind of existence, as different from ours as ours is from that of Peking man [an extinct species of early humans]”.
The tenets of transhumanism rest on the premise of three supers: super longevity, super intelligence and super wellbeing. The predicted point in time when machines reach human-level intelligence and we begin to live this elevated existence is termed the ‘technological singularity’.
But long before this probable singularity event, we are already functioning in closer resonance with machines than we ever have before. And we are moving past current tech solutions into the age of AI to achieve this.
A large part of why that’s happening is that we’ve realized a great truth: the why behind the humanisation of technology. As Hiroshi Ishiguro, a professor of robotics at Osaka University, puts it:
“The human brain has a very strong innate function to recognize humans. So that means a human is the best interface for a human.”
We’ve discovered that to integrate advanced machines into our everyday lives, we need to make them speak the same language as us, read our emotions, understand context, empathize with our problems and, in essence, have more in common with the human race than with an inanimate piece of technology. And artificial intelligence is our best bet to get there.
The Present Lays the Path for the Edited Human
So we humanise tech to be able to work easily with machines without having to learn how each new, complex system works. But as the machines around us get better and better, we realize that many of them could be far more useful if they were an actual part of us.
There would be no need to carry devices if they were attached to our clothes and accessories, like the Snap Spectacles, or even made part of our bodies, like RFID chips inserted under the skin, which is not as uncommon as you may think. The retail and fashion industries touch a wide range of human necessities and interests, from health, work and self-identity to art, science and entertainment. And it becomes easier to enhance the overall quality of life and increase functionality across these spheres when we’re not limited by our biological bodies.
For example, small, organized tech cultures like the Quantified Self (QS), whose members are interested in tracking their lives in some way, are looking for ways to go beyond the biased commercial apps available now. Their motto is ‘self-knowledge through numbers’. A transhumanist future may give QS enthusiasts internal prosthetics to look within, monitor biological functions closely, and learn how they respond to food intake, exercise and medication.
With new checkout-less shopping experiences like Amazon Go, the possibilities of integrating wearables and personalized AI systems to boost convenience and save time are massive. In the retail space, there is a push to use technology to emotionally connect with the customer at the right time; the human touch and personalization have become essential to keeping customers happy.
Or we can take the route to becoming super-sensory beings. Take Anouk Wipprecht, designer and wearables artist, who used Intel’s Edison module to create the animatronic Spider Dress. The dress reacts to external stimuli, defending the wearer’s personal space or inviting someone into it, based on sensory data gathered from both the wearer and whoever, or whatever, is approaching.
Go one step further and we have legally recognized transhumans like Neil Harbisson and wearable computing pioneers like Steve Mann pushing the boundaries of what we can perceive and the perspectives we can mine from non-biological sensors for a ‘cyborg lifestyle’.
For instance, Harbisson has an extreme case of colour blindness which only lets him see in greyscale, but the antenna implanted in his skull lets him ‘hear’ colours. And while his implant is very much tech, his experiences with it are more intangible and special — he says that walking into Times Square is like listening to twenty electronic concerts going on around him at the same time.
Merging with machines doesn’t make us more machine-like. It gives us the power to expand the definition of what it means to be human: to experience things beyond our sensory ken and interpret them in very human ways.
We haven’t even ventured into the murky waters of the future yet. And really…
Do We Need to Think Centuries Ahead?
Many experimental designers and computational experts already are. The humanisation of machines and mechanisation of humans may soon meet at a middle ground, giving rise to a technologically evolved being: the transhuman. But given the rate of technological advancement, we may not need to wait centuries for this to happen, only a few decades.
But it’s not just people who change. Environments and habitats change too. Who knows what place or planet the human race will call home next? What living conditions will they face in a century or three?
Neri Oxman and the members of the Mediated Matter group at the MIT Media Lab try to imagine the possibilities. They’ve created a series of exoskeletal ‘garments’ that could facilitate interplanetary travel and survival by housing symbiotic lifeforms in their ridges. The series is aptly named Wanderers: An Astrobiological Exploration.
For example, the piece Otaared from the collection (named for the planet Mercury) is a 3D-printed skin that aims to grow true bone structures to protect the wearer from Mercury’s nonexistent atmosphere and mercurial volatility.
But if we must think of the distant future, it doesn’t necessarily need to be about future technology or science fiction. We can examine the path we’re on and where we’re headed to make wiser decisions today. Tiffany Arntson, founder of the research division at The Future Laboratory and Rogue Matters, puts it this way:
“Successful application of technology is about perspective and perception. To those brands who want to balance subversion and scale while remaining authentic and human, I recommend going off grid for a while. Rediscover the age old technology of human intuition, creativity, diversity of thought and strategic thinking. That will do more to close the gap with your audience than loading expectations on to the latest tech.”
Just as Naho Kitano looks at Karakuri puppets, human philosophy and the past to decipher the present, brands must find ways to make the experimental future applicable in the mainstream today, even if they’re small steps. With AI, robotics and computer vision, we’re already on our way to the future.
German scientist, experimental physicist and aphorist Georg Christoph Lichtenberg said it best — where the frontier of science once was is now the centre.
Disclaimer: Mind Over Matter is an editorial series. The featured opinions in this article are not an endorsement of Vue.ai or Mad Street Den.