Welcome to the series “May the 4th be with you – Star Wars and the AI of the future”. Published once a year, this blog entry looks at AI in the cult saga. AI has long since left movies and series behind – it is a reality. This post also marks the kickoff of our new blog series, “AI in Film and Television.”
This May 4, C3PO is the focus of our reflections. He is a protocol droid repaired and modified by young Anakin Skywalker. The reason for his creation is simple: he is meant to help Anakin’s mother.
C3PO is more than just a droid. He is almost human in many ways: autonomous, able to learn on his own, and a worrier on principle.
AI and being human
The idea of an artificial intelligence that worries or feels fear is still hard to reconcile with the current state of AI research. Even when robots appear human in the way they communicate, their behavior is either programmed in advance or determined by a trained model – in other words, shaped by human input.
A present-day example is the robot hotel in Japan. At check-in, the robot woman (or the robot velociraptor in a pageboy hat) “talks” to the hotel guest. However, she or it can only answer a fixed set of pre-programmed questions. Small talk is not possible.
Sophia, the humanoid robot, has made headlines in recent years (partly because she was granted citizenship in Saudi Arabia). That her appearances are more likely to cause amusement is shown, among other things, by her public conversation with Angela Merkel.
It is clear that we are still a long way from the ideas in Star Wars. In everyday life we already encounter robotic waiters in one restaurant or another, avoiding obstacles as they make their way to the tables. Their interaction with guests, however, is limited to silent emojis on a display.
Star Wars and language
In the world of Star Wars, speech recognition and speech output work flawlessly. Whether small talk or expressions of concern – the repertoire knows no bounds, and it far exceeds that of the human protagonists. When C3PO, Luke Skywalker and Han Solo are surrounded by Ewoks on Endor, for example, the droid’s speech recognition kicks in immediately and he makes first contact. C3PO himself claims to be fluent in more than six million forms of communication, thanks to his built-in TranLang III communications module. In “Star Wars – The Force Awakens” he even manages seven million organic and inorganic languages.
In addition, the droid has phonetic pattern analyzers that analyze new, unknown languages and pick them up on the fly.
While we at link|that are very proud of our text-to-speech and speech-to-text services, behind them lies hard work and, above all, hundreds of hours of recordings and “labeling”. Highly specialized AI networks may be able to recognize patterns in speech (and thus possibly even detect signs of illness). But the notion that a combination of artificial neural networks and program logic could acquire a completely new language entirely on its own is, today, simply science fiction.
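To make the “labeling” point concrete, here is a minimal, purely illustrative Python sketch of what supervised speech-to-text training consumes. The LabeledUtterance structure, the file paths and the model object with its methods are hypothetical placeholders, not our actual service – the takeaway is only that every recording needs a human-written transcript before a network can learn from it.

```python
# Illustrative sketch only – not link|that's pipeline. The 'model' object
# and its methods are hypothetical placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class LabeledUtterance:
    audio_path: str   # path to one recording
    transcript: str   # the human-written label for that recording

# A production corpus would contain hundreds of hours of such pairs.
corpus: List[LabeledUtterance] = [
    LabeledUtterance("recordings/0001.wav", "good afternoon, how can I help you"),
    LabeledUtterance("recordings/0002.wav", "I would like to book a room"),
]

def train_epoch(model, corpus: List[LabeledUtterance]) -> None:
    """One pass over the labeled data: the model only 'learns the language'
    because a human has already written down what each recording says."""
    for sample in corpus:
        features = model.extract_features(sample.audio_path)  # e.g. a spectrogram
        loss = model.loss(features, sample.transcript)        # compare prediction to label
        model.update(loss)                                    # one gradient step
```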
What lies ahead?
If Musk’s timeline works out, we will soon see androids entering our everyday human environment. The start, it is safe to assume, will be very bumpy. Nevertheless, perhaps the biggest hurdle will be cleared: the robots will perceive us and our surroundings from within the human world, which means they will collect a seemingly endless amount of audiovisual and contextual information that they can pool and access together. Tesla’s android uses “auto-labeling” to learn in a way quite similar to humans: it literally picks up information in passing and later verifies it through repetition.
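The general idea behind such auto-labeling can be sketched with a generic self-training loop – to be clear, this is not Tesla’s actual system, just the textbook technique it resembles: a model trained on a few human labels assigns pseudo-labels to the observations it is confident about and then retrains on them. The toy example below uses scikit-learn’s SelfTrainingClassifier; dataset and library choice are our own assumptions.

```python
# Generic self-training ("pseudo-labeling") sketch on a toy dataset –
# an illustration of the principle, not Tesla's actual auto-labeling system.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_digits(return_X_y=True)

# Pretend ~90% of the observations arrive without labels: mark them with -1.
rng = np.random.RandomState(0)
unlabeled_mask = rng.rand(len(y)) > 0.1
y_partial = y.copy()
y_partial[unlabeled_mask] = -1

# The wrapper repeatedly pseudo-labels samples it is at least 90% sure about
# and refits – loosely "picking things up in passing, verifying by repetition".
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X, y_partial)

n_pseudo = int((model.transduction_ != -1).sum() - (~unlabeled_mask).sum())
print(f"observations the model labeled for itself: {n_pseudo}")
```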
Future advances in auto-ML (automated machine learning) will stand alongside this, generating new models and new behavior from those insights – at least in theory. Without question, this up-close perception of the environment through cameras and sensors represents a huge step forward for robotics. The findings will also show how the potential of artificial neural networks can be used in the future, because languages and behavior could then be learned through observation and imitation. Whether that can meaningfully replace the hundreds of hours of labeling mentioned above remains to be seen.
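What “generating new models” can mean in practice is easiest to show with the simplest form of auto-ML, a plain hyperparameter search: instead of a human hand-tuning one network, a search procedure tries many candidate configurations and keeps the best one. The scikit-learn sketch below is again our own toy illustration of the principle, not one of the systems discussed above.

```python
# Toy auto-ML sketch: a grid search over small neural-network configurations.
# Purely illustrative – dataset and library choice are our own assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "search space": candidate architectures and learning rates.
search = GridSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_grid={
        "hidden_layer_sizes": [(32,), (64,), (64, 32)],
        "learning_rate_init": [1e-3, 1e-2],
    },
    cv=3,
)
search.fit(X_train, y_train)              # tries every combination
print(search.best_params_)                # the configuration the search "found"
print(search.score(X_test, y_test))       # how well it generalizes
```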
Self-awareness as a milestone
Outlook
In summary, there is still a long way to go before we get to C3PO. What is evident, however, is that the strategies are multifaceted and that hopes for major technical breakthroughs have remained unwavering for many years. We are definitely not running out of ideas, and the interplay of hardware advances and creative new approaches suggests great things for AI. For the first time, an AI spring seems to be lasting instead of being cut short by an AI winter (as happened back in the 1970s and 1990s).
While we at link|that won’t be selling you androids, we will keep researching on the neural network front and incorporate those findings into our services – and into this blog series.