
May the 4th be With You – Part I. AI in the Media

Welcome to the series “May the 4th be with you – Star Wars and the AI of the future”. Published once a year, these posts highlight AI in the cult series. AI has long since appeared in more than just movies and series – it is a reality. This post is also the kickoff to the new blog series, “AI in Film and Television.”

This May 4, the focus is on C3PO, a protocol droid repaired and modified by young Anakin Skywalker. The reason for his creation is simple: he is meant to support Anakin’s mother.

C3PO is more than just a droid. He is almost human in many ways: autonomous, learning on his own, and worrying too much on principle.

AI and being human

The idea that an artificial intelligence worries or feels fear is still difficult to reconcile with the current state of AI research. Even though many robots appear human in their communication, their behaviors are programmed in advance or determined by a trained model – that is, shaped by human input.

A present-day example is the robots at the robot hotel in Japan. At check-in, for example, the robot woman (or the robot velociraptor in a pageboy hat) “talks” to the hotel guest. However, it can only respond to a fixed set of pre-programmed questions. Small talk is not possible.

Sophia, the robot woman, has made headlines in recent years (partly because she was granted citizenship in Saudi Arabia). That her appearances tend to cause amusement rather than awe was shown, among other things, by her public conversation with Angela Merkel.

We can see that we are still a long way from the ideas in Star Wars. In everyday life, we already encounter automated waiters in one restaurant or another that avoid obstacles and make their way to the tables. However, these robots’ interaction with guests is limited to silent emojis on a display.

Star Wars and language

In the world of Star Wars, voice recognition and output work excellently. Whether small talk or expressions of concern – the repertoire knows no bounds, and it far exceeds that of the protagonists. For example, when C3PO, Luke Skywalker and Han Solo are surrounded by Ewoks on Endor, the droid’s voice recognition immediately kicks in and he makes first contact. C3PO himself says that he is fluent in more than six million forms of communication, thanks to the built-in TranLang III communications module. In “Star Wars – The Force Awakens” he can even speak seven million organic and inorganic languages.

In addition, the droid has phonetic pattern analyzers that analyze new, unknown languages and learn them on the fly.

While we at link|that are very proud of our text-to-speech and speech-to-text services, behind them lies hard work and, above all, hundreds of hours of recordings and “labeling”. Highly specialized AI networks may be able to recognize patterns in speech (and thus possibly even detect signs of illness). However, the idea that a combination of artificial neural networks and program logic could acquire a completely new language entirely on its own is simply science fiction today.
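To make the work behind such services a little more tangible, here is a minimal speech-to-text sketch using the open-source transformers library (our illustration, not link|that’s actual service; the model checkpoint and file name are only examples). What these few lines hide is exactly the point above: the hundreds of hours of transcribed, labeled audio on which such a model was trained.

```python
# Minimal speech-to-text sketch (illustrative only, not link|that's service).
# Assumes: pip install transformers torch, plus an example audio file.
from transformers import pipeline

# Load a pretrained automatic-speech-recognition model.
# "openai/whisper-small" is only an example checkpoint.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Transcribe a recording; the result contains the recognized text.
result = asr("greeting.wav")  # hypothetical file name
print(result["text"])
```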

What lies ahead?

On the list of steps toward a functioning C3PO, an announcement from the previous year probably sits at the top: at Tesla AI Day, the Tesla Bot was announced for 2022. Skip past the jokes and the rather dark humor, and the interesting part of the presentation begins.

If Musk’s (time) plan works out, we will be on the verge of androids entering our everyday human environment. It is safe to assume that the start will be very bumpy. Nevertheless, perhaps the biggest hurdle will be overcome: the robots will move among humans and perceive us and our environment. That means they will collect a seemingly endless amount of audiovisual and contextual information that they can draw on collectively. Tesla’s android uses “auto-labeling” to learn in a way very similar to humans: it literally picks up information in passing and later verifies it through repetition.
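What such “auto-labeling” can look like in a very reduced form is sketched below: a model trained on a small labeled set assigns provisional labels to new, unlabeled observations, keeps only the ones it is confident about, and is retrained on the enlarged set. This is a generic self-training/pseudo-labeling sketch under our own assumptions, not Tesla’s actual pipeline.

```python
# Self-training / pseudo-labeling sketch (our illustration, not Tesla's system).
# Assumes: pip install scikit-learn numpy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A small labeled set and a large pool of unlabeled observations.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_labeled, y_labeled = X[:200], y[:200]
X_unlabeled = X[200:]

model = LogisticRegression(max_iter=1000)

for round_no in range(3):
    # 1. Learn from the labels we currently have.
    model.fit(X_labeled, y_labeled)

    # 2. "Pick up information in passing": predict labels for unseen data.
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) > 0.95          # keep only confident guesses
    pseudo_labels = model.classes_[probs.argmax(axis=1)]

    # 3. Add the confident pseudo-labels to the training set and repeat.
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, pseudo_labels[confident]])
    X_unlabeled = X_unlabeled[~confident]
    print(f"round {round_no}: training set now has {len(X_labeled)} samples")
```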

Alongside this, future advances in AutoML (automated machine learning) could generate new models and new behavior from those insights – at least in theory. Without question, this “up-close” perception of the environment through cameras and sensors represents a huge step forward for robotics. The findings will also show how the potential of artificial neural networks can be exploited in the future, because languages and behavior could then be learned through observation and imitation. Whether that can meaningfully replace the aforementioned hundreds of hours of labeling remains to be seen.
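On a much smaller scale, pieces of this already exist today. The sketch below shows the basic AutoML idea using scikit-learn’s randomized hyperparameter search: the system tries out model configurations on its own and keeps the one that performs best. It is a toy example, not a full AutoML framework.

```python
# Tiny AutoML-style sketch: automatically search over model configurations.
# Assumes: pip install scikit-learn.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

# The "search space": configurations the system is allowed to try on its own.
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": [2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=10,   # try 10 random configurations
    cv=3,        # judge each one by cross-validation
    random_state=0,
)
search.fit(X, y)

print("best configuration:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```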

Self-awareness as a milestone

Finally, let’s return to C3PO’s worries mentioned in the introduction: the ability to feel emotions or empathy will certainly not emerge on its own from such automatisms. For that, a robot must perceive itself and others across time, i.e. with memories of past events. Prerequisites for this are more complex network architectures, which go hand in hand with a significantly higher degree of memory and reflection capacity. Here we continue to bump up against technical limits in many respects, and some research strands therefore defer their hopes to later technologies (quantum computers, for example).
 
Technically, complex GANs (Generative Adversarial Networks) could enable a kind of reflection in androids. They follow the strategy of letting a second network judge the results of a first, generating network. Even if this is currently used more as an “autonomous learning aid”, one can at least philosophize that a robot might one day use it to judge its own actions and those of others.
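The underlying idea can be shown in a few lines. Below is a deliberately tiny, generic GAN sketch in PyTorch (a toy under our own assumptions, not a blueprint for a reflective android): one network generates outputs, a second network judges them, and both improve through that contest.

```python
# Toy GAN sketch: a second network (discriminator) judges the output of a
# first network (generator). Illustrative only; assumes: pip install torch.
import torch
import torch.nn as nn

# Generator: turns random noise into a 2-dimensional "result".
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Discriminator: judges whether a 2-dimensional sample looks "real".
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_samples(n):
    # Stand-in for "reality": points scattered around (2, 2).
    return torch.randn(n, 2) * 0.1 + 2.0

for step in range(1000):
    # 1. Train the judge: real samples should score high, generated ones low.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator: produce outputs the judge accepts as real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated samples:\n", generator(torch.randn(3, 8)).detach())
```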

Outlook

In summary, there is still a long way to go before we reach a C3PO. What is evident, however, is that the strategies are multifaceted, and hopes for major technical breakthroughs have remained unwavering for many years. We are definitely not running out of ideas, and the interplay of hardware advances and creative new approaches suggests great things for AI. For the first time, an AI spring seems to be holding rather than being cut short by an AI winter (as happened back in the 70s and 90s).

While we at link|that won’t be selling you androids, we will continue our research on the neural network front and incorporate the findings into our services – and into this blog series.

Written by Tina Waldner & Harald Kerschhofer, partly translated by the AI of deepl.com

Tina W.

Tina is the head of our labeling team for AI training and uses her creative streak to provide readers with blog posts about Artificial Intelligence.
