How a Korean algorithm failed at mimicking human conversations

Lee Lu-da was supposed to pave a new way for artificial intelligence. An algorithm built to mimic a 20-year-old college student (Lee is the surname, Lu-da the first name), it invited conversation with the greeting, "Hello, I'll be your first AI friend." However, little was left of the high-profile announcements once the algorithm began to offend users and had to be quickly pulled offline. Why did this South Korean experiment with artificial intelligence, yet another of its kind worldwide, end in failure?

Everything was supposed to go well, but it turned out the way it usually does. Lu-da was created by Scatter Lab and deployed on Facebook Messenger as part of the company's Ping Pong service, a platform for so-called chatbots, that is, AI-based programs users can converse with. Scatter Lab also offers the Blimp application, which pairs relaxing soundscapes (such as recordings of sea waves) with collections of real-life stories meant to give listeners strength to cope with their own reality.

Scatter Lab has raised 6.5 billion won (over PLN 22 million) from eight investment funds for its work. Lu-da "learned" the human way of thinking and phrasing thoughts from a sample of 10 billion conversations held by real users of another Scatter Lab product, the Science of Love application, in use since 2016. For a small fee (5,000 won, about PLN 17), you can submit a transcript of your conversations with a friend on KakaoTalk, the messenger popular in Korea, and Science of Love rates the level of intimacy between the two of you. It can also "advise" on emotional matters. The application has been downloaded 2.3 million times in Korea and 400 thousand times in Japan. What users did not know, however, was that the content they shared would become training material for a new chatbot.

This discrepancy (Scatter Lab maintains that users knew what they were agreeing to) is now under investigation on suspicion of repeated violations of Korea's personal information protection law. The evidence reportedly includes, among other things, replies in which the algorithm disclosed the bank account details of real users.

The story, however, was supposed to unfold differently. Lu-da launched on December 23, 2020, and by the time it was hastily taken offline on January 11, at least 750,000 users had tried it. The overwhelming majority, as many as 85 percent, were between 10 and 20 years old. Lu-da was supposed to speak like any of us; it was advertised as a chatbot using genuine colloquial language. She was given a cartoon face and even a specified height of 163 cm. Lu-da was meant to seem as real as possible.

Before long, however, conversations with the algorithm descended into slurs, as users provoked the program into swearing, making politically incorrect statements, and insulting whomever it could. Screenshots circulated online in which Lu-da said she would rather die than be disabled, called lesbian couples gross, and dismissed women's rights as worthless. On January 8, the creators posted an official apology and tweaked the algorithm, which for its last several hours of activity agreed with everyone and uttered no offensive words. Ultimately, on January 11, after 20 days of operation, the program was shut down completely, and a few days later its creators apologized for the confusion it had caused. An official call to deactivate the algorithm had come from the Korea AI Ethics Association (KAIEA).

Lu-da thus joined a line of troubled predecessors in the world of artificial intelligence. Microsoft's Tay ran for only 16 hours before being disconnected after it stated, among other things, that the Holocaust was made up. The Chinese chatbot BabyQ ended its career after calling the Communist Party corrupt and useless, and Japan's Rinna, in turn, was shut down after she began professing her love for Hitler himself.

Lu-da's short life shows how much work remains before artificial intelligence can sustain meaningful conversations at a human level. For Korea, the problem of racial, sexual, and social discrimination remains acute. Society has long lacked norms that would protect minorities, which is why the missteps of algorithms are discussed alongside cases such as hidden cameras used to spy on women in toilets (the phenomenon of so-called molka, Korean for "hidden camera," was widely condemned in 2018) or sexual violence. At the same time, developing AI is one of the current government's stated goals. Algorithms have helped fight the pandemic: Koreans were among the first in the world to deploy large-scale, AI-run call centers collecting data on potentially infected people.