IVECA Center

Improving Artificial Intelligence for Intercultural Understanding

Updated: Jan 8

Students from the United States and South Korea took part in profound and enlightening discussions during their IVECA Live Classes on December 11th and 13th. Tackling the topic of “Promoting Cultural Diversity in AI”, students spent the last several months researching and analyzing the increased use of Artificial Intelligence within their countries and the impact these programs have had on society.

South Korean students critically analyzed the effects of AI bias, from creating misunderstanding and discrimination between individuals to eroding trust and destabilizing international partnerships. Acknowledging AI's potential to affect direct users through the responses it gives to questions, the South Korean students also touched on other possible effects of AI bias, including convincing material such as “deep fakes” being used to influence large groups of people through social media. Students also noted differences in how AI is used in the two countries. They argued that one way to bridge the gap between diverse groups is to ensure access to technology across all social groups. As one student thoughtfully suggested, “It is important to understand and fix the unfairness in AI” by promoting intercultural learning through accessible, cost-effective education and avoiding exclusive technologies.

Meanwhile, student groups from the United States shared their perspectives on cultural bias in AI, noting several contributing factors and ways to avoid increased bias in the future. As one group explained, monoculturalism (the policy or process of supporting, advocating, or allowing the expression of the culture of a single social or ethnic group) stems from the suppression of differing voices and opinions. They argued that creating diversity in AI therefore requires algorithms that “overrule anything that resembles monoculturalism”, while acknowledging that further education on the subject is also necessary on the user’s end. They suggested that AI programs include “a warning label that alerts the questioner if the given response will contain cultural bias”. Another American student group proposed developing unbiased AI programs from the start by including a balanced group of diverse programmers, writers, and researchers in the creation of the algorithm before combining all sources of input into one.

Agreeing on the importance of fair, unbiased artificial intelligence was easy. Students from both countries recognized that AI, while an invaluable asset to modern-day society, must be carefully built on balanced sources of information to avoid harmful impacts on users around the globe. Further solidifying the sense of community between the two schools, students shared cultural performances, held Q&A sessions, and exchanged thoughtful farewells after the presentations had finished.

Brought together through serious discussion, critical insight, laughter, and even cheeky jokes from each side of the globe, the teenagers in both countries were the image of global citizenship. By being part of the conversation around cultural bias, they actively became part of the solution.
