Plenary Speakers

    Songyee Yoon
    NC Cultural Foundation
    As a tech visionary, humanist, and business leader, Songyee Yoon has worked at the intersection of technology, ethics, and play for decades. After graduating from the Korea Advanced Institute of Science and Technology, Songyee earned her Ph.D. in computational neuroscience from MIT.
    Her strong belief in the concept of “play” as a crucible for innovation eventually led her to NCSOFT, a leading global video game developer and publisher, where she served as president and chief strategy officer. Forecasting the importance of emerging technologies, she founded the NCSOFT AI Center and Natural Language Processing Center.

    Songyee brings a unique lens to bridging countries’ approaches to technology and ethics and is an active figure in the international business community. As a member of South Korea’s Presidential Advisory Council for Science and Technology, she served under two presidents.

    She is a member of the advisory council at the Stanford Institute for Human-Centered AI, served as an advisory board member of the Center for Asia Pacific Policy, and was a visiting fellow at the Center to Advance Racial Equity Policy at RAND, where she explored the social impacts of AI and the equity and ethical dimensions of technology. She also serves on the board of trustees of the Carnegie Endowment for International Peace. She was named a Young Global Leader by the World Economic Forum and one of the 50 Women to Watch in Business by the Wall Street Journal.

    Johan Schalkwyk
    Google Fellow
    Johan Schalkwyk, a Google Fellow, has been a leader in the speech industry for over 25 years. His passion is to make speech a usable interface for everyone in the world. He has been instrumental in Google DeepMind’s multimodal perception and large language model efforts, and continues to serve as Google's Speech Area Tech Lead, guiding research investments across speech recognition and synthesis.

    In 2008, Johan built Google Voice Search, the world's first search-by-voice experience. He has led Google's speech team, bringing research innovations such as on-device and neural models to products from Google Assistant to YouTube across more than 80 languages.

    Thanks to his continued leadership, Google speech research leads in both industry and academia, publishing and launching numerous breakthroughs in deep learning for speech recognition and synthesis. When not building speech recognizers, Johan enjoys mountain biking around the world, cooking Vietnamese food, and baking desserts.
    Title: Multimodal Large Language Models as the Path Towards Language Inclusivity

    In this talk we will explore large language models, specifically multimodal large language models, and how they can help promote language inclusivity. As technology advances, it is important that we develop tools accessible to everyone, regardless of their native language. Multimodal large language models have the potential to bridge the gap between languages by modeling semantics across spoken, written, and image modalities. This talk will discuss the research efforts in developing these models, the challenges involved, and the potential they hold for the future.