Overview:
During the next year, SFI’s CounterBalance seminars will run a special series of meetings, called Deconstructing Meaning, looking at the technical and ethical complexities of content parsing in the age of AI. Co-hosted by Siegel Family Endowment, the series is a collaboration among the Santa Fe Institute, the Trust & Safety Professional Association, and Google. Past events in this series can be accessed by members of the CounterBalance community through this link, using the email address you registered with.
Background:
The proliferation of digital content presents significant challenges for technology platforms, which bear the onus of curating content responsibly. Accurate parsing of this content is key to upholding content responsibility, a broad term describing the maintenance of healthy online communities, the protection of users, and the preservation of platforms’ reputations. Regulators striving to prevent societal harms increasingly scrutinize tech platforms’ content moderation practices, and reliable, robust parsing techniques help demonstrate due diligence and compliance with emerging regulations surrounding online content. Despite advances in content moderation techniques and technologies, accurate content parsing remains an elusive challenge: distinguishing between intent, sentiment, and context poses intricate technical and ethical dilemmas. The rise of generative AI further complicates this landscape, with its ability to produce human-quality text that can both illuminate and obfuscate meaning.
Natural language processing (NLP) and machine learning form the backbone of content parsing. Challenges arise due to the nuances of human expression and the increasing sophistication of generative AI models. Intent, for example, may be shrouded in sarcasm or disguised as humor, while sentiment can be multifaceted and easily misconstrued. Understanding context demands significant knowledge about cultural references, current events, and individual backgrounds. Additionally, real-time content parsing poses substantial computational demands.
Addressing the complex challenges of content parsing, with its intertwined hardware, software and ethical considerations, holds profound significance for tech platforms, regulators, and the future of every online participant’s experience.
Second Session
The second virtual 85-minute session will take place on November 22 at 9AM US Mountain Time and will explore the ethical issues that can arise when applying AI to content parsing. The session is structured as a salon discussion with initial remarks by speakers Tina Eliassi-Rad (SFI & Northeastern) and Karine Mellata (Intrinsic), followed by a roundtable discussion with a panel of experts.
Speakers
Tina Eliassi-Rad: Professor, Computer Science, Northeastern University; Science Steering Committee Member and External Professor at SFI
Karine Mellata: Co-founder and CEO at Intrinsic

Panelists
Laura Weidinger: Staff Research Scientist at Google DeepMind
Johnny Hartz Søraker: Ethics Lead at Google
Aaron Rodericks: Head of Trust and Safety at Bluesky
Michael Muller: Senior Research Scientist in the Human Centered AI group at IBM

Moderator
William Tracy: Vice President for Applied Complexity, SFI

Organizing Committee
Amanda Menking: Research and Program Director at Trust and Safety Foundation
Charlotte Willner: Executive Director at Trust and Safety Professional Association
Jan Eissfeldt: Global Head, Trust & Safety at Wikimedia Foundation
Sujata Mukherjee: Trust and Safety Leader
William Tracy: Vice President for Applied Complexity, SFI
Jason Djang: Senior Program & Strategy Lead, Americas, Trust & Safety Global Engagements at Google