CounterBalance Seminar
  US Mountain Time

This event is closed to the public.

Interactions with AI tools change the way their users understand the world. As AI becomes more proficient and more ubiquitous, these tools will increasingly disrupt the distribution of interpretive authority in society.

At present, AI tools seem to shift interpretive authority toward the firms that provide AI services. These firms typically rely on their trust and safety departments to minimize their AI's potential to offend or harm users. Admirable as this "do no harm" ethos is, it risks shifting interpretive authority away from the epistemic institutions that have historically guided their communities' understanding of the world. (In this context, examples of epistemic institutions include churches, universities, unions, and clubs.)

Over the last year, various groups of researchers, developers, and nonprofits have built tools that allow epistemic institutions to use AI while retaining their own interpretive authority. These tools include orchestration layers, user-level APIs, and system prompt editors.

This 90-minute virtual meeting will showcase some of these tools. We will also discuss how epistemic institutions can deploy such tools.

Organizers

Umang Bhatt, Assistant Professor in Trustworthy Artificial Intelligence at the University of Cambridge
Allison Stanger, Professor of International Politics + Economics at Middlebury College; Science Board Member + External Professor at SFI
William Tracy, Vice President for Applied Complexity, SFI
