BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Sabre//Sabre VObject 4.5.8//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Europe/Zurich
X-LIC-LOCATION:Europe/Zurich
TZURL:http://tzurl.org/zoneinfo/Europe/Zurich
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19810329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19961027T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:news335@eikones.philhist.unibas.ch
DTSTAMP:20260303T172638Z
DTSTART;TZID=Europe/Zurich:20260429T181500
SUMMARY:Media Studies and AI Literacies: Colloquium #4
DESCRIPTION:AI benchmarks comprise a multitude of quantitative techniques f
 or evaluating the capabilities and potential risks of AI models. They rang
 e from multiple-choice questionnaires to frameworks for assessing the free
 -text outputs of AI models\, and evaluations more akin to psychological in
 telligence tests. Within the AI industry\, benchmarks have long been relie
 d upon as valuable sources of insight about the performance of AI models. 
 They are also used in policy contexts to assess and minimize potential soc
 ietal harms posed by AI. In her talk\, Maria Eriksson approaches AI benchm
 arking as an experimental practice and epistemic template for how to “kn
 ow” AI models. Drawing on insights from the anthropology and sociology o
 f tests and experimental modes of knowledge production\, she examines how 
 benchmarks do not simply describe AI systems\, but actively (re)configure 
 research agendas\, competitive dynamics\, regulatory imaginaries\, and vis
 ions of technological progress.
X-ALT-DESC;FMTTYPE=text/html:<p>AI benchmarks comprise a multitude of
 quantitative techniques for evaluating the capabilities and potential
 risks of AI models. They ra
 nge from multiple-choice questionnaires to frameworks for assessing the fr
 ee-text outputs of AI models\, and evaluations more akin to psychological 
 intelligence tests. Within the AI industry\, benchmarks have long been rel
 ied upon as valuable sources of insight about the performance of AI models
 . They are also used in policy contexts to assess and minimize potential s
 ocietal harms posed by AI. In her talk\, Maria Eriksson approaches AI benc
 hmarking as an experimental practice and epistemic template for how to “
 know” AI models. Drawing on insights from the anthropology and sociology
  of tests and experimental modes of knowledge production\, she examines ho
 w benchmarks do not simply describe AI systems\, but actively (re)configur
 e research agendas\, competitive dynamics\, regulatory imaginaries\, and v
 isions of technological progress.</p>
DTEND;TZID=Europe/Zurich:20260429T200000
END:VEVENT
END:VCALENDAR
