BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//nyc2024.pydata.org//cfp//B7VM9Z
BEGIN:VTIMEZONE
TZID:US/Eastern
BEGIN:STANDARD
DTSTART:20001029T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10;UNTIL=20061029T060000Z
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
BEGIN:STANDARD
DTSTART:20071104T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000402T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4;UNTIL=20060402T070000Z
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:DAYLIGHT
DTSTART:20070311T020000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-cfp-CYDU3W@nyc2024.pydata.org
DTSTART;TZID=US/Eastern:20241106T105000
DTEND;TZID=US/Eastern:20241106T122000
DESCRIPTION:Large Language Models (LLMs) generate contextually informat
 ive responses\, but they also pose risks related to harmful outputs su
 ch as violent speech\, threats\, explicit content\, and adversarial at
 tacks. In this tutorial\, we will focus on building a robust content m
 oderation pipeline for LLM-generated text\, designed to detect and mit
 igate harmful outputs in real time. We will work through a hands-on pr
 oject in which participants will implement a content moderation syste
 m from scratch in two different ways. The first uses open-source LLM
 s via Ollama together with various prompt engineering techniques. Th
 e second fine-tunes small open-source LLMs on content-moderation-spec
 ific datasets. The tutorial will also cover identifying adversarial a
 ttacks\, such as jailbreaks\, and applying both rule-based and machin
 e learning approaches to filter inappropriate content.\n\nThis tutori
 al is aimed at AI engineers\, researchers\, and practitioners who ar
 e involved in deploying LLMs and are looking to implement moderatio
 n systems that prevent harmful content. A basic understanding of LLM
 s and NLP techniques\, along with comfort in Python and PyTorch\, wil
 l be helpful. The GitHub repository containing the code and dataset
 s will be shared prior to the tutorial.
DTSTAMP:20250709T215040Z
LOCATION:Winter Garden
SUMMARY:Responsible AI: Building Moderation Pipelines for Harmful and Adver
 sarial Content - Aziza Mirsaidova
URL:https://nyc2024.pydata.org/cfp/talk/CYDU3W/
END:VEVENT
END:VCALENDAR
