Large generative AI models (LGAIMs), such as ChatGPT, GPT-4, and Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. This informal project considers these new generative models in the context of AI regulation, asking not only how laws might be tailored to their capabilities but also to what extent those capabilities can be constrained to comply with the law.
Our group is actively curating a reading list on the practical aspects of implementing regulations on large language models. The papers, presentations, and posts we are collecting come from some of the leading thinkers in AI, law, and government, several of whom Tetherless World researchers know and have collaborated with in the past.
Some of the topics under consideration:
- The Big Picture on Regulating LLMs
- Identifying and Tracking LLM Output
- Identifying and Tracking Diffusion Output
- Generative AI and Virtual Child Pornography
- Evaluating Safety, Trustworthiness and Fairness in Generative AI
- Detecting Hate Speech
- Detecting & Mitigating Bias
- Building Biases into AI
- LLM-assisted Copyright Infringement Detection
- Generative AI and Copyright Law
- Generative AI and Creative Commons Licensing
- LLMs and Scientific Communication
- LLMs and Education
- LLMs and Legal Practice
- Jailbreaking Generative AI-based Systems
Please contact John Erickson if you have suggestions for this list!