On prompt injections: Because AI chatbots are trained to be helpful and to understand context, jailbreakers are able to engineer scenarios where the AI believes ignoring its usual ethical guidelines is appropriate. Is it currently possible to safeguard LLMs from injection attacks at scale?

Newsletter Microposts