The researchers are applying a way referred to as adversarial education to halt ChatGPT from letting consumers trick it into behaving badly (called jailbreaking). This work pits numerous chatbots against one another: just one chatbot plays the adversary and assaults A further chatbot by building textual content to pressure it to buck its usual cons