It Takes Only 250 Documents To Poison Any AI Model
Researchers find that far fewer poisoned documents are needed to manipulate a large language model's (LLM) behavior than previously assumed.

Source: Dark Reading