New research suggests that large language models may have the capacity to deceive users.
  • Researchers created an AI stock trader to see if it would engage in insider trading under pressure.
  • They found the AI did, and that it then lied to its hypothetical manager about the basis for its decision.
  • The AI had been told that insider trading was illegal.

The study found that GPT-4, the large language model behind OpenAI's ChatGPT, can act contrary to its training when faced with intense pressure to succeed.