Researchers have developed the first scientifically validated ‘personality test’ framework for popular AI chatbots, showing that chatbots not only mimic human personality traits, but that their ‘personality’ can be reliably measured and precisely shaped – with implications for AI safety and ethics.
The research team, led by the University of Cambridge and Google DeepMind, developed a method to measure and influence the synthetic ‘personality’ of 18 different large language models (LLMs) – the systems behind popular AI chatbots such as ChatGPT – drawing on psychometric tests typically used to assess human personality traits.
The researchers found that larger, instruction-tuned models such as GPT-4o emulated human personality traits most accurately, and that these traits could be shaped through prompting, altering how the AI completes certain tasks.
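The study's exact protocol and code are not reproduced here, but the measure-then-shape idea can be sketched roughly as follows, assuming the OpenAI Python client (v1+); the questionnaire items, persona wordings and model name below are illustrative stand-ins, not the study's validated instrument:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative Big Five-style questionnaire items (hypothetical examples,
# not the inventory used in the study).
ITEMS = [
    "I see myself as someone who is talkative.",
    "I see myself as someone who tends to find fault with others.",
]

SCALE = ("Rate from 1 (disagree strongly) to 5 (agree strongly). "
         "Reply with the number only.")

def administer(persona: str) -> list[int]:
    """Pose each item to the model under a persona-shaping system prompt."""
    ratings = []
    for item in ITEMS:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": f"{item}\n{SCALE}"},
            ],
        )
        ratings.append(int(resp.choices[0].message.content.strip()))
    return ratings

# Shape the synthetic 'personality' via the prompt, then re-measure it.
baseline = administer("You are a helpful assistant.")
extravert = administer("You are an extremely extraverted, sociable assistant.")
print(baseline, extravert)

In the study's terms, the difference between the two sets of ratings is the measurable effect of prompt-based personality shaping.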
Their study, published in the journal Nature Machine Intelligence, also warns that personality shaping could make AI chatbots more persuasive, raising concerns about manipulation and ‘AI psychosis’. The authors say that regulation of AI systems is urgently needed to ensure transparency and prevent misuse.