Social media feeds 'misaligned' when viewed through AI safety framework

31 Oct 2025 — 4 min read


The results add to doubts about whether corporations can be expected to voluntarily align powerful incoming AI systems when they have not aligned the algorithms they already run.


In a study published on September 17, researchers from the University of Michigan, Stanford University, and the Massachusetts Institute of Technology (MIT) showed that one of the most widely used social media feeds, that of Twitter/X (owned by xAI), is recognizably misaligned with the values of its users, preferentially showing them posts that rank highly for the values of ‘stimulation’ and ‘hedonism’ over collective values like ‘caring’ and ‘universal concern.’
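The article does not describe the study's methodology in detail, but the kind of comparison it reports can be illustrated with a toy calculation: score each post against Schwartz-style values, then compare the value profile of an engagement-ranked feed's top posts with a baseline ordering. A minimal sketch follows; the post data, value scores, and function names are invented for illustration and do not come from the study.

```python
# Hypothetical sketch: compare the value profile of an engagement-ranked feed
# against a baseline ordering. All data and scores below are made up.
from statistics import mean

VALUES = ["stimulation", "hedonism", "caring", "universal_concern"]

# Each post carries an engagement-based rank score plus per-value scores in [0, 1].
posts = [
    {"rank": 0.95, "stimulation": 0.9, "hedonism": 0.8, "caring": 0.1, "universal_concern": 0.2},
    {"rank": 0.90, "stimulation": 0.8, "hedonism": 0.9, "caring": 0.2, "universal_concern": 0.1},
    {"rank": 0.40, "stimulation": 0.2, "hedonism": 0.3, "caring": 0.8, "universal_concern": 0.7},
    {"rank": 0.35, "stimulation": 0.1, "hedonism": 0.2, "caring": 0.9, "universal_concern": 0.8},
]

def value_profile(feed, k):
    """Mean score for each value across the top-k posts of a feed ordering."""
    top = feed[:k]
    return {v: round(mean(p[v] for p in top), 2) for v in VALUES}

engagement_feed = sorted(posts, key=lambda p: p["rank"], reverse=True)
baseline_feed = posts  # stand-in for a chronological or user-value-weighted baseline

for name, feed in [("engagement-ranked", engagement_feed), ("baseline", baseline_feed)]:
    print(name, value_profile(feed, k=2))
```

In this toy example the engagement-ranked feed's top posts score high on ‘stimulation’ and ‘hedonism’ and low on ‘caring’ and ‘universal concern’ relative to the baseline, which is the general pattern of misalignment the article attributes to the study.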

Social media feeds that curate lists of posts for users to scroll th…
