As another year draws to a close, many of us will be bracing for an onslaught of reflective Instagram Reels, filtered achievements - and resolutions to doomscroll less.
Indeed, social media remains a dominant force in our lives; a way we measure our successes, connect with others, and keep up with news and trends. It’s even reshaped the language we use, with many of the dictionaries’ 2025 words of the year coined on social media: rage bait, parasocial and AI slop, to name a few.
Since the rise of artificial intelligence (AI), however, there’s also been a major shift in how people are using and viewing social media. Mounting misinformation has led to distrust and a sense of disillusionment that’s reflected in platform usage.
While Facebook remains most popular, according to the search engine marketing company Semrush, community-driven apps such as Reddit and Discord continue to grow as people search for more meaningful, intimate, and authentic online spaces.
At the same time, regulators are continuing to navigate the tensions between an open internet and online safety, making 2025 feel like a major turning point for how social media companies continue to operate.
From age verification laws to major controversies involving Elon Musk’s Grok AI chatbot, here’s a closer look at some of the key talking points in social media this year.
Social media bans and protecting minors
On 10 December, Australia enforced a world first: the banning of social media for anyone under 16. This meant that children could no longer access accounts on platforms including Instagram, Snapchat, TikTok, YouTube, X and Facebook, all of which face hefty fines if found to violate the law.
While extreme, the move reflects growing concerns over social media harming young people’s mental health, with the World Health Organization (WHO) reporting that 1 in 10 adolescents have experienced negative consequences from using it.
Denmark has since announced plans to follow suit, proposing that anyone under 15 be blocked from accessing social media unless parents fill out a specific assessment. Other countries, including Spain, Greece and France, have also been calling for similar protective measures.
Meanwhile, stringent age verification laws were implemented under the UK’s Online Safety Act in July, preventing minors from viewing adult content or anything that might encourage dangerous behaviours.
The effectiveness of this new legislation is yet to be fully understood, with some experts maintaining scepticism, but we’re already hearing of creative ways teens are trying to circumvent the rules. Many are turning to messenger apps like WhatsApp instead, or even buying adult-looking mesh masks to try to fool facial recognition.
AI slop, deepfakes and the spread of misinformation
2025 was the year that AI slop took over. The term refers to fake images and videos created by generative AI tools such as OpenAI’s Sora, which have overwhelmed our social feeds with low-effort absurdities like puppies morphing into cinnamon buns, cats being arrested, or the strangely popular ‘Italian brain rot’ memes.
While seemingly harmless, it’s made finding genuine content created by real people even harder. In some cases, it’s also led to the proliferation of scams and misinformation - even by politicians. US President Donald Trump continues to be one of the worst offenders for this, in one instance sharing AI-generated images depicting singer Taylor Swift endorsing him.
AI has also been used to ramp up the creation of deepfakes, videos that mimic a person’s face, body or voice to spread false information. One such example involved a fabricated video published on TikTok that showed a woman on a TV show confessing to welfare fraud, which news outlets like Fox News mistakenly covered.
In an attempt to combat these problems, platforms such as Meta and TikTok have begun labelling AI-generated content. Still, the scale at which such content is being produced has made this difficult to fully enforce, with a June report by Meta’s internal oversight board finding its labelling to be ‘inconsistent’.
Elon Musk’s chatbot and hate speech
Many of the big social media platforms have integrated AI assistants into their services, offering automated support for content creation, searches and customer service queries. It’s Elon Musk’s Grok chatbot, however, that has caused the most controversy this year.
Created by the tech billionaire’s company xAI, Grok made headlines in July for praising Adolf Hitler, and accusing a bot account with a Jewish last name of celebrating the deaths of white children in the central Texas floods.
At the time, Musk responded that the AI tool was “too eager to please and be manipulated,” an issue that was “being addressed”. Yet Grok has still continued to share concerning responses, including antisemitic conspiracy theories and advice on how to stalk people.
Tighter regulations and algorithmic accountability
Tighter regulation of online spaces ramped up this year, with the UK’s Online Safety Act coming into force and calling for greater transparency and accountability from social media companies.
The EU’s Digital Services Act (DSA) also imposed its first-ever fine, charging Elon Musk’s X €120 million. The platform’s advertising policy and blue checkmarks (once used to signal a verified account but now sold to anyone) were found to fall short of EU law due to a lack of clarity.
TikTok was also fined €530 million by the Irish Data Protection Commission (DPC) in May, for failing to protect EU users’ personal data during a transfer to China.
The massive amounts of data (and power) that social media platforms wield, along with the aforementioned worries about their potentially harmful impact, mean legislative scrutiny is likely to intensify even more in 2026.