Weekly Buzz: Instagram Unveils Algorithm Transparency

Hello June, Happy Pride Month, and Happy Friday! Welcome back to the Weekly Download.

Here’s what you can expect in today’s edition:

Instagram’s attempt to get more transparent about its algorithm, TikTok’s first-ever LGBTQ Pride Visionary Voices list spotlighting creators, Twitter’s new image fact checks, and Meta’s test blocking of news on Instagram and Facebook for some Canadians, plus some Friday fun.

Instagram attempts to get more transparent about its algorithm

A new blog post from Head of Instagram Adam Mosseri attempted to shed light on the platform’s algorithm, sharing how and why people see what they see when they scroll. Mosseri also covered “shadowbanning”: a type of social media censorship where a post isn’t removed, but largely hidden from public view – often for unclear reasons.

Addressing “the community’s concerns” regarding shadowbanning, Mosseri said that Instagram will increase its transparency and provide support to creators who think their content has been wrongfully flagged by the algorithm. While some users were grateful for the information, others weren’t convinced. According to Mashable, a significant portion of the comments on the video were from frustrated users with personal complaints about their content visibility.

So, will Instagram follow through when it comes to transparency around shadowbanning? We’ll have to wait and see.

TikTok’s first-ever LGBTQ Pride Visionary Voices list spotlights creators

TikTok announced its support for Pride Month by creating its first-ever LGBTQIA+ Visionary Voices list and launching the “You Belong Here” campaign to protect LGBTQIA+ users.

The initiative aims to celebrate and recognize the LGBTQIA+ community’s contributions to TikTok by introducing new activations and events, including the #ForYourPride hashtag and hub, and collaborations with LGBTQIA+ creators and businesses. The app’s Sounds page will feature a #PrideAnthems activation with special music guests. TikTok also plans to host webinars, in-person events like the TikTok Pride Creator Ball, and curated playlists.

Despite these efforts, TikTok acknowledged the existence of hateful content and interactions on its platform. The company emphasized its commitment to fostering an inclusive space and protecting the LGBTQIA+ community by taking action against harassment, bullying, hate speech, and misgendering, and noted safety features such as confidential reporting, comment filtering tools, and privacy settings.

Twitter will now allow Community Notes ‘fact checks’ on images

Have you ever been tricked by an AI-generated image? Twitter’s got a fix for that.

The platform has updated its “fact checking” feature to include AI-generated images and manipulated media. The new feature, called “Notes on Media,” is part of Twitter’s Community Notes program, allowing users to add context or fact checks directly to tweets and the media within them.

Previously, users could only add notes at the tweet level, which wouldn’t stay with the media if it was reuploaded in a new post. Now, the Community Note will automatically appear on matching images — even if they are reuploaded.

There’s a slight catch: to write Community Notes on images, users must apply and be accepted into the Community Notes program, rate existing notes, and have their own notes added to tweets. Notes on Media will be distinguishable by a label indicating that readers have added context to the image.

It’s also worth noting that this feature is separate from Twitter’s existing “Manipulated media” label, which is applied by the company to tweets containing synthetic or manipulated images or videos.

Currently, Notes on Media only works for single still images, but Twitter plans to expand the program to include videos, GIFs, and tweets with multiple media in the future. How effective will it be? That remains to be seen; users have found ways around content detection on other platforms in the past.

Meta will begin test blocking news on Instagram and Facebook for some Canadians

Meta is planning to temporarily block news content for some Canadian users on Facebook and Instagram as a test, which is expected to last for most of June. The move follows Google’s five-week block on news links earlier this year in response to Bill C-18, a controversial piece of legislation that would require tech giants to pay publishers for using their content.

If C-18 is passed, Meta says it is ready to permanently block news for Canadian users on its platforms. During the test, randomly selected Canadians will not be able to view or share news content, including news links, Reels, or Stories. Rachel Curran, head of public policy for Meta Canada, said this first temporary move will affect one to five per cent of its 24 million Canadian users, with the number of those impacted fluctuating throughout the test.

In a new statement, Canadian Heritage Minister Pablo Rodriguez criticized Meta’s refusal to cooperate, calling it “irresponsible” and adding that “Canadians will not be intimidated by these tactics.”

Meta says it will ensure that non-news agencies are not mistakenly blocked, as happened during a similar situation in Australia. The company claims that news content generates negligible revenue, and that less than three per cent of content seen on Facebook feeds consists of news articles.

According to Meta, the move is a business decision driven by competitive pressures and economic uncertainty, rather than a devaluation of the social importance of news.

Friday Fun

Think you could stay motivated in the gym with this little cutie by your side?
