The Embiossa Foundation

Recommendation Algorithms

It is now completely normal that opaque and highly personalized recommendation algorithms decide which posts a user sees, and which they don't. While recommendation algorithms make it easy to discover interesting and valuable content, they keep many users glued to their devices for hours each day, even when they say they want to cut back. They are especially addictive when combined with cleverly devised UX design (keyword: "dark patterns"), like scrolling/swiping feeds with no end where the next tiny dopamine rush is always around the corner, and detailed personality profiles, facilitated by today's large-scale data collection on users. This phenomenon, sometimes referred to as "doomscrolling," is becoming more and more widespread, and its negative health effects are well documented.

The slowly growing public awareness of the problem, and the advent of tools for tracking and limiting one's daily screen time, are early signs of a counter-movement (keyword: "digital wellbeing"). Major platforms such as Instagram and TikTok have already added these features natively to their apps, although they continue to employ recommendation algorithms and design patterns that draw users in. Critics therefore argue the companies may prioritize image management over user well-being (see "Facebook Project Mercury"). As useful as these measures are as interim aids, no major change in this matter can be expected until social media platforms are legally obligated to adequately protect users' wellbeing. Since the core interest of commercial social media providers is to maximize profit, which heavily depends on users spending as much time consuming content on the platform as possible, regulation seems indispensable.

There are two categories of posts that spread especially well through the feedback loop of algorithmic recommendations and user behavior, and they are both especially problematic: posts that trigger strong dopamine hits (keyword: "brain rot"), and polarizing posts that provoke intense emotional reactions (keyword: "rage bait"). Current recommendation algorithms favor posts that reliably drive interaction, whether through basic emotional responses like outrage or desire, a sense of novelty, or through reaffirming the user's existing beliefs. Today it's common for posts to be explicitly designed to lure users into interacting with them (keywords: "clickbait" and "engagement bait"). Short posts, especially videos, seem to dominate. Users get used to constant stimulation and increasingly struggle with tasks where rewards take longer to arrive—like reading a book. Genuine, authentic posts that are not engineered to appeal to the algorithm and the user's attention are at a disadvantage.

Since many users don't want to give up the convenience of feeds filled by recommendation algorithms, a key research question is how to design algorithms and user interfaces that are "healthier" while still being attractive. An approach that naturally comes to mind is to allow each user to transparently configure the kind of recommendations they want to get. Technically, this is simple. However, this approach is not appealing to commercial social media platforms because it means relinquishing some control, and thereby, some possibility to monetize their service. We will likely only see it happen on a large scale either when regulation requires it, or on alternative, non-commercial social media platforms. Taking the idea further, how about a federated social network that, in addition to having content-storing nodes, also comprises recommendation nodes? Users could subscribe to one or multiple recommendation nodes of their choice, and any user could spin up their own recommendation node that works the way they see fit - for themselves and others. Recommendation nodes index and rank what can be found on the content nodes, similar to a search engine. There could be a marketplace of user-created recommendation algorithms to deploy, all adhering to a common API. Public benchmarks could check the quality of these algorithms and evaluate their neutrality or bias. A related practical question is how platforms employing "better" recommendation algorithms could actually gain adoption. Besides an appealing product, good marketing, and an initial user base, this requires identifying regulatory barriers and creating fair conditions for competition.
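The common API for recommendation nodes could be sketched as follows. This is a minimal illustration, not a protocol proposal: the class and field names (Post, RecommendationNode, rank, and the two example ranking strategies) are hypothetical, and a real federated design would add network transport, pagination, and authentication.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Post:
    post_id: str
    content_node: str       # which federated content node stores the post
    topics: frozenset[str]  # topic tags attached to the post
    engagement: float       # network-wide engagement signal (likes, shares)

class RecommendationNode:
    """Hypothetical common API that every recommendation node implements."""
    def rank(self, candidates: list[Post], prefs: dict) -> list[Post]:
        raise NotImplementedError

class EngagementNode(RecommendationNode):
    """Ranks purely by engagement - roughly the status quo on commercial platforms."""
    def rank(self, candidates, prefs):
        return sorted(candidates, key=lambda p: p.engagement, reverse=True)

class TopicMatchNode(RecommendationNode):
    """Ranks by overlap with topics the user has transparently configured,
    ignoring engagement signals entirely."""
    def rank(self, candidates, prefs):
        interests = set(prefs.get("topics", []))
        return sorted(candidates,
                      key=lambda p: len(p.topics & interests),
                      reverse=True)

# A user subscribes to the node of their choice; swapping nodes swaps the feed.
posts = [
    Post("a", "content-node-1", frozenset({"cats"}), engagement=9.0),
    Post("b", "content-node-2", frozenset({"gardening", "diy"}), engagement=2.0),
]
prefs = {"topics": ["gardening", "diy"]}
feed = TopicMatchNode().rank(posts, prefs)
```

Because both nodes share one interface, a marketplace of user-created algorithms and public benchmarks that feed identical candidate sets to each node and compare the resulting rankings become straightforward to build on top.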