How to escape personalised algorithms and read more widely

Online, most of our reading is 'personalised'. The articles we discover through search engines and social feeds are surfaced by algorithms that prioritise authoritative pages, demote low-quality content, and rank pieces by a like-for-like logic: what you liked in the past is what you are recommended in the future.

Something similar happens when we search for books on platforms such as Google or Amazon; the intention is to reduce information overload and ensure that readers get only the content deemed relevant to them.

I would argue that the whole rationale behind these algorithms is flawed.

Non-personalised reading is not “junk” that needs to be removed from daily reading diets. Excessively personalised reading can harm democratic dialogue, as the University of Sussex's Tanya Kant argues. In a recent article she wrote that by "restricting access to challenging and diverse points of view, users are unable to participate in collective and informed debate". Moreover, personalised reading served by social media platforms is commercialised reading, designed to turn readers into loyal and frequent customers.

We need algorithms that are more transparent, but transparency alone won’t be enough. We need to rethink online personalisation and its influence on our reading patterns. As a senior research associate at the Institute of Education, University College London, I currently lead an ESRC-funded project on children's personalised reading. Our research has produced four recommendations for tackling the limitations of current personalisation algorithms - not just for our children, but for us all.

1. Algorithms need to combine personalisation with pluralisation.

In a model called personalised pluralisation, we argue that children’s learning needs to be both tailored to individuals’ preferences (i.e. personalised), yet also entail the consideration of multiple perspectives (i.e. pluralised). Well-curated pluralised environments can be as enticing as personalised ones. Maria Popova’s blog Brain Pickings is a good example of how a range of carefully selected, diverse readings can become a popular place to indulge in an authentic feast of reading. Every article contains recommendations for further reading based on similar content, not the reader’s browsing history. Essentially it is akin to the recommendation of an intelligent and humanist librarian who knows a lot about books and a little bit about you. We need such pluralisation to ensure that the Internet doesn’t become an aisle of hobbyist magazines.
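To make that contrast concrete, here is a minimal sketch (in Python, with an invented catalogue) of how a 'librarian-style' recommender might rank further reading purely by the similarity of the texts themselves, never consulting the reader's browsing history.

```python
# A minimal sketch of the "intelligent librarian" idea: rank further reading
# by similarity between the articles themselves, not by the reader's history.
# The catalogue and texts below are hypothetical illustrations.

def word_set(text: str) -> set[str]:
    """Lower-case set of distinct words - a crude stand-in for real text analysis."""
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two word sets (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def further_reading(current_article: str, catalogue: dict[str, str], k: int = 3) -> list[str]:
    """Recommend the k catalogue items most similar to the article being read.
    Note: the reader's browsing history is deliberately never consulted."""
    current = word_set(current_article)
    ranked = sorted(catalogue,
                    key=lambda title: jaccard(current, word_set(catalogue[title])),
                    reverse=True)
    return ranked[:k]

catalogue = {
    "Seneca on busyness": "stoic letters on time, attention and the shortness of life",
    "Why children doodle": "drawing, play and the development of attention in childhood",
    "Maps of the night sky": "early astronomers, star charts and the history of looking up",
}
print(further_reading("an essay about attention, time and how we spend our lives", catalogue))
```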

2. Personalisation should not be contrasted with randomness.

Randomness is great for creativity. In praise of randomness, the poet Robert Peake even created his own random word generator for poetry prompts. However, randomness does not work for empty stomachs. In the MasterChef cooking competition, the most challenging task is to prepare a meal from a set of random ingredients. The most creative and resourceful cook wins. The chef who needs more time, support or other ingredients loses. From our work with children in special schools we know that inclusive e-reading environments are not fuelled by randomness engines. Rather, teachers need to adjust the contents to each individual child and develop personalised learning plans within a standardised curriculum. Therefore, algorithmic diversity, not randomness, is key to education.
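As a rough illustration of the difference, the sketch below greedily picks texts by balancing fit with an individual learner against variety among the texts already chosen, rather than drawing at random. The fit scores and topics are invented, and the weighting is only one of many possible heuristics.

```python
# A minimal sketch of "algorithmic diversity, not randomness": texts are chosen
# greedily, trading off learner fit against repetition of topics already picked.
# All scores and topics are hypothetical.

def pick_diverse(texts, fit, topic, k=3, balance=0.7):
    """Select k texts; each pick weighs the learner-fit score against
    how often the text's topic has already been chosen."""
    chosen = []
    remaining = list(texts)
    while remaining and len(chosen) < k:
        def score(t):
            repeats = sum(topic[t] == topic[c] for c in chosen)
            return balance * fit[t] - (1 - balance) * repeats
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

texts = ["dinosaur facts", "dinosaur story", "space poem", "recipe for bread"]
fit = {"dinosaur facts": 0.9, "dinosaur story": 0.85, "space poem": 0.6, "recipe for bread": 0.4}
topic = {"dinosaur facts": "dinosaurs", "dinosaur story": "dinosaurs",
         "space poem": "space", "recipe for bread": "food"}

# The second pick is the space poem, not a second dinosaur text, because
# repetition is penalised - diverse, but still anchored to the child's interests.
print(pick_diverse(texts, fit, topic))
```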

3. Personalised algorithms need to build more comprehensive user profiles.

Children like to read digitally because it grants them multiple entry points into a story. Applying this concept to personalised recommendations means that if you like to read books about cars, you could get recommendations not just for articles about cars, but also for watching a TED Talk, attending a local car show, reading an old poem or listening to Tracy Chapman’s Fast Car.
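A toy sketch of what such a profile could look like in practice: a single declared interest fans out across media types rather than returning more of the same format. The catalogue entries below are invented for illustration.

```python
# A minimal sketch of a richer user profile: one declared interest ("cars")
# yields several kinds of entry point, not just more articles about cars.
from collections import defaultdict

catalogue = [
    {"title": "How engines breathe", "type": "article", "topics": {"cars", "engineering"}},
    {"title": "A TED Talk on the future of driving", "type": "talk", "topics": {"cars", "technology"}},
    {"title": "Local classic car show, Saturday", "type": "event", "topics": {"cars", "community"}},
    {"title": "'Windscreen', a poem", "type": "poem", "topics": {"cars", "travel"}},
    {"title": "Tracy Chapman - Fast Car", "type": "song", "topics": {"cars", "music"}},
]

def cross_media_recommendations(interests: set[str], catalogue: list[dict]) -> dict[str, list[str]]:
    """Group matching items by media type, so one interest opens many doors."""
    by_type = defaultdict(list)
    for item in catalogue:
        if interests & item["topics"]:
            by_type[item["type"]].append(item["title"])
    return dict(by_type)

print(cross_media_recommendations({"cars"}, catalogue))
```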

4. We need to take back control.

The ultimate lesson from our educational research is about agency and empowerment. Just as we teach children to code, we need to teach adults to perfect the recipe of their own e-reading. Readers should be able to select the proportions, sequences and combinations of content, not just the raw ingredients they are interested in. Good reading algorithms would then blend individuals’ self-declared preferences with generalised expertise.
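A minimal sketch of what handing over that control might look like: the reader chooses how much weight their own declared preferences carry against a curator's judgement, rather than accepting a fixed recipe. All names and scores here are hypothetical placeholders.

```python
# A minimal sketch of reader-controlled blending: rank items by a reader-chosen
# mix of self-declared preference and generalised (curatorial) expertise.

def blended_ranking(items, my_preference, expert_score, my_weight=0.5):
    """my_weight = 1.0 gives a fully personalised feed, 0.0 a fully curated one."""
    def score(item):
        return (my_weight * my_preference.get(item, 0)
                + (1 - my_weight) * expert_score.get(item, 0))
    return sorted(items, key=score, reverse=True)

items = ["long investigative feature", "short opinion piece", "classic short story"]
my_preference = {"short opinion piece": 0.9, "long investigative feature": 0.4, "classic short story": 0.2}
expert_score = {"long investigative feature": 0.9, "classic short story": 0.8, "short opinion piece": 0.3}

# The same reader can dial the mix up or down rather than accept a fixed recipe.
print(blended_ranking(items, my_preference, expert_score, my_weight=0.8))
print(blended_ranking(items, my_preference, expert_score, my_weight=0.2))
```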

The technology giants have got it spectacularly wrong with algorithmic recommendations. Google currently faces a €1bn (£875m) fine over its anti-competitive algorithms, and Facebook is rapidly prototyping artificial intelligence and hiring 3,000 new staff to avoid legal liability over the spread of extremism on its networks. The formidable task of designing effective algorithms needs to be tackled long-term - and with education, not through rapid prototyping.