Hi! 👋 I’m Jessa.

I blog daily about life, work, and the future.
“… this is a book about humans. It’s about who we are, where we’re going, what’s important to us and how that is changing through technology. It’s about our relationship with the algorithms that are already here, the ones working alongside us, amplifying our abilities, correcting our mistakes, solving our problems and creating new ones along the way.

It’s about asking if an algorithm is having a net benefit on society. About when you should trust a machine over your own judgment, and when you should resist the temptation to leave machines in control. It’s about breaking open the algorithms and finding their limits; and about looking hard at ourselves and finding our own. About separating the harm from the good and deciding what kind of world we want to live in.

Because the future doesn’t just happen. We create it.”

Quotes from the book

Understanding our own flaws and weaknesses — as well as those of the machine — is the key to remaining in control.

But there’s a distinction that needs making here. Because trusting a usually reliable algorithm is one thing. Trusting one without any firm understanding of its quality is quite another.

In whatever corner of the internet you use, hiding in the background, these algorithms are trading on information you didn’t know they had and never willingly offered. They have made your most personal, private secrets into a commodity.

All around the world, people have free and easy access to instant global communication networks, the wealth of human knowledge at their fingertips, up-to-the-minute information from across the earth, and unlimited usage of the most remarkable software and technology, built by private companies, paid for by adverts. That was the deal that we made. Free technology in return for your data and the ability to use it to influence and profit from you.

Whenever we use an algorithm — especially a free one — we need to ask ourselves about the hidden incentives. Why is this app giving me all this stuff for free? What is this algorithm really doing? Is this a trade I’m comfortable with? Would I be better off without it?

Twenty-six years before the Air France crash, in 1983, the psychologist Lisanne Bainbridge wrote a seminal essay on the hidden dangers of relying too heavily on automated systems. Build a machine to improve human performance, she explained, and it will lead — ironically — to a reduction in human ability. By now, we’ve all borne witness to this in some small way. It’s why people can’t remember phone numbers any more, why many of us struggle to read our own handwriting and why lots of us can’t navigate anywhere without GPS. With technology to do it all for us, there’s little opportunity to practise our skills.

Every company on earth appeals to our fantasies to sell their products. But for me, there’s a difference between buying a perfume because I think it will make me more attractive, and buying a car because I think its full autonomy will keep me safe.

Just because something is successful, that doesn’t mean it’s of a high quality.

We can’t design an algorithm to compose or find a ‘good’ song if we can’t define what we mean by ‘good’.

But in our urge to automate, in our hurry to solve many of the world’s issues, we seem to have swapped one problem for another. The algorithms — useful and impressive as they are — have left us with a tangle of complications to unpick.

In my view, the best algorithms are the ones that take the human into account at every stage. The ones that recognize our habit of over-trusting the output of a machine, while embracing their own flaws and wearing their uncertainty proudly front and centre.

In the age of algorithms, humans have never been more important.