Who will make the best cuppa? An algorithm or you?

We have all heard the word ‘algorithm’, but what exactly does it mean? The humble algorithm has its origins in 9th-century mathematics and simply means "a set of instructions that comes to an end." The instructions themselves can be monstrously complicated, though, and when coupled with computer data they can produce what was previously ‘technically’ impossible.

The algorithm is simple in its execution but extraordinary in its power because computers can execute the defined tasks at unbelievable speed and with absolute accuracy.

The algorithm can, when correctly formed, use data to predict the weather, find you your perfect holiday or decide if you are eligible for a mortgage. Algorithms can produce results in seconds.

Why now?

Why have algorithms become so important? No one was interested in them 15 years ago (unless you were a rare breed, you know who you are!). The catalyst of this transformation has most probably been the explosion in digital data. With so much data to be manipulated, read and interpreted, the algorithm is now a huge part of our daily lives.


One prominent analogy for the progression of algorithms is the relatable task of making tea. We all know how to do it; there is, in fact, an ISO standard for making tea (ISO 3103:2019). The end product is reached through a general sequence of steps, but there are variables, such as how much milk, how many sugars, and maybe different vessels to hold the liquid.
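The tea-making process above really is an algorithm in the 9th-century sense: a finite, ordered set of instructions with a few variables. A minimal Python sketch (the function, step names and parameters are illustrative, not taken from the ISO standard):

```python
# Tea-making as an algorithm: an ordered, finite list of steps,
# parameterised by the variables mentioned above (milk, sugar, vessel).
# All names here are invented for illustration.

def make_tea(milk_splashes=1, sugars=0, vessel="mug"):
    steps = [
        "boil water",
        f"place tea bag in {vessel}",
        "pour water",
        "steep for 3 minutes",
        "remove tea bag",
    ]
    # The variables change the instructions, not the overall shape.
    if milk_splashes:
        steps.append(f"add {milk_splashes} splash(es) of milk")
    if sugars:
        steps.append(f"stir in {sugars} sugar(s)")
    return steps

for step in make_tea(milk_splashes=1, sugars=2):
    print(step)
```

Crucially, the list is finite and the function always terminates: "a set of instructions that comes to an end."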

Both the human and the machine-driven algorithm can make theoretical cups of tea, so what is the difference? Imagine a human and an algorithm each tasked with making 1,000,000 cups of tea a day, over a week, with ever-changing requirements, then organising those teas into categories and producing statistics. The machine would have this daily task done in a fraction of the time the human could.

Speed is king

The power behind algorithms is the speed at which they can read, write and transform data. With our ever-growing and widening world data sets, algorithms can perform unique and revealing computations that were previously unachievable.

This, of course, is only half the story. What if we could create an algorithm that mutates and changes the way it behaves? What if they could learn? What if they could seek goals?

Rise of the machines

With the popular word ‘algorithm’ come the words ‘artificial intelligence’, ‘machine learning’ and so on. In a nutshell, these terms loosely describe adding a level of intelligence to our algorithms.

To return to our tea analogy: what if a machine-driven tea algorithm could learn the habits of tea drinkers? It might establish particular tea-drinking patterns, such as discovering that on specific days there is no requirement for sugar.

As algorithm architects, we could notice this pattern ourselves and manually change the algorithm to adapt. But better than that, we can build our algorithm to learn how to change itself.

What are the implications of changing, learning and mutating algorithms for our day-to-day lives as we apply for loans, sit exams, apply for jobs or drink tea?


So, to get our algorithm to learn about its environment, we set some goals. Algorithms can be given a set of aims; in our case, it could be the most efficient way to make tea, or predictive tea supply ordering so that we always have the right ingredients.

A simple example of learning is allowing our algorithm to make small random tweaks to the ordering of individual tasks, letting the machine mix up the tea-processing order. The results of different arrangements are scored; the higher an arrangement scores, the more the algorithm will process in that particular order.

You end up with generations of tea-making algorithms that self-replicate until the most efficient tea-making process has been achieved.
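The tweak-and-score loop described above can be sketched in a few lines of Python. This is a toy stochastic hill climb, not a production learner; the step names and the scoring function are invented purely for illustration (here the "best" order is simply declared in advance so we have something to score against):

```python
import random

# An invented 'ideal' ordering, standing in for whatever the real
# scoring data (speed, waste, customer feedback) would reward.
IDEAL = ["boil water", "add tea bag", "pour water", "steep", "add milk"]

def score(order):
    # One point for each step already in its ideal position.
    return sum(a == b for a, b in zip(order, IDEAL))

def learn(order, generations=200, seed=42):
    """Each generation: randomly swap two steps, keep the new
    arrangement only if it scores at least as well as the best so far."""
    rng = random.Random(seed)
    best = list(order)
    for _ in range(generations):
        candidate = list(best)
        i, j = rng.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if score(candidate) >= score(best):
            best = candidate  # this arrangement 'survives' to the next generation
    return best

shuffled = ["steep", "add milk", "boil water", "pour water", "add tea bag"]
print(learn(shuffled))
```

Because an arrangement is only kept when it scores at least as well as the current best, the score can never get worse across generations, which is exactly the "generations of tea-making algorithms" idea in miniature.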

Can a machine be a data controller?

One of the most pressing questions we face today about algorithms is this: if we allow our machines to learn and adapt the way they work, who is responsible for the outcomes? Is it the algorithm architect? Is it the data controller of the input data, or of the output data?

We have all seen real-world examples where the algorithm is blamed for mistakes that are made. How much meaningful control are we handing over to, and building into, automated decision making?

As our computing power and data increase, ever more comprehensive algorithms will become more autonomous, posing some difficult questions as to who can control what machines will do with our data.

What is next?

Algorithms are here to stay. We are too integrated with digital devices to turn back now, so how do we manage our relationship with our machines? As algorithms develop and become more integrated, we will have to find a balance of control and benefit.


*Nicholas Clyde-Smith is a Senior Developer/Consultant at Corefocus Consultancy Limited. Corefocus won the Technology Project of the Year award at the Jersey Tech Awards 2019 for their work developing the Jersey Office of the Information Commissioner website and app.

The views and opinions expressed in this blog are those of the author and do not necessarily reflect the official position of the Jersey Data Protection Authority (including the Jersey Office of the Information Commissioner) (the "Authority"). The Authority is not responsible for the accuracy of any of the information supplied by the guest writer/bloggers and the Authority accepts no liability for any errors, omissions or representations. The copyright of this content belongs to the author and any liability with regards to infringement of intellectual property rights remains with them.