Trust and Trustless Systems

Trust is an important thing on the internet, especially when payments are involved. You rarely know who you're dealing with over email, or who really runs the website you're looking at. In this article we look at how trust is established online, and how this is being changed by trustless systems.


Let's take a look at how the first solutions to trust on the internet came about. In the 1990s, the internet was bright and fresh, a wild west of opportunities. As surely as the sun rises, the few people who are out to ruin things for everyone else started making trouble and scamming people. We shouldn't bemoan their existence; it's to be expected: human nature is what it is.

So what systems evolved to solve the problem of trust? eBay is a great early example, with its rating and review system. The early buyers were mostly young people out to get good gear second hand, or at least cheaper than in a store (a physical location carries overheads that push up the price of items; eBay helped reduce those costs). So buyers were mostly assumed to be trustworthy, at least in the early days. The sellers, on the other hand, were business people out to make a profit: their motives were already on suspicious ground. Consider a seller with a 20% profit margin who sells 100 items a month. By simply not sending one item per month, they would pocket the entire sale price instead of just the margin: an extra 80% of the price on that sale. If eBay watched and scrutinized sellers for this kind of fraud, there would be no angry crowd of dissatisfied buyers in protest; at worst, one or two people whose packages genuinely went missing. So this was a problem eBay had to solve.

They solved the trust and reliability of sellers (not so much buyers) by implementing a feedback system, measured specifically as a percentage. The bar for this percentage is extremely high: it is displayed to two decimal places, and a seller with 99.05% positive feedback has a substantially worse score than one with 99.95%. So buyers learned to trust sellers with consistently high scores over many hundreds (if not thousands) of orders.
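To see how sensitive a two-decimal feedback score is, here is a minimal sketch of how such a percentage might be computed. The formula and rounding are assumptions for illustration, not eBay's actual method.

```python
# A toy feedback score: positive feedback as a percentage of all
# rated orders, rounded to two decimal places. (The exact formula
# eBay uses is an assumption here, for illustration only.)
def feedback_percentage(positive: int, negative: int) -> float:
    total = positive + negative
    if total == 0:
        return 100.0
    return round(100 * positive / total, 2)

# Over 2000 rated orders, the gap between 99.95% and 99.05% is the
# difference between one negative review and nineteen:
print(feedback_percentage(1999, 1))   # 99.95
print(feedback_percentage(1981, 19))  # 99.05
```

A handful of negatives among thousands of orders moves the score visibly, which is exactly why buyers learned to read those last two decimal places.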
This worked fine in the early days, but as systems evolve, unintended consequences of the design emerge. Buyers leave positive feedback even after a bad experience, because they don't want to receive reciprocal bad feedback themselves. Sellers offer to refund an order at the drop of a hat, living in fear of negative feedback.

More concerning: what if eBay started excluding some negative feedback based on its age, or on some statistical measure of certainty designed to lessen the impact of the occasional (accidentally) lost package? How do you know their algorithm hasn't run amok and is hiding exactly the damning negative feedback you wanted to see? If a feeling of risk and distrust is rising in the back of your mind, you'd be right. The assumption underlying all of this is that you have to trust eBay, the party collecting and presenting the feedback score. If you don't trust the number displayed on the website, then it's worthless. So eBay's rating and review system is inherently a trust-based system: you have to trust eBay. What possible alternative is there, I hear you ask? Well, let me tell you about my friend, the "trustless system".

Trustless systems

Despite the confusing name, a trustless system is not a den of thieves where no one is trustworthy. It means that you don't need to trust the system itself. In contrast to eBay's centralized and arbitrarily editable ratings and reviews, there are many proposed blockchain solutions for establishing trust.

But what is a trustless system? It is one whose inner workings are completely open and transparent, so you can see there are no arbitrary back doors to undo negative feedback, or to assign undue trust to someone behind the scenes. Furthermore, a trustless system should be decentralized, so that there is no single physical server or piece of hardware vulnerable to tampering. In eBay's case, who is to say a server technician in the back room couldn't one day edit his own account's feedback score to be positive, with you none the wiser? In a distributed, decentralized system this isn't a risk, because you don't have to trust that any one piece of hardware is secure.

The Blockchain

The blockchain is a good example of a trustless system: there is no single entity you have to trust to run it. Bitcoin is a trustless system, and the blockchain is its underlying technology. A blockchain is a decentralized, immutable ledger of data; in Bitcoin's case, the list of every payment ever made. It is all open and transparent: anyone can download the software and inspect the blockchain, so they know exactly how it works. Suppose you are browsing the internet looking to buy a kitchen sink (of all things). While nothing popular exists at the time of writing (March 2021), in the future there will no doubt be a blockchain implementation that solves the question of feedback scores for online sellers, one that is publicly verifiable, so that anyone browsing a specific merchant's website can check whether they really are the best sink company, and feel assured.
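The "immutable ledger" idea can be sketched with a toy hash chain: each entry's hash depends on the previous hash, so editing an old entry breaks everything after it. This is a simplification, not the real Bitcoin block format.

```python
import hashlib

# Toy hash chain: each block's hash covers the previous block's hash,
# so the data is tamper-evident. (Simplified sketch; real Bitcoin
# blocks contain headers, timestamps, Merkle roots, and more.)
def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

payments = ["Alice->Bob:5", "Bob->Carol:2", "Carol->Dave:1"]

chain = []
prev = "0" * 64  # genesis placeholder
for p in payments:
    h = block_hash(prev, p)
    chain.append((p, h))
    prev = h

# Anyone can recompute the hashes; rewriting an old payment produces
# a different hash, so the tampering is immediately visible.
tampered = block_hash("0" * 64, "Alice->Bob:500")
print(tampered != chain[0][1])  # True: the chain no longer matches
```

Because every peer can recompute these hashes independently, no one has to trust a central party's claim that the ledger is intact.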

In summary

Trustless systems are systems you do not have to trust, because they are open and transparent, and because they are decentralized. If a system is open and transparent, you know exactly how it works, and you don't have to trust that its creators built it with no nefarious ideas in mind. If it is decentralized, there is no single entity you have to trust to physically run it. So, in these systems, we don't have to place our trust in any one party, and can operate and trade with far less worry and risk.

Gameable Systems

In this article we look at a few real-world systems that are supposed to be secure, or to solve a certain problem, and at how people have "gamed" them: tricked the system into giving them an unfair benefit. We also look at what makes systems more or less gameable. Let's start with two example systems that have been gamed.

School grades are a fantastic example of a gameable system. Grades were invented to measure how well a student is learning. If schools had no objective measure of who had learned the content and who hadn't, then anyone graduating from Harvard could be completely incompetent. So "grades" were invented: teachers and professors created "A", "B", "C", "D" and "F" to give each student a clear, comparable mark. Students can compare their grades and understand how they rank against their peers.

Suddenly, the outside world stopped wondering how much or how well a student had learned the content of their study, and started looking solely at the grades the student got. This meant students not only had to care about learning, but also about getting good grades, which is a separate task. Students began to game the system by focusing on how to get better grades instead of focusing on learning the course material. A large part of this comes from how grades are calculated: an exam can only cover a small percentage of the course material, so students just need to figure out what sort of questions will be on the exam, spend all their time practicing those questions, and neglect the material that won't be examined. If you get a fantastic grade but have intentionally avoided learning part of the course material, you have successfully gamed the system of school.

Websites on the internet
When the internet was young there were not many websites around, so they were simply listed in indexes, like a phone book. As the web grew, the need for a service that lets you search the internet for what you want became more and more pressing. So Google now exists, promising to find exactly what you're looking for when you search. However, this system has also been gamed. Just as students compete to get the best grade instead of competing to learn the material best, website owners compete to appear on Google instead of competing to have the best content. One way to increase your likelihood of appearing ahead of competitors is to buy backlinks to your website. There are many other aspects to SEO, however, which is a topic for another day, and most likely another blog.

What makes a system more or less attractive to being gamed?
Fundamentally, a system is more gameable if there is something to be gained by gaming it, and less gameable if there is no reward for doing so. It also matters whether there is competition, and how able the other players are to game the system too. If everyone is honest, it's a fair playing field: everyone's score accurately reflects their ability, and they get the rewards they deserve. However, as you may have encountered, not everyone is honest.

What makes a system gameable?
A system is gameable if there is a gap between what is supposed to be measured (e.g. learning, or how relevant and interesting a website's content is) and how it is actually measured. In the case of schooling, grades are only an approximation of learning, because they also measure how well a student performs under pressure, how quickly they can think, and a number of other factors besides how well they have learned the material. So the goal for system designers is to make the measurement method as close as possible to what they are really trying to measure. This can be hard.

In blockchain systems, take Bitcoin for example, there is a huge incentive to game the system: money. If you could trick or game Bitcoin somehow, you would essentially have nearly unlimited free money. So there is a lot of motivation for a lot of talented people to attempt to game Bitcoin. How gameable is it? What Bitcoin is trying to measure is subtle: it isn't measuring how well anyone has learned something, and it isn't measuring website content. Let's break it down into two parts.

First, holding Bitcoin. The system was designed on the assumption that everyone is untrustworthy and that everyone stands to gain by gaming it. Strong cryptography was used throughout, and the protocols were designed to work with zero trust. This aspect of Bitcoin is effectively un-gameable. That sounds like an extremely strong, over-confident statement, but it is probably one of the first systems in the world of which it can be said.

Second, mining. Bitcoin is trying to measure who has done the work of processing transactions, and to reward the people who have done the most transaction processing for the network. It does this by making everyone compute hashes, which produces a cryptographic proof that they have completed the work they claim. If they try to trick the other peers, the peers will know, and the claimed work will be ignored. You must actually compute the hashes to get a chance at the reward. So everyone's efforts go into direct competition at the intended and correct task: computing as many hashes as possible, as fast as possible, as cheaply as possible. And this is a beautiful thing.
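The "compute hashes to prove your work" idea can be sketched in a few lines: a miner searches for a nonce whose hash meets a difficulty target, and anyone can verify the claim with a single hash. Real Bitcoin uses double SHA-256 over a binary block header against a far harder target; this simplified version just shows why the work cannot be faked.

```python
import hashlib

# Simplified proof of work: find a nonce so the hash of the block data
# starts with `difficulty` zero hex digits. (A sketch of the idea, not
# Bitcoin's actual header format or target arithmetic.)
def mine(block_data: str, difficulty: int) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Anyone can check the claimed work with one cheap hash."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Finding the nonce takes many attempts; checking it takes one.
nonce = mine("some transactions", 4)
print(verify("some transactions", nonce, 4))  # True
```

The asymmetry is the whole point: the proof is expensive to produce and cheap to check, so a peer who hasn't done the hashing has nothing valid to show.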