This article provides an introduction to the world of smart contract security for people with a background in traditional cyber security and little knowledge of crypto and blockchain tech. It’s drawn from the experiences of our team of exploit developers and pentesters turned smart contract auditors.
While other smart contract platforms exist, we will be focusing on Ethereum, which is currently the most widely adopted platform. As this is a high-level article, most points should apply to platforms of a similar nature.
TL;DR
- Smart contracts allow for interaction with assets according to particular rules.
- Smart contracts are publicly accessible and the source code is generally open-source.
- More than $115 billion is currently locked in smart contracts in the Ethereum DeFi space alone, and in 2020 over $50 million was lost in smart contract hacks.
- A smart contract hack could result in significant, immediate financial gain for an attacker.
- Smart contracts are attacked not only through faults in code, but also through high-privileged user roles, external protocols, and complex economic attacks.
Why smart contracts are open source
Cryptocurrencies allow us to interact with assets over the internet while generally minimizing trust assumptions. Smart contracts extend this concept to more complex transactions, such as lending, governance, derivatives trading, and much more. Smart contracts are to the law as cryptocurrencies are to money: a technical solution to a problem that has previously been solved through social or legal means.
As an example, consider the following pseudocode that allows any user to swap Ether, Ethereum’s native currency, for a synthetic representation of gold.
function buyGold() {
    uint goldPrice = Oracle.fetchPrice('gold:eth');
    uint goldShares = msg.value / goldPrice;
    gold.transfer(msg.sender, goldShares); // transfer equivalent gold to user
}
function sellGold(uint goldShares) {
    uint goldPrice = Oracle.fetchPrice('gold:eth');
    uint ethValue = goldShares * goldPrice;
    gold.transferFrom(msg.sender, address(this), goldShares); // transfer gold from user back to contract
    transfer(msg.sender, ethValue); // transfer equivalent ether to user
}
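To make the mechanics concrete, here is a minimal Python sketch modeling the pseudocode above. The oracle price, user names, and balances are all made up for illustration; a real contract would hold Ether and tokens on-chain.

```python
# Hypothetical Python model of the buyGold/sellGold pseudocode.
# Prices and balances are simulated, not real on-chain state.

class GoldSwap:
    def __init__(self, oracle_price):
        self.oracle_price = oracle_price  # ETH per gold share (assumed fixed)
        self.gold = {}                    # user -> gold shares held

    def buy_gold(self, user, eth_sent):
        # goldShares = msg.value / goldPrice (integer division, as in the EVM)
        shares = eth_sent // self.oracle_price
        self.gold[user] = self.gold.get(user, 0) + shares
        return shares

    def sell_gold(self, user, shares):
        assert self.gold.get(user, 0) >= shares, "insufficient gold"
        self.gold[user] -= shares
        return shares * self.oracle_price  # ETH returned to the user

swap = GoldSwap(oracle_price=25)            # 25 ETH per share (made up)
shares = swap.buy_gold("alice", 100)        # 100 // 25 = 4 shares
eth_back = swap.sell_gold("alice", shares)  # 4 * 25 = 100 ETH
```

Note that both directions trust the oracle price completely, which is exactly what the oracle manipulation attacks discussed later take advantage of.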
When working with smart contracts, we generally don’t need to trust a legal system, a court, or even, in many cases, owners of a contract. But we do need to trust the contract’s code.
The bytecode for all smart contracts on a public blockchain is viewable to everyone, and can thus be decompiled and audited. Theoretically, a contract’s code could be obfuscated, as is common practice for desktop and mobile applications, but doing so would undermine the purpose of having a smart contract in the first place – if you’re counting on people to trust your code to the exclusion of all else, that code has to be open to scrutiny. For this reason, many projects open-source their contract source code on platforms like GitHub.
Why do smart contracts keep getting hacked?
One ought to design systems under the assumption that the enemy will immediately gain full familiarity with them. — Shannon’s maxim
The quote above is usually applied to cryptography but applies equally to smart contract security. Security through obscurity is impossible in this space. There’s no hiding behind private transactions, firewalls, or authentication, and no mitigating risk through network segregation. Code has to defend itself.
In many ways, the smart contract security model is very similar to the security model adopted by free and open-source software: all code is made public, and the public is encouraged to find and report bugs in it. Proponents of free and open-source software often emphasize the robustness inherent to this security model in comparison with closed-source software, the source code of which is only ever seen by a small team involved in its development. Linus Torvalds expressed it thus:
Given enough eyeballs, all bugs are shallow. — Linus’s Law
However, as has been demonstrated by long-lived security vulnerabilities like Heartbleed, Linus’s Law does not always apply: just because your software is open source and popular doesn’t mean anyone’s actually looking at the code. Most people need some kind of incentive to find and report bugs in open-source code.
In what is perhaps a double-edged sword for smart contract security, incentives for finding bugs abound, because serious vulnerabilities have direct financial implications. An attacker who managed to discover Heartbleed, or even a more severe vulnerability like BlueKeep, would need to come up with an attack campaign to benefit from it financially, perhaps involving ransomware or stealing and selling customer information. Exploiting a serious smart contract vulnerability, on the other hand, will get you immediate financial gain in the form of Ether or other cryptocurrency tokens.
This paints a rather bleak picture for smart contract security, but to some extent this is the point of the threat model. Andreas Antonopoulos coined the phrase “Bubble Boy and the Sewer Rat” to describe how traditional networks largely rely on their external infrastructure to protect themselves, while public blockchains dangle a multibillion-dollar piece of cheese for anyone on the internet to take a piece of. Protocols will be attacked, and smart contracts will be hacked, but over time the end result is a system that has been exposed to the worst the environment has to offer – a hardy internet sewer rat.
The weird and wonderful world of smart contract attack vectors
Here are some examples of how the threat model of smart contracts can be quite different from traditional applications.
Bugs almost always have a financial impact. Interacting with an actively used blockchain is typically slow and expensive. As a result, transactions are almost exclusively reserved for operations that would typically be deemed high risk, such as transferring or managing assets. Contracts have had hundreds of millions of dollars thrown into them within hours of deployment and ended in disaster – see Fei protocol and Eminence. When you come across a bug in the wild (e.g. something that affects the state of the contract), exploiting it will most likely impact contract users financially, whether through funds being stolen, locked, or miscounted.
Common examples of simple bugs that will usually have direct financial implications include:
- Function visibility: whether a method is public or private.
constructor() public {
    initializeContract();
}
function initializeContract() public {
    owner = msg.sender; // public, so anyone can call this after deployment and seize ownership
}
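A minimal Python sketch of the visibility bug above (names and flow are illustrative): because the initializer is callable by anyone at any time, an attacker can simply re-run it after deployment.

```python
# Hypothetical model of the public-initializer bug: initializeContract()
# should only run once from the constructor, but its public visibility
# lets any caller invoke it again and overwrite the owner.

class Contract:
    def __init__(self, deployer):
        self.owner = None
        self.initialize_contract(deployer)  # constructor calls the initializer

    def initialize_contract(self, caller):
        # No access control: any caller becomes the owner.
        self.owner = caller

c = Contract(deployer="alice")
assert c.owner == "alice"
c.initialize_contract("mallory")  # attacker re-initializes
assert c.owner == "mallory"       # ownership silently stolen
```

The fix in Solidity is to mark the initializer internal (or guard it so it can only run once).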
- Integer over- and underflows – which affect real assets.
function withdraw(uint _amount) {
    require(balances[msg.sender] - _amount > 0); // underflows if _amount > balance, so the check passes
    msg.sender.transfer(_amount);
    balances[msg.sender] -= _amount;
}
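The underflow above can be sketched in Python by modeling pre-0.8 Solidity unsigned arithmetic, which wraps modulo 2**256 (the concrete balances here are made up):

```python
# Hypothetical model of the withdraw() underflow: in Solidity < 0.8,
# uint subtraction wraps modulo 2**256 instead of reverting, so
# balance - amount for amount > balance yields a huge positive number
# and the require(... > 0) check passes anyway.

MOD = 2**256  # uint256 arithmetic is modulo 2**256

def unchecked_sub(a, b):
    return (a - b) % MOD  # models pre-0.8 unsigned subtraction

balance = 10   # attacker's actual balance
amount = 100   # far more than the attacker holds

check = unchecked_sub(balance, amount)  # wraps to 2**256 - 90, not -90
assert check > 0  # the require passes, and the contract pays out 100
```

Solidity 0.8+ reverts on overflow by default, which is why this class of bug is mostly seen in older contracts or inside `unchecked` blocks.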
- Statement ordering. Notice in the code below that call can execute a function in another contract, which might then call back into this function, resulting in withdraw being partially executed multiple times before the balance is reduced.
function withdraw(uint _amount) {
    require(balances[msg.sender] >= _amount);
    msg.sender.call.value(_amount)(); // external call: the recipient can re-enter withdraw() here
    balances[msg.sender] -= _amount;
}
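The reentrancy above can be simulated in Python by modeling the external call as a callback that runs before the balance is reduced (the vault, balances, and attacker callback are all illustrative):

```python
# Hypothetical model of reentrancy: the external call hands control to the
# attacker before balances are updated, so the attacker's fallback re-enters
# withdraw() and drains the vault with a balance that never decreases.

class Vault:
    def __init__(self):
        self.balances = {}
        self.eth = 0  # total ether held by the contract

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.eth += amount

    def withdraw(self, user, amount, fallback=None):
        assert self.balances[user] >= amount  # require(...)
        self.eth -= amount                    # msg.sender.call.value(amount)()
        if fallback:
            fallback()                        # attacker code runs here
        self.balances[user] -= amount         # state update happens too late
        # (in real pre-0.8 Solidity this repeated subtraction would underflow)

vault = Vault()
vault.deposit("victim", 90)
vault.deposit("attacker", 10)

stolen = []
def reenter():
    if vault.eth >= 10:                       # keep draining while funds remain
        vault.withdraw("attacker", 10, reenter)
    stolen.append(10)

vault.withdraw("attacker", 10, reenter)       # attacker walks away with 100
```

The standard fix is the checks-effects-interactions pattern: update `balances` before making the external call (or use a reentrancy guard).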
Product owners are public enemy no. 1. In the second half of 2020, DeFi scams made up 99% of crypto-based fraud. The most common type of scam is the rug pull, in which developers design their contracts to give themselves wide-ranging control over user-deposited funds, promote their platform on social media for a few days, and then disappear with their users’ funds.
For this reason, developers are seen as primary threat actors. Malicious developers may include subtle bugs in their code (see Solidity Underhanded Competition, a competition for backdooring smart contracts), or blatantly include functionality that lets them drain user funds. They may also be negligent with the private key(s) used to deploy and manage the contract, which could allow others to abuse trusted management functionality.
Common mitigations include minimizing privileged functionality, releasing public audits to highlight trust assumptions to users, using a multi-sig wallet for contract management, and not using upgradeable contracts.
Code is immutable (kind of). In the traditional security space, it is taken as a given that applications and platforms may be upgraded and changed by their creators. This is not always the case with smart contracts, as code is immutable once deployed onto the blockchain, providing users with a level of certainty about future interactions. This, however, also means that any bugs in the code will exist in perpetuity.
To facilitate bug fixes and continuous contract development, special upgradeable patterns have been created to allow the contract logic to be upgraded. Protocols can use a proxy pattern with a mutable reference to the current logic implementation to implement an upgrade. This comes with its own security considerations, as now new bugs can be introduced into a previously secure platform – importantly, this also makes it easier for developers to perform the aforementioned rug pull. As such, these patterns are typically accompanied by security mitigations, such as time locks, which afford users a minimum amount of time to exit the system before an upgrade is deployed.
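A timelocked proxy upgrade can be sketched as follows; the 48-hour delay, names, and flow here are assumptions for illustration, not any particular protocol's implementation:

```python
# Hypothetical sketch of a timelocked proxy upgrade: the admin must announce
# a new implementation and wait out a delay before it takes effect, giving
# users a window to exit before new (possibly malicious) logic goes live.

TIMELOCK = 48 * 3600  # 48-hour delay in seconds (assumed)

class Proxy:
    def __init__(self, implementation):
        self.implementation = implementation
        self.pending = None  # (new_implementation, earliest_execution_time)

    def propose_upgrade(self, new_impl, now):
        self.pending = (new_impl, now + TIMELOCK)

    def execute_upgrade(self, now):
        new_impl, eta = self.pending
        assert now >= eta, "timelock not expired"
        self.implementation = new_impl
        self.pending = None

    def call(self, *args):
        return self.implementation(*args)  # delegate to current logic

proxy = Proxy(lambda x: x + 1)            # v1 logic
proxy.propose_upgrade(lambda x: x * 2, now=0)
try:
    proxy.execute_upgrade(now=3600)       # too early: rejected
except AssertionError:
    pass
proxy.execute_upgrade(now=TIMELOCK)       # delay elapsed: v2 goes live
```

On-chain, the proxy delegates all calls to the implementation address via `delegatecall`, and the timelock is enforced by comparing against block timestamps rather than a `now` parameter.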
Composability is key. Decentralized Finance (DeFi) sells itself as the future of finance. A key aspect of this future is that different platforms can interact with each other, allowing for complex operations. For example, a user might:
- Use an algorithmically backed stable coin (e.g. Dai) as collateral to borrow Ether through a lending protocol, and then…
- Use that borrowed Ether to provide liquidity to a decentralized exchange (DEX) to earn rewards, usually in the form of other tokens, and then…
- Use the rewards to govern the DEX, which may include voting on which code is run by the protocol.
Thanks to this immense complexity, yield farming protocols have gained popularity. These protocols aim to abstract some of the complexity away from end-users by allowing them to simply deploy funds into a protocol, which is then managed by the protocol developers who create and manage strategies to maximize returns.
Each moving part extends the attack surface, and so it comes as no surprise that yield aggregator platforms have repeatedly been targeted in attacks: Pickle Finance ($19m), Yearn ($11m), and Akropolis ($2m) all suffered hacks in recent times.
Attacks don’t just come from bugs. Given the breakneck pace of innovation, crypto is fraught with novel, complex attacks. Oracle manipulation attacks through flash loans are a good example of this. Flash loans allow users to borrow hundreds of millions of dollars, provided they borrow and repay the capital and a fee in a single transaction. There are many productive uses of this, including performing arbitrage. However, in 2020 flash loans were repeatedly used in what became known as oracle manipulation attacks.
In order to determine asset prices, protocols would query DEXs, which, being exchanges, should, in theory, be able to provide current pricing data. As an example, Uniswap, the largest DEX (which recently saw a weekly trading volume of over $10b), is an automated market maker (AMM) that, rather than relying on off-chain price data, uses the concept of a bonding curve to determine asset prices. In a pool of two assets, as one asset is subject to higher demand than the other, the relative price of the in-demand asset goes up and becomes more expensive according to the curve. Any excessive changes in price are corrected by arbitrageurs.
This worked reasonably well until flash loans gave users access to enough capital to manipulate these prices to the extent that one side of the pool is almost free. Using this technique, attackers can temporarily manipulate a price feed used by a protocol and reliably extract value from it, without exploiting any bugs in its code. Flash loans lower the barrier for market manipulation from whales to anyone who can write a smart contract. Imagine pentesting an e-commerce store and reporting that you were able to purchase products at a reduced price by manipulating the national currency. Since then safety measures have been put in place to improve the reliability of these feeds, such as the introduction of time-weighted average prices, which limits the viability of short-term manipulation attacks.
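A minimal Python sketch shows how a constant-product (x*y = k) pool prices assets and how one huge flash-loan-funded swap skews the spot price a naive oracle would read. Pool sizes and the trade size are made up, and fees are ignored for simplicity:

```python
# Hypothetical constant-product AMM model: reserves satisfy x * y = k,
# and the spot price is the ratio of reserves. A large one-sided swap
# moves that ratio dramatically within a single transaction.

def swap(pool_in, pool_out, amount_in):
    """Swap amount_in into the pool, preserving x * y = k (no fee)."""
    k = pool_in * pool_out
    new_in = pool_in + amount_in
    new_out = k / new_in
    return new_in, new_out, pool_out - new_out  # new reserves, amount out

eth_reserve, gold_reserve = 1000.0, 1000.0
spot_before = eth_reserve / gold_reserve   # 1 gold = 1 ETH

# Attacker dumps a flash-loaned 9000 ETH into the pool in one transaction.
eth_reserve, gold_reserve, got = swap(eth_reserve, gold_reserve, 9000.0)
spot_after = eth_reserve / gold_reserve    # 1 gold = 100 ETH

# Any protocol reading the spot price mid-transaction now values gold
# 100x higher, which the attacker exploits before unwinding the swap
# and repaying the flash loan.
```

A time-weighted average price (TWAP), by contrast, averages the price over many blocks, so a manipulation lasting a single transaction barely moves the reading.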
In another example of a non-bug-related attack, when the SushiSwap DEX first came into existence, it used what is known as a vampire attack to steal liquidity away from the existing incumbent Uniswap. SushiSwap took advantage of Uniswap’s permissive source code license by forking it and offering an additional governance token that was issued to liquidity providers (LPs) according to the amount and duration of the liquidity provided. This meant that it was more profitable for LPs to move their funds into SushiSwap, which drained both the liquidity and trading volume from Uniswap into SushiSwap. In response to this and other protocol attacks, Uniswap released their own governance token, added a novel licensing mechanism that put limits on when and how their codebase could be forked, and disallowed certain functionality from being used by external contracts in their v3 release.
Final thoughts
From a security researcher’s perspective, you couldn’t ask for more. With smart contracts, the stakes are high, the technology is fairly nascent, and security skills are in sore demand. The wide use of bug bounty programs is a testament to this, with many crypto projects offering $1m+ bounties.
If you’ve got a background in traditional cyber security and would like to get started in the space, you may want to check out some of the publicly available CTFs:
- Ethernaut: good for beginners to get a feel for Solidity. Start with this one.
- Damn Vulnerable DeFi: requires an understanding of some DeFi concepts, such as flash loans.
- Paradigm CTF: an advanced CTF covering a variety of DeFi and Solidity concepts.