The oracle conundrum
Sam Lekhak, Rick Pardoe · August 18, 2023


In the complex world of DeFi, oracles play a crucial role. Oracles are the invisible hands that feed real-world data to smart contracts, ensuring DeFi protocols operate seamlessly.
However, as the DeFi ecosystem evolves, the quest for the perfect oracle continues. While some prioritize accuracy, others emphasize security, decentralization, or low latency.

This article delves into the different types of price oracles, exploring their merits and drawbacks, while highlighting the oracle selection considerations for Liquity v2.

Our oracle goal with v2

Our goal with v2 is to create a stablecoin solution that is scalable, secure, and decentralized.
This new system aims to improve upon the limitations faced by LUSD, with a focus on scalability. It is an immutable system that utilizes a single staked ETH collateral reserve for minting and redeeming the stablecoin. In addition, it introduces leveraged ETH positions and borrowing capabilities, which hinge on a staked ETH/USD price oracle for pricing the funds in the reserve.

Price oracles play a crucial role in providing accurate market data to Liquity v2. While researching oracles, we came upon some pertinent questions:

  • What oracle should we use?
  • Which oracle, along with our own protocol design mechanisms, can best help mitigate frontrunning?
  • How should the price feed logic be designed?
  • Should there be a fallback?

Before we dive into all of that and more, let’s take a step back and look at the most common types of oracles that are out there.

Exploring Different Types of Price Oracles

There are three types of oracles most commonly used in DeFi:
1) Push-based oracles
2) Low latency oracles
3) TWAP oracles

Let’s dive into each one and see what some of the pros and cons are:

Push-Based Oracles:

Push-based oracles follow a simple mechanism: they regularly "push" price data on-chain at specific time intervals and on deviation thresholds. Chainlink is a well-known example of a push-based oracle. At predetermined time intervals, or if the price of a given asset deviates by more than 0.5% since the last update, Chainlink ‘pushes’ the latest market price data to the blockchain.

Smart contracts can access and read this data to make informed decisions, and integration with push-based oracles is relatively straightforward, making them a popular choice for most of DeFi (Chainlink is also used as the primary oracle in Liquity v1).
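To make the trigger logic concrete, here is a minimal sketch of a push rule that updates on either a deviation threshold or a heartbeat interval. The names, structure, and thresholds are illustrative (0.5% and 1 hour, as in the example above), not Chainlink's actual node code:

```python
# Illustrative push-based update rule: push a new price on-chain when the
# deviation threshold is crossed OR the heartbeat interval elapses.

DEVIATION_THRESHOLD = 0.005   # 0.5% deviation trigger (assumed value)
HEARTBEAT_SECONDS = 3600      # 1 hour heartbeat (assumed value)

def should_push(last_price: float, last_update_ts: int,
                market_price: float, now_ts: int) -> bool:
    """Return True if the oracle should push a new price on-chain."""
    deviation = abs(market_price - last_price) / last_price
    heartbeat_elapsed = (now_ts - last_update_ts) >= HEARTBEAT_SECONDS
    return deviation > DEVIATION_THRESHOLD or heartbeat_elapsed

print(should_push(2000.0, 0, 2009.8, 600))    # False: a 0.49% move alone doesn't trigger
print(should_push(2000.0, 0, 2011.0, 600))    # True: a 0.55% move crosses the threshold
print(should_push(2000.0, 0, 2009.8, 3600))   # True: heartbeat elapsed
```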

Pros of Push-Based Oracles:

  • Easy to integrate and use, offering a straightforward developer experience.
  • The price data is readily available on-chain for smart contracts to utilize.

Cons of Push-Based Oracles:

  • There might be occasional delays in updating price data, leading to potential latency issues. E.g., if the ETH price moves by 0.49%, no update is triggered, and the next update may only arrive once the heartbeat interval (e.g. 1 hour) elapses.
  • The centralized nature of some push-based oracles raises concerns about single points of failure and control.

How about pull-based oracles?

Low Latency / Pull-based Oracles:

Low latency oracles take a different approach, using a pull-based mechanism where the user or a keeper fetches the price data on demand from the oracle. Two oracles you may have heard of that follow this model are Redstone and Pyth.

These oracles require keepers or users to first pull the price, and then send the price data to the dApp as part of the transaction the user wants to perform. The oracle provides the most up-to-date price data immediately upon request, reducing potential delays.
Redstone's and Pyth's approach aims to improve the speed and responsiveness of price data retrieval by reducing the lag between the price used on-chain and the actual market price.
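The flow can be sketched roughly as follows. This is a generic illustration of the pull pattern, not Redstone's or Pyth's actual SDK; the function and field names are hypothetical:

```python
# Generic pull-based flow: the user (or a keeper) fetches a signed price
# off-chain and attaches it to the transaction, where the protocol would
# verify the signature and freshness before using it.

from dataclasses import dataclass

@dataclass
class SignedPrice:
    price: float        # reported asset price
    timestamp: int      # when the price was signed
    signature: bytes    # oracle signature(s) over (price, timestamp)

def fetch_signed_price() -> SignedPrice:
    # Stand-in for an off-chain request to the oracle's price service.
    return SignedPrice(price=2000.0, timestamp=1_700_000_000, signature=b"\x00" * 65)

def build_open_position_tx(collateral: float, leverage: float) -> dict:
    signed = fetch_signed_price()
    # The signed price travels with the user's transaction.
    return {
        "collateral": collateral,
        "leverage": leverage,
        "oracle_price": signed.price,
        "oracle_timestamp": signed.timestamp,
        "oracle_signature": signed.signature,
    }

print(build_open_position_tx(collateral=10.0, leverage=2.0))
```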

Pros of Low Latency Oracles:

  • Instant access to fresh price data, reducing lag and thus arbitrage opportunities within the protocol.
  • Can potentially offer a better user experience, with more up-to-date prices used in transactions.

Cons of Low Latency Oracles:

  • The pull-based approach can introduce some complexity in the front-end user experience and composability, making it less straightforward than push-based oracles.
  • Some low latency oracles might still have elements of centralization or admin control, leading to concerns about decentralization.

And what about TWAP oracles?

TWAP (Time-Weighted Average Price) Oracles:

TWAP (Time-Weighted Average Price) oracles utilize data from DEXes like Uniswap v3 to calculate an average price over a specific time frame. Instead of relying on single data points, they average the price of a DEX trading pair over that window, weighted by time. TWAP oracles can be a useful proxy for the actual market price, and can track it closely.
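As a rough illustration of the weighting idea (Uniswap v3 does this on-chain with cumulative ticks; the helper below is a simplified off-chain sketch with hypothetical names):

```python
# Time-weighted average price over [window_start, window_end] from a sorted
# list of (timestamp, price) observations; each price is assumed to hold
# until the next observation.

def twap(observations: list[tuple[int, float]],
         window_start: int, window_end: int) -> float:
    total_time, weighted_sum = 0, 0.0
    for (t0, p), (t1, _) in zip(observations, observations[1:] + [(window_end, 0.0)]):
        lo, hi = max(t0, window_start), min(t1, window_end)
        if hi > lo:
            weighted_sum += p * (hi - lo)   # weight each price by its duration
            total_time += hi - lo
    return weighted_sum / total_time

obs = [(0, 2000.0), (600, 2100.0), (1200, 2050.0)]
print(twap(obs, 0, 1800))   # 2050.0: each price held for 600s, equally weighted
```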

Pros of TWAP Oracles:

  • TWAP oracles provide an average price over time, reducing the impact of price manipulation or sudden price spikes.
  • They leverage the liquidity of DEXes - the greater the DEX pair liquidity, the greater the attack costs for a given TWAP scheme.

Cons of TWAP Oracles:

  • TWAP attack costs and lag are inversely proportional - the more expensive you want it to be to manipulate, the longer the lag you’ll have to accept.
  • There are no liquidity guarantees - attack costs can drop suddenly at the whim of the LPs (e.g. liquidity moving from Uniswap v3 to Uniswap v4).

As mentioned at the start of this article, when choosing an oracle for v2, we had a few research questions that really needed solving:

Oracle criteria for v2

  • Decentralization and immutability: For the level of decentralization we aim for in v2, an immutable oracle with strong decentralization is preferred; i.e., as little admin control as possible.
  • Compatibility: Since v2 aims to be immutable, the chosen oracle needs guaranteed endpoints that won’t change.
  • Latency: Low latency is crucial, especially for leverage positions, as lags could lead to front-running (more on this later).
  • Track record: A reliable and battle-tested oracle is preferable to ensure accuracy and prevent potential vulnerabilities.
  • Crypto-economic guarantees: The oracle’s consensus mechanism needs guarantees in place to avoid potential attacks.
  • Attack costs and trust assumptions: The oracle's security is essential, along with the trust assumptions of the network where the system will be deployed (e.g., mainnet).
  • Node decentralization and presence on Ethereum mainnet: The nodes should be as decentralized as possible, and the oracle must be available on Ethereum mainnet.

So taking all of this into account, is there one oracle that can solve it all?

As you can see from the infographic above, no oracle is perfect - they all come with different trade-offs. In our initial research, two stood out when it came to the decentralization and immutability aspects: Tellor and the Uniswap v3 TWAP.

Each, however, has a ‘deal breaker’ that wouldn’t work for Liquity v2.

In Uniswap’s case, the concern stems from liquidity guarantees; once v4 launches, will the liquidity on Uniswap v3 stick? Considering the goal for us is to build a protocol that stands the test of time, it is definitely a big risk.

In Tellor’s case, there is the ‘price dispute’ period.

Tellor’s price feeds have a ‘dispute’ policy where prices can be disputed for a period of 10-20 minutes, to allow time for any ‘fake’ prices to be weeded out. As a reminder, v2 will have a component of leverage built into it, which requires price feeds to have low latency; waiting 10-20 minutes, unfortunately, becomes a deal-breaker.

So why is having a solution to front-running critical to v2?

Price frontrunning occurs when a trader sees the market price move and exploits that information with an on-chain operation before the on-chain price has caught up with the market price.
This can be an issue in v2 for both the leveraged operations and the stablecoin operations.

Leverage Position Front Running: A user can open a position as a front run to an anticipated price rise, then quickly close the leverage position after the price updates on-chain, and gain leverage-amplified profits.  

Leverage Loss Evasion: If a user already has a position open, they can close it as a front run to an expected price drop, immediately reopen it after the price updates on-chain, and avoid a leverage loss.

On the stablecoin operations side, this issue can also persist, as users can front-run price updates with stablecoin mint and redeem operations, extracting extra stablecoins on the mint side and extra ETH from the reserve on the redeem side. However, the extractable value here is not as large as with the leveraged operations, as it is not amplified.
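A toy calculation (illustrative numbers only, ignoring fees) shows why the leveraged case is the bigger concern:

```python
# Why leverage amplifies the value extractable by front-running a price update.

collateral = 10.0          # ETH deposited by the attacker
leverage = 5.0             # 5x leveraged long exposure
stale_price = 2000.0       # on-chain oracle price before the update
market_price = 2020.0      # real market price the attacker already sees (+1%)

exposure = collateral * leverage                          # 50 ETH of effective exposure
price_move = (market_price - stale_price) / stale_price   # 1% move

unleveraged_gain = collateral * stale_price * price_move  # profit on the bare collateral
leveraged_gain = exposure * stale_price * price_move      # same move, amplified 5x

print(f"price move: {price_move:.2%}")                    # 1.00%
print(f"gain without leverage: {unleveraged_gain:.0f} USD")  # ~200 USD
print(f"gain with {leverage:.0f}x leverage: {leveraged_gain:.0f} USD")  # ~1000 USD
```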

So how can you solve front-running?

Addressing Front-Running Challenges

Though numerous strategies exist to tackle front-running, given our criteria for v2, we've identified four primary methods to address these concerns in our system. Let's delve into each:

  1. Minimum Delay: This introduces uncertainty for attackers attempting to front-run by sandwiching operations around price updates. By introducing this delay, attackers won't know the exact price they will be executing at when they close their position, making their front-running efforts non-risk-free. 

However, this solution is only effective in one direction, as attackers could still evade a leverage loss by quickly closing their position during an anticipated price drop and reopening it from a different account.

For mitigating loss evasion, a two-step commit-confirm process would work - for example, to close their position, a user must send one transaction committing to closing and then after some minimum delay, send a second transaction to actually close it. They could be penalized for not closing in time, to remove excess optionality. The small delay between commit and confirm would introduce price uncertainty, and make loss evasion risky. However, this breaks the composability of closing positions by requiring two transactions.
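A minimal sketch of such a commit-confirm flow follows; the delays, penalty handling, and names are illustrative assumptions, not v2's actual design:

```python
# Two-step close: commit first, confirm after a minimum delay, with a penalty
# for missing the confirmation window (to remove excess optionality).

import time

MIN_DELAY = 60          # seconds between commit and confirm (assumed value)
MAX_WINDOW = 300        # confirmation deadline in seconds (assumed value)

commitments: dict[str, float] = {}   # position id -> commit timestamp

def commit_close(position_id: str) -> None:
    commitments[position_id] = time.time()

def confirm_close(position_id: str, oracle_price: float) -> str:
    elapsed = time.time() - commitments.pop(position_id)
    if elapsed < MIN_DELAY:
        raise ValueError("too early: minimum delay not elapsed")
    if elapsed > MAX_WINDOW:
        return "closed with penalty (confirmation window missed)"
    # The position closes at whatever the oracle reports *now*, which the user
    # could not know at commit time - that is what makes loss evasion risky.
    return f"closed at {oracle_price}"
```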

  2. Pausing During High Volatility: By pausing operations during periods of high volatility, the worst front-running opportunities can be prevented without interrupting regular system functionality. Pausing the system during extreme volatility, even for a few seconds per week, can prevent the worst front-run attempts. In v2’s case, pausing would only apply while the system is in a healthy range (i.e. when the backing ratio is not low), as sketched below.
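The check below is purely illustrative: the volatility threshold and the backing-ratio condition are assumed values, not v2 parameters.

```python
# Pause only when the latest price update jumps sharply AND the system is
# healthy enough to tolerate a pause.

VOLATILITY_THRESHOLD = 0.04   # pause if the price jumped more than 4% (assumed)
MIN_BACKING_RATIO = 1.0       # only pause while the reserve is healthy (assumed)

def should_pause(prev_price: float, new_price: float, backing_ratio: float) -> bool:
    jump = abs(new_price - prev_price) / prev_price
    return jump > VOLATILITY_THRESHOLD and backing_ratio > MIN_BACKING_RATIO

print(should_pause(2000.0, 2100.0, backing_ratio=1.5))  # True: 5% jump while healthy
print(should_pause(2000.0, 2100.0, backing_ratio=0.9))  # False: backing ratio too low to pause
```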

  3. Using a Worst-Price Approach to Prevent Front-Running

The worst-price approach is an interesting strategy to counter front-running, pioneered by Synthetix. It involves using two oracle prices: the current price and a lagged price. The key idea is to pick whichever price is worse for the user in each operation.

When a user opens a position, the system uses the maximum price from the current and lagged prices. Conversely, when they close their position, the system goes with the minimum price of the two. By using the worst price for both opening and closing positions, an ‘effective fee’ is established. This reduces the profits front runners could make by exploiting price differences. This approach has a nice property - the effective fee is proportional to volatility.   Let’s look at the graph below to understand how this works:

During low volatility (first half of the graph), the gap (delta) between the ETH price and the lagging ETH price (constant lag) remains relatively small. When the market gets more volatile (second half), this gap widens significantly.

Why is this important?

Front runners thrive on volatility. As prices swing more, they can make larger profits. The effective fee created using the worst price approach follows the same pattern. It becomes larger as volatility increases, hitting front runners where it hurts the most - their potential profits.
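In code, the rule reduces to a max on open and a min on close. The sketch below is an illustration of the idea, not Synthetix's implementation:

```python
# Worst-price rule: charge the user the worse of the current and lagged prices
# on both legs, producing a volatility-proportional effective fee.

def open_price(current: float, lagged: float) -> float:
    # Opening a position: the user pays the higher of the two prices.
    return max(current, lagged)

def close_price(current: float, lagged: float) -> float:
    # Closing a position: the user receives the lower of the two prices.
    return min(current, lagged)

# Calm market: prices are close, so the effective fee is tiny.
print(open_price(2001.0, 2000.0), close_price(2001.0, 2000.0))   # 2001.0 2000.0
# Volatile market: the delta widens, and so does the effective fee.
print(open_price(2080.0, 2000.0), close_price(2080.0, 2000.0))   # 2080.0 2000.0
```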

  4. Low Latency Oracles:

When prices on-chain reflect real-world prices near-instantaneously, the window of opportunity for front-runners shrinks dramatically. Let's dive into the two main approaches low-latency oracles take when it comes to mitigating front-running.

Self-Serve: Users pull the price themselves. This approach is especially interesting as prices can be published with sub-second frequency; in the worst case, the latency is only one block time (12 seconds). Self-serve oracles utilize the same set of node operators and the multi-layered data aggregation mechanisms that existing oracle providers (e.g. Chainlink, Pyth) already deploy in their reference feeds, but with low latency!

Deferred Settlement: The deferred settlement approach involves a keeper pulling the price data from the oracle after the user commits to an operation. This method introduces a delay between the user's commitment to the operation and the finalization of it by the keeper. This delay is put in place to mitigate front running, as it becomes challenging for malicious actors to anticipate price movements within this timeframe. While this approach effectively counters front running, it does come with a downside. It reduces composability, which refers to the seamless interoperability between different protocols and applications within the decentralized ecosystem. In this case, the need for two transactions (user commitment and keeper finalization) breaks the composability of the operation. 
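A rough sketch of the deferred-settlement pattern follows; it is generic, with hypothetical names, and does not represent any specific oracle's API:

```python
# Deferred settlement: the user commits in one transaction, then a keeper
# attaches a fresh oracle price and finalizes in a second transaction.

from dataclasses import dataclass

@dataclass
class PendingOrder:
    user: str
    size: float
    committed_at: int

pending: list[PendingOrder] = []

def commit(user: str, size: float, now: int) -> None:
    # Transaction 1: the user commits without knowing the execution price.
    pending.append(PendingOrder(user, size, now))

def settle(keeper_price: float, now: int, min_delay: int = 12) -> list[str]:
    # Transaction 2: the keeper finalizes every order whose delay has elapsed.
    settled, remaining = [], []
    for order in pending:
        if now - order.committed_at >= min_delay:
            settled.append(f"{order.user}: {order.size} @ {keeper_price}")
        else:
            remaining.append(order)
    pending[:] = remaining
    return settled

commit("alice", 10.0, now=0)
print(settle(keeper_price=2005.0, now=12))   # ['alice: 10.0 @ 2005.0']
```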

So what else is there to plan for?

Planning for Oracle Failures and Contingencies

v2's system needs to handle potential oracle failures automatically since we plan to make the protocol as immutable as it can be.

The focus is on simplicity in fallback and contingency designs to ensure reliability. We have learned some lessons from the design complexity involved in the Tellor fallback in Liquity v1. If we do decide to have a fallback oracle in case the primary fails or freezes, we want to favor simplicity!

What does this mean?

  • Simple fallback logic - if the primary oracle fails, fall back once (and don’t return); see the sketch after this list
  • Simple conditions for fallback - e.g., a price frozen for >12 hours, or bad data / a reverted call
  • Protect against technical failures, not manipulation
  • No circuit breaker (like Gyroscope's sophisticated approach) - Gyroscope uses Chainlink as a primary oracle and pauses minting & redemptions if Chainlink prices jump too much or deviate from the median of signed prices. The problem here is the potential need for human intervention to set a new oracle and unpause the system, which doesn’t align with v2.
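Here is a minimal sketch of that one-way fallback logic. The 12-hour threshold comes from the bullet above, while the structure and names are illustrative assumptions rather than v2's final design:

```python
# One-way fallback: if the primary oracle reverts, returns bad data, or has
# been frozen for more than 12 hours, switch to the fallback and never return.

import time

FROZEN_THRESHOLD = 12 * 3600   # treat a price older than 12 hours as frozen

using_fallback = False         # one-way switch: once set, never reverts

def get_price(primary, fallback) -> float:
    """`primary` and `fallback` are callables returning (price, last_update_ts)
    and raising on revert / bad data."""
    global using_fallback
    if not using_fallback:
        try:
            price, updated_at = primary()
            if price > 0 and time.time() - updated_at <= FROZEN_THRESHOLD:
                return price               # primary is live and fresh
        except Exception:
            pass                           # revert or bad data: fall through
        using_fallback = True              # fall back once, and don't return
    price, _ = fallback()
    return price
```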

Considering all this, simplicity and automation will be prioritized in contingency planning.

In conclusion, we’re actively exploring various oracle solutions, keeping in mind the need for scalability, decentralization, and reliability.  

Please get involved, join our Discord’s #v2 channel, and shill us some oracles!
