BAMM - Bounded Automated Market Making

Please see https://gateway.pinata.cloud/ipfs/QmZuRY3t7NvyVcXpV9cS6JEqsBcGcsufWr4TnrpygK3Ma6


Love this idea Daniel btw

I keep thinking about the arbitrage component of AMMs and whether it’s possible for the pool to ‘arbitrage’ itself before external actors do, thus stopping that leak and making the pool more capital efficient.

Thank you for taking a look at my idea. I don’t have much experience arbitraging between exchanges/AMMs, so I cannot comment on that. My idea is mostly about capital efficiency and imposing a limit on impermanent loss.

I don’t think external arbitrage leaks funds out of a liquidity pool.

I’ve heard people say that impermanent loss is caused by arbitrageurs taking profit out of the pool, but that does not really make sense. When the ratio returns to the mean, the impermanent loss disappears again. This would then imply that arbitrageurs are putting their profit back into the pool.
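A concrete check of this (illustrative numbers of mine, not from the paper): start a pool at x = y = 1000, so k = 1000000 and p = 1. If arbitrageurs push the price to p = 2, the reserves rebalance to x = √(k/2) ≈ 707.1 and y = √(2k) ≈ 1414.2. Valued at p = 2, the pool holds ≈ 2828.4 in y-terms versus 3000 for simply holding the original tokens, an impermanent loss of about 5.7%. If arbitrage later pushes the price back to p = 1, the reserves return to exactly (1000, 1000) and the loss vanishes: nothing was ever extracted from the reserves.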

Also note that arbitrage is an important part of the volume generated on liquidity pools.

Idea sounds very interesting… I’ll have a look at the math sometime this week. Would you mind keeping everyone here up to date when someone finds a good solution?


Yes I will (with possible delays)

Probably everyone has their own preferred graphing tool, but a good one I’ve used before is: Graphing Calculator

(you can even have multiple expressions and compare)

Are we not simply after a flatter function, i.e. less change in the ratio, but one that still approaches infinity? Like so,

The xy=k uniswap function is quite aggressive, we just need to tame it down a bit. Thoughts?
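To make “aggressive” concrete, here is a minimal TypeScript sketch (my own illustrative numbers, not from the proposal) comparing the cost of the same trade quoted directly by x·y = k versus quoted as if the pool were ten times deeper via virtual reserves, one possible way of flattening the curve:

```typescript
// Cost in Y to buy `dx` of X from a constant-product pool (x, y), zero fees.
function buyCost(x: number, y: number, dx: number): number {
  const k = x * y;
  return k / (x - dx) - y; // Y paid in so that x*y = k still holds after the trade
}

const dx = 100;
console.log(buyCost(1000, 1000, dx));   // ~111.1 Y -> avg price ~1.111, steep
console.log(buyCost(10000, 10000, dx)); // ~101.0 Y -> avg price ~1.010, flatter
```

The flatter quote is exactly what a bounded design has to back with limited real capital, which is where the question of bounds comes in.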


The 0.8 I used in this example is just an arbitrary number, much like k in the Uniswap function (the product of tokenA and tokenB). You would want this derived value to be simple to obtain, but also to adjust as liquidity scales (possibly), OR to be based on the ratio of tokenA : tokenB.

You could even have different values for tokenA’s 0.8 and tokenB’s 0.8 if you wanted an asymmetrical AMM (say 80/20 weights), although this would need more thought.
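For reference, the established way to formalise per-token weights like 80/20 is a Balancer-style weighted invariant, x^wA · y^wB = k. A minimal sketch (my reading of where the 0.8 idea could point, not necessarily what was meant above):

```typescript
// Balancer-style weighted pool: invariant x^wA * y^wB = k, weights sum to 1.
const wA = 0.8, wB = 0.2;
const x = 8000, y = 2000; // reserves chosen so the spot price starts at 1

const k = Math.pow(x, wA) * Math.pow(y, wB);

// Spot price of A in terms of B for a weighted pool: (wA / wB) * (y / x).
const spot = (wA / wB) * (y / x);
console.log(k.toFixed(2), spot); // spot = 1 for these reserves
```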

I wrote a very simple simulation of BAMM in JS; it seems very easy to implement.
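Not the simulation mentioned above, but a minimal TypeScript sketch of the core mechanic as I understand it from this thread: a constant-product pool that refuses trades which would push the price outside a bounded range (class and parameter names are mine):

```typescript
// Sketch of a bounded constant-product swap. Assumption (mine, not from the
// paper): the pool quotes x*y = k but rejects any trade that would move the
// price y/x outside [pMin, pMax].
class BoundedPool {
  constructor(
    public x: number,    // reserve of token X (may include a virtual part)
    public y: number,    // reserve of token Y
    public pMin: number, // lower price bound (Y per X)
    public pMax: number  // upper price bound
  ) {}

  get k(): number { return this.x * this.y; }
  get price(): number { return this.y / this.x; }

  // Sell `dx` of X into the pool; returns Y received, or throws if the
  // resulting price would leave the bounded range.
  sellX(dx: number): number {
    const xNew = this.x + dx;
    const yNew = this.k / xNew;
    if (yNew / xNew < this.pMin) throw new Error("trade exits lower price bound");
    const dy = this.y - yNew;
    this.x = xNew;
    this.y = yNew;
    return dy;
    // A mirror buyX() would check against pMax in the same way.
  }
}

// Usage: pool at price 1, bounded to [0.5, 2].
const pool = new BoundedPool(1000, 1000, 0.5, 2);
console.log(pool.sellX(100)); // ~90.9 Y out, price moves to ~0.826
// pool.sellX(1000) would throw: the price would fall below 0.5.
```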

I like this idea too… it seems like a good way to, in a sense, bootstrap small pools into bigger ones.

At the end of the day LP’ers want people swapping a lot to collect fees on these swaps. But if the slippage is too large then no one swaps in the beginning, which creates a chicken and egg problem.

Using this mechanism to lower the slippage in less liquid pools should, in theory, help bring in fees and ultimately more LP’ers to collect those fees.

Combining this with other LP incentives should be a nice combo to really bring in lots of fresh liquidity. More swappers, more LP’ers … more LP’ers, more swappers.


As you mention towards the end of the document, it is a lot more intuitive to think of bonding curves not as bonding curves but as limit orders on an orderbook. The only difference is that the limit orders generated by a bonding curve are “prioritised”, meaning they are moved after every market order. Limit orderbooks, on the other hand, typically involve race conditions: if you’re fast enough, you can place multiple market orders, limit orders or cancellations before anyone else reacts.

Bonding curves are hence best understood as an orderbook where a very specific kind of market maker is given priority to move his orders after every trade. An additional property is that they are usually path invariant - if the price returns to the original price, the market maker usually experiences no net change in inventory.
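The path-invariance property is easy to verify for x·y = k with zero fees (illustrative numbers of mine):

```typescript
// Push the price away and back on x*y = k with zero fees: the market
// maker's inventory returns exactly to where it started.
let x = 1000, y = 1000;
const k = x * y;

// Move the price y/x to 2 (traders buy X from the pool until y/x = 2) ...
x = Math.sqrt(k / 2);  y = k / x;   // x ~ 707.1, y ~ 1414.2
// ... then back to 1 (traders sell X to the pool until y/x = 1).
x = Math.sqrt(k / 1);  y = k / x;   // x = 1000, y = 1000 again

console.log(x, y); // 1000 1000 - no net change in inventory
```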

Once you have this perspective, the obvious next thing to do is look at how market makers actually operate in the traditional world. Stoikov market making is popular and it also has quantitative analysis to back it - it takes into consideration the risk tolerance of the market maker as well as the expected volatility of the asset. DeFi assets may not be well approximated by a geometric brownian price process, which means other models may be required.
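For anyone unfamiliar, the Avellaneda–Stoikov (2008) model referenced here derives quotes from an inventory-adjusted reservation price plus an optimal spread. A minimal sketch with made-up parameter values:

```typescript
// Avellaneda-Stoikov quotes: reservation price r = s - q*gamma*sigma^2*tau,
// optimal spread = gamma*sigma^2*tau + (2/gamma)*ln(1 + gamma/kOB).
function stoikovQuotes(
  s: number,      // mid price
  q: number,      // signed inventory
  gamma: number,  // market maker's risk aversion
  sigma: number,  // volatility of the (arithmetic) price process
  tau: number,    // time remaining, T - t
  kOB: number     // order-book liquidity parameter
): { bid: number; ask: number } {
  const r = s - q * gamma * sigma * sigma * tau;
  const spread = gamma * sigma * sigma * tau + (2 / gamma) * Math.log(1 + gamma / kOB);
  return { bid: r - spread / 2, ask: r + spread / 2 };
}

// Example: long 5 units -> both quotes skew downward to shed inventory.
console.log(stoikovQuotes(100, 5, 0.1, 2, 1, 1.5));
```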

In any case it would be more beneficial to design automated market makers with an orderbook perspective imo. Bonding curves were designed mainly to save gas by minimising computational operations. A ZK rollup like Loopring does not have as tight a constraint on computation, and can hence ask both quants and retail what kind of market makers should be “prioritised”, if at all you want the notion of a prioritised market maker. Retail benefits from any auto-managed market-making strategy; this does not necessarily mean that the exchange itself has to prioritise these orders in its matching the way bonding curves do. Although there are definitely benefits, such as retail earning more profit, or avoiding the wasteful war of attrition often seen in traditional finance, where millions of dollars are blown on faster hardware and minimising latency to frontrun others.


Thank you for sharing your thoughts. The reason we designed BAMM is that we are working on something called liquidity merging – we want to unify the UI so users can access AMM liquidity and orderbook liquidity without knowing where the liquidity actually comes from. This means we have to ‘transform’ AMM liquidity somehow to present it in an orderbook. Anyway, I like the way you think about AMM design from an orderbook perspective. I didn’t know about Stoikov market making; I will try to learn more.

Back to BAMM: do you foresee any issues or confusion for traders, AMM liquidity providers, market makers, and arbitrageurs? It sounds like you have experience here.


Interesting idea. I put together some notes to wrap my head around it and formalize some of the math. Here’s a link to my notes (a work in progress). My main concern is determining what impact the residual virtual deposits have on the liquidity providers. I feel like it would translate into some kind of loss relative to a traditional AMM, but I haven’t been able to quantify it yet.

It is definitely a good model and much closer to what happens in the real world: people provide liquidity only up to a certain depth. If it is explained properly, people will eventually get used to this model, imo.

You can get the orderbook equivalent with some math.
x·y = k = (x + Δx)(y + Δy)
initial price: p = y/x
final price: p′ = p + Δp = (y + Δy)/(x + Δx)

Substituting (y + Δy) = p′(x + Δx) into the invariant gives p′(x + Δx)² = k, so:
Δx = √(k/p′) − x

Assume x = 1000 and y = 1000; this implies initial price p = 1 and k = 1000000.
We plot Δx = √(1000000/p′) − 1000.
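The same curve can be tabulated directly; a quick TypeScript check of a few points on it:

```typescript
// Cumulative depth dX = sqrt(k/pFinal) - x0 for x = y = 1000, k = 1000000
// (same numbers as above).
const x0 = 1000;
const k = 1_000_000;

for (const pFinal of [0.25, 0.5, 1, 2, 4]) {
  const dX = Math.sqrt(k / pFinal) - x0;
  // dX > 0: pool accumulates X (cumulative buy-side depth);
  // dX < 0: pool sheds X (cumulative sell-side depth).
  console.log(pFinal, dX.toFixed(1));
}
// 0.25 -> +1000, 0.5 -> +414.2, 1 -> 0, 2 -> -292.9, 4 -> -500
```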

[Plot of Δx = √(1000000/p′) − 1000 against the final price p′]

Green is the cumulative buy orders, red the cumulative sell orders; the x-axis is the final price p′ and the y-axis is Δx, the change in the number of X tokens in the pool. It might look like the sell orders cover more X tokens, but the buy-side curve goes pretty high too, so both areas are equal.

In your model you stop placing orders beyond 1/p and p, which means capital is used much more efficiently, since none of it is allocated near price = 0 or price = infinity.
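To put a number on that with the pool above (x = y = 1000, k = 1000000) and bounds p′ ∈ [1/2, 2]: moving the price to either bound drains at most 1000 − √(k/2) ≈ 293 tokens from one side of the pool. The other ≈ 707 tokens on each side can never leave while the bounds hold, which is exactly the capital a bounded design no longer needs to lock up (or can replace with virtual deposits).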

2 Likes