A physicist's view on Finance (VI) — Optimal Execution, yet another theory!

Gil-Arnaud Coche
12 min read · Sep 4, 2023

Disclaimer: This is not investment advice, this is merely for educational purposes and definitely an expression of personal thoughts. Investing is always a risky endeavour.

Asset Managers are responsible for funds and generate wealth by investing them in the right sectors of the economy. Because they generally hold substantial amounts of assets, some of their trades can be so big that they represent most of the Average Daily Volume (ADV) on a given day. Therefore, they cannot enter such buy or sell trades without anticipating a potential adverse market move on the price.

This is where Execution Services of Brokerage firms come into play. These providers take charge of high volume trades and ensure that the liquidity of traded assets is not impacted too much by the investment decisions of the Asset Manager. By ensuring a stable liquidity, they consequently reduce the Market Impact on the price during the trade.

Market Impact is the term used industry-wide to describe the move of the price during a large trade. It may be:

  • upward for a buy trade
  • downward for a sell trade.

Scientific literature is very rich on the matter and has extensively investigated the best ways to handle execution with as little price move as possible. Some thought leaders have established key results on the topic: Almgren and Chriss [1], Bouchaud [2] and Gatheral [3] are probably amongst the earliest ones.

In this article, I do not wish to restate results that have been widely researched by academics. Instead, I would like to focus on pragmatic descriptions of the cash flows, to build a simple understanding of the underlying mechanics of optimal execution.

To do so, I will

  • discuss the Execution Puzzle that the broker has to manage: when taking on a trade, the broker has to choose wisely between liquidity and market risk;
  • introduce the reconstruction time of the order book as a discretization step τ for the trading execution; the reconstruction time of the order book was introduced by Bouchaud [2] and it will be useful in the mathematical descriptions;
  • keep a hands-on approach to the maths: the idea is not to mesmerize you with some fancy algebra or numerical computation tricks. I wish to provide you with a simple description of optimal trading, and I believe that maths is the best tool to describe flows, no matter their nature (cash flows, matter flows, energy flows, etc.).
    As long as it remains simple.

You may have to take a pen and go through the steps. But, IMHO, it is worth it…

Let's go!

The optimal Execution Puzzle

I use the term Execution Puzzle because, when receiving a trading mandate, a broker has to face a harsh reality: either they ruin the liquidity and impact the price by trading too fast, or they expose themselves to a volatility-driven market move by holding the position, even in part, for too long.

When buying (selling), either the trader is aggressive and meets the ask (bid) on the other side, mechanically increasing (decreasing) the price and inciting other participants to post higher (lower) asks (bids), or the trader is passive and posts on the bid (ask), increasing the volume on the best levels and prompting ask (bid) participants to go higher (lower).

There are many trading tactics to try and keep the market impact as low as possible. One of them is for the broker to guarantee a benchmark price for the execution: the Asset Manager pays the broker the realised benchmark at the end of the trading period.

There can be as many benchmarks as one may feel like engineering; however, some standard ones have emerged.

In the following we will consider a trading period from t = 0 to t = T, the asset price p(t) at time t, and the cumulative traded volume on the market V(t) since the start of the trade, with its derivative v(t).

  • Volume Weighted Average Price (VWAP) — the average price over the trading period, weighted by the traded volume on that same period, is the benchmark.
    By considering the price obtained for the actual traded volume, the broker ensures some parity with the market activity.
    What the market does is what the Asset Manager gets as an execution price.

    One caveat: for really huge trades, this benchmark does not necessarily favour the Asset Manager, as the market may be dominated by their own trade. The broker could then obtain an execution price very close to the VWAP while the Asset Manager would potentially lose part of their wealth to the impact.
    Mathematically, the VWAP is given by

    VWAP = ( ∫₀ᵀ p(t) v(t) dt ) / ( ∫₀ᵀ v(t) dt ) = ( ∫₀ᵀ p(t) dV(t) ) / V(T)
  • Time Weighted Average Price (TWAP) — the average price over the trading period is the benchmark.
    This benchmark is useful over very long periods of time (when the trade spans multiple days or even weeks). As the horizon increases, the importance of volumes in the calculation of the VWAP decreases and eventually only the exposure to volatility-driven market moves matters: the VWAP becomes a TWAP. Although this benchmark appears from upscaling the VWAP, market practice has made it an acceptable benchmark for shorter time periods too. After all, it is a very rapid way to filter volatility out of the execution price.
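As a quick illustration of how the two benchmarks differ, here is a minimal sketch with made-up per-interval prices and volumes (all numbers are hypothetical):

```python
import numpy as np

# Hypothetical per-interval prices and traded market volumes.
prices = np.array([100.0, 100.2, 100.1, 100.4, 100.3])
volumes = np.array([1_000.0, 2_500.0, 1_800.0, 3_200.0, 1_500.0])

# VWAP: average price weighted by the traded volume of each interval.
vwap = np.sum(prices * volumes) / np.sum(volumes)

# TWAP: plain time average of the prices, each interval counting equally.
twap = np.mean(prices)

print(f"VWAP = {vwap:.4f}, TWAP = {twap:.4f}")
```

Note how the VWAP is pulled towards the prices of the high-volume intervals, while the TWAP treats every interval equally.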

It has been proven, on a database of sensible VWAP and TWAP trades (see Gatheral [3]), that the shape of the absolute variation from the initial price, throughout the trade, follows the trajectory below.

Above: the absolute variation from the initial price (source: Bouchaud [2]).

The peak of the curve, ℐ(Q), has empirically been found to follow a Square Root Law such that

ℐ(Q) ≈ Y σ √( Q / ADV ),

where σ is the daily volatility, ADV the average daily volume and Y a constant of order one.

Square Root Law on the maximum Implementation Shortfall (or peak of the curve)

This means that for reasonable VWAP- and TWAP-benchmarked executions, it is expected that the price will move, no matter what. The amplitude of this move depends on the liquidity of the asset only (volatility σ and ADV). Therefore the Asset Manager always pays a price for the execution, even in well executed trades.
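Numerically, the law gives an order of magnitude rather than an exact price. A minimal sketch (the constant Y and all inputs are assumptions of mine):

```python
import math

def sqrt_law_impact(sigma_daily: float, quantity: float, adv: float, y: float = 1.0) -> float:
    """Peak price move per the Square Root Law: I(Q) ≈ Y·σ·√(Q/ADV).

    Y is an asset-dependent constant of order one (assumed equal to 1 here).
    """
    return y * sigma_daily * math.sqrt(quantity / adv)

# A buy of 25% of ADV on an asset with a 2% daily volatility:
impact = sqrt_law_impact(sigma_daily=0.02, quantity=250_000, adv=1_000_000)
print(f"expected peak impact ≈ {impact * 1e4:.0f} bps")
```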

Naturally, brokers eventually offered Asset Managers to benchmark the execution against the initial price, which clearly shows whether the execution has preserved the Asset Manager's wealth. This is called the Implementation Shortfall.

  • The Implementation Shortfall (IS) — the initial price of the trading period is the benchmark.
    As explained above, the VWAP and TWAP can be detrimental to the Asset Manager. By benchmarking against the initial price, the broker is bound to offer the best performance possible for the Asset Manager. With the initial price as a benchmark, the broker is exposed to both impact and risk.

The orderbook reconstruction time

In Bouchaud [2], orderbooks (OB) are characterized by their mid price, the depth on the bid and on the ask, the spread and their reconstruction time.

The above figure shows these aspects. On the left is the OB at equilibrium: the spread measures the distance between the bid side and the ask side, while the depths measure the available liquidity on each side. To further quantify the available liquidity, an element of volumetry, represented by the curve inserted on the left of the OB, has to be explicitly added to the model. The mid price gives the position of the overall book along the price axis.

I should also mention the tick and the lot (price increment and volume increment), although they would add unnecessary complexity to my presentation. Just keep in mind that they are there as well!

The reconstruction time is a dynamical liquidity characteristic. It quantifies the time taken by the OB to come back to its stable state (defined by a spread equal to its equilibrium value). Say, for example and as shown on the sketch above, that an event occurs and three levels of the OB are depleted on the ask side. The OB now has a new spread, a new mid price and new depths. But this is not a local equilibrium and, depending on the market conditions, market participants may place new orders on the ask as well as on the bid (as shown on the sketch), or only on the ask, or only on the bid. Eventually, the OB gets back to having about the same spread, but potentially new depths and a new mid price.

This time taken for the spread to come back to its equilibrium value is the reconstruction time, and we will name it τ.

Time discretized benchmarks

Coming back to our earlier definitions of the VWAP and TWAP, we can now discretize on the reconstruction time τ so that T = Nτ.

The philosophy behind this choice is that when the broker takes an action on the order book, they wait exactly for a duration τ before taking another one. It is of course assumed that τ << T, and therefore discretising the above integrals makes sense.

The above formulae thus become

VWAP = ( Σₙ pₙ vₙ ) / ( Σₙ vₙ )    and    TWAP = ( 1/N ) Σₙ pₙ

where pₙ and vₙ are the price and the traded market volume over the n-th interval. Now, in this discrete form it is straightforward to write both benchmarks as

B = Σₙ ωₙ pₙ

with the weights

ωₙ = vₙ / ( Σₘ vₘ ) for the VWAP    and    ωₙ = 1/N for the TWAP.
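The discretised weights can be sketched numerically as follows (the per-interval volumes are made up; the point is that both benchmarks are the same weighted sum, only the weights change):

```python
import numpy as np

N = 5
p = np.array([100.0, 100.2, 100.1, 100.4, 100.3])            # price per interval
v = np.array([1_000.0, 2_500.0, 1_800.0, 3_200.0, 1_500.0])  # market volume per interval

w_vwap = v / v.sum()          # VWAP weights: each interval's share of traded volume
w_twap = np.full(N, 1.0 / N)  # TWAP weights: uniform in time

# Same weighted sum B = Σ ω_n p_n, two different weight vectors.
vwap = float(np.dot(w_vwap, p))
twap = float(np.dot(w_twap, p))

# Sanity check: the weights sum to one in both cases.
assert np.isclose(w_vwap.sum(), 1.0) and np.isclose(w_twap.sum(), 1.0)
print(vwap, twap)
```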

The Maths! 🏞️

To me, the maths cannot be correctly derived without an accurate description of flows. It needs a proper understanding of the geometries involved to come to life (in a way…).

Here the flows will be ones of orders and cash on the geometry of the OB.

General Formula for the trading costs

And below is how I see them.

Let’s spend some time describing the sketch in the figure above. At every trading interval of duration τ, equal to the OB’s reconstruction time, liquidity is consumed by the broker along the OB’s depth on the ask side (we will assume a buy trade from now on). The liquidity could be consumed passively on the bid side; however, in order to have the most thorough estimate of the costs, it is better to consider the most conservative scenario for the broker: an aggressive-only trade.

At each trading interval n, a quantity qₙ is consumed, represented by the blue arrows, along the depth distribution fₙ(δ), represented by the curves on the sketch. δ is the depth from the best ask.

Mathematically, this boils down to considering that the acquisition cost of the position is obtained from

C = Σₙ ∫₀^{Δₙ} ( aₙ + δ ) fₙ(δ) dδ

where aₙ is the best ask at interval n and Δₙ is the depth reached by the slice qₙ.

Because of the overall stability of the OB over short periods of time, it is fair to assume that fₙ(δ) does not depend on n and thus is equal to f(δ) at all trading intervals. Also, since

qₙ = ∫₀^{Δₙ} f(δ) dδ = F(Δₙ),

the trading costs simplify to

C = Σₙ ( aₙ qₙ + ∫₀^{Δₙ} δ f(δ) dδ )

and with the change of variable χ = F(δ) we get to

C = Σₙ ( aₙ qₙ + ∫₀^{qₙ} F⁻¹(χ) dχ ).
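To make the cost concrete, here is a minimal numerical sketch assuming a flat depth profile f(δ) = k, for which F(δ) = kδ, F⁻¹(χ) = χ/k, and the depth term has the closed form qₙ²/(2k). The profile and all numbers are assumptions:

```python
# Acquisition cost under an assumed flat depth profile f(δ) = k.
k = 10_000.0        # shares available per unit of price depth (assumption)
best_ask = 100.0    # a_n, kept constant across intervals for simplicity
slices = [5_000.0, 5_000.0, 5_000.0, 5_000.0]  # the q_n

def slice_cost(q_n: float, a_n: float) -> float:
    # a_n·q_n plus the depth term ∫₀^{q_n} F⁻¹(χ) dχ = q_n² / (2k).
    return a_n * q_n + q_n**2 / (2.0 * k)

total_cost = sum(slice_cost(q_n, best_ask) for q_n in slices)
avg_price = total_cost / sum(slices)
print(f"average execution price = {avg_price:.4f}")  # above the best ask
```

The average execution price sits above the best ask precisely because each slice walks up the book.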

General Formula for the Trade Performance

The trade performance is the difference between the trading costs guaranteed to the client and the realised costs. Using the formula for the general benchmark, the trade performance is equal to

P = Σₙ ( ωₙ Q bₙ - aₙ qₙ - ∫₀^{qₙ} F⁻¹(χ) dχ )

if we assume that the reference price is the best bid bₙ. Again, as we are pricing the worst outcomes, we remain conservative and assume that the ideal broker would guarantee the most favourable price to the Asset Manager.

Assuming that the spread s is constant at each trading interval (I could have mentioned this fact earlier…), aₙ = bₙ + s and the above formula becomes

P = -sQ + Σₙ ( ωₙ Q - qₙ ) bₙ - Σₙ ∫₀^{qₙ} F⁻¹(χ) dχ

This last result is interesting.

(i) — In a conservative framework, there is an incompressible negative performance due to the spread.

(ii) — If one wants to completely remove the market risk, one could naively set the trading objective at each trading interval to be qₙ = ωₙQ. In the context of the IS benchmark, this is completely ridiculous, as one would have a maximum impact by trading everything from the get-go.
For the TWAP and the VWAP, this is more reasonable and reduces the costs to

P = -sQ - Σₙ ∫₀^{ωₙQ} F⁻¹(χ) dχ

where s is the (constant) spread.

Although this formula is quite accurate for the TWAP, since ωₙ is a constant equal to 1/N, for the VWAP this performance is not accurate: ωₙ is random, and all one can do prior to the trading interval n is to provide an estimate, which will be more or less accurate.

Case of the TWAP

For the TWAP, the trade performance is

P(N) = -sQ - N ∫₀^{Q/N} F⁻¹(χ) dχ

and thus an optimal horizon may be derived as

and obtained from

Finding the optimal horizon would require numerical computations, which we will not get into now (hopefully in a next article!). At the optimum, the trade performance is given by

Which sets the price of the trade at a minimum of

For a duration set by the Asset Manager, the price in bps is simply
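The numerical search mentioned above can be sketched as follows. Everything here is an assumption of mine: a flat depth profile (so the impact term has the closed form Q²/(2kN)) and a hypothetical risk penalty growing linearly with the number of intervals, since the exact objective function is left to a future article:

```python
Q = 100_000.0    # total quantity to buy (hypothetical)
s = 0.02         # constant spread, in currency units (hypothetical)
k = 10_000.0     # shares per unit of depth for the flat book (hypothetical)
c = 50.0         # risk cost per trading interval (hypothetical stand-in)

def total_cost(N: int) -> float:
    # Spread cost + impact cost N·∫₀^{Q/N} F⁻¹(χ) dχ + assumed risk penalty.
    impact = N * (Q / N) ** 2 / (2.0 * k)
    return s * Q + impact + c * N

# Brute-force search over the number of slices N.
best_N = min(range(1, 1_001), key=total_cost)
print(f"optimal number of slices N* = {best_N}")
```

The trade-off is visible in the objective: slicing more reduces impact (the 1/N term) but increases the exposure captured by the penalty (the c·N term).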

Case of the VWAP

The general formula for the minimum price in bps of the VWAP is

and holds asymptotically (when N tends to infinity) if the estimate of ωₙ is unbiased. In case of a bias (meaning the broker is systematically off by a fixed value), the minimum price ought to be

if the price follows a random walk.

To obtain the optimal execution horizon, the volume profile (ωₙ) needs to be estimated. The idea is however the same: eventually, the minimum of the resulting cost has to be found over N, assuming that an appropriate estimate of ωₙ has been computed.

Case of the IS

This case would deserve an article all by itself, which I may write now that I think about it. I'll simply state that there is no naive way to mitigate the market risk in the trading performance, and thus one has to pick an objective function suited to their needs. Given the random nature of the trade performance, it may be wise to consider some risk-minimising function, which will provide the optimal trading horizon; the average will then provide the value of the trade (a traditional actuarial-like approach).

Edit: Now that I read this paragraph two weeks later, I find it to be not so clear. Here is why this particular case is rather different.

The trade performance for an IS benchmark is

P = -sQ - Σₙ mₙ qₙ - Σₙ ∫₀^{qₙ} F⁻¹(χ) dχ

where mₙ = bₙ - b₀ is the market move since the start of the trade. As mentioned earlier, the market component cannot be removed by a heuristic where ωₙ = 1 for n = 0 and 0 otherwise: the impact on the order book would be maximal.

mₙ is random and, moreover, as explained by the authors I mentioned ([1]-[3]), the price is self-impacted by the trade itself, meaning that the price dynamics could very well be described by something like

mₙ = Σₚ G(n, p) qₚ + noise,    with the sum running over past intervals p < n,

where G(n,p) is a response function — see [2] or [3].
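A toy simulation of such self-impacted dynamics helps see the effect; the kernel shape and all sizes below are assumptions, not the calibrated response functions of [2] or [3]:

```python
import math

# Toy propagator-style dynamics: each past slice q_p pushes the price
# through a decaying response kernel G (power-law shape assumed).
N = 50
q = [1.0] * N                          # constant buy slices
G = lambda lag: 0.05 / math.sqrt(lag)  # hypothetical decaying kernel

p0 = 100.0
prices = []
for n in range(N):
    self_impact = sum(G(n - p) * q[p] for p in range(n))
    prices.append(p0 + self_impact)    # noise term omitted for clarity

# The cumulative self-impact makes the price drift upward during a buy trade.
print(f"start {prices[0]:.2f} -> end {prices[-1]:.2f}")
```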

Because of that random nature, risk should be considered!

Wrapping up

Well, well… Looks like I got a little excited. 😹

When it comes to quantitative modelling, I usually get into another world and let the maths speak. In any case, the message here is very clear:

  • Optimal execution can be obtained through heuristics to mitigate market risk. This is what brokers have been doing for decades, using the VWAP and/or TWAP when engaging in huge trades for big investors and asset managers.
  • But these heuristics may be at a disadvantage for the latter when the trade represents a significant share of the ADV. In such cases, more actuarial-like approaches should be considered. And as the last paragraph on the IS has started to hint, variance considerations on the expected performance may be a good way to go.

In a following article, I will most probably present an empirical analysis of what I am claiming here. I am not sure yet whether I will focus on the TWAP, the VWAP or the IS. I might as well be as general as possible and develop a pricer for a generic strategy with a generic benchmark, and then compare with the specificities of the TWAP, the VWAP… We'll see.

Until then, happy to hear your thoughts on this post!

Edit: Here is a follow up on this article where you get to work!


Gil-Arnaud Coche

What am I? I am not sure... An engineer? A physicist? A mathematician? What I know is that I terribly enjoy modelling stuff.